First, install the dependencies:
python -m pip install -r requirements.txt
Then, you can run the data preprocessing script:
python gen_data.py --data=<dataset> --hours=1 --p_steps=0
--data is the dataset to use. The options are air_quality, traffic, energy, power, parking, room, solar, kolkata, turbine, joho, electricity, iot and wind.
--p_steps is the number of previous time steps to add as features to the final dataset.
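For example, a run on the air_quality dataset with 24 previous time steps as features might look like this (the flag values are illustrative, not the paper's settings):

python gen_data.py --data=air_quality --hours=1 --p_steps=24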
You can also preprocess all the datasets with the paper's parameters using the following bash script:
/bin/bash gen_data.sh
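A minimal sketch of an equivalent loop is shown below; the --hours and --p_steps values are placeholders, since the actual per-dataset parameters are set inside gen_data.sh:

#!/bin/bash
# Preprocess every dataset in turn. The flag values here are
# placeholders, not the paper's parameters (see gen_data.sh for those).
for dataset in air_quality traffic energy power parking room solar kolkata turbine joho electricity iot wind; do
    python gen_data.py --data="$dataset" --hours=1 --p_steps=0
done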
To train the model, run:
python training_pipeline.py --data=<dataset> --logging=info --ii=<int>
--data is the dataset to use. The options are air_quality, traffic, energy, power, parking, room, solar, kolkata, turbine, joho, electricity, iot and wind.
--logging is the logging level. The options are debug, info, warning, error and critical. The default is info.
--ii is the number of importance iterations for the experiment. If it is 0, feature importance is not calculated. The default is 0.
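For example, to train on the traffic dataset with debug logging and 10 importance iterations (illustrative values):

python training_pipeline.py --data=traffic --logging=debug --ii=10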