Our model architecture is shown below:
To install requirements:
pip install -r requirements.txt
To train the model(s) in the paper, run this command:
python training_and_evaluation.py "<type of dataset>" "<model>" num_iterations num_repeat_runs n_C n_T 0
where <type of dataset> is, for example, ETT or exchange; <model> is, for example, TNP or taylorformer; num_iterations is the number of training iterations; num_repeat_runs is the number of repeat runs; and n_C and n_T are the number of context and target points, respectively.
You will need to create the appropriate folders to store the model weights and evaluation metrics. As an example, we have included a folder for the taylorformer on the ETT dataset with n_T = 96; its path is weights_/forecasting/ETT/taylorformer/96.
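For a new configuration, the folder structure can also be created programmatically. Here is a minimal sketch using the Python standard library, following the weights_/forecasting/<dataset>/<model>/<n_T> convention from the included ETT example (the dataset/model/n_T values are illustrative):

```python
from pathlib import Path

# Build the folder that a run expects for a given dataset/model/n_T
# combination, mirroring the included ETT example path.
dataset, model, n_t = "ETT", "taylorformer", 96
run_dir = Path("weights_") / "forecasting" / dataset / model / str(n_t)
run_dir.mkdir(parents=True, exist_ok=True)
print(run_dir)
```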
Evaluation metrics (MSE and log-likelihood) for each of the repeat runs are saved in the corresponding folder, e.g. weights_/forecasting/ETT/taylorformer/96. The mean and standard deviation across the repeat runs are used when reporting the results.
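The aggregation over repeat runs is straightforward; here is a self-contained sketch with made-up metric values (not actual results, and not the repository's file format) showing how the reported mean and standard deviation could be computed:

```python
import statistics

# Hypothetical per-run MSE values for 5 repeat runs (illustrative numbers
# only); the same aggregation applies to the log-likelihood metric.
mse_runs = [0.412, 0.398, 0.405, 0.421, 0.401]

mean_mse = statistics.mean(mse_runs)
std_mse = statistics.stdev(mse_runs)  # sample standard deviation

print(f"MSE: {mean_mse:.3f} +/- {std_mse:.3f}")
```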
Here is an example of how to load a pre-trained model for the ETT dataset with the Taylorformer in the target-96/context-96 setting:
python pre_trained_model_ex.py 0 37
We show our results on the forecasting datasets. More results can be found in the paper.