Commit `0aa0c9c` by Shourya Bose (parent `4ad6488`): "update readme" (README.md)
Note that `lookback` is denoted by `L` and `lookahead` by `T` in the weights directory. We provide weights for the following `(L,T)` pairs: `(512,4)`, `(512,48)`, and `(512,96)`, and for `HOM`ogeneous and `HET`erogeneous datasets.
## Packages

Executing the code only requires the `numpy` and `torch` (PyTorch) packages. You can either have them in your base Python installation or use a `conda` environment.
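For instance, a fresh `conda` environment could be set up as follows. This is only a sketch: the environment name `loadcast` and the Python version are arbitrary choices, not requirements of this repository.

```shell
# Create and activate an isolated environment (name is arbitrary).
conda create -n loadcast python=3.11 -y
conda activate loadcast

# Install the two required packages.
pip install numpy torch
```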
In order to see how to use the model definitions and load the weights into them, see `example.py`.
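The general pattern in `example.py` is the usual PyTorch one: instantiate a model class, then load a saved state dict into it. The sketch below uses a tiny stand-in module and an illustrative file name, since the real model classes and weight paths live in this repository; only the save/load mechanics are the point here.

```python
import torch
import torch.nn as nn

# Stand-in for the real model classes shipped with this repository
# (e.g. Transformer, LSTM); see example.py for the actual definitions.
class TinyForecaster(nn.Module):
    def __init__(self, lookback=512, num_features=8):
        super().__init__()
        self.head = nn.Linear(lookback * num_features, 1)

    def forward(self, x):
        # x: (B, L, num_features) -> pointwise forecast of shape (B, 1)
        return self.head(x.flatten(1))

model = TinyForecaster()

# Round-trip a state dict the same way the provided .pt weight files
# would be loaded; "tiny_weights.pt" is illustrative, not a repo path.
torch.save(model.state_dict(), "tiny_weights.pt")
model.load_state_dict(torch.load("tiny_weights.pt", map_location="cpu"))
model.eval()

out = model(torch.zeros(2, 512, 8))
print(tuple(out.shape))
```

Loading with `map_location="cpu"` keeps the weights usable on machines without a GPU; move the model with `.to(device)` afterwards if needed.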

## Technical Details for Running the Models

The input layouts of the models are as follows:

- `input` is a tensor of shape `(B,L,num_features)`, where `B` is the batch size, `L` is the lookback duration, and `num_features` is 8 for our current application.
- `future_time_idx` is a tensor of shape `(B,T,2)`, where `T` is the lookahead and 2 is the number of time index features.
- The time indices in `input` as well as `future_time_idx` are both normalized.
- The custom `torch.utils.data.Dataset` class for the train, val, and test sets can be generated by executing the `get_data_and_generate_train_val_test_sets` function in the `custom_dataset.py` file in the [companion dataset](https://huggingface.co/datasets/APPFL/Illinois_load_datasets).
- Non-time features are normalized. The mean and standard deviation of the [companion dataset](https://huggingface.co/datasets/APPFL/Illinois_load_datasets) can be inferred by executing `example_dataset.py` there and looking at `Case 1` and `Case 4`.
- The output shape is `(B,1)`, denoting the pointwise forecast `T` steps into the future.
- The `forward()` functions of `Transformer`, `Autoformer`, `Informer`, and `TimesNet` take in two arguments: `forward(input, future_time_idx)`. They are laid out as follows:
  - `input` is a tensor of shape `(B,L,num_features)`, where `B` is the batch size, `L` is the lookback duration, and `num_features` is 8 for our current application.
  - `future_time_idx` is a tensor of shape `(B,T,2)`, where `T` is the lookahead and 2 is the number of time index features.
  - The time indices in `input` as well as `future_time_idx` are un-normalized to allow for embedding.
  - The custom `torch.utils.data.Dataset` class for the train, val, and test sets can be generated by executing the `get_data_and_generate_train_val_test_sets` function in the `custom_dataset.py` file in the [companion dataset](https://huggingface.co/datasets/APPFL/Illinois_load_datasets).
  - Non-time features are normalized. The mean and standard deviation of the [companion dataset](https://huggingface.co/datasets/APPFL/Illinois_load_datasets) can be inferred by executing `example_dataset.py` there and looking at `Case 2` and `Case 5`.
  - The output shape is `(B,1)`, denoting the pointwise forecast `T` steps into the future.
- The `forward()` function of `TimesFM` takes in one argument: `forward(input)`. It is laid out as follows:
  - `input` is a tensor of shape `(B,L)`, where `B` is the batch size and `L` is the lookback duration. Since the model is univariate, there is only one feature.
  - The sole feature is normalized. The mean and standard deviation of the [companion dataset](https://huggingface.co/datasets/APPFL/Illinois_load_datasets) can be inferred by executing `example_dataset.py` there and looking at `Case 3` and `Case 6`.
  - The custom `torch.utils.data.Dataset` class for the train, val, and test sets can be generated by executing the `get_data_and_generate_train_val_test_sets` function in the `custom_dataset_univariate.py` file in the [companion dataset](https://huggingface.co/datasets/APPFL/Illinois_load_datasets).
  - The output shape is `(B,T)`, denoting the rolling horizon forecast `T` steps into the future.
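Since the load features are z-score normalized, preparing an input and un-scaling a forecast reduces to the usual transform pair. A minimal sketch with placeholder statistics (the real per-feature mean and standard deviation are printed by `example_dataset.py` in the companion dataset):

```python
import numpy as np

# Placeholder statistics: obtain the real values from example_dataset.py
# in the companion dataset (Cases 1-6 there).
mean, std = 3.2, 1.7

raw_load = np.array([4.0, 2.5, 3.9])   # un-normalized load readings
normed = (raw_load - mean) / std       # what the models consume
restored = normed * std + mean         # apply this to a model's output

print(np.allclose(restored, raw_load))
```

The same pair of operations applies to the model outputs: multiply a normalized forecast by the standard deviation and add the mean to recover load in physical units.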
## Credits