Update README.md
Browse files
README.md
CHANGED
@@ -8,6 +8,10 @@ license: apache-2.0
<img src="logo.png" alt="Logo" height="300"/>
</p>

+<br>
+
+> **🚨 NEW Version 0.2.0: Mantis pre-training is now available! 🚨**
+
## Overview

**MANTIS** is an open-source Python package that provides a pre-trained time series classification foundation model, developed by Huawei Noah's Ark Lab.
@@ -65,6 +69,20 @@ Please refer to [`getting_started/`](https://github.com/vfeofanov/mantis/tree/ma
Below we summarize the basic commands needed to use the package.

+### Prepare Data.
+
+As an input, Mantis accepts any time series whose sequence length is a **multiple** of 32, which corresponds to the number of tokens fixed in our model.
+We found that resizing time series via interpolation is generally a good choice:
+``` python
+import torch
+import torch.nn.functional as F
+
+def resize(X):
+    # X: (n_samples, n_channels, seq_len); linearly interpolate the time axis to length 512
+    X_scaled = F.interpolate(torch.tensor(X, dtype=torch.float), size=512, mode='linear', align_corners=False)
+    return X_scaled.numpy()
+```
+Generally speaking, the interpolation size is a hyperparameter to tune. Nevertheless, since Mantis was pre-trained on sequences of length 512, interpolating to this length is a reasonable default in most cases.
+
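As a quick sanity check of the snippet above (the shapes here are illustrative, not taken from the diff), `resize` maps any input length to 512:

``` python
import numpy as np

X = np.random.randn(8, 3, 140)  # 8 series, 3 channels, length 140 (not a multiple of 32)
X_resized = resize(X)           # linear interpolation along the last axis
print(X_resized.shape)          # (8, 3, 512)
```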
### Initialization.
To load our pre-trained model from the HuggingFace Hub, it is sufficient to run:
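The loading snippet itself falls outside this hunk. As a minimal sketch, the `Mantis8M` class, the module paths, and the `paris-noah/Mantis-8M` checkpoint id are assumptions not confirmed by the diff (only `MantisTrainer` appears in it); loading could look like:

``` python
import torch
from mantis.architecture import Mantis8M  # assumed module path and class name
from mantis.trainer import MantisTrainer  # MantisTrainer is referenced later in this README

device = 'cuda' if torch.cuda.is_available() else 'cpu'
network = Mantis8M(device=device)                          # instantiate the backbone
network = network.from_pretrained("paris-noah/Mantis-8M")  # assumed HuggingFace checkpoint id
model = MantisTrainer(device=device, network=network)      # scikit-learn-style wrapper
```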
@@ -125,6 +143,15 @@ adapter = LinearChannelCombiner(num_channels=X.shape[1], new_num_channels=5)
model.fit(X, y, adapter=adapter, fine_tuning_type='adapter_head')
```
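A possible follow-up, hypothetical and not part of this diff: if `MantisTrainer` follows the scikit-learn convention that its `fit` call suggests, inference after fine-tuning would be a single call:

``` python
# hypothetical: `predict` is an assumed method name following the fit/predict convention
y_pred = model.predict(X)
```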

+### Pre-training.
+
+The model can be pre-trained using the `pretrain` method of `MantisTrainer`, which supports data parallelism. A pre-training demo is available at `getting_started/pretrain.py`.
+For example, to pre-train the model on 4 GPUs, you can run the following commands:
+```
+cd getting_started/
+python -m torch.distributed.run --nproc_per_node=4 --nnodes=1 pretrain.py --seed 42
+```
+
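On recent PyTorch versions, `torchrun` is the console entry point for `python -m torch.distributed.run`, so the same launch can be written more compactly (flags unchanged from the diff above):

```
torchrun --nproc_per_node=4 --nnodes=1 pretrain.py --seed 42
```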
## Citing Mantis 📚
If you use Mantis in your work, please cite this technical report: