---
license: apache-2.0
library_name: tempo-pfn
tags:
arxiv: 2510.25502
---

# TempoPFN: Synthetic Pre-Training of Linear RNNs for Zero-Shot Time Series Forecasting

[Paper](https://arxiv.org/abs/2510.25502) · [GIFT-Eval Leaderboard](https://huggingface.co/spaces/Salesforce/GIFT-Eval) · [Code](https://github.com/automl/TempoPFN) · [License](https://github.com/automl/TempoPFN/blob/main/LICENSE)

---

2. **Set up the environment:**

   ```bash
   # 1. Clone the Hugging Face repository
   git clone https://huggingface.co/AutoML-org/TempoPFN
   cd TempoPFN

   # 2. Create and activate a virtual environment
   python3.12 -m venv venv && source venv/bin/activate
   export PYTHONPATH=$PWD

   # 3. Install the PyTorch build matching your CUDA version
   # Example for CUDA 12.8:
   pip install torch --index-url https://download.pytorch.org/whl/cu128

   # 4. Install dependencies
   pip install .
   pip install ".[dev]"

   # 5. Run the quick start script
   python examples/quick_start_tempo_pfn.py

   # Alternatively, run the notebook version
   jupyter notebook examples/quick_start_tempo_pfn.ipynb
   ```
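The `export PYTHONPATH=$PWD` step matters because the repository's modules (e.g. `src/training/...`) are imported relative to the repo root. A minimal sketch of the mechanism it relies on (this is an illustration of Python's import path, not code from the repo):

```python
import os
import sys

# `export PYTHONPATH=$PWD` prepends the repo root to Python's module
# search path at interpreter startup, so imports like
# `import src.training.trainer_dist` resolve from any working directory.
# Here we reproduce the same effect from inside a running process.
repo_root = os.getcwd()  # stand-in for $PWD after `cd TempoPFN`
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

print(repo_root in sys.path)
```

If you skip this step, running scripts from a subdirectory typically fails with `ModuleNotFoundError: No module named 'src'`.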

### Hardware & Performance Tips

**GPU Required:** Inference requires a CUDA-capable GPU with a matching PyTorch build installed. Tested on NVIDIA A100/H100.

**First Run:** The first inference at a new sequence length is slow while the Triton kernels compile. Subsequent runs are fast.

**Cache Tip:** If you are working on a network filesystem, prevent slowdowns by routing the compiler caches to a local directory (such as `/tmp`) *before* running:

```bash
# route Triton and TorchInductor caches to a local scratch directory
LOCAL_CACHE_BASE="${TMPDIR:-/tmp}/tsf-$(date +%s)"
mkdir -p "${LOCAL_CACHE_BASE}/triton" "${LOCAL_CACHE_BASE}/torchinductor"
export TRITON_CACHE_DIR="${LOCAL_CACHE_BASE}/triton"
export TORCHINDUCTOR_CACHE_DIR="${LOCAL_CACHE_BASE}/torchinductor"
```
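The same cache redirection can be done from inside a Python entry point, as long as the environment variables are set before the first compilation. A sketch under that assumption (the automatic cleanup via `atexit` is my addition, not part of the README's instructions):

```python
import atexit
import os
import shutil
import tempfile

# Create a per-run scratch cache on local disk and point the Triton and
# TorchInductor caches at it; must run before any kernel is compiled.
cache_base = tempfile.mkdtemp(prefix="tsf-")
for sub in ("triton", "torchinductor"):
    os.makedirs(os.path.join(cache_base, sub), exist_ok=True)
os.environ["TRITON_CACHE_DIR"] = os.path.join(cache_base, "triton")
os.environ["TORCHINDUCTOR_CACHE_DIR"] = os.path.join(cache_base, "torchinductor")

# Remove the scratch cache when the process exits.
atexit.register(shutil.rmtree, cache_base, ignore_errors=True)
```

This trades cache reuse across runs for guaranteed local-disk speed; drop the `atexit` cleanup if you want subsequent runs to skip recompilation.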

## Training

All training and model parameters are controlled via YAML files in `configs/`.

```bash
# Single-GPU (Debug)
torchrun --standalone --nproc_per_node=1 src/training/trainer_dist.py --config ./configs/train.yaml

# Multi-GPU (e.g., 8 GPUs)
torchrun --standalone --nproc_per_node=8 src/training/trainer_dist.py --config ./configs/train.yaml
```
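`torchrun --standalone --nproc_per_node=N` launches N worker processes on one node and describes the process topology to each worker through environment variables. A minimal sketch of how a worker reads them (this is not the repo's trainer, which lives in `src/training/trainer_dist.py`):

```python
import os

# torchrun exports these for every worker it spawns; fall back to a
# single-process layout when the script is run directly with `python`.
rank = int(os.environ.get("RANK", 0))              # global worker index
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # GPU index on this node
world_size = int(os.environ.get("WORLD_SIZE", 1))  # total worker count

print(f"worker {rank}/{world_size}, local GPU index {local_rank}")
```

A distributed trainer typically uses `local_rank` to select its CUDA device and `rank`/`world_size` to shard the data.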

## Synthetic Data Generation