Vladyslav Moroshan committed on
Commit
1b89a8e
·
1 Parent(s): 0a58567

Update README.md

Files changed (1):
  1. README.md +38 -46
README.md CHANGED
@@ -2,20 +2,25 @@
2
  license: apache-2.0
3
  library_name: tempo-pfn
4
  tags:
5
- - time-series-forecasting
6
- - zero-shot
7
- - rnn
8
- - linear-rnn
9
- - synthetic-data
10
- - foundation-model
11
- - automl
12
- - time-series
 
 
 
 
 
13
  arxiv: 2510.25502
14
  ---
15
 
16
  # TempoPFN: Synthetic Pre-Training of Linear RNNs for Zero-Shot Time Series Forecasting
17
 
18
- [![arXiv](https://img.shields.io/badge/arXiv-2510.25502-b31b1b.svg)](https://arxiv.org/abs/2510.25502) [![License](https://img.shields.io/badge/License-Apache_2.0-green.svg)](https://github.com/automl/TempoPFN/blob/main/LICENSE)
19
 
20
  ---
21
 
@@ -52,49 +57,41 @@ This repository includes the **pretrained 38M parameter model** (`models/checkpo
52
 
53
  2. **Set up the environment:**
54
  ```bash
55
- python -m venv venv && source venv/bin/activate
56
 
57
- # 1. Install PyTorch version matching your CUDA version
58
- # Example for CUDA 12.8:
59
- pip install torch --index-url https://download.pytorch.org/whl/cu128
60
 
61
- # 2. Install TempoPFN and all other dependencies
62
- pip install -r requirements.txt
63
- export PYTHONPATH=$PWD
64
- ```
65
 
66
- ## 🚀 Quick Start: Run the Demo
 
 
67
 
68
- **Prerequisites:**
69
- * You must have a **CUDA-capable GPU** with a matching PyTorch version installed.
70
- * You have run `export PYTHONPATH=$PWD` from the repo's root directory (see Installation).
71
 
72
- ### 1. Run the Quick Start Script
 
 
73
 
74
- Run a demo forecast on a synthetic sine wave. This script will automatically find and load the `models/checkpoint_38M.pth` file included in this repository.
75
- ```bash
76
  python examples/quick_start_tempo_pfn.py
77
- ```
78
-
79
- ### 2. Run with a Different Checkpoint (Optional)
80
-
81
- If you have trained your own model, you can point the script to it:
82
- ```bash
83
- python examples/quick_start_tempo_pfn.py --checkpoint /path/to/your/checkpoint.pth
84
- ```
85
 
86
- ### 3. Run the Notebook version
87
- ```bash
88
  jupyter notebook examples/quick_start_tempo_pfn.ipynb
89
  ```
90
 
91
  ### Hardware & Performance Tips
92
 
93
- **GPU Required:** Inference requires a CUDA-capable GPU. Tested on NVIDIA A100/H100.
94
 
95
- **First Inference May Be Slow:** Initial calls for unseen sequence lengths trigger Triton kernel compilation. Subsequent runs are cached and fast.
96
 
97
- **Triton Caches:** To prevent slowdowns from writing caches to a network filesystem, route caches to a local directory (like `/tmp`) before running:
98
  ```bash
99
  LOCAL_CACHE_BASE="${TMPDIR:-/tmp}/tsf-$(date +%s)"
100
  mkdir -p "${LOCAL_CACHE_BASE}/triton" "${LOCAL_CACHE_BASE}/torchinductor"
@@ -106,21 +103,16 @@ python examples/quick_start_tempo_pfn.py
106
 
107
  ## 🚂 Training
108
 
109
- ### Single-GPU Training (for debugging)
 
110
  ```bash
 
111
  torchrun --standalone --nproc_per_node=1 src/training/trainer_dist.py --config ./configs/train.yaml
112
- ```
113
 
114
- ### Multi-GPU Training (Single-Node)
115
-
116
- This example uses 8 GPUs. The training script uses PyTorch DistributedDataParallel (DDP).
117
- ```bash
118
  torchrun --standalone --nproc_per_node=8 src/training/trainer_dist.py --config ./configs/train.yaml
119
  ```
120
 
121
- ### Configuration
122
-
123
- All training and model parameters are controlled via YAML files in `configs/` (architecture, optimizers, paths).
124
 
125
  ## 💾 Synthetic Data Generation
126
 
 
2
  license: apache-2.0
3
  library_name: tempo-pfn
4
  tags:
5
+ - TempoPFN
6
+ - time-series
7
+ - forecasting
8
+ - time-series-forecasting
9
+ - foundation models
10
+ - pretrained models
11
+ - zero-shot
12
+ - synthetic-data
13
+ - rnn
14
+ - linear-rnn
15
+ leaderboards:
16
+ - Salesforce/GIFT-Eval
17
+ pipeline_tag: time-series-forecasting
18
  arxiv: 2510.25502
19
  ---
20
 
21
  # TempoPFN: Synthetic Pre-Training of Linear RNNs for Zero-Shot Time Series Forecasting
22
 
23
+ [![preprint](https://img.shields.io/static/v1?label=Paper&message=2510.25502&color=B31B1B&logo=arXiv)](https://arxiv.org/abs/2510.25502) [![GIFT-Eval](https://img.shields.io/badge/%F0%9F%8F%86%20GIFT--Eval-Leaderboard-0078D4)](https://huggingface.co/spaces/Salesforce/GIFT-Eval) [![github](https://img.shields.io/badge/%F0%9F%92%BB%20GitHub-Repo-grey)](https://github.com/automl/TempoPFN) [![License: Apache 2.0](https://img.shields.io/badge/License-Apache%202.0-green.svg)](https://github.com/automl/TempoPFN/blob/main/LICENSE)
24
 
25
  ---
26
 
 
57
 
58
  2. **Set up the environment:**
59
  ```bash
+ # 1. Clone the Hugging Face repository
+ git clone https://huggingface.co/AutoML-org/TempoPFN
+ cd TempoPFN
+
+ # 2. Set up the environment
+ python3.12 -m venv venv && source venv/bin/activate
+ export PYTHONPATH=$PWD
+
+ # 3. Install the PyTorch version matching your CUDA version
+ # Example for CUDA 12.8:
+ pip install torch --index-url https://download.pytorch.org/whl/cu128
+
+ # 4. Install dependencies
+ pip install .
+ pip install .[dev]
+
+ # 5. Run the Quick Start Script
  python examples/quick_start_tempo_pfn.py
+
+ # 6. Alternatively, run the Notebook version
  jupyter notebook examples/quick_start_tempo_pfn.ipynb
  ```
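
The `cu128` suffix in the wheel index URL is PyTorch's tag for CUDA 12.8. As a small sketch (bash string substitution; the `cuda_version` value is just an example), the tag for other CUDA versions can be derived the same way:

```shell
# Map a CUDA version string to PyTorch's wheel-index tag: "12.8" -> "cu128".
cuda_version="12.8"
cuda_tag="cu${cuda_version//./}"   # strip the dot, prefix "cu"
index_url="https://download.pytorch.org/whl/${cuda_tag}"
echo "pip install torch --index-url ${index_url}"
```

For CPU-only machines, PyTorch serves CUDA-free wheels from the `https://download.pytorch.org/whl/cpu` index instead.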
87
 
88
  ### Hardware & Performance Tips
89
 
90
+ **GPU Required:** Inference requires a CUDA-capable GPU with a matching PyTorch version installed. Tested on NVIDIA A100/H100.
91
 
92
+ **First Run:** The first inference for a new sequence length will be slow due to Triton kernel compilation. Subsequent runs will be fast.
93
 
94
+ **Cache Tip:** If using a network filesystem, prevent slowdowns by routing caches to a local directory (like `/tmp`) *before* running:
95
  ```bash
96
  LOCAL_CACHE_BASE="${TMPDIR:-/tmp}/tsf-$(date +%s)"
97
  mkdir -p "${LOCAL_CACHE_BASE}/triton" "${LOCAL_CACHE_BASE}/torchinductor"
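# Route the caches to the local directories created above; TRITON_CACHE_DIR
# and TORCHINDUCTOR_CACHE_DIR are the env vars Triton and TorchInductor honor.
export TRITON_CACHE_DIR="${LOCAL_CACHE_BASE}/triton"
export TORCHINDUCTOR_CACHE_DIR="${LOCAL_CACHE_BASE}/torchinductor"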
 
103
 
104
  ## 🚂 Training
105
 
106
+ All training and model parameters are controlled via YAML files in `configs/`.
107
+
108
  ```bash
+ # Single-GPU (debug)
  torchrun --standalone --nproc_per_node=1 src/training/trainer_dist.py --config ./configs/train.yaml
+
+ # Multi-GPU (e.g., 8 GPUs)
  torchrun --standalone --nproc_per_node=8 src/training/trainer_dist.py --config ./configs/train.yaml
  ```
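
The two commands differ only in `--nproc_per_node`. When scripting launches across machines with different GPU counts, the value can be detected at run time; a minimal sketch (assumes `nvidia-smi` is present on GPU machines, and only prints the launch command rather than executing it):

```shell
# nvidia-smi -L prints one line per visible GPU; count them.
NGPUS="$(nvidia-smi -L 2>/dev/null | wc -l)"
# Fall back to a single process when no GPU (or no nvidia-smi) is found.
[ "${NGPUS}" -gt 0 ] || NGPUS=1
echo torchrun --standalone --nproc_per_node="${NGPUS}" src/training/trainer_dist.py --config ./configs/train.yaml
```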
115
 
 
 
 
116
 
117
  ## 💾 Synthetic Data Generation
118