Upload folder using huggingface_hub
- README.md +57 -0
- hub_metadata.json +6 -0
- imputation_head.safetensors +3 -0
- metadata.json +7 -0
- model.safetensors +3 -0
- pretrain_config.json +16 -0
README.md
ADDED
@@ -0,0 +1,57 @@
---
tags:
- traffic-forecasting
- time-series
- graph-neural-network
- stgformer_pretrained
datasets:
- largest-gla
---

# Spatial-Temporal Graph Transformer (Pretrained) - LARGEST-GLA

Spatial-Temporal Graph Transformer (STGFORMER_PRETRAINED), pretrained on the LARGEST-GLA dataset for traffic speed forecasting.

## Model Description

STGFormer pretrained checkpoint for LARGEST-GLA. The checkpoint contains the pretrained model weights and the imputation head produced by masked-node pretraining. Load it through the `load_from` config option.
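
The exact config schema is defined by the training code; as a rough, hypothetical sketch of pointing `load_from` at this checkpoint (the key names and checkpoint path below are assumptions, not the verified schema):

```python
# Hypothetical fine-tuning config sketch; key names are assumptions,
# check the STGFormer training code for the actual schema.
config = {
    "dataset": "LARGEST-GLA",
    "load_from": "emelle/STGFormer-pretrain-LARGEST-GLA",  # Hub repo or local checkpoint dir (assumed)
    "model_dim": 128,   # from pretrain_config.json
    "num_nodes": 3834,  # from pretrain_config.json
}
```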

## Dataset

**LARGEST-GLA**: Traffic speed data from highway sensors.

## Usage

```python
from utils.stgformer import load_from_hub, get_predictions

# Load the pretrained model and its data scaler from the Hub
model, scaler = load_from_hub("LARGEST-GLA", hf_repo_prefix="emelle/STGFormer-pretrain")

# Get predictions on a prepared test dataset
predictions = get_predictions(model, scaler, test_dataset)
```
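
If the `utils.stgformer` helpers are not available, the checkpoint files can also be fetched and inspected directly. This is a minimal sketch using `huggingface_hub` and `safetensors`; the repo id is an assumption derived from the `hf_repo_prefix` in `hub_metadata.json` and should be checked against the actual Hub repo.

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Assumed repo id: hf_repo_prefix plus dataset name (verify before use)
repo_id = "emelle/STGFormer-pretrain-LARGEST-GLA"

# Download the LFS-backed weight files from the Hub
model_path = hf_hub_download(repo_id, "model.safetensors")
head_path = hf_hub_download(repo_id, "imputation_head.safetensors")

# Load raw state dicts; tensor names and shapes must match your STGFormer instantiation
model_state = load_file(model_path)
head_state = load_file(head_path)

print(f"model tensors: {len(model_state)}, imputation-head tensors: {len(head_state)}")
```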

## Training

The model was trained using the STGFORMER_PRETRAINED implementation with its default hyperparameters.

## Citation

If you use this model, please cite the original STGFORMER_PRETRAINED paper:

```bibtex
@article{lan2022stgformer,
  title={STGformer: Spatial-Temporal Graph Transformer for Traffic Forecasting},
  author={Lan, Shengnan and Ma, Yong and Huang, Weijia and Wang, Wanwei and Yang, Hui and Li, Peng},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2022}
}
```

## License

This model checkpoint is released under the same license as the training code.

hub_metadata.json
ADDED
@@ -0,0 +1,6 @@
{
  "dataset": "LARGEST-GLA",
  "checkpoint_type": "pretrained",
  "framework": "PyTorch",
  "hf_repo_prefix": "emelle/STGFormer-pretrain"
}

imputation_head.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3859448ad6afe4121c06db5277efd00408182778593768ae911297614cdb59d
size 6328

metadata.json
ADDED
@@ -0,0 +1,7 @@
{
  "dataset": "LARGEST-GLA",
  "upload_date": "2025-12-11T07:31:38.058165",
  "metrics": {},
  "framework": "PyTorch",
  "model_type": "STGFORMER_PRETRAINED"
}

model.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d96b3a81501a79cc9db9f3a40c5ee12c157ccf071a12d6f2f91657db9d8f407
size 21158960

pretrain_config.json
ADDED
@@ -0,0 +1,16 @@
{
  "dataset_name": "LARGEST-GLA",
  "pretrain_config": {
    "stage1_epochs": 10,
    "stage1_mask_ratio": 0.15,
    "stage2_epochs": 10,
    "stage2_mask_ratio": 0.1,
    "use_normalized_data": true,
    "pretrain_data_fraction": 0.1,
    "pretrain_batch_size": 8,
    "learning_rate": 0.001,
    "save_to": "emelle/STGFormer-pretrain"
  },
  "model_dim": 128,
  "num_nodes": 3834
}
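
The pretraining hyperparameters above can be read back by downstream scripts. A minimal sketch, assuming the file is available locally as `pretrain_config.json`; the masking shown is only illustrative, the actual masked-node pretraining logic lives in the training code.

```python
import json

import torch

# Read the pretraining config shipped with the checkpoint
with open("pretrain_config.json") as f:
    cfg = json.load(f)

num_nodes = cfg["num_nodes"]                         # 3834 sensors
ratio = cfg["pretrain_config"]["stage1_mask_ratio"]  # 0.15 in stage 1

# Illustrative masked-node selection: randomly hide ~15% of nodes per sample,
# mirroring the stage-1 mask ratio used during pretraining
mask = torch.rand(num_nodes) < ratio
print(f"masked {int(mask.sum())} of {num_nodes} nodes")
```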