witgaw committed
Commit 4e796cf · verified · 1 Parent(s): dd37731

Upload STGFORMER_MAMBA_FAST model trained on PEMS-BAY

Files changed (6)
  1. README.md +57 -0
  2. config.json +28 -0
  3. hub_metadata.json +7 -0
  4. metadata.json +7 -0
  5. model.safetensors +3 -0
  6. scaler.json +0 -0
README.md ADDED
@@ -0,0 +1,57 @@
+ ---
+ tags:
+ - traffic-forecasting
+ - time-series
+ - graph-neural-network
+ - stgformer_mamba_fast
+ datasets:
+ - pems-bay
+ ---
+
+ # Spatial-Temporal Graph Transformer (Mamba Fast) - PEMS-BAY
+
+ Spatial-Temporal Graph Transformer with a fast Mamba configuration (STGFORMER_MAMBA_FAST), trained on the PEMS-BAY dataset for traffic speed forecasting.
+
+ ## Model Description
+
+ STGFormer with an optimized Mamba SSM temporal module (fast configuration intended for validation runs).
+
+
+
+ ## Dataset
+
+ **PEMS-BAY**: Traffic speed data from 325 highway sensors in the San Francisco Bay Area.
+
+ ## Usage
+
+ ```python
+ from utils.stgformer import load_from_hub
+
+ # Load the trained model and its scaler from the Hub
+ model, scaler = load_from_hub("PEMS-BAY", hf_repo_prefix="STGFORMER_MAMBA_FAST")
+
+ # Run inference on a prepared test dataset
+ from utils.stgformer import get_predictions
+ predictions = get_predictions(model, scaler, test_dataset)
+ ```
+
+ ## Training
+
+ The model was trained with the STGFORMER_MAMBA_FAST implementation using default hyperparameters.
+
+ ## Citation
+
+ If you use this model, please cite the original STGformer paper:
+
+ ```bibtex
+ @article{lan2022stgformer,
+ title={STGformer: Spatial-Temporal Graph Transformer for Traffic Forecasting},
+ author={Lan, Shengnan and Ma, Yong and Huang, Weijia and Wang, Wanwei and Yang, Hui and Li, Peng},
+ journal={IEEE Transactions on Neural Networks and Learning Systems},
+ year={2022}
+ }
+ ```
+
+ ## License
+
+ This model checkpoint is released under the same license as the training code.
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+ "num_nodes": 325,
+ "in_steps": 12,
+ "out_steps": 12,
+ "input_dim": 1,
+ "output_dim": 1,
+ "steps_per_day": 288,
+ "input_embedding_dim": 24,
+ "tod_embedding_dim": 24,
+ "dow_embedding_dim": 0,
+ "spatial_embedding_dim": 0,
+ "adaptive_embedding_dim": 80,
+ "num_heads": 4,
+ "num_layers": 3,
+ "dropout_a": 0.3,
+ "use_mixed_proj": true,
+ "model_dim": 128,
+ "dataset": "PEMS-BAY",
+ "graph_mode": "learned",
+ "lambda_hybrid": 0.5,
+ "sparsity_k": null,
+ "propagation_mode": "power",
+ "temporal_mode": "mamba",
+ "mamba_d_state": 8,
+ "mamba_d_conv": 2,
+ "mamba_expand": 1,
+ "use_zero_init": true
+ }
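The config above can be sanity-checked before building the model. A minimal sketch, assuming the common STGFormer-style batch layout `(batch, in_steps, num_nodes, input_dim)` (an assumption, not stated in this diff); values are copied from the file:

```python
import json

# Subset of config.json above (values copied verbatim from the file).
config = json.loads("""
{
  "num_nodes": 325,
  "in_steps": 12,
  "out_steps": 12,
  "input_dim": 1,
  "model_dim": 128,
  "num_heads": 4,
  "num_layers": 3
}
""")

# The attention head size must divide the model dimension evenly.
assert config["model_dim"] % config["num_heads"] == 0

# Assumed input layout: (batch, in_steps, num_nodes, input_dim).
batch_shape = (32, config["in_steps"], config["num_nodes"], config["input_dim"])
print(batch_shape)  # (32, 12, 325, 1)
```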
hub_metadata.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "dataset": "PEMS-BAY",
+ "metrics": {},
+ "framework": "PyTorch",
+ "hf_repo_prefix": "STGFORMER_MAMBA_FAST",
+ "implementation": "internal"
+ }
metadata.json ADDED
@@ -0,0 +1,7 @@
+ {
+ "dataset": "PEMS-BAY",
+ "upload_date": "2025-12-07T11:44:49.636535",
+ "metrics": {},
+ "framework": "PyTorch",
+ "model_type": "STGFORMER_MAMBA_FAST"
+ }
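The `metrics` field above is empty. The metrics usually reported for PEMS-BAY forecasting are MAE, RMSE, and MAPE; a minimal sketch of computing them (not the repo's evaluation code — the zero-speed mask is a common convention, assumed here):

```python
import json
import numpy as np

def traffic_metrics(pred: np.ndarray, true: np.ndarray) -> dict:
    """MAE/RMSE/MAPE over positions with nonzero ground truth (common convention)."""
    mask = true > 0
    err = pred[mask] - true[mask]
    return {
        "MAE": float(np.mean(np.abs(err))),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAPE": float(np.mean(np.abs(err / true[mask]))) * 100.0,
    }

# Toy speeds (mph) for illustration only.
true = np.array([60.0, 65.0, 55.0, 70.0])
pred = np.array([61.0, 64.0, 56.0, 68.0])
print(json.dumps(traffic_metrics(pred, true), indent=2))
```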
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f65237c57337d27d75009d9a3f2da7177cdcf6e557c49ef90f18797ba465a8df
+ size 9007472
scaler.json ADDED
The diff for this file is too large to render. See raw diff
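The scaler file is not rendered above. Assuming it stores the mean and standard deviation used for z-score normalization (the usual convention for PEMS-BAY pipelines — an assumption, not confirmed by this diff, and the field names below are hypothetical), a minimal sketch of mapping model outputs back to speeds:

```python
import json
import numpy as np

# Hypothetical scaler.json contents; the real file's schema is not shown in this diff.
scaler = json.loads('{"mean": 62.5, "std": 9.3}')

def inverse_transform(x: np.ndarray, mean: float, std: float) -> np.ndarray:
    """Map z-scored model outputs back to traffic speeds (mph)."""
    return x * std + mean

z = np.array([-1.0, 0.0, 1.0])
speeds = inverse_transform(z, scaler["mean"], scaler["std"])

# Round-trip check: normalizing again recovers the z-scores.
assert np.allclose((speeds - scaler["mean"]) / scaler["std"], z)
```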