Safetensors
patchtst
abao committed on · Commit 9b4b5f0 · verified · 1 Parent(s): bf79d10

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -7,8 +7,8 @@ license: cc-by-nc-4.0
 This is a scaled-up version of the checkpoint originally presented in our preprint. This model has 12 layers with 12 attention heads each.

 Trained with larger dataset of multiple initial conditions per system, with mixed periods as well.
-Specifically, using 8 out of the 16 initial conditions (ICs) per system that we provide in [skew-mixedp-ic16 dataset](https://huggingface.co/datasets/GilpinLab/skew-mixedp-ic16)
-
+Specifically, using 8 out of the 16 initial conditions (ICs) per system that we provide in our [skew-mixedp-ic16 dataset](https://huggingface.co/datasets/GilpinLab/skew-mixedp-ic16)
+We trained this model with per-device batch size 384, across 6 AMD MI100X GPUs, for 800k iterations.
 *Panda*: Patched Attention for Nonlinear Dynamics.

 Paper abstract:
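The added line states a per-device batch size of 384 across 6 GPUs for 800k iterations. A minimal sketch of the training scale this implies (plain arithmetic from the commit message; the variable names are illustrative, not from the repo):

```python
# Training-scale arithmetic implied by the commit message (illustrative only).
per_device_batch = 384   # per-device batch size stated in the commit
num_gpus = 6             # AMD MI100X GPUs stated in the commit
iterations = 800_000     # training iterations stated in the commit

global_batch = per_device_batch * num_gpus    # sequences per optimizer step
total_sequences = global_batch * iterations   # total (non-unique) sequences seen

print(global_batch)      # 2304 sequences per step
print(total_sequences)   # 1843200000, i.e. about 1.84e9 sequences over training
```

So each optimizer step processes 2304 sequences globally, and the full run sees roughly 1.84 billion (non-unique) training sequences.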