patchtst
abao committed · Commit bf79d10 · verified · 1 Parent(s): 25d348d

Update README.md

Files changed (1): README.md (+2 -1)
README.md CHANGED
@@ -6,7 +6,8 @@ license: cc-by-nc-4.0
 
 This is a scaled-up version of the checkpoint originally presented in our preprint. This model has 12 layers with 12 attention heads each.
 
-Trained with larger dataset of multiple initial conditions per system, with mixed periods as well.
+Trained with larger dataset of multiple initial conditions per system, with mixed periods as well.
+Specifically, using 8 out of the 16 initial conditions (ICs) per system that we provide in [skew-mixedp-ic16 dataset](https://huggingface.co/datasets/GilpinLab/skew-mixedp-ic16)
 
 *Panda*: Patched Attention for Nonlinear Dynamics.
 