Update README.md
# LWM-128K-Jax Model Card
## Model details
**Model type:**
LWM-128K-Jax is an open-source model trained from LLaMA-2 on a subset of Books3 filtered data, along with a large collection of image and video data. It is an auto-regressive vision-language model based on the transformer architecture. This is the Jax / Flax version of the parameters.
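Since Jax / Flax checkpoints store parameters as a nested dict (a "pytree"), a quick sanity check after restoring one is to count its parameters. A minimal stdlib-only sketch, using a toy nested dict with made-up module names and shapes in place of the real LWM-128K-Jax checkpoint layout:

```python
# Count parameters in a Flax-style nested parameter dict (a "pytree").
# The structure below is a toy stand-in: names and shapes are
# illustrative, NOT the actual LWM-128K-Jax checkpoint layout.
import math

def count_params(tree):
    """Recursively sum the element counts of every leaf."""
    if isinstance(tree, dict):
        return sum(count_params(v) for v in tree.values())
    # A leaf: in a real checkpoint this would be a jax/numpy array
    # (use arr.size); here a plain shape tuple stands in for it.
    return math.prod(tree)

toy_params = {
    "transformer": {
        "wte": {"embedding": (32000, 4096)},        # token embeddings
        "h_0": {"attn": {"wq": {"kernel": (4096, 4096)}}},
    },
    "lm_head": {"kernel": (4096, 32000)},
}

print(count_params(toy_params))
```

On a real restored checkpoint the same traversal works if the leaf branch returns `leaf.size` instead of `math.prod(leaf)`.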
**Model date:**
LWM-128K-Jax was trained in January 2024.
**Paper or resources for more information:**
https://largeworldmodel.github.io/