pinned: false
---

- **[2024-06]** 🚀🚀 We release `LongVA`, a long-context language model with state-of-the-art performance on video understanding tasks.
[GitHub](https://github.com/EvolvingLMMs-Lab/LongVA) | [Blog](https://lmms-lab.github.io/posts/longva/)
- **[2024-06]** 🎬🎬 `lmms-eval/v0.2` has been upgraded to support video evaluation for models such as LLaVA-NeXT Video and Gemini 1.5 Pro, across tasks including EgoSchema, PerceptionTest, VideoMME, and more.
[GitHub](https://github.com/EvolvingLMMs-Lab/lmms-eval) | [Blog](https://lmms-lab.github.io/posts/lmms-eval-0.2/)
- **[2024-05]** 🚀🚀 We release `LLaVA-NeXT Video`, a video model with state-of-the-art performance, reaching the level of Google's Gemini on diverse video understanding tasks.
[GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-04-30-llava-next-video/)
- **[2024-05]** 🚀🚀 We release `LLaVA-NeXT`, with state-of-the-art, near-GPT-4V performance on multiple multimodal benchmarks. The LLaVA model family now scales to the 72B and 110B parameter level.
[GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/)
- **[2024-03]** We release `lmms-eval`, a toolkit for holistic evaluation across 50+ multimodal datasets and 10+ models; a sketch of a typical run appears after this list.
[GitHub](https://github.com/EvolvingLMMs-Lab/lmms-eval) | [Blog](https://lmms-lab.github.io/posts/lmms-eval-0.1/)
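
For readers who want to try `lmms-eval`, the minimal Python sketch below simply shells out to its command-line interface. The model identifier (`llava`), task name (`mme`), and flag spellings are assumptions modeled on the lm-evaluation-harness-style interface, not confirmed options; check the lmms-eval GitHub README for the exact invocation, and swap in a video task name to exercise the v0.2 video evaluations.

```python
# Minimal sketch of launching an lmms-eval run from Python by invoking
# its CLI. The model/task names and flags below are assumptions modeled
# on the lm-evaluation-harness interface; verify against the lmms-eval
# README before use.
import subprocess

cmd = [
    "python", "-m", "lmms_eval",
    "--model", "llava",          # assumed model identifier
    "--tasks", "mme",            # assumed task name; use a video task for v0.2
    "--batch_size", "1",
    "--output_path", "./logs/",  # assumed flag for where results are written
]
subprocess.run(cmd, check=True)  # raises CalledProcessError if the run fails
```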