Update README.md
README.md CHANGED

```diff
@@ -7,6 +7,10 @@ sdk: static
 pinned: false
 ---
 
+- **[2024-06]** 🚀🚀 We release `LLaVA-NeXT-Interleave`, an all-around LMM that extends the model's capabilities to new real-world settings: multi-image, multi-frame (video), and multi-view (3D), while maintaining performance in the multi-patch (single-image) scenario.
+
+[GitHub](https://github.com/LLaVA-VL/LLaVA-NeXT) | [Blog](https://llava-vl.github.io/blog/2024-06-16-llava-next-interleave/)
+
 - **[2024-06]** 🚀🚀 We release `LongVA`, a long-context model with state-of-the-art performance on video understanding tasks.
 
 [GitHub](https://github.com/EvolvingLMMs-Lab/LongVA) | [Blog](https://lmms-lab.github.io/posts/longva/)
```