MOSS-VL-Base-0408 is a pretrained base checkpoint, and we are actively improving several core capabilities for future iterations:

- 📄 **Stronger OCR, Especially for Long Documents** — We plan to further improve text recognition, document parsing, and long-document understanding. A key focus is achieving near-lossless information extraction and understanding for extremely long and structurally complex inputs, such as accurately parsing text, tables, and mathematical layouts from multi-page academic papers (dozens of pages) or dense PDF reports without degrading context or structural integrity.
- 🎬 **Expanded Extremely Long Video Understanding** — We aim to significantly extend the model's capacity for comprehending extremely long videos spanning several hours to dozens of hours. This includes advancing temporal reasoning and cross-frame event tracking for continuous analysis of full-length movies, lengthy meetings, or extended surveillance streams, enabling robust retrieval and understanding over ultra-long visual contexts.

> [!NOTE]
> We expect future releases to continue strengthening the base model itself while also enabling stronger downstream aligned variants built on top of it.