Update README.md
README.md CHANGED
@@ -5,13 +5,14 @@ colorFrom: pink
 colorTo: pink
 sdk: static
 pinned: false
+license: apache-2.0
 ---
 
 Welcome to the LCO-Embedding project - Scaling Language-centric Omnimodal Representation Learning.
 
 ### Highlights:
-- We introduce LCO-Embedding
-- We introduce Generation-Representation Scaling
-- We introduce SeaDoc
+- We introduce **LCO-Embedding**, a language-centric omnimodal representation learning method and the LCO-Embedding model families, setting a new state-of-the-art on MIEB (Massive Image Embedding Benchmark) while supporting audio and video.
+- We introduce the **Generation-Representation Scaling Law**, connecting models' generative capabilities to their representation upper bound.
+- We introduce **SeaDoc**, a challenging visual document retrieval task in Southeast Asian languages, and show that continual generative pretraining before contrastive learning raises the representation upper bound.
 
 <!-- * Code: []() -->