Feature Extraction
Transformers
Safetensors
qwen2_5_omni_thinker
image-text-to-text
multimodal-embedding
How to use LCO-Embedding/LCO-Embedding-Omni-3B-2605 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="LCO-Embedding/LCO-Embedding-Omni-3B-2605")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForImageTextToText

tokenizer = AutoTokenizer.from_pretrained("LCO-Embedding/LCO-Embedding-Omni-3B-2605")
model = AutoModelForImageTextToText.from_pretrained("LCO-Embedding/LCO-Embedding-Omni-3B-2605")
```
May 2026 update of LCO-Embedding models
It's been a while since we introduced LCO-Embedding 3B & 7B in October 2025. Today we are releasing a small update: LCO-Embedding-Omni-3B-2605!
In this version, we make substantial improvements across all four modalities (text, image, audio, and video).
Main Benchmarks (text, image, audio):
Other Capabilities (code, video):
Checkpoint Overview
| Model | Release Time |
|---|---|
| LCO-Embedding-Omni-3B | Oct 2025 |
| LCO-Embedding-Omni-7B | Oct 2025 |
| LCO-Embedding-Omni-3B-2605 | May 2026 |
Usage
All inference code is the same as for our original models and seamlessly supports the new checkpoint; simply change the model name.
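For instance, a minimal sketch of the one-line switch, assuming the feature-extraction pipeline entry point shown above (the original checkpoint name `LCO-Embedding/LCO-Embedding-Omni-3B` is inferred from the checkpoint table):

```python
# Checkpoint names from the table above (original name is an assumption).
OG_MODEL = "LCO-Embedding/LCO-Embedding-Omni-3B"       # Oct 2025 release
NEW_MODEL = "LCO-Embedding/LCO-Embedding-Omni-3B-2605" # May 2026 update


def load_embedder(model_name: str):
    """Load an embedding pipeline for any LCO-Embedding checkpoint.

    The loading code is identical for every checkpoint; only the
    model name passed in changes.
    """
    from transformers import pipeline

    return pipeline("feature-extraction", model=model_name)


# Switching to the May 2026 update is a one-line change:
# embedder = load_embedder(OG_MODEL)   # before
# embedder = load_embedder(NEW_MODEL)  # after
```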
Contributors
LCO-Embedding Team members that made this release happen:
Chenghao Xiao, Ruifeng Yuan, Long Li, Fengyu Cai, Yiqi Liu, Yang Wang, Chenghua Lin, Hao Zhang, Hou Pong Chan, Ling Zhang