e5-omni: Explicit Cross-modal Alignment for Omni-modal Embeddings Paper • 2601.03666 • Published 19 days ago • 4
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training Paper • 2305.14342 • Published May 23, 2023
Chain of Thought Empowers Transformers to Solve Inherently Serial Problems Paper • 2402.12875 • Published Feb 20, 2024 • 13
Same Pre-training Loss, Better Downstream: Implicit Bias Matters for Language Models Paper • 2210.14199 • Published Oct 25, 2022
MoCa: Modality-aware Continual Pre-training Makes Better Bidirectional Multimodal Embeddings Paper • 2506.23115 • Published Jun 29, 2025 • 36