Commit History

v10.1.6: Restore missing PositionalEncoding class
99f6209
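
The restored class is not shown in the log; a class by this name conventionally implements the sinusoidal positional encoding from the original Transformer paper. A minimal PyTorch sketch (dimensions and module placement are assumptions, not the project's actual code):

```python
import math
import torch
from torch import nn

class PositionalEncoding(nn.Module):
    """Sinusoidal positional encoding (Vaswani et al., 2017)."""

    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)  # (max_len, 1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)        # even dims: sine
        pe[:, 1::2] = torch.cos(pos * div)        # odd dims: cosine
        self.register_buffer("pe", pe)            # saved with the model, not trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); add the encoding for each position
        return x + self.pe[: x.size(1)]
```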

All commits by Elliotasdasdasfasas.

v10.1.5: Add debug trace for VL-JEPA loading errors
77e67bb

v10.1.4: Restore missing constants Y_ENCODER_MODEL and FALLBACK_RERANKER
d6b37e6

v10.1.3: Enable active training with AdamW optimizer
f660add
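
"Active training" suggests an optimizer is now attached and stepped; a minimal AdamW training step in PyTorch, with a stand-in model, loss, and illustrative hyperparameters rather than the project's real ones, might look like:

```python
import torch
from torch import nn

model = nn.Linear(32, 8)  # stand-in for the actual predictor network
opt = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

def train_step(x: torch.Tensor, target: torch.Tensor) -> float:
    """One optimization step; returns the scalar loss for logging."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()
    opt.step()  # AdamW applies decoupled weight decay here
    return loss.item()
```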

v10.1.1: Fix VL-JEPA train_step/analysis dimension logic
f048425

v10.1: Sequence-Based VL-JEPA (8 Layers, Node Tokens)
b5e36bd

v10.0: VL-JEPA Paper-Faithful (Transformer 4L + nomic-embed)
2aadd90

v9.0: VL-JEPA Thought Predictor (L18.5) - arXiv:2512.10942
c2f31a8

v8.0: Full Qwen3-VL Stack (L14 Embedding + L15 MRL + L16 Reranker)
77394fe

v7.0: Qwen3-VL-Embedding-2B (2048D, MRL, Multimodal)
d897b97

v6.0: Official Qwen3-Embedding-0.6B with native MRL support
285625f

v5.0: Original Qwen 3-Layer Architecture (L14 Qwen + L15 MRL + L16 MiniLM)
d344869

v4.2: BGE-base-en 768D - proven to work on cpu-basic
f4e7440

v4.1: Fix - Use nomic-embed-text with MRL support instead of Qwen LLM
ae50032

v4.0: Qwen L14-L15 Embedding + MiniLM L16 Reranker
3b50411

v3.2: Switch to sentence-transformers (MiniLM) for reliable CPU embedding
6ad64d2

v3.1: Lazy Loading - models load on-demand to save RAM
abccd64
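
The on-demand loading pattern this commit describes can be sketched as follows; the registry key and loader body are placeholders, not the project's actual code:

```python
from functools import lru_cache

_LOADERS = {
    # model key -> zero-arg loader; a real loader would construct the heavy model
    "embedder": lambda: object(),
}

@lru_cache(maxsize=None)
def get_model(key: str):
    """Construct the model the first time it is requested, then reuse the
    cached instance, so idle processes keep their RAM footprint small."""
    return _LOADERS[key]()
```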

v3.0: Two-Stage RAG - Embedding + Reranker with /rerank endpoint
968deaf

Enhanced multimodal API: EOS token, MRL (64-2048D), /embed_image, /embed_multi endpoints
53c2dc8
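
MRL (Matryoshka Representation Learning) trains embeddings so that a prefix of the full vector is itself a usable embedding; the 64-2048D range in the message presumably bounds the allowed truncation. A sketch of the truncate-and-renormalize step (numpy used for illustration; the function name is an assumption):

```python
import numpy as np

def truncate_mrl(embedding: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of an MRL embedding and L2-renormalize."""
    if not 64 <= dim <= 2048:
        raise ValueError("dim must be within the supported MRL range 64-2048")
    v = embedding[:dim]
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```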
