Revela
Revela-500M is a self-supervised bi-encoder dense retriever trained with the Revela objective on raw Wikipedia text.
It uses the 500M-parameter Qwen2.5-0.5B backbone and was trained on 320K Wikipedia batches (batch size 16).
The in-batch attention mechanism enables fully self-supervised learning without manually mined relevance labels.
See the paper for full details.
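For intuition, bi-encoder dense retrievers are typically trained by scoring every query against every passage in a batch, so each example's positives and negatives come from the batch itself. The sketch below shows the standard in-batch contrastive (InfoNCE) loss as an illustration; it is not the exact Revela objective, which replaces explicit relevance labels with an in-batch attention mechanism (see the paper).

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(q, d, temperature=0.05):
    """Standard in-batch contrastive loss for a bi-encoder retriever.

    q, d: (batch, dim) query and document embeddings; the pair at each
    row index is treated as the positive, all other rows as negatives.
    """
    q = F.normalize(q, dim=-1)
    d = F.normalize(d, dim=-1)
    scores = q @ d.T / temperature        # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0))      # diagonal entries are the positives
    return F.cross_entropy(scores, labels)

# Example with random embeddings (batch of 16, dimension 8).
loss = in_batch_contrastive_loss(torch.randn(16, 8), torch.randn(16, 8))
```

Because every batch supplies its own negatives, no relevance mining is needed at training time, which is the property Revela's self-supervised setup also relies on.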
| Repository | Description |
|---|---|
| trumancai/Revela-code-3b | 3B-parameter code retriever. |
| trumancai/Revela-code-1b | 1B-parameter code retriever. |
| trumancai/Revela-code-500M | 500M-parameter code retriever. |
| trumancai/Revela-3b | 3B-parameter Wikipedia retriever. |
| trumancai/Revela-1b | 1B-parameter Wikipedia retriever. |
| trumancai/Revela-500M | 500M-parameter Wikipedia retriever. |
| trumancai/revela_code_training_corpus | Code training corpus. |
| trumancai/revela_training_corpus | Wikipedia training corpus. |
The trained models can be evaluated with our customized mteb:

```python
import torch
import mteb
from mteb.model_meta import ModelMeta
from mteb.models.repllama_models import RepLLaMAWrapper, _loader

revela_llama_3b = ModelMeta(
    loader=_loader(
        RepLLaMAWrapper,
        base_model_name_or_path="meta-llama/Llama-3.2-3B",
        peft_model_name_or_path="trumancai/Revela-3b",
        device_map="auto",
        torch_dtype=torch.bfloat16,
    ),
    name="trumancai/Revela-3b",
    languages=["eng_Latn"],
    open_source=True,
    revision="2b31c92f23acc46762587ea37cb55032da788561",  # base-peft revision
    release_date="2025-04-13",
)

# Instantiate the wrapped model, then run the retrieval benchmarks.
revela_llama_3b_model = revela_llama_3b.loader()

evaluation = mteb.MTEB(tasks=["SciFact", "NFCorpus"])
evaluation.run(model=revela_llama_3b_model, output_folder="results/Revela-3b")
```
Base model: Qwen/Qwen2.5-0.5B