Mizan-Rerank-v2

A high-performance open-source cross-encoder model for reranking long Arabic texts, fine-tuned from Alibaba-NLP/gte-multilingual-reranker-base with state-of-the-art results on Arabic long-document reranking benchmarks.

Overview

Mizan-Rerank-v2 is a cross-encoder reranking model based on Alibaba-NLP/gte-multilingual-reranker-base, specifically fine-tuned for Arabic text reranking. It excels at reranking long documents (up to 8192 tokens) and outperforms both its base model and larger competitors on long-document Arabic reranking benchmarks.

Key Features

  • Long Document Support: Handles up to 8192 tokens using RoPE position embeddings with NTK scaling
  • Superior Arabic Performance: Outperforms BAAI/bge-reranker-v2-m3 (568M) on long-document (MIRACL) and medical (MedQA) benchmarks despite having nearly half the parameters
  • Arabic Language Optimization: Fine-tuned on 1.2M+ Arabic query-document pairs from diverse sources

Performance Benchmarks

Reranker Benchmark Comparison

Reranking Evaluation (nDCG@10)

Model                                        Parameters  Reranking  Triplet  MIRACL (Long Docs)  WikiQA  MedQA
Mizan-Rerank-v2                              305M        1.0000     0.9993   0.8091              0.8258  0.6775
BAAI/bge-reranker-v2-m3                      568M        1.0000     0.9998   0.7231              0.8669  0.6584
Alibaba-NLP/gte-multilingual-reranker-base   305M        1.0000     0.9991   0.7539              0.8275  0.6648
ALJIACHI/Mizan-Rerank-v1                     149M        0.9986     0.9955   0.7370              0.7739  0.5502
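
For reference, nDCG@10 measures how closely the model's top-10 ordering matches the ideal ordering. Using the standard definition with graded relevance rel_i at rank i:

$$
\mathrm{DCG@10} = \sum_{i=1}^{10} \frac{2^{\mathrm{rel}_i} - 1}{\log_2(i + 1)},
\qquad
\mathrm{nDCG@10} = \frac{\mathrm{DCG@10}}{\mathrm{IDCG@10}}
$$

where IDCG@10 is the DCG of the ideal ranking, so scores lie in [0, 1] and 1.0 indicates a perfect ranking.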

Comparison with the Base Model

Benchmark            Base Model  Mizan-Rerank-v2  Change
Reranking            1.0000      1.0000           --
Triplet              0.9991      0.9993           +0.0002
MIRACL (Long Docs)   0.7539      0.8091           +0.0552
WikiQA               0.8275      0.8258           -0.0017
MedQA                0.6648      0.6775           +0.0127

Comparison with BAAI/bge-reranker-v2-m3

Benchmark            bge-reranker-v2-m3  Mizan-Rerank-v2  Change
Reranking            1.0000              1.0000           --
Triplet              0.9998              0.9993           -0.0005
MIRACL (Long Docs)   0.7231              0.8091           +0.0860
WikiQA               0.8669              0.8258           -0.0411
MedQA                0.6584              0.6775           +0.0191

Model Details

  • Model Type: Cross Encoder
  • Base Model: Alibaba-NLP/gte-multilingual-reranker-base
  • Architecture: NewForSequenceClassification (12 layers, 768 hidden, 12 heads)
  • Maximum Sequence Length: 8192 tokens
  • Position Embeddings: RoPE with NTK scaling (factor 8.0; see the config check after this list)
  • Number of Output Labels: 1
  • Language: Arabic (ar), English (en)
  • License: Apache 2.0
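
You can confirm the long-context settings above when loading the model. A minimal sketch; the attribute names follow standard Hugging Face config conventions, and rope_scaling may not be exposed on every revision, hence the getattr default:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("ALJIACHI/Mizan-Rerank-v2", trust_remote_code=True)
print(config.max_position_embeddings)         # expected: 8192
print(getattr(config, "rope_scaling", None))  # NTK scaling settings, if exposed in the config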

Usage

Using Sentence Transformers

pip install -U sentence-transformers

from sentence_transformers import CrossEncoder

# Load model
model = CrossEncoder("ALJIACHI/Mizan-Rerank-v2", max_length=8192, trust_remote_code=True)

# Score query-document pairs
pairs = [
    ["ما هو تفسير الآية وجعلنا من الماء كل شيء حي",
     "تعني الآية أن الماء هو عنصر أساسي في حياة جميع الكائنات الحية، وهو ضروري لاستمرار الحياة."],
    ["ما هو تفسير الآية وجعلنا من الماء كل شيء حي",
     "تم اكتشاف كواكب خارج المجموعة الشمسية تحتوي على مياه متجمدة."],
    ["ما هو تفسير الآية وجعلنا من الماء كل شيء حي",
     "تحدث القرآن الكريم عن البرق والرعد في عدة مواضع مختلفة."],
]

scores = model.predict(pairs)
print(scores)
# High score for the relevant passage, low scores for irrelevant ones

# Or rank documents for a query
ranks = model.rank(
    "ما هو تفسير الآية وجعلنا من الماء كل شيء حي",
    [
        "تعني الآية أن الماء هو عنصر أساسي في حياة جميع الكائنات الحية، وهو ضروري لاستمرار الحياة.",
        "تم اكتشاف كواكب خارج المجموعة الشمسية تحتوي على مياه متجمدة.",
        "تحدث القرآن الكريم عن البرق والرعد في عدة مواضع مختلفة.",
    ]
)
print(ranks)
# Results are sorted by score, descending: [{'corpus_id': 0, 'score': ...}, {'corpus_id': 1, 'score': ...}, ...]
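
Inputs longer than 8192 tokens are truncated. For corpora with even longer documents, a common workaround is to split each document into overlapping chunks, score every chunk against the query, and keep each document's best chunk score. A minimal MaxP-style sketch, not part of the model card; the chunk sizes are illustrative and character-based splitting is a simplification:

# Hedged sketch: chunk documents that exceed the 8192-token window,
# score each chunk, and use the best chunk score per document.
def rerank_long_docs(model, query, documents, chunk_chars=12000, overlap_chars=1000):
    results = []
    step = chunk_chars - overlap_chars
    for doc_id, doc in enumerate(documents):
        # Character-based chunking keeps the sketch simple; token-based
        # chunking with the model's tokenizer is more precise.
        chunks = [doc[i:i + chunk_chars] for i in range(0, len(doc), step)] or [doc]
        scores = model.predict([[query, chunk] for chunk in chunks])
        results.append({"corpus_id": doc_id, "score": float(max(scores))})
    return sorted(results, key=lambda r: r["score"], reverse=True)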

Using Transformers Directly

from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Use fp16 on GPU; fall back to fp32 on CPU, where half precision is poorly supported.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForSequenceClassification.from_pretrained(
    "ALJIACHI/Mizan-Rerank-v2",
    trust_remote_code=True,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
model.eval()
tokenizer = AutoTokenizer.from_pretrained("ALJIACHI/Mizan-Rerank-v2")

def get_relevance_score(query, passage):
    inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=8192).to(device)
    with torch.no_grad():
        outputs = model(**inputs)
    return torch.sigmoid(outputs.logits).item()

query = "ما هي فوائد فيتامين د؟"
passages = [
    "يساعد فيتامين د في تعزيز صحة العظام وتقوية الجهاز المناعي، كما يلعب دوراً مهماً في امتصاص الكالسيوم.",
    "يستخدم فيتامين د في بعض الصناعات الغذائية كمادة حافظة.",
    "أطلقت وزارة الزراعة حملة وطنية لزيادة الوعي بأهمية الزراعة العضوية.",
]

scores = [(p, get_relevance_score(query, p)) for p in passages]
reranked = sorted(scores, key=lambda x: x[1], reverse=True)

for passage, score in reranked:
    print(f"Score: {score:.4f} | {passage[:80]}...")

Training Details

Training Data

The model was trained on 1,199,634 query-document pairs drawn from diverse Arabic sources.

Training Configuration

Parameter                    Value
Base Model                   Alibaba-NLP/gte-multilingual-reranker-base
Max Sequence Length          8192
Batch Size                   2
Gradient Accumulation Steps  16
Effective Batch Size         32
Learning Rate                5e-7
LR Scheduler                 Cosine
Warmup Ratio                 0.1
Precision                    FP16
Gradient Checkpointing       Enabled
Loss Function                BinaryCrossEntropyLoss (pos_weight=1.24)
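
The configuration above maps closely onto the sentence-transformers CrossEncoder training API (v4+). A hedged sketch of how a comparable run could be wired up; the dataset name and its column layout are placeholders, not the actual training data:

import torch
from datasets import load_dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder, CrossEncoderTrainer, CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("Alibaba-NLP/gte-multilingual-reranker-base", max_length=8192, trust_remote_code=True)

# Hypothetical dataset with (query, passage, label) columns; not the real training data.
train_dataset = load_dataset("your-org/arabic-query-passage-pairs", split="train")

loss = BinaryCrossEntropyLoss(model, pos_weight=torch.tensor(1.24))

args = CrossEncoderTrainingArguments(
    output_dir="mizan-rerank-v2",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,  # effective batch size 32
    learning_rate=5e-7,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    gradient_checkpointing=True,
)

trainer = CrossEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()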

Applications

  • Arabic search engines and information retrieval systems
  • RAG (Retrieval-Augmented Generation) pipelines (see the retrieve-then-rerank sketch after this list)
  • Islamic text search and jurisprudence Q&A
  • Digital library and archive search
  • Long-document Arabic content analysis
  • E-learning platforms with Arabic content
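
In RAG pipelines, the reranker typically sits behind a fast first-stage retriever. A minimal retrieve-then-rerank sketch; the bi-encoder retriever model is an illustrative choice, not part of this card:

from sentence_transformers import CrossEncoder, SentenceTransformer

# Illustrative retriever choice; any multilingual bi-encoder works here.
retriever = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
reranker = CrossEncoder("ALJIACHI/Mizan-Rerank-v2", max_length=8192, trust_remote_code=True)

def retrieve_then_rerank(query, corpus, top_k=20, final_k=5):
    # Stage 1: cheap dense retrieval over the full corpus.
    doc_emb = retriever.encode(corpus, convert_to_tensor=True, normalize_embeddings=True)
    query_emb = retriever.encode(query, convert_to_tensor=True, normalize_embeddings=True)
    hits = (doc_emb @ query_emb).topk(min(top_k, len(corpus)))
    candidates = [corpus[i] for i in hits.indices.tolist()]
    # Stage 2: precise cross-encoder reranking of the shortlist.
    ranks = reranker.rank(query, candidates)
    return [(candidates[r["corpus_id"]], float(r["score"])) for r in ranks[:final_k]]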

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 5.4.1
  • Transformers: 4.55.4
  • PyTorch: 2.8.0+cu126
  • Accelerate: 1.10.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.0

Citation

@software{Mizan_Rerank_v2_2026,
  author    = {Ali Aljiachi},
  title     = {Mizan-Rerank-v2: Arabic Long-Context Text Reranking Model},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/ALJIACHI/Mizan-Rerank-v2}
}

@inproceedings{reimers-2019-sentence-bert,
  title     = {Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
  author    = {Reimers, Nils and Gurevych, Iryna},
  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing},
  month     = {11},
  year      = {2019},
  publisher = {Association for Computational Linguistics},
  url       = {https://arxiv.org/abs/1908.10084}
}

License

Released under the Apache 2.0 License.
