---
language:
- en
- ru
tags:
- efficientrag
- multi-hop-qa
- token-classification
- sequence-classification
- deberta-v3
license: mit
base_model: microsoft/mdeberta-v3-base
---

# EfficientRAG Labeler (mdeberta-v3-base)

**Labeler** component of [EfficientRAG](https://arxiv.org/abs/2408.04259): a dual-head DeBERTa model for multi-hop retrieval.

## What it does

Given a query and a retrieved chunk, the Labeler answers two questions (a toy example follows the list):
1. **Sequence classification**: is this chunk relevant (`CONTINUE`) or irrelevant (`TERMINATE`)?
2. **Token classification**: which tokens in the chunk are useful for answering?

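For intuition, here is a toy input/output pair; the variable names and token-level view are illustrative, not taken from the released code:

```python
query = "Which country is the director of Titanic from?"
chunk = "Titanic was directed by James Cameron."

# Sequence head: the chunk resolves one hop (the director), so keep iterating
sequence_label = "CONTINUE"

# Token head: 1 = useful for answering the query, 0 = not (illustrative)
token_labels = {"directed": 1, "by": 1, "James": 1, "Cameron": 1, "was": 0}
```
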
## Architecture

- Base: `microsoft/mdeberta-v3-base` (86M backbone parameters, multilingual)
- Custom dual head: `DebertaForSequenceTokenClassification` (a sketch follows the list)
- Token head: binary per-token (useful/useless)
- Sequence head: binary per-chunk (CONTINUE/TERMINATE)

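A minimal sketch of this kind of dual-head model, assuming standard `transformers` building blocks; the actual `DebertaForSequenceTokenClassification` may differ in details such as pooling, dropout, or loss weighting:

```python
import torch.nn as nn
from transformers import AutoModel

class DualHeadLabeler(nn.Module):
    """Sketch: shared mDeBERTa encoder feeding a token head and a sequence head."""

    def __init__(self, base_model="microsoft/mdeberta-v3-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(base_model)
        hidden = self.encoder.config.hidden_size
        self.token_head = nn.Linear(hidden, 2)     # per-token: useful / useless
        self.sequence_head = nn.Linear(hidden, 2)  # per-chunk: CONTINUE / TERMINATE

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state                        # (batch, seq_len, hidden)
        token_logits = self.token_head(hidden_states)
        # DeBERTa exposes no pooler output; use the [CLS] position as the chunk summary
        sequence_logits = self.sequence_head(hidden_states[:, 0])
        return token_logits, sequence_logits
```

Sharing one encoder is the usual motivation for a dual head: both tasks read the same representation, so relevance and token usefulness are learned jointly.
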
## Training

| Setting | Value |
|---------|-------|
| Data | 30,818 samples (HotpotQA EN + Dragon-derec RU) |
| Epochs | 2 |
| Batch size | 4 |
| Learning rate | 5e-6 |
| Max sequence length | 384 tokens |
| Hardware | Apple M3 Pro, ~3.4 hours |
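
Translated into a standard `transformers` fine-tuning setup, the table corresponds roughly to the sketch below; this is a hypothetical reconstruction (the dual-head loss and data collator are not shown), not the published training script:

```python
from transformers import TrainingArguments

# Values taken from the table above; everything else is left at defaults.
training_args = TrainingArguments(
    output_dir="efficientrag-labeler",   # hypothetical output path
    num_train_epochs=2,
    per_device_train_batch_size=4,
    learning_rate=5e-6,
    save_strategy="epoch",
)

# Query/chunk pairs would be tokenized within the 384-token budget, e.g.:
# tokenizer(query, chunk, truncation=True, max_length=384)
```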

## Usage

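Because the dual head is a custom class, a plain `AutoModelFor...` load will not attach both heads. The sketch below makes three assumptions: the repo id `Necent/efficientrag-labeler-mdeberta-v3-base` (inferred from the filter model's naming, not confirmed), that the repo ships its modeling code so `trust_remote_code=True` resolves `DebertaForSequenceTokenClassification`, and that the forward pass returns `(token_logits, sequence_logits)`. Adjust accordingly if you load the class from the EfficientRAG codebase instead.

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo = "Necent/efficientrag-labeler-mdeberta-v3-base"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
# Assumption: custom modeling code ships with the repo; otherwise import
# DebertaForSequenceTokenClassification from the EfficientRAG codebase.
model = AutoModel.from_pretrained(repo, trust_remote_code=True)
model.eval()

query = "Which country is the director of Titanic from?"
chunk = "Titanic was directed by James Cameron."
inputs = tokenizer(query, chunk, truncation=True, max_length=384, return_tensors="pt")

with torch.no_grad():
    token_logits, sequence_logits = model(**inputs)  # assumed return order

# Sequence head: CONTINUE vs TERMINATE (index-to-label mapping is an assumption)
decision = "CONTINUE" if sequence_logits.argmax(-1).item() == 1 else "TERMINATE"

# Token head: keep the tokens predicted as useful
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
useful = [t for t, keep in zip(tokens, token_logits.argmax(-1)[0]) if keep]
print(decision, useful)
```
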
## Results on DRAGON benchmark

| Metric | Baseline | EfficientRAG | Delta |
|--------|----------|--------------|-------|
| MRR (multi-hop) | 0.736 | 0.798 | **+0.062** |
| MRR (overall) | 0.783 | 0.822 | **+0.039** |
| Precision | 0.187 | 0.582 | **+0.395** |

## Related

- Training data: [Necent/efficientrag-labeler-training-data](https://huggingface.co/datasets/Necent/efficientrag-labeler-training-data)
- Filter model: [Necent/efficientrag-filter-mdeberta-v3-base](https://huggingface.co/Necent/efficientrag-filter-mdeberta-v3-base)
- Paper: [EfficientRAG (arXiv:2408.04259)](https://arxiv.org/abs/2408.04259)