---
language:
- en
- ru
tags:
- efficientrag
- multi-hop-qa
- token-classification
- sequence-classification
- deberta-v3
license: mit
base_model: microsoft/mdeberta-v3-base
---

# EfficientRAG Labeler (mdeberta-v3-base)

**Labeler** component of [EfficientRAG](https://arxiv.org/abs/2408.04259) — a dual-head mDeBERTa model for multi-hop retrieval.

## What it does

Given a query and a retrieved chunk, the Labeler:
1. **Sequence classification**: Is this chunk relevant (`CONTINUE`) or irrelevant (`TERMINATE`)?
2. **Token classification**: Which tokens in the chunk are useful for answering?

## Architecture

- Base: `microsoft/mdeberta-v3-base` (86M params, multilingual)
- Custom dual head: `DebertaForSequenceTokenClassification`
  - Token head: binary per-token (useful/useless)
  - Sequence head: binary per-chunk (CONTINUE/TERMINATE)
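The dual-head layout above can be sketched as two linear classifiers over the encoder's hidden states: one applied per token, one applied to the `[CLS]` position. This is an illustrative stand-in (the class name `DualHead` and the stub input are hypothetical); the real `DebertaForSequenceTokenClassification` wraps the full mDeBERTa encoder in the EfficientRAG codebase.

```python
import torch
import torch.nn as nn

class DualHead(nn.Module):
    """Hypothetical sketch of the Labeler's two classification heads."""

    def __init__(self, hidden_size: int, num_labels: int = 2):
        super().__init__()
        # Per-token head: useful / useless
        self.token_head = nn.Linear(hidden_size, num_labels)
        # Per-chunk head: CONTINUE / TERMINATE
        self.seq_head = nn.Linear(hidden_size, num_labels)

    def forward(self, hidden_states: torch.Tensor):
        token_logits = self.token_head(hidden_states)    # [batch, seq_len, 2]
        seq_logits = self.seq_head(hidden_states[:, 0])  # [CLS] pooling -> [batch, 2]
        return token_logits, seq_logits

head = DualHead(hidden_size=768)
# Stand-in for encoder output at the model's max length of 384
hidden = torch.randn(1, 384, 768)
tok, seq = head(hidden)
print(tok.shape, seq.shape)  # torch.Size([1, 384, 2]) torch.Size([1, 2])
```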

## Training

| Setting | Value |
|---------|-------|
| Data | 30,818 samples (HotpotQA EN + Dragon-derec RU) |
| Epochs | 2 |
| Batch size | 4 |
| Learning rate | 5e-6 |
| Max length | 384 tokens |
| Hardware | Apple M3 Pro (~3.4 h training time) |

## Usage
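Loading the checkpoint requires the custom `DebertaForSequenceTokenClassification` class from the EfficientRAG codebase, so here is only a minimal, dependency-free sketch of how the two heads' outputs would be decoded. The function and variable names are illustrative, and the label order (index 0 = TERMINATE/useless, index 1 = CONTINUE/useful) is an assumption:

```python
def decode_labeler(token_logits, seq_logits, tokens):
    """Turn the Labeler's two outputs into a decision and a token subset.

    token_logits: per-token (useless, useful) scores
    seq_logits:   per-chunk (TERMINATE, CONTINUE) scores
    tokens:       the chunk's tokens, aligned with token_logits
    NOTE: the label index order here is an assumption, not taken
    from the EfficientRAG code.
    """
    decision = "CONTINUE" if seq_logits[1] > seq_logits[0] else "TERMINATE"
    useful = [t for t, (useless, useful_s) in zip(tokens, token_logits)
              if useful_s > useless]
    return decision, useful

# Toy example with hand-written scores standing in for model outputs
tokens = ["The", "Eiffel", "Tower", "is", "in", "Paris"]
token_logits = [(0.9, 0.1), (0.2, 0.8), (0.1, 0.9),
                (0.8, 0.2), (0.7, 0.3), (0.1, 0.9)]
seq_logits = (0.3, 0.7)

decision, useful = decode_labeler(token_logits, seq_logits, tokens)
print(decision, useful)  # CONTINUE ['Eiffel', 'Tower', 'Paris']
```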



## Results on DRAGON benchmark

| Metric | Baseline | EfficientRAG | Delta |
|--------|----------|-------------|-------|
| MRR (multi-hop) | 0.736 | 0.798 | **+0.062** |
| MRR (overall) | 0.783 | 0.822 | **+0.040** |
| Precision | 0.187 | 0.582 | **+0.395** |

## Related

- Training data: [Necent/efficientrag-labeler-training-data](https://huggingface.co/datasets/Necent/efficientrag-labeler-training-data)
- Filter model: [Necent/efficientrag-filter-mdeberta-v3-base](https://huggingface.co/Necent/efficientrag-filter-mdeberta-v3-base)
- Paper: [EfficientRAG (arXiv:2408.04259)](https://arxiv.org/abs/2408.04259)