---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- rag
---


<div align="center">
  <b style="font-size: 40px;">Ext2Gen-8B-R2</b>
</div>

Note: We are still working on this.

Are you looking for a more robust and reliable generation model for your RAG system?

Ext2Gen-8B-R2 effectively mitigates hallucinations caused by retrieval noise and information overload.

See the details in our paper: [Link](https://arxiv.org/pdf/2503.04789).

### What is Ext2Gen-8B-R2?
Ext2Gen-8B-R2 is built upon Llama-3.1-8B-Instruct and incorporates preference-aligned fine-tuning through pairwise feedback learning.

This training strategy enables the model to:
- Extract highly relevant sentences from retrieved chunks before generating an answer.
- Filter out irrelevant or misleading information, reducing hallucinations.
- Align generation with human preferences by optimizing for faithfulness, completeness, and conciseness.

### Why does Ext2Gen-8B-R2 outperform standard RAG models?
Standard RAG models often struggle due to:
- Uncertain Placement – Relevant information may appear in unpredictable locations within retrieved chunks, making it difficult for LLMs to utilize it effectively.
- Information Overload – The presence of irrelevant chunks can distract the model, leading to errors or hallucinations.
- Lack of Alignment – Most generation models are not explicitly trained to prioritize relevant content over noise.

### Prompt

- query: the query to answer
- chunk_list: the list of retrieved chunks, e.g., ["chunk 1", "chunk 2", "chunk 3"]

```python
def format_prompt_template(query, chunk_list):
    """Build the Ext2Gen prompt from a query and a list of retrieved chunks."""

    # Prefix each chunk with its 1-based chunk ID so the model can reference chunks explicitly,
    # then join all chunks into a single text block.
    chunk_list = ['[Chunk ID: ' + str(idx + 1) + '] ' + chunk_text for idx, chunk_text in enumerate(chunk_list)]
    chunk_list = '\n\n'.join(chunk_list)

    prompt = '''
You are an expert assistant trained to extract essential sentences from document chunks and generate answers based on the extracted sentences.
Your task is twofold:
- Extraction: Identify sentences that contribute to constructing a precise and accurate response to the given query.
- Generation: Formulate a concise and coherent answer based on the extracted sentences.


### Extraction Instruction:
- A query will be provided for you to answer.
- Extract only the sentences that contribute to forming an answer to the query. 
- Ensure that the extracted sentences are sufficient to derive a correct and complete answer.
- If no relevant sentences are found in the provided chunks, return an empty list.


### Generation Instruction:
- Use the extracted sentences to generate a well-formed answer to the query. 
- If no sentences are extracted, return "No Answer".


### Output Example:
Extracted Sentences:
- Sentence 1
- Sentence 2

Answer: Your Answer


### Query: 
%s


### Chunk List:
%s


### Output:
''' % (query, chunk_list)
    
    return prompt.strip()

```
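Below is a minimal inference sketch using the `transformers` text-generation pipeline. The repository id (`<org>/Ext2Gen-8B-R2`), the example query and chunks, and the generation settings are placeholders and assumptions, not part of this model card; adjust them to your setup.

```python
from transformers import pipeline

# Hypothetical repository id -- replace <org> with the actual namespace hosting this model.
generator = pipeline(
    "text-generation",
    model="<org>/Ext2Gen-8B-R2",
    torch_dtype="auto",
    device_map="auto",
)

query = "Who wrote the novel Frankenstein?"
chunk_list = [
    "Frankenstein; or, The Modern Prometheus was written by Mary Shelley.",
    "The novel was first published anonymously in London in 1818.",
    "Bram Stoker is best known for his 1897 Gothic novel Dracula.",  # distractor chunk
]

prompt = format_prompt_template(query, chunk_list)

# Illustrative generation settings, not recommendations from the paper.
output = generator(prompt, max_new_tokens=256, do_sample=False, return_full_text=False)
print(output[0]["generated_text"])
```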


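The prompt asks the model to emit an "Extracted Sentences:" bullet list followed by an "Answer:" line. A small helper like the following (not part of the released code, just a sketch of one way to split that format) can separate the two parts:

```python
def parse_ext2gen_output(text):
    """Split the model output into (extracted_sentences, answer).

    Assumes the output follows the format requested in the prompt:
    an 'Extracted Sentences:' bullet list followed by an 'Answer:' line.
    """
    sentences, answer = [], "No Answer"
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("- "):
            sentences.append(line[2:].strip())
        elif line.startswith("Answer:"):
            answer = line[len("Answer:"):].strip()
    return sentences, answer
```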

### Performance Benchmark
Our evaluations demonstrate that Ext2Gen-8B-R2 significantly enhances robustness in RAG systems:
* We conduct a QA task with RAG systems on the NQ, MS-MARCO, and HotpotQA datasets.
* The only difference is the generation backbone: Llama3.1-8B-Instruct vs. Ext2Gen-8B-R2.

See the results in the Figure below:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63c9da8d5fdc575773c84816/4mbreGv3QNxKOY8HzCLxx.png)