---
task_categories:
- question-answering
language:
- en
---
This repo contains the raw data for the paper [**From Atomic to Composite: Reinforcement Learning Enables Generalization in Complementary Reasoning**](https://arxiv.org/pdf/2512.01970): human biographies generated from a synthetic knowledge graph.
The code used to generate the data is available at [https://github.com/sitaocheng/from_atomic_to_composite](https://github.com/sitaocheng/from_atomic_to_composite).
The data can be used directly with frameworks such as [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) or [VeRL](https://github.com/volcengine/verl).
We open-source training and testing data for parametric, contextual, and complementary reasoning.
The training data follows the SFT full fine-tuning format for LLaMA-Factory. It contains the parametric knowledge and QA pairs for each reasoning type. For reinforcement learning (RL), we use only the QA pairs. To use VeRL, please adapt the data preprocessing script to prepare the data.
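As a concrete illustration of that preprocessing step, the sketch below converts one QA record into a VeRL-style row with a rule-based reward, following the layout used in VeRL's bundled GSM8K example. The `data_source` name and the exact field layout are assumptions here, so check them against the repo's actual preprocessing script and your VeRL config.

```python
import json


def to_verl_record(item: dict) -> dict:
    """Convert one SFT-style QA record into a VeRL-style row.

    Layout follows VeRL's rule-based-reward examples; the exact fields
    your training config expects may differ (assumption).
    """
    return {
        "data_source": "complementary_reasoning",  # illustrative name, not from the repo
        "prompt": [
            {"role": "user", "content": f'{item["instruction"]}\n{item["question"]}'}
        ],
        "reward_model": {"style": "rule", "ground_truth": item["answer"]},
        "extra_info": {"gen_type": item.get("gen_type", "")},
    }


sample = {
    "question": "Can you tell me the phone number of the boss of Justin Martinez's roommate?",
    "answer": "6548113713",
    "instruction": "Answer the following question.",
    "gen_type": "composition",
}

row = to_verl_record(sample)
print(json.dumps(row["reward_model"]))  # → {"style": "rule", "ground_truth": "6548113713"}
```

A list of such rows can then be written to parquet (e.g., via `pandas.DataFrame(rows).to_parquet(...)`) as VeRL's example scripts do.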
Example:

```
{
  "question": "Can you tell me the phone number of the boss of Justin Martinez's roommate?",
  "answer": "6548113713",
  "answer_cot": "Justin Martinez shared a room with Carl Jarvis. Frederick Juarez is the boss of Carl Jarvis. Frederick Juarez's phone number is 6548113713. So, the answer is: 6548113713",
  "instruction": "Answer the following question.",
  "gen_type": "composition"
}
```
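As a quick sanity check on this record format, one can verify that the final answer embedded in `answer_cot` (after the "So, the answer is:" marker) matches the `answer` field. The record below is copied from the example above; the marker-based parsing is an assumption about how all records are phrased.

```python
record = {
    "question": "Can you tell me the phone number of the boss of Justin Martinez's roommate?",
    "answer": "6548113713",
    "answer_cot": "Justin Martinez shared a room with Carl Jarvis. Frederick Juarez is the boss of Carl Jarvis. Frederick Juarez's phone number is 6548113713. So, the answer is: 6548113713",
    "instruction": "Answer the following question.",
    "gen_type": "composition",
}

# Extract the text after the last occurrence of the marker.
final = record["answer_cot"].rsplit("So, the answer is:", 1)[-1].strip()
assert final == record["answer"]
print(final)  # → 6548113713
```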
The testing data is split into three levels of generalization difficulty (i.e., I.I.D., composition, and zero-shot generalization). The training data is sufficient to achieve excellent I.I.D. performance.