---
license: apache-2.0
language:
- en
- zh
size_categories:
- 100M<n<1B
task_categories:
- text-generation
pretty_name: Ring-lite-rl-data
tags:
- math
- code
---
15
+
16
+
17
+ <p align="center">
18
+ <img src="https://huggingface.co/inclusionAI/Ling-lite/resolve/main/ant-bailing.png" width="100"/>
19
+ <p>
20
+
21
+ <p align="center">
22
+ 🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
23
+ 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>
24
+ 🖥️ <a href="https://github.com/inclusionAI/Ring">GitHub</a>
25
+ <p>
26
+
# Ring-lite-rl-data

This dataset is a curated subset of high-quality problems across the mathematics and code domains, designed for reinforcement learning of the [Ring-lite](https://huggingface.co/inclusionAI/Ring-lite) model. It contains:

* **Mathematics**: Over 39,000 rigorously curated problems sourced from:
  - Open-source datasets (BigMath, DeepScaleR, DAPO, DeepMath-103K)
  - Art of Problem Solving (AoPS) contest collections
* **Code**: Approximately 8,400 verified coding problems from:
  - Programming competition resources (CodeContest, TACO, APPS)
  - The QOJ online judge platform
  - All problems include validated "Accepted" solutions and test cases

## Dataset Construction

### Data Sources
- **Mathematics**: Problems collected from open-source datasets, filtered through strict quality control
- **Code**: Problems from open-source programming competition resources with verified solutions

### Curation Pipeline
Our data undergoes a rigorous three-stage curation process:

1. **Data Cleansing**:
   - Removal of problems with invalid characters, images, or multiple subquestions
   - Strict character-based and semantic-based deduplication
   - Exclusion of easily guessable problems (multiple-choice, True/False questions)

2. **Answer Verification**:
   - LLM-based verification using models of different sizes
   - Human expert annotation
   - Problems failing verification are excluded

3. **Data Annotation**:
   - Multi-dimensional labeling (source, educational level, domain knowledge)
   - Mathematical Subject Classification (MSC) for math problems
   - Model-aware difficulty assessment

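The character-based deduplication in the cleansing stage can be pictured as a normalize-then-hash pass. The sketch below is a simplified illustration only (the actual pipeline also applies semantic deduplication, and `normalize`/`dedupe` are hypothetical helper names, not part of any released tooling):

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially different copies hash alike."""
    return re.sub(r"\s+", " ", text.strip().lower())

def dedupe(problems: list[str]) -> list[str]:
    """Keep only the first occurrence of each normalized problem statement."""
    seen: set[str] = set()
    unique: list[str] = []
    for problem in problems:
        digest = hashlib.sha256(normalize(problem).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(problem)
    return unique
```

Hashing the normalized form keeps memory bounded by the digest size rather than the full problem text.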
## Dataset Fields

The dataset contains the following fields for each domain:

### Mathematics
- **context**: The problem statement
- **groundtruth**: Verified correct answer
- **type**: Problem category
- **mid**: Unique problem ID

### Code
- **context**: Detailed programming problem description
- **groundtruth**: Verified correct Python solution code
- **groundtruth_language**: Implementation language
- **type**: Problem category
- **code_test_cases**: List of validated test cases with:
  - **input**: Test input
  - **output**: Expected output
- **dataset**: Source dataset
- **code_language**: Programming language
- **difficulty**: Problem difficulty score
- **mid**: Unique problem ID

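The `code_test_cases` field makes it straightforward to check a candidate program by replaying each input/output pair over stdin/stdout. A minimal harness is sketched below; `passes_tests` is a hypothetical helper (not part of the dataset tooling), and a real RL reward setup would add sandboxing and stricter resource limits:

```python
import subprocess
import sys

def passes_tests(solution_code: str, test_cases: list[dict]) -> bool:
    """Run a Python solution against stdin/stdout test cases shaped like `code_test_cases`."""
    for case in test_cases:
        result = subprocess.run(
            [sys.executable, "-c", solution_code],  # execute the candidate program
            input=case["input"],                    # feed the test input on stdin
            capture_output=True,
            text=True,
            timeout=10,                             # guard against non-terminating code
        )
        # Compare trimmed stdout to the expected output; any mismatch fails the problem.
        if result.returncode != 0 or result.stdout.strip() != case["output"].strip():
            return False
    return True
```

For example, `passes_tests("print(int(input()) * 2)", [{"input": "3\n", "output": "6\n"}])` returns `True`.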
## Citation Information
**Please consider citing our technical report [Ring-lite](https://arxiv.org/abs/2506.14731) if you use this dataset:**

```
@misc{ringteam2025ringlitescalablereasoningc3postabilized,
      title={Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs},
      author={Ling Team and Bin Hu and Cai Chen and Deng Zhao and Ding Liu and Dingnan Jin and Feng Zhu and Hao Dai and Hongzhi Luan and Jia Guo and Jiaming Liu and Jiewei Wu and Jun Mei and Jun Zhou and Junbo Zhao and Junwu Xiong and Kaihong Zhang and Kuan Xu and Lei Liang and Liang Jiang and Liangcheng Fu and Longfei Zheng and Qiang Gao and Qing Cui and Quan Wan and Shaomian Zheng and Shuaicheng Li and Tongkai Yang and Wang Ren and Xiaodong Yan and Xiaopei Wan and Xiaoyun Feng and Xin Zhao and Xinxing Yang and Xinyu Kong and Xuemin Yang and Yang Li and Yingting Wu and Yongkang Liu and Zhankai Xu and Zhenduo Zhang and Zhenglei Zhou and Zhenyu Huang and Zhiqiang Zhang and Zihao Wang and Zujie Wen},
      year={2025},
      eprint={2506.14731},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.14731},
}
```

## Intended Usage

This dataset is designed for:
- Training and evaluating LLMs on multi-domain reasoning tasks
- Reinforcement learning applications
- Benchmarking model performance across the mathematics and code domains

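For the reinforcement learning use case, the verified `groundtruth` answers support simple rule-based rewards. The sketch below handles only exact numeric or string matches (`math_reward` is a hypothetical function; as described above, the dataset's own verification relies on LLM-based and human checks for harder equivalences):

```python
from fractions import Fraction

def math_reward(prediction: str, groundtruth: str) -> float:
    """Binary reward: 1.0 if the predicted answer matches the verified answer."""
    def canon(answer: str):
        answer = answer.strip().rstrip(".")
        try:
            # Compare numeric answers by exact value, so "1/2" matches "0.5".
            return Fraction(answer)
        except ValueError:
            # Fall back to case-insensitive string comparison.
            return answer.lower()
    return 1.0 if canon(prediction) == canon(groundtruth) else 0.0
```

A binary exact-match reward like this is the usual starting point for verifiable-answer RL; partial-credit schemes require a stronger equivalence checker.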
## Release Date
06/19/2025

## Data Version
1.0