alphaXiv committed on
Commit ea041b8 · verified · 1 Parent(s): 2ab6fbe

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +77 -53

README.md CHANGED
@@ -1,55 +1,79 @@
  ---
- dataset_info:
-   features:
-   - name: data_source
-     dtype: string
-   - name: prompt
-     list:
-     - name: content
-       dtype: string
-     - name: role
-       dtype: string
-   - name: ability
-     dtype: string
-   - name: reward_model
-     struct:
-     - name: ground_truth
-       struct:
-       - name: numbers
-         list: int64
-       - name: target
-         dtype: float64
-     - name: style
-       dtype: string
-   - name: extra_info
-     struct:
-     - name: index
-       dtype: int64
-     - name: numbers
-       list: int64
-     - name: solution
-       dtype: string
-     - name: target
-       dtype: float64
-   splits:
-   - name: train
-     num_bytes: 132408
-     num_examples: 200
-   - name: validation
-     num_bytes: 1191445
-     num_examples: 1800
-   - name: test
-     num_bytes: 132597
-     num_examples: 200
-   download_size: 1395212
-   dataset_size: 1456450
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: validation
-     path: data/validation-*
-   - split: test
-     path: data/test-*
  ---
  ---
+ language:
+ - en
+ license: mit
+ task_categories:
+ - text-generation
+ - question-answering
+ tags:
+ - math
+ - reasoning
+ - gsm8k
+ - countdown
+ - evolution-strategies
+ - reinforcement-learning
+ size_categories:
+ - 1K<n<10K
  ---
+
+ # COUNTDOWN Dataset for ES vs GRPO Comparison
+
+ ## Dataset Description
+
+ This dataset contains prepared splits of the COUNTDOWN task for comparing Evolution Strategies (ES) with Group Relative Policy Optimization (GRPO) as methods for LLM fine-tuning.
+
+ ### Dataset Statistics
+
+ - **Training split:** 10.0% of available data
+ - **Training samples:** 200
+ - **Validation samples:** 1,800
+ - **Test samples:** 200 (reserved for final evaluation)
+
+ ### Data Format
+
+ Each example contains:
+ - `data_source`: Identifier for the source task
+ - `prompt`: The input prompt as a list of `{content, role}` chat messages
+ - `ability`: The skill category of the example
+ - `reward_model`: Reward specification with `ground_truth` (`numbers`, `target`) and a `style` string
+ - `extra_info`: Additional task-specific fields (`index`, `numbers`, `solution`, `target`)
+
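Concretely, a single record shaped like the schema in the YAML header might look as follows. This is a sketch with invented values, not an actual row from the dataset:

```python
# Illustrative record following the declared features (values are invented).
example = {
    "data_source": "countdown",
    "prompt": [
        {"role": "user",
         "content": "Using the numbers [2, 3, 7], write an expression that equals 42."},
    ],
    "ability": "math",
    "reward_model": {
        "ground_truth": {"numbers": [2, 3, 7], "target": 42.0},
        "style": "rule",
    },
    "extra_info": {
        "index": 0,
        "numbers": [2, 3, 7],
        "solution": "2 * 3 * 7",
        "target": 42.0,
    },
}

# A training loop typically needs the prompt text and the ground truth.
prompt_text = example["prompt"][0]["content"]
ground_truth = example["reward_model"]["ground_truth"]
```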
+ ### Usage
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset (replace with the actual repository ID)
+ dataset = load_dataset("your-username/countdown-es-grpo")
+
+ # Access splits
+ train_data = dataset["train"]
+ val_data = dataset["validation"]
+ test_data = dataset["test"]
+
+ # Inspect the first training example
+ print(train_data[0])
+ ```
+
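Because the `reward_model` entries are rule-based (a `style` string plus a `ground_truth` of `numbers` and `target`), scoring a model's COUNTDOWN answer amounts to evaluating its arithmetic expression and comparing the result to the target. The checker below is a minimal hypothetical sketch, not the official scorer; for brevity it does not enforce that each provided number is used exactly once:

```python
import ast
import operator

# Allowed binary operations for a COUNTDOWN expression.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Safely evaluate a parsed arithmetic expression (numbers and + - * / only)."""
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("disallowed expression")

def countdown_reward(expr: str, target: float, tol: float = 1e-6) -> float:
    """Return 1.0 if expr evaluates to target (within tol), else 0.0."""
    try:
        value = _eval(ast.parse(expr, mode="eval"))
    except (ValueError, SyntaxError, ZeroDivisionError):
        return 0.0
    return 1.0 if abs(value - target) <= tol else 0.0
```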
+ ### Citation
+
+ If you use this dataset, please cite:
+
+ ```bibtex
+ @misc{qiu2025evolutionstrategiesscalellm,
+   title={Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning},
+   author={Xin Qiu and Yulu Gan and Conor F. Hayes and Qiyao Liang and Elliot Meyerson and Babak Hodjat and Risto Miikkulainen},
+   year={2025},
+   eprint={2509.24372},
+   archivePrefix={arXiv},
+   primaryClass={cs.LG},
+   url={https://arxiv.org/abs/2509.24372},
+ }
+ ```
+
+ ### Source
+
+ - **Repository:** https://github.com/alphaXiv/paper-implementations
+ - **Paper:** https://arxiv.org/abs/2509.24372
+
+ ### License
+
+ MIT License