# CodeBench-HumanEval-Curated

A curated subset of code evaluation problems designed for assessing the coding capabilities of language models.

## Dataset Description

This dataset contains carefully selected programming problems from the original HumanEval benchmark created by OpenAI, with quality filtering applied and additional test cases added.

### Source Data

The problems in this dataset are derived from [OpenAI HumanEval](https://github.com/openai/human-eval), a benchmark for evaluating the code generation capabilities of language models.

### Data Processing

- Selected the top 100 problems by discriminative power
- Added edge-case test coverage
- Standardized docstring formats

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("toolevalxm/CodeBench-HumanEval-Curated")
```

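Once loaded, a completion can be checked against a problem's tests by executing the prompt, the completion, and the test suite together. The sketch below is self-contained and uses a toy problem; it assumes records follow the original HumanEval schema (`prompt`, `test`, `entry_point`), which is not guaranteed by this card.

```python
# Minimal sketch of scoring one completion against one problem.
# The record layout (prompt/test/entry_point) mirrors the original
# HumanEval schema and is an assumption about this dataset.
toy_problem = {
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "test": "def check(candidate):\n    assert candidate(2, 3) == 5\n",
    "entry_point": "add",
}
completion = "    return a + b\n"

def passes(problem, completion):
    """Run the problem's test suite against prompt + completion."""
    env = {}
    exec(problem["prompt"] + completion + problem["test"], env)
    try:
        env["check"](env[problem["entry_point"]])
        return True
    except AssertionError:
        return False

print(passes(toy_problem, completion))  # True
```

In practice, generated code is untrusted and should be executed in a sandboxed subprocess with a timeout rather than via a bare `exec`.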
## Citation

If you use this dataset, please cite the original HumanEval paper, ["Evaluating Large Language Models Trained on Code" (Chen et al., 2021)](https://arxiv.org/abs/2107.03374), as well as this curation effort.