Upload README.md with huggingface_hub
README.md (changed):
````diff
@@ -102,9 +102,6 @@ dataset = load_dataset("ByteDance-Seed/AInsteinBench")
 msb_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "msb_type")
 et_dataset = load_dataset("ByteDance-Seed/AInsteinBench", "et_type")
 
-# Load difficulty annotations
-difficulty = load_dataset("ByteDance-Seed/AInsteinBench", "difficulty_tag")
-
 # Access samples
 for sample in dataset['train']:
     print(f"ID: {sample['question_id']}")
@@ -112,14 +109,6 @@ for sample in dataset['train']:
     print(f"Task: {sample['description']}")
 ```
 
-### Difficulty Tags
-
-The `difficulty_tag` subset provides difficulty annotations for each question with two metrics:
-
-- **`engineeringDifficultyScore`** (1-5): Software engineering complexity
-- **`semanticDepthScore`** (1-5): Domain-specific scientific knowledge required
-
-Match difficulty tags to questions using `repo` and `pr_number` fields.
-
 ### Evaluation
 
 For evaluation scripts and detailed usage, please visit the [AInsteinBench GitHub repository](https://github.com/ByteDance-Seed/AInsteinBench).
````
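For quick reference, the access pattern in the README snippet above can be exercised without downloading the dataset by substituting an in-memory stand-in. This is a minimal sketch: the field names `question_id` and `description` come from the snippet itself, while the demo records below are hypothetical placeholders (real use requires the `datasets` library and a download from the Hub).

```python
# Hypothetical in-memory stand-in for the DatasetDict returned by
# load_dataset("ByteDance-Seed/AInsteinBench", "msb_type").
# The demo values are placeholders, not real benchmark records.
dataset = {
    "train": [
        {"question_id": "demo-001", "description": "Example task description"},
        {"question_id": "demo-002", "description": "Another example task"},
    ]
}

# Access samples exactly as in the README snippet
for sample in dataset["train"]:
    print(f"ID: {sample['question_id']}")
    print(f"Task: {sample['description']}")
```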