---
license: cc-by-sa-4.0
tags:
- competitive-programming
- code-ranking
- llm-benchmark
- code-efficiency
- aizu-online-judge
---

# AOJ-CodeRank-Benchmark: Hybrid Efficiency Ranking Benchmark Dataset

## 1. Overview

This dataset (AOJ-CodeRank-Benchmark) was created to evaluate the capability of **Large Language Models (LLMs)** on **code efficiency ranking tasks** using a high-quality, structured benchmark.

The dataset is built entirely on code submission records from **Aizu Online Judge (AOJ)**, strictly adhering to the principle of **correctness first, efficiency second**.

* **Problem Scope**: ALDS1 (Fundamental Algorithms), DSL/GRL/CGL (Advanced Data Structures/Graphs), and Volumes 0000-3299 (Classic Contest Problems).
* **Core Feature**: **Eliminates** 0 ms submissions and low-quality/non-unique submissions, ensuring true time differentiation across all data groups.

## 2. Data Structure

The dataset uses the **JSON Lines (.jsonl)** format. Each line represents a single Task Group object.

**Structure Preview (Candidates):**

| Field Name | Type | Description |
| :--- | :--- | :--- |
| `submission_id` | string | Unique submission ID. |
| `code_snippet` | string | The complete C++ source code. |
| **`accuracy`** | float | **Accuracy score** (0.0 to 1.0). |
| `time_ms` | integer | Actual execution time (in milliseconds). |
| **`score_of_the_acc`** | float | **Normalized efficiency score** (range -2.0 to 0.0). |
| **`final_rank`** | integer | **Final competition rank** (1, 2, 3, ...). |

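For illustration, a single candidate entry consistent with the table above might look like the following in Python. The values here are invented for illustration only; they are not taken from the dataset.

```python
# Hypothetical candidate entry; field names follow the table above,
# field values are invented for illustration.
candidate = {
    "submission_id": "s1234567",
    "code_snippet": "#include <iostream>\nint main() { return 0; }",
    "accuracy": 1.0,            # fraction of test cases passed (0.0 to 1.0)
    "time_ms": 30,              # measured execution time in milliseconds
    "score_of_the_acc": -0.25,  # normalized efficiency score (-2.0 to 0.0)
    "final_rank": 1,            # ground-truth competition rank
}

# Sanity checks implied by the field descriptions
assert 0.0 <= candidate["accuracy"] <= 1.0
assert -2.0 <= candidate["score_of_the_acc"] <= 0.0
assert candidate["final_rank"] >= 1
```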
## 3. Ground Truth (GT) Scoring and Ranking Logic 🏆

The LLM's objective is to predict the `final_rank`. This ranking is derived from a two-tiered system:

### Phase I: Efficiency Score (`score_of_the_acc`)

This score is a purely performance-based metric: the negated sum of the normalized Time and Memory costs within the task group.

$$\text{Score} = -(\text{Norm\_Time} + \text{Norm\_Memory})$$

*(Note: the score lies between -2.0 and 0.0; a score closer to 0.0 is better.)*

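The card gives the formula but does not specify the exact normalization. A minimal sketch, assuming min-max normalization of time and memory within each task group (the helper name and the sample inputs are ours):

```python
def score_of_the_acc(times_ms, memories_kb):
    """Sketch: negated sum of min-max-normalized time and memory costs.

    Assumption: the normalization is min-max within the task group; the
    card does not state this explicitly.
    """
    def min_max(values):
        lo, hi = min(values), max(values)
        span = hi - lo
        # If all values are equal, treat every cost as 0 (the best case).
        return [(v - lo) / span if span else 0.0 for v in values]

    norm_t = min_max(times_ms)
    norm_m = min_max(memories_kb)
    return [-(t + m) for t, m in zip(norm_t, norm_m)]

scores = score_of_the_acc([10, 20, 40], [1000, 2000, 4000])
# Every score falls in [-2.0, 0.0]; the fastest, leanest submission scores 0.0
# and the slowest, heaviest scores -2.0.
```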
### Phase II: Final Ranking (`final_rank`) Mechanism

The final rank is determined by a lexicographic sort (Standard Competition Ranking) using the following priority:

1. **Primary sort key (Accuracy)**: **`accuracy`** (descending).
2. **Secondary sort key (Efficiency)**: **`score_of_the_acc`** (descending).

**Tie-Breaking**: Submissions with identical accuracy and efficiency score receive the same rank (1-2-2-4 rule).

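The two-key sort and the 1-2-2-4 tie-breaking rule can be sketched as follows (the `final_ranks` helper is ours for illustration, not part of the dataset tooling):

```python
def final_ranks(candidates):
    """Standard Competition Ranking ("1224") over two keys:
    accuracy (descending), then score_of_the_acc (descending)."""
    order = sorted(candidates,
                   key=lambda c: (-c["accuracy"], -c["score_of_the_acc"]))
    ranks = {}
    prev_key, prev_rank = None, 0
    for pos, c in enumerate(order, start=1):
        key = (c["accuracy"], c["score_of_the_acc"])
        # Ties share a rank; the next distinct key jumps to its position,
        # producing the 1-2-2-4 pattern.
        rank = prev_rank if key == prev_key else pos
        ranks[c["submission_id"]] = rank
        prev_key, prev_rank = key, rank
    return ranks

ranks = final_ranks([
    {"submission_id": "a", "accuracy": 1.0, "score_of_the_acc": 0.0},
    {"submission_id": "b", "accuracy": 1.0, "score_of_the_acc": -0.5},
    {"submission_id": "c", "accuracy": 1.0, "score_of_the_acc": -0.5},
    {"submission_id": "d", "accuracy": 0.8, "score_of_the_acc": 0.0},
])
# → {"a": 1, "b": 2, "c": 2, "d": 4}
```

Note that candidate `d` receives rank 4, not 3: two submissions tied at rank 2 consume positions 2 and 3.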
---

## 4. Usage Example

```python
from datasets import load_dataset

# Load the dataset and access the candidates list
dataset = load_dataset("Slime/AOJ-CodeRank-Benchmark", data_files="train.jsonl", split="train")

# The LLM sorting algorithm receives task['candidates'] for ranking
for task in dataset:
    candidates = task['candidates']
    # The algorithm generates a predicted rank for each candidate;
    # evaluation compares the predicted ranks against the ground-truth 'final_rank'.
```
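The card does not prescribe an evaluation metric for comparing predicted ranks against `final_rank`. One simple option is pairwise agreement, sketched here with a helper of our own:

```python
def rank_agreement(predicted, ground_truth):
    """Fraction of candidate pairs ordered identically by both rankings
    (a simple pairwise-accuracy sketch; not an official metric of this
    benchmark). Both arguments are rank lists aligned by candidate index."""
    n = len(predicted)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    if not pairs:
        return 1.0
    agree = sum(
        1 for i, j in pairs
        if (predicted[i] - predicted[j]) * (ground_truth[i] - ground_truth[j]) > 0
        or (predicted[i] == predicted[j] and ground_truth[i] == ground_truth[j])
    )
    return agree / len(pairs)

assert rank_agreement([1, 2, 3], [1, 2, 3]) == 1.0  # perfect agreement
assert rank_agreement([1, 2, 3], [3, 2, 1]) == 0.0  # fully reversed
```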

## 5. Acknowledgments

Original submission records and problem context are sourced from Aizu Online Judge (AOJ).