---
language:
- en
pretty_name: 'CORE: Computational Reproducibility Agent Benchmark'
---
<p align="center">
  <a href="https://arxiv.org/abs/2409.11363">
    <img alt="Paper" src="https://img.shields.io/badge/arXiv-arXiv:2409.11363-b31b1b.svg">
  </a>
  <a href="https://agent-evals-core-leaderboard.hf.space">
    <img alt="Leaderboard" src="https://img.shields.io/badge/Leaderboard-Link-blue.svg">
  </a>
  <a href="https://github.com/siegelz/core-bench">
    <img alt="GitHub" src="https://img.shields.io/badge/GitHub-Repository-181717.svg">
  </a>
  <a href="https://huggingface.co/datasets/siegelz/core-bench">
    <img alt="Dataset" src="https://img.shields.io/badge/Hugging%20Face-Dataset-yellow.svg">
  </a>
</p>
# Dataset Card for CORE-Bench

`CORE-Bench` is a benchmark that evaluates the ability of agents to computationally reproduce scientific papers. It comprises 270 tasks from 90 papers across computer science, social science, and medicine, with code written in Python or R.

Each task in `CORE-Bench` requires an agent to reproduce the results of a research paper given its code repository. The agent must install the necessary libraries, packages, and dependencies and run the code. Once the code runs successfully, the agent must search through its outputs to answer the task questions. The agent submits a report, which is evaluated against the results of a successful reproduction. An agent successfully completes a task if it correctly answers all questions about the repository.

## Dataset Details

The benchmark is defined in two files: `core_train.json` and `core_test.json` (decrypt the test set using `gpg --output core_test.json --decrypt core_test.json.gpg`).