---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- question-answering
language:
- en
tags:
- code
pretty_name: SciCode
size_categories:
- 1K<n<10K
---
# Dataset Card for SciCode

Official description (from the authors):
Since language models (LMs) now outperform average humans on many challenging tasks, it has become increasingly difficult to develop challenging, high-quality, and realistic evaluations. We address this issue by examining LMs' capabilities to generate code for solving real scientific research problems. Incorporating input from scientists and AI researchers in 16 diverse natural science sub-fields, including mathematics, physics, chemistry, biology, and materials science, we created a scientist-curated coding benchmark, SciCode. The problems in SciCode naturally factorize into multiple subproblems, each involving knowledge recall, reasoning, and code synthesis. In total, SciCode contains 338 subproblems decomposed from 80 challenging main problems. It offers optional descriptions specifying useful scientific background information, as well as scientist-annotated gold-standard solutions and test cases for evaluation. Claude3.5-Sonnet, the best-performing model among those tested, can solve only 4.6% of the problems in the most realistic setting. We believe that SciCode both demonstrates contemporary LMs' progress towards becoming helpful scientific assistants and sheds light on the development and evaluation of scientific AI in the future.
## Dataset Details

### Dataset Sources
- **Repository:** https://github.com/scicode-bench/SciCode
- **Paper:** https://arxiv.org/abs/2407.13168
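The `path: data/train-*` glob in the front-matter config above determines which data shards feed the `train` split. As a small illustration (the shard file names below are hypothetical), Python's `fnmatch` mirrors how such a glob pattern matches file names:

```python
from fnmatch import fnmatch

# Glob from the dataset config: every file under data/ whose name
# starts with "train-" belongs to the train split.
pattern = "data/train-*"

# Hypothetical shard names, for illustration only.
files = [
    "data/train-00000-of-00002.parquet",
    "data/train-00001-of-00002.parquet",
    "data/test-00000-of-00001.parquet",
]

train_shards = [f for f in files if fnmatch(f, pattern)]
print(train_shards)  # only the two train-* shards match
```

In practice there is no need to match files by hand: `datasets.load_dataset` resolves this pattern automatically when given the dataset's Hub id and the `train` split.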
## Dataset Card Authors

The original authors of the SciCode benchmark, and Akshath Mangudi for providing the ground-truth artifact.