---
dataset_info:
  features:
  - name: benchmark
    dtype: string
  - name: artifact_type
    dtype: string
  - name: problem_id
    dtype: string
  - name: test_id
    dtype: string
  - name: variables
    dtype: string
  splits:
  - name: train
    num_bytes: 1889380696
    num_examples: 1082
  download_size: 820277839
  dataset_size: 1889380696
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- question-answering
language:
- en
tags:
- code
pretty_name: SciCode
size_categories:
- 1K<n<10K
---
# Dataset Card for SciCode

Official description (from the authors):

Since language models (LMs) now outperform average humans on many challenging tasks,
it has become increasingly difficult to develop challenging, high-quality, and realistic evaluations.
We address this issue by examining LMs' capabilities to generate code for solving real scientific research problems.
Incorporating input from scientists and AI researchers in 16 diverse natural science sub-fields,
including mathematics, physics, chemistry, biology, and materials science, we created a scientist-curated coding benchmark,
SciCode. The problems in SciCode naturally factorize into multiple subproblems, each involving knowledge recall, reasoning,
and code synthesis. In total, SciCode contains 338 subproblems decomposed from 80 challenging main problems.
It offers optional descriptions specifying useful scientific background information, as well as scientist-annotated gold-standard solutions
and test cases for evaluation. Claude 3.5 Sonnet, the best-performing model among those tested,
can solve only 4.6% of the problems in the most realistic setting. We believe that SciCode both demonstrates contemporary LMs'
progress towards becoming helpful scientific assistants and sheds light on the development and evaluation of scientific AI in the future.

This repository contains the ground-truth artifacts that are needed for LightEval benchmarks.

The original SciCode numerical evaluation artifacts are provided in
`raw/raw_ground.h5` for reproducibility and parity with the original
SciCode evaluation pipeline.
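To inspect the raw artifact, a minimal sketch using `h5py` may help (this is an assumption on my part, not part of the official pipeline; the internal group layout of `raw_ground.h5` is not documented in this card, so the walk below is generic and works on any HDF5 file):

```python
import h5py


def summarize_h5(path):
    """Return a list of (name, kind, shape) tuples for every object in an HDF5 file."""
    entries = []

    def visit(name, obj):
        # visititems calls this for every group and dataset below the root.
        if isinstance(obj, h5py.Dataset):
            entries.append((name, "dataset", obj.shape))
        else:
            entries.append((name, "group", None))

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return entries


# Example (hypothetical local path): list every group and dataset
# in the original SciCode numerical evaluation artifact.
# summarize_h5("raw/raw_ground.h5")
```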

This dataset uses a single split (`train`) as it represents a complete
set of SciCode numerical evaluation artifacts rather than training data.
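The five string-typed columns declared in the `dataset_info` block above can be checked programmatically before use. A small sketch follows (the sample values are invented purely for illustration and do not come from the dataset):

```python
# The five string-typed features declared in this card's `dataset_info` block.
FEATURES = {"benchmark", "artifact_type", "problem_id", "test_id", "variables"}


def validate_row(row):
    """Check that a row matches the declared schema: exactly the five
    declared features, with every value a string."""
    return set(row) == FEATURES and all(isinstance(v, str) for v in row.values())


# A purely illustrative row; real rows come from the SciCode artifacts.
sample_row = {
    "benchmark": "scicode",
    "artifact_type": "numerical",
    "problem_id": "11",
    "test_id": "11.1",
    "variables": "...",
}
```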

## Dataset Details

### Dataset Sources

- **Repository:** https://github.com/scicode-bench/SciCode
- **Paper:** https://arxiv.org/abs/2407.13168

## Dataset Card Authors

The original authors of the SciCode benchmark, and Akshath Mangudi, who
provided the ground-truth artifacts.