---
license: cc-by-sa-4.0
viewer: true
task_categories:
  - text-generation
  - question-answering
language:
  - en
tags:
  - code
pretty_name: PNYX-ds1000
size_categories:
  - 100<n<1K
extra_gated_prompt: >-
  You agree to NOT reveal examples from this dataset in plain text or images
  online, to reduce the risk of leakage into foundation model training corpora.
extra_gated_fields:
  I accept these terms: checkbox
configs:
  - config_name: Pandas
    data_files:
      - split: test
        path: data/Pandas/test.parquet
  - config_name: Numpy
    data_files:
      - split: test
        path: data/Numpy/test.parquet
  - config_name: Matplotlib
    data_files:
      - split: test
        path: data/Matplotlib/test.parquet
  - config_name: Tensorflow
    data_files:
      - split: test
        path: data/Tensorflow/test.parquet
  - config_name: Scipy
    data_files:
      - split: test
        path: data/Scipy/test.parquet
  - config_name: Sklearn
    data_files:
      - split: test
        path: data/Sklearn/test.parquet
  - config_name: Pytorch
    data_files:
      - split: test
        path: data/Pytorch/test.parquet
---

# PNYX - DS-1000

This is a split and tested version of DS-1000, based on the reformatted version claudios/ds1000 (metadata extracted as columns). This version is designed to be compatible with the `hf_evaluate` `code_eval` package. The code was also modified to work with newer versions of the Python packages it uses (numpy, scipy, etc.).

This dataset includes all the original fields plus the following:

- `user_chat_prompt`: A chat-style prompt for the problem, adapted from `prompt`, with an added instruction to wrap the solution code in a function.
- `test_code`: A rewrite of `code_context` in a format usable by the `hf_evaluate` code evaluator.
- `solution_function`: The solution in a format compatible with the `hf_evaluate` code evaluator, derived from `reference_code`.
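To illustrate how these two fields fit together, here is a purely hypothetical pair (invented for this sketch, not an actual DS-1000 row, so nothing is leaked): a `solution_function`-style self-contained function definition, and a `test_code`-style snippet that calls it and asserts on the result, so the evaluator can execute the two concatenated.

```python
# Hypothetical example of the field shapes; not taken from the dataset.
solution_function = '''
def solve(values):
    # Return the running maximum of a list of numbers.
    out, best = [], float("-inf")
    for v in values:
        best = max(best, v)
        out.append(best)
    return out
'''

test_code = '''
assert solve([1, 3, 2, 5]) == [1, 3, 3, 5]
'''

# The evaluator executes candidate code followed by the test code;
# an exception (including AssertionError) means the sample fails.
namespace = {}
exec(solution_function + "\n" + test_code, namespace)
```

The exact structure of the real `test_code` entries may differ; this only shows the function-plus-assertions pattern the evaluator relies on.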

## Execution dependencies

- pandas==2.3.3
- numpy==2.2.6
- matplotlib==3.10.8
- scipy==1.15.3
- pooch==1.9.0
- seaborn==0.13.2
- PyYAML==6.0.3
- scikit-learn==1.7.2
- torch==2.10.0
- tensorflow==2.20.0
- xgboost==1.6.2
- statsmodels==0.14.6
- gensim==4.4.0
- nltk==3.9.3
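For a reproducible environment, the pins above can be copied verbatim into a requirements file (the file name is only a convention) and installed with `pip install -r requirements.txt`:

```text
pandas==2.3.3
numpy==2.2.6
matplotlib==3.10.8
scipy==1.15.3
pooch==1.9.0
seaborn==0.13.2
PyYAML==6.0.3
scikit-learn==1.7.2
torch==2.10.0
tensorflow==2.20.0
xgboost==1.6.2
statsmodels==0.14.6
gensim==4.4.0
nltk==3.9.3
```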

## Testing

The provided code can be tested with `hf_evaluate` as follows:

```python
import os

from datasets import load_dataset
import evaluate as hf_evaluate

# code_eval executes model-generated code; this opt-in is required.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

# Log in first (e.g. `huggingface-cli login`) to access this gated dataset.
ds = load_dataset("PNYX/ds1000_pnyx", "Numpy")

solution_code = ds["test"][0]["solution_function"]
test_code = ds["test"][0]["test_code"]

# Run a simple test: the reference solution should pass its own test code.
code_eval = hf_evaluate.load("code_eval")
pass_at_k, results = code_eval.compute(
    references=[test_code], predictions=[[solution_code]], k=[1]
)

assert pass_at_k["pass@1"] == 1
```

Note that the execution environment needs the required dependencies installed.
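When several candidate solutions are scored per problem, `code_eval` reports pass@k using the unbiased estimator from the Codex paper, 1 - C(n-c, k) / C(n, k), where n is the number of samples and c the number that passed. A minimal stdlib version (my own sketch, not the library's code):

```python
from math import comb

def estimate_pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k draws
    (without replacement) from n samples is among the c correct ones."""
    if n - c < k:
        # Fewer failures than draws: at least one success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with one sample that passes, `estimate_pass_at_k(1, 1, 1)` is 1.0, matching the `pass@1 == 1` assertion above.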

## Missing Samples and Changes

Some examples from the original DS-1000 are not included here:

- Failed tests (either a buggy solution or an incompatibility with the current test methodology):
  - 520
  - 925
- Non-standard test methodology, incompatible with the current approach:
  - 701
- Require external data (either downloads or hardcoded CSV files):
  - 819
  - 908
  - 909
  - 910

Finally, four samples originally assigned to the Numpy library were moved to Pytorch or Tensorflow, since their prompts and imports show they are not Numpy problems:

- Numpy to Pytorch:
  - 377
  - 378
- Numpy to Tensorflow:
  - 379
  - 380