---
license: mit
task_categories:
  - text-generation
language:
  - ja
  - en
tags:
  - code
configs:
  - config_name: ja
    data_files:
      - split: test
        path:
          - sakuraeval/ja.parquet
  - config_name: en
    data_files:
      - split: test
        path:
          - sakuraeval/en.parquet
---

# SakuraEval

## Dataset Description

SakuraEval is a Japan-specific code generation benchmark. Its problems were authored independently rather than translated from the English HumanEval benchmark (the approach taken by JHumanEval).

## Dataset Structure

```python
from datasets import load_dataset

load_dataset("kogi-jwu/sakuraeval", "ja")
# DatasetDict({
#     test: Dataset({
#         features: ['task_id', 'category', 'prompt', 'canonical_solution', 'test', 'entry_point'],
#         num_rows: 164
#     })
# })
```

### Data Fields

- `task_id`: Identifier for the data sample.
- `category`: Task category.
- `prompt`: Input for the model, including the function header and a docstring that describes the task.
- `canonical_solution`: Solution to the problem presented in the prompt.
- `test`: Function(s) to test the generated code for correctness.
- `entry_point`: Entry point function to begin testing.
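These fields mirror the HumanEval convention, in which `prompt` plus a completion forms the program and `test` defines a `check(candidate)` function. A minimal sketch of how the reference solution could be verified under that assumption (the sample below is hypothetical, for illustration only):

```python
# Hypothetical sample illustrating the HumanEval-style field layout
# (assumption: SakuraEval's `test` defines a check(candidate) function).
sample = {
    "task_id": "SakuraEval/0",  # hypothetical identifier
    "prompt": (
        "def add_tax(price: int) -> int:\n"
        '    """Return the price including 10% consumption tax, rounded down."""\n'
    ),
    "canonical_solution": "    return price * 110 // 100\n",
    "test": (
        "def check(candidate):\n"
        "    assert candidate(100) == 110\n"
        "    assert candidate(999) == 1098\n"
    ),
    "entry_point": "add_tax",
}

def run_reference_check(sample: dict) -> bool:
    """Execute prompt + canonical_solution, then run `test` on the entry point."""
    env: dict = {}
    program = sample["prompt"] + sample["canonical_solution"] + sample["test"]
    exec(program, env)  # caution: run generated code only in a sandboxed process
    env["check"](env[sample["entry_point"]])  # raises AssertionError on failure
    return True
```

To score model output, the same harness would substitute the generated completion for `canonical_solution`.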

## Category Breakdown

| Category | Number of Tasks |
| --- | --- |
| 文化 (Culture) | 34 |
| 風習 (Customs) | 27 |
| 日本地理 (Japanese Geography) | 10 |
| 公民・法律 (Law and Civics) | 11 |
| 数学・科学 (Math and Science) | 21 |
| 単位変換 (Unit Conversion) | 11 |
| 日本語処理 (Japanese Language) | 43 |
| その他 (Other) | 7 |
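The breakdown above can be recomputed from the `category` field, e.g. with `collections.Counter`:

```python
from collections import Counter

# Stand-in list of labels for illustration; with the real dataset you would pass
# load_dataset("kogi-jwu/sakuraeval", "ja", split="test")["category"] instead.
categories = ["文化", "風習", "文化", "日本語処理"]
breakdown = Counter(categories)  # maps each category label to its task count
```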

## Languages

The dataset contains coding problems in two natural languages: Japanese (the `ja` config) and English (the `en` config).