---
task_categories:
  - question-answering
language:
  - en
  - zh
tags:
  - biology
  - medical
size_categories:
  - 1K<n<10K
viewer: true
configs:
  - config_name: default
    data_files:
      - split: test
        path:
          - AnesBench.json
---

# AnesBench

The AnesBench Datasets Collection comprises three distinct datasets: AnesBench, an anesthesiology reasoning benchmark; AnesQA, an SFT dataset; and AnesCorpus, a continual pre-training dataset. This repository hosts AnesBench. For the other two, see [AnesQA](https://huggingface.co/datasets/MiliLab/AnesQA) and [AnesCorpus](https://huggingface.co/datasets/MiliLab/AnesCorpus).

## Dataset Description

AnesBench is designed to assess the anesthesiology-related reasoning capabilities of Large Language Models (LLMs). It contains 4,427 anesthesiology questions. Each question is labeled with a three-level categorization of cognitive demand and is provided in both English and Chinese, enabling evaluation of LLMs' knowledge, application, and clinical reasoning abilities across diverse linguistic contexts.

## JSON Sample

```json
{
    "id": "1bb76e22-6dbf-5b17-bbdf-0e6cde9f9440",
    "choice_num": 4,
    "answer": "A",
    "level": 1,
    "en_question": "english question",
    "en_A": "option 1",
    "en_B": "option 2",
    "en_C": "option 3",
    "en_D": "option 4",
    "zh_question": "中文问题",
    "zh_A": "选项一",
    "zh_B": "选项二",
    "zh_C": "选项三",
    "zh_D": "选项四"
}
```

## Field Explanations

| Field | Type | Description |
| --- | --- | --- |
| id | string | A randomly generated UUID |
| choice_num | int | The number of answer choices in the question |
| answer | string | The letter of the correct answer |
| level | int | The cognitive demand level of the question (1, 2, and 3 represent system 1, system 1.x, and system 2, respectively) |
| en_question | string | English text of the question stem |
| zh_question | string | Chinese text of the question stem |
| en_X | string | English text of option X |
| zh_X | string | Chinese text of option X |
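Given the field layout above, a record can be assembled into a multiple-choice prompt by iterating over the first `choice_num` option letters. The sketch below assumes only the schema shown in the JSON sample; the helper name `format_prompt` is illustrative, not part of the dataset.

```python
import string

def format_prompt(record: dict, lang: str = "en") -> str:
    """Build a multiple-choice prompt from an AnesBench record.

    `lang` selects the "en" or "zh" field prefix; option fields
    follow the `{lang}_{letter}` pattern shown in the JSON sample.
    """
    letters = string.ascii_uppercase[: record["choice_num"]]
    lines = [record[f"{lang}_question"]]
    lines += [f"{letter}. {record[f'{lang}_{letter}']}" for letter in letters]
    lines.append("Answer:")
    return "\n".join(lines)

# Example record mirroring the JSON sample (Chinese fields omitted).
record = {
    "id": "1bb76e22-6dbf-5b17-bbdf-0e6cde9f9440",
    "choice_num": 4,
    "answer": "A",
    "level": 1,
    "en_question": "english question",
    "en_A": "option 1", "en_B": "option 2",
    "en_C": "option 3", "en_D": "option 4",
}

print(format_prompt(record))
```

Because `choice_num` bounds the letter range, the same function handles questions with fewer than four options.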

## Recommended Usage

- **Question Answering**: Evaluate in a zero-shot or few-shot setting, feeding each question into a QA system. Accuracy should be used as the evaluation metric.
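The accuracy metric above can be sketched as follows, assuming a list of records parsed from `AnesBench.json` and a caller-supplied `predict` callable (an illustrative name for any LLM wrapper that maps a record to an answer letter). Grouping by `level` also yields per-system accuracy:

```python
import json
from collections import defaultdict

def evaluate(records, predict):
    """Score a letter-prediction function against AnesBench records.

    Returns overall accuracy and a per-level accuracy dict, where
    levels 1/2/3 correspond to system 1 / system 1.x / system 2.
    """
    correct = 0
    per_level = defaultdict(lambda: [0, 0])  # level -> [correct, total]
    for rec in records:
        hit = predict(rec) == rec["answer"]
        correct += hit
        per_level[rec["level"]][0] += hit
        per_level[rec["level"]][1] += 1
    overall = correct / len(records)
    return overall, {lvl: c / t for lvl, (c, t) in per_level.items()}

# Usage with the real file (path as declared in the config above):
# records = json.load(open("AnesBench.json", encoding="utf-8"))
# accuracy, accuracy_by_level = evaluate(records, my_llm_predict)
```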