README.md
CHANGED
@@ -13,22 +13,46 @@ viewer: true
 ---
 
 
- # 🌞 Intro
- **AnesBench** is designed to assess anesthesiology-related reasoning capabilities of Large Language Models (LLMs).
- It contains 4,427 anesthesiology questions in English.
- Each question is labeled with a three-level categorization of cognitive demands and includes Chinese-English translations,
- enabling evaluation of LLMs' knowledge, application, and clinical reasoning abilities across diverse linguistic contexts.
- 
- **2025.03.31**
- - We released the [AnesBench project page](https://mililab.github.io/anesbench.ai/)!
- 
- #
- Please refer to the [AnesBench Github repository](https://github.com/MiliLab/AnesBench).
- 
- ```
- ```
+ # Dataset Description
+ 
+ **AnesBench** is designed to assess anesthesiology-related reasoning capabilities of Large Language Models (LLMs). It contains 4,427 anesthesiology questions in English. Each question is labeled with a three-level categorization of cognitive demands and includes Chinese-English translations, enabling evaluation of LLMs' knowledge, application, and clinical reasoning abilities across diverse linguistic contexts.
+ 
+ ## JSON Sample
+ 
+ ```json
+ {
+     "id": "1bb76e22-6dbf-5b17-bbdf-0e6cde9f9440",
+     "choice_num": 4,
+     "answer": "A",
+     "level": 1,
+     "en_question": "english question",
+     "en_A": "option 1",
+     "en_B": "option 2",
+     "en_C": "option 3",
+     "en_D": "option 4",
+     "zh_question": "中文问题",
+     "zh_A": "选项一",
+     "zh_B": "选项二",
+     "zh_C": "选项三",
+     "zh_D": "选项四"
+ }
+ ```
+ 
+ ## Field Explanations
+ 
+ | Field         | Type   | Description |
+ |---------------|--------|-------------|
+ | `id`          | string | A randomly generated UUID |
+ | `choice_num`  | int    | The number of answer choices in the question |
+ | `answer`      | string | The letter of the correct choice |
+ | `level`       | int    | The cognitive demand level of the question (`1`, `2`, and `3` represent `system1`, `system1.x`, and `system2` respectively) |
+ | `en_question` | string | The question stem in English |
+ | `zh_question` | string | The question stem in Chinese |
+ | `en_X`        | string | The text of option `X` (`A`, `B`, `C`, ...) in English |
+ | `zh_X`        | string | The text of option `X` in Chinese |
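As a quick sanity check, the sample record and the `level` code mapping described in the field table can be exercised in plain Python. The `options` helper and `LEVEL_NAMES` dict below are illustrative, not part of the dataset tooling:

```python
import json
import string

# Sample record copied from the "JSON Sample" section.
record = json.loads("""
{
    "id": "1bb76e22-6dbf-5b17-bbdf-0e6cde9f9440",
    "choice_num": 4,
    "answer": "A",
    "level": 1,
    "en_question": "english question",
    "en_A": "option 1",
    "en_B": "option 2",
    "en_C": "option 3",
    "en_D": "option 4",
    "zh_question": "\\u4e2d\\u6587\\u95ee\\u9898",
    "zh_A": "\\u9009\\u9879\\u4e00",
    "zh_B": "\\u9009\\u9879\\u4e8c",
    "zh_C": "\\u9009\\u9879\\u4e09",
    "zh_D": "\\u9009\\u9879\\u56db"
}
""")

# `level` codes 1/2/3 map to the cognitive-demand tiers named in the table.
LEVEL_NAMES = {1: "system1", 2: "system1.x", 3: "system2"}

def options(rec, lang="en"):
    """Collect the `choice_num` options (A, B, C, ...) for one language."""
    letters = string.ascii_uppercase[: rec["choice_num"]]
    return {letter: rec[f"{lang}_{letter}"] for letter in letters}

print(LEVEL_NAMES[record["level"]])  # system1
print(options(record, "en")["A"])    # option 1
assert record["answer"] in options(record)
```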
+ 
+ ## Recommended Usage
+ 
+ - **Question Answering**: QA in a zero-shot or few-shot setting, where each question is fed to the QA system. Accuracy should be used as the evaluation metric.
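For the zero-shot setting, a minimal evaluation sketch might look like the following. The `format_prompt` helper, the toy record, and the model predictions are hypothetical stand-ins for illustration, not part of the AnesBench release:

```python
import string

def format_prompt(rec, lang="en"):
    """Render one record as a zero-shot multiple-choice prompt (illustrative)."""
    letters = string.ascii_uppercase[: rec["choice_num"]]
    lines = [rec[f"{lang}_question"]]
    lines += [f"{letter}. {rec[f'{lang}_{letter}']}" for letter in letters]
    lines.append("Answer with the letter of the correct option.")
    return "\n".join(lines)

def accuracy(golds, preds):
    """Fraction of predictions matching the gold answer letters."""
    assert len(golds) == len(preds)
    return sum(g == p for g, p in zip(golds, preds)) / len(golds)

# Toy record (NOT from the dataset) just to exercise the helpers.
record = {
    "choice_num": 2,
    "answer": "A",
    "en_question": "Which agent is a volatile anesthetic?",
    "en_A": "Sevoflurane",
    "en_B": "Lidocaine",
}
print(format_prompt(record))
print(accuracy(["A", "B", "C"], ["A", "B", "A"]))  # 2 of 3 correct
```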
meta.json
ADDED
@@ -0,0 +1,3 @@
+ {"@context":{"@language":"en","@vocab":"https://schema.org/","arrayShape":"cr:arrayShape","citeAs":"cr:citeAs","column":"cr:column","conformsTo":"dct:conformsTo","cr":"http://mlcommons.org/croissant/","data":{"@id":"cr:data","@type":"@json"},"dataBiases":"cr:dataBiases","dataCollection":"cr:dataCollection","dataType":{"@id":"cr:dataType","@type":"@vocab"},"dct":"http://purl.org/dc/terms/","extract":"cr:extract","field":"cr:field","fileProperty":"cr:fileProperty","fileObject":"cr:fileObject","fileSet":"cr:fileSet","format":"cr:format","includes":"cr:includes","isArray":"cr:isArray","isLiveDataset":"cr:isLiveDataset","jsonPath":"cr:jsonPath","key":"cr:key","md5":"cr:md5","parentField":"cr:parentField","path":"cr:path","personalSensitiveInformation":"cr:personalSensitiveInformation","recordSet":"cr:recordSet","references":"cr:references","regex":"cr:regex","repeated":"cr:repeated","replace":"cr:replace","sc":"https://schema.org/","separator":"cr:separator","source":"cr:source","subField":"cr:subField","transform":"cr:transform"},"@type":"sc:Dataset","distribution":[{"@type":"cr:FileObject","@id":"repo","name":"repo","description":"The Hugging Face git repository.","contentUrl":"https://huggingface.co/datasets/MiliLab/AnesBench/tree/refs%2Fconvert%2Fparquet","encodingFormat":"git+https","sha256":"https://github.com/mlcommons/croissant/issues/80"},{"@type":"cr:FileSet","@id":"parquet-files-for-config-default","containedIn":{"@id":"repo"},"encodingFormat":"application/x-parquet","includes":"default/*/*.parquet"}],"recordSet":[{"@type":"cr:RecordSet","dataType":"cr:Split","key":{"@id":"default_splits/split_name"},"@id":"default_splits","name":"default_splits","description":"Splits for the default config.","field":[{"@type":"cr:Field","@id":"default_splits/split_name","dataType":"sc:Text"}],"data":[{"default_splits/split_name":"train"}]},{"@type":"cr:RecordSet","@id":"default","description":"MiliLab/AnesBench - 'default' subset","field":[{"@type":"cr:Field","@id":"default/split","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"fileProperty":"fullpath"},"transform":{"regex":"default/(?:partial-)?(train)/.+parquet$"}},"references":{"field":{"@id":"default_splits/split_name"}}},{"@type":"cr:Field","@id":"default/id","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"id"}}},{"@type":"cr:Field","@id":"default/choice_num","dataType":"cr:Int64","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"choice_num"}}},{"@type":"cr:Field","@id":"default/answer","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"answer"}}},{"@type":"cr:Field","@id":"default/level","dataType":"cr:Int64","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"level"}}},{"@type":"cr:Field","@id":"default/en_question","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_question"}}},{"@type":"cr:Field","@id":"default/en_A","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_A"}}},{"@type":"cr:Field","@id":"default/en_B","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_B"}}},{"@type":"cr:Field","@id":"default/en_C","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_C"}}},{"@type":"cr:Field","@id":"default/en_D","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_D"}}},{"@type":"cr:Field","@id":"default/zh_question","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_question"}}},{"@type":"cr:Field","@id":"default/zh_A","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_A"}}},{"@type":"cr:Field","@id":"default/zh_B","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_B"}}},{"@type":"cr:Field","@id":"default/zh_C","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_C"}}},{"@type":"cr:Field","@id":"default/zh_D","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_D"}}},{"@type":"cr:Field","@id":"default/en_E","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_E"}}},{"@type":"cr:Field","@id":"default/zh_E","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_E"}}},{"@type":"cr:Field","@id":"default/en_F","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_F"}}},{"@type":"cr:Field","@id":"default/zh_F","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_F"}}},{"@type":"cr:Field","@id":"default/en_G","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_G"}}},{"@type":"cr:Field","@id":"default/en_H","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_H"}}},{"@type":"cr:Field","@id":"default/zh_G","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_G"}}},{"@type":"cr:Field","@id":"default/zh_H","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_H"}}},{"@type":"cr:Field","@id":"default/en_I","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"en_I"}}},{"@type":"cr:Field","@id":"default/zh_I","dataType":"sc:Text","source":{"fileSet":{"@id":"parquet-files-for-config-default"},"extract":{"column":"zh_I"}}}]}],"conformsTo":"http://mlcommons.org/croissant/1.1","name":"AnesBench","description":"\n\t\n\t\t\n\t\t🌞 Intro\n\t\n\nAnesBench is designed to assess anesthesiology-related reasoning capabilities of Large Language Models (LLMs). \nIt contains 4,427 anesthesiology questions in English. \nEach question is labeled with a three-level categorization of cognitive demands and includes Chinese-English translations, \nenabling evaluation of LLMs' knowledge, application, and clinical reasoning abilities across diverse linguistic contexts.\n\n\t\n\t\t\n\t\t🔥 Update\n\t\n\n2025.03.31\n\nWe released the AnesBench… See the full description on the dataset page: https://huggingface.co/datasets/MiliLab/AnesBench.","alternateName":["MiliLab/AnesBench"],"creator":{"@type":"Organization","name":"Mili Lab","url":"https://huggingface.co/MiliLab"},"keywords":["question-answering","English","Chinese","1K - 10K","json","Tabular","Text","Datasets","pandas","Croissant","Polars","🇺🇸 Region: US","biology","medical"],"url":"https://huggingface.co/datasets/MiliLab/AnesBench"}
+
+
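The added metadata is plain Croissant JSON-LD, so the column layout it declares can be inspected with a few lines of Python. The fragment below is trimmed to two fields of the `default` record set for brevity; the `columns` helper is an illustrative sketch, not an official Croissant API:

```python
import json

# Trimmed fragment of the `recordSet` entries from meta.json above.
croissant = json.loads("""
{
    "recordSet": [
        {"@id": "default_splits", "field": [
            {"@id": "default_splits/split_name", "dataType": "sc:Text"}
        ]},
        {"@id": "default", "field": [
            {"@id": "default/id", "dataType": "sc:Text",
             "source": {"extract": {"column": "id"}}},
            {"@id": "default/choice_num", "dataType": "cr:Int64",
             "source": {"extract": {"column": "choice_num"}}}
        ]}
    ]
}
""")

def columns(meta):
    """Map each record set ID to the parquet columns its fields extract."""
    out = {}
    for rs in meta["recordSet"]:
        out[rs["@id"]] = [
            f["source"]["extract"]["column"]
            for f in rs["field"]
            if "column" in f.get("source", {}).get("extract", {})
        ]
    return out

print(columns(croissant))  # {'default_splits': [], 'default': ['id', 'choice_num']}
```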