kkl4 committed on
Commit b78f5db · 1 Parent(s): 4391020
Files changed (1)
  1. README.md +37 -27
README.md CHANGED
@@ -27,41 +27,17 @@ configs:
 
 # AnesBench
 
-[**Project Page**](https://mililab.github.io/anesbench.ai/) | [**Paper**](https://huggingface.co/papers/2504.02404) | [**GitHub**](https://github.com/mililab/anesbench)
 
-**AnesBench** is a comprehensive benchmark designed to assess anesthesiology-related reasoning capabilities of Large Language Models (LLMs). It is the evaluation component of **AnesSuite**, the first comprehensive dataset suite specifically designed for anesthesiology reasoning.
 
-The benchmark features 7,972 anesthesiology Multiple Choice Questions (MCQs) available in both English and Chinese. Each question is labeled with a three-level categorization of cognitive demands based on dual-process theory:
-- **System 1**: Factual retrieval (fast, intuitive recall).
-- **System 1.x**: Hybrid reasoning (pattern recognition and rule application).
-- **System 2**: Complex decision-making (deliberate, analytical clinical reasoning).
 
 | Subset | File | Total | System 1 | System 1.x | System 2 |
 |---------|------|-------|----------|------------|----------|
 | English | `anesbench_en.json` | 4,343 | 2,960 | 1,028 | 355 |
 | Chinese | `anesbench_zh.json` | 3,529 | 2,784 | 534 | 211 |
 
-## Sample Usage
-
-To evaluate a model on AnesBench, you can use the evaluation code provided in the [official repository](https://github.com/mililab/anesbench).
-
-### Setup
-```bash
-git clone https://github.com/MiliLab/AnesBench
-cd AnesBench/eval
-pip install -r requirements.txt
-```
-
-### Run Evaluation
-Prepare your environment variables and run the evaluation script:
-```bash
-export RESULT_SAVE_PATH=/path/to/result_save_dir
-export MODEL_PATH=/path/to/model
-export BENCHMARK_PATH=/path/to/benchmark
-
-python ./evaluate.py --config ./config.yaml
-```
-
 ## JSON Sample
 
 **English** (`anesbench_en.json`):
@@ -80,6 +56,23 @@ python ./evaluate.py --config ./config.yaml
 }
 ```
 
 ## Field Explanations
 
 | Field | Type | Description |
@@ -91,6 +84,23 @@ python ./evaluate.py --config ./config.yaml
 | `target` | string | The correct answer to this question |
 | `category` | int | The cognitive demand category of the question (`1` = System 1, `2` = System 1.x, `3` = System 2) |
 
 ## Citation
 
 If you find AnesBench helpful, please consider citing the following paper:
 
 
 # AnesBench
 
+[**Paper**](https://huggingface.co/papers/2504.02404) | [**GitHub**](https://github.com/mililab/anesbench)
 
+# Dataset Description
 
+**AnesBench** is designed to assess the anesthesiology-related reasoning capabilities of Large Language Models (LLMs). It provides bilingual (English and Chinese) anesthesiology questions in two separate files. Each question is labeled with a three-level categorization of cognitive demands based on dual-process theory (System 1, System 1.x, and System 2), enabling evaluation of LLMs' knowledge, application, and clinical reasoning abilities across diverse linguistic contexts.
 
 | Subset | File | Total | System 1 | System 1.x | System 2 |
 |---------|------|-------|----------|------------|----------|
 | English | `anesbench_en.json` | 4,343 | 2,960 | 1,028 | 355 |
 | Chinese | `anesbench_zh.json` | 3,529 | 2,784 | 534 | 211 |
 
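Assuming each subset ships as a flat JSON array of records (the file layout is an assumption; see the repository for the canonical loader), the per-category totals in the table above can be reproduced with a short tally. A minimal sketch, illustrated with a tiny inline stand-in shaped like the real records:

```python
import json
from collections import Counter

def category_counts(records):
    """Tally questions by cognitive demand category
    (1 = System 1, 2 = System 1.x, 3 = System 2)."""
    return Counter(r["category"] for r in records)

# With the files downloaded locally, a subset would load as:
#   records = json.load(open("anesbench_en.json", encoding="utf-8"))
# Inline stand-in used here so the sketch is self-contained:
sample = json.loads('[{"category": 1}, {"category": 1}, {"category": 3}]')
print(category_counts(sample))  # Counter({1: 2, 3: 1})
```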
 ## JSON Sample
 
 **English** (`anesbench_en.json`):
 
 }
 ```
 
+**Chinese** (`anesbench_zh.json`):
+
+```json
+{
+  "A": "替代治疗",
+  "B": "手术治疗",
+  "C": "对症治疗",
+  "D": "静脉输注糖皮质激素",
+  "E": "补充盐皮质激素",
+  "id": "78587bd9-f3f6-4118-b6eb-95ed7c91a0ec",
+  "question": "Addison病抢救的主要措施是",
+  "choice_num": 5,
+  "target": "D",
+  "category": 1
+}
+```
+
 ## Field Explanations
 
 | Field | Type | Description |
 
 | `target` | string | The correct answer to this question |
 | `category` | int | The cognitive demand category of the question (`1` = System 1, `2` = System 1.x, `3` = System 2) |
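The field table translates into a small consistency check. A minimal sketch, assuming the option keys run `A` upward through the letter implied by `choice_num` (an assumption based on the published samples):

```python
import string

def validate(record):
    """Check one AnesBench record against the documented fields."""
    letters = string.ascii_uppercase[: record["choice_num"]]
    assert all(letter in record for letter in letters), "missing option text"
    assert record["target"] in letters, "target must name one of the options"
    assert record["category"] in (1, 2, 3), "unknown cognitive demand category"
    return True

# The Chinese sample record from above:
record = {
    "A": "替代治疗", "B": "手术治疗", "C": "对症治疗",
    "D": "静脉输注糖皮质激素", "E": "补充盐皮质激素",
    "id": "78587bd9-f3f6-4118-b6eb-95ed7c91a0ec",
    "question": "Addison病抢救的主要措施是",
    "choice_num": 5, "target": "D", "category": 1,
}
print(validate(record))  # True
```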
 
+### Cognitive Demand Categories
+
+| Category | Label | Description |
+|----------|-------|-------------|
+| 1 | **System 1** | Fast, intuitive recall of factual knowledge |
+| 2 | **System 1.x** | Pattern recognition and application of learned rules |
+| 3 | **System 2** | Deliberate, analytical clinical reasoning |
+
+## Recommended Usage
+
+- **Question Answering**: answer each MCQ in a zero-shot or few-shot setting, feeding the question and its options to the model. Accuracy is the evaluation metric.
+
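The zero-shot setup can be sketched in a few lines (this is not the official evaluation code; `ask_model` is a hypothetical placeholder for a real LLM call):

```python
def format_prompt(record):
    """Render one MCQ as a plain-text prompt."""
    options = "\n".join(
        f"{letter}. {record[letter]}"
        for letter in "ABCDE"[: record["choice_num"]]
    )
    return f"{record['question']}\n{options}\nAnswer with a single letter."

def accuracy(records, ask_model):
    """Fraction of questions where the model's letter matches `target`."""
    correct = sum(ask_model(format_prompt(r)) == r["target"] for r in records)
    return correct / len(records)

# Toy records and a stub model that always answers "A":
toy = [
    {"question": "q1", "A": "x", "B": "y", "choice_num": 2, "target": "A"},
    {"question": "q2", "A": "x", "B": "y", "choice_num": 2, "target": "B"},
]
print(accuracy(toy, lambda prompt: "A"))  # 0.5
```

A real run would replace the stub with an API or local-inference call and parse a single option letter out of the model's reply.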
+## Usage
+
+To evaluate a model on AnesBench, you can use the evaluation code provided in the [official repository](https://github.com/MiliLab/AnesSuite).
+
 ## Citation
 
 If you find AnesBench helpful, please consider citing the following paper: