---
license: mit
task_categories:
- question-answering
language:
- zh
tags:
- medical
- tcm
- traditional Chinese medicine
- eval
- benchmark
- test
---
# Description
This dataset evaluates the capabilities of large language models in traditional Chinese medicine (TCM). It contains multiple-choice (single-answer), multiple-answer, and true/false questions.
# Changelog
- **2024-08-28: Added 7226 questions.**
- 2024-08-09: The benchmark code is available at https://github.com/huangxinping/HWTCMBench.
- 2024-08-02: Removed system prompts so that evaluation results are not biased by prompt wording.
- 2024-07-20: Initial release.
## Examples
Multiple-answer questions (多选题)
```json
[
  {
    "instruction": "便秘的预防调护应注意\nA.保持心情舒畅\nB.少吃辛辣刺激性食物\nC.适当摄入油脂\nD.积极治疗肛门直肠疾病\nE.按时登厕",
    "input": "",
    "output": "ABCDE"
  }
]
```
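Each record uses the Alpaca-style `instruction` / `input` / `output` fields shown above, with the lettered options embedded in the `instruction` string. As a minimal sketch, the options can be split off from the question stem with a small helper (the `split_question` name and the regex are illustrative, not part of the dataset or benchmark code):

```python
import re

# Sample record in the dataset's instruction/input/output schema
# (taken from the multiple-answer example above).
record = {
    "instruction": "便秘的预防调护应注意\nA.保持心情舒畅\nB.少吃辛辣刺激性食物\nC.适当摄入油脂\nD.积极治疗肛门直肠疾病\nE.按时登厕",
    "input": "",
    "output": "ABCDE",
}

def split_question(instruction: str) -> tuple[str, dict[str, str]]:
    """Split a question into its stem and a dict of lettered options (A.–E.)."""
    stem_lines, options = [], {}
    for line in instruction.split("\n"):
        m = re.match(r"^([A-E])\.(.*)$", line)
        if m:
            options[m.group(1)] = m.group(2)
        else:
            stem_lines.append(line)
    return "\n".join(stem_lines), options

stem, options = split_question(record["instruction"])
print(stem)             # question stem only
print(sorted(options))  # ['A', 'B', 'C', 'D', 'E']
```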
Multiple-choice questions (单选题)
```json
[
  {
    "instruction": "患者,男,50岁。眩晕欲仆,头摇而痛,项强肢颤,腰膝疫软,舌红苔薄白,脉弦有力。其病机是\nA.肝阳上亢\nB.肝肾阴虚\nC.肝阳化风\nD.阴虚风动\nE.肝血不足",
    "input": "",
    "output": "C"
  }
]
```
True/false questions (判断题)
```json
[
  {
    "instruction": "秦医医和提出了“六气病源说”。",
    "input": "",
    "output": "正确"
  },
  {
    "instruction": "中风中经络邪盛时也可出现神志改变",
    "input": "",
    "output": "错误"
  }
]
```
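The official evaluation code lives in the GitHub repository linked in the changelog. As a rough sketch of how answers in this schema could be scored (assumptions of this sketch, not necessarily the official metric: exact match for single-choice and true/false answers, order-insensitive letter-set match for multiple-answer questions):

```python
def score(prediction: str, gold: str) -> bool:
    """Score one answer against the gold `output` field.

    Multiple-answer gold labels are strings of several option letters
    (e.g. "ABCDE"), so they are compared as sets to ignore ordering.
    Single-choice letters and true/false labels (正确/错误) are compared
    by exact match.
    """
    pred, gold = prediction.strip(), gold.strip()
    if len(gold) > 1 and set(gold) <= set("ABCDE"):  # multiple-answer
        return set(pred) == set(gold)
    return pred == gold

print(score("C", "C"))          # True
print(score("EDCBA", "ABCDE"))  # True: order-insensitive
print(score("错误", "正确"))     # False
```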
## Evaluation
| Model | Multiple-choice questions | Multiple-answer questions | True/false questions |
|---|---|---|---|
| llama3:8b | 21.94% | 17.71% | 46.56% |
| phi3:14b-instruct | 26.93% | 1.04% | 38.93% |
| aya:8b | 17.85% | 1.04% | 34.35% |
| mistral:7b-instruct | 21.76% | 2.08% | **48.09%** |
| qwen1.5-7b-chat | 51.35% | 13.54% | 46.56% |
| qwen1.5-14b-chat | 69.94% | **78.12%** | 31.30% |
| huangdi-13b-chat | 21.73% | 45.83% | 0.00% |
| canggong-14b-chat (SFT)<br>**Ours** | 55.98% | 4.17% | 23.66% |
| canggong-14b-chat (DPO)<br>**Ours** | **72.33%** | 2.08% | 45.80% |
> Canggong-14b-chat is a traditional Chinese medicine LLM that is still in training.
## Citation
If you find this project useful in your research, please consider citing it:
```
@misc{hwtcm2024,
  title={{hwtcm}: a traditional Chinese medicine QA dataset for evaluating large language models},
  author={Haiwei AI Team},
  howpublished={\url{https://huggingface.co/datasets/Monor/hwtcm}},
  year={2024}
}
``` |