---
task_categories:
- question-answering
language:
- en
tags:
- medical
pretty_name: MentalBench
size_categories:
- 10K<n<100K
---
# MentalBench: A Benchmark for Evaluating Psychiatric Diagnostic Capability of Large Language Models

## 🌟 Overview

**MentalBench** is a comprehensive benchmark for evaluating the psychiatric diagnostic capabilities of large language models (LLMs). As the use of LLMs in healthcare expands, ensuring their reliability in sensitive domains such as psychiatry is crucial. MentalBench provides a robust evaluation framework, grounded in real-world psychiatric knowledge. To facilitate deeper reasoning and grounded evaluation, this benchmark is integrated with MentalKG, a specialized knowledge graph structured for psychiatric domain knowledge.

## 🎯 Question Types
| Type | Description | Difficulty | Number of Samples |
|------|-------------|------------|-------------------|
| **Type 1** | Medical Chart → Single Answer | Low | 1,725 |
| **Type 2** | Patient Self-Report → Single Answer | Medium | 3,450 |
| **Type 3** | Ambiguous Type → Multiple Answer | High | 6,525 |
| **Type 4** | Clear Type → Single Answer | High | 13,050 |
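Because Types 1, 2, and 4 expect a single diagnosis while Type 3 allows multiple answers, scoring differs by question type. The helpers below are a minimal sketch of one plausible scoring scheme (exact match for single-answer types, order-insensitive set match for the multiple-answer type); the function names and answer format are illustrative assumptions, not the official evaluation code.

```python
def score_single(pred: str, gold: str) -> bool:
    """Exact match after whitespace/case normalization,
    for single-answer question types (1, 2, 4)."""
    return pred.strip().lower() == gold.strip().lower()


def score_multi(pred: list[str], gold: list[str]) -> bool:
    """Order-insensitive set match for the multiple-answer
    question type (3): all gold diagnoses, and only those."""
    def norm(answers: list[str]) -> set[str]:
        return {a.strip().lower() for a in answers}
    return norm(pred) == norm(gold)
```

For example, `score_multi(["GAD", "MDD"], ["mdd", "gad"])` is true, while a prediction missing one of the gold diagnoses is not.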

## 📝 Citation

If you find MentalBench and MentalKG useful for your research, please cite our paper:

```bibtex
@article{song2026mentalbench,
    title={MentalBench: A Benchmark for Evaluating Psychiatric Diagnostic Capability of Large Language Models},
    author={Song, Hoyun and Kang, Migyeong and Shin, Jisu and Kim, Jihyun and Park, Chanbi and Yoo, Hangyeol and An, Jihyun and Oh, Alice and Han, Jinyoung and Lim, KyungTae},
    journal={arXiv preprint arXiv:2602.12871},
    year={2026}
}
```