---
dataset_info:
  dataset_name: TCM_Book_Corpus
  language: zh
  license: apache-2.0
  task_categories:
  - language-modeling
  pretty_name: 中医教材语料全集
  size_categories:
  - 1M<n<10M
  tags:
  - chinese
  - traditional-chinese-medicine
  - textbook
  - academic
  configs:
  - config_name: default
    data_files:
    - split: train
      path: data*/*.parquet
---

# TCM Book Corpus

> A pre-training corpus of cleaned Traditional Chinese Medicine (TCM) textbooks.

## Data Overview

| Dataset           | Modality | Description                              | Data Quantity  |
| ----------------- | -------- | ---------------------------------------- | -------------- |
| TCM\_Book\_Corpus | 📝 Text  | A cleaned corpus of 3,256 TCM textbooks. | ~0.5 B tokens  |
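The front matter's `configs` section points the `train` split at files matching `data*/*.parquet`. A minimal stdlib sketch of how that glob resolves against a repository layout (the directory and file names below are hypothetical; `datasets.load_dataset` applies the same pattern when assembling the split):

```python
import glob
import os
import tempfile

# Hypothetical local checkout with two data shards, mirroring the
# "data*/*.parquet" pattern declared in the dataset card's YAML config.
root = tempfile.mkdtemp()
for subdir, fname in [("data1", "part-000.parquet"), ("data2", "part-001.parquet")]:
    os.makedirs(os.path.join(root, subdir), exist_ok=True)
    open(os.path.join(root, subdir, fname), "wb").close()

# Resolve the same glob the `train` split uses.
matches = sorted(glob.glob(os.path.join(root, "data*", "*.parquet")))
print([os.path.relpath(m, root) for m in matches])
```

With the `datasets` library installed, the equivalent load is `load_dataset("<repo_id>", split="train")`, where `<repo_id>` is this repository's Hub identifier.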

## Important Notice

Copyright: The original data is publicly released by FreedomIntelligence (FreedomAI); this repository only reorganizes it and converts its format.  
Source: Extracted from [FreedomIntelligence/TCM-Pretrain-Data-ShizhenGPT](https://huggingface.co/datasets/FreedomIntelligence/TCM-Pretrain-Data-ShizhenGPT).  

If you use this data, please also cite the original authors' work:
```bibtex
@misc{chen2025shizhengptmultimodalllmstraditional,
  title={ShizhenGPT: Towards Multimodal LLMs for Traditional Chinese Medicine}, 
  author={Junying Chen and Zhenyang Cai and Zhiheng Liu and Yunjin Yang and Rongsheng Wang and Qingying Xiao and Xiangyi Feng and Zhan Su and Jing Guo and Xiang Wan and Guangjun Yu and Haizhou Li and Benyou Wang},
  year={2025},
  eprint={2508.14706},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.14706}
}
```