Night-Quiet committed on
Commit 083f5cf · verified · 1 Parent(s): 6e5b841

Translate into Chinese.

Files changed (1)
  1. README.md +29 -31
README.md CHANGED
@@ -16,42 +16,40 @@ tags:
  tasks:
  - text-generation
  ---
- # DeepPulse-80B-Thinking-V0.1
- **DeepPulse (深度把脉)** is one of the core achievements of 心语心言's open-source Traditional Chinese Medicine (TCM) large model series. It uses Qwen3-Next-80B-A3B-Thinking as the base model, was deeply fine-tuned on a self-built, high-quality TCM clinical dataset, and focuses on practical TCM assisted-diagnosis applications. With strong clinical reasoning, DeepPulse performs excellently on the public TCM-5CEval benchmark, taking first place in both total score and TCM diagnosis and demonstrating top-tier capability in the domain.

- # Public TCM Benchmark Metrics Comparison (MedBench - TCM-5CEval)
- | No. | Model Name | Organization/Team Name | Release Date | Type | Parameters | Total Score | TCM-Exam | TCM-LitQA | TCM-MRCD | TCM-CMM | TCM-ClinNPT |
  | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
  | 1 | <font color="red">DeepPulse-80B-Thinking-V0.1</font> | <font color="red">心语心言</font> | <font color="red">2025/12/23</font> | <font color="red">Open-source</font> | <font color="red">80B</font> | <font color="red">71.3</font> | <font color="red">83.0</font> | <font color="red">45.5</font> | <font color="red">75.4</font> | <font color="red">84.9</font> | <font color="red">67.6</font> |
  | 2 | HKR_TCM_HW_v1 | 港仔机器人主动健管团队 | 2025/12/12 | Closed-source | 671B | 70.8 | 85.4 | 44.2 | 73.1 | 83.8 | 67.5 |
  | 3 | Gemini-2.5-Pro-nothinking | Google | 2025/03/25 | Closed-source | N/A | 69.2 | 77.9 | 62.0 | 72.4 | 72.6 | 61.2 |
  | 4 | DeepSeek-V3.2 | DeepSeek | 2025/12/01 | Open-source | 671B | 66.8 | 74.5 | 44.4 | 66.8 | 80.0 | 68.3 |
  | 5 | Grok-4 | xAI | 2025/07/09 | Closed-source | N/A | 66.6 | 73.0 | 59.3 | 68.4 | 68.0 | 64.2 |
- | 6 | Qwen3-235B-A22B-Thinking-2507 | Alibaba | 2025/08/17 | Open-source | 235B | 64.8 | 75.5 | 40.3 | 68.5 | 78.2 | 61.5 |
- | 7 | Claude-Sonnet-4.5 | Anthropic | 2025/09/29 | Closed-source | N/A | 64.8 | 69.8 | 59.3 | 67.2 | 71.7 | 56.0 |
- | 8 | GPT-5 | OpenAI | 2025/08/07 | Closed-source | N/A | 63.6 | 75.0 | 51.9 | 64.1 | 66.6 | 60.6 |
- | 9 | Qwen3-Next-80B-A3B-Thinking | Alibaba | 2025/09/15 | Open-source | 80B | 63.5 | 76.0 | 38.2 | 66.2 | 77.9 | 59.4 |
- | 10 | Llama-4-maverick | Meta | 2025/04/06 | Open-source | 400B | 57.2 | 72.1 | 51.3 | 63.8 | 54.4 | 44.3 |
- | 11 | GPT-4o | OpenAI | 2025/05/13 | Closed-source | 200B | 55.9 | 66.5 | 46.9 | 60.9 | 57.1 | 47.9 |
-
- > Note: "N/A" in the Parameters column means the model's parameter count has not been publicly disclosed.
- >
- > Except for `DeepSeek-V3.2`, `Qwen3-235B-A22B-Thinking-2507`, and `Qwen3-Next-80B-A3B-Thinking`, which are self-tested deployment results, the other figures reference public leaderboard data.
- >
- > TCM-5CEval: https://medbench.opencompass.org.cn/track-detail/tcmeval

- # SDK Download
- ```bash
- # Install ModelScope
- pip install modelscope
- ```
- ```python
- # Download the model via the ModelScope SDK
- from modelscope import snapshot_download
- model_dir = snapshot_download('PneumaAI-org/DeepPulse-80B-Thinking-V0.1')
- ```
- # Git Download
- ```bash
- # Download the model via Git
- git clone https://www.modelscope.cn/PneumaAI-org/DeepPulse-80B-Thinking-V0.1.git
- ```
 
  tasks:
  - text-generation
  ---
+ # DeepPulse-80B TCM Large Model Series
+
+ **DeepPulse (深度把脉)** is the core achievement of 心语心言's open-source Traditional Chinese Medicine (TCM) large model series. The series uses Qwen3-Next-80B as the base model and has undergone deep fine-tuning on a self-built, high-quality TCM clinical dataset. This release includes two versions:
+
+ * **DeepPulse-80B-Thinking-V0.1**: Focuses on complex clinical reasoning and assisted diagnosis; it ranks first in total score in public evaluations, demonstrating top-tier logical reasoning in the TCM domain.
+ * **DeepPulse-80B-Instruct-V0.1**: Offers strong TCM instruction-following, suited to a wide range of TCM Q&A and interactive scenarios, and ranks sixth overall.
+
+ # Public TCM Benchmark Metrics Comparison (MedBench - TCM-5CEval)
+
+ TCM-5CEval is an authoritative evaluation benchmark for TCM large models. It comprises the following five subtasks, which together give a comprehensive assessment of a model's TCM capabilities:
+
+ * **TCM-Exam (中医考试)**: Evaluates mastery and application of fundamental TCM theory (Yin-Yang, Zang-Fu organs, etc.) and diagnostics.
+ * **TCM-LitQA (典籍问答)**: Tests deep understanding of and reasoning over classic TCM texts such as the "Huangdi Neijing" and "Shanghan Lun".
+ * **TCM-MRCD (临床诊疗)**: Simulates real clinical scenarios, evaluating the model's ability to analyze cases, perform pattern differentiation, and make prescription decisions.
+ * **TCM-CMM (中药方剂)**: Measures knowledge of Chinese materia medica properties, effects, compatibility contraindications, and formula applications.
+ * **TCM-ClinNPT (非药物疗法)**: Assesses acupoint selection for acupuncture, Tuina massage techniques, and pattern-based treatment in specific clinical scenarios.
+
+ | No. | Model Name | Organization/Team Name | Release Date | Type | Parameters | Total Score | TCM-Exam | TCM-LitQA | TCM-MRCD | TCM-CMM | TCM-ClinNPT |
  | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
  | 1 | <font color="red">DeepPulse-80B-Thinking-V0.1</font> | <font color="red">心语心言</font> | <font color="red">2025/12/23</font> | <font color="red">Open-source</font> | <font color="red">80B</font> | <font color="red">71.3</font> | <font color="red">83.0</font> | <font color="red">45.5</font> | <font color="red">75.4</font> | <font color="red">84.9</font> | <font color="red">67.6</font> |
  | 2 | HKR_TCM_HW_v1 | 港仔机器人主动健管团队 | 2025/12/12 | Closed-source | 671B | 70.8 | 85.4 | 44.2 | 73.1 | 83.8 | 67.5 |
  | 3 | Gemini-2.5-Pro-nothinking | Google | 2025/03/25 | Closed-source | N/A | 69.2 | 77.9 | 62.0 | 72.4 | 72.6 | 61.2 |
  | 4 | DeepSeek-V3.2 | DeepSeek | 2025/12/01 | Open-source | 671B | 66.8 | 74.5 | 44.4 | 66.8 | 80.0 | 68.3 |
  | 5 | Grok-4 | xAI | 2025/07/09 | Closed-source | N/A | 66.6 | 73.0 | 59.3 | 68.4 | 68.0 | 64.2 |
+ | 6 | <font color="red">DeepPulse-80B-Instruct-V0.1</font> | <font color="red">心语心言</font> | <font color="red">2025/12/23</font> | <font color="red">Open-source</font> | <font color="red">80B</font> | <font color="red">66.2</font> | <font color="red">74.4</font> | <font color="red">40.7</font> | <font color="red">70.6</font> | <font color="red">79.7</font> | <font color="red">65.6</font> |
+ | 7 | Qwen3-235B-A22B-Thinking-2507 | Alibaba | 2025/08/17 | Open-source | 235B | 64.8 | 75.5 | 40.3 | 68.5 | 78.2 | 61.5 |
+ | 8 | Claude-Sonnet-4.5 | Anthropic | 2025/09/29 | Closed-source | N/A | 64.8 | 69.8 | 59.3 | 67.2 | 71.7 | 56.0 |
+ | 9 | GPT-5 | OpenAI | 2025/08/07 | Closed-source | N/A | 63.6 | 75.0 | 51.9 | 64.1 | 66.6 | 60.6 |
+ | 10 | Qwen3-Next-80B-A3B-Thinking | Alibaba | 2025/09/15 | Open-source | 80B | 63.5 | 76.0 | 38.2 | 66.2 | 77.9 | 59.4 |
+ | 11 | Llama-4-maverick | Meta | 2025/04/06 | Open-source | 400B | 57.2 | 72.1 | 51.3 | 63.8 | 54.4 | 44.3 |
+ | 12 | GPT-4o | OpenAI | 2025/05/13 | Closed-source | 200B | 55.9 | 66.5 | 46.9 | 60.9 | 57.1 | 47.9 |

+ > Note: "N/A" in the Parameters column indicates that the model's parameter count has not been publicly disclosed.
+ >
+ > Except for `DeepSeek-V3.2`, `Qwen3-235B-A22B-Thinking-2507`, and `Qwen3-Next-80B-A3B-Thinking`, which are self-tested deployment results, the other figures reference publicly available leaderboard data.
+ >
+ > TCM-5CEval: https://medbench.opencompass.org.cn/track-detail/tcmeval
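
As a quick consistency check on the leaderboard above, the Total Score column appears to be the unweighted mean of the five subtask scores (this is an observation about the numbers, not a documented scoring rule; the `mean_score` helper below is my own, and the row values are copied from the table):

```python
def mean_score(subtask_scores):
    """Unweighted mean, rounded to one decimal place as in the table."""
    return round(sum(subtask_scores) / len(subtask_scores), 1)

# Rows copied from the TCM-5CEval table:
# (Total Score, [TCM-Exam, TCM-LitQA, TCM-MRCD, TCM-CMM, TCM-ClinNPT])
rows = {
    "DeepPulse-80B-Thinking-V0.1": (71.3, [83.0, 45.5, 75.4, 84.9, 67.6]),
    "DeepPulse-80B-Instruct-V0.1": (66.2, [74.4, 40.7, 70.6, 79.7, 65.6]),
    "Qwen3-Next-80B-A3B-Thinking": (63.5, [76.0, 38.2, 66.2, 77.9, 59.4]),
}

for name, (total, scores) in rows.items():
    # Each listed total matches the plain average of its five subtasks.
    assert mean_score(scores) == total, name
```

Under this reading, DeepPulse-80B-Thinking-V0.1's 5.7-point lead over its base model Qwen3-Next-80B-A3B-Thinking comes from uniform gains across all five subtasks rather than one outlier.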