qxsecureserver committed · commit d13edbe (verified) · 1 parent: cc6ca08

Delete README.md

Files changed (1): README.md (+0 −399)

README.md (deleted):
---
language:
- en
- zh
tags:
- medical
---
<div align="center">
<h1>
  Baichuan-M1-14B-Instruct
</h1>
</div>

<p align="center">
🤗 <a href="https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Base" target="_blank">Baichuan-M1-14B-Base</a> • 🤗 <a href="https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Instruct" target="_blank">Baichuan-M1-14B-Instruct</a> • 📗 <a href="https://arxiv.org/abs/2502.12671" target="_blank">Technical Report</a> • 💬 <a href="https://y41.8if.cn/JQCj6n" target="_blank">WeChat</a>
</p>

---

# 📖 Table of Contents

- [🏁 Model Introduction](#intro)
- [🔬 Data Collection and Processing](#data)
- [🧠 New Model Architecture](#structure)
- [⚙️ Training Methodology](#training)
- [📊 Benchmark Results](#benchmark)
- [🚀 Quick Start](#quick)
- [📜 License and Statement](#declare)
- [🏷️ Reference](#reference)

---
<a name="intro"></a>
# 🏁 Model Introduction

**Baichuan-M1-14B** is the industry's first open-source large language model developed from scratch by Baichuan Intelligence and optimized specifically for medical scenarios. Alongside strong general capabilities, it delivers powerful performance in the medical domain: it matches models of similar size on most general benchmarks while outperforming models five times larger on medical tasks. Core features of the model:

- Trained from scratch on **20 trillion tokens** of high-quality medical and general data.
- Fine-grained modeling of medical expertise across **20+ medical departments**.
- An **innovative model architecture** that significantly improves context understanding and long-sequence task performance.
- Available as a **[🤗 Base Model](https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Base)** and an **[🤗 Instruct Model](https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Instruct)**.

---
<a name="data"></a>
# 🔬 Data Collection and Processing

## Medical Data Collection

We carried out meticulous data collection and synthesis for the medical field, including:

- **Tens of millions of professional medical documents**: Chinese and English research papers, medical case reports, medical textbooks, knowledge bases, and more.
- **Hundreds of millions of medical Q&A and clinical records**: covering complex medical reasoning and real-world clinical cases.
- **Comprehensive data classification and evaluation**: data categorized by medical department, content type, and value to ensure a balanced distribution and to retain only truly valuable medical data.

## Data Synthesis and Optimization

- **Synthetic data design**: combining knowledge graphs, cases, and textbooks to generate diverse, high-quality medical reasoning data.
- **Self-reflection mechanism and reward model**: continuously improving the quality of synthetic data, ultimately producing **nearly a trillion tokens** of reasoning data covering long-tail knowledge and complex scenarios.

## General Data Collection

- **20T multilingual general dataset**: 14T tokens of English data, 4T of Chinese data, and 2T covering 30 mainstream languages.
- **Deduplication and upsampling strategy**: upsampling high-quality data to significantly enhance model performance.
- **27 global knowledge categories**: data ratios optimized via small-model experiments to balance general and domain-specific capabilities.
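The deduplicate-then-upsample idea can be sketched in a few lines. This is an illustrative toy (exact hashing, integer quality weights acting as copy counts), not the actual pipeline:

```python
import hashlib

def dedup_and_upsample(docs, weights):
    """Exact-dedup by content hash, then repeat each surviving document
    according to its quality weight (weight = number of copies in the mix).
    `weights` is a hypothetical quality-scorer output mapping doc -> int."""
    seen, out = set(), []
    for doc in docs:
        h = hashlib.sha256(doc.encode()).hexdigest()
        if h in seen:
            continue                     # drop exact duplicates
        seen.add(h)
        out.extend([doc] * weights.get(doc, 1))
    return out

corpus = ["high quality", "dup", "dup", "ordinary"]
mix = dedup_and_upsample(corpus, {"high quality": 3})
```

Real pipelines typically use fuzzy deduplication (e.g. MinHash) and fractional sampling rates rather than integer copy counts.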

---
<a name="structure"></a>
# 🧠 New Model Architecture

## Short Convolution Attention Mechanism

- Lightweight short convolutions are applied when computing the Key and Value, significantly reducing the standard Transformer's reliance on induction heads. Traditional Transformers depend on induction heads to capture repetitive patterns and contextual dependencies in a sequence, which requires a certain model width and depth. The short convolution decouples the Key and Value sequences along the time dimension, strengthening in-context learning. Extensive experiments, from toy models to models with over ten billion parameters, show that short convolution attention excels at language modeling, especially on tasks that depend heavily on contextual information.
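To illustrate the idea (this is a minimal sketch, not the model's actual implementation), a causal depthwise short convolution over the key or value sequence looks like this; the function name and kernel shape are hypothetical:

```python
import numpy as np

def short_conv(x, w):
    """Causal depthwise short convolution along the time axis.
    x: (seq_len, dim) key or value sequence; w: (kernel, dim) filter.
    Each position mixes the current vector with a few immediately
    preceding ones, so the key at step t already carries information
    about step t-1 without relying on an induction head."""
    k, d = w.shape
    pad = np.concatenate([np.zeros((k - 1, d)), x], axis=0)  # causal left-pad
    return np.stack([(pad[t:t + k] * w).sum(axis=0) for t in range(len(x))])

keys = np.arange(8, dtype=float).reshape(4, 2)   # toy (seq_len=4, dim=2)
w = np.full((2, 2), 0.5)                         # kernel=2: average current + previous
mixed = short_conv(keys, w)
```

With a kernel of 2-4, the extra compute is negligible next to attention itself, which is what makes the operation "lightweight".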

## Sliding Window Attention Mechanism

- A sliding-window attention mechanism is adopted in some layers to reduce KV cache memory usage.
- **Optimization**: balances computational efficiency and performance, making it especially suitable for long-sequence tasks.
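A minimal sketch of the attention pattern such a layer uses (illustrative helper, not the model's code):

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    """Boolean causal mask where each query position attends only to
    the `window` most recent positions (itself included)."""
    i = np.arange(seq_len)[:, None]   # query index
    j = np.arange(seq_len)[None, :]   # key index
    return (j <= i) & (j > i - window)

mask = sliding_window_mask(6, 3)
# A layer using this mask only ever needs the last 3 key/value pairs,
# so its KV cache grows as O(window) instead of O(seq_len).
```

Interleaving such layers with full-attention layers is a common way to keep long-range capability while cutting cache memory.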

## Optimizing Position Encoding Oscillation

- Increasing the dimension of some attention heads reduces oscillation in the RoPE curve.
- **Result**: more stable performance on long-sequence tasks while preserving the model's ability to capture diverse features.
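One way to see the head-dimension effect, assuming standard RoPE (the helper below is illustrative): a wider head spreads its channel pairs over a larger range of rotation rates, adding slow-rotating components.

```python
import numpy as np

def rope_inv_freq(head_dim, base=10000.0):
    """Standard RoPE inverse frequencies: one rotation rate for each
    pair of channels in an attention head."""
    return 1.0 / base ** (np.arange(0, head_dim, 2) / head_dim)

narrow = rope_inv_freq(64)    # typical head size
wide = rope_inv_freq(128)     # enlarged head
# The wider head gains extra slow-rotating pairs (a smaller minimum
# frequency), which smooths the attention-score-vs-distance curve.
```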

## High Peak Learning Rate Strategy

- A **WSD (Warmup-Stable-Decay) learning-rate schedule** with a high peak learning rate is used to promote model generalization.
- **Comparison results**: significant improvement on benchmark tasks.
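The WSD shape can be sketched as follows; the warmup/decay fractions and linear ramps are illustrative choices, not the values used in training:

```python
def wsd_lr(step, total, peak, warmup_frac=0.1, decay_frac=0.1, floor=0.0):
    """Warmup-Stable-Decay: linear warmup to a high peak, a long flat
    stage held at the peak, then a short linear decay at the end."""
    warmup = int(total * warmup_frac)
    decay_start = int(total * (1 - decay_frac))
    if step < warmup:
        return peak * step / warmup            # warmup ramp
    if step < decay_start:
        return peak                            # long stable stage at the peak
    return peak + (floor - peak) * (step - decay_start) / (total - decay_start)

schedule = [wsd_lr(s, 100, 1e-3) for s in range(101)]
```

Unlike cosine decay, the stable stage lets training continue from any checkpoint on the plateau and defer the decay, which is convenient for multi-stage curricula.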

## Adaptive Gradient Update

- **Dynamic gradient clipping**: the update is skipped when the gradient is too large, reducing instability caused by unusual samples or steep regions of the loss landscape.
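A toy version of the skip-on-spike rule, using an illustrative threshold (a multiple of the running mean gradient norm, which is an assumption, not Baichuan's exact criterion):

```python
import numpy as np

def maybe_apply_update(params, grads, lr, history, k=3.0):
    """Plain SGD step that is skipped when the global gradient norm is
    an outlier versus recent steps (more than k times the running mean)."""
    norm = float(np.sqrt(sum((g ** 2).sum() for g in grads)))
    threshold = k * (np.mean(history) if history else norm)
    history.append(norm)
    if norm > threshold:
        return params, False          # gradient spike: skip this update
    return [p - lr * g for p, g in zip(params, grads)], True

history = [1.0, 1.0]                  # recent gradient norms
params = [np.array([1.0, 1.0])]
params, applied = maybe_apply_update(params, [np.ones(2)], 0.1, history)       # normal step
params, spiked = maybe_apply_update(params, [np.full(2, 10.0)], 0.1, history)  # outlier step
```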

---
<a name="training"></a>
# ⚙️ Training Methodology

We adopt an innovative **multi-stage curriculum learning and alignment optimization** approach, systematically enhancing model capabilities through the following two parts:

## 1. Multi-Stage Curriculum Learning

Training is divided into three stages that progressively optimize the model's general and medical capabilities:

1. **General Knowledge Enhancement Stage**: focused on general language modeling to improve basic language ability and common-sense knowledge.
2. **Medical Basic Knowledge Enhancement Stage**: introduces high-quality medical data to strengthen reasoning, mathematics, and medical knowledge.
3. **Medical Advanced Knowledge Enhancement Stage**: further improves data quality, focusing on complex medical reasoning, disease diagnosis, and long-tail knowledge.

## 2. Alignment Optimization

Generation quality, logical reasoning, and user-preference alignment are enhanced through reinforcement learning and pairwise-data optimization:

1. **Pairwise Data**: covers multi-turn dialogue, instruction following, math and code, and reasoning tasks, sourced from human annotation and multi-model generation.
2. **Optimization Process**:
   - **ELO**: optimizes diverse, high-quality chain-of-thought generation based on maximum likelihood.
   - **TDPO**: uses the pairwise data to optimize the generation model for better user-preference alignment.
   - **PPO**: further improves generation logic and task performance through policy optimization.

This combination of multi-stage training and alignment optimization enables the model to achieve exceptional performance in both general and medical capabilities.

---
<a name="benchmark"></a>
# 📊 Benchmark Results

Our evaluation covers all mainstream benchmarks. The model achieves excellent results against both open-source and closed-source models, demonstrating outstanding medical-scenario capabilities while maintaining strong general performance.

<table style="border: 1px solid #000; border-collapse: collapse; width: 100%; text-align: center;">
  <thead>
    <tr>
      <th>Category</th>
      <th>Benchmark</th>
      <th style="font-size:15px;">Baichuan-M1-14B-Instruct</th>
      <th style="font-size:15px;">Qwen2.5-14B-Instruct</th>
      <th style="font-size:15px;">Qwen2.5-72B-Instruct</th>
      <th style="font-size:15px;">claude-3.5-sonnet-20241022</th>
      <th style="font-size:15px;">gpt-4o</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td colspan="2" style="text-align: center;">Average Score</td>
      <td>72.23</td><td>65.39</td><td>70.51</td><td>74.85</td><td>75.00</td>
    </tr>
    <tr>
      <td rowspan="7" style="vertical-align: middle;">Clinical Practice</td>
      <td style="text-align: left;">cmbclin</td>
      <td>77.40</td><td>71.51</td><td>75.36</td><td>78.37</td><td>75.36</td>
    </tr>
    <tr>
      <td style="text-align: left;">clinicalbench_diag</td>
      <td>70.90</td><td>68.85</td><td>72.23</td><td>75.00</td><td>73.05</td>
    </tr>
    <tr>
      <td style="text-align: left;">clinicalbench_hos</td>
      <td>70.05</td><td>68.83</td><td>70.53</td><td>65.58</td><td>69.38</td>
    </tr>
    <tr>
      <td style="text-align: left;">clinicalbench_treat</td>
      <td>56.38</td><td>55.03</td><td>57.30</td><td>64.03</td><td>59.35</td>
    </tr>
    <tr>
      <td style="text-align: left;">rarearena_rdc</td>
      <td>81.80</td><td>66.40</td><td>76.20</td><td>89.60</td><td>88.40</td>
    </tr>
    <tr>
      <td style="text-align: left;">rarearena_rds</td>
      <td>54.00</td><td>42.60</td><td>49.80</td><td>59.80</td><td>57.20</td>
    </tr>
    <tr>
      <td style="text-align: left;">rarebench</td>
      <td>59.60</td><td>52.80</td><td>60.60</td><td>65.30</td><td>62.80</td>
    </tr>
    <tr>
      <td rowspan="10" style="vertical-align: middle;">Exams</td>
      <td style="text-align: left;">cmexam</td>
      <td>80.10</td><td>77.70</td><td>82.70</td><td>77.50</td><td>78.00</td>
    </tr>
    <tr>
      <td style="text-align: left;">Pediatric Qualification Exam</td>
      <td>78.48</td><td>74.68</td><td>84.81</td><td>76.58</td><td>78.48</td>
    </tr>
    <tr>
      <td style="text-align: left;">Internal Medicine Qualification Exam</td>
      <td>83.42</td><td>86.10</td><td>87.17</td><td>87.70</td><td>83.42</td>
    </tr>
    <tr>
      <td style="text-align: left;">General Practice Qualification Exam</td>
      <td>87.07</td><td>88.44</td><td>88.44</td><td>81.63</td><td>84.35</td>
    </tr>
    <tr>
      <td style="text-align: left;">USMLE</td>
      <td>78.00</td><td>67.20</td><td>76.70</td><td>85.90</td><td>87.10</td>
    </tr>
    <tr>
      <td style="text-align: left;">medbullets</td>
      <td>66.88</td><td>54.22</td><td>64.29</td><td>72.40</td><td>75.97</td>
    </tr>
    <tr>
      <td style="text-align: left;">mediq</td>
      <td>83.40</td><td>66.80</td><td>79.90</td><td>88.80</td><td>90.20</td>
    </tr>
    <tr>
      <td style="text-align: left;">nejmqa</td>
      <td>49.75</td><td>45.69</td><td>50.76</td><td>69.54</td><td>54.31</td>
    </tr>
    <tr>
      <td style="text-align: left;">pubmedqa</td>
      <td>75.20</td><td>76.40</td><td>75.60</td><td>77.00</td><td>77.60</td>
    </tr>
    <tr>
      <td style="text-align: left;">redisqa</td>
      <td>74.50</td><td>69.70</td><td>75.00</td><td>83.20</td><td>82.80</td>
    </tr>
    <tr>
      <td rowspan="5" style="vertical-align: middle;">Basic Capabilities</td>
      <td style="text-align: left;">mednli_dis</td>
      <td>80.40</td><td>68.90</td><td>74.90</td><td>58.30</td><td>79.80</td>
    </tr>
    <tr>
      <td style="text-align: left;">medcalc</td>
      <td>56.00</td><td>31.40</td><td>37.90</td><td>52.60</td><td>49.00</td>
    </tr>
    <tr>
      <td style="text-align: left;">MMLU-anatomy</td>
      <td>80.00</td><td>67.41</td><td>71.11</td><td>86.67</td><td>91.11</td>
    </tr>
    <tr>
      <td style="text-align: left;">MMLU-virology</td>
      <td>54.82</td><td>56.02</td><td>53.01</td><td>54.22</td><td>57.23</td>
    </tr>
    <tr>
      <td style="text-align: left;">MMLU-genetics</td>
      <td>91.00</td><td>82.00</td><td>87.00</td><td>97.00</td><td>95.00</td>
    </tr>
  </tbody>
</table>

---
<a name="quick"></a>
# 🚀 Quick Start

### 🤗 Hugging Face Transformers

We recommend using the latest version of the Transformers library (at least 4.47.0). The following code snippet demonstrates how to use the **Baichuan-M1-14B-Instruct** model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1. Load the pre-trained model and tokenizer
model_name = "baichuan-inc/Baichuan-M1-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).cuda()

# 2. Input prompt text
prompt = "May I ask you some questions about medical knowledge?"

# 3. Apply the chat template and encode the input for the model
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# 4. Generate text
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
)
# Strip the prompt tokens, keeping only the newly generated continuation
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

# 5. Decode and print the generated text
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print("Generated text:")
print(response)
```

---
<a name="declare"></a>
# 📜 License and Statement

Use of the model must comply with the [Baichuan-M1-14B Model Community License Agreement (《Baichuan-M1-14B模型社区许可协议》)](https://github.com/baichuan-inc/Baichuan-M1-14B/blob/main/Baichuan-M1-14B模型社区许可协议.pdf).

The Baichuan development team has not developed any commercial applications based on this model. All users must comply with applicable laws and regulations and must not use the model in ways that endanger national security or for any other illegal purpose.

---
<a name="reference"></a>
# 🏷️ Reference

If you need to cite our work, please use the following reference:

```bibtex
@article{baichuan-m1-2025,
  title={Baichuan-M1: Pushing the Medical Capability of Large Language Models},
  author={Wang, Bingning and Zhao, Haizhou and Zhou, Huozhi and Song, Liang and Xu, Mingyu and Cheng, Wei and Zeng, Xiangrong and Zhang, Yupeng and Huo, Yuqi and Wang, Zecheng and Zhao, Zhengyun and others},
  journal={arXiv preprint arXiv:2502.12671},
  year={2025}
}
```