leduckhai committed
Commit 3158f84 · verified · 0 parent(s)

Duplicate from leduckhai/MultiMed


Co-authored-by: Khai Le-Duc <leduckhai@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,58 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
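Every rule above has the same shape: a glob pattern followed by `filter=lfs diff=lfs merge=lfs -text`, which tells Git to store matching files in LFS and treat them as binary. As a rough illustration (not part of the repo), the following hypothetical helper approximates which paths these patterns would capture; note that Python's `fnmatch` is only an approximation of Git's wildmatch semantics, especially for `**`.

```python
from fnmatch import fnmatch

# A few patterns mirroring entries from the .gitattributes above
LFS_PATTERNS = ["*.parquet", "*.wav", "*.mp3", "*.png", "saved_model/**/*", "*tfevents*"]

def is_lfs_tracked(path: str) -> bool:
    """Return True if the path (or its basename) matches any LFS pattern.

    This approximates, but does not exactly reproduce, Git's pattern matching.
    """
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pat) or fnmatch(path, pat) for pat in LFS_PATTERNS)
```

Under this sketch, the parquet shards added in this commit would all be routed through LFS, while small text files like `README.md` would stay in regular Git storage.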
Chinese/eval-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d01b60ccad044088f63b8ad0b8919ee30e3a362730a1129f4d5a22fe87a113e
+ size 12316733
Chinese/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f9a159417a834fde539938d9ffc4b4cba874d59ede886902571134cd7da2a47
+ size 32964889
Chinese/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5f939b54cdb41ca4c9b1f0ebc6926bb17fe2ba45445e1a29922e953fb3e41ac
+ size 182285667
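What Git actually commits for each of these parquet files is a three-line LFS pointer: a spec version, a `sha256` object ID, and the size of the real file in bytes. A minimal parser for that pointer format, using the Chinese eval pointer above as the sample, could look like this (an illustrative sketch, not code from the repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its version, oid, and size fields."""
    # Each pointer line is "key value"; split on the first space only
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "oid_algo": algo,          # e.g. "sha256"
        "oid": digest,             # hex digest of the real file
        "size": int(fields["size"]),  # size of the remote file in bytes
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:5d01b60ccad044088f63b8ad0b8919ee30e3a362730a1129f4d5a22fe87a113e
size 12316733
"""
info = parse_lfs_pointer(pointer)
```

The `size` field here (12316733, about 12 MB) matches what the Hub shows for `Chinese/eval-00000-of-00001.parquet`.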
English/eval-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f692bd1a2d8f348b820a9235d1494fac48b4f6c16641eb5f8fd85f457f4baac
+ size 298260660
English/test-00000-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:32ac734b806dae820680b5c7517241be51c90daeb42583bdca76d9ee3c06ab7a
+ size 273294341
English/test-00001-of-00002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cd944655c5e40a1d10026df1dc428b2147313227e6f5128cceda89c56fb8966
+ size 279018466
English/train-00000-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:af6af5187705308e782f1da7b7f70f6f49608104023e302593e5cee7c929eb99
+ size 475896443
English/train-00001-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce5240cded0fe1a223b9c7280ed745042df17c6edccf10425311b7730596f4af
+ size 412930106
English/train-00002-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fe9d63e0f8b5a9296ce71611af9d19e0f098f0a5dfba978630e0503fd7b9e2b3
+ size 475860453
English/train-00003-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f55c79965c6e52481950b00efe93948abeead947ec77452d6e338a8eda51f806
+ size 460960659
English/train-00004-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7f43091c9b1c529e01aa5a8b6c8751a512d62a636b6cb133b41b2f788b3d09d8
+ size 453376723
English/train-00005-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:583fdd28c375971b312fbd684b6135e08c57dd3739bdb30cb7c18f60d9835729
+ size 498261424
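The English split is large enough to be sharded: the file names follow the `name-XXXXX-of-YYYYY.parquet` convention (e.g. `train-00003-of-00006.parquet` is shard 3 of 6). A small hypothetical helper for recovering shard order from such names, assuming only the naming convention visible above, might look like:

```python
import re

def shard_key(filename: str) -> tuple[int, int]:
    """Extract (shard_index, shard_count) from a 'name-XXXXX-of-YYYYY.parquet' file name."""
    m = re.search(r"-(\d{5})-of-(\d{5})\.parquet$", filename)
    if m is None:
        raise ValueError(f"not a sharded parquet name: {filename}")
    return int(m.group(1)), int(m.group(2))

# Shards listed out of order, as a directory listing might return them
shards = [
    "English/train-00003-of-00006.parquet",
    "English/train-00000-of-00006.parquet",
    "English/train-00005-of-00006.parquet",
]
ordered = sorted(shards, key=shard_key)
```

In practice the Hub's loaders handle this glob (`English/train-*` in the card's `configs` section) automatically; the helper only makes the convention explicit.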
French/eval-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8c9b5259f32881281ce8a589fbdaa49eaf1b5abb9e07fff7bba482ea667f483
+ size 5154903
French/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66065a2cb188e0d535c218ccfbfadda7f7408f0bec763f44e5da9eb0f6d627da
+ size 42697153
French/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8ed30db40d7340d94b0bd2546f454c1629a8645b3746a0f0e17cb074be15b1aa
+ size 168266615
German/eval-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d73222d4b62a14befbf84305646cbc2444a7208b2759026b49a1f67513bca121
+ size 35470645
German/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:494a635916ceaed914f6238fb7acf37e38a1e8432c30663a2f6f484dbdec58e0
+ size 137738439
German/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34ba3a0352342754bf903e857c6939e41a15d4df5086f272fc7033ed83fcda3c
+ size 181285063
MultiMed_ACL2025.png ADDED

Git LFS Details

  • SHA256: a5514ec397a4bff5f380d21d67a15fa753161824afce48cb86e8878117631155
  • Pointer size: 132 Bytes
  • Size of remote file: 1.54 MB
README.md ADDED
@@ -0,0 +1,219 @@
+ ---
+ language:
+ - vi
+ - en
+ - de
+ - fr
+ - zh
+ license: mit
+ task_categories:
+ - automatic-speech-recognition
+ viewer: true
+ dataset_info:
+ - config_name: Chinese
+   features:
+   - name: audio
+     dtype:
+       audio:
+         sampling_rate: 16000
+   - name: text
+     dtype: string
+   - name: duration
+     dtype: float64
+   splits:
+   - name: train
+     num_bytes: 182566135.142
+     num_examples: 1242
+   - name: eval
+     num_bytes: 12333509
+     num_examples: 91
+   - name: test
+     num_bytes: 33014034
+     num_examples: 225
+   download_size: 227567289
+   dataset_size: 227913678.142
+ - config_name: English
+   features:
+   - name: audio
+     dtype:
+       audio:
+         sampling_rate: 16000
+   - name: text
+     dtype: string
+   - name: duration
+     dtype: float64
+   splits:
+   - name: train
+     num_bytes: 2789314997.152
+     num_examples: 25512
+   - name: eval
+     num_bytes: 299242087.632
+     num_examples: 2816
+   - name: test
+     num_bytes: 553873172.749
+     num_examples: 4751
+   download_size: 3627859275
+   dataset_size: 3642430257.533
+ - config_name: French
+   features:
+   - name: audio
+     dtype:
+       audio:
+         sampling_rate: 16000
+   - name: text
+     dtype: string
+   - name: duration
+     dtype: float64
+   splits:
+   - name: train
+     num_bytes: 168642145.231
+     num_examples: 1403
+   - name: eval
+     num_bytes: 5164908
+     num_examples: 42
+   - name: test
+     num_bytes: 42780388
+     num_examples: 344
+   download_size: 216118671
+   dataset_size: 216587441.231
+ - config_name: German
+   features:
+   - name: audio
+     dtype: audio
+   - name: text
+     dtype: string
+   - name: duration
+     dtype: float64
+   splits:
+   - name: train
+     num_bytes: 181312217.029
+     num_examples: 1443
+   - name: test
+     num_bytes: 137762006.256
+     num_examples: 1091
+   - name: eval
+     num_bytes: 35475098
+     num_examples: 287
+   download_size: 354494147
+   dataset_size: 354549321.285
+ - config_name: Vietnamese
+   features:
+   - name: audio
+     dtype: audio
+   - name: text
+     dtype: string
+   - name: duration
+     dtype: float64
+   splits:
+   - name: train
+     num_bytes: 56584901.453
+     num_examples: 2773
+   - name: test
+     num_bytes: 69598082.31
+     num_examples: 3437
+   - name: dev
+     num_bytes: 57617298.896
+     num_examples: 2912
+   download_size: 181789393
+   dataset_size: 183800282.659
+ configs:
+ - config_name: Chinese
+   data_files:
+   - split: train
+     path: Chinese/train-*
+   - split: eval
+     path: Chinese/eval-*
+   - split: test
+     path: Chinese/test-*
+ - config_name: English
+   data_files:
+   - split: train
+     path: English/train-*
+   - split: eval
+     path: English/eval-*
+   - split: test
+     path: English/test-*
+ - config_name: French
+   data_files:
+   - split: train
+     path: French/train-*
+   - split: eval
+     path: French/eval-*
+   - split: test
+     path: French/test-*
+ - config_name: German
+   data_files:
+   - split: train
+     path: German/train-*
+   - split: test
+     path: German/test-*
+   - split: eval
+     path: German/eval-*
+ - config_name: Vietnamese
+   data_files:
+   - split: train
+     path: Vietnamese/train-*
+   - split: test
+     path: Vietnamese/test-*
+   - split: dev
+     path: Vietnamese/dev-*
+ tags:
+ - medical
+ ---
+
+ # MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder
+
+ **<div align="center">ACL 2025</div>**
+
+ <div align="center"><b>Khai Le-Duc</b>, Phuc Phan, Tan-Hanh Pham, Bach Phan Tat,</div>
+
+ <div align="center">Minh-Huong Ngo, Chris Ngo, Thanh Nguyen-Tang, Truong-Son Hy</div>
+
+ > Please press the ⭐ button and/or cite our papers if you find them helpful.
+
+ <p align="center">
+   <img src="MultiMed_ACL2025.png" width="700"/>
+ </p>
+
+ * **Abstract:**
+ Multilingual automatic speech recognition (ASR) in the medical domain serves as a foundational task for various downstream applications such as speech translation, spoken language understanding, and voice-activated assistants. This technology improves patient care by enabling efficient communication across language barriers, alleviating specialized workforce shortages, and facilitating improved diagnosis and treatment, particularly during pandemics. In this work, we introduce MultiMed, the first multilingual medical ASR dataset, along with the first collection of small-to-large end-to-end medical ASR models, spanning five languages: Vietnamese, English, German, French, and Mandarin Chinese. To the best of our knowledge, MultiMed stands as **the world’s largest medical ASR dataset across all major benchmarks**: total duration, number of recording conditions, number of accents, and number of speaking roles. Furthermore, we present the first multilinguality study for medical ASR, which includes reproducible empirical baselines, a monolinguality-multilinguality analysis, an Attention Encoder Decoder (AED) vs. Hybrid comparative study, and a linguistic analysis. We present practical ASR end-to-end training schemes optimized for a fixed number of trainable parameters that are common in industry settings. All code, data, and models are available online: [https://github.com/leduckhai/MultiMed/tree/master/MultiMed](https://github.com/leduckhai/MultiMed/tree/master/MultiMed).
+ * **Citation:**
+ Please cite this paper: [https://arxiv.org/abs/2409.14074](https://arxiv.org/abs/2409.14074)
+
+ ``` bibtex
+ @article{le2024multimed,
+   title={MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder},
+   author={Le-Duc, Khai and Phan, Phuc and Pham, Tan-Hanh and Tat, Bach Phan and Ngo, Minh-Huong and Ngo, Chris and Nguyen-Tang, Thanh and Hy, Truong-Son},
+   journal={arXiv preprint arXiv:2409.14074},
+   year={2024}
+ }
+ ```
+
+ ## Dataset and Pre-trained Models:
+
+ Dataset: [🤗 HuggingFace dataset](https://huggingface.co/datasets/leduckhai/MultiMed), [Papers with Code dataset](https://paperswithcode.com/dataset/multimed)
+
+ Pre-trained models: [🤗 HuggingFace models](https://huggingface.co/leduckhai/MultiMed)
+
+ | Model Name | Description | Link |
+ |------------|-------------|------|
+ | `Whisper-Small-Chinese` | Small model fine-tuned on the medical Chinese set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-chinese) |
+ | `Whisper-Small-English` | Small model fine-tuned on the medical English set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-english) |
+ | `Whisper-Small-French` | Small model fine-tuned on the medical French set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-french) |
+ | `Whisper-Small-German` | Small model fine-tuned on the medical German set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-german) |
+ | `Whisper-Small-Vietnamese` | Small model fine-tuned on the medical Vietnamese set | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-vietnamese) |
+ | `Whisper-Small-Multilingual` | Small model fine-tuned on the medical multilingual set (5 languages) | [Hugging Face models](https://huggingface.co/leduckhai/MultiMed-ST/tree/main/asr/whisper-small-multilingual) |
+
+ ## Contact:
+
+ If any links are broken, please contact me so I can fix them!
+
+ Thanks to [Phan Phuc](https://www.linkedin.com/in/pphuc/) for the dataset viewer <3
+
+ ```
+ Le Duc Khai
+ University of Toronto, Canada
+ Email: duckhai.le@mail.utoronto.ca
+ GitHub: https://github.com/leduckhai
+ ```
Vietnamese/dev-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5f03056619831547b8ce3050617a195552120539e8722e24e7ca1e038534873
+ size 22553845
Vietnamese/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:81cecbc6aaba1278b92b859b6615204397ca21400a00d8d2f8edd5576c4cdd71
+ size 68891501
Vietnamese/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a694e0542ff6f26579b2a71e9cf0231546f96a02e422ba7256d63a216cd809db
+ size 90586455