Update README.md

README.md CHANGED

@@ -160,25 +160,16 @@ In this work, we introduce *MultiMed*, a collection of small-to-large end-to-end
 To our best knowledge, *MultiMed* stands as **the largest and the first multilingual medical ASR dataset**, in terms of total duration, number of speakers, diversity of diseases, recording conditions, speaker roles, unique medical terms, accents, and ICD-10 codes.
 
 
-Please cite this paper:
+Please cite this paper: [https://arxiv.org/abs/2409.14074](https://arxiv.org/abs/2409.14074)
 
-@inproceedings{
-title={
-author={Khai
-
-
+@inproceedings{le2024multimed,
+title={MultiMed: Multilingual Medical Speech Recognition via Attention Encoder Decoder},
+author={Le-Duc, Khai and Phan, Phuc and Pham, Tan-Hanh and Tat, Bach Phan and Ngo, Minh-Huong and Hy, Truong-Son},
+journal={arXiv preprint arXiv:2409.14074},
+year={2024}
 }
-**TODO** To load labeled data, please refer to our [HuggingFace](https://huggingface.co/datasets/leduckhai/VietMed), [Paperswithcodes](https://paperswithcode.com/dataset/vietmed).
 
-
-
-## Limitations:
-
-**TODO** Since this dataset is human-labeled, 1-2 ending/starting words present in the recording might not be present in the transcript.
-That's the nature of human-labeled dataset, in which humans can't distinguish words that are faster than 1 second.
-In contrast, forced alignment could solve this problem because machines can "listen" words in 10ms-20ms.
-However, forced alignment only learns what it is taught by humans.
-Therefore, no transcript is perfect. We will conduct human-machine collaboration to get "more perfect" transcript in the next paper.
+To load labeled data, please refer to our [HuggingFace](https://huggingface.co/datasets/leduckhai/MultiMed), [Paperswithcodes](https://paperswithcode.com/dataset/multimed).
 
 ## Contact:
 
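The updated README points readers to the Hugging Face Hub for the labeled data. A minimal loading sketch is below; this is an assumption, not the authors' official loader: it uses the standard `datasets` library, the repo id is taken from the link in the diff, and the split/config names should be checked against the dataset card.

```python
# Minimal sketch (assumption, not the authors' official loader):
# load the labeled MultiMed data from the Hugging Face Hub.
DATASET_ID = "leduckhai/MultiMed"  # repo id from the dataset link above


def load_multimed(split: str = "train", streaming: bool = True):
    """Return one split of MultiMed; streaming=True iterates without a full download.

    Split and config names are assumptions -- verify them on the dataset card.
    """
    from datasets import load_dataset  # lazy import: only needed when actually loading

    return load_dataset(DATASET_ID, split=split, streaming=streaming)


# Usage (performs network I/O, so it is not executed here):
#   ds = load_multimed("train")
#   sample = next(iter(ds))  # one labeled utterance (audio + transcript fields)
```

Streaming mode is used in the sketch so that a single example can be inspected without downloading the full multilingual corpus first.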