---
license: cc-by-nc-sa-4.0
pretty_name: Speech-RATE
task_categories:
- text-to-speech
- speech-synthesis
- audio-generation
language:
- en
tags:
- speech
- audio
- tts
- synthetic-speech
- speech-rate
size_categories:
- 10K<n<100K
---

# πŸ”Š Speech-RATE Dataset

## πŸ“‹ Overview

**Speech-RATE** is a curated dataset derived from [**CT-RATE**](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/tree/main/), designed to support research at the intersection of speech and medical imaging. It provides high-quality, synthetically generated audio recordings of radiology reports from CT-RATE. Combined with the original CT-RATE dataset, Speech-RATE enables the development and evaluation of multimodal models that integrate speech and CT imaging data.

<p align="center">
  <img src="figures/speech-rate-tts.png" alt="Speech-RATE Overview" width="800"/>
</p>

## πŸ“‚ Dataset Structure

The Speech-RATE dataset is organized as follows:

```
Speech-RATE/
β”œβ”€β”€ dataset/
β”‚   β”œβ”€β”€ train/
β”‚   β”‚   β”œβ”€β”€ path_to_sample_001/file_name.wav
β”‚   β”‚   β”œβ”€β”€ path_to_sample_002/file_name.wav
β”‚   β”‚   └── ...
β”‚   └── valid/
β”‚       β”œβ”€β”€ path_to_sample_001/file_name.wav
β”‚       β”œβ”€β”€ path_to_sample_002/file_name.wav
β”‚       └── ...
β”œβ”€β”€ speech-classification/
β”‚   β”œβ”€β”€ train/
β”‚   β”‚   β”œβ”€β”€ path_to_sample_001/file_name.wav
β”‚   β”‚   β”œβ”€β”€ path_to_sample_002/file_name.wav
β”‚   β”‚   └── ...
β”‚   └── valid/
β”‚       β”œβ”€β”€ path_to_sample_001/file_name.wav
β”‚       β”œβ”€β”€ path_to_sample_002/file_name.wav
β”‚       └── ...
β”œβ”€β”€ metadata/
β”‚   β”œβ”€β”€ train.csv
β”‚   └── valid.csv
└── README.md
```
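
With this split layout, the audio files for one split can be enumerated with a recursive glob. The sketch below is illustrative only (the `path_to_sample_*` names above are placeholders, so it builds a tiny mock of the layout in a temporary directory to stay runnable); adapt `root` to wherever the dataset is downloaded.

```python
import tempfile
from pathlib import Path

def list_split_audio(root: str, split: str) -> list[Path]:
    """Collect all .wav files for a split ('train' or 'valid')
    under dataset/, sorted so iteration order is reproducible."""
    split_dir = Path(root) / "dataset" / split
    return sorted(split_dir.rglob("*.wav"))

# Mock of the directory layout above (real sample directory names differ).
tmp = Path(tempfile.mkdtemp())
for split in ("train", "valid"):
    sample_dir = tmp / "dataset" / split / "path_to_sample_001"
    sample_dir.mkdir(parents=True)
    (sample_dir / "file_name.wav").touch()

files = list_split_audio(str(tmp), "train")
print(len(files))  # 1
```

The same helper works unchanged for the `speech-classification/` tree by swapping `"dataset"` for `"speech-classification"`.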

### πŸ“ Directory Descriptions

- **`dataset/`**: Contains `.wav` audio files, organized in the same way as in [CT-RATE](https://huggingface.co/datasets/ibrahimhamamci/CT-RATE/tree/main/dataset)
- **`speech-classification/`**: Data used to run speech-only classification tasks [(based on the CT-RATE text classifier)](https://github.com/ibrahimethemhamamci/CT-CLIP/tree/main/text_classifier/data)
- **`metadata/`**: Additional information, including speaker gender, speaking speed, and other relevant metadata
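
The per-split CSVs in `metadata/` can be joined against the audio files, e.g. to balance a subset by speaker attributes. A minimal sketch, assuming hypothetical column names (`file_name`, `gender`, `speed` are illustrative; check the actual headers in `train.csv`/`valid.csv`):

```python
import csv
import io

# Hypothetical excerpt of metadata/train.csv; the real columns may differ.
csv_text = """file_name,gender,speed
path_to_sample_001/file_name.wav,female,1.0
path_to_sample_002/file_name.wav,male,0.9
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Group file names by speaker gender, e.g. to draw a gender-balanced subset.
by_gender: dict[str, list[str]] = {}
for row in rows:
    by_gender.setdefault(row["gender"], []).append(row["file_name"])

print(sorted(by_gender))  # ['female', 'male']
```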

## πŸ“š Citations

If you use the Speech-RATE dataset in your research, please cite the following papers:

```bibtex
@article{speech_rate_2025,
  title={Speech-RATE: A Comprehensive Dataset for Speech Rate Analysis and Synthesis},
  author={[Author Names]},
  journal={[Journal Name]},
  year={2025},
  volume={[Volume]},
  pages={[Pages]},
  doi={[DOI]}
}
```

```bibtex
@article{hamamci2024developing,
  title={Developing generalist foundation models from a multimodal dataset for 3d computed tomography},
  author={Hamamci, Ibrahim Ethem and Er, Sezgin and Wang, Chenyu and Almas, Furkan and Simsek, Ayse Gulnihan and Esirgun, Sevval Nil and Doga, Irem and Durugol, Omer Faruk and Dai, Weicheng and Xu, Murong and others},
  journal={arXiv preprint arXiv:2403.17834},
  year={2024}
}
```

## πŸ“„ License

We are committed to fostering innovation and collaboration in the research community. To this end, all elements of the Speech-RATE dataset are released under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) license](https://creativecommons.org/licenses/by-nc-sa/4.0/). This licensing framework ensures that our contributions can be freely used for non-commercial research purposes, while also encouraging contributions and modifications, provided that the original work is properly cited and any derivative works are shared under the same terms.

## πŸ™ Acknowledgements

We gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-UniversitΓ€t Erlangen-NΓΌrnberg (FAU). The hardware is funded by the German Research Foundation (DFG). This work was partially funded via the EVUK programme ("Next-generation AI for Integrated Diagnostics") of the Free State of Bavaria and by the Deutsche Forschungsgemeinschaft (DFG).

<div align="center">
  <sub>Built with ❀️ for the speech research community</sub>
</div>