---
license: cc-by-4.0
language:
- eu
pretty_name: Maider Dataset
size_categories:
- 10K<n<100K
task_categories:
- text-to-speech
- automatic-speech-recognition
tags:
- audio
- TTS
- Basque
- Aholab
- Ilenia
- synthetic
base_model:
- itzune/maider-tts
---

# Maider Dataset (Synthetic)

This is a large-scale **synthetic speech corpus** for training and fine-tuning Basque text-to-speech (TTS) models. It consists of **99,996 audio files** synthesized with the "Maider" voice model.

The dataset was generated by **Itzune** and serves as the primary training source for the [itzune/maider-tts (Piper version)](https://huggingface.co/itzune/maider-tts) model, enabling high-quality Basque synthesis in edge-compatible formats.

## Dataset Structure

Because of the data volume (approximately 100,000 files), the dataset is organized in the **WebDataset** format: the audio files are bundled into `.tar` shards to optimize storage, I/O performance, and streaming.

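Inside each `.tar` shard, the WebDataset convention groups the files belonging to one sample by their shared basename. A minimal standard-library sketch of that layout (the payloads and internal file names below are illustrative assumptions, not the dataset's actual shard contents):

```python
import io
import tarfile

# Build a tiny in-memory shard mimicking the WebDataset convention:
# entries sharing a basename ("audio_1") together form one sample.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("audio_1.wav", b"<wav bytes>"), ("audio_1.txt", b"Kaixo")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Reading the shard back simply walks the tar members in order.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    names = tar.getnames()
print(names)  # ['audio_1.wav', 'audio_1.txt']
```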
### Files

- **data/**: Directory containing the `.tar` shards. Each shard holds approximately 1,000 audio samples.
- **metadata.csv**: The main metadata file, delimited by `|`, with the following columns:
  - `file_name`: The name of the audio file (e.g., `audio_1.wav`).
  - `transcription`: The Basque text used for synthesis.

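Since `metadata.csv` uses `|` as its delimiter, it can be parsed with the standard `csv` module. A small sketch using an in-memory sample in the layout described above (the Basque text here is only illustrative):

```python
import csv
import io

# Hypothetical two-line sample mirroring the metadata.csv layout:
# a header row plus one record, "|"-delimited.
sample_csv = "file_name|transcription\naudio_1.wav|Kaixo, zer moduz?\n"

with io.StringIO(sample_csv) as f:
    reader = csv.DictReader(f, delimiter="|")
    rows = list(reader)

print(rows[0]["file_name"])      # audio_1.wav
print(rows[0]["transcription"])  # Kaixo, zer moduz?
```

Reading the real file works the same way, e.g. `csv.DictReader(open("metadata.csv", encoding="utf-8"), delimiter="|")`.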
## Technical Specifications

- **Audio Format:** WAV (PCM)
- **Sample Rate:** 22,050 Hz
- **Language:** Basque (`eu`)
- **Voice Profile:** Maider (female)
- **Total Samples:** 99,996
- **Generation Method:** Synthesized with a VITS-based architecture

## Usage

You can load the dataset with the Hugging Face `datasets` library. `streaming=True` is strongly recommended so you can iterate over samples without downloading all ~100,000 files first:

```python
from datasets import load_dataset

# Stream the dataset instead of downloading every shard up front
dataset = load_dataset("itzune/maider-dataset", streaming=True)

# Inspect one example
sample = next(iter(dataset["train"]))
print(f"Text: {sample['transcription']}")
```
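When `datasets` decodes audio, samples conventionally expose the waveform as a float array under an `audio` column (with `array` and `sampling_rate` fields); that column layout is an assumption here, so the sketch below uses a synthetic one-second 440 Hz tone as a stand-in waveform and writes it to a 16-bit PCM WAV at the dataset's 22,050 Hz rate using only the standard library:

```python
import array
import math
import wave

sampling_rate = 22050  # matches the dataset's sample rate
# Stand-in for sample["audio"]["array"]: one second of a 440 Hz sine tone.
waveform = [0.5 * math.sin(2 * math.pi * 440 * t / sampling_rate)
            for t in range(sampling_rate)]

# Convert floats in [-1, 1] to 16-bit PCM and write a mono WAV file.
pcm = array.array("h", (int(max(-1.0, min(1.0, x)) * 32767) for x in waveform))
with wave.open("sample.wav", "wb") as f:
    f.setnchannels(1)       # mono
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(sampling_rate)
    f.writeframes(pcm.tobytes())
```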

## Credits and Licensing

### Source and Methodology

This is a **synthetic dataset** generated by **Itzune**, created with the `aHoTTS` synthesis tools provided by the **HiTZ Basque Center for Language Technology - Aholab Signal Processing Laboratory**.

The audio files were synthesized with the pre-trained **Maider (VITS)** model, following the methodology described in the HiTZ/Aholab repository. The dataset serves as a large-scale synthetic corpus for downstream tasks, such as exporting models to edge-compatible formats (e.g., Piper).

### Acknowledgments

The underlying technology and the original voice models were developed by:
- **HiTZ Basque Center for Language Technology - Aholab Signal Processing Laboratory**, University of the Basque Country (UPV/EHU).
- **Project ILENIA:** The Maider voice resource was developed with funding from Project ILENIA.

### License

- **Dataset Content:** Licensed under [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).
- **Original Tools/Code:** The tools used to generate this data are licensed by their original authors under the **Apache License 2.0**.

## Citation

If you use this dataset, please cite the original work from HiTZ/Aholab:

> García, V., Hernáez, I., & Navas, E. (2022). Evaluation of Tacotron Based Synthesizers for Spanish and Basque. *Applied Sciences*, 12(3), 1686. https://doi.org/10.3390/app12031686