Aybee5 committed on
Commit
e86d5bd
·
verified ·
1 Parent(s): 9652a20

Add dataset README with dataset card and examples

Files changed (1): README.md (+69 −3)
---
license: cc-by-nc-4.0
---

# Dataset conversion helper

1. Install dependencies:

   ```
   pip install -r requirements.txt
   ```

2. Run the script from the repo root to create the parquet file and copy the audio files into `data/audio_files`:

   ```
   python3 scripts/create_parquet.py
   ```

3. Resulting files:

   - `data/dataset.parquet` (columns: `source`, `text`, `audio`)
   - `data/audio_files/<speaker_id>/<audio_id>.wav` (copied audio files)

Notes:

- The `audio` column contains relative paths starting with `data/audio_files/...`, so the whole `data/` folder can be uploaded to Hugging Face or copied into Colab and loaded with relative paths.
- To inspect the dataset without copying any files, run the script with `--dry-run`.

## Dataset Card

This dataset is designed for text-to-speech (TTS) applications. It contains audio files along with their corresponding text transcripts.

### Example for Audio Rendering

To render audio from the dataset, you can use the following code snippet:

```python
import soundfile as sf
import sounddevice as sd  # playback requires the sounddevice library

# Load an audio file (replace the placeholders with a real speaker/audio id)
audio_path = 'data/audio_files/<speaker_id>/<audio_id>.wav'
audio_data, sample_rate = sf.read(audio_path)

# Play the audio
sd.play(audio_data, sample_rate)
sd.wait()  # Wait until playback is finished
```
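If `soundfile`/`sounddevice` are unavailable (for example, there is no audio output device in Colab), the wav files can still be written and inspected with the standard-library `wave` module. A minimal sketch, using a generated 440 Hz tone as a stand-in for a real dataset file:

```python
# Stdlib-only sketch: write then inspect a 16-bit PCM wav.
# A generated 440 Hz tone stands in for a real dataset file.
import math
import struct
import wave

sample_rate = 16000
frames = b"".join(
    struct.pack("<h", int(0.1 * 32767 * math.sin(2 * math.pi * 440 * i / sample_rate)))
    for i in range(sample_rate)  # one second of audio
)

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)           # mono
    w.setsampwidth(2)           # 16-bit samples
    w.setframerate(sample_rate)
    w.writeframes(frames)

with wave.open("tone.wav", "rb") as r:
    print(r.getframerate(), r.getnframes())  # 16000 16000
```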

---

```yaml
dataset:
  name: "ha-tts-mixed"
  description: |
    Hausa mixed single/multi-speaker TTS dataset converted from a local MimicStudio export.
    Each row has the speaker id, the text transcript and a relative path to the audio file.
  license: "unspecified"
  features:
    - name: source
      type: string
      description: Speaker id (used as `source` in Unsloth format)
    - name: text
      type: string
      description: Transcript / prompt text for the audio file
    - name: audio
      type: audio
      description: Relative path to the wav file under `data/audio_files/...`
```
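Before upload, each raw row can be checked against the feature list above with a plain-Python type map. A small sketch — in the parquet, `audio` is stored as a relative path string before any decoding, and the audio filename below is hypothetical:

```python
# Plain-Python sketch: check one raw row against the features above.
# `audio` is a relative path string before decoding; the filename is hypothetical.
schema = {"source": str, "text": str, "audio": str}

row = {
    "source": "c3621689-ca53-c1e1-d0c1-e5619d6c0634",
    "text": "Ansamu ɓaraka acikin shirin",
    "audio": "data/audio_files/c3621689-ca53-c1e1-d0c1-e5619d6c0634/example.wav",
}

assert set(row) == set(schema), "unexpected or missing columns"
assert all(isinstance(row[k], t) for k, t in schema.items()), "wrong column type"
print("row matches schema")
```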

### How the Hugging Face dataset card will render

On the Hugging Face dataset page the `audio` column will be playable inline, and the `text` and `source` fields will be shown alongside it. Example row:

| source | text | audio |
|---|---|---|
| c3621689-ca53-c1e1-d0c1-e5619d6c0634 | Ansamu ɓaraka acikin shirin | (playable audio) |

A full `dataset_card.md` with licensing, citation, and a more detailed schema could be added to the repo root later.