Thanarit committed (verified)
Commit 724ef1a · Parent(s): 63c0698

Upload README.md with huggingface_hub

Files changed (1): README.md (+86 −0)
---
dataset_info:
  features:
  - name: ID
    dtype: string
  - name: speaker_id
    dtype: string
  - name: Language
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcript
    dtype: string
  - name: length
    dtype: float32
  - name: dataset_name
    dtype: string
  - name: confidence_score
    dtype: float64
  splits:
  - name: train
    num_examples: 50
  download_size: 0
  dataset_size: 0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train/*.parquet
---

# Thanarit/Thai-Voice

Combined Thai audio dataset from multiple sources.

## Dataset Details

- **Total samples**: 50
- **Total duration**: 0.06 hours
- **Language**: Thai (th)
- **Audio format**: 16kHz mono WAV
- **Volume normalization**: -20dB

## Sources

Processed 1 dataset in streaming mode.

## Source Datasets

1. **GigaSpeech2**: Large-scale multilingual speech corpus

## Usage

```python
from datasets import load_dataset

# Load with streaming to avoid downloading everything
dataset = load_dataset("Thanarit/Thai-Voice-Test-Final", streaming=True)

# Iterate through samples
for sample in dataset['train']:
    print(sample['ID'], sample['transcript'][:50])
    # Process audio: sample['audio']
    break
```

## Schema

- `ID`: Unique identifier (S1, S2, S3, ...)
- `speaker_id`: Speaker identifier (SPK_00001, SPK_00002, ...)
- `Language`: Language code (always "th" for Thai)
- `audio`: Audio data with 16kHz sampling rate
- `transcript`: Text transcript of the audio
- `length`: Duration in seconds
- `dataset_name`: Source dataset name (e.g., "GigaSpeech2", "ProcessedVoiceTH", "MozillaCommonVoice")
- `confidence_score`: Confidence score of the transcript (0.0-1.0)
  - 1.0: Original transcript from the source dataset
  - <1.0: STT-generated transcript
  - 0.0: Fallback transcript (e.g., [NO_TRANSCRIPT])
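
The `confidence_score` convention above can be expressed as a small helper. This is an illustrative sketch, not part of the dataset tooling; the `transcript_provenance` function and the inline sample records are hypothetical:

```python
# Hypothetical helper: classify how a transcript was produced,
# following the confidence_score convention in the schema above.
def transcript_provenance(confidence_score: float) -> str:
    if confidence_score == 1.0:
        return "original"  # transcript taken from the source dataset
    if confidence_score == 0.0:
        return "fallback"  # placeholder such as [NO_TRANSCRIPT]
    return "stt"           # generated by speech-to-text

# Example: keep only samples whose transcripts came from the source dataset.
samples = [
    {"ID": "S1", "confidence_score": 1.0},
    {"ID": "S2", "confidence_score": 0.87},
    {"ID": "S3", "confidence_score": 0.0},
]
originals = [s["ID"] for s in samples
             if transcript_provenance(s["confidence_score"]) == "original"]
print(originals)  # ['S1']
```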

## Processing Details

This dataset was created using streaming processing to handle large-scale data without requiring full downloads.
Audio has been standardized to 16kHz mono with -20dB volume normalization.
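
As a rough sketch of what -20dB volume normalization can mean in practice (assuming it refers to RMS level relative to full scale; the dataset's actual pipeline may differ), a waveform can be rescaled like this with NumPy:

```python
import numpy as np

def normalize_to_dbfs(audio: np.ndarray, target_dbfs: float = -20.0) -> np.ndarray:
    """Scale a float waveform so its RMS level is target_dbfs (0 dBFS = RMS of 1.0).

    Sketch of RMS-based normalization; not necessarily the exact method used here.
    """
    rms = np.sqrt(np.mean(audio ** 2))
    if rms == 0:
        return audio  # silence: nothing to scale
    target_rms = 10 ** (target_dbfs / 20.0)
    return audio * (target_rms / rms)

# 1 second of a 440 Hz tone at the dataset's 16 kHz sampling rate
t = np.linspace(0, 1, 16000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
normalized = normalize_to_dbfs(tone)
level_dbfs = 20 * np.log10(np.sqrt(np.mean(normalized ** 2)))
print(round(level_dbfs, 2))  # -20.0
```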