AdoCleanCode committed · Commit a64cdcc (verified) · 1 Parent(s): 0096fe2

Upload README.md with huggingface_hub

Files changed (1): README.md (added, +82 lines)
---
dataset_info:
  features:
  - name: sequence
    dtype: string
  - name: transcription_full
    dtype: string
  - name: transcription_original
    dtype: string
  - name: removed_words
    dtype: string
  - name: phonemes_annotated
    dtype: string
  - name: to_convert
    dtype: string
  - name: edit_type
    dtype: string
  - name: phoneme_probability
    dtype: float64
  - name: xcodec2_tokens
    dtype: string
  splits:
  - name: train
    num_bytes: unknown
    num_examples: 522013
  download_size: unknown
  dataset_size: unknown
---

# Multilingual Audio Alignments - Processed (Mixed Text/Phonemes)

This dataset contains processed audio alignments from AAdonis/multilingual_audio_alignments (Mandarin).

## Curriculum Learning

This dataset uses **mixed text/phoneme conditioning** with a curriculum learning schedule:
- **p_start**: 0.0 (starting probability of using phonemes)
- **p_end**: 0.0 (ending probability of using phonemes)
- **curriculum_rows**: 400000 (rows over which the probability increases)

Under this schedule, words early in the dataset tend to stay as text, while words later in the dataset are increasingly converted to phonemes. Note that with **p_start** and **p_end** both set to 0.0, as configured here, the probability stays flat at 0.0 and every sample is conditioned on pure text.

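The schedule described above is a linear interpolation from **p_start** to **p_end** over **curriculum_rows**. A minimal sketch of that computation; the function name and signature are illustrative assumptions, not the actual training code:

```python
def phoneme_probability(row_index: int,
                        p_start: float = 0.0,
                        p_end: float = 0.0,
                        curriculum_rows: int = 400_000) -> float:
    """Linearly interpolate the per-row phoneme probability over the curriculum."""
    if row_index >= curriculum_rows:
        # After the curriculum window, the probability is held at p_end.
        return p_end
    frac = row_index / curriculum_rows
    return p_start + (p_end - p_start) * frac
```

With the values used for this dataset (p_start = p_end = 0.0) the function returns 0.0 for every row, which is why the samples here are pure-text conditioned.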
## Deletion Training

**Deletion ratio**: 20.0% of samples are deletion samples
**Deletion margin**: 0.1s on each side (0.2s total transition)

How deletion training works:
1. Pick a random gap between two adjacent words
2. Find the midpoint of that gap
3. Cut 0.1s on each side of the midpoint
4. The target audio is that 0.2s transition
5. The phoneme content is `<|ph_space|>`
6. The transcript remains unchanged (no words removed)

This teaches the model to generate natural inter-word transitions.

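The gap-selection steps above can be sketched as follows; the function, argument names, and word-timing representation are illustrative assumptions, not the actual preprocessing code:

```python
import random

def deletion_window(word_ends, word_starts, margin=0.1):
    """Pick a random gap between adjacent words and return the (start, end)
    of the transition window: `margin` seconds on each side of the gap midpoint.

    `word_ends[i]` / `word_starts[i + 1]` are assumed word timings in seconds.
    """
    # Gaps between word i and word i + 1 (only keep real, positive-width gaps).
    gaps = [(end, nxt) for end, nxt in zip(word_ends, word_starts[1:]) if nxt > end]
    gap_end, next_start = random.choice(gaps)
    midpoint = (gap_end + next_start) / 2
    return midpoint - margin, midpoint + margin
```

The returned window is always 2 × margin wide (0.2s with the default), matching the "0.1s on each side" description.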
## Features:
- `sequence`: Full LLASA training sequence with mixed text/phonemes and XCodec2 tokens
- `transcription_full`: Transcript matching the actual audio (left + right portions)
- `transcription_original`: Original full transcript
- `removed_words`: Words that were removed for infilling training (empty for deletion samples)
- `phonemes_annotated`: Mixed text/phoneme tokens with markers
- `to_convert`: Conditioning type ("text", "phonemes", or "text and phonemes")
- `edit_type`: Edit type ("substitution" or "deletion")
- `phoneme_probability`: The phoneme probability used for this sample (for debugging)
- `xcodec2_tokens`: XCodec2 audio token representations

## Sequence Format:
```
{mixed_left}<|start_phon_gen|>{mixed_removed}<|end_phon_gen|>{mixed_right}<|start_audio|>{right_audio}<|start_of_speech|>{left_audio}<|SPEECH_GENERATION_START|>{removed_audio}<|SPEECH_GENERATION_END|>
```

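A minimal sketch of assembling that layout from its six placeholder fields; the helper name and argument names are illustrative, not part of the dataset's tooling:

```python
def build_sequence(mixed_left, mixed_removed, mixed_right,
                   left_audio, removed_audio, right_audio):
    """Assemble one training sequence in the documented format.

    All arguments are assumed to be pre-tokenised strings (mixed text/phoneme
    conditioning and XCodec2 token strings, respectively).
    """
    return (
        f"{mixed_left}<|start_phon_gen|>{mixed_removed}<|end_phon_gen|>{mixed_right}"
        f"<|start_audio|>{right_audio}<|start_of_speech|>{left_audio}"
        f"<|SPEECH_GENERATION_START|>{removed_audio}<|SPEECH_GENERATION_END|>"
    )
```

Note the ordering on the audio side: the right-context audio follows `<|start_audio|>`, the left-context audio follows `<|start_of_speech|>`, and the removed span sits between the `SPEECH_GENERATION` markers.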
Note: The training script adds the instruction prefix ("Generate the missing speech from..."), so it's not included in the data.
The XCodec2 audio tokens are UNCHANGED - only the text/phoneme conditioning is mixed.
**ALL segments (left, removed, right) use the same curriculum probability** - so with p=0 you get pure text, with p=1 pure phonemes.

## Processing:
- Language: mandarin
- Index range: 540000 to 714787
- Final row counter: 522013
- Total samples: 522013