---
dataset_info:
  features:
  - name: sequence
    dtype: string
  - name: transcription_full
    dtype: string
  - name: transcription_original
    dtype: string
  - name: removed_words
    dtype: string
  - name: phonemes_annotated
    dtype: string
  - name: to_convert
    dtype: string
  - name: edit_type
    dtype: string
  - name: phoneme_probability
    dtype: float64
  - name: xcodec2_tokens
    dtype: string
  splits:
  - name: train
    num_bytes: unknown
    num_examples: 604173
  download_size: unknown
  dataset_size: unknown
---
|
|
|
|
|
# Multilingual Audio Alignments - Processed (Mixed Text/Phonemes) |
|
|
|
|
|
This dataset contains processed audio alignments from the `english` configuration of AAdonis/multilingual_audio_alignments.
|
|
|
|
|
## Curriculum Learning |
|
|
|
|
|
This dataset uses **mixed text/phoneme conditioning** with a curriculum learning schedule:

- **p_start**: 0.0 (starting probability of using phonemes)
- **p_end**: 0.0 (ending probability of using phonemes)
- **curriculum_rows**: 400000 (rows over which the probability increases)

The phoneme probability ramps linearly from `p_start` to `p_end` over the first `curriculum_rows` rows: early in the dataset more words are kept as text, and later more words are converted to phonemes. (With `p_start = p_end = 0.0`, as here, the schedule is flat and every sample uses pure text conditioning.)
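A minimal sketch of the schedule, assuming a simple linear ramp that holds at `p_end` after `curriculum_rows` rows (the `phoneme_probability` column records the value actually used for each sample):

```python
def phoneme_probability(row: int, p_start: float = 0.0, p_end: float = 0.0,
                        curriculum_rows: int = 400_000) -> float:
    """Linearly ramp the probability of converting a word to phonemes
    from p_start to p_end, then hold it at p_end."""
    progress = min(row / curriculum_rows, 1.0)
    return p_start + progress * (p_end - p_start)
```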
|
|
|
|
|
## Deletion Training |
|
|
|
|
|
**Deletion ratio**: 20.0% of samples are deletion samples

**Deletion margin**: 0.1s on each side (= 0.2s total transition)

How deletion training works:

1. Pick a random gap between two adjacent words
2. Find the midpoint of that gap
3. Cut 0.1s on each side of the midpoint
4. The target audio is that 0.2s transition
5. The phoneme content is `<|ph_space|>`
6. The transcript remains unchanged (no words removed)
|
|
|
|
|
This teaches the model to generate natural inter-word transitions. |
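Steps 1–4 can be sketched as follows. This is a hypothetical helper, not the dataset's actual processing code; word timings are assumed to be `(start, end)` pairs in seconds:

```python
import random

def pick_deletion_cut(word_timings, margin=0.1):
    """Pick a random gap between adjacent words and return the
    (cut_start, cut_end) window of the 2*margin transition
    centered on the gap's midpoint."""
    # Gaps between the end of one word and the start of the next.
    gaps = [(a_end, b_start)
            for (_, a_end), (b_start, _) in zip(word_timings, word_timings[1:])
            if b_start > a_end]
    gap_start, gap_end = random.choice(gaps)
    midpoint = (gap_start + gap_end) / 2
    return midpoint - margin, midpoint + margin
```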
|
|
|
|
|
## Features

- `sequence`: Full LLASA training sequence with mixed text/phonemes and XCodec2 tokens
- `transcription_full`: Transcript matching the actual audio (left + right portions)
- `transcription_original`: Original full transcript
- `removed_words`: Words that were removed for infilling training (empty for deletion)
- `phonemes_annotated`: Mixed text/phoneme tokens with markers
- `to_convert`: Type of conditioning: "text", "phonemes", or "text and phonemes"
- `edit_type`: Type of edit: "substitution" or "deletion"
- `phoneme_probability`: The phoneme probability used for this sample (for debugging)
- `xcodec2_tokens`: XCodec2 audio token representations
|
|
|
|
|
## Sequence Format

```
{mixed_left}<|start_phon_gen|>{mixed_removed}<|end_phon_gen|>{mixed_right}<|start_audio|>{right_audio}<|start_of_speech|>{left_audio}<|SPEECH_GENERATION_START|>{removed_audio}<|SPEECH_GENERATION_END|>
```
|
|
|
|
|
Note: the training script adds the instruction prefix ("Generate the missing speech from..."), so it is not included in the data.

The XCodec2 audio tokens are unchanged; only the text/phoneme conditioning is mixed.

**All segments (left, removed, right) use the same curriculum probability**, so with p = 0 you get pure text and with p = 1 pure phonemes.
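A minimal sketch of assembling a sequence from its parts, following the template above (the function name and argument names are illustrative, not from the dataset's processing code; the audio arguments are already-formatted strings of XCodec2 tokens):

```python
def build_sequence(mixed_left, mixed_removed, mixed_right,
                   left_audio, removed_audio, right_audio):
    """Assemble a training sequence following the documented template."""
    return (
        f"{mixed_left}<|start_phon_gen|>{mixed_removed}<|end_phon_gen|>{mixed_right}"
        f"<|start_audio|>{right_audio}<|start_of_speech|>{left_audio}"
        f"<|SPEECH_GENERATION_START|>{removed_audio}<|SPEECH_GENERATION_END|>"
    )
```

Note that, per the template, the right-context audio tokens come before the left-context audio tokens in the sequence.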
|
|
|
|
|
## Processing

- Language: english
- Index range: 0 to 199999
- Final row counter: 604173
- Total samples: 604173
|
|
|