Iliass Lasri committed · Commit eff87e0 · 1 Parent(s): ccd40c2

updated readme with experiments

Files changed (1): README.md (+20 −0)
README.md CHANGED
@@ -68,6 +68,26 @@ config_path = hf_hub_download(
 
  | Duck Audio | <audio controls src="https://huggingface.co/iliasslasri/robust_speech_quantizer/resolve/main/augmentations/13_duck_audio.wav"></audio> |
  | Up-Down Resample | <audio controls src="https://huggingface.co/iliasslasri/robust_speech_quantizer/resolve/main/augmentations/14_updownresample.wav"></audio> |

+ ## Experiments
+
+ We trained quantizers across different encoders, codebook sizes, and augmentation strategies. The augmentation configurations are:
+
+ - **All augmentations, chained** — all augmentations from the table above are enabled, and multiple augmentations are applied sequentially to each sample. The number of chained augmentations is sampled from a uniform distribution between 0 and 4.
+ - **All augmentations, single** — all augmentations are enabled, but only one randomly chosen augmentation is applied per sample.
+ - **No extra augmentations, single** — only the baseline augmentations (from the original paper) are used, with one applied per sample.
+
+ | Encoder | Codebook | Augmentation Strategy |
+ |:---|:---:|:---|
+ | HuBERT | 500 | All augmentations, chained |
+ | HuBERT | 500 | All augmentations, single |
+ | HuBERT | 500 | No extra augmentations, single |
+ | | | |
+ | SpidR | 256 | No extra augmentations, single |
+ | SpidR | 256 | All augmentations, chained |
+ | | | |
+ | DinoSR (original) | 256 | All augmentations, chained |
+ | DinoSR (reproduced) | 256 | All augmentations, chained |
+
  ## Links
  - Paper: [Algayres et al., Interspeech 2023](https://aclanthology.org/2023.iwslt-1.46/)
  - Code: [GitHub](https://github.com/iliasslasri/snlp_project)
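As a rough sketch of the "all augmentations, chained" strategy this commit describes — per sample, draw k uniformly from 0 to 4 and apply k randomly chosen augmentations in sequence. The augmentation functions below are hypothetical placeholders, not taken from the actual repository:

```python
import random

def add_noise(waveform):
    """Placeholder augmentation: add small Gaussian noise to each value."""
    return [v + random.gauss(0.0, 0.01) for v in waveform]

def duck_audio(waveform):
    """Placeholder augmentation: scale the signal down."""
    return [v * 0.5 for v in waveform]

# Hypothetical pool standing in for the full augmentation table.
AUGMENTATIONS = [add_noise, duck_audio]

def chained_augment(waveform, max_chain=4):
    """Apply k augmentations in sequence, k drawn uniformly from 0..max_chain."""
    k = random.randint(0, max_chain)  # inclusive on both ends; k=0 leaves the sample unchanged
    for aug in random.choices(AUGMENTATIONS, k=k):
        waveform = aug(waveform)
    return waveform
```

The "single" strategies correspond to fixing k = 1 (over the full or the baseline-only pool, respectively).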