primepake committed
Commit d9cc92f · 1 Parent(s): 067b9b6

change to fsq

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. README.md +12 -13
  2. dac-codec/assets/comparsion_stats.png +0 -3
  3. dac-codec/assets/objective_comparisons.png +0 -3
  4. dac-codec/conf/1gpu.yml +0 -6
  5. dac-codec/conf/ablations/baseline.yml +0 -3
  6. dac-codec/conf/ablations/diff-mb.yml +0 -22
  7. dac-codec/conf/ablations/equal-mb.yml +0 -22
  8. dac-codec/conf/ablations/no-adv.yml +0 -9
  9. dac-codec/conf/ablations/no-data-balance.yml +0 -22
  10. dac-codec/conf/ablations/no-low-hop.yml +0 -18
  11. dac-codec/conf/ablations/no-mb.yml +0 -17
  12. dac-codec/conf/ablations/no-mpd-msd.yml +0 -21
  13. dac-codec/conf/ablations/no-mpd.yml +0 -21
  14. dac-codec/conf/ablations/only-speech.yml +0 -22
  15. dac-codec/conf/base.yml +0 -123
  16. dac-codec/conf/downsampling/1024x.yml +0 -16
  17. dac-codec/conf/downsampling/128x.yml +0 -16
  18. dac-codec/conf/downsampling/1536x.yml +0 -16
  19. dac-codec/conf/downsampling/768x.yml +0 -16
  20. dac-codec/conf/final/16khz.yml +0 -123
  21. dac-codec/conf/final/24khz.yml +0 -123
  22. dac-codec/conf/final/44khz-16kbps.yml +0 -124
  23. dac-codec/conf/final/44khz.yml +0 -123
  24. dac-codec/conf/quantizer/24kbps.yml +0 -5
  25. dac-codec/conf/quantizer/256d.yml +0 -5
  26. dac-codec/conf/quantizer/2d.yml +0 -5
  27. dac-codec/conf/quantizer/32d.yml +0 -5
  28. dac-codec/conf/quantizer/4d.yml +0 -5
  29. dac-codec/conf/quantizer/512d.yml +0 -5
  30. dac-codec/conf/quantizer/dropout-0.0.yml +0 -5
  31. dac-codec/conf/quantizer/dropout-0.25.yml +0 -5
  32. dac-codec/conf/quantizer/dropout-0.5.yml +0 -5
  33. dac-codec/conf/size/medium.yml +0 -5
  34. dac-codec/conf/size/small.yml +0 -5
  35. dac-codec/dac/__init__.py +0 -16
  36. dac-codec/dac/__main__.py +0 -36
  37. dac-codec/dac/__pycache__/__init__.cpython-310.pyc +0 -0
  38. dac-codec/dac/__pycache__/__main__.cpython-310.pyc +0 -0
  39. dac-codec/dac/compare/__init__.py +0 -0
  40. dac-codec/dac/compare/encodec.py +0 -54
  41. dac-codec/dac/model/__init__.py +0 -4
  42. dac-codec/dac/model/__pycache__/__init__.cpython-310.pyc +0 -0
  43. dac-codec/dac/model/__pycache__/base.cpython-310.pyc +0 -0
  44. dac-codec/dac/model/__pycache__/dac.cpython-310.pyc +0 -0
  45. dac-codec/dac/model/__pycache__/discriminator.cpython-310.pyc +0 -0
  46. dac-codec/dac/model/base.py +0 -294
  47. dac-codec/dac/model/dac.py +0 -364
  48. dac-codec/dac/model/discriminator.py +0 -228
  49. dac-codec/dac/nn/__init__.py +0 -3
  50. dac-codec/dac/nn/__pycache__/__init__.cpython-310.pyc +0 -0
README.md CHANGED
@@ -18,7 +18,7 @@ This repository provides an implementation of the MiniMax-Speech model, featurin
 ## Architecture
 
 ### Stage 1: Audio to Discrete Tokens
-Converts raw audio into discrete representations using the DAC (Descript Audio Codec) framework.
+Converts raw audio into discrete representations using the FSQ (S3Tokenizer) framework.
 
 ### Stage 2: Discrete Tokens to Continuous Latent Space
 Maps discrete tokens to a continuous latent space using a Variational Autoencoder (VAE).
@@ -29,25 +29,25 @@ Maps discrete tokens to a continuous latent space using a Variational Autoencode
 
 ### 1. Model Training
 
-#### BPE tokens to DAC codec tokens
-- Based on the DAC codec
-- Using Auto Regressive to predict the DAC codec tokens with learnable speaker extractor
+#### BPE tokens to FSQ tokens
+- Based on the FSQ
+- Using Auto Regressive to predict the FSQ tokens with learnable speaker extractor
 
-#### DAC codec tokens to DAC-VAE latent
+#### FSQ tokens to DAC-VAE latent
 - Based on Cosyvoice2 flow matching decoder
 - Learns continuous latent representations from discrete tokens
 
 ### 2. Feature Extraction
 
 Before training the main model:
-1. Extract discrete tokens using the trained DAC codec [Descript Audio Codec](https://github.com/descriptinc/descript-audio-codec)
+1. Extract discrete tokens using the trained FSQ [S3Tokenizer](https://github.com/xingchensong/S3Tokenizer)
 2. Generate continuous latent representations using the trained DAC-VAE - the pretrained I provided here: [DAC-VAE](https://drive.google.com/file/d/1iwZhPlcdDwvPjeON3bFAeYarsV4ZtI2E/view?usp=sharing)
 
 ### 3. Two-Stage Training
 
 Train the models sequentially:
-- **Stage 1**: BPE tokens → Discrete DAC codec
-- **Stage 2**: Discrete DAC codec → DAC-VAE Continuous latent space
+- **Stage 1**: BPE tokens → Discrete FSQ
+- **Stage 2**: Discrete FSQ → DAC-VAE Continuous latent space
 
 ## Getting Started
 
@@ -59,7 +59,7 @@ pip install -r requirements.txt
 
 ### Training Pipeline
 
-1. **Extracting DAC Codec** (if not using pretrained)
+1. **Extracting FSQ** (if not using pretrained)
 ```bash
 # Add training command
 ```
@@ -88,13 +88,12 @@ minimax-speech/
 ├── configs/
 │ └── dac_vae.yaml
 ├── models/
-│ ├── dac_codec/
+│ ├── fsq/
 │ └── dac_vae/
 ├── cosyvoice/ # Components from CosyVoice2
 │ ├── flow/
 │ ├── transformer/
 │ └── utils/
-├── train_dac_vae.py
 └── README.md
 ```
@@ -130,13 +129,13 @@ If you use this code in your research, please cite:
 
 This project follows the licensing terms of its dependencies:
 - CosyVoice2 components: [Check CosyVoice2 License](https://github.com/FunAudioLLM/CosyVoice/blob/main/LICENSE)
-- DAC components: [Apache 2.0 License](https://github.com/descriptinc/descript-audio-codec/blob/main/LICENSE)
+- FSQ components: [Apache 2.0 License](https://github.com/xingchensong/S3Tokenizer/blob/main/LICENSE)
 - Original contributions: [Specify your license here]
 
 ## Acknowledgments
 
 - **[CosyVoice2](https://github.com/FunAudioLLM/CosyVoice)**: This implementation extensively uses code and architectures from CosyVoice2
-- **[Descript Audio Codec](https://github.com/descriptinc/descript-audio-codec)**: For the DAC implementation
+- **[FSQ](https://github.com/xingchensong/S3Tokenizer)**: For the FSQ implementation
 - **MiniMax team**: For the technical report and methodology
 - **FunAudioLLM team**: For the excellent CosyVoice2 codebase

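The commit swaps the DAC residual-VQ tokenizer for FSQ (Finite Scalar Quantization, as used by S3Tokenizer). As background, FSQ replaces a learned codebook with a fixed per-dimension rounding grid; below is a minimal numpy sketch of the idea (not the S3Tokenizer implementation, and assuming odd per-dimension level counts so the grid is integer-centered):

```python
import numpy as np

def fsq_quantize(z, levels):
    """Finite Scalar Quantization: bound each latent dimension with tanh,
    then round it onto a fixed grid of `levels[d]` points per dimension."""
    levels = np.asarray(levels)
    half = (levels - 1) / 2.0          # grid spans [-half, half] per dim
    bounded = np.tanh(z) * half        # squash the latent into the grid range
    return np.round(bounded)           # snap to the nearest grid point

def fsq_code_index(quantized, levels):
    """Flatten per-dimension grid coordinates into a single token id
    (mixed-radix encoding; codebook size is prod(levels))."""
    levels = np.asarray(levels)
    digits = (quantized + (levels - 1) / 2.0).astype(int)  # shift to [0, L-1]
    index = 0
    for d, n in zip(digits, levels):
        index = index * int(n) + int(d)
    return index

# Example: 3 dimensions with 5 levels each -> an implicit codebook of 125 tokens.
q = fsq_quantize(np.array([10.0, 0.0, -10.0]), [5, 5, 5])
token = fsq_code_index(q, [5, 5, 5])
```

Because the grid is fixed, FSQ needs no commitment or codebook losses, which is why the `vq/*` lambdas in the deleted DAC configs below have no FSQ counterpart.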
dac-codec/assets/comparsion_stats.png DELETED

Git LFS Details

  • SHA256: 46dcd8f1b60cf44443354b21cece5b88f2a122aa4788dc4f899e5d28f34e2dac
  • Pointer size: 131 Bytes
  • Size of remote file: 185 kB
dac-codec/assets/objective_comparisons.png DELETED

Git LFS Details

  • SHA256: 919fa32c38d51aed15a7fb43eba2d28b687636ba3a289e7f9eeb10ab6489030d
  • Pointer size: 131 Bytes
  • Size of remote file: 531 kB
dac-codec/conf/1gpu.yml DELETED
@@ -1,6 +0,0 @@
-$include:
-  - conf/base.yml
-
-batch_size: 12
-val_batch_size: 12
-num_workers: 4

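The deleted configs above layer overrides on top of `conf/base.yml` via an `$include` key. As a rough illustration of that pattern (a hypothetical loader for this sketch, not the actual include mechanism used by the DAC training code), later files win on conflicting keys:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# conf/base.yml sets defaults; conf/1gpu.yml shrinks the batch for one GPU.
base_cfg = {"batch_size": 72, "num_workers": 32, "AdamW.lr": 0.0001}
one_gpu = {"batch_size": 12, "val_batch_size": 12, "num_workers": 4}
cfg = deep_merge(base_cfg, one_gpu)
```

This is why the ablation files are so short: each only states its delta from the base config.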
dac-codec/conf/ablations/baseline.yml DELETED
@@ -1,3 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml

dac-codec/conf/ablations/diff-mb.yml DELETED
@@ -1,22 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-Discriminator.sample_rate: 44100
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 0.05]
-  - [0.05, 0.1]
-  - [0.1, 0.25]
-  - [0.25, 0.5]
-  - [0.5, 1.0]
-
-
-# re-weight lambdas to make up for
-# lost discriminators vs baseline
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 5.0
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0

dac-codec/conf/ablations/equal-mb.yml DELETED
@@ -1,22 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-Discriminator.sample_rate: 44100
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 0.2]
-  - [0.2, 0.4]
-  - [0.4, 0.6]
-  - [0.6, 0.8]
-  - [0.8, 1.0]
-
-
-# re-weight lambdas to make up for
-# lost discriminators vs baseline
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 5.0
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0

dac-codec/conf/ablations/no-adv.yml DELETED
@@ -1,9 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-lambdas:
-  mel/loss: 1.0
-  waveform/loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0

dac-codec/conf/ablations/no-data-balance.yml DELETED
@@ -1,22 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-train/build_dataset.folders:
-  speech:
-    - /data/daps/train
-    - /data/vctk
-    - /data/vocalset
-    - /data/read_speech
-    - /data/french_speech
-    - /data/emotional_speech/
-    - /data/common_voice/
-    - /data/german_speech/
-    - /data/russian_speech/
-    - /data/spanish_speech/
-  music:
-    - /data/musdb/train
-    - /data/jamendo
-  general:
-    - /data/audioset/data/unbalanced_train_segments/
-    - /data/audioset/data/balanced_train_segments/

dac-codec/conf/ablations/no-low-hop.yml DELETED
@@ -1,18 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-MelSpectrogramLoss.n_mels: [80]
-MelSpectrogramLoss.window_lengths: [512]
-MelSpectrogramLoss.mel_fmin: [0]
-MelSpectrogramLoss.mel_fmax: [null]
-MelSpectrogramLoss.pow: 1.0
-MelSpectrogramLoss.clamp_eps: 1.0e-5
-MelSpectrogramLoss.mag_weight: 0.0
-
-lambdas:
-  mel/loss: 100.0
-  adv/feat_loss: 2.0
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0

dac-codec/conf/ablations/no-mb.yml DELETED
@@ -1,17 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-Discriminator.sample_rate: 44100
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 1.0]
-
-# re-weight lambdas to make up for
-# lost discriminators vs baseline
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 5.0
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0

dac-codec/conf/ablations/no-mpd-msd.yml DELETED
@@ -1,21 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-Discriminator.sample_rate: 44100
-Discriminator.rates: []
-Discriminator.periods: []
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 0.1]
-  - [0.1, 0.25]
-  - [0.25, 0.5]
-  - [0.5, 0.75]
-  - [0.75, 1.0]
-
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 2.66
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0

dac-codec/conf/ablations/no-mpd.yml DELETED
@@ -1,21 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-Discriminator.sample_rate: 44100
-Discriminator.rates: [1]
-Discriminator.periods: []
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 0.1]
-  - [0.1, 0.25]
-  - [0.25, 0.5]
-  - [0.5, 0.75]
-  - [0.75, 1.0]
-
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 2.5
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0

dac-codec/conf/ablations/only-speech.yml DELETED
@@ -1,22 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-train/build_dataset.folders:
-  speech_fb:
-    - /data/daps/train
-  speech_hq:
-    - /data/vctk
-    - /data/vocalset
-    - /data/read_speech
-    - /data/french_speech
-  speech_uq:
-    - /data/emotional_speech/
-    - /data/common_voice/
-    - /data/german_speech/
-    - /data/russian_speech/
-    - /data/spanish_speech/
-
-val/build_dataset.folders:
-  speech_hq:
-    - /data/daps/val

dac-codec/conf/base.yml DELETED
@@ -1,123 +0,0 @@
-# Model setup
-DAC.sample_rate: 44100
-DAC.encoder_dim: 64
-DAC.encoder_rates: [2, 4, 8, 8]
-DAC.decoder_dim: 1536
-DAC.decoder_rates: [8, 8, 4, 2]
-
-# Quantization
-DAC.n_codebooks: 9
-DAC.codebook_size: 1024
-DAC.codebook_dim: 8
-DAC.quantizer_dropout: 1.0
-
-# Discriminator
-Discriminator.sample_rate: 44100
-Discriminator.rates: []
-Discriminator.periods: [2, 3, 5, 7, 11]
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 0.1]
-  - [0.1, 0.25]
-  - [0.25, 0.5]
-  - [0.5, 0.75]
-  - [0.75, 1.0]
-
-# Optimization
-AdamW.betas: [0.8, 0.99]
-AdamW.lr: 0.0001
-ExponentialLR.gamma: 0.999996
-
-amp: false
-val_batch_size: 100
-device: cuda
-num_iters: 250000
-save_iters: [10000, 50000, 100000, 200000]
-valid_freq: 1000
-sample_freq: 10000
-num_workers: 32
-val_idx: [0, 1, 2, 3, 4, 5, 6, 7]
-seed: 0
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 2.0
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0
-
-VolumeNorm.db: [const, -16]
-
-# Transforms
-build_transform.preprocess:
-  - Identity
-build_transform.augment_prob: 0.0
-build_transform.augment:
-  - Identity
-build_transform.postprocess:
-  - VolumeNorm
-  - RescaleAudio
-  - ShiftPhase
-
-# Loss setup
-MultiScaleSTFTLoss.window_lengths: [2048, 512]
-MelSpectrogramLoss.n_mels: [5, 10, 20, 40, 80, 160, 320]
-MelSpectrogramLoss.window_lengths: [32, 64, 128, 256, 512, 1024, 2048]
-MelSpectrogramLoss.mel_fmin: [0, 0, 0, 0, 0, 0, 0]
-MelSpectrogramLoss.mel_fmax: [null, null, null, null, null, null, null]
-MelSpectrogramLoss.pow: 1.0
-MelSpectrogramLoss.clamp_eps: 1.0e-5
-MelSpectrogramLoss.mag_weight: 0.0
-
-# Data
-batch_size: 72
-train/AudioDataset.duration: 0.38
-train/AudioDataset.n_examples: 10000000
-
-val/AudioDataset.duration: 5.0
-val/build_transform.augment_prob: 1.0
-val/AudioDataset.n_examples: 250
-
-test/AudioDataset.duration: 10.0
-test/build_transform.augment_prob: 1.0
-test/AudioDataset.n_examples: 1000
-
-AudioLoader.shuffle: true
-AudioDataset.without_replacement: true
-
-train/build_dataset.folders:
-  speech_fb:
-    - /data/daps/train
-  speech_hq:
-    - /data/vctk
-    - /data/vocalset
-    - /data/read_speech
-    - /data/french_speech
-  speech_uq:
-    - /data/emotional_speech/
-    - /data/common_voice/
-    - /data/german_speech/
-    - /data/russian_speech/
-    - /data/spanish_speech/
-  music_hq:
-    - /data/musdb/train
-  music_uq:
-    - /data/jamendo
-  general:
-    - /data/audioset/data/unbalanced_train_segments/
-    - /data/audioset/data/balanced_train_segments/
-
-val/build_dataset.folders:
-  speech_hq:
-    - /data/daps/val
-  music_hq:
-    - /data/musdb/test
-  general:
-    - /data/audioset/data/eval_segments/
-
-test/build_dataset.folders:
-  speech_hq:
-    - /data/daps/test
-  music_hq:
-    - /data/musdb/test
-  general:
-    - /data/audioset/data/eval_segments/

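The `lambdas` block in `base.yml` defines a weighted sum over named loss terms. Illustratively (the key names and weights follow the config above; the combining function itself is a sketch, not the repo's trainer code):

```python
def combine_losses(losses: dict, lambdas: dict) -> float:
    """Total training loss: sum of lambda-weighted individual loss terms."""
    return sum(weight * losses[name]
               for name, weight in lambdas.items() if name in losses)

# Weights from base.yml; pretend every raw loss term currently equals 1.0.
lambdas = {"mel/loss": 15.0, "adv/feat_loss": 2.0, "adv/gen_loss": 1.0,
           "vq/commitment_loss": 0.25, "vq/codebook_loss": 1.0}
losses = {name: 1.0 for name in lambdas}
total = combine_losses(losses, lambdas)
```

This also makes the ablation configs easy to read: e.g. `no-adv.yml` drops the `adv/*` keys entirely, while `no-mb.yml` re-weights `adv/feat_loss` upward to compensate for the removed discriminators.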
dac-codec/conf/downsampling/1024x.yml DELETED
@@ -1,16 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-# Model setup
-DAC.sample_rate: 44100
-DAC.encoder_dim: 64
-DAC.encoder_rates: [2, 8, 8, 8]
-DAC.decoder_dim: 1536
-DAC.decoder_rates: [8, 4, 4, 2, 2, 2]
-
-# Quantization
-DAC.n_codebooks: 19
-DAC.codebook_size: 1024
-DAC.codebook_dim: 8
-DAC.quantizer_dropout: 1.0

dac-codec/conf/downsampling/128x.yml DELETED
@@ -1,16 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-# Model setup
-DAC.sample_rate: 44100
-DAC.encoder_dim: 64
-DAC.encoder_rates: [2, 4, 4, 4]
-DAC.decoder_dim: 1536
-DAC.decoder_rates: [4, 4, 2, 2, 2, 1]
-
-# Quantization
-DAC.n_codebooks: 2
-DAC.codebook_size: 1024
-DAC.codebook_dim: 8
-DAC.quantizer_dropout: 1.0

dac-codec/conf/downsampling/1536x.yml DELETED
@@ -1,16 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-# Model setup
-DAC.sample_rate: 44100
-DAC.encoder_dim: 96
-DAC.encoder_rates: [2, 8, 8, 12]
-DAC.decoder_dim: 1536
-DAC.decoder_rates: [12, 4, 4, 2, 2, 2]
-
-# Quantization
-DAC.n_codebooks: 28
-DAC.codebook_size: 1024
-DAC.codebook_dim: 8
-DAC.quantizer_dropout: 1.0

dac-codec/conf/downsampling/768x.yml DELETED
@@ -1,16 +0,0 @@
-$include:
-  - conf/base.yml
-  - conf/1gpu.yml
-
-# Model setup
-DAC.sample_rate: 44100
-DAC.encoder_dim: 64
-DAC.encoder_rates: [2, 6, 8, 8]
-DAC.decoder_dim: 1536
-DAC.decoder_rates: [6, 4, 4, 2, 2, 2]
-
-# Quantization
-DAC.n_codebooks: 14
-DAC.codebook_size: 1024
-DAC.codebook_dim: 8
-DAC.quantizer_dropout: 1.0

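The name of each downsampling config above is the product of its `encoder_rates`, i.e. the hop length in samples per token. A quick arithmetic check, matching the configs:

```python
from math import prod

def hop_length(encoder_rates):
    """Total temporal downsampling: product of the per-stage strides."""
    return prod(encoder_rates)

def token_rate_hz(sample_rate, encoder_rates):
    """Discrete tokens (frames) emitted per second of audio."""
    return sample_rate / hop_length(encoder_rates)
```

For example the base config's `[2, 4, 8, 8]` gives 512x downsampling, i.e. about 86 tokens per second at 44.1 kHz.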
dac-codec/conf/final/16khz.yml DELETED
@@ -1,123 +0,0 @@
-# Model setup
-DAC.sample_rate: 16000
-DAC.encoder_dim: 64
-DAC.encoder_rates: [2, 4, 5, 8]
-DAC.decoder_dim: 1536
-DAC.decoder_rates: [8, 5, 4, 2]
-
-# Quantization
-DAC.n_codebooks: 12
-DAC.codebook_size: 1024
-DAC.codebook_dim: 8
-DAC.quantizer_dropout: 0.5
-
-# Discriminator
-Discriminator.sample_rate: 16000
-Discriminator.rates: []
-Discriminator.periods: [2, 3, 5, 7, 11]
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 0.1]
-  - [0.1, 0.25]
-  - [0.25, 0.5]
-  - [0.5, 0.75]
-  - [0.75, 1.0]
-
-# Optimization
-AdamW.betas: [0.8, 0.99]
-AdamW.lr: 0.0001
-ExponentialLR.gamma: 0.999996
-
-amp: false
-val_batch_size: 100
-device: cuda
-num_iters: 400000
-save_iters: [10000, 50000, 100000, 200000]
-valid_freq: 1000
-sample_freq: 10000
-num_workers: 32
-val_idx: [0, 1, 2, 3, 4, 5, 6, 7]
-seed: 0
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 2.0
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0
-
-VolumeNorm.db: [const, -16]
-
-# Transforms
-build_transform.preprocess:
-  - Identity
-build_transform.augment_prob: 0.0
-build_transform.augment:
-  - Identity
-build_transform.postprocess:
-  - VolumeNorm
-  - RescaleAudio
-  - ShiftPhase
-
-# Loss setup
-MultiScaleSTFTLoss.window_lengths: [2048, 512]
-MelSpectrogramLoss.n_mels: [5, 10, 20, 40, 80, 160, 320]
-MelSpectrogramLoss.window_lengths: [32, 64, 128, 256, 512, 1024, 2048]
-MelSpectrogramLoss.mel_fmin: [0, 0, 0, 0, 0, 0, 0]
-MelSpectrogramLoss.mel_fmax: [null, null, null, null, null, null, null]
-MelSpectrogramLoss.pow: 1.0
-MelSpectrogramLoss.clamp_eps: 1.0e-5
-MelSpectrogramLoss.mag_weight: 0.0
-
-# Data
-batch_size: 72
-train/AudioDataset.duration: 0.38
-train/AudioDataset.n_examples: 10000000
-
-val/AudioDataset.duration: 5.0
-val/build_transform.augment_prob: 1.0
-val/AudioDataset.n_examples: 250
-
-test/AudioDataset.duration: 10.0
-test/build_transform.augment_prob: 1.0
-test/AudioDataset.n_examples: 1000
-
-AudioLoader.shuffle: true
-AudioDataset.without_replacement: true
-
-train/build_dataset.folders:
-  speech_fb:
-    - /data/daps/train
-  speech_hq:
-    - /data/vctk
-    - /data/vocalset
-    - /data/read_speech
-    - /data/french_speech
-  speech_uq:
-    - /data/emotional_speech/
-    - /data/common_voice/
-    - /data/german_speech/
-    - /data/russian_speech/
-    - /data/spanish_speech/
-  music_hq:
-    - /data/musdb/train
-  music_uq:
-    - /data/jamendo
-  general:
-    - /data/audioset/data/unbalanced_train_segments/
-    - /data/audioset/data/balanced_train_segments/
-
-val/build_dataset.folders:
-  speech_hq:
-    - /data/daps/val
-  music_hq:
-    - /data/musdb/test
-  general:
-    - /data/audioset/data/eval_segments/
-
-test/build_dataset.folders:
-  speech_hq:
-    - /data/daps/test
-  music_hq:
-    - /data/musdb/test
-  general:
-    - /data/audioset/data/eval_segments/

dac-codec/conf/final/24khz.yml DELETED
@@ -1,123 +0,0 @@
-# Model setup
-DAC.sample_rate: 24000
-DAC.encoder_dim: 64
-DAC.encoder_rates: [2, 4, 5, 8]
-DAC.decoder_dim: 1536
-DAC.decoder_rates: [8, 5, 4, 2]
-
-# Quantization
-DAC.n_codebooks: 32
-DAC.codebook_size: 1024
-DAC.codebook_dim: 8
-DAC.quantizer_dropout: 0.5
-
-# Discriminator
-Discriminator.sample_rate: 24000
-Discriminator.rates: []
-Discriminator.periods: [2, 3, 5, 7, 11]
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 0.1]
-  - [0.1, 0.25]
-  - [0.25, 0.5]
-  - [0.5, 0.75]
-  - [0.75, 1.0]
-
-# Optimization
-AdamW.betas: [0.8, 0.99]
-AdamW.lr: 0.0001
-ExponentialLR.gamma: 0.999996
-
-amp: false
-val_batch_size: 100
-device: cuda
-num_iters: 400000
-save_iters: [10000, 50000, 100000, 200000]
-valid_freq: 1000
-sample_freq: 10000
-num_workers: 32
-val_idx: [0, 1, 2, 3, 4, 5, 6, 7]
-seed: 0
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 2.0
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0
-
-VolumeNorm.db: [const, -16]
-
-# Transforms
-build_transform.preprocess:
-  - Identity
-build_transform.augment_prob: 0.0
-build_transform.augment:
-  - Identity
-build_transform.postprocess:
-  - VolumeNorm
-  - RescaleAudio
-  - ShiftPhase
-
-# Loss setup
-MultiScaleSTFTLoss.window_lengths: [2048, 512]
-MelSpectrogramLoss.n_mels: [5, 10, 20, 40, 80, 160, 320]
-MelSpectrogramLoss.window_lengths: [32, 64, 128, 256, 512, 1024, 2048]
-MelSpectrogramLoss.mel_fmin: [0, 0, 0, 0, 0, 0, 0]
-MelSpectrogramLoss.mel_fmax: [null, null, null, null, null, null, null]
-MelSpectrogramLoss.pow: 1.0
-MelSpectrogramLoss.clamp_eps: 1.0e-5
-MelSpectrogramLoss.mag_weight: 0.0
-
-# Data
-batch_size: 72
-train/AudioDataset.duration: 0.38
-train/AudioDataset.n_examples: 10000000
-
-val/AudioDataset.duration: 5.0
-val/build_transform.augment_prob: 1.0
-val/AudioDataset.n_examples: 250
-
-test/AudioDataset.duration: 10.0
-test/build_transform.augment_prob: 1.0
-test/AudioDataset.n_examples: 1000
-
-AudioLoader.shuffle: true
-AudioDataset.without_replacement: true
-
-train/build_dataset.folders:
-  speech_fb:
-    - /data/daps/train
-  speech_hq:
-    - /data/vctk
-    - /data/vocalset
-    - /data/read_speech
-    - /data/french_speech
-  speech_uq:
-    - /data/emotional_speech/
-    - /data/common_voice/
-    - /data/german_speech/
-    - /data/russian_speech/
-    - /data/spanish_speech/
-  music_hq:
-    - /data/musdb/train
-  music_uq:
-    - /data/jamendo
-  general:
-    - /data/audioset/data/unbalanced_train_segments/
-    - /data/audioset/data/balanced_train_segments/
-
-val/build_dataset.folders:
-  speech_hq:
-    - /data/daps/val
-  music_hq:
-    - /data/musdb/test
-  general:
-    - /data/audioset/data/eval_segments/
-
-test/build_dataset.folders:
-  speech_hq:
-    - /data/daps/test
-  music_hq:
-    - /data/musdb/test
-  general:
-    - /data/audioset/data/eval_segments/

dac-codec/conf/final/44khz-16kbps.yml DELETED
@@ -1,124 +0,0 @@
-# Model setup
-DAC.sample_rate: 44100
-DAC.encoder_dim: 64
-DAC.encoder_rates: [2, 4, 8, 8]
-DAC.latent_dim: 128
-DAC.decoder_dim: 1536
-DAC.decoder_rates: [8, 8, 4, 2]
-
-# Quantization
-DAC.n_codebooks: 18 # Max bitrate of 16kbps
-DAC.codebook_size: 1024
-DAC.codebook_dim: 8
-DAC.quantizer_dropout: 0.5
-
-# Discriminator
-Discriminator.sample_rate: 44100
-Discriminator.rates: []
-Discriminator.periods: [2, 3, 5, 7, 11]
-Discriminator.fft_sizes: [2048, 1024, 512]
-Discriminator.bands:
-  - [0.0, 0.1]
-  - [0.1, 0.25]
-  - [0.25, 0.5]
-  - [0.5, 0.75]
-  - [0.75, 1.0]
-
-# Optimization
-AdamW.betas: [0.8, 0.99]
-AdamW.lr: 0.0001
-ExponentialLR.gamma: 0.999996
-
-amp: false
-val_batch_size: 100
-device: cuda
-num_iters: 400000
-save_iters: [10000, 50000, 100000, 200000]
-valid_freq: 1000
-sample_freq: 10000
-num_workers: 32
-val_idx: [0, 1, 2, 3, 4, 5, 6, 7]
-seed: 0
-lambdas:
-  mel/loss: 15.0
-  adv/feat_loss: 2.0
-  adv/gen_loss: 1.0
-  vq/commitment_loss: 0.25
-  vq/codebook_loss: 1.0
-
-VolumeNorm.db: [const, -16]
-
-# Transforms
-build_transform.preprocess:
-  - Identity
-build_transform.augment_prob: 0.0
-build_transform.augment:
-  - Identity
-build_transform.postprocess:
-  - VolumeNorm
-  - RescaleAudio
-  - ShiftPhase
-
-# Loss setup
-MultiScaleSTFTLoss.window_lengths: [2048, 512]
-MelSpectrogramLoss.n_mels: [5, 10, 20, 40, 80, 160, 320]
-MelSpectrogramLoss.window_lengths: [32, 64, 128, 256, 512, 1024, 2048]
-MelSpectrogramLoss.mel_fmin: [0, 0, 0, 0, 0, 0, 0]
-MelSpectrogramLoss.mel_fmax: [null, null, null, null, null, null, null]
-MelSpectrogramLoss.pow: 1.0
-MelSpectrogramLoss.clamp_eps: 1.0e-5
-MelSpectrogramLoss.mag_weight: 0.0
-
-# Data
-batch_size: 72
-train/AudioDataset.duration: 0.38
-train/AudioDataset.n_examples: 10000000
-
-val/AudioDataset.duration: 5.0
-val/build_transform.augment_prob: 1.0
-val/AudioDataset.n_examples: 250
-
-test/AudioDataset.duration: 10.0
-test/build_transform.augment_prob: 1.0
-test/AudioDataset.n_examples: 1000
-
-AudioLoader.shuffle: true
-AudioDataset.without_replacement: true
-
-train/build_dataset.folders:
-  speech_fb:
-    - /data/daps/train
-  speech_hq:
-    - /data/vctk
-    - /data/vocalset
-    - /data/read_speech
-    - /data/french_speech
-  speech_uq:
-    - /data/emotional_speech/
-    - /data/common_voice/
-    - /data/german_speech/
-    - /data/russian_speech/
-    - /data/spanish_speech/
-  music_hq:
-    - /data/musdb/train
-  music_uq:
-    - /data/jamendo
-  general:
-    - /data/audioset/data/unbalanced_train_segments/
-    - /data/audioset/data/balanced_train_segments/
-
-val/build_dataset.folders:
-  speech_hq:
-    - /data/daps/val
-  music_hq:
-    - /data/musdb/test
-  general:
-    - /data/audioset/data/eval_segments/
-
-test/build_dataset.folders:
-  speech_hq:
-    - /data/daps/test
-  music_hq:
-    - /data/musdb/test
-  general:
-    - /data/audioset/data/eval_segments/

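The comment `DAC.n_codebooks: 18 # Max bitrate of 16kbps` in `44khz-16kbps.yml` can be checked by arithmetic: each codebook contributes log2(codebook_size) bits per frame, and frames arrive at sample_rate / prod(encoder_rates) per second. A back-of-the-envelope verification:

```python
from math import log2, prod

def max_bitrate_bps(sample_rate, encoder_rates, n_codebooks, codebook_size):
    """Peak bitrate of a residual-VQ codec: bits per frame times frames per second."""
    frames_per_second = sample_rate / prod(encoder_rates)  # 44100/512 ≈ 86.13 Hz
    bits_per_frame = n_codebooks * log2(codebook_size)     # 18 * 10 bits
    return frames_per_second * bits_per_frame

# 44.1 kHz, 512x downsampling, 18 codebooks of 1024 entries -> just under 16 kbps
rate = max_bitrate_bps(44100, [2, 4, 8, 8], 18, 1024)
```

With `quantizer_dropout: 0.5`, the model is also trained to decode at any lower number of codebooks, so 16 kbps is a ceiling rather than a fixed operating point.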
dac-codec/conf/final/44khz.yml DELETED
@@ -1,123 +0,0 @@
1
- # Model setup
2
- DAC.sample_rate: 44100
3
- DAC.encoder_dim: 64
4
- DAC.encoder_rates: [2, 4, 8, 8]
5
- DAC.decoder_dim: 1536
6
- DAC.decoder_rates: [8, 8, 4, 2]
7
-
8
- # Quantization
9
- DAC.n_codebooks: 9
10
- DAC.codebook_size: 1024
11
- DAC.codebook_dim: 8
12
- DAC.quantizer_dropout: 0.5
13
-
14
- # Discriminator
15
- Discriminator.sample_rate: 44100
16
- Discriminator.rates: []
17
- Discriminator.periods: [2, 3, 5, 7, 11]
18
- Discriminator.fft_sizes: [2048, 1024, 512]
19
- Discriminator.bands:
20
- - [0.0, 0.1]
21
- - [0.1, 0.25]
22
- - [0.25, 0.5]
23
- - [0.5, 0.75]
24
- - [0.75, 1.0]
25
-
26
- # Optimization
27
- AdamW.betas: [0.8, 0.99]
28
- AdamW.lr: 0.0001
29
- ExponentialLR.gamma: 0.999996
30
-
31
- amp: false
32
- val_batch_size: 100
33
- device: cuda
34
- num_iters: 400000
35
- save_iters: [10000, 50000, 100000, 200000]
36
- valid_freq: 1000
37
- sample_freq: 10000
38
- num_workers: 32
39
- val_idx: [0, 1, 2, 3, 4, 5, 6, 7]
40
- seed: 0
41
- lambdas:
42
- mel/loss: 15.0
43
- adv/feat_loss: 2.0
44
- adv/gen_loss: 1.0
45
- vq/commitment_loss: 0.25
46
- vq/codebook_loss: 1.0
47
-
48
- VolumeNorm.db: [const, -16]
49
-
50
- # Transforms
51
- build_transform.preprocess:
52
- - Identity
53
- build_transform.augment_prob: 0.0
54
- build_transform.augment:
55
- - Identity
56
- build_transform.postprocess:
57
- - VolumeNorm
58
- - RescaleAudio
59
- - ShiftPhase
60
-
61
- # Loss setup
62
- MultiScaleSTFTLoss.window_lengths: [2048, 512]
63
- MelSpectrogramLoss.n_mels: [5, 10, 20, 40, 80, 160, 320]
64
- MelSpectrogramLoss.window_lengths: [32, 64, 128, 256, 512, 1024, 2048]
65
- MelSpectrogramLoss.mel_fmin: [0, 0, 0, 0, 0, 0, 0]
66
- MelSpectrogramLoss.mel_fmax: [null, null, null, null, null, null, null]
67
- MelSpectrogramLoss.pow: 1.0
68
- MelSpectrogramLoss.clamp_eps: 1.0e-5
69
- MelSpectrogramLoss.mag_weight: 0.0
70
-
71
- # Data
72
- batch_size: 72
73
- train/AudioDataset.duration: 0.38
74
- train/AudioDataset.n_examples: 10000000
75
-
76
- val/AudioDataset.duration: 5.0
77
- val/build_transform.augment_prob: 1.0
78
- val/AudioDataset.n_examples: 250
79
-
80
- test/AudioDataset.duration: 10.0
81
- test/build_transform.augment_prob: 1.0
82
- test/AudioDataset.n_examples: 1000
83
-
84
- AudioLoader.shuffle: true
85
- AudioDataset.without_replacement: true
86
-
87
- train/build_dataset.folders:
88
- speech_fb:
89
- - /data/daps/train
90
- speech_hq:
91
- - /data/vctk
92
- - /data/vocalset
93
- - /data/read_speech
94
- - /data/french_speech
95
- speech_uq:
96
- - /data/emotional_speech/
97
- - /data/common_voice/
98
- - /data/german_speech/
99
- - /data/russian_speech/
100
- - /data/spanish_speech/
101
- music_hq:
102
- - /data/musdb/train
103
- music_uq:
104
- - /data/jamendo
105
- general:
106
- - /data/audioset/data/unbalanced_train_segments/
107
- - /data/audioset/data/balanced_train_segments/
108
-
109
- val/build_dataset.folders:
110
- speech_hq:
111
- - /data/daps/val
112
- music_hq:
113
- - /data/musdb/test
114
- general:
115
- - /data/audioset/data/eval_segments/
116
-
117
- test/build_dataset.folders:
118
- speech_hq:
119
- - /data/daps/test
120
- music_hq:
121
- - /data/musdb/test
122
- general:
123
- - /data/audioset/data/eval_segments/
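For reference, the `lambdas` block in the deleted config weights each loss term before summation. A minimal sketch of that weighted sum — the `losses` dict here is a dummy stand-in, since the training loop itself is not part of this diff:

```python
# Loss weights copied from the deleted config above.
lambdas = {
    "mel/loss": 15.0,
    "adv/feat_loss": 2.0,
    "adv/gen_loss": 1.0,
    "vq/commitment_loss": 0.25,
    "vq/codebook_loss": 1.0,
}

def total_loss(losses: dict, lambdas: dict) -> float:
    """Weighted sum over whichever loss terms the config names."""
    return sum(lambdas[k] * losses[k] for k in lambdas)

# Dummy per-term losses of 1.0 each, so the total equals the sum of weights.
losses = {k: 1.0 for k in lambdas}
print(total_loss(losses, lambdas))  # 19.25
```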
 
dac-codec/conf/quantizer/24kbps.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.n_codebooks: 28
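The `n_codebooks: 28` override is what pushes this variant to roughly 24 kbps. A quick sanity check of the arithmetic, assuming the base.yml defaults (codebook_size 1024, encoder strides [2, 4, 8, 8], 44.1 kHz), which are not shown in this diff:

```python
import math

# Each latent frame covers hop_length input samples (product of strides),
# and each codebook contributes log2(codebook_size) bits per frame.
sample_rate = 44100
hop_length = 2 * 4 * 8 * 8             # 512 samples per latent frame
frame_rate = sample_rate / hop_length  # ~86.13 frames/s
bits_per_code = math.log2(1024)        # 10 bits per codebook entry
n_codebooks = 28

bitrate_bps = n_codebooks * bits_per_code * frame_rate
print(f"{bitrate_bps / 1000:.1f} kbps")  # ~24.1 kbps
```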
 
dac-codec/conf/quantizer/256d.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.codebook_dim: 256
 
dac-codec/conf/quantizer/2d.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.codebook_dim: 2
 
dac-codec/conf/quantizer/32d.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.codebook_dim: 32
 
dac-codec/conf/quantizer/4d.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.codebook_dim: 4
 
dac-codec/conf/quantizer/512d.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.codebook_dim: 512
 
dac-codec/conf/quantizer/dropout-0.0.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.quantizer_dropout: 0.0
 
dac-codec/conf/quantizer/dropout-0.25.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.quantizer_dropout: 0.25
 
dac-codec/conf/quantizer/dropout-0.5.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.quantizer_dropout: 0.5
 
dac-codec/conf/size/medium.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.decoder_dim: 1024
 
dac-codec/conf/size/small.yml DELETED
@@ -1,5 +0,0 @@
1
- $include:
2
- - conf/base.yml
3
- - conf/1gpu.yml
4
-
5
- DAC.decoder_dim: 512
 
dac-codec/dac/__init__.py DELETED
@@ -1,16 +0,0 @@
1
- __version__ = "1.0.0"
2
-
3
- # preserved here for legacy reasons
4
- __model_version__ = "latest"
5
-
6
- import audiotools
7
-
8
- audiotools.ml.BaseModel.INTERN += ["dac.**"]
9
- audiotools.ml.BaseModel.EXTERN += ["einops"]
10
-
11
-
12
- from . import nn
13
- from . import model
14
- from . import utils
15
- from .model import DAC
16
- from .model import DACFile
 
dac-codec/dac/__main__.py DELETED
@@ -1,36 +0,0 @@
1
- import sys
2
-
3
- import argbind
4
-
5
- from dac.utils import download
6
- from dac.utils.decode import decode
7
- from dac.utils.encode import encode
8
-
9
- STAGES = ["encode", "decode", "download"]
10
-
11
-
12
- def run(stage: str):
13
- """Run stages.
14
-
15
- Parameters
16
- ----------
17
- stage : str
18
- Stage to run
19
- """
20
- if stage not in STAGES:
21
- raise ValueError(f"Unknown command: {stage}. Allowed commands are {STAGES}")
22
- stage_fn = globals()[stage]
23
-
24
- if stage == "download":
25
- stage_fn()
26
- return
27
-
28
- stage_fn()
29
-
30
-
31
- if __name__ == "__main__":
32
- group = sys.argv.pop(1)
33
- args = argbind.parse_args(group=group)
34
-
35
- with argbind.scope(args):
36
- run(group)
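The deleted `__main__.py` dispatches on a stage name via `globals()`. A self-contained sketch of that dispatch pattern, with dummy stage functions standing in for the real `encode`/`decode`/`download` (the argbind wiring is omitted):

```python
# Stage names mirror the deleted module's STAGES list.
STAGES = ["encode", "decode", "download"]

def encode():
    return "encoding"

def decode():
    return "decoding"

def download():
    return "downloading"

def run(stage: str):
    # Validate the stage name, then look up the matching
    # module-level callable by name, as __main__.py did.
    if stage not in STAGES:
        raise ValueError(f"Unknown command: {stage}. Allowed commands are {STAGES}")
    return globals()[stage]()

print(run("decode"))  # decoding
```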
 
dac-codec/dac/__pycache__/__init__.cpython-310.pyc DELETED
Binary file (473 Bytes)
 
dac-codec/dac/__pycache__/__main__.cpython-310.pyc DELETED
Binary file (899 Bytes)
 
dac-codec/dac/compare/__init__.py DELETED
File without changes
dac-codec/dac/compare/encodec.py DELETED
@@ -1,54 +0,0 @@
1
- import torch
2
- from audiotools import AudioSignal
3
- from audiotools.ml import BaseModel
4
- from encodec import EncodecModel
5
-
6
-
7
- class Encodec(BaseModel):
8
- def __init__(self, sample_rate: int = 24000, bandwidth: float = 24.0):
9
- super().__init__()
10
-
11
- if sample_rate == 24000:
12
- self.model = EncodecModel.encodec_model_24khz()
13
- else:
14
- self.model = EncodecModel.encodec_model_48khz()
15
- self.model.set_target_bandwidth(bandwidth)
16
- self.sample_rate = 44100
17
-
18
- def forward(
19
- self,
20
- audio_data: torch.Tensor,
21
- sample_rate: int = 44100,
22
- n_quantizers: int = None,
23
- ):
24
- signal = AudioSignal(audio_data, sample_rate)
25
- signal.resample(self.model.sample_rate)
26
- recons = self.model(signal.audio_data)
27
- recons = AudioSignal(recons, self.model.sample_rate)
28
- recons.resample(sample_rate)
29
- return {"audio": recons.audio_data}
30
-
31
-
32
- if __name__ == "__main__":
33
- import numpy as np
34
- from functools import partial
35
-
36
- model = Encodec()
37
-
38
- for n, m in model.named_modules():
39
- o = m.extra_repr()
40
- p = sum([np.prod(p.size()) for p in m.parameters()])
41
- fn = lambda o, p: o + f" {p/1e6:<.3f}M params."
42
- setattr(m, "extra_repr", partial(fn, o=o, p=p))
43
- print(model)
44
- print("Total # of params: ", sum([np.prod(p.size()) for p in model.parameters()]))
45
-
46
- length = 88200 * 2
47
- x = torch.randn(1, 1, length).to(model.device)
48
- x.requires_grad_(True)
49
- x.retain_grad()
50
-
51
- # Make a forward pass
52
- out = model(x)["audio"]
53
-
54
- print(x.shape, out.shape)
 
dac-codec/dac/model/__init__.py DELETED
@@ -1,4 +0,0 @@
1
- from .base import CodecMixin
2
- from .base import DACFile
3
- from .dac import DAC
4
- from .discriminator import Discriminator
 
dac-codec/dac/model/__pycache__/__init__.cpython-310.pyc DELETED
Binary file (314 Bytes)
 
dac-codec/dac/model/__pycache__/base.cpython-310.pyc DELETED
Binary file (7.22 kB)
 
dac-codec/dac/model/__pycache__/dac.cpython-310.pyc DELETED
Binary file (10.6 kB)
 
dac-codec/dac/model/__pycache__/discriminator.cpython-310.pyc DELETED
Binary file (8.02 kB)
 
dac-codec/dac/model/base.py DELETED
@@ -1,294 +0,0 @@
1
- import math
2
- from dataclasses import dataclass
3
- from pathlib import Path
4
- from typing import Union
5
-
6
- import numpy as np
7
- import torch
8
- import tqdm
9
- from audiotools import AudioSignal
10
- from torch import nn
11
-
12
- SUPPORTED_VERSIONS = ["1.0.0"]
13
-
14
-
15
- @dataclass
16
- class DACFile:
17
- codes: torch.Tensor
18
-
19
- # Metadata
20
- chunk_length: int
21
- original_length: int
22
- input_db: float
23
- channels: int
24
- sample_rate: int
25
- padding: bool
26
- dac_version: str
27
-
28
- def save(self, path):
29
- artifacts = {
30
- "codes": self.codes.numpy().astype(np.uint16),
31
- "metadata": {
32
- "input_db": self.input_db.numpy().astype(np.float32),
33
- "original_length": self.original_length,
34
- "sample_rate": self.sample_rate,
35
- "chunk_length": self.chunk_length,
36
- "channels": self.channels,
37
- "padding": self.padding,
38
- "dac_version": SUPPORTED_VERSIONS[-1],
39
- },
40
- }
41
- path = Path(path).with_suffix(".dac")
42
- with open(path, "wb") as f:
43
- np.save(f, artifacts)
44
- return path
45
-
46
- @classmethod
47
- def load(cls, path):
48
- artifacts = np.load(path, allow_pickle=True)[()]
49
- codes = torch.from_numpy(artifacts["codes"].astype(int))
50
- if artifacts["metadata"].get("dac_version", None) not in SUPPORTED_VERSIONS:
51
- raise RuntimeError(
52
- f"Given file {path} can't be loaded with this version of descript-audio-codec."
53
- )
54
- return cls(codes=codes, **artifacts["metadata"])
55
-
56
-
57
- class CodecMixin:
58
- @property
59
- def padding(self):
60
- if not hasattr(self, "_padding"):
61
- self._padding = True
62
- return self._padding
63
-
64
- @padding.setter
65
- def padding(self, value):
66
- assert isinstance(value, bool)
67
-
68
- layers = [
69
- l for l in self.modules() if isinstance(l, (nn.Conv1d, nn.ConvTranspose1d))
70
- ]
71
-
72
- for layer in layers:
73
- if value:
74
- if hasattr(layer, "original_padding"):
75
- layer.padding = layer.original_padding
76
- else:
77
- layer.original_padding = layer.padding
78
- layer.padding = tuple(0 for _ in range(len(layer.padding)))
79
-
80
- self._padding = value
81
-
82
- def get_delay(self):
83
- # Any number works here, delay is invariant to input length
84
- l_out = self.get_output_length(0)
85
- L = l_out
86
-
87
- layers = []
88
- for layer in self.modules():
89
- if isinstance(layer, (nn.Conv1d, nn.ConvTranspose1d)):
90
- layers.append(layer)
91
-
92
- for layer in reversed(layers):
93
- d = layer.dilation[0]
94
- k = layer.kernel_size[0]
95
- s = layer.stride[0]
96
-
97
- if isinstance(layer, nn.ConvTranspose1d):
98
- L = ((L - d * (k - 1) - 1) / s) + 1
99
- elif isinstance(layer, nn.Conv1d):
100
- L = (L - 1) * s + d * (k - 1) + 1
101
-
102
- L = math.ceil(L)
103
-
104
- l_in = L
105
-
106
- return (l_in - l_out) // 2
107
-
108
- def get_output_length(self, input_length):
109
- L = input_length
110
- # Calculate output length
111
- for layer in self.modules():
112
- if isinstance(layer, (nn.Conv1d, nn.ConvTranspose1d)):
113
- d = layer.dilation[0]
114
- k = layer.kernel_size[0]
115
- s = layer.stride[0]
116
-
117
- if isinstance(layer, nn.Conv1d):
118
- L = ((L - d * (k - 1) - 1) / s) + 1
119
- elif isinstance(layer, nn.ConvTranspose1d):
120
- L = (L - 1) * s + d * (k - 1) + 1
121
-
122
- L = math.floor(L)
123
- return L
124
-
125
- @torch.no_grad()
126
- def compress(
127
- self,
128
- audio_path_or_signal: Union[str, Path, AudioSignal],
129
- win_duration: float = 1.0,
130
- verbose: bool = False,
131
- normalize_db: float = -16,
132
- n_quantizers: int = None,
133
- ) -> DACFile:
134
- """Processes an audio signal from a file or AudioSignal object into
135
- discrete codes. This function processes the signal in short windows,
136
- using constant GPU memory.
137
-
138
- Parameters
139
- ----------
140
- audio_path_or_signal : Union[str, Path, AudioSignal]
141
- audio signal to compress
142
- win_duration : float, optional
143
- window duration in seconds, by default 1.0
144
- verbose : bool, optional
145
- by default False
146
- normalize_db : float, optional
147
- normalize db, by default -16
148
-
149
- Returns
150
- -------
151
- DACFile
152
- Object containing compressed codes and metadata
153
- required for decompression
154
- """
155
- audio_signal = audio_path_or_signal
156
- if isinstance(audio_signal, (str, Path)):
157
- audio_signal = AudioSignal.load_from_file_with_ffmpeg(str(audio_signal))
158
-
159
- self.eval()
160
- original_padding = self.padding
161
- original_device = audio_signal.device
162
-
163
- audio_signal = audio_signal.clone()
164
- original_sr = audio_signal.sample_rate
165
-
166
- resample_fn = audio_signal.resample
167
- loudness_fn = audio_signal.loudness
168
-
169
- # If audio is > 10 hours long, use the ffmpeg versions
170
- if audio_signal.signal_duration >= 10 * 60 * 60:
171
- resample_fn = audio_signal.ffmpeg_resample
172
- loudness_fn = audio_signal.ffmpeg_loudness
173
-
174
- original_length = audio_signal.signal_length
175
- resample_fn(self.sample_rate)
176
- input_db = loudness_fn()
177
-
178
- if normalize_db is not None:
179
- audio_signal.normalize(normalize_db)
180
- audio_signal.ensure_max_of_audio()
181
-
182
- nb, nac, nt = audio_signal.audio_data.shape
183
- audio_signal.audio_data = audio_signal.audio_data.reshape(nb * nac, 1, nt)
184
- win_duration = (
185
- audio_signal.signal_duration if win_duration is None else win_duration
186
- )
187
-
188
- if audio_signal.signal_duration <= win_duration:
189
- # Unchunked compression (used if signal length < win duration)
190
- self.padding = True
191
- n_samples = nt
192
- hop = nt
193
- else:
194
- # Chunked inference
195
- self.padding = False
196
- # Zero-pad signal on either side by the delay
197
- audio_signal.zero_pad(self.delay, self.delay)
198
- n_samples = int(win_duration * self.sample_rate)
199
- # Round n_samples to nearest hop length multiple
200
- n_samples = int(math.ceil(n_samples / self.hop_length) * self.hop_length)
201
- hop = self.get_output_length(n_samples)
202
-
203
- codes = []
204
- range_fn = range if not verbose else tqdm.trange
205
-
206
- for i in range_fn(0, nt, hop):
207
- x = audio_signal[..., i : i + n_samples]
208
- x = x.zero_pad(0, max(0, n_samples - x.shape[-1]))
209
-
210
- audio_data = x.audio_data.to(self.device)
211
- audio_data = self.preprocess(audio_data, self.sample_rate)
212
- _, c, _, _, _ = self.encode(audio_data, n_quantizers)
213
- codes.append(c.to(original_device))
214
- chunk_length = c.shape[-1]
215
-
216
- codes = torch.cat(codes, dim=-1)
217
-
218
- dac_file = DACFile(
219
- codes=codes,
220
- chunk_length=chunk_length,
221
- original_length=original_length,
222
- input_db=input_db,
223
- channels=nac,
224
- sample_rate=original_sr,
225
- padding=self.padding,
226
- dac_version=SUPPORTED_VERSIONS[-1],
227
- )
228
-
229
- if n_quantizers is not None:
230
- codes = codes[:, :n_quantizers, :]
231
-
232
- self.padding = original_padding
233
- return dac_file
234
-
235
- @torch.no_grad()
236
- def decompress(
237
- self,
238
- obj: Union[str, Path, DACFile],
239
- verbose: bool = False,
240
- ) -> AudioSignal:
241
- """Reconstruct audio from a given .dac file
242
-
243
- Parameters
244
- ----------
245
- obj : Union[str, Path, DACFile]
246
- .dac file location or corresponding DACFile object.
247
- verbose : bool, optional
248
- Prints progress if True, by default False
249
-
250
- Returns
251
- -------
252
- AudioSignal
253
- Object with the reconstructed audio
254
- """
255
- self.eval()
256
- if isinstance(obj, (str, Path)):
257
- obj = DACFile.load(obj)
258
-
259
- original_padding = self.padding
260
- self.padding = obj.padding
261
-
262
- range_fn = range if not verbose else tqdm.trange
263
- codes = obj.codes
264
- original_device = codes.device
265
- chunk_length = obj.chunk_length
266
- recons = []
267
-
268
- for i in range_fn(0, codes.shape[-1], chunk_length):
269
- c = codes[..., i : i + chunk_length].to(self.device)
270
- z = self.quantizer.from_codes(c)[0]
271
- r = self.decode(z)
272
- recons.append(r.to(original_device))
273
-
274
- recons = torch.cat(recons, dim=-1)
275
- recons = AudioSignal(recons, self.sample_rate)
276
-
277
- resample_fn = recons.resample
278
- loudness_fn = recons.loudness
279
-
280
- # If audio is > 10 hours long, use the ffmpeg versions
281
- if recons.signal_duration >= 10 * 60 * 60:
282
- resample_fn = recons.ffmpeg_resample
283
- loudness_fn = recons.ffmpeg_loudness
284
-
285
- recons.normalize(obj.input_db)
286
- resample_fn(obj.sample_rate)
287
- recons = recons[..., : obj.original_length]
288
- loudness_fn()
289
- recons.audio_data = recons.audio_data.reshape(
290
- -1, obj.channels, obj.original_length
291
- )
292
-
293
- self.padding = original_padding
294
- return recons
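`get_output_length` and `get_delay` in the deleted base.py walk the conv stack with the standard unpadded length formulas. A small sketch of those formulas, using illustrative (kernel, stride) values rather than the model's actual layers:

```python
import math

def conv1d_out_len(L: int, k: int, s: int, d: int = 1) -> int:
    # Unpadded Conv1d: L' = floor((L - d*(k - 1) - 1) / s + 1)
    return math.floor((L - d * (k - 1) - 1) / s + 1)

def conv_transpose1d_out_len(L: int, k: int, s: int, d: int = 1) -> int:
    # Unpadded ConvTranspose1d: L' = (L - 1) * s + d * (k - 1) + 1
    return (L - 1) * s + d * (k - 1) + 1

L = 44100
L = conv1d_out_len(L, k=7, s=1)            # 44094
L = conv1d_out_len(L, k=4, s=2)            # 22046
L = conv_transpose1d_out_len(L, k=4, s=2)  # back to 44094
print(L)
```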
 
dac-codec/dac/model/dac.py DELETED
@@ -1,364 +0,0 @@
1
- import math
2
- from typing import List
3
- from typing import Union
4
-
5
- import numpy as np
6
- import torch
7
- from audiotools import AudioSignal
8
- from audiotools.ml import BaseModel
9
- from torch import nn
10
-
11
- from .base import CodecMixin
12
- from dac.nn.layers import Snake1d
13
- from dac.nn.layers import WNConv1d
14
- from dac.nn.layers import WNConvTranspose1d
15
- from dac.nn.quantize import ResidualVectorQuantize
16
-
17
-
18
- def init_weights(m):
19
- if isinstance(m, nn.Conv1d):
20
- nn.init.trunc_normal_(m.weight, std=0.02)
21
- nn.init.constant_(m.bias, 0)
22
-
23
-
24
- class ResidualUnit(nn.Module):
25
- def __init__(self, dim: int = 16, dilation: int = 1):
26
- super().__init__()
27
- pad = ((7 - 1) * dilation) // 2
28
- self.block = nn.Sequential(
29
- Snake1d(dim),
30
- WNConv1d(dim, dim, kernel_size=7, dilation=dilation, padding=pad),
31
- Snake1d(dim),
32
- WNConv1d(dim, dim, kernel_size=1),
33
- )
34
-
35
- def forward(self, x):
36
- y = self.block(x)
37
- pad = (x.shape[-1] - y.shape[-1]) // 2
38
- if pad > 0:
39
- x = x[..., pad:-pad]
40
- return x + y
41
-
42
-
43
- class EncoderBlock(nn.Module):
44
- def __init__(self, dim: int = 16, stride: int = 1):
45
- super().__init__()
46
- self.block = nn.Sequential(
47
- ResidualUnit(dim // 2, dilation=1),
48
- ResidualUnit(dim // 2, dilation=3),
49
- ResidualUnit(dim // 2, dilation=9),
50
- Snake1d(dim // 2),
51
- WNConv1d(
52
- dim // 2,
53
- dim,
54
- kernel_size=2 * stride,
55
- stride=stride,
56
- padding=math.ceil(stride / 2),
57
- ),
58
- )
59
-
60
- def forward(self, x):
61
- return self.block(x)
62
-
63
-
64
- class Encoder(nn.Module):
65
- def __init__(
66
- self,
67
- d_model: int = 64,
68
- strides: list = [2, 4, 8, 8],
69
- d_latent: int = 64,
70
- ):
71
- super().__init__()
72
- # Create first convolution
73
- self.block = [WNConv1d(1, d_model, kernel_size=7, padding=3)]
74
-
75
- # Create EncoderBlocks that double channels as they downsample by `stride`
76
- for stride in strides:
77
- d_model *= 2
78
- self.block += [EncoderBlock(d_model, stride=stride)]
79
-
80
- # Create last convolution
81
- self.block += [
82
- Snake1d(d_model),
83
- WNConv1d(d_model, d_latent, kernel_size=3, padding=1),
84
- ]
85
-
86
- # Wrap block into nn.Sequential
87
- self.block = nn.Sequential(*self.block)
88
- self.enc_dim = d_model
89
-
90
- def forward(self, x):
91
- return self.block(x)
92
-
93
-
94
- class DecoderBlock(nn.Module):
95
- def __init__(self, input_dim: int = 16, output_dim: int = 8, stride: int = 1):
96
- super().__init__()
97
- self.block = nn.Sequential(
98
- Snake1d(input_dim),
99
- WNConvTranspose1d(
100
- input_dim,
101
- output_dim,
102
- kernel_size=2 * stride,
103
- stride=stride,
104
- padding=math.ceil(stride / 2),
105
- ),
106
- ResidualUnit(output_dim, dilation=1),
107
- ResidualUnit(output_dim, dilation=3),
108
- ResidualUnit(output_dim, dilation=9),
109
- )
110
-
111
- def forward(self, x):
112
- return self.block(x)
113
-
114
-
115
- class Decoder(nn.Module):
116
- def __init__(
117
- self,
118
- input_channel,
119
- channels,
120
- rates,
121
- d_out: int = 1,
122
- ):
123
- super().__init__()
124
-
125
- # Add first conv layer
126
- layers = [WNConv1d(input_channel, channels, kernel_size=7, padding=3)]
127
-
128
- # Add upsampling + MRF blocks
129
- for i, stride in enumerate(rates):
130
- input_dim = channels // 2**i
131
- output_dim = channels // 2 ** (i + 1)
132
- layers += [DecoderBlock(input_dim, output_dim, stride)]
133
-
134
- # Add final conv layer
135
- layers += [
136
- Snake1d(output_dim),
137
- WNConv1d(output_dim, d_out, kernel_size=7, padding=3),
138
- nn.Tanh(),
139
- ]
140
-
141
- self.model = nn.Sequential(*layers)
142
-
143
- def forward(self, x):
144
- return self.model(x)
145
-
146
-
147
- class DAC(BaseModel, CodecMixin):
148
- def __init__(
149
- self,
150
- encoder_dim: int = 64,
151
- encoder_rates: List[int] = [2, 4, 8, 8],
152
- latent_dim: int = None,
153
- decoder_dim: int = 1536,
154
- decoder_rates: List[int] = [8, 8, 4, 2],
155
- n_codebooks: int = 9,
156
- codebook_size: int = 1024,
157
- codebook_dim: Union[int, list] = 8,
158
- quantizer_dropout: bool = False,
159
- sample_rate: int = 44100,
160
- ):
161
- super().__init__()
162
-
163
- self.encoder_dim = encoder_dim
164
- self.encoder_rates = encoder_rates
165
- self.decoder_dim = decoder_dim
166
- self.decoder_rates = decoder_rates
167
- self.sample_rate = sample_rate
168
-
169
- if latent_dim is None:
170
- latent_dim = encoder_dim * (2 ** len(encoder_rates))
171
-
172
- self.latent_dim = latent_dim
173
-
174
- self.hop_length = np.prod(encoder_rates)
175
- self.encoder = Encoder(encoder_dim, encoder_rates, latent_dim)
176
-
177
- self.n_codebooks = n_codebooks
178
- self.codebook_size = codebook_size
179
- self.codebook_dim = codebook_dim
180
- self.quantizer = ResidualVectorQuantize(
181
- input_dim=latent_dim,
182
- n_codebooks=n_codebooks,
183
- codebook_size=codebook_size,
184
- codebook_dim=codebook_dim,
185
- quantizer_dropout=quantizer_dropout,
186
- )
187
-
188
- self.decoder = Decoder(
189
- latent_dim,
190
- decoder_dim,
191
- decoder_rates,
192
- )
193
- self.sample_rate = sample_rate
194
- self.apply(init_weights)
195
-
196
- self.delay = self.get_delay()
197
-
198
- def preprocess(self, audio_data, sample_rate):
199
- if sample_rate is None:
200
- sample_rate = self.sample_rate
201
- assert sample_rate == self.sample_rate
202
-
203
- length = audio_data.shape[-1]
204
- right_pad = math.ceil(length / self.hop_length) * self.hop_length - length
205
- audio_data = nn.functional.pad(audio_data, (0, right_pad))
206
-
207
- return audio_data
208
-
209
- def encode(
210
- self,
211
- audio_data: torch.Tensor,
212
- n_quantizers: int = None,
213
- ):
214
- """Encode given audio data and return quantized latent codes
215
-
216
- Parameters
217
- ----------
218
- audio_data : Tensor[B x 1 x T]
219
- Audio data to encode
220
- n_quantizers : int, optional
221
- Number of quantizers to use, by default None
222
- If None, all quantizers are used.
223
-
224
- Returns
225
- -------
226
- tuple
227
- A tuple with the following elements:
228
- "z" : Tensor[B x D x T]
229
- Quantized continuous representation of input
230
- "codes" : Tensor[B x N x T]
231
- Codebook indices for each codebook
232
- (quantized discrete representation of input)
233
- "latents" : Tensor[B x N*D x T]
234
- Projected latents (continuous representation of input before quantization)
235
- "vq/commitment_loss" : Tensor[1]
236
- Commitment loss to train encoder to predict vectors closer to codebook
237
- entries
238
- "vq/codebook_loss" : Tensor[1]
239
- Codebook loss to update the codebook
240
- "length" : int
241
- Number of samples in input audio
242
- """
243
- z = self.encoder(audio_data)
244
- z, codes, latents, commitment_loss, codebook_loss = self.quantizer(
245
- z, n_quantizers
246
- )
247
- return z, codes, latents, commitment_loss, codebook_loss
248
-
249
- def decode(self, z: torch.Tensor):
250
- """Decode given latent codes and return audio data
251
-
252
- Parameters
253
- ----------
254
- z : Tensor[B x D x T]
255
- Quantized continuous representation of input
256
- length : int, optional
257
- Number of samples in output audio, by default None
258
-
259
- Returns
260
- -------
261
- dict
262
- A dictionary with the following keys:
263
- "audio" : Tensor[B x 1 x length]
264
- Decoded audio data.
265
- """
266
- return self.decoder(z)
267
-
268
- def forward(
269
- self,
270
- audio_data: torch.Tensor,
271
- sample_rate: int = None,
272
- n_quantizers: int = None,
273
- ):
274
- """Model forward pass
275
-
276
- Parameters
277
- ----------
278
- audio_data : Tensor[B x 1 x T]
279
- Audio data to encode
280
- sample_rate : int, optional
281
- Sample rate of audio data in Hz, by default None
282
- If None, defaults to `self.sample_rate`
283
- n_quantizers : int, optional
284
- Number of quantizers to use, by default None.
285
- If None, all quantizers are used.
286
-
287
- Returns
288
- -------
289
- dict
290
- A dictionary with the following keys:
291
- "z" : Tensor[B x D x T]
292
- Quantized continuous representation of input
293
- "codes" : Tensor[B x N x T]
294
- Codebook indices for each codebook
295
- (quantized discrete representation of input)
296
- "latents" : Tensor[B x N*D x T]
297
- Projected latents (continuous representation of input before quantization)
298
- "vq/commitment_loss" : Tensor[1]
299
- Commitment loss to train encoder to predict vectors closer to codebook
300
- entries
301
- "vq/codebook_loss" : Tensor[1]
302
- Codebook loss to update the codebook
303
- "length" : int
304
- Number of samples in input audio
305
- "audio" : Tensor[B x 1 x length]
306
- Decoded audio data.
307
- """
308
- length = audio_data.shape[-1]
309
- audio_data = self.preprocess(audio_data, sample_rate)
310
- z, codes, latents, commitment_loss, codebook_loss = self.encode(
311
- audio_data, n_quantizers
312
- )
313
-
314
- x = self.decode(z)
315
- return {
316
- "audio": x[..., :length],
317
- "z": z,
318
- "codes": codes,
319
- "latents": latents,
320
- "vq/commitment_loss": commitment_loss,
321
- "vq/codebook_loss": codebook_loss,
322
- }
323
-
324
-
325
- if __name__ == "__main__":
326
- import numpy as np
327
- from functools import partial
328
-
329
- model = DAC().to("cpu")
330
-
331
- for n, m in model.named_modules():
332
- o = m.extra_repr()
333
- p = sum([np.prod(p.size()) for p in m.parameters()])
334
- fn = lambda o, p: o + f" {p/1e6:<.3f}M params."
335
- setattr(m, "extra_repr", partial(fn, o=o, p=p))
336
- print(model)
337
- print("Total # of params: ", sum([np.prod(p.size()) for p in model.parameters()]))
338
-
339
- length = 88200 * 2
340
- x = torch.randn(1, 1, length).to(model.device)
341
- x.requires_grad_(True)
342
- x.retain_grad()
343
-
344
- # Make a forward pass
345
- out = model(x)["audio"]
346
- print("Input shape:", x.shape)
347
- print("Output shape:", out.shape)
348
-
349
- # Create gradient variable
350
- grad = torch.zeros_like(out)
351
- grad[:, :, grad.shape[-1] // 2] = 1
352
-
353
- # Make a backward pass
354
- out.backward(grad)
355
-
356
- # Check non-zero values
357
- gradmap = x.grad.squeeze(0)
358
- gradmap = (gradmap != 0).sum(0) # sum across features
359
- rf = (gradmap != 0).sum()
360
-
361
- print(f"Receptive field: {rf.item()}")
362
-
363
- x = AudioSignal(torch.randn(1, 1, 44100 * 60), 44100)
364
- model.decompress(model.compress(x, verbose=True), verbose=True)
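Two derived quantities follow from the deleted dac.py defaults: `hop_length` is the product of the encoder strides, and the default `latent_dim` follows from channel doubling at each encoder block. A quick check:

```python
import math

# Defaults from the deleted DAC constructor:
# encoder_dim=64, encoder_rates=[2, 4, 8, 8].
encoder_dim = 64
encoder_rates = [2, 4, 8, 8]

# One latent frame per hop_length input samples (product of encoder strides).
hop_length = math.prod(encoder_rates)

# Default latent_dim: channels double at each of the len(encoder_rates) blocks.
latent_dim = encoder_dim * (2 ** len(encoder_rates))

print(hop_length, latent_dim)  # 512 1024
```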
 
dac-codec/dac/model/discriminator.py DELETED
@@ -1,228 +0,0 @@
1
- import torch
2
- import torch.nn as nn
3
- import torch.nn.functional as F
4
- from audiotools import AudioSignal
5
- from audiotools import ml
6
- from audiotools import STFTParams
7
- from einops import rearrange
8
- from torch.nn.utils import weight_norm
9
-
10
-
11
- def WNConv1d(*args, **kwargs):
12
- act = kwargs.pop("act", True)
13
- conv = weight_norm(nn.Conv1d(*args, **kwargs))
14
- if not act:
15
- return conv
16
- return nn.Sequential(conv, nn.LeakyReLU(0.1))
17
-
18
-
19
- def WNConv2d(*args, **kwargs):
20
- act = kwargs.pop("act", True)
21
- conv = weight_norm(nn.Conv2d(*args, **kwargs))
22
- if not act:
23
- return conv
24
- return nn.Sequential(conv, nn.LeakyReLU(0.1))
25
-
26
-
27
- class MPD(nn.Module):
28
- def __init__(self, period):
29
- super().__init__()
30
- self.period = period
31
- self.convs = nn.ModuleList(
32
- [
33
- WNConv2d(1, 32, (5, 1), (3, 1), padding=(2, 0)),
34
- WNConv2d(32, 128, (5, 1), (3, 1), padding=(2, 0)),
35
- WNConv2d(128, 512, (5, 1), (3, 1), padding=(2, 0)),
36
- WNConv2d(512, 1024, (5, 1), (3, 1), padding=(2, 0)),
37
- WNConv2d(1024, 1024, (5, 1), 1, padding=(2, 0)),
38
- ]
39
- )
40
- self.conv_post = WNConv2d(
41
- 1024, 1, kernel_size=(3, 1), padding=(1, 0), act=False
42
- )
43
-
44
- def pad_to_period(self, x):
45
- t = x.shape[-1]
46
- x = F.pad(x, (0, self.period - t % self.period), mode="reflect")
47
- return x
48
-
49
- def forward(self, x):
50
- fmap = []
51
-
52
- x = self.pad_to_period(x)
53
- x = rearrange(x, "b c (l p) -> b c l p", p=self.period)
54
-
55
- for layer in self.convs:
56
- x = layer(x)
57
- fmap.append(x)
58
-
59
- x = self.conv_post(x)
60
- fmap.append(x)
61
-
62
- return fmap
63
-
64
-
65
- class MSD(nn.Module):
66
- def __init__(self, rate: int = 1, sample_rate: int = 44100):
67
- super().__init__()
68
- self.convs = nn.ModuleList(
69
- [
70
- WNConv1d(1, 16, 15, 1, padding=7),
71
- WNConv1d(16, 64, 41, 4, groups=4, padding=20),
72
- WNConv1d(64, 256, 41, 4, groups=16, padding=20),
73
- WNConv1d(256, 1024, 41, 4, groups=64, padding=20),
74
- WNConv1d(1024, 1024, 41, 4, groups=256, padding=20),
75
- WNConv1d(1024, 1024, 5, 1, padding=2),
76
- ]
77
- )
78
- self.conv_post = WNConv1d(1024, 1, 3, 1, padding=1, act=False)
79
- self.sample_rate = sample_rate
80
- self.rate = rate
81
-
82
- def forward(self, x):
83
- x = AudioSignal(x, self.sample_rate)
84
- x.resample(self.sample_rate // self.rate)
85
- x = x.audio_data
86
-
87
- fmap = []
88
-
89
- for l in self.convs:
90
- x = l(x)
91
- fmap.append(x)
92
- x = self.conv_post(x)
93
- fmap.append(x)
94
-
95
- return fmap
96
-
97
-
98
- BANDS = [(0.0, 0.1), (0.1, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
99
-
100
-
101
- class MRD(nn.Module):
102
- def __init__(
103
- self,
104
- window_length: int,
105
- hop_factor: float = 0.25,
106
- sample_rate: int = 44100,
107
- bands: list = BANDS,
108
- ):
109
- """Complex multi-band spectrogram discriminator.
110
- Parameters
111
- ----------
112
- window_length : int
113
- Window length of STFT.
114
- hop_factor : float, optional
115
- Hop factor of the STFT (hop length = ``hop_factor * window_length``), by default 0.25.
116
- sample_rate : int, optional
117
- Sampling rate of audio in Hz, by default 44100
118
- bands : list, optional
119
- Bands to run discriminator over.
120
- """
121
- super().__init__()
122
-
123
- self.window_length = window_length
124
- self.hop_factor = hop_factor
125
- self.sample_rate = sample_rate
126
- self.stft_params = STFTParams(
127
- window_length=window_length,
128
- hop_length=int(window_length * hop_factor),
129
- match_stride=True,
130
- )
131
-
132
- n_fft = window_length // 2 + 1
133
- bands = [(int(b[0] * n_fft), int(b[1] * n_fft)) for b in bands]
134
- self.bands = bands
135
-
136
- ch = 32
137
- convs = lambda: nn.ModuleList(
138
- [
139
- WNConv2d(2, ch, (3, 9), (1, 1), padding=(1, 4)),
140
- WNConv2d(ch, ch, (3, 9), (1, 2), padding=(1, 4)),
141
- WNConv2d(ch, ch, (3, 9), (1, 2), padding=(1, 4)),
142
- WNConv2d(ch, ch, (3, 9), (1, 2), padding=(1, 4)),
143
- WNConv2d(ch, ch, (3, 3), (1, 1), padding=(1, 1)),
144
- ]
145
- )
146
- self.band_convs = nn.ModuleList([convs() for _ in range(len(self.bands))])
147
- self.conv_post = WNConv2d(ch, 1, (3, 3), (1, 1), padding=(1, 1), act=False)
148
-
149
- def spectrogram(self, x):
150
- x = AudioSignal(x, self.sample_rate, stft_params=self.stft_params)
151
- x = torch.view_as_real(x.stft())
152
- x = rearrange(x, "b 1 f t c -> (b 1) c t f")
153
- # Split into bands
154
- x_bands = [x[..., b[0] : b[1]] for b in self.bands]
155
- return x_bands
156
-
157
- def forward(self, x):
158
- x_bands = self.spectrogram(x)
159
- fmap = []
160
-
161
- x = []
162
- for band, stack in zip(x_bands, self.band_convs):
163
- for layer in stack:
164
- band = layer(band)
165
- fmap.append(band)
166
- x.append(band)
167
-
168
- x = torch.cat(x, dim=-1)
169
- x = self.conv_post(x)
170
- fmap.append(x)
171
-
172
- return fmap
173
-
174
-
175
- class Discriminator(ml.BaseModel):
176
- def __init__(
177
- self,
178
- rates: list = [],
179
- periods: list = [2, 3, 5, 7, 11],
180
- fft_sizes: list = [2048, 1024, 512],
181
- sample_rate: int = 44100,
182
- bands: list = BANDS,
183
- ):
184
- """Discriminator that combines multiple discriminators.
185
-
186
- Parameters
187
- ----------
188
- rates : list, optional
189
- sampling rates (in Hz) to run MSD at, by default []
190
- If empty, MSD is not used.
191
- periods : list, optional
192
- periods (of samples) to run MPD at, by default [2, 3, 5, 7, 11]
193
- fft_sizes : list, optional
194
- Window sizes of the FFT to run MRD at, by default [2048, 1024, 512]
195
- sample_rate : int, optional
196
- Sampling rate of audio in Hz, by default 44100
197
- bands : list, optional
198
- Bands to run MRD at, by default `BANDS`
199
- """
200
- super().__init__()
201
- discs = []
202
- discs += [MPD(p) for p in periods]
203
- discs += [MSD(r, sample_rate=sample_rate) for r in rates]
204
- discs += [MRD(f, sample_rate=sample_rate, bands=bands) for f in fft_sizes]
205
- self.discriminators = nn.ModuleList(discs)
206
-
207
- def preprocess(self, y):
208
- # Remove DC offset
209
- y = y - y.mean(dim=-1, keepdims=True)
210
- # Peak normalize the volume of input audio
211
- y = 0.8 * y / (y.abs().max(dim=-1, keepdim=True)[0] + 1e-9)
212
- return y
213
-
214
- def forward(self, x):
215
- x = self.preprocess(x)
216
- fmaps = [d(x) for d in self.discriminators]
217
- return fmaps
218
-
219
-
220
- if __name__ == "__main__":
221
- disc = Discriminator()
222
- x = torch.zeros(1, 1, 44100)
223
- results = disc(x)
224
- for i, result in enumerate(results):
225
- print(f"disc{i}")
226
- for i, r in enumerate(result):
227
- print(r.shape, r.mean(), r.min(), r.max())
228
- print()
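Two small pieces of the deleted discriminator file are self-contained enough to restate outside torch: the MRD band computation, which maps fractional band edges onto STFT bin ranges, and the DC-removal plus peak-normalization step from `Discriminator.preprocess`. The sketch below is pure Python for illustration only; the function names `band_bins` and `preprocess` are chosen here, not taken from the codebase, and the math mirrors the diff above.

```python
# Fractional band edges used by MRD in the deleted file.
BANDS = [(0.0, 0.1), (0.1, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]


def band_bins(window_length, bands=BANDS):
    # Mirrors MRD.__init__: fractional edges -> [start, end) STFT bin ranges.
    n_fft = window_length // 2 + 1
    return [(int(lo * n_fft), int(hi * n_fft)) for lo, hi in bands]


def preprocess(y, eps=1e-9):
    # Mirrors Discriminator.preprocess for a mono list of samples:
    # remove the DC offset, then peak-normalize to 0.8 full scale.
    mean = sum(y) / len(y)
    centered = [s - mean for s in y]
    peak = max(abs(s) for s in centered)
    return [0.8 * s / (peak + eps) for s in centered]


print(band_bins(2048))            # bin ranges for a 2048-sample STFT window
print(preprocess([1.0, 2.0, 3.0]))
```

For a 2048-sample window the last range ends at 1025 bins (`2048 // 2 + 1`), so the five bands together cover the full spectrum without overlap.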
dac-codec/dac/nn/__init__.py DELETED
@@ -1,3 +0,0 @@
- from . import layers
- from . import loss
- from . import quantize

dac-codec/dac/nn/__pycache__/__init__.cpython-310.pyc DELETED
Binary file (249 Bytes)