niobures committed on
Commit 2c7e138 · verified · 1 Parent(s): c17d5c1

VoiceSculptor (models: X-Codec-2.0-25TPS-24k)

Files changed (25)
  1. models/xcodec2/code/X-Codec-2.0-25TPS-24k.zip +3 -0
  2. models/xcodec2/models/xcodec2-25TPS-24k/.gitattributes +35 -0
  3. models/xcodec2/models/xcodec2-25TPS-24k/259041.mp3 +0 -0
  4. models/xcodec2/models/xcodec2-25TPS-24k/README.md +67 -0
  5. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D1045000.ckpt +3 -0
  6. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D1150000.ckpt +3 -0
  7. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D1407500.ckpt +3 -0
  8. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D1712500.ckpt +3 -0
  9. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2005000.ckpt +3 -0
  10. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2325000.ckpt +3 -0
  11. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2702500.ckpt +3 -0
  12. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2902500.ckpt +3 -0
  13. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2997500.ckpt +3 -0
  14. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D3000000.ckpt +3 -0
  15. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D592500.ckpt +3 -0
  16. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D762500.ckpt +3 -0
  17. models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D880000.ckpt +3 -0
  18. models/xcodec2/models/xcodec2-25TPS-24k/config.json +15 -0
  19. models/xcodec2/models/xcodec2-25TPS-24k/configuration_bigcodec.py +19 -0
  20. models/xcodec2/models/xcodec2-25TPS-24k/model.safetensors +3 -0
  21. models/xcodec2/models/xcodec2-25TPS-24k/modeling_xcodec2.py +165 -0
  22. models/xcodec2/models/xcodec2-25TPS-24k/source.txt +1 -0
  23. models/xcodec2/models/xcodec2-25TPS-24k/test-set/sample-common-voice-17-test-set.zip +3 -0
  24. models/xcodec2/models/xcodec2-25TPS-24k/test-set/sample-common-voice-22-test-set.json +0 -0
  25. models/xcodec2/models/xcodec2-25TPS-24k/test-set/sample-common-voice-22-test-set.zip +3 -0
models/xcodec2/code/X-Codec-2.0-25TPS-24k.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:515c72147c6c44af49408f3cbb0ae318a1b2c8f08ce4eec6b0b5ddfd584620e8
+ size 13680614
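The zip above is stored as a Git LFS pointer (version, oid, size) rather than as the raw bytes. A minimal sketch of reading such a pointer, using a hypothetical `parse_lfs_pointer` helper:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# the pointer committed above for X-Codec-2.0-25TPS-24k.zip
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:515c72147c6c44af49408f3cbb0ae318a1b2c8f08ce4eec6b0b5ddfd584620e8
size 13680614"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # size in bytes of the real object: 13680614
```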
models/xcodec2/models/xcodec2-25TPS-24k/.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
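The `.gitattributes` rules above decide which of this commit's files are routed through LFS by glob-matching the file's basename. A quick sketch using Python's `fnmatch` against a subset of those patterns:

```python
import fnmatch
import posixpath

# subset of the LFS rules from the .gitattributes above
lfs_patterns = ["*.ckpt", "*.safetensors", "*.zip"]

def goes_to_lfs(path: str) -> bool:
    """True if any slash-free LFS pattern matches the file's basename."""
    name = posixpath.basename(path)
    return any(fnmatch.fnmatch(name, pat) for pat in lfs_patterns)

print(goes_to_lfs("checkpoint/epoch3D1045000.ckpt"))  # True
print(goes_to_lfs("README.md"))                       # False
```

This is why the checkpoints, `model.safetensors`, and the test-set zips in this commit appear as small pointer files while `README.md` and the `.py` sources are stored as plain text.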
models/xcodec2/models/xcodec2-25TPS-24k/259041.mp3 ADDED
Binary file (30.2 kB)
models/xcodec2/models/xcodec2-25TPS-24k/README.md ADDED
@@ -0,0 +1,67 @@
+ ---
+ base_model:
+ - HKUSTAudio/xcodec2
+ datasets:
+ - malaysia-ai/common_voice_17_0
+ - mesolitica/Malaysian-STT-Whisper-Stage2
+ - malaysia-ai/Multilingual-TTS
+ - mesolitica/Malaysian-Emilia-v2
+ library_name: transformers
+ pipeline_tag: audio-to-audio
+ ---
+
+ # xcodec2-25TPS-24k
+
+ This repository contains the improved X-Codec-2.0 model described in the paper [Improving X-Codec-2.0 for Multi-Lingual Speech: 25 Hz Latent Rate and 24 kHz Sampling](https://huggingface.co/papers/2601.20185).
+
+ [![Preprint](https://img.shields.io/badge/technical-preprint-<COLOR>.svg)](https://github.com/Scicom-AI-Enterprise-Organization/X-Codec-2.0-25TPS-24k/blob/main/preprint/neurips_2023.pdf)
+
+ This model improves https://huggingface.co/HKUSTAudio/xcodec2 from 50 TPS to 25 TPS and upscales the output to a 24 kHz sample rate.
+
+ WandB logs are at https://wandb.ai/huseinzol05/xcodec2-24k-25tps, and we also pushed all checkpoints to [checkpoint](checkpoint).
+
+ ## Dataset
+
+ 1. https://huggingface.co/datasets/malaysia-ai/common_voice_17_0, train set only.
+ 2. https://huggingface.co/datasets/mesolitica/Malaysian-STT-Whisper-Stage2, except `noise` and `audioset_0.5s`.
+ 3. https://huggingface.co/datasets/malaysia-ai/Multilingual-TTS, specific commit [2421a13e07226d96ac7009d5327d96a84672768c](https://huggingface.co/datasets/malaysia-ai/Multilingual-TTS/commit/2421a13e07226d96ac7009d5327d96a84672768c), except `cml-tts` and `libritts_r_filtered`.
+ 4. https://huggingface.co/datasets/mesolitica/Malaysian-Emilia-v2, only `sg_podcast` and `malaysian_podcast`.
+
+ ## How to use
+
+ 1. Git clone,
+
+ ```bash
+ git clone https://github.com/Scicom-AI-Enterprise-Organization/X-Codec-2.0-25TPS-24k
+ cd X-Codec-2.0-25TPS-24k
+ ```
+
+ 2. Load the model,
+
+ ```python
+ from modeling_xcodec2 import XCodec2Model
+ model = XCodec2Model.from_pretrained("Scicom-intl/xcodec2-25TPS-24k")
+ ```
+
+ 3. Encode,
+
+ ```python
+ import librosa
+ import torch
+
+ y, sr = librosa.load('259041.mp3', sr = 16000)
+ wav_tensor = torch.from_numpy(y).float().unsqueeze(0)
+ codes = model.encode_code(wav_tensor)
+ ```
+
+ 4. Decode,
+
+ ```python
+ import IPython.display as ipd
+
+ ipd.Audio(model.decode_code(codes)[0, 0].cpu(), rate = 24000)
+ ```
+
+ ## Source code
+
+ Source code at https://github.com/Scicom-AI-Enterprise-Organization/X-Codec-2.0-25TPS-24k
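The stated rates imply fixed resampling ratios: 25 tokens per second over 16 kHz input means one code per 640 input samples, and decoding at 24 kHz means 960 output samples per code, so clip duration is preserved through a roundtrip. A sketch of that arithmetic (the hop sizes here are inferred from the stated rates, not taken from the model code):

```python
IN_SR, OUT_SR, TPS = 16000, 24000, 25

in_hop = IN_SR // TPS    # input samples consumed per token
out_hop = OUT_SR // TPS  # output samples produced per token
print(in_hop, out_hop)   # 640 960

# a 4-second clip yields 100 codes; decoding them back gives 4 s at 24 kHz
seconds = 4
n_codes = seconds * TPS          # 100 tokens
out_samples = n_codes * out_hop  # 96000 samples
print(out_samples / OUT_SR)      # 4.0
```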
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D1045000.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:49aa2f3b1ebb5fed0a70448a769c3f203036e72e0a5e239c63e63391718b17ca
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D1150000.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7c221501f69f89fcc108802e9b40247a62a75e0969930a17bac7f979da07cc30
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D1407500.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4fa986876b642cb82094a757f603488c0a8c138b270e9951a9c3eb7411745a2d
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D1712500.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e65d797086e1275da3ca16d1f68b0f260b9edcfc1a8434cd375b72e6c197301b
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2005000.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bd6174a68ba30e2825c27898b9f2b502cb9ce390cfe6b61a1512f8bf904c9e11
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2325000.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2f6942e06f0a6f41d459dbe71436ace8eeafd14c0e2c7ca4f9b8f5fcad297d65
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2702500.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a970479564fe3772866f396adcb4609a63dbd42a35bab4936cd9594ac704601
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2902500.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9fb5ae95e7ea6e9ee529f190d2631aec862758602b6c52559863a26fd95b1266
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D2997500.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fb7266c15b36760aaf6e64eef608be0bfaa6694c2b5ddf6adb6da7057d8574c9
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D3000000.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb4f8312f961b067c500536127fa560da93eec8d4b37af9967a03ac85a2a0025
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D592500.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96d1aa0700a6d13aa107a5b50d5baa22561c8ec4df07e5087d5d095e8641c161
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D762500.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:049f3ada32131dfb998378d807ebb99e2cabe8a053bf41f858e999222ebe79c3
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/checkpoint/epoch3D880000.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13824baa9b2538d0ad261267e93cc16627dbec5316a3359446add74fbfc8da27
+ size 5144889455
models/xcodec2/models/xcodec2-25TPS-24k/config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "architectures": [
+     "XCodec2Model"
+   ],
+   "auto_map": {
+     "AutoModel": "modeling_xcodec2.XCodec2Model"
+   },
+   "codec_decoder_hidden_size": 1024,
+   "codec_encoder_hidden_size": 1024,
+   "dtype": "float32",
+   "model_type": "xcodec",
+   "semantic_hidden_size": 1024,
+   "transformers_version": "4.56.2",
+   "use_vocos": true
+ }
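The `auto_map` entry in this config points `AutoModel` at the custom `modeling_xcodec2.XCodec2Model` class, which is why loading through `transformers` needs `trust_remote_code=True`. A small sketch that only validates the JSON fields themselves (the config text is inlined here for illustration):

```python
import json

config_text = """
{
  "architectures": ["XCodec2Model"],
  "auto_map": {"AutoModel": "modeling_xcodec2.XCodec2Model"},
  "codec_decoder_hidden_size": 1024,
  "codec_encoder_hidden_size": 1024,
  "dtype": "float32",
  "model_type": "xcodec",
  "semantic_hidden_size": 1024,
  "use_vocos": true
}
"""

cfg = json.loads(config_text)
# the semantic and acoustic embeddings are concatenated before fc_prior,
# a 2048 -> 2048 linear layer in modeling_xcodec2.py
concat_dim = cfg["semantic_hidden_size"] + cfg["codec_encoder_hidden_size"]
print(cfg["model_type"], concat_dim)  # xcodec 2048
```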
models/xcodec2/models/xcodec2-25TPS-24k/configuration_bigcodec.py ADDED
@@ -0,0 +1,19 @@
+ from transformers import PretrainedConfig
+
+ class BigCodecConfig(PretrainedConfig):
+     model_type = "xcodec"
+
+     def __init__(
+         self,
+         # these are just example hyperparameters
+         semantic_hidden_size=1024,
+         codec_encoder_hidden_size=1024,
+         codec_decoder_hidden_size=1024,
+         use_vocos=True,
+         **kwargs
+     ):
+         super().__init__(**kwargs)
+         self.semantic_hidden_size = semantic_hidden_size
+         self.codec_encoder_hidden_size = codec_encoder_hidden_size
+         self.codec_decoder_hidden_size = codec_decoder_hidden_size
+         self.use_vocos = use_vocos
models/xcodec2/models/xcodec2-25TPS-24k/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a8598048127f24161e69f0f83a4a3b1bcc83e4ebcf20d4ad6259b528090e997
+ size 3301612648
models/xcodec2/models/xcodec2-25TPS-24k/modeling_xcodec2.py ADDED
@@ -0,0 +1,165 @@
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ from transformers import PreTrainedModel, AutoFeatureExtractor, Wav2Vec2BertModel
+ from configuration_bigcodec import BigCodecConfig
+
+ # make sure these module paths are correct
+ from vq.codec_encoder import CodecEncoder_Transformer
+ from vq.codec_decoder_vocos import CodecDecoderVocos
+ from vq.module import SemanticEncoder
+
+
+ class XCodec2Model(PreTrainedModel):
+     config_class = BigCodecConfig
+
+     def __init__(self, config: BigCodecConfig):
+         super().__init__(config)
+
+         # 1) semantic model
+         self.semantic_model = Wav2Vec2BertModel.from_pretrained(
+             "facebook/w2v-bert-2.0",
+             output_hidden_states=True
+         )
+         self.semantic_model.eval()
+
+         self.SemanticEncoder_module = SemanticEncoder(
+             config.semantic_hidden_size,
+             config.semantic_hidden_size,
+             config.semantic_hidden_size
+         )
+
+         # 2) codec encoder
+         self.CodecEnc = CodecEncoder_Transformer()
+
+         # 3) codec decoder
+         self.generator = CodecDecoderVocos()
+
+         # 4) two fully connected layers
+         self.fc_prior = nn.Linear(2048, 2048)
+         self.fc_post_a = nn.Linear(2048, 1024)
+         self.feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")
+         self.avg_pooler = nn.AvgPool1d(2, stride=2)
+
+     def forward(self, input_waveform, input_features=None, sample_rate=16000):
+         """
+         This method does not have to be called `forward` and could be split
+         into other methods, but keeping the core logic in `forward` makes the
+         model pipeline-compatible.
+
+         Args:
+             input_waveform: [batch_size, waveform_length]
+             sample_rate: defaults to 16000
+         Returns:
+             the reconstructed audio (Tensor)
+         """
+         # 1) feature extraction (padding, if needed, is done here)
+         wav = input_waveform
+         with torch.no_grad():
+             if input_features is None:
+                 pad_for_wav = 320 - (wav.shape[1] % 320)
+                 wav = torch.nn.functional.pad(wav, (0, pad_for_wav))
+                 padded = F.pad(wav, (160, 160, 0, 0))
+                 input_features = self.feature_extractor(
+                     padded.cpu().numpy(),
+                     sampling_rate=sample_rate,
+                     return_tensors="pt"
+                 ).input_features.to(self.device)
+
+             # 2) semantic layer
+             semantic_output = self.semantic_model(input_features)
+             semantic_hidden_16 = semantic_output.hidden_states[16]   # take layer 16
+             semantic_hidden_16 = semantic_hidden_16.transpose(1, 2)  # [batch, hidden_dim, frames]
+             semantic_encoded = self.SemanticEncoder_module(semantic_hidden_16)
+
+             # 3) codec encoder
+             wav = wav.to(self.device)
+             vq_emb = self.CodecEnc(wav.unsqueeze(1))  # [batch, time//down, 1024] (example shape)
+             vq_emb = vq_emb.transpose(1, 2)           # -> [batch, 1024, frames]
+
+             # 4) concatenate
+             concat_emb = torch.cat([semantic_encoded, vq_emb], dim=1)  # [batch, 1024 + 1024, frames]
+
+             # 5) fc_prior
+             concat_emb = self.fc_prior(concat_emb.transpose(1, 2)).transpose(1, 2)
+             concat_emb = self.avg_pooler(concat_emb)
+
+             # 6) quantization step of the decoder
+             _, vq_code, _ = self.generator(concat_emb, vq=True)
+             vq_post_emb = self.generator.quantizer.get_output_from_indices(vq_code.transpose(1, 2))
+             vq_post_emb = vq_post_emb.transpose(1, 2)
+
+             # 7) fc_post_a
+             vq_post_emb = self.fc_post_a(vq_post_emb.transpose(1, 2)).transpose(1, 2)
+
+             # 8) finally decode to a waveform
+             recon_audio = self.generator(vq_post_emb.transpose(1, 2), vq=False)[0]
+             # recon_audio: [batch, time]
+             return recon_audio
+
+     def encode_code(self, input_waveform, sample_rate=16000):
+         """
+         Encode the input audio into its code representation.
+
+         Args:
+             input_waveform: [batch_size, waveform_length]
+             sample_rate: defaults to 16000
+         Returns:
+             the encoded codes (Tensor)
+         """
+         with torch.no_grad():
+             wav = input_waveform
+             pad_for_wav = 320 - (wav.shape[1] % 320)
+             wav = torch.nn.functional.pad(wav, (0, pad_for_wav))
+
+             input_features = self.feature_extractor(
+                 F.pad(wav[0, :].cpu(), (160, 160)),
+                 sampling_rate=sample_rate,
+                 return_tensors="pt"
+             ).input_features.to(self.device)  # [batch, frames, feat_dim]
+
+             # 2) semantic layer
+             semantic_output = self.semantic_model(input_features)
+             semantic_hidden_16 = semantic_output.hidden_states[16]   # take layer 16
+             semantic_hidden_16 = semantic_hidden_16.transpose(1, 2)  # [batch, hidden_dim, frames]
+             semantic_encoded = self.SemanticEncoder_module(semantic_hidden_16)
+
+             # 3) codec encoder
+             wav = wav.to(self.device)
+             vq_emb = self.CodecEnc(wav.unsqueeze(1))  # [batch, time//down, 1024] (example shape)
+             vq_emb = vq_emb.transpose(1, 2)           # -> [batch, 1024, frames]
+
+             # 4) concatenate
+             concat_emb = torch.cat([semantic_encoded, vq_emb], dim=1)  # [batch, 2048, frames]
+
+             # 5) fc_prior
+             concat_emb = self.fc_prior(concat_emb.transpose(1, 2)).transpose(1, 2)
+
+             # 6) quantization step of the decoder, to get the codes
+             concat_emb = self.avg_pooler(concat_emb)
+             _, vq_code, _ = self.generator(concat_emb, vq=True)
+             # vq_code: [batch, frames]
+             return vq_code
+
+     def decode_code(self, vq_code):
+         """
+         Decode the codes back into audio.
+
+         Args:
+             vq_code: encoded codes (Tensor) [batch, frames]
+         Returns:
+             the decoded audio (Tensor) [batch, waveform_length]
+         """
+         with torch.no_grad():
+             # get the quantized embeddings
+             vq_post_emb = self.generator.quantizer.get_output_from_indices(vq_code.transpose(1, 2))
+             vq_post_emb = vq_post_emb.transpose(1, 2)  # [batch, 1024, frames]
+
+             # 7) fc_post_a
+             vq_post_emb = self.fc_post_a(vq_post_emb.transpose(1, 2)).transpose(1, 2)  # [batch, 1024, frames]
+
+             # 8) finally decode to a waveform
+             recon_audio = self.generator(vq_post_emb.transpose(1, 2), vq=False)[0]  # [batch, time]
+             return recon_audio
models/xcodec2/models/xcodec2-25TPS-24k/source.txt ADDED
@@ -0,0 +1 @@
+ https://huggingface.co/Scicom-intl/xcodec2-25TPS-24k
models/xcodec2/models/xcodec2-25TPS-24k/test-set/sample-common-voice-17-test-set.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0447dbcc5fd06bdabd7f72161d992fb472c57672f612c26c456fb9110185e0a2
+ size 2311734364
models/xcodec2/models/xcodec2-25TPS-24k/test-set/sample-common-voice-22-test-set.json ADDED
The diff for this file is too large to render.
models/xcodec2/models/xcodec2-25TPS-24k/test-set/sample-common-voice-22-test-set.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd2f01dcef5cc75a5921e4ed94ef345ba768b3a9c2c1e1782c3582c3bcccb667
+ size 2657635349