tmp_data/music/test/16splits/log/audio_tokenizer.3.log
# python egs/pretraining/data_scripts/offline_tokenization_scp.py --rank 3 --input-file /turing_music_fs/music_data/ydc/code2/TokenPPL/data/music/test/16splits/wav.3.scp --output_file_reason /turing_music_fs/music_data/ydc/code2/TokenPPL/data/music/test/16splits/reason_tokens.3.pt --output_file_semantic /turing_music_fs/music_data/ydc/code2/TokenPPL/data/music/test/16splits/semantic_tokens.3.pt --model_path /turing_music_fs/music_data/ydc/exp2/tmp_codec/reasoncodec_1024/reason_codec.checkpoint --train_config /turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/infer_config.yaml --tokenizer reasoningCodec_film_1024
# Started at Mon Nov 10 16:02:24 UTC 2025
#
2025-11-10 16:02:26,377 INFO [offline_tokenization_scp.py:44] max gpu 8
2025-11-10 16:02:26,377 INFO [offline_tokenization_scp.py:47] Using device: cuda:2
2025-11-10 16:02:27,972 DEBUG [__init__.py:44] Skipping import of cpp extensions
[2025-11-10 16:02:33,091] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2025-11-10 16:02:33,174 INFO [spawn.py:60] gcc -pthread -B /root/miniconda3/envs/uniaudio2/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /root/miniconda3/envs/uniaudio2/include -fPIC -O2 -isystem /root/miniconda3/envs/uniaudio2/include -fPIC -c /tmp/tmpyv9ep7jc/test.c -o /tmp/tmpyv9ep7jc/test.o
2025-11-10 16:02:33,190 INFO [spawn.py:60] gcc -pthread -B /root/miniconda3/envs/uniaudio2/compiler_compat /tmp/tmpyv9ep7jc/test.o -laio -o /tmp/tmpyv9ep7jc/a.out
2025-11-10 16:02:33,900 INFO [spawn.py:60] gcc -pthread -B /root/miniconda3/envs/uniaudio2/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /root/miniconda3/envs/uniaudio2/include -fPIC -O2 -isystem /root/miniconda3/envs/uniaudio2/include -fPIC -c /tmp/tmpk30ytxm5/test.c -o /tmp/tmpk30ytxm5/test.o
2025-11-10 16:02:33,916 INFO [spawn.py:60] gcc -pthread -B /root/miniconda3/envs/uniaudio2/compiler_compat /tmp/tmpk30ytxm5/test.o -L/usr/local/cuda -L/usr/local/cuda/lib64 -lcufile -o /tmp/tmpk30ytxm5/a.out
2025-11-10 16:02:34,754 DEBUG [cmd.py:1253] Popen(['git', 'version'], cwd=/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual, stdin=None, shell=False, universal_newlines=False)
2025-11-10 16:02:34,756 DEBUG [cmd.py:1253] Popen(['git', 'version'], cwd=/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual, stdin=None, shell=False, universal_newlines=False)
2025-11-10 16:02:35,019 DEBUG [auth.py:50] Trying paths: ['/root/.docker/config.json', '/root/.dockercfg']
2025-11-10 16:02:35,019 DEBUG [auth.py:57] No config file found
/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/modules/transformer.py:126: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
@autocast(enabled = False)
/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/modules/transformer.py:151: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
@autocast(enabled = False)
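The two FutureWarnings above come from the deprecated decorator form `torch.cuda.amp.autocast(...)`. A minimal sketch of the replacement API (assuming PyTorch 2.x; `attention_scores` is a hypothetical function, not the repo's code):

```python
import torch
from torch.amp import autocast

# Deprecated form: @torch.cuda.amp.autocast(enabled=False)
# Current form passes the device type as the first argument.
@autocast('cuda', enabled=False)
def attention_scores(q, k):
    # Runs in full precision even inside an outer autocast region.
    return q @ k.transpose(-2, -1)

scores = attention_scores(torch.randn(2, 4, 8), torch.randn(2, 4, 8))
```

The same one-line change in `modules/transformer.py` would silence both warnings without altering behavior.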
2025-11-10 16:02:36,456 DEBUG [__init__.py:342] matplotlib data path: /root/miniconda3/envs/uniaudio2/lib/python3.10/site-packages/matplotlib/mpl-data
2025-11-10 16:02:36,461 DEBUG [__init__.py:342] CONFIGDIR=/root/.config/matplotlib
2025-11-10 16:02:36,463 DEBUG [__init__.py:1557] interactive is False
2025-11-10 16:02:36,463 DEBUG [__init__.py:1558] platform is linux
2025-11-10 16:02:36,484 DEBUG [__init__.py:342] CACHEDIR=/root/.cache/matplotlib
2025-11-10 16:02:36,486 DEBUG [font_manager.py:1635] Using fontManager instance from /root/.cache/matplotlib/fontlist-v390.json
/root/miniconda3/envs/uniaudio2/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:134: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
WeightNorm.apply(module, name, dim)
/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/models/scalar24k.py:435: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
parameter_dict = torch.load(self.resume_path)
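The `torch.load` warning above (repeated for the other checkpoints below) can be addressed by opting into the stricter mode the message recommends; a minimal round-trip sketch with a throwaway state dict:

```python
import os
import tempfile
import torch

# Save a plain tensor state dict, then reload it in the safer mode.
state = {"weight": torch.zeros(3)}
path = os.path.join(tempfile.gettempdir(), "demo_ckpt.pt")
torch.save(state, path)

# weights_only=True restricts unpickling to tensors and primitive containers,
# closing the arbitrary-code-execution hole the warning describes.
loaded = torch.load(path, map_location="cpu", weights_only=True)
```

This only works when the checkpoint contains tensors and simple containers; checkpoints that pickle arbitrary objects need `torch.serialization.add_safe_globals` or full trust in the file.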
best_rq_ckpt /turing_music_fs/music_data/ckpts/ckpts/ssl.pt
['/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/modules/our_MERT_BESTRQ', '/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/egs/pretraining/data_scripts', '/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual', '/turing_music_fs/music_data/ydc/code2', '/root/miniconda3/envs/uniaudio2/lib/python310.zip', '/root/miniconda3/envs/uniaudio2/lib/python3.10', '/root/miniconda3/envs/uniaudio2/lib/python3.10/lib-dynload', '/root/.local/lib/python3.10/site-packages', '/root/miniconda3/envs/uniaudio2/lib/python3.10/site-packages', '/home/ydc/musicllm/v2_speech', '/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual', '/root/miniconda3/envs/uniaudio2/lib/python3.10/site-packages/setuptools/_vendor', '/tmp/tmpi8zuhb5n']
2025-11-10 16:02:47,049 DEBUG [__init__.py:47] Creating converter from 7 to 5
2025-11-10 16:02:47,050 DEBUG [__init__.py:47] Creating converter from 5 to 7
2025-11-10 16:02:47,050 DEBUG [__init__.py:47] Creating converter from 7 to 5
2025-11-10 16:02:47,050 DEBUG [__init__.py:47] Creating converter from 5 to 7
/root/miniconda3/envs/uniaudio2/lib/python3.10/site-packages/fairseq/checkpoint_utils.py:315: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
state = torch.load(f, map_location=torch.device("cpu"))
path_ls /turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/modules/our_MERT_BESTRQ/mert_fairseq/models/eat
path_ls /turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/modules/our_MERT_BESTRQ/mert_fairseq/models/eat
2025-11-10 16:02:52,106 INFO [mert_pretraining.py:253] current directory is /turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual
2025-11-10 16:02:52,106 INFO [mert_pretraining.py:254] MERTPretrainingTask Config {'_name': 'mert_pretraining', 'data': '/apdcephfs_nj7/share_301796285/user/hainazhu/our-MERT/data/all_songs_20240430', 'sharding_data': -1, 'load_random_data_shard': True, 'fine_tuning': False, 'labels': [], 'label_dir': '', 'label_rate': 25.0, 'sample_rate': 24000, 'normalize': False, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 720000, 'min_sample_size': 432000, 'single_target': False, 'random_crop': True, 'pad_audio': False, 'store_labels': False, 'numpy_memmap_label': False, 'augmentation_effects': '[]', 'augmentation_probs': '[]', 'inbatch_noise_augment_len_range': '[8000, 24000]', 'inbatch_noise_augment_number_range': '[1, 3]', 'inbatch_noise_augment_volume': 1.0, 'dynamic_crops': '[]', 'dynamic_crops_epoches': '[]', 'cqt_loss_bin_dataloader': -1, 'clip_secs': 30}
/root/miniconda3/envs/uniaudio2/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:134: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
WeightNorm.apply(module, name, dim)
/root/miniconda3/envs/uniaudio2/lib/python3.10/site-packages/diffusers/configuration_utils.py:245: FutureWarning: It is deprecated to pass a pretrained model name or path to `from_config`.If you were trying to load a model, please use <class 'tools.tokenizer.ReasoningCodec_film_1024.models.transformer_1d_flow.Transformer1DModel'>.load_config(...) followed by <class 'tools.tokenizer.ReasoningCodec_film_1024.models.transformer_1d_flow.Transformer1DModel'>.from_config(...) instead. Otherwise, please make sure to pass a configuration dictionary instead. This functionality will be removed in v1.0.0.
deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False)
No stats file found at `None`, use default from msd.
Checkpoint for rvq `/apdcephfs_nj7/share_301796285/user/hainazhu/our-MERT/data/fairseq_savedir/rvq/RVQ_4000.pth` not found. Using random initialization.
# of parameters: 313.9525146484375
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Power Normalization requires removing norms, setting remove_norms to True
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2/2 [00:07<00:00, 3.92s/it]
The new embeddings will be initialized from a multivariate normal distribution that has old embeddings' mean and covariance. As described in this article: https://nlp.stanford.edu/~johnhew/vocab-expansion.html. To disable this, use `mean_resizing=False`
/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/models/model_utils.py:42: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
ckpt = torch.load(ckpt_path, map_location=device)
/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/reason_tokenizer.py:57: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
state_dict = torch.load(model_path, map_location='cpu')['model']
trainable params: 9,175,040 || all params: 3,221,927,936 || trainable%: 0.2848
Loading training prompts done!
loadable_state dict_keys(['cls_token', 'llama_model.base_model.model.model.layers.0.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.0.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.0.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.0.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.1.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.1.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.1.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.1.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.2.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.2.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.2.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.2.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.3.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.3.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.3.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.3.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.4.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.4.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.4.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.4.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.5.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.5.self_attn.q_proj.lora_B.default.weight', 
'llama_model.base_model.model.model.layers.5.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.5.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.6.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.6.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.6.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.6.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.7.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.7.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.7.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.7.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.8.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.8.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.8.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.8.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.9.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.9.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.9.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.9.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.10.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.10.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.10.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.10.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.11.self_attn.q_proj.lora_A.default.weight', 
'llama_model.base_model.model.model.layers.11.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.11.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.11.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.12.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.12.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.12.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.12.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.13.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.13.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.13.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.13.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.14.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.14.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.14.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.14.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.15.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.15.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.15.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.15.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.16.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.16.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.16.self_attn.v_proj.lora_A.default.weight', 
'llama_model.base_model.model.model.layers.16.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.17.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.17.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.17.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.17.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.18.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.18.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.18.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.18.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.19.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.19.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.19.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.19.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.20.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.20.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.20.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.20.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.21.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.21.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.21.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.21.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.22.self_attn.q_proj.lora_A.default.weight', 
'llama_model.base_model.model.model.layers.22.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.22.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.22.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.23.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.23.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.23.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.23.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.24.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.24.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.24.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.24.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.25.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.25.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.25.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.25.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.26.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.26.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.26.self_attn.v_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.26.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.27.self_attn.q_proj.lora_A.default.weight', 'llama_model.base_model.model.model.layers.27.self_attn.q_proj.lora_B.default.weight', 'llama_model.base_model.model.model.layers.27.self_attn.v_proj.lora_A.default.weight', 
'llama_model.base_model.model.model.layers.27.self_attn.v_proj.lora_B.default.weight', 'llama_model.base_model.model.lm_head.weight', 'llm_proj.weight', 'llm_proj.bias', 'semantic_merge_proj.weight', 'semantic_merge_proj.bias', 'encoder_transformers.0.self_attn.to_qkv.parametrizations.weight.original0', 'encoder_transformers.0.self_attn.to_qkv.parametrizations.weight.original1', 'encoder_transformers.0.self_attn.to_out.parametrizations.weight.original0', 'encoder_transformers.0.self_attn.to_out.parametrizations.weight.original1', 'encoder_transformers.0.self_attn.q_norm.weight', 'encoder_transformers.0.self_attn.q_norm.bias', 'encoder_transformers.0.self_attn.k_norm.weight', 'encoder_transformers.0.self_attn.k_norm.bias', 'encoder_transformers.0.self_attn_scale.scale', 'encoder_transformers.0.ff.ff.0.proj.bias', 'encoder_transformers.0.ff.ff.0.proj.parametrizations.weight.original0', 'encoder_transformers.0.ff.ff.0.proj.parametrizations.weight.original1', 'encoder_transformers.0.ff.ff.2.bias', 'encoder_transformers.0.ff.ff.2.parametrizations.weight.original0', 'encoder_transformers.0.ff.ff.2.parametrizations.weight.original1', 'encoder_transformers.0.ff_scale.scale', 'encoder_transformers.0.rope.inv_freq', 'encoder_transformers.1.self_attn.to_qkv.parametrizations.weight.original0', 'encoder_transformers.1.self_attn.to_qkv.parametrizations.weight.original1', 'encoder_transformers.1.self_attn.to_out.parametrizations.weight.original0', 'encoder_transformers.1.self_attn.to_out.parametrizations.weight.original1', 'encoder_transformers.1.self_attn.q_norm.weight', 'encoder_transformers.1.self_attn.q_norm.bias', 'encoder_transformers.1.self_attn.k_norm.weight', 'encoder_transformers.1.self_attn.k_norm.bias', 'encoder_transformers.1.self_attn_scale.scale', 'encoder_transformers.1.ff.ff.0.proj.bias', 'encoder_transformers.1.ff.ff.0.proj.parametrizations.weight.original0', 'encoder_transformers.1.ff.ff.0.proj.parametrizations.weight.original1', 
'encoder_transformers.1.ff.ff.2.bias', 'encoder_transformers.1.ff.ff.2.parametrizations.weight.original0', 'encoder_transformers.1.ff.ff.2.parametrizations.weight.original1', 'encoder_transformers.1.ff_scale.scale', 'encoder_transformers.1.rope.inv_freq', 'encoder_transformers.2.self_attn.to_qkv.parametrizations.weight.original0', 'encoder_transformers.2.self_attn.to_qkv.parametrizations.weight.original1', 'encoder_transformers.2.self_attn.to_out.parametrizations.weight.original0', 'encoder_transformers.2.self_attn.to_out.parametrizations.weight.original1', 'encoder_transformers.2.self_attn.q_norm.weight', 'encoder_transformers.2.self_attn.q_norm.bias', 'encoder_transformers.2.self_attn.k_norm.weight', 'encoder_transformers.2.self_attn.k_norm.bias', 'encoder_transformers.2.self_attn_scale.scale', 'encoder_transformers.2.ff.ff.0.proj.bias', 'encoder_transformers.2.ff.ff.0.proj.parametrizations.weight.original0', 'encoder_transformers.2.ff.ff.0.proj.parametrizations.weight.original1', 'encoder_transformers.2.ff.ff.2.bias', 'encoder_transformers.2.ff.ff.2.parametrizations.weight.original0', 'encoder_transformers.2.ff.ff.2.parametrizations.weight.original1', 'encoder_transformers.2.ff_scale.scale', 'encoder_transformers.2.rope.inv_freq', 'encoder_transformers.3.self_attn.to_qkv.parametrizations.weight.original0', 'encoder_transformers.3.self_attn.to_qkv.parametrizations.weight.original1', 'encoder_transformers.3.self_attn.to_out.parametrizations.weight.original0', 'encoder_transformers.3.self_attn.to_out.parametrizations.weight.original1', 'encoder_transformers.3.self_attn.q_norm.weight', 'encoder_transformers.3.self_attn.q_norm.bias', 'encoder_transformers.3.self_attn.k_norm.weight', 'encoder_transformers.3.self_attn.k_norm.bias', 'encoder_transformers.3.self_attn_scale.scale', 'encoder_transformers.3.ff.ff.0.proj.bias', 'encoder_transformers.3.ff.ff.0.proj.parametrizations.weight.original0', 'encoder_transformers.3.ff.ff.0.proj.parametrizations.weight.original1', 
'encoder_transformers.3.ff.ff.2.bias', 'encoder_transformers.3.ff.ff.2.parametrizations.weight.original0', 'encoder_transformers.3.ff.ff.2.parametrizations.weight.original1', 'encoder_transformers.3.ff_scale.scale', 'encoder_transformers.3.rope.inv_freq', 'encoder_transformers.4.self_attn.to_qkv.parametrizations.weight.original0', 'encoder_transformers.4.self_attn.to_qkv.parametrizations.weight.original1', 'encoder_transformers.4.self_attn.to_out.parametrizations.weight.original0', 'encoder_transformers.4.self_attn.to_out.parametrizations.weight.original1', 'encoder_transformers.4.self_attn.q_norm.weight', 'encoder_transformers.4.self_attn.q_norm.bias', 'encoder_transformers.4.self_attn.k_norm.weight', 'encoder_transformers.4.self_attn.k_norm.bias', 'encoder_transformers.4.self_attn_scale.scale', 'encoder_transformers.4.ff.ff.0.proj.bias', 'encoder_transformers.4.ff.ff.0.proj.parametrizations.weight.original0', 'encoder_transformers.4.ff.ff.0.proj.parametrizations.weight.original1', 'encoder_transformers.4.ff.ff.2.bias', 'encoder_transformers.4.ff.ff.2.parametrizations.weight.original0', 'encoder_transformers.4.ff.ff.2.parametrizations.weight.original1', 'encoder_transformers.4.ff_scale.scale', 'encoder_transformers.4.rope.inv_freq', 'reasoning_vq.project_in.weight', 'reasoning_vq.project_in.bias', 'reasoning_vq.project_out.weight', 'reasoning_vq.project_out.bias', 'reasoning_vq.layers.0._codebook.initted', 'reasoning_vq.layers.0._codebook.cluster_size', 'reasoning_vq.layers.0._codebook.embed_avg', 'reasoning_vq.layers.0._codebook.embed', 'reasoning_vq.layers.1._codebook.initted', 'reasoning_vq.layers.1._codebook.cluster_size', 'reasoning_vq.layers.1._codebook.embed_avg', 'reasoning_vq.layers.1._codebook.embed', 'reasoning_vq.layers.2._codebook.initted', 'reasoning_vq.layers.2._codebook.cluster_size', 'reasoning_vq.layers.2._codebook.embed_avg', 'reasoning_vq.layers.2._codebook.embed', 'reasoning_vq.layers.3._codebook.initted', 
'reasoning_vq.layers.3._codebook.cluster_size', 'reasoning_vq.layers.3._codebook.embed_avg', 'reasoning_vq.layers.3._codebook.embed', 'reasoning_vq.layers.4._codebook.initted', 'reasoning_vq.layers.4._codebook.cluster_size', 'reasoning_vq.layers.4._codebook.embed_avg', 'reasoning_vq.layers.4._codebook.embed', 'reasoning_vq.layers.5._codebook.initted', 'reasoning_vq.layers.5._codebook.cluster_size', 'reasoning_vq.layers.5._codebook.embed_avg', 'reasoning_vq.layers.5._codebook.embed', 'reasoning_vq.layers.6._codebook.initted', 'reasoning_vq.layers.6._codebook.cluster_size', 'reasoning_vq.layers.6._codebook.embed_avg', 'reasoning_vq.layers.6._codebook.embed', 'reasoning_vq.layers.7._codebook.initted', 'reasoning_vq.layers.7._codebook.cluster_size', 'reasoning_vq.layers.7._codebook.embed_avg', 'reasoning_vq.layers.7._codebook.embed', 'down_sampling_layer_whisper.weight', 'down_sampling_layer_whisper.bias'])
[LoRA‑Loader] matched=241 mismatched=0 missing=254 unexpected=108
β†’ 254 keys present in model but missing in ckpt, e.g.:
llama_model.base_model.model.model.embed_tokens.weight
llama_model.base_model.model.model.layers.0.self_attn.q_proj.base_layer.weight
llama_model.base_model.model.model.layers.0.self_attn.k_proj.weight
llama_model.base_model.model.model.layers.0.self_attn.v_proj.base_layer.weight
llama_model.base_model.model.model.layers.0.self_attn.o_proj.weight
llama_model.base_model.model.model.layers.0.mlp.gate_proj.weight
llama_model.base_model.model.model.layers.0.mlp.up_proj.weight
llama_model.base_model.model.model.layers.0.mlp.down_proj.weight
llama_model.base_model.model.model.layers.0.input_layernorm.weight
llama_model.base_model.model.model.layers.0.post_attention_layernorm.weight
llama_model.base_model.model.model.layers.1.self_attn.q_proj.base_layer.weight
llama_model.base_model.model.model.layers.1.self_attn.k_proj.weight
llama_model.base_model.model.model.layers.1.self_attn.v_proj.base_layer.weight
llama_model.base_model.model.model.layers.1.self_attn.o_proj.weight
llama_model.base_model.model.model.layers.1.mlp.gate_proj.weight
llama_model.base_model.model.model.layers.1.mlp.up_proj.weight
llama_model.base_model.model.model.layers.1.mlp.down_proj.weight
llama_model.base_model.model.model.layers.1.input_layernorm.weight
llama_model.base_model.model.model.layers.1.post_attention_layernorm.weight
llama_model.base_model.model.model.layers.2.self_attn.q_proj.base_layer.weight ...
β†’ 108 keys in ckpt but not used, e.g.:
llm_mapping.weight
llm_mapping.bias
muencoder.model.preprocessor_melspec_2048.mel_stft.spectrogram.window
muencoder.model.preprocessor_melspec_2048.mel_stft.mel_scale.fb
muencoder.model.rvq.quantizers.0.stale_counter
muencoder.model.rvq.quantizers.1.stale_counter
muencoder.model.rvq.quantizers.2.stale_counter
muencoder.model.rvq.quantizers.3.stale_counter
muencoder.model.rvq.quantizers.4.stale_counter
muencoder.model.rvq.quantizers.5.stale_counter
muencoder.model.rvq.quantizers.6.stale_counter
muencoder.model.rvq.quantizers.7.stale_counter
muencoder.model.conv.conv.0.bn1.running_mean
muencoder.model.conv.conv.0.bn1.running_var
muencoder.model.conv.conv.0.bn1.num_batches_tracked
muencoder.model.conv.conv.0.bn2.running_mean
muencoder.model.conv.conv.0.bn2.running_var
muencoder.model.conv.conv.0.bn2.num_batches_tracked
muencoder.model.conv.conv.0.bn3.running_mean
muencoder.model.conv.conv.0.bn3.running_var ...
[LoRA‑Loader] done.
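The matched/mismatched/missing/unexpected counts above come from comparing the checkpoint's keys against the model's state_dict before loading. A minimal sketch of how such a report can be produced in PyTorch is below; the function name `report_load` and the toy `nn.Linear` model are illustrative assumptions, not the actual loader used in this repository.

```python
import torch
import torch.nn as nn

def report_load(model: nn.Module, ckpt_state: dict) -> dict:
    """Hypothetical loader report: classify checkpoint keys against the
    model's state_dict, load only the shape-compatible subset, and print
    a summary line similar to the [LoRA-Loader] output in the log."""
    model_state = model.state_dict()
    matched = [k for k in ckpt_state
               if k in model_state and model_state[k].shape == ckpt_state[k].shape]
    mismatched = [k for k in ckpt_state
                  if k in model_state and model_state[k].shape != ckpt_state[k].shape]
    missing = [k for k in model_state if k not in ckpt_state]
    unexpected = [k for k in ckpt_state if k not in model_state]
    # strict=False tolerates the keys we deliberately left out.
    model.load_state_dict({k: ckpt_state[k] for k in matched}, strict=False)
    print(f"[LoRA-Loader] matched={len(matched)} mismatched={len(mismatched)} "
          f"missing={len(missing)} unexpected={len(unexpected)}")
    return {"matched": matched, "mismatched": mismatched,
            "missing": missing, "unexpected": unexpected}

# Toy example: the checkpoint covers "weight", lacks "bias",
# and carries one extra key the model does not use.
model = nn.Linear(4, 2)
ckpt = {"weight": torch.zeros(2, 4), "extra.bias": torch.zeros(2)}
stats = report_load(model, ckpt)
```

A missing count of 254 with mismatched=0, as in this run, typically means the checkpoint simply does not contain those submodules (here, the frozen base LLaMA weights that are loaded elsewhere), while unexpected keys are checkpoint entries the current model graph never references.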
loadable_state dict_keys(['zero_cond_embedding1', 'whisper_encoder.conv1.weight', 'whisper_encoder.conv1.bias', 'whisper_encoder.conv2.weight', 'whisper_encoder.conv2.bias', 'whisper_encoder.embed_positions.weight', 'whisper_encoder.layers.0.self_attn.k_proj.weight', 'whisper_encoder.layers.0.self_attn.v_proj.weight', 'whisper_encoder.layers.0.self_attn.v_proj.bias', 'whisper_encoder.layers.0.self_attn.q_proj.weight', 'whisper_encoder.layers.0.self_attn.q_proj.bias', 'whisper_encoder.layers.0.self_attn.out_proj.weight', 'whisper_encoder.layers.0.self_attn.out_proj.bias', 'whisper_encoder.layers.0.self_attn_layer_norm.weight', 'whisper_encoder.layers.0.self_attn_layer_norm.bias', 'whisper_encoder.layers.0.fc1.weight', 'whisper_encoder.layers.0.fc1.bias', 'whisper_encoder.layers.0.fc2.weight', 'whisper_encoder.layers.0.fc2.bias', 'whisper_encoder.layers.0.final_layer_norm.weight', 'whisper_encoder.layers.0.final_layer_norm.bias', 'whisper_encoder.layers.1.self_attn.k_proj.weight', 'whisper_encoder.layers.1.self_attn.v_proj.weight', 'whisper_encoder.layers.1.self_attn.v_proj.bias', 'whisper_encoder.layers.1.self_attn.q_proj.weight', 'whisper_encoder.layers.1.self_attn.q_proj.bias', 'whisper_encoder.layers.1.self_attn.out_proj.weight', 'whisper_encoder.layers.1.self_attn.out_proj.bias', 'whisper_encoder.layers.1.self_attn_layer_norm.weight', 'whisper_encoder.layers.1.self_attn_layer_norm.bias', 'whisper_encoder.layers.1.fc1.weight', 'whisper_encoder.layers.1.fc1.bias', 'whisper_encoder.layers.1.fc2.weight', 'whisper_encoder.layers.1.fc2.bias', 'whisper_encoder.layers.1.final_layer_norm.weight', 'whisper_encoder.layers.1.final_layer_norm.bias', 'whisper_encoder.layers.2.self_attn.k_proj.weight', 'whisper_encoder.layers.2.self_attn.v_proj.weight', 'whisper_encoder.layers.2.self_attn.v_proj.bias', 'whisper_encoder.layers.2.self_attn.q_proj.weight', 'whisper_encoder.layers.2.self_attn.q_proj.bias', 'whisper_encoder.layers.2.self_attn.out_proj.weight', 
'whisper_encoder.layers.2.self_attn.out_proj.bias', 'whisper_encoder.layers.2.self_attn_layer_norm.weight', 'whisper_encoder.layers.2.self_attn_layer_norm.bias', 'whisper_encoder.layers.2.fc1.weight', 'whisper_encoder.layers.2.fc1.bias', 'whisper_encoder.layers.2.fc2.weight', 'whisper_encoder.layers.2.fc2.bias', 'whisper_encoder.layers.2.final_layer_norm.weight', 'whisper_encoder.layers.2.final_layer_norm.bias', 'whisper_encoder.layers.3.self_attn.k_proj.weight', 'whisper_encoder.layers.3.self_attn.v_proj.weight', 'whisper_encoder.layers.3.self_attn.v_proj.bias', 'whisper_encoder.layers.3.self_attn.q_proj.weight', 'whisper_encoder.layers.3.self_attn.q_proj.bias', 'whisper_encoder.layers.3.self_attn.out_proj.weight', 'whisper_encoder.layers.3.self_attn.out_proj.bias', 'whisper_encoder.layers.3.self_attn_layer_norm.weight', 'whisper_encoder.layers.3.self_attn_layer_norm.bias', 'whisper_encoder.layers.3.fc1.weight', 'whisper_encoder.layers.3.fc1.bias', 'whisper_encoder.layers.3.fc2.weight', 'whisper_encoder.layers.3.fc2.bias', 'whisper_encoder.layers.3.final_layer_norm.weight', 'whisper_encoder.layers.3.final_layer_norm.bias', 'whisper_encoder.layers.4.self_attn.k_proj.weight', 'whisper_encoder.layers.4.self_attn.v_proj.weight', 'whisper_encoder.layers.4.self_attn.v_proj.bias', 'whisper_encoder.layers.4.self_attn.q_proj.weight', 'whisper_encoder.layers.4.self_attn.q_proj.bias', 'whisper_encoder.layers.4.self_attn.out_proj.weight', 'whisper_encoder.layers.4.self_attn.out_proj.bias', 'whisper_encoder.layers.4.self_attn_layer_norm.weight', 'whisper_encoder.layers.4.self_attn_layer_norm.bias', 'whisper_encoder.layers.4.fc1.weight', 'whisper_encoder.layers.4.fc1.bias', 'whisper_encoder.layers.4.fc2.weight', 'whisper_encoder.layers.4.fc2.bias', 'whisper_encoder.layers.4.final_layer_norm.weight', 'whisper_encoder.layers.4.final_layer_norm.bias', 'whisper_encoder.layers.5.self_attn.k_proj.weight', 'whisper_encoder.layers.5.self_attn.v_proj.weight', 
'whisper_encoder.layers.5.self_attn.v_proj.bias', 'whisper_encoder.layers.5.self_attn.q_proj.weight', 'whisper_encoder.layers.5.self_attn.q_proj.bias', 'whisper_encoder.layers.5.self_attn.out_proj.weight', 'whisper_encoder.layers.5.self_attn.out_proj.bias', 'whisper_encoder.layers.5.self_attn_layer_norm.weight', 'whisper_encoder.layers.5.self_attn_layer_norm.bias', 'whisper_encoder.layers.5.fc1.weight', 'whisper_encoder.layers.5.fc1.bias', 'whisper_encoder.layers.5.fc2.weight', 'whisper_encoder.layers.5.fc2.bias', 'whisper_encoder.layers.5.final_layer_norm.weight', 'whisper_encoder.layers.5.final_layer_norm.bias', 'whisper_encoder.layers.6.self_attn.k_proj.weight', 'whisper_encoder.layers.6.self_attn.v_proj.weight', 'whisper_encoder.layers.6.self_attn.v_proj.bias', 'whisper_encoder.layers.6.self_attn.q_proj.weight', 'whisper_encoder.layers.6.self_attn.q_proj.bias', 'whisper_encoder.layers.6.self_attn.out_proj.weight', 'whisper_encoder.layers.6.self_attn.out_proj.bias', 'whisper_encoder.layers.6.self_attn_layer_norm.weight', 'whisper_encoder.layers.6.self_attn_layer_norm.bias', 'whisper_encoder.layers.6.fc1.weight', 'whisper_encoder.layers.6.fc1.bias', 'whisper_encoder.layers.6.fc2.weight', 'whisper_encoder.layers.6.fc2.bias', 'whisper_encoder.layers.6.final_layer_norm.weight', 'whisper_encoder.layers.6.final_layer_norm.bias', 'whisper_encoder.layers.7.self_attn.k_proj.weight', 'whisper_encoder.layers.7.self_attn.v_proj.weight', 'whisper_encoder.layers.7.self_attn.v_proj.bias', 'whisper_encoder.layers.7.self_attn.q_proj.weight', 'whisper_encoder.layers.7.self_attn.q_proj.bias', 'whisper_encoder.layers.7.self_attn.out_proj.weight', 'whisper_encoder.layers.7.self_attn.out_proj.bias', 'whisper_encoder.layers.7.self_attn_layer_norm.weight', 'whisper_encoder.layers.7.self_attn_layer_norm.bias', 'whisper_encoder.layers.7.fc1.weight', 'whisper_encoder.layers.7.fc1.bias', 'whisper_encoder.layers.7.fc2.weight', 'whisper_encoder.layers.7.fc2.bias', 
'whisper_encoder.layers.7.final_layer_norm.weight', 'whisper_encoder.layers.7.final_layer_norm.bias', 'whisper_encoder.layers.8.self_attn.k_proj.weight', 'whisper_encoder.layers.8.self_attn.v_proj.weight', 'whisper_encoder.layers.8.self_attn.v_proj.bias', 'whisper_encoder.layers.8.self_attn.q_proj.weight', 'whisper_encoder.layers.8.self_attn.q_proj.bias', 'whisper_encoder.layers.8.self_attn.out_proj.weight', 'whisper_encoder.layers.8.self_attn.out_proj.bias', 'whisper_encoder.layers.8.self_attn_layer_norm.weight', 'whisper_encoder.layers.8.self_attn_layer_norm.bias', 'whisper_encoder.layers.8.fc1.weight', 'whisper_encoder.layers.8.fc1.bias', 'whisper_encoder.layers.8.fc2.weight', 'whisper_encoder.layers.8.fc2.bias', 'whisper_encoder.layers.8.final_layer_norm.weight', 'whisper_encoder.layers.8.final_layer_norm.bias', 'whisper_encoder.layers.9.self_attn.k_proj.weight', 'whisper_encoder.layers.9.self_attn.v_proj.weight', 'whisper_encoder.layers.9.self_attn.v_proj.bias', 'whisper_encoder.layers.9.self_attn.q_proj.weight', 'whisper_encoder.layers.9.self_attn.q_proj.bias', 'whisper_encoder.layers.9.self_attn.out_proj.weight', 'whisper_encoder.layers.9.self_attn.out_proj.bias', 'whisper_encoder.layers.9.self_attn_layer_norm.weight', 'whisper_encoder.layers.9.self_attn_layer_norm.bias', 'whisper_encoder.layers.9.fc1.weight', 'whisper_encoder.layers.9.fc1.bias', 'whisper_encoder.layers.9.fc2.weight', 'whisper_encoder.layers.9.fc2.bias', 'whisper_encoder.layers.9.final_layer_norm.weight', 'whisper_encoder.layers.9.final_layer_norm.bias', 'whisper_encoder.layers.10.self_attn.k_proj.weight', 'whisper_encoder.layers.10.self_attn.v_proj.weight', 'whisper_encoder.layers.10.self_attn.v_proj.bias', 'whisper_encoder.layers.10.self_attn.q_proj.weight', 'whisper_encoder.layers.10.self_attn.q_proj.bias', 'whisper_encoder.layers.10.self_attn.out_proj.weight', 'whisper_encoder.layers.10.self_attn.out_proj.bias', 'whisper_encoder.layers.10.self_attn_layer_norm.weight', 
'whisper_encoder.layers.10.self_attn_layer_norm.bias', 'whisper_encoder.layers.10.fc1.weight', 'whisper_encoder.layers.10.fc1.bias', 'whisper_encoder.layers.10.fc2.weight', 'whisper_encoder.layers.10.fc2.bias', 'whisper_encoder.layers.10.final_layer_norm.weight', 'whisper_encoder.layers.10.final_layer_norm.bias', 'whisper_encoder.layers.11.self_attn.k_proj.weight', 'whisper_encoder.layers.11.self_attn.v_proj.weight', 'whisper_encoder.layers.11.self_attn.v_proj.bias', 'whisper_encoder.layers.11.self_attn.q_proj.weight', 'whisper_encoder.layers.11.self_attn.q_proj.bias', 'whisper_encoder.layers.11.self_attn.out_proj.weight', 'whisper_encoder.layers.11.self_attn.out_proj.bias', 'whisper_encoder.layers.11.self_attn_layer_norm.weight', 'whisper_encoder.layers.11.self_attn_layer_norm.bias', 'whisper_encoder.layers.11.fc1.weight', 'whisper_encoder.layers.11.fc1.bias', 'whisper_encoder.layers.11.fc2.weight', 'whisper_encoder.layers.11.fc2.bias', 'whisper_encoder.layers.11.final_layer_norm.weight', 'whisper_encoder.layers.11.final_layer_norm.bias', 'whisper_encoder.layers.12.self_attn.k_proj.weight', 'whisper_encoder.layers.12.self_attn.v_proj.weight', 'whisper_encoder.layers.12.self_attn.v_proj.bias', 'whisper_encoder.layers.12.self_attn.q_proj.weight', 'whisper_encoder.layers.12.self_attn.q_proj.bias', 'whisper_encoder.layers.12.self_attn.out_proj.weight', 'whisper_encoder.layers.12.self_attn.out_proj.bias', 'whisper_encoder.layers.12.self_attn_layer_norm.weight', 'whisper_encoder.layers.12.self_attn_layer_norm.bias', 'whisper_encoder.layers.12.fc1.weight', 'whisper_encoder.layers.12.fc1.bias', 'whisper_encoder.layers.12.fc2.weight', 'whisper_encoder.layers.12.fc2.bias', 'whisper_encoder.layers.12.final_layer_norm.weight', 'whisper_encoder.layers.12.final_layer_norm.bias', 'whisper_encoder.layers.13.self_attn.k_proj.weight', 'whisper_encoder.layers.13.self_attn.v_proj.weight', 'whisper_encoder.layers.13.self_attn.v_proj.bias', 
'whisper_encoder.layers.13.self_attn.q_proj.weight', 'whisper_encoder.layers.13.self_attn.q_proj.bias', 'whisper_encoder.layers.13.self_attn.out_proj.weight', 'whisper_encoder.layers.13.self_attn.out_proj.bias', 'whisper_encoder.layers.13.self_attn_layer_norm.weight', 'whisper_encoder.layers.13.self_attn_layer_norm.bias', 'whisper_encoder.layers.13.fc1.weight', 'whisper_encoder.layers.13.fc1.bias', 'whisper_encoder.layers.13.fc2.weight', 'whisper_encoder.layers.13.fc2.bias', 'whisper_encoder.layers.13.final_layer_norm.weight', 'whisper_encoder.layers.13.final_layer_norm.bias', 'whisper_encoder.layers.14.self_attn.k_proj.weight', 'whisper_encoder.layers.14.self_attn.v_proj.weight', 'whisper_encoder.layers.14.self_attn.v_proj.bias', 'whisper_encoder.layers.14.self_attn.q_proj.weight', 'whisper_encoder.layers.14.self_attn.q_proj.bias', 'whisper_encoder.layers.14.self_attn.out_proj.weight', 'whisper_encoder.layers.14.self_attn.out_proj.bias', 'whisper_encoder.layers.14.self_attn_layer_norm.weight', 'whisper_encoder.layers.14.self_attn_layer_norm.bias', 'whisper_encoder.layers.14.fc1.weight', 'whisper_encoder.layers.14.fc1.bias', 'whisper_encoder.layers.14.fc2.weight', 'whisper_encoder.layers.14.fc2.bias', 'whisper_encoder.layers.14.final_layer_norm.weight', 'whisper_encoder.layers.14.final_layer_norm.bias', 'whisper_encoder.layers.15.self_attn.k_proj.weight', 'whisper_encoder.layers.15.self_attn.v_proj.weight', 'whisper_encoder.layers.15.self_attn.v_proj.bias', 'whisper_encoder.layers.15.self_attn.q_proj.weight', 'whisper_encoder.layers.15.self_attn.q_proj.bias', 'whisper_encoder.layers.15.self_attn.out_proj.weight', 'whisper_encoder.layers.15.self_attn.out_proj.bias', 'whisper_encoder.layers.15.self_attn_layer_norm.weight', 'whisper_encoder.layers.15.self_attn_layer_norm.bias', 'whisper_encoder.layers.15.fc1.weight', 'whisper_encoder.layers.15.fc1.bias', 'whisper_encoder.layers.15.fc2.weight', 'whisper_encoder.layers.15.fc2.bias', 
'whisper_encoder.layers.15.final_layer_norm.weight', 'whisper_encoder.layers.15.final_layer_norm.bias', 'whisper_encoder.layers.16.self_attn.k_proj.weight', 'whisper_encoder.layers.16.self_attn.v_proj.weight', 'whisper_encoder.layers.16.self_attn.v_proj.bias', 'whisper_encoder.layers.16.self_attn.q_proj.weight', 'whisper_encoder.layers.16.self_attn.q_proj.bias', 'whisper_encoder.layers.16.self_attn.out_proj.weight', 'whisper_encoder.layers.16.self_attn.out_proj.bias', 'whisper_encoder.layers.16.self_attn_layer_norm.weight', 'whisper_encoder.layers.16.self_attn_layer_norm.bias', 'whisper_encoder.layers.16.fc1.weight', 'whisper_encoder.layers.16.fc1.bias', 'whisper_encoder.layers.16.fc2.weight', 'whisper_encoder.layers.16.fc2.bias', 'whisper_encoder.layers.16.final_layer_norm.weight', 'whisper_encoder.layers.16.final_layer_norm.bias', 'whisper_encoder.layers.17.self_attn.k_proj.weight', 'whisper_encoder.layers.17.self_attn.v_proj.weight', 'whisper_encoder.layers.17.self_attn.v_proj.bias', 'whisper_encoder.layers.17.self_attn.q_proj.weight', 'whisper_encoder.layers.17.self_attn.q_proj.bias', 'whisper_encoder.layers.17.self_attn.out_proj.weight', 'whisper_encoder.layers.17.self_attn.out_proj.bias', 'whisper_encoder.layers.17.self_attn_layer_norm.weight', 'whisper_encoder.layers.17.self_attn_layer_norm.bias', 'whisper_encoder.layers.17.fc1.weight', 'whisper_encoder.layers.17.fc1.bias', 'whisper_encoder.layers.17.fc2.weight', 'whisper_encoder.layers.17.fc2.bias', 'whisper_encoder.layers.17.final_layer_norm.weight', 'whisper_encoder.layers.17.final_layer_norm.bias', 'whisper_encoder.layers.18.self_attn.k_proj.weight', 'whisper_encoder.layers.18.self_attn.v_proj.weight', 'whisper_encoder.layers.18.self_attn.v_proj.bias', 'whisper_encoder.layers.18.self_attn.q_proj.weight', 'whisper_encoder.layers.18.self_attn.q_proj.bias', 'whisper_encoder.layers.18.self_attn.out_proj.weight', 'whisper_encoder.layers.18.self_attn.out_proj.bias', 
'whisper_encoder.layers.18.self_attn_layer_norm.weight', 'whisper_encoder.layers.18.self_attn_layer_norm.bias', 'whisper_encoder.layers.18.fc1.weight', 'whisper_encoder.layers.18.fc1.bias', 'whisper_encoder.layers.18.fc2.weight', 'whisper_encoder.layers.18.fc2.bias', 'whisper_encoder.layers.18.final_layer_norm.weight', 'whisper_encoder.layers.18.final_layer_norm.bias', 'whisper_encoder.layers.19.self_attn.k_proj.weight', 'whisper_encoder.layers.19.self_attn.v_proj.weight', 'whisper_encoder.layers.19.self_attn.v_proj.bias', 'whisper_encoder.layers.19.self_attn.q_proj.weight', 'whisper_encoder.layers.19.self_attn.q_proj.bias', 'whisper_encoder.layers.19.self_attn.out_proj.weight', 'whisper_encoder.layers.19.self_attn.out_proj.bias', 'whisper_encoder.layers.19.self_attn_layer_norm.weight', 'whisper_encoder.layers.19.self_attn_layer_norm.bias', 'whisper_encoder.layers.19.fc1.weight', 'whisper_encoder.layers.19.fc1.bias', 'whisper_encoder.layers.19.fc2.weight', 'whisper_encoder.layers.19.fc2.bias', 'whisper_encoder.layers.19.final_layer_norm.weight', 'whisper_encoder.layers.19.final_layer_norm.bias', 'whisper_encoder.layers.20.self_attn.k_proj.weight', 'whisper_encoder.layers.20.self_attn.v_proj.weight', 'whisper_encoder.layers.20.self_attn.v_proj.bias', 'whisper_encoder.layers.20.self_attn.q_proj.weight', 'whisper_encoder.layers.20.self_attn.q_proj.bias', 'whisper_encoder.layers.20.self_attn.out_proj.weight', 'whisper_encoder.layers.20.self_attn.out_proj.bias', 'whisper_encoder.layers.20.self_attn_layer_norm.weight', 'whisper_encoder.layers.20.self_attn_layer_norm.bias', 'whisper_encoder.layers.20.fc1.weight', 'whisper_encoder.layers.20.fc1.bias', 'whisper_encoder.layers.20.fc2.weight', 'whisper_encoder.layers.20.fc2.bias', 'whisper_encoder.layers.20.final_layer_norm.weight', 'whisper_encoder.layers.20.final_layer_norm.bias', 'whisper_encoder.layers.21.self_attn.k_proj.weight', 'whisper_encoder.layers.21.self_attn.v_proj.weight', 
'whisper_encoder.layers.21.self_attn.v_proj.bias', 'whisper_encoder.layers.21.self_attn.q_proj.weight', 'whisper_encoder.layers.21.self_attn.q_proj.bias', 'whisper_encoder.layers.21.self_attn.out_proj.weight', 'whisper_encoder.layers.21.self_attn.out_proj.bias', 'whisper_encoder.layers.21.self_attn_layer_norm.weight', 'whisper_encoder.layers.21.self_attn_layer_norm.bias', 'whisper_encoder.layers.21.fc1.weight', 'whisper_encoder.layers.21.fc1.bias', 'whisper_encoder.layers.21.fc2.weight', 'whisper_encoder.layers.21.fc2.bias', 'whisper_encoder.layers.21.final_layer_norm.weight', 'whisper_encoder.layers.21.final_layer_norm.bias', 'whisper_encoder.layers.22.self_attn.k_proj.weight', 'whisper_encoder.layers.22.self_attn.v_proj.weight', 'whisper_encoder.layers.22.self_attn.v_proj.bias', 'whisper_encoder.layers.22.self_attn.q_proj.weight', 'whisper_encoder.layers.22.self_attn.q_proj.bias', 'whisper_encoder.layers.22.self_attn.out_proj.weight', 'whisper_encoder.layers.22.self_attn.out_proj.bias', 'whisper_encoder.layers.22.self_attn_layer_norm.weight', 'whisper_encoder.layers.22.self_attn_layer_norm.bias', 'whisper_encoder.layers.22.fc1.weight', 'whisper_encoder.layers.22.fc1.bias', 'whisper_encoder.layers.22.fc2.weight', 'whisper_encoder.layers.22.fc2.bias', 'whisper_encoder.layers.22.final_layer_norm.weight', 'whisper_encoder.layers.22.final_layer_norm.bias', 'whisper_encoder.layers.23.self_attn.k_proj.weight', 'whisper_encoder.layers.23.self_attn.v_proj.weight', 'whisper_encoder.layers.23.self_attn.v_proj.bias', 'whisper_encoder.layers.23.self_attn.q_proj.weight', 'whisper_encoder.layers.23.self_attn.q_proj.bias', 'whisper_encoder.layers.23.self_attn.out_proj.weight', 'whisper_encoder.layers.23.self_attn.out_proj.bias', 'whisper_encoder.layers.23.self_attn_layer_norm.weight', 'whisper_encoder.layers.23.self_attn_layer_norm.bias', 'whisper_encoder.layers.23.fc1.weight', 'whisper_encoder.layers.23.fc1.bias', 'whisper_encoder.layers.23.fc2.weight', 
'whisper_encoder.layers.23.fc2.bias', 'whisper_encoder.layers.23.final_layer_norm.weight', 'whisper_encoder.layers.23.final_layer_norm.bias', 'whisper_encoder.layer_norm.weight', 'whisper_encoder.layer_norm.bias', 'wavlm_encoder.masked_spec_embed', 'wavlm_encoder.feature_extractor.conv_layers.0.conv.weight', 'wavlm_encoder.feature_extractor.conv_layers.0.layer_norm.weight', 'wavlm_encoder.feature_extractor.conv_layers.0.layer_norm.bias', 'wavlm_encoder.feature_extractor.conv_layers.1.conv.weight', 'wavlm_encoder.feature_extractor.conv_layers.2.conv.weight', 'wavlm_encoder.feature_extractor.conv_layers.3.conv.weight', 'wavlm_encoder.feature_extractor.conv_layers.4.conv.weight', 'wavlm_encoder.feature_extractor.conv_layers.5.conv.weight', 'wavlm_encoder.feature_extractor.conv_layers.6.conv.weight', 'wavlm_encoder.feature_projection.layer_norm.weight', 'wavlm_encoder.feature_projection.layer_norm.bias', 'wavlm_encoder.feature_projection.projection.weight', 'wavlm_encoder.feature_projection.projection.bias', 'wavlm_encoder.encoder.pos_conv_embed.conv.bias', 'wavlm_encoder.encoder.pos_conv_embed.conv.parametrizations.weight.original0', 'wavlm_encoder.encoder.pos_conv_embed.conv.parametrizations.weight.original1', 'wavlm_encoder.encoder.layer_norm.weight', 'wavlm_encoder.encoder.layer_norm.bias', 'wavlm_encoder.encoder.layers.0.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.0.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.0.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.0.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.0.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.0.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.0.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.0.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.0.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.0.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.0.attention.gru_rel_pos_linear.bias', 
'wavlm_encoder.encoder.layers.0.attention.rel_attn_embed.weight', 'wavlm_encoder.encoder.layers.0.layer_norm.weight', 'wavlm_encoder.encoder.layers.0.layer_norm.bias', 'wavlm_encoder.encoder.layers.0.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.0.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.0.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.0.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.0.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.0.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.1.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.1.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.1.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.1.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.1.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.1.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.1.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.1.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.1.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.1.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.1.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.1.layer_norm.weight', 'wavlm_encoder.encoder.layers.1.layer_norm.bias', 'wavlm_encoder.encoder.layers.1.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.1.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.1.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.1.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.1.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.1.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.2.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.2.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.2.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.2.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.2.attention.v_proj.bias', 
'wavlm_encoder.encoder.layers.2.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.2.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.2.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.2.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.2.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.2.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.2.layer_norm.weight', 'wavlm_encoder.encoder.layers.2.layer_norm.bias', 'wavlm_encoder.encoder.layers.2.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.2.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.2.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.2.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.2.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.2.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.3.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.3.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.3.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.3.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.3.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.3.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.3.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.3.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.3.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.3.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.3.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.3.layer_norm.weight', 'wavlm_encoder.encoder.layers.3.layer_norm.bias', 'wavlm_encoder.encoder.layers.3.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.3.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.3.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.3.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.3.final_layer_norm.weight', 
'wavlm_encoder.encoder.layers.3.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.4.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.4.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.4.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.4.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.4.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.4.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.4.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.4.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.4.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.4.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.4.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.4.layer_norm.weight', 'wavlm_encoder.encoder.layers.4.layer_norm.bias', 'wavlm_encoder.encoder.layers.4.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.4.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.4.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.4.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.4.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.4.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.5.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.5.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.5.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.5.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.5.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.5.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.5.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.5.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.5.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.5.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.5.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.5.layer_norm.weight', 'wavlm_encoder.encoder.layers.5.layer_norm.bias', 
'wavlm_encoder.encoder.layers.5.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.5.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.5.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.5.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.5.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.5.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.6.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.6.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.6.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.6.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.6.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.6.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.6.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.6.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.6.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.6.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.6.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.6.layer_norm.weight', 'wavlm_encoder.encoder.layers.6.layer_norm.bias', 'wavlm_encoder.encoder.layers.6.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.6.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.6.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.6.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.6.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.6.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.7.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.7.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.7.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.7.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.7.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.7.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.7.attention.q_proj.bias', 
'wavlm_encoder.encoder.layers.7.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.7.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.7.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.7.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.7.layer_norm.weight', 'wavlm_encoder.encoder.layers.7.layer_norm.bias', 'wavlm_encoder.encoder.layers.7.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.7.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.7.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.7.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.7.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.7.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.8.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.8.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.8.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.8.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.8.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.8.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.8.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.8.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.8.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.8.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.8.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.8.layer_norm.weight', 'wavlm_encoder.encoder.layers.8.layer_norm.bias', 'wavlm_encoder.encoder.layers.8.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.8.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.8.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.8.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.8.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.8.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.9.attention.gru_rel_pos_const', 
'wavlm_encoder.encoder.layers.9.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.9.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.9.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.9.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.9.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.9.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.9.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.9.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.9.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.9.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.9.layer_norm.weight', 'wavlm_encoder.encoder.layers.9.layer_norm.bias', 'wavlm_encoder.encoder.layers.9.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.9.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.9.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.9.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.9.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.9.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.10.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.10.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.10.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.10.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.10.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.10.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.10.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.10.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.10.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.10.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.10.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.10.layer_norm.weight', 'wavlm_encoder.encoder.layers.10.layer_norm.bias', 'wavlm_encoder.encoder.layers.10.feed_forward.intermediate_dense.weight', 
'wavlm_encoder.encoder.layers.10.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.10.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.10.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.10.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.10.final_layer_norm.bias', 'wavlm_encoder.encoder.layers.11.attention.gru_rel_pos_const', 'wavlm_encoder.encoder.layers.11.attention.k_proj.weight', 'wavlm_encoder.encoder.layers.11.attention.k_proj.bias', 'wavlm_encoder.encoder.layers.11.attention.v_proj.weight', 'wavlm_encoder.encoder.layers.11.attention.v_proj.bias', 'wavlm_encoder.encoder.layers.11.attention.q_proj.weight', 'wavlm_encoder.encoder.layers.11.attention.q_proj.bias', 'wavlm_encoder.encoder.layers.11.attention.out_proj.weight', 'wavlm_encoder.encoder.layers.11.attention.out_proj.bias', 'wavlm_encoder.encoder.layers.11.attention.gru_rel_pos_linear.weight', 'wavlm_encoder.encoder.layers.11.attention.gru_rel_pos_linear.bias', 'wavlm_encoder.encoder.layers.11.layer_norm.weight', 'wavlm_encoder.encoder.layers.11.layer_norm.bias', 'wavlm_encoder.encoder.layers.11.feed_forward.intermediate_dense.weight', 'wavlm_encoder.encoder.layers.11.feed_forward.intermediate_dense.bias', 'wavlm_encoder.encoder.layers.11.feed_forward.output_dense.weight', 'wavlm_encoder.encoder.layers.11.feed_forward.output_dense.bias', 'wavlm_encoder.encoder.layers.11.final_layer_norm.weight', 'wavlm_encoder.encoder.layers.11.final_layer_norm.bias', 'wavlm_transfer.kernel', 'pretrained_model.model.model.cls_token', 'pretrained_model.model.model.preprocessor_melspec_2048.mel_stft.spectrogram.window', 'pretrained_model.model.model.preprocessor_melspec_2048.mel_stft.mel_scale.fb', 'pretrained_model.model.model.rvq.quantizers.0.stale_counter', 'pretrained_model.model.model.rvq.quantizers.0.in_proj.bias', 'pretrained_model.model.model.rvq.quantizers.0.in_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.0.in_proj.weight_v', 
'pretrained_model.model.model.rvq.quantizers.0.out_proj.bias', 'pretrained_model.model.model.rvq.quantizers.0.out_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.0.out_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.0.codebook.weight', 'pretrained_model.model.model.rvq.quantizers.1.stale_counter', 'pretrained_model.model.model.rvq.quantizers.1.in_proj.bias', 'pretrained_model.model.model.rvq.quantizers.1.in_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.1.in_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.1.out_proj.bias', 'pretrained_model.model.model.rvq.quantizers.1.out_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.1.out_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.1.codebook.weight', 'pretrained_model.model.model.rvq.quantizers.2.stale_counter', 'pretrained_model.model.model.rvq.quantizers.2.in_proj.bias', 'pretrained_model.model.model.rvq.quantizers.2.in_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.2.in_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.2.out_proj.bias', 'pretrained_model.model.model.rvq.quantizers.2.out_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.2.out_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.2.codebook.weight', 'pretrained_model.model.model.rvq.quantizers.3.stale_counter', 'pretrained_model.model.model.rvq.quantizers.3.in_proj.bias', 'pretrained_model.model.model.rvq.quantizers.3.in_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.3.in_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.3.out_proj.bias', 'pretrained_model.model.model.rvq.quantizers.3.out_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.3.out_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.3.codebook.weight', 'pretrained_model.model.model.rvq.quantizers.4.stale_counter', 'pretrained_model.model.model.rvq.quantizers.4.in_proj.bias', 
'pretrained_model.model.model.rvq.quantizers.4.in_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.4.in_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.4.out_proj.bias', 'pretrained_model.model.model.rvq.quantizers.4.out_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.4.out_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.4.codebook.weight', 'pretrained_model.model.model.rvq.quantizers.5.stale_counter', 'pretrained_model.model.model.rvq.quantizers.5.in_proj.bias', 'pretrained_model.model.model.rvq.quantizers.5.in_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.5.in_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.5.out_proj.bias', 'pretrained_model.model.model.rvq.quantizers.5.out_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.5.out_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.5.codebook.weight', 'pretrained_model.model.model.rvq.quantizers.6.stale_counter', 'pretrained_model.model.model.rvq.quantizers.6.in_proj.bias', 'pretrained_model.model.model.rvq.quantizers.6.in_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.6.in_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.6.out_proj.bias', 'pretrained_model.model.model.rvq.quantizers.6.out_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.6.out_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.6.codebook.weight', 'pretrained_model.model.model.rvq.quantizers.7.stale_counter', 'pretrained_model.model.model.rvq.quantizers.7.in_proj.bias', 'pretrained_model.model.model.rvq.quantizers.7.in_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.7.in_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.7.out_proj.bias', 'pretrained_model.model.model.rvq.quantizers.7.out_proj.weight_g', 'pretrained_model.model.model.rvq.quantizers.7.out_proj.weight_v', 'pretrained_model.model.model.rvq.quantizers.7.codebook.weight', 
'pretrained_model.model.model.conv.conv.0.conv1.weight', 'pretrained_model.model.model.conv.conv.0.conv1.bias', 'pretrained_model.model.model.conv.conv.0.bn1.weight', 'pretrained_model.model.model.conv.conv.0.bn1.bias', 'pretrained_model.model.model.conv.conv.0.bn1.running_mean', 'pretrained_model.model.model.conv.conv.0.bn1.running_var', 'pretrained_model.model.model.conv.conv.0.bn1.num_batches_tracked', 'pretrained_model.model.model.conv.conv.0.conv2.weight', 'pretrained_model.model.model.conv.conv.0.conv2.bias', 'pretrained_model.model.model.conv.conv.0.bn2.weight', 'pretrained_model.model.model.conv.conv.0.bn2.bias', 'pretrained_model.model.model.conv.conv.0.bn2.running_mean', 'pretrained_model.model.model.conv.conv.0.bn2.running_var', 'pretrained_model.model.model.conv.conv.0.bn2.num_batches_tracked', 'pretrained_model.model.model.conv.conv.0.conv3.weight', 'pretrained_model.model.model.conv.conv.0.conv3.bias', 'pretrained_model.model.model.conv.conv.0.bn3.weight', 'pretrained_model.model.model.conv.conv.0.bn3.bias', 'pretrained_model.model.model.conv.conv.0.bn3.running_mean', 'pretrained_model.model.model.conv.conv.0.bn3.running_var', 'pretrained_model.model.model.conv.conv.0.bn3.num_batches_tracked', 'pretrained_model.model.model.conv.conv.1.conv1.weight', 'pretrained_model.model.model.conv.conv.1.conv1.bias', 'pretrained_model.model.model.conv.conv.1.bn1.weight', 'pretrained_model.model.model.conv.conv.1.bn1.bias', 'pretrained_model.model.model.conv.conv.1.bn1.running_mean', 'pretrained_model.model.model.conv.conv.1.bn1.running_var', 'pretrained_model.model.model.conv.conv.1.bn1.num_batches_tracked', 'pretrained_model.model.model.conv.conv.1.conv2.weight', 'pretrained_model.model.model.conv.conv.1.conv2.bias', 'pretrained_model.model.model.conv.conv.1.bn2.weight', 'pretrained_model.model.model.conv.conv.1.bn2.bias', 'pretrained_model.model.model.conv.conv.1.bn2.running_mean', 'pretrained_model.model.model.conv.conv.1.bn2.running_var', 
'pretrained_model.model.model.conv.conv.1.bn2.num_batches_tracked', 'pretrained_model.model.model.conv.conv.1.conv3.weight', 'pretrained_model.model.model.conv.conv.1.conv3.bias', 'pretrained_model.model.model.conv.conv.1.bn3.weight', 'pretrained_model.model.model.conv.conv.1.bn3.bias', 'pretrained_model.model.model.conv.conv.1.bn3.running_mean', 'pretrained_model.model.model.conv.conv.1.bn3.running_var', 'pretrained_model.model.model.conv.conv.1.bn3.num_batches_tracked', 'pretrained_model.model.model.conv.linear.weight', 'pretrained_model.model.model.conv.linear.bias', 'pretrained_model.model.model.conformer.embed_positions.inv_freq', 'pretrained_model.model.model.conformer.pos_conv_embed.conv.bias', 'pretrained_model.model.model.conformer.pos_conv_embed.conv.parametrizations.weight.original0', 'pretrained_model.model.model.conformer.pos_conv_embed.conv.parametrizations.weight.original1', 'pretrained_model.model.model.conformer.layer_norm.weight', 'pretrained_model.model.model.conformer.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.0.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.0.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.0.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.0.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.0.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.0.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.0.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.0.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.0.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.0.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.0.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.0.self_attn.linear_k.bias', 
'pretrained_model.model.model.conformer.layers.0.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.0.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.0.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.0.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.0.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.0.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.0.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.0.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.0.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.0.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.0.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.0.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.0.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.0.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.0.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.0.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.0.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.0.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.0.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.0.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.0.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.0.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.1.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.1.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.1.ffn1.intermediate_dense.weight', 
'pretrained_model.model.model.conformer.layers.1.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.1.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.1.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.1.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.1.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.1.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.1.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.1.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.1.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.1.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.1.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.1.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.1.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.1.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.1.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.1.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.1.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.1.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.1.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.1.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.1.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.1.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.1.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.1.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.1.ffn2_layer_norm.bias', 
'pretrained_model.model.model.conformer.layers.1.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.1.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.1.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.1.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.1.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.1.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.2.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.2.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.2.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.2.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.2.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.2.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.2.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.2.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.2.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.2.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.2.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.2.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.2.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.2.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.2.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.2.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.2.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.2.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.2.conv_module.pointwise_conv1.weight', 
'pretrained_model.model.model.conformer.layers.2.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.2.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.2.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.2.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.2.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.2.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.2.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.2.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.2.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.2.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.2.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.2.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.2.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.2.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.2.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.3.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.3.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.3.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.3.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.3.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.3.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.3.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.3.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.3.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.3.self_attn.linear_q.bias', 
'pretrained_model.model.model.conformer.layers.3.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.3.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.3.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.3.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.3.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.3.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.3.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.3.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.3.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.3.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.3.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.3.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.3.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.3.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.3.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.3.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.3.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.3.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.3.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.3.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.3.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.3.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.3.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.3.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.4.ffn1_layer_norm.weight', 
'pretrained_model.model.model.conformer.layers.4.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.4.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.4.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.4.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.4.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.4.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.4.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.4.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.4.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.4.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.4.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.4.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.4.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.4.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.4.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.4.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.4.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.4.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.4.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.4.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.4.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.4.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.4.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.4.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.4.conv_module.pointwise_conv2.weight', 
'pretrained_model.model.model.conformer.layers.4.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.4.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.4.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.4.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.4.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.4.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.4.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.4.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.5.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.5.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.5.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.5.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.5.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.5.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.5.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.5.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.5.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.5.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.5.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.5.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.5.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.5.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.5.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.5.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.5.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.5.conv_module.layer_norm.bias', 
'pretrained_model.model.model.conformer.layers.5.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.5.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.5.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.5.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.5.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.5.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.5.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.5.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.5.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.5.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.5.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.5.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.5.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.5.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.5.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.5.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.6.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.6.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.6.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.6.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.6.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.6.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.6.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.6.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.6.self_attn.linear_q.weight', 
'pretrained_model.model.model.conformer.layers.6.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.6.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.6.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.6.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.6.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.6.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.6.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.6.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.6.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.6.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.6.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.6.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.6.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.6.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.6.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.6.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.6.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.6.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.6.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.6.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.6.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.6.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.6.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.6.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.6.final_layer_norm.bias', 
'pretrained_model.model.model.conformer.layers.7.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.7.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.7.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.7.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.7.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.7.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.7.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.7.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.7.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.7.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.7.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.7.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.7.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.7.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.7.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.7.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.7.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.7.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.7.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.7.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.7.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.7.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.7.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.7.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.7.conv_module.batch_norm.num_batches_tracked', 
'pretrained_model.model.model.conformer.layers.7.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.7.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.7.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.7.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.7.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.7.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.7.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.7.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.7.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.8.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.8.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.8.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.8.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.8.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.8.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.8.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.8.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.8.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.8.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.8.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.8.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.8.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.8.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.8.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.8.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.8.conv_module.layer_norm.weight', 
'pretrained_model.model.model.conformer.layers.8.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.8.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.8.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.8.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.8.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.8.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.8.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.8.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.8.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.8.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.8.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.8.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.8.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.8.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.8.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.8.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.8.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.9.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.9.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.9.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.9.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.9.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.9.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.9.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.9.self_attn_layer_norm.bias', 
'pretrained_model.model.model.conformer.layers.9.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.9.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.9.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.9.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.9.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.9.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.9.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.9.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.9.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.9.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.9.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.9.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.9.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.9.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.9.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.9.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.9.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.9.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.9.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.9.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.9.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.9.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.9.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.9.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.9.final_layer_norm.weight', 
'pretrained_model.model.model.conformer.layers.9.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.10.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.10.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.10.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.10.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.10.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.10.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.10.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.10.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.10.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.10.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.10.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.10.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.10.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.10.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.10.self_attn.linear_out.weight', 'pretrained_model.model.model.conformer.layers.10.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.10.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.10.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.10.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.10.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.10.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.10.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.10.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.10.conv_module.batch_norm.running_var', 
'pretrained_model.model.model.conformer.layers.10.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.10.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.10.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.10.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.10.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.10.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.10.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.10.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.10.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.10.final_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.11.ffn1_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.11.ffn1_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.11.ffn1.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.11.ffn1.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.11.ffn1.output_dense.weight', 'pretrained_model.model.model.conformer.layers.11.ffn1.output_dense.bias', 'pretrained_model.model.model.conformer.layers.11.self_attn_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.11.self_attn_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.11.self_attn.linear_q.weight', 'pretrained_model.model.model.conformer.layers.11.self_attn.linear_q.bias', 'pretrained_model.model.model.conformer.layers.11.self_attn.linear_k.weight', 'pretrained_model.model.model.conformer.layers.11.self_attn.linear_k.bias', 'pretrained_model.model.model.conformer.layers.11.self_attn.linear_v.weight', 'pretrained_model.model.model.conformer.layers.11.self_attn.linear_v.bias', 'pretrained_model.model.model.conformer.layers.11.self_attn.linear_out.weight', 
'pretrained_model.model.model.conformer.layers.11.self_attn.linear_out.bias', 'pretrained_model.model.model.conformer.layers.11.conv_module.layer_norm.weight', 'pretrained_model.model.model.conformer.layers.11.conv_module.layer_norm.bias', 'pretrained_model.model.model.conformer.layers.11.conv_module.pointwise_conv1.weight', 'pretrained_model.model.model.conformer.layers.11.conv_module.depthwise_conv.weight', 'pretrained_model.model.model.conformer.layers.11.conv_module.batch_norm.weight', 'pretrained_model.model.model.conformer.layers.11.conv_module.batch_norm.bias', 'pretrained_model.model.model.conformer.layers.11.conv_module.batch_norm.running_mean', 'pretrained_model.model.model.conformer.layers.11.conv_module.batch_norm.running_var', 'pretrained_model.model.model.conformer.layers.11.conv_module.batch_norm.num_batches_tracked', 'pretrained_model.model.model.conformer.layers.11.conv_module.pointwise_conv2.weight', 'pretrained_model.model.model.conformer.layers.11.ffn2_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.11.ffn2_layer_norm.bias', 'pretrained_model.model.model.conformer.layers.11.ffn2.intermediate_dense.weight', 'pretrained_model.model.model.conformer.layers.11.ffn2.intermediate_dense.bias', 'pretrained_model.model.model.conformer.layers.11.ffn2.output_dense.weight', 'pretrained_model.model.model.conformer.layers.11.ffn2.output_dense.bias', 'pretrained_model.model.model.conformer.layers.11.final_layer_norm.weight', 'pretrained_model.model.model.conformer.layers.11.final_layer_norm.bias', 'pretrained_model.model.model.linear.weight', 'pretrained_model.model.model.linear.bias', 'd_conv_whisper.transformer_before.0.self_attn.to_qkv.parametrizations.weight.original0', 'd_conv_whisper.transformer_before.0.self_attn.to_qkv.parametrizations.weight.original1', 'd_conv_whisper.transformer_before.0.self_attn.to_out.parametrizations.weight.original0', 'd_conv_whisper.transformer_before.0.self_attn.to_out.parametrizations.weight.original1', 
'd_conv_whisper.transformer_before.0.self_attn.q_norm.weight', 'd_conv_whisper.transformer_before.0.self_attn.q_norm.bias', 'd_conv_whisper.transformer_before.0.self_attn.k_norm.weight', 'd_conv_whisper.transformer_before.0.self_attn.k_norm.bias', 'd_conv_whisper.transformer_before.0.self_attn_scale.scale', 'd_conv_whisper.transformer_before.0.ff.ff.0.proj.bias', 'd_conv_whisper.transformer_before.0.ff.ff.0.proj.parametrizations.weight.original0', 'd_conv_whisper.transformer_before.0.ff.ff.0.proj.parametrizations.weight.original1', 'd_conv_whisper.transformer_before.0.ff.ff.2.bias', 'd_conv_whisper.transformer_before.0.ff.ff.2.parametrizations.weight.original0', 'd_conv_whisper.transformer_before.0.ff.ff.2.parametrizations.weight.original1', 'd_conv_whisper.transformer_before.0.ff_scale.scale', 'd_conv_whisper.transformer_before.0.rope.inv_freq', 'd_conv_whisper.downsample.weight', 'd_conv_whisper.downsample.bias', 'd_conv_whisper.transformer_after.0.self_attn.to_qkv.parametrizations.weight.original0', 'd_conv_whisper.transformer_after.0.self_attn.to_qkv.parametrizations.weight.original1', 'd_conv_whisper.transformer_after.0.self_attn.to_out.parametrizations.weight.original0', 'd_conv_whisper.transformer_after.0.self_attn.to_out.parametrizations.weight.original1', 'd_conv_whisper.transformer_after.0.self_attn.q_norm.weight', 'd_conv_whisper.transformer_after.0.self_attn.q_norm.bias', 'd_conv_whisper.transformer_after.0.self_attn.k_norm.weight', 'd_conv_whisper.transformer_after.0.self_attn.k_norm.bias', 'd_conv_whisper.transformer_after.0.self_attn_scale.scale', 'd_conv_whisper.transformer_after.0.ff.ff.0.proj.bias', 'd_conv_whisper.transformer_after.0.ff.ff.0.proj.parametrizations.weight.original0', 'd_conv_whisper.transformer_after.0.ff.ff.0.proj.parametrizations.weight.original1', 'd_conv_whisper.transformer_after.0.ff.ff.2.bias', 'd_conv_whisper.transformer_after.0.ff.ff.2.parametrizations.weight.original0', 
'd_conv_whisper.transformer_after.0.ff.ff.2.parametrizations.weight.original1', 'd_conv_whisper.transformer_after.0.ff_scale.scale', 'd_conv_whisper.transformer_after.0.rope.inv_freq', 'd_conv_whisper.norm_before.weight', 'd_conv_whisper.norm_before.bias', 'd_conv_whisper.norm_after.weight', 'd_conv_whisper.norm_after.bias', 'd_conv_wavlm.transformer_before.0.self_attn.to_qkv.parametrizations.weight.original0', 'd_conv_wavlm.transformer_before.0.self_attn.to_qkv.parametrizations.weight.original1', 'd_conv_wavlm.transformer_before.0.self_attn.to_out.parametrizations.weight.original0', 'd_conv_wavlm.transformer_before.0.self_attn.to_out.parametrizations.weight.original1', 'd_conv_wavlm.transformer_before.0.self_attn.q_norm.weight', 'd_conv_wavlm.transformer_before.0.self_attn.q_norm.bias', 'd_conv_wavlm.transformer_before.0.self_attn.k_norm.weight', 'd_conv_wavlm.transformer_before.0.self_attn.k_norm.bias', 'd_conv_wavlm.transformer_before.0.self_attn_scale.scale', 'd_conv_wavlm.transformer_before.0.ff.ff.0.proj.bias', 'd_conv_wavlm.transformer_before.0.ff.ff.0.proj.parametrizations.weight.original0', 'd_conv_wavlm.transformer_before.0.ff.ff.0.proj.parametrizations.weight.original1', 'd_conv_wavlm.transformer_before.0.ff.ff.2.bias', 'd_conv_wavlm.transformer_before.0.ff.ff.2.parametrizations.weight.original0', 'd_conv_wavlm.transformer_before.0.ff.ff.2.parametrizations.weight.original1', 'd_conv_wavlm.transformer_before.0.ff_scale.scale', 'd_conv_wavlm.transformer_before.0.rope.inv_freq', 'd_conv_wavlm.downsample.weight', 'd_conv_wavlm.downsample.bias', 'd_conv_wavlm.transformer_after.0.self_attn.to_qkv.parametrizations.weight.original0', 'd_conv_wavlm.transformer_after.0.self_attn.to_qkv.parametrizations.weight.original1', 'd_conv_wavlm.transformer_after.0.self_attn.to_out.parametrizations.weight.original0', 'd_conv_wavlm.transformer_after.0.self_attn.to_out.parametrizations.weight.original1', 'd_conv_wavlm.transformer_after.0.self_attn.q_norm.weight', 
'd_conv_wavlm.transformer_after.0.self_attn.q_norm.bias', 'd_conv_wavlm.transformer_after.0.self_attn.k_norm.weight', 'd_conv_wavlm.transformer_after.0.self_attn.k_norm.bias', 'd_conv_wavlm.transformer_after.0.self_attn_scale.scale', 'd_conv_wavlm.transformer_after.0.ff.ff.0.proj.bias', 'd_conv_wavlm.transformer_after.0.ff.ff.0.proj.parametrizations.weight.original0', 'd_conv_wavlm.transformer_after.0.ff.ff.0.proj.parametrizations.weight.original1', 'd_conv_wavlm.transformer_after.0.ff.ff.2.bias', 'd_conv_wavlm.transformer_after.0.ff.ff.2.parametrizations.weight.original0', 'd_conv_wavlm.transformer_after.0.ff.ff.2.parametrizations.weight.original1', 'd_conv_wavlm.transformer_after.0.ff_scale.scale', 'd_conv_wavlm.transformer_after.0.rope.inv_freq', 'd_conv_wavlm.norm_before.weight', 'd_conv_wavlm.norm_before.bias', 'd_conv_wavlm.norm_after.weight', 'd_conv_wavlm.norm_after.bias', 'd_conv_embedding_semantic.transformer_before.0.self_attn.to_qkv.parametrizations.weight.original0', 'd_conv_embedding_semantic.transformer_before.0.self_attn.to_qkv.parametrizations.weight.original1', 'd_conv_embedding_semantic.transformer_before.0.self_attn.to_out.parametrizations.weight.original0', 'd_conv_embedding_semantic.transformer_before.0.self_attn.to_out.parametrizations.weight.original1', 'd_conv_embedding_semantic.transformer_before.0.self_attn.q_norm.weight', 'd_conv_embedding_semantic.transformer_before.0.self_attn.q_norm.bias', 'd_conv_embedding_semantic.transformer_before.0.self_attn.k_norm.weight', 'd_conv_embedding_semantic.transformer_before.0.self_attn.k_norm.bias', 'd_conv_embedding_semantic.transformer_before.0.self_attn_scale.scale', 'd_conv_embedding_semantic.transformer_before.0.ff.ff.0.proj.bias', 'd_conv_embedding_semantic.transformer_before.0.ff.ff.0.proj.parametrizations.weight.original0', 'd_conv_embedding_semantic.transformer_before.0.ff.ff.0.proj.parametrizations.weight.original1', 'd_conv_embedding_semantic.transformer_before.0.ff.ff.2.bias', 
'd_conv_embedding_semantic.transformer_before.0.ff.ff.2.parametrizations.weight.original0', 'd_conv_embedding_semantic.transformer_before.0.ff.ff.2.parametrizations.weight.original1', 'd_conv_embedding_semantic.transformer_before.0.ff_scale.scale', 'd_conv_embedding_semantic.transformer_before.0.rope.inv_freq', 'd_conv_embedding_semantic.downsample.weight', 'd_conv_embedding_semantic.downsample.bias', 'd_conv_embedding_semantic.transformer_after.0.self_attn.to_qkv.parametrizations.weight.original0', 'd_conv_embedding_semantic.transformer_after.0.self_attn.to_qkv.parametrizations.weight.original1', 'd_conv_embedding_semantic.transformer_after.0.self_attn.to_out.parametrizations.weight.original0', 'd_conv_embedding_semantic.transformer_after.0.self_attn.to_out.parametrizations.weight.original1', 'd_conv_embedding_semantic.transformer_after.0.self_attn.q_norm.weight', 'd_conv_embedding_semantic.transformer_after.0.self_attn.q_norm.bias', 'd_conv_embedding_semantic.transformer_after.0.self_attn.k_norm.weight', 'd_conv_embedding_semantic.transformer_after.0.self_attn.k_norm.bias', 'd_conv_embedding_semantic.transformer_after.0.self_attn_scale.scale', 'd_conv_embedding_semantic.transformer_after.0.ff.ff.0.proj.bias', 'd_conv_embedding_semantic.transformer_after.0.ff.ff.0.proj.parametrizations.weight.original0', 'd_conv_embedding_semantic.transformer_after.0.ff.ff.0.proj.parametrizations.weight.original1', 'd_conv_embedding_semantic.transformer_after.0.ff.ff.2.bias', 'd_conv_embedding_semantic.transformer_after.0.ff.ff.2.parametrizations.weight.original0', 'd_conv_embedding_semantic.transformer_after.0.ff.ff.2.parametrizations.weight.original1', 'd_conv_embedding_semantic.transformer_after.0.ff_scale.scale', 'd_conv_embedding_semantic.transformer_after.0.rope.inv_freq', 'd_conv_embedding_semantic.norm_before.weight', 'd_conv_embedding_semantic.norm_before.bias', 'd_conv_embedding_semantic.norm_after.weight', 'd_conv_embedding_semantic.norm_after.bias', 
'd_conv_embedding_acoustic.transformer_before.0.self_attn.to_qkv.parametrizations.weight.original0', 'd_conv_embedding_acoustic.transformer_before.0.self_attn.to_qkv.parametrizations.weight.original1', 'd_conv_embedding_acoustic.transformer_before.0.self_attn.to_out.parametrizations.weight.original0', 'd_conv_embedding_acoustic.transformer_before.0.self_attn.to_out.parametrizations.weight.original1', 'd_conv_embedding_acoustic.transformer_before.0.self_attn.q_norm.weight', 'd_conv_embedding_acoustic.transformer_before.0.self_attn.q_norm.bias', 'd_conv_embedding_acoustic.transformer_before.0.self_attn.k_norm.weight', 'd_conv_embedding_acoustic.transformer_before.0.self_attn.k_norm.bias', 'd_conv_embedding_acoustic.transformer_before.0.self_attn_scale.scale', 'd_conv_embedding_acoustic.transformer_before.0.ff.ff.0.proj.bias', 'd_conv_embedding_acoustic.transformer_before.0.ff.ff.0.proj.parametrizations.weight.original0', 'd_conv_embedding_acoustic.transformer_before.0.ff.ff.0.proj.parametrizations.weight.original1', 'd_conv_embedding_acoustic.transformer_before.0.ff.ff.2.bias', 'd_conv_embedding_acoustic.transformer_before.0.ff.ff.2.parametrizations.weight.original0', 'd_conv_embedding_acoustic.transformer_before.0.ff.ff.2.parametrizations.weight.original1', 'd_conv_embedding_acoustic.transformer_before.0.ff_scale.scale', 'd_conv_embedding_acoustic.transformer_before.0.rope.inv_freq', 'd_conv_embedding_acoustic.downsample.weight', 'd_conv_embedding_acoustic.downsample.bias', 'd_conv_embedding_acoustic.transformer_after.0.self_attn.to_qkv.parametrizations.weight.original0', 'd_conv_embedding_acoustic.transformer_after.0.self_attn.to_qkv.parametrizations.weight.original1', 'd_conv_embedding_acoustic.transformer_after.0.self_attn.to_out.parametrizations.weight.original0', 'd_conv_embedding_acoustic.transformer_after.0.self_attn.to_out.parametrizations.weight.original1', 'd_conv_embedding_acoustic.transformer_after.0.self_attn.q_norm.weight', 
'd_conv_embedding_acoustic.transformer_after.0.self_attn.q_norm.bias', 'd_conv_embedding_acoustic.transformer_after.0.self_attn.k_norm.weight', 'd_conv_embedding_acoustic.transformer_after.0.self_attn.k_norm.bias', 'd_conv_embedding_acoustic.transformer_after.0.self_attn_scale.scale', 'd_conv_embedding_acoustic.transformer_after.0.ff.ff.0.proj.bias', 'd_conv_embedding_acoustic.transformer_after.0.ff.ff.0.proj.parametrizations.weight.original0', 'd_conv_embedding_acoustic.transformer_after.0.ff.ff.0.proj.parametrizations.weight.original1', 'd_conv_embedding_acoustic.transformer_after.0.ff.ff.2.bias', 'd_conv_embedding_acoustic.transformer_after.0.ff.ff.2.parametrizations.weight.original0', 'd_conv_embedding_acoustic.transformer_after.0.ff.ff.2.parametrizations.weight.original1', 'd_conv_embedding_acoustic.transformer_after.0.ff_scale.scale', 'd_conv_embedding_acoustic.transformer_after.0.rope.inv_freq', 'd_conv_embedding_acoustic.norm_before.weight', 'd_conv_embedding_acoustic.norm_before.bias', 'd_conv_embedding_acoustic.norm_after.weight', 'd_conv_embedding_acoustic.norm_after.bias', 'structure_semantic_decoder.conv1.conv.weight', 'structure_semantic_decoder.conv_blocks.0.conv.conv.weight', 'structure_semantic_decoder.conv_blocks.0.conv.conv.bias', 'structure_semantic_decoder.conv_blocks.0.res_units.0.conv1.conv.weight', 'structure_semantic_decoder.conv_blocks.0.res_units.0.conv2.weight', 'structure_semantic_decoder.conv_blocks.0.res_units.1.conv1.conv.weight', 'structure_semantic_decoder.conv_blocks.0.res_units.1.conv2.weight', 'structure_semantic_decoder.conv_blocks.1.conv.deconv.weight', 'structure_semantic_decoder.conv_blocks.1.conv.deconv.bias', 'structure_semantic_decoder.conv_blocks.1.res_units.0.conv1.conv.weight', 'structure_semantic_decoder.conv_blocks.1.res_units.0.conv2.weight', 'structure_semantic_decoder.conv_blocks.1.res_units.1.conv1.conv.weight', 'structure_semantic_decoder.conv_blocks.1.res_units.1.conv2.weight', 
'structure_semantic_decoder.conv2.conv.weight', 'pronunciation_decoder.conv1.conv.weight', 'pronunciation_decoder.conv_blocks.0.conv.deconv.weight', 'pronunciation_decoder.conv_blocks.0.conv.deconv.bias', 'pronunciation_decoder.conv_blocks.0.res_units.0.conv1.conv.weight', 'pronunciation_decoder.conv_blocks.0.res_units.0.conv2.weight', 'pronunciation_decoder.conv_blocks.0.res_units.1.conv1.conv.weight', 'pronunciation_decoder.conv_blocks.0.res_units.1.conv2.weight', 'pronunciation_decoder.conv_blocks.1.conv.deconv.weight', 'pronunciation_decoder.conv_blocks.1.conv.deconv.bias', 'pronunciation_decoder.conv_blocks.1.res_units.0.conv1.conv.weight', 'pronunciation_decoder.conv_blocks.1.res_units.0.conv2.weight', 'pronunciation_decoder.conv_blocks.1.res_units.1.conv1.conv.weight', 'pronunciation_decoder.conv_blocks.1.res_units.1.conv2.weight', 'pronunciation_decoder.conv2.conv.weight', 'vq_acoustic.project_in.weight', 'vq_acoustic.project_in.bias', 'vq_acoustic.project_out.weight', 'vq_acoustic.project_out.bias', 'vq_acoustic.layers.0._codebook.initted', 'vq_acoustic.layers.0._codebook.cluster_size', 'vq_acoustic.layers.0._codebook.embed_avg', 'vq_acoustic.layers.0._codebook.embed', 'vq_acoustic.layers.1._codebook.initted', 'vq_acoustic.layers.1._codebook.cluster_size', 'vq_acoustic.layers.1._codebook.embed_avg', 'vq_acoustic.layers.1._codebook.embed', 'vq_acoustic.layers.2._codebook.initted', 'vq_acoustic.layers.2._codebook.cluster_size', 'vq_acoustic.layers.2._codebook.embed_avg', 'vq_acoustic.layers.2._codebook.embed', 'vq_acoustic.layers.3._codebook.initted', 'vq_acoustic.layers.3._codebook.cluster_size', 'vq_acoustic.layers.3._codebook.embed_avg', 'vq_acoustic.layers.3._codebook.embed', 'vq_acoustic.layers.4._codebook.initted', 'vq_acoustic.layers.4._codebook.cluster_size', 'vq_acoustic.layers.4._codebook.embed_avg', 'vq_acoustic.layers.4._codebook.embed', 'vq_acoustic.layers.5._codebook.initted', 'vq_acoustic.layers.5._codebook.cluster_size', 
'vq_acoustic.layers.5._codebook.embed_avg', 'vq_acoustic.layers.5._codebook.embed', 'vq_structure_semantic.project_in.weight', 'vq_structure_semantic.project_in.bias', 'vq_structure_semantic.project_out.weight', 'vq_structure_semantic.project_out.bias', 'vq_structure_semantic.layers.0._codebook.initted', 'vq_structure_semantic.layers.0._codebook.cluster_size', 'vq_structure_semantic.layers.0._codebook.embed_avg', 'vq_structure_semantic.layers.0._codebook.embed', 'vq_pronunciation_semantic.project_in.weight', 'vq_pronunciation_semantic.project_in.bias', 'vq_pronunciation_semantic.project_out.weight', 'vq_pronunciation_semantic.project_out.bias', 'vq_pronunciation_semantic.layers.0._codebook.initted', 'vq_pronunciation_semantic.layers.0._codebook.cluster_size', 'vq_pronunciation_semantic.layers.0._codebook.embed_avg', 'vq_pronunciation_semantic.layers.0._codebook.embed', 'cond_fusion_layer_semantic.weight', 'cond_fusion_layer_semantic.bias', 'cond_fusion_layer_acoustic.weight', 'cond_fusion_layer_acoustic.bias', 'cond_fusion_layer_phone.weight', 'cond_fusion_layer_phone.bias', 'cond_feature_emb.weight', 'cond_feature_emb.bias', 'cfm_wrapper.estimator.scale_shift_table', 'cfm_wrapper.estimator.proj_in.ffn_1.weight', 'cfm_wrapper.estimator.proj_in.ffn_1.bias', 'cfm_wrapper.estimator.proj_in.ffn_2.weight', 'cfm_wrapper.estimator.proj_in.ffn_2.bias', 'cfm_wrapper.estimator.pos_embed.pe', 'cfm_wrapper.estimator.transformer_blocks.0.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.0.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.0.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.0.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.0.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.0.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.0.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.0.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.0.attn1.to_out.0.bias', 
'cfm_wrapper.estimator.transformer_blocks.0.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.0.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.0.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.0.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.1.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.1.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.1.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.1.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.1.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.1.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.1.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.1.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.1.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.1.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.1.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.1.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.1.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.2.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.2.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.2.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.2.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.2.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.2.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.2.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.2.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.2.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.2.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.2.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.2.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.2.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.3.scale_shift_table', 
'cfm_wrapper.estimator.transformer_blocks.3.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.3.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.3.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.3.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.3.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.3.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.3.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.3.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.3.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.3.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.3.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.3.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.4.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.4.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.4.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.4.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.4.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.4.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.4.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.4.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.4.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.4.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.4.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.4.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.4.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.5.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.5.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.5.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.5.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.5.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.5.attn1.to_v.weight', 
'cfm_wrapper.estimator.transformer_blocks.5.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.5.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.5.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.5.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.5.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.5.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.5.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.6.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.6.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.6.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.6.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.6.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.6.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.6.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.6.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.6.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.6.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.6.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.6.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.6.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.7.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.7.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.7.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.7.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.7.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.7.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.7.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.7.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.7.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.7.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.7.ff.net.0.proj.bias', 
'cfm_wrapper.estimator.transformer_blocks.7.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.7.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.8.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.8.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.8.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.8.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.8.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.8.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.8.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.8.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.8.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.8.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.8.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.8.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.8.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.9.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.9.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.9.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.9.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.9.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.9.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.9.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.9.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.9.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.9.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.9.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.9.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.9.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.10.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.10.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.10.attn1.to_q.bias', 
'cfm_wrapper.estimator.transformer_blocks.10.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.10.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.10.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.10.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.10.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.10.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.10.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.10.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.10.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.10.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.11.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.11.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.11.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.11.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.11.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.11.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.11.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.11.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.11.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.11.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.11.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.11.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.11.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.12.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.12.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.12.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.12.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.12.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.12.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.12.attn1.to_v.bias', 
'cfm_wrapper.estimator.transformer_blocks.12.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.12.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.12.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.12.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.12.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.12.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.13.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.13.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.13.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.13.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.13.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.13.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.13.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.13.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.13.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.13.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.13.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.13.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.13.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.14.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.14.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.14.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.14.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.14.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.14.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.14.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.14.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.14.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.14.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.14.ff.net.0.proj.bias', 
'cfm_wrapper.estimator.transformer_blocks.14.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.14.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.15.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.15.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.15.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.15.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.15.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.15.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.15.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.15.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.15.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.15.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.15.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.15.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.15.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.16.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.16.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.16.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.16.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.16.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.16.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.16.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.16.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.16.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.16.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.16.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.16.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.16.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.17.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.17.attn1.to_q.weight', 
'cfm_wrapper.estimator.transformer_blocks.17.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.17.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.17.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.17.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.17.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.17.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.17.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.17.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.17.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.17.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.17.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.18.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.18.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.18.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.18.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.18.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.18.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.18.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.18.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.18.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.18.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.18.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.18.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.18.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.19.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.19.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.19.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.19.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.19.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.19.attn1.to_v.weight', 
'cfm_wrapper.estimator.transformer_blocks.19.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.19.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.19.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.19.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.19.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.19.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.19.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.20.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.20.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.20.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.20.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.20.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.20.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.20.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.20.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.20.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.20.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.20.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.20.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.20.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.21.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.21.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.21.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.21.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.21.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.21.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.21.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.21.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.21.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.21.ff.net.0.proj.weight', 
'cfm_wrapper.estimator.transformer_blocks.21.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.21.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.21.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.22.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.22.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.22.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.22.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.22.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.22.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.22.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.22.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.22.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.22.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.22.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.22.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.22.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.23.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.23.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.23.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.23.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.23.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.23.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.23.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.23.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.23.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.23.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.23.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.23.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.23.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.24.scale_shift_table', 
'cfm_wrapper.estimator.transformer_blocks.24.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.24.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.24.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.24.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.24.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.24.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.24.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.24.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.24.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.24.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.24.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.24.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.25.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.25.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.25.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.25.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.25.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.25.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.25.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.25.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.25.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.25.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.25.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.25.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.25.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.26.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.26.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.26.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.26.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.26.attn1.to_k.bias', 
'cfm_wrapper.estimator.transformer_blocks.26.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.26.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.26.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.26.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.26.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.26.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.26.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.26.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.27.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.27.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.27.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.27.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.27.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.27.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.27.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.27.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.27.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.27.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.27.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.27.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.27.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.28.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.28.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.28.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.28.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.28.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.28.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.28.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.28.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.28.attn1.to_out.0.bias', 
'cfm_wrapper.estimator.transformer_blocks.28.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.28.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.28.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.28.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.29.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.29.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.29.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.29.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.29.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.29.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.29.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.29.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.29.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.29.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.29.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.29.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.29.ff.net.2.bias', 'cfm_wrapper.estimator.transformer_blocks.30.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.30.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.30.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.30.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.30.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.30.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.30.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.30.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.30.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.30.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.30.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.30.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.30.ff.net.2.bias', 
'cfm_wrapper.estimator.transformer_blocks.31.scale_shift_table', 'cfm_wrapper.estimator.transformer_blocks.31.attn1.to_q.weight', 'cfm_wrapper.estimator.transformer_blocks.31.attn1.to_q.bias', 'cfm_wrapper.estimator.transformer_blocks.31.attn1.to_k.weight', 'cfm_wrapper.estimator.transformer_blocks.31.attn1.to_k.bias', 'cfm_wrapper.estimator.transformer_blocks.31.attn1.to_v.weight', 'cfm_wrapper.estimator.transformer_blocks.31.attn1.to_v.bias', 'cfm_wrapper.estimator.transformer_blocks.31.attn1.to_out.0.weight', 'cfm_wrapper.estimator.transformer_blocks.31.attn1.to_out.0.bias', 'cfm_wrapper.estimator.transformer_blocks.31.ff.net.0.proj.weight', 'cfm_wrapper.estimator.transformer_blocks.31.ff.net.0.proj.bias', 'cfm_wrapper.estimator.transformer_blocks.31.ff.net.2.weight', 'cfm_wrapper.estimator.transformer_blocks.31.ff.net.2.bias', 'cfm_wrapper.estimator.proj_out.ffn_1.weight', 'cfm_wrapper.estimator.proj_out.ffn_1.bias', 'cfm_wrapper.estimator.proj_out.ffn_2.weight', 'cfm_wrapper.estimator.proj_out.ffn_2.bias', 'cfm_wrapper.estimator.adaln_single.emb.timestep_embedder.linear_1.weight', 'cfm_wrapper.estimator.adaln_single.emb.timestep_embedder.linear_1.bias', 'cfm_wrapper.estimator.adaln_single.emb.timestep_embedder.linear_2.weight', 'cfm_wrapper.estimator.adaln_single.emb.timestep_embedder.linear_2.bias', 'cfm_wrapper.estimator.adaln_single.linear.weight', 'cfm_wrapper.estimator.adaln_single.linear.bias'])
[LoRA-Loader] matched=1816 mismatched=0 missing=503 unexpected=0
β†’ 503 keys present in model but missing in ckpt, e.g.:
time_film_phone.weight
time_film_phone.bias
time_film_semantic.weight
time_film_semantic.bias
time_film_acoustic.weight
time_film_acoustic.bias
reason_adaptor.weight
reason_adaptor.bias
audio_thinking.cls_token
audio_thinking.encoder_transformers.0.self_attn.to_qkv.parametrizations.weight.original0
audio_thinking.encoder_transformers.0.self_attn.to_qkv.parametrizations.weight.original1
audio_thinking.encoder_transformers.0.self_attn.to_out.parametrizations.weight.original0
audio_thinking.encoder_transformers.0.self_attn.to_out.parametrizations.weight.original1
audio_thinking.encoder_transformers.0.self_attn.q_norm.weight
audio_thinking.encoder_transformers.0.self_attn.q_norm.bias
audio_thinking.encoder_transformers.0.self_attn.k_norm.weight
audio_thinking.encoder_transformers.0.self_attn.k_norm.bias
audio_thinking.encoder_transformers.0.self_attn_scale.scale
audio_thinking.encoder_transformers.0.ff.ff.0.proj.bias
audio_thinking.encoder_transformers.0.ff.ff.0.proj.parametrizations.weight.original0 ...
[LoRA-Loader] done.
Successfully loaded checkpoint from: /turing_music_fs/music_data/ydc/exp2/tmp_codec/reasoncodec_1024/reason_codec.checkpoint
2025-11-10 16:04:43,558 INFO [offline_tokenization_scp.py:68] tokenizer built
2025-11-10 16:04:43,560 INFO [offline_tokenization_scp.py:79] [Rank 2] Final output will be at: /turing_music_fs/music_data/ydc/code2/TokenPPL/data/music/test/16splits/reason_tokens.3.pt
2025-11-10 16:04:43,560 INFO [offline_tokenization_scp.py:80] [Rank 2] Checkpoint path is: /turing_music_fs/music_data/ydc/code2/TokenPPL/data/music/test/16splits/semantic_tokens.3.pt.checkpoint_rec.pth, /turing_music_fs/music_data/ydc/code2/TokenPPL/data/music/test/16splits/reason_tokens.3.pt.checkpoint_reason.pth
2025-11-10 16:04:43,560 INFO [offline_tokenization_scp.py:97] [Rank 2] No checkpoint found. Starting from scratch.
[Rank 2] Tokenizing: 0%| | 0/13 [00:00<?, ?file/s]
/turing_music_fs/music_data/ydc/code2/TokenPPL/Token_LM_dual/tools/tokenizer/ReasoningCodec_film_1024/models/AudioDiffusion1D.py:662: FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
with torch.cuda.amp.autocast(enabled=False):
/root/miniconda3/envs/uniaudio2/lib/python3.10/contextlib.py:103: FutureWarning: `torch.backends.cuda.sdp_kernel()` is deprecated. In the future, this context manager will be removed. Please see `torch.nn.attention.sdpa_kernel()` for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
[Rank 2] Tokenizing: 100%|██████████| 13/13 [00:08<00:00, 1.58file/s]
2025-11-10 16:04:51,781 INFO [offline_tokenization_scp.py:166] [Rank 2] Processing complete. Saving final data...
2025-11-10 16:04:51,783 INFO [offline_tokenization_scp.py:169] [Rank 2] Final data saved to /turing_music_fs/music_data/ydc/code2/TokenPPL/data/music/test/16splits/reason_tokens.3.pt with 13 entries.
2025-11-10 16:04:51,920 INFO [offline_tokenization_scp.py:185] [Rank 2] Final GPU cache cleanup completed
2025-11-10 16:04:51,990 DEBUG [_api.py:331] Attempting to acquire lock 139923780301584 on /root/.triton/autotune/Fp16Matmul_2d_kernel.pickle.lock
2025-11-10 16:04:51,990 DEBUG [_api.py:334] Lock 139923780301584 acquired on /root/.triton/autotune/Fp16Matmul_2d_kernel.pickle.lock
2025-11-10 16:04:51,991 DEBUG [_api.py:364] Attempting to release lock 139923780301584 on /root/.triton/autotune/Fp16Matmul_2d_kernel.pickle.lock
2025-11-10 16:04:51,991 DEBUG [_api.py:367] Lock 139923780301584 released on /root/.triton/autotune/Fp16Matmul_2d_kernel.pickle.lock
2025-11-10 16:04:51,993 DEBUG [_api.py:331] Attempting to acquire lock 139923780303696 on /root/.triton/autotune/Fp16Matmul_4d_kernel.pickle.lock
2025-11-10 16:04:51,994 DEBUG [_api.py:334] Lock 139923780303696 acquired on /root/.triton/autotune/Fp16Matmul_4d_kernel.pickle.lock
2025-11-10 16:04:51,994 DEBUG [_api.py:364] Attempting to release lock 139923780303696 on /root/.triton/autotune/Fp16Matmul_4d_kernel.pickle.lock
2025-11-10 16:04:51,994 DEBUG [_api.py:367] Lock 139923780303696 released on /root/.triton/autotune/Fp16Matmul_4d_kernel.pickle.lock
# Accounting: time=152 threads=1
# Ended (code 0) at Mon Nov 10 16:04:56 UTC 2025, elapsed time 152 seconds