english_encodec_model / train.3.log
Stanwang1210 · Upload folder using huggingface_hub (cb5c8bb, verified)
# python3 -m espnet2.bin.speechlm_train --use_preprocessor true --token_list data/token_list/tts_mls_all_espnet_mls-english_encodec_16k/token_list --token_bias data/token_list/tts_mls_all_espnet_mls-english_encodec_16k/token_bias.json --non_linguistic_symbols none --cleaner None --g2p g2p_en --bpemodel dump_16000/raw_tts_mls_ESPnet_espnet_mls-english_encodec_16k/mls_all_train_subset/token_lists/text_bpe --multi_task_dataset true --sharded_dataset true --resume true --output_dir exp_ar_tts/speechlm_tts_mls_all_train_valle_espnet_mls-english_encodec_16k --config conf/train_valle.yaml --train_data_path_and_name_and_type exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/train/mls_all_train_subset//split2/JOB/data.JOB.json,_,dataset_json --valid_data_path_and_name_and_type exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/mls_all_dev//split2/JOB/data.JOB.json,_,dataset_json --train_shape_file exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/train/dec_seq_lengths.JOB --valid_shape_file exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/dec_seq_lengths.JOB --ngpu 2 --multiprocessing_distributed True
# Started at Tue Sep 10 01:03:08 CST 2024
#
/home/stan/miniconda3/envs/espnet_codec/bin/python3 /mnt/data/stan/codec_espnet/espnet2/bin/speechlm_train.py --use_preprocessor true --token_list data/token_list/tts_mls_all_espnet_mls-english_encodec_16k/token_list --token_bias data/token_list/tts_mls_all_espnet_mls-english_encodec_16k/token_bias.json --non_linguistic_symbols none --cleaner None --g2p g2p_en --bpemodel dump_16000/raw_tts_mls_ESPnet_espnet_mls-english_encodec_16k/mls_all_train_subset/token_lists/text_bpe --multi_task_dataset true --sharded_dataset true --resume true --output_dir exp_ar_tts/speechlm_tts_mls_all_train_valle_espnet_mls-english_encodec_16k --config conf/train_valle.yaml --train_data_path_and_name_and_type exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/train/mls_all_train_subset//split2/JOB/data.JOB.json,_,dataset_json --valid_data_path_and_name_and_type exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/mls_all_dev//split2/JOB/data.JOB.json,_,dataset_json --train_shape_file exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/train/dec_seq_lengths.JOB --valid_shape_file exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/dec_seq_lengths.JOB --ngpu 2 --multiprocessing_distributed True
[W Utils.hpp:166] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function getCvarBool)
[W Utils.hpp:166] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function getCvarBool)
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:25,281 (speechlm:274) INFO: Vocabulary size: 8645
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:25,282 (speechlm:283) INFO: Token Bias: {'codec': 256, 'text_bpe': 8448}
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:25,354 (transformer:47) INFO: Build Transformer Decoder with internal implementation
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:26,707 (abs_task:1397) INFO: pytorch.version=2.3.0+cu118, cuda.available=True, cudnn.version=8700, cudnn.benchmark=False, cudnn.deterministic=True
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:26,712 (abs_task:1398) INFO: Model structure:
ESPnetSpeechLMModel(
  (corelm): ValleLM(
    (emb): Embedding(8645, 512)
    (lm_head): Linear(in_features=512, out_features=8645, bias=False)
    (ar_decoder): TransformerDecoder(
      (model): TransformerDecoder(
        (pos_emb): Embedding(3000, 512)
        (blocks): ModuleList(
          (0-11): 12 x ResidualAttentionBlock(
            (attn): MultiHeadAttention(
              (query): Linear(in_features=512, out_features=512, bias=True)
              (key): Linear(in_features=512, out_features=512, bias=False)
              (value): Linear(in_features=512, out_features=512, bias=True)
              (out): Linear(in_features=512, out_features=512, bias=True)
              (q_norm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
              (k_norm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
            )
            (attn_ln): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (attn_dropout): Dropout(p=0.0, inplace=False)
            (mlp): Sequential(
              (0): Linear(in_features=512, out_features=2048, bias=True)
              (1): GELU(approximate='none')
              (2): Linear(in_features=2048, out_features=512, bias=True)
            )
            (mlp_ln): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
            (mlp_dropout): Dropout(p=0.0, inplace=False)
          )
        )
        (ln): LayerNorm((512,), eps=1e-05, elementwise_affine=True)
      )
    )
    (nar_decoder): ValleNARDecoder(
      (pos_emb): Embedding(3000, 512)
      (blocks): ModuleList(
        (0-11): 12 x ResidualAttentionBlockAdaLN(
          (attn): MultiHeadAttention(
            (query): Linear(in_features=512, out_features=512, bias=True)
            (key): Linear(in_features=512, out_features=512, bias=False)
            (value): Linear(in_features=512, out_features=512, bias=True)
            (out): Linear(in_features=512, out_features=512, bias=True)
            (q_norm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
            (k_norm): LayerNorm((64,), eps=1e-05, elementwise_affine=True)
          )
          (attn_ln): AdaLN(
            (weight): Linear(in_features=512, out_features=512, bias=False)
            (bias): Linear(in_features=512, out_features=512, bias=False)
          )
          (attn_dropout): Dropout(p=0.0, inplace=False)
          (mlp): Sequential(
            (0): Linear(in_features=512, out_features=2048, bias=True)
            (1): GELU(approximate='none')
            (2): Linear(in_features=2048, out_features=512, bias=True)
          )
          (mlp_ln): AdaLN(
            (weight): Linear(in_features=512, out_features=512, bias=False)
            (bias): Linear(in_features=512, out_features=512, bias=False)
          )
          (mlp_dropout): Dropout(p=0.0, inplace=False)
        )
      )
      (ln): AdaLN(
        (weight): Linear(in_features=512, out_features=512, bias=False)
        (bias): Linear(in_features=512, out_features=512, bias=False)
      )
      (level_emb): Embedding(7, 512)
    )
  )
)
Model summary:
Class Name: ESPnetSpeechLMModel
Total Number of model parameters: 100.66 M
Number of trainable parameters: 100.66 M (100.0%)
Size: 402.65 MB
Type: torch.float32
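As a sanity check (an annotation, not part of the original log), the 100.66 M figure in the summary can be reproduced from the layer shapes printed in the module tree above; the same count times 4 bytes also gives the 402.65 MB float32 size.

```python
# Parameter count recomputed from the printed module shapes above.
emb = 8645 * 512                      # token embedding
lm_head = 512 * 8645                  # output head (bias=False)

attn = 4 * 512 * 512 + 3 * 512        # q/k/v/out projections; key has no bias
qk_norm = 2 * 2 * 64                  # q_norm/k_norm LayerNorms, weight + bias each
mlp = 512 * 2048 + 2048 + 2048 * 512 + 512
ln = 2 * 512                          # LayerNorm(512): weight + bias
adaln = 2 * 512 * 512                 # AdaLN: two bias-free 512x512 linears

ar_block = attn + qk_norm + mlp + 2 * ln       # ResidualAttentionBlock
nar_block = attn + qk_norm + mlp + 2 * adaln   # ResidualAttentionBlockAdaLN
ar = 3000 * 512 + 12 * ar_block + ln                  # pos_emb + blocks + final ln
nar = 3000 * 512 + 12 * nar_block + adaln + 7 * 512   # + final AdaLN + level_emb

total = emb + lm_head + ar + nar
print(total, round(total * 4 / 1e6, 2))  # 100662784 params, 402.65 MB in float32
```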
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:26,712 (abs_task:1401) INFO: Optimizer:
AdamW (
Parameter Group 0
    amsgrad: False
    betas: [0.9, 0.95]
    capturable: False
    differentiable: False
    eps: 1e-08
    foreach: None
    fused: None
    initial_lr: 0.0001
    lr: 4e-09
    maximize: False
    weight_decay: 0.01
)
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:26,712 (abs_task:1402) INFO: Scheduler: WarmupLR(warmup_steps=25000)
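(Annotation.) The seemingly tiny `lr: 4e-09` in the optimizer dump is consistent with `initial_lr: 0.0001` under the scheduler logged here, assuming ESPnet's WarmupLR follows the usual Noam-style rule `lr(step) = base_lr * warmup**0.5 * min(step**-0.5, step * warmup**-1.5)`; at step 1 that is base_lr / warmup_steps = 1e-4 / 25000 = 4e-9.

```python
# Hedged sketch of the assumed WarmupLR rule; base_lr and warmup_steps
# are taken from the log above (initial_lr=0.0001, warmup_steps=25000).
def warmup_lr(step, base_lr=1e-4, warmup_steps=25000):
    return base_lr * warmup_steps**0.5 * min(step**-0.5, step * warmup_steps**-1.5)

print(warmup_lr(1))      # ~4e-09, matching "lr: 4e-09" in the optimizer dump
print(warmup_lr(25000))  # peaks at base_lr = 1e-4, then decays as step**-0.5
```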
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:26,714 (abs_task:1411) INFO: Saving the configuration in exp_ar_tts/speechlm_tts_mls_all_train_valle_espnet_mls-english_encodec_16k/config.yaml
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:33,199 (abs_task:1811) INFO: [train] dataset:
##### Multi-Task Dataset #####
## Sub-Dataset: 0; Task: tts ##
EspnetSpeechLMDataset(
text: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/train/mls_all_train_subset/split2/1/text", "type": "text"}
utt2spk: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/train/mls_all_train_subset/split2/1/utt2spk", "type": "text"}
wav.scp: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/train/mls_all_train_subset/split2/1/wav.scp", "type": "kaldi_ark"}
preprocess: <espnet2.train.preprocessor.SpeechLMPreprocessor object at 0x7ffac45a1510>)
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:33,199 (abs_task:1812) INFO: [train] Batch sampler: NumElementsBatchSampler(N-batch=25391, batch_bins=32000, sort_in_batch=descending, sort_batch=descending)
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:33,204 (abs_task:1813) INFO: [train] mini-batch sizes summary: N-batch=25391, mean=24.1, min=19, max=32
rootroot-4U4G-SPC621D8:1514876:1514876 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^lo,docker,virbr,vmnet,vboxnet
rootroot-4U4G-SPC621D8:1514876:1514876 [0] NCCL INFO Bootstrap : Using eno2:140.112.20.2<0>
rootroot-4U4G-SPC621D8:1514876:1514876 [0] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
rootroot-4U4G-SPC621D8:1514876:1514876 [0] NCCL INFO cudaDriverVersion 12020
NCCL version 2.20.5+cuda11.0
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,208 (synchronize_batches:28) INFO: Synchronize sharded dataset across all process
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,208 (synchronize_batches:29) INFO: #Batches: 25391 -> 25503
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,305 (abs_task:1811) INFO: [valid] dataset:
##### Multi-Task Dataset #####
## Sub-Dataset: 0; Task: tts ##
EspnetSpeechLMDataset(
text: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/mls_all_dev/split2/1/text", "type": "text"}
utt2spk: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/mls_all_dev/split2/1/utt2spk", "type": "text"}
wav.scp: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/mls_all_dev/split2/1/wav.scp", "type": "kaldi_ark"}
preprocess: <espnet2.train.preprocessor.SpeechLMPreprocessor object at 0x7ffa95a9b640>)
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,305 (abs_task:1812) INFO: [valid] Batch sampler: NumElementsBatchSampler(N-batch=316, batch_bins=32000, sort_in_batch=descending, sort_batch=descending)
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,305 (abs_task:1813) INFO: [valid] mini-batch sizes summary: N-batch=316, mean=23.9, min=19, max=30
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,306 (synchronize_batches:28) INFO: Synchronize sharded dataset across all process
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,306 (synchronize_batches:29) INFO: #Batches: 316 -> 319
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,360 (abs_task:1811) INFO: [plot_att] dataset:
##### Multi-Task Dataset #####
## Sub-Dataset: 0; Task: tts ##
EspnetSpeechLMDataset(
text: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/mls_all_dev/split2/1/text", "type": "text"}
utt2spk: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/mls_all_dev/split2/1/utt2spk", "type": "text"}
wav.scp: {"path": "exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/mls_all_dev/split2/1/wav.scp", "type": "kaldi_ark"}
preprocess: <espnet2.train.preprocessor.SpeechLMPreprocessor object at 0x7ffa95a9ae30>)
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,360 (abs_task:1812) INFO: [plot_att] Batch sampler: UnsortedBatchSampler(N-batch=7549, batch_size=1, key_file=exp_ar_tts/speechlm_stats_tts_mls_all_espnet_mls-english_encodec_16k/sharded_stats_ngpu2/valid/dec_seq_lengths.1,
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:34,360 (abs_task:1813) INFO: [plot_att] mini-batch sizes summary: N-batch=3, mean=1.0, min=1, max=1
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:38,707 (trainer:189) INFO: The training was resumed using exp_ar_tts/speechlm_tts_mls_all_train_valle_espnet_mls-english_encodec_16k/checkpoint.pth
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Failed to open libibverbs.so[.1]
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^lo,docker,virbr,vmnet,vboxnet
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO NET/Socket : Using [0]eno2:140.112.20.2<0> [1]vethf69eb03:fe80::4c90:2ff:fe90:7c9b%vethf69eb03<0>
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Using non-device net plugin version 0
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Using network Socket
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO comm 0x10f49790 rank 0 nranks 2 cudaDev 0 nvmlDev 2 busId 8a000 commId 0x9c7b8d0e3f49ed42 - Init START
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Setting affinity for GPU 2 to ffffffff
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO comm 0x10f49790 rank 0 nRanks 2 nNodes 1 localRanks 2 localRank 0 MNNVL 0
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Channel 00/02 : 0 1
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Channel 01/02 : 0 1
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO P2P Chunksize set to 131072
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Channel 00 : 0[2] -> 1[3] via SHM/direct/direct
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Channel 01 : 0[2] -> 1[3] via SHM/direct/direct
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Connected all rings
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO Connected all trees
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO 2 coll channels, 0 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
rootroot-4U4G-SPC621D8:1514876:1515024 [0] NCCL INFO comm 0x10f49790 rank 0 nranks 2 cudaDev 0 nvmlDev 2 busId 8a000 commId 0x9c7b8d0e3f49ed42 - Init COMPLETE
[rank0]:[W Utils.hpp:108] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function getCvarString)
rootroot-4U4G-SPC621D8:1514877:1514877 [1] NCCL INFO cudaDriverVersion 12020
rootroot-4U4G-SPC621D8:1514877:1514877 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^lo,docker,virbr,vmnet,vboxnet
rootroot-4U4G-SPC621D8:1514877:1514877 [1] NCCL INFO Bootstrap : Using eno2:140.112.20.2<0>
rootroot-4U4G-SPC621D8:1514877:1514877 [1] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Failed to open libibverbs.so[.1]
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^lo,docker,virbr,vmnet,vboxnet
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO NET/Socket : Using [0]eno2:140.112.20.2<0> [1]vethf69eb03:fe80::4c90:2ff:fe90:7c9b%vethf69eb03<0>
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Using non-device net plugin version 0
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Using network Socket
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO comm 0xd67d000 rank 1 nranks 2 cudaDev 1 nvmlDev 3 busId c3000 commId 0x9c7b8d0e3f49ed42 - Init START
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Setting affinity for GPU 3 to ffffffff
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO comm 0xd67d000 rank 1 nRanks 2 nNodes 1 localRanks 2 localRank 1 MNNVL 0
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO P2P Chunksize set to 131072
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Channel 00 : 1[3] -> 0[2] via SHM/direct/direct
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Channel 01 : 1[3] -> 0[2] via SHM/direct/direct
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Connected all rings
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO Connected all trees
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO 2 coll channels, 0 collnet channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer
rootroot-4U4G-SPC621D8:1514877:1515025 [1] NCCL INFO comm 0xd67d000 rank 1 nranks 2 cudaDev 1 nvmlDev 3 busId c3000 commId 0x9c7b8d0e3f49ed42 - Init COMPLETE
[rank1]:[W Utils.hpp:108] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function getCvarString)
[rootroot-4U4G-SPC621D8:0/2] 2024-09-10 01:03:38,846 (trainer:333) INFO: 10/50epoch started
W0910 01:10:08.641000 140266687317824 torch/multiprocessing/spawn.py:145] Terminating process 1514876 via signal SIGTERM
Traceback (most recent call last):
  File "/home/stan/miniconda3/envs/espnet_codec/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/stan/miniconda3/envs/espnet_codec/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/mnt/data/stan/codec_espnet/espnet2/bin/speechlm_train.py", line 22, in <module>
    main()
  File "/mnt/data/stan/codec_espnet/espnet2/bin/speechlm_train.py", line 18, in main
    SpeechLMTask.main(cmd=cmd)
  File "/mnt/data/stan/codec_espnet/espnet2/tasks/abs_task.py", line 1257, in main
    while not ProcessContext(processes, error_queues).join():
  File "/home/stan/miniconda3/envs/espnet_codec/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 162, in join
    if not os.access(self.error_files[error_index], os.R_OK):
TypeError: access: path should be string, bytes or os.PathLike, not SimpleQueue
# Accounting: time=422 threads=1
# Ended (code 1) at Tue Sep 10 01:10:10 CST 2024, elapsed time 422 seconds
/home/stan/miniconda3/envs/espnet_codec/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 64 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
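(Annotation.) The traceback shows an API mismatch rather than a data problem: `abs_task.py` builds `ProcessContext(processes, error_queues)`, but in this torch build `ProcessContext.join()` treats its second argument as a list of error *file paths* and calls `os.access(...)` on an entry, so a `SimpleQueue` reaches `os.access` and raises. A minimal reproduction using only the standard library (the version history of the `error_queues` vs. `error_files` argument is an assumption here, not stated in the log):

```python
import os
from multiprocessing import SimpleQueue

# ProcessContext.join() in this torch build executes:
#     os.access(self.error_files[error_index], os.R_OK)
# Since the caller passed SimpleQueue objects where file paths were
# expected, os.access raises exactly the TypeError seen in the log.
try:
    os.access(SimpleQueue(), os.R_OK)
except TypeError as exc:
    print(exc)  # access: path should be string, bytes or os.PathLike, not SimpleQueue
```

Passing the error file paths that the installed `torch.multiprocessing.spawn` creates (instead of queues) would presumably avoid the crash, but that depends on matching ESPnet's spawn call to the torch version in use.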