amir0907 committed
Commit 24e9a2a · verified · 1 Parent(s): 13e7a30

Upload folder using huggingface_hub

Files changed (28)
  1. preprocessed/preprocessed_batches_part_0.pt +3 -0
  2. preprocessed/preprocessed_batches_part_1.pt +3 -0
  3. preprocessed/preprocessed_batches_part_2.pt +3 -0
  4. preprocessed/preprocessed_batches_part_3.pt +3 -0
  5. src/__pycache__/data_vibevoice.cpython-312.pyc +0 -0
  6. src/__pycache__/finetune_vibevoice_lora0.cpython-312.pyc +0 -0
  7. src/vibevoice/modular/__pycache__/__init__.cpython-312.pyc +0 -0
  8. src/vibevoice/modular/__pycache__/configuration_vibevoice.cpython-312.pyc +0 -0
  9. src/vibevoice/modular/__pycache__/modeling_vibevoice.cpython-312.pyc +0 -0
  10. src/vibevoice/modular/__pycache__/modular_vibevoice_diffusion_head.cpython-312.pyc +0 -0
  11. src/vibevoice/modular/__pycache__/modular_vibevoice_text_tokenizer.cpython-312.pyc +0 -0
  12. src/vibevoice/modular/__pycache__/modular_vibevoice_tokenizer.cpython-312.pyc +0 -0
  13. src/vibevoice/processor/__pycache__/__init__.cpython-312.pyc +0 -0
  14. src/vibevoice/processor/__pycache__/vibevoice_processor.cpython-312.pyc +0 -0
  15. src/vibevoice/processor/__pycache__/vibevoice_tokenizer_processor.cpython-312.pyc +0 -0
  16. src/vibevoice/schedule/__pycache__/__init__.cpython-312.pyc +0 -0
  17. src/vibevoice/schedule/__pycache__/dpm_solver.cpython-312.pyc +0 -0
  18. wandb/debug-internal.log +11 -0
  19. wandb/debug.log +26 -0
  20. wandb/run-20260213_133940-4e4xqwjr/files/config.yaml +896 -0
  21. wandb/run-20260213_133940-4e4xqwjr/files/output.log +29 -0
  22. wandb/run-20260213_133940-4e4xqwjr/files/requirements.txt +920 -0
  23. wandb/run-20260213_133940-4e4xqwjr/files/wandb-metadata.json +105 -0
  24. wandb/run-20260213_133940-4e4xqwjr/files/wandb-summary.json +1 -0
  25. wandb/run-20260213_133940-4e4xqwjr/logs/debug-core.log +14 -0
  26. wandb/run-20260213_133940-4e4xqwjr/logs/debug-internal.log +11 -0
  27. wandb/run-20260213_133940-4e4xqwjr/logs/debug.log +26 -0
  28. wandb/run-20260213_133940-4e4xqwjr/run-4e4xqwjr.wandb +0 -0
preprocessed/preprocessed_batches_part_0.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:383c314113993bee84136c6666cbc5e2cdf54cc2168e53d6528c0e59cd816efa
+ size 2474483375
preprocessed/preprocessed_batches_part_1.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:01671b5e73f1ed745e50290e9eeb3e5c690390189146195c22a674763167f3bc
+ size 2504113711
preprocessed/preprocessed_batches_part_2.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f942b60029a04e99bfb3b455943ca822743c90fe9266430ba66708afa913ff54
+ size 2511098479
preprocessed/preprocessed_batches_part_3.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:85eba44dbaa40c523423dd5cd39a8b2df45104f8f4e25ef883c1672ed122750d
+ size 2445650287
src/__pycache__/data_vibevoice.cpython-312.pyc CHANGED
Binary files a/src/__pycache__/data_vibevoice.cpython-312.pyc and b/src/__pycache__/data_vibevoice.cpython-312.pyc differ
 
src/__pycache__/finetune_vibevoice_lora0.cpython-312.pyc CHANGED
Binary files a/src/__pycache__/finetune_vibevoice_lora0.cpython-312.pyc and b/src/__pycache__/finetune_vibevoice_lora0.cpython-312.pyc differ
 
src/vibevoice/modular/__pycache__/__init__.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/modular/__pycache__/__init__.cpython-312.pyc and b/src/vibevoice/modular/__pycache__/__init__.cpython-312.pyc differ
 
src/vibevoice/modular/__pycache__/configuration_vibevoice.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/modular/__pycache__/configuration_vibevoice.cpython-312.pyc and b/src/vibevoice/modular/__pycache__/configuration_vibevoice.cpython-312.pyc differ
 
src/vibevoice/modular/__pycache__/modeling_vibevoice.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/modular/__pycache__/modeling_vibevoice.cpython-312.pyc and b/src/vibevoice/modular/__pycache__/modeling_vibevoice.cpython-312.pyc differ
 
src/vibevoice/modular/__pycache__/modular_vibevoice_diffusion_head.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/modular/__pycache__/modular_vibevoice_diffusion_head.cpython-312.pyc and b/src/vibevoice/modular/__pycache__/modular_vibevoice_diffusion_head.cpython-312.pyc differ
 
src/vibevoice/modular/__pycache__/modular_vibevoice_text_tokenizer.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/modular/__pycache__/modular_vibevoice_text_tokenizer.cpython-312.pyc and b/src/vibevoice/modular/__pycache__/modular_vibevoice_text_tokenizer.cpython-312.pyc differ
 
src/vibevoice/modular/__pycache__/modular_vibevoice_tokenizer.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/modular/__pycache__/modular_vibevoice_tokenizer.cpython-312.pyc and b/src/vibevoice/modular/__pycache__/modular_vibevoice_tokenizer.cpython-312.pyc differ
 
src/vibevoice/processor/__pycache__/__init__.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/processor/__pycache__/__init__.cpython-312.pyc and b/src/vibevoice/processor/__pycache__/__init__.cpython-312.pyc differ
 
src/vibevoice/processor/__pycache__/vibevoice_processor.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/processor/__pycache__/vibevoice_processor.cpython-312.pyc and b/src/vibevoice/processor/__pycache__/vibevoice_processor.cpython-312.pyc differ
 
src/vibevoice/processor/__pycache__/vibevoice_tokenizer_processor.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/processor/__pycache__/vibevoice_tokenizer_processor.cpython-312.pyc and b/src/vibevoice/processor/__pycache__/vibevoice_tokenizer_processor.cpython-312.pyc differ
 
src/vibevoice/schedule/__pycache__/__init__.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/schedule/__pycache__/__init__.cpython-312.pyc and b/src/vibevoice/schedule/__pycache__/__init__.cpython-312.pyc differ
 
src/vibevoice/schedule/__pycache__/dpm_solver.cpython-312.pyc CHANGED
Binary files a/src/vibevoice/schedule/__pycache__/dpm_solver.cpython-312.pyc and b/src/vibevoice/schedule/__pycache__/dpm_solver.cpython-312.pyc differ
 
wandb/debug-internal.log ADDED
@@ -0,0 +1,11 @@
+ {"time":"2026-02-13T13:39:40.904677275Z","level":"INFO","msg":"stream: starting","core version":"0.22.2"}
+ {"time":"2026-02-13T13:39:41.210817561Z","level":"INFO","msg":"stream: created new stream","id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:41.210886499Z","level":"INFO","msg":"handler: started","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:41.211039391Z","level":"INFO","msg":"stream: started","id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:41.211089284Z","level":"INFO","msg":"sender: started","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:41.211089554Z","level":"INFO","msg":"writer: started","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:52.939820892Z","level":"INFO","msg":"stream: closing","id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:53.159904441Z","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-02-13T13:39:53.330431911Z","level":"INFO","msg":"handler: closed","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:53.330553224Z","level":"INFO","msg":"sender: closed","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:53.330566036Z","level":"INFO","msg":"stream: closed","id":"4e4xqwjr"}
wandb/debug.log ADDED
@@ -0,0 +1,26 @@
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Current SDK version is 0.22.2
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Configure stats pid to 191
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Loading settings from /root/.config/wandb/settings
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Loading settings from /kaggle/working/VibeVoice-finetuning/wandb/settings
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Loading settings from environment variables
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:setup_run_log_directory():705] Logging user logs to /kaggle/working/VibeVoice-finetuning/wandb/run-20260213_133940-4e4xqwjr/logs/debug.log
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:setup_run_log_directory():706] Logging internal logs to /kaggle/working/VibeVoice-finetuning/wandb/run-20260213_133940-4e4xqwjr/logs/debug-internal.log
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:init():832] calling init triggers
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:init():837] wandb.init called with sweep_config: {}
+ config: {'_wandb': {}}
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:init():880] starting backend
+ 2026-02-13 13:39:40,875 INFO MainThread:191 [wandb_init.py:init():883] sending inform_init request
+ 2026-02-13 13:39:40,888 INFO MainThread:191 [wandb_init.py:init():891] backend started and connected
+ 2026-02-13 13:39:40,891 INFO MainThread:191 [wandb_init.py:init():961] updated telemetry
+ 2026-02-13 13:39:40,904 INFO MainThread:191 [wandb_init.py:init():985] communicating run to backend with 90.0 second timeout
+ 2026-02-13 13:39:41,633 INFO MainThread:191 [wandb_init.py:init():1036] starting run threads in backend
+ 2026-02-13 13:39:42,309 INFO MainThread:191 [wandb_run.py:_console_start():2509] atexit reg
+ 2026-02-13 13:39:42,309 INFO MainThread:191 [wandb_run.py:_redirect():2357] redirect: wrap_raw
+ 2026-02-13 13:39:42,309 INFO MainThread:191 [wandb_run.py:_redirect():2426] Wrapping output streams.
+ 2026-02-13 13:39:42,309 INFO MainThread:191 [wandb_run.py:_redirect():2449] Redirects installed.
+ 2026-02-13 13:39:42,320 INFO MainThread:191 [wandb_init.py:init():1076] run started, returning control to user process
+ 2026-02-13 13:39:42,322 INFO MainThread:191 [wandb_run.py:_config_callback():1392] config_cb None None {'acoustic_tokenizer_config': {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', '_attn_implementation_autoset': False, 'model_type': 'vibevoice_acoustic_tokenizer', 'channels': 1, 'corpus_normalize': 0.0, 'causal': True, 'vae_dim': 64, 'fix_std': 0.5, 'std_dist_type': 'gaussian', 'conv_norm': 'none', 'pad_mode': 'constant', 'layernorm_eps': 1e-05, 'disable_last_norm': True, 'layernorm': 'RMSNorm', 'layernorm_elementwise_affine': True, 'conv_bias': True, 'layer_scale_init_value': 1e-06, 'weight_init_value': 0.01, 'mixer_layer': 'depthwise_conv', 
'encoder_n_filters': 32, 'encoder_ratios': [8, 5, 5, 4, 2, 2], 'encoder_depths': '3-3-3-3-3-3-8', 'decoder_ratios': [8, 5, 5, 4, 2, 2], 'decoder_n_filters': 32, 'decoder_depths': None}, 'semantic_tokenizer_config': {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', '_attn_implementation_autoset': False, 'model_type': 'vibevoice_semantic_tokenizer', 'channels': 1, 'corpus_normalize': 0.0, 'causal': True, 'vae_dim': 128, 'fix_std': 0, 'std_dist_type': 'none', 'conv_norm': 'none', 'pad_mode': 'constant', 'layernorm_eps': 1e-05, 'disable_last_norm': True, 'layernorm': 'RMSNorm', 'layernorm_elementwise_affine': True, 'conv_bias': True, 'layer_scale_init_value': 
1e-06, 'weight_init_value': 0.01, 'mixer_layer': 'depthwise_conv', 'encoder_n_filters': 32, 'encoder_ratios': [8, 5, 5, 4, 2, 2], 'encoder_depths': '3-3-3-3-3-3-8'}, 'decoder_config': {'vocab_size': 151936, 'max_position_embeddings': 65536, 'hidden_size': 1536, 'intermediate_size': 8960, 'num_hidden_layers': 28, 'num_attention_heads': 12, 'use_sliding_window': False, 'sliding_window': None, 'max_window_layers': 28, 'num_key_value_heads': 2, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-06, 'use_cache': True, 'rope_theta': 1000000.0, 'rope_scaling': None, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', 
'_attn_implementation_autoset': False, 'model_type': 'qwen2'}, 'diffusion_head_config': {'hidden_size': 1536, 'head_layers': 4, 'head_ffn_ratio': 3.0, 'rms_norm_eps': 1e-05, 'latent_size': 64, 'speech_vae_dim': 64, 'prediction_type': 'v_prediction', 'diffusion_type': 'ddpm', 'ddpm_num_steps': 1000, 'ddpm_num_inference_steps': 20, 'ddpm_beta_schedule': 'cosine', 'ddpm_batch_mul': 4, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', '_attn_implementation_autoset': False, 'model_type': 'vibevoice_diffusion_head'}, 'acoustic_vae_dim': 64, 'semantic_vae_dim': 128, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': 
False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['VibeVoiceForConditionalGeneration'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'microsoft/VibeVoice-1.5B', '_attn_implementation_autoset': True, 'transformers_version': '4.51.3', 'model_type': 'vibevoice', 'output_dir': '/kaggle/working/VibeVoice-finetuning/', 'overwrite_output_dir': False, 'do_train': True, 'do_eval': False, 'do_predict': False, 'eval_strategy': 'no', 'prediction_loss_only': False, 'per_device_train_batch_size': 1, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 14, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 5e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 
1e-08, 'max_grad_norm': 0.6, 'num_train_epochs': 8.0, 'max_steps': -1, 'lr_scheduler_type': 'cosine', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.1, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/kaggle/working/VibeVoice-finetuning/runs/Feb13_13-38-15_773233cc2cd1', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 10, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 'save_steps': 400, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 100.0, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/kaggle/working/VibeVoice-finetuning/', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'tp_size': 0, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': 
['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': None, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False, 'ddpm_batch_mul': 1, 'ce_loss_weight': 1.1, 'diffusion_loss_weight': 1.8, 'debug_ce_details': False, 'debug_ce_topk': 5, 'debug_ce_max_examples': 1, 'debug_ce_every_n_steps': 200, 'gradient_clipping': True, 'debug_save': False}
+ 2026-02-13 13:39:42,336 INFO MainThread:191 [wandb_config.py:__setitem__():154] [no run ID] config set model/num_parameters = 2777881057 - <bound method Run._config_callback of <wandb.sdk.wandb_run.Run object at 0x7992b50eef30>>
+ 2026-02-13 13:39:42,337 INFO MainThread:191 [wandb_run.py:_config_callback():1392] config_cb model/num_parameters 2777881057 None
+ 2026-02-13 13:39:52,939 INFO wandb-AsyncioManager-main:191 [service_client.py:_forward_responses():80] Reached EOF.
+ 2026-02-13 13:39:52,939 INFO wandb-AsyncioManager-main:191 [mailbox.py:close():137] Closing mailbox, abandoning 1 handles.
wandb/run-20260213_133940-4e4xqwjr/files/config.yaml ADDED
@@ -0,0 +1,896 @@
+ _attn_implementation_autoset:
+   value: true
+ _name_or_path:
+   value: microsoft/VibeVoice-1.5B
+ _wandb:
+   value:
+     cli_version: 0.22.2
+     e:
+       qukzz3q99k6i4b8i39t090zozeztk4cy:
+         args:
+         - --model_name_or_path
+         - microsoft/VibeVoice-1.5B
+         - --processor_name_or_path
+         - vibevoice/processor
+         - --text_column_name
+         - text
+         - --audio_column_name
+         - audio
+         - --voice_prompts_column_name
+         - voice_prompts
+         - --output_dir
+         - /kaggle/working/VibeVoice-finetuning/
+         - --per_device_train_batch_size
+         - "1"
+         - --gradient_accumulation_steps
+         - "14"
+         - --learning_rate
+         - "5e-5"
+         - --num_train_epochs
+         - "8"
+         - --logging_steps
+         - "10"
+         - --save_steps
+         - "400"
+         - --eval_steps
+         - "100"
+         - --report_to
+         - wandb
+         - --lora_r
+         - "64"
+         - --lora_alpha
+         - "128"
+         - --remove_unused_columns
+         - "False"
+         - --fp16
+         - "True"
+         - --do_train
+         - --gradient_clipping
+         - --gradient_checkpointing
+         - "False"
+         - --ddpm_batch_mul
+         - "1"
+         - --diffusion_loss_weight
+         - "1.8"
+         - --train_diffusion_head
+         - "True"
+         - --ce_loss_weight
+         - "1.1"
+         - --voice_prompt_drop_rate
+         - "0.35"
+         - --lora_target_modules
+         - q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj
+         - --lr_scheduler_type
+         - cosine
+         - --warmup_ratio
+         - "0.1"
+         - --max_grad_norm
+         - "0.6"
+         cpu_count: 2
+         cpu_count_logical: 4
+         cudaVersion: "13.0"
+         disk:
+           /:
+             total: "8656922775552"
+             used: "7136140312576"
+         email: aralien0907@gmail.com
+         executable: /usr/bin/python3
+         git:
+           commit: f74368637dd67fc3895d9f81365c50e65ae0641c
+           remote: https://github.com/voicepowered-ai/VibeVoice-finetuning
+         gpu: Tesla T4
+         gpu_count: 2
+         gpu_nvidia:
+         - architecture: Turing
+           cudaCores: 2560
+           memoryTotal: "16106127360"
+           name: Tesla T4
+           uuid: GPU-2fb54c29-fff5-d673-7644-3c83188f84df
+         - architecture: Turing
+           cudaCores: 2560
+           memoryTotal: "16106127360"
+           name: Tesla T4
+           uuid: GPU-a049c658-948a-bce5-0209-30fd25d62128
+         host: 773233cc2cd1
+         memory:
+           total: "33662472192"
+         os: Linux-6.6.113+-x86_64-with-glibc2.35
+         program: -m src.finetune_vibevoice_lora0
+         python: CPython 3.12.12
+         root: /kaggle/working/VibeVoice-finetuning
+         startedAt: "2026-02-13T13:39:40.245154Z"
+         writerId: qukzz3q99k6i4b8i39t090zozeztk4cy
+     m:
+     - "1": train/global_step
+       "6":
+       - 3
+       "7": []
+     - "2": '*'
+       "5": 1
+       "6":
+       - 1
+       "7": []
+     python_version: 3.12.12
+     t:
+       "1":
+       - 1
+       - 2
+       - 3
+       - 5
+       - 11
+       - 12
+       - 41
+       - 49
+       - 51
+       - 53
+       - 63
+       - 71
+       - 83
+       - 98
+       - 105
+       "2":
+       - 1
+       - 2
+       - 3
+       - 5
+       - 11
+       - 12
+       - 41
+       - 49
+       - 51
+       - 53
+       - 63
+       - 71
+       - 83
+       - 98
+       - 105
+       "3":
+       - 7
+       - 13
+       - 19
+       - 66
+       "4": 3.12.12
+       "5": 0.22.2
+       "6": 4.51.3
+       "8":
+       - 2
+       "9":
+         "1": transformers_trainer
+       "12": 0.22.2
+       "13": linux-x86_64
+ accelerator_config:
+   value:
+     dispatch_batches: null
+     even_batches: true
+     gradient_accumulation_kwargs: null
+     non_blocking: false
+     split_batches: false
+     use_seedable_sampler: true
+ acoustic_tokenizer_config:
+   value:
+     _attn_implementation_autoset: false
+     _name_or_path: ""
+     add_cross_attention: false
+     architectures: null
+     bad_words_ids: null
+     begin_suppress_tokens: null
+     bos_token_id: null
+     causal: true
+     channels: 1
+     chunk_size_feed_forward: 0
+     conv_bias: true
+     conv_norm: none
+     corpus_normalize: 0
+     cross_attention_hidden_size: null
+     decoder_depths: null
+     decoder_n_filters: 32
+     decoder_ratios:
+     - 8
+     - 5
+     - 5
+     - 4
+     - 2
+     - 2
+     decoder_start_token_id: null
+     disable_last_norm: true
+     diversity_penalty: 0
+     do_sample: false
+     early_stopping: false
+     encoder_depths: 3-3-3-3-3-3-8
+     encoder_n_filters: 32
+     encoder_no_repeat_ngram_size: 0
+     encoder_ratios:
+     - 8
+     - 5
+     - 5
+     - 4
+     - 2
+     - 2
+     eos_token_id: null
+     exponential_decay_length_penalty: null
+     finetuning_task: null
+     fix_std: 0.5
+     forced_bos_token_id: null
+     forced_eos_token_id: null
+     id2label:
+       "0": LABEL_0
+       "1": LABEL_1
+     is_decoder: false
+     is_encoder_decoder: false
+     label2id:
+       LABEL_0: 0
+       LABEL_1: 1
+     layer_scale_init_value: 1e-06
+     layernorm: RMSNorm
+     layernorm_elementwise_affine: true
+     layernorm_eps: 1e-05
+     length_penalty: 1
+     max_length: 20
+     min_length: 0
+     mixer_layer: depthwise_conv
+     model_type: vibevoice_acoustic_tokenizer
+     no_repeat_ngram_size: 0
+     num_beam_groups: 1
+     num_beams: 1
+     num_return_sequences: 1
+     output_attentions: false
+     output_hidden_states: false
+     output_scores: false
+     pad_mode: constant
+     pad_token_id: null
+     prefix: null
+     problem_type: null
+     remove_invalid_values: false
+     repetition_penalty: 1
+     return_dict: true
+     return_dict_in_generate: false
+     sep_token_id: null
+     std_dist_type: gaussian
+     suppress_tokens: null
+     task_specific_params: null
+     temperature: 1
+     tf_legacy_loss: false
+     tie_encoder_decoder: false
+     tie_word_embeddings: true
+     tokenizer_class: null
+     top_k: 50
+     top_p: 1
+     torch_dtype: float16
+     torchscript: false
+     typical_p: 1
+     use_bfloat16: false
+     vae_dim: 64
+     weight_init_value: 0.01
+ acoustic_vae_dim:
+   value: 64
+ adafactor:
+   value: false
+ adam_beta1:
+   value: 0.9
+ adam_beta2:
+   value: 0.999
+ adam_epsilon:
+   value: 1e-08
+ add_cross_attention:
+   value: false
+ architectures:
+   value:
+   - VibeVoiceForConditionalGeneration
+ auto_find_batch_size:
+   value: false
+ average_tokens_across_devices:
+   value: false
+ bad_words_ids:
+   value: null
+ batch_eval_metrics:
+   value: false
+ begin_suppress_tokens:
+   value: null
+ bf16:
+   value: false
+ bf16_full_eval:
+   value: false
+ bos_token_id:
+   value: null
+ ce_loss_weight:
+   value: 1.1
+ chunk_size_feed_forward:
+   value: 0
+ cross_attention_hidden_size:
+   value: null
+ data_seed:
+   value: null
+ dataloader_drop_last:
+   value: false
+ dataloader_num_workers:
+   value: 0
+ dataloader_persistent_workers:
+   value: false
+ dataloader_pin_memory:
+   value: true
+ dataloader_prefetch_factor:
+   value: null
+ ddp_backend:
+   value: null
+ ddp_broadcast_buffers:
+   value: null
+ ddp_bucket_cap_mb:
+   value: null
+ ddp_find_unused_parameters:
+   value: null
+ ddp_timeout:
+   value: 1800
+ ddpm_batch_mul:
+   value: 1
+ debug:
+   value: []
+ debug_ce_details:
+   value: false
+ debug_ce_every_n_steps:
+   value: 200
+ debug_ce_max_examples:
+   value: 1
+ debug_ce_topk:
+   value: 5
+ debug_save:
+   value: false
+ decoder_config:
+   value:
+     _attn_implementation_autoset: false
+     _name_or_path: ""
+     add_cross_attention: false
+     architectures: null
+     attention_dropout: 0
+     bad_words_ids: null
+     begin_suppress_tokens: null
+     bos_token_id: null
+     chunk_size_feed_forward: 0
+     cross_attention_hidden_size: null
+     decoder_start_token_id: null
+     diversity_penalty: 0
+     do_sample: false
+     early_stopping: false
+     encoder_no_repeat_ngram_size: 0
+     eos_token_id: null
+     exponential_decay_length_penalty: null
+     finetuning_task: null
+     forced_bos_token_id: null
+     forced_eos_token_id: null
+     hidden_act: silu
+     hidden_size: 1536
+     id2label:
+       "0": LABEL_0
+       "1": LABEL_1
+     initializer_range: 0.02
365
+ intermediate_size: 8960
366
+ is_decoder: false
367
+ is_encoder_decoder: false
368
+ label2id:
369
+ LABEL_0: 0
370
+ LABEL_1: 1
371
+ length_penalty: 1
372
+ max_length: 20
373
+ max_position_embeddings: 65536
374
+ max_window_layers: 28
375
+ min_length: 0
376
+ model_type: qwen2
377
+ no_repeat_ngram_size: 0
378
+ num_attention_heads: 12
379
+ num_beam_groups: 1
380
+ num_beams: 1
381
+ num_hidden_layers: 28
382
+ num_key_value_heads: 2
383
+ num_return_sequences: 1
384
+ output_attentions: false
385
+ output_hidden_states: false
386
+ output_scores: false
387
+ pad_token_id: null
388
+ prefix: null
389
+ problem_type: null
390
+ remove_invalid_values: false
391
+ repetition_penalty: 1
392
+ return_dict: true
393
+ return_dict_in_generate: false
394
+ rms_norm_eps: 1e-06
395
+ rope_scaling: null
396
+ rope_theta: 1e+06
397
+ sep_token_id: null
398
+ sliding_window: null
399
+ suppress_tokens: null
400
+ task_specific_params: null
401
+ temperature: 1
402
+ tf_legacy_loss: false
403
+ tie_encoder_decoder: false
404
+ tie_word_embeddings: true
405
+ tokenizer_class: null
406
+ top_k: 50
407
+ top_p: 1
408
+ torch_dtype: float16
409
+ torchscript: false
410
+ typical_p: 1
411
+ use_bfloat16: false
412
+ use_cache: true
413
+ use_sliding_window: false
414
+ vocab_size: 151936
415
+ decoder_start_token_id:
416
+ value: null
417
+ deepspeed:
418
+ value: null
419
+ diffusion_head_config:
420
+ value:
421
+ _attn_implementation_autoset: false
422
+ _name_or_path: ""
423
+ add_cross_attention: false
424
+ architectures: null
425
+ bad_words_ids: null
426
+ begin_suppress_tokens: null
427
+ bos_token_id: null
428
+ chunk_size_feed_forward: 0
429
+ cross_attention_hidden_size: null
430
+ ddpm_batch_mul: 4
431
+ ddpm_beta_schedule: cosine
432
+ ddpm_num_inference_steps: 20
433
+ ddpm_num_steps: 1000
434
+ decoder_start_token_id: null
435
+ diffusion_type: ddpm
436
+ diversity_penalty: 0
437
+ do_sample: false
438
+ early_stopping: false
439
+ encoder_no_repeat_ngram_size: 0
440
+ eos_token_id: null
441
+ exponential_decay_length_penalty: null
442
+ finetuning_task: null
443
+ forced_bos_token_id: null
444
+ forced_eos_token_id: null
445
+ head_ffn_ratio: 3
446
+ head_layers: 4
447
+ hidden_size: 1536
448
+ id2label:
449
+ "0": LABEL_0
450
+ "1": LABEL_1
451
+ is_decoder: false
452
+ is_encoder_decoder: false
453
+ label2id:
454
+ LABEL_0: 0
455
+ LABEL_1: 1
456
+ latent_size: 64
457
+ length_penalty: 1
458
+ max_length: 20
459
+ min_length: 0
460
+ model_type: vibevoice_diffusion_head
461
+ no_repeat_ngram_size: 0
462
+ num_beam_groups: 1
463
+ num_beams: 1
464
+ num_return_sequences: 1
465
+ output_attentions: false
466
+ output_hidden_states: false
467
+ output_scores: false
468
+ pad_token_id: null
469
+ prediction_type: v_prediction
470
+ prefix: null
471
+ problem_type: null
472
+ remove_invalid_values: false
473
+ repetition_penalty: 1
474
+ return_dict: true
475
+ return_dict_in_generate: false
476
+ rms_norm_eps: 1e-05
477
+ sep_token_id: null
478
+ speech_vae_dim: 64
479
+ suppress_tokens: null
480
+ task_specific_params: null
481
+ temperature: 1
482
+ tf_legacy_loss: false
483
+ tie_encoder_decoder: false
484
+ tie_word_embeddings: true
485
+ tokenizer_class: null
486
+ top_k: 50
487
+ top_p: 1
488
+ torch_dtype: float16
489
+ torchscript: false
490
+ typical_p: 1
491
+ use_bfloat16: false
492
+ diffusion_loss_weight:
493
+ value: 1.8
494
+ disable_tqdm:
495
+ value: false
496
+ diversity_penalty:
497
+ value: 0
498
+ do_eval:
499
+ value: false
500
+ do_predict:
501
+ value: false
502
+ do_sample:
503
+ value: false
504
+ do_train:
505
+ value: true
506
+ early_stopping:
507
+ value: false
508
+ encoder_no_repeat_ngram_size:
509
+ value: 0
510
+ eos_token_id:
511
+ value: null
512
+ eval_accumulation_steps:
513
+ value: null
514
+ eval_delay:
515
+ value: 0
516
+ eval_do_concat_batches:
517
+ value: true
518
+ eval_on_start:
519
+ value: false
520
+ eval_steps:
521
+ value: 100
522
+ eval_strategy:
523
+ value: "no"
524
+ eval_use_gather_object:
525
+ value: false
526
+ exponential_decay_length_penalty:
527
+ value: null
528
+ finetuning_task:
529
+ value: null
530
+ forced_bos_token_id:
531
+ value: null
532
+ forced_eos_token_id:
533
+ value: null
534
+ fp16:
535
+ value: true
536
+ fp16_backend:
537
+ value: auto
538
+ fp16_full_eval:
539
+ value: false
540
+ fp16_opt_level:
541
+ value: O1
542
+ fsdp:
543
+ value: []
544
+ fsdp_config:
545
+ value:
546
+ min_num_params: 0
547
+ xla: false
548
+ xla_fsdp_grad_ckpt: false
549
+ xla_fsdp_v2: false
550
+ fsdp_min_num_params:
551
+ value: 0
552
+ fsdp_transformer_layer_cls_to_wrap:
553
+ value: null
554
+ full_determinism:
555
+ value: false
556
+ gradient_accumulation_steps:
557
+ value: 14
558
+ gradient_checkpointing:
559
+ value: false
560
+ gradient_checkpointing_kwargs:
561
+ value: null
562
+ gradient_clipping:
563
+ value: true
564
+ greater_is_better:
565
+ value: null
566
+ group_by_length:
567
+ value: false
568
+ half_precision_backend:
569
+ value: auto
570
+ hub_always_push:
571
+ value: false
572
+ hub_model_id:
573
+ value: null
574
+ hub_private_repo:
575
+ value: null
576
+ hub_strategy:
577
+ value: every_save
578
+ hub_token:
579
+ value: <HUB_TOKEN>
580
+ id2label:
581
+ value:
582
+ "0": LABEL_0
583
+ "1": LABEL_1
584
+ ignore_data_skip:
585
+ value: false
586
+ include_for_metrics:
587
+ value: []
588
+ include_inputs_for_metrics:
589
+ value: false
590
+ include_num_input_tokens_seen:
591
+ value: false
592
+ include_tokens_per_second:
593
+ value: false
594
+ is_decoder:
595
+ value: false
596
+ is_encoder_decoder:
597
+ value: false
598
+ jit_mode_eval:
599
+ value: false
600
+ label_names:
601
+ value: null
602
+ label_smoothing_factor:
603
+ value: 0
604
+ label2id:
605
+ value:
606
+ LABEL_0: 0
607
+ LABEL_1: 1
608
+ learning_rate:
609
+ value: 5e-05
610
+ length_column_name:
611
+ value: length
612
+ length_penalty:
613
+ value: 1
614
+ load_best_model_at_end:
615
+ value: false
616
+ local_rank:
617
+ value: 0
618
+ log_level:
619
+ value: passive
620
+ log_level_replica:
621
+ value: warning
622
+ log_on_each_node:
623
+ value: true
624
+ logging_dir:
625
+ value: /kaggle/working/VibeVoice-finetuning/runs/Feb13_13-38-15_773233cc2cd1
626
+ logging_first_step:
627
+ value: false
628
+ logging_nan_inf_filter:
629
+ value: true
630
+ logging_steps:
631
+ value: 10
632
+ logging_strategy:
633
+ value: steps
634
+ lr_scheduler_type:
635
+ value: cosine
636
+ max_grad_norm:
637
+ value: 0.6
638
+ max_length:
639
+ value: 20
640
+ max_steps:
641
+ value: -1
642
+ metric_for_best_model:
643
+ value: null
644
+ min_length:
645
+ value: 0
646
+ model/num_parameters:
647
+ value: 2777881057
648
+ model_type:
649
+ value: vibevoice
650
+ mp_parameters:
651
+ value: ""
652
+ neftune_noise_alpha:
653
+ value: null
654
+ no_cuda:
655
+ value: false
656
+ no_repeat_ngram_size:
657
+ value: 0
658
+ num_beam_groups:
659
+ value: 1
660
+ num_beams:
661
+ value: 1
662
+ num_return_sequences:
663
+ value: 1
664
+ num_train_epochs:
665
+ value: 8
666
+ optim:
667
+ value: adamw_torch
668
+ optim_args:
669
+ value: null
670
+ optim_target_modules:
671
+ value: null
672
+ output_attentions:
673
+ value: false
674
+ output_dir:
675
+ value: /kaggle/working/VibeVoice-finetuning/
676
+ output_hidden_states:
677
+ value: false
678
+ output_scores:
679
+ value: false
680
+ overwrite_output_dir:
681
+ value: false
682
+ pad_token_id:
683
+ value: null
684
+ past_index:
685
+ value: -1
686
+ per_device_eval_batch_size:
687
+ value: 8
688
+ per_device_train_batch_size:
689
+ value: 1
690
+ per_gpu_eval_batch_size:
691
+ value: null
692
+ per_gpu_train_batch_size:
693
+ value: null
694
+ prediction_loss_only:
695
+ value: false
696
+ prefix:
697
+ value: null
698
+ problem_type:
699
+ value: null
700
+ push_to_hub:
701
+ value: false
702
+ push_to_hub_model_id:
703
+ value: null
704
+ push_to_hub_organization:
705
+ value: null
706
+ push_to_hub_token:
707
+ value: <PUSH_TO_HUB_TOKEN>
708
+ ray_scope:
709
+ value: last
710
+ remove_invalid_values:
711
+ value: false
712
+ remove_unused_columns:
713
+ value: false
714
+ repetition_penalty:
715
+ value: 1
716
+ report_to:
717
+ value:
718
+ - wandb
719
+ restore_callback_states_from_checkpoint:
720
+ value: false
721
+ resume_from_checkpoint:
722
+ value: null
723
+ return_dict:
724
+ value: true
725
+ return_dict_in_generate:
726
+ value: false
727
+ run_name:
728
+ value: /kaggle/working/VibeVoice-finetuning/
729
+ save_on_each_node:
730
+ value: false
731
+ save_only_model:
732
+ value: false
733
+ save_safetensors:
734
+ value: true
735
+ save_steps:
736
+ value: 400
737
+ save_strategy:
738
+ value: steps
739
+ save_total_limit:
740
+ value: null
741
+ seed:
742
+ value: 42
743
+ semantic_tokenizer_config:
744
+ value:
745
+ _attn_implementation_autoset: false
746
+ _name_or_path: ""
747
+ add_cross_attention: false
748
+ architectures: null
749
+ bad_words_ids: null
750
+ begin_suppress_tokens: null
751
+ bos_token_id: null
752
+ causal: true
753
+ channels: 1
754
+ chunk_size_feed_forward: 0
755
+ conv_bias: true
756
+ conv_norm: none
757
+ corpus_normalize: 0
758
+ cross_attention_hidden_size: null
759
+ decoder_start_token_id: null
760
+ disable_last_norm: true
761
+ diversity_penalty: 0
762
+ do_sample: false
763
+ early_stopping: false
764
+ encoder_depths: 3-3-3-3-3-3-8
765
+ encoder_n_filters: 32
766
+ encoder_no_repeat_ngram_size: 0
767
+ encoder_ratios:
768
+ - 8
769
+ - 5
770
+ - 5
771
+ - 4
772
+ - 2
773
+ - 2
774
+ eos_token_id: null
775
+ exponential_decay_length_penalty: null
776
+ finetuning_task: null
777
+ fix_std: 0
778
+ forced_bos_token_id: null
779
+ forced_eos_token_id: null
780
+ id2label:
781
+ "0": LABEL_0
782
+ "1": LABEL_1
783
+ is_decoder: false
784
+ is_encoder_decoder: false
785
+ label2id:
786
+ LABEL_0: 0
787
+ LABEL_1: 1
788
+ layer_scale_init_value: 1e-06
789
+ layernorm: RMSNorm
790
+ layernorm_elementwise_affine: true
791
+ layernorm_eps: 1e-05
792
+ length_penalty: 1
793
+ max_length: 20
794
+ min_length: 0
795
+ mixer_layer: depthwise_conv
796
+ model_type: vibevoice_semantic_tokenizer
797
+ no_repeat_ngram_size: 0
798
+ num_beam_groups: 1
799
+ num_beams: 1
800
+ num_return_sequences: 1
801
+ output_attentions: false
802
+ output_hidden_states: false
803
+ output_scores: false
804
+ pad_mode: constant
805
+ pad_token_id: null
806
+ prefix: null
807
+ problem_type: null
808
+ remove_invalid_values: false
809
+ repetition_penalty: 1
810
+ return_dict: true
811
+ return_dict_in_generate: false
812
+ sep_token_id: null
813
+ std_dist_type: none
814
+ suppress_tokens: null
815
+ task_specific_params: null
816
+ temperature: 1
817
+ tf_legacy_loss: false
818
+ tie_encoder_decoder: false
819
+ tie_word_embeddings: true
820
+ tokenizer_class: null
821
+ top_k: 50
822
+ top_p: 1
823
+ torch_dtype: float16
824
+ torchscript: false
825
+ typical_p: 1
826
+ use_bfloat16: false
827
+ vae_dim: 128
828
+ weight_init_value: 0.01
829
+ semantic_vae_dim:
830
+ value: 128
831
+ sep_token_id:
832
+ value: null
833
+ skip_memory_metrics:
834
+ value: true
835
+ suppress_tokens:
836
+ value: null
837
+ task_specific_params:
838
+ value: null
839
+ temperature:
840
+ value: 1
841
+ tf_legacy_loss:
842
+ value: false
843
+ tf32:
844
+ value: null
845
+ tie_encoder_decoder:
846
+ value: false
847
+ tie_word_embeddings:
848
+ value: true
849
+ tokenizer_class:
850
+ value: null
851
+ top_k:
852
+ value: 50
853
+ top_p:
854
+ value: 1
855
+ torch_compile:
856
+ value: false
857
+ torch_compile_backend:
858
+ value: null
859
+ torch_compile_mode:
860
+ value: null
861
+ torch_dtype:
862
+ value: float16
863
+ torch_empty_cache_steps:
864
+ value: null
865
+ torchdynamo:
866
+ value: null
867
+ torchscript:
868
+ value: false
869
+ tp_size:
870
+ value: 0
871
+ tpu_metrics_debug:
872
+ value: false
873
+ tpu_num_cores:
874
+ value: null
875
+ transformers_version:
876
+ value: 4.51.3
877
+ typical_p:
878
+ value: 1
879
+ use_bfloat16:
880
+ value: false
881
+ use_cpu:
882
+ value: false
883
+ use_ipex:
884
+ value: false
885
+ use_legacy_prediction_loop:
886
+ value: false
887
+ use_liger_kernel:
888
+ value: false
889
+ use_mps_device:
890
+ value: false
891
+ warmup_ratio:
892
+ value: 0.1
893
+ warmup_steps:
894
+ value: 0
895
+ weight_decay:
896
+ value: 0
wandb/run-20260213_133940-4e4xqwjr/files/output.log ADDED
@@ -0,0 +1,29 @@
+ 0%| | 0/568 [00:00<?, ?it/s]Traceback (most recent call last):
+ File "<frozen runpy>", line 198, in _run_module_as_main
+ File "<frozen runpy>", line 88, in _run_code
+ File "/kaggle/working/VibeVoice-finetuning/src/finetune_vibevoice_lora0.py", line 691, in <module>
+ main()
+ File "/kaggle/working/VibeVoice-finetuning/src/finetune_vibevoice_lora0.py", line 687, in main
+ trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)
+ File "/usr/local/lib/python3.12/dist-packages/transformers/trainer.py", line 2245, in train
+ return inner_training_loop(
+ ^^^^^^^^^^^^^^^^^^^^
+ File "/usr/local/lib/python3.12/dist-packages/transformers/trainer.py", line 2514, in _inner_training_loop
+ batch_samples, num_items_in_batch = self.get_batch_samples(epoch_iterator, num_batches, args.device)
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "/usr/local/lib/python3.12/dist-packages/transformers/trainer.py", line 5243, in get_batch_samples
+ batch_samples.append(next(epoch_iterator))
+ ^^^^^^^^^^^^^^^^^^^^
+ File "/usr/local/lib/python3.12/dist-packages/torch/utils/data/dataloader.py", line 734, in __next__
+ data = self._next_data()
+ ^^^^^^^^^^^^^^^^^
+ File "/usr/local/lib/python3.12/dist-packages/torch/utils/data/dataloader.py", line 790, in _next_data
+ data = self._dataset_fetcher.fetch(index) # may raise StopIteration
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ File "/usr/local/lib/python3.12/dist-packages/torch/utils/data/_utils/fetch.py", line 55, in fetch
+ return self.collate_fn(data)
+ ^^^^^^^^^^^^^^^^^^^^^
+ File "/kaggle/working/VibeVoice-finetuning/src/finetune_vibevoice_lora0.py", line 200, in __call__
+ result[key] = torch.cat(items, dim=0)
+ ^^^^^^^^^^^^^^^^^^^^^^^
+ RuntimeError: Sizes of tensors must match except in dimension 0. Expected size 275 but got size 144 for tensor number 1 in the list.
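The traceback above ends in the data collator: `torch.cat(items, dim=0)` fails because two samples in the batch have different sequence lengths (275 vs 144). A minimal sketch of the usual fix is to pad every tensor to the batch maximum before concatenating; the helper name `pad_and_stack` and the padding value are illustrative, not taken from the repo's collator.

```python
import torch
import torch.nn.functional as F

def pad_and_stack(items, pad_value=0):
    # Illustrative helper: pads 2-D tensors of shape (1, seq_len) on the
    # right so all share the batch's maximum seq_len, then concatenates
    # along dim 0 -- avoiding the size-mismatch RuntimeError above.
    max_len = max(t.shape[-1] for t in items)
    padded = [
        F.pad(t, (0, max_len - t.shape[-1]), value=pad_value)
        for t in items
    ]
    return torch.cat(padded, dim=0)

# Same shapes as in the failing batch: 275 vs 144 along the last dim.
a = torch.ones(1, 275)
b = torch.ones(1, 144)
batch = pad_and_stack([a, b])
print(batch.shape)  # torch.Size([2, 275])
```

For token-id tensors the pad value would normally be the tokenizer's pad id, and a matching attention mask should be padded with zeros so the model ignores the padded positions.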
wandb/run-20260213_133940-4e4xqwjr/files/requirements.txt ADDED
@@ -0,0 +1,920 @@
+ setuptools==75.2.0
+ types-setuptools==80.9.0.20250822
+ requirements-parser==0.9.0
+ pip==24.1.2
+ cfgv==3.5.0
+ torchcodec==0.10.0
+ pre_commit==4.5.1
+ transformers==4.51.3
+ identify==2.6.16
+ virtualenv==20.36.1
+ multiprocess==0.70.16
+ diffusers==0.29.0
+ conformer==0.3.2
+ numpy==1.26.4
+ peft==0.7.1
+ datasets==2.21.0
+ resampy==0.4.3
+ tokenizers==0.21.4
+ nodeenv==1.10.0
+ distlib==0.4.0
+ resemble-perth==1.0.1
+ safetensors==0.5.3
+ fsspec==2024.6.1
+ vibevoice-finetuning==0.1.0
+ s3tokenizer==0.3.0
+ dill==0.3.8
+ click==8.3.1
+ regex==2025.11.3
+ joblib==1.5.3
+ nltk==3.9.2
+ tqdm==4.67.1
+ pytools==2025.2.5
+ pycuda==2025.1.2
+ siphash24==1.8
+ protobuf==5.29.5
+ pandas==2.2.2
+ google-cloud-translate==3.12.1
+ scipy==1.15.3
+ gensim==4.4.0
+ torchtune==0.6.1
+ huggingface-hub==0.36.0
+ learntools==0.3.5
+ urllib3==2.6.3
+ aiosignal==1.4.0
+ kagglehub==0.4.0
+ annotated-types==0.7.0
+ rfc3161-client==1.0.5
+ blake3==1.0.8
+ filelock==3.20.3
+ pyOpenSSL==25.3.0
+ securesystemslib==1.3.1
+ pydantic_core==2.41.5
+ markdown-it-py==4.0.0
+ grpclib==0.4.9
+ multidict==6.7.0
+ in-toto-attestation==0.9.3
+ yarl==1.22.0
+ id==1.5.0
+ PyJWT==2.10.1
+ anyio==4.12.1
+ charset-normalizer==3.4.4
+ h11==0.16.0
+ cffi==2.0.0
+ pydantic==2.12.5
+ typing_extensions==4.15.0
+ PyYAML==6.0.3
+ aiohappyeyeballs==2.6.1
+ sigstore==4.1.0
+ pyasn1==0.6.1
+ model-signing==1.1.1
+ sigstore-models==0.0.5
+ shellingham==1.5.4
+ hf-xet==1.2.1rc0
+ propcache==0.4.1
+ typer-slim==0.21.1
+ betterproto==2.0.0b7
+ Pygments==2.19.2
+ certifi==2026.1.4
+ idna==3.11
+ rich==14.2.0
+ httpcore==1.0.9
+ pycparser==2.23
+ email-validator==2.3.0
+ python-dateutil==2.9.0.post0
+ mdurl==0.1.2
+ requests==2.32.5
+ hpack==4.1.0
+ pyarrow==22.0.0
+ cryptography==46.0.3
+ packaging==26.0rc2
+ six==1.17.0
+ dnspython==2.8.0
+ frozenlist==1.8.0
+ kagglesdk==0.1.14
+ typing-inspection==0.4.2
+ platformdirs==4.5.1
+ asn1crypto==1.5.1
+ hyperframe==6.1.0
+ h2==4.3.0
+ rfc8785==0.1.4
+ sigstore-rekor-types==0.0.18
+ httpx==0.28.1
+ tuf==6.0.0
+ attrs==25.4.0
+ aiohttp==3.13.3
+ xxhash==3.6.0
+ rouge_score==0.1.2
+ pyclipper==1.4.0
+ bayesian-optimization==3.2.0
+ fiona==1.10.1
+ urwid_readline==0.15.1
+ Wand==0.6.13
+ mpld3==0.5.12
+ qgrid==1.3.1
+ woodwork==0.31.0
+ y-py==0.6.2
+ ipywidgets==8.1.5
+ daal==2025.10.0
+ scikit-multilearn==0.2.0
+ pytesseract==0.3.13
+ Cartopy==0.25.0
+ odfpy==1.4.1
+ Boruta==0.4.3
+ docstring-to-markdown==0.17
+ torchinfo==1.8.0
+ clint==0.5.1
+ h2o==3.46.0.9
+ comm==0.2.3
+ Deprecated==1.3.1
+ pymongo==4.16.0
+ tensorflow-io-gcs-filesystem==0.37.1
+ optuna==4.6.0
+ pygltflib==1.16.5
+ keras-core==0.1.7
+ google-cloud-vision==3.11.0
+ pandas-profiling==3.6.6
+ pyaml==25.7.0
+ keras-tuner==1.4.8
+ open_spiel==1.6.11
+ fastuuid==0.14.0
+ scikit-surprise==1.1.4
+ torchmetrics==1.8.2
+ vtk==9.3.1
+ jupyter-ydoc==0.2.5
+ aiofiles==22.1.0
+ featuretools==1.31.0
+ plotly-express==0.4.1
+ marshmallow==3.26.2
+ easyocr==1.7.2
+ pyemd==1.0.0
+ fuzzywuzzy==0.18.0
+ starlette==0.50.0
+ openslide-python==1.4.3
+ black==25.12.0
+ google-cloud-videointelligence==2.17.0
+ pandasql==0.7.3
+ update-checker==0.18.0
+ pathos==0.3.2
+ xvfbwrapper==0.2.18
+ jupyter_server_fileid==0.9.3
+ scikit-learn-intelex==2025.10.0
+ fasttext==0.9.3
+ litellm==1.80.16
+ stopit==1.1.2
+ haversine==2.9.0
+ pox==0.3.6
+ google-auth==2.47.0
+ catboost==1.2.8
+ jupyter_server==2.12.5
+ geojson==3.2.0
+ boto3==1.42.27
+ nilearn==0.13.0
+ fury==0.12.0
+ ipython_pygments_lexers==1.1.1
+ olefile==0.47
+ jupyter_server_proxy==4.4.0
+ gymnasium==0.29.0
+ tensorflow-cloud==0.1.5
+ pytorch-ignite==0.5.3
+ annotated-doc==0.0.4
+ jupyter-lsp==1.5.1
+ gpxpy==1.6.2
+ simpervisor==1.0.0
+ mlcrate==0.2.0
+ papermill==2.6.0
+ jupyterlab==3.6.8
+ args==0.1.0
+ ImageHash==4.3.2
+ TPOT==0.12.2
+ typing-inspect==0.9.0
+ PyUpSet==0.1.1.post7
+ dacite==1.9.2
+ pycryptodome==3.23.0
+ urwid==3.0.3
+ ray==2.53.0
+ deepdiff==8.6.1
+ visions==0.8.1
+ trx-python==0.3
+ Chessnut==0.4.1
+ deap==1.4.3
+ lml==0.2.0
+ jmespath==1.0.1
+ jiter==0.10.0
+ ypy-websocket==0.8.4
+ cytoolz==1.1.0
+ qtconsole==5.7.0
+ ansicolors==1.1.8
+ path.py==12.5.0
+ pathspec==1.0.3
+ lightning-utilities==0.15.2
+ tensorflow-io==0.37.1
+ wavio==0.0.9
+ cligj==0.7.2
+ pytorch-lightning==2.6.0
+ pdf2image==1.17.0
+ dipy==1.11.0
+ ipympl==0.9.8
+ fastapi==0.123.10
+ pybind11==3.0.1
+ pyLDAvis==3.4.1
+ coverage==7.13.1
+ Janome==0.5.0
+ langid==1.1.6
+ scikit-plot==0.3.7
+ fastgit==0.0.1
+ simpleitk==2.5.3
+ ml_collections==1.1.0
+ filetype==1.2.0
+ jupyter_server_ydoc==0.8.0
+ execnb==0.1.18
+ colorama==0.4.6
+ ruamel.yaml==0.19.1
+ python-lsp-server==1.14.0
+ ujson==5.11.0
+ PyArabic==0.6.15
+ path==17.1.1
+ nbdev==2.4.10
+ google-genai==1.57.0
+ google-cloud-aiplatform==1.133.0
+ fastcore==1.11.3
+ botocore==1.42.27
+ openai==2.15.0
+ Pympler==1.1
+ puremagic==1.30
+ s3fs==0.4.2
+ nbconvert==6.4.5
+ kornia==0.8.2
+ funcy==2.0
+ testpath==0.6.0
+ kornia_rs==0.1.10
+ nbclient==0.5.13
+ ydata-profiling==4.18.1
+ squarify==0.4.4
+ dataclasses-json==0.6.7
+ pettingzoo==1.24.0
+ segment_anything==1.0
+ emoji==2.15.0
+ a2a-sdk==0.3.22
+ click-plugins==1.1.1.2
+ google-cloud-pubsub==2.34.0
+ python-bidi==0.6.7
+ rgf-python==3.12.0
+ pytokens==0.3.0
+ ninja==1.13.0
+ stable-baselines3==2.1.0
+ widgetsnbextension==4.0.15
+ minify_html==0.18.1
+ pypdf==6.6.0
+ kaggle-environments==1.18.0
+ jedi==0.19.2
+ jupyterlab-lsp==3.10.2
+ line_profiler==5.0.0
+ python-lsp-jsonrpc==1.1.2
+ QtPy==2.4.3
+ pydicom==3.0.1
+ multimethod==1.12
+ asttokens==3.0.1
+ docker==7.1.0
+ ghapi==1.0.8
+ s3transfer==0.16.0
+ ppft==1.7.7
+ libpysal==4.9.2
+ igraph==1.0.0
+ jupyterlab_server==2.28.0
+ tenacity==9.1.2
+ isoweek==1.3.3
+ texttable==1.7.0
+ sphinx-rtd-theme==0.2.4
+ kt-legacy==1.0.5
+ orderly-set==5.5.0
+ pyexcel-io==0.6.7
+ Shimmy==1.3.0
+ mamba==0.11.3
+ colorlog==6.10.1
+ openslide-bin==4.0.0.11
+ pyexcel-ods==0.6.0
+ aiosqlite==0.22.1
+ preprocessing==0.1.13
+ lime==0.2.0.1
+ cesium==0.12.4
+ google-adk==1.22.1
+ hep_ml==0.8.0
+ phik==0.12.5
+ setuptools-scm==9.2.2
+ pudb==2025.1.5
+ mne==1.11.0
+ rtree==1.4.1
+ keras-cv==0.9.0
+ gatspy==0.3
+ opentelemetry-exporter-gcp-logging==1.11.0a0
+ onnx==1.20.1
+ scikit-optimize==0.10.2
+ category_encoders==2.9.0
+ mypy_extensions==1.1.0
+ mistune==0.8.4
+ json5==0.13.0
+ google-colab==1.0.0
+ psutil==5.9.5
+ tblib==3.1.0
+ typer==0.20.0
+ astunparse==1.6.3
+ sentry-sdk==2.42.1
+ ipython==7.34.0
+ oauthlib==3.3.1
+ grpc-google-iam-v1==0.14.3
+ roman-numerals-py==3.1.0
+ prophet==1.1.7
+ libclang==18.1.1
+ libucxx-cu12==0.44.0
+ accelerate==1.11.0
+ imageio==2.37.0
+ geemap==0.35.3
+ patsy==1.0.2
+ imagesize==1.4.1
+ py-cpuinfo==9.0.0
+ pyzmq==26.2.1
+ langchain==0.3.27
+ soxr==1.0.0
+ jupyterlab_pygments==0.3.0
+ opentelemetry-proto==1.37.0
+ backcall==0.2.0
+ tensorflow-hub==0.16.1
+ requests-oauthlib==2.0.0
+ orjson==3.11.3
+ dopamine_rl==4.1.2
+ sentence-transformers==5.1.1
+ overrides==7.7.0
+ bokeh==3.7.3
+ jeepney==0.9.0
+ ipython-genutils==0.2.0
+ xarray==2025.10.1
+ catalogue==2.0.10
+ beautifulsoup4==4.13.5
+ sphinxcontrib-devhelp==2.0.0
+ partd==1.4.2
+ treelite==4.4.1
+ sklearn-pandas==2.2.0
+ nvidia-nccl-cu12==2.27.3
+ sphinxcontrib-qthelp==2.0.0
+ google-auth-httplib2==0.2.0
+ typeguard==4.4.4
+ h5py==3.15.1
+ google-cloud-core==2.4.3
+ xlrd==2.0.2
+ branca==0.8.2
+ chardet==5.2.0
+ imbalanced-learn==0.14.0
+ sentencepiece==0.2.1
+ google-api-core==2.26.0
+ nvidia-cusparselt-cu12==0.7.1
+ Flask==3.1.2
+ tcmlib==1.4.0
+ simple-parsing==0.1.7
+ matplotlib-venn==1.1.2
+ fqdn==1.5.1
+ gin-config==0.5.0
+ ipython-sql==0.5.0
+ toml==0.10.2
+ PyOpenGL==3.1.10
+ jsonpointer==3.0.0
+ wrapt==2.0.0
+ websocket-client==1.9.0
+ torchao==0.10.0
+ alabaster==1.0.0
+ jaxlib==0.7.2
+ timm==1.0.20
+ namex==0.1.0
+ google-cloud-secret-manager==2.25.0
+ pynvml==12.0.0
+ nvidia-cublas-cu12==12.6.4.1
+ pyogrio==0.11.1
+ PyGObject==3.42.0
+ libkvikio-cu12==25.6.0
+ array_record==0.8.1
+ jupyter_core==5.9.1
+ jupyter_server_terminals==0.5.3
+ spacy-legacy==3.0.12
+ gradio_client==1.13.3
+ librosa==0.11.0
+ ibis-framework==9.5.0
+ requests-toolbelt==1.0.0
+ triton==3.4.0
+ dask-cuda==25.6.0
+ google-cloud-language==2.18.0
+ imutils==0.5.4
+ google-cloud-monitoring==2.28.0
+ opt_einsum==3.4.0
+ folium==0.20.0
+ moviepy==1.0.3
+ en_core_web_sm==3.8.0
+ ucxx-cu12==0.44.0
+ tensorflow-text==2.19.0
+ importlib_resources==6.5.2
+ lazy_loader==0.4
+ numba-cuda==0.11.0
+ jsonschema==4.25.1
+ opentelemetry-semantic-conventions==0.58b0
+ music21==9.3.0
+ bigquery-magics==0.10.3
+ spanner-graph-notebook==1.1.8
+ thinc==8.3.6
+ sqlglot==25.20.2
+ linkify-it-py==2.0.3
+ tsfresh==0.21.1
+ opencv-contrib-python==4.12.0.88
+ nbclassic==1.3.3
+ scikit-image==0.25.2
+ tensorflow_decision_forests==1.12.0
+ language_data==1.3.0
+ isoduration==20.11.0
+ pytest==8.4.2
+ libcuml-cu12==25.6.0
+ nvidia-cuda-nvcc-cu12==12.5.82
+ google-crc32c==1.7.1
+ nvidia-nvtx-cu12==12.6.77
+ torchsummary==1.5.1
+ earthengine-api==1.5.24
+ webencodings==0.5.1
+ jax-cuda12-pjrt==0.7.2
+ webcolors==24.11.1
+ pydot==3.0.4
+ gym==0.25.2
+ fastjsonschema==2.21.2
+ google-generativeai==0.8.5
+ orbax-checkpoint==0.11.25
+ gdown==5.2.0
+ wandb==0.22.2
+ soupsieve==2.8
+ httpimport==1.4.1
+ py4j==0.10.9.7
+ Markdown==3.9
+ rsa==4.9.1
+ tomlkit==0.13.3
+ libcugraph-cu12==25.6.0
+ entrypoints==0.4
+ pyspark==3.5.1
+ httplib2==0.31.0
+ fastprogress==1.0.3
+ importlib_metadata==8.7.0
+ ipyleaflet==0.20.0
+ missingno==0.5.2
+ pandas-datareader==0.10.0
+ pooch==1.8.2
+ pyviz_comms==3.0.6
+ cycler==0.12.1
+ fastrlock==0.8.3
+ opencv-python==4.12.0.88
+ greenlet==3.2.4
+ nvidia-cusolver-cu12==11.7.1.2
+ tensorboard==2.19.0
+ jax-cuda12-plugin==0.7.2
+ inflect==7.5.0
+ panel==1.8.2
+ multitasking==0.0.12
+ xgboost==3.1.0
+ cvxpy==1.6.7
+ rpy2==3.5.17
+ tf_keras==2.19.0
+ shapely==2.1.2
+ uri-template==1.3.0
+ opentelemetry-api==1.37.0
+ pyshp==3.0.2.post1
+ atpublic==5.1
+ sphinxcontrib-applehelp==2.0.0
+ treescope==0.1.10
+ tiktoken==0.12.0
+ tensorstore==0.1.78
+ xyzservices==2025.4.0
+ dask-cudf-cu12==25.6.0
+ cupy-cuda12x==13.3.0
+ bleach==6.2.0
+ preshed==3.0.10
+ defusedxml==0.7.1
+ sphinxcontrib-serializinghtml==2.0.0
+ libucx-cu12==1.18.1
+ python-utils==3.9.1
+ lxml==5.4.0
+ highspy==1.11.0
+ pydotplus==2.0.2
+ pycryptodomex==3.23.0
+ matplotlib==3.10.0
+ google-pasta==0.2.0
+ cmdstanpy==1.2.5
+ ipyparallel==8.8.0
+ keras==3.10.0
+ spacy-loggers==1.0.5
+ rfc3987-syntax==1.1.0
+ google-ai-generativelanguage==0.6.15
+ keras-hub==0.21.1
+ pydata-google-auth==1.9.1
+ cudf-polars-cu12==25.6.0
+ absl-py==1.4.0
+ openpyxl==3.1.5
+ vega-datasets==0.9.0
+ mpmath==1.3.0
+ etils==1.13.0
+ ffmpy==0.6.3
+ frozendict==2.4.6
+ polars==1.25.2
+ graphviz==0.21
+ torchdata==0.11.0
+ jax==0.7.2
+ langchain-core==0.3.79
+ tensorflow-metadata==1.17.2
+ pycairo==1.28.0
+ sympy==1.13.3
+ pyparsing==3.2.5
+ cudf-cu12==25.6.0
+ langcodes==3.5.0
+ stringzilla==4.2.1
+ duckdb==1.3.2
+ smart_open==7.4.0
+ blinker==1.9.0
+ referencing==0.37.0
+ googledrivedownloader==1.1.0
+ GDAL==3.8.4
+ et_xmlfile==2.0.0
+ jieba==0.42.1
+ zict==3.0.0
+ hyperopt==0.2.7
+ python-louvain==0.16
+ GitPython==3.1.45
+ gradio==5.49.1
+ PyDrive2==1.21.3
+ keyring==25.6.0
+ murmurhash==1.0.13
+ python-dotenv==1.1.1
+ jupyter-console==6.6.3
+ gspread==6.2.1
+ rapids-logger==0.1.19
+ docstring_parser==0.17.0
+ albumentations==2.0.8
+ h5netcdf==1.7.2
+ pandas-gbq==0.29.2
+ seaborn==0.13.2
+ cons==0.4.7
+ pyomo==6.9.5
+ omegaconf==2.3.0
+ parso==0.8.5
+ rpds-py==0.27.1
+ progressbar2==4.5.0
+ pexpect==4.9.0
+ ptyprocess==0.7.0
+ jaraco.functools==4.3.0
+ pygame==2.6.1
+ google-cloud-datastore==2.21.0
+ cuml-cu12==25.6.0
+ ydf==0.13.0
+ libcuvs-cu12==25.6.1
+ ucx-py-cu12==0.44.0
+ kiwisolver==1.4.9
+ Cython==3.0.12
+ jupyterlab_widgets==3.0.15
+ snowballstemmer==3.0.1
+ nbformat==5.10.4
+ intel-openmp==2025.2.1
+ python-snappy==0.7.3
+ bqplot==0.12.45
+ nest-asyncio==1.6.0
+ umf==0.11.0
+ nvidia-cufft-cu12==11.3.0.4
+ notebook==6.5.7
+ nibabel==5.3.2
+ google-cloud-bigtable==2.33.0
+ cmake==3.31.6
+ multipledispatch==1.0.0
+ eerepr==0.1.2
+ locket==1.0.0
+ nvidia-ml-py==12.575.51
+ einops==0.8.1
+ google-cloud-dataproc==5.23.0
+ plotly==5.24.1
+ msgpack==1.1.2
+ clarabel==0.11.1
+ ipytree==0.2.2
+ psygnal==0.15.0
+ python-box==7.3.2
+ tweepy==4.16.0
+ prometheus_client==0.23.1
+ termcolor==3.1.0
+ nvidia-cuda-runtime-cu12==12.6.77
+ mkl==2025.2.0
+ libraft-cu12==25.6.0
+ textblob==0.19.0
+ firebase-admin==6.9.0
+ bigframes==2.26.0
+ debugpy==1.8.15
+ SecretStorage==3.4.0
+ google-cloud-discoveryengine==0.13.12
+ xarray-einstats==0.9.1
+ decorator==4.4.2
+ libcudf-cu12==25.6.0
+ pickleshare==0.7.5
+ google-cloud-storage==2.19.0
+ lark==1.3.0
+ wasabi==1.1.3
+ blis==1.3.0
+ pydub==0.25.1
+ more-itertools==10.8.0
+ sse-starlette==3.0.2
+ keyrings.google-artifactregistry-auth==1.1.2
+ param==2.2.1
+ gast==0.6.0
+ Werkzeug==3.1.3
+ marisa-trie==1.3.1
+ colorcet==3.1.0
+ antlr4-python3-runtime==4.9.3
+ absolufy-imports==0.3.1
+ keras-nlp==0.21.1
+ toolz==0.12.1
+ python-slugify==8.0.4
+ google-cloud-speech==2.34.0
+ jupyter-leaflet==0.20.0
+ gym-notices==0.1.0
+ fonttools==4.60.1
+ semantic-version==2.10.0
+ etuples==0.3.10
+ google-cloud-logging==3.12.1
+ traitlets==5.7.1
640
+ sqlparse==0.5.3
641
+ terminado==0.18.1
642
+ google-cloud-resource-manager==1.15.0
643
+ librmm-cu12==25.6.0
644
+ python-multipart==0.0.20
645
+ Brotli==1.1.0
646
+ pyproj==3.7.2
647
+ langchain-text-splitters==0.3.11
648
+ sphinxcontrib-htmlhelp==2.1.0
649
+ grpc-interceptor==0.15.4
650
+ geocoder==1.38.1
651
+ mizani==0.13.5
652
+ babel==2.17.0
653
+ pyasn1_modules==0.4.2
654
+ pyperclip==1.11.0
655
+ optax==0.2.6
656
+ ply==3.11
657
+ audioread==3.0.1
658
+ docutils==0.21.2
659
+ scooby==0.10.2
660
+ distro==1.9.0
661
+ tf-slim==1.1.0
662
+ tzlocal==5.3.1
663
+ anywidget==0.9.18
664
+ groovy==0.1.2
665
+ opencv-python-headless==4.12.0.88
666
+ nvidia-cufile-cu12==1.11.1.6
667
+ proglog==0.1.12
668
+ nvtx==0.2.13
669
+ google-cloud-bigquery-connection==1.19.0
670
+ matplotlib-inline==0.1.7
671
+ editdistance==0.8.1
672
+ tzdata==2025.2
673
+ tabulate==0.9.0
674
+ dlib==19.24.6
675
+ pylibraft-cu12==25.6.0
676
+ immutabledict==4.2.2
677
+ community==1.0.0b1
678
+ tensorflow==2.19.0
679
+ ale-py==0.11.2
680
+ CacheControl==0.14.3
681
+ notebook_shim==0.2.4
682
+ ndindex==1.10.0
683
+ pytensor==2.35.1
684
+ beartype==0.22.2
685
+ google-cloud-spanner==3.58.0
686
+ soundfile==0.13.1
687
+ itsdangerous==2.2.0
688
+ jsonpatch==1.33
689
+ nx-cugraph-cu12==25.6.0
690
+ plotnine==0.14.5
691
+ blosc2==3.10.2
692
+ ml_dtypes==0.5.3
693
+ traittypes==0.2.1
694
+ holidays==0.82
695
+ text-unidecode==1.3
696
+ yfinance==0.2.66
697
+ arviz==0.22.0
698
+ weasel==0.4.1
699
+ networkx==3.5
700
+ rapids-dask-dependency==25.6.0
701
+ srsly==2.5.1
702
+ wordcloud==1.9.4
703
+ jaraco.classes==3.4.0
704
+ albucore==0.0.24
705
+ db-dtypes==1.4.3
706
+ google-cloud-bigquery==3.38.0
707
+ uritemplate==4.2.0
708
+ numexpr==2.14.1
709
+ cymem==2.0.11
710
+ simsimd==6.5.3
711
+ alembic==1.17.0
712
+ jupytext==1.17.3
713
+ pillow==11.3.0
714
+ jsonschema-specifications==2025.9.1
715
+ tables==3.10.2
716
+ altair==5.5.0
717
+ pynvjitlink-cu12==0.7.0
718
+ cufflinks==0.17.3
719
+ cvxopt==1.3.2
720
+ watchdog==6.0.0
721
+ PySocks==1.7.1
722
+ uc-micro-py==1.0.3
723
+ proto-plus==1.26.1
724
+ Sphinx==8.2.3
725
+ optree==0.17.0
726
+ numba==0.60.0
727
+ opentelemetry-exporter-gcp-trace==1.10.0
728
+ pynndescent==0.5.13
729
+ raft-dask-cu12==25.6.0
730
+ google-cloud-trace==1.17.0
731
+ google-auth-oauthlib==1.2.2
732
+ peewee==3.18.2
733
+ promise==2.3
734
+ cloudpickle==3.1.1
735
+ chex==0.1.90
736
+ spacy==3.8.7
737
+ rfc3986-validator==0.1.1
738
+ threadpoolctl==3.6.0
739
+ Send2Trash==1.8.3
740
+ shap==0.49.1
741
+ sniffio==1.3.1
742
+ dask==2025.5.0
743
+ ipyevents==2.0.4
744
+ cramjam==2.11.0
745
+ wurlitzer==3.1.1
746
+ confection==0.1.5
747
+ narwhals==2.9.0
748
+ torchvision==0.23.0+cu126
749
+ plum-dispatch==2.5.8
750
+ stanio==0.5.1
751
+ easydict==1.13
752
+ argon2-cffi==25.1.0
753
+ llvmlite==0.43.0
754
+ nvidia-nvjitlink-cu12==12.6.85
755
+ statsmodels==0.14.5
756
+ argon2-cffi-bindings==25.1.0
757
+ future==1.0.0
758
+ psycopg2==2.9.11
759
+ iniconfig==2.3.0
760
+ lightgbm==4.6.0
761
+ jupyter-events==0.12.0
762
+ opentelemetry-exporter-otlp-proto-http==1.37.0
763
+ Bottleneck==1.4.2
764
+ opentelemetry-exporter-otlp-proto-common==1.37.0
765
+ pylibcugraph-cu12==25.6.0
766
+ distributed==2025.5.0
767
+ osqp==1.0.5
768
+ colour==0.1.5
769
+ zipp==3.23.0
770
+ httpx-sse==0.4.3
771
+ hdbscan==0.8.40
772
+ cuda-python==12.6.2.post1
773
+ nvidia-nvshmem-cu12==3.4.5
774
+ sphinxcontrib-jsmath==1.0.1
775
+ prompt_toolkit==3.0.52
776
+ google-resumable-media==2.7.2
777
+ holoviews==1.21.0
778
+ portpicker==1.5.2
779
+ PyWavelets==1.9.0
780
+ nvidia-cusparse-cu12==12.5.4.2
781
+ Farama-Notifications==0.0.4
782
+ pytz==2025.2
783
+ opentelemetry-resourcedetector-gcp==1.10.0a0
784
+ MarkupSafe==3.0.3
785
+ scs==3.2.9
786
+ SQLAlchemy==2.0.44
787
+ pycocotools==2.0.10
788
+ pydantic-settings==2.11.0
789
+ curl_cffi==0.13.0
790
+ humanize==4.14.0
791
+ astropy-iers-data==0.2025.10.20.0.39.8
792
+ cuvs-cu12==25.6.1
793
+ opentelemetry-exporter-gcp-monitoring==1.10.0a0
794
+ ipykernel==6.17.1
795
+ jsonpickle==4.1.1
796
+ websockets==15.0.1
797
+ nvidia-curand-cu12==10.3.7.77
798
+ googleapis-common-protos==1.71.0
799
+ pandas-stubs==2.2.2.240909
800
+ uvicorn==0.38.0
801
+ ratelim==0.1.6
802
+ miniKanren==1.0.5
803
+ geographiclib==2.1
804
+ cachetools==5.5.2
805
+ Jinja2==3.1.6
806
+ intel-cmplr-lib-ur==2025.2.1
807
+ simplejson==3.20.2
808
+ imageio-ffmpeg==0.6.0
809
+ python-json-logger==4.0.0
810
+ jupyter_kernel_gateway==2.5.2
811
+ tifffile==2025.10.16
812
+ contourpy==1.3.3
813
+ grpcio==1.75.1
814
+ nvidia-cudnn-cu12==9.10.2.21
815
+ pygit2==1.18.2
816
+ fastai==2.8.4
817
+ pymc==5.26.1
818
+ tinycss2==1.4.0
819
+ mdit-py-plugins==0.5.0
820
+ jaraco.context==6.0.1
821
+ tensorflow-datasets==4.9.9
822
+ flax==0.10.7
823
+ wcwidth==0.2.14
824
+ cyipopt==1.5.0
825
+ oauth2client==4.1.3
826
+ nvidia-cuda-nvrtc-cu12==12.6.77
827
+ umap-learn==0.5.9.post2
828
+ gspread-dataframe==4.0.0
829
+ types-pytz==2025.2.0.20250809
830
+ geopy==2.4.1
831
+ logical-unification==0.4.6
832
+ google-cloud-appengine-logging==1.7.0
833
+ natsort==8.4.0
834
+ langsmith==0.4.37
835
+ dataproc-spark-connect==0.8.3
836
+ opentelemetry-sdk==1.37.0
837
+ geopandas==1.1.1
838
+ rfc3339-validator==0.1.4
839
+ prettytable==3.16.0
840
+ stumpy==1.13.0
841
+ rmm-cu12==25.6.0
842
+ parsy==2.2
843
+ safehttpx==0.1.6
844
+ pyerfa==2.0.1.5
845
+ astropy==7.1.1
846
+ blobfile==3.1.0
847
+ mcp==1.18.0
848
+ slicer==0.0.8
849
+ google-cloud-firestore==2.21.0
850
+ build==1.3.0
851
+ grpcio-status==1.71.2
852
+ gitdb==4.0.12
853
+ sqlalchemy-spanner==1.17.0
854
+ ruff==0.14.1
855
+ pylibcudf-cu12==25.6.0
856
+ google-cloud-audit-log==0.4.0
857
+ scikit-learn==1.6.1
858
+ Authlib==1.6.5
859
+ cloudpathlib==0.23.0
860
+ flatbuffers==25.9.23
861
+ fastdownload==0.0.7
862
+ gcsfs==2025.3.0
863
+ google-cloud-functions==1.21.0
864
+ pyproject_hooks==1.2.0
865
+ tornado==6.5.1
866
+ pandocfilters==1.5.1
867
+ fasttransform==0.0.2
868
+ yellowbrick==1.5
869
+ jupyter_client==7.4.9
870
+ kaggle==1.7.4.5
871
+ distributed-ucxx-cu12==0.44.0
872
+ tensorboard-data-server==0.7.2
873
+ torchaudio==2.8.0+cu126
874
+ pluggy==1.6.0
875
+ google==2.0.3
876
+ arrow==1.4.0
877
+ mlxtend==0.23.4
878
+ smmap==5.0.2
879
+ hf_transfer==0.1.9
880
+ torch==2.8.0+cu126
881
+ sortedcontainers==2.4.0
882
+ tbb==2022.2.0
883
+ google-api-python-client==2.185.0
884
+ wheel==0.45.1
885
+ nvidia-cuda-cupti-cu12==12.6.80
886
+ zstandard==0.25.0
887
+ Mako==1.3.10
888
+ autograd==1.8.0
889
+ glob2==0.7
890
+ tensorflow-probability==0.25.0
891
+ colorlover==0.3.0
892
+ ipyfilechooser==0.6.0
893
+ dm-tree==0.1.9
894
+ html5lib==1.1
895
+ python-apt==0.0.0
896
+ vibevoice-finetuning==0.1.0
897
+ PyGObject==3.42.1
898
+ blinker==1.4
899
+ jeepney==0.7.1
900
+ six==1.16.0
901
+ oauthlib==3.2.0
902
+ wadllib==1.3.6
903
+ launchpadlib==1.10.16
904
+ dbus-python==1.2.18
905
+ PyJWT==2.3.0
906
+ importlib-metadata==4.6.4
907
+ httplib2==0.20.2
908
+ zipp==1.0.0
909
+ pyparsing==2.4.7
910
+ python-apt==2.4.0+ubuntu4
911
+ lazr.restfulclient==0.14.4
912
+ SecretStorage==3.3.1
913
+ distro==1.7.0
914
+ lazr.uri==1.0.6
915
+ more-itertools==8.10.0
916
+ cryptography==3.4.8
917
+ keyring==23.5.0
918
+ Markdown==3.3.6
919
+ Mako==1.1.3
920
+ MarkupSafe==2.0.1
wandb/run-20260213_133940-4e4xqwjr/files/wandb-metadata.json ADDED
@@ -0,0 +1,105 @@
+ {
+ "os": "Linux-6.6.113+-x86_64-with-glibc2.35",
+ "python": "CPython 3.12.12",
+ "startedAt": "2026-02-13T13:39:40.245154Z",
+ "args": [
+ "--model_name_or_path",
+ "microsoft/VibeVoice-1.5B",
+ "--processor_name_or_path",
+ "vibevoice/processor",
+ "--text_column_name",
+ "text",
+ "--audio_column_name",
+ "audio",
+ "--voice_prompts_column_name",
+ "voice_prompts",
+ "--output_dir",
+ "/kaggle/working/VibeVoice-finetuning/",
+ "--per_device_train_batch_size",
+ "1",
+ "--gradient_accumulation_steps",
+ "14",
+ "--learning_rate",
+ "5e-5",
+ "--num_train_epochs",
+ "8",
+ "--logging_steps",
+ "10",
+ "--save_steps",
+ "400",
+ "--eval_steps",
+ "100",
+ "--report_to",
+ "wandb",
+ "--lora_r",
+ "64",
+ "--lora_alpha",
+ "128",
+ "--remove_unused_columns",
+ "False",
+ "--fp16",
+ "True",
+ "--do_train",
+ "--gradient_clipping",
+ "--gradient_checkpointing",
+ "False",
+ "--ddpm_batch_mul",
+ "1",
+ "--diffusion_loss_weight",
+ "1.8",
+ "--train_diffusion_head",
+ "True",
+ "--ce_loss_weight",
+ "1.1",
+ "--voice_prompt_drop_rate",
+ "0.35",
+ "--lora_target_modules",
+ "q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj",
+ "--lr_scheduler_type",
+ "cosine",
+ "--warmup_ratio",
+ "0.1",
+ "--max_grad_norm",
+ "0.6"
+ ],
+ "program": "-m src.finetune_vibevoice_lora0",
+ "git": {
+ "remote": "https://github.com/voicepowered-ai/VibeVoice-finetuning",
+ "commit": "f74368637dd67fc3895d9f81365c50e65ae0641c"
+ },
+ "email": "aralien0907@gmail.com",
+ "root": "/kaggle/working/VibeVoice-finetuning",
+ "host": "773233cc2cd1",
+ "executable": "/usr/bin/python3",
+ "cpu_count": 2,
+ "cpu_count_logical": 4,
+ "gpu": "Tesla T4",
+ "gpu_count": 2,
+ "disk": {
+ "/": {
+ "total": "8656922775552",
+ "used": "7136140312576"
+ }
+ },
+ "memory": {
+ "total": "33662472192"
+ },
+ "gpu_nvidia": [
+ {
+ "name": "Tesla T4",
+ "memoryTotal": "16106127360",
+ "cudaCores": 2560,
+ "architecture": "Turing",
+ "uuid": "GPU-2fb54c29-fff5-d673-7644-3c83188f84df"
+ },
+ {
+ "name": "Tesla T4",
+ "memoryTotal": "16106127360",
+ "cudaCores": 2560,
+ "architecture": "Turing",
+ "uuid": "GPU-a049c658-948a-bce5-0209-30fd25d62128"
+ }
+ ],
+ "cudaVersion": "13.0",
+ "writerId": "qukzz3q99k6i4b8i39t090zozeztk4cy"
+ }
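The `program` and `args` fields in wandb-metadata.json together record the exact launch invocation for the run. As a minimal sketch of how that command line could be rebuilt from such a file (the `meta` dict below is an illustrative excerpt of the metadata above, not its full contents, and `rebuild_command` is a hypothetical helper):

```python
import shlex

# Illustrative excerpt of the wandb-metadata.json shown above;
# only a few of the recorded args are reproduced here.
meta = {
    "program": "-m src.finetune_vibevoice_lora0",
    "args": [
        "--model_name_or_path", "microsoft/VibeVoice-1.5B",
        "--lora_r", "64",
        "--lora_alpha", "128",
    ],
}

def rebuild_command(meta: dict) -> str:
    """Reconstruct the python invocation wandb recorded for the run."""
    parts = ["python", *meta["program"].split(), *meta["args"]]
    return shlex.join(parts)  # shell-quotes any args that need it

print(rebuild_command(meta))
# python -m src.finetune_vibevoice_lora0 --model_name_or_path microsoft/VibeVoice-1.5B --lora_r 64 --lora_alpha 128
```

In the real file the full `args` list would reproduce every flag shown above, so the rebuilt command matches the original fine-tuning launch.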
wandb/run-20260213_133940-4e4xqwjr/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"_wandb":{"runtime":11},"_runtime":11}
wandb/run-20260213_133940-4e4xqwjr/logs/debug-core.log ADDED
@@ -0,0 +1,14 @@
+ {"time":"2026-02-13T13:39:40.762135369Z","level":"INFO","msg":"main: starting server","port-filename":"/tmp/tmpv1nv3qug/port-191.txt","pid":191,"log-level":0,"disable-analytics":false,"shutdown-on-parent-exit":false,"enable-dcgm-profiling":false}
+ {"time":"2026-02-13T13:39:40.770773179Z","level":"INFO","msg":"server: will exit if parent process dies","ppid":191}
+ {"time":"2026-02-13T13:39:40.770192004Z","level":"INFO","msg":"server: accepting connections","addr":{"Name":"/tmp/wandb-191-249-3787227306/socket","Net":"unix"}}
+ {"time":"2026-02-13T13:39:40.87767405Z","level":"INFO","msg":"connection: ManageConnectionData: new connection created","id":"1(@)"}
+ {"time":"2026-02-13T13:39:40.904505385Z","level":"INFO","msg":"handleInformInit: received","streamId":"4e4xqwjr","id":"1(@)"}
+ {"time":"2026-02-13T13:39:41.211071383Z","level":"INFO","msg":"handleInformInit: stream started","streamId":"4e4xqwjr","id":"1(@)"}
+ {"time":"2026-02-13T13:39:52.939752498Z","level":"INFO","msg":"handleInformTeardown: server teardown initiated","id":"1(@)"}
+ {"time":"2026-02-13T13:39:52.939826658Z","level":"INFO","msg":"connection: closing","id":"1(@)"}
+ {"time":"2026-02-13T13:39:52.939897471Z","level":"INFO","msg":"connection: closed successfully","id":"1(@)"}
+ {"time":"2026-02-13T13:39:52.939833876Z","level":"INFO","msg":"server is shutting down"}
+ {"time":"2026-02-13T13:39:52.940000079Z","level":"INFO","msg":"server: listener closed","addr":{"Name":"/tmp/wandb-191-249-3787227306/socket","Net":"unix"}}
+ {"time":"2026-02-13T13:39:59.972017322Z","level":"INFO","msg":"handleInformTeardown: server shutdown complete","id":"1(@)"}
+ {"time":"2026-02-13T13:39:59.972089498Z","level":"INFO","msg":"connection: ManageConnectionData: connection closed","id":"1(@)"}
+ {"time":"2026-02-13T13:39:59.972113486Z","level":"INFO","msg":"server is closed"}
wandb/run-20260213_133940-4e4xqwjr/logs/debug-internal.log ADDED
@@ -0,0 +1,11 @@
+ {"time":"2026-02-13T13:39:40.904677275Z","level":"INFO","msg":"stream: starting","core version":"0.22.2"}
+ {"time":"2026-02-13T13:39:41.210817561Z","level":"INFO","msg":"stream: created new stream","id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:41.210886499Z","level":"INFO","msg":"handler: started","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:41.211039391Z","level":"INFO","msg":"stream: started","id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:41.211089284Z","level":"INFO","msg":"sender: started","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:41.211089554Z","level":"INFO","msg":"writer: started","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:52.939820892Z","level":"INFO","msg":"stream: closing","id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:53.159904441Z","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-02-13T13:39:53.330431911Z","level":"INFO","msg":"handler: closed","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:53.330553224Z","level":"INFO","msg":"sender: closed","stream_id":"4e4xqwjr"}
+ {"time":"2026-02-13T13:39:53.330566036Z","level":"INFO","msg":"stream: closed","id":"4e4xqwjr"}
wandb/run-20260213_133940-4e4xqwjr/logs/debug.log ADDED
@@ -0,0 +1,26 @@
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Current SDK version is 0.22.2
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Configure stats pid to 191
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Loading settings from /root/.config/wandb/settings
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Loading settings from /kaggle/working/VibeVoice-finetuning/wandb/settings
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_setup.py:_flush():81] Loading settings from environment variables
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:setup_run_log_directory():705] Logging user logs to /kaggle/working/VibeVoice-finetuning/wandb/run-20260213_133940-4e4xqwjr/logs/debug.log
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:setup_run_log_directory():706] Logging internal logs to /kaggle/working/VibeVoice-finetuning/wandb/run-20260213_133940-4e4xqwjr/logs/debug-internal.log
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:init():832] calling init triggers
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:init():837] wandb.init called with sweep_config: {}
+ config: {'_wandb': {}}
+ 2026-02-13 13:39:40,246 INFO MainThread:191 [wandb_init.py:init():880] starting backend
+ 2026-02-13 13:39:40,875 INFO MainThread:191 [wandb_init.py:init():883] sending inform_init request
+ 2026-02-13 13:39:40,888 INFO MainThread:191 [wandb_init.py:init():891] backend started and connected
+ 2026-02-13 13:39:40,891 INFO MainThread:191 [wandb_init.py:init():961] updated telemetry
+ 2026-02-13 13:39:40,904 INFO MainThread:191 [wandb_init.py:init():985] communicating run to backend with 90.0 second timeout
+ 2026-02-13 13:39:41,633 INFO MainThread:191 [wandb_init.py:init():1036] starting run threads in backend
+ 2026-02-13 13:39:42,309 INFO MainThread:191 [wandb_run.py:_console_start():2509] atexit reg
+ 2026-02-13 13:39:42,309 INFO MainThread:191 [wandb_run.py:_redirect():2357] redirect: wrap_raw
+ 2026-02-13 13:39:42,309 INFO MainThread:191 [wandb_run.py:_redirect():2426] Wrapping output streams.
+ 2026-02-13 13:39:42,309 INFO MainThread:191 [wandb_run.py:_redirect():2449] Redirects installed.
+ 2026-02-13 13:39:42,320 INFO MainThread:191 [wandb_init.py:init():1076] run started, returning control to user process
+ 2026-02-13 13:39:42,322 INFO MainThread:191 [wandb_run.py:_config_callback():1392] config_cb None None {'acoustic_tokenizer_config': {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', '_attn_implementation_autoset': False, 'model_type': 'vibevoice_acoustic_tokenizer', 'channels': 1, 'corpus_normalize': 0.0, 'causal': True, 'vae_dim': 64, 'fix_std': 0.5, 'std_dist_type': 'gaussian', 'conv_norm': 'none', 'pad_mode': 'constant', 'layernorm_eps': 1e-05, 'disable_last_norm': True, 'layernorm': 'RMSNorm', 'layernorm_elementwise_affine': True, 'conv_bias': True, 'layer_scale_init_value': 1e-06, 'weight_init_value': 0.01, 'mixer_layer': 'depthwise_conv', 
'encoder_n_filters': 32, 'encoder_ratios': [8, 5, 5, 4, 2, 2], 'encoder_depths': '3-3-3-3-3-3-8', 'decoder_ratios': [8, 5, 5, 4, 2, 2], 'decoder_n_filters': 32, 'decoder_depths': None}, 'semantic_tokenizer_config': {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', '_attn_implementation_autoset': False, 'model_type': 'vibevoice_semantic_tokenizer', 'channels': 1, 'corpus_normalize': 0.0, 'causal': True, 'vae_dim': 128, 'fix_std': 0, 'std_dist_type': 'none', 'conv_norm': 'none', 'pad_mode': 'constant', 'layernorm_eps': 1e-05, 'disable_last_norm': True, 'layernorm': 'RMSNorm', 'layernorm_elementwise_affine': True, 'conv_bias': True, 'layer_scale_init_value': 
1e-06, 'weight_init_value': 0.01, 'mixer_layer': 'depthwise_conv', 'encoder_n_filters': 32, 'encoder_ratios': [8, 5, 5, 4, 2, 2], 'encoder_depths': '3-3-3-3-3-3-8'}, 'decoder_config': {'vocab_size': 151936, 'max_position_embeddings': 65536, 'hidden_size': 1536, 'intermediate_size': 8960, 'num_hidden_layers': 28, 'num_attention_heads': 12, 'use_sliding_window': False, 'sliding_window': None, 'max_window_layers': 28, 'num_key_value_heads': 2, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-06, 'use_cache': True, 'rope_theta': 1000000.0, 'rope_scaling': None, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', 
'_attn_implementation_autoset': False, 'model_type': 'qwen2'}, 'diffusion_head_config': {'hidden_size': 1536, 'head_layers': 4, 'head_ffn_ratio': 3.0, 'rms_norm_eps': 1e-05, 'latent_size': 64, 'speech_vae_dim': 64, 'prediction_type': 'v_prediction', 'diffusion_type': 'ddpm', 'ddpm_num_steps': 1000, 'ddpm_num_inference_steps': 20, 'ddpm_beta_schedule': 'cosine', 'ddpm_batch_mul': 4, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': None, 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': '', '_attn_implementation_autoset': False, 'model_type': 'vibevoice_diffusion_head'}, 'acoustic_vae_dim': 64, 'semantic_vae_dim': 128, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': 
False, 'torch_dtype': 'float16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['VibeVoiceForConditionalGeneration'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': None, 'pad_token_id': None, 'eos_token_id': None, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'microsoft/VibeVoice-1.5B', '_attn_implementation_autoset': True, 'transformers_version': '4.51.3', 'model_type': 'vibevoice', 'output_dir': '/kaggle/working/VibeVoice-finetuning/', 'overwrite_output_dir': False, 'do_train': True, 'do_eval': False, 'do_predict': False, 'eval_strategy': 'no', 'prediction_loss_only': False, 'per_device_train_batch_size': 1, 'per_device_eval_batch_size': 8, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 14, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 5e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 
1e-08, 'max_grad_norm': 0.6, 'num_train_epochs': 8.0, 'max_steps': -1, 'lr_scheduler_type': 'cosine', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.1, 'warmup_steps': 0, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/kaggle/working/VibeVoice-finetuning/runs/Feb13_13-38-15_773233cc2cd1', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 10, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 'save_steps': 400, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': None, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 100.0, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/kaggle/working/VibeVoice-finetuning/', 'disable_tqdm': False, 'remove_unused_columns': False, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'tp_size': 0, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'adamw_torch', 'optim_args': None, 'adafactor': False, 'group_by_length': False, 'length_column_name': 'length', 'report_to': 
['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': None, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False, 'ddpm_batch_mul': 1, 'ce_loss_weight': 1.1, 'diffusion_loss_weight': 1.8, 'debug_ce_details': False, 'debug_ce_topk': 5, 'debug_ce_max_examples': 1, 'debug_ce_every_n_steps': 200, 'gradient_clipping': True, 'debug_save': False}
+ 2026-02-13 13:39:42,336 INFO MainThread:191 [wandb_config.py:__setitem__():154] [no run ID] config set model/num_parameters = 2777881057 - <bound method Run._config_callback of <wandb.sdk.wandb_run.Run object at 0x7992b50eef30>>
+ 2026-02-13 13:39:42,337 INFO MainThread:191 [wandb_run.py:_config_callback():1392] config_cb model/num_parameters 2777881057 None
+ 2026-02-13 13:39:52,939 INFO wandb-AsyncioManager-main:191 [service_client.py:_forward_responses():80] Reached EOF.
+ 2026-02-13 13:39:52,939 INFO wandb-AsyncioManager-main:191 [mailbox.py:close():137] Closing mailbox, abandoning 1 handles.
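The config dump in this debug.log records `per_device_train_batch_size=1` with `gradient_accumulation_steps=14`, and the host metadata lists two Tesla T4s. A quick sketch of the resulting effective batch size per optimizer step (whether both GPUs actually participate via data parallelism is an assumption, so the single-device figure is shown too):

```python
per_device_train_batch_size = 1   # from the config dump above
gradient_accumulation_steps = 14  # from the config dump above
num_devices = 2                   # two Tesla T4s in the metadata; multi-GPU use is an assumption

# Micro-batches are accumulated before each optimizer step.
per_device_effective = per_device_train_batch_size * gradient_accumulation_steps
global_effective = per_device_effective * num_devices

print(per_device_effective)  # 14
print(global_effective)      # 28
```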
wandb/run-20260213_133940-4e4xqwjr/run-4e4xqwjr.wandb ADDED
Binary file (20.1 kB).