2025-04-21 18:18:55 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 18:18:55 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 18:18:55 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_18-18-55_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 18:18:58 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 18:20:32 - INFO - __main__ - *** Train *** 2025-04-21 18:20:32 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-21 18:44:46 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 18:44:46 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 18:44:46 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_18-44-46_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 18:44:49 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 18:46:05 - INFO - __main__ - *** Train *** 2025-04-21 18:46:05 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-21 19:04:30 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 19:04:30 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 19:04:30 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_19-04-30_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 19:04:33 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 19:05:49 - INFO - __main__ - *** Train *** 2025-04-21 19:05:49 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-21 20:15:34 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 20:15:34 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 20:15:34 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_20-15-33_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 20:15:37 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 20:16:52 - INFO - __main__ - *** Train *** 2025-04-21 20:16:52 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-21 20:32:23 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 20:32:23 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 20:32:23 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_20-32-23_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 20:32:26 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 20:33:41 - INFO - __main__ - *** Train *** 2025-04-21 20:33:41 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-21 20:47:04 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 20:47:04 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 20:47:04 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_20-47-04_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 20:47:09 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 20:48:24 - INFO - __main__ - *** Train *** 2025-04-21 20:48:24 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-21 21:03:19 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 21:03:19 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 21:03:19 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_21-03-19_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 21:03:22 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 21:04:38 - INFO - __main__ - *** Train *** 2025-04-21 21:04:38 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-21 22:30:06 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 22:30:06 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 22:30:06 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_22-30-05_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 22:30:08 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 23:29:58 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-21 23:29:58 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-21 23:29:58 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr21_23-29-57_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-21 23:30:00 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-21 23:31:19 - INFO - __main__ - *** Train *** 2025-04-21 23:31:19 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 00:04:21 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 00:04:21 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 00:04:21 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_00-04-21_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 00:04:24 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 00:05:38 - INFO - __main__ - *** Train *** 2025-04-22 00:05:38 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 00:57:44 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 00:57:44 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 00:57:44 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_00-57-44_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 00:57:46 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 00:58:35 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 00:58:35 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 00:58:35 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_00-58-35_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 00:58:39 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:04:45 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:04:45 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:04:45 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-04-45_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=batchmean, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:04:47 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:06:18 - INFO - __main__ - *** Train *** 2025-04-22 01:06:18 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 01:10:51 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:10:51 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:10:51 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-10-51_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:10:53 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:12:15 - INFO - __main__ - *** Train *** 2025-04-22 01:12:15 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 01:17:38 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:17:38 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:17:38 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-17-38_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:17:41 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:19:55 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:19:55 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:19:55 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-19-54_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:19:57 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:22:55 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:22:55 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:22:55 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-22-55_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:22:58 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:26:10 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:26:10 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:26:10 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-26-10_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:26:13 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:34:27 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:34:27 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:34:27 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-34-27_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:34:30 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:45:22 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:45:22 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:45:22 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-45-21_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:45:24 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:50:04 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:50:04 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:50:04 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-50-03_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:50:06 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 01:51:50 - INFO - __main__ - *** Train *** 2025-04-22 01:51:50 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 01:59:20 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 01:59:20 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 01:59:20 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_01-59-20_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 01:59:23 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 02:00:16 - INFO - __main__ - *** Train *** 2025-04-22 02:00:16 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 02:04:28 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 02:04:28 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 02:04:28 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_02-04-28_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 02:04:31 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 02:05:27 - INFO - __main__ - *** Train *** 2025-04-22 02:05:27 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 03:50:38 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 03:50:38 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 03:50:38 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_03-50-38_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 03:50:41 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 03:51:36 - INFO - __main__ - *** Train *** 2025-04-22 03:51:36 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 03:59:44 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 03:59:44 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 03:59:44 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_03-59-43_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 04:02:34 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 04:02:34 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 04:02:34 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_04-02-34_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 04:02:37 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 04:04:12 - INFO - __main__ - *** Train *** 2025-04-22 04:04:12 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 04:08:43 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 04:08:43 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 04:08:43 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_04-08-42_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 04:08:45 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 04:09:43 - INFO - __main__ - *** Train *** 2025-04-22 04:09:43 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 04:56:11 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 04:56:11 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 04:56:11 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_04-56-10_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 04:56:14 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 04:57:09 - INFO - __main__ - *** Train *** 2025-04-22 04:57:09 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 05:11:13 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 05:11:13 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 05:11:13 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_05-11-13_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 05:11:15 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 05:12:12 - INFO - __main__ - *** Train *** 2025-04-22 05:12:12 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 05:29:35 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 05:29:35 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 05:29:35 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_05-29-35_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 05:29:47 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 05:30:41 - INFO - __main__ - *** Train *** 2025-04-22 05:30:41 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 06:01:18 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 06:01:18 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 06:01:18 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_06-01-18_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 06:01:22 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 06:02:23 - INFO - __main__ - *** Train *** 2025-04-22 06:02:23 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 06:34:37 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 06:34:37 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 06:34:37 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_06-34-37_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 06:34:40 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 06:35:36 - INFO - __main__ - *** Train *** 2025-04-22 06:35:36 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 21:49:31 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 21:49:31 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 21:49:31 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_21-49-31_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 21:49:33 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 21:50:31 - INFO - __main__ - *** Train *** 2025-04-22 21:50:31 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 22:47:09 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 22:47:09 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 22:47:09 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_22-47-09_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 22:47:12 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 22:48:09 - INFO - __main__ - *** Train *** 2025-04-22 22:48:09 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 23:03:26 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 23:03:26 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 23:03:26 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_23-03-26_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 23:03:31 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 23:04:27 - INFO - __main__ - *** Train *** 2025-04-22 23:04:27 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 23:21:27 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 23:21:27 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 23:21:27 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_23-21-27_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 23:21:31 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 23:22:43 - INFO - __main__ - *** Train *** 2025-04-22 23:22:43 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 23:40:49 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 23:40:49 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 23:40:49 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_23-40-48_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 23:40:55 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 23:42:13 - INFO - __main__ - *** Train *** 2025-04-22 23:42:13 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-22 23:56:22 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-22 23:56:22 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-22 23:56:22 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr22_23-56-22_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-22 23:56:25 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-22 23:57:34 - INFO - __main__ - *** Train *** 2025-04-22 23:57:34 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 00:00:00 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 00:00:00 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 00:00:00 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_00-00-00_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 00:00:06 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 00:01:23 - INFO - __main__ - *** Train *** 2025-04-23 00:01:23 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 00:20:28 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 00:20:28 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 00:20:28 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_00-20-28_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 00:20:36 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 00:21:49 - INFO - __main__ - *** Train *** 2025-04-23 00:21:49 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 05:50:10 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 05:50:10 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 05:50:10 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_05-50-10_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 05:50:13 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 05:51:25 - INFO - __main__ - *** Train *** 2025-04-23 05:51:25 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 06:30:57 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 06:30:57 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 06:30:57 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_06-30-57_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 06:30:59 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 06:32:15 - INFO - __main__ - *** Train *** 2025-04-23 06:32:15 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 06:45:50 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 06:45:50 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 06:45:50 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_06-45-49_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 06:45:53 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 06:47:12 - INFO - __main__ - *** Train *** 2025-04-23 06:47:12 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 07:13:50 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 07:13:50 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 07:13:50 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_07-13-49_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 07:13:53 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 07:15:03 - INFO - __main__ - *** Train *** 2025-04-23 07:15:03 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 15:55:02 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 15:55:02 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 15:55:02 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_15-55-01_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 15:55:06 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 15:56:32 - INFO - __main__ - *** Train *** 2025-04-23 15:56:32 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 16:01:33 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 16:01:33 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 16:01:33 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_16-01-33_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 16:01:36 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 16:02:54 - INFO - __main__ - *** Train *** 2025-04-23 16:02:54 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 18:16:35 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 18:16:35 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 18:16:35 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_18-16-35_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 18:16:38 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 18:17:48 - INFO - __main__ - *** Train *** 2025-04-23 18:17:48 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 18:20:44 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 18:20:44 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 18:20:44 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_18-20-44_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 18:20:47 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 18:21:55 - INFO - __main__ - *** Train *** 2025-04-23 18:21:55 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 18:25:24 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 18:25:24 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 18:25:24 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_18-25-24_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 18:25:26 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 18:26:38 - INFO - __main__ - *** Train *** 2025-04-23 18:26:38 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 18:49:17 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 18:49:17 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 18:49:17 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_18-49-17_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 18:49:19 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 18:50:29 - INFO - __main__ - *** Train *** 2025-04-23 18:50:29 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 19:17:14 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 19:17:14 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 19:17:14 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_19-17-13_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 19:17:17 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 19:18:26 - INFO - __main__ - *** Train *** 2025-04-23 19:18:26 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 19:37:21 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 19:37:21 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 19:37:21 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_19-37-21_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 19:37:23 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 19:38:34 - INFO - __main__ - *** Train *** 2025-04-23 19:38:34 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 20:20:45 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 20:20:45 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 20:20:45 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_20-20-45_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 20:20:48 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 20:21:57 - INFO - __main__ - *** Train *** 2025-04-23 20:21:57 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 20:45:14 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation=None, use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 20:45:14 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 20:45:14 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_20-45-14_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 20:45:17 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 20:46:29 - INFO - __main__ - *** Train *** 2025-04-23 20:46:29 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2Attention( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2Attention( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 21:26:04 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation=None, use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 21:26:04 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 21:26:04 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_21-26-04_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 21:26:06 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 21:27:20 - INFO - __main__ - *** Train *** 2025-04-23 21:27:20 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2Attention( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2Attention( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 21:51:22 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation=None, use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 21:51:22 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 21:51:22 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_21-51-22_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 21:51:25 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 21:52:34 - INFO - __main__ - *** Train *** 2025-04-23 21:52:34 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2Attention( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2Attention( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-23 22:10:49 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation=None, use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:10:49 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:10:49 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-10-49_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:10:51 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:13:13 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation=None, use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:13:13 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:13:13 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-13-12_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:13:15 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:14:54 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation=None, use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:14:54 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:14:54 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-14-53_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:14:56 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:17:35 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation=None, use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:17:35 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:17:35 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-17-35_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:17:38 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:21:36 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:21:36 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:21:36 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-21-36_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:21:38 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:23:32 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:23:32 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:23:32 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-23-32_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:23:35 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:25:28 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:25:28 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:25:28 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-25-27_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:25:30 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:28:06 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:28:06 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:28:06 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-28-06_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:28:09 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:30:26 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:30:26 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:30:26 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-30-25_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:30:28 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:32:21 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:32:21 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:32:21 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-32-21_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:32:24 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:34:14 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:34:14 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:34:14 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-34-14_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:34:17 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:37:44 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-23 22:37:44 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-23 22:37:44 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr23_22-37-44_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-23 22:37:46 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-23 22:38:57 - INFO - __main__ - *** Train *** 2025-04-23 22:38:57 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-24 00:51:30 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-24 00:51:30 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-24 00:51:30 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr24_00-51-29_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-24 00:51:32 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-24 00:52:41 - INFO - __main__ - *** Train *** 2025-04-24 00:52:41 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-24 03:12:48 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-24 03:12:48 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-24 03:12:48 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr24_03-12-47_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-24 03:12:50 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-24 03:14:01 - INFO - __main__ - *** Train *** 2025-04-24 03:14:01 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-24 05:13:22 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-24 05:13:22 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-24 05:13:22 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr24_05-13-22_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-24 05:13:24 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-24 05:14:39 - INFO - __main__ - *** Train *** 2025-04-24 05:14:39 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-24 16:15:33 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-24 16:15:33 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-24 16:15:33 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr24_16-15-33_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-24 16:15:36 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-24 16:16:55 - INFO - __main__ - *** Train *** 2025-04-24 16:16:55 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) ) 2025-04-24 16:38:02 - INFO - __main__ - Model parameters ModelConfig(model_name_or_path='deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct', model_revision='main', torch_dtype='bfloat16', trust_remote_code=True, attn_implementation='flash_attention_2', use_peft=False, lora_r=16, lora_alpha=32, lora_dropout=0.05, lora_target_modules=None, lora_modules_to_save=None, lora_task_type='CAUSAL_LM', use_rslora=False, load_in_8bit=False, load_in_4bit=False, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False) 2025-04-24 16:38:02 - INFO - __main__ - Script parameters ScriptArguments(dataset_name='open-r1/OpenR1-Math-220k', dataset_config=None, dataset_train_split='train', dataset_test_split='test', gradient_checkpointing_use_reentrant=False, ignore_bias_buffers=False) 2025-04-24 16:38:02 - INFO - __main__ - Training parameters EfficientDistillationConfig( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, average_tokens_across_devices=False, batch_eval_metrics=False, benchmarks=[], bf16=True, bf16_full_eval=False, callbacks=[], chars_per_token=, chat_template=None, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, dataset_batch_size=None, dataset_kwargs=None, dataset_num_proc=None, dataset_text_field=text, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=180000000, debug=[], deepspeed=None, disable_dropout=True, disable_tqdm=False, dispatch_batches=None, do_eval=True, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_do_concat_batches=True, eval_on_start=False, eval_packing=None, eval_steps=None, eval_strategy=IntervalStrategy.NO, eval_use_gather_object=False, evaluation_strategy=None, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=4, gradient_checkpointing=True, gradient_checkpointing_kwargs={'use_reentrant': False}, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=Deepseek-Coder-V2-Lite-13B-Instruct-Open-R1-Distill, hub_model_revision=main, hub_private_repo=None, hub_strategy=HubStrategy.EVERY_SAVE, hub_token=, ignore_data_skip=False, include_for_metrics=[], include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, lmbda=0.0, load_best_model_at_end=False, local_rank=0, log_level=info, log_level_replica=warning, log_on_each_node=True, logging_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill/runs/Apr24_16-38-01_q-h100, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=IntervalStrategy.STEPS, loss_type=forward_kl, lr_scheduler_kwargs={'min_lr_rate': 0.1}, lr_scheduler_type=SchedulerType.COSINE_WITH_MIN_LR, max_grad_norm=1.0, max_length=2048, max_new_tokens=128, max_seq_length=None, max_steps=-1, metric_for_best_model=None, model_init_kwargs=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_of_sequences=None, num_train_epochs=3, optim=OptimizerNames.ADAMW_TORCH, optim_args=None, optim_target_modules=None, output_dir=data/DeepSeek-Coder-V2-Lite-Instruct/distill, overwrite_hub_revision=False, overwrite_output_dir=True, packing=False, past_index=-1, per_device_eval_batch_size=16, per_device_train_batch_size=4, prediction_loss_only=False, push_to_hub=True, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_revision=False, push_to_hub_token=, ray_scope=last, reduction=sum, remove_unused_columns=True, report_to=['wandb'], restore_callback_states_from_checkpoint=False, resume_from_checkpoint=None, run_name=data/DeepSeek-Coder-V2-Lite-Instruct/distill, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=200, save_strategy=SaveStrategy.STEPS, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, system_prompt=None, teacher_model_init_kwargs=None, teacher_model_name_or_path=None, temperature=0.9, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torch_empty_cache_steps=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_liger=False, use_liger_kernel=False, use_mps_device=False, wandb_entity=None, wandb_project=None, warmup_ratio=0.1, warmup_steps=0, weight_decay=0.0, ) 2025-04-24 16:38:04 - INFO - __main__ - *** Initializing model kwargs *** 2025-04-24 16:39:20 - INFO - __main__ - *** Train *** 2025-04-24 16:39:20 - INFO - __main__ - DeepseekV2ForCausalLM( (model): DeepseekV2Model( (embed_tokens): Embedding(102400, 2048) (layers): ModuleList( (0): DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=10944, bias=False) (up_proj): Linear(in_features=2048, out_features=10944, bias=False) (down_proj): Linear(in_features=10944, out_features=2048, bias=False) (act_fn): SiLU() ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) (1-26): 26 x DeepseekV2DecoderLayer( (self_attn): DeepseekV2FlashAttention2( (q_proj): Linear(in_features=2048, out_features=3072, bias=False) (kv_a_proj_with_mqa): Linear(in_features=2048, out_features=576, bias=False) (kv_a_layernorm): DeepseekV2RMSNorm() (kv_b_proj): Linear(in_features=512, out_features=4096, bias=False) (o_proj): Linear(in_features=2048, out_features=2048, bias=False) (rotary_emb): DeepseekV2YarnRotaryEmbedding() ) (mlp): DeepseekV2MoE( (experts): ModuleList( (0-63): 64 x Identity() ) (gate): MoEGate() (shared_experts): DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=2816, bias=False) (up_proj): Linear(in_features=2048, out_features=2816, bias=False) (down_proj): Linear(in_features=2816, out_features=2048, bias=False) (act_fn): SiLU() ) (selected_experts): ModuleList( (0-2): 3 x DeepseekV2MLP( (gate_proj): Linear(in_features=2048, out_features=1408, bias=False) (up_proj): Linear(in_features=2048, out_features=1408, bias=False) (down_proj): Linear(in_features=1408, out_features=2048, bias=False) (act_fn): SiLU() ) ) ) (input_layernorm): DeepseekV2RMSNorm() (post_attention_layernorm): DeepseekV2RMSNorm() ) ) (norm): DeepseekV2RMSNorm() ) (lm_head): Linear(in_features=2048, out_features=102400, bias=False) )