Delete conversion_to_hf.log
conversion_to_hf.log  DELETED  +0 -109
@@ -1,109 +0,0 @@
-Loaded loader_megatron_core as the loader.
-Loaded saver_llama2_hf_bf as the saver.
-Starting saver...
-Starting loader...
-fused_indices_to_multihot has reached end of life. Please migrate to a non-experimental function.
-/usr/local/lib/python3.12/dist-packages/modelopt/torch/utils/import_utils.py:31: UserWarning: Failed to import apex plugin due to: AttributeError("module 'transformers.modeling_utils' has no attribute 'Conv1D'"). You may ignore this warning if you do not need this plugin.
-  warnings.warn(
-/usr/local/lib/python3.12/dist-packages/modelopt/torch/utils/import_utils.py:31: UserWarning: Failed to import huggingface plugin due to: AttributeError("module 'transformers.modeling_utils' has no attribute 'Conv1D'"). You may ignore this warning if you do not need this plugin.
-  warnings.warn(
-/usr/local/lib/python3.12/dist-packages/modelopt/torch/utils/import_utils.py:31: UserWarning: Failed to import megatron plugin due to: AttributeError("module 'transformers.modeling_utils' has no attribute 'Conv1D'"). You may ignore this warning if you do not need this plugin.
-  warnings.warn(
-Setting num_layers to 14 from checkpoint
-Setting hidden_size to 5120 from checkpoint
-Setting ffn_hidden_size to 27648 from checkpoint
-Setting seq_length to 131072 from checkpoint
-Setting num_attention_heads to 40 from checkpoint
-Setting num_query_groups to 8 from checkpoint
-Setting group_query_attention to True from checkpoint
-Setting kv_channels to 128 from checkpoint
-Setting max_position_embeddings to 131072 from checkpoint
-Setting position_embedding_type to rope from checkpoint
-Setting add_position_embedding to True from checkpoint
-Setting use_rotary_position_embeddings to True from checkpoint
-Setting rotary_base to 500000 from checkpoint
-Setting rotary_percent to 1.0 from checkpoint
-Setting rotary_interleaved to False from checkpoint
-Setting add_bias_linear to False from checkpoint
-Setting add_qkv_bias to False from checkpoint
-Setting squared_relu to False from checkpoint
-Setting swiglu to True from checkpoint
-Setting untie_embeddings_and_output_weights to True from checkpoint
-Setting apply_layernorm_1p to False from checkpoint
-Setting normalization to RMSNorm from checkpoint
-Setting apply_query_key_layer_scaling to False from checkpoint
-Setting attention_dropout to 0.0 from checkpoint
-Setting hidden_dropout to 0.0 from checkpoint
-Checkpoint did not provide arguments hybrid_override_pattern
-Checkpoint did not provide arguments spec
-Setting hybrid_attention_ratio to 0.0 from checkpoint
-Setting hybrid_mlp_ratio to 0.0 from checkpoint
-Checkpoint did not provide arguments num_experts
-Setting moe_layer_freq to 1 from checkpoint
-Setting moe_router_topk to 2 from checkpoint
-Setting moe_router_pre_softmax to False from checkpoint
-Setting moe_grouped_gemm to False from checkpoint
-Checkpoint did not provide arguments moe_shared_expert_intermediate_size
-Setting mamba_state_dim to 128 from checkpoint
-Setting mamba_head_dim to 64 from checkpoint
-Setting mamba_num_groups to 8 from checkpoint
-Checkpoint did not provide arguments mamba_num_heads
-Setting is_hybrid_model to False from checkpoint
-Checkpoint did not provide arguments heterogeneous_layers_config_path
-Checkpoint did not provide arguments heterogeneous_layers_config_encoded_json
-Setting tokenizer_type to SFTTokenizer from checkpoint
-Setting tokenizer_model to /cpfs01/users/wzhang/iquest-coder-v1.1/RepoData-Ucoder-32B-128k-from2.5.2/97.09B_instruct_iquest-coder from checkpoint
-Checkpoint did not provide arguments tiktoken_pattern
-Setting padded_vocab_size to 76800 from checkpoint
-INFO:megatron.core.num_microbatches_calculator:setting number of microbatches to constant 1
-WARNING: one_logger package is required to enable e2e metrics tracking. please go to https://confluence.nvidia.com/display/MLWFO/Package+Repositories for details to install it
-building GPT model ...
-(TP, PP) mismatch after resume ((1, 1) vs (8, 1) from checkpoint): RNG state will be ignored
-sharded_state_dict metadata loaded from the checkpoint: {'distrib_optim_sharding_type': 'dp_reshardable', 'singleton_local_shards': False, 'chained_optim_avoid_prefix': True}
-Job sharding has changed: Rerun state will be ignored
-loading distributed checkpoint from /tmp/megatron_convert_iter1717_node0_pid360_aefde564 at iteration 1717
-/volume/pt-train/users/wzhang/wjj-workspace/code-sft/src/training/Megatron-LM/megatron/core/dist_checkpointing/strategies/torch.py:956: FutureWarning: `load_state_dict` is deprecated and will be removed in future versions. Please use `load` instead.
-  checkpoint.load_state_dict(
-/usr/local/lib/python3.12/dist-packages/torch/distributed/checkpoint/planner_helpers.py:406: FutureWarning: Please use DTensor instead and we are deprecating ShardedTensor.
-  device = getattr(value, "device", None)
-/usr/local/lib/python3.12/dist-packages/torch/distributed/checkpoint/default_planner.py:454: FutureWarning: Please use DTensor instead and we are deprecating ShardedTensor.
-  and md.size != obj.size()
-checkpoint version 3.0
-successfully loaded checkpoint from /tmp/megatron_convert_iter1717_node0_pid360_aefde564 [ t 1/1, p 1/1 ] at iteration 1717
-sending embeddings
-sending transformer layer 0
-sending transformer layer 1
-sending transformer layer 2
-sending transformer layer 3
-sending transformer layer 4
-sending transformer layer 5
-sending transformer layer 6
-sending transformer layer 7
-sending transformer layer 8
-sending transformer layer 9
-sending transformer layer 10
-sending transformer layer 11
-sending transformer layer 12
-sending transformer layer 13
-sending final norm
-sending output layer
-Waiting for saver to complete...
-fused_indices_to_multihot has reached end of life. Please migrate to a non-experimental function.
-received embeddings
-received transformer layer 0
-received transformer layer 1
-received transformer layer 2
-received transformer layer 3
-received transformer layer 4
-received transformer layer 5
-received transformer layer 6
-received transformer layer 7
-received transformer layer 8
-received transformer layer 9
-received transformer layer 10
-received transformer layer 11
-received transformer layer 12
-received transformer layer 13
-received final norm
-received output layer
-Saving model to disk ...
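The attention settings the loader reads out follow the usual grouped-query-attention arithmetic; as a minimal sanity check, the numbers below are copied directly from the "Setting ... from checkpoint" lines in the log, and nothing else is assumed:

```python
# Sanity-check the attention geometry reported by the loader.
hidden_size = 5120          # Setting hidden_size to 5120
num_attention_heads = 40    # Setting num_attention_heads to 40
num_query_groups = 8        # Setting num_query_groups to 8 (KV heads under GQA)
kv_channels = 128           # Setting kv_channels to 128 (per-head dimension)

head_dim = hidden_size // num_attention_heads
queries_per_kv_head = num_attention_heads // num_query_groups

print(head_dim)             # 128, consistent with kv_channels
print(queries_per_kv_head)  # 5 query heads share each KV head
```

If these ratios did not divide evenly, the converted HF config would be malformed, so a quick check like this is cheap insurance before loading the saved model.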