10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.19.attention.wo.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w1.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w1.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w3.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w3.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w2.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w2.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.attention.wqkv.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.attention.wqkv.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.attention.wo.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.attention.wo.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w1.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w1.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w3.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w3.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w2.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w2.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.attention.wqkv.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.attention.wqkv.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.attention.wo.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.attention.wo.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w1.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w1.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w3.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w3.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w2.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w2.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.attention.wqkv.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.attention.wqkv.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.attention.wo.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.attention.wo.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w1.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w1.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w3.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w3.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w2.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w2.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.attention.wqkv.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.attention.wqkv.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.attention.wo.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.attention.wo.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w1.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w1.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w3.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w3.lora_B.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w2.lora_A.default.weight
10/22/2024 17:11:37 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w2.lora_B.default.weight
[INFO|trainer.py:571] 2024-10-22 17:11:37,740 >> Using auto half precision backend
trainable params: 15,728,640 || all params: 1,904,875,520 || trainable%: 0.8257
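
The parameter names listed above (attention.wqkv, attention.wo, and the feed_forward.w1/w2/w3 projections, each with a lora_A/lora_B pair named "default") and the "trainable params" summary line are the kind of output PEFT produces when LoRA adapters are attached to those modules. Below is a minimal sketch of such a setup, assuming PEFT's LoraConfig/get_peft_model; the model path, rank, alpha, and dropout are illustrative placeholders, not values taken from this run.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("path/to/base-model")  # placeholder path

# Target the same projections that appear in the log: fused QKV, attention output,
# and the three feed-forward projections of each decoder layer.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                 # assumed rank; the actual rank is not shown in the log
    lora_alpha=16,       # assumed
    lora_dropout=0.05,   # assumed
    target_modules=["wqkv", "wo", "w1", "w2", "w3"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # emits the "trainable params: ... || all params: ..." line

# Logging the trainable (LoRA) weights by name, as done in the lines above:
for name, param in model.named_parameters():
    if param.requires_grad:
        print(name)
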
[2024-10-22 17:11:37,846] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.10.0, git-hash=unknown, git-branch=unknown
[2024-10-22 17:11:39,059] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
Using /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Using /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Using /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Using /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_adam...
Time to load fused_adam op: 0.051596641540527344 seconds
Rank: 2 partition count [4] and sizes[(3932160, False)]
Loading extension module fused_adam...
Time to load fused_adam op: 0.10177874565124512 seconds
Loading extension module fused_adam...
Time to load fused_adam op: 0.10140776634216309 seconds
Loading extension module fused_adam...
Time to load fused_adam op: 0.10117673873901367 seconds
[2024-10-22 17:11:39,412] [INFO] [logging.py:96:log_dist] [Rank 0] Using DeepSpeed Optimizer param name adamw as basic optimizer
[2024-10-22 17:11:39,426] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
[2024-10-22 17:11:39,426] [INFO] [utils.py:54:is_zero_supported_optimizer] Checking ZeRO support for optimizer=FusedAdam type=<class 'deepspeed.ops.adam.fused_adam.FusedAdam'>
[2024-10-22 17:11:39,426] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 1 optimizer
[2024-10-22 17:11:39,426] [INFO] [stage_1_and_2.py:133:__init__] Reduce bucket size 1000000000
[2024-10-22 17:11:39,426] [INFO] [stage_1_and_2.py:134:__init__] Allgather bucket size 1000000000
[2024-10-22 17:11:39,426] [INFO] [stage_1_and_2.py:135:__init__] CPU Offload: False
[2024-10-22 17:11:39,426] [INFO] [stage_1_and_2.py:136:__init__] Round robin gradient partitioning: False
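
The messages above describe the ZeRO side of the run: a bf16 ZeRO stage 1 optimizer built around FusedAdam (selected for the "adamw" optimizer name), 1e9-element reduce and allgather buckets, and no CPU offload. Each of the 4 ranks then holds a 15,728,640 / 4 = 3,932,160-parameter partition of the optimizer state, which matches the "partition count [4] and sizes[(3932160, False)]" lines. Below is a minimal sketch of a DeepSpeed configuration consistent with these messages, written as a Python dict as it could be passed to the HF Trainer; everything not visible in the log is left as an "auto" placeholder.

# Hedged sketch of a DeepSpeed config matching the logged settings; values not
# visible in the log are left as "auto" placeholders.
ds_config = {
    "bf16": {"enabled": True},  # "Creating torch.bfloat16 ZeRO stage 1 optimizer"
    "optimizer": {
        "type": "AdamW",        # "Using DeepSpeed Optimizer param name adamw" -> FusedAdam
        "params": {"lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto"},
    },
    "zero_optimization": {
        "stage": 1,
        "reduce_bucket_size": 1e9,     # "Reduce bucket size 1000000000"
        "allgather_bucket_size": 1e9,  # "Allgather bucket size 1000000000"
        # no "offload_optimizer" section -> "CPU Offload: False"
    },
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
# The "auto" values are resolved by the HF Trainer when this dict (or an
# equivalent JSON file) is passed via TrainingArguments(deepspeed=ds_config).
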
Rank: 1 partition count [4] and sizes[(3932160, False)]
Rank: 0 partition count [4] and sizes[(3932160, False)]
Rank: 3 partition count [4] and sizes[(3932160, False)]
[2024-10-22 17:11:39,847] [INFO] [utils.py:785:see_memory_usage] Before initializing optimizer states
[2024-10-22 17:11:39,848] [INFO] [utils.py:786:see_memory_usage] MA 4.94 GB Max_MA 4.95 GB CA 5.17 GB Max_CA 5 GB
[2024-10-22 17:11:39,848] [INFO] [utils.py:793:see_memory_usage] CPU Virtual Memory: used = 36.28 GB, percent = 7.2%
[2024-10-22 17:11:39,953] [INFO] [utils.py:785:see_memory_usage] After initializing optimizer states
[2024-10-22 17:11:39,953] [INFO] [utils.py:786:see_memory_usage] MA 4.97 GB Max_MA 4.99 GB CA 5.2 GB Max_CA 5 GB
[2024-10-22 17:11:39,954] [INFO] [utils.py:793:see_memory_usage] CPU Virtual Memory: used = 36.28 GB, percent = 7.2%
[2024-10-22 17:11:39,954] [INFO] [stage_1_and_2.py:493:__init__] optimizer state initialized
[2024-10-22 17:11:40,049] [INFO] [utils.py:785:see_memory_usage] After initializing ZeRO optimizer
[2024-10-22 17:11:40,050] [INFO] [utils.py:786:see_memory_usage] MA 4.97 GB Max_MA 4.97 GB CA 5.2 GB Max_CA 5 GB
[2024-10-22 17:11:40,050] [INFO] [utils.py:793:see_memory_usage] CPU Virtual Memory: used = 36.28 GB, percent = 7.2%
[2024-10-22 17:11:40,052] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw
[2024-10-22 17:11:40,052] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using client callable to create LR scheduler
[2024-10-22 17:11:40,052] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = <torch.optim.lr_scheduler.LambdaLR object at 0x721d90def700>
[2024-10-22 17:11:40,052] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0], mom=[[0.9, 0.999]]
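
The LR scheduler is created on the client (Trainer) side rather than from the DeepSpeed config, and it is a plain torch LambdaLR whose learning rate at step 0 is 0.0, which is what a warmup schedule produces. Below is a minimal sketch of such a schedule, assuming linear warmup followed by linear decay (as in transformers' get_linear_schedule_with_warmup); the actual warmup length and decay shape used in this run are not visible in the log.

from torch.optim.lr_scheduler import LambdaLR

def linear_warmup_decay(optimizer, num_warmup_steps, num_training_steps):
    # Multiplier on the base LR: 0 at step 0, ramps to 1 over the warmup,
    # then decays linearly back to 0 by the end of training.
    def lr_lambda(step):
        if step < num_warmup_steps:
            return step / max(1, num_warmup_steps)
        return max(0.0, (num_training_steps - step) / max(1, num_training_steps - num_warmup_steps))
    return LambdaLR(optimizer, lr_lambda)
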
[2024-10-22 17:11:40,054] [INFO] [config.py:960:print] DeepSpeedEngine configuration:
[2024-10-22 17:11:40,054] [INFO] [config.py:964:print] activation_checkpointing_config {
"partition_activations": false,
"contiguous_memory_optimization": false,
"cpu_checkpointing": false,
"number_checkpoints": null,