10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.18.feed_forward.w1.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.18.feed_forward.w3.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.18.feed_forward.w3.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.18.feed_forward.w2.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.18.feed_forward.w2.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.attention.wqkv.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.attention.wqkv.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.attention.wo.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.attention.wo.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w1.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w1.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w3.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w3.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w2.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.19.feed_forward.w2.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.attention.wqkv.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.attention.wqkv.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.attention.wo.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.attention.wo.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w1.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w1.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w3.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w3.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w2.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.20.feed_forward.w2.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.attention.wqkv.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.attention.wqkv.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.attention.wo.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.attention.wo.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w1.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w1.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w3.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w3.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w2.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.21.feed_forward.w2.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.attention.wqkv.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.attention.wqkv.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.attention.wo.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.attention.wo.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w1.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w1.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w3.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w3.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w2.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.22.feed_forward.w2.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.attention.wqkv.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.attention.wqkv.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.attention.wo.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.attention.wo.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w1.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w1.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w3.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w3.lora_B.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w2.lora_A.default.weight
10/22/2024 17:17:05 - INFO - __main__ - language_model.base_model.model.model.layers.23.feed_forward.w2.lora_B.default.weight
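A listing like the one above is typically produced by iterating over a model's named parameters and logging only the trainable (LoRA adapter) ones. The sketch below is a hypothetical illustration with stand-in data rather than a real model; the helper name `trainable_lora_names` and the sample `params` list are assumptions, not part of the original training script.

```python
# Hypothetical sketch: filter a (name, requires_grad) listing down to the
# trainable LoRA adapter weights, mirroring the parameter names in the log.

def trainable_lora_names(named_params):
    """Yield names of parameters that are trainable LoRA adapters."""
    for name, requires_grad in named_params:
        if requires_grad and "lora_" in name:
            yield name

# Stand-in for model.named_parameters(): (name, requires_grad) pairs.
params = [
    ("model.layers.19.attention.wqkv.weight", False),  # frozen base weight
    ("model.layers.19.attention.wqkv.lora_A.default.weight", True),
    ("model.layers.19.attention.wqkv.lora_B.default.weight", True),
]

for name in trainable_lora_names(params):
    print(name)
```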
[INFO|trainer.py:571] 2024-10-22 17:17:05,563 >> Using auto half precision backend
[2024-10-22 17:17:05,663] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.10.0, git-hash=unknown, git-branch=unknown
trainable params: 15,728,640 || all params: 1,904,875,520 || trainable%: 0.8257
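The "trainable%" figure in the line above is simply the ratio of trainable (LoRA) parameters to all parameters, as a quick check confirms:

```python
# Sanity check of the trainable-parameter line from the log:
# trainable% = trainable / total * 100, to four decimal places.
trainable = 15_728_640
total = 1_904_875_520
pct = round(trainable / total * 100, 4)
print(f"trainable%: {pct}")  # 0.8257, matching the log
```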
[2024-10-22 17:17:08,698] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
Using /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Using /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Using /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Using /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /mnt/SSD1_4TB/yunjie/.cache/torch_extensions/py310_cu121/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_adam...
Time to load fused_adam op: 0.05132889747619629 seconds
Rank: 3 partition count [4] and sizes[(3932160, False)]
Loading extension module fused_adam...
Time to load fused_adam op: 0.10113906860351562 seconds
[2024-10-22 17:17:09,038] [INFO] [logging.py:96:log_dist] [Rank 0] Using DeepSpeed Optimizer param name adamw as basic optimizer
Loading extension module fused_adam...
Time to load fused_adam op: 0.10114169120788574 seconds
Loading extension module fused_adam...
Time to load fused_adam op: 0.10110735893249512 seconds
[2024-10-22 17:17:09,053] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
[2024-10-22 17:17:09,053] [INFO] [utils.py:54:is_zero_supported_optimizer] Checking ZeRO support for optimizer=FusedAdam type=<class 'deepspeed.ops.adam.fused_adam.FusedAdam'>
[2024-10-22 17:17:09,053] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 1 optimizer
[2024-10-22 17:17:09,054] [INFO] [stage_1_and_2.py:133:__init__] Reduce bucket size 1000000000
[2024-10-22 17:17:09,054] [INFO] [stage_1_and_2.py:134:__init__] Allgather bucket size 1000000000
[2024-10-22 17:17:09,054] [INFO] [stage_1_and_2.py:135:__init__] CPU Offload: False
[2024-10-22 17:17:09,054] [INFO] [stage_1_and_2.py:136:__init__] Round robin gradient partitioning: False
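A DeepSpeed configuration consistent with these log lines (ZeRO stage 1, bf16, 1e9-element buckets, no CPU offload, AdamW) might look like the dict below. This is an assumption reconstructed from the log output, not the actual config file used for this run, and the `"auto"` learning rate is a Hugging Face integration convention assumed here for illustration.

```python
# Illustrative DeepSpeed config dict matching the settings the log reports.
# Reconstructed from the log; the real run's config may differ.
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 1,                              # ZeRO stage 1 optimizer
        "reduce_bucket_size": 1_000_000_000,     # "Reduce bucket size 1000000000"
        "allgather_bucket_size": 1_000_000_000,  # "Allgather bucket size 1000000000"
        "offload_optimizer": {"device": "none"}, # "CPU Offload: False"
    },
    "optimizer": {"type": "AdamW", "params": {"lr": "auto"}},
}
```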
Rank: 2 partition count [4] and sizes[(3932160, False)]
Rank: 0 partition count [4] and sizes[(3932160, False)]
Rank: 1 partition count [4] and sizes[(3932160, False)]
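The per-rank partition size reported above follows directly from ZeRO stage 1 splitting the optimizer state for the 15,728,640 trainable parameters evenly across the 4 ranks:

```python
# ZeRO stage 1 partitions optimizer state evenly across ranks:
trainable = 15_728_640
world_size = 4
per_rank = trainable // world_size
print(per_rank)  # 3932160, the partition size each rank reports
```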
[2024-10-22 17:17:09,485] [INFO] [utils.py:785:see_memory_usage] Before initializing optimizer states
[2024-10-22 17:17:09,486] [INFO] [utils.py:786:see_memory_usage] MA 4.94 GB Max_MA 4.95 GB CA 5.17 GB Max_CA 5 GB
[2024-10-22 17:17:09,486] [INFO] [utils.py:793:see_memory_usage] CPU Virtual Memory: used = 32.95 GB, percent = 6.5%
[2024-10-22 17:17:09,576] [INFO] [utils.py:785:see_memory_usage] After initializing optimizer states
[2024-10-22 17:17:09,576] [INFO] [utils.py:786:see_memory_usage] MA 4.97 GB Max_MA 4.99 GB CA 5.2 GB Max_CA 5 GB
[2024-10-22 17:17:09,577] [INFO] [utils.py:793:see_memory_usage] CPU Virtual Memory: used = 32.94 GB, percent = 6.5%
[2024-10-22 17:17:09,577] [INFO] [stage_1_and_2.py:493:__init__] optimizer state initialized
[2024-10-22 17:17:09,670] [INFO] [utils.py:785:see_memory_usage] After initializing ZeRO optimizer
[2024-10-22 17:17:09,670] [INFO] [utils.py:786:see_memory_usage] MA 4.97 GB Max_MA 4.97 GB CA 5.2 GB Max_CA 5 GB
[2024-10-22 17:17:09,670] [INFO] [utils.py:793:see_memory_usage] CPU Virtual Memory: used = 32.93 GB, percent = 6.5%
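The small jump in allocated GPU memory (MA 4.94 GB before, 4.97 GB after initializing optimizer states) is consistent with Adam-style optimizers keeping two fp32 state tensors (first and second moments) per partitioned parameter. A rough check, under that assumption:

```python
# Rough check of the ~0.03 GB jump when optimizer states are initialized,
# assuming two fp32 Adam states (exp_avg, exp_avg_sq) per partitioned param.
per_rank_params = 3_932_160
state_bytes = per_rank_params * 2 * 4      # two states, 4 bytes each
print(round(state_bytes / 1024**3, 2))     # ~0.03 GB, consistent with the log
```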
[2024-10-22 17:17:09,672] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw
[2024-10-22 17:17:09,672] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using client callable to create LR scheduler