Elfsong committed on
Commit 03048ae · verified · 1 Parent(s): c5d2394

Scheduled Commit

Files changed (4)
  1. vllm_0004000.log +1 -0
  2. vllm_0005000.log +13 -0
  3. vllm_0006000.log +1 -0
  4. vllm_0007000.log +1 -0
vllm_0004000.log CHANGED
@@ -17,3 +17,4 @@
 (EngineCore_DP0 pid=3233898) INFO 02-03 01:27:00 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0004000...
 (EngineCore_DP0 pid=3233898) INFO 02-03 01:27:02 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
 (EngineCore_DP0 pid=3233898) INFO 02-03 01:27:03 [bitsandbytes_loader.py:786] Loading weights with BitsAndBytes quantization. May take a while ...
+Cancellation requested; stopping current tasks.
vllm_0005000.log CHANGED
@@ -17,3 +17,16 @@
 (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:06 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0005000...
 (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:08 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
 (EngineCore_DP0 pid=3234476) INFO 02-03 01:27:08 [bitsandbytes_loader.py:786] Loading weights with BitsAndBytes quantization. May take a while ...
+(EngineCore_DP0 pid=3234476) INFO 02-03 01:39:24 [weight_utils.py:527] Time spent downloading weights for Elfsong/VLM_stage_2_iter_0005000: 734.294342 seconds
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
+(EngineCore_DP0 pid=3234476)
vllm_0006000.log CHANGED
@@ -17,3 +17,4 @@
 (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:11 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0006000...
 (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:13 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
 (EngineCore_DP0 pid=3235235) INFO 02-03 01:27:14 [bitsandbytes_loader.py:786] Loading weights with BitsAndBytes quantization. May take a while ...
+Cancellation requested; stopping current tasks.
vllm_0007000.log CHANGED
@@ -17,3 +17,4 @@
 (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:16 [gpu_model_runner.py:4021] Starting to load model Elfsong/VLM_stage_2_iter_0007000...
 (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:18 [cuda.py:364] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')
 (EngineCore_DP0 pid=3236204) INFO 02-03 01:27:19 [bitsandbytes_loader.py:786] Loading weights with BitsAndBytes quantization. May take a while ...
+Cancellation requested; stopping current tasks.