YFanwang committed
Commit 490436d · verified · 1 Parent(s): c3d2c5f

Upload folder using huggingface_hub

Files changed (17)
  1. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation_20251011_014003.log +698 -0
  2. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation_20251011_014510.log +0 -0
  3. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation_20251011_014101.log +698 -0
  4. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation_20251011_015706.log +0 -0
  5. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation_20251011_014153.log +698 -0
  6. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation_20251011_020942.log +0 -0
  7. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.9_2e-1_connector-1.0_0.9_2e-1_ablation_20251011_014243.log +676 -0
  8. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.9_2e-1_connector-1.0_0.9_2e-1_ablation_20251011_021905.log +0 -0
  9. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.1_2e-1_connector-1.0_1.1_2e-1_ablation_20251011_022746.log +0 -0
  10. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.3_2e-1_connector-1.0_1.3_2e-1_ablation_20251011_023449.log +0 -0
  11. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.5_2e-1_connector-1.0_1.5_2e-1_ablation_20251011_024052.log +0 -0
  12. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation_20251011_024734.log +92 -0
  13. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation_20251011_024748.log +92 -0
  14. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_1_connector-3.0_0.5_1_ablation_20251011_025420.log +0 -0
  15. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_1e-1_connector-3.0_0.5_1e-1_ablation_20251011_024802.log +0 -0
  16. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_3_connector-3.0_0.5_3_ablation_20251011_031423.log +681 -0
  17. logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_3e-1_connector-3.0_0.5_3e-1_ablation_20251011_030648.log +0 -0
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation_20251011_014003.log ADDED
@@ -0,0 +1,698 @@
+ ==== STARTING EXPERIMENT: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation ====
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation_20251011_014003.log
+ Timestamp: 2025-10-11 01:40:03
+ =====================================
+ Processing: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
+ import pynvml # type: ignore[import]
+ [2025-10-11 01:40:06,177] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ config_mask.torch_dtype: torch.bfloat16
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ Load mask model from /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation over.
+ TinyLlavaConfig {
+ "architectures": [
+ "TinyLlavaForConditionalGeneration"
+ ],
+ "backward_type_connector": "normal",
+ "cache_dir": null,
+ "connector_type": "mlp2x_gelu",
+ "hidden_size": 896,
+ "ignore_index": -100,
+ "image_aspect_ratio": "square",
+ "image_token_index": -200,
+ "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "mask_model": [
+ "llm",
+ "connector"
+ ],
+ "mask_type_connector": "soft",
+ "model_type": "tinyllava",
+ "num_queries": 128,
+ "num_resampler_layers": 3,
+ "pad_token": "<|endoftext|>",
+ "resampler_hidden_size": 768,
+ "sparsity_connector": null,
+ "subnet_type_connector": "global",
+ "temperature_connector": 0.3,
+ "text_config": {
+ "_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "backward_type": "normal",
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "hidden_size": 896,
+ "intermediate_size": 4864,
+ "mask_type": "soft",
+ "masked_layers": "all",
+ "max_position_embeddings": 32768,
+ "max_window_layers": 24,
+ "model_type": "qwen2",
+ "num_attention_heads": 14,
+ "num_hidden_layers": 24,
+ "num_key_value_heads": 2,
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "subnet_mode": "both",
+ "subnet_type": "None",
+ "temperature_attn": 0.3,
+ "temperature_mlp": 0.3,
+ "tie_word_embeddings": true,
+ "torch_dtype": "bfloat16",
+ "use_mrope": false,
+ "use_sliding_window": false,
+ "vocab_size": 151936
+ },
+ "threshold_connector": null,
+ "tokenizer_model_max_length": 2048,
+ "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "tokenizer_padding_side": "right",
+ "tokenizer_use_fast": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.40.1",
+ "tune_type_connector": "full",
+ "tune_type_llm": "full",
+ "tune_type_vision_tower": "frozen",
+ "tune_vision_tower_from_layer": 0,
+ "use_cache": true,
+ "vision_config": {
+ "hidden_act": "gelu_pytorch_tanh",
+ "hidden_size": 1152,
+ "image_size": 384,
+ "intermediate_size": 4304,
+ "layer_norm_eps": 1e-06,
+ "model_name_or_path": "google/siglip-so400m-patch14-384",
+ "model_name_or_path2": "",
+ "model_type": "siglip_vision_model",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 27,
+ "patch_size": 14
+ },
+ "vision_feature_layer": -2,
+ "vision_feature_select_strategy": "patch",
+ "vision_hidden_size": 1152,
+ "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
+ "vision_model_name_or_path2": "",
+ "vocab_size": 151936
+ }
+
+ TinyLlavaForConditionalGeneration(
+ (language_model): Qwen2ForCausalLM(
+ (model): Qwen2Model(
+ (embed_tokens): Embedding(151936, 896)
+ (layers): ModuleList(
+ (0-23): 24 x Qwen2DecoderLayer(
+ (self_attn): Qwen2Attention(
+ (q_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
+ (k_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
+ (v_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
+ (o_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=False)
+ (rotary_emb): Qwen2RotaryEmbedding()
+ )
+ (mlp): Qwen2MLP(
+ (gate_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
+ (up_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
+ (down_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=4864, out_features=896, bias=False)
+ (act_fn): SiLU()
+ )
+ (input_layernorm): Qwen2RMSNorm()
+ (post_attention_layernorm): Qwen2RMSNorm()
+ )
+ )
+ (norm): Qwen2RMSNorm()
+ )
+ (lm_head): Linear(in_features=896, out_features=151936, bias=False)
+ )
+ (vision_tower): SIGLIPVisionTower(
+ (_vision_tower): SiglipVisionModel(
+ (vision_model): SiglipVisionTransformer(
+ (embeddings): SiglipVisionEmbeddings(
+ (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
+ (position_embedding): Embedding(729, 1152)
+ )
+ (encoder): SiglipEncoder(
+ (layers): ModuleList(
+ (0-26): 27 x SiglipEncoderLayer(
+ (self_attn): SiglipAttention(
+ (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ )
+ )
+ )
+ (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (head): SiglipMultiheadAttentionPoolingHead(
+ (attention): MultiheadAttention(
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ )
+ )
+ )
+ )
+ (connector): MLPConnector(
+ (_connector): Sequential(
+ (0): SupermaskLinearSparsity_SoftForward_Normal(in_features=1152, out_features=896, bias=True)
+ (1): GELU(approximate='none')
+ (2): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
+ )
+ )
+ )
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
+ return self.fget.__get__(instance, owner)()
+ loading language model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/language_model
+ Loading vision tower from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/vision_tower
+ Loading connector from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/connector/pytorch_model.bin...
+ Load base model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain over.
+ TinyLlavaConfig {
+ "cache_dir": null,
+ "connector_type": "mlp2x_gelu",
+ "hidden_size": 896,
+ "ignore_index": -100,
+ "image_aspect_ratio": "square",
+ "image_token_index": -200,
+ "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "model_type": "tinyllava",
+ "num_queries": 128,
+ "num_resampler_layers": 3,
+ "pad_token": "<|endoftext|>",
+ "pad_token_id": 151643,
+ "resampler_hidden_size": 768,
+ "text_config": {
+ "_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "hidden_size": 896,
+ "intermediate_size": 4864,
+ "max_position_embeddings": 32768,
+ "max_window_layers": 24,
+ "model_type": "qwen2",
+ "num_attention_heads": 14,
+ "num_hidden_layers": 24,
+ "num_key_value_heads": 2,
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "tie_word_embeddings": true,
+ "use_mrope": false,
+ "use_sliding_window": false,
+ "vocab_size": 151936
+ },
+ "tokenizer_model_max_length": 2048,
+ "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "tokenizer_padding_side": "right",
+ "tokenizer_use_fast": false,
+ "transformers_version": "4.40.1",
+ "tune_type_connector": "full",
+ "tune_type_llm": "frozen",
+ "tune_type_vision_tower": "frozen",
+ "tune_vision_tower_from_layer": 0,
+ "use_cache": true,
+ "vision_config": {
+ "hidden_act": "gelu_pytorch_tanh",
+ "hidden_size": 1152,
+ "image_size": 384,
+ "intermediate_size": 4304,
+ "layer_norm_eps": 1e-06,
+ "model_name_or_path": "google/siglip-so400m-patch14-384",
+ "model_name_or_path2": "",
+ "model_type": "siglip_vision_model",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 27,
+ "patch_size": 14
+ },
+ "vision_feature_layer": -2,
+ "vision_feature_select_strategy": "patch",
+ "vision_hidden_size": 1152,
+ "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
+ "vision_model_name_or_path2": "",
+ "vocab_size": 151936
+ }
+
+ TinyLlavaForConditionalGeneration(
+ (language_model): Qwen2ForCausalLM(
+ (model): Qwen2Model(
+ (embed_tokens): Embedding(151936, 896)
+ (layers): ModuleList(
+ (0-23): 24 x Qwen2DecoderLayer(
+ (self_attn): Qwen2Attention(
+ (q_proj): Linear(in_features=896, out_features=896, bias=True)
+ (k_proj): Linear(in_features=896, out_features=128, bias=True)
+ (v_proj): Linear(in_features=896, out_features=128, bias=True)
+ (o_proj): Linear(in_features=896, out_features=896, bias=False)
+ (rotary_emb): Qwen2RotaryEmbedding()
+ )
+ (mlp): Qwen2MLP(
+ (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
+ (up_proj): Linear(in_features=896, out_features=4864, bias=False)
+ (down_proj): Linear(in_features=4864, out_features=896, bias=False)
+ (act_fn): SiLU()
+ )
+ (input_layernorm): Qwen2RMSNorm()
+ (post_attention_layernorm): Qwen2RMSNorm()
+ )
+ )
+ (norm): Qwen2RMSNorm()
+ )
+ (lm_head): Linear(in_features=896, out_features=151936, bias=False)
+ )
+ (vision_tower): SIGLIPVisionTower(
+ (_vision_tower): SiglipVisionModel(
+ (vision_model): SiglipVisionTransformer(
+ (embeddings): SiglipVisionEmbeddings(
+ (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
+ (position_embedding): Embedding(729, 1152)
+ )
+ (encoder): SiglipEncoder(
+ (layers): ModuleList(
+ (0-26): 27 x SiglipEncoderLayer(
+ (self_attn): SiglipAttention(
+ (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ )
+ )
+ )
+ (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (head): SiglipMultiheadAttentionPoolingHead(
+ (attention): MultiheadAttention(
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ )
+ )
+ )
+ )
+ (connector): MLPConnector(
+ (_connector): Sequential(
+ (0): Linear(in_features=1152, out_features=896, bias=True)
+ (1): GELU(approximate='none')
+ (2): Linear(in_features=896, out_features=896, bias=True)
+ )
+ )
+ )
+ Collect masks for language model over.
+ Collect masks for connector over.
+ Applying mask on model.layers.0.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.q_proj.
+ Applying mask on model.layers.0.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.k_proj.
+ Applying mask on model.layers.0.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.v_proj.
+ Applying mask on model.layers.0.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.o_proj.
+ Applying mask on model.layers.0.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.gate_proj.
+ Applying mask on model.layers.0.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.up_proj.
+ Applying mask on model.layers.0.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.down_proj.
+ Applying mask on model.layers.1.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.q_proj.
+ Applying mask on model.layers.1.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.k_proj.
+ Applying mask on model.layers.1.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.v_proj.
+ Applying mask on model.layers.1.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.o_proj.
+ Applying mask on model.layers.1.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.gate_proj.
+ Applying mask on model.layers.1.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.up_proj.
+ Applying mask on model.layers.1.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.down_proj.
+ Applying mask on model.layers.2.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.q_proj.
+ Applying mask on model.layers.2.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.k_proj.
+ Applying mask on model.layers.2.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.v_proj.
+ Applying mask on model.layers.2.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.o_proj.
+ Applying mask on model.layers.2.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.gate_proj.
+ Applying mask on model.layers.2.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.up_proj.
+ Applying mask on model.layers.2.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.down_proj.
+ Applying mask on model.layers.3.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.q_proj.
+ Applying mask on model.layers.3.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.k_proj.
+ Applying mask on model.layers.3.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.v_proj.
+ Applying mask on model.layers.3.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.o_proj.
+ Applying mask on model.layers.3.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.gate_proj.
+ Applying mask on model.layers.3.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.up_proj.
+ Applying mask on model.layers.3.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.down_proj.
+ Applying mask on model.layers.4.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.q_proj.
+ Applying mask on model.layers.4.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.k_proj.
+ Applying mask on model.layers.4.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.v_proj.
+ Applying mask on model.layers.4.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.o_proj.
+ Applying mask on model.layers.4.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.gate_proj.
+ Applying mask on model.layers.4.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.up_proj.
+ Applying mask on model.layers.4.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.down_proj.
+ Applying mask on model.layers.5.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.q_proj.
+ Applying mask on model.layers.5.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.k_proj.
+ Applying mask on model.layers.5.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.v_proj.
+ Applying mask on model.layers.5.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.o_proj.
+ Applying mask on model.layers.5.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.gate_proj.
+ Applying mask on model.layers.5.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.up_proj.
+ Applying mask on model.layers.5.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.down_proj.
+ Applying mask on model.layers.6.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.q_proj.
+ Applying mask on model.layers.6.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.k_proj.
+ Applying mask on model.layers.6.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.v_proj.
+ Applying mask on model.layers.6.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.o_proj.
+ Applying mask on model.layers.6.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.mlp.gate_proj.
+ Applying mask on model.layers.6.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.mlp.up_proj.
+ Applying mask on model.layers.6.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.mlp.down_proj.
+ Applying mask on model.layers.7.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.self_attn.q_proj.
+ Applying mask on model.layers.7.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.self_attn.k_proj.
+ Applying mask on model.layers.7.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.self_attn.v_proj.
+ Applying mask on model.layers.7.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.self_attn.o_proj.
+ Applying mask on model.layers.7.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.mlp.gate_proj.
+ Applying mask on model.layers.7.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.mlp.up_proj.
+ Applying mask on model.layers.7.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.mlp.down_proj.
+ Applying mask on model.layers.8.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.self_attn.q_proj.
+ Applying mask on model.layers.8.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.self_attn.k_proj.
+ Applying mask on model.layers.8.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.self_attn.v_proj.
+ Applying mask on model.layers.8.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.self_attn.o_proj.
+ Applying mask on model.layers.8.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.mlp.gate_proj.
+ Applying mask on model.layers.8.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.mlp.up_proj.
+ Applying mask on model.layers.8.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.mlp.down_proj.
+ Applying mask on model.layers.9.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.self_attn.q_proj.
+ Applying mask on model.layers.9.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.self_attn.k_proj.
+ Applying mask on model.layers.9.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.self_attn.v_proj.
+ Applying mask on model.layers.9.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.self_attn.o_proj.
+ Applying mask on model.layers.9.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.mlp.gate_proj.
+ Applying mask on model.layers.9.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.mlp.up_proj.
+ Applying mask on model.layers.9.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.mlp.down_proj.
+ Applying mask on model.layers.10.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.self_attn.q_proj.
+ Applying mask on model.layers.10.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.self_attn.k_proj.
+ Applying mask on model.layers.10.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.self_attn.v_proj.
+ Applying mask on model.layers.10.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.self_attn.o_proj.
+ Applying mask on model.layers.10.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.mlp.gate_proj.
+ Applying mask on model.layers.10.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.mlp.up_proj.
+ Applying mask on model.layers.10.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.mlp.down_proj.
+ Applying mask on model.layers.11.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.self_attn.q_proj.
+ Applying mask on model.layers.11.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.self_attn.k_proj.
+ Applying mask on model.layers.11.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.self_attn.v_proj.
+ Applying mask on model.layers.11.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.self_attn.o_proj.
+ Applying mask on model.layers.11.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.mlp.gate_proj.
+ Applying mask on model.layers.11.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.mlp.up_proj.
+ Applying mask on model.layers.11.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.mlp.down_proj.
+ Applying mask on model.layers.12.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.self_attn.q_proj.
+ Applying mask on model.layers.12.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.self_attn.k_proj.
+ Applying mask on model.layers.12.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.self_attn.v_proj.
+ Applying mask on model.layers.12.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.self_attn.o_proj.
+ Applying mask on model.layers.12.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.mlp.gate_proj.
+ Applying mask on model.layers.12.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.mlp.up_proj.
+ Applying mask on model.layers.12.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.mlp.down_proj.
+ Applying mask on model.layers.13.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.self_attn.q_proj.
+ Applying mask on model.layers.13.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.self_attn.k_proj.
+ Applying mask on model.layers.13.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.self_attn.v_proj.
+ Applying mask on model.layers.13.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.self_attn.o_proj.
+ Applying mask on model.layers.13.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.mlp.gate_proj.
+ Applying mask on model.layers.13.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.mlp.up_proj.
+ Applying mask on model.layers.13.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.mlp.down_proj.
+ Applying mask on model.layers.14.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.self_attn.q_proj.
+ Applying mask on model.layers.14.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.self_attn.k_proj.
+ Applying mask on model.layers.14.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.self_attn.v_proj.
+ Applying mask on model.layers.14.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.self_attn.o_proj.
+ Applying mask on model.layers.14.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.mlp.gate_proj.
+ Applying mask on model.layers.14.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.mlp.up_proj.
+ Applying mask on model.layers.14.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.mlp.down_proj.
+ Applying mask on model.layers.15.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.self_attn.q_proj.
+ Applying mask on model.layers.15.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.self_attn.k_proj.
+ Applying mask on model.layers.15.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.self_attn.v_proj.
+ Applying mask on model.layers.15.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.self_attn.o_proj.
+ Applying mask on model.layers.15.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.mlp.gate_proj.
+ Applying mask on model.layers.15.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.mlp.up_proj.
+ Applying mask on model.layers.15.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.mlp.down_proj.
+ Applying mask on model.layers.16.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.self_attn.q_proj.
+ Applying mask on model.layers.16.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.self_attn.k_proj.
+ Applying mask on model.layers.16.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.self_attn.v_proj.
+ Applying mask on model.layers.16.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.self_attn.o_proj.
+ Applying mask on model.layers.16.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.mlp.gate_proj.
+ Applying mask on model.layers.16.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.mlp.up_proj.
+ Applying mask on model.layers.16.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.mlp.down_proj.
+ Applying mask on model.layers.17.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.self_attn.q_proj.
+ Applying mask on model.layers.17.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.self_attn.k_proj.
+ Applying mask on model.layers.17.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.self_attn.v_proj.
+ Applying mask on model.layers.17.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.self_attn.o_proj.
+ Applying mask on model.layers.17.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.mlp.gate_proj.
+ Applying mask on model.layers.17.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.mlp.up_proj.
+ Applying mask on model.layers.17.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.mlp.down_proj.
+ Applying mask on model.layers.18.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.self_attn.q_proj.
+ Applying mask on model.layers.18.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.self_attn.k_proj.
+ Applying mask on model.layers.18.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.self_attn.v_proj.
+ Applying mask on model.layers.18.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.self_attn.o_proj.
+ Applying mask on model.layers.18.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.mlp.gate_proj.
+ Applying mask on model.layers.18.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.mlp.up_proj.
+ Applying mask on model.layers.18.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.mlp.down_proj.
+ Applying mask on model.layers.19.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.self_attn.q_proj.
+ Applying mask on model.layers.19.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.self_attn.k_proj.
+ Applying mask on model.layers.19.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.self_attn.v_proj.
+ Applying mask on model.layers.19.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.self_attn.o_proj.
+ Applying mask on model.layers.19.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.mlp.gate_proj.
+ Applying mask on model.layers.19.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.mlp.up_proj.
+ Applying mask on model.layers.19.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.mlp.down_proj.
+ Applying mask on model.layers.20.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.self_attn.q_proj.
+ Applying mask on model.layers.20.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.self_attn.k_proj.
+ Applying mask on model.layers.20.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.self_attn.v_proj.
+ Applying mask on model.layers.20.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.self_attn.o_proj.
+ Applying mask on model.layers.20.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.mlp.gate_proj.
+ Applying mask on model.layers.20.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.mlp.up_proj.
+ Applying mask on model.layers.20.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.mlp.down_proj.
+ Applying mask on model.layers.21.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.self_attn.q_proj.
+ Applying mask on model.layers.21.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.self_attn.k_proj.
+ Applying mask on model.layers.21.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.self_attn.v_proj.
+ Applying mask on model.layers.21.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.self_attn.o_proj.
+ Applying mask on model.layers.21.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.mlp.gate_proj.
+ Applying mask on model.layers.21.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.mlp.up_proj.
+ Applying mask on model.layers.21.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.mlp.down_proj.
+ Applying mask on model.layers.22.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.self_attn.q_proj.
+ Applying mask on model.layers.22.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.self_attn.k_proj.
+ Applying mask on model.layers.22.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.self_attn.v_proj.
+ Applying mask on model.layers.22.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.self_attn.o_proj.
+ Applying mask on model.layers.22.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.mlp.gate_proj.
+ Applying mask on model.layers.22.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.mlp.up_proj.
+ Applying mask on model.layers.22.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.mlp.down_proj.
+ Applying mask on model.layers.23.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.self_attn.q_proj.
+ Applying mask on model.layers.23.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.self_attn.k_proj.
+ Applying mask on model.layers.23.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.self_attn.v_proj.
+ Applying mask on model.layers.23.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.self_attn.o_proj.
+ Applying mask on model.layers.23.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.mlp.gate_proj.
+ Applying mask on model.layers.23.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.mlp.up_proj.
+ Applying mask on model.layers.23.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.mlp.down_proj.
+ Applying mask on _connector.0 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on _connector.0.
+ Applying mask on _connector.2 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on _connector.2.
+ Using cleaned config_mask (without mask parameters) for saving.
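The "Applying mask ... / Applied soft mask ..." pairs above record the point where the learned mask of each `SupermaskLinearSparsity_SoftForward_Normal` module is merged into its weights before the cleaned checkpoint is saved. As a rough illustration of the mechanism (a minimal sketch; `SoftMaskedLinear`, `scores`, and `bake_mask` are hypothetical names, not TinyLLaVA's actual API, and the real forward/backward details may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskedLinear(nn.Linear):
    """Sketch of a soft "supermask" linear layer: each weight gets a
    learnable score, and a temperature-scaled sigmoid turns the scores
    into a soft mask in (0, 1) that rescales the weights."""

    def __init__(self, in_features, out_features, bias=True, temperature=0.3):
        super().__init__(in_features, out_features, bias=bias)
        self.temperature = temperature
        # One learnable score per weight entry; the weights themselves
        # can stay frozen while only the scores are tuned.
        self.scores = nn.Parameter(torch.zeros_like(self.weight))

    def soft_mask(self) -> torch.Tensor:
        return torch.sigmoid(self.scores / self.temperature)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # During mask tuning the mask is applied on the fly.
        return F.linear(x, self.weight * self.soft_mask(), self.bias)

    @torch.no_grad()
    def bake_mask(self) -> None:
        # Merge the mask into the weights, analogous to the
        # "Applied soft mask on ..." steps logged above.
        self.weight.mul_(self.soft_mask())
```

Here `temperature` plays the role of the config's `temperature_attn`/`temperature_mlp`/`temperature_connector` values (0.3 in this run); lower temperatures push the sigmoid toward a hard 0/1 mask.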
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
+ import pynvml # type: ignore[import]
+ [2025-10-11 01:40:54,849] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ Traceback (most recent call last):
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
+ return _run_code(code, main_globals, None,
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 86, in _run_code
+ exec(code, run_globals)
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 180, in <module>
+ eval_model(args)
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 94, in eval_model
+ questions = json.load(open(os.path.expanduser(args.question_file), "r"))
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/anns_for_eval.json'
+ Traceback (most recent call last):
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 31, in <module>
+ eval_model(args)
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 7, in eval_model
+ answers = [json.loads(q) for q in open(os.path.expanduser(args.answers_file), "r")]
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/answers/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation-mask_applied.jsonl'
+ scripts/eval/mmmu.sh: line 23: cd: /s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/eval: No such file or directory
+ python: can't open file '/nfs/ywang29/TinyLLaVA/main_eval_only.py': [Errno 2] No such file or directory
+ ==== EXPERIMENT COMPLETED: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation ====
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation_20251011_014003.log
+ Timestamp: 2025-10-11 01:41:01
+ =====================================
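Despite the "EXPERIMENT COMPLETED" banner, the MMMU evaluation in this log never actually ran: mask application and checkpoint saving succeeded, but every subsequent step failed because the dataset root under /s3-code was not present (the elided `...` path segments are as printed in the log). A guard along these lines (illustrative only; `load_questions` is a hypothetical helper, not part of the repo) would surface the missing mount as one clear error instead of a cascade of tracebacks:

```python
import json
import os
import sys

def load_questions(question_file: str) -> list:
    """Mirror of the `json.load(open(os.path.expanduser(...)))` call in
    model_vqa_mmmu.py, but failing fast with an actionable message."""
    path = os.path.expanduser(question_file)
    if not os.path.isfile(path):
        sys.exit(f"MMMU annotation file not found: {path} - "
                 "is the eval dataset mount available?")
    with open(path, "r") as f:
        return json.load(f)
```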
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.3_2e-1_connector-1.0_0.3_2e-1_ablation_20251011_014510.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation_20251011_014101.log ADDED
@@ -0,0 +1,698 @@
+ ==== STARTING EXPERIMENT: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation ====
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation_20251011_014101.log
+ Timestamp: 2025-10-11 01:41:01
+ =====================================
+ Processing: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
+ import pynvml # type: ignore[import]
+ [2025-10-11 01:41:04,593] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ config_mask.torch_dtype: torch.bfloat16
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ Load mask model from /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation over.
+ TinyLlavaConfig {
+ "architectures": [
+ "TinyLlavaForConditionalGeneration"
+ ],
+ "backward_type_connector": "normal",
+ "cache_dir": null,
+ "connector_type": "mlp2x_gelu",
+ "hidden_size": 896,
+ "ignore_index": -100,
+ "image_aspect_ratio": "square",
+ "image_token_index": -200,
+ "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "mask_model": [
+ "llm",
+ "connector"
+ ],
+ "mask_type_connector": "soft",
+ "model_type": "tinyllava",
+ "num_queries": 128,
+ "num_resampler_layers": 3,
+ "pad_token": "<|endoftext|>",
+ "resampler_hidden_size": 768,
+ "sparsity_connector": null,
+ "subnet_type_connector": "global",
+ "temperature_connector": 0.5,
+ "text_config": {
+ "_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "backward_type": "normal",
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "hidden_size": 896,
+ "intermediate_size": 4864,
+ "mask_type": "soft",
+ "masked_layers": "all",
+ "max_position_embeddings": 32768,
+ "max_window_layers": 24,
+ "model_type": "qwen2",
+ "num_attention_heads": 14,
+ "num_hidden_layers": 24,
+ "num_key_value_heads": 2,
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "subnet_mode": "both",
+ "subnet_type": "None",
+ "temperature_attn": 0.5,
+ "temperature_mlp": 0.5,
+ "tie_word_embeddings": true,
+ "torch_dtype": "bfloat16",
+ "use_mrope": false,
+ "use_sliding_window": false,
+ "vocab_size": 151936
+ },
+ "threshold_connector": null,
+ "tokenizer_model_max_length": 2048,
+ "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "tokenizer_padding_side": "right",
+ "tokenizer_use_fast": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.40.1",
+ "tune_type_connector": "full",
+ "tune_type_llm": "full",
+ "tune_type_vision_tower": "frozen",
+ "tune_vision_tower_from_layer": 0,
+ "use_cache": true,
+ "vision_config": {
+ "hidden_act": "gelu_pytorch_tanh",
+ "hidden_size": 1152,
+ "image_size": 384,
+ "intermediate_size": 4304,
+ "layer_norm_eps": 1e-06,
+ "model_name_or_path": "google/siglip-so400m-patch14-384",
+ "model_name_or_path2": "",
+ "model_type": "siglip_vision_model",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 27,
+ "patch_size": 14
+ },
+ "vision_feature_layer": -2,
+ "vision_feature_select_strategy": "patch",
+ "vision_hidden_size": 1152,
+ "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
+ "vision_model_name_or_path2": "",
+ "vocab_size": 151936
+ }
+
+ TinyLlavaForConditionalGeneration(
+ (language_model): Qwen2ForCausalLM(
+ (model): Qwen2Model(
+ (embed_tokens): Embedding(151936, 896)
+ (layers): ModuleList(
+ (0-23): 24 x Qwen2DecoderLayer(
+ (self_attn): Qwen2Attention(
+ (q_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
+ (k_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
+ (v_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
+ (o_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=False)
+ (rotary_emb): Qwen2RotaryEmbedding()
+ )
+ (mlp): Qwen2MLP(
+ (gate_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
+ (up_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
+ (down_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=4864, out_features=896, bias=False)
+ (act_fn): SiLU()
+ )
+ (input_layernorm): Qwen2RMSNorm()
+ (post_attention_layernorm): Qwen2RMSNorm()
+ )
+ )
+ (norm): Qwen2RMSNorm()
+ )
+ (lm_head): Linear(in_features=896, out_features=151936, bias=False)
+ )
+ (vision_tower): SIGLIPVisionTower(
+ (_vision_tower): SiglipVisionModel(
+ (vision_model): SiglipVisionTransformer(
+ (embeddings): SiglipVisionEmbeddings(
+ (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
+ (position_embedding): Embedding(729, 1152)
+ )
+ (encoder): SiglipEncoder(
+ (layers): ModuleList(
+ (0-26): 27 x SiglipEncoderLayer(
+ (self_attn): SiglipAttention(
+ (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ )
+ )
+ )
+ (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (head): SiglipMultiheadAttentionPoolingHead(
+ (attention): MultiheadAttention(
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
161
+ (mlp): SiglipMLP(
162
+ (activation_fn): PytorchGELUTanh()
163
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
164
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
165
+ )
166
+ )
167
+ )
168
+ )
169
+ )
170
+ (connector): MLPConnector(
171
+ (_connector): Sequential(
172
+ (0): SupermaskLinearSparsity_SoftForward_Normal(in_features=1152, out_features=896, bias=True)
173
+ (1): GELU(approximate='none')
174
+ (2): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
175
+ )
176
+ )
177
+ )
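In the module tree above, every attention and MLP projection of the language model, as well as both connector layers, has been swapped from nn.Linear to SupermaskLinearSparsity_SoftForward_Normal. The log does not show that class's code; the sketch below is only an assumption of what a "soft forward" mask with the configured temperature (0.5 for attention, MLP, and connector in this run) could look like. The per-weight score tensor named scores is a made-up name for illustration, not one taken from the repository.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SoftMaskedLinear(nn.Linear):
        # Hypothetical stand-in for SupermaskLinearSparsity_SoftForward_Normal (sketch only).
        def __init__(self, in_features, out_features, bias=True, temperature=0.5):
            super().__init__(in_features, out_features, bias=bias)
            # Learned mask logits; the pretrained weight itself can stay frozen.
            self.scores = nn.Parameter(torch.zeros_like(self.weight))
            self.temperature = temperature

        def forward(self, x):
            # Soft mask in [0, 1]; a lower temperature pushes it toward a hard 0/1 mask.
            mask = torch.sigmoid(self.scores / self.temperature)
            return F.linear(x, self.weight * mask, self.bias)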
178
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
179
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
180
+ return self.fget.__get__(instance, owner)()
181
+ loading language model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/language_model
182
+ Loading vision tower from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/vision_tower
183
+ Loading connector from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/connector/pytorch_model.bin...
184
+ Load base model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain over.
185
+ TinyLlavaConfig {
186
+ "cache_dir": null,
187
+ "connector_type": "mlp2x_gelu",
188
+ "hidden_size": 896,
189
+ "ignore_index": -100,
190
+ "image_aspect_ratio": "square",
191
+ "image_token_index": -200,
192
+ "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
193
+ "model_type": "tinyllava",
194
+ "num_queries": 128,
195
+ "num_resampler_layers": 3,
196
+ "pad_token": "<|endoftext|>",
197
+ "pad_token_id": 151643,
198
+ "resampler_hidden_size": 768,
199
+ "text_config": {
200
+ "_name_or_path": "Qwen/Qwen2.5-0.5B",
201
+ "architectures": [
202
+ "Qwen2ForCausalLM"
203
+ ],
204
+ "bos_token_id": 151643,
205
+ "eos_token_id": 151643,
206
+ "hidden_size": 896,
207
+ "intermediate_size": 4864,
208
+ "max_position_embeddings": 32768,
209
+ "max_window_layers": 24,
210
+ "model_type": "qwen2",
211
+ "num_attention_heads": 14,
212
+ "num_hidden_layers": 24,
213
+ "num_key_value_heads": 2,
214
+ "rope_theta": 1000000.0,
215
+ "sliding_window": 32768,
216
+ "tie_word_embeddings": true,
217
+ "use_mrope": false,
218
+ "use_sliding_window": false,
219
+ "vocab_size": 151936
220
+ },
221
+ "tokenizer_model_max_length": 2048,
222
+ "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
223
+ "tokenizer_padding_side": "right",
224
+ "tokenizer_use_fast": false,
225
+ "transformers_version": "4.40.1",
226
+ "tune_type_connector": "full",
227
+ "tune_type_llm": "frozen",
228
+ "tune_type_vision_tower": "frozen",
229
+ "tune_vision_tower_from_layer": 0,
230
+ "use_cache": true,
231
+ "vision_config": {
232
+ "hidden_act": "gelu_pytorch_tanh",
233
+ "hidden_size": 1152,
234
+ "image_size": 384,
235
+ "intermediate_size": 4304,
236
+ "layer_norm_eps": 1e-06,
237
+ "model_name_or_path": "google/siglip-so400m-patch14-384",
238
+ "model_name_or_path2": "",
239
+ "model_type": "siglip_vision_model",
240
+ "num_attention_heads": 16,
241
+ "num_hidden_layers": 27,
242
+ "patch_size": 14
243
+ },
244
+ "vision_feature_layer": -2,
245
+ "vision_feature_select_strategy": "patch",
246
+ "vision_hidden_size": 1152,
247
+ "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
248
+ "vision_model_name_or_path2": "",
249
+ "vocab_size": 151936
250
+ }
251
+
252
+ TinyLlavaForConditionalGeneration(
253
+ (language_model): Qwen2ForCausalLM(
254
+ (model): Qwen2Model(
255
+ (embed_tokens): Embedding(151936, 896)
256
+ (layers): ModuleList(
257
+ (0-23): 24 x Qwen2DecoderLayer(
258
+ (self_attn): Qwen2Attention(
259
+ (q_proj): Linear(in_features=896, out_features=896, bias=True)
260
+ (k_proj): Linear(in_features=896, out_features=128, bias=True)
261
+ (v_proj): Linear(in_features=896, out_features=128, bias=True)
262
+ (o_proj): Linear(in_features=896, out_features=896, bias=False)
263
+ (rotary_emb): Qwen2RotaryEmbedding()
264
+ )
265
+ (mlp): Qwen2MLP(
266
+ (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
267
+ (up_proj): Linear(in_features=896, out_features=4864, bias=False)
268
+ (down_proj): Linear(in_features=4864, out_features=896, bias=False)
269
+ (act_fn): SiLU()
270
+ )
271
+ (input_layernorm): Qwen2RMSNorm()
272
+ (post_attention_layernorm): Qwen2RMSNorm()
273
+ )
274
+ )
275
+ (norm): Qwen2RMSNorm()
276
+ )
277
+ (lm_head): Linear(in_features=896, out_features=151936, bias=False)
278
+ )
279
+ (vision_tower): SIGLIPVisionTower(
280
+ (_vision_tower): SiglipVisionModel(
281
+ (vision_model): SiglipVisionTransformer(
282
+ (embeddings): SiglipVisionEmbeddings(
283
+ (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
284
+ (position_embedding): Embedding(729, 1152)
285
+ )
286
+ (encoder): SiglipEncoder(
287
+ (layers): ModuleList(
288
+ (0-26): 27 x SiglipEncoderLayer(
289
+ (self_attn): SiglipAttention(
290
+ (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
291
+ (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
292
+ (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
293
+ (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
294
+ )
295
+ (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
296
+ (mlp): SiglipMLP(
297
+ (activation_fn): PytorchGELUTanh()
298
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
299
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
300
+ )
301
+ (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
302
+ )
303
+ )
304
+ )
305
+ (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
306
+ (head): SiglipMultiheadAttentionPoolingHead(
307
+ (attention): MultiheadAttention(
308
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
309
+ )
310
+ (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
311
+ (mlp): SiglipMLP(
312
+ (activation_fn): PytorchGELUTanh()
313
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
314
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
315
+ )
316
+ )
317
+ )
318
+ )
319
+ )
320
+ (connector): MLPConnector(
321
+ (_connector): Sequential(
322
+ (0): Linear(in_features=1152, out_features=896, bias=True)
323
+ (1): GELU(approximate='none')
324
+ (2): Linear(in_features=896, out_features=896, bias=True)
325
+ )
326
+ )
327
+ )
328
+ Collect masks for language model over.
329
+ Collect masks for connector over.
330
+ Applying mask on model.layers.0.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
331
+ Applied soft mask on model.layers.0.self_attn.q_proj.
332
+ Applying mask on model.layers.0.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
333
+ Applied soft mask on model.layers.0.self_attn.k_proj.
334
+ Applying mask on model.layers.0.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
335
+ Applied soft mask on model.layers.0.self_attn.v_proj.
336
+ Applying mask on model.layers.0.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
337
+ Applied soft mask on model.layers.0.self_attn.o_proj.
338
+ Applying mask on model.layers.0.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
339
+ Applied soft mask on model.layers.0.mlp.gate_proj.
340
+ Applying mask on model.layers.0.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
341
+ Applied soft mask on model.layers.0.mlp.up_proj.
342
+ Applying mask on model.layers.0.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
343
+ Applied soft mask on model.layers.0.mlp.down_proj.
344
+ Applying mask on model.layers.1.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
345
+ Applied soft mask on model.layers.1.self_attn.q_proj.
346
+ Applying mask on model.layers.1.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
347
+ Applied soft mask on model.layers.1.self_attn.k_proj.
348
+ Applying mask on model.layers.1.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
349
+ Applied soft mask on model.layers.1.self_attn.v_proj.
350
+ Applying mask on model.layers.1.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
351
+ Applied soft mask on model.layers.1.self_attn.o_proj.
352
+ Applying mask on model.layers.1.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
353
+ Applied soft mask on model.layers.1.mlp.gate_proj.
354
+ Applying mask on model.layers.1.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
355
+ Applied soft mask on model.layers.1.mlp.up_proj.
356
+ Applying mask on model.layers.1.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
357
+ Applied soft mask on model.layers.1.mlp.down_proj.
358
+ Applying mask on model.layers.2.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
359
+ Applied soft mask on model.layers.2.self_attn.q_proj.
360
+ Applying mask on model.layers.2.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
361
+ Applied soft mask on model.layers.2.self_attn.k_proj.
362
+ Applying mask on model.layers.2.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
363
+ Applied soft mask on model.layers.2.self_attn.v_proj.
364
+ Applying mask on model.layers.2.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
365
+ Applied soft mask on model.layers.2.self_attn.o_proj.
366
+ Applying mask on model.layers.2.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
367
+ Applied soft mask on model.layers.2.mlp.gate_proj.
368
+ Applying mask on model.layers.2.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
369
+ Applied soft mask on model.layers.2.mlp.up_proj.
370
+ Applying mask on model.layers.2.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
371
+ Applied soft mask on model.layers.2.mlp.down_proj.
372
+ Applying mask on model.layers.3.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
373
+ Applied soft mask on model.layers.3.self_attn.q_proj.
374
+ Applying mask on model.layers.3.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
375
+ Applied soft mask on model.layers.3.self_attn.k_proj.
376
+ Applying mask on model.layers.3.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
377
+ Applied soft mask on model.layers.3.self_attn.v_proj.
378
+ Applying mask on model.layers.3.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
379
+ Applied soft mask on model.layers.3.self_attn.o_proj.
380
+ Applying mask on model.layers.3.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
381
+ Applied soft mask on model.layers.3.mlp.gate_proj.
382
+ Applying mask on model.layers.3.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
383
+ Applied soft mask on model.layers.3.mlp.up_proj.
384
+ Applying mask on model.layers.3.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
385
+ Applied soft mask on model.layers.3.mlp.down_proj.
386
+ Applying mask on model.layers.4.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
387
+ Applied soft mask on model.layers.4.self_attn.q_proj.
388
+ Applying mask on model.layers.4.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
389
+ Applied soft mask on model.layers.4.self_attn.k_proj.
390
+ Applying mask on model.layers.4.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
391
+ Applied soft mask on model.layers.4.self_attn.v_proj.
392
+ Applying mask on model.layers.4.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
393
+ Applied soft mask on model.layers.4.self_attn.o_proj.
394
+ Applying mask on model.layers.4.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
395
+ Applied soft mask on model.layers.4.mlp.gate_proj.
396
+ Applying mask on model.layers.4.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
397
+ Applied soft mask on model.layers.4.mlp.up_proj.
398
+ Applying mask on model.layers.4.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
399
+ Applied soft mask on model.layers.4.mlp.down_proj.
400
+ Applying mask on model.layers.5.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
401
+ Applied soft mask on model.layers.5.self_attn.q_proj.
402
+ Applying mask on model.layers.5.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
403
+ Applied soft mask on model.layers.5.self_attn.k_proj.
404
+ Applying mask on model.layers.5.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
405
+ Applied soft mask on model.layers.5.self_attn.v_proj.
406
+ Applying mask on model.layers.5.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
407
+ Applied soft mask on model.layers.5.self_attn.o_proj.
408
+ Applying mask on model.layers.5.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
409
+ Applied soft mask on model.layers.5.mlp.gate_proj.
410
+ Applying mask on model.layers.5.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
411
+ Applied soft mask on model.layers.5.mlp.up_proj.
412
+ Applying mask on model.layers.5.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
413
+ Applied soft mask on model.layers.5.mlp.down_proj.
414
+ Applying mask on model.layers.6.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
415
+ Applied soft mask on model.layers.6.self_attn.q_proj.
416
+ Applying mask on model.layers.6.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
417
+ Applied soft mask on model.layers.6.self_attn.k_proj.
418
+ Applying mask on model.layers.6.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
419
+ Applied soft mask on model.layers.6.self_attn.v_proj.
420
+ Applying mask on model.layers.6.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
421
+ Applied soft mask on model.layers.6.self_attn.o_proj.
422
+ Applying mask on model.layers.6.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
423
+ Applied soft mask on model.layers.6.mlp.gate_proj.
424
+ Applying mask on model.layers.6.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
425
+ Applied soft mask on model.layers.6.mlp.up_proj.
426
+ Applying mask on model.layers.6.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
427
+ Applied soft mask on model.layers.6.mlp.down_proj.
428
+ Applying mask on model.layers.7.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
429
+ Applied soft mask on model.layers.7.self_attn.q_proj.
430
+ Applying mask on model.layers.7.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
431
+ Applied soft mask on model.layers.7.self_attn.k_proj.
432
+ Applying mask on model.layers.7.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
433
+ Applied soft mask on model.layers.7.self_attn.v_proj.
434
+ Applying mask on model.layers.7.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
435
+ Applied soft mask on model.layers.7.self_attn.o_proj.
436
+ Applying mask on model.layers.7.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
437
+ Applied soft mask on model.layers.7.mlp.gate_proj.
438
+ Applying mask on model.layers.7.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
439
+ Applied soft mask on model.layers.7.mlp.up_proj.
440
+ Applying mask on model.layers.7.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
441
+ Applied soft mask on model.layers.7.mlp.down_proj.
442
+ Applying mask on model.layers.8.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
443
+ Applied soft mask on model.layers.8.self_attn.q_proj.
444
+ Applying mask on model.layers.8.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
445
+ Applied soft mask on model.layers.8.self_attn.k_proj.
446
+ Applying mask on model.layers.8.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
447
+ Applied soft mask on model.layers.8.self_attn.v_proj.
448
+ Applying mask on model.layers.8.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
449
+ Applied soft mask on model.layers.8.self_attn.o_proj.
450
+ Applying mask on model.layers.8.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
451
+ Applied soft mask on model.layers.8.mlp.gate_proj.
452
+ Applying mask on model.layers.8.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
453
+ Applied soft mask on model.layers.8.mlp.up_proj.
454
+ Applying mask on model.layers.8.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
455
+ Applied soft mask on model.layers.8.mlp.down_proj.
456
+ Applying mask on model.layers.9.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
457
+ Applied soft mask on model.layers.9.self_attn.q_proj.
458
+ Applying mask on model.layers.9.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
459
+ Applied soft mask on model.layers.9.self_attn.k_proj.
460
+ Applying mask on model.layers.9.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
461
+ Applied soft mask on model.layers.9.self_attn.v_proj.
462
+ Applying mask on model.layers.9.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
463
+ Applied soft mask on model.layers.9.self_attn.o_proj.
464
+ Applying mask on model.layers.9.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
465
+ Applied soft mask on model.layers.9.mlp.gate_proj.
466
+ Applying mask on model.layers.9.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
467
+ Applied soft mask on model.layers.9.mlp.up_proj.
468
+ Applying mask on model.layers.9.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
469
+ Applied soft mask on model.layers.9.mlp.down_proj.
470
+ Applying mask on model.layers.10.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
471
+ Applied soft mask on model.layers.10.self_attn.q_proj.
472
+ Applying mask on model.layers.10.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
473
+ Applied soft mask on model.layers.10.self_attn.k_proj.
474
+ Applying mask on model.layers.10.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
475
+ Applied soft mask on model.layers.10.self_attn.v_proj.
476
+ Applying mask on model.layers.10.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
477
+ Applied soft mask on model.layers.10.self_attn.o_proj.
478
+ Applying mask on model.layers.10.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
479
+ Applied soft mask on model.layers.10.mlp.gate_proj.
480
+ Applying mask on model.layers.10.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
481
+ Applied soft mask on model.layers.10.mlp.up_proj.
482
+ Applying mask on model.layers.10.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
483
+ Applied soft mask on model.layers.10.mlp.down_proj.
484
+ Applying mask on model.layers.11.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
485
+ Applied soft mask on model.layers.11.self_attn.q_proj.
486
+ Applying mask on model.layers.11.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
487
+ Applied soft mask on model.layers.11.self_attn.k_proj.
488
+ Applying mask on model.layers.11.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
489
+ Applied soft mask on model.layers.11.self_attn.v_proj.
490
+ Applying mask on model.layers.11.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
491
+ Applied soft mask on model.layers.11.self_attn.o_proj.
492
+ Applying mask on model.layers.11.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
493
+ Applied soft mask on model.layers.11.mlp.gate_proj.
494
+ Applying mask on model.layers.11.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
495
+ Applied soft mask on model.layers.11.mlp.up_proj.
496
+ Applying mask on model.layers.11.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
497
+ Applied soft mask on model.layers.11.mlp.down_proj.
498
+ Applying mask on model.layers.12.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
499
+ Applied soft mask on model.layers.12.self_attn.q_proj.
500
+ Applying mask on model.layers.12.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
501
+ Applied soft mask on model.layers.12.self_attn.k_proj.
502
+ Applying mask on model.layers.12.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
503
+ Applied soft mask on model.layers.12.self_attn.v_proj.
504
+ Applying mask on model.layers.12.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
505
+ Applied soft mask on model.layers.12.self_attn.o_proj.
506
+ Applying mask on model.layers.12.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
507
+ Applied soft mask on model.layers.12.mlp.gate_proj.
508
+ Applying mask on model.layers.12.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
509
+ Applied soft mask on model.layers.12.mlp.up_proj.
510
+ Applying mask on model.layers.12.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
511
+ Applied soft mask on model.layers.12.mlp.down_proj.
512
+ Applying mask on model.layers.13.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
513
+ Applied soft mask on model.layers.13.self_attn.q_proj.
514
+ Applying mask on model.layers.13.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
515
+ Applied soft mask on model.layers.13.self_attn.k_proj.
516
+ Applying mask on model.layers.13.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
517
+ Applied soft mask on model.layers.13.self_attn.v_proj.
518
+ Applying mask on model.layers.13.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
519
+ Applied soft mask on model.layers.13.self_attn.o_proj.
520
+ Applying mask on model.layers.13.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
521
+ Applied soft mask on model.layers.13.mlp.gate_proj.
522
+ Applying mask on model.layers.13.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
523
+ Applied soft mask on model.layers.13.mlp.up_proj.
524
+ Applying mask on model.layers.13.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
525
+ Applied soft mask on model.layers.13.mlp.down_proj.
526
+ Applying mask on model.layers.14.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
527
+ Applied soft mask on model.layers.14.self_attn.q_proj.
528
+ Applying mask on model.layers.14.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
529
+ Applied soft mask on model.layers.14.self_attn.k_proj.
530
+ Applying mask on model.layers.14.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
531
+ Applied soft mask on model.layers.14.self_attn.v_proj.
532
+ Applying mask on model.layers.14.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
533
+ Applied soft mask on model.layers.14.self_attn.o_proj.
534
+ Applying mask on model.layers.14.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
535
+ Applied soft mask on model.layers.14.mlp.gate_proj.
536
+ Applying mask on model.layers.14.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
537
+ Applied soft mask on model.layers.14.mlp.up_proj.
538
+ Applying mask on model.layers.14.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
539
+ Applied soft mask on model.layers.14.mlp.down_proj.
540
+ Applying mask on model.layers.15.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
541
+ Applied soft mask on model.layers.15.self_attn.q_proj.
542
+ Applying mask on model.layers.15.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
543
+ Applied soft mask on model.layers.15.self_attn.k_proj.
544
+ Applying mask on model.layers.15.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
545
+ Applied soft mask on model.layers.15.self_attn.v_proj.
546
+ Applying mask on model.layers.15.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
547
+ Applied soft mask on model.layers.15.self_attn.o_proj.
548
+ Applying mask on model.layers.15.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
549
+ Applied soft mask on model.layers.15.mlp.gate_proj.
550
+ Applying mask on model.layers.15.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
551
+ Applied soft mask on model.layers.15.mlp.up_proj.
552
+ Applying mask on model.layers.15.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
553
+ Applied soft mask on model.layers.15.mlp.down_proj.
554
+ Applying mask on model.layers.16.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
555
+ Applied soft mask on model.layers.16.self_attn.q_proj.
556
+ Applying mask on model.layers.16.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
557
+ Applied soft mask on model.layers.16.self_attn.k_proj.
558
+ Applying mask on model.layers.16.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
559
+ Applied soft mask on model.layers.16.self_attn.v_proj.
560
+ Applying mask on model.layers.16.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
561
+ Applied soft mask on model.layers.16.self_attn.o_proj.
562
+ Applying mask on model.layers.16.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
563
+ Applied soft mask on model.layers.16.mlp.gate_proj.
564
+ Applying mask on model.layers.16.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
565
+ Applied soft mask on model.layers.16.mlp.up_proj.
566
+ Applying mask on model.layers.16.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
567
+ Applied soft mask on model.layers.16.mlp.down_proj.
568
+ Applying mask on model.layers.17.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
569
+ Applied soft mask on model.layers.17.self_attn.q_proj.
570
+ Applying mask on model.layers.17.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
571
+ Applied soft mask on model.layers.17.self_attn.k_proj.
572
+ Applying mask on model.layers.17.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
573
+ Applied soft mask on model.layers.17.self_attn.v_proj.
574
+ Applying mask on model.layers.17.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
575
+ Applied soft mask on model.layers.17.self_attn.o_proj.
576
+ Applying mask on model.layers.17.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
577
+ Applied soft mask on model.layers.17.mlp.gate_proj.
578
+ Applying mask on model.layers.17.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
579
+ Applied soft mask on model.layers.17.mlp.up_proj.
580
+ Applying mask on model.layers.17.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
581
+ Applied soft mask on model.layers.17.mlp.down_proj.
582
+ Applying mask on model.layers.18.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
583
+ Applied soft mask on model.layers.18.self_attn.q_proj.
584
+ Applying mask on model.layers.18.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
585
+ Applied soft mask on model.layers.18.self_attn.k_proj.
586
+ Applying mask on model.layers.18.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
587
+ Applied soft mask on model.layers.18.self_attn.v_proj.
588
+ Applying mask on model.layers.18.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
589
+ Applied soft mask on model.layers.18.self_attn.o_proj.
590
+ Applying mask on model.layers.18.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
591
+ Applied soft mask on model.layers.18.mlp.gate_proj.
592
+ Applying mask on model.layers.18.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
593
+ Applied soft mask on model.layers.18.mlp.up_proj.
594
+ Applying mask on model.layers.18.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
595
+ Applied soft mask on model.layers.18.mlp.down_proj.
596
+ Applying mask on model.layers.19.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
597
+ Applied soft mask on model.layers.19.self_attn.q_proj.
598
+ Applying mask on model.layers.19.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
599
+ Applied soft mask on model.layers.19.self_attn.k_proj.
600
+ Applying mask on model.layers.19.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
601
+ Applied soft mask on model.layers.19.self_attn.v_proj.
602
+ Applying mask on model.layers.19.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
603
+ Applied soft mask on model.layers.19.self_attn.o_proj.
604
+ Applying mask on model.layers.19.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
605
+ Applied soft mask on model.layers.19.mlp.gate_proj.
606
+ Applying mask on model.layers.19.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
607
+ Applied soft mask on model.layers.19.mlp.up_proj.
608
+ Applying mask on model.layers.19.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
609
+ Applied soft mask on model.layers.19.mlp.down_proj.
610
+ Applying mask on model.layers.20.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
611
+ Applied soft mask on model.layers.20.self_attn.q_proj.
612
+ Applying mask on model.layers.20.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
613
+ Applied soft mask on model.layers.20.self_attn.k_proj.
614
+ Applying mask on model.layers.20.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
615
+ Applied soft mask on model.layers.20.self_attn.v_proj.
616
+ Applying mask on model.layers.20.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
617
+ Applied soft mask on model.layers.20.self_attn.o_proj.
618
+ Applying mask on model.layers.20.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
619
+ Applied soft mask on model.layers.20.mlp.gate_proj.
620
+ Applying mask on model.layers.20.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
621
+ Applied soft mask on model.layers.20.mlp.up_proj.
622
+ Applying mask on model.layers.20.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
623
+ Applied soft mask on model.layers.20.mlp.down_proj.
624
+ Applying mask on model.layers.21.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
625
+ Applied soft mask on model.layers.21.self_attn.q_proj.
626
+ Applying mask on model.layers.21.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
627
+ Applied soft mask on model.layers.21.self_attn.k_proj.
628
+ Applying mask on model.layers.21.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
629
+ Applied soft mask on model.layers.21.self_attn.v_proj.
630
+ Applying mask on model.layers.21.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
631
+ Applied soft mask on model.layers.21.self_attn.o_proj.
632
+ Applying mask on model.layers.21.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
633
+ Applied soft mask on model.layers.21.mlp.gate_proj.
634
+ Applying mask on model.layers.21.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
635
+ Applied soft mask on model.layers.21.mlp.up_proj.
636
+ Applying mask on model.layers.21.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
637
+ Applied soft mask on model.layers.21.mlp.down_proj.
638
+ Applying mask on model.layers.22.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
639
+ Applied soft mask on model.layers.22.self_attn.q_proj.
640
+ Applying mask on model.layers.22.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
641
+ Applied soft mask on model.layers.22.self_attn.k_proj.
642
+ Applying mask on model.layers.22.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
643
+ Applied soft mask on model.layers.22.self_attn.v_proj.
644
+ Applying mask on model.layers.22.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
645
+ Applied soft mask on model.layers.22.self_attn.o_proj.
646
+ Applying mask on model.layers.22.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
647
+ Applied soft mask on model.layers.22.mlp.gate_proj.
648
+ Applying mask on model.layers.22.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
649
+ Applied soft mask on model.layers.22.mlp.up_proj.
650
+ Applying mask on model.layers.22.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
651
+ Applied soft mask on model.layers.22.mlp.down_proj.
652
+ Applying mask on model.layers.23.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
653
+ Applied soft mask on model.layers.23.self_attn.q_proj.
654
+ Applying mask on model.layers.23.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
655
+ Applied soft mask on model.layers.23.self_attn.k_proj.
656
+ Applying mask on model.layers.23.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
657
+ Applied soft mask on model.layers.23.self_attn.v_proj.
658
+ Applying mask on model.layers.23.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
659
+ Applied soft mask on model.layers.23.self_attn.o_proj.
660
+ Applying mask on model.layers.23.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
661
+ Applied soft mask on model.layers.23.mlp.gate_proj.
662
+ Applying mask on model.layers.23.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
663
+ Applied soft mask on model.layers.23.mlp.up_proj.
664
+ Applying mask on model.layers.23.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
665
+ Applied soft mask on model.layers.23.mlp.down_proj.
666
+ Applying mask on _connector.0 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
667
+ Applied soft mask on _connector.0.
668
+ Applying mask on _connector.2 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
669
+ Applied soft mask on _connector.2.
670
+ Using cleaned config_mask (without mask parameters) for saving.
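The "Applied soft mask on ..." lines indicate that each collected mask is folded into the corresponding module's weights, after which the mask parameters are dropped so the checkpoint can be saved with the cleaned config. A minimal illustration of such a fold, under the same sigmoid/temperature assumption as the sketch above (scores and temperature are assumed attribute names, not confirmed by the log):

    import torch

    @torch.no_grad()
    def apply_soft_mask(module):
        # Bake the soft mask into the weight in place, then drop the mask logits
        # so no mask parameters remain in the saved state dict.
        mask = torch.sigmoid(module.scores / module.temperature).to(module.weight.dtype)
        module.weight.mul_(mask)
        del module.scores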
671
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
672
+ import pynvml # type: ignore[import]
673
+ [2025-10-11 01:41:46,313] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
674
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
675
+ warnings.warn(
676
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
677
+ Traceback (most recent call last):
678
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
679
+ return _run_code(code, main_globals, None,
680
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 86, in _run_code
681
+ exec(code, run_globals)
682
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 180, in <module>
683
+ eval_model(args)
684
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 94, in eval_model
685
+ questions = json.load(open(os.path.expanduser(args.question_file), "r"))
686
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/anns_for_eval.json'
687
+ Traceback (most recent call last):
688
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 31, in <module>
689
+ eval_model(args)
690
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 7, in eval_model
691
+ answers = [json.loads(q) for q in open(os.path.expanduser(args.answers_file), "r")]
692
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/answers/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation-mask_applied.jsonl'
693
+ scripts/eval/mmmu.sh: line 23: cd: /s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/eval: No such file or directory
694
+ python: can't open file '/nfs/ywang29/TinyLLaVA/main_eval_only.py': [Errno 2] No such file or directory
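The failures above are missing-path errors rather than model errors: the MMMU annotation file, the answers file, and the eval working directory are all resolved under a doubled ".../eval/MMMU" prefix that does not exist on this node, and main_eval_only.py is likewise absent, which points to a misconfigured data root in scripts/eval/mmmu.sh. A small pre-flight check along these lines (EVAL_ROOT is a placeholder, not the actual configured path) would surface the problem before the checkpoint is loaded and the mask is applied:

    import os

    EVAL_ROOT = "/path/to/tinyllava/eval/MMMU"  # substitute the root used by mmmu.sh
    required = [
        os.path.join(EVAL_ROOT, "anns_for_eval.json"),
        os.path.join(EVAL_ROOT, "answers"),
        os.path.join(EVAL_ROOT, "eval"),
    ]
    missing = [p for p in required if not os.path.exists(p)]
    if missing:
        raise SystemExit(f"Missing MMMU evaluation inputs: {missing}")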
695
+ ==== EXPERIMENT COMPLETED: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation ====
696
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation_20251011_014101.log
697
+ Timestamp: 2025-10-11 01:41:53
698
+ =====================================
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.5_2e-1_connector-1.0_0.5_2e-1_ablation_20251011_015706.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation_20251011_014153.log ADDED
@@ -0,0 +1,698 @@
1
+ ==== STARTING EXPERIMENT: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation ====
2
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation_20251011_014153.log
3
+ Timestamp: 2025-10-11 01:41:53
4
+ =====================================
5
+ Processing: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation
6
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
7
+ import pynvml # type: ignore[import]
8
+ [2025-10-11 01:41:56,040] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
9
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
10
+ warnings.warn(
11
+ config_mask.torch_dtype: torch.bfloat16
12
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
13
+ Load mask model from /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation over.
14
+ TinyLlavaConfig {
15
+ "architectures": [
16
+ "TinyLlavaForConditionalGeneration"
17
+ ],
18
+ "backward_type_connector": "normal",
19
+ "cache_dir": null,
20
+ "connector_type": "mlp2x_gelu",
21
+ "hidden_size": 896,
22
+ "ignore_index": -100,
23
+ "image_aspect_ratio": "square",
24
+ "image_token_index": -200,
25
+ "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
26
+ "mask_model": [
27
+ "llm",
28
+ "connector"
29
+ ],
30
+ "mask_type_connector": "soft",
31
+ "model_type": "tinyllava",
32
+ "num_queries": 128,
33
+ "num_resampler_layers": 3,
34
+ "pad_token": "<|endoftext|>",
35
+ "resampler_hidden_size": 768,
36
+ "sparsity_connector": null,
37
+ "subnet_type_connector": "global",
38
+ "temperature_connector": 0.7,
39
+ "text_config": {
40
+ "_name_or_path": "Qwen/Qwen2.5-0.5B",
41
+ "architectures": [
42
+ "Qwen2ForCausalLM"
43
+ ],
44
+ "backward_type": "normal",
45
+ "bos_token_id": 151643,
46
+ "eos_token_id": 151643,
47
+ "hidden_size": 896,
48
+ "intermediate_size": 4864,
49
+ "mask_type": "soft",
50
+ "masked_layers": "all",
51
+ "max_position_embeddings": 32768,
52
+ "max_window_layers": 24,
53
+ "model_type": "qwen2",
54
+ "num_attention_heads": 14,
55
+ "num_hidden_layers": 24,
56
+ "num_key_value_heads": 2,
57
+ "rope_theta": 1000000.0,
58
+ "sliding_window": 32768,
59
+ "subnet_mode": "both",
60
+ "subnet_type": "None",
61
+ "temperature_attn": 0.7,
62
+ "temperature_mlp": 0.7,
63
+ "tie_word_embeddings": true,
64
+ "torch_dtype": "bfloat16",
65
+ "use_mrope": false,
66
+ "use_sliding_window": false,
67
+ "vocab_size": 151936
68
+ },
69
+ "threshold_connector": null,
70
+ "tokenizer_model_max_length": 2048,
71
+ "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
72
+ "tokenizer_padding_side": "right",
73
+ "tokenizer_use_fast": false,
74
+ "torch_dtype": "bfloat16",
75
+ "transformers_version": "4.40.1",
76
+ "tune_type_connector": "full",
77
+ "tune_type_llm": "full",
78
+ "tune_type_vision_tower": "frozen",
79
+ "tune_vision_tower_from_layer": 0,
80
+ "use_cache": true,
81
+ "vision_config": {
82
+ "hidden_act": "gelu_pytorch_tanh",
83
+ "hidden_size": 1152,
84
+ "image_size": 384,
85
+ "intermediate_size": 4304,
86
+ "layer_norm_eps": 1e-06,
87
+ "model_name_or_path": "google/siglip-so400m-patch14-384",
88
+ "model_name_or_path2": "",
89
+ "model_type": "siglip_vision_model",
90
+ "num_attention_heads": 16,
91
+ "num_hidden_layers": 27,
92
+ "patch_size": 14
93
+ },
94
+ "vision_feature_layer": -2,
95
+ "vision_feature_select_strategy": "patch",
96
+ "vision_hidden_size": 1152,
97
+ "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
98
+ "vision_model_name_or_path2": "",
99
+ "vocab_size": 151936
100
+ }
101
+
102
+ TinyLlavaForConditionalGeneration(
103
+ (language_model): Qwen2ForCausalLM(
104
+ (model): Qwen2Model(
105
+ (embed_tokens): Embedding(151936, 896)
106
+ (layers): ModuleList(
107
+ (0-23): 24 x Qwen2DecoderLayer(
108
+ (self_attn): Qwen2Attention(
109
+ (q_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
110
+ (k_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
111
+ (v_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
112
+ (o_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=False)
113
+ (rotary_emb): Qwen2RotaryEmbedding()
114
+ )
115
+ (mlp): Qwen2MLP(
116
+ (gate_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
117
+ (up_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
118
+ (down_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=4864, out_features=896, bias=False)
119
+ (act_fn): SiLU()
120
+ )
121
+ (input_layernorm): Qwen2RMSNorm()
122
+ (post_attention_layernorm): Qwen2RMSNorm()
123
+ )
124
+ )
125
+ (norm): Qwen2RMSNorm()
126
+ )
127
+ (lm_head): Linear(in_features=896, out_features=151936, bias=False)
128
+ )
129
+ (vision_tower): SIGLIPVisionTower(
130
+ (_vision_tower): SiglipVisionModel(
131
+ (vision_model): SiglipVisionTransformer(
132
+ (embeddings): SiglipVisionEmbeddings(
133
+ (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
134
+ (position_embedding): Embedding(729, 1152)
135
+ )
136
+ (encoder): SiglipEncoder(
137
+ (layers): ModuleList(
138
+ (0-26): 27 x SiglipEncoderLayer(
139
+ (self_attn): SiglipAttention(
140
+ (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
141
+ (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
142
+ (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
143
+ (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
144
+ )
145
+ (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
146
+ (mlp): SiglipMLP(
147
+ (activation_fn): PytorchGELUTanh()
148
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
149
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
150
+ )
151
+ (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
152
+ )
153
+ )
154
+ )
155
+ (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
156
+ (head): SiglipMultiheadAttentionPoolingHead(
157
+ (attention): MultiheadAttention(
158
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
159
+ )
160
+ (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
161
+ (mlp): SiglipMLP(
162
+ (activation_fn): PytorchGELUTanh()
163
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
164
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
165
+ )
166
+ )
167
+ )
168
+ )
169
+ )
170
+ (connector): MLPConnector(
171
+ (_connector): Sequential(
172
+ (0): SupermaskLinearSparsity_SoftForward_Normal(in_features=1152, out_features=896, bias=True)
173
+ (1): GELU(approximate='none')
174
+ (2): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
+ )
+ )
+ )
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
+ return self.fget.__get__(instance, owner)()
+ loading language model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/language_model
+ Loading vision tower from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/vision_tower
+ Loading connector from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/connector/pytorch_model.bin...
+ Load base model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain over.
+ TinyLlavaConfig {
+ "cache_dir": null,
+ "connector_type": "mlp2x_gelu",
+ "hidden_size": 896,
+ "ignore_index": -100,
+ "image_aspect_ratio": "square",
+ "image_token_index": -200,
+ "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "model_type": "tinyllava",
+ "num_queries": 128,
+ "num_resampler_layers": 3,
+ "pad_token": "<|endoftext|>",
+ "pad_token_id": 151643,
+ "resampler_hidden_size": 768,
+ "text_config": {
+ "_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "hidden_size": 896,
+ "intermediate_size": 4864,
+ "max_position_embeddings": 32768,
+ "max_window_layers": 24,
+ "model_type": "qwen2",
+ "num_attention_heads": 14,
+ "num_hidden_layers": 24,
+ "num_key_value_heads": 2,
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "tie_word_embeddings": true,
+ "use_mrope": false,
+ "use_sliding_window": false,
+ "vocab_size": 151936
+ },
+ "tokenizer_model_max_length": 2048,
+ "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "tokenizer_padding_side": "right",
+ "tokenizer_use_fast": false,
+ "transformers_version": "4.40.1",
+ "tune_type_connector": "full",
+ "tune_type_llm": "frozen",
+ "tune_type_vision_tower": "frozen",
+ "tune_vision_tower_from_layer": 0,
+ "use_cache": true,
+ "vision_config": {
+ "hidden_act": "gelu_pytorch_tanh",
+ "hidden_size": 1152,
+ "image_size": 384,
+ "intermediate_size": 4304,
+ "layer_norm_eps": 1e-06,
+ "model_name_or_path": "google/siglip-so400m-patch14-384",
+ "model_name_or_path2": "",
+ "model_type": "siglip_vision_model",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 27,
+ "patch_size": 14
+ },
+ "vision_feature_layer": -2,
+ "vision_feature_select_strategy": "patch",
+ "vision_hidden_size": 1152,
+ "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
+ "vision_model_name_or_path2": "",
+ "vocab_size": 151936
+ }
+
+ TinyLlavaForConditionalGeneration(
+ (language_model): Qwen2ForCausalLM(
+ (model): Qwen2Model(
+ (embed_tokens): Embedding(151936, 896)
+ (layers): ModuleList(
+ (0-23): 24 x Qwen2DecoderLayer(
+ (self_attn): Qwen2Attention(
+ (q_proj): Linear(in_features=896, out_features=896, bias=True)
+ (k_proj): Linear(in_features=896, out_features=128, bias=True)
+ (v_proj): Linear(in_features=896, out_features=128, bias=True)
+ (o_proj): Linear(in_features=896, out_features=896, bias=False)
+ (rotary_emb): Qwen2RotaryEmbedding()
+ )
+ (mlp): Qwen2MLP(
+ (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
+ (up_proj): Linear(in_features=896, out_features=4864, bias=False)
+ (down_proj): Linear(in_features=4864, out_features=896, bias=False)
+ (act_fn): SiLU()
+ )
+ (input_layernorm): Qwen2RMSNorm()
+ (post_attention_layernorm): Qwen2RMSNorm()
+ )
+ )
+ (norm): Qwen2RMSNorm()
+ )
+ (lm_head): Linear(in_features=896, out_features=151936, bias=False)
+ )
+ (vision_tower): SIGLIPVisionTower(
+ (_vision_tower): SiglipVisionModel(
+ (vision_model): SiglipVisionTransformer(
+ (embeddings): SiglipVisionEmbeddings(
+ (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
+ (position_embedding): Embedding(729, 1152)
+ )
+ (encoder): SiglipEncoder(
+ (layers): ModuleList(
+ (0-26): 27 x SiglipEncoderLayer(
+ (self_attn): SiglipAttention(
+ (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ )
+ )
+ )
+ (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (head): SiglipMultiheadAttentionPoolingHead(
+ (attention): MultiheadAttention(
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ )
+ )
+ )
+ )
+ (connector): MLPConnector(
+ (_connector): Sequential(
+ (0): Linear(in_features=1152, out_features=896, bias=True)
+ (1): GELU(approximate='none')
+ (2): Linear(in_features=896, out_features=896, bias=True)
+ )
+ )
+ )
+ Collect masks for language model over.
+ Collect masks for connector over.
+ Applying mask on model.layers.0.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.q_proj.
+ Applying mask on model.layers.0.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.k_proj.
+ Applying mask on model.layers.0.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.v_proj.
+ Applying mask on model.layers.0.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.o_proj.
+ Applying mask on model.layers.0.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.gate_proj.
+ Applying mask on model.layers.0.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.up_proj.
+ Applying mask on model.layers.0.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.down_proj.
+ Applying mask on model.layers.1.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.q_proj.
+ Applying mask on model.layers.1.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.k_proj.
+ Applying mask on model.layers.1.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.v_proj.
+ Applying mask on model.layers.1.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.o_proj.
+ Applying mask on model.layers.1.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.gate_proj.
+ Applying mask on model.layers.1.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.up_proj.
+ Applying mask on model.layers.1.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.down_proj.
+ Applying mask on model.layers.2.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.q_proj.
+ Applying mask on model.layers.2.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.k_proj.
+ Applying mask on model.layers.2.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.v_proj.
+ Applying mask on model.layers.2.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.o_proj.
+ Applying mask on model.layers.2.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.gate_proj.
+ Applying mask on model.layers.2.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.up_proj.
+ Applying mask on model.layers.2.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.down_proj.
+ Applying mask on model.layers.3.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.q_proj.
+ Applying mask on model.layers.3.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.k_proj.
+ Applying mask on model.layers.3.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.v_proj.
+ Applying mask on model.layers.3.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.o_proj.
+ Applying mask on model.layers.3.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.gate_proj.
+ Applying mask on model.layers.3.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.up_proj.
+ Applying mask on model.layers.3.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.down_proj.
+ Applying mask on model.layers.4.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.q_proj.
+ Applying mask on model.layers.4.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.k_proj.
+ Applying mask on model.layers.4.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.v_proj.
+ Applying mask on model.layers.4.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.o_proj.
+ Applying mask on model.layers.4.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.gate_proj.
+ Applying mask on model.layers.4.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.up_proj.
+ Applying mask on model.layers.4.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.down_proj.
+ Applying mask on model.layers.5.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.q_proj.
+ Applying mask on model.layers.5.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.k_proj.
+ Applying mask on model.layers.5.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.v_proj.
+ Applying mask on model.layers.5.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.o_proj.
+ Applying mask on model.layers.5.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.gate_proj.
+ Applying mask on model.layers.5.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.up_proj.
+ Applying mask on model.layers.5.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.down_proj.
+ Applying mask on model.layers.6.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.q_proj.
+ Applying mask on model.layers.6.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.k_proj.
+ Applying mask on model.layers.6.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.v_proj.
+ Applying mask on model.layers.6.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.o_proj.
+ Applying mask on model.layers.6.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.mlp.gate_proj.
+ Applying mask on model.layers.6.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.mlp.up_proj.
+ Applying mask on model.layers.6.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.mlp.down_proj.
+ Applying mask on model.layers.7.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.self_attn.q_proj.
+ Applying mask on model.layers.7.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.self_attn.k_proj.
+ Applying mask on model.layers.7.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.self_attn.v_proj.
+ Applying mask on model.layers.7.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.self_attn.o_proj.
+ Applying mask on model.layers.7.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.mlp.gate_proj.
+ Applying mask on model.layers.7.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.mlp.up_proj.
+ Applying mask on model.layers.7.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.7.mlp.down_proj.
+ Applying mask on model.layers.8.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.self_attn.q_proj.
+ Applying mask on model.layers.8.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.self_attn.k_proj.
+ Applying mask on model.layers.8.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.self_attn.v_proj.
+ Applying mask on model.layers.8.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.self_attn.o_proj.
+ Applying mask on model.layers.8.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.mlp.gate_proj.
+ Applying mask on model.layers.8.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.mlp.up_proj.
+ Applying mask on model.layers.8.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.8.mlp.down_proj.
+ Applying mask on model.layers.9.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.self_attn.q_proj.
+ Applying mask on model.layers.9.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.self_attn.k_proj.
+ Applying mask on model.layers.9.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.self_attn.v_proj.
+ Applying mask on model.layers.9.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.self_attn.o_proj.
+ Applying mask on model.layers.9.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.mlp.gate_proj.
+ Applying mask on model.layers.9.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.mlp.up_proj.
+ Applying mask on model.layers.9.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.9.mlp.down_proj.
+ Applying mask on model.layers.10.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.self_attn.q_proj.
+ Applying mask on model.layers.10.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.self_attn.k_proj.
+ Applying mask on model.layers.10.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.self_attn.v_proj.
+ Applying mask on model.layers.10.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.self_attn.o_proj.
+ Applying mask on model.layers.10.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.mlp.gate_proj.
+ Applying mask on model.layers.10.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.mlp.up_proj.
+ Applying mask on model.layers.10.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.10.mlp.down_proj.
+ Applying mask on model.layers.11.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.self_attn.q_proj.
+ Applying mask on model.layers.11.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.self_attn.k_proj.
+ Applying mask on model.layers.11.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.self_attn.v_proj.
+ Applying mask on model.layers.11.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.self_attn.o_proj.
+ Applying mask on model.layers.11.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.mlp.gate_proj.
+ Applying mask on model.layers.11.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.mlp.up_proj.
+ Applying mask on model.layers.11.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.11.mlp.down_proj.
+ Applying mask on model.layers.12.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.self_attn.q_proj.
+ Applying mask on model.layers.12.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.self_attn.k_proj.
+ Applying mask on model.layers.12.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.self_attn.v_proj.
+ Applying mask on model.layers.12.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.self_attn.o_proj.
+ Applying mask on model.layers.12.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.mlp.gate_proj.
+ Applying mask on model.layers.12.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.mlp.up_proj.
+ Applying mask on model.layers.12.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.12.mlp.down_proj.
+ Applying mask on model.layers.13.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.self_attn.q_proj.
+ Applying mask on model.layers.13.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.self_attn.k_proj.
+ Applying mask on model.layers.13.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.self_attn.v_proj.
+ Applying mask on model.layers.13.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.self_attn.o_proj.
+ Applying mask on model.layers.13.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.mlp.gate_proj.
+ Applying mask on model.layers.13.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.mlp.up_proj.
+ Applying mask on model.layers.13.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.13.mlp.down_proj.
+ Applying mask on model.layers.14.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.self_attn.q_proj.
+ Applying mask on model.layers.14.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.self_attn.k_proj.
+ Applying mask on model.layers.14.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.self_attn.v_proj.
+ Applying mask on model.layers.14.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.self_attn.o_proj.
+ Applying mask on model.layers.14.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.mlp.gate_proj.
+ Applying mask on model.layers.14.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.mlp.up_proj.
+ Applying mask on model.layers.14.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.14.mlp.down_proj.
+ Applying mask on model.layers.15.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.self_attn.q_proj.
+ Applying mask on model.layers.15.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.self_attn.k_proj.
+ Applying mask on model.layers.15.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.self_attn.v_proj.
+ Applying mask on model.layers.15.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.self_attn.o_proj.
+ Applying mask on model.layers.15.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.mlp.gate_proj.
+ Applying mask on model.layers.15.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.mlp.up_proj.
+ Applying mask on model.layers.15.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.15.mlp.down_proj.
+ Applying mask on model.layers.16.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.self_attn.q_proj.
+ Applying mask on model.layers.16.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.self_attn.k_proj.
+ Applying mask on model.layers.16.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.self_attn.v_proj.
+ Applying mask on model.layers.16.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.self_attn.o_proj.
+ Applying mask on model.layers.16.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.mlp.gate_proj.
+ Applying mask on model.layers.16.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.mlp.up_proj.
+ Applying mask on model.layers.16.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.16.mlp.down_proj.
+ Applying mask on model.layers.17.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.self_attn.q_proj.
+ Applying mask on model.layers.17.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.self_attn.k_proj.
+ Applying mask on model.layers.17.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.self_attn.v_proj.
+ Applying mask on model.layers.17.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.self_attn.o_proj.
+ Applying mask on model.layers.17.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.mlp.gate_proj.
+ Applying mask on model.layers.17.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.mlp.up_proj.
+ Applying mask on model.layers.17.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.17.mlp.down_proj.
+ Applying mask on model.layers.18.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.self_attn.q_proj.
+ Applying mask on model.layers.18.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.self_attn.k_proj.
+ Applying mask on model.layers.18.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.self_attn.v_proj.
+ Applying mask on model.layers.18.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.self_attn.o_proj.
+ Applying mask on model.layers.18.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.mlp.gate_proj.
+ Applying mask on model.layers.18.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.mlp.up_proj.
+ Applying mask on model.layers.18.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.18.mlp.down_proj.
+ Applying mask on model.layers.19.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.self_attn.q_proj.
+ Applying mask on model.layers.19.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.self_attn.k_proj.
+ Applying mask on model.layers.19.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.self_attn.v_proj.
+ Applying mask on model.layers.19.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.self_attn.o_proj.
+ Applying mask on model.layers.19.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.mlp.gate_proj.
+ Applying mask on model.layers.19.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.mlp.up_proj.
+ Applying mask on model.layers.19.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.19.mlp.down_proj.
+ Applying mask on model.layers.20.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.self_attn.q_proj.
+ Applying mask on model.layers.20.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.self_attn.k_proj.
+ Applying mask on model.layers.20.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.self_attn.v_proj.
+ Applying mask on model.layers.20.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.self_attn.o_proj.
+ Applying mask on model.layers.20.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.mlp.gate_proj.
+ Applying mask on model.layers.20.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.mlp.up_proj.
+ Applying mask on model.layers.20.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.20.mlp.down_proj.
+ Applying mask on model.layers.21.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.self_attn.q_proj.
+ Applying mask on model.layers.21.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.self_attn.k_proj.
+ Applying mask on model.layers.21.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.self_attn.v_proj.
+ Applying mask on model.layers.21.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.self_attn.o_proj.
+ Applying mask on model.layers.21.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.mlp.gate_proj.
+ Applying mask on model.layers.21.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.mlp.up_proj.
+ Applying mask on model.layers.21.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.21.mlp.down_proj.
+ Applying mask on model.layers.22.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.self_attn.q_proj.
+ Applying mask on model.layers.22.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.self_attn.k_proj.
+ Applying mask on model.layers.22.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.self_attn.v_proj.
+ Applying mask on model.layers.22.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.self_attn.o_proj.
+ Applying mask on model.layers.22.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.mlp.gate_proj.
+ Applying mask on model.layers.22.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.mlp.up_proj.
+ Applying mask on model.layers.22.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.22.mlp.down_proj.
+ Applying mask on model.layers.23.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.self_attn.q_proj.
+ Applying mask on model.layers.23.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.self_attn.k_proj.
+ Applying mask on model.layers.23.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.self_attn.v_proj.
+ Applying mask on model.layers.23.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.self_attn.o_proj.
+ Applying mask on model.layers.23.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.mlp.gate_proj.
+ Applying mask on model.layers.23.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.mlp.up_proj.
+ Applying mask on model.layers.23.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.23.mlp.down_proj.
+ Applying mask on _connector.0 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on _connector.0.
+ Applying mask on _connector.2 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on _connector.2.
+ Using cleaned config_mask (without mask parameters) for saving.
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
+ import pynvml # type: ignore[import]
+ [2025-10-11 01:42:37,533] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ Traceback (most recent call last):
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
+ return _run_code(code, main_globals, None,
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 86, in _run_code
+ exec(code, run_globals)
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 180, in <module>
+ eval_model(args)
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 94, in eval_model
+ questions = json.load(open(os.path.expanduser(args.question_file), "r"))
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/anns_for_eval.json'
+ Traceback (most recent call last):
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 31, in <module>
+ eval_model(args)
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 7, in eval_model
+ answers = [json.loads(q) for q in open(os.path.expanduser(args.answers_file), "r")]
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/answers/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation-mask_applied.jsonl'
+ scripts/eval/mmmu.sh: line 23: cd: /s3-code/ywang29/datasets/tinyllava/eval/MMMU/.../eval/MMMU/eval: No such file or directory
+ python: can't open file '/nfs/ywang29/TinyLLaVA/main_eval_only.py': [Errno 2] No such file or directory
+ ==== EXPERIMENT COMPLETED: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation ====
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation_20251011_014153.log
+ Timestamp: 2025-10-11 01:42:43
+ =====================================
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.7_2e-1_connector-1.0_0.7_2e-1_ablation_20251011_020942.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.9_2e-1_connector-1.0_0.9_2e-1_ablation_20251011_014243.log ADDED
@@ -0,0 +1,676 @@
+ ==== STARTING EXPERIMENT: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.9_2e-1_connector-1.0_0.9_2e-1_ablation ====
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.9_2e-1_connector-1.0_0.9_2e-1_ablation_20251011_014243.log
+ Timestamp: 2025-10-11 01:42:43
+ =====================================
+ Processing: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.9_2e-1_connector-1.0_0.9_2e-1_ablation
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
+ import pynvml # type: ignore[import]
+ [2025-10-11 01:42:46,516] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
+ warnings.warn(
+ config_mask.torch_dtype: torch.bfloat16
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ Load mask model from /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.9_2e-1_connector-1.0_0.9_2e-1_ablation over.
+ TinyLlavaConfig {
+ "architectures": [
+ "TinyLlavaForConditionalGeneration"
+ ],
+ "backward_type_connector": "normal",
+ "cache_dir": null,
+ "connector_type": "mlp2x_gelu",
+ "hidden_size": 896,
+ "ignore_index": -100,
+ "image_aspect_ratio": "square",
+ "image_token_index": -200,
+ "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "mask_model": [
+ "llm",
+ "connector"
+ ],
+ "mask_type_connector": "soft",
+ "model_type": "tinyllava",
+ "num_queries": 128,
+ "num_resampler_layers": 3,
+ "pad_token": "<|endoftext|>",
+ "resampler_hidden_size": 768,
+ "sparsity_connector": null,
+ "subnet_type_connector": "global",
+ "temperature_connector": 0.9,
+ "text_config": {
+ "_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "backward_type": "normal",
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "hidden_size": 896,
+ "intermediate_size": 4864,
+ "mask_type": "soft",
+ "masked_layers": "all",
+ "max_position_embeddings": 32768,
+ "max_window_layers": 24,
+ "model_type": "qwen2",
+ "num_attention_heads": 14,
+ "num_hidden_layers": 24,
+ "num_key_value_heads": 2,
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "subnet_mode": "both",
+ "subnet_type": "None",
+ "temperature_attn": 0.9,
+ "temperature_mlp": 0.9,
+ "tie_word_embeddings": true,
+ "torch_dtype": "bfloat16",
+ "use_mrope": false,
+ "use_sliding_window": false,
+ "vocab_size": 151936
+ },
+ "threshold_connector": null,
+ "tokenizer_model_max_length": 2048,
+ "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "tokenizer_padding_side": "right",
+ "tokenizer_use_fast": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.40.1",
+ "tune_type_connector": "full",
+ "tune_type_llm": "full",
+ "tune_type_vision_tower": "frozen",
+ "tune_vision_tower_from_layer": 0,
+ "use_cache": true,
+ "vision_config": {
+ "hidden_act": "gelu_pytorch_tanh",
+ "hidden_size": 1152,
+ "image_size": 384,
+ "intermediate_size": 4304,
+ "layer_norm_eps": 1e-06,
+ "model_name_or_path": "google/siglip-so400m-patch14-384",
+ "model_name_or_path2": "",
+ "model_type": "siglip_vision_model",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 27,
+ "patch_size": 14
+ },
+ "vision_feature_layer": -2,
+ "vision_feature_select_strategy": "patch",
+ "vision_hidden_size": 1152,
+ "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
+ "vision_model_name_or_path2": "",
+ "vocab_size": 151936
+ }
+
+ TinyLlavaForConditionalGeneration(
+ (language_model): Qwen2ForCausalLM(
+ (model): Qwen2Model(
+ (embed_tokens): Embedding(151936, 896)
+ (layers): ModuleList(
+ (0-23): 24 x Qwen2DecoderLayer(
+ (self_attn): Qwen2Attention(
+ (q_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
+ (k_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
+ (v_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
+ (o_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=False)
+ (rotary_emb): Qwen2RotaryEmbedding()
+ )
+ (mlp): Qwen2MLP(
+ (gate_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
+ (up_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
+ (down_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=4864, out_features=896, bias=False)
+ (act_fn): SiLU()
+ )
+ (input_layernorm): Qwen2RMSNorm()
+ (post_attention_layernorm): Qwen2RMSNorm()
+ )
+ )
+ (norm): Qwen2RMSNorm()
+ )
+ (lm_head): Linear(in_features=896, out_features=151936, bias=False)
+ )
+ (vision_tower): SIGLIPVisionTower(
+ (_vision_tower): SiglipVisionModel(
+ (vision_model): SiglipVisionTransformer(
+ (embeddings): SiglipVisionEmbeddings(
+ (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
+ (position_embedding): Embedding(729, 1152)
+ )
+ (encoder): SiglipEncoder(
+ (layers): ModuleList(
+ (0-26): 27 x SiglipEncoderLayer(
+ (self_attn): SiglipAttention(
+ (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ )
+ )
+ )
+ (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (head): SiglipMultiheadAttentionPoolingHead(
+ (attention): MultiheadAttention(
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ )
+ )
+ )
+ )
+ (connector): MLPConnector(
+ (_connector): Sequential(
+ (0): SupermaskLinearSparsity_SoftForward_Normal(in_features=1152, out_features=896, bias=True)
+ (1): GELU(approximate='none')
+ (2): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
+ )
+ )
+ )
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
+ return self.fget.__get__(instance, owner)()
+ loading language model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/language_model
+ Loading vision tower from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/vision_tower
+ Loading connector from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/connector/pytorch_model.bin...
+ Load base model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain over.
+ TinyLlavaConfig {
+ "cache_dir": null,
+ "connector_type": "mlp2x_gelu",
+ "hidden_size": 896,
+ "ignore_index": -100,
+ "image_aspect_ratio": "square",
+ "image_token_index": -200,
+ "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "model_type": "tinyllava",
+ "num_queries": 128,
+ "num_resampler_layers": 3,
+ "pad_token": "<|endoftext|>",
+ "pad_token_id": 151643,
+ "resampler_hidden_size": 768,
+ "text_config": {
+ "_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "hidden_size": 896,
+ "intermediate_size": 4864,
+ "max_position_embeddings": 32768,
+ "max_window_layers": 24,
+ "model_type": "qwen2",
+ "num_attention_heads": 14,
+ "num_hidden_layers": 24,
+ "num_key_value_heads": 2,
+ "rope_theta": 1000000.0,
+ "sliding_window": 32768,
+ "tie_word_embeddings": true,
+ "use_mrope": false,
+ "use_sliding_window": false,
+ "vocab_size": 151936
+ },
+ "tokenizer_model_max_length": 2048,
+ "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
+ "tokenizer_padding_side": "right",
+ "tokenizer_use_fast": false,
+ "transformers_version": "4.40.1",
+ "tune_type_connector": "full",
+ "tune_type_llm": "frozen",
+ "tune_type_vision_tower": "frozen",
+ "tune_vision_tower_from_layer": 0,
+ "use_cache": true,
+ "vision_config": {
+ "hidden_act": "gelu_pytorch_tanh",
+ "hidden_size": 1152,
+ "image_size": 384,
+ "intermediate_size": 4304,
+ "layer_norm_eps": 1e-06,
+ "model_name_or_path": "google/siglip-so400m-patch14-384",
+ "model_name_or_path2": "",
+ "model_type": "siglip_vision_model",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 27,
+ "patch_size": 14
+ },
+ "vision_feature_layer": -2,
+ "vision_feature_select_strategy": "patch",
+ "vision_hidden_size": 1152,
+ "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
+ "vision_model_name_or_path2": "",
+ "vocab_size": 151936
+ }
+
+ TinyLlavaForConditionalGeneration(
+ (language_model): Qwen2ForCausalLM(
+ (model): Qwen2Model(
+ (embed_tokens): Embedding(151936, 896)
+ (layers): ModuleList(
+ (0-23): 24 x Qwen2DecoderLayer(
+ (self_attn): Qwen2Attention(
+ (q_proj): Linear(in_features=896, out_features=896, bias=True)
+ (k_proj): Linear(in_features=896, out_features=128, bias=True)
+ (v_proj): Linear(in_features=896, out_features=128, bias=True)
+ (o_proj): Linear(in_features=896, out_features=896, bias=False)
+ (rotary_emb): Qwen2RotaryEmbedding()
+ )
+ (mlp): Qwen2MLP(
+ (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
+ (up_proj): Linear(in_features=896, out_features=4864, bias=False)
+ (down_proj): Linear(in_features=4864, out_features=896, bias=False)
+ (act_fn): SiLU()
+ )
+ (input_layernorm): Qwen2RMSNorm()
+ (post_attention_layernorm): Qwen2RMSNorm()
+ )
+ )
+ (norm): Qwen2RMSNorm()
+ )
+ (lm_head): Linear(in_features=896, out_features=151936, bias=False)
+ )
+ (vision_tower): SIGLIPVisionTower(
+ (_vision_tower): SiglipVisionModel(
+ (vision_model): SiglipVisionTransformer(
+ (embeddings): SiglipVisionEmbeddings(
+ (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
+ (position_embedding): Embedding(729, 1152)
+ )
+ (encoder): SiglipEncoder(
+ (layers): ModuleList(
+ (0-26): 27 x SiglipEncoderLayer(
+ (self_attn): SiglipAttention(
+ (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ )
+ )
+ )
+ (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (head): SiglipMultiheadAttentionPoolingHead(
+ (attention): MultiheadAttention(
+ (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
+ )
+ (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
+ (mlp): SiglipMLP(
+ (activation_fn): PytorchGELUTanh()
+ (fc1): Linear(in_features=1152, out_features=4304, bias=True)
+ (fc2): Linear(in_features=4304, out_features=1152, bias=True)
+ )
+ )
+ )
+ )
+ )
+ (connector): MLPConnector(
+ (_connector): Sequential(
+ (0): Linear(in_features=1152, out_features=896, bias=True)
+ (1): GELU(approximate='none')
+ (2): Linear(in_features=896, out_features=896, bias=True)
+ )
+ )
+ )
+ Collect masks for language model over.
+ Collect masks for connector over.
+ Applying mask on model.layers.0.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.q_proj.
+ Applying mask on model.layers.0.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.k_proj.
+ Applying mask on model.layers.0.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.v_proj.
+ Applying mask on model.layers.0.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.self_attn.o_proj.
+ Applying mask on model.layers.0.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.gate_proj.
+ Applying mask on model.layers.0.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.up_proj.
+ Applying mask on model.layers.0.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.0.mlp.down_proj.
+ Applying mask on model.layers.1.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.q_proj.
+ Applying mask on model.layers.1.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.k_proj.
+ Applying mask on model.layers.1.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.v_proj.
+ Applying mask on model.layers.1.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.self_attn.o_proj.
+ Applying mask on model.layers.1.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.gate_proj.
+ Applying mask on model.layers.1.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.up_proj.
+ Applying mask on model.layers.1.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.1.mlp.down_proj.
+ Applying mask on model.layers.2.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.q_proj.
+ Applying mask on model.layers.2.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.k_proj.
+ Applying mask on model.layers.2.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.v_proj.
+ Applying mask on model.layers.2.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.self_attn.o_proj.
+ Applying mask on model.layers.2.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.gate_proj.
+ Applying mask on model.layers.2.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.up_proj.
+ Applying mask on model.layers.2.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.2.mlp.down_proj.
+ Applying mask on model.layers.3.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.q_proj.
+ Applying mask on model.layers.3.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.k_proj.
+ Applying mask on model.layers.3.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.v_proj.
+ Applying mask on model.layers.3.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.self_attn.o_proj.
+ Applying mask on model.layers.3.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.gate_proj.
+ Applying mask on model.layers.3.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.up_proj.
+ Applying mask on model.layers.3.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.3.mlp.down_proj.
+ Applying mask on model.layers.4.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.q_proj.
+ Applying mask on model.layers.4.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.k_proj.
+ Applying mask on model.layers.4.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.v_proj.
+ Applying mask on model.layers.4.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.self_attn.o_proj.
+ Applying mask on model.layers.4.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.gate_proj.
+ Applying mask on model.layers.4.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.up_proj.
+ Applying mask on model.layers.4.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.4.mlp.down_proj.
+ Applying mask on model.layers.5.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.q_proj.
+ Applying mask on model.layers.5.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.k_proj.
+ Applying mask on model.layers.5.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.v_proj.
+ Applying mask on model.layers.5.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.self_attn.o_proj.
+ Applying mask on model.layers.5.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.gate_proj.
+ Applying mask on model.layers.5.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.up_proj.
+ Applying mask on model.layers.5.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.5.mlp.down_proj.
+ Applying mask on model.layers.6.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.q_proj.
+ Applying mask on model.layers.6.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.k_proj.
+ Applying mask on model.layers.6.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.v_proj.
+ Applying mask on model.layers.6.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.self_attn.o_proj.
+ Applying mask on model.layers.6.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applied soft mask on model.layers.6.mlp.gate_proj.
424
+ Applying mask on model.layers.6.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
425
+ Applied soft mask on model.layers.6.mlp.up_proj.
426
+ Applying mask on model.layers.6.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
427
+ Applied soft mask on model.layers.6.mlp.down_proj.
428
+ Applying mask on model.layers.7.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
429
+ Applied soft mask on model.layers.7.self_attn.q_proj.
430
+ Applying mask on model.layers.7.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
431
+ Applied soft mask on model.layers.7.self_attn.k_proj.
432
+ Applying mask on model.layers.7.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
433
+ Applied soft mask on model.layers.7.self_attn.v_proj.
434
+ Applying mask on model.layers.7.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
435
+ Applied soft mask on model.layers.7.self_attn.o_proj.
436
+ Applying mask on model.layers.7.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
437
+ Applied soft mask on model.layers.7.mlp.gate_proj.
438
+ Applying mask on model.layers.7.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
439
+ Applied soft mask on model.layers.7.mlp.up_proj.
440
+ Applying mask on model.layers.7.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
441
+ Applied soft mask on model.layers.7.mlp.down_proj.
442
+ Applying mask on model.layers.8.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
443
+ Applied soft mask on model.layers.8.self_attn.q_proj.
444
+ Applying mask on model.layers.8.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
445
+ Applied soft mask on model.layers.8.self_attn.k_proj.
446
+ Applying mask on model.layers.8.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
447
+ Applied soft mask on model.layers.8.self_attn.v_proj.
448
+ Applying mask on model.layers.8.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
449
+ Applied soft mask on model.layers.8.self_attn.o_proj.
450
+ Applying mask on model.layers.8.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
451
+ Applied soft mask on model.layers.8.mlp.gate_proj.
452
+ Applying mask on model.layers.8.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
453
+ Applied soft mask on model.layers.8.mlp.up_proj.
454
+ Applying mask on model.layers.8.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
455
+ Applied soft mask on model.layers.8.mlp.down_proj.
456
+ Applying mask on model.layers.9.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
457
+ Applied soft mask on model.layers.9.self_attn.q_proj.
458
+ Applying mask on model.layers.9.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
459
+ Applied soft mask on model.layers.9.self_attn.k_proj.
460
+ Applying mask on model.layers.9.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
461
+ Applied soft mask on model.layers.9.self_attn.v_proj.
462
+ Applying mask on model.layers.9.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
463
+ Applied soft mask on model.layers.9.self_attn.o_proj.
464
+ Applying mask on model.layers.9.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
465
+ Applied soft mask on model.layers.9.mlp.gate_proj.
466
+ Applying mask on model.layers.9.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
467
+ Applied soft mask on model.layers.9.mlp.up_proj.
468
+ Applying mask on model.layers.9.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
469
+ Applied soft mask on model.layers.9.mlp.down_proj.
470
+ Applying mask on model.layers.10.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
471
+ Applied soft mask on model.layers.10.self_attn.q_proj.
472
+ Applying mask on model.layers.10.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
473
+ Applied soft mask on model.layers.10.self_attn.k_proj.
474
+ Applying mask on model.layers.10.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
475
+ Applied soft mask on model.layers.10.self_attn.v_proj.
476
+ Applying mask on model.layers.10.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
477
+ Applied soft mask on model.layers.10.self_attn.o_proj.
478
+ Applying mask on model.layers.10.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
479
+ Applied soft mask on model.layers.10.mlp.gate_proj.
480
+ Applying mask on model.layers.10.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
481
+ Applied soft mask on model.layers.10.mlp.up_proj.
482
+ Applying mask on model.layers.10.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
483
+ Applied soft mask on model.layers.10.mlp.down_proj.
484
+ Applying mask on model.layers.11.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
485
+ Applied soft mask on model.layers.11.self_attn.q_proj.
486
+ Applying mask on model.layers.11.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
487
+ Applied soft mask on model.layers.11.self_attn.k_proj.
488
+ Applying mask on model.layers.11.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
489
+ Applied soft mask on model.layers.11.self_attn.v_proj.
490
+ Applying mask on model.layers.11.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
491
+ Applied soft mask on model.layers.11.self_attn.o_proj.
492
+ Applying mask on model.layers.11.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
493
+ Applied soft mask on model.layers.11.mlp.gate_proj.
494
+ Applying mask on model.layers.11.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
495
+ Applied soft mask on model.layers.11.mlp.up_proj.
496
+ Applying mask on model.layers.11.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
497
+ Applied soft mask on model.layers.11.mlp.down_proj.
498
+ Applying mask on model.layers.12.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
499
+ Applied soft mask on model.layers.12.self_attn.q_proj.
500
+ Applying mask on model.layers.12.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
501
+ Applied soft mask on model.layers.12.self_attn.k_proj.
502
+ Applying mask on model.layers.12.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
503
+ Applied soft mask on model.layers.12.self_attn.v_proj.
504
+ Applying mask on model.layers.12.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
505
+ Applied soft mask on model.layers.12.self_attn.o_proj.
506
+ Applying mask on model.layers.12.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
507
+ Applied soft mask on model.layers.12.mlp.gate_proj.
508
+ Applying mask on model.layers.12.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
509
+ Applied soft mask on model.layers.12.mlp.up_proj.
510
+ Applying mask on model.layers.12.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
511
+ Applied soft mask on model.layers.12.mlp.down_proj.
512
+ Applying mask on model.layers.13.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
513
+ Applied soft mask on model.layers.13.self_attn.q_proj.
514
+ Applying mask on model.layers.13.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
515
+ Applied soft mask on model.layers.13.self_attn.k_proj.
516
+ Applying mask on model.layers.13.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
517
+ Applied soft mask on model.layers.13.self_attn.v_proj.
518
+ Applying mask on model.layers.13.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
519
+ Applied soft mask on model.layers.13.self_attn.o_proj.
520
+ Applying mask on model.layers.13.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
521
+ Applied soft mask on model.layers.13.mlp.gate_proj.
522
+ Applying mask on model.layers.13.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
523
+ Applied soft mask on model.layers.13.mlp.up_proj.
524
+ Applying mask on model.layers.13.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
525
+ Applied soft mask on model.layers.13.mlp.down_proj.
526
+ Applying mask on model.layers.14.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
527
+ Applied soft mask on model.layers.14.self_attn.q_proj.
528
+ Applying mask on model.layers.14.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
529
+ Applied soft mask on model.layers.14.self_attn.k_proj.
530
+ Applying mask on model.layers.14.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
531
+ Applied soft mask on model.layers.14.self_attn.v_proj.
532
+ Applying mask on model.layers.14.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
533
+ Applied soft mask on model.layers.14.self_attn.o_proj.
534
+ Applying mask on model.layers.14.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
535
+ Applied soft mask on model.layers.14.mlp.gate_proj.
536
+ Applying mask on model.layers.14.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
537
+ Applied soft mask on model.layers.14.mlp.up_proj.
538
+ Applying mask on model.layers.14.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
539
+ Applied soft mask on model.layers.14.mlp.down_proj.
540
+ Applying mask on model.layers.15.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
541
+ Applied soft mask on model.layers.15.self_attn.q_proj.
542
+ Applying mask on model.layers.15.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
543
+ Applied soft mask on model.layers.15.self_attn.k_proj.
544
+ Applying mask on model.layers.15.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
545
+ Applied soft mask on model.layers.15.self_attn.v_proj.
546
+ Applying mask on model.layers.15.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
547
+ Applied soft mask on model.layers.15.self_attn.o_proj.
548
+ Applying mask on model.layers.15.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
549
+ Applied soft mask on model.layers.15.mlp.gate_proj.
550
+ Applying mask on model.layers.15.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
551
+ Applied soft mask on model.layers.15.mlp.up_proj.
552
+ Applying mask on model.layers.15.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
553
+ Applied soft mask on model.layers.15.mlp.down_proj.
554
+ Applying mask on model.layers.16.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
555
+ Applied soft mask on model.layers.16.self_attn.q_proj.
556
+ Applying mask on model.layers.16.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
557
+ Applied soft mask on model.layers.16.self_attn.k_proj.
558
+ Applying mask on model.layers.16.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
559
+ Applied soft mask on model.layers.16.self_attn.v_proj.
560
+ Applying mask on model.layers.16.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
561
+ Applied soft mask on model.layers.16.self_attn.o_proj.
562
+ Applying mask on model.layers.16.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
563
+ Applied soft mask on model.layers.16.mlp.gate_proj.
564
+ Applying mask on model.layers.16.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
565
+ Applied soft mask on model.layers.16.mlp.up_proj.
566
+ Applying mask on model.layers.16.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
567
+ Applied soft mask on model.layers.16.mlp.down_proj.
568
+ Applying mask on model.layers.17.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
569
+ Applied soft mask on model.layers.17.self_attn.q_proj.
570
+ Applying mask on model.layers.17.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
571
+ Applied soft mask on model.layers.17.self_attn.k_proj.
572
+ Applying mask on model.layers.17.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
573
+ Applied soft mask on model.layers.17.self_attn.v_proj.
574
+ Applying mask on model.layers.17.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
575
+ Applied soft mask on model.layers.17.self_attn.o_proj.
576
+ Applying mask on model.layers.17.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
577
+ Applied soft mask on model.layers.17.mlp.gate_proj.
578
+ Applying mask on model.layers.17.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
579
+ Applied soft mask on model.layers.17.mlp.up_proj.
580
+ Applying mask on model.layers.17.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
581
+ Applied soft mask on model.layers.17.mlp.down_proj.
582
+ Applying mask on model.layers.18.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
583
+ Applied soft mask on model.layers.18.self_attn.q_proj.
584
+ Applying mask on model.layers.18.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
585
+ Applied soft mask on model.layers.18.self_attn.k_proj.
586
+ Applying mask on model.layers.18.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
587
+ Applied soft mask on model.layers.18.self_attn.v_proj.
588
+ Applying mask on model.layers.18.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
589
+ Applied soft mask on model.layers.18.self_attn.o_proj.
590
+ Applying mask on model.layers.18.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
591
+ Applied soft mask on model.layers.18.mlp.gate_proj.
592
+ Applying mask on model.layers.18.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
593
+ Applied soft mask on model.layers.18.mlp.up_proj.
594
+ Applying mask on model.layers.18.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
595
+ Applied soft mask on model.layers.18.mlp.down_proj.
596
+ Applying mask on model.layers.19.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
597
+ Applied soft mask on model.layers.19.self_attn.q_proj.
598
+ Applying mask on model.layers.19.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
599
+ Applied soft mask on model.layers.19.self_attn.k_proj.
600
+ Applying mask on model.layers.19.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
601
+ Applied soft mask on model.layers.19.self_attn.v_proj.
602
+ Applying mask on model.layers.19.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
603
+ Applied soft mask on model.layers.19.self_attn.o_proj.
604
+ Applying mask on model.layers.19.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
605
+ Applied soft mask on model.layers.19.mlp.gate_proj.
606
+ Applying mask on model.layers.19.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
607
+ Applied soft mask on model.layers.19.mlp.up_proj.
608
+ Applying mask on model.layers.19.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
609
+ Applied soft mask on model.layers.19.mlp.down_proj.
610
+ Applying mask on model.layers.20.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
611
+ Applied soft mask on model.layers.20.self_attn.q_proj.
612
+ Applying mask on model.layers.20.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
613
+ Applied soft mask on model.layers.20.self_attn.k_proj.
614
+ Applying mask on model.layers.20.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
615
+ Applied soft mask on model.layers.20.self_attn.v_proj.
616
+ Applying mask on model.layers.20.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
617
+ Applied soft mask on model.layers.20.self_attn.o_proj.
618
+ Applying mask on model.layers.20.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
619
+ Applied soft mask on model.layers.20.mlp.gate_proj.
620
+ Applying mask on model.layers.20.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
621
+ Applied soft mask on model.layers.20.mlp.up_proj.
622
+ Applying mask on model.layers.20.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
623
+ Applied soft mask on model.layers.20.mlp.down_proj.
624
+ Applying mask on model.layers.21.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
625
+ Applied soft mask on model.layers.21.self_attn.q_proj.
626
+ Applying mask on model.layers.21.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
627
+ Applied soft mask on model.layers.21.self_attn.k_proj.
628
+ Applying mask on model.layers.21.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
629
+ Applied soft mask on model.layers.21.self_attn.v_proj.
630
+ Applying mask on model.layers.21.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
631
+ Applied soft mask on model.layers.21.self_attn.o_proj.
632
+ Applying mask on model.layers.21.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
633
+ Applied soft mask on model.layers.21.mlp.gate_proj.
634
+ Applying mask on model.layers.21.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
635
+ Applied soft mask on model.layers.21.mlp.up_proj.
636
+ Applying mask on model.layers.21.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
637
+ Applied soft mask on model.layers.21.mlp.down_proj.
638
+ Applying mask on model.layers.22.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
639
+ Applied soft mask on model.layers.22.self_attn.q_proj.
640
+ Applying mask on model.layers.22.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
641
+ Applied soft mask on model.layers.22.self_attn.k_proj.
642
+ Applying mask on model.layers.22.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
643
+ Applied soft mask on model.layers.22.self_attn.v_proj.
644
+ Applying mask on model.layers.22.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
645
+ Applied soft mask on model.layers.22.self_attn.o_proj.
646
+ Applying mask on model.layers.22.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
647
+ Applied soft mask on model.layers.22.mlp.gate_proj.
648
+ Applying mask on model.layers.22.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
649
+ Applied soft mask on model.layers.22.mlp.up_proj.
650
+ Applying mask on model.layers.22.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
651
+ Applied soft mask on model.layers.22.mlp.down_proj.
652
+ Applying mask on model.layers.23.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
653
+ Applied soft mask on model.layers.23.self_attn.q_proj.
654
+ Applying mask on model.layers.23.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
655
+ Applied soft mask on model.layers.23.self_attn.k_proj.
656
+ Applying mask on model.layers.23.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
657
+ Applied soft mask on model.layers.23.self_attn.v_proj.
658
+ Applying mask on model.layers.23.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
659
+ Applied soft mask on model.layers.23.self_attn.o_proj.
660
+ Applying mask on model.layers.23.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
661
+ Applied soft mask on model.layers.23.mlp.gate_proj.
662
+ Applying mask on model.layers.23.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
663
+ Applied soft mask on model.layers.23.mlp.up_proj.
664
+ Applying mask on model.layers.23.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
665
+ Applied soft mask on model.layers.23.mlp.down_proj.
666
+ Applying mask on _connector.0 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
667
+ Applied soft mask on _connector.0.
668
+ Applying mask on _connector.2 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
669
+ Applied soft mask on _connector.2.
670
+ Using cleaned config_mask (without mask parameters) for saving.
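The block above records a soft mask being multiplied into every attention and MLP projection (and the two connector layers) in bfloat16. As a rough illustration only — the helper below is a hypothetical sketch, not the repository's apply_masks.py — a soft mask of this kind can be folded into an nn.Linear like so:

```python
# Hypothetical sketch of the "soft mask" step logged above; helper name, mask
# source, and the 0..1 mask values are assumptions for illustration only.
import torch
import torch.nn as nn

def apply_soft_mask(module: nn.Linear, mask: torch.Tensor, name: str) -> None:
    # Align the mask dtype/device with the module before multiplying, mirroring
    # the "mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16" log line.
    print(f"Applying mask on {name} with mask_dtype={mask.dtype}, module_dtype={module.weight.dtype}")
    mask = mask.to(dtype=module.weight.dtype, device=module.weight.device)
    with torch.no_grad():
        module.weight.mul_(mask)  # element-wise soft mask in [0, 1]
    print(f"Applied soft mask on {name}.")

# Usage: scale every weight of a toy projection by 0.5.
proj = nn.Linear(16, 16, dtype=torch.bfloat16)
apply_soft_mask(proj, torch.full_like(proj.weight, 0.5), "model.layers.0.self_attn.q_proj")
```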
671
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
672
+ import pynvml # type: ignore[import]
673
+ [2025-10-11 01:43:28,649] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
674
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
675
+ warnings.warn(
676
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_0.9_2e-1_connector-1.0_0.9_2e-1_ablation_20251011_021905.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.1_2e-1_connector-1.0_1.1_2e-1_ablation_20251011_022746.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.3_2e-1_connector-1.0_1.3_2e-1_ablation_20251011_023449.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.5_2e-1_connector-1.0_1.5_2e-1_ablation_20251011_024052.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation_20251011_024734.log ADDED
@@ -0,0 +1,92 @@
1
+ ==== STARTING EXPERIMENT: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation ====
2
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation_20251011_024734.log
3
+ Timestamp: 2025-10-11 02:47:34
4
+ =====================================
5
+ Processing: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation
6
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
7
+ import pynvml # type: ignore[import]
8
+ [2025-10-11 02:47:37,748] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
9
+ Traceback (most recent call last):
10
+ File "/nfs/ywang29/TinyLLaVA/scripts/apply_masks.py", line 488, in <module>
11
+ main()
12
+ File "/nfs/ywang29/TinyLLaVA/scripts/apply_masks.py", line 123, in main
13
+ config_mask = TinyLlavaConfig.from_pretrained(model_args.mask_model_name_or_path)
14
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 602, in from_pretrained
15
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
16
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 631, in get_config_dict
17
+ config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
18
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 686, in _get_config_dict
19
+ resolved_config_file = cached_file(
20
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 369, in cached_file
21
+ raise EnvironmentError(
22
+ OSError: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation does not appear to have a file named config.json. Checkout 'https://huggingface.co//nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation/tree/main' for available files.
23
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
24
+ import pynvml # type: ignore[import]
25
+ [2025-10-11 02:47:45,172] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
26
+ Traceback (most recent call last):
27
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 398, in cached_file
28
+ resolved_file = hf_hub_download(
29
+ File "/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
30
+ validate_repo_id(arg_value)
31
+ File "/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
32
+ raise HFValidationError(
33
+ huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation/mask_applied'. Use `repo_type` argument if needed.
34
+
35
+ The above exception was the direct cause of the following exception:
36
+
37
+ Traceback (most recent call last):
38
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/model/load_model.py", line 38, in load_pretrained_model
39
+ model = TinyLlavaForConditionalGeneration.from_pretrained(model_name_or_path,low_cpu_mem_usage=True)
40
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/modeling_utils.py", line 3015, in from_pretrained
41
+ resolved_config_file = cached_file(
42
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 462, in cached_file
43
+ raise EnvironmentError(
44
+ OSError: Incorrect path_or_model_id: '/nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation/mask_applied'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
45
+
46
+ During handling of the above exception, another exception occurred:
47
+
48
+ Traceback (most recent call last):
49
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 398, in cached_file
50
+ resolved_file = hf_hub_download(
51
+ File "/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
52
+ validate_repo_id(arg_value)
53
+ File "/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
54
+ raise HFValidationError(
55
+ huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation/mask_applied'. Use `repo_type` argument if needed.
56
+
57
+ The above exception was the direct cause of the following exception:
58
+
59
+ Traceback (most recent call last):
60
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
61
+ return _run_code(code, main_globals, None,
62
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 86, in _run_code
63
+ exec(code, run_globals)
64
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 180, in <module>
65
+ eval_model(args)
66
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 88, in eval_model
67
+ model, tokenizer, image_processor, context_len = load_pretrained_model(model_path)
68
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/model/load_model.py", line 40, in load_pretrained_model
69
+ model_config = TinyLlavaConfig.from_pretrained(model_name_or_path)
70
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 602, in from_pretrained
71
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
72
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 631, in get_config_dict
73
+ config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
74
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 686, in _get_config_dict
75
+ resolved_config_file = cached_file(
76
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 462, in cached_file
77
+ raise EnvironmentError(
78
+ OSError: Incorrect path_or_model_id: '/nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation/mask_applied'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
79
+ Traceback (most recent call last):
80
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 31, in <module>
81
+ eval_model(args)
82
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 7, in eval_model
83
+ answers = [json.loads(q) for q in open(os.path.expanduser(args.answers_file), "r")]
84
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/answers/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation-mask_applied.jsonl'
85
+ Traceback (most recent call last):
86
+ File "/s3-code/ywang29/datasets/tinyllava/eval/MMMU/eval/main_eval_only.py", line 19, in <module>
87
+ output_dict = json.load(open(args.output_path))
88
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/answers/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation-mask_applied_output.json'
89
+ ==== EXPERIMENT COMPLETED: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation ====
90
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.7_2e-1_connector-1.0_1.7_2e-1_ablation_20251011_024734.log
91
+ Timestamp: 2025-10-11 02:47:48
92
+ =====================================
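This run fails before any evaluation happens: config.json is missing from the checkpoint, so apply_masks.py aborts, the mask_applied directory is never created, and the MMMU eval and answer-conversion steps then fail on the files it would have produced. A hypothetical pre-flight check (not part of the existing scripts; path and function name are placeholders) could surface the root cause once instead of three cascading tracebacks:

```python
# Hypothetical guard: fail fast when a checkpoint directory lacks config.json,
# rather than letting each downstream step crash on missing outputs.
import os
import sys

def check_checkpoint(path: str) -> None:
    config = os.path.join(path, "config.json")
    if not os.path.isfile(config):
        sys.exit(f"{path} does not contain config.json; skipping mask application and eval.")

check_checkpoint("/path/to/checkpoint")  # placeholder path
```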
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation_20251011_024748.log ADDED
@@ -0,0 +1,92 @@
1
+ ==== STARTING EXPERIMENT: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation ====
2
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation_20251011_024748.log
3
+ Timestamp: 2025-10-11 02:47:48
4
+ =====================================
5
+ Processing: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation
6
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
7
+ import pynvml # type: ignore[import]
8
+ [2025-10-11 02:47:51,862] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
9
+ Traceback (most recent call last):
10
+ File "/nfs/ywang29/TinyLLaVA/scripts/apply_masks.py", line 488, in <module>
11
+ main()
12
+ File "/nfs/ywang29/TinyLLaVA/scripts/apply_masks.py", line 123, in main
13
+ config_mask = TinyLlavaConfig.from_pretrained(model_args.mask_model_name_or_path)
14
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 602, in from_pretrained
15
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
16
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 631, in get_config_dict
17
+ config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
18
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 686, in _get_config_dict
19
+ resolved_config_file = cached_file(
20
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 369, in cached_file
21
+ raise EnvironmentError(
22
+ OSError: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation does not appear to have a file named config.json. Checkout 'https://huggingface.co//nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation/tree/main' for available files.
23
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
24
+ import pynvml # type: ignore[import]
25
+ [2025-10-11 02:47:59,132] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
26
+ Traceback (most recent call last):
27
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 398, in cached_file
28
+ resolved_file = hf_hub_download(
29
+ File "/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
30
+ validate_repo_id(arg_value)
31
+ File "/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
32
+ raise HFValidationError(
33
+ huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation/mask_applied'. Use `repo_type` argument if needed.
34
+
35
+ The above exception was the direct cause of the following exception:
36
+
37
+ Traceback (most recent call last):
38
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/model/load_model.py", line 38, in load_pretrained_model
39
+ model = TinyLlavaForConditionalGeneration.from_pretrained(model_name_or_path,low_cpu_mem_usage=True)
40
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/modeling_utils.py", line 3015, in from_pretrained
41
+ resolved_config_file = cached_file(
42
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 462, in cached_file
43
+ raise EnvironmentError(
44
+ OSError: Incorrect path_or_model_id: '/nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation/mask_applied'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
45
+
46
+ During handling of the above exception, another exception occurred:
47
+
48
+ Traceback (most recent call last):
49
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 398, in cached_file
50
+ resolved_file = hf_hub_download(
51
+ File "/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
52
+ validate_repo_id(arg_value)
53
+ File "/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
54
+ raise HFValidationError(
55
+ huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation/mask_applied'. Use `repo_type` argument if needed.
56
+
57
+ The above exception was the direct cause of the following exception:
58
+
59
+ Traceback (most recent call last):
60
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 196, in _run_module_as_main
61
+ return _run_code(code, main_globals, None,
62
+ File "/opt/conda/envs/tinyllava/lib/python3.10/runpy.py", line 86, in _run_code
63
+ exec(code, run_globals)
64
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 180, in <module>
65
+ eval_model(args)
66
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/eval/model_vqa_mmmu.py", line 88, in eval_model
67
+ model, tokenizer, image_processor, context_len = load_pretrained_model(model_path)
68
+ File "/nfs/ywang29/TinyLLaVA/tinyllava/model/load_model.py", line 40, in load_pretrained_model
69
+ model_config = TinyLlavaConfig.from_pretrained(model_name_or_path)
70
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 602, in from_pretrained
71
+ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
72
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 631, in get_config_dict
73
+ config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
74
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/configuration_utils.py", line 686, in _get_config_dict
75
+ resolved_config_file = cached_file(
76
+ File "/nfs/ywang29/TinyLLaVA/transformers/src/transformers/utils/hub.py", line 462, in cached_file
77
+ raise EnvironmentError(
78
+ OSError: Incorrect path_or_model_id: '/nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation/mask_applied'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
79
+ Traceback (most recent call last):
80
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 31, in <module>
81
+ eval_model(args)
82
+ File "/nfs/ywang29/TinyLLaVA/scripts/convert_answer_to_mmmu.py", line 7, in eval_model
83
+ answers = [json.loads(q) for q in open(os.path.expanduser(args.answers_file), "r")]
84
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/answers/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation-mask_applied.jsonl'
85
+ Traceback (most recent call last):
86
+ File "/s3-code/ywang29/datasets/tinyllava/eval/MMMU/eval/main_eval_only.py", line 19, in <module>
87
+ output_dict = json.load(open(args.output_path))
88
+ FileNotFoundError: [Errno 2] No such file or directory: '/s3-code/ywang29/datasets/tinyllava/eval/MMMU/answers/qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation-mask_applied_output.json'
89
+ ==== EXPERIMENT COMPLETED: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation ====
90
+ Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-1.0_1.9_2e-1_connector-1.0_1.9_2e-1_ablation_20251011_024748.log
91
+ Timestamp: 2025-10-11 02:48:02
92
+ =====================================
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_1_connector-3.0_0.5_1_ablation_20251011_025420.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_1e-1_connector-3.0_0.5_1e-1_ablation_20251011_024802.log ADDED
The diff for this file is too large to render. See raw diff
 
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_3_connector-3.0_0.5_3_ablation_20251011_031423.log ADDED
@@ -0,0 +1,681 @@
0
  0%| | 0/900 [00:00<?, ?it/s]/nfs/ywang29/TinyLLaVA/transformers/src/transformers/generation/configuration_utils.py:492: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.0` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
 
 
1
  0%| | 1/900 [00:27<6:47:32, 27.20s/it]
2
  0%| | 2/900 [00:54<6:44:58, 27.06s/it]
3
  0%| | 3/900 [01:21<6:43:57, 27.02s/it]
4
  0%| | 4/900 [01:42<6:11:18, 24.86s/it]
5
  1%| | 5/900 [02:04<5:53:04, 23.67s/it]
6
  1%| | 6/900 [02:31<6:09:51, 24.82s/it]
7
  1%| | 7/900 [02:58<6:19:46, 25.52s/it]
8
  1%| | 8/900 [03:19<6:00:27, 24.25s/it]
9
  1%| | 9/900 [03:46<6:12:25, 25.08s/it]
10
  1%| | 10/900 [04:08<5:55:55, 23.99s/it]
11
  1%| | 11/900 [04:29<5:44:12, 23.23s/it]
12
  1%|▏ | 12/900 [04:50<5:33:31, 22.54s/it]
13
  1%|▏ | 13/900 [05:11<5:27:34, 22.16s/it]
14
  2%|▏ | 14/900 [05:32<5:21:56, 21.80s/it]
15
  2%|▏ | 15/900 [05:53<5:17:34, 21.53s/it]
16
  2%|▏ | 16/900 [05:54<3:45:11, 15.28s/it]
17
  2%|▏ | 17/900 [06:15<4:11:25, 17.08s/it]
18
  2%|▏ | 18/900 [06:37<4:30:32, 18.40s/it]
19
  2%|▏ | 19/900 [06:58<4:43:28, 19.31s/it]
20
  2%|▏ | 20/900 [07:19<4:51:24, 19.87s/it]
21
  2%|▏ | 21/900 [07:20<3:26:01, 14.06s/it]
22
  2%|▏ | 22/900 [07:47<4:21:31, 17.87s/it]
23
  3%|β–Ž | 23/900 [08:08<4:37:08, 18.96s/it]
24
  3%|β–Ž | 24/900 [08:30<4:47:58, 19.72s/it]
25
  3%|β–Ž | 25/900 [08:51<4:52:05, 20.03s/it]
26
  3%|β–Ž | 26/900 [09:12<4:58:17, 20.48s/it]
27
  3%|β–Ž | 27/900 [09:33<4:58:46, 20.53s/it]
28
  3%|β–Ž | 28/900 [09:54<5:01:37, 20.75s/it]
29
  3%|β–Ž | 29/900 [10:15<5:01:20, 20.76s/it]
30
  3%|β–Ž | 30/900 [10:36<5:01:23, 20.79s/it]
31
  3%|β–Ž | 31/900 [10:36<3:34:35, 14.82s/it]
32
  4%|β–Ž | 32/900 [10:37<2:31:35, 10.48s/it]
33
  4%|β–Ž | 33/900 [10:38<1:49:24, 7.57s/it]
34
  4%|▍ | 34/900 [10:38<1:18:57, 5.47s/it]
35
  4%|▍ | 35/900 [11:00<2:27:53, 10.26s/it]
36
  4%|▍ | 36/900 [11:21<3:13:48, 13.46s/it]
37
  4%|▍ | 37/900 [11:21<2:17:48, 9.58s/it]
38
  4%|▍ | 38/900 [11:22<1:38:49, 6.88s/it]
39
  4%|▍ | 39/900 [11:23<1:13:17, 5.11s/it]
40
  4%|▍ | 40/900 [11:44<2:21:36, 9.88s/it]
41
  5%|▍ | 41/900 [11:45<1:43:49, 7.25s/it]
42
  5%|▍ | 42/900 [11:47<1:21:06, 5.67s/it]
43
  5%|▍ | 43/900 [11:48<1:00:21, 4.23s/it]
44
  5%|▍ | 44/900 [11:48<44:24, 3.11s/it]
45
  5%|β–Œ | 45/900 [11:49<34:30, 2.42s/it]
46
  5%|β–Œ | 46/900 [11:49<25:44, 1.81s/it]
47
  5%|β–Œ | 47/900 [11:50<20:51, 1.47s/it]
48
  5%|β–Œ | 48/900 [11:51<17:35, 1.24s/it]
49
  5%|β–Œ | 49/900 [11:51<15:07, 1.07s/it]
50
  6%|β–Œ | 50/900 [11:52<13:30, 1.05it/s]
51
  6%|β–Œ | 51/900 [12:13<1:40:18, 7.09s/it]
52
  6%|β–Œ | 52/900 [12:35<2:42:27, 11.49s/it]
53
  6%|β–Œ | 53/900 [12:36<1:56:57, 8.29s/it]
54
  6%|β–Œ | 54/900 [12:37<1:24:37, 6.00s/it]
55
  6%|β–Œ | 55/900 [12:37<1:01:33, 4.37s/it]
56
  6%|β–Œ | 56/900 [12:38<44:13, 3.14s/it]
57
  6%|β–‹ | 57/900 [12:38<32:20, 2.30s/it]
58
  6%|β–‹ | 58/900 [12:39<26:25, 1.88s/it]
59
  7%|β–‹ | 59/900 [12:40<21:51, 1.56s/it]
60
  7%|β–‹ | 60/900 [12:40<18:21, 1.31s/it]
61
  7%|β–‹ | 61/900 [13:01<1:38:38, 7.05s/it]
62
  7%|β–‹ | 62/900 [13:22<2:36:11, 11.18s/it]
63
  7%|β–‹ | 63/900 [13:42<3:16:18, 14.07s/it]
64
  7%|β–‹ | 64/900 [13:43<2:19:13, 9.99s/it]
65
  7%|β–‹ | 65/900 [14:04<3:05:57, 13.36s/it]
66
  7%|β–‹ | 66/900 [14:05<2:12:00, 9.50s/it]
67
  7%|β–‹ | 67/900 [14:26<3:01:18, 13.06s/it]
68
  8%|β–Š | 68/900 [14:47<3:36:07, 15.59s/it]
69
  8%|β–Š | 69/900 [15:09<4:00:11, 17.34s/it]
70
  8%|β–Š | 70/900 [15:30<4:14:33, 18.40s/it]
71
  8%|β–Š | 71/900 [15:51<4:25:48, 19.24s/it]
72
  8%|β–Š | 72/900 [15:51<3:08:02, 13.63s/it]
73
  8%|β–Š | 73/900 [16:12<3:36:16, 15.69s/it]
74
  8%|β–Š | 74/900 [16:33<3:56:33, 17.18s/it]
75
  8%|β–Š | 75/900 [16:54<4:13:13, 18.42s/it]
76
  8%|β–Š | 76/900 [17:15<4:24:26, 19.26s/it]
77
  9%|β–Š | 77/900 [17:36<4:30:38, 19.73s/it]
78
  9%|β–Š | 78/900 [17:57<4:35:31, 20.11s/it]
79
  9%|β–‰ | 79/900 [18:18<4:38:00, 20.32s/it]
80
  9%|β–‰ | 80/900 [18:39<4:41:40, 20.61s/it]
81
  9%|β–‰ | 81/900 [19:00<4:44:08, 20.82s/it]
82
  9%|β–‰ | 82/900 [19:22<4:46:40, 21.03s/it]
83
  9%|β–‰ | 83/900 [19:43<4:48:35, 21.19s/it]
84
  9%|β–‰ | 84/900 [19:44<3:23:52, 14.99s/it]
85
  9%|β–‰ | 85/900 [20:05<3:47:04, 16.72s/it]
86
  10%|β–‰ | 86/900 [20:05<2:40:43, 11.85s/it]
87
  10%|β–‰ | 87/900 [20:26<3:16:39, 14.51s/it]
88
  10%|β–‰ | 88/900 [20:47<3:42:31, 16.44s/it]
89
  10%|β–‰ | 89/900 [21:08<4:00:51, 17.82s/it]
90
  10%|β–ˆ | 90/900 [21:29<4:13:24, 18.77s/it]
91
  10%|β–ˆ | 91/900 [21:29<2:59:28, 13.31s/it]
92
  10%|β–ˆ | 92/900 [21:30<2:06:33, 9.40s/it]
93
  10%|β–ˆ | 93/900 [21:30<1:29:50, 6.68s/it]
94
  10%|β–ˆ | 94/900 [21:31<1:04:42, 4.82s/it]
95
  11%|β–ˆ | 95/900 [21:31<46:41, 3.48s/it]
96
  11%|β–ˆ | 96/900 [21:32<38:00, 2.84s/it]
97
  11%|β–ˆ | 97/900 [21:54<1:52:31, 8.41s/it]
98
  11%|β–ˆ | 98/900 [22:15<2:42:39, 12.17s/it]
99
  11%|β–ˆ | 99/900 [22:36<3:20:58, 15.05s/it]
100
  11%|β–ˆ | 100/900 [22:57<3:44:21, 16.83s/it]
101
  11%|β–ˆ | 101/900 [22:58<2:39:28, 11.98s/it]
102
  11%|β–ˆβ– | 102/900 [22:58<1:52:44, 8.48s/it]
103
  11%|β–ˆβ– | 103/900 [23:19<2:43:02, 12.27s/it]
104
  12%|β–ˆβ– | 104/900 [23:20<1:56:11, 8.76s/it]
105
  12%|β–ˆβ– | 105/900 [23:20<1:22:32, 6.23s/it]
106
  12%|β–ˆβ– | 106/900 [23:41<2:19:27, 10.54s/it]
107
  12%|β–ˆβ– | 107/900 [23:41<1:39:39, 7.54s/it]
108
  12%|β–ˆβ– | 108/900 [23:43<1:17:26, 5.87s/it]
109
  12%|β–ˆβ– | 109/900 [24:04<2:16:59, 10.39s/it]
110
  12%|β–ˆβ– | 110/900 [24:05<1:37:51, 7.43s/it]
111
  12%|β–ˆβ– | 111/900 [24:26<2:31:48, 11.54s/it]
112
  12%|β–ˆβ– | 112/900 [24:27<1:48:07, 8.23s/it]
113
  13%|β–ˆβ–Ž | 113/900 [24:48<2:39:08, 12.13s/it]
114
  13%|β–ˆβ–Ž | 114/900 [25:08<3:12:12, 14.67s/it]
115
  13%|β–ˆβ–Ž | 115/900 [25:29<3:35:28, 16.47s/it]
116
  13%|β–ˆβ–Ž | 116/900 [25:32<2:41:07, 12.33s/it]
117
  13%|β–ˆβ–Ž | 117/900 [25:32<1:54:47, 8.80s/it]
118
  13%|β–ˆβ–Ž | 118/900 [25:53<2:41:10, 12.37s/it]
119
  13%|β–ˆβ–Ž | 119/900 [25:54<1:54:56, 8.83s/it]
120
  13%|β–ˆβ–Ž | 120/900 [25:54<1:21:40, 6.28s/it]
121
  13%|β–ˆβ–Ž | 121/900 [26:15<2:20:32, 10.83s/it]
122
  14%|β–ˆβ–Ž | 122/900 [26:36<2:58:25, 13.76s/it]
123
  14%|β–ˆβ–Ž | 123/900 [26:57<3:24:59, 15.83s/it]
[tqdm progress residue from this evaluation: 124/900 at 26:57 elapsed through 735/900 at 3:11:59 elapsed, with per-item times ranging from under a second to about 21 s/it]
/opt/conda/envs/tinyllava/lib/python3.10/site-packages/PIL/Image.py:1047: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
==== STARTING EXPERIMENT: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_3_connector-3.0_0.5_3_ablation ====
Log File: eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_3_connector-3.0_0.5_3_ablation_20251011_031423.log
Timestamp: 2025-10-11 03:14:23
=====================================
Processing: /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_3_connector-3.0_0.5_3_ablation
/opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
[2025-10-11 03:14:26,395] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
config_mask.torch_dtype: torch.bfloat16
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Load mask model from /nfs/ywang29/TinyLLaVA/checkpoints/qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_3_connector-3.0_0.5_3_ablation over.
TinyLlavaConfig {
  "architectures": [
    "TinyLlavaForConditionalGeneration"
  ],
  "backward_type_connector": "normal",
  "cache_dir": null,
  "connector_type": "mlp2x_gelu",
  "hidden_size": 896,
  "ignore_index": -100,
  "image_aspect_ratio": "square",
  "image_token_index": -200,
  "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
  "mask_model": [
    "llm",
    "connector"
  ],
  "mask_type_connector": "soft",
  "model_type": "tinyllava",
  "num_queries": 128,
  "num_resampler_layers": 3,
  "pad_token": "<|endoftext|>",
  "resampler_hidden_size": 768,
  "sparsity_connector": null,
  "subnet_type_connector": "global",
  "temperature_connector": 0.5,
  "text_config": {
    "_name_or_path": "Qwen/Qwen2.5-0.5B",
    "architectures": [
      "Qwen2ForCausalLM"
    ],
    "backward_type": "normal",
    "bos_token_id": 151643,
    "eos_token_id": 151643,
    "hidden_size": 896,
    "intermediate_size": 4864,
    "mask_type": "soft",
    "masked_layers": "all",
    "max_position_embeddings": 32768,
    "max_window_layers": 24,
    "model_type": "qwen2",
    "num_attention_heads": 14,
    "num_hidden_layers": 24,
    "num_key_value_heads": 2,
    "rope_theta": 1000000.0,
    "sliding_window": 32768,
    "subnet_mode": "both",
    "subnet_type": "None",
    "temperature_attn": 0.5,
    "temperature_mlp": 0.5,
    "tie_word_embeddings": true,
    "torch_dtype": "bfloat16",
    "use_mrope": false,
    "use_sliding_window": false,
    "vocab_size": 151936
  },
  "threshold_connector": null,
  "tokenizer_model_max_length": 2048,
  "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
  "tokenizer_padding_side": "right",
  "tokenizer_use_fast": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.40.1",
  "tune_type_connector": "full",
  "tune_type_llm": "full",
  "tune_type_vision_tower": "frozen",
  "tune_vision_tower_from_layer": 0,
  "use_cache": true,
  "vision_config": {
    "hidden_act": "gelu_pytorch_tanh",
    "hidden_size": 1152,
    "image_size": 384,
    "intermediate_size": 4304,
    "layer_norm_eps": 1e-06,
    "model_name_or_path": "google/siglip-so400m-patch14-384",
    "model_name_or_path2": "",
    "model_type": "siglip_vision_model",
    "num_attention_heads": 16,
    "num_hidden_layers": 27,
    "patch_size": 14
  },
  "vision_feature_layer": -2,
  "vision_feature_select_strategy": "patch",
  "vision_hidden_size": 1152,
  "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
  "vision_model_name_or_path2": "",
  "vocab_size": 151936
}
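The config above reports soft masking (mask_type / mask_type_connector = "soft") with temperature 0.5 for the attention, MLP, and connector projections. A minimal sketch of what such a temperature-scaled soft supermask linear layer can look like; the class, attribute names, and the sigmoid(score / T) form below are illustrative assumptions, not TinyLLaVA's actual implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskedLinear(nn.Module):
    """Illustrative soft supermask: weight is scaled by sigmoid(score / T)."""
    def __init__(self, in_features, out_features, temperature=0.5, bias=True):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features, bias=bias)
        # One learnable score per weight; sigmoid(score / T) lies in (0, 1)
        # and acts as a differentiable soft mask on that weight.
        self.scores = nn.Parameter(torch.zeros_like(self.linear.weight))
        self.temperature = temperature

    def forward(self, x):
        soft_mask = torch.sigmoid(self.scores / self.temperature)
        return F.linear(x, self.linear.weight * soft_mask, self.linear.bias)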
TinyLlavaForConditionalGeneration(
  (language_model): Qwen2ForCausalLM(
    (model): Qwen2Model(
      (embed_tokens): Embedding(151936, 896)
      (layers): ModuleList(
        (0-23): 24 x Qwen2DecoderLayer(
          (self_attn): Qwen2Attention(
            (q_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
            (k_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
            (v_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=128, bias=True)
            (o_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=False)
            (rotary_emb): Qwen2RotaryEmbedding()
          )
          (mlp): Qwen2MLP(
            (gate_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
            (up_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=4864, bias=False)
            (down_proj): SupermaskLinearSparsity_SoftForward_Normal(in_features=4864, out_features=896, bias=False)
            (act_fn): SiLU()
          )
          (input_layernorm): Qwen2RMSNorm()
          (post_attention_layernorm): Qwen2RMSNorm()
        )
      )
      (norm): Qwen2RMSNorm()
    )
    (lm_head): Linear(in_features=896, out_features=151936, bias=False)
  )
  (vision_tower): SIGLIPVisionTower(
    (_vision_tower): SiglipVisionModel(
      (vision_model): SiglipVisionTransformer(
        (embeddings): SiglipVisionEmbeddings(
          (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
          (position_embedding): Embedding(729, 1152)
        )
        (encoder): SiglipEncoder(
          (layers): ModuleList(
            (0-26): 27 x SiglipEncoderLayer(
              (self_attn): SiglipAttention(
                (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
                (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
                (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
                (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
              )
              (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
              (mlp): SiglipMLP(
                (activation_fn): PytorchGELUTanh()
                (fc1): Linear(in_features=1152, out_features=4304, bias=True)
                (fc2): Linear(in_features=4304, out_features=1152, bias=True)
              )
              (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
            )
          )
        )
        (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
        (head): SiglipMultiheadAttentionPoolingHead(
          (attention): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
          )
          (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
          (mlp): SiglipMLP(
            (activation_fn): PytorchGELUTanh()
            (fc1): Linear(in_features=1152, out_features=4304, bias=True)
            (fc2): Linear(in_features=4304, out_features=1152, bias=True)
          )
        )
      )
    )
  )
  (connector): MLPConnector(
    (_connector): Sequential(
      (0): SupermaskLinearSparsity_SoftForward_Normal(in_features=1152, out_features=896, bias=True)
      (1): GELU(approximate='none')
      (2): SupermaskLinearSparsity_SoftForward_Normal(in_features=896, out_features=896, bias=True)
    )
  )
)
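Setting aside the Supermask wrappers, the connector printed above is a plain two-layer GELU MLP (connector_type "mlp2x_gelu") mapping SigLIP's 1152-dim patch features into the 896-dim Qwen2.5 embedding space. Its unmasked equivalent, matching the base-model printout later in this log, is simply:

import torch.nn as nn

# mlp2x_gelu connector: SigLIP patch features (1152-d) -> LLM hidden size (896-d)
connector = nn.Sequential(
    nn.Linear(1152, 896),
    nn.GELU(),
    nn.Linear(896, 896),
)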
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
loading language model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/language_model
Loading vision tower from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/vision_tower
Loading connector from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain/connector/pytorch_model.bin...
Load base model from /nfs/ywang29/TinyLLaVA/checkpoints/tiny-llava-Qwen2.5-0.5B-siglip-so400m-patch14-384-pretrain over.
TinyLlavaConfig {
  "cache_dir": null,
  "connector_type": "mlp2x_gelu",
  "hidden_size": 896,
  "ignore_index": -100,
  "image_aspect_ratio": "square",
  "image_token_index": -200,
  "llm_model_name_or_path": "Qwen/Qwen2.5-0.5B",
  "model_type": "tinyllava",
  "num_queries": 128,
  "num_resampler_layers": 3,
  "pad_token": "<|endoftext|>",
  "pad_token_id": 151643,
  "resampler_hidden_size": 768,
  "text_config": {
    "_name_or_path": "Qwen/Qwen2.5-0.5B",
    "architectures": [
      "Qwen2ForCausalLM"
    ],
    "bos_token_id": 151643,
    "eos_token_id": 151643,
    "hidden_size": 896,
    "intermediate_size": 4864,
    "max_position_embeddings": 32768,
    "max_window_layers": 24,
    "model_type": "qwen2",
    "num_attention_heads": 14,
    "num_hidden_layers": 24,
    "num_key_value_heads": 2,
    "rope_theta": 1000000.0,
    "sliding_window": 32768,
    "tie_word_embeddings": true,
    "use_mrope": false,
    "use_sliding_window": false,
    "vocab_size": 151936
  },
  "tokenizer_model_max_length": 2048,
  "tokenizer_name_or_path": "Qwen/Qwen2.5-0.5B",
  "tokenizer_padding_side": "right",
  "tokenizer_use_fast": false,
  "transformers_version": "4.40.1",
  "tune_type_connector": "full",
  "tune_type_llm": "frozen",
  "tune_type_vision_tower": "frozen",
  "tune_vision_tower_from_layer": 0,
  "use_cache": true,
  "vision_config": {
    "hidden_act": "gelu_pytorch_tanh",
    "hidden_size": 1152,
    "image_size": 384,
    "intermediate_size": 4304,
    "layer_norm_eps": 1e-06,
    "model_name_or_path": "google/siglip-so400m-patch14-384",
    "model_name_or_path2": "",
    "model_type": "siglip_vision_model",
    "num_attention_heads": 16,
    "num_hidden_layers": 27,
    "patch_size": 14
  },
  "vision_feature_layer": -2,
  "vision_feature_select_strategy": "patch",
  "vision_hidden_size": 1152,
  "vision_model_name_or_path": "google/siglip-so400m-patch14-384",
  "vision_model_name_or_path2": "",
  "vocab_size": 151936
}
TinyLlavaForConditionalGeneration(
  (language_model): Qwen2ForCausalLM(
    (model): Qwen2Model(
      (embed_tokens): Embedding(151936, 896)
      (layers): ModuleList(
        (0-23): 24 x Qwen2DecoderLayer(
          (self_attn): Qwen2Attention(
            (q_proj): Linear(in_features=896, out_features=896, bias=True)
            (k_proj): Linear(in_features=896, out_features=128, bias=True)
            (v_proj): Linear(in_features=896, out_features=128, bias=True)
            (o_proj): Linear(in_features=896, out_features=896, bias=False)
            (rotary_emb): Qwen2RotaryEmbedding()
          )
          (mlp): Qwen2MLP(
            (gate_proj): Linear(in_features=896, out_features=4864, bias=False)
            (up_proj): Linear(in_features=896, out_features=4864, bias=False)
            (down_proj): Linear(in_features=4864, out_features=896, bias=False)
            (act_fn): SiLU()
          )
          (input_layernorm): Qwen2RMSNorm()
          (post_attention_layernorm): Qwen2RMSNorm()
        )
      )
      (norm): Qwen2RMSNorm()
    )
    (lm_head): Linear(in_features=896, out_features=151936, bias=False)
  )
  (vision_tower): SIGLIPVisionTower(
    (_vision_tower): SiglipVisionModel(
      (vision_model): SiglipVisionTransformer(
        (embeddings): SiglipVisionEmbeddings(
          (patch_embedding): Conv2d(3, 1152, kernel_size=(14, 14), stride=(14, 14), padding=valid)
          (position_embedding): Embedding(729, 1152)
        )
        (encoder): SiglipEncoder(
          (layers): ModuleList(
            (0-26): 27 x SiglipEncoderLayer(
              (self_attn): SiglipAttention(
                (k_proj): Linear(in_features=1152, out_features=1152, bias=True)
                (v_proj): Linear(in_features=1152, out_features=1152, bias=True)
                (q_proj): Linear(in_features=1152, out_features=1152, bias=True)
                (out_proj): Linear(in_features=1152, out_features=1152, bias=True)
              )
              (layer_norm1): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
              (mlp): SiglipMLP(
                (activation_fn): PytorchGELUTanh()
                (fc1): Linear(in_features=1152, out_features=4304, bias=True)
                (fc2): Linear(in_features=4304, out_features=1152, bias=True)
              )
              (layer_norm2): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
            )
          )
        )
        (post_layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
        (head): SiglipMultiheadAttentionPoolingHead(
          (attention): MultiheadAttention(
            (out_proj): NonDynamicallyQuantizableLinear(in_features=1152, out_features=1152, bias=True)
          )
          (layernorm): LayerNorm((1152,), eps=1e-06, elementwise_affine=True)
          (mlp): SiglipMLP(
            (activation_fn): PytorchGELUTanh()
            (fc1): Linear(in_features=1152, out_features=4304, bias=True)
            (fc2): Linear(in_features=4304, out_features=1152, bias=True)
          )
        )
      )
    )
  )
  (connector): MLPConnector(
    (_connector): Sequential(
      (0): Linear(in_features=1152, out_features=896, bias=True)
      (1): GELU(approximate='none')
      (2): Linear(in_features=896, out_features=896, bias=True)
    )
  )
)
Collect masks for language model over.
Collect masks for connector over.
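The "Applying mask … / Applied soft mask …" messages that follow indicate the collected soft masks are folded into each projection's weights, with mask and module both in bfloat16. A minimal sketch of such an in-place fold, assuming the same sigmoid-over-temperature form as the earlier sketch (the helper name apply_soft_mask is hypothetical, not from the TinyLLaVA code):

import torch

@torch.no_grad()
def apply_soft_mask(linear, scores, temperature=0.5):
    # Hypothetical: scale an nn.Linear's weight by sigmoid(score / T) in place.
    # Per the log, mask_dtype and module_dtype are both torch.bfloat16.
    soft_mask = torch.sigmoid(scores.to(linear.weight.dtype) / temperature)
    linear.weight.mul_(soft_mask)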
Applying mask on model.layers.0.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
Applied soft mask on model.layers.0.self_attn.q_proj.
Applying mask on model.layers.0.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
Applied soft mask on model.layers.0.self_attn.k_proj.
Applying mask on model.layers.0.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
Applied soft mask on model.layers.0.self_attn.v_proj.
Applying mask on model.layers.0.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
Applied soft mask on model.layers.0.self_attn.o_proj.
Applying mask on model.layers.0.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
Applied soft mask on model.layers.0.mlp.gate_proj.
Applying mask on model.layers.0.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
Applied soft mask on model.layers.0.mlp.up_proj.
Applying mask on model.layers.0.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
Applied soft mask on model.layers.0.mlp.down_proj.
[identical "Applying mask … / Applied soft mask …" pairs repeat, in the same q/k/v/o_proj then gate/up/down_proj order, for model.layers.1 through model.layers.7]
Applying mask on model.layers.8.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
Applied soft mask on model.layers.8.self_attn.q_proj.
Applying mask on model.layers.8.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
+ Applying mask on model.layers.8.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
445
+ Applied soft mask on model.layers.8.self_attn.k_proj.
446
+ Applying mask on model.layers.8.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
447
+ Applied soft mask on model.layers.8.self_attn.v_proj.
448
+ Applying mask on model.layers.8.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
449
+ Applied soft mask on model.layers.8.self_attn.o_proj.
450
+ Applying mask on model.layers.8.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
451
+ Applied soft mask on model.layers.8.mlp.gate_proj.
452
+ Applying mask on model.layers.8.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
453
+ Applied soft mask on model.layers.8.mlp.up_proj.
454
+ Applying mask on model.layers.8.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
455
+ Applied soft mask on model.layers.8.mlp.down_proj.
456
+ Applying mask on model.layers.9.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
457
+ Applied soft mask on model.layers.9.self_attn.q_proj.
458
+ Applying mask on model.layers.9.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
459
+ Applied soft mask on model.layers.9.self_attn.k_proj.
460
+ Applying mask on model.layers.9.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
461
+ Applied soft mask on model.layers.9.self_attn.v_proj.
462
+ Applying mask on model.layers.9.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
463
+ Applied soft mask on model.layers.9.self_attn.o_proj.
464
+ Applying mask on model.layers.9.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
465
+ Applied soft mask on model.layers.9.mlp.gate_proj.
466
+ Applying mask on model.layers.9.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
467
+ Applied soft mask on model.layers.9.mlp.up_proj.
468
+ Applying mask on model.layers.9.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
469
+ Applied soft mask on model.layers.9.mlp.down_proj.
470
+ Applying mask on model.layers.10.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
471
+ Applied soft mask on model.layers.10.self_attn.q_proj.
472
+ Applying mask on model.layers.10.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
473
+ Applied soft mask on model.layers.10.self_attn.k_proj.
474
+ Applying mask on model.layers.10.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
475
+ Applied soft mask on model.layers.10.self_attn.v_proj.
476
+ Applying mask on model.layers.10.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
477
+ Applied soft mask on model.layers.10.self_attn.o_proj.
478
+ Applying mask on model.layers.10.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
479
+ Applied soft mask on model.layers.10.mlp.gate_proj.
480
+ Applying mask on model.layers.10.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
481
+ Applied soft mask on model.layers.10.mlp.up_proj.
482
+ Applying mask on model.layers.10.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
483
+ Applied soft mask on model.layers.10.mlp.down_proj.
484
+ Applying mask on model.layers.11.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
485
+ Applied soft mask on model.layers.11.self_attn.q_proj.
486
+ Applying mask on model.layers.11.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
487
+ Applied soft mask on model.layers.11.self_attn.k_proj.
488
+ Applying mask on model.layers.11.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
489
+ Applied soft mask on model.layers.11.self_attn.v_proj.
490
+ Applying mask on model.layers.11.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
491
+ Applied soft mask on model.layers.11.self_attn.o_proj.
492
+ Applying mask on model.layers.11.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
493
+ Applied soft mask on model.layers.11.mlp.gate_proj.
494
+ Applying mask on model.layers.11.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
495
+ Applied soft mask on model.layers.11.mlp.up_proj.
496
+ Applying mask on model.layers.11.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
497
+ Applied soft mask on model.layers.11.mlp.down_proj.
498
+ Applying mask on model.layers.12.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
499
+ Applied soft mask on model.layers.12.self_attn.q_proj.
500
+ Applying mask on model.layers.12.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
501
+ Applied soft mask on model.layers.12.self_attn.k_proj.
502
+ Applying mask on model.layers.12.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
503
+ Applied soft mask on model.layers.12.self_attn.v_proj.
504
+ Applying mask on model.layers.12.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
505
+ Applied soft mask on model.layers.12.self_attn.o_proj.
506
+ Applying mask on model.layers.12.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
507
+ Applied soft mask on model.layers.12.mlp.gate_proj.
508
+ Applying mask on model.layers.12.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
509
+ Applied soft mask on model.layers.12.mlp.up_proj.
510
+ Applying mask on model.layers.12.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
511
+ Applied soft mask on model.layers.12.mlp.down_proj.
512
+ Applying mask on model.layers.13.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
513
+ Applied soft mask on model.layers.13.self_attn.q_proj.
514
+ Applying mask on model.layers.13.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
515
+ Applied soft mask on model.layers.13.self_attn.k_proj.
516
+ Applying mask on model.layers.13.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
517
+ Applied soft mask on model.layers.13.self_attn.v_proj.
518
+ Applying mask on model.layers.13.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
519
+ Applied soft mask on model.layers.13.self_attn.o_proj.
520
+ Applying mask on model.layers.13.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
521
+ Applied soft mask on model.layers.13.mlp.gate_proj.
522
+ Applying mask on model.layers.13.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
523
+ Applied soft mask on model.layers.13.mlp.up_proj.
524
+ Applying mask on model.layers.13.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
525
+ Applied soft mask on model.layers.13.mlp.down_proj.
526
+ Applying mask on model.layers.14.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
527
+ Applied soft mask on model.layers.14.self_attn.q_proj.
528
+ Applying mask on model.layers.14.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
529
+ Applied soft mask on model.layers.14.self_attn.k_proj.
530
+ Applying mask on model.layers.14.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
531
+ Applied soft mask on model.layers.14.self_attn.v_proj.
532
+ Applying mask on model.layers.14.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
533
+ Applied soft mask on model.layers.14.self_attn.o_proj.
534
+ Applying mask on model.layers.14.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
535
+ Applied soft mask on model.layers.14.mlp.gate_proj.
536
+ Applying mask on model.layers.14.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
537
+ Applied soft mask on model.layers.14.mlp.up_proj.
538
+ Applying mask on model.layers.14.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
539
+ Applied soft mask on model.layers.14.mlp.down_proj.
540
+ Applying mask on model.layers.15.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
541
+ Applied soft mask on model.layers.15.self_attn.q_proj.
542
+ Applying mask on model.layers.15.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
543
+ Applied soft mask on model.layers.15.self_attn.k_proj.
544
+ Applying mask on model.layers.15.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
545
+ Applied soft mask on model.layers.15.self_attn.v_proj.
546
+ Applying mask on model.layers.15.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
547
+ Applied soft mask on model.layers.15.self_attn.o_proj.
548
+ Applying mask on model.layers.15.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
549
+ Applied soft mask on model.layers.15.mlp.gate_proj.
550
+ Applying mask on model.layers.15.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
551
+ Applied soft mask on model.layers.15.mlp.up_proj.
552
+ Applying mask on model.layers.15.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
553
+ Applied soft mask on model.layers.15.mlp.down_proj.
554
+ Applying mask on model.layers.16.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
555
+ Applied soft mask on model.layers.16.self_attn.q_proj.
556
+ Applying mask on model.layers.16.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
557
+ Applied soft mask on model.layers.16.self_attn.k_proj.
558
+ Applying mask on model.layers.16.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
559
+ Applied soft mask on model.layers.16.self_attn.v_proj.
560
+ Applying mask on model.layers.16.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
561
+ Applied soft mask on model.layers.16.self_attn.o_proj.
562
+ Applying mask on model.layers.16.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
563
+ Applied soft mask on model.layers.16.mlp.gate_proj.
564
+ Applying mask on model.layers.16.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
565
+ Applied soft mask on model.layers.16.mlp.up_proj.
566
+ Applying mask on model.layers.16.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
567
+ Applied soft mask on model.layers.16.mlp.down_proj.
568
+ Applying mask on model.layers.17.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
569
+ Applied soft mask on model.layers.17.self_attn.q_proj.
570
+ Applying mask on model.layers.17.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
571
+ Applied soft mask on model.layers.17.self_attn.k_proj.
572
+ Applying mask on model.layers.17.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
573
+ Applied soft mask on model.layers.17.self_attn.v_proj.
574
+ Applying mask on model.layers.17.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
575
+ Applied soft mask on model.layers.17.self_attn.o_proj.
576
+ Applying mask on model.layers.17.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
577
+ Applied soft mask on model.layers.17.mlp.gate_proj.
578
+ Applying mask on model.layers.17.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
579
+ Applied soft mask on model.layers.17.mlp.up_proj.
580
+ Applying mask on model.layers.17.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
581
+ Applied soft mask on model.layers.17.mlp.down_proj.
582
+ Applying mask on model.layers.18.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
583
+ Applied soft mask on model.layers.18.self_attn.q_proj.
584
+ Applying mask on model.layers.18.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
585
+ Applied soft mask on model.layers.18.self_attn.k_proj.
586
+ Applying mask on model.layers.18.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
587
+ Applied soft mask on model.layers.18.self_attn.v_proj.
588
+ Applying mask on model.layers.18.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
589
+ Applied soft mask on model.layers.18.self_attn.o_proj.
590
+ Applying mask on model.layers.18.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
591
+ Applied soft mask on model.layers.18.mlp.gate_proj.
592
+ Applying mask on model.layers.18.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
593
+ Applied soft mask on model.layers.18.mlp.up_proj.
594
+ Applying mask on model.layers.18.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
595
+ Applied soft mask on model.layers.18.mlp.down_proj.
596
+ Applying mask on model.layers.19.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
597
+ Applied soft mask on model.layers.19.self_attn.q_proj.
598
+ Applying mask on model.layers.19.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
599
+ Applied soft mask on model.layers.19.self_attn.k_proj.
600
+ Applying mask on model.layers.19.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
601
+ Applied soft mask on model.layers.19.self_attn.v_proj.
602
+ Applying mask on model.layers.19.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
603
+ Applied soft mask on model.layers.19.self_attn.o_proj.
604
+ Applying mask on model.layers.19.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
605
+ Applied soft mask on model.layers.19.mlp.gate_proj.
606
+ Applying mask on model.layers.19.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
607
+ Applied soft mask on model.layers.19.mlp.up_proj.
608
+ Applying mask on model.layers.19.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
609
+ Applied soft mask on model.layers.19.mlp.down_proj.
610
+ Applying mask on model.layers.20.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
611
+ Applied soft mask on model.layers.20.self_attn.q_proj.
612
+ Applying mask on model.layers.20.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
613
+ Applied soft mask on model.layers.20.self_attn.k_proj.
614
+ Applying mask on model.layers.20.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
615
+ Applied soft mask on model.layers.20.self_attn.v_proj.
616
+ Applying mask on model.layers.20.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
617
+ Applied soft mask on model.layers.20.self_attn.o_proj.
618
+ Applying mask on model.layers.20.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
619
+ Applied soft mask on model.layers.20.mlp.gate_proj.
620
+ Applying mask on model.layers.20.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
621
+ Applied soft mask on model.layers.20.mlp.up_proj.
622
+ Applying mask on model.layers.20.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
623
+ Applied soft mask on model.layers.20.mlp.down_proj.
624
+ Applying mask on model.layers.21.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
625
+ Applied soft mask on model.layers.21.self_attn.q_proj.
626
+ Applying mask on model.layers.21.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
627
+ Applied soft mask on model.layers.21.self_attn.k_proj.
628
+ Applying mask on model.layers.21.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
629
+ Applied soft mask on model.layers.21.self_attn.v_proj.
630
+ Applying mask on model.layers.21.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
631
+ Applied soft mask on model.layers.21.self_attn.o_proj.
632
+ Applying mask on model.layers.21.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
633
+ Applied soft mask on model.layers.21.mlp.gate_proj.
634
+ Applying mask on model.layers.21.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
635
+ Applied soft mask on model.layers.21.mlp.up_proj.
636
+ Applying mask on model.layers.21.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
637
+ Applied soft mask on model.layers.21.mlp.down_proj.
638
+ Applying mask on model.layers.22.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
639
+ Applied soft mask on model.layers.22.self_attn.q_proj.
640
+ Applying mask on model.layers.22.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
641
+ Applied soft mask on model.layers.22.self_attn.k_proj.
642
+ Applying mask on model.layers.22.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
643
+ Applied soft mask on model.layers.22.self_attn.v_proj.
644
+ Applying mask on model.layers.22.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
645
+ Applied soft mask on model.layers.22.self_attn.o_proj.
646
+ Applying mask on model.layers.22.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
647
+ Applied soft mask on model.layers.22.mlp.gate_proj.
648
+ Applying mask on model.layers.22.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
649
+ Applied soft mask on model.layers.22.mlp.up_proj.
650
+ Applying mask on model.layers.22.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
651
+ Applied soft mask on model.layers.22.mlp.down_proj.
652
+ Applying mask on model.layers.23.self_attn.q_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
653
+ Applied soft mask on model.layers.23.self_attn.q_proj.
654
+ Applying mask on model.layers.23.self_attn.k_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
655
+ Applied soft mask on model.layers.23.self_attn.k_proj.
656
+ Applying mask on model.layers.23.self_attn.v_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
657
+ Applied soft mask on model.layers.23.self_attn.v_proj.
658
+ Applying mask on model.layers.23.self_attn.o_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
659
+ Applied soft mask on model.layers.23.self_attn.o_proj.
660
+ Applying mask on model.layers.23.mlp.gate_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
661
+ Applied soft mask on model.layers.23.mlp.gate_proj.
662
+ Applying mask on model.layers.23.mlp.up_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
663
+ Applied soft mask on model.layers.23.mlp.up_proj.
664
+ Applying mask on model.layers.23.mlp.down_proj with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
665
+ Applied soft mask on model.layers.23.mlp.down_proj.
666
+ Applying mask on _connector.0 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
667
+ Applied soft mask on _connector.0.
668
+ Applying mask on _connector.2 with dtype, mask_dtype=torch.bfloat16, module_dtype=torch.bfloat16
669
+ Applied soft mask on _connector.2.
670
+ Using cleaned config_mask (without mask parameters) for saving.
671
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/torch/cuda/__init__.py:51: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
672
+ import pynvml # type: ignore[import]
673
+ [2025-10-11 03:15:09,192] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect)
674
+ /opt/conda/envs/tinyllava/lib/python3.10/site-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
675
+ warnings.warn(
676
+ Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
677
+
678
  0%| | 0/900 [00:00<?, ?it/s]
/nfs/ywang29/TinyLLaVA/transformers/src/transformers/generation/configuration_utils.py:492: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.0` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
679
+ warnings.warn(
680
+
681
  0%| | 1/900 [00:27<6:47:32, 27.20s/it]
682
  0%| | 2/900 [00:54<6:44:58, 27.06s/it]
683
  0%| | 3/900 [01:21<6:43:57, 27.02s/it]
684
  0%| | 4/900 [01:42<6:11:18, 24.86s/it]
685
  1%| | 5/900 [02:04<5:53:04, 23.67s/it]
686
  1%| | 6/900 [02:31<6:09:51, 24.82s/it]
687
  1%| | 7/900 [02:58<6:19:46, 25.52s/it]
688
  1%| | 8/900 [03:19<6:00:27, 24.25s/it]
689
  1%| | 9/900 [03:46<6:12:25, 25.08s/it]
690
  1%| | 10/900 [04:08<5:55:55, 23.99s/it]
691
  1%| | 11/900 [04:29<5:44:12, 23.23s/it]
692
  1%|▏ | 12/900 [04:50<5:33:31, 22.54s/it]
693
  1%|▏ | 13/900 [05:11<5:27:34, 22.16s/it]
694
  2%|▏ | 14/900 [05:32<5:21:56, 21.80s/it]
695
  2%|▏ | 15/900 [05:53<5:17:34, 21.53s/it]
696
  2%|▏ | 16/900 [05:54<3:45:11, 15.28s/it]
697
  2%|▏ | 17/900 [06:15<4:11:25, 17.08s/it]
698
  2%|▏ | 18/900 [06:37<4:30:32, 18.40s/it]
699
  2%|▏ | 19/900 [06:58<4:43:28, 19.31s/it]
700
  2%|▏ | 20/900 [07:19<4:51:24, 19.87s/it]
701
  2%|▏ | 21/900 [07:20<3:26:01, 14.06s/it]
702
  2%|▏ | 22/900 [07:47<4:21:31, 17.87s/it]
703
  3%|β–Ž | 23/900 [08:08<4:37:08, 18.96s/it]
704
  3%|β–Ž | 24/900 [08:30<4:47:58, 19.72s/it]
705
  3%|β–Ž | 25/900 [08:51<4:52:05, 20.03s/it]
706
  3%|β–Ž | 26/900 [09:12<4:58:17, 20.48s/it]
707
  3%|β–Ž | 27/900 [09:33<4:58:46, 20.53s/it]
708
  3%|β–Ž | 28/900 [09:54<5:01:37, 20.75s/it]
709
  3%|β–Ž | 29/900 [10:15<5:01:20, 20.76s/it]
710
  3%|β–Ž | 30/900 [10:36<5:01:23, 20.79s/it]
711
  3%|β–Ž | 31/900 [10:36<3:34:35, 14.82s/it]
712
  4%|β–Ž | 32/900 [10:37<2:31:35, 10.48s/it]
713
  4%|β–Ž | 33/900 [10:38<1:49:24, 7.57s/it]
714
  4%|▍ | 34/900 [10:38<1:18:57, 5.47s/it]
715
  4%|▍ | 35/900 [11:00<2:27:53, 10.26s/it]
716
  4%|▍ | 36/900 [11:21<3:13:48, 13.46s/it]
717
  4%|▍ | 37/900 [11:21<2:17:48, 9.58s/it]
718
  4%|▍ | 38/900 [11:22<1:38:49, 6.88s/it]
719
  4%|▍ | 39/900 [11:23<1:13:17, 5.11s/it]
720
  4%|▍ | 40/900 [11:44<2:21:36, 9.88s/it]
721
  5%|▍ | 41/900 [11:45<1:43:49, 7.25s/it]
722
  5%|▍ | 42/900 [11:47<1:21:06, 5.67s/it]
723
  5%|▍ | 43/900 [11:48<1:00:21, 4.23s/it]
724
  5%|▍ | 44/900 [11:48<44:24, 3.11s/it]
725
  5%|β–Œ | 45/900 [11:49<34:30, 2.42s/it]
726
  5%|β–Œ | 46/900 [11:49<25:44, 1.81s/it]
727
  5%|β–Œ | 47/900 [11:50<20:51, 1.47s/it]
728
  5%|β–Œ | 48/900 [11:51<17:35, 1.24s/it]
729
  5%|β–Œ | 49/900 [11:51<15:07, 1.07s/it]
730
  6%|β–Œ | 50/900 [11:52<13:30, 1.05it/s]
731
  6%|β–Œ | 51/900 [12:13<1:40:18, 7.09s/it]
732
  6%|β–Œ | 52/900 [12:35<2:42:27, 11.49s/it]
733
  6%|β–Œ | 53/900 [12:36<1:56:57, 8.29s/it]
734
  6%|β–Œ | 54/900 [12:37<1:24:37, 6.00s/it]
735
  6%|β–Œ | 55/900 [12:37<1:01:33, 4.37s/it]
736
  6%|β–Œ | 56/900 [12:38<44:13, 3.14s/it]
737
  6%|β–‹ | 57/900 [12:38<32:20, 2.30s/it]
738
  6%|β–‹ | 58/900 [12:39<26:25, 1.88s/it]
739
  7%|β–‹ | 59/900 [12:40<21:51, 1.56s/it]
740
  7%|β–‹ | 60/900 [12:40<18:21, 1.31s/it]
741
  7%|β–‹ | 61/900 [13:01<1:38:38, 7.05s/it]
742
  7%|β–‹ | 62/900 [13:22<2:36:11, 11.18s/it]
743
  7%|β–‹ | 63/900 [13:42<3:16:18, 14.07s/it]
744
  7%|β–‹ | 64/900 [13:43<2:19:13, 9.99s/it]
745
  7%|β–‹ | 65/900 [14:04<3:05:57, 13.36s/it]
746
  7%|β–‹ | 66/900 [14:05<2:12:00, 9.50s/it]
747
  7%|β–‹ | 67/900 [14:26<3:01:18, 13.06s/it]
748
  8%|β–Š | 68/900 [14:47<3:36:07, 15.59s/it]
749
  8%|β–Š | 69/900 [15:09<4:00:11, 17.34s/it]
750
  8%|β–Š | 70/900 [15:30<4:14:33, 18.40s/it]
751
  8%|β–Š | 71/900 [15:51<4:25:48, 19.24s/it]
752
  8%|β–Š | 72/900 [15:51<3:08:02, 13.63s/it]
753
  8%|β–Š | 73/900 [16:12<3:36:16, 15.69s/it]
754
  8%|β–Š | 74/900 [16:33<3:56:33, 17.18s/it]
755
  8%|β–Š | 75/900 [16:54<4:13:13, 18.42s/it]
756
  8%|β–Š | 76/900 [17:15<4:24:26, 19.26s/it]
757
  9%|β–Š | 77/900 [17:36<4:30:38, 19.73s/it]
758
  9%|β–Š | 78/900 [17:57<4:35:31, 20.11s/it]
759
  9%|β–‰ | 79/900 [18:18<4:38:00, 20.32s/it]
760
  9%|β–‰ | 80/900 [18:39<4:41:40, 20.61s/it]
761
  9%|β–‰ | 81/900 [19:00<4:44:08, 20.82s/it]
762
  9%|β–‰ | 82/900 [19:22<4:46:40, 21.03s/it]
763
  9%|β–‰ | 83/900 [19:43<4:48:35, 21.19s/it]
764
  9%|β–‰ | 84/900 [19:44<3:23:52, 14.99s/it]
765
  9%|β–‰ | 85/900 [20:05<3:47:04, 16.72s/it]
766
  10%|β–‰ | 86/900 [20:05<2:40:43, 11.85s/it]
767
  10%|β–‰ | 87/900 [20:26<3:16:39, 14.51s/it]
768
  10%|β–‰ | 88/900 [20:47<3:42:31, 16.44s/it]
769
  10%|β–‰ | 89/900 [21:08<4:00:51, 17.82s/it]
770
  10%|β–ˆ | 90/900 [21:29<4:13:24, 18.77s/it]
771
  10%|β–ˆ | 91/900 [21:29<2:59:28, 13.31s/it]
772
  10%|β–ˆ | 92/900 [21:30<2:06:33, 9.40s/it]
773
  10%|β–ˆ | 93/900 [21:30<1:29:50, 6.68s/it]
774
  10%|β–ˆ | 94/900 [21:31<1:04:42, 4.82s/it]
775
  11%|β–ˆ | 95/900 [21:31<46:41, 3.48s/it]
776
  11%|β–ˆ | 96/900 [21:32<38:00, 2.84s/it]
777
  11%|β–ˆ | 97/900 [21:54<1:52:31, 8.41s/it]
778
  11%|β–ˆ | 98/900 [22:15<2:42:39, 12.17s/it]
779
  11%|β–ˆ | 99/900 [22:36<3:20:58, 15.05s/it]
780
  11%|β–ˆ | 100/900 [22:57<3:44:21, 16.83s/it]
781
  11%|β–ˆ | 101/900 [22:58<2:39:28, 11.98s/it]
782
  11%|β–ˆβ– | 102/900 [22:58<1:52:44, 8.48s/it]
783
  11%|β–ˆβ– | 103/900 [23:19<2:43:02, 12.27s/it]
784
  12%|β–ˆβ– | 104/900 [23:20<1:56:11, 8.76s/it]
785
  12%|β–ˆβ– | 105/900 [23:20<1:22:32, 6.23s/it]
786
  12%|β–ˆβ– | 106/900 [23:41<2:19:27, 10.54s/it]
787
  12%|β–ˆβ– | 107/900 [23:41<1:39:39, 7.54s/it]
788
  12%|β–ˆβ– | 108/900 [23:43<1:17:26, 5.87s/it]
789
  12%|β–ˆβ– | 109/900 [24:04<2:16:59, 10.39s/it]
790
  12%|β–ˆβ– | 110/900 [24:05<1:37:51, 7.43s/it]
791
  12%|β–ˆβ– | 111/900 [24:26<2:31:48, 11.54s/it]
792
  12%|β–ˆβ– | 112/900 [24:27<1:48:07, 8.23s/it]
793
  13%|β–ˆβ–Ž | 113/900 [24:48<2:39:08, 12.13s/it]
794
  13%|β–ˆβ–Ž | 114/900 [25:08<3:12:12, 14.67s/it]
795
  13%|β–ˆβ–Ž | 115/900 [25:29<3:35:28, 16.47s/it]
796
  13%|β–ˆβ–Ž | 116/900 [25:32<2:41:07, 12.33s/it]
797
  13%|β–ˆβ–Ž | 117/900 [25:32<1:54:47, 8.80s/it]
798
  13%|β–ˆβ–Ž | 118/900 [25:53<2:41:10, 12.37s/it]
799
  13%|β–ˆβ–Ž | 119/900 [25:54<1:54:56, 8.83s/it]
800
  13%|β–ˆβ–Ž | 120/900 [25:54<1:21:40, 6.28s/it]
801
  13%|β–ˆβ–Ž | 121/900 [26:15<2:20:32, 10.83s/it]
802
  14%|β–ˆβ–Ž | 122/900 [26:36<2:58:25, 13.76s/it]
803
  14%|β–ˆβ–Ž | 123/900 [26:57<3:24:59, 15.83s/it]
804
  14%|β–ˆβ– | 124/900 [26:57<2:25:30, 11.25s/it]
805
  14%|β–ˆβ– | 125/900 [26:58<1:43:04, 7.98s/it]
806
  14%|β–ˆβ– | 126/900 [26:58<1:13:27, 5.69s/it]
807
  14%|β–ˆβ– | 127/900 [27:19<2:14:45, 10.46s/it]
808
  14%|β–ˆβ– | 128/900 [27:20<1:36:21, 7.49s/it]
809
  14%|β–ˆβ– | 129/900 [27:20<1:08:30, 5.33s/it]
810
  14%|β–ˆβ– | 130/900 [27:21<49:09, 3.83s/it]
811
  15%|β–ˆβ– | 131/900 [27:41<1:54:21, 8.92s/it]
812
  15%|β–ˆβ– | 132/900 [27:42<1:22:19, 6.43s/it]
813
  15%|β–ˆβ– | 133/900 [27:42<58:48, 4.60s/it]
814
  15%|β–ˆβ– | 134/900 [28:04<2:02:57, 9.63s/it]
815
  15%|β–ˆβ–Œ | 135/900 [28:25<2:48:22, 13.21s/it]
816
  15%|β–ˆβ–Œ | 136/900 [28:46<3:18:34, 15.59s/it]
817
  15%|β–ˆβ–Œ | 137/900 [29:07<3:38:29, 17.18s/it]
818
  15%|β–ˆβ–Œ | 138/900 [29:28<3:51:55, 18.26s/it]
819
  15%|β–ˆβ–Œ | 139/900 [29:29<2:44:12, 12.95s/it]
820
  16%|β–ˆβ–Œ | 140/900 [29:29<1:56:11, 9.17s/it]
821
  16%|β–ˆβ–Œ | 141/900 [29:49<2:38:50, 12.56s/it]
822
  16%|β–ˆβ–Œ | 142/900 [29:50<1:53:35, 8.99s/it]
823
  16%|β–ˆβ–Œ | 143/900 [30:11<2:37:05, 12.45s/it]
824
  16%|β–ˆβ–Œ | 144/900 [30:11<1:52:03, 8.89s/it]
825
  16%|β–ˆβ–Œ | 145/900 [30:12<1:19:20, 6.31s/it]
826
  16%|β–ˆβ–Œ | 146/900 [30:32<2:12:39, 10.56s/it]
827
  16%|β–ˆβ–‹ | 147/900 [30:33<1:34:48, 7.55s/it]
828
  16%|β–ˆβ–‹ | 148/900 [30:54<2:26:49, 11.71s/it]
829
  17%|β–ˆβ–‹ | 149/900 [31:15<3:00:00, 14.38s/it]
830
  17%|β–ˆβ–‹ | 150/900 [31:15<2:07:46, 10.22s/it]
831
  17%|β–ˆβ–‹ | 151/900 [31:15<1:30:33, 7.25s/it]
832
  17%|β–ˆβ–‹ | 152/900 [31:16<1:04:22, 5.16s/it]
833
  17%|β–ˆβ–‹ | 153/900 [31:37<2:04:23, 9.99s/it]
834
  17%|β–ˆβ–‹ | 154/900 [31:37<1:28:51, 7.15s/it]
835
  17%|β–ˆβ–‹ | 155/900 [31:58<2:18:25, 11.15s/it]
836
  17%|β–ˆβ–‹ | 156/900 [32:19<2:53:50, 14.02s/it]
837
  17%|β–ˆβ–‹ | 157/900 [32:39<3:18:27, 16.03s/it]
838
  18%|β–ˆβ–Š | 158/900 [32:40<2:20:37, 11.37s/it]
839
  18%|β–ˆβ–Š | 159/900 [33:01<2:56:26, 14.29s/it]
840
  18%|β–ˆβ–Š | 160/900 [33:22<3:19:56, 16.21s/it]
841
  18%|β–ˆβ–Š | 161/900 [33:43<3:36:52, 17.61s/it]
842
  18%|β–ˆβ–Š | 162/900 [34:03<3:48:45, 18.60s/it]
843
  18%|β–ˆβ–Š | 163/900 [34:24<3:56:46, 19.28s/it]
844
  18%|β–ˆβ–Š | 164/900 [34:45<4:02:24, 19.76s/it]
845
  18%|β–ˆβ–Š | 165/900 [34:46<2:51:11, 13.98s/it]
846
  18%|β–ˆβ–Š | 166/900 [35:06<3:15:27, 15.98s/it]
/opt/conda/envs/tinyllava/lib/python3.10/site-packages/PIL/Image.py:1047: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
847
+ warnings.warn(
848
+
849
  19%|β–ˆβ–Š | 167/900 [35:27<3:33:09, 17.45s/it]
850
  19%|β–ˆβ–Š | 168/900 [35:48<3:45:56, 18.52s/it]
851
  19%|β–ˆβ–‰ | 169/900 [35:49<2:39:39, 13.10s/it]
852
  19%|β–ˆβ–‰ | 170/900 [36:09<3:07:23, 15.40s/it]
853
  19%|β–ˆβ–‰ | 171/900 [36:11<2:17:47, 11.34s/it]
854
  19%|β–ˆβ–‰ | 172/900 [36:32<2:52:23, 14.21s/it]
855
  19%|β–ˆβ–‰ | 173/900 [36:33<2:02:08, 10.08s/it]
856
  19%|β–ˆβ–‰ | 174/900 [36:53<2:40:42, 13.28s/it]
857
  19%|β–ˆβ–‰ | 175/900 [36:54<1:53:56, 9.43s/it]
858
  20%|β–ˆβ–‰ | 176/900 [37:15<2:34:23, 12.79s/it]
859
  20%|β–ˆβ–‰ | 177/900 [37:15<1:49:43, 9.11s/it]
860
  20%|β–ˆβ–‰ | 178/900 [37:36<2:31:21, 12.58s/it]
861
  20%|β–ˆβ–‰ | 179/900 [37:57<3:01:16, 15.09s/it]
862
  20%|β–ˆβ–ˆ | 180/900 [38:18<3:22:13, 16.85s/it]
863
  20%|β–ˆβ–ˆ | 181/900 [38:39<3:36:46, 18.09s/it]
864
  20%|β–ˆβ–ˆ | 182/900 [39:00<3:46:51, 18.96s/it]
865
  20%|β–ˆβ–ˆ | 183/900 [39:20<3:53:11, 19.51s/it]
866
  20%|β–ˆβ–ˆ | 184/900 [39:42<3:58:51, 20.02s/it]
867
  21%|β–ˆβ–ˆ | 185/900 [40:03<4:03:01, 20.39s/it]
868
  21%|β–ˆβ–ˆ | 186/900 [40:24<4:05:40, 20.65s/it]
869
  21%|β–ˆβ–ˆ | 187/900 [40:45<4:07:35, 20.84s/it]
870
  21%|β–ˆβ–ˆ | 188/900 [40:46<2:54:51, 14.73s/it]
871
  21%|β–ˆβ–ˆ | 189/900 [40:46<2:03:16, 10.40s/it]
872
  21%|β–ˆβ–ˆ | 190/900 [41:07<2:40:07, 13.53s/it]
873
  21%|β–ˆβ–ˆ | 191/900 [41:28<3:05:18, 15.68s/it]
874
  21%|β–ˆβ–ˆβ– | 192/900 [41:49<3:25:17, 17.40s/it]
875
  21%|β–ˆβ–ˆβ– | 193/900 [42:10<3:36:34, 18.38s/it]
876
  22%|β–ˆβ–ˆβ– | 194/900 [42:31<3:46:32, 19.25s/it]
877
  22%|β–ˆβ–ˆβ– | 195/900 [42:32<2:40:07, 13.63s/it]
878
  22%|β–ˆβ–ˆβ– | 196/900 [42:53<3:06:39, 15.91s/it]
879
  22%|β–ˆβ–ˆβ– | 197/900 [43:14<3:23:51, 17.40s/it]
880
  22%|β–ˆβ–ˆβ– | 198/900 [43:35<3:35:42, 18.44s/it]
881
  22%|β–ˆβ–ˆβ– | 199/900 [43:56<3:44:32, 19.22s/it]
882
  22%|β–ˆβ–ˆβ– | 200/900 [43:56<2:38:40, 13.60s/it]
883
  22%|β–ˆβ–ˆβ– | 201/900 [44:17<3:04:37, 15.85s/it]
884
  22%|β–ˆβ–ˆβ– | 202/900 [44:18<2:10:59, 11.26s/it]
885
  23%|β–ˆβ–ˆβ–Ž | 203/900 [44:38<2:43:51, 14.11s/it]
886
  23%|β–ˆβ–ˆβ–Ž | 204/900 [44:59<3:07:05, 16.13s/it]
887
  23%|β–ˆβ–ˆβ–Ž | 205/900 [45:21<3:25:41, 17.76s/it]
888
  23%|β–ˆβ–ˆβ–Ž | 206/900 [45:42<3:37:56, 18.84s/it]
889
  23%|β–ˆβ–ˆβ–Ž | 207/900 [46:03<3:44:46, 19.46s/it]
890
  23%|β–ˆβ–ˆβ–Ž | 208/900 [46:24<3:49:22, 19.89s/it]
891
  23%|β–ˆβ–ˆβ–Ž | 209/900 [46:45<3:52:06, 20.15s/it]
892
  23%|β–ˆβ–ˆβ–Ž | 210/900 [46:45<2:43:57, 14.26s/it]
893
  23%|β–ˆβ–ˆβ–Ž | 211/900 [47:06<3:05:29, 16.15s/it]
894
  24%|β–ˆβ–ˆβ–Ž | 212/900 [47:27<3:22:51, 17.69s/it]
895
  24%|β–ˆβ–ˆβ–Ž | 213/900 [47:48<3:34:48, 18.76s/it]
896
  24%|β–ˆβ–ˆβ– | 214/900 [48:09<3:41:54, 19.41s/it]
897
  24%|β–ˆβ–ˆβ– | 215/900 [48:31<3:47:52, 19.96s/it]
898
  24%|β–ˆβ–ˆβ– | 216/900 [48:52<3:52:10, 20.37s/it]
899
  24%|β–ˆβ–ˆβ– | 217/900 [49:13<3:55:03, 20.65s/it]
900
  24%|β–ˆβ–ˆβ– | 218/900 [49:34<3:56:04, 20.77s/it]
901
  24%|β–ˆβ–ˆβ– | 219/900 [49:56<3:57:50, 20.96s/it]
902
  24%|β–ˆβ–ˆβ– | 220/900 [49:56<2:48:08, 14.84s/it]
903
  25%|β–ˆβ–ˆβ– | 221/900 [50:17<3:09:01, 16.70s/it]
904
  25%|β–ˆβ–ˆβ– | 222/900 [50:38<3:23:43, 18.03s/it]
905
  25%|β–ˆβ–ˆβ– | 223/900 [51:00<3:34:10, 18.98s/it]
906
  25%|β–ˆβ–ˆβ– | 224/900 [51:20<3:39:41, 19.50s/it]
907
  25%|β–ˆβ–ˆβ–Œ | 225/900 [51:21<2:35:10, 13.79s/it]
908
  25%|β–ˆβ–ˆβ–Œ | 226/900 [51:41<2:57:55, 15.84s/it]
909
  25%|β–ˆβ–ˆβ–Œ | 227/900 [52:02<3:14:10, 17.31s/it]
910
  25%|β–ˆβ–ˆβ–Œ | 228/900 [52:03<2:17:31, 12.28s/it]
911
  25%|β–ˆβ–ˆβ–Œ | 229/900 [52:03<1:37:03, 8.68s/it]
912
  26%|β–ˆβ–ˆβ–Œ | 230/900 [52:24<2:17:48, 12.34s/it]
913
  26%|β–ˆβ–ˆβ–Œ | 231/900 [52:45<2:47:15, 15.00s/it]
914
  26%|β–ˆβ–ˆβ–Œ | 232/900 [53:06<3:07:32, 16.84s/it]
915
  26%|β–ˆβ–ˆβ–Œ | 233/900 [53:27<3:21:37, 18.14s/it]
916
  26%|β–ˆβ–ˆβ–Œ | 234/900 [53:48<3:30:25, 18.96s/it]
917
  26%|β–ˆβ–ˆβ–Œ | 235/900 [54:09<3:36:04, 19.50s/it]
918
  26%|β–ˆβ–ˆβ–Œ | 236/900 [54:31<3:42:35, 20.11s/it]
919
  26%|β–ˆβ–ˆβ–‹ | 237/900 [54:52<3:46:53, 20.53s/it]
920
  26%|β–ˆβ–ˆβ–‹ | 238/900 [55:14<3:49:43, 20.82s/it]
921
  27%|β–ˆβ–ˆβ–‹ | 239/900 [55:34<3:49:26, 20.83s/it]
922
  27%|β–ˆβ–ˆβ–‹ | 240/900 [55:55<3:48:56, 20.81s/it]
923
  27%|β–ˆβ–ˆβ–‹ | 241/900 [55:56<2:41:44, 14.73s/it]
924
  27%|β–ˆβ–ˆβ–‹ | 242/900 [56:16<3:00:54, 16.50s/it]
925
  27%|β–ˆβ–ˆβ–‹ | 243/900 [56:17<2:08:09, 11.70s/it]
926
  27%|β–ˆβ–ˆβ–‹ | 244/900 [56:17<1:30:46, 8.30s/it]
927
  27%|β–ˆβ–ˆβ–‹ | 245/900 [56:17<1:04:11, 5.88s/it]
928
  27%|β–ˆβ–ˆβ–‹ | 246/900 [56:44<2:12:41, 12.17s/it]
929
  27%|β–ˆβ–ˆβ–‹ | 247/900 [56:45<1:34:35, 8.69s/it]
930
  28%|β–ˆβ–ˆβ–Š | 248/900 [56:45<1:07:04, 6.17s/it]
931
  28%|β–ˆβ–ˆβ–Š | 249/900 [56:45<47:49, 4.41s/it]
932
  28%|β–ˆβ–ˆβ–Š | 250/900 [56:46<34:59, 3.23s/it]
933
  28%|β–ˆβ–ˆβ–Š | 251/900 [56:46<25:26, 2.35s/it]
934
  28%|β–ˆβ–ˆβ–Š | 252/900 [57:07<1:26:29, 8.01s/it]
935
  28%|β–ˆβ–ˆβ–Š | 253/900 [57:08<1:02:06, 5.76s/it]
936
  28%|β–ˆβ–ˆβ–Š | 254/900 [57:35<2:09:44, 12.05s/it]
937
  28%|β–ˆβ–ˆβ–Š | 255/900 [58:02<2:57:15, 16.49s/it]
938
  28%|β–ˆβ–ˆβ–Š | 256/900 [58:02<2:05:38, 11.71s/it]
939
  29%|β–ˆβ–ˆβ–Š | 257/900 [58:23<2:36:16, 14.58s/it]
940
  29%|β–ˆβ–ˆβ–Š | 258/900 [58:45<2:58:41, 16.70s/it]
941
  29%|β–ˆβ–ˆβ–‰ | 259/900 [58:46<2:06:40, 11.86s/it]
942
  29%|β–ˆβ–ˆβ–‰ | 260/900 [58:46<1:29:43, 8.41s/it]
943
  29%|β–ˆβ–ˆβ–‰ | 261/900 [59:06<2:08:06, 12.03s/it]
944
  29%|β–ˆβ–ˆβ–‰ | 262/900 [59:07<1:31:19, 8.59s/it]
945
  29%|β–ˆβ–ˆβ–‰ | 263/900 [59:07<1:04:49, 6.11s/it]
946
  29%|β–ˆβ–ˆβ–‰ | 264/900 [59:08<46:04, 4.35s/it]
947
  29%|β–ˆβ–ˆβ–‰ | 265/900 [59:28<1:38:03, 9.26s/it]
948
  30%|β–ˆβ–ˆβ–‰ | 266/900 [59:29<1:10:07, 6.64s/it]
949
  30%|β–ˆβ–ˆβ–‰ | 267/900 [59:50<1:56:18, 11.02s/it]
950
  30%|β–ˆβ–ˆβ–‰ | 268/900 [1:00:17<2:46:32, 15.81s/it]
951
  30%|β–ˆβ–ˆβ–‰ | 269/900 [1:00:44<3:21:20, 19.14s/it]
952
  30%|β–ˆβ–ˆβ–ˆ | 270/900 [1:00:44<2:22:24, 13.56s/it]
953
  30%|β–ˆβ–ˆβ–ˆ | 271/900 [1:01:06<2:46:23, 15.87s/it]
954
  30%|β–ˆβ–ˆβ–ˆ | 272/900 [1:01:27<3:03:48, 17.56s/it]
955
  30%|β–ˆβ–ˆβ–ˆ | 273/900 [1:01:28<2:10:24, 12.48s/it]
956
  30%|β–ˆβ–ˆβ–ˆ | 274/900 [1:01:49<2:37:36, 15.11s/it]
957
  31%|β–ˆβ–ˆβ–ˆ | 275/900 [1:02:11<2:57:05, 17.00s/it]
958
  31%|β–ˆβ–ˆβ–ˆ | 276/900 [1:02:31<3:08:20, 18.11s/it]
959
  31%|β–ˆβ–ˆβ–ˆ | 277/900 [1:02:52<3:15:59, 18.88s/it]
960
  31%|β–ˆβ–ˆβ–ˆ | 278/900 [1:03:12<3:20:59, 19.39s/it]
961
  31%|β–ˆβ–ˆβ–ˆ | 279/900 [1:03:33<3:24:52, 19.79s/it]
962
  31%|β–ˆβ–ˆβ–ˆ | 280/900 [1:03:54<3:27:08, 20.05s/it]
963
  31%|β–ˆβ–ˆβ–ˆ | 281/900 [1:04:15<3:31:36, 20.51s/it]
964
  31%|β–ˆβ–ˆβ–ˆβ– | 282/900 [1:04:36<3:31:49, 20.56s/it]
965
  31%|β–ˆβ–ˆβ–ˆβ– | 283/900 [1:04:57<3:32:09, 20.63s/it]
966
  32%|β–ˆβ–ˆβ–ˆβ– | 284/900 [1:05:18<3:32:15, 20.67s/it]
967
  32%|β–ˆβ–ˆβ–ˆβ– | 285/900 [1:05:39<3:32:29, 20.73s/it]
968
  32%|β–ˆβ–ˆβ–ˆβ– | 286/900 [1:05:59<3:32:07, 20.73s/it]
969
  32%|β–ˆβ–ˆβ–ˆβ– | 287/900 [1:06:20<3:31:24, 20.69s/it]
970
  32%|β–ˆβ–ˆβ–ˆβ– | 288/900 [1:06:40<3:30:42, 20.66s/it]
971
  32%|β–ˆβ–ˆβ–ˆβ– | 289/900 [1:06:41<2:28:41, 14.60s/it]
972
  32%|β–ˆβ–ˆβ–ˆβ– | 290/900 [1:07:01<2:46:14, 16.35s/it]
973
  32%|β–ˆβ–ˆβ–ˆβ– | 291/900 [1:07:22<2:58:58, 17.63s/it]
974
  32%|β–ˆβ–ˆβ–ˆβ– | 292/900 [1:07:43<3:07:41, 18.52s/it]
975
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 293/900 [1:08:03<3:13:42, 19.15s/it]
976
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 294/900 [1:08:24<3:18:02, 19.61s/it]
977
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 295/900 [1:08:44<3:20:39, 19.90s/it]
978
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 296/900 [1:09:06<3:25:34, 20.42s/it]
979
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 297/900 [1:09:27<3:25:53, 20.49s/it]
980
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 298/900 [1:09:48<3:29:13, 20.85s/it]
981
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 299/900 [1:10:10<3:30:41, 21.03s/it]
982
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 300/900 [1:10:31<3:31:42, 21.17s/it]
983
  33%|β–ˆβ–ˆβ–ˆβ–Ž | 301/900 [1:10:53<3:32:25, 21.28s/it]
984
  34%|β–ˆβ–ˆβ–ˆβ–Ž | 302/900 [1:10:53<2:29:51, 15.04s/it]
985
  34%|β–ˆβ–ˆβ–ˆβ–Ž | 303/900 [1:11:14<2:45:44, 16.66s/it]
986
  34%|β–ˆβ–ˆβ–ˆβ– | 304/900 [1:11:14<1:57:39, 11.85s/it]
987
  34%|β–ˆβ–ˆβ–ˆβ– | 305/900 [1:11:16<1:26:19, 8.70s/it]
988
  34%|β–ˆβ–ˆβ–ˆβ– | 306/900 [1:11:16<1:01:37, 6.22s/it]
989
  34%|β–ˆβ–ˆβ–ˆβ– | 307/900 [1:11:37<1:44:10, 10.54s/it]
990
  34%|β–ˆβ–ˆβ–ˆβ– | 308/900 [1:11:37<1:14:33, 7.56s/it]
991
  34%|β–ˆβ–ˆβ–ˆβ– | 309/900 [1:11:38<53:30, 5.43s/it]
992
  34%|β–ˆβ–ˆβ–ˆβ– | 310/900 [1:11:38<38:50, 3.95s/it]
993
  35%|β–ˆβ–ˆβ–ˆβ– | 311/900 [1:11:39<28:06, 2.86s/it]
994
  35%|β–ˆβ–ˆβ–ˆβ– | 312/900 [1:11:39<20:28, 2.09s/it]
995
  35%|β–ˆβ–ˆβ–ˆβ– | 313/900 [1:11:39<15:39, 1.60s/it]
996
  35%|β–ˆβ–ˆβ–ˆβ– | 314/900 [1:12:00<1:10:44, 7.24s/it]
997
  35%|β–ˆβ–ˆβ–ˆβ–Œ | 315/900 [1:12:00<51:11, 5.25s/it]
998
  35%|β–ˆβ–ˆβ–ˆβ–Œ | 316/900 [1:12:01<36:39, 3.77s/it]
999
  35%|β–ˆβ–ˆβ–ˆβ–Œ | 317/900 [1:12:01<26:21, 2.71s/it]
1000
  35%|β–ˆβ–ˆβ–ˆβ–Œ | 318/900 [1:12:22<1:18:11, 8.06s/it]
1001
  35%|β–ˆβ–ˆβ–ˆβ–Œ | 319/900 [1:12:42<1:54:26, 11.82s/it]
1002
  36%|β–ˆβ–ˆβ–ˆβ–Œ | 320/900 [1:12:43<1:21:17, 8.41s/it]
1003
  36%|β–ˆβ–ˆβ–ˆβ–Œ | 321/900 [1:13:04<1:58:20, 12.26s/it]
1004
  36%|β–ˆβ–ˆβ–ˆβ–Œ | 322/900 [1:13:04<1:24:17, 8.75s/it]
1005
  36%|β–ˆβ–ˆβ–ˆβ–Œ | 323/900 [1:13:05<59:43, 6.21s/it]
1006
  36%|β–ˆβ–ˆβ–ˆβ–Œ | 324/900 [1:13:05<42:34, 4.44s/it]
1007
  36%|β–ˆβ–ˆβ–ˆβ–Œ | 325/900 [1:13:05<31:05, 3.24s/it]
1008
  36%|β–ˆβ–ˆβ–ˆβ–Œ | 326/900 [1:13:06<22:24, 2.34s/it]
1009
  36%|β–ˆβ–ˆβ–ˆβ–‹ | 327/900 [1:13:27<1:15:17, 7.88s/it]
1010
  36%|β–ˆβ–ˆβ–ˆβ–‹ | 328/900 [1:13:48<1:54:41, 12.03s/it]
1011
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 329/900 [1:13:49<1:21:35, 8.57s/it]
1012
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 330/900 [1:13:49<57:58, 6.10s/it]
1013
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 331/900 [1:13:50<41:50, 4.41s/it]
1014
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 332/900 [1:14:10<1:27:39, 9.26s/it]
1015
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 333/900 [1:14:11<1:03:20, 6.70s/it]
1016
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 334/900 [1:14:31<1:42:10, 10.83s/it]
1017
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 335/900 [1:14:53<2:11:17, 13.94s/it]
1018
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 336/900 [1:14:53<1:33:19, 9.93s/it]
1019
  37%|β–ˆβ–ˆβ–ˆβ–‹ | 337/900 [1:14:54<1:06:43, 7.11s/it]
1020
  38%|β–ˆβ–ˆβ–ˆβ–Š | 338/900 [1:14:54<47:37, 5.08s/it]
1021
  38%|β–ˆβ–ˆβ–ˆβ–Š | 339/900 [1:14:55<34:55, 3.74s/it]
1022
  38%|β–ˆβ–ˆβ–ˆβ–Š | 340/900 [1:14:55<25:24, 2.72s/it]
1023
  38%|β–ˆβ–ˆβ–ˆβ–Š | 341/900 [1:15:16<1:15:28, 8.10s/it]
1024
  38%|β–ˆβ–ˆβ–ˆβ–Š | 342/900 [1:15:36<1:50:48, 11.92s/it]
1025
  38%|β–ˆβ–ˆβ–ˆβ–Š | 343/900 [1:15:57<2:15:32, 14.60s/it]
1026
  38%|β–ˆβ–ˆβ–ˆβ–Š | 344/900 [1:16:19<2:34:19, 16.65s/it]
1027
  38%|β–ˆβ–ˆβ–ˆβ–Š | 345/900 [1:16:19<1:49:10, 11.80s/it]
1028
  38%|β–ˆβ–ˆβ–ˆβ–Š | 346/900 [1:16:19<1:17:00, 8.34s/it]
1029
  39%|β–ˆβ–ˆβ–ˆβ–Š | 347/900 [1:16:20<54:33, 5.92s/it]
1030
  39%|β–ˆβ–ˆβ–ˆβ–Š | 348/900 [1:16:20<39:04, 4.25s/it]
1031
  39%|β–ˆβ–ˆβ–ˆβ–‰ | 349/900 [1:16:21<29:14, 3.18s/it]
1032
  39%|β–ˆβ–ˆβ–ˆβ–‰ | 350/900 [1:16:21<21:55, 2.39s/it]
1033
  39%|β–ˆβ–ˆβ–ˆβ–‰ | 351/900 [1:16:42<1:11:59, 7.87s/it]
1034
  39%|β–ˆβ–ˆβ–ˆβ–‰ | 352/900 [1:17:03<1:47:09, 11.73s/it]
1035
  39%|β–ˆβ–ˆβ–ˆβ–‰ | 353/900 [1:17:23<2:11:35, 14.43s/it]
1036
  39%|β–ˆβ–ˆβ–ˆβ–‰ | 354/900 [1:17:44<2:29:19, 16.41s/it]
1037
  39%|β–ˆβ–ˆβ–ˆβ–‰ | 355/900 [1:18:06<2:41:51, 17.82s/it]
1038
  40%|β–ˆβ–ˆβ–ˆβ–‰ | 356/900 [1:18:06<1:54:44, 12.66s/it]
1039
  40%|β–ˆβ–ˆβ–ˆβ–‰ | 357/900 [1:18:27<2:16:13, 15.05s/it]
1040
  40%|β–ˆβ–ˆβ–ˆβ–‰ | 358/900 [1:18:48<2:31:28, 16.77s/it]
1041
  40%|β–ˆβ–ˆβ–ˆβ–‰ | 359/900 [1:18:48<1:47:09, 11.89s/it]
1042
  40%|β–ˆβ–ˆβ–ˆβ–ˆ | 360/900 [1:18:48<1:15:59, 8.44s/it]
1043
  40%|β–ˆβ–ˆβ–ˆβ–ˆ | 361/900 [1:19:09<1:48:26, 12.07s/it]
1044
  40%|β–ˆβ–ˆβ–ˆβ–ˆ | 362/900 [1:19:30<2:12:59, 14.83s/it]
1045
  40%|β–ˆβ–ˆβ–ˆβ–ˆ | 363/900 [1:19:51<2:28:30, 16.59s/it]
1046
  40%|β–ˆβ–ˆβ–ˆβ–ˆ | 364/900 [1:20:12<2:39:08, 17.82s/it]
1047
  41%|β–ˆβ–ˆβ–ˆβ–ˆ | 365/900 [1:20:32<2:46:41, 18.70s/it]
1048
  41%|β–ˆβ–ˆβ–ˆβ–ˆ | 366/900 [1:20:53<2:51:52, 19.31s/it]
1049
  41%|β–ˆβ–ˆβ–ˆβ–ˆ | 367/900 [1:21:14<2:56:27, 19.86s/it]
1050
  41%|β–ˆβ–ˆβ–ˆβ–ˆ | 368/900 [1:21:36<3:00:19, 20.34s/it]
1051
  41%|β–ˆβ–ˆβ–ˆβ–ˆ | 369/900 [1:21:57<3:01:26, 20.50s/it]
1052
  41%|β–ˆβ–ˆβ–ˆβ–ˆ | 370/900 [1:22:18<3:02:20, 20.64s/it]
1053
  41%|β–ˆβ–ˆβ–ˆβ–ˆ | 371/900 [1:22:39<3:02:45, 20.73s/it]
1054
  41%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 372/900 [1:23:00<3:04:56, 21.02s/it]
1055
  41%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 373/900 [1:23:22<3:06:30, 21.24s/it]
1056
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 374/900 [1:23:43<3:05:26, 21.15s/it]
1057
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 375/900 [1:24:04<3:04:33, 21.09s/it]
1058
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 376/900 [1:24:25<3:03:52, 21.06s/it]
1059
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 377/900 [1:24:47<3:05:08, 21.24s/it]
1060
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 378/900 [1:25:07<3:03:16, 21.07s/it]
1061
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 379/900 [1:25:08<2:09:19, 14.89s/it]
1062
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 380/900 [1:25:28<2:23:26, 16.55s/it]
1063
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 381/900 [1:25:49<2:33:46, 17.78s/it]
1064
  42%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 382/900 [1:26:09<2:40:38, 18.61s/it]
1065
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 383/900 [1:26:30<2:45:18, 19.19s/it]
1066
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 384/900 [1:26:51<2:49:00, 19.65s/it]
1067
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 385/900 [1:27:11<2:51:20, 19.96s/it]
1068
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 386/900 [1:27:33<2:55:13, 20.46s/it]
1069
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 387/900 [1:27:54<2:57:41, 20.78s/it]
1070
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 388/900 [1:28:15<2:57:44, 20.83s/it]
1071
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 389/900 [1:28:37<2:59:35, 21.09s/it]
1072
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 390/900 [1:28:59<3:00:49, 21.27s/it]
1073
  43%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 391/900 [1:29:19<2:58:49, 21.08s/it]
1074
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 392/900 [1:29:40<2:57:23, 20.95s/it]
1075
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 393/900 [1:30:01<2:56:29, 20.89s/it]
1076
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 394/900 [1:30:22<2:57:13, 21.02s/it]
1077
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 395/900 [1:30:43<2:55:50, 20.89s/it]
1078
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 396/900 [1:31:04<2:56:31, 21.02s/it]
1079
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 397/900 [1:31:25<2:55:13, 20.90s/it]
1080
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 398/900 [1:31:45<2:54:07, 20.81s/it]
1081
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 399/900 [1:32:06<2:54:50, 20.94s/it]
1082
  44%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 400/900 [1:32:27<2:54:05, 20.89s/it]
1083
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 401/900 [1:32:49<2:54:50, 21.02s/it]
1084
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 402/900 [1:33:10<2:54:18, 21.00s/it]
1085
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 403/900 [1:33:30<2:53:45, 20.98s/it]
1086
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ– | 404/900 [1:33:31<2:02:38, 14.83s/it]
1087
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 405/900 [1:33:52<2:18:25, 16.78s/it]
1088
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 406/900 [1:34:14<2:29:55, 18.21s/it]
1089
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 407/900 [1:34:35<2:37:40, 19.19s/it]
1090
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 408/900 [1:34:36<1:52:10, 13.68s/it]
1091
  45%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 409/900 [1:34:57<2:09:10, 15.78s/it]
1092
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 410/900 [1:34:57<1:31:22, 11.19s/it]
1093
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 411/900 [1:35:18<1:54:48, 14.09s/it]
1094
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 412/900 [1:35:39<2:12:17, 16.27s/it]
1095
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 413/900 [1:35:40<1:33:41, 11.54s/it]
1096
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 414/900 [1:36:00<1:55:00, 14.20s/it]
1097
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 415/900 [1:36:22<2:12:24, 16.38s/it]
1098
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 416/900 [1:36:42<2:22:16, 17.64s/it]
1099
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 417/900 [1:37:04<2:31:54, 18.87s/it]
1100
  46%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 418/900 [1:37:26<2:38:00, 19.67s/it]
1101
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 419/900 [1:37:47<2:40:47, 20.06s/it]
1102
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 420/900 [1:38:08<2:43:54, 20.49s/it]
1103
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 421/900 [1:38:30<2:46:00, 20.80s/it]
1104
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 422/900 [1:38:51<2:47:15, 20.99s/it]
1105
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 423/900 [1:39:12<2:45:54, 20.87s/it]
1106
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 424/900 [1:39:32<2:45:00, 20.80s/it]
1107
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 425/900 [1:39:53<2:45:25, 20.90s/it]
1108
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 426/900 [1:40:14<2:45:16, 20.92s/it]
1109
  47%|β–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 427/900 [1:40:36<2:45:36, 21.01s/it]
1110
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 428/900 [1:40:57<2:45:28, 21.04s/it]
1111
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 429/900 [1:41:18<2:44:59, 21.02s/it]
1112
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 430/900 [1:41:39<2:45:04, 21.07s/it]
1113
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 431/900 [1:42:01<2:46:08, 21.25s/it]
1114
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 432/900 [1:42:22<2:46:43, 21.38s/it]
1115
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 433/900 [1:42:44<2:47:01, 21.46s/it]
1116
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 434/900 [1:42:45<1:58:28, 15.25s/it]
1117
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 435/900 [1:43:06<2:12:16, 17.07s/it]
1118
  48%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 436/900 [1:43:27<2:21:45, 18.33s/it]
1119
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 437/900 [1:43:48<2:26:42, 19.01s/it]
1120
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–Š | 438/900 [1:44:09<2:30:11, 19.51s/it]
1121
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 439/900 [1:44:29<2:32:23, 19.84s/it]
1122
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 440/900 [1:44:50<2:33:56, 20.08s/it]
1123
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 441/900 [1:45:11<2:37:10, 20.55s/it]
1124
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 442/900 [1:45:32<2:36:51, 20.55s/it]
1125
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 443/900 [1:45:54<2:38:53, 20.86s/it]
1126
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 444/900 [1:46:14<2:38:17, 20.83s/it]
1127
  49%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 445/900 [1:46:36<2:39:40, 21.06s/it]
1128
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 446/900 [1:46:56<2:38:15, 20.91s/it]
1129
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 447/900 [1:47:17<2:37:14, 20.83s/it]
1130
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 448/900 [1:47:38<2:36:15, 20.74s/it]
1131
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 449/900 [1:47:59<2:37:27, 20.95s/it]
1132
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 450/900 [1:48:20<2:36:20, 20.85s/it]
1133
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 451/900 [1:48:41<2:35:54, 20.84s/it]
1134
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 452/900 [1:48:41<1:50:08, 14.75s/it]
1135
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 453/900 [1:49:02<2:02:48, 16.49s/it]
1136
  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 454/900 [1:49:23<2:13:41, 17.99s/it]
1137
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 455/900 [1:49:44<2:20:31, 18.95s/it]
1138
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 456/900 [1:50:06<2:25:31, 19.67s/it]
1139
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 457/900 [1:50:27<2:28:41, 20.14s/it]
1140
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 458/900 [1:50:48<2:30:50, 20.48s/it]
1141
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 459/900 [1:51:10<2:32:33, 20.76s/it]
1142
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 460/900 [1:51:31<2:33:30, 20.93s/it]
1143
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 461/900 [1:51:52<2:33:12, 20.94s/it]
1144
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 462/900 [1:52:13<2:32:34, 20.90s/it]
1145
  51%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 463/900 [1:52:34<2:32:09, 20.89s/it]
1146
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 464/900 [1:52:54<2:31:58, 20.91s/it]
1147
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 465/900 [1:53:15<2:31:31, 20.90s/it]
1148
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 466/900 [1:53:36<2:31:06, 20.89s/it]
1149
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 467/900 [1:53:57<2:30:44, 20.89s/it]
1150
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 468/900 [1:54:18<2:30:24, 20.89s/it]
1151
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 469/900 [1:54:39<2:29:32, 20.82s/it]
1152
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 470/900 [1:54:59<2:29:02, 20.80s/it]
1153
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 471/900 [1:55:20<2:29:02, 20.84s/it]
1154
  52%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 472/900 [1:55:42<2:30:42, 21.13s/it]
1155
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 473/900 [1:56:04<2:31:32, 21.29s/it]
1156
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 474/900 [1:56:25<2:31:57, 21.40s/it]
1157
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 475/900 [1:56:47<2:32:22, 21.51s/it]
1158
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 476/900 [1:57:08<2:30:10, 21.25s/it]
1159
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 477/900 [1:57:29<2:28:34, 21.07s/it]
1160
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 478/900 [1:57:49<2:27:24, 20.96s/it]
1161
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 479/900 [1:58:10<2:27:34, 21.03s/it]
1162
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 480/900 [1:58:31<2:26:39, 20.95s/it]
1163
  53%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 481/900 [1:58:52<2:26:00, 20.91s/it]
1164
  54%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 482/900 [1:59:13<2:25:03, 20.82s/it]
1165
  54%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Ž | 483/900 [1:59:13<1:42:19, 14.72s/it]
1166
  54%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 484/900 [1:59:34<1:54:22, 16.50s/it]
1167
  54%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 485/900 [1:59:55<2:03:54, 17.91s/it]
1168
  54%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ– | 486/900 [2:00:16<2:09:18, 18.74s/it]
1169
 54%|█████▍    | 487/900 [2:00:37<2:14:17, 19.51s/it]
1170
[... 494 lines of interleaved tqdm updates collapsed: progress advanced step by step from 488/900 to 734/900 with the printed sample counter rising from 1171 to 1417; elapsed time grew from 2:00:58 to 3:11:37, and the reported rate fluctuated between ~4.8 s/it and ~26.6 s/it as fast and slow samples alternated ...]
 82%|████████▏ | 735/900 [3:11:59<57:39, 20.97s/it]
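A note on the log's shape: the tqdm bar lines above are written to stderr, while the bare integers between them (1170, 1171, ...) appear to be a per-sample counter printed to stdout; captured into one file, the two streams interleave line by line. Below is a minimal sketch of a loop that would reproduce this pattern — the counter's starting offset and everything inside the loop body are illustrative assumptions, not taken from the log; only the 900-step length and the printed counter values are visible above.

    from tqdm import tqdm

    # Hypothetical reconstruction: the real eval set, model call, and
    # counter origin are not visible in the log.
    sample_counter = 683  # assumed offset (1170 - 487) so prints match the log

    for step in tqdm(range(900)):   # tqdm renders " 54%|... | 487/900 [...]" on stderr
        # ... model forward pass / metric accumulation would happen here ...
        sample_counter += 1
        print(sample_counter)       # bare integers ("1170", "1171", ...) go to stdout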
logs_oct10/eval_qwen2.5-0_5b_base_masktune_42_llm-connector_text-3.0_0.5_3e-1_connector-3.0_0.5_3e-1_ablation_20251011_030648.log ADDED
The diff for this file is too large to render.