[api shell.run]
* Running on local URL: http://127.0.0.1:7860

* To create a public link, set `share=True` in `launch()`.

🚀 Starting pipeline | Count: 1 | Format: PNG

🔍 Detecting Transformer: zImageTurboQuantized_fp8ScaledE4m3fnKJ.safetensors

❌ Load failed: Error(s) in loading state_dict for ZImageTransformer2DModel:
    size mismatch for x_pad_token: copying a param with shape torch.Size([1, 3840]) from checkpoint, the shape in current model is torch.Size([1, 1024]).
    size mismatch for cap_pad_token: copying a param with shape torch.Size([1, 3840]) from checkpoint, the shape in current model is torch.Size([1, 1024]).
    size mismatch for noise_refiner.0.feed_forward.w1.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for noise_refiner.0.feed_forward.w2.weight: copying a param with shape torch.Size([3840, 10240]) from checkpoint, the shape in current model is torch.Size([1024, 2730]).
    size mismatch for noise_refiner.0.feed_forward.w3.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for noise_refiner.0.attention_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for noise_refiner.0.ffn_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for noise_refiner.0.attention_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for noise_refiner.0.ffn_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for noise_refiner.0.adaLN_modulation.0.weight: copying a param with shape torch.Size([15360, 256]) from checkpoint, the shape in current model is torch.Size([4096, 256]).
    size mismatch for noise_refiner.0.adaLN_modulation.0.bias: copying a param with shape torch.Size([15360]) from checkpoint, the shape in current model is torch.Size([4096]).
    size mismatch for noise_refiner.1.feed_forward.w1.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for noise_refiner.1.feed_forward.w2.weight: copying a param with shape torch.Size([3840, 10240]) from checkpoint, the shape in current model is torch.Size([1024, 2730]).
    size mismatch for noise_refiner.1.feed_forward.w3.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for noise_refiner.1.attention_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for noise_refiner.1.ffn_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for noise_refiner.1.attention_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for noise_refiner.1.ffn_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for noise_refiner.1.adaLN_modulation.0.weight: copying a param with shape torch.Size([15360, 256]) from checkpoint, the shape in current model is torch.Size([4096, 256]).
    size mismatch for noise_refiner.1.adaLN_modulation.0.bias: copying a param with shape torch.Size([15360]) from checkpoint, the shape in current model is torch.Size([4096]).
    size mismatch for context_refiner.0.feed_forward.w1.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for context_refiner.0.feed_forward.w2.weight: copying a param with shape torch.Size([3840, 10240]) from checkpoint, the shape in current model is torch.Size([1024, 2730]).
    size mismatch for context_refiner.0.feed_forward.w3.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for context_refiner.0.attention_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for context_refiner.0.ffn_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for context_refiner.0.attention_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for context_refiner.0.ffn_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for context_refiner.1.feed_forward.w1.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for context_refiner.1.feed_forward.w2.weight: copying a param with shape torch.Size([3840, 10240]) from checkpoint, the shape in current model is torch.Size([1024, 2730]).
    size mismatch for context_refiner.1.feed_forward.w3.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for context_refiner.1.attention_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for context_refiner.1.ffn_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for context_refiner.1.attention_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for context_refiner.1.ffn_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for cap_embedder.1.weight: copying a param with shape torch.Size([3840, 2560]) from checkpoint, the shape in current model is torch.Size([1024, 2560]).
    size mismatch for cap_embedder.1.bias: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for layers.0.feed_forward.w1.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for layers.0.feed_forward.w2.weight: copying a param with shape torch.Size([3840, 10240]) from checkpoint, the shape in current model is torch.Size([1024, 2730]).
    size mismatch for layers.0.feed_forward.w3.weight: copying a param with shape torch.Size([10240, 3840]) from checkpoint, the shape in current model is torch.Size([2730, 1024]).
    size mismatch for layers.0.attention_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for layers.0.ffn_norm1.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for layers.0.attention_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for layers.0.ffn_norm2.weight: copying a param with shape torch.Size([3840]) from checkpoint, the shape in current model is torch.Size([1024]).
    size mismatch for layers.0.adaLN_modulation.0.weight: copying a param with shape torch.Size([15360, 256]) from checkpoint, the shape in current model is torch.Size([4096, 256]).
    size mismatch for layers.0.adaLN_modulation.0.bias: copying a param with shape torch.Size([15360]) from checkpoint, the shape in current model is torch.Size([4096]).
    ... (the same nine size mismatches repeat identically for layers.1 through layers.29) ...
Loading checkpoint shards: 100%

🔍 Detecting TextEncoder: zImageTurboQuantized_textEncoderFp8Scaled.safetensors
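Every mismatch above pairs a checkpoint width of 3840 (FFN 10240, adaLN 15360 = 4 × 3840) with a model width of 1024 (FFN 2730, adaLN 4096 = 4 × 1024), which indicates the fp8 checkpoint was exported for a larger ZImageTransformer2DModel variant than the config the app instantiated. As a minimal diagnostic sketch, assuming only that the `safetensors` package is installed and that the filename from the log resolves to the file on disk, the shapes can be read straight from the safetensors header without loading any weights:

```python
# Diagnostic sketch (not part of the tool above): inspect tensor shapes in the
# .safetensors header to confirm the checkpoint width before load_state_dict.
from safetensors import safe_open

# Assumption: the checkpoint sits in the working directory; adjust as needed.
CKPT = "zImageTurboQuantized_fp8ScaledE4m3fnKJ.safetensors"

with safe_open(CKPT, framework="pt") as f:
    for name in ("x_pad_token", "cap_pad_token", "layers.0.adaLN_modulation.0.weight"):
        if name in f.keys():
            # get_slice() exposes shape metadata without materializing the tensor
            print(name, f.get_slice(name).get_shape())
```

If those shapes print with leading dimensions of 3840 and 15360, the fix is likely on the model side: instantiate the transformer with the matching larger config rather than editing the checkpoint.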