Runtime error
Exit code: 1. Reason:
| MISSING | audio_encoder.patch_embed.fusion_model.local_att.{1, 4}.num_batches_tracked
| MISSING | audio_encoder.patch_embed.fusion_model.local_att.{1, 4}.running_var
| MISSING | audio_encoder.patch_embed.proj.weight
| MISSING | audio_encoder.patch_embed.proj.bias
| MISSING | audio_encoder.batch_norm.weight
| MISSING | audio_encoder.batch_norm.num_batches_tracked
Notes:
- UNEXPECTED: can be ignored when loading from a different task/architecture; not OK if you expect an identical arch.
- MISSING: those params were newly initialized because they are missing from the checkpoint. Consider training on your downstream task.

Traceback (most recent call last):
  File "/app/app.py", line 115, in <module>
    audio_model.load_state_dict(ckpt['model_state'], strict=False)
  File "/usr/local/lib/python3.11/site-packages/torch/nn/modules/module.py", line 2639, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for UnifiedAudioModel:
    size mismatch for fusion.0.weight: copying a param with shape torch.Size([512, 1536]) from checkpoint, the shape in current model is torch.Size([768, 1536]).
    size mismatch for fusion.0.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for fusion.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([768]).
    size mismatch for fusion.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([768]).
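The traceback above shows why `strict=False` is not enough here: it only suppresses MISSING and UNEXPECTED keys, while params that exist under the same name but with a different shape (the `fusion.0`/`fusion.1` layers, 512 in the checkpoint vs. 768 in the model) still raise a RuntimeError. A minimal sketch of one common workaround, assuming a generic model and checkpoint dict (the names `load_matching_params`, `model`, and `state_dict` are illustrative, not from the app):

```python
import torch
import torch.nn as nn

def load_matching_params(model: nn.Module, state_dict: dict) -> list:
    """Load only the checkpoint params whose shapes match the model.

    Shape-mismatched keys are skipped (and returned) instead of raising;
    the corresponding model params keep their fresh initialization, so
    they need fine-tuning on the downstream task.
    """
    model_state = model.state_dict()
    filtered, skipped = {}, []
    for name, tensor in state_dict.items():
        if name in model_state and model_state[name].shape == tensor.shape:
            filtered[name] = tensor
        else:
            skipped.append(name)
    # strict=False tolerates the keys we dropped above
    model.load_state_dict(filtered, strict=False)
    return skipped
```

In this case the cleaner fix is to build the model with the same fusion width (512) the checkpoint was trained with; the filter is only a stopgap when the architectures genuinely diverge.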