SLIP / modeling_slip.py

Commit History

Pass dtype through to Gemma init for proper torch_dtype support
b81a726 · verified · LeoChen085 committed

Init Gemma in float32, users control dtype via torch_dtype param
117e0c8 · verified · LeoChen085 committed
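A hedged sketch of the dtype convention these two commits describe: create the weights in float32 unconditionally, and let the caller pick the runtime dtype afterwards (the way `from_pretrained(..., torch_dtype=...)` does). `SlipProjection` is a hypothetical stand-in for the repo's actual module, not its real class name.

```python
import torch
import torch.nn as nn


class SlipProjection(nn.Module):
    """Toy module illustrating float32-first initialization."""

    def __init__(self, hidden: int = 8):
        super().__init__()
        # Force float32 at construction time, regardless of the global
        # default dtype, so the caller owns the final-dtype decision.
        self.proj = nn.Linear(hidden, hidden, dtype=torch.float32)


model = SlipProjection()
assert model.proj.weight.dtype == torch.float32

# Caller-controlled dtype, mirroring torch_dtype=torch.bfloat16:
model = model.to(torch.bfloat16)
assert model.proj.weight.dtype == torch.bfloat16
```

This avoids baking a hardcoded `bfloat16` into the model definition, which is exactly the override the later commits remove.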

Remove Gemma dtype override
16640e7 · verified · LeoChen085 committed

Uniform float32 weights, clean dtype handling: modeling_slip.py
1cc2650 · verified · LeoChen085 committed

Fix mixed-dtype handling in modeling_slip.py
ba797a2 · verified · LeoChen085 committed

Use set_default_dtype for uniform dtype init, assert batch size match in CrossAttention
28848e5 · verified · LeoChen085 committed
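A minimal sketch of the `set_default_dtype` approach named in this commit message: temporarily change the global default so every parameter created inside the scope comes out in one uniform dtype, then restore the previous default. The surrounding save/restore is an assumption about how one would use it safely, not the repo's verified code.

```python
import torch

# Temporarily make bfloat16 the default floating-point dtype, so all
# parameters constructed in this scope are created uniformly in bfloat16.
prev = torch.get_default_dtype()
torch.set_default_dtype(torch.bfloat16)
try:
    layer = torch.nn.Linear(4, 4)  # created in bfloat16, not float32
finally:
    # Always restore the global default; leaving it changed would
    # silently affect every later tensor construction in the process.
    torch.set_default_dtype(prev)

assert layer.weight.dtype == torch.bfloat16
assert torch.get_default_dtype() == prev
```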

Update modeling_slip.py: use HF torch_dtype instead of hardcoded bfloat16
866f1ac · verified · LeoChen085 committed

Fix CrossAttention batch size mismatch: expand context to match query batch
08e9734 · verified · LeoChen085 committed
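A hedged sketch of the batch-expansion fix this commit message describes: when a shared context (batch size 1) meets per-sample queries (batch size B), broadcast the context along the batch dimension before attention. The tensor shapes here are illustrative assumptions, not taken from the repo.

```python
import torch

query = torch.randn(4, 10, 16)   # (B, seq_q, dim) per-sample queries
context = torch.randn(1, 7, 16)  # (1, seq_kv, dim) shared context

# Expand the context batch dimension to match the query batch.
# expand() returns a broadcasted view, so no memory is copied.
if context.size(0) != query.size(0):
    context = context.expand(query.size(0), -1, -1)

assert context.size(0) == query.size(0)
```

Pairing this with an explicit batch-size assertion (as the earlier `set_default_dtype` commit message also mentions) turns a cryptic downstream matmul error into a clear failure at the attention boundary.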

Fix dtype mismatches: cast entire model to bfloat16 in init
bcd17bc · verified · LeoChen085 committed

Upload SLIP model, checkpoints, and source code
c600982 · verified · LeoChen085 committed

Upload SLIP model, checkpoints, and source code
0c7f3e3 · verified · LeoChen085 committed