# fokan/medsiglip-448-int8

INT8 dynamic-quantized version of `google/medsiglip-448`.

- Quantization: dynamic INT8 applied to all `nn.Linear` layers (PyTorch)
- Intended for CPU inference and a smaller disk footprint
- Saved as `pytorch_model.bin` (quantized weights); config and processor are included

> Note: The quantized state_dict is stored with PyTorch serialization (not safetensors) because of the quantized tensor types.
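The dynamic INT8 scheme used here can be sketched in plain Python. This is an illustrative per-tensor symmetric quantizer showing the idea behind what PyTorch's dynamic quantization does to each `nn.Linear` weight, not the actual PyTorch kernel (which quantizes per layer at load time and computes activations in floating point):

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization of a list of floats.

    scale = max(|w|) / 127, q = clip(round(w / scale), -127, 127).
    """
    # Fall back to scale 1.0 for an all-zero tensor to avoid division by zero.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize(q, scale):
    """Recover approximate float weights from INT8 values and the scale."""
    return [x * scale for x in q]


if __name__ == "__main__":
    w = [0.5, -1.27, 0.0, 1.0]
    q, s = quantize_int8(w)
    # q == [50, -127, 0, 100]; round-trip error is bounded by scale/2 per element.
    w_hat = dequantize(q, s)
    assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w_hat))
```

Each INT8 value plus one float scale per tensor is what makes the on-disk footprint roughly 4x smaller than FP32 weights.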