# fokan/medgemma-4b-it-int8
An INT8 dynamically quantized version of `google/medgemma-4b-it`.
- Quantization: dynamic INT8 applied to `nn.Linear` layers (PyTorch)
- Well suited for CPU inference (PyTorch dynamic quantization targets CPU backends)
- Roughly 4× smaller than the original model's full-precision weights
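
The quantization described above can be reproduced with PyTorch's dynamic quantization API. Below is a minimal sketch using a toy model in place of the 4B checkpoint (loading the real model via `transformers.AutoModelForCausalLM` works the same way but is omitted to keep the example self-contained); the toy layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

# Toy stand-in for the real model; in practice this would be
# AutoModelForCausalLM.from_pretrained("google/medgemma-4b-it").
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))

# quantize_dynamic swaps each nn.Linear for a dynamically quantized
# variant: weights are stored as INT8, activations are quantized
# on the fly at inference time. This is a CPU-oriented transform.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
out = qmodel(x)
print(out.shape)  # same output shape as the float model
```

Because only the `nn.Linear` weights are converted, the on-disk size reduction comes almost entirely from those layers; embeddings and layer norms remain in floating point.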