# fokan/medgemma-4b-it-int8
An INT8 dynamically quantized version of `google/medgemma-4b-it`.
- Quantization: Dynamic INT8 on Linear layers (PyTorch)
- Ideal for CPU inference
- 4× smaller than the original model
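As a minimal sketch of the technique named above, PyTorch's `torch.ao.quantization.quantize_dynamic` converts `nn.Linear` weights to INT8 while computing activations in floating point. The toy model below is illustrative only (not `medgemma-4b-it`, which is large and gated); the same call pattern applies to any module containing `nn.Linear` layers.

```python
import torch
import torch.nn as nn

# Toy stand-in for a transformer's Linear-heavy layers (hypothetical sizes).
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Dynamic quantization: weights are stored as INT8; activations are
# quantized on the fly at inference time. CPU-oriented, no calibration needed.
qmodel = torch.ao.quantization.quantize_dynamic(
    model,
    {nn.Linear},          # which module types to quantize
    dtype=torch.qint8,    # INT8 weights
)

# Inference works exactly as before.
out = qmodel(torch.randn(2, 64))
print(out.shape)  # torch.Size([2, 10])
```

For a Hugging Face checkpoint, the same call would typically be applied to the model returned by `AutoModelForCausalLM.from_pretrained(...)` before running CPU inference.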