ishu-newaz/Gemma3-1B-FP16 (Quantized)
Description
This model is a 4-bit quantized version of the original model ishu-newaz/Gemma3-1B-FP16.
It was quantized with the BitsAndBytes library using the bnb-my-repo space.
Quantization Details
- Quantization Type: int4
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: bfloat16
- bnb_4bit_quant_storage: int8
Original Model Information
Uploaded finetuned model
- Developed by: ishu-newaz
- License: apache-2.0
- Finetuned from model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Model tree for ishu-newaz/Gemma3-1B-FP16-bnb-4bit
- Base model: google/gemma-3-1b-pt
- Finetuned: google/gemma-3-1b-it
- Quantized: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
- Finetuned: ishu-newaz/Gemma3-1B-FP16