jnjj/Xddchvh (Hugging Face model repository)
Tags: Text Generation, Transformers, Safetensors, gemma3_text, conversational, text-generation-inference
Branch: main
Repository size: 2.04 GB, 1 contributor, 5 commits
Latest commit: jnjj, 1910067 (verified), 10 months ago
"Upload INT4 quantized model with bfloat16 compute, extreme shrinkage and modifications."
Files (all last updated 10 months ago; * = stored via Xet):

File                      Size       Last commit message
.gitattributes            1.57 kB    (B)
README.md                 34 Bytes   Create README.md
added_tokens.json         35 Bytes   (B)
config.json               914 Bytes  (A)
model.safetensors *       2 GB       (A)
special_tokens_map.json   662 Bytes  (B)
tokenizer.json *          33.4 MB    (B)
tokenizer.model *         4.69 MB    (B)
tokenizer_config.json     1.16 MB    (B)

(A) "Upload INT4 quantized model with bfloat16 compute, extreme shrinkage and modifications."
(B) "Upload INT4 quantized Gemma-3-1B-IT QAT with bfloat16 compute, extreme shrinkage (100% weight prune, only weights saved), and extensive unconventional modifications including GPTQ/AWQ flags (bfloat16 compute)"
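The commit messages describe INT4 weight quantization with bfloat16 compute. As a rough, hypothetical sketch of what such a scheme generally means (not this repository's actual pipeline, which is not shown here), symmetric INT4 quantization maps each weight to a 4-bit integer in [-8, 7] plus a shared scale factor, and higher-precision compute dequantizes the integers on the fly:

```python
# Hypothetical sketch of symmetric per-tensor INT4 quantization.
# bfloat16 compute is emulated with ordinary Python floats here;
# none of these names come from this repository's code.

def quantize_int4(weights):
    """Map floats to 4-bit integers in [-8, 7] with one shared scale."""
    scale = max(abs(w) for w in weights) / 7.0  # keep the max magnitude representable
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate weights for higher-precision compute."""
    return [x * scale for x in q]

weights = [0.25, -1.40, 0.06, 0.84]
q, scale = quantize_int4(weights)
approx = dequantize(q, scale)
```

Storing only the 4-bit integers and one scale per tensor is where the size reduction comes from; the reconstruction error per weight is bounded by half the scale step.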