jnjj/Xddchvh
Tags: Text Generation, Transformers, Safetensors, gemma3_text, conversational, text-generation-inference
Branch: main (2.04 GB total)
1 contributor, history: 5 commits
Latest commit (1910067, verified, 11 months ago) by jnjj: "Upload INT4 quantized model with bfloat16 compute, extreme shrinkage and modifications."
All files were last updated 11 months ago. Two commit messages recur across the listing:

(A) "Upload INT4 quantized model with bfloat16 compute, extreme shrinkage and modifications."
(B) "Upload INT4 quantized Gemma-3-1B-IT QAT with bfloat16 compute, extreme shrinkage (100% weight prune, only weights saved), and extensive unconventional modifications including GPTQ/AWQ flags (bfloat16 compute)"

File                      Size      Scan   Commit
.gitattributes            1.57 kB   Safe   B
README.md                 34 B      Safe   "Create README.md"
added_tokens.json         35 B      Safe   B
config.json               914 B     Safe   A
model.safetensors         2 GB      -      A
special_tokens_map.json   662 B     Safe   B
tokenizer.json            33.4 MB   Safe   B
tokenizer.model           4.69 MB   Safe   B
tokenizer_config.json     1.16 MB   Safe   B
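The commit messages describe an INT4-quantized Gemma-3-1B-IT checkpoint with bfloat16 compute, stored as a single safetensors file plus tokenizer assets. A minimal loading sketch with the transformers library follows, assuming the repo loads through the standard Auto classes. The `build_chat` helper is hypothetical (not part of the repo), and the heavy download is kept behind a `__main__` guard so the file can be imported without network access:

```python
# Repo id taken from the page above; whether this checkpoint actually loads
# cleanly is untested here, given its unconventional modifications.
REPO_ID = "jnjj/Xddchvh"

def build_chat(prompt: str) -> list[dict]:
    """Wrap a user prompt in the message format used by chat templates."""
    return [{"role": "user", "content": prompt}]

if __name__ == "__main__":
    # Imports are deferred so the helper above stays importable offline.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    # bfloat16 compute, matching the commit messages.
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, torch_dtype=torch.bfloat16)

    inputs = tokenizer.apply_chat_template(
        build_chat("Hello!"), add_generation_prompt=True, return_tensors="pt"
    )
    out = model.generate(inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note the "100% weight prune, only weights saved" wording in commit (B): a checkpoint matching that description may produce degenerate output even if it loads.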