
Crusadersk/phi-2-gptq-4bit

Tags: Text Generation, Transformers, Safetensors, English, phi, quantized, gptq, 4bit, safety-evaluation, banterhearts, text-generation-inference, 4-bit precision
  • 1 contributor: Crusadersk
History: 4 commits
Latest commit 6385e88 (verified, 5 days ago): Model card v2: full eval results, provenance, compatibility, reproduction
  • .gitattributes (1.52 kB) - initial commit, 5 days ago
  • README.md (5.31 kB) - Model card v2: full eval results, provenance, compatibility, reproduction, 5 days ago
  • config.json (1.89 kB) - Self-quantized phi-2-gptq 4-bit (group_size=128, seed=42), 5 days ago
  • generation_config.json (139 Bytes) - Self-quantized phi-2-gptq 4-bit (group_size=128, seed=42), 5 days ago
  • model.safetensors (1.84 GB) - Self-quantized phi-2-gptq 4-bit (group_size=128, seed=42), 5 days ago
  • quant_log.csv (8.59 kB) - Self-quantized phi-2-gptq 4-bit (group_size=128, seed=42), 5 days ago
  • quantize_config.json (104 Bytes) - Self-quantized phi-2-gptq 4-bit (group_size=128, seed=42), 5 days ago
  • quantize_manifest.json (251 Bytes) - Self-quantized phi-2-gptq 4-bit (group_size=128, seed=42), 5 days ago
  • tokenizer.json (3.56 MB) - Self-quantized phi-2-gptq 4-bit (group_size=128, seed=42), 5 days ago
  • tokenizer_config.json (344 Bytes) - Self-quantized phi-2-gptq 4-bit (group_size=128, seed=42), 5 days ago