embedl/Llama-3.2-1B-Instruct-FlashHead-W4A16

Tags: Safetensors · flash_head_llama · text-generation-inference · custom_code · compressed-tensors
Repository size: 1.6 GB · 2 contributors · History: 19 commits
Latest commit: swaze, "Upload 3 files" (61f8176, verified), 2 months ago
  • assets (folder) - "Upload folder using huggingface_hub", 3 months ago
  • flash_head_assets (folder) - "Delete files flash_head_assets/clustering_cache.pt with huggingface_hub", 3 months ago
  • .gitattributes (1.63 kB) - "Upload folder using huggingface_hub", 3 months ago
  • README.md (6.6 kB) - "Update README.md", 3 months ago
  • chat_template.jinja (3.83 kB) - "Upload folder using huggingface_hub", 3 months ago
  • config.json (2.13 kB) - "Upload 3 files", 2 months ago
  • configuration_flash_head_llama.py (73 Bytes) - "Upload 3 files", 2 months ago
  • generation_config.json (184 Bytes) - "Upload folder using huggingface_hub", 3 months ago
  • model.safetensors (1.55 GB, xet) - "Upload folder using huggingface_hub", 3 months ago
  • modeling_flash_head_llama.py (78 Bytes) - "Upload 3 files", 2 months ago
  • recipe.yaml (231 Bytes) - "Upload folder using huggingface_hub", 3 months ago
  • special_tokens_map.json (296 Bytes) - "Upload folder using huggingface_hub", 3 months ago
  • tokenizer.json (17.2 MB, xet) - "Upload folder using huggingface_hub", 3 months ago
  • tokenizer_config.json (50.5 kB) - "Upload folder using huggingface_hub", 3 months ago
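The `custom_code` tag and the `configuration_flash_head_llama.py` / `modeling_flash_head_llama.py` files indicate the checkpoint defines a custom architecture, so loading it through the `transformers` AutoClass API requires `trust_remote_code=True`. A minimal sketch, assuming `transformers` (and the `compressed-tensors` runtime for the W4A16 weights) are installed; the exact dependency versions are not stated on this page, and the download is roughly the 1.55 GB of `model.safetensors`:

```python
# Hedged sketch of loading this repo with transformers' AutoClass API.
# Assumption: `transformers` and `compressed-tensors` are installed.

REPO_ID = "embedl/Llama-3.2-1B-Instruct-FlashHead-W4A16"


def build_load_kwargs(trust_remote_code: bool = True) -> dict:
    """Keyword arguments for from_pretrained.

    trust_remote_code=True is required because the repo ships its own
    configuration_flash_head_llama.py and modeling_flash_head_llama.py.
    """
    return {"trust_remote_code": trust_remote_code}


if __name__ == "__main__":
    # Guarded so the module can be imported without triggering the download.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, **build_load_kwargs())
```

The `if __name__ == "__main__"` guard keeps the sketch importable without fetching the 1.55 GB checkpoint; the actual load happens only when the script is run directly.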