inference-optimization/Llama-3.1-8B-Instruct-FP8-dynamic-QKV-Cache-FP8-Per-Tensor
Safetensors · llama · compressed-tensors · License: apache-2.0
Commit History
Upload folder using huggingface_hub · d318a7c (verified) · krishnateja95 committed on Dec 4, 2025
initial commit · 41cc2bb (verified) · krishnateja95 committed on Dec 4, 2025