ddh0/Q4_K_X.gguf

Tags: GGUF · imatrix · conversational
License: unknown
Branch: main · 444 GB · 1 contributor · History: 37 commits

Latest commit by ddh0: "Upload Llama-3.3-70B-Joyous-Q4_K_X.gguf with huggingface_hub" (d708445, verified, about 2 months ago)
| File | Safe | Size | Storage | Last commit message | Last updated |
|---|---|---|---|---|---|
| .gitattributes | Safe | 2.65 kB | | Upload Llama-3.3-70B-Joyous-Q4_K_X.gguf with huggingface_hub | about 2 months ago |
| Cassiopeia-70B-Q4_K_X.gguf | Safe | 43.9 GB | xet | Upload Cassiopeia-70B-Q4_K_X.gguf with huggingface_hub | 7 months ago |
| Diagesis-Q4_K_X.gguf | | 43.9 GB | xet | Upload Diagesis-Q4_K_X.gguf with huggingface_hub | 4 months ago |
| Hunyuan-A13B-Instruct-Q4_K_X.gguf | | 48.7 GB | xet | Upload Hunyuan-A13B-Instruct-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| L3.3-Unnamed-Exp-70B-v0.8-Q4_K_X.gguf | Safe | 43.9 GB | xet | Upload L3.3-Unnamed-Exp-70B-v0.8-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| Llama-3.3-70B-Instruct-Q4_K_X.gguf | | 43.9 GB | xet | Upload Llama-3.3-70B-Instruct-Q4_K_X.gguf with huggingface_hub | about 2 months ago |
| Llama-3.3-70B-Joyous-Q4_K_X.gguf | | 43.9 GB | xet | Upload Llama-3.3-70B-Joyous-Q4_K_X.gguf with huggingface_hub | about 2 months ago |
| Llama-3.3-70B-Vulpecula-r1-IQ4_XS_X.gguf | | 42.7 GB | xet | Upload Llama-3.3-70B-Vulpecula-r1-IQ4_XS_X.gguf with huggingface_hub | 2 months ago |
| Magistral-Small-2506-Q4_K_X.gguf | | 14.8 GB | xet | Upload Magistral-Small-2506-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| Mistral-Small-3.2-24B-Instruct-2506-Q4_K_X.gguf | | 14.8 GB | xet | Upload Mistral-Small-3.2-24B-Instruct-2506-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| Qwen3-14B-Base-Q4_K_X.gguf | | 9.49 GB | xet | Upload Qwen3-14B-Base-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| Qwen3-14B-Q4_K_X.gguf | | 9.49 GB | xet | Upload Qwen3-14B-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| Qwen3-32B-64K-Q4_K_X.gguf | Safe | 20.5 GB | xet | Upload Qwen3-32B-64K-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| Qwen3-8B-Q4_K_X.gguf | | 5.38 GB | xet | Upload Qwen3-8B-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| README.md | Safe | 519 Bytes | | update `attn_output` GGML type | 8 months ago |
| StrawberryLemonade-70B-v1.2-Q4_K_X.gguf | | 43.9 GB | xet | Upload StrawberryLemonade-70B-v1.2-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| gemma-3-12b-it-Q4_K_X.gguf | | 7.45 GB | xet | Upload gemma-3-12b-it-Q4_K_X.gguf with huggingface_hub | 8 months ago |
| gemma-3-12b-pt-Q4_K_X.gguf | Safe | 7.45 GB | xet | Upload gemma-3-12b-pt-Q4_K_X.gguf with huggingface_hub | 8 months ago |
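The files in this listing were uploaded with huggingface_hub, and each can also be fetched directly over HTTPS. A minimal sketch of building such a direct-download link, assuming the standard Hub URL pattern `https://huggingface.co/{repo_id}/resolve/{revision}/{filename}` (stdlib only; for actual downloads, huggingface_hub's own download helpers handle caching and retries):

```python
from urllib.parse import quote


def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download ("resolve") URL for a file in a
    Hugging Face repository, percent-encoding the revision and filename."""
    return (
        f"https://huggingface.co/{repo_id}/resolve/"
        f"{quote(revision, safe='')}/{quote(filename)}"
    )


# Example: direct link to one of the quantized models listed above.
url = hf_resolve_url("ddh0/Q4_K_X.gguf", "Qwen3-8B-Q4_K_X.gguf")
print(url)
# → https://huggingface.co/ddh0/Q4_K_X.gguf/resolve/main/Qwen3-8B-Q4_K_X.gguf
```

Percent-encoding the path segments keeps filenames with unusual characters valid; the names in this repo (dots, dashes, underscores) pass through unchanged.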