FPHam/Autolycus-Mistral_7B-Q6_K-GGUF (Hugging Face model repository)
Tags: GGUF, English, mistral, instruct, finetune, chatml, gpt4, llama-cpp, gguf-my-repo, conversational
License: apache-2.0
Branch: main
Repository size: 5.94 GB, 1 contributor, 4 commits
Latest commit: 299e2ba (verified) by FPHam, "Update README.md", almost 2 years ago
Files:
.gitattributes (1.59 kB): "Upload autolycus-mistral_7b.Q6_K.gguf with huggingface_hub", almost 2 years ago
README.md (1.82 kB): "Update README.md", almost 2 years ago
autolycus-mistral_7b.Q6_K.gguf (5.94 GB): "Upload autolycus-mistral_7b.Q6_K.gguf with huggingface_hub", almost 2 years ago
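The repository ships a single Q6_K GGUF quantization intended for llama.cpp (see the llama-cpp and gguf-my-repo tags). A minimal sketch of building the direct download URL for that file, assuming the standard Hugging Face `resolve` URL layout; in practice one would use `huggingface-cli download` or `hf_hub_download` from the huggingface_hub library instead:

```python
# Sketch only: constructs the conventional Hugging Face direct-download URL
# for the quantized model file listed above. Repo and filename are taken
# from the file listing; the /resolve/main/ path is the standard Hub layout.
REPO = "FPHam/Autolycus-Mistral_7B-Q6_K-GGUF"
FILENAME = "autolycus-mistral_7b.Q6_K.gguf"

url = f"https://huggingface.co/{REPO}/resolve/main/{FILENAME}"
print(url)

# Once downloaded, the file can typically be run with llama.cpp, e.g.:
#   ./llama-cli -m autolycus-mistral_7b.Q6_K.gguf -p "Hello"
# (exact binary name and flags depend on your llama.cpp version).
```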