Uploaded model
- Developed by: sal076
- License: llama 3.1
- Finetuned from model: unsloth/meta-llama-3.1-8b-bnb-4bit
This is a rough finetune made quickly as a proof of concept; it is not meant to be a usable model.
An updated and improved version is available; use one of these quants instead:
Q4_K_M: https://huggingface.co/sal076/L3.1_RP_TEST3-Q4_K_M-GGUF
Q5_K_M: https://huggingface.co/sal076/L3.1_RP_TEST3-Q5_K_M-GGUF