What is this?
This is the Colox-v1 GGUF LoRA adapter.
How can I use this?
This adapter is compatible with any service that supports loading a GGUF LoRA adapter alongside a base model (e.g. llama.cpp, Ollama).
What is the base model?
Llama 3.1 8B Instruct
How is this LoRA adapter quantized?
It has been quantized to F16.
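As a minimal sketch of the llama.cpp usage mentioned above (the file names and paths below are placeholders, not the actual repository file names; point them at wherever you downloaded the GGUFs):

```shell
# Run the base model with this F16 LoRA adapter applied via llama.cpp.
# -m      : the base model GGUF (Llama 3.1 8B Instruct)
# --lora  : this adapter's GGUF file
./llama-cli \
  -m ./models/Llama-3.1-8B-Instruct.Q8_0.gguf \
  --lora ./models/Colox-v1-lora-f16.gguf \
  -p "Hello, who are you?"
```

With Ollama, the equivalent is a Modelfile whose `FROM` line points at the base model and whose `ADAPTER` line points at the adapter GGUF.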
Model tree for retronic/Colox_The1st-LORA_GGUF:
- Base model: meta-llama/Llama-3.1-8B
- Fine-tuned: meta-llama/Llama-3.1-8B-Instruct
- Fine-tuned: retronic/Colox_The1st