# llama-3 : GGUF
This model was finetuned and converted to GGUF format using Unsloth.
Example usage:

- For text-only LLMs: `./llama.cpp/llama-cli -hf reboo13/llama-3 --jinja`
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf reboo13/llama-3 --jinja`
Available model files:

- `llama-3-8b.Q4_K_M.gguf`
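
If you prefer to download the quantized file once and point llama.cpp at the local path, `huggingface_hub` can fetch it directly. A minimal sketch; `hf_hub_download` is the standard Hub API, but the variable name and usage here are illustrative:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Download the Q4_K_M file into the local Hugging Face cache and
# return its resolved path, usable with llama-cli's -m flag.
model_path = hf_hub_download(
    repo_id="reboo13/llama-3",
    filename="llama-3-8b.Q4_K_M.gguf",
)
print(model_path)
```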
Note: The model's BOS token behavior was adjusted for GGUF compatibility.
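
A quick way to verify that BOS handling behaves as expected after loading the GGUF file is to tokenize a short string with and without the automatic BOS token. This is a sketch using llama-cpp-python's `token_bos` and `tokenize` methods; the test string is arbitrary:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="reboo13/llama-3",
    filename="llama-3-8b.Q4_K_M.gguf",
)

# BOS token id as recorded in the GGUF metadata.
print("BOS token id:", llm.token_bos())

# With add_bos=True the result should start with exactly one BOS token;
# the two lists should otherwise be identical.
print(llm.tokenize(b"Hello", add_bos=True))
print(llm.tokenize(b"Hello", add_bos=False))
```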
This model was trained 2x faster with Unsloth.

Example usage with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Load the Q4_K_M GGUF file directly from the Hugging Face Hub.
llm = Llama.from_pretrained(
    repo_id="reboo13/llama-3",
    filename="llama-3-8b.Q4_K_M.gguf",
)
```
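
Once loaded, `create_chat_completion` accepts an OpenAI-style message list. A minimal sketch reusing the `llm` object from the block above; the prompt and `max_tokens` value are illustrative assumptions, not from the original card:

```python
# The prompt and max_tokens below are illustrative examples.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Summarize what the GGUF format is used for."}
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```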