# model_satere_mawe_v3 - GGUF

This model was fine-tuned and converted to GGUF format using Unsloth.
## Example usage

- Text-only LLMs: `llama-cli --hf repo_id/model_name -p "why is the sky blue?"`
- Multimodal models: `llama-mtmd-cli -m model_name.gguf --mmproj mmproj_file.gguf`
## Available model files

- `Meta-Llama-3.1-8B-Instruct.F16.gguf`
## Ollama

An Ollama Modelfile is included for easy deployment.
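The shipped Modelfile is the authoritative one; as an illustration only, a minimal Modelfile for a GGUF checkpoint like this (the parameter values below are assumptions, not the repo's actual config) typically looks like:

```
# Point Ollama at the local GGUF file (path is an example)
FROM ./Meta-Llama-3.1-8B-Instruct.F16.gguf

# Sampling parameter shown for illustration; the included Modelfile may differ
PARAMETER temperature 0.7
```

Registering it under the tag used in the Python example below would then be:

```shell
ollama create satere -f Modelfile
```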
### How to use

```python
from ollama import Client

# Connect to a locally running Ollama server
client = Client(host='http://localhost:11434')

# Ask the model for the Sateré-Mawé word for "head"
response = client.chat(
    model="satere:latest",
    messages=[{"role": "user", "content": "Qual a palavra para 'cabeça'?"}],
)

print("Model response:")
print(response["message"]["content"])
```