danielus/MermaidSolar-Q6_K-GGUF
This model was converted to GGUF format from TroyDoesAI/MermaidSolar using llama.cpp.
Refer to the original model card for more details on the model.
Use with llama.cpp
Install llama.cpp through brew:

brew install ggerganov/ggerganov/llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:
llama-cli --hf-repo danielus/MermaidSolar-Q6_K-GGUF --model mermaidsolar.Q6_K.gguf -p "The meaning to life and the universe is "

Server:
llama-server --hf-repo danielus/MermaidSolar-Q6_K-GGUF --model mermaidsolar.Q6_K.gguf -c 2048
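
Once llama-server is running, it exposes an OpenAI-compatible chat endpoint. Below is a minimal sketch of querying it from Python, assuming the default host and port (http://localhost:8080) and the requests package; adjust the URL if you started the server with different flags.

# Sketch: query a locally running llama-server via its
# OpenAI-compatible /v1/chat/completions endpoint.
# Assumes the server was started as shown above and listens
# on the default http://localhost:8080.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "The meaning to life and the universe is"}
        ],
        "max_tokens": 128,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])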
Use with llama-cpp-python

# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="danielus/MermaidSolar-Q6_K-GGUF",
    filename="mermaidsolar.Q6_K.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
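
For chat-style prompting, llama-cpp-python also provides create_chat_completion, which takes OpenAI-style messages. A short sketch, reusing the llm object created above; the system/user messages are illustrative placeholders:

# Sketch: chat-style generation with the same Llama instance.
# create_chat_completion accepts OpenAI-format messages and
# returns an OpenAI-style response dict.
chat = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a short story."},
    ],
    max_tokens=256,
)
print(chat["choices"][0]["message"]["content"])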