Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="stillerman/SmolLM-135M-Instruct-Llamafile",
	filename="SmolLM-135M-Instruct-F16.gguf",
)
llm.create_chat_completion(
	messages = [
		{"role": "user", "content": "What is the capital of France?"}
	]
)
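create_chat_completion returns an OpenAI-style completion dict. A minimal sketch of pulling out the assistant's reply, using a mocked response purely to illustrate the shape (the real call above returns the same structure; the mocked text is invented):

```python
# Mocked response in the OpenAI chat-completions shape that
# llama-cpp-python's create_chat_completion returns.
response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hi! How can I help?"}}
    ]
}

# The assistant reply lives at choices[0].message.content.
reply = response["choices"][0]["message"]["content"]
print(reply)
```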

SmolLM-135M-Instruct - llamafile

This repo contains .gguf and .llamafile files for SmolLM-135M-Instruct. A llamafile is a single-file executable that runs locally on most computers, with no installation required.

Use it in 3 lines!

wget https://huggingface.co/stillerman/SmolLM-135M-Instruct-Llamafile/resolve/main/SmolLM-135M-Instruct-F16.llamafile
chmod a+x SmolLM-135M-Instruct-F16.llamafile
./SmolLM-135M-Instruct-F16.llamafile
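When run, a llamafile also serves a local OpenAI-compatible chat endpoint (llamafile's default address is http://127.0.0.1:8080). A minimal sketch of querying it from Python, assuming the llamafile above is already running; the endpoint path and payload fields follow the OpenAI chat-completions schema, and the `model` value here is an illustrative placeholder:

```python
import json
import urllib.request

# Chat request in the OpenAI-compatible format accepted by the llamafile server.
payload = {
    "model": "SmolLM-135M-Instruct",  # placeholder; the server runs its bundled model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "temperature": 0.7,
}

def chat(url="http://127.0.0.1:8080/v1/chat/completions"):
    """POST the payload to the local llamafile server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]
```

Calling `chat()` while the llamafile is running returns the model's reply as a string.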


Model details

GGUF format, 0.1B params, llama architecture, 16-bit (F16) weights.