Use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="chatpdflocal/SmolLM2-135M-Instruct-GGUF",
	filename="smollm2-135m-instruct-f16.gguf",
)
response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "What is the capital of France?"}
	]
)
print(response["choices"][0]["message"]["content"])
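
For interactive use, the reply can also be streamed token by token. This is a minimal sketch reusing the `llm` object from above; the prompt, `max_tokens`, and `temperature` values are illustrative choices, not recommendations.

# Stream the assistant reply token by token (sketch; reuses the llm object above)
stream = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}
	],
	max_tokens=128,
	temperature=0.7,
	stream=True,
)
for chunk in stream:
	delta = chunk["choices"][0]["delta"]
	if "content" in delta:
		print(delta["content"], end="", flush=True)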

This is the SmolLM2-135M-Instruct model in GGUF format, which can easily be run on PCs, mobile phones, or other devices with llama.cpp.
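
If you prefer to manage the file yourself, the GGUF can be downloaded once and then loaded from a local path. Below is a minimal sketch, assuming the huggingface_hub package is installed; the n_ctx value is an illustrative choice.

# Download the GGUF file once and load it from a local path (sketch)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
	repo_id="chatpdflocal/SmolLM2-135M-Instruct-GGUF",
	filename="smollm2-135m-instruct-f16.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)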

If you are a Mac user, the following free AI tools can help you read and understand PDFs effectively:

  • If you use Zotero to manage and read your PDFs, PapersGPT is a free plugin that lets you chat with your PDFs using your local SmolLM2-135M-Instruct.
  • You can also download the ChatPDFLocal macOS app from here, load one or more PDF files, and quickly try the model through chat-based reading.