Tags: Text Generation · PEFT · GGUF · Transformers · English · axolotl · lora
Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the Q4_K_M GGUF file from the Hub (cached locally) and load it
llm = Llama.from_pretrained(
    repo_id="Lerelou/SmoLlm3python-3B_GGUF",
    filename="smolpython-3B-Q4_K_M.gguf",
)

# Plain text completion; echo=True includes the prompt in the returned text
output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True,
)
print(output)
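For interactive use, llama-cpp-python also exposes `create_chat_completion`, which takes an OpenAI-style message list. A minimal sketch follows; the system prompt wording and the example user request are illustrative assumptions, not part of this model card.

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Assemble an OpenAI-style message list for create_chat_completion."""
    return [
        # Illustrative system prompt (an assumption, not from the model card)
        {"role": "system", "content": "You are a helpful Python coding assistant."},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama.from_pretrained(
        repo_id="Lerelou/SmoLlm3python-3B_GGUF",
        filename="smolpython-3B-Q4_K_M.gguf",
    )
    result = llm.create_chat_completion(
        messages=build_messages("Write a function that reverses a string."),
        max_tokens=256,
    )
    print(result["choices"][0]["message"]["content"])
```

The chat interface lets the model's chat template handle role formatting, rather than hand-crafting a raw completion prompt.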

Model smolpython-3B_GGUF (fine-tuned from HuggingFaceTB/SmolLM3-3B-Base)

Model Description

This model is a fine-tuned version of HuggingFaceTB/SmolLM3-3B-Base, specialized for writing Python code.

Training Details

Format: GGUF
Model size: 3B params
Architecture: smollm3

Quantization: 4-bit (Q4_K_M)
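The 4-bit quantization is why this GGUF file is far smaller than the fp16 weights. As a rough illustration of what block-wise 4-bit quantization means, here is a simplified toy scheme (x ≈ min + code × scale per block); the real Q4_K_M format additionally packs two codes per byte and groups blocks into super-blocks with shared scales, which is omitted here.

```python
def quantize_block_4bit(values: list[float]) -> tuple[float, float, list[int]]:
    """Map a block of floats to 4-bit codes (0..15) plus a block minimum and scale.

    Toy scheme for illustration only: x is approximated as lo + code * scale.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 15 or 1.0  # avoid division by zero for constant blocks
    codes = [round((x - lo) / scale) for x in values]
    return lo, scale, codes

def dequantize_block_4bit(lo: float, scale: float, codes: list[int]) -> list[float]:
    """Reconstruct approximate floats from the 4-bit codes."""
    return [lo + c * scale for c in codes]
```

With only 16 levels per block, each weight costs about 4 bits plus a small per-block overhead, roughly a quarter of fp16 storage, at the price of a bounded rounding error per weight.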


