Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="skdrx/amd135m_reasoning_finetune",
	filename="",  # fill in the name of a GGUF file from the repo
)
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True
)
print(output)
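Rather than printing the whole response dict, you usually just want the generated text. A minimal sketch, assuming the OpenAI-style completion shape (`"choices"` holding a list of `{"text": ...}` entries) that llama-cpp-python returns:

```python
# Minimal sketch: extract the generated text from the OpenAI-style
# completion dict returned by llama-cpp-python. The exact dict shape
# ("choices" -> list of {"text": ...}) is an assumption based on the
# library's documented response format.
def completion_text(output: dict) -> str:
    """Return the generated text from a completion response dict."""
    return output["choices"][0]["text"]

# Example with a stand-in response dict of the same shape:
sample = {"choices": [{"text": "Once upon a time, there was a model."}]}
print(completion_text(sample))
```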

Finetune of amd135m using the ChatML format on the reasoning-base-20k dataset from KingNish. Trying to see if I can get this small model to reason. Improvements and suggestions welcome. Will upload the training script and dataset script soon (yell at me if I don't).

Downloads last month: 67
Model size: 0.1B params (Safetensors)
Tensor type: BF16

Model tree for skdrx/amd135m_reasoning_finetune

Quantizations of this model: 15

Datasets used to train skdrx/amd135m_reasoning_finetune