I had to modify the Python conversion script and a few other files in ollama's bundled llama.cpp to get the model converted; however, until the architecture is added to the backend, this model won't load. Beyond that, I have no idea about its performance, but given its size it should be simple enough to run. This is my first model conversion, so forgive me if it's a tad rough once it does get the needed backend support.

I've added my modified version of llama.cpp. It currently contains the changes to the conversion script, constants, and tensor mapping that were needed to perform the conversion. This should provide all the information needed to modify the loader, so that both conversion and loading handle the model 1:1, without compromises. However, I don't know what to modify in the C/C++ loader code, so hopefully someone can take the adjustments I made and carry them the rest of the way.
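For anyone curious what the conversion-side changes amount to, here is a minimal, self-contained sketch of the general pattern llama.cpp's gguf-py follows for a new architecture (an architecture enum entry, a name map, and a Hugging Face-to-GGUF tensor-name mapping). The names and tensor layout below are illustrative stand-ins, not the exact identifiers in my fork:

```python
import re
from enum import IntEnum, auto

# Illustrative stand-ins for the entries added to gguf-py's constants
# and tensor mapping; the real code lives in constants.py and
# tensor_mapping.py inside llama.cpp's gguf-py package.
class MODEL_ARCH(IntEnum):
    LLAMA = auto()
    INSTELLA = auto()  # new architecture entry

MODEL_ARCH_NAMES = {
    MODEL_ARCH.LLAMA: "llama",
    MODEL_ARCH.INSTELLA: "instella",
}

# Map HF checkpoint tensor names to GGUF tensor names (hypothetical layout;
# "{bid}" stands for the block index).
HF_TO_GGUF = {
    "model.embed_tokens.weight": "token_embd.weight",
    "model.layers.{bid}.self_attn.q_proj.weight": "blk.{bid}.attn_q.weight",
    "model.layers.{bid}.self_attn.k_proj.weight": "blk.{bid}.attn_k.weight",
}

def map_tensor_name(hf_name: str) -> str:
    """Resolve an HF checkpoint tensor name to its GGUF equivalent."""
    for pattern, target in HF_TO_GGUF.items():
        # Turn "{bid}" placeholders into a capture group for the block index.
        regex = "^" + re.escape(pattern).replace(r"\{bid\}", r"(\d+)") + "$"
        m = re.match(regex, hf_name)
        if m:
            return target.format(bid=m.group(1)) if m.groups() else target
    raise KeyError(f"unmapped tensor: {hf_name}")
```

The loader side needs the mirror image of this in C/C++: the same architecture name and the same GGUF tensor names, wired into the graph-building code, which is the part still missing here.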

Model: Melechtna/Instella-3B-Instruct-GGUF
Format: GGUF (16-bit)
Model size: 3B params
Architecture: instella