I had to modify the conversion Python script and a few other files in ollama to get the model converted; however, until the architecture is added to the backend, this model won't load. Beyond that, I have no idea about its performance, but given its size it should be simple enough to run. This is my first model conversion, so forgive me if it's a tad rough once it does get the needed backend support.
I've added my modified version of llama.cpp, which currently contains the changes to the conversion script, constants, and tensor mapping that were needed to perform the conversion. That should provide all the information needed to modify the loader so that both conversion and loading handle the model 1:1 without compromises. However, I don't know what to change in the C code for the loader, so hopefully someone can take the adjustments I made and get it working the rest of the way.
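For anyone unfamiliar with what the "tensor mapping" part of that change amounts to: conversion essentially renames the Hugging Face checkpoint's tensors to GGUF's naming scheme (llama.cpp does this through gguf-py's TensorNameMap, and the C++ loader looks the names up again via its own per-architecture tables). Below is a minimal standalone sketch of the idea, not the actual patch; the patterns shown are generic llama-style names and may not match Instella's exact layout.

```python
import re

# Illustrative only: generic llama-style HF tensor names mapped to GGUF names.
# llama.cpp does this via gguf-py's TensorNameMap rather than a hand-rolled table.
HF_TO_GGUF = [
    (r"model\.embed_tokens\.weight",                            "token_embd.weight"),
    (r"model\.norm\.weight",                                    "output_norm.weight"),
    (r"lm_head\.weight",                                        "output.weight"),
    (r"model\.layers\.(\d+)\.input_layernorm\.weight",          r"blk.\1.attn_norm.weight"),
    (r"model\.layers\.(\d+)\.self_attn\.q_proj\.weight",        r"blk.\1.attn_q.weight"),
    (r"model\.layers\.(\d+)\.self_attn\.k_proj\.weight",        r"blk.\1.attn_k.weight"),
    (r"model\.layers\.(\d+)\.self_attn\.v_proj\.weight",        r"blk.\1.attn_v.weight"),
    (r"model\.layers\.(\d+)\.self_attn\.o_proj\.weight",        r"blk.\1.attn_output.weight"),
    (r"model\.layers\.(\d+)\.post_attention_layernorm\.weight", r"blk.\1.ffn_norm.weight"),
    (r"model\.layers\.(\d+)\.mlp\.gate_proj\.weight",           r"blk.\1.ffn_gate.weight"),
    (r"model\.layers\.(\d+)\.mlp\.up_proj\.weight",             r"blk.\1.ffn_up.weight"),
    (r"model\.layers\.(\d+)\.mlp\.down_proj\.weight",           r"blk.\1.ffn_down.weight"),
]

def map_tensor_name(hf_name: str) -> str:
    """Translate one Hugging Face tensor name to its GGUF-style equivalent."""
    for pattern, replacement in HF_TO_GGUF:
        new_name, hits = re.subn(rf"^{pattern}$", replacement, hf_name)
        if hits:
            return new_name
    raise KeyError(f"no GGUF mapping for tensor: {hf_name}")

print(map_tensor_name("model.layers.3.self_attn.q_proj.weight"))  # blk.3.attn_q.weight
```

The loader side needs the equivalent knowledge on the C++ end (the architecture's name, hyperparameters, and tensor list registered in llama.cpp's per-architecture tables), which is the part still missing here.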
Quantization: 16-bit
Base model: amd/Instella-3B-Instruct