Can you quantize this model?

#1
by KnutJaegersberg - opened

I tried the ExLlamaV2 file format and AWQ on an adapter-merged model (note: this model is trained on somewhat less data than LIMA); neither worked, and in both cases I got strange errors.
https://huggingface.co/KnutJaegersberg/Deacon-30b

Currently trying to GGUF it; that seems to work.
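For context on what the GGUF route is doing under the hood, here is a toy sketch of symmetric block-wise integer quantization, the general idea behind formats like GGUF's Q8/Q4 types. This is illustrative only (the function name, block size, and 8-bit width are my choices, not the actual GGUF on-disk layout):

```python
import numpy as np

def quantize_q8_blocks(weights, block_size=32):
    """Toy symmetric 8-bit block quantization (illustrative only,
    not the real GGUF format): each block stores int8 values plus
    one float32 scale."""
    w = np.asarray(weights, dtype=np.float32).reshape(-1, block_size)
    scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(w / scales), -127, 127).astype(np.int8)
    return q, scales.astype(np.float32)

def dequantize(q, scales):
    # Reconstruct approximate float weights from int8 values and per-block scales.
    return q.astype(np.float32) * scales

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 64)).astype(np.float32)
q, s = quantize_q8_blocks(w.ravel())
w_hat = dequantize(q, s).reshape(w.shape)
max_err = float(np.abs(w - w_hat).max())
```

The per-element rounding error is bounded by half a scale step, which is why 8-bit quantization usually costs very little quality; real formats add tricks (4-bit packing, super-blocks, importance weighting) on top of this idea.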

Good to know, I thought TheBloke would get to it. That should mean GGUF works; it has almost finished.
But I'm more concerned about the general workflow for one's own fine-tunes. So we have GGUF; so far I've hit no exceptions there either.

KnutJaegersberg changed discussion status to closed

I did fine-tune this model and found the results disappointing, not worth publishing.

Did you measure it?

I mean, I've seen this model's score on the Hub too, but I was hoping fine-tuning would correct it.
