https://huggingface.co/stepfun-ai/Step-3.5-Flash
#1774 by cocorang - opened
It's queued!
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Step-3.5-Flash-GGUF for quants to appear.
It errors out with "error/1 ~Step3p5ForCausalLM".
We would need to use a custom build of the working llama.cpp fork to quant this model.
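For context, the error above is what you see when the model's architecture isn't registered in llama.cpp's HF-to-GGUF converter: mainline only converts architectures it knows, and "Step3p5ForCausalLM" isn't one of them yet. The sketch below is only an illustration of that check, with a small made-up sample of supported names, not the actual llama.cpp code:

```python
import json

# Illustration only: a tiny, incomplete sample of architectures that
# mainline llama.cpp's converter does support. The real list lives in
# convert_hf_to_gguf.py upstream.
SUPPORTED_ARCHITECTURES = {
    "LlamaForCausalLM",
    "MistralForCausalLM",
    "Qwen2ForCausalLM",
}

def is_convertible(config_json: str) -> bool:
    """Return True only if every architecture listed in the model's
    config.json is one the converter recognizes."""
    config = json.loads(config_json)
    archs = config.get("architectures", [])
    return bool(archs) and all(a in SUPPORTED_ARCHITECTURES for a in archs)

# Step-3.5-Flash declares Step3p5ForCausalLM, which mainline rejects:
print(is_convertible('{"architectures": ["Step3p5ForCausalLM"]}'))  # False
print(is_convertible('{"architectures": ["LlamaForCausalLM"]}'))    # True
```

Until that architecture is merged upstream, conversion (and therefore quanting) only works on a fork that has added it.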
Unless it gets built into mainline llama.cpp, I can't do anything myself. Let's see what @nicoboss says about that.
Hey, mradermacher doesn't do models that aren't yet merged into upstream llama.cpp, as doing so confuses too many users. As soon as it's merged, remind me and I'll queue it.