[Suggestion] REAP, plus IQ4_NL (for faster Apple Silicon / ARM64 inference)

#1
by dissociativity - opened

Make this model even better by pruning it at least partially with REAP (before quantising it, of course). REAP is described in this repo:
https://github.com/CerebrasResearch/reap

See GLM-4.5-Air (same architecture as Intellect-3) pruned with REAP all the way down to 82B:
https://huggingface.co/cerebras/GLM-4.5-Air-REAP-82B-A12B

It's shockingly good. I feel the gains made by Intellect-3, plus the fork of Heretic you're using, would make for an excellent pruned model: far more people could run it, it would perform nearly as well despite the pruning, and the freed-up memory would allow far more context too!

I have a 96 GB M2 Max and I'd love the additional context memory afforded by REAP. I'd also love a copy of the model in IQ4_NL, as it would be far faster on my hardware, but you don't seem to have the full tensors uploaded on your repo. This part is optional, though: if you upload the full tensors of a REAP-pruned version of this model, I should be able to make my own quants.
I'd be happy to donate a little for the effort; otherwise I'm attempting to figure out how to do this myself, it's just more difficult on Apple Silicon than on x86 + Nvidia hardware.
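For anyone else wanting to try this, a rough sketch of the standard llama.cpp workflow for making an IQ4_NL quant from full tensors looks like the below. The local paths and output filenames are placeholders, and this assumes the (hypothetical) REAP-pruned checkpoint's architecture is already supported by llama.cpp's converter:

```shell
# 1. Convert the full-precision HF checkpoint to a GGUF file in FP16.
#    ./pruned-model is a placeholder path to the downloaded tensors.
python convert_hf_to_gguf.py ./pruned-model \
    --outfile model-f16.gguf --outtype f16

# 2. Quantize to IQ4_NL (4-bit non-linear, friendlier to ARM/Apple Silicon).
./llama-quantize model-f16.gguf model-iq4_nl.gguf IQ4_NL
```

An importance matrix (llama-imatrix, passed via --imatrix) reportedly helps quality at this bit width, but plain IQ4_NL works without one.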
Thank you for considering my suggestions!

Our SignRoundV2-quantized REAPER-PRISM pipelines accomplish this; it's a matter of funding. Members receive day-0 drops and choice picks.

