These are GGUF quantized versions of FoxEngineAi/Mega-Destroyer-8x7B.
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using wiki.train.raw.
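For reference, an importance matrix like this is typically computed with llama.cpp's imatrix tool; a minimal sketch, assuming the `llama-imatrix` binary and an f16 source file (the binary name, model path, and flags are assumptions and vary by llama.cpp version):

```shell
# Hypothetical invocation (model path and binary name assumed, not from the card):
# compute an importance matrix over wiki.train.raw in 200 chunks of 512 tokens (~100K tokens).
./llama-imatrix -m Mega-Destroyer-8x7B-f16.gguf -f wiki.train.raw -o imatrix.dat --chunks 200
```

The resulting `imatrix.dat` is then passed to the quantization step so that low-bit quants preserve the weights that matter most on the calibration text.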
Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
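To illustrate the concatenation step on dummy data (file names here are placeholders, not the real split parts):

```shell
# Demo of reassembling split parts with a shell glob, using tiny dummy files.
# The glob foo-Q6_K.gguf.* matches only the part files, not the output file,
# and expands in lexicographic order, so the parts are joined in sequence.
printf 'part-one:' > foo-Q6_K.gguf.part1
printf 'part-two'  > foo-Q6_K.gguf.part2
cat foo-Q6_K.gguf.* > foo-Q6_K.gguf
cat foo-Q6_K.gguf   # prints "part-one:part-two"
```

The same pattern works for any number of parts, as long as their suffixes sort in the intended order.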
- What quant do I need? See https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
- Want to steal my scripts? See https://gist.github.com/Artefact2/fd2254fc133906ac96b49b6947f0cd4a
- Quant requests? Just open a discussion in the community tab.