GGUF importance matrix (imatrix) quants for https://huggingface.co/abideen/AlphaMonarch-laser
The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a general purpose imatrix calibration dataset.
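For reference, an importance matrix like this can be produced with llama.cpp's `llama-imatrix` tool and then passed to `llama-quantize`. The filenames below are placeholders; this is a sketch of the workflow, not the exact commands used for these quants:

```shell
# Compute an importance matrix from a calibration text file
# (filenames are illustrative). --chunks caps how many 512-token
# batches are processed.
./llama-imatrix -m AlphaMonarch-laser-f16.gguf \
    -f calibration.txt -o imatrix.dat --chunks 105

# Use the imatrix during quantization so the most important weights
# keep more precision at low bit widths.
./llama-quantize --imatrix imatrix.dat \
    AlphaMonarch-laser-f16.gguf AlphaMonarch-laser-Q4_K_M.gguf Q4_K_M
```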

AlphaMonarch-laser is a DPO fine-tune of mlabonne/NeuralMonarch-7B trained on the argilla/OpenHermes2.5-dpo-binarized-alpha preference dataset, yet it achieves better performance than mlabonne/AlphaMonarch-7B thanks to LaserQLoRA. We fine-tuned this model on only half of the projections, but achieved better results than the version released by Maxime Labonne. We trained this model for 1,080 steps.
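One way to picture "fine-tuning only half of the projections" is selecting a subset of the LLaMA-architecture projection matrices as LoRA targets and leaving the rest frozen. The split below is purely illustrative; the card does not state which projections were actually adapted:

```python
# Hypothetical sketch: choose half of the projection modules as LoRA
# targets. The actual subset used for AlphaMonarch-laser is not
# documented here.
ALL_PROJECTIONS = [
    "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
    "gate_proj", "up_proj", "down_proj",      # MLP projections
]

def half_of(projections):
    """Return the first half of the projection list (illustrative split)."""
    return projections[: len(projections) // 2]

target_modules = half_of(ALL_PROJECTIONS)
# These names could then be passed to e.g. peft.LoraConfig(target_modules=...).
print(target_modules)  # ['q_proj', 'k_proj', 'v_proj']
```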

Layers: 32
Context: 32768
Template:
[INST] {prompt} [/INST]
{response}
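The template above can be filled with a small helper; `format_prompt` is an illustrative name, not part of the model's tooling:

```python
def format_prompt(prompt: str, response: str = "") -> str:
    """Fill the model's [INST] chat template (Mistral-style)."""
    text = f"[INST] {prompt} [/INST]\n"
    if response:
        text += response
    return text

print(format_prompt("Summarize LaserQLoRA in one sentence."))
# -> [INST] Summarize LaserQLoRA in one sentence. [/INST]
```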
Format: GGUF
Model size: 7B params
Architecture: llama

Available quantizations: 3-bit, 4-bit, 6-bit, 8-bit
