QuantFactory/Mistral-Gutenberg-Doppel-7B-FFT-GGUF
This is a quantized version of nbeerbower/Mistral-Gutenberg-Doppel-7B-FFT, created using llama.cpp.
Original Model Card
Mistral-Gutenberg-Doppel-7B-FFT
mistralai/Mistral-7B-Instruct-v0.2 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
This is a full finetune rather than my usual QLoRA tunes, done mostly for learning purposes.
Method
ORPO-tuned on 4x A100 GPUs for 2 epochs.
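The card names ORPO (odds-ratio preference optimization) as the tuning method but includes no code. As a rough illustration only, here is a minimal sketch of the odds-ratio term at the heart of ORPO; the function names and the scalar (length-normalized average log-probability) inputs are assumptions for this sketch, not the author's actual training code.

```python
import math

def log_sigmoid(x: float) -> float:
    # Numerically stable log(sigmoid(x)).
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def orpo_odds_ratio_loss(logp_chosen: float, logp_rejected: float) -> float:
    """Odds-ratio term of the ORPO loss for one preference pair.

    logp_chosen / logp_rejected are assumed to be length-normalized
    average token log-probabilities of the chosen and rejected responses.
    With odds(p) = p / (1 - p), the loss pushes the odds of the chosen
    response above the odds of the rejected one.
    """
    def log_odds(logp: float) -> float:
        p = math.exp(logp)
        return logp - math.log(1.0 - p)

    return -log_sigmoid(log_odds(logp_chosen) - log_odds(logp_rejected))

# The full ORPO objective adds this term, scaled by a weight lambda,
# to the ordinary SFT (negative log-likelihood) loss on the chosen response:
#   loss = nll_chosen + lam * orpo_odds_ratio_loss(logp_chosen, logp_rejected)
```

When the chosen and rejected responses are equally likely the term equals log 2, and it shrinks toward zero as the model prefers the chosen response more strongly.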
Quantization levels
2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
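As a back-of-envelope guide, each quantization level's bits-per-weight translates into an approximate file size. The sketch below assumes roughly 7.24B parameters for Mistral-7B and ignores GGUF metadata and k-quant overhead, so real files run somewhat larger.

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    # Rough file size: parameters * bits / 8, in gigabytes (1 GB = 1e9 bytes).
    # Real GGUF files are larger: k-quants store per-block scales, and
    # some tensors (e.g. embeddings) are kept at higher precision.
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 7.24e9  # approximate parameter count of Mistral-7B (an assumption)
for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_gguf_size_gb(N_PARAMS, bits):.1f} GB")
```

For example, a 4-bit quant of a 7B model lands around 3.6 GB before overhead, which is why 4-bit and 5-bit files are popular for consumer GPUs.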
Model tree for QuantFactory/Mistral-Gutenberg-Doppel-7B-FFT-GGUF
Base model
mistralai/Mistral-7B-Instruct-v0.2