QuantFactory/Captain_BMO-12B-GGUF
This is a quantized version of Nitral-AI/Captain_BMO-12B, created using llama.cpp.
Original Model Card
Quants available from Bartowski <3: GGUF. Single quant from me: 4bpw Exl2.
Uses Mistral formatting; text completion preset here.
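Since the card says the model uses Mistral formatting, a minimal sketch of the Mistral-style instruct template is shown below. This is an assumption based on the common `[INST]`/`[/INST]` convention; the authoritative template is the `chat_template` shipped in the model's tokenizer config, which may differ in details (BOS handling, system-prompt placement).

```python
def format_mistral_prompt(user_message: str, system: str = "") -> str:
    """Wrap a single user turn in Mistral-style [INST] tags.

    Sketch only: the exact template (BOS token, system placement)
    should be taken from the model's own chat_template.
    """
    sys_part = f"{system}\n\n" if system else ""
    return f"<s>[INST] {sys_part}{user_message} [/INST]"

print(format_mistral_prompt("Hello!"))
```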
Notes: Most likely a one-off train; this was done purely for internal testing but seemed good enough to release. I do not plan to offer any kind of extended support for this model, so your mileage may vary depending on use case and context size.
- (Nemo 12B instruct as base)
- 200k randomized subset of GU_instruct-Remastered-1.1, with a splash of 25k hathor/poppy sauce, slow cooked for 3 epochs on medium heat.
Hardware compatibility
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
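To pick among the quant levels above, a back-of-the-envelope size estimate (params × bits / 8) is often enough. The sketch below assumes a 12B parameter count; real GGUF files run somewhat larger because some tensors (e.g. embeddings, output head) stay at higher precision and per-block scales add overhead.

```python
def gguf_weight_size_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Rough weight-file size in GB: parameters * bits per weight / 8.

    Estimate only; ignores mixed-precision tensors and quantization
    block overhead, so actual GGUF files are slightly larger.
    """
    return n_params_b * bits_per_weight / 8

# Approximate sizes for a 12B model at each listed quant level.
for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{gguf_weight_size_gb(12, bits):.1f} GB")
```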

