
QuantFactory/Captain_BMO-12B-GGUF

This is a quantized version of Nitral-AI/Captain_BMO-12B, created using llama.cpp.

Original Model Card


Quants available from Bartowski <3: GGUF. Single quant from me: 4bpw Exl2.


Uses Mistral formatting. Text completion preset here.
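For context, the card only says "Mistral formatting"; the usual Mistral instruct template (an assumption here, so verify against the repo's preset) wraps user turns in [INST] tags:

```
<s>[INST] Your prompt here [/INST] Model response</s>[INST] Next user turn [/INST]
```

Getting the template exactly right (including the BOS token and tag spacing) matters for output quality with instruct-tuned models.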

Notes: Most likely a one-off train. This was done purely for internal testing, but it seemed good enough to release. I do not plan to offer any kind of extended support for this model, so your mileage may vary depending on use case and context size.

  • (Nemo 12B instruct as base)
  • 200k randomized subset of GU_instruct-Remastered-1.1, with a splash of 25k hathor/poppy sauce, slow cooked for 3 epochs on medium heat.
Downloads last month: 7
Format: GGUF
Model size: 12B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
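As a sketch of how a GGUF quant from this repo can be run locally with llama.cpp's CLI (the quant filename below is a guess; list the repo's files first and substitute the one you want):

```shell
# Fetch one quant file from the repo (pattern is an assumption; check actual filenames)
huggingface-cli download QuantFactory/Captain_BMO-12B-GGUF \
  --include "*Q4_K_M.gguf" --local-dir ./models

# Run with llama.cpp's llama-cli using Mistral-style [INST] formatting;
# -c sets the context size, -n caps generated tokens
./llama-cli -m ./models/Captain_BMO-12B.Q4_K_M.gguf \
  -p "[INST] Write a haiku about quantization. [/INST]" \
  -c 4096 -n 128 --temp 0.8
```

Lower-bit quants trade quality for memory: the 4-bit and 5-bit K-quants are the usual middle ground for a 12B model on consumer hardware.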

Inference Providers: this model isn't currently deployed by any inference provider.