Quantized using the default exllamav3 (0.0.1) quantization process.


Join our Discord! https://discord.gg/Nbv9pQ88Xb


BeaverAI proudly presents...

Star Command R 32B v1 🌟

An RP finetune of Command-R-08-2024


Links

Usage

  • Cohere Instruct format or Text Completion
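A minimal sketch of what "Cohere Instruct format" means in practice, assuming this finetune keeps the stock Command R chat template: turns are wrapped in the base tokenizer's special tokens (`<|START_OF_TURN_TOKEN|>`, `<|USER_TOKEN|>`, `<|CHATBOT_TOKEN|>`, `<|END_OF_TURN_TOKEN|>`). The helper name below is illustrative, not part of any library.

```python
# Hedged sketch of the Cohere/Command R instruct turn format.
# Assumption: this finetune reuses the base Command R special tokens;
# format_cohere_prompt is a hypothetical helper, not a library function.
def format_cohere_prompt(user_message: str, system_prompt: str = "") -> str:
    prompt = "<BOS_TOKEN>"
    if system_prompt:
        prompt += (
            "<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>"
            + system_prompt
            + "<|END_OF_TURN_TOKEN|>"
        )
    prompt += (
        "<|START_OF_TURN_TOKEN|><|USER_TOKEN|>"
        + user_message
        + "<|END_OF_TURN_TOKEN|>"
        # Leave the chatbot turn open so the model generates the reply.
        + "<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>"
    )
    return prompt

print(format_cohere_prompt("Describe the tavern.", system_prompt="You are a narrator."))
```

In most frontends you get the same result by selecting the Command R / Cohere preset, or by calling the tokenizer's `apply_chat_template` instead of building the string by hand; plain text completion (no template) also works per the list above.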

Special Thanks

  • Mr. Gargle for the GPUs! Love you, brotha.

Model tree for MetaphoricalCode/Star-Command-R-32B-v1-exl3-6bpw-hb6