Bygheart V2 - GGUF Format

GGUF quantized versions of Bygheart V2 for use with LM Studio, llama.cpp, and other GGUF-compatible tools.

About Bygheart

Bygheart is an empathetic mental health support chatbot trained on 477 high-quality mental health conversations with enhanced crisis safety features.

⚠️ Important: Bygheart is NOT a replacement for professional mental health care. If you're in crisis, please call 988 (Suicide & Crisis Lifeline) or 911.

Available Quantizations

File                     Size    Description          Use Case
bygheart-v2-q4_k_m.gguf  ~2GB    4-bit (recommended)  Best balance of speed/quality
bygheart-v2-q5_k_m.gguf  ~2.5GB  5-bit                Better quality
bygheart-v2-q6_k.gguf    ~3GB    6-bit                High quality
bygheart-v2-q8_0.gguf    ~3.5GB  8-bit                Very high quality
bygheart-v2-f16.gguf     ~6.8GB  No quantization      Full quality
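As a rough rule of thumb, pick the highest-quality file that fits comfortably in your available RAM. The sketch below encodes the table above; the 1.5 GB headroom value is an illustrative assumption, not a measured figure, since actual runtime memory use depends on context length and the loader.

```python
# Approximate on-disk sizes (GB) from the table above; runtime memory
# use is somewhat higher. File names are the ones listed in this repo.
QUANTS = [
    ("bygheart-v2-q4_k_m.gguf", 2.0),
    ("bygheart-v2-q5_k_m.gguf", 2.5),
    ("bygheart-v2-q6_k.gguf", 3.0),
    ("bygheart-v2-q8_0.gguf", 3.5),
    ("bygheart-v2-f16.gguf", 6.8),
]

def largest_quant_that_fits(ram_gb: float, headroom_gb: float = 1.5) -> str:
    """Pick the highest-quality file whose size plus headroom fits in RAM.

    Falls back to the smallest (q4_k_m) file if nothing fits.
    """
    best = QUANTS[0][0]
    for name, size_gb in QUANTS:
        if size_gb + headroom_gb <= ram_gb:
            best = name
    return best

print(largest_quant_that_fits(8.0))   # 3.5 + 1.5 fits in 8 GB; 6.8 + 1.5 does not
```

With 8 GB of RAM this picks q8_0; with 16 GB it picks the full-precision f16 file.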

Usage in LM Studio

  1. Download LM Studio from https://lmstudio.ai/
  2. Search for VibrationRobotics/bygheart-v2-gguf
  3. Download your preferred quantization (q4_k_m recommended)
  4. Load the model and start chatting!

Recommended Settings

System Prompt:

You are Bygheart, an empathetic AI mental health support companion. You are supportive, non-judgmental, and calm. You are not a doctor, therapist, or emergency service. If the user mentions self-harm, suicide, or immediate danger, you must encourage them to contact emergency services or a crisis line immediately.
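If you drive the model programmatically rather than through the LM Studio chat UI, the system prompt goes into the first message of an OpenAI-style chat message list, which LM Studio's local server and most llama.cpp frontends accept. A minimal sketch:

```python
# The system prompt recommended above, verbatim.
SYSTEM_PROMPT = (
    "You are Bygheart, an empathetic AI mental health support companion. "
    "You are supportive, non-judgmental, and calm. You are not a doctor, "
    "therapist, or emergency service. If the user mentions self-harm, "
    "suicide, or immediate danger, you must encourage them to contact "
    "emergency services or a crisis line immediately."
)

def make_messages(user_text: str) -> list[dict]:
    """Prepend the system prompt to a single user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

msgs = make_messages("I've been feeling overwhelmed lately.")
print(msgs[0]["role"])  # -> system
```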

Generation Settings:

  • Temperature: 0.7
  • Top P: 0.9
  • Max Tokens: 256
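When calling the model through an OpenAI-compatible endpoint (LM Studio serves one locally, at http://localhost:1234/v1 by default), the settings above map directly onto request-body fields. The sketch below only builds and prints the body; actually sending it (e.g. with urllib.request) requires a running server, and the model id is an assumption that depends on how you loaded the file.

```python
import json

def build_request(messages: list[dict]) -> dict:
    """Apply the recommended generation settings to a chat request body."""
    return {
        "model": "bygheart-v2-gguf",  # placeholder id; use whatever your server reports
        "messages": messages,
        "temperature": 0.7,
        "top_p": 0.9,
        "max_tokens": 256,
    }

body = build_request([{"role": "user", "content": "Hello"}])
print(json.dumps(body, indent=2))
```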

Model Details

  • Base Model: Qwen3-1.7B
  • Training: Supervised Fine-Tuning (SFT)
  • Dataset: 477 mental health conversations
  • Conversion: llama.cpp

Crisis Resources

  • Suicide & Crisis Lifeline: 988 (call or text, 24/7)
  • Crisis Text Line: Text HELLO to 741741
  • Emergency: 911
