Model Card for Gemma-2b-it-Psych-GGUF

Model Summary

Gemma-2b-it-Psych-GGUF is the GGUF conversion of the Gemma-2b-it-Psych-Merged model. It was converted using the llama.cpp pipeline to provide high-performance, low-latency inference on local hardware such as CPUs, consumer GPUs, and mobile devices.

The model is optimized for psychologically safe, empathetic, and supportive interactions, maintaining the fine-tuned alignment of the original model while significantly reducing memory requirements.


Model Details

File Specifications

File Name                       Size      Description
gemma-2b-it-psych-f16.gguf      ~4.7 GB   High-fidelity FP16 base GGUF
gemma-2b-it-psych-q5_k_m.gguf   ~1.8 GB   Balanced Q5_K_M quantization (Recommended)

Usage

This model is compatible with any runtime supporting the GGUF format, including llama.cpp, Ollama, Jan, LM Studio, and Text Generation WebUI.

Using with llama.cpp (CLI)

To run a single prompt:

./llama-cli \
  -m gemma-2b-it-psych-q5_k_m.gguf \
  -p "I feel anxious and overwhelmed lately. What should I do?" \
  -n 256 \
  --temp 0.7

To start an interactive chat session:

./llama-cli -m gemma-2b-it-psych-q5_k_m.gguf -cnv
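
The model can also be driven programmatically. A minimal sketch using the llama-cpp-python bindings is shown below; the model path and generation parameters are assumptions, and the file must already be downloaded locally:

```python
from llama_cpp import Llama

# Load the quantized GGUF file (path is an assumption; adjust to your download location)
llm = Llama(model_path="gemma-2b-it-psych-q5_k_m.gguf", n_ctx=2048)

# Chat-style completion using the model's built-in chat template
result = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "I feel anxious and overwhelmed lately. What should I do?"}
    ],
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```

The same parameters as the CLI example above (256 new tokens, temperature 0.7) are used here for consistency.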

Local Deployment Tools

  • Jan: Supported as a local model (GGUF/llama.cpp backend).
  • Ollama: Can be imported using a Modelfile.
  • LM Studio: Search for the GGUF file or load manually from disk.
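
For the Ollama import mentioned above, a minimal Modelfile sketch might look like the following. The template reflects Gemma's standard turn format; the exact template and parameter values are assumptions, not taken from the upstream repository:

```
FROM ./gemma-2b-it-psych-q5_k_m.gguf

TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
{{ .Response }}<end_of_turn>
"""

PARAMETER temperature 0.7
PARAMETER stop <start_of_turn>
PARAMETER stop <end_of_turn>
```

With this file saved as `Modelfile` next to the GGUF, the model can be registered and run with `ollama create gemma-psych -f Modelfile` followed by `ollama run gemma-psych` (the model name here is arbitrary).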

Quantization Pipeline

The released files were produced through a three-step conversion process:

  • Merging: The LoRA adapter was merged with the base gemma-2b-it model to create a full FP16 checkpoint.
  • Conversion: The Hugging Face checkpoint was converted to GGUF format using convert_hf_to_gguf.py.
  • Quantization: The FP16 GGUF was quantized with the Q5_K_M method, reducing the model from ~4.7 GB to ~1.8 GB while preserving instruction-following accuracy.
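
The conversion and quantization steps above can be sketched with the standard llama.cpp tools. The checkpoint directory name below is an assumption; `convert_hf_to_gguf.py` and `llama-quantize` ship with a llama.cpp checkout:

```shell
# Step 2: convert the merged Hugging Face checkpoint to an FP16 GGUF
# (directory name is an assumption; point it at your merged checkpoint)
python convert_hf_to_gguf.py ./gemma-2b-it-psych-merged \
  --outfile gemma-2b-it-psych-f16.gguf \
  --outtype f16

# Step 3: quantize the FP16 GGUF down to Q5_K_M
./llama-quantize gemma-2b-it-psych-f16.gguf \
  gemma-2b-it-psych-q5_k_m.gguf Q5_K_M
```

Step 1 (merging the LoRA adapter into the base model) happens before this, typically via PEFT's `merge_and_unload()` on the adapter-wrapped model, and is not shown here.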

Bias, Risks, and Limitations

  • Medical Disclaimer: This is an experimental model for research and educational purposes. It is not a licensed medical tool and should not be used for clinical diagnosis.
  • Quantization Loss: While Q5_K_M minimizes performance degradation, some nuances in empathetic tone might differ slightly from the original FP16 model.
  • Scope: Intended for simulating supportive conversations and studying alignment in the psychological domain.

Technical Maintenance

This model was generated and uploaded using the llama.cpp toolkit and the Hugging Face CLI. For the full conversion scripts and training logs, visit the official GitHub repository.
