# Model Card for Gemma-2b-it-Psych-GGUF

## Model Summary
Gemma-2b-it-Psych-GGUF is the quantized version of the Gemma-2b-it-Psych-Merged model. It was converted using the llama.cpp pipeline to provide high-performance, low-latency inference on local hardware such as CPUs, consumer GPUs, and mobile devices.
The model is optimized for psychologically safe, empathetic, and supportive interactions, maintaining the fine-tuned alignment of the original model while significantly reducing memory requirements.
## Model Details

### Key Information
- Author: Ederson Corbari (e@NeuroQuest.ai)
- Date: February 01, 2026
- Base Merged Model: ecorbari/Gemma-2b-it-Psych-Merged
- Format: GGUF
- Quantization Method: Q5_K_M (Recommended for balancing quality and size)
- Release Date: February 01, 2026
### File Specifications

| File Name | Size | Description |
|---|---|---|
| gemma-2b-it-psych-f16.gguf | ~4.7 GB | High-fidelity FP16 base GGUF |
| gemma-2b-it-psych-q5_k_m.gguf | ~1.8 GB | Balanced Q5_K_M quantization (recommended) |
## Usage
This model is compatible with any runtime supporting the GGUF format, including llama.cpp, Ollama, Jan, LM Studio, and Text Generation WebUI.
### Using with llama.cpp (CLI)
To run a single prompt:

```shell
./llama-cli \
  -m gemma-2b-it-psych-q5_k_m.gguf \
  -p "I feel anxious and overwhelmed lately. What should I do?" \
  -n 256 \
  --temp 0.7
```
To start an interactive chat session:

```shell
./llama-cli -m gemma-2b-it-psych-q5_k_m.gguf -cnv
```
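The same GGUF file can also be exposed over an HTTP API with llama.cpp's `llama-server`, which serves an OpenAI-compatible endpoint. A minimal sketch, assuming a standard llama.cpp build (the port and the prompt are arbitrary):

```shell
# Start the server; it exposes an OpenAI-compatible API on the given port
./llama-server -m gemma-2b-it-psych-q5_k_m.gguf --port 8080

# From another terminal, send a chat completion request
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "I feel anxious lately."}]}'
```

This is convenient for pointing existing OpenAI-client code at the local model without changing application logic.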
### Local Deployment Tools
- Jan: Supported as a local model (GGUF/llama.cpp backend).
- Ollama: Can be imported using a Modelfile.
- LM Studio: Search for the GGUF file or load manually from disk.
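The Ollama import mentioned above can be sketched with a minimal Modelfile. The model name, temperature, and prompt here are illustrative assumptions, not values shipped with this release:

```shell
# Write a minimal Modelfile pointing at the local GGUF file
cat > Modelfile <<'EOF'
FROM ./gemma-2b-it-psych-q5_k_m.gguf
PARAMETER temperature 0.7
EOF

# Build a local Ollama model from it, then chat with it
ollama create gemma-psych -f Modelfile
ollama run gemma-psych "I feel overwhelmed lately."
```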
## Quantization Pipeline
The model followed a strict three-step conversion process:

1. Merging: The LoRA adapter was merged with the base gemma-2b-it model to produce a full FP16 checkpoint.
2. Conversion: The Hugging Face checkpoint was converted to GGUF format using convert_hf_to_gguf.py.
3. Quantization: The Q5_K_M method was applied to reduce the model size from ~4.7 GB to ~1.8 GB while preserving instruction-following accuracy.
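With standard llama.cpp tooling, the conversion and quantization steps above correspond roughly to the following commands. The local paths are assumptions based on a typical llama.cpp checkout; the merge step itself is done beforehand with Hugging Face tooling:

```shell
# Convert the merged Hugging Face checkpoint to an FP16 GGUF
python convert_hf_to_gguf.py ./Gemma-2b-it-Psych-Merged \
  --outfile gemma-2b-it-psych-f16.gguf --outtype f16

# Quantize the FP16 GGUF down to Q5_K_M
./llama-quantize gemma-2b-it-psych-f16.gguf \
  gemma-2b-it-psych-q5_k_m.gguf Q5_K_M
```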
## Bias, Risks, and Limitations
- Medical Disclaimer: This is an experimental model for research and educational purposes. It is not a licensed medical tool and should not be used for clinical diagnosis.
- Quantization Loss: While Q5_K_M minimizes performance degradation, some nuances in empathetic tone might differ slightly from the original FP16 model.
- Scope: Intended for simulating supportive conversations and studying alignment in the psychological domain.
## Technical Maintenance
This model was generated and uploaded using the llama.cpp toolkit and the Hugging Face CLI. For the full conversion scripts and training logs, visit the official GitHub repository.