# Therapy Phi-3 Mini (GGUF)
This is a fine-tuned version of Phi-3-mini-4k-instruct, adapted for supportive, therapeutic-style conversations.
## Files
- `therapy_phi3.gguf`: full-precision model (~7.6 GB)
- `therapy_phi3_q4_0.gguf`: 4-bit quantized version (~2-3 GB, faster and lighter)
## Usage
### llama.cpp

```shell
./llama-cli -m therapy_phi3_q4_0.gguf -p "I'm feeling anxious." -n 100
```
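When passing a raw prompt with `-p`, results are usually better if the input is wrapped in the chat format of the base Phi-3-mini-4k-instruct model (`<|user|>` / `<|end|>` / `<|assistant|>` tokens). A minimal sketch; the helper name is our own, not part of any library:

```python
# Sketch: wrap a user message in the Phi-3 instruct chat template,
# suitable for raw-prompt use with `llama-cli -p`.
# The template tokens come from the base Phi-3-mini-4k-instruct model;
# `format_phi3_prompt` is a hypothetical helper for illustration.

def format_phi3_prompt(user_text: str) -> str:
    """Return the message wrapped in Phi-3's chat template."""
    return f"<|user|>\n{user_text}<|end|>\n<|assistant|>\n"

prompt = format_phi3_prompt("I'm feeling anxious.")
print(prompt)
```

The formatted string can then be passed directly as the `-p` argument shown above.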
### llama-cpp-python

```python
# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Daizee/therapy-phi3-gguf",
    filename="therapy_phi3_q4_0.gguf",  # or therapy_phi3.gguf for full precision
)
```
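Once loaded, llama-cpp-python exposes an OpenAI-style chat API via `create_chat_completion`. A sketch of building the message list; the system prompt is our own illustration of a supportive-style setup, not something shipped with the model:

```python
# Sketch: an OpenAI-style chat call with llama-cpp-python.
# `create_chat_completion` is the real llama-cpp-python API; the system
# prompt below is a hypothetical example of a supportive-tone setup.

def build_messages(user_text: str) -> list[dict]:
    """Build a chat message list for llm.create_chat_completion()."""
    return [
        {"role": "system",
         "content": "You are a supportive, empathetic listener."},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("I'm feeling anxious.")
# With the model loaded as shown above:
# out = llm.create_chat_completion(messages=messages, max_tokens=100)
# print(out["choices"][0]["message"]["content"])
```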