Model Card
This is a fine-tuned model based on mistralai/Mistral-Nemo-Instruct-2407. It has 12B parameters and is stored in quantized form. The model transforms raw LLM assistant responses into empathic ones.
How to Get Started with the Model
from inference import EmpathicStylingModel
# model initialization
model = EmpathicStylingModel()
# prediction on a single example
# Input below translates to: "If your phone is stolen, you can quickly block the payment sticker via the bank's mobile app."
input_request = "В случае кражи телефона вы можете быстро заблокировать стикер через мобильное приложение банка."
response = model.predict(input_request)
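The `predict` call above wraps the single-example workflow. A minimal sketch of the kind of chat-style prompt construction such a wrapper might perform is shown below; the system prompt wording and the `build_messages` helper are illustrative assumptions, not the model's actual template.

```python
# Hypothetical sketch of prompt construction inside a predict-style wrapper.
# The system prompt text is an assumption; the real template is not published.
SYSTEM_PROMPT = (
    "Rewrite the assistant response below in a warmer, more empathic tone, "
    "preserving all factual content."
)

def build_messages(raw_response: str) -> list[dict]:
    """Wrap a raw assistant response into a chat-style message list."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": raw_response},
    ]

messages = build_messages(
    "If your phone is stolen, you can quickly block the payment sticker "
    "via the bank's mobile app."
)
```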
Training Details
The model was fine-tuned with SFT on 353 examples (private dataset) pairing initial LLM assistant responses with their corresponding empathic responses.
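The private dataset's schema is not published, but SFT on response pairs typically uses a record per example with the raw and rewritten texts. A hedged sketch of such a layout, with invented field names (`raw_response`, `empathic_response`) and invented example text:

```python
import json

# Hypothetical JSONL record for one SFT training pair; field names and
# content are illustrative assumptions, not the actual private dataset.
example = {
    "raw_response": "Your card has been blocked.",
    "empathic_response": (
        "I'm sorry for the inconvenience - your card has been blocked "
        "to keep your money safe."
    ),
}
line = json.dumps(example, ensure_ascii=False)  # one JSONL line per pair
```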
Hardware
12 GB of VRAM is needed to run inference on a single example
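The 12 GB figure is consistent with a back-of-the-envelope estimate: 12B parameters at 8 bits per parameter (an assumption on our part; the card only says "quantized form") occupy about 11.2 GiB for the weights alone, with activations and the KV cache accounting for the rest.

```python
# Rough VRAM estimate for the quantized weights.
# The 8-bit (1 byte/param) figure is an assumption; the card does not
# state the quantization precision.
params = 12_000_000_000
bytes_per_param = 1  # assumed 8-bit quantization
weights_gib = params * bytes_per_param / 2**30
print(f"{weights_gib:.1f} GiB")  # ~11.2 GiB for weights alone
```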
Software
The model was tested with Python 3.11 and transformers==4.49.0
Model Card Authors
- Kseniia Cheloshkina (https://huggingface.co/KseniiaCheloshkina)