---
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
language:
- en
pipeline_tag: text-generation
tags:
- 3B
- Emotionally Intelligent
---

# HelpingAI-3B-v2.2: Emotionally Intelligent Conversational AI

## Introduction

HelpingAI-3B-v2.2 is a state-of-the-art large language model specializing in emotionally intelligent conversation. With advanced emotional understanding capabilities, it can engage in empathetic dialogue tailored to the user's emotional state and context.
## Emotional Intelligence Capabilities

HelpingAI-3B-v2.2 exhibits several key traits that enable emotionally resonant responses:

- Emotion recognition and validation
- Empathetic perspective-taking
- Generating emotionally supportive language
- Contextual emotional attunement
- Using appropriate tone, word choice, and emotional expression

Whether comforting someone who is grieving, celebrating positive news, or addressing complex feelings, HelpingAI-3B-v2.2 can adapt its communication style with emotional nuance.
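
These traits are easiest to see by sending the model contrasting messages. Below is a minimal sketch using the transformers `pipeline` API; the template mirrors the ChatML-style prompt from the usage section further down, and the two sample messages are illustrative, not from the model card:

```python
# Minimal sketch: probe the model's emotional range with contrasting messages.
# Assumes the ChatML-style template shown in the usage section below.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="OEvortex/HelpingAI-3B-v2.2",
    trust_remote_code=True,
    device_map="auto",
)

template = "<|im_start|>system: {system}\n<|im_end|>\n<|im_start|>user: {user}\n<|im_end|>\n<|im_start|>assistant:"
system = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"

for user in ["I just got my dream job!", "I failed my exam and I feel terrible."]:
    prompt = template.format(system=system, user=user)
    print(generate(prompt, max_new_tokens=128, do_sample=True, temperature=0.6)[0]["generated_text"])
```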

## Examples of Emotionally Intelligent Responses

"I'm really sorry to hear about your friend's loss. 😔 Losing a parent can be incredibly difficult and heart-wrenching. It's important to show them support and comfort during this challenging time. Is there anything specific you would like to share or ask for help with? Remember, it's okay to grieve and seek support from others."

The model tailors its language, tone, and emotional content to be contextually appropriate, combining emotional intelligence with factual knowledge and practical suggestions.

## Performance Comparison

The table below compares HelpingAI-3B-v2.2 with other relevant 3B-scale models across six standard benchmarks:

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-|-|-|-|-|-|-|-|
| **HelpingAI-3B-v2.2** | **57.555** | **53.14** | **82.61** | **47.42** | **57.92** | **68.15** | **36.09** |
| **HelpingAI-3B-v2.1** | **57.44** | **53.14** | **82.61** | **47.42** | **57.92** | **68.15** | **35.39** |
| rocket-3B | 55.77 | 50.6 | 76.69 | 47.1 | 55.82 | 67.96 | 36.47 |
| **HelpingAI-3B** | **55.59** | **50.6** | **76.64** | **46.82** | **55.62** | **67.8** | **36.09** |
| stableLM-zephyr-3b | 53.43 | 46.08 | 74.16 | 46.17 | 46.49 | 65.51 | 42.15 |
| mmd-3b | 53.22 | 44.8 | 70.41 | 50.9 | 43.2 | 66.22 | 43.82 |
| MiniGPT-3B-Bacchus | 52.55 | 43.52 | 70.45 | 50.49 | 43.52 | 66.85 | 40.49 |
| MiniGPT-3B-Hercules-v2.0 | 52.52 | 43.26 | 71.11 | 51.82 | 40.37 | 66.46 | 42.08 |
| MiniGPT-3B-OpenHermes-2.5-v2 | 51.91 | 47.44 | 72 | 53.06 | 42.28 | 65.43 | 31.24 |
| MiniChat-2-3B | 51.49 | 44.88 | 67.69 | 47.59 | 49.64 | 66.46 | 32.68 |
| smol-3b | 50.27 | 46.33 | 68.23 | 46.33 | 50.73 | 65.35 | 24.64 |
| MiniChat-1.5-3B | 50.23 | 46.5 | 68.28 | 46.67 | 50.71 | 65.04 | 24.18 |
| 3BigReasonCinder | 48.16 | 41.72 | 65.16 | 44.79 | 44.76 | 64.96 | 27.6 |
| MintMerlin-3B | 47.63 | 44.37 | 66.56 | 43.21 | 47.07 | 64.4 | 20.17 |
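
The Average column appears to be the unweighted mean of the six benchmark scores. A minimal sketch verifying that reading for the two HelpingAI rows (an assumption about how the column was computed, not a documented formula):

```python
# Minimal sketch: check the Average column as the unweighted mean of the six
# benchmark scores (assumption: no per-task weighting is applied).
scores = {
    "HelpingAI-3B-v2.2": [53.14, 82.61, 47.42, 57.92, 68.15, 36.09],
    "HelpingAI-3B-v2.1": [53.14, 82.61, 47.42, 57.92, 68.15, 35.39],
}

for model, vals in scores.items():
    avg = sum(vals) / len(vals)
    print(f"{model}: {avg:.3f}")  # 57.555 and 57.438 (reported as 57.44)
```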

## Simple Usage Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Let's bring in the big guns! Our super cool HelpingAI-3B-v2.2 model
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-3B-v2.2", trust_remote_code=True, torch_dtype=torch.float16).to("cuda")

# We also need the special HelpingAI translator to understand our chats
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-3B-v2.2", trust_remote_code=True)

# This TextStreamer thingy is our secret weapon for super smooth conversation flow
streamer = TextStreamer(tokenizer)

# Now, here comes the magic! ✨ This is the basic ChatML-style template for our chat
prompt = """
<|im_start|>system: {system}
<|im_end|>
<|im_start|>user: {insaan}
<|im_end|>
<|im_start|>assistant:
"""

# Okay, enough chit-chat, let's get down to business! Here's our system prompt.
# We recommend keeping the HelpingAI style in the system prompt: this model was
# fine-tuned on only about 3.7K rows of a feelings dataset, and an even better
# model is in the works.
system = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"

# And the insaan is curious (like you!). "Insaan" means "human" in Hindi.
insaan = "My best friend recently lost their parent to cancer after a long battle. They are understandably devastated and struggling with grief. What would be a caring and supportive way to respond to help them through this difficult time?"

# Now we combine the system and user messages into the template, like adding sprinkles to our conversation cupcake
prompt = prompt.format(system=system, insaan=insaan)

# Time to chat! We'll use the tokenizer to translate our text into a language the model understands
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")

# Here comes the fun part! Let's unleash the power of HelpingAI-3B-v2.2 to generate some awesome text
generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.6, use_cache=True, streamer=streamer)
```
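
The streamer prints tokens as they are generated, but `generate` also returns the full token sequence, prompt included. Below is a minimal sketch of recovering just the assistant's reply for programmatic use; it assumes the `generated_text`, `inputs`, and `tokenizer` variables from the snippet above:

```python
# Minimal sketch (assumes `generated_text`, `inputs`, and `tokenizer` from above).
# generate() returns the prompt tokens followed by the new tokens, so slice the
# prompt off before decoding to keep only the assistant's reply.
reply_ids = generated_text[0][inputs["input_ids"].shape[-1]:]
reply = tokenizer.decode(reply_ids, skip_special_tokens=True)
print(reply)
```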