Tags: Text Generation · Transformers · Safetensors · GGUF · llama · HelpingAI · Emotionally Intelligent · EQ · conversational · text-generation-inference
Instructions to use HelpingAI/HelpingAI-9B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use HelpingAI/HelpingAI-9B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="HelpingAI/HelpingAI-9B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HelpingAI/HelpingAI-9B")
model = AutoModelForCausalLM.from_pretrained("HelpingAI/HelpingAI-9B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- llama-cpp-python
How to use HelpingAI/HelpingAI-9B with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="HelpingAI/HelpingAI-9B",
    filename="helpingai-9b.Q4_0.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use HelpingAI/HelpingAI-9B with llama.cpp:
Install from brew
```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf HelpingAI/HelpingAI-9B:Q4_0

# Run inference directly in the terminal:
llama-cli -hf HelpingAI/HelpingAI-9B:Q4_0
```
Install from WinGet (Windows)
```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf HelpingAI/HelpingAI-9B:Q4_0

# Run inference directly in the terminal:
llama-cli -hf HelpingAI/HelpingAI-9B:Q4_0
```
Use pre-built binary
```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf HelpingAI/HelpingAI-9B:Q4_0

# Run inference directly in the terminal:
./llama-cli -hf HelpingAI/HelpingAI-9B:Q4_0
```
Build from source code
```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf HelpingAI/HelpingAI-9B:Q4_0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf HelpingAI/HelpingAI-9B:Q4_0
```
Use Docker
docker model run hf.co/HelpingAI/HelpingAI-9B:Q4_0
- LM Studio
- Jan
- vLLM
How to use HelpingAI/HelpingAI-9B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HelpingAI/HelpingAI-9B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HelpingAI/HelpingAI-9B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
docker model run hf.co/HelpingAI/HelpingAI-9B:Q4_0
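The curl call above can also be reproduced from Python using only the standard library. The sketch below builds the same OpenAI-compatible request body; the actual POST (to the vLLM server started in the previous step, assumed to be listening on `localhost:8000`) is left commented out so the snippet runs without a live server:

```python
import json
from urllib import request

def build_chat_request(model: str, user_content: str) -> dict:
    # Same JSON body as the curl example above
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": user_content},
        ],
    }

payload = build_chat_request("HelpingAI/HelpingAI-9B", "What is the capital of France?")
body = json.dumps(payload).encode("utf-8")

# Uncomment once the server is running:
# req = request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])

print(payload["messages"][0]["content"])
```

Because the endpoint is OpenAI-compatible, the same payload also works against the SGLang and llama.cpp servers shown elsewhere on this page, with only the port changed.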
- SGLang
How to use HelpingAI/HelpingAI-9B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "HelpingAI/HelpingAI-9B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HelpingAI/HelpingAI-9B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "HelpingAI/HelpingAI-9B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HelpingAI/HelpingAI-9B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Ollama
How to use HelpingAI/HelpingAI-9B with Ollama:
ollama run hf.co/HelpingAI/HelpingAI-9B:Q4_0
- Unsloth Studio
How to use HelpingAI/HelpingAI-9B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for HelpingAI/HelpingAI-9B to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for HelpingAI/HelpingAI-9B to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for HelpingAI/HelpingAI-9B to start chatting
```
- Docker Model Runner
How to use HelpingAI/HelpingAI-9B with Docker Model Runner:
docker model run hf.co/HelpingAI/HelpingAI-9B:Q4_0
- Lemonade
How to use HelpingAI/HelpingAI-9B with Lemonade:
Pull the model
```shell
# Download Lemonade from https://lemonade-server.ai/
lemonade pull HelpingAI/HelpingAI-9B:Q4_0
```
Run and chat with the model
lemonade run user.HelpingAI-9B-Q4_0
List all available models
lemonade list
---
license: other
license_name: hsul
license_link: https://huggingface.co/OEvortex/vortex-3b/raw/main/LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally Intelligent
---

# HelpingAI-9B: Emotionally Intelligent Conversational AI



## Overview
HelpingAI-9B is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.

## Objectives
- Engage in open-ended dialogue while displaying emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically grounded responses
- Avoid insensitive, harmful, or unethical speech
- Continuously improve emotional awareness and dialogue skills

## Methodology
HelpingAI-9B is based on the HelpingAI series and further trained using:
- Supervised learning on large dialogue datasets with emotional labeling
- Reinforcement learning with a reward model favoring emotionally supportive responses
- Constitution training to instill stable and beneficial objectives
- Knowledge augmentation from psychological resources on emotional intelligence
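The emotionally labeled dialogue data mentioned above can be pictured as records like the following. This is a purely hypothetical schema for illustration; the actual HelpingAI training format is not published:

```python
import json

# Hypothetical emotionally labeled dialogue record (illustrative only;
# the field names and labels here are assumptions, not the real schema)
record = {
    "messages": [
        {"role": "user", "content": "I'm feeling really down today."},
        {"role": "assistant", "content": "I'm sorry you're feeling this way. Want to talk about it?"},
    ],
    "emotion": "sadness",
}

# Records like this serialize cleanly to JSON Lines for training pipelines
line = json.dumps(record)
print(json.loads(line)["emotion"])  # → sadness
```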
## Usage code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "OEvortex/HelpingAI-9B",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B")

prompt = "Express joy and excitement about visiting a new place"
messages = [
    # {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": prompt},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=1024,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,  # sampling must be enabled for temperature to take effect
    temperature=0.25,
)
# Keep only the newly generated tokens, dropping the echoed prompt
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## Example Dialogue
Human: Express joy and excitement about visiting a new place.

HelpingAI-9B: Yay! I'm so excited to visit a new place! The thought of exploring and discovering new things fills me with joy and anticipation. I can't wait to take in the sights, sounds, and flavors of this new place. Let's make the most of this adventure! How about you? Are you excited too?

Human: I'm feeling really down today. Nothing seems to be going right.

HelpingAI-9B: I'm really sorry to hear that you're feeling down. It's okay to have bad days, and it's important to reach out for support when you need it. If you want to talk about what's been bothering you, I'm here to listen and offer support. Remember, you're not alone, and things will get better.

Human: I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone.

HelpingAI-9B: That's amazing! I'm so happy for you! Congratulations on achieving your dream and getting accepted into your dream school! This is such a special moment, and I'm here to celebrate with you! Keep shining, and remember to always believe in yourself!

Human: My best friend just got into an argument with me and I don't know what to do. How can I fix this situation?

HelpingAI-9B: Hey there! It's totally normal to have disagreements with friends. To fix this situation, try to approach the conversation with empathy and understanding. Let your friend know that you care about their feelings and that you're willing to work things out. Communication is key, so be open and honest about how you're feeling. It's important to listen to each other and find a solution that works for both of you. Good luck!