Instructions to use HelpingAI/HelpingAI2-6B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use HelpingAI/HelpingAI2-6B with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="HelpingAI/HelpingAI2-6B",
    filename="helpingai-6b-q4_k_m.gguf",
)
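`Llama.from_pretrained` forwards extra keyword arguments to the `Llama` constructor, so context size and GPU offloading can be set at load time. A minimal sketch; the `n_ctx` and `n_gpu_layers` values below are illustrative assumptions, not tuned settings for this model:

```python
from llama_cpp import Llama

# Same repo/filename as above, with optional runtime settings
llm = Llama.from_pretrained(
    repo_id="HelpingAI/HelpingAI2-6B",
    filename="helpingai-6b-q4_k_m.gguf",
    n_ctx=4096,       # context window size (assumed value)
    n_gpu_layers=-1,  # offload all layers to GPU when a GPU build is installed
)
```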
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use HelpingAI/HelpingAI2-6B with llama.cpp:
Install from Homebrew (macOS/Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf HelpingAI/HelpingAI2-6B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf HelpingAI/HelpingAI2-6B:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf HelpingAI/HelpingAI2-6B:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf HelpingAI/HelpingAI2-6B:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf HelpingAI/HelpingAI2-6B:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf HelpingAI/HelpingAI2-6B:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf HelpingAI/HelpingAI2-6B:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf HelpingAI/HelpingAI2-6B:Q4_K_M
Use Docker
docker model run hf.co/HelpingAI/HelpingAI2-6B:Q4_K_M
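Once `llama-server` is running (via any of the install options above), it exposes an OpenAI-compatible HTTP API, by default on port 8080. A minimal sketch that calls it from Python with `requests`; the port is an assumption and should match whatever `--port` your server uses:

```python
# pip install requests
import requests

# llama-server's OpenAI-compatible chat endpoint (default port 8080 assumed)
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "HelpingAI/HelpingAI2-6B",  # informational; the server answers with the loaded GGUF
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```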
- LM Studio
- Jan
- vLLM
How to use HelpingAI/HelpingAI2-6B with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HelpingAI/HelpingAI2-6B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HelpingAI/HelpingAI2-6B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
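The same server can also be called from Python with any OpenAI-compatible client. A minimal sketch using the `openai` package against vLLM's default address (port 8000); the `"EMPTY"` API key is a placeholder for a server started without authentication:

```python
# pip install openai
from openai import OpenAI

# vLLM serves an OpenAI-compatible API on port 8000 by default
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="HelpingAI/HelpingAI2-6B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```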
- Ollama
How to use HelpingAI/HelpingAI2-6B with Ollama:
ollama run hf.co/HelpingAI/HelpingAI2-6B:Q4_K_M
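Ollama also exposes a local REST API (http://localhost:11434 by default), so the same model can be called programmatically once it has been pulled. A minimal sketch with `requests`; it assumes the model tag matches the `ollama run` argument above and that the default server address is unchanged:

```python
# pip install requests
import requests

# Ollama's chat endpoint; stream=False returns a single JSON response
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/HelpingAI/HelpingAI2-6B:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```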
- Unsloth Studio
How to use HelpingAI/HelpingAI2-6B with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for HelpingAI/HelpingAI2-6B to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for HelpingAI/HelpingAI2-6B to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for HelpingAI/HelpingAI2-6B to start chatting
- Docker Model Runner
How to use HelpingAI/HelpingAI2-6B with Docker Model Runner:
docker model run hf.co/HelpingAI/HelpingAI2-6B:Q4_K_M
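Docker Model Runner also serves OpenAI-compatible endpoints. The sketch below is based on assumptions: that host-side TCP access has been enabled (e.g. `docker desktop enable model-runner --tcp 12434`) and that the default port 12434 with the `/engines/v1` path prefix applies on your install; check the `docker model` documentation if they differ:

```python
# pip install requests
import requests

# Assumed Docker Model Runner host endpoint; port and path may differ on your setup
resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/HelpingAI/HelpingAI2-6B:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```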
- Lemonade
How to use HelpingAI/HelpingAI2-6B with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull HelpingAI/HelpingAI2-6B:Q4_K_M
Run and chat with the model
lemonade run user.HelpingAI2-6B-Q4_K_M
List all available models
lemonade list
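Lemonade also runs a local server with an OpenAI-compatible API. The sketch below is a hedged example: the base URL (http://localhost:8000/api/v1) and the model name are assumptions taken from Lemonade Server defaults and the `lemonade run` name above, so adjust them to what `lemonade list` reports on your machine:

```python
# pip install openai
from openai import OpenAI

# Assumed Lemonade Server base URL; a local server ignores the API key value
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")

response = client.chat.completions.create(
    model="user.HelpingAI2-6B-Q4_K_M",  # name used with `lemonade run` above (assumed)
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```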
Update README.md
README.md (changed):

@@ -17,12 +17,12 @@ datasets:
 language:
 - en
 ---
-#
+# HelpingAI2-6B: Emotionally Intelligent Conversational AI
 
 
 
 ## Overview
-
+HelpingAI2-6B is a state-of-the-art large language model designed to facilitate emotionally intelligent conversations. It leverages advanced natural language processing capabilities to engage users with empathy, understanding, and supportive dialogue across a variety of topics.
 
 - Engage in meaningful, open-ended dialogue while displaying high emotional intelligence.
 - Recognize and validate user emotions and emotional contexts.
@@ -31,14 +31,14 @@ HelpingAI-6B is a state-of-the-art large language model designed to facilitate e
 - Continuously improve emotional awareness and dialogue skills.
 
 ## Methodology
-
+HelpingAI2-6B is part of the HelpingAI series and has been trained using:
 - **Supervised Learning**: Utilizing large dialogue datasets with emotional labeling to enhance empathy and emotional recognition.
 - **Reinforcement Learning**: Implementing a reward model that favors emotionally supportive responses to ensure beneficial interactions.
 - **Constitution Training**: Embedding stable and ethical objectives to guide its conversational behavior.
 - **Knowledge Augmentation**: Incorporating psychological resources on emotional intelligence to improve its understanding and response capabilities.
 
 ## Emotional Quotient (EQ)
-
+HelpingAI2-6B has achieved an impressive Emotional Quotient (EQ) of 91.57, making it one of the most emotionally intelligent AI models available. This EQ score reflects its advanced ability to understand and respond to human emotions in a supportive and empathetic manner.
 
 
 ## Usage Code
@@ -46,10 +46,10 @@ HelpingAI-6B has achieved an impressive Emotional Quotient (EQ) of 91.57, making
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-# Load the
-model = AutoModelForCausalLM.from_pretrained("OEvortex/
+# Load the HelpingAI2-6B model
+model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI2-6B", trust_remote_code=True)
 # Load the tokenizer
-tokenizer = AutoTokenizer.from_pretrained("OEvortex/
+tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI2-6B", trust_remote_code=True)
 
 
 # Define the chat input
@@ -93,8 +93,8 @@ from webscout.Local.samplers import SamplerSettings
 
 
 # Download the model
-repo_id = "OEvortex/
-filename = "
+repo_id = "OEvortex/HelpingAI2-6B"
+filename = "HelpingAI2-6B-q4_k_m.gguf"
 model_path = download_model(repo_id, filename, token="")
 
 # Load the model
@@ -114,7 +114,7 @@ sampler = SamplerSettings(temp=0.7, top_p=0.9)
 thread = Thread(model, helpingai, sampler=sampler)
 
 # Start interacting with the model
-thread.interact(header="🌟
+thread.interact(header="🌟 HelpingAI2-6B: Emotionally Intelligent Conversational AI 🚀", color=True)
 
 ```