Instructions to use the-robot-ai/tiny-emotion with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use the-robot-ai/tiny-emotion with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="the-robot-ai/tiny-emotion",
    filename="tiny-emotion.gguf",
)

# `messages` must be a list of chat messages; the text below reuses the
# example input from the model card.
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Wow, I just won tickets to the concert! Totally unexpected."}
    ]
)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use the-robot-ai/tiny-emotion with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf the-robot-ai/tiny-emotion

# Run inference directly in the terminal:
llama-cli -hf the-robot-ai/tiny-emotion
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf the-robot-ai/tiny-emotion

# Run inference directly in the terminal:
llama-cli -hf the-robot-ai/tiny-emotion
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf the-robot-ai/tiny-emotion

# Run inference directly in the terminal:
./llama-cli -hf the-robot-ai/tiny-emotion
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf the-robot-ai/tiny-emotion

# Run inference directly in the terminal:
./build/bin/llama-cli -hf the-robot-ai/tiny-emotion
```
Use Docker
```sh
docker model run hf.co/the-robot-ai/tiny-emotion
```
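Once `llama-server` is running, the model can be queried over its OpenAI-compatible HTTP API. A minimal sketch using only the standard library; the address below is llama-server's default (`http://localhost:8080`), and the single `messages` field is an assumption about the minimal payload, so adjust both to match your setup:

```python
import json
import urllib.request

# llama-server's default address; change if you started it with a custom --port.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(text: str) -> dict:
    """OpenAI-style chat payload with only the required `messages` field."""
    return {"messages": [{"role": "user", "content": text}]}

def classify(text: str, url: str = SERVER_URL) -> str:
    """POST the chat request and return the assistant's reply (the emotion label)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires a running llama-server):
# print(classify("Wow, I just won tickets to the concert!"))
```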
- LM Studio
- Jan
- Ollama
How to use the-robot-ai/tiny-emotion with Ollama:
```sh
ollama run hf.co/the-robot-ai/tiny-emotion
```
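Besides the interactive `ollama run` session, Ollama serves a local REST API. A sketch against its `/api/chat` endpoint, assuming Ollama's default address (`http://localhost:11434`) and that the model has already been pulled:

```python
import json
import urllib.request

# Ollama's default local API address.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, text: str) -> dict:
    """Request body for /api/chat; stream=False returns a single JSON object."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": text}],
        "stream": False,
    }

def classify(text: str) -> str:
    """Send one chat turn to Ollama and return the model's reply."""
    body = build_chat_request("hf.co/the-robot-ai/tiny-emotion", text)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

# Example (requires the Ollama server to be running with the model pulled):
# print(classify("I can't believe you forgot my birthday."))
```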
- Unsloth Studio
How to use the-robot-ai/tiny-emotion with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for the-robot-ai/tiny-emotion to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for the-robot-ai/tiny-emotion to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for the-robot-ai/tiny-emotion to start chatting
```
- Docker Model Runner
How to use the-robot-ai/tiny-emotion with Docker Model Runner:
```sh
docker model run hf.co/the-robot-ai/tiny-emotion
```
- Lemonade
How to use the-robot-ai/tiny-emotion with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull the-robot-ai/tiny-emotion
```
Run and chat with the model
```sh
lemonade run user.tiny-emotion-{{QUANT_TAG}}
```
List all available models
```sh
lemonade list
```
Model Summary
tiny-emotion is a lightweight language model fine-tuned to classify emotions in short texts, such as tweets or messages. Designed for speed and efficiency, it can run fully locally, making it ideal for real-time, privacy-preserving applications. The model provides concise, accurate emotion labels, enabling quick insights without unnecessary complexity or lengthy explanations.
Use cases
tiny-emotion is best suited for applications requiring fast, local emotion classification from short-form text. Some potential real-world applications are:
- Robotics: Enable robots to better understand and react to human emotions in real time.
- Empathetic chatbots: Help virtual assistants respond in a more human, emotionally-aware way.
- Mental health tools: Pick up on emotional changes that could signal a shift in someone's well-being.
- Customer feedback: Quickly figure out how people feel about your product or service.
Model Behavior
This model keeps things short and clear, in contrast to larger LLMs that may produce long paragraphs or over-explain. For example, given the input:
“Wow, I just won tickets to the concert! Totally unexpected.”
The model outputs:
Surprise
Comparison Example
| Model | Output |
|---|---|
| Tiny-emotion | "Surprise" |
| ChatGPT | "The emotion expressed is joy or excitement... likely surprise mixed with happiness." |
| Gemini | "The emotion of the tweet is joy or excitement." |
While larger models provide richer explanations, tiny-emotion offers faster, more focused outputs, which makes it especially useful for applications that need quick insights without parsing verbose responses.
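Because the model returns a single word, downstream code can validate the output against an expected label set. A minimal sketch; the six-label set below is an assumption chosen for illustration (the model card does not list the trained labels), so replace it with the model's actual label set:

```python
# Hypothetical label set for illustration; adjust to the labels this model
# was actually fine-tuned on.
EMOTION_LABELS = {"joy", "sadness", "anger", "fear", "surprise", "love"}

def normalize_label(raw: str) -> str:
    """Strip quotes/whitespace, lowercase the model's one-word output,
    and map anything outside the known label set to 'unknown'."""
    label = raw.strip().strip('"').strip().lower()
    return label if label in EMOTION_LABELS else "unknown"

print(normalize_label('"Surprise"'))  # -> surprise
```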
Key Features
- Fine-tuned for emotion recognition
- Lightweight and fast
- Can run locally
- Optimized for short texts like tweets, messages, and comments