short_description: Emotion Classification
---
Emotion Classifier
1. Overview
This project classifies emotions in textual data using advanced Large Language Models (LLMs). Three models have been fine-tuned on a structured emotion dataset to detect seven primary emotions: anger, disgust, fear, guilt, joy, sadness, and shame. The models used are:
* GPT-4
* Llama-3.2
* T5 Base
The models are deployed on Hugging Face Spaces using Gradio, offering a real-time, interactive web-based interface for emotion classification.
2. Live Demo
Interact with the emotion classification models in real time: enter a text sample and select a model for classification.
Link: https://huggingface.co/spaces/HaryaniAnjali/Emotion_Classification
3. Models Used
* GPT-4: A state-of-the-art language model optimized for text understanding.
* Llama-3.2: A fine-tuned 7B parameter model for emotion classification.
* T5 Base: A smaller yet efficient model designed for prompt-based emotion detection.
4. Dataset
The dataset used for training consists of text samples, each labeled with one of the seven emotions listed above (anger, disgust, fear, guilt, joy, sadness, and shame).
5. Implementation Details
* Data Preprocessing: Text cleaning (removal of alphanumeric codes and punctuation).
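A minimal sketch of this kind of cleaning step. The exact rules used in this project are not published, so the regex patterns below (including the mixed letter-digit "code" rule) are assumptions:

```python
import re

def clean_text(text: str) -> str:
    """Lowercase, strip alphanumeric codes and punctuation, collapse whitespace."""
    text = text.lower()
    # Drop tokens mixing letters and digits (e.g. a code like "ab12x") -- an assumed rule
    text = re.sub(r"\b(?=\w*\d)(?=\w*[a-z])\w+\b", " ", text)
    # Remove remaining punctuation and digits, keeping only letters and spaces
    text = re.sub(r"[^a-z\s]", " ", text)
    # Collapse runs of whitespace
    return re.sub(r"\s+", " ", text).strip()

print(clean_text("I felt SO ashamed!! (ref: ab12x)"))  # -> "i felt so ashamed ref"
```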
* Model Fine-Tuning: All three models (GPT-4, Llama-3.2, and T5 Base) were fine-tuned with prompt-based training strategies. Structured prompts guided the models to recognize and classify emotions in text, improving their ability to capture context and emotional nuance in natural language inputs.
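The prompt-based setup might look like the sketch below. The actual prompt wording used during training is not shown in this repo, so this template and the label-parsing helper are assumptions:

```python
EMOTIONS = ["anger", "disgust", "fear", "guilt", "joy", "sadness", "shame"]

PROMPT_TEMPLATE = (
    "Classify the emotion expressed in the text below as exactly one of: "
    "{labels}.\n\nText: {text}\nEmotion:"
)

def build_prompt(text: str) -> str:
    """Fill a structured prompt that steers the model toward the label set."""
    return PROMPT_TEMPLATE.format(labels=", ".join(EMOTIONS), text=text)

def parse_label(completion: str) -> str:
    """Map a raw model completion back onto the closest known label."""
    completion = completion.strip().lower()
    for label in EMOTIONS:
        if label in completion:
            return label
    return "unknown"

print(build_prompt("I shouldn't have said that to her."))
```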
* Model Deployment: Fine-tuned models were uploaded to Hugging Face Model Hub.
* Performance Analysis: The models were evaluated using accuracy, precision, recall, and F1-score. Here are the results:

* Key Findings: GPT-4 demonstrated the best performance with the highest precision (0.8006) and accuracy (0.7399). Llama-3.2 showed good performance, but with slightly lower accuracy compared to GPT-4. T5 Base was effective but had slightly lower scores across the board.
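For reference, scores of this kind (accuracy plus macro-averaged precision, recall, and F1) can be computed with a small pure-Python sketch. The labels come from the dataset above; the predictions here are toy values for illustration only, not the project's actual outputs:

```python
def macro_scores(y_true, y_pred, labels):
    """Accuracy plus macro-averaged precision, recall, and F1 over the label set."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for label in labels:
        tp = sum(t == p == label for t, p in zip(y_true, y_pred))
        fp = sum(p == label and t != label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Toy example with two of the seven labels, for illustration only
y_true = ["joy", "anger", "joy", "anger"]
y_pred = ["joy", "joy", "joy", "anger"]
acc, prec, rec, f1 = macro_scores(y_true, y_pred, ["joy", "anger"])
```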
6. How to Use
* Enter a text input in the provided text box.
* Click the "Submit" button to classify the emotion.
* The predicted emotion will be displayed on the interface.
Built with [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co/docs/api-inference/index).