JusteLeo committed
Commit c2a98e7 · verified · 1 Parent(s): d0e1ab2

Upload README.md

Files changed (1):
  1. README.md +136 -98

README.md CHANGED
---
license: gemma
language:
- multilingual
tags:
- gguf
- emotion-classification
- zero-shot
- chat-template
- gemma
---

# Emotion Classification GGUF

## Model Description

This repository contains a GGUF version of **[google/gemma-3-1b-it-qat]**, specially configured for **zero-shot emotion classification**.

The goal is to offer a lightweight, fast, and universal alternative to traditional classifiers (like fine-tuned BERT models). Instead of relying on a model trained on a fixed dataset, this GGUF leverages the power of a foundational language model and a modified chat template to transform it into a specialized text analysis tool.

This approach makes emotion classification highly accessible, requiring no specialized training or complex setup.

## ✨ Key Features

* **⚑ Fast & Accessible**: The GGUF format allows for very fast inference, even on a CPU, making emotion classification accessible without a powerful GPU.
* **🎯 Prompt-Specialized**: The model is guided by a detailed, built-in system prompt that instructs it to classify text against a predefined list of 30+ emotions and provide an explanation in a structured JSON format.
* **πŸ”„ Stateless (No Conversation Memory)**: Thanks to the custom template, the model only considers the user's current input. It has no conversational memory, making it well suited to API-like use cases (one input -> one output).
* **🌍 Multilingual**: Based on the Gemma model, it is in principle capable of classifying emotions in any language supported by the base model. Performance will vary depending on the base model's proficiency in a given language.
* **πŸ”§ Easily Adaptable**: While this model is ready for emotion classification, the underlying method can be easily adapted to other NLP tasks like sentiment analysis, intent detection, or topic modeling simply by changing the system prompt.
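
As an illustration of that last point, a hypothetical replacement system prompt (not shipped in this GGUF) could repurpose the exact same template mechanism for sentiment analysis:

```text
You are a sentiment analysis assistant. Classify the given sentence as
Positive, Negative, or Neutral and answer in this JSON format:
{
  "sentiment": " ",
  "explanation": "This is the explanation related to the sentiment."
}
```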

## πŸš€ How to Use

This model is designed to be used with any GGUF-compatible runner, such as `llama.cpp`, LM Studio, Ollama, and others.

The core logic is embedded directly into the **chat template within the GGUF file**. Most modern tools will automatically detect and use this template. All you need to do is provide your text as the user's prompt, and the model will perform the classification.
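
As a minimal sketch with `llama-cpp-python` (the model file name below is a placeholder; any runner that honors the embedded chat template works the same way):

```python
def classify(llm, text):
    """Classify one text with a GGUF runner exposing an OpenAI-style chat call.

    A single user message is enough: the GGUF's baked-in chat template
    injects the system prompt itself and drops any earlier history.
    """
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": text}],
        temperature=0.0,  # deterministic labels
    )
    return out["choices"][0]["message"]["content"]

# With llama-cpp-python installed (pip install llama-cpp-python):
#   from llama_cpp import Llama
#   llm = Llama(model_path="gemma-3-1b-emotion.gguf", verbose=False)
#   print(classify(llm, "le ciel est bleu"))
```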

### Expected Output

The model will return a response in the JSON format specified in the prompt:

**Input:**
> "le ciel est bleu"

**Model Output:**
```json
{
  "emotions": [ "Neutral" ],
  "explanation": "The sentence simply describes a visual observation of the sky – it’s neutral in terms of expressing emotion."
}
```
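
Since the reply is plain text, downstream code still has to pull the JSON out of it (the model may or may not wrap it in a fenced block). One possible stdlib-only helper:

```python
import json
import re

def parse_emotions(reply: str) -> dict:
    """Extract the JSON object from the model's reply, tolerating a fence wrapper."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))

reply = '```json\n{ "emotions": [ "Neutral" ], "explanation": "..." }\n```'
print(parse_emotions(reply)["emotions"])  # ['Neutral']
```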

## Emotions List

- Contentment
- Joy
- Euphoria
- Excitement
- Disappointment
- Sadness
- Regret
- Irritation
- Frustration
- Anger
- Anxiety
- Fear
- Astonishment
- Disgust
- Hate
- Pleasure
- Desire
- Affection
- Trust
- Distrust
- Gratitude
- Compassion
- Admiration
- Contempt
- Guilt
- Shame
- Pride
- Jealousy
- Envy
- Hope
- Nostalgia
- Relief
- Curiosity
- Boredom
- Neutral
- Fatigue

## πŸ› οΈ The Trick: The Custom Chat Template

This model's specialization comes from a custom Jinja2 chat template, not from fine-tuning. The template forces the model to adopt a specialized question-answering behavior.

Here’s how it works:
1. **Hardcoded System Prompt**: A detailed system prompt is embedded at the very beginning of every request, instructing the model on its role, the list of possible emotions, and the required JSON output format.
2. **Ignoring History**: The template uses a `{% if loop.last %}` condition. This ensures that **only the very last user message** is processed, making the model stateless and well suited to single-shot tasks.

Here is the template baked into this GGUF file:
```jinja
{{ bos_token }}<start_of_turn>system
You are an emotion classification assistant. Your task is to analyze ALL given sentence and classify it emotions chosen from Contentment, Joy, Euphoria, Excitement, Disappointment, Sadness, Regret, Irritation, Frustration, Anger, Anxiety, Fear, Astonishment, Disgust, Hate, Pleasure, Desire, Affection, Trust, Distrust, Gratitude, Compassion, Admiration, Contempt, Guilt, Shame, Pride, Jealousy, Envy, Hope, Nostalgia, Relief, Curiosity, Boredom, Neutral, fatigue, Trust You can choose one or several emotions follow this format
___json
{
"emotions": [ " "
],
"explanation": "This is the explanation related to the listed emotions."
}
___
begin<end_of_turn>
{%- for message in messages %}
{%- if loop.last and message['role'] == 'user' -%}
{{ '<start_of_turn>user
' + message['content'] | trim + '<end_of_turn>
' }}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{ '<start_of_turn>model
' }}
{%- endif -%}
```
Note: each `___` marker stands for triple backticks in the actual template; it is written as underscores above so this README renders correctly.
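
The effect of the `loop.last` condition can be mimicked in plain Python (a rough sketch of what the Jinja template does, with the long system prompt abbreviated):

```python
# Abbreviated stand-in for the full system prompt baked into the template.
SYSTEM = "<start_of_turn>system\nYou are an emotion classification assistant. [instructions omitted]<end_of_turn>\n"

def build_prompt(messages, add_generation_prompt=True):
    parts = [SYSTEM]
    for i, msg in enumerate(messages):
        # Mirrors `loop.last and message['role'] == 'user'`: every message
        # before the final user turn is silently dropped.
        if i == len(messages) - 1 and msg["role"] == "user":
            parts.append("<start_of_turn>user\n" + msg["content"].strip() + "<end_of_turn>\n")
    if add_generation_prompt:
        parts.append("<start_of_turn>model\n")
    return "".join(parts)

history = [
    {"role": "user", "content": "hello"},
    {"role": "assistant", "content": '{"emotions": ["Neutral"]}'},
    {"role": "user", "content": "le ciel est bleu"},
]
prompt = build_prompt(history)
# Only "le ciel est bleu" reaches the model; "hello" never does.
```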

## ⚠️ Limitations & Performance

It is important to note that **this model has not been evaluated on academic emotion classification benchmarks**. Its performance is based on qualitative testing and may vary.

* **Accuracy**: While results are often very good, they might be less precise than a specialized model fine-tuned on a domain-specific dataset.
* **Base Model Dependency**: The quality of the classification is entirely dependent on the intrinsic capabilities of the original base model.
* **Format Robustness**: For very complex, ambiguous, or adversarial inputs, the model might occasionally fail to adhere strictly to the JSON output format.

## Acknowledgements

This model is an adaptation of **[google/gemma-3-1b-it-qat-q4_0-gguf]**. All credit for the foundational model training goes to its original creators at Google.

Model adapted and packaged by **JusteLeo**.