FritzStack committed
Commit 8bed501 · verified · 1 Parent(s): 2f35e78

Update README.md

Files changed (1): README.md (+106 -3)
README.md CHANGED
@@ -3,9 +3,99 @@ library_name: transformers
 tags: []
 ---
 
-# Model Card for Model ID
-
-<!-- Provide a quick summary of what the model is/does. -->
@@ -170,6 +260,19 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 
 ## Citation [optional]
 
 <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
 **BibTeX:**
 
 tags: []
 ---
 
+# Model Description
+
+This model is identical to `DeGra/RACLETTE-v0.2`, a fine-tune of Mistral 7B for emotion recognition and empathetic conversational support in mental health contexts. It is derived from the research presented in the paper "The Emotional Spectrum of LLMs: Leveraging Empathy and Emotion-Based Markers for Mental Health Support". The model's architecture and fine-tuning follow the methodology outlined in that publication: next-token prediction for emotion labeling, progressive construction of a user emotional profile over the course of a conversation, and interpretable emotional embeddings for preliminary mental health screening.
+
+## Reference
+
+For full details, see:
+De Grandi, Ravenda, et al. (2025). "The Emotional Spectrum of LLMs: Leveraging Empathy and Emotion-Based Markers for Mental Health Support."
+
+# Usage Example: Emotional Profile Extraction
+
+Suppose you have a list of sentences and want to compute their aggregated emotional profile (the distribution of emotions predicted over the set):
+
+## Example Python Code
+
+```python
+import torch
+import transformers
+from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+model_name = 'DeGra/RACLETTE-v0.2'
+
+# Load the model with 4-bit NF4 quantization to reduce memory usage
+bnb_config = BitsAndBytesConfig(
+    load_in_4bit=True,
+    bnb_4bit_quant_type="nf4",
+    bnb_4bit_compute_dtype=torch.float16,
+)
+
+model = AutoModelForCausalLM.from_pretrained(
+    model_name,
+    quantization_config=bnb_config,
+    trust_remote_code=True,
+)
+model.config.use_cache = False
+
+tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+tokenizer.pad_token = tokenizer.eos_token
+
+generation_pipeline = transformers.pipeline(
+    "text-generation",
+    model=model,
+    tokenizer=tokenizer,
+    torch_dtype=torch.bfloat16,
+    trust_remote_code=True,
+    device_map="auto",
+)
+
+def filter_limit_chars(text, limit_chars, max_limited_chars=2, stop_at_max=False):
+    """Truncate `text` once more than `max_limited_chars` of the given separators appear."""
+    count, index = 0, [0]
+    for separator in limit_chars:
+        separator_count = text.count(separator)
+        count += separator_count
+        i = 0
+        for _ in range(separator_count):
+            i = text.find(separator, i)
+            index.append(i if not stop_at_max else i + len(separator))
+            i += len(separator)
+    index.sort()
+    index.append(len(text))
+    if count >= max_limited_chars:
+        text = text[0:index[max_limited_chars + 1] if not stop_at_max else index[max_limited_chars]]
+    return text
+
+def predict_emotion(prompt, num_return_emotions=10):
+    """Sample several short completions and count the predicted emotion labels."""
+    sequences = generation_pipeline(
+        prompt,
+        min_new_tokens=2,
+        max_new_tokens=5,
+        do_sample=True,
+        top_k=5,
+        num_return_sequences=num_return_emotions,
+        eos_token_id=tokenizer.eos_token_id,
+    )
+    emotions_count = {}
+    for seq in sequences:
+        # Keep only the newly generated text and strip chat-template markers
+        emotion = seq['generated_text'][len(prompt):].strip()
+        emotion = emotion.split('<|assistant|>', 1)[0].split('<|endoftext|>', 1)[0]
+        emotion = filter_limit_chars(emotion, ['|', '<', '>', ',', '.'], 0, False).strip()
+        emotions_count[emotion] = emotions_count.get(emotion, 0) + 1
+    return emotions_count
+
+# Example: extract an aggregated emotional profile from a set of sentences
+emotion_dict = {e: 0 for e in [
+    "surprised", "excited", "angry", "proud", "sad", "annoyed", "grateful",
+    "lonely", "afraid", "terrified", "guilty", "impressed", "disgusted",
+    "hopeful", "confident", "furious", "anxious", "anticipating", "joyful",
+    "nostalgic", "disappointed", "prepared", "jealous", "content",
+    "devastated", "embarrassed", "caring", "sentimental", "trusting",
+    "ashamed", "apprehensive", "faithful",
+]}
+
+sentences = [
+    "I'm feeling really down lately.",
+    "I don't know if I can handle this anymore.",
+    "Today I got some good news!",
+]
+
+for sent in sentences:
+    prompt = f'<|prompter|>{sent}<|endoftext|><|emotion|>'
+    emotions_count = predict_emotion(prompt)
+    for emotion, count in emotions_count.items():
+        if emotion in emotion_dict:
+            emotion_dict[emotion] += count
+
+print(emotion_dict)
+```
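A short follow-up sketch (the `normalize_profile` helper is not part of the model card; it is only illustrative): once the counts in `emotion_dict` have been aggregated, they can be normalized into a probability distribution over the 32 emotions, which is the form an emotional profile is usually reported in.

```python
def normalize_profile(emotion_counts):
    """Convert raw emotion counts into a probability distribution."""
    total = sum(emotion_counts.values())
    if total == 0:
        return {e: 0.0 for e in emotion_counts}
    return {e: c / total for e, c in emotion_counts.items()}

# Hypothetical counts aggregated over a few sentences
profile = normalize_profile({"sad": 12, "anxious": 6, "joyful": 2})
print(max(profile, key=profile.get))  # → sad
```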
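As a side note, the `filter_limit_chars` helper in the example above can be exercised on its own: with `max_limited_chars=0` (the setting used inside `predict_emotion`), it keeps only the text before the first separator, which is how a sampled label is cleaned of trailing punctuation and template characters. A standalone check:

```python
def filter_limit_chars(text, limit_chars, max_limited_chars=2, stop_at_max=False):
    # Collect the position of every occurrence of every separator
    count, index = 0, [0]
    for separator in limit_chars:
        separator_count = text.count(separator)
        count += separator_count
        i = 0
        for _ in range(separator_count):
            i = text.find(separator, i)
            index.append(i if not stop_at_max else i + len(separator))
            i += len(separator)
    index.sort()
    index.append(len(text))
    # Truncate once at least `max_limited_chars` separators appear
    if count >= max_limited_chars:
        text = text[0:index[max_limited_chars + 1] if not stop_at_max else index[max_limited_chars]]
    return text

# With max_limited_chars=0, the text is cut at the first separator
print(filter_limit_chars("sad, maybe anxious", ['|', '<', '>', ',', '.'], 0, False))  # → sad
```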
 
 ## Citation [optional]
 
+```
+@inproceedings{de2025emotional,
+  title={The Emotional Spectrum of {LLMs}: Leveraging Empathy and Emotion-Based Markers for Mental Health Support},
+  author={De Grandi, Alessandro and Ravenda, Federico and Raballo, Andrea and Crestani, Fabio},
+  booktitle={Proceedings of the 10th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2025)},
+  pages={26--43},
+  year={2025}
+}
+```
+
 <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
 **BibTeX:**