language:
- en
---
# Usage

A 4-bit-quantized Qwen 0.6B model fine-tuned on the English split of `brighter-dataset/BRIGHTER-emotion-categories`.

To use the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "FritzStack/QWEmotioN-4bit",
    trust_remote_code=True
)

model = AutoModelForCausalLM.from_pretrained(
    "FritzStack/QWEmotioN-4bit",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)

def predict_emotions(text, max_new_tokens=50):
    """Predict emotion labels for a given text."""
    prompt = f"{text}. "
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding
            pad_token_id=tokenizer.eos_token_id
        )

    # Decode only the newly generated tokens, skipping the prompt.
    generated_text = tokenizer.decode(
        outputs[0][len(inputs.input_ids[0]):],
        skip_special_tokens=False
    ).strip()

    return generated_text
```
```python
print(predict_emotions("I miss you"))
# Output:
# Emotion Output: sadness <|im_end|>
```
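Because the model is decoded with `skip_special_tokens=False`, the raw generation keeps the chat end-of-turn token (`<|im_end|>`). A minimal post-processing helper (hypothetical, not shipped with the model) can turn the raw string into a clean list of labels:

```python
import re

def parse_emotions(generated_text):
    """Strip the <|im_end|> token and the 'Emotion Output:' prefix,
    returning the comma-separated emotion labels as a list."""
    text = re.sub(r"<\|im_end\|>", "", generated_text)
    text = text.replace("Emotion Output:", "")
    return [label.strip() for label in text.split(",") if label.strip()]

print(parse_emotions("Emotion Output: sadness <|im_end|>"))  # ['sadness']
```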
# Uploaded model

- **Developed by:** FritzStack