chi0818 committed on
Commit 4c49092 · verified · 1 Parent(s): 823de6d

Update README.md

Files changed (1):
  1. README.md +11 -12
README.md CHANGED

@@ -1,24 +1,23 @@
 ---
 license: apache-2.0
 tags:
- - chatbot
- - chinese
- - mental-health
- - text-generation
- - emotion-support
- - lora
- - deepseek
- - llama-factory
+ - chatbot
+ - mental-health
+ - text-generation
+ - emotion-support
+ - lora
+ - deepseek
+ - llama-factory
 library_name: transformers
 language:
- - zh
+ - en
 pipeline_tag: text-generation
-base_model: deepseek-ai/deepseek-llm-1.5b-chat # Or deepseek-ai/deepseek-llm-7b-chat
+base_model: deepseek-ai/deepseek-llm-1.5b-chat
 ---
 
 # Emotion-Therapy Chatbot Based on DeepSeek LLM (1.5B)
 
-This model is a **Chinese emotional-support chatbot** fine-tuned on top of DeepSeek LLM-1.5B / 7B Distill using LoRA. It is designed to simulate empathetic, comforting conversations for emotional wellness, daily companionship, and supportive dialogue scenarios.
+This model is an **emotional-support chatbot** fine-tuned on top of DeepSeek LLM-1.5B / 7B Distill using LoRA. It is designed to simulate empathetic, comforting conversations for emotional wellness, daily companionship, and supportive dialogue scenarios.
 
 ## 💡 Project Background
 
@@ -52,4 +51,4 @@ tokenizer = AutoTokenizer.from_pretrained("chi0818/my-chatbot-model")
 input_text = "Today I feel so lonely and sad……"
 inputs = tokenizer(input_text, return_tensors="pt")
 outputs = model.generate(**inputs, max_new_tokens=100)
-print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+print(tokenizer.decode(outputs[0], skip_special_tokens=True))