ZhiyangQi97 committed
Commit 7d4ca5b · verified · 1 Parent(s): 49aa0c7

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -14,9 +14,9 @@ datasets:
   - UEC-InabaLab/KokoroChat
 ---
 
-# 🧠 KokoroChat-Full: Japanese Counseling Dialogue Model
+# 🧠 Llama-3.1-KokoroChat-Full: Japanese Counseling Dialogue Model
 
-**KokoroChat-Full** is a large-scale Japanese language model fine-tuned on the **entire KokoroChat dataset**—a collection of over 6,000 psychological counseling dialogues conducted via **role-play between trained counselors**. The model is capable of generating **empathetic and context-aware responses** suitable for mental health-related conversational tasks.
+**Llama-3.1-KokoroChat-Full** is a large-scale Japanese language model fine-tuned on the **entire KokoroChat dataset**—a collection of over 6,000 psychological counseling dialogues conducted via **role-play between trained counselors**. The model is capable of generating **empathetic and context-aware responses** suitable for mental health-related conversational tasks.
 
 ---
 
@@ -35,7 +35,7 @@ datasets:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_id = "UEC-InabaLab/KokoroChat-Full"
+model_id = "UEC-InabaLab/Llama-3.1-KokoroChat-Full"
 
 # Load tokenizer and model
 tokenizer = AutoTokenizer.from_pretrained(model_id)
@@ -130,6 +130,6 @@ If you use this model or dataset, please cite the following paper:
 - [KokoroChat on Hugging Face Datasets](https://huggingface.co/datasets/UEC-InabaLab/KokoroChat)
 - [KokoroChat on GitHub (UEC-InabaLab)](https://github.com/UEC-InabaLab/KokoroChat)
 - 🤖 **Model Variants**:
-- [KokoroChat-Low](https://huggingface.co/UEC-InabaLab/KokoroChat-Low): fine-tuned on **3,870 dialogues** with client feedback scores **< 70**
-- [KokoroChat-High](https://huggingface.co/UEC-InabaLab/KokoroChat-High): fine-tuned on **2,601 dialogues** with client feedback scores between **70 and 98**
+- [Llama-3.1-KokoroChat-Low](https://huggingface.co/UEC-InabaLab/Llama-3.1-KokoroChat-Low): fine-tuned on **3,870 dialogues** with client feedback scores **< 70**
+- [Llama-3.1-KokoroChat-High](https://huggingface.co/UEC-InabaLab/Llama-3.1-KokoroChat-High): fine-tuned on **2,601 dialogues** with client feedback scores between **70 and 98**
 - 📄 **Paper**: [ACL 2025 Paper (PDF)](https://drive.google.com/file/d/1T6XgvZii8rZ1kKLgOUGqm3BMvqQAvxEM/view?usp=sharing)