datasets:
- UEC-InabaLab/KokoroChat
---
# 🧠 Llama-3.1-KokoroChat-High: Japanese Counseling Dialogue Model
**Llama-3.1-KokoroChat-High** is a large-scale Japanese language model fine-tuned on the **entire KokoroChat dataset**: a collection of over 6,000 psychological counseling dialogues conducted via **role-play between trained counselors**. The model generates **empathetic, context-aware responses** suited to mental-health-related conversational tasks.

---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UEC-InabaLab/Llama-3.1-KokoroChat-High"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
- [KokoroChat on Hugging Face Datasets](https://huggingface.co/datasets/UEC-InabaLab/KokoroChat)
- [KokoroChat on GitHub (UEC-InabaLab)](https://github.com/UEC-InabaLab/KokoroChat)
- 🤖 **Model Variants**:
  - [Llama-3.1-KokoroChat-Low](https://huggingface.co/UEC-InabaLab/Llama-3.1-KokoroChat-Low): fine-tuned on **3,870 dialogues** with client feedback scores **< 70**
  - [Llama-3.1-KokoroChat-Full](https://huggingface.co/UEC-InabaLab/Llama-3.1-KokoroChat-Full): fine-tuned on **6,471 dialogues** with client feedback scores **≤ 98**
- 📄 **Paper**: [ACL 2025 Paper (PDF)](https://drive.google.com/file/d/1T6XgvZii8rZ1kKLgOUGqm3BMvqQAvxEM/view?usp=sharing)
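As a minimal sketch of how a loaded checkpoint might be driven for one counseling turn: this assumes the tokenizer ships a Llama-3.1 chat template and that client utterances map to the `user` role and counselor replies to the `assistant` role (check the model card's prompt format before relying on this). The helper name `build_messages` and the sampling settings are illustrative, not part of the released API.

```python
# Sketch: one counseling turn with Llama-3.1-KokoroChat-High.
# Role mapping (client -> "user", counselor -> "assistant") is an
# assumption; verify against the official prompt format.

def build_messages(history, client_utterance):
    """Append the client's newest utterance to prior chat-format turns."""
    messages = list(history)
    messages.append({"role": "user", "content": client_utterance})
    return messages

if __name__ == "__main__":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "UEC-InabaLab/Llama-3.1-KokoroChat-High"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )

    # Client opens the session with a sleep-related concern.
    messages = build_messages([], "最近、眠れない日が続いています。")
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(
        input_ids, max_new_tokens=256, do_sample=True, temperature=0.7
    )
    # Decode only the newly generated counselor reply.
    print(tokenizer.decode(output[0][input_ids.shape[-1]:],
                           skip_special_tokens=True))
```

Keeping the message-assembly step separate from generation makes it easy to carry the growing turn history through a multi-turn session.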