---
datasets:
- UEC-InabaLab/KokoroChat
---

# 🧠 KokoroChat-High: Japanese Counseling Dialogue Model

**KokoroChat-High** is a large-scale Japanese language model fine-tuned on the **entire KokoroChat dataset**, a collection of over 6,000 psychological counseling dialogues conducted via **role-play between trained counselors**. It generates **empathetic, context-aware responses** suitable for mental-health-related conversational tasks.

---

## 💡 Overview

- ✅ Fine-tuned on **6,471 dialogues** with feedback scores ≤ 98
  (from the full KokoroChat dataset of 6,589 dialogues; 118 high-score dialogues were reserved for testing)
- ✅ Data collected through **text-based role-play** by trained counselors
- ✅ Covers a wide range of topics: depression, family, school, career, relationships, and more
- ✅ Base Model: [`tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3`](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3)

---

## ⚙️ Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "UEC-InabaLab/KokoroChat-High"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ensure a pad token is set (the Llama tokenizer ships without one)
if tokenizer.pad_token_id is None:
    tokenizer.pad_token = "[PAD]"
    tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("[PAD]")

model.config.pad_token_id = tokenizer.pad_token_id

# Build dialogue input
# System: "In a psychological counseling conversation, respond appropriately
#          as the counselor, taking the dialogue history into account."
# User:   "Lately I've been feeling down and can't find any motivation."
messages = [
    {"role": "system", "content": "心理カウンセリングの会話において、対話履歴を考慮し、カウンセラーとして適切に応答してください。"},
    {"role": "user", "content": "最近、気分が落ち込んでやる気が出ません。"}
]

# Tokenize with the model's chat template
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

attention_mask = inputs.ne(tokenizer.pad_token_id)

# Generate a response
outputs = model.generate(
    inputs,
    attention_mask=attention_mask,
    pad_token_id=tokenizer.pad_token_id,
    max_new_tokens=256
)

# Keep only the newly generated tokens and decode them
response = outputs[0][inputs.shape[-1]:]
response_text = tokenizer.decode(response, skip_special_tokens=True)

print(response_text)
```

---

## 🛠️ Fine-Tuning Details

Fine-tuning was performed using **QLoRA** with the following configuration (a minimal sketch follows the list):

- **Quantization**: 4-bit NF4 with bfloat16 computation
- **LoRA target modules**: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`
- **LoRA parameters**:
  - `r = 8`
  - `lora_alpha = 16`
  - `lora_dropout = 0.05`
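
The training script itself is not published with this model, so the following is only a rough sketch of how the configuration above maps onto `bitsandbytes` + `peft`; the base-model ID is taken from the Overview, and everything else mirrors the listed values.

```python
# Unofficial sketch of the QLoRA setup described above; the actual
# training script is not part of this repository.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization with bfloat16 computation
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# LoRA configuration with the listed target modules and parameters
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```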

### Dataset Split

- **Training Data**: 6,471 dialogues with feedback scores ≤ 98
  *(from the full KokoroChat dataset of 6,589 dialogues; 118 dialogues with scores of 99 or 100 were reserved for testing)*
- **Train/Validation Split**: 90% train, 10% validation (see the illustrative snippet after this list)
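
As a rough illustration only, the filtering and split described above could look like the following with 🤗 `datasets`; note that the column name `feedback_score` is an assumption about the dataset schema, not a documented field.

```python
# Illustrative only: "feedback_score" is an ASSUMED column name,
# not a documented field of the KokoroChat dataset.
from datasets import load_dataset

raw = load_dataset("UEC-InabaLab/KokoroChat", split="train")

# Keep dialogues with feedback scores <= 98 (99-100 are held out for testing)
filtered = raw.filter(lambda ex: ex["feedback_score"] <= 98)

# 90% train / 10% validation
split = filtered.train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = split["train"], split["test"]
print(len(train_ds), len(val_ds))
```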

### Hyperparameter Settings

The main settings were as follows (see the `TrainingArguments` sketch after this list):

- **Optimizer**: `adamw_8bit`
- **Warm-up Steps**: `100`
- **Learning Rate**: `1e-3`
- **Epochs**: `5`
- **Batch Size**: `8`
- **Validation Frequency**: every 400 steps
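
Again as an unofficial sketch, these settings correspond roughly to the following `transformers.TrainingArguments`; the output path is a placeholder, and the batch size is assumed to be per-device.

```python
# Unofficial mapping of the listed hyperparameters onto TrainingArguments;
# output_dir is a placeholder and the batch size is assumed per-device.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./kokorochat-qlora",  # placeholder path
    optim="adamw_8bit",               # 8-bit AdamW via bitsandbytes
    warmup_steps=100,
    learning_rate=1e-3,
    num_train_epochs=5,
    per_device_train_batch_size=8,
    eval_strategy="steps",            # validate every 400 steps
    eval_steps=400,
    bf16=True,                        # matches the bfloat16 computation above
)
```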

---

## 📄 Citation

If you use this model or dataset, please cite the following paper:

```bibtex
@inproceedings{qi2025kokorochat,
  title     = {KokoroChat: A Japanese Psychological Counseling Dialogue Dataset Collected via Role-Playing by Trained Counselors},
  author    = {Zhiyang Qi and Takumasa Kaneko and Keiko Takamizo and Mariko Ukiyo and Michimasa Inaba},
  booktitle = {Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics},
  year      = {2025},
  url       = {https://github.com/UEC-InabaLab/KokoroChat}
}
```

---

## 🔗 Related

- 📁 **Dataset**:
  - [KokoroChat on Hugging Face Datasets](https://huggingface.co/datasets/UEC-InabaLab/KokoroChat)
  - [KokoroChat on GitHub (UEC-InabaLab)](https://github.com/UEC-InabaLab/KokoroChat)
- 🤖 **Model Variants**:
  - [KokoroChat-Low](https://huggingface.co/UEC-InabaLab/KokoroChat-Low): fine-tuned on **3,870 dialogues** with client feedback scores **< 70**
  - [KokoroChat-High](https://huggingface.co/UEC-InabaLab/KokoroChat-High): fine-tuned on **2,601 dialogues** with client feedback scores between **70 and 98**
- 📄 **Paper**: [ACL 2025 Paper (PDF)](https://drive.google.com/file/d/1T6XgvZii8rZ1kKLgOUGqm3BMvqQAvxEM/view?usp=sharing)