ZhiyangQi97 committed · verified
Commit b704e68 · 1 Parent(s): 434164b

Update README.md

Files changed (1): README.md (+3 −5)
README.md CHANGED

@@ -22,8 +22,7 @@ datasets:
 
 ## 💡 Overview
 
-- ✅ Fine-tuned on **6,471 dialogues** with feedback scores ≤ 98
-  (from the full KokoroChat dataset of 6,589 dialogues; 118 high-score dialogues reserved for testing)
+- ✅ Fine-tuned on **2,601 dialogues** with client feedback scores between **70 and 98**
 - ✅ Data collected through **text-based role-play** by trained counselors
 - ✅ Covers a wide range of topics: depression, family, school, career, relationships, and more
 - ✅ Base Model: [`tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3`](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3)
@@ -94,8 +93,7 @@ Fine-tuning was performed using **QLoRA** with the following configuration:
 
 ### Dataset Split
 
-- **Training Data**: 6,471 dialogues with feedback scores ≤ 98
-  *(from the full KokoroChat dataset of 6,589 dialogues; 118 dialogues with scores of 99 or 100 were reserved for testing)*
+- **Training Data**: 2,601 dialogues with feedback scores between 70 and 98
 - **Train/Validation Split**: 90% train, 10% validation
 
 ### Hyperparameter Settings
@@ -131,5 +129,5 @@ If you use this model or dataset, please cite the following paper:
 - [KokoroChat on GitHub (UEC-InabaLab)](https://github.com/UEC-InabaLab/KokoroChat)
 - 🤖 **Model Variants**:
   - [KokoroChat-Low](https://huggingface.co/UEC-InabaLab/KokoroChat-Low): fine-tuned on **3,870 dialogues** with client feedback scores **< 70**
-  - [KokoroChat-High](https://huggingface.co/UEC-InabaLab/KokoroChat-High): fine-tuned on **2,601 dialogues** with client feedback scores between **70 and 98**
+  - [KokoroChat-Full](https://huggingface.co/UEC-InabaLab/KokoroChat-Full): fine-tuned on **6,471 dialogues** with client feedback scores **≤ 98**
 - 📄 **Paper**: [ACL 2025 Paper (PDF)](https://drive.google.com/file/d/1T6XgvZii8rZ1kKLgOUGqm3BMvqQAvxEM/view?usp=sharing)
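The data-selection rule this commit documents (keep dialogues whose client feedback score falls between 70 and 98, then split 90% train / 10% validation) can be sketched as follows. This is a minimal illustration, not the repository's actual preprocessing code; the `feedback_score` key is an assumed field name, and KokoroChat's real schema may differ.

```python
import random


def filter_and_split(dialogues, low=70, high=98, val_ratio=0.1, seed=0):
    """Keep dialogues with low <= feedback score <= high, then split 90/10.

    `dialogues` is assumed to be a list of dicts carrying a numeric
    "feedback_score" key (hypothetical field name for illustration).
    Returns (train, validation).
    """
    kept = [d for d in dialogues if low <= d["feedback_score"] <= high]
    rng = random.Random(seed)          # fixed seed for a reproducible split
    rng.shuffle(kept)
    n_val = max(1, int(len(kept) * val_ratio))
    return kept[n_val:], kept[:n_val]


# Toy stand-in for the corpus: only the three mid-range scores survive
# the 70-98 filter, mirroring how KokoroChat-High excludes both the
# sub-70 dialogues (KokoroChat-Low) and the 99-100 test dialogues.
toy = [{"feedback_score": s} for s in (50, 69, 70, 85, 98, 99, 100)]
train, val = filter_and_split(toy)
```

Applied to the full 6,589-dialogue corpus, the same 70–98 filter yields the 2,601 dialogues stated in the diff above.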