ZhiyangQi97 committed on
Commit c0b10b2 · verified · 1 Parent(s): f8c98fc

Update README.md

Files changed (1)
  1. README.md +32 -80
README.md CHANGED
@@ -14,99 +14,51 @@ datasets:
 - UEC-InabaLab/KokoroChat
 ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
- ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
+ # 🧠 KokoroChat-High (LoRA Adapter for Japanese Counseling Dialogue)

+ This repository contains the **LoRA adapter weights** for KokoroChat-High, a version of the KokoroChat model fine-tuned on **high-feedback counseling dialogues** (client feedback scores between 70 and 98) from the [KokoroChat dataset](https://huggingface.co/datasets/UEC-InabaLab/KokoroChat).

+ The base model is [tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3); this adapter specializes it for generating **high-quality, empathetic Japanese counseling responses**.

+ ---

+ ## 💡 What is "KokoroChat-High"?

+ - Trained on **2,601 dialogues**
+ - ✅ All sessions have **client feedback scores between 70 and 98** (see the filtering sketch below)
+ - ✅ Represents high-quality, successful counseling interactions
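
A minimal sketch (not part of this commit) of how such a subset could be reproduced from the public dataset. The `load_dataset` call is standard, but the split name and the feedback-score field are assumptions, since the dataset schema is not described in this README:

```python
from datasets import load_dataset

# Assumption: each dialogue carries a client feedback score; the field name
# "client_score" is hypothetical and may differ in the actual dataset.
ds = load_dataset("UEC-InabaLab/KokoroChat", split="train")
high = ds.filter(lambda ex: 70 <= ex["client_score"] <= 98)
print(len(high))  # expected: 2,601 dialogues for the KokoroChat-High subset
```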

+ ---

+ ## 🧾 Model Details

+ - **Base Model**: [`tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3`](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3)
+ - **Fine-tuning Method**: PEFT (LoRA)
+ - **Adapter Size**: ~1.1 GB (`adapter_model.safetensors`)
+ - **Language**: Japanese
+ - **Training Data**: KokoroChat-High subset (2,601 dialogues)
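
As a quick sanity check (a sketch, not part of this commit), the adapter's own config records which base model it targets; the repo ID below is the placeholder used elsewhere in this README:

```python
from peft import PeftConfig

# Placeholder repo ID from this README; substitute the actual adapter repo.
cfg = PeftConfig.from_pretrained("your-username/kokorochat-high-lora")
print(cfg.base_model_name_or_path)  # expected: tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3
print(cfg.peft_type)                # expected: PeftType.LORA
```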

+ ---

+ ## ⚙️ Usage Instructions

+ This repository contains **only the LoRA adapter**. You must load the original base model and then apply this adapter:

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+ from peft import PeftModel

+ base_model_id = "tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3"
+ adapter_id = "your-username/kokorochat-high-lora"  # placeholder; replace with this repo's ID

+ tokenizer = AutoTokenizer.from_pretrained(base_model_id)

+ # Load the base model in 4-bit to reduce GPU memory usage (requires bitsandbytes).
+ base_model = AutoModelForCausalLM.from_pretrained(
+     base_model_id,
+     device_map="auto",
+     torch_dtype="auto",
+     quantization_config=BitsAndBytesConfig(load_in_4bit=True),
+ )

+ # Apply the LoRA adapter on top of the base model.
+ model = PeftModel.from_pretrained(base_model, adapter_id)
+ # Optional: merging into a 4-bit quantized base is not supported by all PEFT
+ # versions; you can skip this line and generate with the PeftModel directly.
+ model = model.merge_and_unload()
+ ```
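
Once the adapter is applied, generation follows the standard `transformers` chat flow. A minimal inference sketch (not part of this commit); the system prompt, example message, and sampling settings are illustrative assumptions:

```python
# Build a short counseling exchange using the base model's chat template.
messages = [
    # "You are an empathetic counselor." (illustrative system prompt)
    {"role": "system", "content": "あなたは共感的な心理カウンセラーです。"},
    # "I can't sleep lately because of stress at work." (illustrative client message)
    {"role": "user", "content": "最近、仕事のストレスで眠れません。"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated counselor reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```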