Commit 7772fd0 · Parent: cea935d · Update README.md

athirdpath/Nethena-20b-Glue-LORA is a 128-rank LORA for RP, trained on a private dataset.
This is a test, exploring the effects of "gluing" the components of the 20b model together to reduce the iconic word replacement errors, increase lucidity, and improve recall.
The private ~500k token dataset used to train the LORA was Alpaca formatted and focused on 4 primary categories:
- Medical texts (on psychology, reproductive organs, anatomy, and pregnancy). These are formatted so the model, in character as a doctor or therapist, answers a patient's question in short to medium form.
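Since the dataset is described as Alpaca formatted, a minimal sketch of what such a training record looks like may help. The field names (`instruction` / `input` / `output`) and the prompt template follow the standard Alpaca convention; the record content below is purely hypothetical and not taken from the private dataset:

```python
# One hypothetical Alpaca-formatted record, in the style of the medical-text
# category described above (doctor-in-character answering a patient).
record = {
    "instruction": "Answer the patient's question in character as a doctor.",
    "input": "Is it normal to feel anxious before a minor procedure?",
    "output": "Yes, some pre-procedure anxiety is very common and usually passes.",
}

def to_prompt(rec: dict) -> str:
    """Render a record with the standard Alpaca prompt template."""
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{rec['instruction']}\n\n"
        f"### Input:\n{rec['input']}\n\n"
        f"### Response:\n{rec['output']}"
    )

print(to_prompt(record))
```

Records without context typically drop the `### Input:` section; otherwise the template is used as-is when tokenizing examples for LORA training.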