athirdpath committed 7772fd0 (1 parent: cea935d): Update README.md

Files changed (1): README.md (+2, -0)
README.md CHANGED

athirdpath/Nethena-20b-Glue-LORA is a 128 rank LORA for RP, trained on a private dataset.
 
This is a test, exploring the effects of "gluing" the components of the 20b model together to reduce the iconic word-replacement errors, increase lucidity, and improve recall.

![image/png](https://huggingface.co/athirdpath/Nethena-20b-Glued/resolve/main/b5787896-afd5-44a3-b757-0e75ee28bed8.png)

The private ~500k token dataset used to train the LORA was Alpaca-formatted and focused on four primary categories:

- Medical texts (on psychology, reproductive organs, anatomy, and pregnancy). These are formatted so the model, in character as a doctor or therapist, answers a patient's question in short to medium form.
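For readers unfamiliar with the Alpaca layout mentioned above, the sketch below shows what one such training record typically looks like. The field values are invented placeholders (the actual dataset is private); only the three-field `instruction`/`input`/`output` structure reflects the Alpaca convention.

```python
import json

# Hypothetical example of a single Alpaca-format record in the style the
# README describes (doctor-in-character answering a patient's question).
# The text here is illustrative, not from the private training data.
record = {
    "instruction": "Answer the patient's question in character as a doctor, "
                   "in short to medium form.",
    "input": "Patient: What kinds of problems does a psychologist treat?",
    "output": "As a psychologist, I help patients work through conditions "
              "such as anxiety, depression, and stress using talk therapy "
              "rather than medication.",
}

# Alpaca-style datasets are commonly stored as a JSON list of such records.
print(json.dumps([record], indent=2))
```

A ~500k token dataset in this format would simply be a long list of such records, serialized as one JSON array.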