Commit 207552a · Parent: 920f10a
Update README.md

README.md CHANGED

---
datasets:
- S2ORC
language:
- en
tags:
- llama
- ggml
- pubmed
- medicine
- research
- papers
---

# PMC_LLaMA - finetuned on PubMed Central papers

**This is a ggml conversion of chaoyi-wu's [PMC_LLAMA_7B_10_epoch](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch) model.**

**It is a LLaMA model finetuned on PubMed Central papers from the Semantic Scholar Open Research Corpus [dataset](https://github.com/allenai/s2orc).**

Currently I have only converted it to the **new k-quant method Q5_K_M**. I will gladly make more versions on request.

Other possible quantizations include: q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q5_K_M, q6_K

An f16 version can be found here: [nikuya3/alpaca-lora-7b-german-base-51k-ggml](https://huggingface.co/nikuya3/alpaca-lora-7b-german-base-51k-ggml)

Compatible with **llama.cpp**, but also with:

- **text-generation-webui**
- **KoboldCpp**
- **ParisNeo/GPT4All-UI**
- **llama-cpp-python**
- **ctransformers**
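
As a rough usage sketch (not part of the original model card), the Q5_K_M file can be loaded with **llama-cpp-python** as shown below; the model file name and the prompt are placeholders, so point `model_path` at the actual `.bin` file from this repo. The same GGML file should also load in **ctransformers** via `AutoModelForCausalLM.from_pretrained(path, model_type="llama")`.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model file name below is a placeholder - use the actual .bin from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="PMC_LLAMA_7B_10_epoch.ggmlv3.q5_K_M.bin",  # placeholder path
    n_ctx=512,  # the model was trained with a cutoff length of 512 tokens
)

prompt = "Question: What is the mechanism of action of metformin?\nAnswer:"
result = llm(prompt, max_tokens=256, temperature=0.2)
print(result["choices"][0]["text"])
```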
---

# CAVE!

Being a professional myself and having tested the model, I strongly advise that this model is best left in the hands of professionals.

The model can produce very detailed and elaborate responses, but in my opinion it confabulates quite often (a serious problem given the field of use).

Because the answers are so detailed and confident-sounding, it is difficult for a layperson to tell when the model is returning facts and when it is returning bullshit.

So unless you are a subject matter expert (biology, medicine, chemistry, pharmacy, etc.), I appeal to your sense of responsibility and ask you:

**to use the model only for testing, exploration, and just for fun. In no case should the answers of this model lead to decisions that affect your health.**

---

Here is what the author(s) write in the original model [card](https://huggingface.co/chaoyi-wu/PMC_LLAMA_7B_10_epoch/blob/main/README.md):

```
This repo contains the latest version of PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in the S2ORC dataset.

Notably, different from chaoyi-wu/PMC_LLAMA_7B, this model is further trained for 10 epochs.

The model was trained with the following hyperparameters:

Epochs: 10
Batch size: 128
Cutoff length: 512
Learning rate: 2e-5
Each epoch we sample 512 tokens per paper for training.
```
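
For orientation only: the sketch below is **not** the original training script, it merely restates the hyperparameters quoted above as a standard Hugging Face `TrainingArguments` object. The output directory and every other setting are assumptions.

```python
# Illustrative sketch only - NOT the original PMC_LLaMA training code.
# It restates the hyperparameters from the quoted card in a standard
# Hugging Face TrainingArguments object; everything else is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pmc_llama_7b_10_epoch",  # hypothetical output directory
    num_train_epochs=10,                 # Epochs: 10
    per_device_train_batch_size=128,     # Batch size: 128 (global batch in the card)
    learning_rate=2e-5,                  # Learning rate: 2e-5
)

# "Cutoff length: 512" and "512 tokens sampled per paper per epoch" describe
# how the training texts are tokenized and truncated, not a TrainingArguments field.
```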
---
### That's it!

If you have any further questions, feel free to contact me or start a discussion.
|