Update README.md

README.md CHANGED

@@ -15,143 +15,33 @@ tags:

## Model Summary

MedPhi-2 is a Phi-2 model (**2.7 billion** parameters) further trained for the biomedical domain.

## How to Use

Phi-2 has been integrated in `transformers` version 4.37.0; please ensure that you are using that version or higher.

The current `transformers` version can be verified with: `pip list | grep transformers`.
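
Equivalently, the installed version can be checked from Python itself:

```python
import transformers

# Print the installed transformers version.
print(transformers.__version__)
```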
## Intended Uses

Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.

### QA Format:

You can provide the prompt as a standalone question as follows:

```markdown
Write a detailed analogy between mathematics and a lighthouse.
```

where the model generates the text that follows the question.

To encourage the model to write more concise answers, you can also use the QA format "Instruct: \<prompt\>\nOutput:":

```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output:
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
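
A similar sketch for the chat format. Trimming the completion at the next "Alice:" turn is an illustrative convention for keeping only Bob's reply, not something the base model does on its own, and the simple string slicing assumes the decoded text reproduces the prompt verbatim:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# The prompt ends with "Bob:", so the model writes Bob's next turn.
chat = ("Alice: I don't know why, I'm struggling to maintain focus while studying. "
        "Any suggestions?\nBob:")
inputs = tokenizer(chat, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=120)
completion = tokenizer.batch_decode(outputs)[0][len(chat):]
# Keep only Bob's first reply: cut once the model starts the next "Alice:" turn.
print(completion.split("Alice:")[0].strip())
```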
### Code Format:
```python
import math

def print_prime(n):
    """
    Print all primes between 1 and n
    """
    primes = []
    for num in range(2, n+1):
        # Trial division up to sqrt(num) is enough to detect a factor.
        is_prime = True
        for i in range(2, int(math.sqrt(num))+1):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

# Code-format prompt: the function signature and docstring; the model completes the body.
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: The majority of Phi-2's training data is Python code that uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, and `itertools`. If the model generates Python scripts that utilize other packages, or scripts in other languages, we strongly recommend that users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts to ensure the safety of its training data. There is a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2, being a base model, often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is because its training dataset consists primarily of textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, a combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
## Model Summary

MedPhi-2 is a Phi-2 model (**2.7 billion** parameters) further trained for the biomedical domain.

Paper: [arXiv:2406.06331](https://arxiv.org/abs/2406.06331)
## Model Details
### Model Description
- **Model type:** Clinical LLM (Large Language Model)
- **Language(s) (NLP):** English
- **License:** [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE)
- **Finetuned from model:** Phi-2
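
Since MedPhi-2 keeps Phi-2's architecture, the standard `transformers` causal-LM loading path should apply. A minimal sketch, with the repository id left as a placeholder (substitute this model card's actual id on the Hub) and an illustrative prompt:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<org>/MedPhi-2"  # placeholder -- replace with this model card's actual repo id

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Illustrative biomedical prompt in Phi-2's QA format.
prompt = "Instruct: List common symptoms of myocardial infarction.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.batch_decode(outputs)[0])
```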
## Citation
**BibTeX:**

```bibtex
@article{kim2024medexqa,
  title={MedExQA: Medical Question Answering Benchmark with Multiple Explanations},
  author={Kim, Yunsoo and Wu, Jinge and Abdulle, Yusuf and Wu, Honghan},
  journal={arXiv preprint arXiv:2406.06331},
  year={2024}
}
```