---
license: llama2
---
|
|
|
|
|
<div align="center">

<h1>

AIMI FMs: A Collection of Foundation Models in Radiology

</h1>

</div>
|
|
|
|
|
<p align="center">

📖 <a href="https://arxiv.org/" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/StanfordAIMI/RadLLaMA-7b" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Github</a> • 🪐 <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Project</a>

</p>
|
|
|
|
|
|
|
## ✨ Latest News
|
|
|
|
|
- [01/20/2024]: Model released on [Hugging Face](https://huggingface.co/StanfordAIMI/RadLLaMA-7b).
|
|
|
|
|
## 🎬 Get Started
|
|
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("StanfordAIMI/RadLLaMA-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("StanfordAIMI/RadLLaMA-7b")

# Format the prompt with the model's chat template
prompt = "Hi"
conv = [{"from": "human", "value": prompt}]
input_ids = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")

# Generate a response (without max_new_tokens, generation stops after the 20-token default) and decode it
outputs = model.generate(input_ids, max_new_tokens=256)
response = tokenizer.decode(outputs[0])
print(response)
```
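Loading a 7B-parameter model in full precision on CPU can be slow and memory-intensive. As a minimal sketch (not part of the model card's instructions), the standard `transformers`/`torch` options below load the weights in half precision when a GPU is available; the decoding settings are illustrative assumptions, not recommended values:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: run on GPU in float16 when available, otherwise CPU in float32.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained("StanfordAIMI/RadLLaMA-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("StanfordAIMI/RadLLaMA-7b", torch_dtype=dtype).to(device)

conv = [{"from": "human", "value": "Hi"}]
input_ids = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt").to(device)

# Illustrative sampling settings; adjust for your use case.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)

# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```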
|
|
|
|
|
## ✍️ Citation
|
|
|
|
|
```bibtex
@article{aimifms-2024,
  title={},
  author={},
  journal={arXiv preprint arXiv:xxxx.xxxxx},
  url={https://arxiv.org/abs/xxxx.xxxxx},
  year={2024}
}
```
|
|
|