---
license: llama2
---
<div align="center">
<h1>
AIMI FMs: A Collection of Foundation Models in Radiology
</h1>
</div>
<p align="center">
πŸ“ <a href="https://arxiv.org/" target="_blank">Paper</a> β€’ πŸ€— <a href="https://huggingface.co/StanfordAIMI/RadLLaMA-7b" target="_blank">Hugging Face</a> β€’ 🧩 <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Github</a> β€’ πŸͺ„ <a href="https://github.com/Stanford-AIMI/aimi-fms" target="_blank">Project</a>
</p>
## ✨ Latest News
- [01/20/2023]: Model released on [Hugging Face](https://huggingface.co/StanfordAIMI/RadLLaMA-7b).
## 🎬 Get Started
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("StanfordAIMI/RadLLaMA-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("StanfordAIMI/RadLLaMA-7b")

# Wrap the prompt in the conversation format expected by the chat template.
prompt = "Hi"
conv = [{"from": "human", "value": prompt}]
input_ids = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")

# Generate a response and decode it back to text.
outputs = model.generate(input_ids, max_new_tokens=256)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
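For longer or less deterministic outputs, generation can be tuned with the standard `transformers` sampling arguments. The sketch below reuses `model`, `tokenizer`, and `input_ids` from the snippet above; the parameter values are illustrative assumptions, not settings recommended for RadLLaMA-7b:

```python
# A minimal sketch of sampled generation; the values below are
# illustrative assumptions, not tuned recommendations for RadLLaMA-7b.
outputs = model.generate(
    input_ids,
    max_new_tokens=512,  # upper bound on newly generated tokens
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # soften the next-token distribution
    top_p=0.9,           # nucleus-sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```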
## ✏️ Citation
```bibtex
@article{aimifms-2024,
  title={},
  author={},
  journal={arXiv preprint arXiv:xxxx.xxxxx},
  url={https://arxiv.org/abs/xxxx.xxxxx},
  year={2024}
}
```