Instructions for using Xianjun/PLLaMa-7b-instruct with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use Xianjun/PLLaMa-7b-instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Xianjun/PLLaMa-7b-instruct")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Xianjun/PLLaMa-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("Xianjun/PLLaMa-7b-instruct")
```
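A minimal usage sketch for the pipeline above; the prompt and sampling parameters are illustrative assumptions, not part of the model card:

```python
# Generate a completion; the question and sampling settings are illustrative.
result = pipe(
    "What are the main factors affecting plant growth?",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```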
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Xianjun/PLLaMa-7b-instruct with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Xianjun/PLLaMa-7b-instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Xianjun/PLLaMa-7b-instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
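Because the server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the `openai` package is installed and the vLLM server started above is listening on localhost:8000:

```python
# Query the locally running vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="Xianjun/PLLaMa-7b-instruct",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```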
Use Docker:

```shell
docker model run hf.co/Xianjun/PLLaMa-7b-instruct
```
- SGLang
How to use Xianjun/PLLaMa-7b-instruct with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Xianjun/PLLaMa-7b-instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Xianjun/PLLaMa-7b-instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
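The same completions endpoint can be reached from Python as well. A minimal sketch using `requests` (an assumption; any HTTP client works), with the SGLang server started above on localhost:30000:

```python
# POST to the SGLang server's OpenAI-compatible completions endpoint.
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Xianjun/PLLaMa-7b-instruct",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```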
Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Xianjun/PLLaMa-7b-instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Xianjun/PLLaMa-7b-instruct",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use Xianjun/PLLaMa-7b-instruct with Docker Model Runner:
```shell
docker model run hf.co/Xianjun/PLLaMa-7b-instruct
```
---
license: apache-2.0
---
# Model Card for PLLaMa-7b-instruct

<!-- Provide a quick summary of what the model is/does. -->

This model is specialized for plant science: starting from LLaMa-2-7b-base, it was continually pretrained on over 1.5 million plant science academic articles and then instruction-tuned so that it follows user instructions.
- **Developed by:** UCSB
- **Language(s) (NLP):** [More Information Needed]
- **License:** apache-2.0
- **Finetuned from model:** LLaMa-2-7b-base
- **Paper:** https://arxiv.org/pdf/2401.01600.pdf
- **Demo:** [More Information Needed]
## How to Get Started with the Model
```python
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

# Load the tokenizer, and load the model in half precision on the GPU.
tokenizer = LlamaTokenizer.from_pretrained("Xianjun/PLLaMa-7b-instruct")
model = LlamaForCausalLM.from_pretrained("Xianjun/PLLaMa-7b-instruct").half().to("cuda")

instruction = "How to ..."
batch = tokenizer(instruction, return_tensors="pt", add_special_tokens=False).to("cuda")

# Sample a response of up to 512 new tokens.
with torch.no_grad():
    output = model.generate(**batch, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(output[0], skip_special_tokens=True)
```
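Note that `response` above includes the prompt, since the entire output sequence is decoded. A minimal follow-up sketch, assuming the snippet above has already run, that keeps only the newly generated text:

```python
# Slice off the prompt tokens so only the generated completion is decoded.
prompt_len = batch["input_ids"].shape[1]
completion = tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)
print(completion)
```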
## Citation
If you find PLLaMa useful in your research, please cite the following paper:

```latex
@inproceedings{Yang2024PLLaMaAO,
  title={PLLaMa: An Open-source Large Language Model for Plant Science},
  author={Xianjun Yang and Junfeng Gao and Wenxin Xue and Erik Alexandersson},
  year={2024},
  url={https://api.semanticscholar.org/CorpusID:266741610}
}
```