PLLaMa: An Open-source Large Language Model for Plant Science
Paper: arXiv 2401.01600
How to use Xianjun/PLLaMa-7b-base with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="Xianjun/PLLaMa-7b-base")
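Once the pipeline is loaded, calling it might look like the following minimal sketch; the prompt and sampling parameters are illustrative, not values from the paper:
# Ask the pipeline for a short continuation (illustrative prompt and settings)
result = pipe("Photosynthesis in C4 plants differs from C3 plants because", max_new_tokens=128, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])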
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Xianjun/PLLaMa-7b-base")
model = AutoModelForCausalLM.from_pretrained("Xianjun/PLLaMa-7b-base")
How to use Xianjun/PLLaMa-7b-base with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Xianjun/PLLaMa-7b-base"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Xianjun/PLLaMa-7b-base",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
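The server can also be called from Python with the openai client library, since vLLM exposes an OpenAI-compatible API. This is a minimal sketch assuming the default local endpoint shown above; vLLM does not require a real API key by default:
from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Xianjun/PLLaMa-7b-base",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)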
How to use Xianjun/PLLaMa-7b-base with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "Xianjun/PLLaMa-7b-base" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Xianjun/PLLaMa-7b-base",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Alternatively, start the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "Xianjun/PLLaMa-7b-base" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Xianjun/PLLaMa-7b-base",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
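Because the SGLang server is also OpenAI-compatible, responses can be streamed from Python with the openai client. This is a minimal sketch assuming the server is listening on port 30000 as configured above:
from openai import OpenAI

# Point the client at the local SGLang server
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Stream the completion as it is generated
stream = client.completions.create(
    model="Xianjun/PLLaMa-7b-base",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].text, end="", flush=True)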
How to use Xianjun/PLLaMa-7b-base with Docker Model Runner:
docker model run hf.co/Xianjun/PLLaMa-7b-base
This model is optimized for plant science through continued pretraining of LLaMa-2 on over 1.5 million plant science academic articles.
Developed by: UCSB
Language(s) (NLP): [More Information Needed]
License: [More Information Needed]
Finetuned from model [optional]: LLaMa-2
Paper [optional]: https://arxiv.org/pdf/2401.01600.pdf
Demo [optional]: [More Information Needed]
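A minimal example of running inference directly with the model in half precision on a GPU: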
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

# Load the tokenizer and move the model to the GPU in half precision
tokenizer = LlamaTokenizer.from_pretrained("Xianjun/PLLaMa-7b-base")
model = LlamaForCausalLM.from_pretrained("Xianjun/PLLaMa-7b-base").half().to("cuda")

instruction = "How to ..."
batch = tokenizer(instruction, return_tensors="pt", add_special_tokens=False).to("cuda")
with torch.no_grad():
    output = model.generate(**batch, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
If you find PLLaMa useful in your research, please cite the following paper:
@inproceedings{Yang2024PLLaMaAO,
title={PLLaMa: An Open-source Large Language Model for Plant Science},
author={Xianjun Yang and Junfeng Gao and Wenxin Xue and Erik Alexandersson},
year={2024},
url={https://api.semanticscholar.org/CorpusID:266741610}
}