QLoRA: Efficient Finetuning of Quantized LLMs
Paper: arXiv:2305.14314
How to use skumar9/Llama-medx_v2 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="skumar9/Llama-medx_v2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("skumar9/Llama-medx_v2")
model = AutoModelForCausalLM.from_pretrained("skumar9/Llama-medx_v2")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

How to use skumar9/Llama-medx_v2 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "skumar9/Llama-medx_v2"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "skumar9/Llama-medx_v2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
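Because the server exposes an OpenAI-compatible API, the same request can also be made from Python with the openai client. A minimal sketch, assuming the server above is running on its default port and that no API key is configured (the api_key value is a placeholder a default local server ignores):

# Query the local vLLM server through its OpenAI-compatible endpoint
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="skumar9/Llama-medx_v2",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)

The same snippet works against the SGLang server described below by pointing base_url at port 30000.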
How to use skumar9/Llama-medx_v2 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "skumar9/Llama-medx_v2" \
  --host 0.0.0.0 \
  --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "skumar9/Llama-medx_v2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'

# Or run the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "skumar9/Llama-medx_v2" \
    --host 0.0.0.0 \
    --port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "skumar9/Llama-medx_v2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'

How to use skumar9/Llama-medx_v2 with Docker Model Runner:
docker model run hf.co/skumar9/Llama-medx_v2
This is a Llama-2 7B family chat model, finetuned from the base epfl-llm/meditron-7b on the OpenAssistant dataset using SFT with QLoRA.
LoRA adapters of rank 16 were applied to all linear layers, making them trainable.
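As a rough illustration of that setup, the QLoRA configuration could look like the sketch below. The rank of 16 and the base model come from this card; the 4-bit quantization settings, lora_alpha, dropout, and the target module names are assumptions, not values taken from the actual training run.

# Hypothetical QLoRA sketch: 4-bit base weights + rank-16 LoRA on all linear layers
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NF4 quantization from the QLoRA paper
    bnb_4bit_use_double_quant=True,         # double quantization (QLoRA)
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)

base_model = AutoModelForCausalLM.from_pretrained(
    "epfl-llm/meditron-7b",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                     # rank 16, as stated above
    lora_alpha=32,            # assumed scaling factor
    lora_dropout=0.05,        # assumed dropout
    bias="none",
    task_type="CAUSAL_LM",
    # All linear projections in a Llama-2 block (assumed module names)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # shows the small trainable fraction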
The prompt template follows the Llama-2 chat format:
'<s> [INST] <<SYS>>
You are a helpful, respectful and honest medical assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>> {question} [/INST] {model_answer} </s>'
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = 'jiviadmin/meditron-7b-guanaco-chat'

# Load the model
base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map={"": 0},
)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, add_eos_token=True)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token_id = 18610  # id assigned to the [PAD] token in this checkpoint
tokenizer.padding_side = "right"
default_system_prompt = """You are a helpful, respectful and honest medical assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Please consider the context below if applicable:
Context: NA"""
# Initialize the Hugging Face pipeline
def format_prompt(question):
    return f'''<s> [INST] <<SYS>> {default_system_prompt} <</SYS>> {question} [/INST]'''

question = 'My father has a big white colour patch inside of his right cheek. Please suggest a reason.'

pipe = pipeline(task="text-generation", model=base_model, tokenizer=tokenizer,
                max_length=512, repetition_penalty=1.1, return_full_text=False)
result = pipe(format_prompt(question))
answer = result[0]['generated_text']
print(answer)