Instructions to use lxuechen/phi-2-tool-use with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use lxuechen/phi-2-tool-use with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lxuechen/phi-2-tool-use", trust_remote_code=True)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lxuechen/phi-2-tool-use", trust_remote_code=True, dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use lxuechen/phi-2-tool-use with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "lxuechen/phi-2-tool-use"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lxuechen/phi-2-tool-use",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker

```shell
docker model run hf.co/lxuechen/phi-2-tool-use
```
- SGLang
How to use lxuechen/phi-2-tool-use with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "lxuechen/phi-2-tool-use" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lxuechen/phi-2-tool-use",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "lxuechen/phi-2-tool-use" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "lxuechen/phi-2-tool-use",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use lxuechen/phi-2-tool-use with Docker Model Runner:
```shell
docker model run hf.co/lxuechen/phi-2-tool-use
```
Model Summary
phi-2-tool-use is a fine-tuned version of Phi-2 for function calling. The model was fine-tuned on the public function-calling dataset glaiveai/glaive-function-calling-v2.
The purpose of the experiment is to understand the quality of the pre-trained Phi-2 model. phi-2-tool-use can generalize to calling simple tools/functions not seen during fine-tuning.
Decoding
Format your prompt as
"""SYSTEM: {system_content}\n\nUSER: {user_content} {eos_token} ASSISTANT:"""
where system_content is the system message containing a description of the tool/function as a JSON schema, user_content is the user message, and eos_token is the tokenizer's EOS token (<|endoftext|> for Phi-2).
The model can handle multi-turn dialogue as it was trained on such data.
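For a single-turn prompt, the format above can be assembled with a small helper; this is an illustrative sketch (the `build_prompt` name is chosen here, not part of the model's API):

```python
# Illustrative helper for the prompt format described above.
# `build_prompt` is a name chosen for this sketch, not a model API.
def build_prompt(system_content: str, user_content: str, eos_token: str = "<|endoftext|>") -> str:
    return f"SYSTEM: {system_content}\n\nUSER: {user_content} {eos_token} ASSISTANT:"

prompt = build_prompt(
    "You are a helpful assistant.",
    "Convert 100 USD to CAD",
)
print(prompt)
```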
Here's a full-fledged example:
```python
import torch
import transformers

model_name_or_path = "lxuechen/phi-2-tool-use"
model: transformers.PreTrainedModel = transformers.AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="auto",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name_or_path)

input_text = """SYSTEM: You are a helpful assistant with access to the following functions. Use them if required - { "name": "get_exchange_rate", "description": "Get the exchange rate between two currencies", "parameters": { "type": "object", "properties": { "base_currency": { "type": "string", "description": "The currency to convert from" }, "target_currency": { "type": "string", "description": "The currency to convert to" } }, "required": [ "base_currency", "target_currency" ] } }\n\nUSER: Convert 100 USD to CAD <|endoftext|> ASSISTANT:"""

# Pass input_ids and attention_mask together so generate() does not
# have to infer the mask from the pad token.
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_length=1024,
    temperature=0.7,
    top_p=0.9,
    do_sample=True,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
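Assuming the model emits function calls in the glaive-function-calling-v2 style (an assistant turn containing a `<functioncall>` marker followed by JSON, with the arguments object wrapped in single quotes), the completion can be parsed roughly as below. Verify the marker and quoting against your actual generations before relying on this:

```python
import json

def extract_function_call(completion: str):
    """Parse a '<functioncall> {...}' span from a completion.

    Assumes the glaive-style marker; returns None when absent.
    """
    marker = "<functioncall>"
    if marker not in completion:
        return None
    payload = completion.split(marker, 1)[1].strip()
    # The dataset wraps the arguments object in single quotes; stripping
    # them yields parseable JSON in the common case (crude heuristic).
    payload = payload.replace("'", "")
    return json.loads(payload)

sample = """ASSISTANT: <functioncall> {"name": "get_exchange_rate", "arguments": '{"base_currency": "USD", "target_currency": "CAD"}'}"""
call = extract_function_call(sample)
print(call["name"])
```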
Training
The model was fine-tuned with SFT on glaiveai/glaive-function-calling-v2.
Hyperparameters:
- learning rate: 3% linear warmup, with a peak of 2e-5 and cosine decay
- epochs: 2
- batch size: 64
- context length: 2048
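The learning-rate schedule above (3% linear warmup to a 2e-5 peak, then cosine decay) can be written as a function of training progress. A minimal sketch, assuming the cosine decays to zero at the final step (the card does not state the floor):

```python
import math

PEAK_LR = 2e-5
WARMUP_FRAC = 0.03  # 3% of total steps

def lr_at(step: int, total_steps: int) -> float:
    """Linear warmup for the first 3% of steps, then cosine decay.

    Decay-to-zero is an assumption; the card does not state a minimum LR.
    """
    warmup_steps = max(1, int(WARMUP_FRAC * total_steps))
    if step < warmup_steps:
        return PEAK_LR * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * PEAK_LR * (1.0 + math.cos(math.pi * progress))

total = 1000
print(lr_at(0, total), lr_at(30, total), lr_at(total, total))
```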