Tags: Text Generation · Transformers · PyTorch · English · llama · text-generation-inference · unsloth · trl · sft · conversational
How to use

With Docker:

```
docker model run hf.co/LocalAI-io/LocalAI-functioncall-phi-4-v0.2
```
Description
A model tailored to be conversational and execute function calls with LocalAI. This model is based on phi-4.
How to run
With LocalAI:
```
local-ai run LocalAI-functioncall-phi-4-v0.2
```
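Once the model is running, it can be driven through LocalAI's OpenAI-compatible chat API using the standard `tools` format. The sketch below builds a function-call request and parses a typical tool-call response; the endpoint URL, tool name, and schema are illustrative assumptions, not part of this model card.

```python
import json

# LocalAI serves an OpenAI-compatible API; the default base URL is an
# assumption here and depends on your deployment.
BASE_URL = "http://localhost:8080/v1/chat/completions"

# Example tool definition in the OpenAI "tools" format (hypothetical tool).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Request body you would POST to BASE_URL (e.g. requests.post(BASE_URL, json=payload)).
payload = {
    "model": "LocalAI-functioncall-phi-4-v0.2",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}
body = json.dumps(payload, indent=2)

# When the model chooses the tool, the response carries "tool_calls"; the
# arguments arrive as a JSON-encoded string and must be parsed.
sample_response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "tool_calls": [{
                "type": "function",
                "function": {"name": "get_weather",
                             "arguments": "{\"city\": \"Paris\"}"},
            }],
        }
    }]
}
call = sample_response["choices"][0]["message"]["tool_calls"][0]["function"]
args = json.loads(call["arguments"])
print(call["name"], args["city"])  # get_weather Paris
```

Your application would then execute the named function with the parsed arguments and send the result back as a `tool` role message for the model to summarize.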
Updates
This is the second iteration of https://huggingface.co/mudler/LocalAI-functioncall-phi-4-v0.1, with added chain-of-thought (CoT, o1-style) capabilities from the marco-o1 dataset.
How to use from vLLM

Install from pip and serve the model:

```
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LocalAI-io/LocalAI-functioncall-phi-4-v0.2"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LocalAI-io/LocalAI-functioncall-phi-4-v0.2",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```