1. Qwen3-14B Async Tools Model
DEMO https://youtu.be/CyHbX13AuK0
Fine-tuned from model: unsloth/Qwen3-14B-unsloth-bnb-4bit using the AsyncTool dataset
💡 Why Async Tools?
Real-world AI agents often need to:
- Call external APIs with variable latency
- Query databases that take time to respond
- Execute long-running computations
- Handle multiple tool calls in parallel
- Provide responsive user experiences without blocking
This model handles asynchronous tool execution — a critical capability for building responsive, real-world AI agents. Unlike traditional function-calling models that assume tools return results immediately, this model understands and properly handles tools that take time to execute and return their results later in the conversation.
🔄 Async Tool Call Protocol
The model implements a robust async protocol:
- Tool Call: The model makes a function/tool call
- ACK (Acknowledgment): The tool immediately returns `<tool_ack id="tN"/>` to confirm the request was received
- Processing: The tool executes asynchronously (could be API calls, database queries, external services)
- RESPONSE: The tool returns the actual result later
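The four protocol steps above can be sketched with `asyncio`. This is a minimal illustration, not the model's actual serving stack: the function names (`run_tool`, `demo`), the result queue, and the 0.1 s delay are all assumptions made for the example.

```python
import asyncio

async def run_tool(call: dict, results: asyncio.Queue) -> str:
    """Start the tool in the background and return an ACK immediately."""
    async def worker():
        await asyncio.sleep(0.1)  # stands in for a slow API call or DB query
        await results.put({"id": call["id"], "ok": True,
                           "data": {"isValid": call["arguments"]["makaniNo"].isdigit()}})
    asyncio.ensure_future(worker())          # step 3: processing continues asynchronously
    return f'<tool_ack id="{call["id"]}"/>'  # step 2: ACK is sent back right away

async def demo():
    results: asyncio.Queue = asyncio.Queue()
    # Step 1: the model emits a tool call.
    call = {"id": "t1", "name": "IsValidMakani",
            "arguments": {"makaniNo": "2871442438"}}
    ack = await run_tool(call, results)
    # The conversation can continue here without blocking.
    result = await results.get()  # step 4: the RESPONSE arrives later
    return ack, result

ack, result = asyncio.run(demo())
print(ack)     # <tool_ack id="t1"/>
print(result)  # {'id': 't1', 'ok': True, 'data': {'isValid': True}}
```

The key design point is that the ACK and the RESPONSE are two separate messages, so the assistant can keep talking to the user between them.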
📋 Example Conversation Flow
User asks a question:
"Could you verify if Makani number 2871442438 is valid?"
Assistant makes a tool call:
```json
[
  {
    "id": "t1",
    "name": "IsValidMakani",
    "arguments": { "makaniNo": "2871442438" }
  }
]
```
Tool returns ACK:
```xml
<tool_ack id="t1"/>
```
Assistant provides interim response:
"Sure—checking that Makani number now. I'll get back to you as soon as I have the result."
The user sends another message:
User: "Thanks"
Tool returns the result:
```json
{ "id": "t1", "ok": true, "data": { "isValid": true } }
```
Assistant provides final response:
"Great news! The Makani number 2871442438 is valid for the specified entrance in Dubai."
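On the client side, a loop consuming this flow has to tell ACKs apart from final RESPONSEs. A hedged sketch of one way to do that is below; `classify` and `ACK_RE` are hypothetical helpers for this example, not part of the model's or Pipecat's API.

```python
import json
import re

# ACKs arrive as a single XML-style tag; final results arrive as JSON objects.
ACK_RE = re.compile(r'<tool_ack id="([^"]+)"/>')

def classify(message: str) -> tuple:
    """Return ('ack', id) for an ACK tag, ('response', id) for a JSON result."""
    m = ACK_RE.fullmatch(message.strip())
    if m:
        return ("ack", m.group(1))
    payload = json.loads(message)
    return ("response", payload["id"])

print(classify('<tool_ack id="t1"/>'))                                 # ('ack', 't1')
print(classify('{"id": "t1", "ok": true, "data": {"isValid": true}}')) # ('response', 't1')
```

Matching on the shared `id` lets the client pair each late RESPONSE with the tool call it belongs to, even when several calls are in flight.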
2. Video
3. How we used Gemini and Pipecat
Gemini
We used Gemini Speech-to-Text and are currently fine-tuning the Gemini 2.5 Flash Lite model for the same task, to improve latency and accuracy.
Pipecat
Link to implementation in Pipecat Pull Request.
5. Tell us what you did new during the hackathon
At the hackathon we:
- improved the AsyncTool dataset with more variety to raise response quality
- fine-tuned the unsloth/Qwen3-14B-unsloth-bnb-4bit model on Google Colab to handle async tools
- prepared a draft Pull Request to Pipecat, adding support for our new model and its native behaviour
🔧 Training Details
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
6. Feedback
At the beginning we had issues running fine-tuning on Google Vertex AI (because of outdated documentation). We loved the test coverage and dev environment of Pipecat.