Merge pull request #9 from ismael-dm/remove-extra-note
units/en/unit2/tiny-agents.mdx (changed):

```diff
@@ -130,7 +130,6 @@ The canonical documentation I will link to here is [OpenAI's function calling do
 Inference engines let you pass a list of tools when calling the LLM, and the LLM is free to call zero, one or more of those tools.
 As a developer, you run the tools and feed their result back into the LLM to continue the generation.
 
-> [!NOTE]
 > Note that in the backend (at the inference engine level), the tools are simply passed to the model in a specially-formatted `chat_template`, like any other message, and then parsed out of the response (using model-specific special tokens) to expose them as tool calls.
 
 ## Implementing an MCP client on top of InferenceClient
```
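The changed passage describes the developer-side tool-calling loop: pass tool schemas to the model, execute any tool calls it emits, append the results as messages, and generate again. A minimal sketch of that loop, with the inference-engine call stubbed out (`fake_model`, `get_weather`, and the message shapes are illustrative stand-ins, not part of the course or of any specific client API):

```python
# Hypothetical tool the LLM is allowed to call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_model(messages):
    """Stand-in for an inference engine call. It emits a tool call on the
    first turn, then answers normally once a tool result is present."""
    if any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "It is sunny in Paris."}
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": [{"name": "get_weather", "arguments": {"city": "Paris"}}],
    }

def run_agent_turn(messages):
    """Call the model; if it requested tools, run them, feed the results
    back, and loop until the model returns a plain assistant message."""
    while True:
        reply = fake_model(messages)
        messages.append(reply)
        if not reply.get("tool_calls"):
            return reply["content"]
        for call in reply["tool_calls"]:
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "content": result})

answer = run_agent_turn([{"role": "user", "content": "Weather in Paris?"}])
print(answer)
```

In a real client the `fake_model` call would be a request to an inference engine, which, as the note in the diff says, formats the tool list into the model's `chat_template` and parses tool calls back out of the model's special tokens.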