Dataset columns (min–max statistics as reported by the dataset viewer):

| column | type | statistics |
| --- | --- | --- |
| `content` | large_string | lengths 3 – 20.5k |
| `url` | large_string | lengths 54 – 193 |
| `branch` | large_string | 4 distinct values |
| `source` | large_string | 42 distinct values |
| `embeddings` | list | length 384 |
| `score` | float64 | min -0.21, max 0.65 |
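The column statistics above can be turned into a lightweight row validator; a minimal sketch in plain Python, assuming each row arrives as a dict keyed by the column names (the sample row reuses values from the first record below and is illustrative only):

```python
# Minimal validator for rows matching the schema above.
# Field names come from the dataset's columns; the bounds come from the
# viewer's reported statistics. The sample row is illustrative.

def validate_row(row: dict) -> None:
    assert isinstance(row["content"], str) and 3 <= len(row["content"]) <= 20_500
    assert isinstance(row["url"], str) and 54 <= len(row["url"]) <= 193
    assert isinstance(row["branch"], str)
    assert isinstance(row["source"], str)
    # every embedding vector in this dataset has exactly 384 dimensions
    assert isinstance(row["embeddings"], list) and len(row["embeddings"]) == 384
    assert isinstance(row["score"], float) and -0.21 <= row["score"] <= 0.65

sample = {
    "content": "# Banners Several banners appear on the Documentation site...",
    "url": "https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/banners.md",
    "branch": "main",
    "source": "gitlab",
    "embeddings": [0.0] * 384,  # placeholder vector, not real data
    "score": 0.031163,
}
validate_row(sample)  # raises AssertionError on any schema violation
```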
content: # Banners Several banners appear on the Documentation site under different conditions and with different requirements. All of the banners are rendered by the [`doc_banners` component](../themes/gitlab-docs/src/components/docs_banner.vue). ## Archive banner The archive banner appears only on interior pages of archived...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/banners.md
branch: main
source: gitlab
embeddings:
[ -0.05437473580241203, 0.15419639647006989, 0.008031656965613365, 0.04530021548271179, 0.104204922914505, -0.05448858067393303, -0.06902426481246948, -0.04990522935986519, 0.036053020507097244, 0.01388595998287201, -0.008544659242033958, 0.011189905926585197, 0.010505360551178455, 0.0280169...
score: 0.031163

content: # GitLab docs site development Before starting development, follow the [Setup guide](setup.md) to clone the project and install dependencies. ## Build process The Docs website uses [Hugo](https://gohugo.io/) to transform Markdown files to HTML webpages. Additional build steps clone the source content, build complex con...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/development.md
branch: main
source: gitlab
embeddings:
[ -0.06007843464612961, 0.030907467007637024, -0.0007603316335007548, 0.04237184301018715, -0.0013351362431421876, -0.0887482613325119, -0.10488460212945938, 0.015857785940170288, -0.006236447487026453, 0.041027817875146866, -0.023207342252135277, 0.0401114895939827, 0.08930865675210953, -0....
score: -0.010866

content: used throughout the application. #### Adding a New Command To add a new command, follow these steps: 1. **Determine the Command Type**: - Build-related: Add it to the `build` group. - Maintenance task: Add it to the `task` group. 1. **Create the Implementation File**: - Place implementation code in `cmd/interna...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/development.md
branch: main
source: gitlab
embeddings:
[ -0.07792786508798599, -0.05353720113635063, -0.08512182533740997, 0.03026106394827366, -0.0416284054517746, -0.05048265680670738, -0.008480886928737164, 0.07353032380342484, -0.10009799897670746, 0.08522675186395645, 0.04752939194440842, -0.09328784793615341, 0.04505554214119911, 0.0325111...
score: 0.118955

content: `@$(DATE_CMD)` before `@printf` to prefix debug output with date and time information. ### Add to shell scripts How to add debug output to shell scripts depends on whether `gum` is available where the shell script is run. To add debug output to shell scripts where `gum` is available: 1. Source `scripts-common.sh`: ```...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/development.md
branch: main
source: gitlab
embeddings:
[ 0.0013075722381472588, 0.012778880074620247, 0.004587458446621895, 0.03288138657808304, 0.014728167094290257, -0.010097309947013855, 0.02309734933078289, 0.06302256137132645, 0.008305110037326813, 0.02902720309793949, 0.009429163299500942, -0.05902580916881561, 0.03668133541941643, 0.01380...
score: -0.010025

content: set `Pagefind` as the search backend, include `pagefind` anywhere in the branch name. For example, `242-some-new-pagefind-feature`. ## MacOS Docker considerations Due to licensing restrictions, consider an alternative to Docker Desktop. There are several suggestions in [the handbook](https://handbook.gitlab.com/handboo...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/development.md
branch: main
source: gitlab
embeddings:
[ 0.045411087572574615, 0.019067468121647835, 0.02063116803765297, -0.041340697556734085, -0.0838427022099495, -0.0727161392569542, -0.12827034294605255, 0.04708895832300186, -0.06536372005939484, 0.0615568645298481, -0.015626301988959312, -0.01930197700858116, 0.03773296996951103, 0.0094268...
score: 0.005595

content: # Docs site infrastructure Use this guide to determine what to do in the event of infrastructure problems with the GitLab Docs website. Infrastructure issues will likely require enlisting help outside of the Technical Writing team. ## What is an infrastructure issue? The term "infrastructure" refers to the services tha...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/infrastructure.md
branch: main
source: gitlab
embeddings:
[ -0.039081327617168427, -0.023850709199905396, 0.06653708964586258, 0.009023196063935757, -0.010933110490441322, -0.15442602336406708, -0.09800832718610764, -0.047295838594436646, 0.01328361313790083, 0.024147531017661095, -0.05391610041260719, 0.03839860484004021, 0.08608353137969971, 0.05...
score: 0.025239

content: in the `#gitlab-pages` channel if errors look related to the GitLab Pages service. 1. If the problem persists, or if you are unable to get a response in `#gitlab-pages`, [declare an incident](https://about.gitlab.com/handbook/engineering/infrastructure/incident-management/#reporting-an-incident) to engage an on-call si...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/infrastructure.md
branch: main
source: gitlab
embeddings:
[ -0.07907168567180634, -0.009500359185039997, 0.008216727524995804, 0.04100969433784485, -0.05057169124484062, -0.12765508890151978, 0.006637074053287506, -0.04736781865358353, -0.004379590507596731, 0.002814712468534708, 0.01605156995356083, 0.016124144196510315, 0.1346241682767868, 0.0422...
score: 0.002369

content: # Feedback forms The Docs "Was this page helpful?" feedback form is a Vue.js component that collects user feedback and stores it in a [Cloud Firestore](https://firebase.google.com/docs/firestore) database. In the event of a problem, see [Turn off the feedback form](#turn-off-the-feedback-form) below. ## Data flow 1. Us...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/feedback.md
branch: main
source: gitlab
embeddings:
[ -0.1003609448671341, 0.031538303941488266, 0.015414608642458916, 0.027056626975536346, 0.0036293666344136, 0.017392486333847046, -0.034846194088459015, -0.015653321519494057, 0.020075244829058647, 0.01108736451715231, -0.0753583088517189, -0.013398156501352787, 0.032528772950172424, -0.012...
score: 0.002787

content: Request](https://gitlab.com/gitlab-com/gl-security/corp/issue-tracker/-/issues/new?issuable_template=gcp_services_project_iam_update). ### Accessing the console 1. Visit [Firebase Console](https://console.firebase.google.com/). 2. Select the `ux-tech-writing-web-f169cc0e` project. Useful console links: - [`feedbac...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/feedback.md
branch: main
source: gitlab
embeddings:
[ -0.03153029829263687, -0.0017312044510617852, -0.02495049498975277, -0.01196037046611309, 0.0028049517422914505, -0.0026673132088035345, 0.020982787013053894, -0.02113834209740162, -0.059598349034786224, 0.06093328446149826, -0.010532385669648647, -0.00027441949350759387, 0.07591047883033752...
score: 0.006679

content: # GitLab docs site maintenance Some of the issues that the GitLab technical writing team handles to maintain `https://docs.gitlab.com` include: - The deployment process. - Temporary event or survey banners. ## Deployment process We use [GitLab Pages](https://docs.gitlab.com/user/project/pages/) to build and host this w...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/maintenance.md
branch: main
source: gitlab
embeddings:
[ -0.10453715175390244, 0.058677997440099716, 0.027581093832850456, 0.06928420811891556, -0.006633734796196222, -0.18259236216545105, -0.042438965290784836, -0.0653332769870758, 0.02461332269012928, 0.060208212584257126, -0.02426900900900364, 0.019739121198654175, 0.09124265611171722, -0.001...
score: 0.046176

content: used only for review apps or project maintenance. Prerequisites: - You must have at least the Maintainer role in the `docs-gitlab-com` project. To regenerate `DOCS_TRIGGER_TOKEN`: 1. Go to [**Settings > CI/CD > Pipelines trigger tokens**](https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/-/settings/...
url: https://gitlab.com/gitlab-org/technical-writing/docs-gitlab-com/blob/main//doc/maintenance.md
branch: main
source: gitlab
embeddings:
[ -0.12441428005695343, -0.01651686616241932, 0.012153674848377705, 0.028780797496438026, -0.05124513432383537, -0.06157542020082474, 0.0163701344281435, 0.0038879967760294676, 0.033256709575653076, 0.02663903310894966, -0.020900817587971687, -0.006838524714112282, 0.10313864797353745, -0.04...
score: 0.09073

content: # `smolagents` ## What is smolagents? `smolagents` is an open-source Python library designed to make it extremely easy to build and run agents using just a few lines of code. Key features of `smolagents` include: ✨ **Simplicity**: The logic for agents fits in ~thousand lines of code. We kept abstractions to their mi...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/index.md
branch: main
source: smolagents
embeddings:
[ -0.11115244030952454, -0.013342616148293018, -0.0848650187253952, 0.01982821524143219, 0.028873665258288383, -0.09349118918180466, -0.025518501177430153, 0.006164530757814646, -0.031044811010360718, -0.01288160216063261, 0.021802039816975594, -0.007756686769425869, 0.011079719290137291, -0...
score: 0.24715

content: [How-to guides Practical guides to help you achieve a specific goal: create an agent to generate and test SQL queries!](./examples/text_to_sql) [Conceptual guides High-level explanations for building a better understanding of important topics.](./conceptual_guides/intro_agents) [Tutorials Horizontal tutorials that cove...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/index.md
branch: main
source: smolagents
embeddings:
[ 0.029685845598578453, -0.059748146682977676, -0.07862924784421921, 0.02834145724773407, -0.05581938102841377, -0.012972977943718433, 0.03868991136550903, 0.05706408992409706, -0.14843595027923584, -0.010216467082500458, 0.0030287106055766344, 0.0033957301639020443, 0.07382666319608688, -0....
score: 0.079886

content: # Agents - Guided tour [[open-in-colab]] In this guided visit, you will learn how to build an agent, how to run it, and how to customize it to make it work better for your use-case. ## Choosing an agent type: CodeAgent or ToolCallingAgent `smolagents` comes with two agent classes: [`CodeAgent`] and [`ToolCallingAgent`]...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/guided_tour.md
branch: main
source: smolagents
embeddings:
[ -0.06457087397575378, -0.03602973744273186, -0.10179640352725983, 0.023336810991168022, -0.028067167848348618, -0.06124042719602585, -0.03221380338072777, 0.04860060289502144, -0.012751955538988113, -0.03391571342945099, 0.02053339220583439, -0.027840077877044678, 0.045041535049676895, -0....
score: 0.137103

content: the page at url 'https://huggingface.co/blog'?") ``` Additionally, as an extra security layer, access to submodule is forbidden by default, unless explicitly authorized within the import list. For instance, to access the `numpy.random` submodule, you need to add `'numpy.random'` to the `additional_authorized_imports`...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/guided_tour.md
branch: main
source: smolagents
embeddings:
[ -0.0928749218583107, -0.000045922392018837854, -0.023388762027025223, -0.004449663683772087, 0.0753994807600975, -0.09964126348495483, -0.018661409616470337, -0.014277905225753784, -0.058183394372463226, -0.003232028800994158, 0.02642926760017872, -0.04914221912622452, 0.10203718394041061, ...
score: 0.058107

content: Bedrock](https://aws.amazon.com/bedrock/?nc1=h_ls), or [mlx-lm](https://pypi.org/project/mlx-lm/). All model classes support passing additional keyword arguments (like `temperature`, `max_tokens`, `top_p`, etc.) directly at instantiation time. These parameters are automatically forwarded to the underlying model's co...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/guided_tour.md
branch: main
source: smolagents
embeddings:
[ -0.052309393882751465, -0.07658597826957703, -0.06572975218296051, 0.03162512928247452, 0.0773143321275711, -0.007276778109371662, -0.009474343620240688, 0.05124298855662346, 0.006122216582298279, 0.037069909274578094, 0.04776586964726448, -0.10655380040407181, 0.04234285652637482, -0.0302...
score: 0.019327

content: integration with Amazon Bedrock, allowing for direct API calls and comprehensive configuration. Basic Usage: ```python # !pip install 'smolagents[bedrock]' from smolagents import CodeAgent, AmazonBedrockModel model = AmazonBedrockModel(model_id="anthropic.claude-3-sonnet-20240229-v1:0") agent = CodeAgent(tools=[], mod...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/guided_tour.md
branch: main
source: smolagents
embeddings:
[ -0.04102335497736931, -0.01980258710682392, -0.11519011110067368, -0.006778818089514971, 0.03925574570894241, -0.08588913828134537, -0.07057648152112961, 0.035211142152547836, -0.05579311400651932, 0.03504471853375435, 0.01628255471587181, -0.08386118710041046, -0.04212242364883423, -0.017...
score: 0.109651

content: list of chat messages for the Model to view. This method goes over each step of the log and only stores what it's interested in as a message: for instance, it will save the system prompt and task in separate messages, then for each step it will store the LLM output as a message, and the tool call output as another mess...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/guided_tour.md
branch: main
source: smolagents
embeddings:
[ -0.0019209090387448668, -0.025922738015651703, -0.022710075601935387, 0.002307393355295062, 0.03427854925394058, -0.04353193938732147, -0.0032703883480280638, 0.035805683583021164, 0.06052515655755997, -0.006228393875062466, -0.04801938310265541, -0.035302143543958664, 0.033666327595710754, ...
score: 0.112917

content: next(iter(list_models(filter=task, sort="downloads", direction=-1))) return most_downloaded_model.id ``` The function needs: - A clear name. The name should be descriptive enough of what this tool does to help the LLM brain powering the agent. Since this tool returns the model with the most downloads for a task, let...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/guided_tour.md
branch: main
source: smolagents
embeddings:
[ -0.038876235485076904, 0.02225237898528576, -0.018559446558356285, 0.025861790403723717, 0.03385399654507637, 0.018803032115101814, 0.113771952688694, 0.03355644270777702, 0.015333039686083794, 0.0055512008257210255, -0.027043486014008522, -0.009664228186011314, 0.007424617651849985, -0.01...
score: 0.115739

content: systems have been introduced with Microsoft's framework [Autogen](https://huggingface.co/papers/2308.08155). In this type of framework, you have several agents working together to solve your task instead of only one. It empirically yields better performance on most benchmarks. The reason for this better performance is ...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/guided_tour.md
branch: main
source: smolagents
embeddings:
[ -0.036366287618875504, -0.044590264558792114, -0.09276658296585083, 0.04338311403989792, -0.00794258527457714, -0.05750396102666855, -0.005168989300727844, 0.02438916079699993, -0.020830076187849045, -0.007890899665653706, -0.030472366139292717, -0.052588433027267456, 0.09493542462587357, ...
score: 0.12216

content: # Installation Options The `smolagents` library can be installed using pip. Here are the different installation methods and options available. ## Prerequisites - Python 3.10 or newer - Python package manager: [`pip`](https://pip.pypa.io/en/stable/) or [`uv`](https://docs.astral.sh/uv/) ## Virtual Environment It's stron...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/installation.md
branch: main
source: smolagents
embeddings:
[ -0.08130574971437454, -0.02151614800095558, -0.018156813457608223, -0.013924467377364635, 0.06732235103845596, -0.04535403847694397, -0.04754167050123215, 0.07268725335597992, -0.07317902147769928, -0.01850985176861286, 0.022416435182094574, -0.04423154890537262, -0.03709752857685089, 0.10...
score: 0.002051

content: ``` - **e2b**: Enable E2B support for remote execution. ```bash uv pip install "smolagents[e2b]" ``` - **docker**: Add support for executing code in Docker containers. ```bash uv pip install "smolagents[docker]" ``` ### Telemetry and User Interface Extras for telemetry, monitoring and user interface components:...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/installation.md
branch: main
source: smolagents
embeddings:
[ -0.009817507117986679, 0.04086390137672424, -0.04851411655545235, -0.03786204755306244, 0.03512527048587799, -0.09593968838453293, 0.014375654049217701, 0.06381354480981827, -0.03919088840484619, 0.026226941496133804, 0.00870451144874096, -0.12641602754592896, 0.0015171635895967484, 0.0322...
score: 0.089222

content: # How do multi-step agents work? The ReAct framework ([Yao et al., 2022](https://huggingface.co/papers/2210.03629)) is currently the main approach to building agents. The name is based on the concatenation of two words, "Reason" and "Act." Indeed, agents following this architecture will solve their task in as many step...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/conceptual_guides/react.md
branch: main
source: smolagents
embeddings:
[ -0.07006281614303589, -0.029462089762091637, -0.04472695663571358, -0.012093859724700451, 0.02664102055132389, -0.015854282304644585, 0.006144202314317226, 0.07481773942708969, 0.04382363334298134, 0.009963920339941978, -0.02783670835196972, -0.016882888972759247, 0.007656940259039402, 0.0...
score: 0.194389

content: # What are agents? 🤔 ## An introduction to agentic systems. Any efficient system using AI will need to provide LLMs some kind of access to the real world: for instance the possibility to call a search tool to get external information, or to act on certain programs in order to solve a task. In other words, LLMs should ...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/conceptual_guides/intro_agents.md
branch: main
source: smolagents
embeddings:
[ -0.022466687485575676, -0.07748466730117798, -0.08787563443183899, -0.028262421488761902, -0.0004181306285317987, -0.025743860751390457, 0.021722646430134773, -0.015688439831137657, 0.0649530440568924, -0.0030690014827996492, -0.03972354158759117, -0.001189965521916747, 0.09334886819124222, ...
score: 0.211419

content: 100% reliable system with no risk of error introduced by letting unpredictable LLMs meddle in your workflow. For the sake of simplicity and robustness, it's advised to regularize towards not using any agentic behaviour. But what if the workflow can't be determined that well in advance? For instance, a user wants to ask...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/conceptual_guides/intro_agents.md
branch: main
source: smolagents
embeddings:
[ -0.04236620292067528, -0.0025700817350298166, 0.03399894759058952, -0.001123200636357069, -0.004826146177947521, -0.03104476071894169, 0.003949521109461784, 0.0006772213964723051, 0.02092875726521015, 0.0022201447281986475, -0.06421780586242676, -0.0024955752305686474, 0.05154658108949661, ...
score: 0.060505

content: calls to external tools. A common format (used by Anthropic, OpenAI, and many others) for writing these actions is generally different shades of "writing actions as a JSON of tools names and arguments to use, which you then parse to know which tool to execute and with which arguments". [Multiple](https://huggingface.co...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/conceptual_guides/intro_agents.md
branch: main
source: smolagents
embeddings:
[ -0.09028122574090958, -0.013345403596758842, -0.025304142385721207, -0.01586177945137024, -0.004990681074559689, -0.05963463708758354, -0.0334533266723156, 0.04993290826678276, 0.07832779735326767, 0.03209087252616882, -0.007347537204623222, 0.020744482055306435, 0.052841756492853165, -0.0...
score: 0.134226

content: # Agentic RAG [[open-in-colab]] ## Introduction to Retrieval-Augmented Generation (RAG) Retrieval-Augmented Generation (RAG) combines the power of large language models with external knowledge retrieval to produce more accurate, factual, and contextually relevant responses. At its core, RAG is about "using an LLM to an...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/rag.md
branch: main
source: smolagents
embeddings:
[ -0.042313333600759506, -0.00929667055606842, -0.004192085936665535, 0.01567157916724682, -0.023566745221614838, 0.06295429915189743, 0.0139149259775877, 0.03267231211066246, -0.013084014877676964, -0.00999438762664795, -0.017302626743912697, 0.0027478390838950872, 0.09101946651935577, -0.0...
score: 0.156128

content: smaller chunks for better retrieval text_splitter = RecursiveCharacterTextSplitter( chunk_size=500, # Characters per chunk chunk_overlap=50, # Overlap between chunks to maintain context add_start_index=True, strip_whitespace=True, separators=["\n\n", "\n", ".", " ", ""], # Priority order for splitting ) docs_pro...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/rag.md
branch: main
source: smolagents
embeddings:
[ -0.07796941697597504, 0.0650319904088974, 0.021548831835389137, 0.05695078894495964, -0.037093307822942734, -0.03658558800816536, 0.003728386014699936, 0.0829324796795845, -0.02751483954489231, 0.011261312291026115, 0.02156711556017399, -0.019069653004407883, 0.057130321860313416, -0.00260...
score: 0.053986

content: systems. The approach we've demonstrated: - Overcomes the limitations of single-step retrieval - Enables more natural interactions with knowledge bases - Provides a framework for continuous improvement through self-critique and query refinement As you build your own Agentic RAG systems, consider experimenting with diff...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/rag.md
branch: main
source: smolagents
embeddings:
[ -0.007880669087171555, 0.055105894804000854, -0.0633099228143692, 0.012935652397572994, -0.06471487879753113, 0.029665805399417877, 0.03977993503212929, 0.05894722044467926, 0.006875340826809406, -0.013032419607043266, -0.04547320678830147, 0.026633430272340775, 0.1252627670764923, 0.05588...
score: 0.17901

content: # Web Browser Automation with Agents 🤖🌐 [[open-in-colab]] In this notebook, we'll create an **agent-powered web browser automation system**! This system can navigate websites, interact with elements, and extract information automatically. The agent will be able to: - [x] Navigate to web pages - [x] Click on eleme...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/web_browser.md
branch: main
source: smolagents
embeddings:
[ -0.0846109390258789, -0.02354993112385273, -0.05683756247162819, 0.02463345415890217, 0.0569244809448719, -0.03573125973343849, 0.025068499147892, 0.01739632897078991, -0.07136499136686325, -0.11538901180028915, 0.026245566084980965, -0.05123418942093849, 0.0699613019824028, -0.01879388839...
score: 0.095391

content: click clickable elements by inputting the text that appears on them. Code: ```py click("Top products") ``` If it's a link: Code: ```py click(Link("Top products")) ``` If you try to interact with an element and it's not found, you'll get a LookupError. In general stop your action after each button click to see what happ...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/web_browser.md
branch: main
source: smolagents
embeddings:
[ -0.09632996469736099, 0.013037036173045635, -0.04064822569489479, -0.023676598444581032, 0.09173426777124405, -0.03359225392341614, 0.07432683557271957, 0.05354663357138634, -0.021107813343405724, -0.036731887608766556, 0.02128199115395546, 0.06313470005989075, 0.10381249338388443, -0.0347...
score: -0.003906

content: # Orchestrate a multi-agent system 🤖🤝🤖 [[open-in-colab]] In this notebook we will make a **multi-agent web browser: an agentic system with several agents collaborating to solve problems using the web!** It will be a simple hierarchy: ``` +----------------+ | Manager agent | +----------------+ | ________...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/multiagents.md
branch: main
source: smolagents
embeddings:
[ -0.07742960005998611, -0.0764247328042984, -0.06302643567323685, -0.023624682798981667, -0.04815743491053581, -0.013690131716430187, 0.011563747189939022, 0.00047218016698025167, -0.04280363768339157, -0.018020179122686386, 0.05016518011689186, -0.07641474902629852, 0.031450849026441574, 0...
score: 0.083306

content: work well. Also, we want to ask a question that involves the current year and does additional data calculations: so let us add `additional_authorized_imports=["time", "numpy", "pandas"]`, just in case the agent needs these packages. ```py manager_agent = CodeAgent( tools=[], model=model, managed_agents=[web_agent]...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/multiagents.md
branch: main
source: smolagents
embeddings:
[ -0.05456303432583809, 0.026242174208164215, -0.07484482228755951, 0.05268251523375511, 0.01049269549548626, -0.06968928873538971, -0.03528861701488495, 0.030956296250224113, -0.11396847665309906, 0.0317191518843174, 0.009408916346728802, -0.05296168103814125, -0.020336391404271126, -0.0175...
score: 0.038122

content: # Human-in-the-Loop: Customize Agent Plan Interactively This page demonstrates advanced usage of the smolagents library, with a special focus on **Human-in-the-Loop (HITL)** approaches for interactive plan creation, user-driven plan modification, and memory preservation in agentic workflows. The example is based on...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/plan_customization.md
branch: main
source: smolagents
embeddings:
[ -0.07260455936193466, 0.014565698802471161, -0.09430135786533356, 0.0358603410422802, -0.03637319430708885, -0.048046112060546875, 0.014631325379014015, 0.02709132432937622, -0.03466976061463356, 0.0013344334438443184, 0.002404022729024291, 0.03939007595181465, -0.024140574038028717, -0.03...
score: 0.130722

content: # Async Applications with Agents This guide demonstrates how to integrate a synchronous agent from the `smolagents` library into an asynchronous Python web application using Starlette. The example is designed to help users new to async Python and agent integration understand best practices for combining synchronous age...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/async_agent.md
branch: main
source: smolagents
embeddings:
[ -0.1300499439239502, -0.028602102771401405, -0.11960744857788086, 0.07954967021942139, -0.02021637000143528, -0.10806725919246674, -0.028673740103840828, -0.034302473068237305, -0.02446843311190605, -0.05079777538776398, -0.001594229252077639, 0.0005779087077826262, 0.005314955487847328, -...
score: 0.169374

content: # Using different models [[open-in-colab]] `smolagents` provides a flexible framework that allows you to use various language models from different providers. This guide will show you how to use different model types with your agents. ## Available model types `smolagents` supports several model types out of the box: 1....
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/using_different_models.md
branch: main
source: smolagents
embeddings:
[ -0.033734384924173355, -0.09851621091365814, -0.02446209453046322, 0.017635678872466087, 0.03331494703888893, 0.007093994412571192, -0.05694475769996643, 0.0005508470931090415, 0.024294335395097733, 0.0006369411712512374, 0.0025274569634348154, -0.08437003195285797, 0.03384529799222946, 0....
score: 0.163237

content: # Text-to-SQL [[open-in-colab]] In this tutorial, we’ll see how to implement an agent that leverages SQL using `smolagents`. > Let's start with the golden question: why not keep it simple and use a standard text-to-SQL pipeline? A standard text-to-sql pipeline is brittle, since the generated SQL query can be incorrect....
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/text_to_sql.md
branch: main
source: smolagents
embeddings:
[ -0.056377459317445755, -0.05615480989217758, -0.04588739573955536, 0.07411976903676987, -0.06097334995865822, -0.05540743097662926, 0.050800010561943054, 0.0447479784488678, -0.04578801989555359, -0.028662795200943947, -0.031142892315983772, -0.051261793822050095, 0.05001407861709595, -0.0...
score: 0.082175

content: ) agent.run("Can you give me the name of the client who got the most expensive receipt?") ``` ### Level 2: Table joins Now let’s make it more challenging! We want our agent to handle joins across multiple tables. So let’s make a second table recording the names of waiters for each receipt_id! ```py table_name = "wait...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/examples/text_to_sql.md
branch: main
source: smolagents
embeddings:
[ 0.0021256518084555864, 0.07834689319133759, -0.013552333228290081, 0.0014372658915817738, -0.03330572694540024, -0.05468859523534775, 0.06203969195485115, 0.05885981023311615, -0.05694425478577614, -0.018126215785741806, 0.0012938248692080379, -0.05192374438047409, 0.04142872989177704, -0....
score: -0.01997

content: # Secure code execution [[open-in-colab]] > [!TIP] > If you're new to building agents, make sure to first read the [intro to agents](../conceptual_guides/intro_agents) and the [guided tour of smolagents](../guided_tour). ### Code agents [Multiple](https://huggingface.co/papers/2402.01030) [research](https://huggingf...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/secure_code_execution.md
branch: main
source: smolagents
embeddings:
[ -0.14111775159835815, -0.017290521413087845, -0.07132244855165482, -0.010537064634263515, 0.005478483624756336, -0.03684039041399956, -0.04308729246258736, 0.07642041891813278, 0.05952288582921028, -0.019179360941052437, 0.02366657927632332, 0.00920263584703207, -0.0014281908515840769, 0.0...
score: 0.155019

content: To add a first layer of security, code execution in `smolagents` is not performed by the vanilla Python interpreter. We have re-built a more secure `LocalPythonExecutor` from the ground up. To be precise, this interpreter works by loading the Abstract Syntax Tree (AST) from your Code and executes it operation by operat...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/secure_code_execution.md
branch: main
source: smolagents
embeddings:
[ -0.10164466500282288, -0.005213862285017967, -0.05464542657136917, 0.028071440756320953, 0.00781176658347249, -0.09948219358921051, -0.009983787313103676, 0.042040180414915085, -0.07500984519720078, -0.026753878220915794, 0.028293181210756302, -0.040345557034015656, 0.039993174374103546, 0...
score: 0.138159

content: remote execution sandbox. ## Sandbox approaches for secure code execution When working with AI agents that execute code, security is paramount. There are two main approaches to sandboxing code execution in smolagents, each with different security properties and capabilities: ![Sandbox approaches comparison](https://hug...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/secure_code_execution.md
branch: main
source: smolagents
embeddings:
[ -0.024084094911813736, 0.014449338428676128, -0.054278042167425156, 0.0011598634300753474, 0.02890026941895485, -0.032753847539424896, -0.07580771297216415, 0.038622356951236725, -0.02413489855825901, 0.004793889820575714, 0.0051457686349749565, -0.09400825947523117, 0.042623743414878845, ...
score: 0.138461

content: be returned. This is illustrated in the figure below. However, since any call to a [managed agent](../examples/multiagents) would require model calls, since we do not transfer secrets to the remote sandbox, the model call would lack credentials. Hence this solution does not work (yet) with more complicated multi-agent s...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/secure_code_execution.md
branch: main
source: smolagents
embeddings:
[ 0.026584357023239136, -0.038001079112291336, -0.02844257280230522, -0.00852294359356165, -0.001037304988130927, -0.02982161194086075, -0.04230232909321785, 0.0159438643604517, -0.03765169158577919, -0.009347504936158657, 0.006704920437186956, -0.13810233771800995, 0.07973691821098328, -0.0...
score: 0.051853

content: ```python import docker import os from typing import Optional class DockerSandbox: def __init__(self): self.client = docker.from_env() self.container = None def create_container(self): try: image, build_logs = self.client.images.build( path=".", tag="agent-sandbox", rm=True, forcerm=True, buildargs={}, # decode=...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/secure_code_execution.md
branch: main
source: smolagents
embeddings:
[ 0.07137007266283035, 0.07732393592596054, 0.01769072934985161, -0.0008619582513347268, 0.07498938590288162, -0.09535438567399979, 0.004301265813410282, 0.022651294246315956, -0.0020215578842908144, 0.027555523440241814, -0.013546631671488285, -0.07813674956560135, 0.061941467225551605, 0.0...
score: 0.009425

content: keys to the sandbox - Potentially higher latency due to more complex operations Choose the approach that best balances your security needs with your application's requirements. For most applications with simpler agent architectures, Approach 1 provides a good balance of security and ease of use. For more complex multi-...
url: https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/secure_code_execution.md
branch: main
source: smolagents
embeddings:
[ 0.04322955384850502, 0.0013628324959427118, -0.03742373734712601, -0.07539001852273941, 0.06220520660281181, 0.009847703389823437, -0.009436994791030884, 0.017863549292087555, 0.08241190761327744, -0.006341245491057634, 0.044193632900714874, 0.015556233003735542, 0.03381974995136261, -0.06...
0.176246
# Inspecting runs with OpenTelemetry [[open-in-colab]] > [!TIP] > If you're new to building agents, make sure to first read the [intro to agents](../conceptual\_guides/intro\_agents) and the [guided tour of smolagents](../guided\_tour). ## Why log your agent runs? Agent runs are complicated to debug. Validating that a ...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/inspect_runs.md
main
smolagents
[ -0.013094433583319187, -0.027953535318374634, -0.05864390730857849, 0.012373953126370907, 0.04501063749194145, -0.08967006951570511, -0.020770063623785973, 0.015072476118803024, -0.023614197969436646, -0.009808565489947796, -0.04016052559018135, -0.049684789031744, -0.018507516011595726, 0...
0.17902
[Langfuse](https://langfuse.com) is an open-source platform for LLM engineering. It provides tracing and monitoring capabilities for AI agents, helping developers debug, analyze, and optimize their products. Langfuse integrates with various tools and frameworks via native integrations, OpenTelemetry, and SDKs. ### Step...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/inspect_runs.md
main
smolagents
[ -0.06712450087070465, -0.0524832047522068, 0.06517418473958969, 0.012994538061320782, 0.026768051087856293, -0.11783462762832642, 0.004771921318024397, 0.05254789814352989, 0.0005351693835109472, -0.0025120521895587444, -0.01286396849900484, -0.09882491081953049, -0.010012741200625896, 0.0...
0.171034
# 📚 Manage your agent's memory [[open-in-colab]] In the end, an agent can be defined by simple components: it has tools, prompts. And most importantly, it has a memory of past steps, drawing a history of planning, execution, and errors. ### Replay your agent's memory We propose several features to inspect a past agent...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/memory.md
main
smolagents
[ -0.00483829528093338, -0.01758713833987713, -0.13242147862911224, 0.0449676439166069, -0.02051156386733055, 0.02993961051106453, -0.02134852297604084, 0.0692213848233223, 0.015385745093226433, 0.0031863958574831486, -0.01564604602754116, -0.015348805114626884, 0.018156500533223152, -0.0071...
0.149097
step\_number += 1 # Change the memory as you please! # For instance to update the latest step: # agent.memory.steps[-1] = ... print("The final answer is:", final\_answer) ```
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/memory.md
main
smolagents
[ -0.004666041117161512, 0.08548647910356522, -0.05800139531493187, 0.04267004504799843, -0.06646949797868729, 0.01437873113900423, 0.011254449374973774, 0.0556957870721817, -0.03395417332649231, 0.003779643215239048, 0.017957359552383423, 0.02993207797408104, 0.03730473294854164, -0.0743084...
0.069907
# Building good agents [[open-in-colab]] There's a world of difference between building an agent that works and one that doesn't. How can we build agents that fall into the former category? In this guide, we're going to talk about best practices for building agents. > [!TIP] > If you're new to building agents, make sur...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/building_good_agents.md
main
smolagents
[ -0.015507664531469345, -0.03645189106464386, -0.022085610777139664, -0.023172477260231972, 0.006722731981426477, -0.06680380553007126, -0.006513052154332399, 0.025538338348269463, -0.011099392548203468, -0.016529444605112076, -0.026659168303012848, -0.05717490613460541, 0.07519298046827316, ...
0.130361
not being in a proper format, or date\_time not being properly formatted. - the output format is hard to understand If the tool call fails, the error trace logged in memory can help the LLM reverse engineer the tool to fix the errors. But why leave it with so much heavy lifting to do? A better way to build this tool wo...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/building_good_agents.md
main
smolagents
[ 0.02922370471060276, 0.055353887379169464, -0.004768159240484238, 0.08798135817050934, 0.019625216722488403, -0.08696309477090836, -0.01077614352107048, 0.058804549276828766, -0.05953191965818405, -0.009473850950598717, -0.009698580019176006, -0.08714894205331802, 0.04585496708750725, 0.00...
-0.020364
wouldn't have made that mistake. ### 2. Provide more information or specific instructions You can also use less powerful models, provided you guide them more effectively. Put yourself in the shoes of your model: if you were the model solving the task, would you struggle with the information available to you (from the s...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/building_good_agents.md
main
smolagents
[ 0.0059708161279559135, 0.040510185062885284, -0.006424549967050552, -0.023671718314290047, -0.02250811643898487, 0.021149897947907448, -0.015297658741474152, 0.0814291462302208, -0.07379162311553955, 0.0801609680056572, 0.006114888470619917, -0.02838006801903248, 0.06542317569255829, -0.01...
0.076544
variable `question` about the image stored in the variable `image`. The question is in French. You have been provided with these additional arguments, that you can access using the keys as variables in your python code: {'question': 'Quel est l'animal sur l'image?', 'image': 'path/to/image.jpg'}" Thought: I will use th...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/building_good_agents.md
main
smolagents
[ -0.039794713258743286, 0.08919926732778549, -0.025747938081622124, 0.05212435498833656, -0.007858464494347572, -0.04338071867823601, 0.0875353142619133, 0.10363901406526566, 0.028592463582754135, -0.04148029908537865, 0.026530897244811058, -0.06195541098713875, 0.053570911288261414, 0.0869...
0.0536
%} {{ tool.to\_code\_prompt() }} {% endfor %} {{code\_block\_closing\_tag}} {%- if managed\_agents and managed\_agents.values() | list %} You can also give tasks to team members. Calling a team member works similarly to calling a tool: provide the task description as the 'task' argument. Since this team member is a rea...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/building_good_agents.md
main
smolagents
[ -0.007527414709329605, 0.061331991106271744, -0.093434639275074, 0.024409597739577293, 0.015963884070515633, -0.07182645052671432, 0.06421912461519241, 0.0306317750364542, -0.046743154525756836, 0.009885544888675213, -0.030146917328238487, -0.062080658972263336, 0.029642099514603615, 0.067...
0.077248
following placeholders: - To insert tool descriptions: ``` {%- for tool in tools.values() %} - {{ tool.to\_tool\_calling\_prompt() }} {%- endfor %} ``` - To insert the descriptions for managed agents if there are any: ``` {%- if managed\_agents and managed\_agents.values() | list %} You can also give tasks to team memb...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/building_good_agents.md
main
smolagents
[ 0.007540109567344189, 0.027825884521007538, -0.08902525156736374, 0.030625073239207268, -0.03526214882731438, -0.014952176250517368, 0.0696880891919136, 0.04899884760379791, -0.05451849848031998, 0.016721626743674278, -0.029146581888198853, -0.1090308204293251, 0.07273847609758377, 0.09098...
0.069259
# Tools [[open-in-colab]] Here, we're going to see advanced tool usage. > [!TIP] > If you're new to building agents, make sure to first read the [intro to agents](../conceptual\_guides/intro\_agents) and the [guided tour of smolagents](../guided\_tour). ### What is a tool, and how to build one? A tool is mostly a funct...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/tools.md
main
smolagents
[ -0.046613916754722595, -0.05502194166183472, -0.11095479130744934, 0.002596429781988263, -0.013506138697266579, -0.022626059129834175, -0.02014932595193386, 0.059480857104063034, -0.012917093001306057, -0.028466153889894485, -0.003220901358872652, -0.055179793387651443, 0.05042147636413574, ...
0.204599
initialization are hard to track, which prevents from sharing them properly to the hub. And anyway, the idea of making a specific class is that you can already set class attributes for anything you need to hard-code (just set `your\_variable=(...)` directly under the `class YourTool(Tool):` line). And of course you can...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/tools.md
main
smolagents
[ -0.046131834387779236, -0.032191965728998184, -0.15131476521492004, 0.0160734411329031, -0.0022551952861249447, 0.0175190269947052, -0.02750268578529358, 0.05163877084851265, -0.08232621103525162, -0.03954015299677849, 0.08446896821260452, -0.024857161566615105, 0.04512474685907364, -0.019...
0.013245
(2025-06-18+)](https://modelcontextprotocol.io/specification/2025-06-18/server/tools#structured-content) include support for `outputSchema`, which enables tools to return structured data with defined schemas. `smolagents` takes advantage of these structured output capabilities, allowing agents to work with tools that r...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/tools.md
main
smolagents
[ -0.004486857447773218, -0.02010008506476879, -0.059586051851511, 0.07210531830787659, 0.020933041349053383, -0.0777084231376648, -0.07318998873233795, -0.0022049027029424906, -0.029920805245637894, -0.012595218606293201, 0.02060418203473091, -0.12622511386871338, 0.022296488285064697, -0.0...
0.126646
to the agent. ```python from smolagents import CodeAgent, InferenceClientModel model = InferenceClientModel(model\_id="Qwen/Qwen3-Next-80B-A3B-Thinking") agent = CodeAgent(tools=[image\_generation\_tool], model=model) agent.run( "Improve this prompt, then generate an image of it.", additional\_args={'user\_prompt': 'A ...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/tools.md
main
smolagents
[ -0.02527032233774662, 0.016248399391770363, 0.026995165273547173, 0.03542259708046913, 0.05073564872145653, -0.04392603039741516, 0.06266851723194122, 0.013212649151682854, -0.004786815494298935, 0.011997565627098083, 0.05362049117684364, -0.11050152778625488, 0.12435191124677658, 0.015778...
0.018411
with ToolCollection.from\_mcp(server\_parameters, trust\_remote\_code=True, structured\_output=True) as tool\_collection: agent = CodeAgent(tools=[\*tool\_collection.tools], model=model, add\_base\_tools=True) agent.run("Please find a remedy for hangover.") ``` For Streamable HTTP-based MCP servers, simply pass a dict ...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/tutorials/tools.md
main
smolagents
[ -0.057540930807590485, -0.009692653082311153, -0.06007205322384834, 0.036864444613456726, -0.012602395378053188, -0.07170064747333527, 0.0067559219896793365, 0.028062226250767708, -0.04490484669804573, -0.02751065418124199, 0.034024570137262344, -0.0603264644742012, 0.006456669885665178, -...
0.026042
# Built-in Tools Ready-to-use tool implementations provided by the `smolagents` library. These built-in tools are concrete implementations of the [`Tool`] base class, each designed for specific tasks such as web searching, Python code execution, webpage retrieval, and user interaction. You can use these tools directly ...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/reference/default_tools.md
main
smolagents
[ -0.07924032211303711, -0.02035975642502308, -0.07154247164726257, 0.03541776165366173, -0.0004810671089217067, -0.05724651366472244, -0.018976904451847076, -0.03628862649202347, -0.0450778491795063, -0.061744656413793564, 0.019152546301484108, -0.02457396499812603, 0.05352342873811722, -0....
0.136811
# Models Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. To learn more about agents and tools make sure to read the [introductory guide](../index). This page contains the API docs for the underlying c...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/reference/models.md
main
smolagents
[ -0.13325417041778564, -0.06803931295871735, -0.03633127361536026, 0.10420513153076172, 0.015816083177924156, -0.03499889746308327, -0.061478398740291595, 0.04706910625100136, 0.010290883481502533, -0.006201896816492081, -0.0030934505630284548, -0.05410033091902733, 0.004591772798448801, -0...
0.315242
model\_list=[ { "model\_name": "llama-3.3-70b", "litellm\_params": {"model": "groq/llama-3.3-70b", "api\_key": os.getenv("GROQ\_API\_KEY")}, }, { "model\_name": "llama-3.3-70b", "litellm\_params": {"model": "cerebras/llama-3.3-70b", "api\_key": os.getenv("CEREBRAS\_API\_KEY")}, }, ], client\_kwargs={ "routing\_strategy...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/reference/models.md
main
smolagents
[ -0.00709922332316637, 0.037831392139196396, -0.050674665719270706, 0.05359258875250816, -0.034507010132074356, -0.008550254628062248, 0.010147623717784882, 0.042894601821899414, 0.008922009728848934, -0.02666907198727131, 0.043821386992931366, -0.07882571965456009, 0.028801513835787773, -0...
0.041232
# Agents Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. To learn more about agents and tools make sure to read the [introductory guide](../index). This page contains the API docs for the underlying c...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/reference/agents.md
main
smolagents
[ -0.1377386450767517, -0.041351351886987686, -0.06528148055076599, 0.01538065355271101, 0.003289933083578944, -0.03786593675613403, -0.06015695631504059, 0.05162478983402252, -0.026577269658446312, -0.037377696484327316, -0.006409204564988613, -0.02789865806698799, -0.02428489178419113, 0.0...
0.202211
# Tools Smolagents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. To learn more about agents and tools make sure to read the [introductory guide](../index). This page contains the API docs for the underlying cl...
https://github.com/huggingface/smolagents/blob/main//docs/source/en/reference/tools.md
main
smolagents
[ -0.102787546813488, -0.039993561804294586, -0.053870201110839844, -0.018176738172769547, 0.0118936812505126, -0.06227419897913933, -0.040257617831230164, 0.03777477517724037, -0.026031428948044777, -0.015847116708755493, 0.05080028623342514, -0.06162641942501068, -0.023038366809487343, -0....
0.180998
# Examples Check out a variety of sample implementations of the SDK in the examples section of the [repo](https://github.com/openai/openai-agents-python/tree/main/examples). The examples are organized into several categories that demonstrate different patterns and capabilities. ## Categories - \*\*[agent\_patterns](htt...
https://github.com/openai/openai-agents-python/blob/main//docs/examples.md
main
openai-agents
[ -0.058538272976875305, -0.004658430349081755, -0.11851291358470917, -0.009693752974271774, 0.05224208906292915, -0.06628116220235825, -0.0001508327986812219, 0.019242223352193832, 0.0011542071588337421, -0.014333177357912064, 0.008731376379728317, -0.03688519448041916, 0.041015300899744034, ...
0.263291
# Guardrails Guardrails enable you to do checks and validations of user input and agent output. For example, imagine you have an agent that uses a very smart (and hence slow/expensive) model to help with customer requests. You wouldn't want malicious users to ask the model to help them with their math homework. So, you...
https://github.com/openai/openai-agents-python/blob/main//docs/guardrails.md
main
openai-agents
[ -0.07791637629270554, 0.039820559322834015, -0.02352019213140011, 0.030792146921157837, 0.01762176863849163, -0.0356772281229496, 0.046325575560331345, 0.026740211993455887, 0.010866548866033554, 0.014544087462127209, 0.04001235589385033, 0.014108396135270596, 0.07128695398569107, -0.00948...
0.154421
and can skip the call, replace the output with a message, or raise a tripwire. - Output tool guardrails run after the tool executes and can replace the output or raise a tripwire. - Tool guardrails apply only to function tools created with [`function\_tool`][agents.function\_tool]; hosted tools (`WebSearchTool`, `FileS...
https://github.com/openai/openai-agents-python/blob/main//docs/guardrails.md
main
openai-agents
[ -0.07031280547380447, 0.0664212703704834, -0.044701091945171356, 0.07125190645456314, 0.020403577014803886, -0.05900202691555023, 0.025580089539289474, 0.05014099180698395, -0.03130100294947624, -0.011747750453650951, 0.07220777869224548, -0.031934741884469986, 0.054610081017017365, -0.033...
0.080024
if "sk-" in text: return ToolGuardrailFunctionOutput.reject\_content("Output contained sensitive data.") return ToolGuardrailFunctionOutput.allow() @function\_tool( tool\_input\_guardrails=[block\_secrets], tool\_output\_guardrails=[redact\_output], ) def classify\_text(text: str) -> str: """Classify text for internal ...
https://github.com/openai/openai-agents-python/blob/main//docs/guardrails.md
main
openai-agents
[ -0.03829574957489967, 0.06289387494325638, -0.09041367471218109, 0.027417434379458427, 0.03042047657072544, -0.03362113982439041, 0.018623167648911476, -0.03832092136144638, -0.04993496090173721, -0.07399550080299377, 0.06198325753211975, -0.026826024055480957, 0.06362209469079971, -0.0420...
0.032006
# Model context protocol (MCP) The [Model context protocol](https://modelcontextprotocol.io/introduction) (MCP) standardises how applications expose tools and context to language models. From the official documentation: > MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP l...
https://github.com/openai/openai-agents-python/blob/main//docs/mcp.md
main
openai-agents
[ -0.03819134458899498, -0.057215187698602676, -0.027913682162761688, -0.000325209490256384, 0.00037154791061766446, -0.043102942407131195, -0.02864384464919567, 0.1051875501871109, 0.012753968127071857, -0.028839951381087303, -0.0028805250767618418, -0.011353162117302418, 0.09476332366466522,...
0.211153
False, "reason": "Escalate to a human reviewer"} agent = Agent( name="Assistant", tools=[ HostedMCPTool( tool\_config={ "type": "mcp", "server\_label": "gitmcp", "server\_url": "https://gitmcp.io/openai/codex", "require\_approval": "always", }, on\_approval\_request=approve\_tool, ) ], ) ``` The callback can be synchro...
https://github.com/openai/openai-agents-python/blob/main//docs/mcp.md
main
openai-agents
[ -0.10507597774267197, -0.019171373918652534, -0.046726979315280914, 0.1201230138540268, 0.012970205396413803, -0.11831530183553696, -0.020646601915359497, -0.04401032254099846, 0.05206485092639923, 0.02982678823173046, 0.04966355115175247, -0.012206049636006355, 0.06641486287117004, -0.026...
0.116438
agents.mcp import MCPServerStdio, create\_static\_tool\_filter samples\_dir = Path("/path/to/files") filesystem\_server = MCPServerStdio( params={ "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", str(samples\_dir)], }, tool\_filter=create\_static\_tool\_filter(allowed\_tool\_names=["read\_fil...
https://github.com/openai/openai-agents-python/blob/main//docs/mcp.md
main
openai-agents
[ -0.05776097998023033, -0.0021101145539432764, -0.06232953816652298, 0.03052877075970173, 0.022006915882229805, -0.07150256633758545, 0.03220723941922188, 0.010148109868168831, 0.021048273891210556, 0.01694389432668686, 0.05046581104397774, -0.009426500648260117, 0.027100902050733566, -0.01...
0.08154
# Context management Context is an overloaded term. There are two main classes of context you might care about: 1. Context available locally to your code: this is data and dependencies you might need when tool functions run, during callbacks like `on\_handoff`, in lifecycle hooks, etc. 2. Context available to LLMs: thi...
https://github.com/openai/openai-agents-python/blob/main//docs/context.md
main
openai-agents
[ -0.03407768905162811, -0.025707358494400978, -0.05234749987721443, 0.02773733250796795, 0.00962881464511156, -0.10704439878463745, 0.07317563891410828, 0.05123046040534973, -0.03451467677950859, -0.03437528759241104, 0.009045534767210484, -0.051967862993478775, 0.09275062382221222, -0.0056...
0.065264
agent that can tell the weather of a given city.", tools=[get\_weather], ) ``` `ToolContext` provides the same `.context` property as `RunContextWrapper`, plus additional fields specific to the current tool call: - `tool\_name` – the name of the tool being invoked - `tool\_call\_id` – a unique identifier for this tool ...
https://github.com/openai/openai-agents-python/blob/main//docs/context.md
main
openai-agents
[ 0.001008477876894176, -0.0038749852683395147, -0.09056203812360764, 0.04106615483760834, -0.013140114955604076, -0.0830889642238617, 0.06983882188796997, 0.012275351211428642, -0.036873068660497665, -0.015478222630918026, -0.00952692236751318, -0.1308741271495819, 0.0485355444252491, -0.01...
0.080017
# Usage The Agents SDK automatically tracks token usage for every run. You can access it from the run context and use it to monitor costs, enforce limits, or record analytics. ## What is tracked - \*\*requests\*\*: number of LLM API calls made - \*\*input\_tokens\*\*: total input tokens sent - \*\*output\_tokens\*\*: t...
https://github.com/openai/openai-agents-python/blob/main//docs/usage.md
main
openai-agents
[ 0.006352993194013834, 0.008699042722582817, -0.09363964945077896, 0.06534525007009506, 0.016937164589762688, -0.07336359471082687, 0.0675513744354248, 0.05519678443670273, 0.04504784941673279, 0.02216021530330181, -0.05565856024622917, -0.11537636071443558, 0.025125376880168915, -0.0298680...
0.159664
# Quickstart ## Create a project and virtual environment You'll only need to do this once. ```bash mkdir my\_project cd my\_project python -m venv .venv ``` ### Activate the virtual environment Do this every time you start a new terminal session. ```bash source .venv/bin/activate ``` ### Install the Agents SDK ```bash ...
https://github.com/openai/openai-agents-python/blob/main//docs/quickstart.md
main
openai-agents
[ 0.025532932952046394, -0.04739708825945854, -0.10334241390228271, -0.023065466433763504, 0.06939134746789932, -0.019209299236536026, 0.000625604996457696, 0.06088957190513611, 0.014776884578168392, 0.0030967933125793934, 0.039102911949157715, -0.053466156125068665, 0.059460002928972244, 0....
0.007496
except InputGuardrailTripwireTriggered as e: print("Guardrail blocked this input:", e) # Example 2: General/philosophical question try: result = await Runner.run(triage\_agent, "What is the meaning of life?") print(result.final\_output) except InputGuardrailTripwireTriggered as e: print("Guardrail blocked this input:",...
https://github.com/openai/openai-agents-python/blob/main//docs/quickstart.md
main
openai-agents
[ -0.027389535680413246, 0.02928740903735161, -0.11615680158138275, 0.0614926740527153, 0.08478543162345886, -0.023219427093863487, -0.009266741573810577, 0.008134386502206326, -0.03242514282464981, -0.047426920384168625, 0.05727897956967354, -0.064247265458107, -0.01255307998508215, 0.02519...
0.169064
# Handoffs Handoffs allow an agent to delegate tasks to another agent. This is particularly useful in scenarios where different agents specialize in distinct areas. For example, a customer support app might have agents that each specifically handle tasks like order status, refunds, FAQs, etc. Handoffs are represented a...
https://github.com/openai/openai-agents-python/blob/main//docs/handoffs.md
main
openai-agents
[ -0.0909150093793869, -0.031978242099285126, -0.03272769972681999, 0.07912160456180573, -0.03382987529039383, -0.013685553334653378, 0.04761238768696785, 0.007768602576106787, 0.03309984877705574, -0.014093424193561077, 0.0043170335702598095, 0.04982192814350128, -0.006122915539890528, 0.02...
0.088987
transcript into a single assistant summary message (see [`RunConfig.nest\_handoff\_history`][agents.run.RunConfig.nest\_handoff\_history]). The summary appears inside a `` block that keeps appending new turns when multiple handoffs happen during the same run. You can provide your own mapping function via [`RunConfig.ha...
https://github.com/openai/openai-agents-python/blob/main//docs/handoffs.md
main
openai-agents
[ -0.02700851298868656, 0.032611530274152756, 0.01554117351770401, 0.029949693009257317, -0.021156322211027145, 0.002469712169840932, -0.0047752996906638145, -0.029748238623142242, -0.022415664047002792, -0.047516338527202606, -0.013882928527891636, -0.0382952019572258, -0.03800121694803238, ...
0.035361
# Streaming Streaming lets you subscribe to updates of the agent run as it proceeds. This can be useful for showing the end-user progress updates and partial responses. To stream, you can call [`Runner.run\_streamed()`][agents.run.Runner.run\_streamed], which will give you a [`RunResultStreaming`][agents.result.RunResu...
https://github.com/openai/openai-agents-python/blob/main//docs/streaming.md
main
openai-agents
[ 0.00362035958096385, -0.030017705634236336, -0.05949615687131882, 0.029241224750876427, 0.0815419852733612, -0.010468284599483013, -0.006786659825593233, 0.007117037195712328, 0.05342768877744675, -0.005924528930336237, -0.03854849189519882, -0.04227355867624283, -0.026372287422418594, 0.0...
0.129185
# Running agents You can run agents via the [`Runner`][agents.run.Runner] class. You have 3 options: 1. [`Runner.run()`][agents.run.Runner.run], which runs async and returns a [`RunResult`][agents.result.RunResult]. 2. [`Runner.run\_sync()`][agents.run.Runner.run\_sync], which is a sync method and just runs `.run()` un...
https://github.com/openai/openai-agents-python/blob/main//docs/running_agents.md
main
openai-agents
[ -0.01114762481302023, -0.0880575031042099, -0.105443075299263, -0.010232185944914818, 0.0023954608477652073, -0.023015346378087997, -0.04709898307919502, -0.021329771727323532, 0.0034367945045232773, -0.050457023084163666, -0.025492044165730476, 0.009725448675453663, 0.015110386535525322, ...
0.102053
Optional callable that receives the normalized transcript (history + handoff items) whenever `nest\_handoff\_history` is `True`. It must return the exact list of input items to forward to the next agent, allowing you to replace the built-in summary without writing a full handoff filter. - [`tracing\_disabled`][agents.r...
https://github.com/openai/openai-agents-python/blob/main//docs/running_agents.md
main
openai-agents
[ -0.024919578805565834, 0.053798526525497437, -0.06667356938123703, 0.021130340173840523, 0.02400457113981247, -0.0010992815950885415, 0.0025703676510602236, -0.004895925987511873, -0.04119746759533882, -0.04975951090455055, -0.006204754579812288, 0.02745141088962555, -0.047489460557699203, ...
0.084188
group\_id=thread\_id): # First turn result = await Runner.run(agent, "What city is the Golden Gate Bridge in?", session=session) print(result.final\_output) # San Francisco # Second turn - agent automatically remembers previous context result = await Runner.run(agent, "What state is it in?", session=session) print(resu...
https://github.com/openai/openai-agents-python/blob/main//docs/running_agents.md
main
openai-agents
[ 0.018191678449511528, -0.011970804072916508, -0.027376960963010788, 0.08402513712644577, 0.002064689062535763, -0.017101721838116646, 0.03808450698852539, -0.07911240309476852, 0.07463795691728592, -0.05498266965150833, -0.05397454649209976, 0.002929474227130413, -0.035291023552417755, -0....
0.137852
Malformed JSON: When the model provides a malformed JSON structure for tool calls or in its direct output, especially if a specific `output\_type` is defined. - Unexpected tool-related failures: When the model fails to use tools in an expected manner - [`UserError`][agents.exceptions.UserError]: This exception is raise...
https://github.com/openai/openai-agents-python/blob/main//docs/running_agents.md
main
openai-agents
[ -0.07179921120405197, 0.01852252334356308, -0.0012952694669365883, 0.024582909420132637, 0.01528444979339838, -0.025769859552383423, 0.003044976620003581, 0.06373734772205353, -0.02387397550046444, -0.006260903552174568, 0.06762073934078217, -0.0527847521007061, 0.04811183363199234, 0.0432...
0.068581
# Configuring the SDK ## API keys and clients By default, the SDK looks for the `OPENAI\_API\_KEY` environment variable for LLM requests and tracing, as soon as it is imported. If you are unable to set that environment variable before your app starts, you can use the [set\_default\_openai\_key()][agents.set\_default\_o...
https://github.com/openai/openai-agents-python/blob/main//docs/config.md
main
openai-agents
[ 0.001875514630228281, -0.03527488932013512, -0.1023358553647995, 0.006086856126785278, -0.021517658606171608, -0.06703578680753708, 0.005561148747801781, 0.02664537914097309, 0.03747212514281273, -0.023082245141267776, 0.030588237568736076, -0.06895968317985535, 0.08530765026807785, 0.0141...
0.053964
# Release process/changelog The project follows a slightly modified version of semantic versioning using the form `0.Y.Z`. The leading `0` indicates the SDK is still evolving rapidly. Increment the components as follows: ## Minor (`Y`) versions We will increase minor versions `Y` for \*\*breaking changes\*\* to any pub...
https://github.com/openai/openai-agents-python/blob/main//docs/release.md
main
openai-agents
[ -0.07571018487215042, 0.008684791624546051, 0.07660940289497375, -0.036066748201847076, 0.05484244227409363, 0.0050805723294615746, -0.01730360835790634, -0.01133375521749258, -0.02374504879117012, 0.006989708170294762, 0.009566393680870533, 0.04002094268798828, -0.1028645783662796, -0.008...
0.097757
# Results When you call the `Runner.run` methods, you either get a: - [`RunResult`][agents.result.RunResult] if you call `run` or `run\_sync` - [`RunResultStreaming`][agents.result.RunResultStreaming] if you call `run\_streamed` Both of these inherit from [`RunResultBase`][agents.result.RunResultBase], which is where m...
https://github.com/openai/openai-agents-python/blob/main//docs/results.md
main
openai-agents
[ -0.008825414814054966, -0.02717014215886593, -0.0733824297785759, 0.041015706956386566, 0.01927044801414013, -0.007409478537738323, -0.024051973596215248, 0.006796024739742279, -0.04875035956501961, -0.013634947128593922, -0.001464420696720481, -0.005404096562415361, -0.015055806376039982, ...
0.090216
# OpenAI Agents SDK The [OpenAI Agents SDK](https://github.com/openai/openai-agents-python) enables you to build agentic AI apps in a lightweight, easy-to-use package with very few abstractions. It's a production-ready upgrade of our previous experimentation for agents, [Swarm](https://github.com/openai/swarm/tree/main...
https://github.com/openai/openai-agents-python/blob/main//docs/index.md
main
openai-agents
[ -0.024967657402157784, -0.0324515700340271, -0.06958295404911041, 0.019490079954266548, 0.07729653269052505, -0.0883282721042633, -0.019101954996585846, -0.0023673339746892452, 0.004604883026331663, 0.050354644656181335, -0.029794424772262573, -0.0725754052400589, 0.048819441348314285, 0.0...
0.265097
# Tracing The Agents SDK includes built-in tracing, collecting a comprehensive record of events during an agent run: LLM generations, tool calls, handoffs, guardrails, and even custom events that occur. Using the [Traces dashboard](https://platform.openai.com/traces), you can debug, visualize, and monitor your workflow...
https://github.com/openai/openai-agents-python/blob/main//docs/tracing.md
main
openai-agents
[ 0.021383576095104218, -0.016057509928941727, -0.07046488672494888, 0.048247698694467545, 0.0961594358086586, -0.08820948004722595, 0.011951316148042679, -0.051009658724069595, 0.067876435816288, -0.022190071642398834, -0.000956569507252425, 0.008197847753763199, 0.008139196783304214, -0.03...
0.124508
end the trace at the right time. 2. You can also manually call [`trace.start()`][agents.tracing.Trace.start] and [`trace.finish()`][agents.tracing.Trace.finish]. The current trace is tracked via a Python [`contextvar`](https://docs.python.org/3/library/contextvars.html). This means that it works with concurrency automa...
https://github.com/openai/openai-agents-python/blob/main//docs/tracing.md
main
openai-agents
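The tracing chunk above notes that the current trace is tracked via a Python `contextvar`, which is what makes it safe under concurrency. A minimal stdlib-only sketch of that pattern (illustrative stand-ins, not the SDK's internals):

```python
# Illustrative sketch (stdlib only, not the SDK's implementation):
# tracking a "current trace" with a contextvar so concurrent asyncio
# tasks each see their own value, with no cross-task leakage.
import asyncio
import contextvars

current_trace = contextvars.ContextVar("current_trace", default=None)

async def traced(name):
    current_trace.set(name)     # visible only within this task's context
    await asyncio.sleep(0.01)   # yield so the two tasks interleave
    return current_trace.get()  # still returns this task's own value

async def main():
    # gather wraps each coroutine in a task with its own context copy
    return await asyncio.gather(traced("a"), traced("b"))

results = asyncio.run(main())
print(results)  # ['a', 'b']
```

Because each asyncio task gets its own copy of the context, a `set()` inside one task never bleeds into another, which is exactly why a contextvar-based current trace "works with concurrency automatically".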
[ -0.022916346788406372, -0.06438592076301575, -0.05152437463402748, 0.022531693801283836, 0.008745689876377583, -0.04743662104010582, 0.05809244513511658, -0.08100274205207825, -0.03566012531518936, -0.005661292001605034, 0.04071899875998497, 0.02751254104077816, 0.0025610639713704586, -0.0...
0.049658
# Agents Agents are the core building block in your apps. An agent is a large language model (LLM), configured with instructions and tools. ## Basic configuration The most common properties of an agent you'll configure are: - `name`: A required string that identifies your agent. - `instructions`: also known as a develo...
https://github.com/openai/openai-agents-python/blob/main//docs/agents.md
main
openai-agents
[ 0.016293011605739594, -0.04463447257876396, -0.03531389683485031, 0.04289938136935234, -0.026910394430160522, -0.07495398819446564, -0.018219470977783203, -0.015455775894224644, -0.048977576196193695, -0.003967956639826298, 0.011103863827884197, -0.05592843517661095, 0.07881859689950943, -...
0.160658
= Agent( name="Triage agent", instructions=( "Help the user with their questions. " "If they ask about booking, hand off to the booking agent. " "If they ask about refunds, hand off to the refund agent." ), handoffs=[booking_agent, refund_agent], ) ``` ## Dynamic instructions In most cases, you can provide instructio...
https://github.com/openai/openai-agents-python/blob/main//docs/agents.md
main
openai-agents
[ -0.022334743291139603, 0.037952642887830734, -0.11620204150676727, 0.044507868587970734, -0.059137213975191116, -0.01010191161185503, 0.09688054770231247, 0.06545920670032501, -0.032372426241636276, -0.066010981798172, 0.022848905995488167, -0.060260772705078125, -0.02028757333755493, 0.01...
0.053517
tools=[get_weather, sum_numbers], tool_use_behavior=StopAtTools(stop_at_tool_names=["get_weather"]) ) ``` - `ToolsToFinalOutputFunction`: A custom function that processes tool results and decides whether to stop or continue with the LLM. ```python from agents import Agent, Runner, function_tool, FunctionToolRe...
https://github.com/openai/openai-agents-python/blob/main//docs/agents.md
main
openai-agents
[ 0.019242847338318825, 0.03402552008628845, -0.023824095726013184, 0.07367515563964844, -0.019783129915595055, -0.02804039977490902, 0.07473441958427429, -0.024215569719672203, -0.023741822689771652, 0.010860775597393513, -0.005653768312186003, -0.06877455860376358, 0.011944700963795185, -0...
0.069224
# Agent Visualization Agent visualization allows you to generate a structured graphical representation of agents and their relationships using **Graphviz**. This is useful for understanding how agents, tools, and handoffs interact within an application. ## Installation Install the optional `viz` dependency group: `...
https://github.com/openai/openai-agents-python/blob/main//docs/visualization.md
main
openai-agents
[ 0.03622441738843918, -0.0054029137827456, -0.07031360268592834, 0.029925020411610603, 0.01397186703979969, -0.07126850634813309, -0.008475353009998798, -0.03121686354279518, -0.024026161059737206, 0.006754994858056307, 0.03758075460791588, -0.04886382073163986, 0.08201058954000473, 0.01321...
0.177854
# REPL utility The SDK provides `run_demo_loop` for quick, interactive testing of an agent's behavior directly in your terminal. ```python import asyncio from agents import Agent, run_demo_loop async def main() -> None: agent = Agent(name="Assistant", instructions="You are a helpful assistant.") await run_demo_lo...
https://github.com/openai/openai-agents-python/blob/main//docs/repl.md
main
openai-agents
[ -0.07656516134738922, -0.02031262405216694, -0.10321172326803207, 0.063954658806324, -0.03208232298493385, -0.09762124717235565, -0.008446766994893551, -0.020924611017107964, 0.013241284526884556, -0.10103588551282883, 0.0022114471066743135, -0.06864325702190399, -0.03943027928471565, -0.0...
0.15201
# Tools Tools let agents take actions: things like fetching data, running code, calling external APIs, and even using a computer. The SDK supports five categories: - Hosted OpenAI tools: run alongside the model on OpenAI servers. - Local runtime tools: run in your environment (computer use, shell, apply patch). - Funct...
https://github.com/openai/openai-agents-python/blob/main//docs/tools.md
main
openai-agents
[ -0.04833878576755524, -0.029692556709051132, -0.12357085198163986, 0.08076676726341248, 0.05473006144165993, -0.09194231778383255, -0.05394745245575905, 0.013058885000646114, 0.026898663491010666, -0.009114638902246952, 0.015726132318377495, -0.03753512725234032, 0.02703317254781723, -0.02...
0.152586
read_file(ctx: RunContextWrapper[Any], path: str, directory: str | None = None) -> str: """Read the contents of a file. Args: path: The path to the file to read. directory: The directory to read the file from. """ # In real life, we'd read the file from the file system return "" agent = Agent( name="Assistant", tools=...
https://github.com/openai/openai-agents-python/blob/main//docs/tools.md
main
openai-agents
[ -0.05707325413823128, 0.059026770293712616, -0.06404786556959152, 0.06839901953935623, -0.005745455157011747, -0.0802529975771904, 0.022191934287548065, 0.10149568319320679, -0.02933400310575962, -0.03333593159914017, 0.024721525609493256, -0.028517434373497963, 0.01799316145479679, -0.004...
0.117997
supports most types, including Python primitives, Pydantic models, TypedDicts, and more. 2. We use `griffe` to parse docstrings. Supported docstring formats are `google`, `sphinx` and `numpy`. We attempt to automatically detect the docstring format, but this is best-effort and you can explicitly set it when calling `fu...
https://github.com/openai/openai-agents-python/blob/main//docs/tools.md
main
openai-agents
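The chunk above says the SDK uses `griffe` to parse docstrings (google, sphinx, numpy formats) into parameter descriptions. As a rough, stdlib-only illustration of what that extraction does for the google style (this is a toy stand-in, not griffe and not the SDK's parser):

```python
# Hedged sketch: extract `Args:` entries from a google-style docstring.
# Stdlib only; the real SDK delegates this to `griffe`.
import inspect
import re

def parse_google_args(func):
    """Return {param_name: description} from a google-style `Args:` section."""
    doc = inspect.getdoc(func) or ""
    args = {}
    in_args = False
    for line in doc.splitlines():
        if line.strip() == "Args:":
            in_args = True
            continue
        if in_args:
            m = re.match(r"\s*(\w+):\s*(.+)", line)
            if m:
                args[m.group(1)] = m.group(2)
            elif not line.strip():
                break  # blank line ends the Args section
    return args

def read_file(path: str) -> str:
    """Read a file.

    Args:
        path: The path to the file to read.
    """
    return ""

print(parse_google_args(read_file))
# {'path': 'The path to the file to read.'}
```

These extracted descriptions are what end up as per-parameter descriptions in the tool's JSON schema shown to the model.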
[ -0.011315424926578999, 0.021733958274126053, -0.029148606583476067, -0.031498271971940994, -0.03983382508158684, -0.05062134563922882, -0.020949365571141243, 0.018973026424646378, -0.011749443598091602, -0.015073585323989391, -0.003550916677340865, -0.03681840002536774, 0.05190769582986832, ...
0.120332
event is delivered in order as it arrives. - `tool_call_id` is present when the tool is invoked via a model tool call; direct calls may leave it `None`. - See `examples/agent_patterns/agents_as_tools_streaming.py` for a complete runnable sample. ### Conditional tool enabling You can conditionally enable or disabl...
https://github.com/openai/openai-agents-python/blob/main//docs/tools.md
main
openai-agents
[ 0.011343378573656082, -0.03920001536607742, -0.04030328243970871, 0.016256209462881088, 0.008607080206274986, 0.006678523030132055, 0.004861121531575918, -0.013588539324700832, 0.012309523299336433, -0.07564060389995575, 0.023275762796401978, -0.07167842239141464, -0.011803369037806988, 0....
0.091266
default (i.e. if you don't pass anything), it runs a `default_tool_error_function` which tells the LLM an error occurred. - If you pass your own error function, it runs that instead, and sends the response to the LLM. - If you explicitly pass `None`, then any tool call errors will be re-raised for you to handle. Thi...
https://github.com/openai/openai-agents-python/blob/main//docs/tools.md
main
openai-agents
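The chunk above describes three tool-error behaviors: a default error function that reports the failure to the LLM, a custom error function, or `None` to re-raise. A small pure-Python sketch of that dispatch (the `invoke_tool` / `failure_error_function` names here are stand-ins for illustration; only `default_tool_error_function` is named in the docs):

```python
# Illustrative sketch of the tool error-handling dispatch described in
# the docs. Names other than default_tool_error_function are stand-ins.
from typing import Callable, Optional

def default_tool_error_function(exc: Exception) -> str:
    # The default: tell the LLM an error occurred instead of crashing.
    return f"An error occurred while running the tool: {exc}"

def invoke_tool(
    tool: Callable[[], str],
    failure_error_function: Optional[Callable[[Exception], str]]
        = default_tool_error_function,
) -> str:
    try:
        return tool()
    except Exception as exc:
        if failure_error_function is None:
            raise  # explicitly passed None: the caller handles the error
        return failure_error_function(exc)  # message is sent to the LLM

def flaky_tool() -> str:
    raise ValueError("boom")

print(invoke_tool(flaky_tool))
# An error occurred while running the tool: boom
```

With `failure_error_function=None`, the `ValueError` propagates to the caller instead of being converted into a model-visible message.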
[ -0.01911631040275097, -0.027534054592251778, -0.03137770667672157, 0.031197408214211464, 0.0008682203479111195, -0.055217936635017395, -0.045184601098299026, 0.08123147487640381, 0.01138343010097742, -0.03818724304437637, 0.05951070785522461, -0.04898913577198982, 0.07342969626188278, -0.0...
0.080765
# Orchestrating multiple agents Orchestration refers to the flow of agents in your app. Which agents run, in what order, and how do they decide what happens next? There are two main ways to orchestrate agents: 1. Allowing the LLM to make decisions: this uses the intelligence of an LLM to plan, reason, and decide on wha...
https://github.com/openai/openai-agents-python/blob/main//docs/multi_agent.md
main
openai-agents
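The orchestration chunk above contrasts letting the LLM decide the flow with orchestrating via code. The code-driven style can be sketched as a fixed pipeline where each step's output feeds the next (pure-Python stand-ins here; a real app would call `Runner.run` on actual agents at each step):

```python
# Sketch of code-driven orchestration: agents run in a deterministic,
# code-chosen order. The step functions are stand-ins for agent runs.
from typing import Callable, List

def outline_step(topic: str) -> str:
    return f"outline({topic})"

def draft_step(outline: str) -> str:
    return f"draft({outline})"

def pipeline(topic: str, steps: List[Callable[[str], str]]) -> str:
    result = topic
    for step in steps:  # the order is fixed by code, not by an LLM
        result = step(result)
    return result

print(pipeline("agents", [outline_step, draft_step]))
# draft(outline(agents))
```

This is the trade-off the docs point at: code-driven flows are predictable and inspectable, while LLM-driven flows (handoffs) let the model plan the route itself.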
[ 0.029054097831249237, -0.060456469655036926, -0.03555309399962425, -0.030170973390340805, -0.024330461397767067, -0.029395118355751038, -0.06412415206432343, 0.025402195751667023, 0.05714605376124382, 0.010035312734544277, -0.08661586791276932, 0.018781442195177078, 0.0728534534573555, -0....
0.126927