Dataset columns:
- source: stringclasses (1 value)
- repository: stringclasses (1 value)
- file: stringlengths (17–99)
- label: stringclasses (1 value)
- content: stringlengths (11–13.3k)
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
IPython Code Executor To use the IPython code executor, you need to install the `jupyter-client` and `ipykernel` packages: ```bash pip install "pyautogen[ipython]" ``` To use the IPython code executor: ```python from autogen import UserProxyAgent proxy = UserProxyAgent(name="proxy", code_execution_config={"executo...
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
blendsearch `pyautogen<0.2` offers a cost-effective hyperparameter optimization technique [EcoOptiGen](https://arxiv.org/abs/2303.04673) for tuning Large Language Models. Please install with the [blendsearch] option to use it. ```bash pip install "pyautogen[blendsearch]<0.2" ``` Example notebooks: [Optimize for Cod...
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
retrievechat `pyautogen` supports retrieval-augmented generation tasks such as question answering and code generation with RAG agents. Please install with the [retrievechat] option to use it with ChromaDB. ```bash pip install "pyautogen[retrievechat]" ``` Alternatively `pyautogen` also supports PGVector and Qdrant w...
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
Teachability To use Teachability, please install AutoGen with the [teachable] option. ```bash pip install "pyautogen[teachable]" ``` Example notebook: [Chatting with a teachable agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_teachability.ipynb)
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
Large Multimodal Model (LMM) Agents We offer the Multimodal Conversable Agent and the LLaVA Agent. Please install with the [lmm] option to use them. ```bash pip install "pyautogen[lmm]" ``` Example notebooks: [LLaVA Agent](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_lmm_llava.ipynb)
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
mathchat `pyautogen<0.2` offers an experimental agent for math problem solving. Please install with the [mathchat] option to use it. ```bash pip install "pyautogen[mathchat]<0.2" ``` Example notebooks: [Using MathChat to Solve Math Problems](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_MathChat...
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
Graph To use a graph in `GroupChat`, particularly for graph visualization, please install AutoGen with the [graph] option. ```bash pip install "pyautogen[graph]" ``` Example notebook: [Finite State Machine graphs to set speaker transition constraints](https://microsoft.github.io/autogen/docs/notebooks/agentchat_grou...
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
Long Context Handling AutoGen includes support for handling long textual contexts by leveraging the LLMLingua library for text compression. To enable this functionality, please install AutoGen with the `[long-context]` option: ```bash pip install "pyautogen[long-context]" ```
GitHub
autogen
autogen/website/docs/contributor-guide/tests.md
autogen
# Tests Tests are automatically run via GitHub actions. There are two workflows: 1. [build.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/build.yml) 1. [openai.yml](https://github.com/microsoft/autogen/blob/main/.github/workflows/openai.yml) The first workflow is required to pass for all PRs (...
GitHub
autogen
autogen/website/docs/contributor-guide/tests.md
autogen
Running tests locally To run tests, install the [test] option: ```bash pip install -e ".[test]" ``` Then you can run the tests from the `test` folder using the following command: ```bash pytest test ``` Tests for the `autogen.agentchat.contrib` module may be skipped automatically if the required dependencies are no...
GitHub
autogen
autogen/website/docs/contributor-guide/tests.md
autogen
Skip flags for tests - `--skip-openai` for skipping tests that require access to OpenAI services. - `--skip-docker` for skipping tests that explicitly use docker - `--skip-redis` for skipping tests that require a Redis server For example, the following command will skip tests that require access to OpenAI and docker ...
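To make the combination concrete, here is a minimal Python sketch that assembles such a pytest invocation from the flags listed above (only the flag names from the list are used; nothing else is assumed):

```python
# Assemble a pytest command that skips OpenAI- and docker-dependent tests.
# The flag names come from the list above; "test" is the test folder.
skip_flags = ["--skip-openai", "--skip-docker"]
cmd = ["pytest", "test", *skip_flags]

# The resulting command line:
command_line = " ".join(cmd)
print(command_line)  # pytest test --skip-openai --skip-docker
```

The same list form can be handed directly to `subprocess.run(cmd)` if you prefer launching pytest programmatically.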
GitHub
autogen
autogen/website/docs/contributor-guide/tests.md
autogen
Coverage Any code you commit should not decrease coverage. To ensure your code maintains or increases coverage, use the following commands after installing the required test dependencies: ```bash pip install -e ".[test]" pytest test --cov-report=html ``` Pytest generates a code coverage report and creates an htmlcov...
GitHub
autogen
autogen/website/docs/contributor-guide/pre-commit.md
autogen
# Pre-commit Run `pre-commit install` to install pre-commit into your git hooks. Before you commit, run `pre-commit run` to check if you meet the pre-commit requirements. If you use Windows (without WSL) and can't commit after installing pre-commit, you can run `pre-commit uninstall` to uninstall the hook. In WSL or L...
GitHub
autogen
autogen/website/docs/contributor-guide/maintainer.md
autogen
# Guidance for Maintainers
GitHub
autogen
autogen/website/docs/contributor-guide/maintainer.md
autogen
General - Be a member of the community and treat everyone as a member. Be inclusive. - Help each other and encourage mutual help. - Actively post and respond. - Keep open communication. - Identify good maintainer candidates from active contributors.
GitHub
autogen
autogen/website/docs/contributor-guide/maintainer.md
autogen
Pull Requests - For a new PR, decide whether to close it without review. If not, find the right reviewers. One source to refer to is the roles on Discord. Another consideration is to ask users who can benefit from the PR to review it. - For an old PR, check the blocker: reviewer or PR creator. Try to unblock. Get additional ...
GitHub
autogen
autogen/website/docs/contributor-guide/maintainer.md
autogen
Issues and Discussions - For new issues, write a reply and apply a label if relevant. Ask on Discord when necessary. For roadmap issues, apply the roadmap label and encourage community discussion. Mention relevant experts when necessary. - For old issues, provide an update or close them. Ask on Discord when necessary. Encour...
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
# Docker for Development For developers contributing to the AutoGen project, we offer a specialized Docker environment. This setup is designed to streamline the development process, ensuring that all contributors work within a consistent and well-equipped environment.
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Autogen Developer Image (autogen_dev_img) - **Purpose**: The `autogen_dev_img` is tailored for contributors to the AutoGen project. It includes a suite of tools and configurations that aid in the development and testing of new features or fixes. - **Usage**: This image is recommended for developers who intend to contr...
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Building the Developer Docker Image - To build the developer Docker image (`autogen_dev_img`), use the following commands: ```bash docker build -f .devcontainer/dev/Dockerfile -t autogen_dev_img https://github.com/microsoft/autogen.git#main ``` - To build the developer image from a specific Dockerfil...
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Using the Developer Docker Image Once you have built the `autogen_dev_img`, you can run it using the standard Docker commands. This will place you inside the containerized development environment where you can run tests, develop code, and ensure everything is functioning as expected before submitting your contribution...
GitHub
autogen
autogen/website/docs/contributor-guide/docker.md
autogen
Develop in Remote Container If you use VS Code, you can open the autogen folder in a [Container](https://code.visualstudio.com/docs/remote/containers). We have provided the configuration in [devcontainer](https://github.com/microsoft/autogen/blob/main/.devcontainer). It can be used in GitHub Codespaces too. Developing...
GitHub
autogen
autogen/website/docs/contributor-guide/documentation.md
autogen
# Documentation
GitHub
autogen
autogen/website/docs/contributor-guide/documentation.md
autogen
How to get a notebook rendered on the website See [here](https://github.com/microsoft/autogen/blob/main/notebook/contributing.md#how-to-get-a-notebook-displayed-on-the-website) for instructions on how to get a notebook in the `notebook` directory rendered on the website.
GitHub
autogen
autogen/website/docs/contributor-guide/documentation.md
autogen
Build documentation locally 1\. To build and test documentation locally, first install [Node.js](https://nodejs.org/en/download/). For example, ```bash nvm install --lts ``` Then, install `yarn` and other required packages: ```bash npm install --global yarn pip install pydoc-markdown pyyaml termcolor ``` 2\. You a...
GitHub
autogen
autogen/website/docs/contributor-guide/documentation.md
autogen
Build with Docker To build and test documentation within a Docker container, use the Dockerfile in the `dev` folder as described above to build your image: ```bash docker build -f .devcontainer/dev/Dockerfile -t autogen_dev_img https://github.com/microsoft/autogen.git#main ``` Then start the container like so, this ...
GitHub
autogen
autogen/website/docs/contributor-guide/contributing.md
autogen
# Contributing to AutoGen The project welcomes contributions from developers and organizations worldwide. Our goal is to foster a collaborative and inclusive community where diverse perspectives and expertise can drive innovation and enhance the project's capabilities. Whether you are an individual contributor or repr...
GitHub
autogen
autogen/website/docs/contributor-guide/contributing.md
autogen
Roadmaps To see what we are working on and what we plan to work on, please check our [Roadmap Issues](https://aka.ms/autogen-roadmap).
GitHub
autogen
autogen/website/docs/contributor-guide/contributing.md
autogen
Becoming a Reviewer There is currently no formal reviewer solicitation process. Current reviewers identify reviewers from active contributors. If you are willing to become a reviewer, you are welcome to let us know on Discord.
GitHub
autogen
autogen/website/docs/contributor-guide/contributing.md
autogen
Contact Maintainers The project is currently maintained by a [dynamic group of volunteers](https://butternut-swordtail-8a5.notion.site/410675be605442d3ada9a42eb4dfef30?v=fa5d0a79fd3d4c0f9c112951b2831cbb&pvs=4) from several different organizations. Contact project administrators Chi Wang and Qingyun Wu via auto-gen@out...
GitHub
autogen
autogen/website/docs/contributor-guide/file-bug-report.md
autogen
# File A Bug Report When you submit an issue to [GitHub](https://github.com/microsoft/autogen/issues), please do your best to follow these guidelines! This will make it a lot easier to provide you with good feedback: - The ideal bug report contains a short reproducible code snippet. This way anyone can try to repro...
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
# What Next? Now that you have learned the basics of AutoGen, you can start to build your own agents. Here are some ideas to get you started without going to the advanced topics: 1. **Chat with LLMs**: In [Human in the Loop](./human-in-the-loop) we covered the basic human-in-the-loop usage. You can try to hook u...
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Dig Deeper - Read the [user guide](/docs/topics) to learn more - Read the examples and guides in the [notebooks section](/docs/notebooks) - Check [research](/docs/Research) and [blog](/blog)
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Get Help If you have any questions, you can ask in our [GitHub Discussions](https://github.com/microsoft/autogen/discussions), or join our [Discord Server](https://aka.ms/autogen-dc). [![](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat.png)](https://aka.ms/autogen-dc)
GitHub
autogen
autogen/website/docs/tutorial/what-next.md
autogen
Get Involved - Check out [Roadmap Issues](https://aka.ms/autogen-roadmap) to see what we are working on. - Contribute your work to our [gallery](/docs/Gallery) - Follow our [contribution guide](/docs/contributor-guide/contributing) to make a pull request to AutoGen - You can also share your work with the community on ...
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
--- title: Does Model and Inference Parameter Matter in LLM Applications? - A Case Study for MATH authors: sonichi tags: [LLM, GPT, research] --- ![level 2 algebra](img/level2algebra.png) **TL;DR:** * **Just by tuning the inference parameters like model, number of responses, temperature etc. without changing any mode...
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
Experiment Setup We use AutoGen to select between the following models with a target inference budget of $0.02 per instance: - gpt-3.5-turbo, a relatively cheap model that powers the popular ChatGPT app - gpt-4, the state-of-the-art LLM that costs more than 10 times as much as gpt-3.5-turbo We adapt the models using 20 examples...
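To illustrate the kind of budget-constrained selection described above, here is a toy sketch with entirely hypothetical per-configuration numbers (they are not the paper's measurements): each candidate configuration has an estimated cost per instance and a validation accuracy, and we pick the most accurate candidate whose cost stays within the $0.02 budget.

```python
# Hypothetical candidate configurations: (name, estimated cost per instance in $,
# validation accuracy). These numbers are made up for illustration only.
candidates = [
    ("gpt-4, n=1", 0.05, 0.80),
    ("gpt-3.5-turbo, n=1", 0.002, 0.60),
    ("gpt-3.5-turbo, n=10, voted answer", 0.018, 0.75),
]

budget = 0.02  # target inference budget per instance

# Keep only candidates within budget, then take the most accurate one.
affordable = [c for c in candidates if c[1] <= budget]
best = max(affordable, key=lambda c: c[2])
print(best[0])  # gpt-3.5-turbo, n=10, voted answer
```

The real tuning procedure (EcoOptiGen) searches this space automatically, including parameters like temperature and number of responses, rather than enumerating a fixed list by hand.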
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
Experiment Results The first figure in this blog post shows the average accuracy and average inference cost of each configuration on the level 2 Algebra test set. Surprisingly, the tuned gpt-3.5-turbo model is selected as a better model and it vastly outperforms untuned gpt-4 in accuracy (92% vs. 70%) with equal or 2...
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
Analysis and Discussion While gpt-3.5-turbo demonstrates competitive accuracy with voted answers in relatively easy algebra problems under the same inference budget, gpt-4 is a better choice for the most difficult problems. In general, through parameter tuning and model selection, we can identify the opportunity to sa...
GitHub
autogen
autogen/website/blog/2023-04-21-LLM-tuning-math/index.md
autogen
For Further Reading * [Research paper about the tuning technique](https://arxiv.org/abs/2303.04673) * [Documentation about inference tuning](/docs/Use-Cases/enhanced_inference) *Do you have any experience to share about LLM applications? Would you like to see more support or research on LLM optimization or automation? P...
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
--- title: Use AutoGen for Local LLMs authors: jialeliu tags: [LLM] --- **TL;DR:** We demonstrate how to use autogen for local LLM applications. As an example, we will initiate an endpoint using [FastChat](https://github.com/lm-sys/FastChat) and perform inference on [ChatGLMv2-6b](https://github.com/THUDM/ChatGLM2-6B).
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
Preparations ### Clone FastChat FastChat provides OpenAI-compatible APIs for its supported models, so you can use FastChat as a local drop-in replacement for OpenAI APIs. However, its code needs minor modification in order to function properly. ```bash git clone https://github.com/lm-sys/FastChat.git cd FastChat ```...
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
Initiate server First, launch the controller ```bash python -m fastchat.serve.controller ``` Then, launch the model worker(s) ```bash python -m fastchat.serve.model_worker --model-path chatglm2-6b ``` Finally, launch the RESTful API server ```bash python -m fastchat.serve.openai_api_server --host localhost --port...
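Once the RESTful API server is up, autogen can be pointed at it with an ordinary config entry. A minimal sketch follows; the `base_url` field name follows the openai>=1 convention (older versions of this setup used `api_base`), and port 8000 is assumed to be the port the API server above was started on:

```python
# Point an autogen config entry at the local FastChat OpenAI-compatible server
# started above. Field names follow the openai>=1 style (`base_url`); older
# setups used `api_base` instead -- check the version you are running.
config_list = [
    {
        "model": "chatglm2-6b",                 # the model served by the worker above
        "base_url": "http://localhost:8000/v1", # assumes the server runs on port 8000
        "api_key": "NULL",                      # FastChat does not validate the key
    }
]

print(config_list[0]["base_url"])
```

This `config_list` can then be passed wherever autogen expects an LLM configuration, exactly as with a hosted OpenAI endpoint.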
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
Interact with model using `oai.Completion` (requires openai<1) Now the models can be directly accessed through openai-python library as well as `autogen.oai.Completion` and `autogen.oai.ChatCompletion`. ```python from autogen import oai # create a text completion request response = oai.Completion.create( config...
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
Interacting with multiple local LLMs If you would like to interact with multiple LLMs on your local machine, replace the `model_worker` step above with a multi-model variant: ```bash python -m fastchat.serve.multi_model_worker \ --model-path lmsys/vicuna-7b-v1.3 \ --model-names vicuna-7b-v1.3 \ --model-pa...
GitHub
autogen
autogen/website/blog/2023-07-14-Local-LLMs/index.md
autogen
For Further Reading * [Documentation](/docs/Getting-Started) about `autogen`. * [Documentation](https://github.com/lm-sys/FastChat) about FastChat.
GitHub
autogen
autogen/notebook/contributing.md
autogen
# Contributing
GitHub
autogen
autogen/notebook/contributing.md
autogen
How to get a notebook displayed on the website In the notebook metadata set the `tags` and `description` `front_matter` properties. For example: ```json { "...": "...", "metadata": { "...": "...", "front_matter": { "tags": ["code generation", "debugging"], "description"...
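In Python terms, the front matter shown above is just a nested mapping inside the notebook metadata. Here is a small sketch of checking that a notebook carries the required keys; the helper function and the description text are ours for illustration, not part of the repo's tooling:

```python
# A notebook's metadata with the front_matter properties from the example above.
# The description string here is a placeholder, not taken from the repo.
metadata = {
    "front_matter": {
        "tags": ["code generation", "debugging"],
        "description": "A short description of the notebook.",
    }
}

def has_front_matter(meta):
    """Return True if front_matter carries the tags and description fields.

    Illustrative helper only; it is not part of process_notebooks.py.
    """
    fm = meta.get("front_matter", {})
    return isinstance(fm.get("tags"), list) and isinstance(fm.get("description"), str)

print(has_front_matter(metadata))  # True
```

A check like this can be run before submitting a notebook so the website build does not fail on missing front matter.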
GitHub
autogen
autogen/notebook/contributing.md
autogen
Best practices for authoring notebooks The following points are best practices for authoring notebooks to ensure consistency and ease of use for the website. - The Colab button will be automatically generated on the website for all notebooks where it is missing. Going forward, it is recommended to not include the Col...
GitHub
autogen
autogen/notebook/contributing.md
autogen
Testing Notebooks can be tested by running: ```sh python website/process_notebooks.py test ``` This will automatically scan for all notebooks in the notebook/ and website/ dirs. To test a specific notebook pass its path: ```sh python website/process_notebooks.py test notebook/agentchat_logging.ipynb ``` Options: ...
GitHub
autogen
autogen/notebook/contributing.md
autogen
Metadata fields All possible metadata fields are as follows: ```json { "...": "...", "metadata": { "...": "...", "front_matter": { "tags": "List[str] - List of tags to categorize the notebook", "description": "str - Brief description of the notebook", }, ...
GitHub
autogen
autogen/dotnet/README.md
autogen
### AutoGen for .NET [![dotnet-ci](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml/badge.svg)](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml) [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) > [!NOTE] > Nightly build...
GitHub
autogen
autogen/dotnet/nuget/NUGET.md
autogen
### About AutoGen for .NET `AutoGen for .NET` is the official .NET SDK for [AutoGen](https://github.com/microsoft/autogen). It enables you to create LLM agents and construct multi-agent workflows with ease. It also provides integration with popular platforms like OpenAI, Semantic Kernel, and LM Studio. ### Getting st...
GitHub
autogen
autogen/dotnet/website/README.md
autogen
## How to build and run the website ### Prerequisites - dotnet 7.0 or later ### Build First, go to the autogen/dotnet folder and run the following command to build the website: ```bash dotnet tool restore dotnet tool run docfx website/docfx.json --serve ``` After the command is executed, you can open your browser and ...
GitHub
autogen
autogen/dotnet/website/index.md
autogen
[!INCLUDE [](./articles/getting-start.md)]
GitHub
autogen
autogen/dotnet/website/articles/Create-a-user-proxy-agent.md
autogen
## UserProxyAgent [`UserProxyAgent`](../api/AutoGen.UserProxyAgent.yml) is a special type of agent that can be used to proxy user input to another agent or group of agents. It supports the following human input modes: - `ALWAYS`: Always ask user for input. - `NEVER`: Never ask user for input. In this mode, the agent w...
GitHub
autogen
autogen/dotnet/website/articles/MistralChatAgent-use-function-call.md
autogen
## Use tool in MistralChatAgent The following example shows how to enable tool support in @AutoGen.Mistral.MistralClientAgent by creating a `GetWeatherAsync` function and passing it to the agent. Firstly, you need to install the following packages: ```bash dotnet add package AutoGen.Mistral dotnet add package AutoGen...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
This example shows how to use function calls with local LLM models, using [Ollama](https://ollama.com/) as the local model provider and a [LiteLLM](https://docs.litellm.ai/docs/) proxy server to provide an openai-api compatible interface. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://gith...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install Ollama and pull `dolphincoder:latest` model First, install Ollama by following the instructions on the [Ollama website](https://ollama.com/). After installing Ollama, pull the `dolphincoder:latest` model by running the following command: ```bash ollama pull dolphincoder:latest ```
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install LiteLLM and start the proxy server You can install LiteLLM by following the instructions on the [LiteLLM website](https://docs.litellm.ai/docs/). ```bash pip install 'litellm[proxy]' ``` Then, start the proxy server by running the following command: ```bash litellm --model ollama_chat/dolphincoder --port 400...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Install AutoGen and AutoGen.SourceGenerator In your project, install the AutoGen and AutoGen.SourceGenerator packages using the following command: ```bash dotnet add package AutoGen dotnet add package AutoGen.SourceGenerator ``` The `AutoGen.SourceGenerator` package is used to automatically generate type-safe `Functio...
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Define `WeatherReport` function and create @AutoGen.Core.FunctionCallMiddleware Create a `public partial` class to host the methods you want to use in AutoGen agents. The method has to be a `public` instance method and its return type must be `Task<string>`. After the methods are defined, mark them with `AutoGen.Core....
GitHub
autogen
autogen/dotnet/website/articles/Function-call-with-ollama-and-litellm.md
autogen
Create @AutoGen.OpenAI.OpenAIChatAgent with `GetWeatherReport` tool and chat with it Because LiteLLM proxy server is openai-api compatible, we can use @AutoGen.OpenAI.OpenAIChatAgent to connect to it as a third-party openai-api provider. The agent is also registered with a @AutoGen.Core.FunctionCallMiddleware which co...
GitHub
autogen
autogen/dotnet/website/articles/Group-chat-overview.md
autogen
@AutoGen.Core.IGroupChat is a fundamental feature in AutoGen. It provides a way to organize multiple agents under the same context and work together to resolve a given task. In AutoGen, there are two types of group chat: - @AutoGen.Core.RoundRobinGroupChat : This group chat runs agents in a round-robin sequence. The c...
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
`AutoGen` provides a built-in feature to run code snippets from agent responses. Currently the following languages are supported: - dotnet More languages will be supported in the future.
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
What is a code snippet? A code snippet in an agent response is a code block with a language identifier. For example: [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/RunCodeSnippetCodeSnippet.cs?name=code_snippet_1_3)]
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
Why is running code snippets useful? The ability to run code snippets can greatly extend an agent's capabilities, because it enables the agent to resolve tasks by writing and running code, which is much more powerful than returning only a text response. For example, in a data analysis scenario, an agent can resolve tasks like...
GitHub
autogen
autogen/dotnet/website/articles/Run-dotnet-code.md
autogen
How to run a dotnet code snippet? The built-in feature of running dotnet code snippets is provided by [dotnet-interactive](https://github.com/dotnet/interactive). To run a dotnet code snippet, you need to install the following package in your project, which provides the integration with dotnet-interactive: ```xml <PackageR...
GitHub
autogen
autogen/dotnet/website/articles/Consume-LLM-server-from-LM-Studio.md
autogen
## Consume LLM server from LM Studio You can use @AutoGen.LMStudio.LMStudioAgent from the `AutoGen.LMStudio` package to consume an openai-like API from an LM Studio local server. ### What's LM Studio [LM Studio](https://lmstudio.ai/) is an app that allows you to deploy and run inference on hundreds of thousands of open-source language ...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-function-call.md
autogen
The following example shows how to create a `GetWeatherAsync` function and pass it to @AutoGen.OpenAI.OpenAIChatAgent. Firstly, you need to install the following packages: ```xml <ItemGroup> <PackageReference Include="AutoGen.OpenAI" Version="AUTOGEN_VERSION" /> <PackageReference Include="AutoGen.SourceGenerat...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
## Use function call in AutoGen agent Typically, there are three ways to pass a function definition to an agent to enable function call: - Pass function definitions when creating an agent. This only works if the agent supports passing function definitions from its constructor. - Pass function definitions in @AutoGen.Core.Gen...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Pass function definitions when creating an agent In some agents like @AutoGen.AssistantAgent or @AutoGen.OpenAI.GPTAgent, you can pass function definitions when creating the agent. Suppose the `TypeSafeFunctionCall` is defined in the following code snippet: [!code-csharp[TypeSafeFunctionCall](../../sample/AutoGen.Basic...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Passing function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent You can also pass function definitions in @AutoGen.Core.GenerateReplyOptions when invoking an agent. This is useful when you want to override the function definitions passed to the agent when creating it. [!code-csharp[assistant ...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls You can also register an agent with @AutoGen.Core.FunctionCallMiddleware to process and invoke function calls. This is useful when you want to process and invoke function calls in a more flexible way. [!code-csharp[assista...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Invoke function call inside an agent To invoke a function instead of returning the function call object, pass its function call wrapper to the agent via `functionMap`. For example, you can pass the `WeatherReportWrapper` to the agent via `functionMap`: [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/Func...
GitHub
autogen
autogen/dotnet/website/articles/Use-function-call.md
autogen
Invoke function call by another agent You can also use another agent to invoke the function call from one agent. This is a useful pattern in two-agent chat, where one agent is used as a function proxy to invoke the function call from another agent. Once the function call is invoked, the result can be returned to the or...
GitHub
autogen
autogen/dotnet/website/articles/Create-your-own-agent.md
autogen
## Coming soon
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
The following example shows how to enable JSON mode in @AutoGen.OpenAI.OpenAIChatAgent. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.OpenAI.Sample/Use_Json_Mode.cs)
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
What is JSON mode? JSON mode is a new feature in OpenAI which allows you to instruct the model to always respond with a valid JSON object. This is useful when you want to constrain the model output to JSON format only. > [!NOTE] > Currently, JSON mode is only supported by `gpt-4-turbo-preview` and `gpt-3.5-turbo-0125`. Fo...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-use-json-mode.md
autogen
How to enable JSON mode in OpenAIChatAgent. To enable JSON mode for @AutoGen.OpenAI.OpenAIChatAgent, set `responseFormat` to `ChatCompletionsResponseFormat.JsonObject` when creating the agent. Note that when enabling JSON mode, you also need to instruct the agent to output JSON format in its system message. [!code-cs...
GitHub
autogen
autogen/dotnet/website/articles/Installation.md
autogen
### Current version: [![NuGet version](https://badge.fury.io/nu/AutoGen.Core.svg)](https://badge.fury.io/nu/AutoGen.Core) AutoGen.Net provides the following packages; you can choose to install one or more of them based on your needs: - `AutoGen`: The all-in-one package. This package has dependencies on `AutoGen.Co...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen-Mistral-Overview.md
autogen
## AutoGen.Mistral overview AutoGen.Mistral provides the following agent(s) to connect to [Mistral.AI](https://mistral.ai/) platform. - @AutoGen.Mistral.MistralClientAgent: A slim wrapper agent over @AutoGen.Mistral.MistralClient. ### Get started with AutoGen.Mistral To get started with AutoGen.Mistral, follow the [...
GitHub
autogen
autogen/dotnet/website/articles/Built-in-messages.md
autogen
## An overview of built-in @AutoGen.Core.IMessage types Starting from 0.0.9, AutoGen introduces the @AutoGen.Core.IMessage and @AutoGen.Core.IMessage`1 types to provide a unified message interface for different agents. The @AutoGen.Core.IMessage is a non-generic interface that represents a message. The @AutoGen.Core.IMes...
GitHub
autogen
autogen/dotnet/website/articles/Roundrobin-chat.md
autogen
@AutoGen.Core.RoundRobinGroupChat is a group chat that invokes agents in a round-robin order. It's useful when you want to call multiple agents in a fixed sequence, for example, asking a search agent to retrieve related information followed by a summarization agent to summarize the information. Besides, it is also used by @A...
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
`Agent` is one of the most fundamental concepts in AutoGen.Net. In AutoGen.Net, you construct a single agent to process a specific task, extend an agent using [Middlewares](./Middleware-overview.md), and construct a multi-agent workflow using [GroupChat](./Group-chat-overview.md). > [!NOTE] > Every agent i...
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Create an agent - Create an @AutoGen.AssistantAgent: [Create an assistant agent](./Create-an-agent.md) - Create an @AutoGen.OpenAI.OpenAIChatAgent: [Create an OpenAI chat agent](./OpenAIChatAgent-simple-chat.md) - Create a @AutoGen.SemanticKernel.SemanticKernelAgent: [Create a semantic kernel agent](./AutoGen.SemanticK...
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Chat with an agent To chat with an agent, typically you can invoke @AutoGen.Core.IAgent.GenerateReplyAsync*. On top of that, you can also use one of the extension methods like @AutoGen.Core.AgentExtension.SendAsync* as shortcuts. > [!NOTE] > AutoGen provides a list of built-in message types like @AutoGen.Core.TextMess...
GitHub
autogen
autogen/dotnet/website/articles/Agent-overview.md
autogen
Streaming chat If an agent implements @AutoGen.Core.IStreamingAgent, you can use @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* to chat with the agent in a streaming way. You would need to process the streaming updates on your side though. - Send a @AutoGen.Core.TextMessage to an agent via @AutoGen.Core.IS...
autogen/dotnet/website/articles/Agent-overview.md
## Register middleware to an agent

@AutoGen.Core.IMiddleware and @AutoGen.Core.IStreamingMiddleware are used to extend the behavior of @AutoGen.Core.IAgent.GenerateReplyAsync* and @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync*. You can register middleware to an agent to customize the behavior of the agent on t...
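A sketch of registering a middleware function (the delegate parameter names here are illustrative; the middleware receives the incoming messages plus the inner agent and decides how to delegate):

```csharp
using AutoGen.Core;

var agentWithMiddleware = agent.RegisterMiddleware(
    async (messages, options, innerAgent, cancellationToken) =>
    {
        // Pre-process the incoming messages here if needed.
        var reply = await innerAgent.GenerateReplyAsync(messages, options, cancellationToken);
        // Post-process the reply here if needed.
        return reply;
    });
```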
autogen/dotnet/website/articles/Agent-overview.md
## Group chat

You can construct a multi-agent workflow using @AutoGen.Core.IGroupChat. In AutoGen.Net, there are two types of group chat:

- @AutoGen.Core.SequentialGroupChat: Orchestrates the agents in the group chat in a fixed, sequential order.
- @AutoGen.Core.GroupChat: Provides a more dynamic yet controllable way to orchestrate...
autogen/dotnet/website/articles/Group-chat.md
@AutoGen.Core.GroupChat invokes agents in a dynamic way. On one hand, it relies on its admin agent to intelligently determine the next speaker based on the conversation context; on the other hand, it also allows you to control the conversation flow by using a @AutoGen.Core.Graph. This makes it a more dynamic yet contr...
autogen/dotnet/website/articles/Group-chat.md
## Use @AutoGen.Core.GroupChat to implement a code interpreter chat flow

The following example shows how to create a dynamic group chat with @AutoGen.Core.GroupChat. In this example, we will create a dynamic group chat with four agents: `admin`, `coder`, `reviewer` and `runner`. Each agent has its own role in the group chat:...
autogen/dotnet/website/articles/Two-agent-chat.md
In `AutoGen`, you can start a conversation between two agents using @AutoGen.Core.AgentExtension.InitiateChatAsync* or one of the @AutoGen.Core.AgentExtension.SendAsync* APIs. When the conversation starts, the sender agent first sends a message to the receiver agent, then the receiver agent generates a reply and sends it back...
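A rough sketch of kicking off such a conversation (assuming `studentAgent` and `teacherAgent` already exist; parameter names may differ between versions):

```csharp
using AutoGen.Core;

// The sender (student) sends the first message; the two agents then
// alternate until maxRound is reached or the chat terminates.
var chatHistory = await studentAgent.InitiateChatAsync(
    receiver: teacherAgent,
    message: "Please create a math question for me.",
    maxRound: 10);
```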
autogen/dotnet/website/articles/Two-agent-chat.md
## A basic example

The following example shows how to start a conversation between the teacher agent and the student agent, where the student agent starts the conversation by asking the teacher to create math questions.

> [!TIP]
> You can use @AutoGen.Core.PrintMessageMiddlewareExtension.RegisterPrintMessage* to pretty print th...
autogen/dotnet/website/articles/Middleware-overview.md
`Middleware` is a key feature in AutoGen.Net that enables you to customize the behavior of @AutoGen.Core.IAgent.GenerateReplyAsync*. It's similar to the middleware concept in ASP.Net and is widely used in AutoGen.Net for various scenarios, such as function call support, converting messages of different types, print mess...
autogen/dotnet/website/articles/Middleware-overview.md
## Use middleware in an agent

To use middleware in an existing agent, you can either create a @AutoGen.Core.MiddlewareAgent on top of the original agent or register middleware functions to the original agent.

### Create @AutoGen.Core.MiddlewareAgent on top of the original agent

[!code-csharp[](../../sample/AutoGen.BasicS...
autogen/dotnet/website/articles/Middleware-overview.md
## Short-circuit the next agent

The example below shows how to short-circuit the inner agent.

[!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/MiddlewareAgentCodeSnippet.cs?name=short_circuit_middleware_agent)]

> [!Note]
> When multiple middleware functions are registered, the order of middleware functions i...
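Conceptually, short-circuiting means the middleware returns a reply without ever calling the inner agent. A minimal sketch, with the same illustrative delegate shape as above:

```csharp
using AutoGen.Core;

var shortCircuitAgent = agent.RegisterMiddleware(
    async (messages, options, innerAgent, cancellationToken) =>
    {
        // Return a reply directly; innerAgent.GenerateReplyAsync is
        // never invoked, so the inner agent is short-circuited.
        return new TextMessage(Role.Assistant, "Short-circuited reply.");
    });
```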
autogen/dotnet/website/articles/Middleware-overview.md
## Streaming middleware

You can also modify the behavior of @AutoGen.Core.IStreamingAgent.GenerateStreamingReplyAsync* by registering streaming middleware to it. One example is @AutoGen.OpenAI.OpenAIChatRequestMessageConnector, which converts `StreamingChatCompletionsUpdate` into one of `AutoGen.Core.TextMessageUpdate` or `A...
autogen/dotnet/website/articles/Function-call-overview.md
## Overview of function call In some LLM models, you can provide a list of function definitions to the model. A function definition is essentially an OpenAPI schema object that describes the function, its parameters, and its return value. These function definitions tell the model what "functions" are availa...
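As a sketch of what such a function might look like on the C# side: AutoGen.Net can derive the function definition from an attributed method (via its source generator), so the schema the model sees is generated rather than hand-written. The attribute usage and comment conventions here are illustrative, not exact:

```csharp
using AutoGen.Core;

public partial class WeatherFunctions
{
    /// <summary>
    /// Get the current weather for a city.
    /// </summary>
    /// <param name="city">The name of the city.</param>
    [Function]
    public async Task<string> GetWeatherAsync(string city)
    {
        // A real implementation would call a weather service here;
        // this stub just returns a canned answer.
        return $"The weather in {city} is sunny.";
    }
}
```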
autogen/dotnet/website/articles/Use-graph-in-group-chat.md
Sometimes, you may want more control over how the next agent is selected in a @AutoGen.Core.GroupChat based on the task you want to resolve. For example, in the previous [code writing example](./Group-chat.md), the original code interpreter workflow can be improved as shown in the following diagram, because it's not necessa...
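A sketch of constraining speaker selection with a graph. Here `admin`, `coder`, `reviewer` and `runner` are assumed to be existing agents, and the transition-creation and constructor parameter names are illustrative rather than exact:

```csharp
using AutoGen.Core;

// Allowed transitions: admin -> coder -> reviewer -> runner -> admin.
var admin2Coder = Transition.Create(admin, coder);
var coder2Reviewer = Transition.Create(coder, reviewer);
var reviewer2Runner = Transition.Create(reviewer, runner);
var runner2Admin = Transition.Create(runner, admin);

var workflow = new Graph(new[] { admin2Coder, coder2Reviewer, reviewer2Runner, runner2Admin });

// With the graph supplied, the group chat only selects the next
// speaker along these edges. (Parameter names are illustrative.)
var groupChat = new GroupChat(
    admin: admin,
    members: new IAgent[] { coder, reviewer, runner },
    workflow: workflow);
```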