source: stringclasses (1 value)
repository: stringclasses (1 value)
file: stringlengths (17–99)
label: stringclasses (1 value)
content: stringlengths (11–13.3k)
GitHub
autogen
autogen/.github/ISSUE_TEMPLATE.md
autogen
### Description <!-- A clear and concise description of the issue or feature request. --> ### Environment - AutoGen version: <!-- Specify the AutoGen version (e.g., v0.2.0) --> - Python version: <!-- Specify the Python version (e.g., 3.8) --> - Operating System: <!-- Specify the OS (e.g., Windows 10, Ubuntu 20.04) -->...
GitHub
autogen
autogen/website/README.md
autogen
# Website This website is built using [Docusaurus 3](https://docusaurus.io/), a modern static website generator.
GitHub
autogen
autogen/website/README.md
autogen
Prerequisites To build and test documentation locally, begin by downloading and installing [Node.js](https://nodejs.org/en/download/), and then installing [Yarn](https://classic.yarnpkg.com/en/). On Windows, you can install Yarn via npm, which comes bundled with Node.js: ```console npm install --g...
GitHub
autogen
autogen/website/README.md
autogen
Installation ```console pip install pydoc-markdown pyyaml colored cd website yarn install ``` ### Install Quarto `quarto` is used to render notebooks. Install it [here](https://github.com/quarto-dev/quarto-cli/releases). > Note: Ensure that your `quarto` version is `1.5.23` or higher.
GitHub
autogen
autogen/website/README.md
autogen
Local Development Navigate to the `website` folder and run: ```console pydoc-markdown python ./process_notebooks.py render yarn start ``` This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
GitHub
autogen
autogen/website/docs/Migration-Guide.md
autogen
# Migration Guide
GitHub
autogen
autogen/website/docs/Migration-Guide.md
autogen
Migrating to 0.2 openai v1 is a total rewrite of the library with many breaking changes. For example, the inference requires instantiating a client, instead of using a global class method. Therefore, some changes are required for users of `pyautogen<0.2`. - `api_base` -> `base_url`, `request_timeout` -> `timeout` in ...
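The renames listed above can be applied mechanically to an existing config dict. A minimal sketch (the helper name `migrate_config` is hypothetical, and only the two renames named above are handled):

```python
# Hypothetical helper illustrating the pyautogen<0.2 -> 0.2 key renames
# described above; only api_base -> base_url and request_timeout -> timeout
# are handled here.
RENAMES = {"api_base": "base_url", "request_timeout": "timeout"}

def migrate_config(old: dict) -> dict:
    """Return a copy of an old-style config dict using the 0.2 key names."""
    return {RENAMES.get(key, key): value for key, value in old.items()}

old = {"model": "gpt-4",
       "api_base": "https://example.openai.azure.com",
       "request_timeout": 60}
new = migrate_config(old)
```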
GitHub
autogen
autogen/website/docs/Examples.md
autogen
# Examples
GitHub
autogen
autogen/website/docs/Examples.md
autogen
Automated Multi Agent Chat AutoGen offers conversable agents powered by LLMs, tools, or humans, which can be used to perform tasks collectively via automated chat. This framework allows tool use and human participation via multi-agent conversation. Please find documentation about this feature [here](/docs/Use-Cases/agent_...
GitHub
autogen
autogen/website/docs/Examples.md
autogen
Enhanced Inferences ### Utilities - API Unification - [View Documentation with Code Example](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference/#api-unification) - Utility Functions to Help Manage API Configurations Effectively - [View Notebook](/docs/topics/llm_configuration) - Cost Calculation...
GitHub
autogen
autogen/website/docs/Research.md
autogen
# Research For technical details, please check our technical report and research publications. * [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155). Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiao...
GitHub
autogen
autogen/website/docs/ecosystem/agentops.md
autogen
# Agent Monitoring and Debugging with AgentOps <img src="https://github.com/AgentOps-AI/agentops/blob/main/docs/images/external/logo/banner-badge.png?raw=true" style="width: 40%;" alt="AgentOps logo"/> [AgentOps](https://agentops.ai/?=autogen) provides session replays, metrics, and monitoring for AI agents. At a hig...
GitHub
autogen
autogen/website/docs/ecosystem/agentops.md
autogen
Installation AgentOps works seamlessly with applications built using Autogen. 1. **Install AgentOps** ```bash pip install agentops ``` 2. **Create an API Key:** Create a user API key here: [Create API Key](https://app.agentops.ai/settings/projects) 3. **Configure Your Environment:** Add your API key to your environ...
GitHub
autogen
autogen/website/docs/ecosystem/agentops.md
autogen
Features - **LLM Costs**: Track spend with foundation model providers - **Replay Analytics**: Watch step-by-step agent execution graphs - **Recursive Thought Detection**: Identify when agents fall into infinite loops - **Custom Reporting:** Create custom analytics on agent performance - **Analytics Dashboard:** Monito...
GitHub
autogen
autogen/website/docs/ecosystem/agentops.md
autogen
Autogen + AgentOps examples * [AgentChat with AgentOps Notebook](/docs/notebooks/agentchat_agentops) * [More AgentOps Examples](https://docs.agentops.ai/v1/quickstart)
GitHub
autogen
autogen/website/docs/ecosystem/agentops.md
autogen
Extra links - [🐦 Twitter](https://twitter.com/agentopsai/) - [📢 Discord](https://discord.gg/JHPt4C7r) - [🖇️ AgentOps Dashboard](https://app.agentops.ai/ref?=autogen) - [📙 Documentation](https://docs.agentops.ai/introduction)
GitHub
autogen
autogen/website/docs/ecosystem/ollama.md
autogen
# Ollama ![Ollama Example](img/ecosystem-ollama.png) [Ollama](https://ollama.com/) allows users to run open-source large language models, such as Llama 2, locally. Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, inc...
GitHub
autogen
autogen/website/docs/ecosystem/microsoft-fabric.md
autogen
# Microsoft Fabric ![Fabric Example](img/ecosystem-fabric.png) [Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/get-started/microsoft-fabric-overview) is an all-in-one analytics solution for enterprises that covers everything from data movement to data science, Real-Time Analytics, and business intelligenc...
GitHub
autogen
autogen/website/docs/ecosystem/pgvector.md
autogen
# PGVector [PGVector](https://github.com/pgvector/pgvector) is an open-source vector similarity search for Postgres. - [PGVector + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_RetrieveChat_pgvector.ipynb)
GitHub
autogen
autogen/website/docs/ecosystem/promptflow.md
autogen
# Promptflow Promptflow is a comprehensive suite of tools that simplifies the development, testing, evaluation, and deployment of LLM based AI applications. It also supports integration with Azure AI for cloud-based operations and is designed to streamline end-to-end development. Refer to [Promptflow docs](https://mi...
GitHub
autogen
autogen/website/docs/ecosystem/promptflow.md
autogen
Sample Flow ![Sample Promptflow](./img/ecosystem-promptflow.png)
GitHub
autogen
autogen/website/docs/ecosystem/composio.md
autogen
# Composio ![Composio Example](img/ecosystem-composio.png) Composio empowers AI agents to seamlessly connect with external tools, Apps, and APIs to perform actions and receive triggers. With built-in support for AutoGen, Composio enables the creation of highly capable and adaptable AI agents that can autonomously exe...
GitHub
autogen
autogen/website/docs/ecosystem/databricks.md
autogen
# Databricks ![Databricks Data Intelligence Platform](img/ecosystem-databricks.png) The [Databricks Data Intelligence Platform ](https://www.databricks.com/product/data-intelligence-platform) allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all d...
GitHub
autogen
autogen/website/docs/ecosystem/memgpt.md
autogen
# MemGPT ![MemGPT Example](img/ecosystem-memgpt.png) MemGPT enables LLMs to manage their own memory and overcome limited context windows. You can use MemGPT to create perpetual chatbots that learn about you and modify their own personalities over time. You can connect MemGPT to your own local filesystems and database...
GitHub
autogen
autogen/website/docs/ecosystem/llamaindex.md
autogen
# Llamaindex ![Llamaindex Example](img/ecosystem-llamaindex.png) [Llamaindex](https://www.llamaindex.ai/) allows users to create Llamaindex agents and integrate them into AutoGen conversation patterns. - [Llamaindex + AutoGen Code Examples](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_group_ch...
GitHub
autogen
autogen/website/docs/ecosystem/azure_cosmos_db.md
autogen
# Azure Cosmos DB > "OpenAI relies on Cosmos DB to dynamically scale their ChatGPT service – one of the fastest-growing consumer apps ever – enabling high reliability and low maintenance." > – Satya Nadella, Microsoft chairman and chief executive officer Azure Cosmos DB is a fully managed [NoSQL](https://learn.micros...
GitHub
autogen
autogen/website/docs/topics/llm-observability.md
autogen
# Agent Observability AutoGen supports advanced LLM agent observability and monitoring through built-in logging and partner providers.
GitHub
autogen
autogen/website/docs/topics/llm-observability.md
autogen
AutoGen Observability Integrations ### Built-In Logging AutoGen's SQLite and File Logger - [Tutorial Notebook](/docs/notebooks/agentchat_logging) ### Full-Service Partner Integrations AutoGen partners with [AgentOps](https://agentops.ai) to provide multi-agent tracking, metrics, and monitoring - [Tutorial Notebook](/...
GitHub
autogen
autogen/website/docs/topics/llm-observability.md
autogen
What is Observability? Observability provides developers with the necessary insights to understand and improve the internal workings of their agents. Observability is necessary for maintaining reliability, tracking costs, and ensuring AI safety. **Without observability tools, developers face significant hurdles:** - ...
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
# Retrieval Augmentation Retrieval Augmented Generation (RAG) is a powerful technique that combines language models with external knowledge retrieval to improve the quality and relevance of generated responses. One way to realize RAG in AutoGen is to construct agent chats with `RetrieveAssistantAgent` and `RetrieveUs...
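Independent of the specific agent classes, the core RAG loop is: score stored documents against the query, retrieve the most similar ones, and prepend them to the prompt. A toy sketch of that loop, with word-overlap scoring standing in for a real embedding model and vector store:

```python
# Toy retrieval-augmented prompt construction; word-overlap scoring stands
# in for a real vector store and embedding model.
docs = [
    "AutoGen agents can execute code in Docker.",
    "Paris is the capital of France.",
    "RetrieveUserProxyAgent retrieves documents for the assistant.",
]

def score(query: str, doc: str) -> int:
    # Count shared lowercase words between query and document.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def augment(query: str, k: int = 1) -> str:
    # Retrieve the top-k documents and prepend them as context.
    top = sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(top)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

prompt = augment("What does RetrieveUserProxyAgent do?")
```

In AutoGen the retrieval and prompt construction are handled by the retrieval-augmented agents themselves; this sketch only illustrates the underlying idea.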
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
Example Setup: RAG with Retrieval Augmented Agents The following is an example setup demonstrating how to create retrieval augmented agents in AutoGen: ### Step 1. Create an instance of `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. Here `RetrieveUserProxyAgent` instance acts as a proxy agent that retrieves r...
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
Example Setup: RAG with Retrieval Augmented Agents with PGVector The following is an example setup demonstrating how to create retrieval augmented agents in AutoGen: ### Step 1. Create an instance of `RetrieveAssistantAgent` and `RetrieveUserProxyAgent`. Here `RetrieveUserProxyAgent` instance acts as a proxy agent th...
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
Online Demo [Retrieval-Augmented Chat Demo on Huggingface](https://huggingface.co/spaces/thinkall/autogen-demos)
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
More Examples and Notebooks For more detailed examples and notebooks showcasing the usage of retrieval augmented agents in AutoGen, refer to the following: - Automated Code Generation and Question Answering with Retrieval Augmented Agents - [View Notebook](/docs/notebooks/agentchat_RetrieveChat) - Automated Code Genera...
GitHub
autogen
autogen/website/docs/topics/retrieval_augmentation.md
autogen
Roadmap Explore our detailed roadmap [here](https://github.com/microsoft/autogen/issues/1657) for planned advancements around RAG. Your contributions, feedback, and use cases are highly appreciated! We invite you to engage with us and play a pivotal role in the development of this impactful feature.
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
# LLM Caching AutoGen supports caching API requests so that they can be reused when the same request is issued. This is useful when repeating or continuing experiments for reproducibility and cost saving. Since version [`0.2.8`](https://github.com/microsoft/autogen/releases/tag/v0.2.8), a configurable context manager...
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
Controlling the seed You can vary the `cache_seed` parameter to get different LLM output while still using cache. ```python # Setting the cache_seed to 1 will use a different cache from the default one # and you will see different output. with Cache.disk(cache_seed=1) as cache: user.initiate_chat(assistant, messa...
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
Cache path By default [`DiskCache`](/docs/reference/cache/disk_cache#diskcache) uses `.cache` for storage. To change the cache directory, set `cache_path_root`: ```python with Cache.disk(cache_path_root="/tmp/autogen_cache") as cache: user.initiate_chat(assistant, message=coding_task, cache=cache) ```
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
Disabling cache For backward compatibility, [`DiskCache`](/docs/reference/cache/disk_cache#diskcache) is on by default with `cache_seed` set to 41. To disable caching completely, set `cache_seed` to `None` in the `llm_config` of the agent. ```python assistant = AssistantAgent( "coding_agent", llm_config={ ...
GitHub
autogen
autogen/website/docs/topics/llm-caching.md
autogen
Difference between `cache_seed` and OpenAI's `seed` parameter OpenAI v1.1 introduced a new parameter `seed`. The difference between AutoGen's `cache_seed` and OpenAI's `seed` is that AutoGen uses an explicit request cache to guarantee that exactly the same output is produced for the same input, and when the cache is hit, no OpenAI A...
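Conceptually, the request cache behaves like a dictionary keyed by `(cache_seed, request)`: a hit returns the stored response without any API call, which is what guarantees identical output. A minimal conceptual sketch (not AutoGen's actual implementation):

```python
# Conceptual sketch of a seed-scoped request cache, not AutoGen's actual
# implementation: a cache hit short-circuits the (simulated) API call.
cache = {}
calls = 0

def fake_api(request: str) -> str:
    global calls
    calls += 1                       # count how many "real" calls were made
    return f"response to {request!r}"

def create(request: str, cache_seed: int = 41) -> str:
    key = (cache_seed, request)
    if key not in cache:             # miss: pay for one real call
        cache[key] = fake_api(request)
    return cache[key]                # hit: identical output, no call

a = create("2+2?")
b = create("2+2?")                   # served from cache, no new call
c = create("2+2?", cache_seed=1)     # different seed -> separate cache entry
```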
GitHub
autogen
autogen/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
autogen
# Non-OpenAI Models AutoGen allows you to use non-OpenAI models through proxy servers that provide an OpenAI-compatible API or a [custom model client](https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models) class. Benefits of this flexibility include access to hundreds of models, assigning specialized mode...
GitHub
autogen
autogen/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
autogen
OpenAI-compatible API proxy server Any proxy server that provides an API that is compatible with [OpenAI's API](https://platform.openai.com/docs/api-reference) will work with AutoGen. These proxy servers can be cloud-based or running locally within your environment. ![Cloud or Local Proxy Servers](images/cloudlocalpr...
GitHub
autogen
autogen/website/docs/topics/non-openai-models/about-using-nonopenai-models.md
autogen
Custom Model Client class For more advanced users, you can create your own custom model client class, enabling you to define and load your own models. See the [AutoGen with Custom Models: Empowering Users to Use Their Own Inference Mechanism](/blog/2024/01/26/Custom-Models) blog post and [this notebook](/docs/notebook...
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
# Tips for Non-OpenAI Models Here are some tips for using non-OpenAI Models with AutoGen.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
Finding the right model Every model will perform differently across the operations within your AutoGen setup, such as speaker selection, coding, function calling, content creation, etc. On the whole, larger models (13B+) are better at following directions and provide more cohesive responses. Content creation c...
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
Validating your program Testing your AutoGen setup against a very large LLM, such as OpenAI's ChatGPT or Anthropic's Claude 3, can help validate your agent setup and configuration. Once a setup is performing as you want, you can replace the models for your agents with non-OpenAI models and iteratively tweak system mes...
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
Chat template AutoGen utilises a set of chat messages for the conversation between AutoGen/user and LLMs. Each chat message has a role attribute that is typically `user`, `assistant`, or `system`. A chat template is applied during inference and some chat templates implement rules about what roles can be used in specif...
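When a model's chat template rejects `system` messages, one common workaround is to fold the system prompt into the first user turn before sending. A hedged sketch of that transformation (the helper name is hypothetical, not an AutoGen API):

```python
# Sketch: fold a leading system message into the first user message, for
# chat templates that only accept user/assistant roles. Hypothetical helper.
def fold_system_message(messages: list) -> list:
    if not messages or messages[0]["role"] != "system":
        return messages
    system, rest = messages[0], messages[1:]
    if rest and rest[0]["role"] == "user":
        # Prepend the system prompt to the first user turn.
        merged = {"role": "user",
                  "content": system["content"] + "\n\n" + rest[0]["content"]}
        return [merged] + rest[1:]
    # No user turn to merge into: re-label the system prompt as user.
    return [{"role": "user", "content": system["content"]}] + rest

msgs = [{"role": "system", "content": "You are terse."},
        {"role": "user", "content": "Hi"}]
out = fold_system_message(msgs)
```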
GitHub
autogen
autogen/website/docs/topics/non-openai-models/best-tips-for-nonopenai-models.md
autogen
Discord Join AutoGen's [#alt-models](https://discord.com/channels/1153072414184452236/1201369716057440287) channel on their Discord and discuss non-OpenAI models and configurations.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
# vLLM [vLLM](https://github.com/vllm-project/vllm) is a locally run proxy and inference server, providing an OpenAI-compatible API. As it performs both the proxy and the inferencing, you don't need to install an additional inference server. Note: vLLM does not support OpenAI's [Function Calling](https://platform.open...
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Installing vLLM In your terminal: ```bash pip install vllm ```
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Choosing models vLLM will download new models when you run the server. The models are sourced from [Hugging Face](https://huggingface.co); a filtered list of text generation models is [here](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending), and vLLM has a list of [commonly used models](https:/...
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Chat Template vLLM uses a pre-defined chat template, unless the model has a chat template defined in its config file on Hugging Face. This can cause an issue if the chat template doesn't allow `'role' : 'system'` messages, as used in AutoGen. Therefore, we will create a chat template for the Mistral.AI Mistral 7B mod...
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Running vLLM proxy server To run vLLM with the chosen model and our chat template, in your terminal: ```bash python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2 --chat-template autogenmistraltemplate.jinja ``` By default, vLLM will run on `http://0.0.0.0:8000`.
GitHub
autogen
autogen/website/docs/topics/non-openai-models/local-vllm.md
autogen
Using vLLM with AutoGen Now that we have the URL for the vLLM proxy server, you can use it within AutoGen in the same way as OpenAI or cloud-based proxy servers. As you are running this proxy server locally, no API key is required. As `api_key` is a mandatory field for configurations within AutoGen, we put a dummy...
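A sketch of such a configuration, assuming the default vLLM address from the earlier step and vLLM's OpenAI-compatible `/v1` path; the dummy `api_key` value is arbitrary, since the local server does not check it:

```python
# Sketch of an AutoGen-style config pointing at a local vLLM server. The
# api_key is a mandatory field but its value is ignored by the local server.
config_list = [
    {
        "model": "mistralai/Mistral-7B-Instruct-v0.2",
        "base_url": "http://0.0.0.0:8000/v1",  # assumes default vLLM address
        "api_key": "NotRequired",               # dummy value
    }
]
llm_config = {"config_list": config_list, "cache_seed": None}
```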
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md
autogen
# Introduction to Transform Messages Why do we need to handle long contexts? The problem arises from several constraints and requirements: 1. Token limits: LLMs have token limits that restrict the amount of textual data they can process. If we exceed these limits, we may encounter errors or incur additional costs. By...
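The token-limit constraint above can be illustrated by trimming the oldest messages until the history fits a budget. A toy sketch, with word count standing in for real tokenization:

```python
# Toy history trimming: keep the most recent messages whose total "token"
# count (approximated here by word count) fits within the budget.
def count_tokens(message: dict) -> int:
    return len(message["content"].split())

def trim_history(messages: list, max_tokens: int) -> list:
    kept, total = [], 0
    for message in reversed(messages):   # walk from newest to oldest
        cost = count_tokens(message)
        if total + cost > max_tokens:
            break                        # budget exhausted: drop the rest
        kept.append(message)
        total += cost
    return list(reversed(kept))          # restore chronological order

history = [{"role": "user", "content": "one two three"},
           {"role": "assistant", "content": "four five"},
           {"role": "user", "content": "six"}]
trimmed = trim_history(history, max_tokens=3)
```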
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/intro_to_transform_messages.md
autogen
Transform Messages Capability The `TransformMessages` capability is designed to modify incoming messages before they are processed by the LLM agent. This can include limiting the number of messages, truncating messages to meet token limits, and more. :::info Requirements Install `pyautogen`: ```bash pip install pyau...
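Conceptually, the capability applies an ordered list of transforms to the message list before inference. A pure-Python sketch of that pipeline idea (not the actual `TransformMessages` implementation; the two transforms here are illustrative):

```python
# Conceptual pipeline sketch: each transform takes and returns a message
# list; they are applied in order before the messages reach the LLM.
def limit_history(messages, max_messages=2):
    # Keep only the most recent messages.
    return messages[-max_messages:]

def truncate_contents(messages, max_chars=20):
    # Truncate each message body to a character budget.
    return [{**m, "content": m["content"][:max_chars]} for m in messages]

def apply_transforms(messages, transforms):
    for transform in transforms:
        messages = transform(messages)
    return messages

history = [{"role": "user", "content": "a" * 50},
           {"role": "assistant", "content": "b" * 50},
           {"role": "user", "content": "done"}]
processed = apply_transforms(history, [limit_history, truncate_contents])
```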
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
# Compressing Text with LLMLingua Text compression is crucial for optimizing interactions with LLMs, especially when dealing with long prompts that can lead to higher costs and slower response times. LLMLingua is a tool designed to compress prompts effectively, enhancing the efficiency and cost-effectiveness of LLM op...
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 1: Compressing AutoGen Research Paper using LLMLingua We will look at how we can use `TextMessageCompressor` to compress an AutoGen research paper using `LLMLingua`. Here's how you can initialize `TextMessageCompressor` with LLMLingua, a text compressor that adheres to the `TextCompressor` protocol. ```python...
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 2: Integrating LLMLingua with `ConversableAgent` Now, let's integrate `LLMLingua` into a conversational agent within AutoGen. This allows dynamic compression of prompts before they are sent to the LLM. ```python import os import autogen from autogen.agentchat.contrib.capabilities import transform_messages s...
GitHub
autogen
autogen/website/docs/topics/handling_long_contexts/compressing_text_w_llmligua.md
autogen
Example 3: Modifying LLMLingua's Compression Parameters LLMLingua's flexibility allows for various configurations, such as customizing instructions for the LLM or setting specific token counts for compression. This example demonstrates how to set a target token count, enabling the use of models with smaller context si...
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
# Agent Backed by OpenAI Assistant API The GPTAssistantAgent is a powerful component of the AutoGen framework, utilizing OpenAI's Assistant API to enhance agents with advanced capabilities. This agent enables the integration of multiple tools such as the Code Interpreter, File Search, and Function Calling, allowing fo...
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
Create an OpenAI Assistant in AutoGen ```python import os from autogen import config_list_from_json from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent assistant_id = os.environ.get("ASSISTANT_ID", None) config_list = config_list_from_json("OAI_CONFIG_LIST") llm_config = { "config_list": c...
GitHub
autogen
autogen/website/docs/topics/openai-assistant/gpt_assistant_agent.md
autogen
Use OpenAI Assistant Built-in Tools and Function Calling ### Code Interpreter The [Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) empowers your agents to write and execute Python code in a secure environment provided by OpenAI. This unlocks several capabilities, including but not...
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
# Enhanced Inference `autogen.OpenAIWrapper` provides enhanced LLM inference for `openai>=1`. `autogen.Completion` is a drop-in replacement of `openai.Completion` and `openai.ChatCompletion` for enhanced LLM inference using `openai<1`. There are a number of benefits of using `autogen` to perform inference: performance...
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Tune Inference Parameters (for openai<1) Find a list of examples in this page: [Tune Inference Parameters Examples](../Examples.md#inference-hyperparameters-tuning) ### Choices to optimize The cost of using foundation models for text generation is typically measured in terms of the number of tokens in the input and ...
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
API unification `autogen.OpenAIWrapper.create()` can be used to create completions for both chat and non-chat models, and both OpenAI API and Azure OpenAI API. ```python from autogen import OpenAIWrapper # OpenAI endpoint client = OpenAIWrapper() # ChatCompletion response = client.create(messages=[{"role": "user", "c...
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Usage Summary The `OpenAIWrapper` from `autogen` tracks token counts and costs of your API calls. Use the `create()` method to initiate requests and `print_usage_summary()` to retrieve a detailed usage report, including total cost and token usage for both cached and actual requests. - `mode=["actual", "total"]` (defa...
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Caching Moved to [here](/docs/topics/llm-caching).
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Error handling ### Runtime error One can pass a list of configurations of different models/endpoints to mitigate rate limits and other runtime errors. For example, ```python client = OpenAIWrapper( config_list=[ { "model": "gpt-4", "api_key": os.environ.get("AZURE_OPENAI_API_KE...
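The fallback behaviour can be pictured as iterating through the config list until one endpoint succeeds. A conceptual sketch, not the actual `OpenAIWrapper` internals (the helper and the simulated error are hypothetical):

```python
# Conceptual sketch of config-list fallback: try each endpoint in order and
# return the first successful response; not the actual OpenAIWrapper logic.
def create_with_fallback(configs, call):
    last_error = None
    for config in configs:
        try:
            return call(config)          # first success wins
        except RuntimeError as error:    # e.g. a simulated rate-limit error
            last_error = error
    raise last_error                     # every endpoint failed

def flaky_call(config):
    # Simulated endpoint: the first model is rate-limited.
    if config["model"] == "gpt-4":
        raise RuntimeError("rate limited")
    return f"answer from {config['model']}"

result = create_with_fallback(
    [{"model": "gpt-4"}, {"model": "gpt-3.5-turbo"}], flaky_call
)
```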
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Templating If the provided prompt or message is a template, it will be automatically materialized with a given context. For example, ```python response = client.create( context={"problem": "How many positive integers, not exceeding 100, are multiples of 2 or 3 but not 4?"}, prompt="{problem} Solve the problem...
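Under simple assumptions, materializing a prompt template amounts to substituting the context into the template's format fields, which plain `str.format` already illustrates:

```python
# Minimal sketch of prompt templating: the context dict fills the template's
# format fields before the request is sent.
context = {"problem": "How many positive integers, not exceeding 100, "
                      "are multiples of 2 or 3 but not 4?"}
template = "{problem} Solve the problem carefully."
prompt = template.format(**context)
```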
GitHub
autogen
autogen/website/docs/Use-Cases/enhanced_inference.md
autogen
Logging When debugging or diagnosing an LLM-based system, it is often convenient to log the API calls and analyze them. ### For openai >= 1 Logging example: [View Notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) #### Start logging: ```python import autogen.runtime_logging ...
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
# Multi-agent Conversation Framework AutoGen offers a unified multi-agent conversation framework as a high-level abstraction of using foundation models. It features capable, customizable and conversable agents which integrate LLMs, tools, and humans via automated agent chat. By automating chat among multiple capable a...
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
Multi-agent Conversations ### A Basic Two-Agent Conversation Example Once the participating agents are constructed properly, one can start a multi-agent conversation session by an initialization step as shown in the following code: ```python # the assistant receives a message from the user, which contains the task d...
GitHub
autogen
autogen/website/docs/Use-Cases/agent_chat.md
autogen
For Further Reading _Interested in the research that leads to this package? Please check the following papers._ - [AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework](https://arxiv.org/abs/2308.08155). Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin L...
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
# AutoGen Studio FAQs
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: How do I specify the directory where files (e.g., the database) are stored? A: You can specify the directory where files are stored by setting the `--appdir` argument when running the application. For example, `autogenstudio ui --appdir /path/to/folder`. This will store the database (default) and other files in the speci...
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Where can I adjust the default skills, agent and workflow configurations? A: You can modify agent configurations directly from the UI or by editing the `init_db_samples` function in the `autogenstudio/database/utils.py` file which is used to initialize the database.
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: If I want to reset the entire conversation with an agent, how do I go about it? A: To reset your conversation history, you can delete the `database.sqlite` file in the `--appdir` directory. This will reset the entire conversation history. To delete user files, you can delete the `files` directory in the `--appdir` ...
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Is it possible to view the output and messages generated by the agents during interactions? A: Yes, you can view the generated messages in the debug console of the web UI, providing insights into the agent interactions. Alternatively, you can inspect the `database.sqlite` file for a comprehensive record of messages...
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Can I use other models with AutoGen Studio? A: Yes. AutoGen standardizes on the OpenAI model API format, and you can use any API server that offers an OpenAI-compliant endpoint. In the AutoGen Studio UI, each agent has an `llm_config` field where you can input your model endpoint details including `model`, `api_key`, ...
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: The server starts but I can't access the UI A: If you are running the server on a remote machine (or a local machine that fails to resolve localhost correctly), you may need to specify the host address. By default, the host address is set to `localhost`. You can specify the host address using the `--host <host>` ar...
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Can I export my agent workflows for use in a python app? A: Yes. In the Build view, you can click the export button to save your agent workflow as a JSON file. This file can be imported into a Python application using the `WorkflowManager` class. For example: ```python from autogenstudio import WorkflowManager # load ...
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Can I deploy my agent workflows as APIs? A: Yes. You can launch the workflow as an API endpoint from the command line using the `autogenstudio` command-line tool. For example: ```bash autogenstudio serve --workflow=workflow.json --port=5000 ``` Similarly, the workflow launch command above can be wrapped into a Docker...
GitHub
autogen
autogen/website/docs/autogen-studio/faqs.md
autogen
Q: Can I run AutoGen Studio in a Docker container? A: Yes, you can run AutoGen Studio in a Docker container. You can build the Docker image using the provided [Dockerfile](https://github.com/microsoft/autogen/blob/autogenstudio/samples/apps/autogen-studio/Dockerfile) and run the container using the following commands:...
GitHub
autogen
autogen/website/docs/autogen-studio/getting-started.md
autogen
# AutoGen Studio - Getting Started [![PyPI version](https://badge.fury.io/py/autogenstudio.svg)](https://badge.fury.io/py/autogenstudio) [![Downloads](https://static.pepy.tech/badge/autogenstudio/week)](https://pepy.tech/project/autogenstudio) ![ARA](./img/ara_stockprices.png) AutoGen Studio is a low-code interface...
GitHub
autogen
autogen/website/docs/autogen-studio/getting-started.md
autogen
Contribution Guide We welcome contributions to AutoGen Studio. We recommend the following general steps to contribute to the project: - Review the overall AutoGen project [contribution guide](https://github.com/microsoft/autogen?tab=readme-ov-file#contributing) - Please review the AutoGen Studio [roadmap](https://git...
GitHub
autogen
autogen/website/docs/autogen-studio/getting-started.md
autogen
A Note on Security AutoGen Studio is a research prototype and is not meant to be used in a production environment. Some baseline practices are encouraged, e.g., using a Docker code execution environment for your agents. However, other considerations such as rigorous tests related to jailbreaking, ensuring LLMs only have...
GitHub
autogen
autogen/website/docs/autogen-studio/getting-started.md
autogen
Acknowledgements AutoGen Studio is based on the [AutoGen](https://microsoft.github.io/autogen) project. It was adapted from a research prototype built in October 2023 (original credits: Gagan Bansal, Adam Fourney, Victor Dibia, Piali Choudhury, Saleema Amershi, Ahmed Awadallah, Chi Wang).
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
# Using AutoGen Studio AutoGen Studio supports the declarative creation of agent workflows; tasks can be specified and run in a chat interface for the agents to complete. The expected usage behavior is that developers can create skills and models, _attach_ them to agents, and compose agents into workflows that ca...
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
Building an Agent Workflow AutoGen Studio implements several entities that are ultimately composed into a workflow. ### Skills A skill is a Python function that implements the solution to a task. In general, a good skill has a descriptive name (e.g. `generate_images`), extensive docstrings, and good defaults (e.g., wri...
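Following the guidance above, a skill might look like this: a descriptive name, an extensive docstring, and sensible defaults. The function itself is illustrative, not one of AutoGen Studio's built-in skills:

```python
# Hypothetical skill following the conventions above: descriptive name,
# extensive docstring, good defaults. Not a built-in AutoGen Studio skill.
def save_text_to_file(text: str, filename: str = "output.txt") -> str:
    """Save the given text to a file and return the path it was written to.

    :param text: the content to write.
    :param filename: destination path; defaults to "output.txt" in the
        current working directory.
    """
    with open(filename, "w", encoding="utf-8") as handle:
        handle.write(text)
    return filename

path = save_text_to_file("hello", filename="demo_output.txt")
```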
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
Testing an Agent Workflow AutoGen Studio allows users to interactively test workflows on tasks and review resulting artifacts (such as images, code, and documents). ![AutoGen Studio Test Workflow](./img/workflow_test.png) Users can also review the “inner monologue” of agent workflows as they address tasks, and view ...
GitHub
autogen
autogen/website/docs/autogen-studio/usage.md
autogen
Exporting Agent Workflows Users can download the skills, agents, and workflow configurations they create as well as share and reuse these artifacts. AutoGen Studio also offers a seamless process to export workflows and deploy them as application programming interfaces (APIs) that can be consumed in other applications ...
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
# Docker Docker, an indispensable tool in modern software development, offers a compelling solution for AutoGen's setup. Docker allows you to create consistent environments that are portable and isolated from the host OS. With Docker, everything AutoGen needs to run, from the operating system to specific libraries, is...
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
Step 1: Install Docker - **General Installation**: Follow the [official Docker installation instructions](https://docs.docker.com/get-docker/). This is your first step towards a containerized environment, ensuring a consistent and isolated workspace for AutoGen. - **For Mac Users**: If you encounter issues with the D...
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
Step 2: Build a Docker Image AutoGen now provides updated Dockerfiles tailored for different needs. Building a Docker image is akin to setting the foundation for your project's environment: - **Autogen Basic**: Ideal for general use, this setup includes common Python libraries and essential dependencies. Perfect for ...
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
Step 3: Run AutoGen Applications from Docker Image Here's how you can run an application built with AutoGen, using the Docker image: 1. **Mount Your Directory**: Use the Docker `-v` flag to mount your local application directory to the Docker container. This allows you to develop on your local machine while running t...
GitHub
autogen
autogen/website/docs/installation/Docker.md
autogen
Additional Resources - Details on all the Dockerfile options can be found in the [Dockerfile](https://github.com/microsoft/autogen/.devcontainer/README.md) README. - For more information on Docker usage and best practices, refer to the [official Docker documentation](https://docs.docker.com). - Details on how to use t...
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
# Optional Dependencies
GitHub
autogen
autogen/website/docs/installation/Optional-Dependencies.md
autogen
LLM Caching To use LLM caching with Redis, you need to install the Python package with the option `redis`: ```bash pip install "pyautogen[redis]" ``` See [LLM Caching](Use-Cases/agent_chat.md#llm-caching) for details.