Dataset schema: `source` (string, 1 class) · `repository` (string, 1 class) · `file` (string, length 17–99) · `label` (string, 1 class) · `content` (string, length 11–13.3k)
GitHub
autogen
autogen/dotnet/website/articles/Create-your-own-middleware.md
autogen
## Coming soon
GitHub
autogen
autogen/dotnet/website/articles/Function-call-middleware.md
autogen
# Coming soon
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-support-more-messages.md
autogen
By default, @AutoGen.OpenAI.OpenAIChatAgent only supports the @AutoGen.Core.IMessage&lt;T&gt; type, where `T` is the original request or response message from `Azure.AI.OpenAI`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage and so on, you...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
The following example shows how to connect to third-party OpenAI API using @AutoGen.OpenAI.OpenAIChatAgent. [![](https://img.shields.io/badge/Open%20on%20Github-grey?logo=github)](https://github.com/microsoft/autogen/blob/main/dotnet/sample/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs)
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
Overview Many LLM applications/platforms support spinning up a chat server that is compatible with the OpenAI API, such as LM Studio, Ollama, and Mistral. This means that you can connect to these servers using the @AutoGen.OpenAI.OpenAIChatAgent. > [!NOTE] > Some platforms might not support all the features of OpenAI ...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
Prerequisites - Install the following packages: ```bash dotnet add package AutoGen.OpenAI --version AUTOGEN_VERSION ``` - Spin up a chat server that is compatible with the OpenAI API. The following example uses Ollama as the chat server and llama3 as the LLM model. ```bash ollama serve ```
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
Steps - Import the required namespaces: [!code-csharp[](../../sample/AutoGen.OpenAI.Sample/Connect_To_Ollama.cs?name=using_statement)] - Create a `CustomHttpClientHandler` class. The `CustomHttpClientHandler` class is used to customize the `HttpClientHandler`. In this example, we override the `SendAsync` method to redi...
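The redirect handler described above can be sketched as follows. This is a minimal sketch, not the sample's actual code: the default Ollama port (11434) and the exact redirect logic are assumptions.

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Redirect every outgoing request to a local OpenAI-compatible server
// (assumed here to be Ollama listening on http://localhost:11434).
public sealed class CustomHttpClientHandler : HttpClientHandler
{
    private readonly Uri _host;

    public CustomHttpClientHandler(string host = "http://localhost:11434")
        => _host = new Uri(host);

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Keep the original path and query, but swap scheme, host, and port.
        request.RequestUri = new Uri(_host, request.RequestUri!.PathAndQuery);
        return base.SendAsync(request, cancellationToken);
    }
}
```

An `HttpClient` built over this handler can then be passed to the OpenAI client so all API calls go to the local server instead of api.openai.com.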
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-connect-to-third-party-api.md
autogen
Sample Output The following is the sample output of the code snippet above: ![output](../images/articles/ConnectTo3PartyOpenAI/output.gif)
GitHub
autogen
autogen/dotnet/website/articles/getting-start.md
autogen
### Get start with AutoGen for dotnet [![dotnet-ci](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml/badge.svg)](https://github.com/microsoft/autogen/actions/workflows/dotnet-build.yml) [![Discord](https://img.shields.io/discord/1153072414184452236?logo=discord&style=flat)](https://discord.gg/pAb...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen-OpenAI-Overview.md
autogen
## AutoGen.OpenAI Overview AutoGen.OpenAI provides the following agents for OpenAI models: - @AutoGen.OpenAI.OpenAIChatAgent: A slim wrapper agent over `OpenAIClient`. This agent only supports the `IMessage<ChatRequestMessage>` message type. To support more message types like @AutoGen.Core.TextMessage, register the agent ...
GitHub
autogen
autogen/dotnet/website/articles/Create-type-safe-function-call.md
autogen
## Type-safe function call `AutoGen` provides a source generator that eases the trouble of manually crafting a function definition and a function call wrapper from a function. To use this feature, simply add the `AutoGen.SourceGenerator` package to your project and decorate your function with @AutoGen.Core.FunctionAttribute....
GitHub
autogen
autogen/dotnet/website/articles/Create-an-agent.md
autogen
## AssistantAgent [`AssistantAgent`](../api/AutoGen.AssistantAgent.yml) is a built-in agent in `AutoGen` that acts as an AI assistant. It uses an LLM to generate responses to user input. It also supports function calls if the underlying LLM model supports them (e.g. `gpt-3.5-turbo-0613`).
GitHub
autogen
autogen/dotnet/website/articles/Create-an-agent.md
autogen
Create an `AssistantAgent` using an OpenAI model. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/CreateAnAgent.cs?name=code_snippet_1)]
GitHub
autogen
autogen/dotnet/website/articles/Create-an-agent.md
autogen
Create an `AssistantAgent` using an Azure OpenAI model. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/CreateAnAgent.cs?name=code_snippet_2)]
GitHub
autogen
autogen/dotnet/website/articles/Print-message-middleware.md
autogen
@AutoGen.Core.PrintMessageMiddleware is a built-in @AutoGen.Core.IMiddleware that pretty-prints @AutoGen.Core.IMessage to the console. > [!NOTE] > @AutoGen.Core.PrintMessageMiddleware supports the following @AutoGen.Core.IMessage types: > - @AutoGen.Core.TextMessage > - @AutoGen.Core.MultiModalMessage > - @AutoGen.Core.Tool...
GitHub
autogen
autogen/dotnet/website/articles/Print-message-middleware.md
autogen
Use @AutoGen.Core.PrintMessageMiddleware in an agent You can use @AutoGen.Core.PrintMessageMiddlewareExtension.RegisterPrintMessage* to register the @AutoGen.Core.PrintMessageMiddleware with an agent. [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/PrintMessageMiddlewareCodeSnippet.cs?name=PrintMessageMidd...
GitHub
autogen
autogen/dotnet/website/articles/Print-message-middleware.md
autogen
Streaming message support @AutoGen.Core.PrintMessageMiddleware also supports streaming message types like @AutoGen.Core.TextMessageUpdate and @AutoGen.Core.ToolCallMessageUpdate. If you register @AutoGen.Core.PrintMessageMiddleware to a @AutoGen.Core.IStreamingAgent, it will format the streaming message and print it t...
GitHub
autogen
autogen/dotnet/website/articles/MistralChatAgent-count-token-usage.md
autogen
The following example shows how to create a `MistralAITokenCounterMiddleware` @AutoGen.Core.IMiddleware and count the token usage when chatting with @AutoGen.Mistral.MistralClientAgent. ### Overview To collect the token usage for the entire chat session, one easy solution is to simply collect all the responses from the agent...
GitHub
autogen
autogen/dotnet/website/articles/OpenAIChatAgent-simple-chat.md
autogen
The following example shows how to create an @AutoGen.OpenAI.OpenAIChatAgent and chat with it. First, import the required namespaces: [!code-csharp[](../../sample/AutoGen.BasicSamples/CodeSnippet/OpenAICodeSnippet.cs?name=using_statement)] Then, create an @AutoGen.OpenAI.OpenAIChatAgent and chat with it: [!code-csha...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.Ollama/Chat-with-llava.md
autogen
This sample shows how to use @AutoGen.Ollama.OllamaAgent to chat with the LLaVA model. To run this example, you need to have an Ollama server running and the `llava:latest` model installed. For how to set up an Ollama server, please refer to [Ollama](https://ollama.com/). > [!NOTE] > You can find the complete sampl...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.Ollama/Chat-with-llama.md
autogen
This example shows how to use @AutoGen.Ollama.OllamaAgent to connect to an Ollama server and chat with the Llama 3 model. To run this example, you need to have an Ollama server running and the `llama3:latest` model installed. For how to set up an Ollama server, please refer to [Ollama](https://ollama.com/). > [!NOTE] > ...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.SemanticKernel/Use-kernel-plugin-in-other-agents.md
autogen
In Semantic Kernel, a kernel plugin is a collection of kernel functions that can be invoked during LLM calls. Semantic Kernel provides a list of built-in plugins, like [core plugins](https://github.com/microsoft/semantic-kernel/tree/main/dotnet/src/Plugins/Plugins.Core), [web search plugin](https://github.com/microsoft...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelChatAgent-simple-chat.md
autogen
`AutoGen.SemanticKernel` provides built-in support for `ChatCompletionAgent` via @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent. By default the @AutoGen.SemanticKernel.SemanticKernelChatCompletionAgent only supports the original `ChatMessageContent` type via `IMessage<ChatMessageContent>`. To support more Au...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelAgent-simple-chat.md
autogen
You can chat with @AutoGen.SemanticKernel.SemanticKernelAgent using both streaming and non-streaming methods and use native `ChatMessageContent` type via `IMessage<ChatMessageContent>`. The following example shows how to create an @AutoGen.SemanticKernel.SemanticKernelAgent and chat with it using non-streaming method:...
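The flow described above can be sketched roughly as follows. This is a hedged sketch: the builder call, constructor arguments, and model id are assumptions based on common Semantic Kernel usage; consult the linked sample for the actual code.

```csharp
using System;
using AutoGen.Core;
using AutoGen.SemanticKernel;
using Microsoft.SemanticKernel;

// Build a kernel backed by an OpenAI chat model, then wrap it in an agent.
var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-3.5-turbo", apiKey: apiKey)
    .Build();
var agent = new SemanticKernelAgent(kernel, name: "assistant");

// Non-streaming: send a message and await the full reply.
var reply = await agent.SendAsync("Hello, what can you do?");
```

The streaming path uses the same agent through its `IStreamingAgent` surface, yielding incremental updates instead of a single reply.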
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.SemanticKernel/AutoGen-SemanticKernel-Overview.md
autogen
## AutoGen.SemanticKernel Overview AutoGen.SemanticKernel is a package that provides seamless integration with Semantic Kernel. It provides the following agents: - @AutoGen.SemanticKernel.SemanticKernelAgent: A slim wrapper agent over `Kernel` that only supports the original `ChatMessageContent` type via `IMessage<ChatMess...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.SemanticKernel/SemanticKernelAgent-support-more-messages.md
autogen
@AutoGen.SemanticKernel.SemanticKernelAgent only supports the original `ChatMessageContent` type via `IMessage<ChatMessageContent>`. To support more AutoGen built-in message types like @AutoGen.Core.TextMessage, @AutoGen.Core.ImageMessage, @AutoGen.Core.MultiModalMessage, you can register the agent with @AutoGen.Semant...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.Gemini/Overview.md
autogen
# AutoGen.Gemini Overview AutoGen.Gemini is a package that provides seamless integration with Google Gemini. It provides the following agent: - @AutoGen.Gemini.GeminiChatAgent: The agent that connects to Google Gemini or Vertex AI Gemini. It supports chat, multi-modal chat, and function call. AutoGen.Gemini also pro...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.Gemini/Overview.md
autogen
Examples You can find more examples under the [gemini sample project](https://github.com/microsoft/autogen/tree/main/dotnet/sample/AutoGen.Gemini.Sample).
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.Gemini/Chat-with-google-gemini.md
autogen
This example shows how to use @AutoGen.Gemini.GeminiChatAgent to connect to Google AI Gemini and chat with Gemini model. To run this example, you need to have a Google AI Gemini API key. For how to get a Google Gemini API key, please refer to [Google Gemini](https://gemini.google.com/). > [!NOTE] > You can find the c...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.Gemini/Image-chat-with-gemini.md
autogen
This example shows how to use @AutoGen.Gemini.GeminiChatAgent for image chat with Gemini model. To run this example, you need to have a project on Google Cloud with access to Vertex AI API. For more information please refer to [Google Vertex AI](https://cloud.google.com/vertex-ai/docs). > [!NOTE] > You can find the ...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.Gemini/Function-call-with-gemini.md
autogen
This example shows how to use @AutoGen.Gemini.GeminiChatAgent to make function calls. This example is modified from the [gemini-api function call example](https://ai.google.dev/gemini-api/docs/function-calling). To run this example, you need to have a project on Google Cloud with access to Vertex AI API. For more informatio...
GitHub
autogen
autogen/dotnet/website/articles/AutoGen.Gemini/Chat-with-vertex-gemini.md
autogen
This example shows how to use @AutoGen.Gemini.GeminiChatAgent to connect to Vertex AI Gemini API and chat with Gemini model. To run this example, you need to have a project on Google Cloud with access to Vertex AI API. For more information please refer to [Google Vertex AI](https://cloud.google.com/vertex-ai/docs). >...
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
This tutorial shows how to use an AutoGen.Net agent as a model in AG Studio.
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 1. Create an empty .NET web app and install the AutoGen and AutoGen.WebAPI packages ```bash dotnet new web dotnet add package AutoGen dotnet add package AutoGen.WebAPI ```
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 2. Replace the Program.cs with the following code ```csharp using AutoGen.Core; using AutoGen.Service; var builder = WebApplication.CreateBuilder(args); var app = builder.Build(); var helloWorldAgent = new HelloWorldAgent(); app.UseAgentAsOpenAIChatCompletionEndpoint(helloWorldAgent); app.Run(); class HelloWorldAge...
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 3: Start the web app Run the following command to start the web API ```bash dotnet run ``` The web API will listen at `http://localhost:5264/v1/chat/completions` ![terminal](../images/articles/UseAutoGenAsModelinAGStudio/Terminal.png)
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 4: In another terminal, start autogen-studio ```bash autogenstudio ui ```
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 5: Navigate to the AutoGen Studio UI and add the hello world agent as an OpenAI model ### Step 5.1: Go to the model tab ![The Model Tab](../images/articles/UseAutoGenAsModelinAGStudio/TheModelTab.png) ### Step 5.2: Select the "OpenAI model" card ![Open AI model Card](../images/articles/UseAutoGenAsModelinAGStudio/Step5.2OpenAIMo...
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Step 6: Create a hello world agent that uses the hello world model ![Create a hello world agent that uses the hello world model](../images/articles/UseAutoGenAsModelinAGStudio/Step6.png) ![Agent Configuration](../images/articles/UseAutoGenAsModelinAGStudio/Step6b.png)
GitHub
autogen
autogen/dotnet/website/tutorial/Use-AutoGen.Net-agent-as-model-in-AG-Studio.md
autogen
Final Step: Use the hello world agent in workflow ![Use the hello world agent in workflow](../images/articles/UseAutoGenAsModelinAGStudio/FinalStepsA.png) ![Use the hello world agent in workflow](../images/articl...
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
This tutorial shows how to perform image chat with an agent, using @AutoGen.OpenAI.OpenAIChatAgent as an example. > [!NOTE] > To chat with images, the model behind the agent needs to support image input. Here is a partial list of models that support image input: > - gpt-4o > - gemini-1.5 > - llava > - claud...
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 1: Install AutoGen First, install the AutoGen package using the following command: ```bash dotnet add package AutoGen ```
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 2: Add Using Statements [!code-csharp[Using Statements](../../sample/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Using)]
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 3: Create an @AutoGen.OpenAI.OpenAIChatAgent [!code-csharp[Create an OpenAIChatAgent](../../sample/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs?name=Create_Agent)]
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 4: Prepare Image Message In AutoGen, you can create an image message using either @AutoGen.Core.ImageMessage or @AutoGen.Core.MultiModalMessage. The @AutoGen.Core.ImageMessage takes a single image as input, whereas the @AutoGen.Core.MultiModalMessage allows you to pass multiple modalities like text or image. Her...
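For illustration, the two message shapes might be constructed like this. This is a sketch; the exact constructor overloads and the image URL are assumptions, and the linked sample is authoritative.

```csharp
using System;
using AutoGen.Core;

// A single image, referenced by URL.
var imageMessage = new ImageMessage(Role.User, new Uri("https://example.com/photo.png"));

// Text and image combined into one multi-modal message.
var multiModalMessage = new MultiModalMessage(Role.User, new IMessage[]
{
    new TextMessage(Role.User, "What do you see in this picture?"),
    imageMessage,
});
```

Use @AutoGen.Core.ImageMessage when the prompt is just the image; use @AutoGen.Core.MultiModalMessage when text and image must travel together in one turn.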
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Step 5: Generate Response To generate a response, you can use one of the overloaded @AutoGen.Core.AgentExtension.SendAsync* methods. The following code shows how to generate a response with an image message: [!code-csharp[Generate Response](../../sample/AutoGen.BasicSamples/GettingStart/Image_Chat_With_Agent.cs...
GitHub
autogen
autogen/dotnet/website/tutorial/Image-chat-with-agent.md
autogen
Further Reading - [Image chat with gemini](../articles/AutoGen.Gemini/Image-chat-with-gemini.md) - [Image chat with llava](../articles/AutoGen.Ollama/Chat-with-llava.md)
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
This tutorial shows how to use tools in an agent.
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
What is a tool Tools are pre-defined functions in the user's project that an agent can invoke. An agent can use tools to perform actions like searching the web, performing calculations, etc. Tools can greatly extend the capabilities of an agent. > [!NOTE] > To use tools with an agent, the backend LLM model used by the agent needs to ...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Key Concepts - @AutoGen.Core.FunctionContract: The contract of a function that agent can invoke. It contains the function name, description, parameters schema, and return type. - @AutoGen.Core.ToolCallMessage: A message type that represents a tool call request in AutoGen.Net. - @AutoGen.Core.ToolCallResultMessage: A me...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Install AutoGen and AutoGen.SourceGenerator First, install the AutoGen and AutoGen.SourceGenerator packages using the following commands: ```bash dotnet add package AutoGen dotnet add package AutoGen.SourceGenerator ``` Also, you might need to enable structured XML documentation support by setting `GenerateDocumentationFile...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Add Using Statements [!code-csharp[Using Statements](../../sample/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Using)]
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Create agent Create an @AutoGen.OpenAI.OpenAIChatAgent with `gpt-3.5-turbo` as the backend LLM model. [!code-csharp[Create an agent with tools](../../sample/AutoGen.BasicSamples/GettingStart/Use_Tools_With_Agent.cs?name=Create_Agent)]
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Define `Tool` class and create tools Create a `public partial` class to host the tools you want to use in AutoGen agents. The methods have to be `public` instance methods and their return type must be `Task<string>`. After the methods are defined, mark them with the @AutoGen.Core.FunctionAttribute attribute. In the following ...
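Following the convention just described, a tool class might look like this. The weather function and its body are made up for illustration; only the shape (public partial class, public instance method returning `Task<string>`, `[Function]` attribute) comes from the text above.

```csharp
using System.Threading.Tasks;
using AutoGen.Core;

public partial class Tools
{
    /// <summary>
    /// Get the current weather for a city.
    /// </summary>
    /// <param name="city">The name of the city.</param>
    [Function]
    public async Task<string> GetWeather(string city)
    {
        // Stub implementation; a real tool would call a weather API here.
        return await Task.FromResult($"The weather in {city} is sunny.");
    }
}
```

The XML doc comments matter: the source generator uses them to populate the function description and parameter descriptions in the generated contract.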
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Tool call without auto-invoke In this case, when receiving a @AutoGen.Core.ToolCallMessage, the agent will not automatically invoke the tool. Instead, the agent will return the original message back to the user. The user can then decide whether to invoke the tool or not. ![single-turn tool call without auto-invoke](.....
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Tool call with auto-invoke In this case, the agent will automatically invoke the tool when receiving a @AutoGen.Core.ToolCallMessage and return the @AutoGen.Core.ToolCallAggregateMessage which contains both the tool call request and the tool call result. ![single-turn tool call with auto-invoke](../images/articles/Cre...
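The auto-invoke setup can be sketched as follows. This assumes the source generator has produced a `GetWeatherFunctionContract` and a `GetWeatherWrapper` for a hypothetical `GetWeather` tool; the names and the pre-existing `agent` variable are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AutoGen.Core;

var tools = new Tools();

// Map each function contract to its generated wrapper so the middleware
// can invoke the tool automatically when the LLM requests it.
var functionCallMiddleware = new FunctionCallMiddleware(
    functions: new[] { tools.GetWeatherFunctionContract },
    functionMap: new Dictionary<string, Func<string, Task<string>>>
    {
        [tools.GetWeatherFunctionContract.Name] = tools.GetWeatherWrapper,
    });

// `agent` is an existing IAgent, e.g. an OpenAIChatAgent.
var agentWithTools = agent.RegisterMiddleware(functionCallMiddleware);
```

Omitting `functionMap` gives the without-auto-invoke behavior described earlier: the tool call request is returned to the caller instead of being executed.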
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Send the tool call result back to LLM to generate further response In some cases, you may want to send the tool call result back to the LLM to generate further response. To do this, you can send the tool call response from agent back to the LLM by calling the `SendAsync` method of the agent. [!code-csharp[Generate Res...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Parallel tool call Some LLM models support parallel tool call, which returns multiple tool calls in one single message. Note that @AutoGen.Core.FunctionCallMiddleware has already handled the parallel tool call for you. When it receives a @AutoGen.Core.ToolCallMessage that contains multiple tool calls, it will automatic...
GitHub
autogen
autogen/dotnet/website/tutorial/Create-agent-with-tools.md
autogen
Further Reading - [Function call with openai](../articles/OpenAIChatAgent-use-function-call.md) - [Function call with gemini](../articles/AutoGen.Gemini/Function-call-with-gemini.md) - [Function call with local model](../articles/Function-call-with-ollama-and-litellm.md) - [Use kernel plugin in other agents](../article...
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
This tutorial shows how to generate response using an @AutoGen.Core.IAgent by taking @AutoGen.OpenAI.OpenAIChatAgent as an example. > [!NOTE] > AutoGen.Net provides the following agents to connect to different LLM platforms. Generating responses using these agents is similar to the example shown below. > - @AutoGen.Op...
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Step 1: Install AutoGen First, install the AutoGen package using the following command: ```bash dotnet add package AutoGen ```
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Step 2: Add Using Statements [!code-csharp[Using Statements](../../sample/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Using)]
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Step 3: Create an @AutoGen.OpenAI.OpenAIChatAgent > [!NOTE] > The @AutoGen.OpenAI.Extension.OpenAIAgentExtension.RegisterMessageConnector* method registers an @AutoGen.OpenAI.OpenAIChatRequestMessageConnector middleware which converts OpenAI message types to AutoGen message types. This step is necessary when you want ...
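A minimal sketch of this step follows. The constructor argument names, the model id, and the environment variable are assumptions; the linked snippet is authoritative.

```csharp
using System;
using AutoGen.Core;
using AutoGen.OpenAI;
using AutoGen.OpenAI.Extension;
using Azure.AI.OpenAI;

var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");
var openAIClient = new OpenAIClient(apiKey);

// RegisterMessageConnector converts OpenAI message types to AutoGen
// message types so the agent can consume TextMessage and friends.
var agent = new OpenAIChatAgent(
        openAIClient: openAIClient,
        name: "assistant",
        modelName: "gpt-3.5-turbo")
    .RegisterMessageConnector();
```

Without the connector, the agent only accepts `IMessage<ChatRequestMessage>` as noted above.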
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Step 4: Generate Response To generate a response, you can use one of the overloaded @AutoGen.Core.AgentExtension.SendAsync* methods. The following code shows how to generate a response with a text message: [!code-csharp[Generate Response](../../sample/AutoGen.BasicSamples/GettingStart/Chat_With_Agent.cs?name=Chat_W...
GitHub
autogen
autogen/dotnet/website/tutorial/Chat-with-an-agent.md
autogen
Further Reading - [Chat with google gemini](../articles/AutoGen.Gemini/Chat-with-google-gemini.md) - [Chat with vertex gemini](../articles/AutoGen.Gemini/Chat-with-vertex-gemini.md) - [Chat with Ollama](../articles/AutoGen.Ollama/Chat-with-llama.md) - [Chat with Semantic Kernel Agent](../articles/AutoGen.SemanticKernel...
GitHub
autogen
autogen/dotnet/website/release_note/0.0.17.md
autogen
# AutoGen.Net 0.0.17 Release Notes
GitHub
autogen
autogen/dotnet/website/release_note/0.0.17.md
autogen
🌟 What's New 1. **.NET Core Target Framework Support** ([#3203](https://github.com/microsoft/autogen/issues/3203)) - πŸš€ Added support for .NET Core to ensure compatibility and enhanced performance of AutoGen packages across different platforms. 2. **Kernel Support in Interactive Service Constructor** ([#3181](htt...
GitHub
autogen
autogen/dotnet/website/release_note/0.0.17.md
autogen
πŸš€ Improvements 1. **Cancellation Token Addition in Graph APIs** ([#3111](https://github.com/microsoft/autogen/issues/3111)) - πŸ”„ Added cancellation tokens to async APIs in the `AutoGen.Core.Graph` class to follow best practices and enhance the control flow.
GitHub
autogen
autogen/dotnet/website/release_note/0.0.17.md
autogen
⚠️ API Breaking Changes 1. **FunctionDefinition Generation Stopped in Source Generator** ([#3133](https://github.com/microsoft/autogen/issues/3133)) - πŸ›‘ Stopped generating `FunctionDefinition` from `Azure.AI.OpenAI` in the source generator to eliminate unnecessary package dependencies. Migration guide: - ➑️ U...
GitHub
autogen
autogen/dotnet/website/release_note/0.0.17.md
autogen
πŸ“š Documentation 1. **Consume AutoGen.Net Agent in AG Studio** ([#3142](https://github.com/microsoft/autogen/issues/3142)) - Added detailed documentation on using AutoGen.Net Agent as a model in AG Studio, including examples of starting an OpenAI chat backend and integrating third-party OpenAI models. 2. **Middlew...
GitHub
autogen
autogen/dotnet/website/release_note/update.md
autogen
##### Update on 0.0.15 (2024-06-13) Milestone: [AutoGen.Net 0.0.15](https://github.com/microsoft/autogen/milestone/3) ###### Highlights - [Issue 2851](https://github.com/microsoft/autogen/issues/2851) `AutoGen.Gemini` package for Gemini support. Examples can be found [here](https://github.com/microsoft/autogen/tree/ma...
GitHub
autogen
autogen/dotnet/website/release_note/0.0.16.md
autogen
# AutoGen.Net 0.0.16 Release Notes We are excited to announce the release of **AutoGen.Net 0.0.16**. This release includes several new features, bug fixes, improvements, and important updates. Below are the detailed release notes: **[Milestone: AutoGen.Net 0.0.16](https://github.com/microsoft/autogen/milestone/4)**
GitHub
autogen
autogen/dotnet/website/release_note/0.0.16.md
autogen
πŸ“¦ New Features 1. **Deprecate `IStreamingMessage`** ([#3045](https://github.com/microsoft/autogen/issues/3045)) - Replaced `IStreamingMessage` and `IStreamingMessage<T>` with `IMessage` and `IMessage<T>`. 2. **Add example for using ollama + LiteLLM for function call** ([#3014](https://github.com/microsoft/autogen/issu...
GitHub
autogen
autogen/dotnet/website/release_note/0.0.16.md
autogen
πŸ› Bug Fixes 1. **SourceGenerator doesn't work when function's arguments are empty** ([#2976](https://github.com/microsoft/autogen/issues/2976)) - Fixed an issue where the SourceGenerator failed when function arguments were empty. 2. **Add content field in ToolCallMessage** ([#2975](https://github.com/microsoft/autogen...
GitHub
autogen
autogen/dotnet/website/release_note/0.0.16.md
autogen
πŸš€ Improvements 1. **Sample update - Add getting-start samples for BasicSample project** ([#2859](https://github.com/microsoft/autogen/issues/2859)) - Re-organized the `AutoGen.BasicSample` project to include only essential getting-started examples, simplifying complex examples. 2. **Graph constructor should consider n...
GitHub
autogen
autogen/dotnet/website/release_note/0.0.16.md
autogen
⚠️ API Breaking Changes 1. **Deprecate `IStreamingMessage`** ([#3045](https://github.com/microsoft/autogen/issues/3045)) - **Migration guide:** Deprecating `IStreamingMessage` will introduce breaking changes, particularly for `IStreamingAgent` and `IStreamingMiddleware`. Replace all `IStreamingMessage` and `IStreamingMessag...
GitHub
autogen
autogen/dotnet/website/release_note/0.0.16.md
autogen
πŸ“š Document Update 1. **Add example for using ollama + LiteLLM for function call** ([#3014](https://github.com/microsoft/autogen/issues/3014)) - Added a tutorial to the website for using ollama with LiteLLM. Thank you to all the contributors for making this release possible. We encourage everyone to upgrade to AutoGen...
GitHub
autogen
autogen/dotnet/src/AutoGen.SourceGenerator/README.md
autogen
### AutoGen.SourceGenerator This package carries a source generator that adds support for type-safe function definition generation. Simply mark a method with the `Function` attribute, and the source generator will generate a function definition and a function call wrapper for you. ### Get started First, add the following ...
GitHub
autogen
autogen/dotnet/src/AutoGen.LMStudio/README.md
autogen
## AutoGen.LMStudio This package provides support for consuming an OpenAI-like API from an LM Studio local server.
GitHub
autogen
autogen/dotnet/src/AutoGen.LMStudio/README.md
autogen
Installation To use `AutoGen.LMStudio`, add the following package to your `.csproj` file: ```xml <ItemGroup> <PackageReference Include="AutoGen.LMStudio" Version="AUTOGEN_VERSION" /> </ItemGroup> ```
GitHub
autogen
autogen/dotnet/src/AutoGen.LMStudio/README.md
autogen
Usage ```csharp using AutoGen.LMStudio; var localServerEndpoint = "localhost"; var port = 5000; var lmStudioConfig = new LMStudioConfig(localServerEndpoint, port); var agent = new LMStudioAgent( name: "agent", systemMessage: "You are an agent that help user to do some tasks.", lmStudioConfig: lmStudioConfig...
GitHub
autogen
autogen/dotnet/src/AutoGen.LMStudio/README.md
autogen
Update history ### Update on 0.0.7 (2024-02-11) - Add `LMStudioAgent` to support consuming openai-like API from LMStudio local server.
GitHub
autogen
autogen/.devcontainer/README.md
autogen
# Dockerfiles and Devcontainer Configurations for AutoGen Welcome to the `.devcontainer` directory! Here you'll find Dockerfiles and devcontainer configurations that are essential for setting up your AutoGen development environment. Each Dockerfile is tailored for different use cases and requirements. Below is a brief...
GitHub
autogen
autogen/.devcontainer/README.md
autogen
Dockerfile Descriptions ### base - **Purpose**: This Dockerfile, i.e., `./Dockerfile`, is designed for basic setups. It includes common Python libraries and essential dependencies required for general usage of AutoGen. - **Usage**: Ideal for those just starting with AutoGen or for general-purpose applications. - **Bu...
GitHub
autogen
autogen/.devcontainer/README.md
autogen
Customizing Dockerfiles Feel free to modify these Dockerfiles for your specific project needs. Here are some common customizations: - **Adding New Dependencies**: If your project requires additional Python packages, you can add them using the `RUN pip install` command. - **Changing the Base Image**: You may change th...