genaitiwari committed on
Commit 7ddd05c · Parent(s): 7734f80

rag chat and readme updated
README.md CHANGED
@@ -11,10 +11,60 @@ license: apache-2.0
 ---
 # AutogenMultiAgent
 Autogen Multiagent
+AutoGen is an open-source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks. AutoGen aims to provide an easy-to-use and flexible framework for accelerating development and research on agentic AI, much like PyTorch does for deep learning. It offers features such as agents that can converse with other agents, support for LLMs and tool use, autonomous and human-in-the-loop workflows, and multi-agent conversation patterns.
+
+## AutoGen Overview
+![alt text](image-2.png)
 
 ## Code execution
 ![alt text](image.png)
+
+![alt text](image-3.png)
+
+## RAG Chat
+![alt text](image-1.png)
+
+Qdrant is a high-performance vector search engine/database.
+
+This notebook demonstrates the usage of QdrantRetrieveUserProxyAgent for RAG, based on agentchat_RetrieveChat.ipynb.
+
+RetrieveChat is a conversational system for retrieval-augmented code generation and question answering. In this notebook, we demonstrate how to use RetrieveChat to generate code and answer questions based on customized documentation that is not present in the LLM's training dataset. RetrieveChat uses the RetrieveAssistantAgent and QdrantRetrieveUserProxyAgent, similar to the usage of AssistantAgent and UserProxyAgent in other notebooks (e.g., Automated Task Solving with Code Generation, Execution & Debugging).
+
+:::info Requirements
+Some extra dependencies are needed for this notebook, which can be installed via pip:
+
+```bash
+pip install "pyautogen[retrievechat-qdrant]" "flaml[automl]"
+```
+
+For more information, please refer to the [installation guide](/docs/installation/).
+:::
+
+## Groupchat with LlamaIndex agents
+LlamaIndex agents can use planning strategies to answer user questions, and they are easy to integrate into AutoGen.
+
+Requirements:
+
+%pip install pyautogen llama-index llama-index-tools-wikipedia llama-index-readers-wikipedia wikipedia
 
+## Defaults
+### LLM_OPTIONS
+- Groq
+### USECASE_OPTIONS
+- MultiAgent Code Execution
+- MultiAgent Chat
+- RAG Chat
+- With LLamaIndex Tool
+- AgentChat Sql Spider
+### GROQ_MODEL_OPTIONS
+- mixtral-8x7b-32768
+- llama3-8b-8192
+- llama3-70b-8192
+- gemma-7b-it
 
+## Important links
+- https://microsoft.github.io/autogen/docs/notebooks
+- https://microsoft.github.io/autogen/docs/tutorial/code-executors
+- https://microsoft.github.io/autogen/docs/tutorial/tool-use
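The RetrieveChat flow described above splits the docs into chunks (cf. `chunk_token_size` in the retrieve config) and retrieves the best-matching chunks for each question before the assistant answers. The following is only a toy sketch of that retrieve-then-answer idea, with naive word-overlap scoring standing in for the embedding model and Qdrant vector search; every name in it is illustrative, not part of AutoGen's API.

```python
# Toy retrieval sketch: chunk a document, score chunks by word overlap with
# the question, return the best chunk. The real pipeline uses sentence
# embeddings and Qdrant similarity search instead of this overlap score.
def chunk(text, size=6):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks):
    q = set(question.lower().split())
    # pick the chunk sharing the most words with the question
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

doc = ("AutoGen supports multi-agent conversation. "
       "Qdrant is a high-performance vector search engine. "
       "RetrieveChat answers questions from customized documentation.")
chunks = chunk(doc, size=6)
print(retrieve("high-performance vector search engine", chunks))
```

The retrieved chunk would then be injected into the assistant's prompt as context; that prompt-augmentation step is what QdrantRetrieveUserProxyAgent automates.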
app.py CHANGED
@@ -7,6 +7,7 @@ from src.usecases.multiagentcodeexecution import MultiAgentCodeExecution
 from src.usecases.withllamaIndex import WithLlamaIndexMultiAgentChat
 from src.usecases.agentchatsqlspider import AgentChatSqlSpider
 from src.LLMS.groqllm import GroqLLM
+from src.usecases.multiagentragchat import MultiAgentRAGChat
 
 
 # MAIN Function START
@@ -30,14 +31,19 @@ if __name__ == "__main__":
     if problem:
         # start multichat
         if user_input['selected_usecase'] == "MultiAgent Code Execution":
-            obj_usecases_multichat = MultiAgentCodeExecution(assistant_name=['Assistant',"Product_Manager"], user_proxy_name='Userproxy', llm_config=llm_config,
-                                                             problem=problem)
-            obj_usecases_multichat.run()
+            obj_usecases_multichatexec = MultiAgentCodeExecution(assistant_name=['Assistant',"Product_Manager"], user_proxy_name='Userproxy', llm_config=llm_config,
+                                                                 problem=problem)
+            obj_usecases_multichatexec.run()
 
         elif user_input['selected_usecase'] == "MultiAgent Chat":
             obj_usecases_multichat = MultiAgentChat(assistant_name='Assistant', user_proxy_name='Userproxy', llm_config=llm_config,
                                                     problem=problem)
             obj_usecases_multichat.run()
+
+        elif user_input['selected_usecase'] == "RAG Chat":
+            obj_usecases_rag_multichat = MultiAgentRAGChat(assistant_name='Assistant', user_proxy_name='Userproxy', llm_config=llm_config,
+                                                           problem=problem)
+            obj_usecases_rag_multichat.run()
 
         elif user_input['selected_usecase'] == "With LLamaIndex Tool":
             obj_usecases_with_llamaIndex_multichat = WithLlamaIndexMultiAgentChat(assistant_name='Assistant', user_proxy_name='Userproxy',
codegen/configfile.ini ADDED
@@ -0,0 +1,6 @@
+[DEFAULT]
+PAGE_TITLE = AUTOGEN IN ACTION
+LLM_OPTIONS = Groq, Huggingface
+USECASE_OPTIONS = MultiAgent Code Execution, MultiAgent Chat, RAG Chat, With LLamaIndex Tool, AgentChat Sql Spider
+GROQ_MODEL_OPTIONS = mixtral-8x7b-32768, llama3-8b-8192, llama3-70b-8192, gemma-7b-it
+
codegen/configfile.py ADDED
@@ -0,0 +1,20 @@
+from configparser import ConfigParser
+
+
+class Config:
+    def __init__(self, config_file="configfile.ini"):
+        self.config = ConfigParser()
+        self.config.read(config_file)
+
+    def get_llm_options(self):
+        return self.config["DEFAULT"].get("LLM_OPTIONS").split(", ")
+
+    def get_usecase_options(self):
+        return self.config["DEFAULT"].get("USECASE_OPTIONS").split(", ")
+
+    def get_groq_model_options(self):
+        return self.config["DEFAULT"].get("GROQ_MODEL_OPTIONS").split(", ")
+
+    def get_page_title(self):
+        return self.config["DEFAULT"].get("PAGE_TITLE")
+
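The `Config` class above stores option lists as comma-separated INI values and splits them back into Python lists. A hypothetical standalone demo of that pattern (using only the standard library, with a temp file instead of `codegen/configfile.ini`):

```python
# Demo of the INI-backed options pattern: write a minimal config, read it
# with ConfigParser, and split a ", "-separated value back into a list.
import os
import tempfile
from configparser import ConfigParser

ini_text = """[DEFAULT]
PAGE_TITLE = AUTOGEN IN ACTION
LLM_OPTIONS = Groq, Huggingface
"""

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
    f.write(ini_text)
    path = f.name

config = ConfigParser()
config.read(path)

# ", " is the separator used in the INI file, so .split(", ") recovers the list
llm_options = config["DEFAULT"].get("LLM_OPTIONS").split(", ")
page_title = config["DEFAULT"].get("PAGE_TITLE")
print(llm_options)   # ['Groq', 'Huggingface']
print(page_title)    # AUTOGEN IN ACTION

os.unlink(path)
```

Note that this split is fragile if an option value ever contains ", " itself; a JSON or per-line list would be more robust.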
image-1.png ADDED
image-2.png ADDED
image-3.png ADDED
requirements.txt CHANGED
@@ -6,4 +6,8 @@ llama-index-tools-wikipedia
 llama-index-readers-wikipedia
 wikipedia
 llama-index-llms-groq
 spider-env
+pyautogen[retrievechat]
+pyautogen[retrievechat-qdrant]
+flaml[automl]
+sentence_transformers
src/LLMS/groqllm.py CHANGED
@@ -10,9 +10,12 @@ class GroqLLM:
 
     def groq_llm_config(self):
         config_list = [
-            {"api_type": 'groq',
-             "model": self.user_controls_input['selected_groq_model'], "api_key": st.session_state["GROQ_API_KEY"],
-             "cache_seed": None}
+            {
+                "api_type": 'groq',
+                "model": self.user_controls_input['selected_groq_model'],
+                "api_key": st.session_state["GROQ_API_KEY"],
+                "cache_seed": None
+            }
         ]
 
         llm_config = {"config_list": config_list, "request_timeout": 60}
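For reference, the `llm_config` value this method builds is just a plain dict. A minimal sketch of its shape, with placeholder values standing in for the Streamlit session state and the user's model selection:

```python
# Sketch of the llm_config structure assembled by groq_llm_config().
# The values below are placeholders, not real credentials.
selected_model = "llama3-8b-8192"      # in the app: user_controls_input['selected_groq_model']
api_key = "gsk-placeholder"            # in the app: st.session_state["GROQ_API_KEY"]

config_list = [
    {
        "api_type": "groq",
        "model": selected_model,
        "api_key": api_key,
        "cache_seed": None,            # None disables response caching
    }
]
llm_config = {"config_list": config_list, "request_timeout": 60}
print(llm_config["config_list"][0]["model"])  # llama3-8b-8192
```

Agents receive this dict via their `llm_config` parameter, so every agent in a use case shares the same model and key.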
src/agents/qdrantretrieveuserproxyagent.py ADDED
@@ -0,0 +1,10 @@
+from autogen import UserProxyAgent
+import streamlit as st
+from autogen.agentchat.contrib.qdrant_retrieve_user_proxy_agent import QdrantRetrieveUserProxyAgent
+
+
+class TrackableQdrantRetrieveUserProxyAgent(QdrantRetrieveUserProxyAgent):
+    def _process_received_message(self, message, sender, silent):
+        with st.chat_message("user"):
+            st.write(message)
+        return super()._process_received_message(message, sender, silent)
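The "Trackable" wrapper pattern used here is: override the message hook, surface the message in the UI, then delegate to the parent implementation. A generic sketch of that pattern with stand-in classes (a plain list replaces `st.chat_message`/`st.write`, and `BaseAgent` is not an autogen class):

```python
# Generic override-and-delegate sketch of the Trackable agent pattern.
class BaseAgent:
    def _process_received_message(self, message, sender, silent):
        # parent behaviour: just hand the message back
        return message

displayed = []  # stands in for the Streamlit chat UI

class TrackableAgent(BaseAgent):
    def _process_received_message(self, message, sender, silent):
        displayed.append(message)  # side effect, like st.write(message)
        # always delegate so the agent's normal processing still runs
        return super()._process_received_message(message, sender, silent)

agent = TrackableAgent()
result = agent._process_received_message("hello", sender=None, silent=False)
print(displayed)  # ['hello']
```

Because the override ends with `super()._process_received_message(...)`, the subclass only observes messages; the conversation logic is untouched.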
src/agents/retrieveassistantagent.py ADDED
@@ -0,0 +1,12 @@
+from autogen import AssistantAgent
+import streamlit as st
+from autogen.agentchat.contrib.retrieve_assistant_agent import RetrieveAssistantAgent
+from autogen.agentchat.contrib.qdrant_retrieve_user_proxy_agent import QdrantRetrieveUserProxyAgent
+
+
+class TrackableRetrieveAssistantAgent(RetrieveAssistantAgent):
+    def _process_received_message(self, message, sender, silent):
+        with st.chat_message(sender.name):
+            st.write(message)
+        return super()._process_received_message(message, sender, silent)
src/streamlitui/loadui.py CHANGED
@@ -28,6 +28,10 @@ class LoadStreamlitUI:
                                                            type="password")
         # Use case selection
         self.user_controls["selected_usecase"] = st.selectbox("Select Usecases", usecase_options)
+        if self.user_controls['selected_usecase'] == "RAG Chat":
+            st.session_state["docs_path"] = st.text_input("Enter Docs path or filename")
+
+
         st.session_state["chat_with_history"] = st.sidebar.toggle("Chat With History")
 
         return self.user_controls
src/usecases/multiagentcodeexecution.py CHANGED
@@ -8,7 +8,7 @@ import autogen
 class MultiAgentCodeExecution:
     def __init__(self, assistant_name, user_proxy_name, llm_config, problem):
         self.coder = TrackableAssistantAgent(name=assistant_name[0],
-                                             system_message="""you are helpful assistant and efficient in writing code.""",
+                                             system_message="""you are helpful assistant and efficient in writing code in python.""",
                                              human_input_mode="NEVER",
                                              llm_config=llm_config,
                                              )
@@ -33,8 +33,15 @@ class MultiAgentCodeExecution:
         self.problem = problem
         self.loop = asyncio.new_event_loop()
         asyncio.set_event_loop(self.loop)
+
+    def _reset(self):
+        self.coder.reset()
+        self.pm.reset()
+
 
     async def initiate_chat(self):
+        self._reset()
         await self.user_proxy.a_initiate_chat(self.manager, message=self.problem, clear_history=st.session_state["chat_with_history"])
 
     def run(self):
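`MultiAgentCodeExecution` drives its async conversation from synchronous Streamlit code by creating a dedicated event loop in `__init__` and calling `run_until_complete` in `run()`. A minimal runnable sketch of that pattern (the `ChatRunner` class is illustrative; `asyncio.sleep(0)` stands in for `a_initiate_chat`):

```python
# Sketch of the event-loop pattern: own a loop, drive one coroutine per run().
import asyncio

class ChatRunner:
    def __init__(self, problem):
        self.problem = problem
        # create and install a dedicated loop, as the use-case classes do
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)

    async def initiate_chat(self):
        await asyncio.sleep(0)  # stands in for user_proxy.a_initiate_chat(...)
        return f"answered: {self.problem}"

    def run(self):
        # block the caller until the async conversation finishes
        return self.loop.run_until_complete(self.initiate_chat())

runner = ChatRunner("2+2")
print(runner.run())  # answered: 2+2
```

Owning a loop per object avoids "no running event loop" errors inside Streamlit's synchronous callbacks, at the cost of the loop never being closed; `asyncio.run()` per call would be a tidier alternative when no state must survive between runs.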
src/usecases/multiagentragchat.py ADDED
@@ -0,0 +1,60 @@
+import asyncio
+from src.agents.qdrantretrieveuserproxyagent import TrackableQdrantRetrieveUserProxyAgent
+from src.agents.retrieveassistantagent import TrackableRetrieveAssistantAgent
+import streamlit as st
+from qdrant_client import QdrantClient
+import glob
+import os
+from sentence_transformers import SentenceTransformer
+
+
+class MultiAgentRAGChat:
+    def __init__(self, assistant_name, user_proxy_name, llm_config, problem):
+        self.assistant = TrackableRetrieveAssistantAgent(
+            name=assistant_name,
+            system_message="""you are helpful assistant. Reply "TERMINATE" in
+            the end when everything is done """,
+            human_input_mode="NEVER",
+            llm_config=llm_config,
+        )
+        self.user_proxy = TrackableQdrantRetrieveUserProxyAgent(
+            name=user_proxy_name,
+            human_input_mode="NEVER",
+            max_consecutive_auto_reply=4,
+            retrieve_config={
+                "task": "code",
+                "docs_path": self.list_files(st.session_state["docs_path"]),
+                "chunk_token_size": 500,
+                "model": llm_config["config_list"][0]["model"],
+                "client": QdrantClient(":memory:"),
+                "embedding_model": self.embeddings_model(),
+            },
+            code_execution_config=False,
+            is_termination_msg=lambda x: x.get("content", "").strip().endswith("TERMINATE"),
+        )
+        self.problem = problem
+        self.loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(self.loop)
+
+    def embeddings_model(self):
+        # Encoder callable used to embed document chunks
+        sentence_transformer_ef = SentenceTransformer("all-distilroberta-v1").encode
+        return sentence_transformer_ef
+
+    def list_files(self, directory):
+        # Ensure the directory path ends with a slash
+        if not directory.endswith('/'):
+            directory += '/'
+
+        # Use glob to get the list of files
+        files = glob.glob(os.path.join(directory, '*'))
+        file_list = [path.replace('\\', '/') for path in files]
+
+        return file_list
+
+    async def initiate_chat(self):
+        await self.user_proxy.a_initiate_chat(self.assistant, message=self.user_proxy.message_generator, problem=self.problem, clear_history=st.session_state["chat_with_history"])
+
+    def run(self):
+        self.loop.run_until_complete(self.initiate_chat())
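The `list_files` helper above is pure standard library and can be exercised on its own. A self-contained check of its behaviour against a temporary directory (note it only lists the top level; `glob('*')` does not recurse and skips dotfiles):

```python
# Standalone check of the list_files() logic: glob the directory contents
# and normalise Windows backslashes to forward slashes.
import glob
import os
import tempfile

def list_files(directory):
    # Ensure the directory path ends with a slash
    if not directory.endswith('/'):
        directory += '/'
    files = glob.glob(os.path.join(directory, '*'))
    return [path.replace('\\', '/') for path in files]

tmp = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    open(os.path.join(tmp, name), "w").close()

found = sorted(list_files(tmp))
print([os.path.basename(p) for p in found])  # ['a.txt', 'b.txt']
```

If nested folders of docs should be indexed too, `glob.glob(os.path.join(directory, '**', '*'), recursive=True)` would be the natural extension.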