subashpoudel committed fbc17f4 · 1 Parent(s): b4fb6ac

New commit
__pycache__/main.cpython-312.pyc CHANGED
Binary files a/__pycache__/main.cpython-312.pyc and b/__pycache__/main.cpython-312.pyc differ
 
brainstroming_agent/utils/__pycache__/nodes.cpython-312.pyc CHANGED
Binary files a/brainstroming_agent/utils/__pycache__/nodes.cpython-312.pyc and b/brainstroming_agent/utils/__pycache__/nodes.cpython-312.pyc differ
 
brainstroming_agent/utils/nodes.py CHANGED
@@ -57,6 +57,7 @@ def retrieve(state: State) -> State:
     query_prompt = 'Represent this sentence for searching relevant passages: '
     if len(state.latest_preferred_topics)==0:
         for idea in state.idea:
+            print('The idea for retrieval:', idea)
            result = retrieve_tool(idea+query_prompt)
            retrievals.append(result)
    print('Retrieval process completed......')
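The hunk above only adds a log line inside the retrieval loop, but the loop itself is worth sketching standalone. A minimal, dependency-free version is below; `retrieve_tool` here is a hypothetical callable standing in for the repo's real retriever. Note that the committed code calls `retrieve_tool(idea+query_prompt)`, which places the instruction prefix *after* the idea text; BGE-style instruction prefixes are normally prepended to the query, so this sketch does it that way.

```python
def retrieve_all(ideas, retrieve_tool,
                 query_prompt='Represent this sentence for searching relevant passages: '):
    """Run the retriever over every idea, logging each one first.

    Sketch only: `retrieve_tool` is a stand-in for the repo's retriever.
    Unlike the committed code (idea + query_prompt), the instruction
    prefix is prepended here, which is the usual BGE query format.
    """
    retrievals = []
    for idea in ideas:
        print('The idea for retrieval:', idea)  # mirrors the added log line
        retrievals.append(retrieve_tool(query_prompt + idea))
    print('Retrieval process completed......')
    return retrievals
```

Swapping the concatenation order is a one-character-class change worth verifying against the embedding model's documented query format before adopting.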
context_analysis_agent/__pycache__/agent.cpython-312.pyc CHANGED
Binary files a/context_analysis_agent/__pycache__/agent.cpython-312.pyc and b/context_analysis_agent/__pycache__/agent.cpython-312.pyc differ
 
context_analysis_agent/agent.py CHANGED
@@ -36,4 +36,9 @@ class IntroductionChatbot:
     def extract_details(self):
         response = extract_business_details(business_state.interactions)
         print('Extracted details:', response)
-        return response
+        return response
+
+    def reset(self):
+        self.memory = MemorySaver()
+        self.interact_agent = self.workflow.compile(checkpointer=self.memory)
+        print('Memory cleared')
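The new `reset` method drops conversation history by swapping in a fresh `MemorySaver` and recompiling the workflow against it. The same pattern can be shown without the langgraph dependency; the `Memory` and `Workflow` classes below are stubs standing in for `MemorySaver` and a compiled `StateGraph`, so this is a sketch of the mechanism, not the repo's actual classes.

```python
class Memory:
    """Stub standing in for langgraph's MemorySaver checkpointer."""
    def __init__(self):
        self.checkpoints = []

class Workflow:
    """Stub: compile() binds a checkpointer, as a LangGraph StateGraph does."""
    def compile(self, checkpointer):
        return {'checkpointer': checkpointer}

class IntroductionChatbot:
    def __init__(self):
        self.workflow = Workflow()
        self.reset()

    def reset(self):
        # A fresh saver plus a recompile: prior thread state is no longer
        # reachable through the newly compiled agent.
        self.memory = Memory()
        self.interact_agent = self.workflow.compile(checkpointer=self.memory)
        print('Memory cleared')
```

The design choice here is replacement rather than deletion: old checkpoints are simply orphaned for garbage collection instead of being cleared thread-by-thread.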
context_analysis_agent/utils/__pycache__/prompts.cpython-312.pyc CHANGED
Binary files a/context_analysis_agent/utils/__pycache__/prompts.cpython-312.pyc and b/context_analysis_agent/utils/__pycache__/prompts.cpython-312.pyc differ
 
context_analysis_agent/utils/prompts.py CHANGED
@@ -2,6 +2,8 @@ introduction_prompt = '''
 You are a business assistant who collects only valid and relevant data.
 Your job is to gather details from business owners in a friendly and conversational manner to understand their business better. Ask in very easy and short way.
 No matter what the user asks, you have to say to user that we have to collect these details first and only you can move forward.
+If the user asks about other topics (influencers, marketing, video ideas, etc.), don't say "I'm not here to help you." Just say: "First I will collect all your details, and only then can I help you analyze them."
+Tell the user to be patient until all the details are collected.
 
 We need these details:
 1. Business Type (e.g., e-commerce, SaaS, consulting),
dummy_state.py CHANGED
@@ -17,7 +17,7 @@ stored_data['final_ideation']= ['''A street magician performs tricks, leaving a
 '''A young Nepali woman discovers a hidden strength within herself while hiking the Himalayas. She returns home, and her fitness journey begins at our gym. With the help of our personal trainers, she transforms her body and mind. The gym becomes her sanctuary, and her transformation inspires others to find their inner strength.''']
 
 stored_data['human_ideation_interactions'] = []
-# stored_data['refined_ideation'] = '''A street magician's trick fails, inspiring a fitness journey. Months later, he fuses magic with strength, showcasing transformation. Meanwhile, a fitness coach bonds with foodies over health, sparking a community workout group, blending fun and fitness.'''
+stored_data['refined_ideation'] = '''A street magician's trick fails, inspiring a fitness journey. Months later, he fuses magic with strength, showcasing transformation. Meanwhile, a fitness coach bonds with foodies over health, sparking a community workout group, blending fun and fitness.'''
 
 
 # stored_data['brainstroming_response']={
human_refined_ideation/utils/prompts.py CHANGED
@@ -18,8 +18,9 @@ The four ideas will be provided to you through a tool.
 - Stick with the **business details** provided to you as a tool message. The generated idea **must conclude** to the business details.
 - **No explanations, no headers, no labels.**
 - **Do not ask the user anything back.**
-- Remember, the length of your refined idea have to be **same as the length of other ideas**.
+- Remember, the length of your refined idea has to be the **same as the length of the other ideas**.
 - Even if the user asks to generate a story based on some idea, you are **not allowed** to generate the story. Just return that idea only as it is.
+- If no ideas are provided to you, simply say **The ideas are not provided to me**.
 
 
 You must function like a deterministic idea-refiner — each input must result in a single, story-rich output idea.
main.py CHANGED
@@ -67,6 +67,7 @@ def context_analysis(msg: UserMessage):
     print('Details Type:',type(details))
     # save_to_db(details)
     stored_data['business_details'] = details
+    context_analysis_graph.reset()
     return {"response": response, "business_details": details, "complete": True}
     return {"response": response, "complete": False}
 
@@ -99,10 +100,10 @@ def ideation_endpoint():
         stored_data['final_ideation'] = result['improver_response'][-1]
         stored_data['final_ideation']=ast.literal_eval(stored_data['final_ideation'])
 
-        return {'response':result}
+        return {'response':ast.literal_eval(result['improver_response'][-1])}
     except GraphRecursionError:
         result = idea_graph.get_state({"configurable": {"thread_id": "ideation_thread123"}})
-        return {'response': result[0]}
+        return {'response': ast.literal_eval(result[0])}
 
 class RefineIdeationRequest(BaseModel):
     query: str
@@ -116,7 +117,7 @@ def human_idea_refine_endpoint(request:RefineIdeationRequest):
         {
             'query': stored_data['human_ideation_interactions'],
             'business_details': stored_data["business_details"],
-            'final_ideation': stored_data['final_ideation'],
+            'final_ideation': stored_data.get('final_ideation',["","","",""]),
         },config={"configurable": {"thread_id": request.thread_id}}
     )
     stored_data['human_ideation_interactions'].append({"role": "assistant", "content": response['result']})
@@ -140,8 +141,16 @@ class BrainstormRequest(BaseModel):
 def brainstroming_endpoint(
     request: BrainstormRequest, # 🔥 Full JSON body here
 ):
+    idea = (
+        [stored_data['refined_ideation']]
+        if stored_data.get('refined_ideation')
+        else [str(stored_data['final_ideation'])]
+        if stored_data.get('final_ideation')
+        else ['''I don't have any idea right now. Create your own **very creative** and **out of the box** video idea and generate the story for now.'''])
+
     result = brainstrom_graph.invoke({
-        'idea': [stored_data.get('refined_ideation', '')],
+        # 'idea': [stored_data.get('refined_ideation', 'final_ideation', )],
+        'idea': idea,
         'images': request.image_base64_list,
         'latest_preferred_topics': request.preferred_topics,
         'business_details': stored_data['business_details']
@@ -150,7 +159,10 @@ def brainstroming_endpoint(
 
     stored_data['brainstroming_response'] = result
 
-    return {'response': result}
+    return {'response':{
+        "story": result['stories'][-1],
+        "brainstorming_topics": result['brainstroming_topics'][-1]
+    }}
 
 
 
@@ -171,9 +183,9 @@ def generate_final_story_endpoint():
 
 @app.post("/generate-image")
 def generate_image_endpoint():
-    image = generate_image(str(stored_data.get('final_story',''))
+    image = generate_image(str(stored_data.get('final_story','''I don't have any story right now. Just use the business details for now.'''))
                            ,str(stored_data.get('business_details'))
-                           ,str(stored_data.get('refined_ideation','')))
+                           ,str(stored_data.get('refined_ideation','''I don't have any idea right now. Just use the business details for now.''')))
     stored_data['generated_image']=image
     return {
         'response':image
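The new idea-selection logic in `brainstroming_endpoint` is a chained conditional expression, which is easy to misread. Extracted into a helper (the name `pick_idea` is hypothetical, for illustration only), the precedence is: refined idea first, then the final ideation, then a canned fallback string.

```python
FALLBACK = ("I don't have any idea right now. Create your own **very creative** "
            "and **out of the box** video idea and generate the story for now.")

def pick_idea(stored_data):
    """Mirror the endpoint's precedence: refined > final > canned fallback.

    Returns a single-element list, matching the 'idea' shape the
    brainstorm graph expects in the committed code.
    """
    if stored_data.get('refined_ideation'):
        return [stored_data['refined_ideation']]
    if stored_data.get('final_ideation'):
        return [str(stored_data['final_ideation'])]
    return [FALLBACK]
```

Note the asymmetry the commit introduces: `refined_ideation` is passed through as-is, while `final_ideation` (a list after `ast.literal_eval`) is stringified wholesale, so the graph receives the Python repr of the whole list rather than one idea.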
orchestration_agent/utils/nodes.py CHANGED
@@ -1,6 +1,6 @@
 from .prompts import tool_return_prompt , extract_user_reference_prompt
-from langchain_core.messages import SystemMessage
-from utils.models_loader import llm_gpt
+from langchain_core.messages import SystemMessage, HumanMessage
+from utils.models_loader import llm_gpt , llm_gemini
 from .state import ToolResponseFormatter, UserReferenceResponseFormatter
 
 
@@ -15,10 +15,12 @@ def tool_return_node(state):
     print(response)
     return {"messages": [{'role':'assistant','content':f'''The exact name of the tool is: {response}'''}]}
 
-
-
 def extract_user_reference_node(state):
-    history = state['messages']
-    template = [SystemMessage(content=extract_user_reference_prompt)] + history
-    response = llm_gpt.with_structured_output(UserReferenceResponseFormatter).invoke(template)
-    return {'messages': [{'role':'assistant','content':f'''The video idea is: {response.video_idea} and the video story is: {response.video_story}'''}]}
+    history = state['messages']
+    latest_human_message = next(
+        (msg for msg in reversed(history) if isinstance(msg, HumanMessage)),
+        None
+    )
+    template = [SystemMessage(content=extract_user_reference_prompt), HumanMessage(content=latest_human_message.content)]
+    response = llm_gemini.with_structured_output(UserReferenceResponseFormatter).invoke(template)
+    return {'messages': [{'role':'assistant','content':f'''The video idea is: {response.video_idea} and the video story is: {response.video_story}'''}]}
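The `reversed`/`next` pattern the commit introduces finds the most recent human turn in the message history. A dependency-free sketch of it is below, with stub message classes standing in for langchain's `HumanMessage`/`AIMessage`. One caveat worth flagging: the committed node dereferences `latest_human_message.content` without a `None` check, so it raises `AttributeError` whenever the history contains no human turn; the sketch keeps the `None` return explicit.

```python
class HumanMessage:
    """Stub for langchain_core.messages.HumanMessage."""
    def __init__(self, content):
        self.content = content

class AIMessage:
    """Stub for an assistant turn."""
    def __init__(self, content):
        self.content = content

def latest_human(history):
    """Return the most recent HumanMessage in history, or None.

    reversed() walks newest-to-oldest; next() stops at the first match,
    so the scan is O(1) in the common case of a trailing human turn.
    """
    return next((m for m in reversed(history) if isinstance(m, HumanMessage)), None)
```

A guard such as `if latest_human_message is None: return {...}` before building the template would make the node safe on human-free histories.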
orchestration_agent/utils/prompts.py CHANGED
@@ -1,31 +1,3 @@
-tool_return_prompt_old = """
-You are a AI orchestration agent that reads the user's message and decides which one of the following tools should be called next. You're perfect at analyzing the intention of the user.
-
-Your job is to analyze the **user's input** and return a response with:
-`tool`: the most appropriate tool name from the list below (or `null` if not applicable)
----
-
-### Available Tools:
-1. **ideation** → Use if the user wants to create some video ideas. Also, If user says that they didn't like any of the previous ideas and want to genrate again, you have to trigger this again.
-
-2. **human-idea-refining** → Use if the user:
-    Likes some idea, locks some idea or if they gives feedback or asks to improve/change ideas.
-    Trigger this if user locks or likes some ideas OR they want to modify the created ideas according to their requirements.
-    This is the part where human locks the idea generated by AI or want to modify ideas.
-
-3. **generate-story** → Use if the user talks about **creating the story** from their idea **OR** if the user talks about **creating story with brainstorming**.
-4. **generate-ultimate-story** → Use if the user is ready for a final or ultimate story/script based on ideas and already brainstormed topics from the previous tool called **generate-story**.
-    Remember one thing, **generate-ultimate-story** is never invoked without invoking **generate-story**.
-5. **generate-image** → Use if the user wants a visual or image based on the ultimate story.
-
----
-
-### Output Format:
-"tool": "the exact name of the tool" OR "null".
-
-
-"""
-
 
 tool_return_prompt = """
 You are an AI orchestration agent and a friendly assistant built to help businesses and brands find influencers and generate content ideas, stories, and visuals.
@@ -34,7 +6,7 @@ Your job is to:
 1. **Read the user's message carefully** and identify their intent.
 2. **Return exactly two things**:
 - `tools`: a Python-style list of tool names (from the ordered list below) that match the user’s intent.
-- `query_response`: a short, friendly, and helpful reply that aligns with the tools you've selected. Also return the relevant data of influencers aligning with the uer query only if the data of influencers.
+- `query_response`: a short, friendly, and helpful reply that aligns with the tools you've selected. Also return the relevant data of influencers aligning with the user query, but only if the user asks about influencers.
 
 Your `tools` output must:
 - Always be a **Python-style list**, even if only one tool is selected.
@@ -61,6 +33,8 @@ Your `query_response` must:
 - Gives feedback, asks for improvement, merging, or editing of ideas
 - Refers to using or building on a specific idea (e.g., “based on idea 2”)
 - This is the part where human locks the idea generated by AI or want to modify ideas.
+- Be very careful here: keep triggering this tool until the user likes or locks an idea. They can modify the idea for as long as they want. Even after several refinements they may say they want to go back to a previous idea, undo, or similar; at that point they are still refining their idea, so you still have to trigger this tool. Be wise here.
+- Remember, you are **strictly not allowed** to return ideation after human-idea-refining. E.g. ['human-idea-refining','ideation'] is not allowed.
 
 3. **generate-story** → Trigger if the user:
 - Wants to generate a story or says that they want to brainstorm
@@ -83,7 +57,7 @@ Your `query_response` must:
 > “Generate story based on idea 4”
 This means they selected/liked idea 4 → so return:
 `"tools": ["human-idea-refining", "generate-story"]`
-
+
 - If a user says:
 > “Create an image of the final script”
 → Return: `["generate-image"]`
@@ -95,7 +69,7 @@ Your `query_response` must:
 - If a user gives feedback or says “improve this idea” → Return `["human-idea-refining"]`
 
 - If a user just wants help with finding influencers or advice → Return `"tools": []` but still respond helpfully via `query_response`.
-
+- One more reminder on the most important part: if you return more than one tool, list them in the sequence provided above. E.g., do not put **human-idea-refining** after **ideation**. The list of tools has to be in the proper order given above.
 ---
 
 ### Influencer Assistant Note:
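The ordering rule the new prompt lines ask the model to respect ("tools must follow the pipeline sequence") can also be enforced deterministically after the LLM responds, rather than hoped for. A sketch of such a post-processor is below; `enforce_order` is a hypothetical helper, not part of the repo, and the pipeline list is taken from the tool order the prompt defines.

```python
# Pipeline order as defined by the prompt's numbered tool list.
PIPELINE_ORDER = ['ideation', 'human-idea-refining', 'generate-story',
                  'generate-ultimate-story', 'generate-image']

def enforce_order(tools):
    """Sort a model-returned tool list into pipeline order, dropping unknowns.

    This turns the prompt's soft instruction into a hard guarantee: the
    caller always receives tools in a valid execution sequence.
    """
    rank = {name: i for i, name in enumerate(PIPELINE_ORDER)}
    return sorted((t for t in tools if t in rank), key=rank.__getitem__)
```

With this in place, a malformed model output like `['human-idea-refining', 'ideation']` is silently normalized instead of breaking the downstream orchestration.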
utils/models_loader.py CHANGED
@@ -17,7 +17,7 @@ os.environ['GROQ_API_KEY']=os.getenv('GROQ_API_KEY')
 
 
 
-# llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
+llm_gemini = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
 
 llm = ChatGroq(
     model="llama-3.1-8b-instant",