def compare_prompt():
    return '''You are given a user query for comparing influencers.

Your task:
1. Extract all influencer names as a list.
   - The names should be returned exactly as they appear.

2. Identify the frequency of comparison mentioned in the query (for example: "daily", "weekly", "monthly", "yearly", etc.).

Return the result strictly in this JSON format:
{
  "names": ["<influencer_1>", "<influencer_2>", ...],
  "frequency": "<frequency_value>"
}
If the frequency is not provided, use 'weekly' as the default.

Example:
If the query is "I want to compare the analytics of divyadhakal_ and munachiya on a monthly basis", then
output:
{
  "names": ["divyadhakal_", "munachiya"],
  "frequency": "monthly"
}

'''
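
# The prompt above instructs the model to reply with strict JSON. A minimal
# sketch of how that reply might be consumed, assuming a hypothetical helper
# (`parse_compare_response` is not part of this module) that enforces the
# documented 'weekly' default when the model omits or garbles the frequency:

```python
import json

def parse_compare_response(raw: str) -> dict:
    """Parse the model's JSON reply, applying the prompt's 'weekly' default."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        data = {}
    if not isinstance(data, dict):
        data = {}
    return {
        "names": data.get("names", []),
        "frequency": data.get("frequency") or "weekly",
    }
```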

fetch_last_message_prompt = '''
You are an AI assistant that reads an entire conversation between a human and an AI.
The human is trying to ask something about the influencers.
Your task is to detect the human's most recent intention, taking into account the full conversation history.

- Carefully consider all previous human messages to understand context.
- Focus on the latest goal, request, or intention, even if it is expressed briefly or implicitly.
- Express the latest intention in **one complete, clear sentence** that is self-contained and understandable without the previous conversation.
- Do not simply repeat the latest message verbatim. Instead, incorporate necessary context from prior messages to make the intention explicit.
- Ignore AI responses unless they are needed to clarify the human's current intention.

Output only what the user wants now, nothing else, in one short sentence.

'''

fetch_parameters_prompt = '''
You are an intelligent parameter extractor.
Given a user query and a list of needed parameters, return a Python dictionary assigning the best value for each parameter.
Infer values when possible (e.g., "weekly" → frequency).
Return only a valid Python dictionary, with no explanations.

Example:
user_query: I want weekly engagement trend of john_
needed_parameters: ['frequency', 'influencer_username']
parameters_values: {'frequency': 'weekly', 'influencer_username': 'john_'}

Note: If the frequency is not mentioned explicitly, use 'weekly' as the default. The only valid frequency values are 'weekly', 'monthly', and 'yearly'; do not use any other value for frequency.

'''
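
# Because the prompt above asks for a Python dictionary rather than JSON, the
# reply should be parsed with ast.literal_eval, never eval(). A sketch of a
# hypothetical consumer that also enforces the prompt's frequency constraint:

```python
import ast

ALLOWED_FREQUENCIES = ("weekly", "monthly", "yearly")

def parse_parameters(raw: str) -> dict:
    """Safely parse the model's Python-dict reply without calling eval()."""
    try:
        params = ast.literal_eval(raw.strip())
    except (ValueError, SyntaxError):
        params = {}
    if not isinstance(params, dict):
        params = {}
    # Enforce the prompt's stated default and allowed values for frequency.
    if "frequency" in params and params["frequency"] not in ALLOWED_FREQUENCIES:
        params["frequency"] = "weekly"
    return params
```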

fetch_endpoint_prompt = '''
You are an intelligent endpoint selector.  
Given a user query in natural language and a list of possible endpoints, select the single most appropriate endpoint from the list.  

Guidelines:
- Only choose from the provided list; do not invent endpoints.
- Consider the intent of the query and the purpose of each endpoint.
- Return only the endpoint as plain text, no explanations.  

Example:
User Query: I want weekly engagement stats of John  
Possible Endpoints: ['/api/v1/overview/buzz_trend', '/api/v1/analytics/engagement', '/api/v1/analytics/followers']  
endpoint: /api/v1/analytics/engagement

'''
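
# The endpoint prompt's first guideline ("only choose from the provided list")
# is worth re-checking in code, since models occasionally invent paths. A
# hypothetical validator (not part of this module) might look like:

```python
from typing import Optional

def select_endpoint(raw: str, possible_endpoints: list) -> Optional[str]:
    """Accept the model's answer only if it is one of the allowed endpoints."""
    choice = raw.strip().strip("'\"`")
    return choice if choice in possible_endpoints else None
```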

query_check_prompt = '''
You are an intent classification assistant.
Given a user query about influencer analytics, classify it as one of the following types:

1. single_influencer_query — if the query refers to one specific influencer (e.g., "Show Muna's engagement rate"). In short, the query names a specific influencer.
2. aggregate_query — if the query involves comparing multiple influencers, rankings, or overall statistics (e.g., "Who has the highest engagement?").

Return only one label: "single_influencer_query" or "aggregate_query".
'''
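
# A sketch of how the classifier's label might be normalized before routing.
# The choice of fallback label is an assumption, not something the prompt
# specifies; adjust it to whichever branch is safer in your pipeline.

```python
VALID_LABELS = {"single_influencer_query", "aggregate_query"}

def parse_query_type(raw: str, fallback: str = "aggregate_query") -> str:
    """Normalize the classifier's reply; fall back on an unexpected label."""
    label = raw.strip().strip('"\'').lower()
    return label if label in VALID_LABELS else fallback
```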


posting_time_analysis_prompt = '''
You are a precise parameter extractor for posting-time analysis of an influencer.
Given a user query and a list of needed parameters, return a Python dictionary assigning the best value for each parameter.
Return a dictionary containing influencer_name, start_date, and end_date. If no dates are mentioned, set both dates to None.
'''

peak_comment_hour_prompt = '''
You are a precise parameter extractor for analyzing an influencer's peak comment hour.
Given a user query and a list of needed parameters, return a Python dictionary assigning the best value for each parameter.
Return a dictionary containing influencer_name, start_date, and end_date. If no dates are mentioned, set both dates to None.
'''

emoji_count_prompt = '''
You are a precise parameter extractor for analyzing an influencer's emoji counts.
Given a user query and a list of needed parameters, return a Python dictionary assigning the best value for each parameter.
Return a dictionary containing influencer_name and the number of emojis (top_n) inferred from the user query. If no number is mentioned, default top_n to 15.
'''

comment_quality_prompt = '''
You are a precise parameter extractor for analyzing an influencer's comment quality.
Given a user query and a list of needed parameters, return a Python dictionary assigning the best value for each parameter.
Return a dictionary containing influencer_name, start_date, and end_date. If no dates are mentioned, set both dates to None.
'''

bot_and_diversity_prompt = '''
You are a precise parameter extractor for analyzing an influencer's bot activity and comment diversity.
Given a user query and a list of needed parameters, return a Python dictionary assigning the best value for each parameter.
Return a dictionary containing influencer_name, the number of commenters (top_n), start_date, and end_date extracted from the user query.
If no dates are mentioned, return None for both dates. If the number of commenters is not given, default top_n to 10.
'''
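
# The four date-range prompts above all return the same dictionary shape.
# A single hypothetical parser (not part of this module) could normalize all
# of their replies, filling the documented defaults: dates -> None, and
# top_n -> a caller-supplied default for the prompts that define one.

```python
import ast
from typing import Optional

def parse_range_params(raw: str, default_top_n: Optional[int] = None) -> dict:
    """Parse a date-range extractor reply into a dict with safe defaults."""
    try:
        params = ast.literal_eval(raw.strip())
    except (ValueError, SyntaxError):
        params = {}
    if not isinstance(params, dict):
        params = {}
    out = {
        "influencer_name": params.get("influencer_name"),
        "start_date": params.get("start_date"),
        "end_date": params.get("end_date"),
    }
    if default_top_n is not None:
        # Only the emoji and bot/diversity prompts carry a top_n default.
        out["top_n"] = params.get("top_n") or default_top_n
    return out
```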

def analytics_description_prompt(query):
    return f'''
You are an assistant that describes the analytics provided to you.
You are given the user query and a graph image of the analytics for that query.
Aligned with the user query, generate a short, precise description of no more than 3 sentences briefly explaining what the analytics show for that query.
The user query asks for some analytics about an influencer, and the graph image for that query is provided to you.
Do not be confused if the influencer name does not appear in the graph image; the graph already corresponds to the influencer named in the user query.
The user query is: \n{query}\n
'''