Update app.py
app.py
CHANGED
@@ -51,19 +51,38 @@ class LegionMariaAssistant:
 
 Available data sections: {available_sections}
 
-
-- about: mission, vision, core values, organizational information
-- office: projects, community outreach, operational details
-- leadership: organizational structure, leadership team, roles
-
-
-
-
-
-
-
-
-- "
+DETAILED SECTION CONTENTS:
+
+ABOUT section contains:
+- organization, headquarters_location, head_office, contact info (phone, email, websites)
+- mission, vision, core_values (faithfulness, inclusivity, service, formation, community, integrity)
+- church_beliefs (core_belief, reach, membership, dress_code, sacred_practice)
+- worship (practices, prohibited, prayer_schedule)
+
+OFFICE section contains:
+- departments (30+ departments with officers and roles like administration, treasury, education, procurement, events, medical, auditing, communications, projects, ICT, diaspora, facilities, music, logistics, grants, data engineering, coordination, security, etc.)
+- completed_projects, pending_projects (washrooms, radio/TV station, hospital, mausoleum, refurbishments)
+- current_activities (festivals, sports, workshops, conventions, camps)
+- support_donations (mpesa, bank_account details)
+
+LEADERSHIP section contains:
+- supreme_leadership (patron_pope, matron_mother_superior, dean_of_cardinals)
+- youth_affairs_director (title, contact)
+- organizational_structure
+
+User query: "{message}"
+
+Respond with ONLY the most relevant section name. If the query spans multiple sections or is general, respond with "general".
+
+ROUTING EXAMPLES:
+- "Who is the director/pope/patron?" -> leadership
+- "What is your mission/vision/values?" -> about
+- "Where is headquarters/location/office?" -> about
+- "Contact info/phone/email/website?" -> about
+- "Tell me about projects/departments?" -> office
+- "How to donate/support/contribute?" -> office
+- "What activities do you do?" -> office
+- "Church beliefs/worship/practices?" -> about
 - "What do you do?" -> general
 
 Section:"""
@@ -114,22 +133,23 @@ Section:"""
         conversation_context += "Current conversation:\n"
 
         # Step 3: Response LLM generates answer using only relevant data
-        response_prompt = f"""You are Santa Legion from the Legion Maria Directorate of Youth Affairs.
+        response_prompt = f"""You are Santa Legion from the Legion Maria Directorate of Youth Affairs. IMPORTANT: You must ONLY use the information provided below. Do not use any external knowledge or make assumptions.
 
-
+ONLY USE THIS INFORMATION:
 {json.dumps(relevant_data, indent=2)}
 
 {conversation_context}User: {message}
 
-
-- You are Santa Legion, speak as "I" and "we" (the organization)
+STRICT RULES:
+- You are Santa Legion, speak as "I" and "we" (the organization)
+- ONLY answer using the information provided above
+- If headquarters is mentioned in the data, use EXACTLY what is stated
 - Keep responses SHORT (1-3 sentences maximum)
-
+- Do not invent or assume any information not in the provided data
 - Never mention being provided documents or data
-
-- Use "our mission", "we believe", "I can help you with"
+- If asked about something not in your data, say "I don't have that information"
 
-Answer:"""
+Answer based ONLY on the provided information:"""
 
         # Get response from specialist LLM
        response = self.response_llm.invoke([{"role": "user", "content": response_prompt}])
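The change above tightens a two-step pipeline: a router prompt maps the user's query to one data section, and the response prompt is then grounded in only that section's data. The following is a minimal, hypothetical sketch of that pattern — the router LLM is replaced by a keyword stub, and `DATA` is placeholder content; in app.py the actual calls go through `self.response_llm.invoke(...)` as shown in the diff.

```python
import json

# Placeholder data standing in for the about/office/leadership sections.
DATA = {
    "about": {"mission": "...", "headquarters_location": "..."},
    "office": {"departments": ["treasury", "education"]},
    "leadership": {"youth_affairs_director": "..."},
}

def route(message: str) -> str:
    """Stand-in for the router LLM: pick one section, or 'general'."""
    keywords = {"mission": "about", "headquarters": "about",
                "donate": "office", "director": "leadership"}
    for word, section in keywords.items():
        if word in message.lower():
            return section
    return "general"

def build_response_prompt(message: str) -> str:
    section = route(message)
    # "general" queries see all sections; otherwise only the routed one,
    # mirroring the "ONLY USE THIS INFORMATION" constraint in the prompt.
    relevant_data = DATA if section == "general" else DATA[section]
    return (
        "ONLY USE THIS INFORMATION:\n"
        + json.dumps(relevant_data, indent=2)
        + f"\nUser: {message}\nAnswer based ONLY on the provided information:"
    )

print(route("Where is your headquarters?"))  # -> about
print(route("What do you do?"))              # -> general
```

Restricting the responder to one section keeps the grounding context small, which is the point of the stricter "STRICT RULES" block added in this commit.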