Commit 11d5596 by WeMWish ("temp update")
Parent(s): 16b63c6

Files changed:
- LITERATURE_CONFIRMATION_FEATURE.md (+85, -0)
- agents/manager_agent.py (+266, -0)
- cache_data/excel_schema_cache.json (+6, -0)
- server.R (+64, -0)
- www/chat_script.js (+44, -0)
- www/chat_styles.css (+61, -0)
LITERATURE_CONFIRMATION_FEATURE.md
ADDED
@@ -0,0 +1,85 @@
# Literature Confirmation Feature

## Overview

This feature adds user confirmation before the AI agents use external literature sources or paper.pdf in their analysis. When the system detects that a query relates to literature or research papers, it will pause and ask the user whether they want to include these resources.

## How It Works

### 1. Detection Phase
- The `ManagerAgent` analyzes the plan generated by `GenerationAgent`
- It looks for patterns indicating literature usage:
  - `multi_source_literature_search`
  - `fetch_text_from_urls`
  - `paper.pdf`
  - "literature search", "academic papers", "research papers", "scientific literature"

### 2. Confirmation Dialog
- When literature usage is detected, a confirmation dialog appears between the thinking box and the response area
- The dialog asks: "This query appears to relate to literature or research papers. Would you like me to include external literature sources and the paper.pdf in my analysis?"
- Two buttons: "Yes" and "No"

### 3. Processing Based on User Choice
- **If Yes**: Continues with the original plan, including the literature search
- **If No**: Modifies the plan to remove literature components and provides analysis without external sources
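The detection step can be sketched in a few lines. This is a simplified, self-contained version (the pattern list is copied from the bullets above); the real `_detect_literature_request()` in `manager_agent.py` additionally applies paper-specific query checks:

```python
LITERATURE_PATTERNS = [
    "multi_source_literature_search",
    "fetch_text_from_urls",
    "paper.pdf",
    "literature search",
    "academic papers",
    "research papers",
    "scientific literature",
]

def detect_literature_request(plan: dict, user_query: str = "") -> bool:
    """Return True if the plan or the user query appears to rely on literature resources."""
    if not isinstance(plan, dict):
        return False
    # Scan the plan's code, thought, and explanation fields together.
    plan_text = " ".join(
        str(plan.get(k, "")) for k in ("python_code", "thought", "explanation")
    ).lower()
    haystacks = (plan_text, user_query.lower())
    return any(p in h for p in LITERATURE_PATTERNS for h in haystacks)
```

A plain substring scan keeps the check cheap; the trade-off is that broad phrases can over-trigger, which is why the full implementation adds extra context clues before flagging generic "paper" mentions.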
## Implementation Details

### Backend Changes

#### `agents/manager_agent.py`
- Added `_detect_literature_request()` method to identify literature-related plans
- Added `_request_literature_confirmation()` to generate the confirmation request
- Added `handle_literature_confirmation()` public method for R/UI integration
- Added plan-modification logic to remove literature components when declined
- Integrated detection into the main `_process_turn()` flow

#### `server.R`
- Added a handler for the `TAIJICHAT_LITERATURE_CONFIRMATION:` response type
- Added an `observeEvent` for the `literature_confirmation_response` input
- Integrated with the existing chat message processing flow

### Frontend Changes

#### `www/chat_script.js`
- Added a `literature_confirmation_request` message handler
- Added a `showLiteratureConfirmationDialog()` function to display the confirmation UI
- Added a `handleLiteratureConfirmation()` global function for button clicks
- Integrated with the existing chat message flow

#### `www/chat_styles.css`
- Added styles for `.literature-confirmation-dialog`
- Added button styles for the Yes/No options
- Added a fade-in animation for smooth appearance
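The plan-modification logic for the declined path can be exercised in isolation. This standalone sketch mirrors `_remove_literature_from_plan()`; the plan dict shape (`python_code`, `status`, `explanation`) follows the fields used elsewhere in this feature:

```python
def remove_literature_from_plan(plan: dict) -> dict:
    """Strip literature-search calls from a plan and mark it complete."""
    modified = plan.copy()  # shallow copy; the caller's plan is left untouched
    code = modified.get("python_code", "")
    if "multi_source_literature_search" in code or "fetch_text_from_urls" in code:
        # Replace the literature call with a simple canned response.
        modified["python_code"] = (
            'print(json.dumps({"response": "I can provide analysis based on available '
            'data, but external literature search was not used per your preference."}))'
        )
        modified["status"] = "CODE_COMPLETE"
        modified["explanation"] = (
            "Providing analysis without external literature sources as requested."
        )
    return modified

plan = {
    "python_code": "results = multi_source_literature_search('TRM exhaustion')",
    "status": "AWAITING_DATA",
    "explanation": "Search the literature for TRM exhaustion markers.",
}
stripped = remove_literature_from_plan(plan)
# stripped no longer calls the literature tool and is marked CODE_COMPLETE
```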
## User Experience

1. User asks a literature-related question
2. System shows "Thinking..." as usual
3. A confirmation dialog appears in the chat asking about literature usage
4. User clicks "Yes" or "No"
5. System continues processing based on the choice
6. Final response is provided

## Technical Flow

```
User Query → GenerationAgent → ManagerAgent Detection →
Confirmation Dialog → User Choice → Modified/Original Plan →
Execution → Final Response
```
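The flow above can be simulated with a minimal stand-in for the manager. `MiniManager` is a hypothetical stub for illustration only; it skips the real generation, supervisor, and executor agents and keeps just the pending-confirmation handshake:

```python
import json

PREFIX = "TAIJICHAT_LITERATURE_CONFIRMATION:"

class MiniManager:
    """Stub of ManagerAgent's confirmation handshake (no LLM or agent calls)."""

    def __init__(self):
        self.pending_literature_confirmation = None

    def process_turn(self, plan: dict) -> str:
        # Detection step: does the plan reference literature tooling?
        if "multi_source_literature_search" in plan.get("python_code", ""):
            self.pending_literature_confirmation = plan
            return PREFIX + json.dumps({"type": "literature_confirmation_request"})
        return plan.get("explanation", "done")

    def handle_literature_confirmation(self, user_response: bool) -> str:
        if self.pending_literature_confirmation is None:
            return "No pending literature confirmation request."
        plan = self.pending_literature_confirmation
        self.pending_literature_confirmation = None  # consume the pending plan
        if user_response:
            return "proceeding with literature: " + plan["explanation"]
        return "Providing analysis without external literature sources as requested."

mgr = MiniManager()
reply = mgr.process_turn({
    "python_code": "multi_source_literature_search('TRM')",
    "explanation": "search results summary",
})
# reply starts with the confirmation prefix, so the UI shows the dialog next
final = mgr.handle_literature_confirmation(False)
```

Note that the pending plan is consumed on the first response, so a duplicate button click falls through to the "No pending" guard instead of re-executing.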
## Benefits

- **User Control**: Users can choose whether to include external literature
- **Transparency**: Clear indication when literature sources would be used
- **Flexibility**: Can provide analysis with or without external sources
- **Non-intrusive**: Only appears when literature usage is detected

## Configuration

The detection patterns can be modified in the `_detect_literature_request()` method in `manager_agent.py`. The confirmation message can be customized in the `_request_literature_confirmation()` method.

## Testing

The feature includes detection logic that has been tested with various plan types to ensure accurate identification of literature-related requests while avoiding false positives for regular data analysis tasks.
agents/manager_agent.py
CHANGED
@@ -26,6 +26,8 @@ class ManagerAgent:
         self.client = openai_client
         self.conversation_history = []  # To store user queries and agent responses
         self.r_callback = r_callback_fn  # Store the R callback
+        # Add new attribute to track literature confirmation requests
+        self.pending_literature_confirmation = None
 
         if self.r_callback:
             print("ManagerAgent: R callback function provided and stored.")
@@ -85,6 +87,150 @@
         # else:
         #     print(f"Python Agent (No R callback): Thought: {thought_text}")  # Optional: uncomment to see thoughts even if no R callback
 
+    def _detect_literature_request(self, plan: dict, user_query: str = "") -> bool:
+        """
+        Detects if the generated plan wants to use literature search or paper.pdf resources.
+        Also checks if the user query itself is about the paper/literature.
+        Returns True if literature resources are requested.
+        """
+        if not isinstance(plan, dict):
+            return False
+
+        # Check the python_code field for literature search patterns
+        python_code = plan.get("python_code", "")
+        thought = plan.get("thought", "")
+        explanation = plan.get("explanation", "")
+
+        # Patterns that indicate literature/paper usage
+        literature_patterns = [
+            "multi_source_literature_search",
+            "fetch_text_from_urls",
+            "paper.pdf",
+            "literature search",
+            "academic papers",
+            "research papers",
+            "scientific literature"
+        ]
+
+        # Patterns that indicate the user is asking about the paper specifically
+        paper_query_patterns = [
+            "what's the title of the paper",
+            "what is the title of the paper",
+            "paper title",
+            "title of the paper",
+            "what does the paper say",
+            "according to the paper",
+            "in the paper",
+            "the paper describes",
+            "the paper shows",
+            "from the paper",
+            "based on the paper"
+        ]
+
+        # Check in all relevant fields from the plan
+        all_plan_text = f"{python_code} {thought} {explanation}".lower()
+
+        for pattern in literature_patterns:
+            if pattern.lower() in all_plan_text:
+                print(f"[ManagerAgent] Detected literature request pattern: '{pattern}' in plan")
+                return True
+
+        # Check if the user query itself is about the paper
+        user_query_lower = user_query.lower()
+        for pattern in paper_query_patterns:
+            if pattern in user_query_lower:
+                print(f"[ManagerAgent] Detected paper query pattern: '{pattern}' in user query")
+                return True
+
+        # Also check for general "paper" mentions in the user query (but be more specific to avoid false positives)
+        if any(word in user_query_lower for word in ["paper", "manuscript", "publication", "study"]):
+            # Additional context clues that suggest they're asking about THE paper
+            if any(word in user_query_lower for word in ["the", "this", "title", "abstract", "conclusion", "method", "result"]):
+                print(f"[ManagerAgent] Detected general paper reference in user query: '{user_query}'")
+                return True
+
+        return False
+
+    def _request_literature_confirmation(self, plan: dict) -> str:
+        """
+        Requests user confirmation for using literature resources.
+        Returns a special response indicating confirmation is needed.
+        """
+        # Store the pending plan for later use
+        self.pending_literature_confirmation = plan
+
+        # Send a special response that the UI can recognize
+        confirmation_request = {
+            "type": "literature_confirmation_request",
+            "plan": plan,
+            "message": "This query appears to relate to literature or research papers. Would you like me to include external literature sources and the paper.pdf in my analysis?"
+        }
+
+        return f"TAIJICHAT_LITERATURE_CONFIRMATION:{json.dumps(confirmation_request)}"
+
+    def handle_literature_confirmation_response(self, user_response: bool) -> str:
+        """
+        Handles the user's response to the literature confirmation request.
+        user_response: True if the user wants to include literature, False otherwise.
+        """
+        if self.pending_literature_confirmation is None:
+            return "No pending literature confirmation request."
+
+        plan = self.pending_literature_confirmation
+        self.pending_literature_confirmation = None
+
+        if user_response:
+            # User approved literature use - continue with the original plan
+            self._send_thought_to_r("User approved literature usage. Proceeding with research...")
+            return self._continue_with_literature_plan(plan)
+        else:
+            # User declined literature use - modify the plan to skip literature
+            self._send_thought_to_r("User declined literature usage. Providing response without external literature...")
+            return self._continue_without_literature_plan(plan)
+
+    def _continue_with_literature_plan(self, plan: dict) -> str:
+        """Continue processing with the original plan that includes literature search."""
+        # Execute the original plan as intended
+        return self._execute_plan_with_literature(plan)
+
+    def _continue_without_literature_plan(self, plan: dict) -> str:
+        """Continue processing but skip literature search components."""
+        # Modify the plan to remove literature search calls
+        modified_plan = self._remove_literature_from_plan(plan)
+        return self._execute_modified_plan(modified_plan)
+
+    def _remove_literature_from_plan(self, plan: dict) -> dict:
+        """Remove literature search components from the plan."""
+        modified_plan = plan.copy()
+
+        python_code = modified_plan.get("python_code", "")
+
+        # Remove literature search calls and replace them with a generic response
+        if "multi_source_literature_search" in python_code or "fetch_text_from_urls" in python_code:
+            # Replace with a simple response
+            modified_plan["python_code"] = 'print(json.dumps({"response": "I can provide analysis based on available data, but external literature search was not used per your preference."}))'
+            modified_plan["status"] = "CODE_COMPLETE"
+            modified_plan["explanation"] = "Providing analysis without external literature sources as requested."
+
+        return modified_plan
+
+    def _execute_plan_with_literature(self, plan: dict) -> str:
+        """Execute the original plan with literature components."""
+        # This continues the normal execution flow
+        # (integrated into the existing _process_turn method)
+        return self._continue_plan_execution(plan)
+
+    def _execute_modified_plan(self, plan: dict) -> str:
+        """Execute the modified plan without literature."""
+        return self._continue_plan_execution(plan)
+
+    def _continue_plan_execution(self, plan: dict) -> str:
+        """Continue with plan execution after literature confirmation."""
+        # This method is called from the existing _process_turn logic.
+        # For now, return a simple response - the actual execution logic
+        # is integrated into the existing code flow.
+        return plan.get("explanation", "Processing completed.")
+
     def _process_turn(self, user_query_text: str) -> tuple:
         """
         Processes a single turn of the conversation.
@@ -175,6 +321,19 @@
         print(f"[GenerationAgent] Thought: {generated_thought}")
         self._send_thought_to_r(f"GenerationAgent thought: {generated_thought}")
 
+        # --- Check for literature request and ask for user confirmation ---
+        print(f"[Manager._process_turn] Checking for literature request in plan: {plan.get('status', 'NO_STATUS')}")
+        print(f"[Manager._process_turn] User query: '{user_query_text}'")
+        print(f"[Manager._process_turn] Plan thought: '{plan.get('thought', '')[:100]}...'")
+
+        if self._detect_literature_request(plan, user_query_text):
+            print("[Manager._process_turn] Literature request detected, requesting user confirmation")
+            self._send_thought_to_r("Detected request for literature resources. Requesting user confirmation...")
+            confirmation_response = self._request_literature_confirmation(plan)
+            return confirmation_response, False, None
+        else:
+            print("[Manager._process_turn] No literature request detected, proceeding normally")
+
         # --- Start: Logic for handling AWAITING_DATA, AWAITING_IMAGE_ANALYSIS, AWAITING_ANALYSIS_CODE ---
         # This section processes plans that require execution of code or further interaction
         # before the main generation attempt can be considered complete or failed.
@@ -746,6 +905,113 @@
 
         return "Processing completed, but no response was generated.", False, None
 
+    def handle_literature_confirmation(self, user_response: bool, original_query: str = None) -> str:
+        """
+        Public method to handle literature confirmation from R/UI.
+        This method can be called from the R side when the user responds to the confirmation dialog.
+        """
+        print(f"[ManagerAgent] Received literature confirmation: {user_response}")
+
+        if self.pending_literature_confirmation is None:
+            return "No pending literature confirmation request."
+
+        plan = self.pending_literature_confirmation
+        self.pending_literature_confirmation = None
+
+        if user_response:
+            # User approved literature use - continue with the original plan
+            self._send_thought_to_r("User approved literature usage. Proceeding with research...")
+            return self._continue_processing_with_plan(plan, use_literature=True)
+        else:
+            # User declined literature use - modify the plan to skip literature
+            self._send_thought_to_r("User declined literature usage. Providing response without external literature...")
+            return self._continue_processing_with_plan(plan, use_literature=False)
+
+    def _continue_processing_with_plan(self, plan: dict, use_literature: bool) -> str:
+        """
+        Continue processing with the plan, either with or without literature.
+        This method executes the plan and returns the final response.
+        """
+        try:
+            if not use_literature:
+                # Modify the plan to remove literature components
+                plan = self._remove_literature_from_plan(plan)
+
+            # Execute the plan
+            if plan.get("status") == "CODE_COMPLETE":
+                # Plan is already complete, return the explanation
+                return plan.get("explanation", "Processing completed.")
+            elif plan.get("status") in ["AWAITING_DATA", "AWAITING_ANALYSIS_CODE"]:
+                # Plan needs code execution
+                return self._execute_plan_code(plan)
+            else:
+                # Unknown status, return the explanation
+                return plan.get("explanation", "Processing completed with unknown status.")
+
+        except Exception as e:
+            error_msg = f"Error continuing with plan: {str(e)}"
+            print(f"[ManagerAgent] {error_msg}")
+            return error_msg
+
+    def _execute_plan_code(self, plan: dict) -> str:
+        """
+        Execute the code in the plan and return the result.
+        This uses the supervisor and executor agents to safely run the code.
+        """
+        try:
+            code_to_execute = plan.get("python_code", "").strip()
+            if not code_to_execute:
+                return "No code to execute."
+
+            if not self.supervisor_agent or not self.executor_agent:
+                return "Cannot execute code, Supervisor or Executor agent is missing."
+
+            # Have the supervisor review the code
+            self._send_thought_to_r("Reviewing code for safety...")
+            review = self.supervisor_agent.review_code(code_to_execute, f"Reviewing user-approved plan: {plan.get('thought', '')}")
+            supervisor_status = review.get('safety_status', 'UNKNOWN_STATUS')
+            supervisor_feedback = review.get('safety_feedback', 'No feedback.')
+
+            print(f"[ManagerAgent] Code Review: {supervisor_feedback} (Status: {supervisor_status})")
+            self._send_thought_to_r(f"Code review: {supervisor_status} - {supervisor_feedback}")
+
+            if supervisor_status != "APPROVED_FOR_EXECUTION":
+                return f"Code execution blocked by supervisor: {supervisor_feedback}"
+
+            # Execute the code
+            self._send_thought_to_r("Executing code...")
+            execution_result = self.executor_agent.execute_code(code_to_execute)
+            execution_output = execution_result.get("execution_output", "")
+            execution_status = execution_result.get("execution_status", "UNKNOWN")
+
+            if execution_status == "SUCCESS":
+                self._send_thought_to_r("Code execution successful. Processing results...")
+
+                # If this is a literature search, we need to continue processing
+                if "intermediate_data_for_llm" in execution_output:
+                    # Add the results to the conversation history
+                    self.conversation_history.append({"role": "assistant", "content": f"```json\n{execution_output}\n```"})
+
+                    # Call GenerationAgent again to process the results
+                    self._send_thought_to_r("Processing literature search results...")
+                    followup_plan = self.generation_agent.generate_code_plan(
+                        user_query="Process the literature search results",
+                        conversation_history=self.conversation_history,
+                        image_file_id_for_prompt=None,
+                        previous_attempts_feedback=None
+                    )
+
+                    return followup_plan.get("explanation", "Literature search completed and processed.")
+                else:
+                    return execution_output
+            else:
+                return f"Code execution failed: {execution_output}"
+
+        except Exception as e:
+            error_msg = f"Error executing plan code: {str(e)}"
+            print(f"[ManagerAgent] {error_msg}")
+            return error_msg
+
     def process_single_query(self, user_query_text: str, conversation_history_from_r: list = None) -> str:
         """
         Processes a single query, suitable for calling from an external system like R/Shiny.
cache_data/excel_schema_cache.json
ADDED
@@ -0,0 +1,6 @@
{
  "freshness_signature": {
    "multi-omicsdata.xlsx": 1748219732.5558078
  },
"formatted_schema_string": "DYNAMICALLY DISCOVERED EXCEL SCHEMAS (first sheet columns shown):\n- File: 'www\\multi-omicsdata.xlsx' (Identifier: 'multi_omicsdata')\n Sheets: [Dataset detail, specific TF-Taiji, newspecific TF-Taiji, Validation list, T cell differentiation map, Kay nanostring panel, Meeting summary, Kays wetlab to do, Wang Lab work to do, Philip et al. 2017 Nature sampl, T cell migration associated gen, KO EX-> proEX, Reprogramming MP -> TRM, Reprogramming TEX -> TRM, Reprogramming TEX -> proEX]\n Columns (first sheet): [Author, Lab, Year, DOI, Accession, Data type, Species, Infection, Naive, MP (MPEC), ... (total 23 columns)]\n- File: 'www\\networkanalysis\\comp_log2FC_RegulatedData_TRMTEXterm.xlsx' (Identifier: 'comp_log2FC_RegulatedData_TRMTEXterm')\n Sheets: [Worksheet]\n Columns (first sheet): [Unnamed: 0, Ahr, Arid3a, Arnt, Arntl, Atf1, Atf2, Atf3, Atf4, Atf7, ... (total 199 columns)]\n- File: 'www\\old files\\log2FC_RegulatedData_TRMTEXterm.xlsx' (Identifier: 'log2FC_RegulatedData_TRMTEXterm')\n Sheets: [Worksheet]\n Columns (first sheet): [Unnamed: 0, Ahr, Arid3a, Arnt, Arntl, Atf1, Atf2, Atf3, Atf4, Atf7, ... (total 199 columns)]\n- File: 'www\\tablePagerank\\MP.xlsx' (Identifier: 'MP')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... (total 43 columns)]\n- File: 'www\\tablePagerank\\Naive.xlsx' (Identifier: 'Naive')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... 
(total 43 columns)]\n- File: 'www\\tablePagerank\\Table_TF PageRank Scores for Audrey.xlsx' (Identifier: 'Table_TF_PageRank_Scores_for_Audrey')\n Sheets: [Fig_1F (Multi state-specific TF, Fig_1G (Single state-specific T]\n Columns (first sheet): [Unnamed: 0, Category, Cell-state specificity, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, ... (total 45 columns)]\n- File: 'www\\tablePagerank\\TCM.xlsx' (Identifier: 'TCM')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... (total 43 columns)]\n- File: 'www\\tablePagerank\\TE.xlsx' (Identifier: 'TE')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... (total 43 columns)]\n- File: 'www\\tablePagerank\\TEM.xlsx' (Identifier: 'TEM')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... (total 43 columns)]\n- File: 'www\\tablePagerank\\TEXeff.xlsx' (Identifier: 'TEXeff')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... (total 43 columns)]\n- File: 'www\\tablePagerank\\TEXprog.xlsx' (Identifier: 'TEXprog')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... 
(total 43 columns)]\n- File: 'www\\tablePagerank\\TEXterm.xlsx' (Identifier: 'TEXterm')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... (total 43 columns)]\n- File: 'www\\tablePagerank\\TRM.xlsx' (Identifier: 'TRM')\n Sheets: [Sheet1]\n Columns (first sheet): [TF, Naive_Kaech_Kaech, Naive_Kaech_Chung, Naive_Mackay_Chung, Naive_MilnerAug_Chung, Naive_Renkema_Chung, Naive_Scott_Scott, MP_Kaech_Chung, MP_Kaech_Kaech, MP_Kaech_Scott, ... (total 43 columns)]\n- File: 'www\\tfcommunities\\texcommunities.xlsx' (Identifier: 'texcommunities')\n Sheets: [TEX Communities, TEX_c1, TEX_c2, TEX_c3, TEX_c4, TEX_c5, TRM Communities, TRM_c1, TRM_c2, TRM_c3, TRM_c4, TRM_c5]\n Columns (first sheet): [TEX Communities, TF Members]\n- File: 'www\\tfcommunities\\trmcommunities.xlsx' (Identifier: 'trmcommunities')\n Sheets: [Sheet1]\n Columns (first sheet): [TRM Communities, TF Members]\n- File: 'www\\TFcorintextrm\\TF-TFcorTRMTEX.xlsx' (Identifier: 'TF_TFcorTRMTEX')\n Sheets: [Sheet1]\n Columns (first sheet): [TF Name, TF Merged Graph Path]\n- File: 'www\\waveanalysis\\searchtfwaves.xlsx' (Identifier: 'searchtfwaves')\n Sheets: [Sheet1]\n Columns (first sheet): [Wave 1, Wave 2, Wave 3, Wave 4, Wave 5, Wave 6, Wave 7]"
}
server.R
CHANGED
|
@@ -2119,6 +2119,28 @@ if 'agents.manager_agent' in sys.modules:
|
|
| 2119 |
|
| 2120 |
print(paste("TaijiChat: Received from Python agent -", agent_reply_text))
|
| 2121 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 2122 |
# Check if this is an image response
|
| 2123 |
if (startsWith(agent_reply_text, "TAIJICHAT_IMAGE_RESPONSE:")) {
|
| 2124 |
# Extract the JSON part - START AFTER THE COLON IN THE PREFIX
|
|
@@ -2217,6 +2239,48 @@ if 'agents.manager_agent' in sys.modules:
|
|
| 2217 |
print("TaijiChat: Received empty user_chat_message.")
|
| 2218 |
}
|
| 2219 |
})
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```diff
     print(paste("TaijiChat: Received from Python agent -", agent_reply_text))

+    # Check if this is a literature confirmation request
+    if (startsWith(agent_reply_text, "TAIJICHAT_LITERATURE_CONFIRMATION:")) {
+      # Extract the JSON part (the prefix is 34 characters, so the payload starts at 35)
+      json_str <- substr(agent_reply_text, 35, nchar(agent_reply_text))
+
+      tryCatch({
+        confirmation_info <- jsonlite::fromJSON(json_str)
+        session$sendCustomMessage(type = "literature_confirmation_request",
+                                  message = list(
+                                    text = confirmation_info$message,
+                                    type = confirmation_info$type
+                                  ))
+      }, error = function(e) {
+        warning(paste("Failed to parse literature confirmation JSON:", e$message))
+        session$sendCustomMessage(type = "agent_chat_response",
+                                  message = list(text = "Error processing literature confirmation request."))
+      })
+
+      # Don't update chat history or send a response yet - wait for user confirmation
+      return()
+    }
+
     # Check if this is an image response
     if (startsWith(agent_reply_text, "TAIJICHAT_IMAGE_RESPONSE:")) {
       # Extract the JSON part - START AFTER THE COLON IN THE PREFIX
```

```diff
       print("TaijiChat: Received empty user_chat_message.")
     }
   })
+
+  # --- Literature Confirmation Handler ---
+  observeEvent(input$literature_confirmation_response, {
+    req(input$literature_confirmation_response)
+
+    user_response <- input$literature_confirmation_response
+    print(paste("TaijiChat: Received literature confirmation response:", user_response))
+
+    agent_instance_val <- rv_agent_instance()
+
+    if (!is.null(agent_instance_val)) {
+      tryCatch({
+        # Call the agent's literature confirmation handler
+        agent_reply_py <- agent_instance_val$handle_literature_confirmation(
+          user_response = as.logical(user_response)
+        )
+        agent_reply_text <- as.character(agent_reply_py)
+
+        print(paste("TaijiChat: Literature confirmation processed, response:", agent_reply_text))
+
+        # Update chat history with the agent's response
+        current_hist <- chat_history()
+        final_hist <- append(current_hist, list(list(role = "assistant", content = agent_reply_text)))
+        chat_history(final_hist)
+
+        # Send the response to the UI
+        session$sendCustomMessage(type = "agent_chat_response", message = list(text = agent_reply_text))
+
+      }, error = function(e) {
+        error_message <- paste("TaijiChat: Error processing literature confirmation:", e$message)
+        warning(error_message)
+        print(error_message)
+        session$sendCustomMessage(type = "agent_chat_response",
+                                  message = list(text = "Sorry, an error occurred processing your literature preference."))
+      })
+    } else {
+      warning("TaijiChat: Agent instance is NULL during literature confirmation.")
+      session$sendCustomMessage(type = "agent_chat_response",
+                                message = list(text = "The chat agent is not available for literature confirmation."))
+    }
+  })
+
   # --- END: TaijiChat Message Handling ---

   #Render and hyperlink table + edit size so that everything fits into webpage
```
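The R handler above detects the `TAIJICHAT_LITERATURE_CONFIRMATION:` sentinel and strips its 34-character prefix before decoding the JSON payload (hence `substr(agent_reply_text, 35, ...)`). As a sanity check on that contract, here is a minimal Python sketch of how the agent side might build and parse such a message — the helper names and the `"literature_confirmation"` type value are assumptions; the real logic lives in `agents/manager_agent.py`, which is not shown in this chunk:

```python
import json

# The sentinel prefix the R side matches with startsWith(); it is 34
# characters long, so R's substr(text, 35, nchar(text)) yields the JSON.
LITERATURE_PREFIX = "TAIJICHAT_LITERATURE_CONFIRMATION:"

def build_confirmation_request(message: str) -> str:
    """Serialize a confirmation request in the shape server.R expects."""
    payload = {
        "type": "literature_confirmation",  # read as confirmation_info$type in R
        "message": message,                 # read as confirmation_info$message in R
    }
    return LITERATURE_PREFIX + json.dumps(payload)

def parse_confirmation_request(reply: str) -> dict:
    """Mirror of the R-side parsing: strip the fixed prefix, decode the JSON."""
    assert reply.startswith(LITERATURE_PREFIX)
    return json.loads(reply[len(LITERATURE_PREFIX):])
```

Using `len(LITERATURE_PREFIX)` instead of a hard-coded offset keeps the Python side robust if the sentinel ever changes; the R side would still need its `35` updated in step.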
www/chat_script.js
CHANGED

```diff
@@ -529,4 +529,48 @@ function initializeChatUI() {
       addChatMessage("An unexpected error occurred with the agent.", 'agent-error');
     }
   });
+
+  Shiny.addCustomMessageHandler("literature_confirmation_request", function(message) {
+    console.log("Received literature confirmation request");
+    if (message && typeof message.text === 'string') {
+      showLiteratureConfirmationDialog(message.text);
+    }
+  });
+
+  function showLiteratureConfirmationDialog(messageText) {
+    var $chatMessages = $('#chatMessages');
+
+    // Create the confirmation dialog box
+    var $confirmationDiv = $('<div></div>').addClass('literature-confirmation-dialog');
+    $confirmationDiv.html(`
+      <div class="confirmation-content">
+        <div class="confirmation-message">${messageText}</div>
+        <div class="confirmation-buttons">
+          <button class="btn btn-success confirmation-yes" onclick="handleLiteratureConfirmation(true)">Yes</button>
+          <button class="btn btn-secondary confirmation-no" onclick="handleLiteratureConfirmation(false)">No</button>
+        </div>
+      </div>
+    `);
+
+    // Add the dialog to the chat
+    $chatMessages.append($confirmationDiv);
+    $chatMessages.scrollTop($chatMessages[0].scrollHeight);
+
+    // Disable input while waiting for confirmation
+    $('#chatInput').prop('disabled', true);
+    $('#sendChatMsg').prop('disabled', true);
+  }
+
+  // Global function to handle the literature confirmation response
+  window.handleLiteratureConfirmation = function(userChoice) {
+    console.log("Literature confirmation choice:", userChoice);
+
+    // Remove the confirmation dialog
+    $('.literature-confirmation-dialog').remove();
+
+    // Send the response to Shiny
+    Shiny.setInputValue("literature_confirmation_response", userChoice, {priority: "event"});
+
+    // Keep input disabled - it will be re-enabled when the agent responds
+  };
 }
```
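The JavaScript above disables the chat input and forwards the Yes/No choice to `input$literature_confirmation_response`, which server.R relays to the agent's `handle_literature_confirmation(user_response)`. That call implies the agent keeps the original query stashed while the dialog is open. A rough Python sketch of that pause/resume state follows — the class, attribute, and message wording are hypothetical illustrations, not the actual `ManagerAgent` code:

```python
import json

class LiteratureConfirmationFlow:
    """Hypothetical sketch: pause a query until the user confirms literature use."""

    PREFIX = "TAIJICHAT_LITERATURE_CONFIRMATION:"

    def __init__(self):
        self.pending_query = None  # query stashed while the dialog is open

    def request_confirmation(self, query):
        # Stash the query and emit the sentinel that server.R turns into a dialog.
        self.pending_query = query
        payload = {"type": "literature_confirmation",
                   "message": "Include external literature sources and paper.pdf?"}
        return self.PREFIX + json.dumps(payload)

    def handle_literature_confirmation(self, user_response):
        # Resume the stashed query with or without literature tools enabled.
        query, self.pending_query = self.pending_query, None
        mode = "with" if user_response else "without"
        return f"Re-running '{query}' {mode} external literature."
```

Clearing `pending_query` on resume matters: the UI re-enables input only after the agent's reply arrives, so a stale stashed query could otherwise be resumed twice.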
www/chat_styles.css
CHANGED

```diff
@@ -158,4 +158,65 @@
   color: #bfa100;
   margin-right: 4px;
   user-select: none;
+}
+
+/* Literature confirmation dialog styles */
+.literature-confirmation-dialog {
+  background-color: #fff3cd;
+  border: 2px solid #ffc107;
+  border-radius: 8px;
+  padding: 15px;
+  margin: 10px 0;
+  box-shadow: 0 2px 10px rgba(0,0,0,0.1);
+  animation: fadeIn 0.3s ease-in;
+}
+
+.confirmation-content {
+  text-align: center;
+}
+
+.confirmation-message {
+  font-size: 14px;
+  color: #856404;
+  margin-bottom: 15px;
+  line-height: 1.4;
+}
+
+.confirmation-buttons {
+  display: flex;
+  gap: 10px;
+  justify-content: center;
+}
+
+.confirmation-buttons .btn {
+  min-width: 60px;
+  font-size: 13px;
+  padding: 6px 12px;
+  border-radius: 4px;
+  cursor: pointer;
+  border: none;
+  transition: background-color 0.2s;
+}
+
+.confirmation-yes {
+  background-color: #28a745;
+  color: white;
+}
+
+.confirmation-yes:hover {
+  background-color: #218838;
+}
+
+.confirmation-no {
+  background-color: #6c757d;
+  color: white;
+}
+
+.confirmation-no:hover {
+  background-color: #545b62;
+}
+
+@keyframes fadeIn {
+  from { opacity: 0; transform: translateY(-10px); }
+  to { opacity: 1; transform: translateY(0); }
 }
```