"""
HaramGuard โ€” CoordinatorAgent
================================
AISA Layer : Reasoning + Governance
Design Pattern : ReAct (Reason โ†’ Act โ†’ Observe) + Output Guardrails

ReAct Pattern Implementation (Required by Capstone Rubric):
  This agent implements the ReAct (Reasoning-Acting-Observing) design pattern
  as explicitly required by the capstone project rubric. The ReAct loop enables
  iterative self-correction through:
  
  1. REASON: Analyze the situation and compose a structured prompt
     - Input: RiskResult, Decision, recent frames, feedback (if any)
     - Output: Contextualized prompt for LLM
  
  2. ACT: Execute the action (call LLM to generate action plan)
     - Model: Groq API (GPT-OSS-120B or similar)
     - Output: Raw JSON plan from LLM
  
  3. OBSERVE: Validate the output using guardrails
     - Run 5 validation checks (GR-C1 through GR-C5)
     - If issues found: generate feedback and loop back to REASON
     - If valid: return the plan
  
  The loop continues up to MAX_REACT_ITERS (3) times, ensuring the agent
  can self-correct errors in its reasoning or output format.

Responsibilities:
  - Called on ALL decisions (P0/P1/P2) โ€” not just critical emergencies
  - Implements a ReAct loop (max 3 iterations):
      Reason  : analyse the crowd situation and compose a prompt
      Act     : call LLM (Groq) to generate a structured action plan
      Observe : run 6 guardrails to validate the output
      โ†’ if validation fails, feed back issues and Reason again
  - Guardrails:
      GR-C1: Required fields check
      GR-C2: Valid threat level (CRITICAL/HIGH/MEDIUM/LOW only)
      GR-C3: Confidence score in [0, 1]
      GR-C4: Consistency check (low risk score โ‰  CRITICAL threat)
      GR-C5: Arabic alert fallback if empty
      GR-C6: selected_gates must be a non-empty list
"""

import json
import numpy as np
from typing import Optional, Tuple

from groq import Groq

from core.models import RiskResult, Decision

class CoordinatorAgent:

    REQUIRED_FIELDS = {
        'threat_level', 'executive_summary', 'selected_gates',
        'immediate_actions', 'actions_justification',
        'arabic_alert', 'confidence_score'
    }
    VALID_THREATS   = {'CRITICAL', 'HIGH', 'MEDIUM', 'LOW'}
    MAX_REACT_ITERS = 3   # ReAct: maximum reasoning iterations

    # Real gate list — injected into every LLM prompt so the agent can choose
    HARAM_GATES = [
        'باب الملك عبدالعزيز',   # South, main entrance, highest traffic
        'باب الملك فهد',         # North, large capacity
        'باب السلام',            # East, historic, medium traffic
        'باب الفتح',             # West, medium capacity
        'باب العمرة',            # West, Umrah pilgrims
        'باب الملك عبدالله',     # South-West, high capacity
        'باب الصفا',             # East, leads to Safa-Marwa
        'باب علي',               # North-East, smaller gate
        'باب الزيادة',           # North, overflow gate
        'باب المروة',            # East, leads to Marwa
    ]

    def __init__(self, groq_api_key: str):
        self.name       = 'CoordinatorAgent'
        self.aisa_layer = 'Reasoning + Governance (ReAct)'

        self._groq_client    = Groq(api_key=groq_api_key)
        self._active_backend = 'groq'
        self._active_model   = 'llama-3.3-70b-versatile'

        print(f'🧠 [CoordinatorAgent] Ready — backend=groq model={self._active_model} | ReAct loop')

    # ── Guardrails ──────────────────────────────────────────────────────

    def _validate(self, plan: dict, risk_score: float) -> Tuple[dict, list]:
        """
        Validate and sanitise model output.
        Returns (corrected_plan, list_of_issues).
        Issues list is empty when the plan is fully valid.
        """
        issues = []

        if not isinstance(plan, dict):
            plan = {}
            issues.append('GR_C1_invalid_json_object')

        # GR-C1: Required fields
        for field_name in self.REQUIRED_FIELDS:
            if field_name not in plan:
                plan[field_name] = 'N/A'
                issues.append(f'GR_C1_missing:{field_name}')

        # GR-C2: Valid threat level
        tl = str(plan.get('threat_level', '')).upper()
        if tl not in self.VALID_THREATS:
            issues.append(f'GR_C2_invalid_threat:{tl}->HIGH')
            plan['threat_level'] = 'HIGH'
        else:
            plan['threat_level'] = tl

        # GR-C3: Confidence in [0, 1]
        cs = plan.get('confidence_score', 0)
        if not isinstance(cs, (int, float)) or not (0 <= cs <= 1):
            issues.append(f'GR_C3_invalid_confidence:{cs}->0.5')
            plan['confidence_score'] = 0.5

        # GR-C4: Consistency — threat_level must match risk_score thresholds
        #   risk > 0.80  (density_pct > 80%)  → HIGH
        #   risk > 0.20  (density_pct > 20%)  → MEDIUM
        #   risk <= 0.20 (density_pct <= 20%) → LOW
        expected = 'HIGH' if risk_score > 0.80 else 'MEDIUM' if risk_score > 0.20 else 'LOW'
        tl_current = plan['threat_level']
        if tl_current != expected:
            issues.append(f'GR_C4_threat_corrected:{tl_current}->{expected}')
            plan['threat_level'] = expected

        # GR-C5: Arabic alert fallback
        if not str(plan.get('arabic_alert', '')).strip():
            plan['arabic_alert'] = (
                'تنبيه أمني: يُرجى مراقبة نقاط التجمع واتخاذ الإجراءات الوقائية اللازمة.'
            )
            issues.append('GR_C5_arabic_fallback')

        # GR-C1 extra: immediate_actions must be a non-empty list
        ia = plan.get('immediate_actions', [])
        if not isinstance(ia, list) or not ia:
            plan['immediate_actions'] = ['زيادة المراقبة الميدانية', 'إرسال وحدات إلى نقطة الازدحام']
            issues.append('GR_C1_immediate_actions_fixed')

        # GR-C6: selected_gates — count enforced by threat level
        #   LOW/P2   → 0 gates (no action needed)
        #   MEDIUM   → exactly 1 gate
        #   HIGH     → exactly 2 gates
        #   CRITICAL → exactly 2 gates (same as HIGH)
        tl_now = plan.get('threat_level', 'LOW')
        sg = plan.get('selected_gates', [])
        if not isinstance(sg, list):
            sg = []

        if tl_now == 'LOW':
            plan['selected_gates'] = []          # no action, no gates shown
        elif tl_now == 'MEDIUM':
            if not sg:
                plan['selected_gates'] = ['باب الملك عبدالعزيز']
                issues.append('GR_C6_medium_fallback')
            else:
                plan['selected_gates'] = sg[:1]  # cap at 1
        else:  # HIGH or CRITICAL
            if len(sg) < 2:
                fallback = ['باب الملك عبدالعزيز', 'باب السلام']
                plan['selected_gates'] = (sg + fallback)[:2]
                issues.append('GR_C6_high_padded')
            else:
                plan['selected_gates'] = sg[:2]  # cap at 2

        plan['_guardrail_issues'] = issues
        return plan, issues
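The thresholds behind GR-C4 and the gate-count policy behind GR-C6 can be sketched in isolation. A minimal standalone version, with hypothetical helper names (`expected_threat`, `allowed_gate_count`) that mirror the logic inside `_validate`:

```python
# Standalone sketch of the GR-C4 / GR-C6 policy. Helper names are
# illustrative; the real logic lives inside CoordinatorAgent._validate.

def expected_threat(risk_score: float) -> str:
    """GR-C4: map a risk score in [0, 1] to the threat level it implies."""
    if risk_score > 0.80:
        return 'HIGH'
    if risk_score > 0.20:
        return 'MEDIUM'
    return 'LOW'


def allowed_gate_count(threat_level: str) -> int:
    """GR-C6: how many gates a plan may name at each threat level."""
    return {'LOW': 0, 'MEDIUM': 1, 'HIGH': 2, 'CRITICAL': 2}[threat_level]
```

Because `_validate` both corrects the plan and records each correction in `issues`, a plan whose LLM-chosen `threat_level` disagrees with `expected_threat(risk_score)` is silently clamped, and the disagreement is fed back into the next Reason step.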

    # ── ReAct helpers ───────────────────────────────────────────────────

    def _build_prompt(
        self,
        rr: RiskResult,
        decision: Decision,
        recent_frames: list,
        feedback: str = ''
    ) -> str:
        """
        Reason step: compose the model prompt.
        If feedback is provided (from a previous failed Observe step),
        include it so the model can correct itself.
        """
        avg_p      = np.mean([f.person_count for f in recent_frames]) if recent_frames else 0
        cur_count  = recent_frames[-1].person_count if recent_frames else 0
        gates_list = '\n'.join(
            f'{i+1}. {g}' for i, g in enumerate(self.HARAM_GATES)
        )

        base = (
            'أنت منسق نظام حارس الحرم لإدارة سلامة الحشود في المسجد الحرام.\n'
            'مهمتك: إنتاج خطة تشغيلية واضحة وموجزة للمشغلين بناءً على بيانات الحشود الحالية.\n\n'
            f'مستوى الخطر     : {rr.risk_level} (الدرجة: {rr.risk_score:.3f})\n'
            f'الأولوية        : {decision.priority}  (P0=طارئ حرج، P1=تحذير وقائي، P2=مراقبة روتينية)\n'
            f'اتجاه الحشود    : {rr.trend}\n'
            f'العدد الحالي    : {cur_count} شخص | المتوسط (آخر 30 إطار): {avg_p:.0f} | الذروة: {rr.window_max}\n\n'
            'البوابات المتاحة في المسجد الحرام — اختر الأنسب منها:\n'
            f'{gates_list}\n\n'
            'إرشادات اختيار البوابات:\n'
            '  P0 (طارئ حرج): افتح بوابات الإخلاء عالية السعة + أغلق المداخل المزدحمة\n'
            '  P1 (تحذير)   : فعّل اللافتات الإرشادية نحو البوابات الأقل ازدحاماً\n'
            '  P2 (روتيني)  : راقب البوابات الأكثر حركة فقط\n\n'
        )

        if feedback:
            base += (
                'تصحيح مطلوب — الإجابة السابقة بها مشاكل:\n'
                f'{feedback}\n'
                'صحح جميع المشاكل وأعد الإجابة.\n\n'
            )

        base += (
            'قواعد الإخراج الصارمة:\n'
            '- أجب فقط بـ JSON خام. بدون markdown. بدون backticks. بدون أي نص قبله أو بعده.\n'
            '- جميع حقول النص يجب أن تكون باللغة العربية الفصحى الرسمية حصراً. لا تستخدم العامية أو اللهجات المحلية مطلقاً. استخدم أسلوباً مهنياً رسمياً يليق بإدارة الحرم المكي.\n'
            '- ممنوع استخدام صيغة الأمر المباشر (راقبوا، تأكدوا، افتحوا). استخدم دائماً صيغة يُرجى / يُطلب / يُوصى. مثال: يُرجى مراقبة... وليس راقبوا...\n'
            '- لا تستخدم علامات الاقتباس المزدوجة (") داخل قيم النص العربي.\n'
            '- لا تضع أسطراً جديدة (\\n) داخل قيم النصوص في JSON.\n\n'
            'قواعد selected_gates — إلزامية بحسب مستوى الخطر:\n'
            '  * LOW  (P2 روتيني) : قائمة فارغة [] — لا حاجة لأي إجراء على البوابات.\n'
            '  * MEDIUM (P1 تحذير): بوابة واحدة فقط — الأنسب للتوجيه الوقائي.\n'
            '  * HIGH/CRITICAL (P0): بوابتان فقط — بوابة إخلاء + بوابة تحويل.\n'
            '  * استخدم الأسماء العربية الدقيقة من القائمة أعلاه حرفاً بحرف.\n'
            '  * ممنوع منعاً باتاً إدراج أكثر من بوابتين في أي حال — هذا خطأ فادح.\n\n'
            '- immediate_actions: 3 إلى 5 إجراءات عربية قصيرة تذكر البوابات المختارة بالاسم.\n'
            '- actions_justification: أقل من 40 كلمة. اشرح لماذا هذه البوابات بالذات.\n'
            '- arabic_alert: أقل من 15 كلمة. توجيه رسمي بالفصحى موجّه لموظفي الأمن والمشغلين حصراً. استخدم صيغة يُرجى/يُطلب فقط. مثال: يُرجى توجيه الحشود نحو باب الملك فهد ومراقبة نقاط التجمع.\n'
            '- executive_summary: أقل من 20 كلمة. ملخص الموقف للقيادة.\n\n'
            'أعد بالضبط هذا JSON (بدون حقول إضافية، بدون نص إضافي):\n'
            '{\n'
            '  "threat_level": "HIGH",\n'
            '  "executive_summary": "...",\n'
            '  "selected_gates": ["باب ..."],\n'
            '  "immediate_actions": ["..."],\n'
            '  "actions_justification": "...",\n'
            '  "arabic_alert": "...",\n'
            '  "confidence_score": 0.85\n'
            '}'
        )
        return base

    def _llm_call(self, prompt: str) -> Optional[dict]:
        """Act step: call Groq LLM, parse JSON response."""
        raw_text = ''
        try:
            resp = self._groq_client.chat.completions.create(
                model=self._active_model,
                messages=[{'role': 'user', 'content': prompt}],
                max_tokens=1200,
                temperature=0.2,
            )
            raw_text = (resp.choices[0].message.content or '').strip()

            # ── Parse JSON ──────────────────────────────────────────────
            try:
                return json.loads(raw_text)
            except json.JSONDecodeError:
                pass

            # Brace-counting extractor (handles truncated responses)
            start = raw_text.find('{')
            if start != -1:
                fragment = raw_text[start:]
                depth    = 0
                best_end = -1
                for i, ch in enumerate(fragment):
                    if ch == '{':
                        depth += 1
                    elif ch == '}':
                        depth -= 1
                        if depth == 0:
                            best_end = i
                            break
                if best_end != -1:
                    return json.loads(fragment[:best_end + 1])

            raise json.JSONDecodeError('No valid JSON block found', raw_text, 0)

        except json.JSONDecodeError as e:
            print(f'   [CoordinatorAgent] JSON parse error: {e}')
            print(f'   [CoordinatorAgent] Raw LLM response (first 600 chars):\n{raw_text[:600]}')
            return {}
        except Exception as e:
            print(f'   [CoordinatorAgent] API error: {e}')
            return None
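The brace-counting fallback used above can be exercised on its own. A self-contained sketch (hypothetical name `extract_json_block`) that tries a plain `json.loads` first, then scans from the first `{` until the brace depth returns to zero — with the same caveat as the method: braces inside string values are not tracked:

```python
import json
from typing import Optional


def extract_json_block(text: str) -> Optional[dict]:
    """Return the first balanced {...} object in text, parsed as JSON.

    Mirrors the fallback in CoordinatorAgent._llm_call. Note: the depth
    counter does not skip braces inside JSON string values.
    """
    try:
        return json.loads(text)           # fast path: whole string is JSON
    except json.JSONDecodeError:
        pass
    start = text.find('{')
    if start == -1:
        return None
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == '{':
            depth += 1
        elif ch == '}':
            depth -= 1
            if depth == 0:                # first balanced object found
                try:
                    return json.loads(text[start:i + 1])
                except json.JSONDecodeError:
                    return None
    return None                           # unbalanced (truncated) response
```

This is why `_llm_call` can still recover a plan when the model wraps its JSON in prose, while a truncated response falls through to the `JSONDecodeError` branch and returns `{}` for the guardrails to repair.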

    # ── Public API ──────────────────────────────────────────────────────

    def call(self, rr: RiskResult, decision: Decision, recent_frames: list) -> Optional[dict]:
        """
        ReAct loop: Reason โ†’ Act โ†’ Observe, up to MAX_REACT_ITERS times.
        Returns validated plan dict or None on unrecoverable error.
        """
        print(f'\n🧠 [CoordinatorAgent] {decision.priority} @ frame {rr.frame_id} → ReAct loop starting...')

        feedback  = ''
        best_plan = None

        for iteration in range(1, self.MAX_REACT_ITERS + 1):
            print(f'   ↺ ReAct iteration {iteration}/{self.MAX_REACT_ITERS}')

            # ── Reason ──────────────────────────────────────────────────
            prompt = self._build_prompt(rr, decision, recent_frames, feedback)

            # ── Act ─────────────────────────────────────────────────────
            raw = self._llm_call(prompt)
            if raw is None:
                print('   [CoordinatorAgent] API unavailable — aborting ReAct')
                return best_plan  # return previous best (may be None)

            # ── Observe ─────────────────────────────────────────────────
            plan, issues = self._validate(raw, rr.risk_score)
            best_plan    = plan

            if not issues:
                print(f'   ✅ [CoordinatorAgent] Plan validated on iteration {iteration}')
                plan['_react_iterations'] = iteration
                plan['_llm_model'] = f'{self._active_backend}/{self._active_model}'
                return plan

            feedback = '; '.join(issues)
            print(f'   ⚠️  Issues found ({len(issues)}): {feedback}')

        print('   [CoordinatorAgent] Max ReAct iterations reached — returning best effort')
        if best_plan:
            best_plan['_react_iterations'] = self.MAX_REACT_ITERS
            best_plan['_llm_model'] = f'{self._active_backend}/{self._active_model}'
        return best_plan
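The control flow of `call` — Reason composes a prompt (with feedback, if any), Act queries the model, Observe validates and either returns or loops — can be demonstrated without the network. A stub-driven sketch (the fake LLM, the reduced field set, and the helper names are illustrative, not the real Groq call or guardrails):

```python
# Stub-driven sketch of the ReAct loop: a scripted "LLM" fails validation
# once, receives the issues as feedback, and corrects itself on iteration 2.
REQUIRED = {'threat_level', 'confidence_score'}
MAX_ITERS = 3


def fake_llm(prompt: str) -> dict:
    # First call omits confidence_score; once feedback appears in the
    # prompt, the corrected plan is returned.
    if 'feedback' in prompt:
        return {'threat_level': 'MEDIUM', 'confidence_score': 0.7}
    return {'threat_level': 'MEDIUM'}


def react_loop():
    feedback = ''
    plan = {}
    for iteration in range(1, MAX_ITERS + 1):
        # Reason: compose the prompt, threading in any earlier issues
        prompt = 'plan request' + (f' feedback: {feedback}' if feedback else '')
        # Act: query the (stubbed) model
        plan = fake_llm(prompt)
        # Observe: validate; empty issues list means the plan is accepted
        issues = [f'missing:{f}' for f in REQUIRED if f not in plan]
        if not issues:
            return plan, iteration
        feedback = '; '.join(issues)
    return plan, MAX_ITERS


plan, iters = react_loop()
```

The real agent follows the same shape, with `_build_prompt` as Reason, `_llm_call` as Act, and `_validate` as Observe; the only extra wrinkle is that `_validate` also repairs the plan in place, so even a run that exhausts `MAX_REACT_ITERS` returns a best-effort plan rather than `None`.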