KinetoLabs Claude Opus 4.5 committed
Commit 78caafb · 1 Parent(s): 706520f

Major changes:
- Consolidate 4 tabs into 2: Input (room+images+observations) and Results+Chat
- Add chat interface for Q&A and document modifications (pipeline/chat.py)
- Lazy imports throughout to defer chromadb until needed
- Auto-build RAG index on startup if empty
- Default mock_models=False for production

Files added:
- ui/tabs/input_tab.py: Combined input tab with accordions
- ui/tabs/results_tab.py: Results display with chat interface
- pipeline/chat.py: Chat handler using vision model in text-only mode

Import chain fixes (HF Spaces compatibility):
- pipeline/__init__.py: __getattr__ lazy imports
- rag/__init__.py: __getattr__ lazy imports
- pipeline/main.py, dispositions.py, generator.py: TYPE_CHECKING + deferred imports
- rag/retriever.py: Lazy vectorstore import in __init__

FDAM methodology fixes:
- Public/Childcare lead threshold: 4.3 → 0.54 µg/100cm² (FDAM §1.4)
- Sample density tiers for large areas (FDAM §2.3)
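
For scale, the stricter public/childcare threshold can be sketched as a comparison helper. Only the 0.54 (new) and 4.3 (old) values come from this commit; the function and parameter names are hypothetical:

```python
# Hypothetical helper; only the 0.54 (new) and 4.3 (old) public/childcare
# values come from the commit (FDAM §1.4).
PUBLIC_CHILDCARE_LEAD_THRESHOLD = 0.54  # µg/100cm² (previously 4.3)

def exceeds_lead_threshold(measured_ug_per_100cm2: float,
                           threshold: float = PUBLIC_CHILDCARE_LEAD_THRESHOLD) -> bool:
    """True if a wipe-sample result exceeds the applicable threshold."""
    return measured_ug_per_100cm2 > threshold
```

A reading of 1.0 µg/100cm² now fails the public/childcare criterion, where under the old 4.3 threshold it would have passed.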

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

CLAUDE.md CHANGED
@@ -34,8 +34,20 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 
 ## UI Components (Gradio 6.x)
 
-**MVP Simplification:** 4 tabs (Room Assessment, Images, Observations, Generate Results).
-Single-room workflow - no project-level or multi-room support.
+**Simplified 2-Tab UI:** Input + Results/Chat.
+Single-room workflow with integrated chat for Q&A and document modifications.
+
+### Tab 1: Input
+Uses `gr.Accordion` for collapsible sections:
+- **Room Details** (open by default): Name, dimensions, ceiling height, facility classification, construction era
+- **Images** (open by default): Multi-file upload, gallery preview, image count
+- **Field Observations** (collapsed by default): 15 qualitative observation fields
+
+### Tab 2: Results + Chat
+- **Results Display**: Annotated gallery, assessment stats (JSON), SOW document (markdown)
+- **Downloads**: Markdown and PDF export
+- **Chat Interface**: Q&A about results, document modifications via `gr.Chatbot(type="messages")`
+- **Quick Actions**: Pre-defined buttons for common queries
 
 The frontend uses optimized input components:
 
@@ -47,9 +59,11 @@ The frontend uses optimized input components:
 | Facility Classification | `gr.Radio` | operational, non-operational, public-childcare |
 | Construction Era | `gr.Radio` | pre-1980, 1980-2000, post-2000 |
 | Image Upload | `gr.Files(file_count="multiple")` | Batch upload, auto-assigned to room |
+| Chat | `gr.Chatbot(type="messages")` | Gradio 6 messages format |
 
 **Keyboard Shortcuts:**
-- `Ctrl+1` through `Ctrl+4`: Navigate between tabs
+- `Ctrl+1`: Navigate to Input tab
+- `Ctrl+2`: Navigate to Results tab
 
 ## Development Commands
 
@@ -87,8 +101,12 @@ mypy . # Type checking
 ├── models/ # Model loading (mock vs real)
 ├── rag/ # Chunking, vectorstore, retrieval
 ├── schemas/ # Pydantic input/output models
-├── pipeline/ # Main processing logic
-├── ui/ # Gradio UI components (4 tabs: room, images, observations, results)
+├── pipeline/ # Main processing logic + chat handler
+│   └── chat.py # Chat handler for Q&A and document mods
+├── ui/ # Gradio UI components
+│   └── tabs/ # Tab modules
+│       ├── input_tab.py # Combined input (room + images + observations)
+│       └── results_tab.py # Results display + chat interface
 ├── RAG-KB/ # Knowledge base source files
 ├── chroma_db/ # ChromaDB persistence (generated)
 └── sample_images/ # Sample fire damage images for testing
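
The `gr.Chatbot(type="messages")` format referenced in the table above stores history as OpenAI-style `role`/`content` dicts. A minimal sketch (the message text is illustrative):

```python
# Gradio 6 Chatbot(type="messages") history: a list of role/content dicts, in order.
def append_turn(history: list[dict], user_msg: str, assistant_msg: str) -> list[dict]:
    """Return a new history with one user/assistant exchange appended."""
    return history + [
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": assistant_msg},
    ]

history = append_turn([], "Which zones were assigned?", "Zones are listed in the SOW document.")
```

The same list can be stored in session state and handed straight back to the Chatbot component as an output value.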
app.py CHANGED
@@ -1,7 +1,7 @@
 """FDAM AI Pipeline - Fire Damage Assessment Methodology v4.0.1
 
-Main Gradio application entry point with session state and tab validation.
-MVP Simplification: Single room, 4 tabs (Room, Images, Observations, Results).
+Main Gradio application entry point with session state and chat functionality.
+Simplified UI: 2 tabs (Input + Results/Chat).
 """
 
 import gradio as gr
@@ -18,20 +18,18 @@ logger = logging.getLogger(__name__)
 from models.loader import get_models
 from ui.state import SessionState, create_new_session
 from ui.storage import get_head_html
-from ui.tabs import room, images, observations, results
+from ui.tabs import input_tab, results_tab
 from ui import samples
+from pipeline.chat import ChatHandler, get_quick_action_message
 
 
-# Keyboard shortcuts JavaScript (Ctrl+1-4 for tab navigation)
+# Keyboard shortcuts JavaScript (Ctrl+1-2 for tab navigation)
 KEYBOARD_JS = """
 <script>
 document.addEventListener('keydown', (e) => {
-  if (e.ctrlKey && e.key >= '1' && e.key <= '4') {
+  if (e.ctrlKey && e.key >= '1' && e.key <= '2') {
     e.preventDefault();
-    const tabIds = [
-      'tab-room-button', 'tab-images-button',
-      'tab-observations-button', 'tab-results-button'
-    ];
+    const tabIds = ['tab-input-button', 'tab-results-button'];
     const tabIndex = parseInt(e.key) - 1;
     const tabButton = document.getElementById(tabIds[tabIndex]);
     if (tabButton) {
@@ -53,17 +51,39 @@ VALIDATION_CSS = """
 """
 
 
+def ensure_rag_index():
+    """Ensure RAG index is built. Builds on first run if empty."""
+    from pathlib import Path
+    try:
+        # Check if index exists and has content
+        chroma_path = Path(__file__).parent / "chroma_db"
+        if not chroma_path.exists() or not any(chroma_path.iterdir()):
+            logger.info("RAG index empty or missing - building from RAG-KB...")
+            from rag.index_builder import build_index
+            stats = build_index(rebuild=False)
+            logger.info(f"RAG index built: {stats['chunks_created']} chunks from {stats['documents_processed']} documents")
+        else:
+            logger.info("RAG index found")
+    except Exception as e:
+        logger.warning(f"RAG index build failed (will use fallback): {e}")
+
+
 def create_app() -> gr.Blocks:
     """Create the main Gradio application."""
 
     # Initialize models at startup
     model_stack = get_models()
 
-    # Note: head parameter moved to launch() in Gradio 6.0
-    # localStorage JS will be injected there
+    # Ensure RAG index is built (builds on first run)
+    ensure_rag_index()
+
+    # Initialize chat handler
+    chat_handler = ChatHandler(model_stack)
+
     with gr.Blocks(
         title="FDAM AI Pipeline - Fire Damage Assessment",
         css=VALIDATION_CSS,
+        head=get_head_html(KEYBOARD_JS),
     ) as app:
         # Session state (stored in Gradio State component)
         session_state = gr.State(value=create_new_session())
@@ -100,31 +120,19 @@ def create_app() -> gr.Blocks:
         sample_status = gr.HTML(
             value="",
             elem_id="sample_status",
-            scale=3,
         )
 
-        # Tab navigation (elem_id for stable JS selectors - Gradio appends "-button" for tab buttons)
-        # Store Tab references for individual select event handlers
+        # Tab navigation (2 tabs)
         with gr.Tabs() as tabs:
-            # Tab 1: Room Assessment
-            tab_room = gr.Tab("1. Room Assessment", id=0, elem_id="tab-room")
-            with tab_room:
-                tab1 = room.create_tab()
-
-            # Tab 2: Images
-            tab_images = gr.Tab("2. Images", id=1, elem_id="tab-images")
-            with tab_images:
-                tab2 = images.create_tab()
-
-            # Tab 3: Observations
-            tab_observations = gr.Tab("3. Observations", id=2, elem_id="tab-observations")
-            with tab_observations:
-                tab3 = observations.create_tab()
-
-            # Tab 4: Generate Results
-            tab_results = gr.Tab("4. Generate Results", id=3, elem_id="tab-results")
+            # Tab 1: Input (combined room + images + observations)
+            tab_input = gr.Tab("1. Input", id=0, elem_id="tab-input")
+            with tab_input:
+                tab1 = input_tab.create_tab()
+
+            # Tab 2: Results + Chat
+            tab_results = gr.Tab("2. Results", id=1, elem_id="tab-results")
             with tab_results:
-                tab4 = results.create_tab()
+                tab2 = results_tab.create_tab()
 
         # --- Event Handlers ---
 
@@ -132,13 +140,14 @@ def create_app() -> gr.Blocks:
        def handle_sample_load(scenario_id: str, current_session: SessionState):
             """Handle sample dropdown selection."""
             if not scenario_id:
-                # Empty selection, do nothing
                 return (
-                    current_session,  # session_state unchanged
-                    *room.load_from_session(current_session),  # room form values
-                    gr.update(),  # tabs unchanged
-                    "",  # clear status
-                    "",  # reset dropdown
+                    current_session,
+                    *input_tab.load_room_from_session(current_session),
+                    *input_tab.load_images_from_session(current_session),
+                    *input_tab.load_observations_from_session(current_session),
+                    gr.update(),
+                    "",
+                    "",
                 )
 
             # Load the sample
@@ -146,7 +155,9 @@ def create_app() -> gr.Blocks:
             if not new_session:
                 return (
                     current_session,
-                    *room.load_from_session(current_session),
+                    *input_tab.load_room_from_session(current_session),
+                    *input_tab.load_images_from_session(current_session),
+                    *input_tab.load_observations_from_session(current_session),
                     gr.update(),
                     '<span style="color: #c62828;">Error: Sample not found</span>',
                     "",
@@ -157,14 +168,18 @@ def create_app() -> gr.Blocks:
             name = scenario.name if scenario else scenario_id
 
             # Load form values from new session
-            form_values = room.load_from_session(new_session)
+            room_values = input_tab.load_room_from_session(new_session)
+            image_values = input_tab.load_images_from_session(new_session)
+            obs_values = input_tab.load_observations_from_session(new_session)
 
             return (
-                new_session,  # updated session_state
-                *form_values,  # room form values for Tab 1
-                gr.update(selected=0),  # switch to Tab 1 (Gradio 6.x syntax)
+                new_session,
+                *room_values,
+                *image_values,
+                *obs_values,
+                gr.update(selected=0),  # Stay on Input tab
                 f'<span style="color: #2e7d32;">Loaded sample: {name}</span>',
-                "",  # reset dropdown to empty
+                "",  # reset dropdown
             )
 
         sample_dropdown.change(
@@ -172,6 +187,7 @@ def create_app() -> gr.Blocks:
             inputs=[sample_dropdown, session_state],
             outputs=[
                 session_state,
+                # Room outputs (9)
                 tab1["room_name"],
                 tab1["room_length"],
                 tab1["room_width"],
@@ -181,15 +197,36 @@ def create_app() -> gr.Blocks:
                 tab1["room_volume"],
                 tab1["facility_classification"],
                 tab1["construction_era"],
+                # Image outputs (3)
+                tab1["images_gallery"],
+                tab1["image_count"],
+                tab1["resume_warning"],
+                # Observation outputs (15)
+                tab1["smoke_odor"],
+                tab1["odor_intensity"],
+                tab1["visible_soot"],
+                tab1["soot_description"],
+                tab1["large_char"],
+                tab1["char_density"],
+                tab1["ash_residue"],
+                tab1["ash_description"],
+                tab1["surface_discoloration"],
+                tab1["discoloration_description"],
+                tab1["dust_interference"],
+                tab1["dust_notes"],
+                tab1["wildfire_indicators"],
+                tab1["wildfire_notes"],
+                tab1["additional_notes"],
+                # Navigation
                 tabs,
                 sample_status,
                 sample_dropdown,
             ],
         )
 
-        # Tab 1: Room Assessment
+        # --- Tab 1: Input ---
 
-        # Save room data on field changes
+        # Room field changes - save to session and update calculations
         def on_room_field_change(
             session: SessionState,
             name: str,
@@ -201,16 +238,15 @@ def create_app() -> gr.Blocks:
             construction_era: str,
         ):
             """Save room data and update calculated values."""
-            updated_session = room.save_room_to_session(
+            updated_session = input_tab.save_room_to_session(
                 session, name, length, width, height_preset, height_custom,
                 facility_classification, construction_era
             )
-            floor_area, volume = room.update_calculated_values(
+            floor_area, volume = input_tab.update_calculated_values(
                 length, width, height_preset, height_custom
             )
             return updated_session, floor_area, volume
 
-        # Wire up all room input fields to save on change
         room_inputs = [
             session_state,
             tab1["room_name"],
@@ -238,164 +274,317 @@ def create_app() -> gr.Blocks:
             outputs=room_outputs,
         )
 
-        # Show/hide custom height input based on preset selection
+        # Show/hide custom height input
         tab1["room_height_preset"].change(
-            fn=room.on_height_preset_change,
+            fn=input_tab.on_height_preset_change,
            inputs=[tab1["room_height_preset"]],
             outputs=[tab1["room_height_custom"]],
         )
 
-        tab1["validate_btn"].click(
-            fn=room.validate_and_continue,
-            inputs=[session_state],
-            outputs=[
-                session_state,
-                tab1["validation_status"],
-                tabs,
-            ],
-        )
-
-        # Tab 2: Images
-        tab2["add_image_btn"].click(
-            fn=images.add_image,
+        # Image handling
+        tab1["add_image_btn"].click(
+            fn=input_tab.add_image,
             inputs=[
                 session_state,
-                tab2["image_upload"],
-                tab2["image_description"],
+                tab1["image_upload"],
+                tab1["image_description"],
             ],
             outputs=[
                 session_state,
-                tab2["images_gallery"],
-                tab2["validation_status"],
-                tab2["image_count"],
-                tab2["image_upload"],
-                tab2["image_description"],
+                tab1["images_gallery"],
+                tab1["validation_status"],
+                tab1["image_count"],
+                tab1["image_upload"],
+                tab1["image_description"],
             ],
         )
 
-        tab2["clear_upload_btn"].click(
+        tab1["clear_upload_btn"].click(
             fn=lambda: (None, ""),
             outputs=[
-                tab2["image_upload"],
-                tab2["image_description"],
+                tab1["image_upload"],
+                tab1["image_description"],
             ],
         )
 
-        tab2["remove_last_btn"].click(
-            fn=images.remove_last_image,
+        tab1["remove_last_btn"].click(
+            fn=input_tab.remove_last_image,
             inputs=[session_state],
             outputs=[
                 session_state,
-                tab2["images_gallery"],
-                tab2["validation_status"],
-                tab2["image_count"],
+                tab1["images_gallery"],
+                tab1["validation_status"],
+                tab1["image_count"],
             ],
         )
 
-        tab2["clear_all_btn"].click(
-            fn=images.clear_all_images,
+        tab1["clear_all_btn"].click(
+            fn=input_tab.clear_all_images,
             inputs=[session_state],
             outputs=[
                 session_state,
-                tab2["images_gallery"],
-                tab2["validation_status"],
-                tab2["image_count"],
+                tab1["images_gallery"],
+                tab1["validation_status"],
+                tab1["image_count"],
             ],
         )
 
-        tab2["validate_btn"].click(
-            fn=images.validate_and_continue,
-            inputs=[session_state],
-            outputs=[
-                session_state,
-                tab2["validation_status"],
-                tabs,
-            ],
-        )
-
-        tab2["back_btn"].click(
-            fn=lambda: gr.update(selected=0),
-            outputs=[tabs],
-        )
-
-        # Tab 3: Observations
-        tab3["validate_btn"].click(
-            fn=observations.validate_and_continue,
+        # Generate button - validate and switch to results
+        def on_generate_click(
+            session: SessionState,
+            smoke_odor: bool,
+            odor_intensity: str,
+            visible_soot: bool,
+            soot_description: str,
+            large_char: bool,
+            char_density: str,
+            ash_residue: bool,
+            ash_description: str,
+            surface_discoloration: bool,
+            discoloration_description: str,
+            dust_interference: bool,
+            dust_notes: str,
+            wildfire_indicators: bool,
+            wildfire_notes: str,
+            additional_notes: str,
+        ):
+            """Save observations and validate before generating."""
+            # Save observations first
+            session = input_tab.save_observations_to_session(
+                session,
+                smoke_odor, odor_intensity, visible_soot, soot_description,
+                large_char, char_density, ash_residue, ash_description,
+                surface_discoloration, discoloration_description,
+                dust_interference, dust_notes, wildfire_indicators,
+                wildfire_notes, additional_notes,
+            )
+            # Validate and potentially switch tabs
+            return input_tab.validate_and_generate(session)
+
+        tab1["generate_btn"].click(
+            fn=on_generate_click,
             inputs=[
                 session_state,
-                tab3["smoke_odor"],
-                tab3["odor_intensity"],
-                tab3["visible_soot"],
-                tab3["soot_description"],
-                tab3["large_char"],
-                tab3["char_density"],
-                tab3["ash_residue"],
-                tab3["ash_description"],
-                tab3["surface_discoloration"],
-                tab3["discoloration_description"],
-                tab3["dust_interference"],
-                tab3["dust_notes"],
-                tab3["wildfire_indicators"],
-                tab3["wildfire_notes"],
-                tab3["additional_notes"],
+                tab1["smoke_odor"],
+                tab1["odor_intensity"],
+                tab1["visible_soot"],
+                tab1["soot_description"],
+                tab1["large_char"],
+                tab1["char_density"],
+                tab1["ash_residue"],
+                tab1["ash_description"],
+                tab1["surface_discoloration"],
+                tab1["discoloration_description"],
+                tab1["dust_interference"],
+                tab1["dust_notes"],
+                tab1["wildfire_indicators"],
+                tab1["wildfire_notes"],
+                tab1["additional_notes"],
             ],
             outputs=[
                 session_state,
-                tab3["validation_status"],
+                tab1["validation_status"],
                 tabs,
             ],
         )
 
-        tab3["back_btn"].click(
-            fn=lambda: gr.update(selected=1),
-            outputs=[tabs],
-        )
-
-        # Tab 4: Generate Results
-        tab4["generate_btn"].click(
-            fn=results.generate_assessment,
+        # --- Tab 2: Results + Chat ---
+
+        # Generate assessment
+        tab2["generate_btn"].click(
+            fn=results_tab.generate_assessment,
             inputs=[session_state],
             outputs=[
                 session_state,
-                tab4["processing_status"],
-                tab4["progress_html"],
-                tab4["annotated_gallery"],
-                tab4["stats_output"],
-                tab4["sow_output"],
-                tab4["download_md"],
-                tab4["download_pdf"],
+                tab2["processing_status"],
+                tab2["progress_html"],
+                tab2["annotated_gallery"],
+                tab2["stats_output"],
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+                tab2["chatbot"],
             ],
         )
 
-        tab4["regenerate_btn"].click(
-            fn=results.generate_assessment,
+        tab2["regenerate_btn"].click(
+            fn=results_tab.generate_assessment,
             inputs=[session_state],
             outputs=[
                 session_state,
-                tab4["processing_status"],
-                tab4["progress_html"],
-                tab4["annotated_gallery"],
-                tab4["stats_output"],
-                tab4["sow_output"],
-                tab4["download_md"],
-                tab4["download_pdf"],
+                tab2["processing_status"],
+                tab2["progress_html"],
+                tab2["annotated_gallery"],
+                tab2["stats_output"],
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+                tab2["chatbot"],
             ],
         )
 
-        tab4["back_btn"].click(
-            fn=lambda: gr.update(selected=2),
+        # Back to input
+        tab2["back_btn"].click(
+            fn=lambda: gr.update(selected=0),
             outputs=[tabs],
         )
 
-        # --- Individual Tab Select Handlers ---
-        # Using Tab.select instead of Tabs.select because Tabs.select doesn't fire in Gradio 6.x
-        # See: https://github.com/gradio-app/gradio/issues/7189
-
-        # Tab 1 (Room): Load room form fields when selected
-        tab_room.select(
-            fn=room.load_from_session,
+        # Reset document
+        def on_reset_document(session: SessionState):
+            """Reset document to original and regenerate downloads."""
+            session, doc = results_tab.reset_document(session)
+            md_path, pdf_path = results_tab.regenerate_downloads(session)
+            return session, doc, md_path, pdf_path
+
+        tab2["reset_doc_btn"].click(
+            fn=on_reset_document,
+            inputs=[session_state],
+            outputs=[
+                session_state,
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+            ],
+        )
+
+        # Chat functionality
+        def handle_chat_message(
+            message: str,
+            session: SessionState,
+            chat_history: list[dict],
+        ):
+            """Process chat message and update UI."""
+            if not message.strip():
+                return session, chat_history, "", session.generated_document or "", None, None
+
+            response, edit, updated_history = chat_handler.process_message(
+                message, session, chat_history
+            )
+
+            # Apply document edit if present
+            if edit and session.generated_document:
+                session.generated_document = chat_handler.apply_document_edit(
+                    session.generated_document, edit
+                )
+                session.update_timestamp()
+                # Regenerate downloads
+                md_path, pdf_path = results_tab.regenerate_downloads(session)
+            else:
+                md_path, pdf_path = None, None
+
+            # Store chat history in session
+            session.chat_history = updated_history
+
+            return (
+                session,
+                updated_history,
+                "",  # Clear input
+                session.generated_document or "",
+                md_path,
+                pdf_path,
+            )
+
+        # Chat send button
+        tab2["chat_send_btn"].click(
+            fn=handle_chat_message,
+            inputs=[tab2["chat_input"], session_state, tab2["chatbot"]],
+            outputs=[
+                session_state,
+                tab2["chatbot"],
+                tab2["chat_input"],
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+            ],
+        )
+
+        # Chat input enter key
+        tab2["chat_input"].submit(
+            fn=handle_chat_message,
+            inputs=[tab2["chat_input"], session_state, tab2["chatbot"]],
+            outputs=[
+                session_state,
+                tab2["chatbot"],
+                tab2["chat_input"],
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+            ],
+        )
+
+        # Quick action buttons
+        def send_quick_action(action_key: str, session: SessionState, chat_history: list[dict]):
+            """Send a quick action message."""
+            message = get_quick_action_message(action_key)
+            return handle_chat_message(message, session, chat_history)
+
+        tab2["quick_explain_zones"].click(
+            fn=lambda s, h: send_quick_action("explain_zones", s, h),
+            inputs=[session_state, tab2["chatbot"]],
+            outputs=[
+                session_state,
+                tab2["chatbot"],
+                tab2["chat_input"],
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+            ],
+        )
+
+        tab2["quick_explain_materials"].click(
+            fn=lambda s, h: send_quick_action("explain_materials", s, h),
+            inputs=[session_state, tab2["chatbot"]],
+            outputs=[
+                session_state,
+                tab2["chatbot"],
+                tab2["chat_input"],
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+            ],
+        )
+
+        tab2["quick_sampling"].click(
+            fn=lambda s, h: send_quick_action("explain_sampling", s, h),
+            inputs=[session_state, tab2["chatbot"]],
+            outputs=[
+                session_state,
+                tab2["chatbot"],
+                tab2["chat_input"],
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+            ],
+        )
+
+        tab2["quick_add_note"].click(
+            fn=lambda s, h: send_quick_action("add_note", s, h),
+            inputs=[session_state, tab2["chatbot"]],
+            outputs=[
+                session_state,
+                tab2["chatbot"],
+                tab2["chat_input"],
+                tab2["sow_output"],
+                tab2["download_md"],
+                tab2["download_pdf"],
+            ],
+        )
+
+        # --- Tab Select Handlers ---
+
+        # Load data when switching to Input tab
+        def load_input_tab(session: SessionState):
+            """Load all input data when tab is selected."""
+            room_values = input_tab.load_room_from_session(session)
+            image_values = input_tab.load_images_from_session(session)
+            obs_values = input_tab.load_observations_from_session(session)
+            return (*room_values, *image_values, *obs_values)
+
+        tab_input.select(
+            fn=load_input_tab,
             inputs=[session_state],
             outputs=[
+                # Room (9)
                 tab1["room_name"],
                 tab1["room_length"],
                 tab1["room_width"],
@@ -405,55 +594,45 @@ def create_app() -> gr.Blocks:
                 tab1["room_volume"],
                 tab1["facility_classification"],
                 tab1["construction_era"],
+                # Images (3)
+                tab1["images_gallery"],
+                tab1["image_count"],
+                tab1["resume_warning"],
+                # Observations (15)
+                tab1["smoke_odor"],
+                tab1["odor_intensity"],
+                tab1["visible_soot"],
+                tab1["soot_description"],
+                tab1["large_char"],
+                tab1["char_density"],
+                tab1["ash_residue"],
+                tab1["ash_description"],
+                tab1["surface_discoloration"],
+                tab1["discoloration_description"],
+                tab1["dust_interference"],
+                tab1["dust_notes"],
+                tab1["wildfire_indicators"],
+                tab1["wildfire_notes"],
+                tab1["additional_notes"],
             ],
         )
 
-        # Tab 2 (Images): Load gallery and count when selected
-        def load_images_tab(session: SessionState):
-            """Load all images tab data."""
-            gallery, count, warning = images.load_from_session(session)
-            return gallery, count, warning
-
-        tab_images.select(
-            fn=load_images_tab,
-            inputs=[session_state],
-            outputs=[
-                tab2["images_gallery"],
-                tab2["image_count"],
-                tab2["resume_warning"],
-            ],
-        )
-
-        # Tab 3 (Observations): Load observation form fields when selected
-        tab_observations.select(
-            fn=observations.load_form_from_session,
-            inputs=[session_state],
-            outputs=[
-                tab3["smoke_odor"],
-                tab3["odor_intensity"],
-                tab3["visible_soot"],
-                tab3["soot_description"],
-                tab3["large_char"],
-                tab3["char_density"],
-                tab3["ash_residue"],
-                tab3["ash_description"],
-                tab3["surface_discoloration"],
-                tab3["discoloration_description"],
-                tab3["dust_interference"],
-                tab3["dust_notes"],
-                tab3["wildfire_indicators"],
-                tab3["wildfire_notes"],
-                tab3["additional_notes"],
-            ],
-        )
-
-        # Tab 4 (Results): Check preflight status when selected
-        tab_results.select(
-            fn=results.check_preflight,
-            inputs=[session_state],
-            outputs=[tab4["preflight_status"]],
-        )
-
+        # Load data when switching to Results tab
+        def load_results_tab(session: SessionState):
+            """Load results data when tab is selected."""
+            doc = session.generated_document or "*Generate an assessment to see results here.*"
+            chat = session.chat_history or []
+            return doc, chat
+
+        tab_results.select(
+            fn=load_results_tab,
+            inputs=[session_state],
+            outputs=[
+                tab2["sow_output"],
+                tab2["chatbot"],
+            ],
+        )
+
         return app
 
 
@@ -469,7 +648,6 @@ def main():
         server_name=settings.server_host,
         server_port=settings.server_port,
         share=False,
-        head=get_head_html(KEYBOARD_JS),  # Inject localStorage + keyboard shortcuts
     )
 
 
config/settings.py CHANGED
@@ -14,7 +14,8 @@ class Settings(BaseSettings):
     log_level: Literal["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"] = "INFO"
 
     # Model loading - set MOCK_MODELS=true for local dev on RTX 4090
-    mock_models: bool = True
+    # Default is False for production (HuggingFace Spaces)
+    mock_models: bool = False
 
     # Model paths (for production on HuggingFace Spaces)
     # Single 30B-A3B MoE model with FP8 quantization via vLLM (official, reasoning-enhanced)
pipeline/__init__.py CHANGED
@@ -3,15 +3,12 @@
 This module provides the core processing pipeline for generating
 fire damage assessment reports using AI vision analysis and
 RAG-enhanced methodology lookup.
-"""
 
-from .calculations import FDAMCalculator
-from .dispositions import DispositionEngine
-from .generator import DocumentGenerator
-from .main import FDAMPipeline, PipelineResult
-from .pdf_generator import PDFGenerator, PDFResult, generate_sow_pdf
+Lazy imports to avoid chromadb dependency at module load time for local development.
+"""
 
 __all__ = [
+    "ChatHandler",
     "FDAMCalculator",
     "DispositionEngine",
     "DocumentGenerator",
@@ -21,3 +18,35 @@ __all__ = [
     "PDFResult",
     "generate_sow_pdf",
 ]
+
+
+def __getattr__(name):
+    """Lazy import pipeline modules only when accessed."""
+    if name == "FDAMCalculator":
+        from .calculations import FDAMCalculator
+        return FDAMCalculator
+    elif name == "ChatHandler":
+        from .chat import ChatHandler
+        return ChatHandler
+    elif name == "DispositionEngine":
+        from .dispositions import DispositionEngine
+        return DispositionEngine
+    elif name == "DocumentGenerator":
+        from .generator import DocumentGenerator
+        return DocumentGenerator
+    elif name == "FDAMPipeline":
+        from .main import FDAMPipeline
+        return FDAMPipeline
+    elif name == "PipelineResult":
+        from .main import PipelineResult
+        return PipelineResult
+    elif name == "PDFGenerator":
+        from .pdf_generator import PDFGenerator
+        return PDFGenerator
+    elif name == "PDFResult":
+        from .pdf_generator import PDFResult
+        return PDFResult
+    elif name == "generate_sow_pdf":
+        from .pdf_generator import generate_sow_pdf
+        return generate_sow_pdf
+    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
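The if/elif chain above can also be expressed as a name-to-module mapping. A standalone sketch of the same PEP 562 idea (the `make_lazy_getattr` helper is illustrative, not part of this repository; stdlib modules stand in for the package's submodules):

```python
import importlib


def make_lazy_getattr(mapping, package=None):
    """Build a module-level __getattr__ (PEP 562) that imports a
    target module only when one of its names is first accessed."""
    def __getattr__(name):
        if name in mapping:
            module = importlib.import_module(mapping[name], package)
            return getattr(module, name)
        raise AttributeError(f"module has no attribute {name!r}")
    return __getattr__


# Stdlib stand-ins for the real .calculations / .chat submodules:
lazy = make_lazy_getattr({"join": "os.path", "dumps": "json"})
```

In a package's `__init__.py`, `__getattr__ = make_lazy_getattr({...}, __package__)` gives the same deferred-import behavior with one dict instead of a branch per symbol.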
pipeline/calculations.py CHANGED
@@ -84,7 +84,7 @@ METALS_THRESHOLDS = {
         facility_type="Operational",
     ),
     "public-childcare": MetalsThresholds(
-        lead_ug_100cm2=4.3,  # EPA/HUD October 2024 for floors
+        lead_ug_100cm2=0.54,  # EPA/HUD October 2024: 0.54 µg/100cm² floors per FDAM §1.4
         cadmium_ug_100cm2=3.3,  # Use non-operational as baseline
         arsenic_ug_100cm2=6.7,
         chromium_vi_ug_100cm2=3.3,
@@ -177,7 +177,7 @@ class FDAMCalculator:
         """
         notes = []
 
-        # Base sample density by area size
+        # Base sample density by area size per FDAM §2.3
         if total_area_sf < 5000:
             tape_min, tape_max = 3, 5
             wipe_min, wipe_max = 3, 5
@@ -186,11 +186,15 @@
             tape_min, tape_max = 5, 10
             wipe_min, wipe_max = 5, 10
             notes.append("Medium area (5,000-25,000 SF): moderate sampling density")
-        else:
-            # Scale for larger areas
-            tape_min, tape_max = 10, 15
+        elif total_area_sf <= 100000:
+            tape_min, tape_max = 10, 20
             wipe_min, wipe_max = 10, 15
-            notes.append("Large area (>25,000 SF): enhanced sampling density")
+            notes.append("Large area (25,000-100,000 SF): enhanced sampling density")
+        else:
+            # > 100,000 SF per FDAM §2.3
+            tape_min, tape_max = 20, 30
+            wipe_min, wipe_max = 15, 25
+            notes.append("Very large area (>100,000 SF): maximum sampling density")
 
         # Multiply by surface types
         tape_min *= surface_types_count
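The tier boundaries and the per-surface-type scaling can be sanity-checked with a small standalone sketch (the function name, return shape, and the exact mid-tier boundary are illustrative; the tape-lift ranges come from the tiers above):

```python
def sample_range(total_area_sf, surface_types_count=1):
    """Mirror of the tiered tape-lift density above: pick the base
    range for the area tier, then scale by number of surface types."""
    if total_area_sf < 5_000:
        tape_min, tape_max = 3, 5
    elif total_area_sf < 25_000:  # boundary handling at exactly 25,000 SF may differ
        tape_min, tape_max = 5, 10
    elif total_area_sf <= 100_000:
        tape_min, tape_max = 10, 20
    else:
        tape_min, tape_max = 20, 30
    return tape_min * surface_types_count, tape_max * surface_types_count
```

For example, a 30,000 SF area with two surface types lands in the 10-20 tier and doubles to a 20-40 sample range.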
pipeline/chat.py ADDED
@@ -0,0 +1,376 @@
+"""Chat handler for Q&A and document modifications.
+
+Uses the vision model (Qwen3-VL-30B-A3B-Thinking-FP8) in text-only mode
+for chat interactions. The model handles both Q&A about assessment results
+and document modification requests.
+"""
+
+import json
+import logging
+import re
+from typing import Optional
+
+from config.settings import settings
+
+logger = logging.getLogger(__name__)
+
+
+# Chat system prompt
+CHAT_SYSTEM_PROMPT = """You are an expert industrial hygienist assistant helping with fire damage assessment. You have access to the assessment results and can answer questions or modify the generated document.
+
+## Your Capabilities
+1. **Answer Questions**: Explain zone classifications, material detections, sampling recommendations, and methodology.
+2. **Document Modifications**: Add notes, update sections, or make changes to the generated Scope of Work document.
+
+## When Modifying Documents
+If the user requests a document change, include a modification command in your response using this format:
+
+<document_edit>
+{
+  "action": "add_note" | "add_section" | "append_to_section",
+  "section": "Section Name",
+  "content": "Content to add..."
+}
+</document_edit>
+
+Available sections:
+- Additional Notes (for general notes)
+- Room Information
+- Scope Summary
+- AI Vision Analysis Summary
+- Field Observations
+- Material Dispositions
+- Cleaning Specifications
+- Air Filtration Requirements
+- Sampling Plan
+- Regulatory References
+- Threshold Documentation
+- Disclaimer
+
+## Guidelines
+- Be concise and professional
+- Reference specific assessment data when answering questions
+- For modifications, only change what the user requests
+- Always explain changes you're making"""
+
+
+class ChatHandler:
+    """Handles chat interactions for assessment Q&A and document modifications."""
+
+    def __init__(self, model_stack=None):
+        """Initialize chat handler.
+
+        Args:
+            model_stack: Model stack (RealModelStack or MockModelStack).
+                If None, will be loaded from get_models().
+        """
+        self.model_stack = model_stack
+
+    def process_message(
+        self,
+        user_message: str,
+        session_state,
+        chat_history: list[dict],
+    ) -> tuple[str, Optional[dict], list[dict]]:
+        """Process a chat message and return response.
+
+        Args:
+            user_message: The user's message
+            session_state: Current SessionState with assessment data
+            chat_history: Previous chat messages in Gradio messages format
+
+        Returns:
+            Tuple of (response_text, document_edit_or_none, updated_chat_history)
+        """
+        # Lazy load model stack if not provided
+        if self.model_stack is None:
+            from models.loader import get_models
+            self.model_stack = get_models()
+
+        # Build context from session
+        context = self._build_context(session_state)
+
+        # Generate response
+        if settings.mock_models:
+            response = self._generate_mock_response(user_message, context)
+        else:
+            response = self._generate_real_response(
+                user_message, context, chat_history
+            )
+
+        # Parse for document edits
+        document_edit = self._parse_document_edit(response)
+
+        # Clean response (remove document_edit tags for display)
+        display_response = re.sub(
+            r'<document_edit>.*?</document_edit>',
+            '',
+            response,
+            flags=re.DOTALL
+        ).strip()
+
+        # Update chat history
+        updated_history = chat_history.copy()
+        updated_history.append({"role": "user", "content": user_message})
+        updated_history.append({"role": "assistant", "content": display_response})
+
+        return display_response, document_edit, updated_history
+
+    def _build_context(self, session_state) -> str:
+        """Build context string from session state for chat."""
+        context_parts = []
+
+        # Room info
+        room = session_state.room
+        context_parts.append(f"Room: {room.name}")
+        context_parts.append(f"Dimensions: {room.length_ft}' x {room.width_ft}' x {room.ceiling_height_ft}'")
+        context_parts.append(f"Facility: {room.facility_classification}")
+        context_parts.append(f"Era: {room.construction_era}")
+
+        # Images summary
+        context_parts.append(f"Images analyzed: {len(session_state.images)}")
+
+        # Pipeline results if available
+        if session_state.pipeline_result_json:
+            try:
+                result = json.loads(session_state.pipeline_result_json)
+                if result.get("vision_results"):
+                    context_parts.append("\nVision Analysis Summary:")
+                    for img_id, analysis in result["vision_results"].items():
+                        zone = analysis.get("zone", {}).get("classification", "unknown")
+                        condition = analysis.get("condition", {}).get("level", "unknown")
+                        context_parts.append(f"  - {img_id}: zone={zone}, condition={condition}")
+
+                if result.get("dispositions"):
+                    context_parts.append(f"\nDispositions: {len(result['dispositions'])} materials analyzed")
+
+            except json.JSONDecodeError:
+                pass
+
+        return "\n".join(context_parts)
+
+    def _generate_real_response(
+        self,
+        user_message: str,
+        context: str,
+        chat_history: list[dict],
+    ) -> str:
+        """Generate response using real vision model in text-only mode."""
+        try:
+            # Get vision model components
+            vision = self.model_stack.vision
+            model = vision.model
+            processor = vision.processor
+            sampling_params = vision.sampling_params
+
+            # Build messages (text-only, no image)
+            messages = [
+                {"role": "system", "content": CHAT_SYSTEM_PROMPT},
+            ]
+
+            # Add context as first user message
+            messages.append({
+                "role": "user",
+                "content": f"Assessment Context:\n{context}"
+            })
+            messages.append({
+                "role": "assistant",
+                "content": "I understand the assessment context. How can I help you?"
+            })
+
+            # Add chat history
+            for msg in chat_history:
+                messages.append(msg)
+
+            # Add current user message
+            messages.append({"role": "user", "content": user_message})
+
+            # Apply chat template
+            prompt = processor.apply_chat_template(
+                messages,
+                tokenize=False,
+                add_generation_prompt=True,
+            )
+
+            # Generate (text-only: no multi_modal_data)
+            outputs = model.generate(
+                prompts=[prompt],  # Just prompt string, no image
+                sampling_params=sampling_params,
+            )
+
+            return outputs[0].outputs[0].text
+
+        except Exception as e:
+            logger.error(f"Chat generation failed: {e}")
+            return f"I apologize, but I encountered an error processing your request: {str(e)}"
+
+    def _generate_mock_response(self, user_message: str, context: str) -> str:
+        """Generate mock response for local development."""
+        user_lower = user_message.lower()
+
+        # Pattern matching for common questions
+        if "zone" in user_lower or "classification" in user_lower:
+            return """Zone classifications per FDAM §4.1 (IICRC/RIA/CIRI Technical Guide):
+
+- **Burn Zone**: Direct fire involvement with structural char and complete combustion
+- **Near-Field**: Adjacent to burn zone, heavy smoke/heat exposure, visible contamination
+- **Far-Field**: Smoke migration only, light deposits, no structural damage
+
+The AI analyzed each image to determine which zone best describes the visible conditions based on these criteria."""
+
+        if "material" in user_lower:
+            return """Materials are categorized by porosity, which affects cleaning requirements:
+
+- **Non-porous**: Steel, concrete, glass, CMU - can typically be cleaned
+- **Semi-porous**: Painted drywall, sealed wood - may require evaluation
+- **Porous**: Carpet, insulation, acoustic tile - often require removal
+
+The assessment identified materials visible in each image and assigned dispositions based on contamination level."""
+
+        if "sampling" in user_lower or "sample" in user_lower:
+            return """The sampling plan follows FDAM §2.3 requirements:
+
+- **Tape lifts**: For particle identification via PLM (polarized light microscopy)
+- **Surface wipes**: For metals quantification per NIOSH Method 9100 / BNL SOP IH75190
+
+**Sample Density per FDAM §2.3:**
+- <5,000 SF: 3-5 samples per surface type
+- 5,000-25,000 SF: 5-10 samples per surface type
+- 25,000-100,000 SF: 10-20 samples per surface type
+- >100,000 SF: 20+ samples per surface type
+
+**Ceiling Deck Enhancement (FDAM §4.5):** 1 sample per 2,500 SF due to 82.4% pass rate vs 95%+ for other surfaces."""
+
+        if "add" in user_lower and "note" in user_lower:
+            # Extract what they want to add
+            return """I'll add that note to the document.
+
+<document_edit>
+{
+  "action": "add_note",
+  "section": "Additional Notes",
+  "content": "Note added per assessor request during review."
+}
+</document_edit>
+
+The note has been added to the Additional Notes section."""
+
+        if "explain" in user_lower or "why" in user_lower:
+            return """Based on the visual analysis and FDAM methodology:
+
+The zone and condition classifications are determined by analyzing:
+1. Distance from fire origin indicators
+2. Visible contamination patterns
+3. Structural damage presence
+4. Material surface conditions
+
+Each factor contributes to the overall assessment with confidence scores reflecting certainty."""
+
+        # Default response
+        return """I can help you understand the assessment results or make changes to the document.
+
+**Questions I can answer:**
+- Zone classification explanations
+- Material detection details
+- Sampling plan rationale
+- Methodology references
+
+**Changes I can make:**
+- Add notes to any section
+- Update specific content
+- Add clarifications
+
+What would you like to know or change?"""
+
+    def _parse_document_edit(self, response: str) -> Optional[dict]:
+        """Parse document edit command from response."""
+        match = re.search(
+            r'<document_edit>\s*(\{.*?\})\s*</document_edit>',
+            response,
+            re.DOTALL
+        )
+        if match:
+            try:
+                return json.loads(match.group(1))
+            except json.JSONDecodeError:
+                logger.warning("Failed to parse document edit JSON")
+                return None
+        return None
+
+    def apply_document_edit(
+        self,
+        document: str,
+        edit: dict,
+    ) -> str:
+        """Apply a document edit to the current document.
+
+        Args:
+            document: Current markdown document
+            edit: Edit command with action, section, content
+
+        Returns:
+            Modified document
+        """
+        action = edit.get("action", "")
+        section = edit.get("section", "")
+        content = edit.get("content", "")
+
+        if not content:
+            return document
+
+        if action == "add_note":
+            # Add note to Additional Notes section, or create it
+            if "## Additional Notes" in document:
+                # Append to existing section
+                document = document.replace(
+                    "## Additional Notes",
+                    f"## Additional Notes\n\n{content}"
+                )
+            else:
+                # Add before Disclaimer
+                if "## Disclaimer" in document:
+                    document = document.replace(
+                        "## Disclaimer",
+                        f"## Additional Notes\n\n{content}\n\n---\n\n## Disclaimer"
+                    )
+                else:
+                    # Add at end
+                    document += f"\n\n---\n\n## Additional Notes\n\n{content}"
+
+        elif action == "add_section":
+            # Add new section before Disclaimer
+            new_section = f"## {section}\n\n{content}"
+            if "## Disclaimer" in document:
+                document = document.replace(
+                    "## Disclaimer",
+                    f"{new_section}\n\n---\n\n## Disclaimer"
+                )
+            else:
+                document += f"\n\n---\n\n{new_section}"
+
+        elif action == "append_to_section":
+            # Find section and append content
+            section_pattern = rf"(## {re.escape(section)}.*?)(\n---|\Z)"
+            match = re.search(section_pattern, document, re.DOTALL)
+            if match:
+                section_content = match.group(1)
+                document = document.replace(
+                    section_content,
+                    f"{section_content}\n\n{content}"
+                )
+
+        return document
+
+
+# Quick action messages
+QUICK_ACTIONS = {
+    "explain_zones": "Explain how the zone classifications were determined for this assessment.",
+    "explain_materials": "What materials were detected and why were they categorized this way?",
+    "explain_sampling": "Explain the sampling plan recommendations.",
+    "add_note": "I need to add a note to the document. Please tell me what note you'd like to add.",
+}
+
+
+def get_quick_action_message(action_key: str) -> str:
+    """Get the message for a quick action button."""
+    return QUICK_ACTIONS.get(action_key, "")
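A quick round trip through the edit protocol, as a standalone sketch (`parse_edit` re-implements the `_parse_document_edit` regex so it runs without the class; the reply text is an invented example):

```python
import json
import re


def parse_edit(response):
    """Extract the JSON payload from a <document_edit> block,
    as the handler above does."""
    match = re.search(r'<document_edit>\s*(\{.*?\})\s*</document_edit>',
                      response, re.DOTALL)
    return json.loads(match.group(1)) if match else None


reply = """I'll add that note.

<document_edit>
{"action": "add_note", "section": "Additional Notes", "content": "Re-test after cleaning."}
</document_edit>
"""

edit = parse_edit(reply)
# Strip the block for display, as process_message() does:
display = re.sub(r'<document_edit>.*?</document_edit>', '', reply,
                 flags=re.DOTALL).strip()
```

The same regex serves both purposes: the capturing group feeds `json.loads`, and the full match is what gets removed from the chat transcript shown to the user.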
pipeline/dispositions.py CHANGED
@@ -6,9 +6,11 @@ condition level, and RAG-retrieved methodology context.
 
 import logging
 from dataclasses import dataclass, field
-from typing import Literal, Optional
+from typing import Literal, Optional, TYPE_CHECKING
 
-from rag import FDAMRetriever, ChromaVectorStore
+# Type hints only - actual import deferred to retriever property
+if TYPE_CHECKING:
+    from rag import FDAMRetriever, ChromaVectorStore
 
 logger = logging.getLogger(__name__)
 
@@ -99,7 +101,7 @@ class SurfaceDisposition:
 class DispositionEngine:
     """Determines cleaning dispositions using FDAM methodology and RAG."""
 
-    def __init__(self, retriever: Optional[FDAMRetriever] = None):
+    def __init__(self, retriever: Optional["FDAMRetriever"] = None):
         """Initialize disposition engine.
 
         Args:
@@ -108,9 +110,11 @@ class DispositionEngine:
         self._retriever = retriever
 
     @property
-    def retriever(self) -> FDAMRetriever:
+    def retriever(self) -> "FDAMRetriever":
        """Get or create RAG retriever."""
        if self._retriever is None:
+            # Lazy import to avoid chromadb dependency at module load
+            from rag import FDAMRetriever, ChromaVectorStore
            try:
                vs = ChromaVectorStore(persist_directory="chroma_db")
                self._retriever = FDAMRetriever(vectorstore=vs)
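The same `TYPE_CHECKING` + deferred-import pattern recurs in `generator.py`, `main.py`, and `retriever.py` below; reduced to a runnable minimum (`heavy_pkg` and `HeavyRetriever` are hypothetical stand-ins for `rag` and `FDAMRetriever`):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by static type checkers, never executed at runtime,
    # so the heavy dependency costs nothing at module load.
    from heavy_pkg import HeavyRetriever  # hypothetical


class Engine:
    def __init__(self, retriever: "HeavyRetriever | None" = None):
        self._retriever = retriever  # real import deferred to first use

    @property
    def retriever(self) -> "HeavyRetriever":
        if self._retriever is None:
            from heavy_pkg import HeavyRetriever  # hypothetical; imported lazily
            self._retriever = HeavyRetriever()
        return self._retriever
```

Annotations stay quoted (or use `from __future__ import annotations`) so they are never evaluated; the import inside the property runs only on the first access that actually needs the dependency.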
pipeline/generator.py CHANGED
@@ -7,12 +7,16 @@ with RAG-enhanced content from the FDAM knowledge base.
 
 import logging
 from dataclasses import dataclass
 from datetime import datetime
-from typing import Optional
+from typing import Optional, TYPE_CHECKING
 
 from ui.state import SessionState
 
 logger = logging.getLogger(__name__)
-from rag import FDAMRetriever, ChromaVectorStore
+
+# Type hints only - actual import deferred to retriever property
+if TYPE_CHECKING:
+    from rag import FDAMRetriever, ChromaVectorStore
+
 from .calculations import FDAMCalculator, AirFiltrationResult, SampleDensityResult, RegulatoryFlags
 from .dispositions import DispositionEngine, SurfaceDisposition
 
@@ -35,7 +39,7 @@ class DocumentGenerator:
         self,
         calculator: Optional[FDAMCalculator] = None,
         disposition_engine: Optional[DispositionEngine] = None,
-        retriever: Optional[FDAMRetriever] = None,
+        retriever: Optional["FDAMRetriever"] = None,
     ):
         """Initialize document generator.
 
@@ -49,9 +53,11 @@ class DocumentGenerator:
         self._retriever = retriever
 
     @property
-    def retriever(self) -> FDAMRetriever:
+    def retriever(self) -> "FDAMRetriever":
         """Get or create RAG retriever."""
         if self._retriever is None:
+            # Lazy import to avoid chromadb dependency at module load
+            from rag import FDAMRetriever, ChromaVectorStore
             try:
                 vs = ChromaVectorStore(persist_directory="chroma_db")
                 self._retriever = FDAMRetriever(vectorstore=vs)
pipeline/main.py CHANGED
@@ -13,7 +13,7 @@ import logging
 import time
 from dataclasses import dataclass, field
 from datetime import datetime
-from typing import Callable, Optional
+from typing import Callable, Optional, TYPE_CHECKING
 from PIL import Image
 import io
 
@@ -22,7 +22,10 @@ from ui.components import image_store
 from models.loader import get_models
 
 logger = logging.getLogger(__name__)
-from rag import FDAMRetriever, ChromaVectorStore
+
+# Type hints only - actual import deferred to retriever property
+if TYPE_CHECKING:
+    from rag import FDAMRetriever, ChromaVectorStore
 
 from .calculations import FDAMCalculator
 from .dispositions import DispositionEngine, SurfaceDisposition
@@ -90,7 +93,7 @@ class FDAMPipeline:
         calculator: Optional[FDAMCalculator] = None,
         disposition_engine: Optional[DispositionEngine] = None,
         generator: Optional[DocumentGenerator] = None,
-        retriever: Optional[FDAMRetriever] = None,
+        retriever: Optional["FDAMRetriever"] = None,
     ):
         """Initialize pipeline with optional component overrides.
 
@@ -112,9 +115,11 @@ class FDAMPipeline:
         )
 
     @property
-    def retriever(self) -> FDAMRetriever:
+    def retriever(self) -> "FDAMRetriever":
         """Get or create RAG retriever."""
         if self._retriever is None:
+            # Lazy import to avoid chromadb dependency at module load
+            from rag import FDAMRetriever, ChromaVectorStore
             try:
                 vs = ChromaVectorStore(persist_directory="chroma_db")
                 self._retriever = FDAMRetriever(vectorstore=vs)
rag/__init__.py CHANGED
@@ -2,11 +2,9 @@
 
 This module provides document chunking, vector storage, and retrieval
 for the FDAM knowledge base.
-"""
 
-from .chunker import SemanticChunker, Chunk
-from .vectorstore import ChromaVectorStore
-from .retriever import FDAMRetriever
+Lazy imports to avoid chromadb dependency at module load time for local development.
+"""
 
 __all__ = [
     "SemanticChunker",
@@ -14,3 +12,20 @@ __all__ = [
     "ChromaVectorStore",
     "FDAMRetriever",
 ]
+
+
+def __getattr__(name):
+    """Lazy import RAG modules only when accessed."""
+    if name == "SemanticChunker":
+        from .chunker import SemanticChunker
+        return SemanticChunker
+    elif name == "Chunk":
+        from .chunker import Chunk
+        return Chunk
+    elif name == "ChromaVectorStore":
+        from .vectorstore import ChromaVectorStore
+        return ChromaVectorStore
+    elif name == "FDAMRetriever":
+        from .retriever import FDAMRetriever
+        return FDAMRetriever
+    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
rag/retriever.py CHANGED
@@ -8,11 +8,14 @@ Implements tiered retrieval:
 
 import logging
 import time
-from typing import Optional
+from typing import Optional, TYPE_CHECKING
 from dataclasses import dataclass
 
 from config.settings import settings
-from .vectorstore import ChromaVectorStore
+
+# Type hints only - actual import deferred to __init__
+if TYPE_CHECKING:
+    from .vectorstore import ChromaVectorStore
 
 logger = logging.getLogger(__name__)
 
@@ -141,7 +144,7 @@ class FDAMRetriever:
 
     def __init__(
         self,
-        vectorstore: Optional[ChromaVectorStore] = None,
+        vectorstore: Optional["ChromaVectorStore"] = None,
         reranker=None,
        use_reranking: bool = True,
     ):
@@ -153,7 +156,11 @@
            reranker: Reranker instance. If None, uses appropriate default.
            use_reranking: Whether to apply reranking step.
        """
-        self.vectorstore = vectorstore or ChromaVectorStore()
+        if vectorstore is None:
+            # Lazy import to avoid chromadb dependency at module load
+            from .vectorstore import ChromaVectorStore
+            vectorstore = ChromaVectorStore()
+        self.vectorstore = vectorstore
        self.reranker = reranker if reranker is not None else get_reranker()
        self.use_reranking = use_reranking
ui/state.py CHANGED
@@ -77,8 +77,7 @@ class SessionState(BaseModel):
77
  This model is serialized to localStorage for persistence.
78
  Images are stored separately and referenced by ID.
79
 
80
- MVP Simplification: Single room, no project-level fields.
81
- Tabs: 1=Room, 2=Images, 3=Observations, 4=Generate
82
  """
83
 
84
  # Session metadata
@@ -87,10 +86,8 @@ class SessionState(BaseModel):
87
  updated_at: str = Field(default_factory=lambda: datetime.now().isoformat())
88
  name: str = "" # Display name for history list
89
 
90
- # Tab completion status (3 input tabs)
91
- tab1_complete: bool = False # Room Assessment
92
- tab2_complete: bool = False # Images
93
- tab3_complete: bool = False # Observations
94
 
95
  # Form data - single room (not list)
96
  room: RoomFormData = Field(default_factory=RoomFormData)
@@ -101,6 +98,16 @@ class SessionState(BaseModel):
101
  has_results: bool = False
102
  results_generated_at: Optional[str] = None
103
 
 
 
 
 
 
 
 
 
 
 
104
  def update_timestamp(self) -> None:
105
  """Update the updated_at timestamp."""
106
  self.updated_at = datetime.now().isoformat()
@@ -247,6 +254,14 @@ def session_from_json(json_str: str) -> SessionState:
247
  if "tab4_complete" in data:
248
  del data["tab4_complete"]
249
 
 
 
 
 
 
 
 
 
250
  return SessionState.model_validate(data)
251
  except Exception:
252
  return create_new_session()
 
77
  This model is serialized to localStorage for persistence.
78
  Images are stored separately and referenced by ID.
79
 
80
+ MVP Simplification: Single room, 2 tabs (Input + Results/Chat).
 
81
  """
82
 
83
  # Session metadata
 
86
  updated_at: str = Field(default_factory=lambda: datetime.now().isoformat())
87
  name: str = "" # Display name for history list
88
 
89
+ # Input completion status (single flag replaces 3 tab flags)
90
+ input_complete: bool = False
 
 
91
 
92
  # Form data - single room (not list)
93
  room: RoomFormData = Field(default_factory=RoomFormData)
 
98
  has_results: bool = False
99
  results_generated_at: Optional[str] = None
100
 
101
+ # Chat history (Gradio 6 messages format)
102
+ chat_history: list[dict] = Field(default_factory=list)
103
+
104
+ # Serializable subset of PipelineResult (excludes PIL images)
105
+ pipeline_result_json: Optional[str] = None
106
+
107
+ # Document state for modifications
108
+ generated_document: Optional[str] = None
109
+ original_document: Optional[str] = None
110
+
111
  def update_timestamp(self) -> None:
112
  """Update the updated_at timestamp."""
113
  self.updated_at = datetime.now().isoformat()
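The `chat_history` field added above uses the Gradio 6 "messages" format: a flat list of `{"role", "content"}` dicts. A minimal sketch of appending one exchange in that shape (the helper name is illustrative, not part of the codebase):

```python
# Gradio 6 "messages" chat format: a list of {"role", "content"} dicts,
# matching the shape stored in SessionState.chat_history.
def append_turn(history: list[dict], user_msg: str, assistant_msg: str) -> list[dict]:
    """Append one user/assistant exchange to a messages-format history."""
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": assistant_msg})
    return history
```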
 
254
  if "tab4_complete" in data:
255
  del data["tab4_complete"]
256
 
257
+ # Migration: Convert old tab1/2/3_complete to input_complete
258
+ if "tab1_complete" in data or "tab2_complete" in data or "tab3_complete" in data:
259
+ # Input is complete if all three old tabs were complete
260
+ tab1 = data.pop("tab1_complete", False)
261
+ tab2 = data.pop("tab2_complete", False)
262
+ tab3 = data.pop("tab3_complete", False)
263
+ data["input_complete"] = tab1 and tab2 and tab3
264
+
265
  return SessionState.model_validate(data)
266
  except Exception:
267
  return create_new_session()
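The migration hunk above collapses the three legacy per-tab flags into one `input_complete` flag. A standalone sketch of the same migration on a plain dict, testable without the `SessionState` model (the function name is illustrative):

```python
def migrate_session_dict(data: dict) -> dict:
    """Collapse legacy per-tab completion flags into a single input_complete flag.

    Mirrors the session_from_json migration: tab4_complete is dropped, and
    input is considered complete only if all three legacy input tabs were.
    """
    # Drop the removed results-tab flag if present
    data.pop("tab4_complete", None)

    # Convert old tab1/2/3_complete to input_complete
    if any(k in data for k in ("tab1_complete", "tab2_complete", "tab3_complete")):
        tab1 = data.pop("tab1_complete", False)
        tab2 = data.pop("tab2_complete", False)
        tab3 = data.pop("tab3_complete", False)
        data["input_complete"] = tab1 and tab2 and tab3
    return data
```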
ui/tabs/__init__.py CHANGED
@@ -1,13 +1,36 @@
1
- """Tab modules for FDAM AI Pipeline UI."""
2
 
3
- from . import room
4
- from . import images
5
- from . import observations
6
- from . import results
7
 
8
  __all__ = [
9
- "room",
10
- "images",
11
- "observations",
12
- "results",
13
  ]
1
+ """Tab modules for FDAM AI Pipeline UI.
2
 
3
+ Simplified 2-tab structure:
4
+ - input_tab: Combined room, images, and observations
5
+ - results_tab: Results display with chat interface
6
+
7
+ Legacy modules (room, images, observations, results) remain available but are not
8
+ pre-imported, to avoid pulling in heavy dependencies (chromadb, etc.) at import time.
9
+ """
10
+
11
+ # Only import the new simplified tabs by default
12
+ from . import input_tab
13
+ from . import results_tab
14
 
15
  __all__ = [
16
+ # New simplified tabs (always available)
17
+ "input_tab",
18
+ "results_tab",
 
19
  ]
20
+
21
+
22
+ def __getattr__(name):
23
+ """Lazy import legacy modules only when accessed."""
24
+ if name == "room":
25
+ from . import room
26
+ return room
27
+ elif name == "images":
28
+ from . import images
29
+ return images
30
+ elif name == "observations":
31
+ from . import observations
32
+ return observations
33
+ elif name == "results":
34
+ from . import results
35
+ return results
36
+ raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
ui/tabs/input_tab.py ADDED
@@ -0,0 +1,684 @@
1
+ """Tab 1: Input - Combined Room, Images, and Observations.
2
+
3
+ Consolidated input tab using Accordions for collapsible sections.
4
+ Room and Images are open by default, Observations is collapsed.
5
+ """
6
+
7
+ import uuid
8
+ import io
9
+ import gradio as gr
10
+ from typing import Any
11
+ from PIL import Image
12
+
13
+ from ui.state import SessionState, ImageFormData, ObservationsFormData
14
+ from ui.constants import CEILING_HEIGHT_PRESETS
15
+ from ui.components import image_store
16
+ from config.settings import settings
17
+
18
+
19
+ # Facility classification options
20
+ FACILITY_OPTIONS = [
21
+ ("Operational", "operational"),
22
+ ("Non-Operational", "non-operational"),
23
+ ("Public/Childcare", "public-childcare"),
24
+ ]
25
+
26
+ # Construction era options
27
+ CONSTRUCTION_ERA_OPTIONS = [
28
+ ("Pre-1980 (potential LBP/ACM)", "pre-1980"),
29
+ ("1980-2000", "1980-2000"),
30
+ ("Post-2000", "post-2000"),
31
+ ]
32
+
33
+ # Odor mapping
34
+ ODOR_MAP = {
35
+ "None": "none",
36
+ "Faint": "faint",
37
+ "Moderate": "moderate",
38
+ "Strong": "strong",
39
+ }
40
+ ODOR_MAP_REVERSE = {v: k for k, v in ODOR_MAP.items()}
41
+
42
+ # Char density mapping
43
+ CHAR_DENSITY_MAP = {
44
+ "None": None,
45
+ "Sparse": "sparse",
46
+ "Moderate": "moderate",
47
+ "Dense": "dense",
48
+ }
49
+ CHAR_DENSITY_MAP_REVERSE = {v: k for k, v in CHAR_DENSITY_MAP.items()}
50
+
51
+
52
+ def create_tab() -> dict[str, Any]:
53
+ """Create combined Input tab with accordions.
54
+
55
+ Returns:
56
+ Dictionary of component references for event wiring.
57
+ """
58
+ # --- Room Details Accordion (OPEN by default) ---
59
+ with gr.Accordion("Room Details", open=True):
60
+ room_name = gr.Textbox(
61
+ label="Room/Area Name *",
62
+ placeholder="e.g., Warehouse Bay A, Office 101",
63
+ elem_id="room_name",
64
+ )
65
+
66
+ with gr.Row():
67
+ room_length = gr.Number(
68
+ label="Length (ft) *",
69
+ minimum=1,
70
+ value=None,
71
+ elem_id="room_length",
72
+ )
73
+ room_width = gr.Number(
74
+ label="Width (ft) *",
75
+ minimum=1,
76
+ value=None,
77
+ elem_id="room_width",
78
+ )
79
+
80
+ with gr.Row():
81
+ room_height_preset = gr.Dropdown(
82
+ label="Ceiling Height *",
83
+ choices=CEILING_HEIGHT_PRESETS,
84
+ elem_id="room_height_preset",
85
+ info="Select preset or choose Custom",
86
+ )
87
+ room_height_custom = gr.Number(
88
+ label="Custom Height (ft)",
89
+ minimum=1,
90
+ value=None,
91
+ visible=False,
92
+ elem_id="room_height_custom",
93
+ )
94
+
95
+ with gr.Row():
96
+ floor_area = gr.Textbox(
97
+ label="Floor Area (SF)",
98
+ value="0",
99
+ interactive=False,
100
+ )
101
+ room_volume = gr.Textbox(
102
+ label="Volume (CF)",
103
+ value="0",
104
+ interactive=False,
105
+ )
106
+
107
+ facility_classification = gr.Radio(
108
+ label="Facility Classification *",
109
+ choices=FACILITY_OPTIONS,
110
+ value="non-operational",
111
+ elem_id="facility_classification",
112
+ info="Affects clearance thresholds",
113
+ )
114
+
115
+ construction_era = gr.Radio(
116
+ label="Construction Era *",
117
+ choices=CONSTRUCTION_ERA_OPTIONS,
118
+ value="post-2000",
119
+ elem_id="construction_era",
120
+ info="Pre-1980 triggers LBP/ACM flags",
121
+ )
122
+
123
+ # --- Images Accordion (OPEN by default) ---
124
+ with gr.Accordion("Images", open=True):
125
+ gr.Markdown(
126
+ f"*Upload up to {settings.max_images_per_assessment} images for AI analysis.*"
127
+ )
128
+
129
+ with gr.Row():
130
+ with gr.Column(scale=2):
131
+ image_upload = gr.Files(
132
+ label="Upload Images (select multiple)",
133
+ file_count="multiple",
134
+ file_types=["image"],
135
+ elem_id="image_upload",
136
+ )
137
+ image_description = gr.Textbox(
138
+ label="Description (optional)",
139
+ placeholder="e.g., View of ceiling deck from center aisle",
140
+ elem_id="image_description",
141
+ info="Applied to all images in batch",
142
+ )
143
+
144
+ with gr.Row():
145
+ add_image_btn = gr.Button("Add Images", variant="primary")
146
+ clear_upload_btn = gr.Button("Clear", variant="secondary")
147
+
148
+ with gr.Column(scale=3):
149
+ images_gallery = gr.Gallery(
150
+ label="Images Added",
151
+ columns=3,
152
+ height="auto",
153
+ elem_id="images_gallery",
154
+ )
155
+ with gr.Row():
156
+ remove_last_btn = gr.Button("Remove Last", variant="secondary")
157
+ clear_all_btn = gr.Button("Clear All", variant="stop")
158
+
159
+ with gr.Row():
160
+ image_count = gr.Textbox(
161
+ label="Images Added",
162
+ value="0 / 20",
163
+ interactive=False,
164
+ )
165
+
166
+ # Resume warning (shown when images need re-upload)
167
+ resume_warning = gr.HTML(
168
+ value="",
169
+ elem_id="resume_warning",
170
+ visible=False,
171
+ )
172
+
173
+ # --- Observations Accordion (COLLAPSED by default) ---
174
+ with gr.Accordion("Field Observations (Optional)", open=False):
175
+ gr.Markdown("*Document observations per FDAM §2.3. All fields optional.*")
176
+
177
+ with gr.Row():
178
+ with gr.Column():
179
+ smoke_odor = gr.Checkbox(
180
+ label="Smoke/fire odor present?",
181
+ elem_id="smoke_odor",
182
+ )
183
+ odor_intensity = gr.Radio(
184
+ choices=["None", "Faint", "Moderate", "Strong"],
185
+ label="Odor Intensity",
186
+ value="None",
187
+ elem_id="odor_intensity",
188
+ )
189
+
190
+ visible_soot = gr.Checkbox(
191
+ label="Visible soot deposits?",
192
+ elem_id="visible_soot",
193
+ )
194
+ soot_description = gr.Textbox(
195
+ label="Soot Pattern Description",
196
+ placeholder="e.g., Heavy deposits on ceiling",
197
+ elem_id="soot_description",
198
+ )
199
+
200
+ large_char = gr.Checkbox(
201
+ label="Large char particles observed?",
202
+ elem_id="large_char",
203
+ )
204
+ char_density = gr.Radio(
205
+ choices=["None", "Sparse", "Moderate", "Dense"],
206
+ label="Char Density",
207
+ value="None",
208
+ elem_id="char_density",
209
+ )
210
+
211
+ ash_residue = gr.Checkbox(
212
+ label="Ash-like residue present?",
213
+ elem_id="ash_residue",
214
+ )
215
+ ash_description = gr.Textbox(
216
+ label="Ash Color/Texture",
217
+ placeholder="e.g., Gray powdery residue",
218
+ elem_id="ash_description",
219
+ )
220
+
221
+ with gr.Column():
222
+ surface_discoloration = gr.Checkbox(
223
+ label="Surface discoloration?",
224
+ elem_id="surface_discoloration",
225
+ )
226
+ discoloration_description = gr.Textbox(
227
+ label="Discoloration Description",
228
+ placeholder="e.g., Yellowing on painted surfaces",
229
+ elem_id="discoloration_description",
230
+ )
231
+
232
+ dust_interference = gr.Checkbox(
233
+ label="Dust loading or interference?",
234
+ info="Pre-existing dust may affect samples",
235
+ elem_id="dust_interference",
236
+ )
237
+ dust_notes = gr.Textbox(
238
+ label="Dust Notes",
239
+ placeholder="e.g., Heavy ambient dust",
240
+ elem_id="dust_notes",
241
+ )
242
+
243
+ wildfire_indicators = gr.Checkbox(
244
+ label="Wildfire indicators (vegetation/pollen)?",
245
+ info="May indicate wildfire vs structural fire",
246
+ elem_id="wildfire_indicators",
247
+ )
248
+ wildfire_notes = gr.Textbox(
249
+ label="Wildfire Notes",
250
+ placeholder="e.g., Burned pine pollen visible",
251
+ elem_id="wildfire_notes",
252
+ )
253
+
254
+ additional_notes = gr.Textbox(
255
+ label="Additional Observations",
256
+ lines=3,
257
+ placeholder="Any other relevant observations...",
258
+ elem_id="additional_notes",
259
+ )
260
+
261
+ # --- Generate Button and Validation ---
262
+ validation_status = gr.HTML(
263
+ value="",
264
+ elem_id="input_validation",
265
+ )
266
+
267
+ generate_btn = gr.Button(
268
+ "Generate Assessment →",
269
+ variant="primary",
270
+ size="lg",
271
+ )
272
+
273
+ return {
274
+ # Room components
275
+ "room_name": room_name,
276
+ "room_length": room_length,
277
+ "room_width": room_width,
278
+ "room_height_preset": room_height_preset,
279
+ "room_height_custom": room_height_custom,
280
+ "floor_area": floor_area,
281
+ "room_volume": room_volume,
282
+ "facility_classification": facility_classification,
283
+ "construction_era": construction_era,
284
+ # Image components
285
+ "image_upload": image_upload,
286
+ "image_description": image_description,
287
+ "add_image_btn": add_image_btn,
288
+ "clear_upload_btn": clear_upload_btn,
289
+ "images_gallery": images_gallery,
290
+ "remove_last_btn": remove_last_btn,
291
+ "clear_all_btn": clear_all_btn,
292
+ "image_count": image_count,
293
+ "resume_warning": resume_warning,
294
+ # Observation components
295
+ "smoke_odor": smoke_odor,
296
+ "odor_intensity": odor_intensity,
297
+ "visible_soot": visible_soot,
298
+ "soot_description": soot_description,
299
+ "large_char": large_char,
300
+ "char_density": char_density,
301
+ "ash_residue": ash_residue,
302
+ "ash_description": ash_description,
303
+ "surface_discoloration": surface_discoloration,
304
+ "discoloration_description": discoloration_description,
305
+ "dust_interference": dust_interference,
306
+ "dust_notes": dust_notes,
307
+ "wildfire_indicators": wildfire_indicators,
308
+ "wildfire_notes": wildfire_notes,
309
+ "additional_notes": additional_notes,
310
+ # Validation and generation
311
+ "validation_status": validation_status,
312
+ "generate_btn": generate_btn,
313
+ }
314
+
315
+
316
+ # --- Room Functions ---
317
+
318
+
319
+ def on_height_preset_change(preset_value: int | None) -> dict:
320
+ """Show/hide custom height input based on preset selection."""
321
+ return gr.update(visible=(preset_value is None))
322
+
323
+
324
+ def update_calculated_values(
325
+ length: float | None,
326
+ width: float | None,
327
+ height_preset: int | None,
328
+ height_custom: float | None,
329
+ ) -> tuple[str, str]:
330
+ """Calculate and return floor area and volume."""
331
+ length_val = float(length) if length and length > 0 else 0
332
+ width_val = float(width) if width and width > 0 else 0
333
+
334
+ if height_preset is not None:
335
+ height_val = float(height_preset)
336
+ elif height_custom is not None and height_custom > 0:
337
+ height_val = float(height_custom)
338
+ else:
339
+ height_val = 0
340
+
341
+ area = length_val * width_val
342
+ volume = area * height_val
343
+
344
+ return f"{area:,.0f}", f"{volume:,.0f}"
345
+
346
+
347
+ def save_room_to_session(
348
+ session: SessionState,
349
+ name: str,
350
+ length: float | None,
351
+ width: float | None,
352
+ height_preset: int | None,
353
+ height_custom: float | None,
354
+ facility_classification: str,
355
+ construction_era: str,
356
+ ) -> SessionState:
357
+ """Save room data to session."""
358
+ if height_preset is not None:
359
+ height = float(height_preset)
360
+ elif height_custom is not None and height_custom > 0:
361
+ height = float(height_custom)
362
+ else:
363
+ height = 0
364
+
365
+ session.room.name = name.strip() if name else ""
366
+ session.room.length_ft = float(length) if length and length > 0 else 0
367
+ session.room.width_ft = float(width) if width and width > 0 else 0
368
+ session.room.ceiling_height_ft = height
369
+ session.room.facility_classification = facility_classification
370
+ session.room.construction_era = construction_era
371
+ session.update_timestamp()
372
+
373
+ return session
374
+
375
+
376
+ def load_room_from_session(
377
+ session: SessionState,
378
+ ) -> tuple[str, float | None, float | None, int | None, float | None, str, str, str, str]:
379
+ """Load room data from session."""
380
+ r = session.room
381
+
382
+ height_preset = None
383
+ height_custom = None
384
+ preset_values = [p[1] for p in CEILING_HEIGHT_PRESETS if p[1] is not None]
385
+ if r.ceiling_height_ft in preset_values:
386
+ height_preset = int(r.ceiling_height_ft)
387
+ elif r.ceiling_height_ft > 0:
388
+ height_custom = r.ceiling_height_ft
389
+
390
+ area = r.length_ft * r.width_ft
391
+ volume = area * r.ceiling_height_ft
392
+
393
+ return (
394
+ r.name,
395
+ r.length_ft if r.length_ft > 0 else None,
396
+ r.width_ft if r.width_ft > 0 else None,
397
+ height_preset,
398
+ height_custom,
399
+ f"{area:,.0f}",
400
+ f"{volume:,.0f}",
401
+ r.facility_classification,
402
+ r.construction_era,
403
+ )
404
+
405
+
406
+ # --- Image Functions ---
407
+
408
+
409
+ def add_image(
410
+ session: SessionState,
411
+ files: list | None,
412
+ description: str,
413
+ ) -> tuple[SessionState, list[tuple], str, str, list | None, str]:
414
+ """Add images to the session."""
415
+ validation_html = ""
416
+
417
+ errors = []
418
+ if not files or len(files) == 0:
419
+ errors.append("Please upload at least one image")
420
+
421
+ current_count = len(session.images)
422
+ max_allowed = settings.max_images_per_assessment
423
+ if files and current_count + len(files) > max_allowed:
424
+ remaining = max_allowed - current_count
425
+ if remaining <= 0:
426
+ errors.append(f"Maximum of {max_allowed} images allowed")
427
+ else:
428
+ errors.append(f"Can only add {remaining} more image(s)")
429
+
430
+ if errors:
431
+ error_items = "".join(f"<li>{e}</li>" for e in errors)
432
+ validation_html = f"""
433
+ <div style="background: #ffebee; border: 1px solid #ef5350; border-radius: 4px; padding: 10px;">
434
+ <ul style="margin: 0; padding-left: 20px; color: #c62828;">{error_items}</ul>
435
+ </div>
436
+ """
437
+ gallery_data = _get_gallery_data(session)
438
+ count_str = f"{len(session.images)} / {max_allowed}"
439
+ return session, gallery_data, validation_html, count_str, files, description
440
+
441
+ room_id = session.room.id
442
+ room_name = session.room.name.replace(" ", "_")[:20] if session.room.name else "room"
443
+
444
+ added_count = 0
445
+ for file_obj in files:
446
+ if len(session.images) >= max_allowed:
447
+ break
448
+
449
+ try:
450
+ img = Image.open(file_obj.name)
451
+ image_id = f"img-{uuid.uuid4().hex[:8]}"
452
+
453
+ img_bytes = io.BytesIO()
454
+ img.save(img_bytes, format="PNG")
455
+ image_store.store(image_id, img_bytes.getvalue())
456
+
457
+ img_meta = ImageFormData(
458
+ id=image_id,
459
+ filename=f"{room_name}_{image_id}.png",
460
+ room_id=room_id,
461
+ description=description.strip() if description else "",
462
+ )
463
+ session.images.append(img_meta)
464
+ added_count += 1
465
+ except Exception:
466
+ continue
467
+
468
+ session.update_timestamp()
469
+
470
+ if added_count > 0:
471
+ validation_html = f"""
472
+ <div style="background: #e8f5e9; border: 1px solid #66bb6a; border-radius: 4px; padding: 10px;">
473
+ <span style="color: #2e7d32;">✓ {added_count} image(s) added</span>
474
+ </div>
475
+ """
476
+ else:
477
+ validation_html = """
478
+ <div style="background: #fff3e0; border: 1px solid #ffb74d; border-radius: 4px; padding: 10px;">
479
+ <span style="color: #e65100;">No images could be processed</span>
480
+ </div>
481
+ """
482
+
483
+ gallery_data = _get_gallery_data(session)
484
+ count_str = f"{len(session.images)} / {max_allowed}"
485
+ return session, gallery_data, validation_html, count_str, None, ""
486
+
487
+
488
+ def remove_last_image(
489
+ session: SessionState,
490
+ ) -> tuple[SessionState, list[tuple], str, str]:
491
+ """Remove the last image from the session."""
492
+ validation_html = ""
493
+
494
+ if session.images:
495
+ removed = session.images.pop()
496
+ image_store.remove(removed.id)
497
+ session.update_timestamp()
498
+ validation_html = f"""
499
+ <div style="background: #fff3e0; border: 1px solid #ffb74d; border-radius: 4px; padding: 10px;">
500
+ <span style="color: #e65100;">Removed: {removed.filename}</span>
501
+ </div>
502
+ """
503
+
504
+ gallery_data = _get_gallery_data(session)
505
+ count_str = f"{len(session.images)} / {settings.max_images_per_assessment}"
506
+ return session, gallery_data, validation_html, count_str
507
+
508
+
509
+ def clear_all_images(session: SessionState) -> tuple[SessionState, list[tuple], str, str]:
510
+ """Clear all images from the session."""
511
+ count = len(session.images)
512
+
513
+ for img in session.images:
514
+ image_store.remove(img.id)
515
+
516
+ session.images = []
517
+ session.update_timestamp()
518
+
519
+ validation_html = ""
520
+ if count > 0:
521
+ validation_html = f"""
522
+ <div style="background: #fff3e0; border: 1px solid #ffb74d; border-radius: 4px; padding: 10px;">
523
+ <span style="color: #e65100;">Cleared {count} image(s)</span>
524
+ </div>
525
+ """
526
+
527
+ count_str = f"0 / {settings.max_images_per_assessment}"
528
+ return session, [], validation_html, count_str
529
+
530
+
531
+ def load_images_from_session(session: SessionState) -> tuple[list[tuple], str, str]:
532
+ """Load gallery data and count from session."""
533
+ gallery_data = _get_gallery_data(session)
534
+ count_str = f"{len(session.images)} / {settings.max_images_per_assessment}"
535
+
536
+ expected_ids = [img.id for img in session.images]
537
+ missing_ids = image_store.get_missing_ids(expected_ids)
538
+
539
+ resume_html = ""
540
+ if missing_ids and session.images:
541
+ resume_html = f"""
542
+ <div style="background: #fff3e0; border: 1px solid #ffb74d; border-radius: 4px; padding: 10px;">
543
+ <strong style="color: #e65100;">⚠ {len(missing_ids)} image(s) need re-upload</strong>
544
+ </div>
545
+ """
546
+
547
+ return gallery_data, count_str, resume_html
548
+
549
+
550
+ def _get_gallery_data(session: SessionState) -> list[tuple]:
551
+ """Get gallery data from session images."""
552
+ gallery_data = []
553
+ for img_meta in session.images:
554
+ img_bytes = image_store.get(img_meta.id)
555
+ if img_bytes:
556
+ pil_image = Image.open(io.BytesIO(img_bytes))
557
+ caption = img_meta.description or img_meta.filename
558
+ gallery_data.append((pil_image, caption))
559
+ return gallery_data
560
+
561
+
562
+ # --- Observations Functions ---
563
+
564
+
565
+ def save_observations_to_session(
566
+ session: SessionState,
567
+ smoke_odor: bool,
568
+ odor_intensity: str,
569
+ visible_soot: bool,
570
+ soot_description: str,
571
+ large_char: bool,
572
+ char_density: str,
573
+ ash_residue: bool,
574
+ ash_description: str,
575
+ surface_discoloration: bool,
576
+ discoloration_description: str,
577
+ dust_interference: bool,
578
+ dust_notes: str,
579
+ wildfire_indicators: bool,
580
+ wildfire_notes: str,
581
+ additional_notes: str,
582
+ ) -> SessionState:
583
+ """Update session state from observation form values."""
584
+ session.observations = ObservationsFormData(
585
+ smoke_fire_odor=smoke_odor or False,
586
+ odor_intensity=ODOR_MAP.get(odor_intensity, "none"),
587
+ visible_soot_deposits=visible_soot or False,
588
+ soot_pattern_description=soot_description or "",
589
+ large_char_particles=large_char or False,
590
+ char_density_estimate=CHAR_DENSITY_MAP.get(char_density),
591
+ ash_like_residue=ash_residue or False,
592
+ ash_color_texture=ash_description or "",
593
+ surface_discoloration=surface_discoloration or False,
594
+ discoloration_description=discoloration_description or "",
595
+ dust_loading_interference=dust_interference or False,
596
+ dust_notes=dust_notes or "",
597
+ wildfire_indicators=wildfire_indicators or False,
598
+ wildfire_notes=wildfire_notes or "",
599
+ additional_notes=additional_notes or "",
600
+ )
601
+ session.update_timestamp()
602
+ return session
603
+
604
+
605
+ def load_observations_from_session(session: SessionState) -> tuple:
606
+ """Load observation form values from session state."""
607
+ obs = session.observations
608
+ return (
609
+ obs.smoke_fire_odor,
610
+ ODOR_MAP_REVERSE.get(obs.odor_intensity, "None"),
611
+ obs.visible_soot_deposits,
612
+ obs.soot_pattern_description,
613
+ obs.large_char_particles,
614
+ CHAR_DENSITY_MAP_REVERSE.get(obs.char_density_estimate, "None"),
615
+ obs.ash_like_residue,
616
+ obs.ash_color_texture,
617
+ obs.surface_discoloration,
618
+ obs.discoloration_description,
619
+ obs.dust_loading_interference,
620
+ obs.dust_notes,
621
+ obs.wildfire_indicators,
622
+ obs.wildfire_notes,
623
+ obs.additional_notes,
624
+ )
625
+
626
+
627
+ # --- Validation and Generation ---
628
+
629
+
630
+ def validate_input(session: SessionState) -> tuple[bool, list[str]]:
631
+ """Validate all input sections."""
632
+ errors = []
633
+
634
+ # Room validation
635
+ r = session.room
636
+ if not r.name:
637
+ errors.append("Room name is required")
638
+ if r.length_ft <= 0:
639
+ errors.append("Length must be greater than 0")
640
+ if r.width_ft <= 0:
641
+ errors.append("Width must be greater than 0")
642
+ if r.ceiling_height_ft <= 0:
643
+ errors.append("Ceiling height must be greater than 0")
644
+
645
+ # Image validation
646
+ if not session.images:
647
+ errors.append("At least one image is required")
648
+
649
+ # Check for missing images in memory
650
+ expected_ids = [img.id for img in session.images]
651
+ missing_ids = image_store.get_missing_ids(expected_ids)
652
+ if missing_ids:
653
+ errors.append(f"{len(missing_ids)} image(s) need to be re-uploaded")
654
+
655
+ return len(errors) == 0, errors
656
+
657
+
658
+ def validate_and_generate(session: SessionState) -> tuple[SessionState, str, dict]:
659
+ """Validate input and switch to Results tab if valid.
660
+
661
+ Returns:
662
+ Tuple of (session, validation_html, tabs_update).
663
+ """
664
+ is_valid, errors = validate_input(session)
665
+
666
+ if is_valid:
667
+ session.input_complete = True
668
+ session.update_timestamp()
669
+ html = """
670
+ <div style="background: #e8f5e9; border: 1px solid #66bb6a; border-radius: 4px; padding: 10px;">
671
+ <span style="color: #2e7d32;">✓ All inputs valid. Switching to Results...</span>
672
+ </div>
673
+ """
674
+ return session, html, gr.update(selected=1) # Go to Results tab (index 1)
675
+ else:
676
+ session.input_complete = False
677
+ error_items = "".join(f"<li>{e}</li>" for e in errors)
678
+ html = f"""
679
+ <div style="background: #ffebee; border: 1px solid #ef5350; border-radius: 4px; padding: 10px;">
680
+ <strong style="color: #c62828;">Please fix the following:</strong>
681
+ <ul style="margin: 5px 0 0 0; padding-left: 20px; color: #c62828;">{error_items}</ul>
682
+ </div>
683
+ """
684
+ return session, html, gr.update(selected=0) # Stay on Input tab
ui/tabs/results_tab.py ADDED
@@ -0,0 +1,391 @@
1
+ """Tab 2: Results + Chat.
2
+
3
+ Display generated assessment results with integrated chat for Q&A and modifications.
4
+ """
5
+
6
+ import json
7
+ import gradio as gr
8
+ from typing import Any, Optional, TYPE_CHECKING
9
+ from datetime import datetime
10
+ import tempfile
11
+
12
+ from ui.state import SessionState
13
+ from ui.components import create_stats_dict, create_progress_html, image_store
14
+
15
+ # Lazy imports to avoid chromadb dependency at module load time
16
+ # These are imported when generate_assessment() is called
17
+ if TYPE_CHECKING:
18
+ from pipeline import FDAMPipeline, PipelineResult, PDFGenerator
19
+
20
+
21
+ def create_tab() -> dict[str, Any]:
22
+ """Create Results + Chat tab UI components.
23
+
24
+ Returns:
25
+ Dictionary of component references for event wiring.
26
+ """
27
+ # --- Processing Section ---
28
+ with gr.Row():
29
+ generate_btn = gr.Button(
30
+ "Generate Assessment",
31
+ variant="primary",
32
+ scale=2,
33
+ elem_id="generate_btn",
34
+ )
35
+ processing_status = gr.Textbox(
36
+ label="Status",
37
+ value="Ready",
38
+ interactive=False,
39
+ elem_id="processing_status",
40
+ )
41
+
42
+ progress_html = gr.HTML(
43
+ value="",
44
+ elem_id="progress_html",
45
+ )
46
+
47
+ # --- Results Display ---
48
+ gr.Markdown("---")
49
+
50
+ with gr.Row():
51
+ with gr.Column(scale=2):
52
+ gr.Markdown("#### Annotated Images")
53
+ annotated_gallery = gr.Gallery(
54
+ label="AI-Analyzed Images",
55
+ columns=2,
56
+ height="auto",
57
+ elem_id="annotated_gallery",
58
+ )
59
+
60
+ with gr.Column(scale=1):
61
+ gr.Markdown("#### Assessment Summary")
62
+ stats_output = gr.JSON(
63
+ label="Statistics",
64
+ elem_id="stats_output",
65
+ )
66
+
67
+ gr.Markdown("---")
68
+ gr.Markdown("### Cleaning Specification / Scope of Work")
69
+
70
+ sow_output = gr.Markdown(
71
+ value="*Generate an assessment to see results here.*",
72
+ elem_id="sow_output",
73
+ )
74
+
75
+ # --- Downloads ---
76
+ gr.Markdown("#### Downloads")
77
+ with gr.Row():
78
+ download_md = gr.File(
79
+ label="Download Markdown (.md)",
80
+ elem_id="download_md",
81
+ )
82
+ download_pdf = gr.File(
83
+ label="Download PDF (.pdf)",
84
+ elem_id="download_pdf",
85
+ )
86
+
87
+ # --- Chat Interface ---
88
+ gr.Markdown("---")
89
+ gr.Markdown("### Ask Questions or Request Changes")
90
+ gr.Markdown(
91
+ "*Chat with the AI about the assessment results or request document modifications.*"
92
+ )
93
+
94
+ chatbot = gr.Chatbot(
95
+ label="Chat",
96
+ type="messages", # Gradio 6 format: [{"role": "user", "content": "..."}, ...]
97
+ height=300,
98
+ elem_id="chatbot",
99
+ )
100
+
101
+ with gr.Row():
102
+ chat_input = gr.Textbox(
103
+ label="Message",
104
+ placeholder="Ask a question or request a change...",
105
+ scale=4,
106
+ elem_id="chat_input",
107
+ )
108
+ chat_send_btn = gr.Button("Send", variant="primary", scale=1)
109
+
110
+ # Quick action buttons
111
+ with gr.Row():
112
+ gr.Markdown("**Quick Actions:**")
113
+ with gr.Row():
114
+ quick_explain_zones = gr.Button("Explain zone classifications", size="sm")
115
+ quick_explain_materials = gr.Button("Explain detected materials", size="sm")
116
+ quick_sampling = gr.Button("Explain sampling plan", size="sm")
117
+ quick_add_note = gr.Button("Add a note to document", size="sm")
118
+
119
+ # Navigation
120
+ with gr.Row():
121
+ back_btn = gr.Button("← Back to Input")
122
+ regenerate_btn = gr.Button(
123
+ "Regenerate Assessment",
124
+ variant="secondary",
+        )
+        reset_doc_btn = gr.Button(
+            "Reset Document",
+            variant="secondary",
+        )
+
+    return {
+        # Generation controls
+        "generate_btn": generate_btn,
+        "processing_status": processing_status,
+        "progress_html": progress_html,
+        # Results display
+        "annotated_gallery": annotated_gallery,
+        "stats_output": stats_output,
+        "sow_output": sow_output,
+        # Downloads
+        "download_md": download_md,
+        "download_pdf": download_pdf,
+        # Chat interface
+        "chatbot": chatbot,
+        "chat_input": chat_input,
+        "chat_send_btn": chat_send_btn,
+        # Quick actions
+        "quick_explain_zones": quick_explain_zones,
+        "quick_explain_materials": quick_explain_materials,
+        "quick_sampling": quick_sampling,
+        "quick_add_note": quick_add_note,
+        # Navigation
+        "back_btn": back_btn,
+        "regenerate_btn": regenerate_btn,
+        "reset_doc_btn": reset_doc_btn,
+    }
+
+
+def check_preflight(session: SessionState) -> str:
+    """Check whether the assessment can be generated.
+
+    Returns:
+        HTML string with the preflight status.
+    """
+    can_generate, errors = session.can_generate()
+
+    # Also check that the uploaded images are still in memory
+    expected_ids = [img.id for img in session.images]
+    missing_ids = image_store.get_missing_ids(expected_ids)
+    if missing_ids:
+        errors.append(f"{len(missing_ids)} image(s) need to be re-uploaded")
+        can_generate = False
+
+    if can_generate:
+        stats = create_stats_dict(session)
+        return f"""
+        <div style="background: #e8f5e9; border: 1px solid #66bb6a; border-radius: 4px; padding: 15px;">
+            <strong style="color: #2e7d32;">✓ Ready to Generate</strong>
+            <div style="margin-top: 10px; color: #333;">
+                <strong>Room:</strong> {stats['room_name']}<br>
+                <strong>Images:</strong> {stats['images']}<br>
+                <strong>Total Area:</strong> {stats['total_floor_area_sf']} SF
+            </div>
+        </div>
+        """
+    else:
+        error_items = "".join(f"<li>{e}</li>" for e in errors)
+        return f"""
+        <div style="background: #ffebee; border: 1px solid #ef5350; border-radius: 4px; padding: 15px;">
+            <strong style="color: #c62828;">Cannot Generate - Please Fix:</strong>
+            <ul style="margin: 10px 0 0 0; padding-left: 20px; color: #c62828;">{error_items}</ul>
+        </div>
+        """
+
+
+def generate_assessment(
+    session: SessionState,
+    progress: Optional[gr.Progress] = None,
+) -> tuple[SessionState, str, str, list[tuple], dict, str, Optional[str], Optional[str], list[dict]]:
+    """Generate the assessment using the FDAM pipeline.
+
+    Returns:
+        Tuple of (session, status, progress_html, annotated_images,
+        stats, sow_markdown, md_file_path, pdf_file_path, chat_history).
+    """
+    # Lazy import to avoid the chromadb dependency at module load
+    from pipeline import FDAMPipeline, PipelineResult, PDFGenerator
+
+    # Create pipeline instance
+    pipeline = FDAMPipeline()
+
+    # Define progress callback for Gradio
+    def progress_callback(prog):
+        if progress:
+            progress(prog.percent, desc=prog.message)
+
+    # Execute pipeline
+    result: PipelineResult = pipeline.execute(
+        session=session,
+        progress_callback=progress_callback,
+    )
+
+    # Handle errors
+    if not result.success:
+        error_msg = "**Error:** Please fix the following before generating:\n\n"
+        error_msg += "\n".join(f"- {e}" for e in result.errors)
+        return (
+            result.session,
+            "Error: Cannot generate",
+            "",
+            [],
+            {},
+            error_msg,
+            None,
+            None,
+            [],  # Clear chat on error
+        )
+
+    # Generate stats dictionary for the UI
+    stats = pipeline.generate_stats_dict(result)
+
+    # Get markdown content
+    sow_markdown = result.document.markdown if result.document else ""
+
+    # Store document in session for chat modifications
+    session.generated_document = sow_markdown
+    session.original_document = sow_markdown
+
+    # Store a serializable subset of PipelineResult for chat context
+    session.pipeline_result_json = _serialize_pipeline_result(result)
+
+    # Clear chat history on new generation
+    session.chat_history = []
+
+    # Save markdown file
+    md_path = None
+    pdf_path = None
+
+    try:
+        if sow_markdown:
+            room_name_safe = session.room.name.replace(' ', '_') if session.room.name else "Room"
+            with tempfile.NamedTemporaryFile(
+                mode='w',
+                encoding='utf-8',  # SOW text includes non-ASCII (e.g. µg/100cm²)
+                suffix='.md',
+                delete=False,
+                prefix=f"SOW_{room_name_safe}_",
+            ) as f:
+                f.write(sow_markdown)
+                md_path = f.name
+
+            # Generate PDF
+            pdf_generator = PDFGenerator()
+            pdf_result = pdf_generator.generate_pdf(sow_markdown)
+            if pdf_result.success:
+                pdf_path = pdf_result.pdf_path
+            else:
+                result.warnings.append(f"PDF generation failed: {pdf_result.error_message}")
+
+    except Exception as e:
+        print(f"Error saving files: {e}")
+
+    # Add warnings to status if any
+    status = "Complete"
+    if result.warnings:
+        status = f"Complete ({len(result.warnings)} warnings)"
+
+    session.has_results = True
+    session.results_generated_at = datetime.now().isoformat()
+    session.update_timestamp()
+
+    return (
+        session,
+        status,
+        create_progress_html(6, 6, f"Complete! ({result.execution_time_seconds:.1f}s)"),
+        result.annotated_images,
+        stats,
+        sow_markdown,
+        md_path,
+        pdf_path,
+        [],  # Reset chat history
+    )
+
+
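The "lazy import" comment above relies on the pattern the commit message describes for `pipeline/__init__.py` and `rag/__init__.py`: a module-level `__getattr__` (PEP 562) that defers importing heavy submodules such as chromadb until an attribute is first accessed. A minimal self-contained sketch of that pattern (the module and attribute names here are illustrative, not the project's real ones; `json` stands in for a heavy dependency):

```python
import importlib
import types


def make_lazy_module(name: str, lazy_map: dict[str, str]) -> types.ModuleType:
    """Build a module whose attributes in `lazy_map` are imported on first access.

    In a real package this __getattr__ would live directly in __init__.py;
    a synthetic module keeps the sketch self-contained.
    """
    mod = types.ModuleType(name)

    def __getattr__(attr):
        if attr in lazy_map:
            source = importlib.import_module(lazy_map[attr])  # deferred import
            value = getattr(source, attr)
            setattr(mod, attr, value)  # cache so __getattr__ runs only once
            return value
        raise AttributeError(f"module {name!r} has no attribute {attr!r}")

    mod.__getattr__ = __getattr__  # PEP 562 hook: consulted on failed lookups
    return mod


# "json" is not imported until the attribute is first touched:
demo = make_lazy_module("demo_pipeline", {"dumps": "json"})
print(demo.dumps({"ok": True}))  # the import happens here
```

The cost is that import errors surface at first use rather than at startup, which is exactly the trade-off that makes it useful on HF Spaces.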
+def _serialize_pipeline_result(result: "PipelineResult") -> str:
+    """Serialize a PipelineResult to JSON, excluding non-serializable fields.
+
+    Excludes:
+        - annotated_images (contains PIL.Image objects)
+        - session (complex SessionState object)
+        - document (GeneratedDocument object)
+    """
+    # Convert VisionResult dataclasses to dicts
+    vision_results_dict = {}
+    for img_id, vr in result.vision_results.items():
+        vision_results_dict[img_id] = {
+            "zone": vr.zone,
+            "condition": vr.condition,
+            "materials": vr.materials,
+            "bounding_boxes": vr.bounding_boxes,
+        }
+
+    # Convert SurfaceDisposition dataclasses to dicts
+    dispositions_list = []
+    for disp in result.dispositions:
+        dispositions_list.append({
+            "room_name": disp.room_name,
+            "surface_type": disp.surface_type,
+            "zone": disp.zone,
+            "condition": disp.condition,
+            "disposition": disp.disposition,
+            "cleaning_method": disp.cleaning_method,
+            "notes": disp.notes,
+        })
+
+    serializable = {
+        "success": result.success,
+        "errors": result.errors,
+        "warnings": result.warnings,
+        "execution_time_seconds": result.execution_time_seconds,
+        "vision_results": vision_results_dict,
+        "dispositions": dispositions_list,
+        "calculations": result.calculations,
+    }
+    return json.dumps(serializable, default=str)
+
+
+def reset_document(session: SessionState) -> tuple[SessionState, str]:
+    """Reset the document to the originally generated version."""
+    if session.original_document:
+        session.generated_document = session.original_document
+        session.update_timestamp()
+        return session, session.original_document
+    return session, session.generated_document or ""
+
+
+def regenerate_downloads(
+    session: SessionState,
+) -> tuple[Optional[str], Optional[str]]:
+    """Regenerate download files from the current document.
+
+    Used after chat modifications to refresh the downloads.
+    """
+    sow_markdown = session.generated_document
+    if not sow_markdown:
+        return None, None
+
+    md_path = None
+    pdf_path = None
+
+    try:
+        room_name_safe = session.room.name.replace(' ', '_') if session.room.name else "Room"
+        with tempfile.NamedTemporaryFile(
+            mode='w',
+            encoding='utf-8',  # SOW text includes non-ASCII (e.g. µg/100cm²)
+            suffix='.md',
+            delete=False,
+            prefix=f"SOW_{room_name_safe}_",
+        ) as f:
+            f.write(sow_markdown)
+            md_path = f.name
+
+        # Lazy import PDFGenerator
+        from pipeline import PDFGenerator
+        pdf_generator = PDFGenerator()
+        pdf_result = pdf_generator.generate_pdf(sow_markdown)
+        if pdf_result.success:
+            pdf_path = pdf_result.pdf_path
+
+    except Exception as e:
+        print(f"Error regenerating files: {e}")
+
+    return md_path, pdf_path
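Both download helpers lean on the same `NamedTemporaryFile(delete=False)` idiom: the file must outlive the `with` block so Gradio can serve it, which also means nothing deletes it automatically and the caller (or OS temp cleanup) owns its lifetime. A standalone sketch of the idiom, with illustrative names:

```python
import os
import tempfile
from typing import Optional


def write_download(markdown: str, room_name: Optional[str]) -> str:
    """Write markdown to a temp file that survives the handle's close.

    delete=False keeps the file on disk after the with block; the caller
    is responsible for eventual cleanup.
    """
    safe = room_name.replace(" ", "_") if room_name else "Room"
    with tempfile.NamedTemporaryFile(
        mode="w",
        encoding="utf-8",
        suffix=".md",
        delete=False,
        prefix=f"SOW_{safe}_",
    ) as f:
        f.write(markdown)
        return f.name  # path stays valid after close


path = write_download("# Scope of Work\n", "Main Hall")
print(os.path.basename(path))  # e.g. SOW_Main_Hall_<random>.md
os.remove(path)  # caller-side cleanup
```

On a long-lived Space, pairing this with periodic cleanup of stale `SOW_*` files in the temp directory avoids unbounded disk growth.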