feat: Add explainability system and judge override infrastructure
- Created scheduler/control/explainability.py with DecisionStep and ExplainabilityEngine
* Provides step-by-step reasoning for scheduling decisions
* Explains ripeness status, priority scores, and policy selection
- Created scheduler/control/overrides.py with Override and OverrideManager
* Supports 8 override types: RIPENESS, PRIORITY, ADD_CASE, REMOVE_CASE, REORDER, CAPACITY, MIN_GAP, COURTROOM
* JudgePreferences for capacity, blocked dates, case type preferences
* CauseListDraft for draft-approval workflow with acceptance tracking
* Full audit trail export capability
- Modified scheduler/simulation/events.py to log decision metadata
* Added columns: priority_score, age_days, readiness_score, is_urgent, adj_boost
* Enables verification of scheduling decisions
- Modified scheduler/simulation/engine.py
* Calculate and log adjournment boost in priority scoring
* Full metadata logging for scheduled cases
- Added scripts/demo_explainability_and_controls.py
* Demonstrates explainability engine with example decisions
* Shows judge override mechanisms and audit trail
- Added scripts/generate_all_cause_lists.py
* Generates compiled cause lists from simulation events
* Creates statistics and visualizations across scenarios
- Updated README.md with explainability and control system features
- Refactored main.py to use court_scheduler CLI
- Updated pyproject.toml dependencies
Phase 6.5 (explainability + override infrastructure) complete.
Next: Integrate overrides into simulation engine.
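The adjournment boost feeds into the priority score together with the age, readiness, and urgency components logged by the engine. A minimal standalone sketch of that weighting (the function name and flat arguments are illustrative — the real engine reads these fields from a Case object, and additionally requires at least one prior hearing before applying the boost):

```python
import math

# Standalone sketch of the priority weighting used in explainability.py;
# weights (0.35 / 0.25 / 0.25 / 0.15), the 2000-day age cap, and the 21-day
# decay constant are taken from the breakdown strings in that file.
def priority_score(age_days: int, readiness_score: float, is_urgent: bool,
                   adjourned: bool, days_since_last_hearing: int) -> float:
    age = min(age_days / 2000, 1.0) * 0.35          # age component, capped
    readiness = readiness_score * 0.25              # 0..1 readiness score
    urgency = (1.0 if is_urgent else 0.0) * 0.25    # urgent flag
    adj_boost = 0.0
    if adjourned:
        # adjourned cases get a boost that decays exponentially
        # (time constant 21 days) since the last hearing
        adj_boost = math.exp(-days_since_last_hearing / 21)
    return age + readiness + urgency + adj_boost * 0.15

# A 2000-day-old, mostly ready, urgent case that was never adjourned:
print(priority_score(2000, 0.8, True, False, 30))
```

A freshly adjourned case gets the full 0.15 boost, which then fades over roughly three weeks — so recently adjourned cases are pulled forward without permanently outranking old cases.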
- README.md +31 -4
- main.py +6 -18
- pyproject.toml +6 -6
- scheduler/__init__.py +0 -0
- scheduler/control/__init__.py +31 -0
- scheduler/control/explainability.py +316 -0
- scheduler/control/overrides.py +438 -0
- scheduler/core/__init__.py +0 -0
- scheduler/core/case.py +331 -0
- scheduler/core/courtroom.py +228 -0
- scheduler/core/hearing.py +134 -0
- scheduler/core/judge.py +167 -0
- scheduler/core/ripeness.py +216 -0
- scheduler/data/__init__.py +0 -0
- scheduler/data/case_generator.py +265 -0
- scheduler/data/config.py +122 -0
- scheduler/data/param_loader.py +343 -0
- scheduler/metrics/__init__.py +0 -0
- scheduler/metrics/basic.py +62 -0
- scheduler/optimization/__init__.py +0 -0
- scheduler/output/__init__.py +5 -0
- scheduler/output/cause_list.py +232 -0
- scheduler/simulation/__init__.py +0 -0
- scheduler/simulation/allocator.py +2 -2
- scheduler/simulation/engine.py +53 -2
- scheduler/simulation/events.py +63 -0
- scheduler/simulation/policies/__init__.py +18 -0
- scheduler/simulation/policies/age.py +38 -0
- scheduler/simulation/policies/fifo.py +34 -0
- scheduler/simulation/policies/readiness.py +48 -0
- scheduler/simulation/scheduler.py +43 -0
- scheduler/utils/__init__.py +0 -0
- scheduler/utils/calendar.py +217 -0
- scheduler/visualization/__init__.py +0 -0
- scripts/demo_explainability_and_controls.py +378 -0
- scripts/generate_all_cause_lists.py +261 -0
README.md
@@ -56,19 +56,45 @@ This project delivers a complete court scheduling system for the Code4Change hac
 
 ## Quick Start
 
-###
+### Using the CLI (Recommended)
+
+The system provides a unified CLI for all operations:
+
+```bash
+# See all available commands
+court-scheduler --help
+
+# Run EDA pipeline
+court-scheduler eda
+
+# Generate test cases
+court-scheduler generate --cases 10000 --output data/generated/cases.csv
+
+# Run simulation
+court-scheduler simulate --days 384 --start 2024-01-01 --log-dir data/sim_runs/test_run
+
+# Run full workflow (EDA -> Generate -> Simulate)
+court-scheduler workflow --cases 10000 --days 384
+```
+
+### Legacy Methods (Still Supported)
+
+<details>
+<summary>Click to see old script-based approach</summary>
+
+#### 1. Run EDA Pipeline
 ```bash
 # Extract parameters from historical data
 uv run python main.py
 ```
 
-
+#### 2. Generate Case Dataset
 ```bash
-
+# Generate 10,000 synthetic cases
 uv run python -c "from scheduler.data.case_generator import CaseGenerator; from datetime import date; from pathlib import Path; gen = CaseGenerator(start=date(2022,1,1), end=date(2023,12,31), seed=42); cases = gen.generate(10000, stage_mix_auto=True); CaseGenerator.to_csv(cases, Path('data/generated/cases.csv')); print(f'Generated {len(cases)} cases')"
 ```
 
-
+#### 3. Run Simulation
 ```bash
 # 2-year simulation with ripeness classification
 uv run python scripts/simulate.py --days 384 --start 2024-01-01 --log-dir data/sim_runs/test_run
@@ -76,6 +102,7 @@ uv run python scripts/simulate.py --days 384 --start 2024-01-01 --log-dir data/s
 # Quick 60-day test
 uv run python scripts/simulate.py --days 60
 ```
+</details>
 
 ## Usage
main.py
@@ -1,23 +1,11 @@
-
+#!/usr/bin/env python
+"""Main entry point for Court Scheduling System.
 
-
-
-2. Visual EDA (plots + CSV summaries)
-3. Parameter extraction (JSON/CSV priors + features)
+This file provides the primary entry point for the project.
+It invokes the CLI which provides all scheduling system operations.
 """
 
-from
-from src.eda_load_clean import run_load_and_clean
-from src.eda_parameters import run_parameter_export
+from court_scheduler.cli import main
 
 if __name__ == "__main__":
-
-    run_load_and_clean()
-
-    print("\nStep 2/3: Exploratory analysis and plots")
-    run_exploration()
-
-    print("\nStep 3/3: Parameter extraction for simulation/scheduler")
-    run_parameter_export()
-
-    print("\nAll steps complete.")
+    main()
pyproject.toml
@@ -18,7 +18,9 @@ dependencies = [
     "typer>=0.12",
     "simpy>=4.1",
     "scipy>=1.14",
-    "scikit-learn>=1.5"
+    "scikit-learn>=1.5",
+    "streamlit>=1.28",
+    "altair>=5.0"
 ]
 
 [project.optional-dependencies]
@@ -30,11 +32,6 @@ dev = [
     "hypothesis>=6.0",
     "mypy>=1.11"
 ]
-graph = [
-    "neo4j>=5.0",
-    "igraph>=0.11",
-    "graph-tool>=2.45; sys_platform != 'win32'"
-]
 
 [project.scripts]
 court-scheduler = "court_scheduler.cli:app"
@@ -43,6 +40,9 @@ court-scheduler = "court_scheduler.cli:app"
 requires = ["hatchling"]
 build-backend = "hatchling.build"
 
+[tool.hatch.build.targets.wheel]
+packages = ["scheduler"]
+
 [tool.black]
 line-length = 100
 target-version = ["py311"]
scheduler/__init__.py: file without changes (empty, +0 -0)
scheduler/control/__init__.py
@@ -0,0 +1,31 @@
+"""Control and intervention systems for court scheduling.
+
+Provides explainability and judge override capabilities.
+"""
+
+from .explainability import (
+    DecisionStep,
+    SchedulingExplanation,
+    ExplainabilityEngine
+)
+
+from .overrides import (
+    OverrideType,
+    Override,
+    JudgePreferences,
+    CauseListDraft,
+    OverrideValidator,
+    OverrideManager
+)
+
+__all__ = [
+    'DecisionStep',
+    'SchedulingExplanation',
+    'ExplainabilityEngine',
+    'OverrideType',
+    'Override',
+    'JudgePreferences',
+    'CauseListDraft',
+    'OverrideValidator',
+    'OverrideManager'
+]
scheduler/control/explainability.py
@@ -0,0 +1,316 @@
+"""Explainability system for scheduling decisions.
+
+Provides human-readable explanations for why each case was or wasn't scheduled.
+"""
+from dataclasses import dataclass
+from typing import Optional
+from datetime import date
+
+from scheduler.core.case import Case
+
+
+@dataclass
+class DecisionStep:
+    """Single step in decision reasoning."""
+    step_name: str
+    passed: bool
+    reason: str
+    details: dict
+
+
+@dataclass
+class SchedulingExplanation:
+    """Complete explanation of scheduling decision for a case."""
+    case_id: str
+    scheduled: bool
+    decision_steps: list[DecisionStep]
+    final_reason: str
+    priority_breakdown: Optional[dict] = None
+    courtroom_assignment_reason: Optional[str] = None
+
+    def to_readable_text(self) -> str:
+        """Convert to human-readable explanation."""
+        lines = [f"Case {self.case_id}: {'SCHEDULED' if self.scheduled else 'NOT SCHEDULED'}"]
+        lines.append("=" * 60)
+
+        for i, step in enumerate(self.decision_steps, 1):
+            status = "✓ PASS" if step.passed else "✗ FAIL"
+            lines.append(f"\nStep {i}: {step.step_name} - {status}")
+            lines.append(f"  Reason: {step.reason}")
+            if step.details:
+                for key, value in step.details.items():
+                    lines.append(f"    {key}: {value}")
+
+        if self.priority_breakdown and self.scheduled:
+            lines.append(f"\nPriority Score Breakdown:")
+            for component, value in self.priority_breakdown.items():
+                lines.append(f"  {component}: {value}")
+
+        if self.courtroom_assignment_reason and self.scheduled:
+            lines.append(f"\nCourtroom Assignment:")
+            lines.append(f"  {self.courtroom_assignment_reason}")
+
+        lines.append(f"\nFinal Decision: {self.final_reason}")
+
+        return "\n".join(lines)
+
+
+class ExplainabilityEngine:
+    """Generate explanations for scheduling decisions."""
+
+    @staticmethod
+    def explain_scheduling_decision(
+        case: Case,
+        current_date: date,
+        scheduled: bool,
+        ripeness_status: str,
+        priority_score: Optional[float] = None,
+        courtroom_id: Optional[int] = None,
+        capacity_full: bool = False,
+        below_threshold: bool = False
+    ) -> SchedulingExplanation:
+        """Generate complete explanation for why case was/wasn't scheduled.
+
+        Args:
+            case: The case being scheduled
+            current_date: Current simulation date
+            scheduled: Whether case was scheduled
+            ripeness_status: Ripeness classification
+            priority_score: Calculated priority score if scheduled
+            courtroom_id: Assigned courtroom if scheduled
+            capacity_full: Whether capacity was full
+            below_threshold: Whether priority was below threshold
+
+        Returns:
+            Complete scheduling explanation
+        """
+        steps = []
+
+        # Step 1: Disposal status check
+        if case.is_disposed:
+            steps.append(DecisionStep(
+                step_name="Case Status Check",
+                passed=False,
+                reason="Case already disposed",
+                details={"disposal_date": str(case.disposal_date)}
+            ))
+            return SchedulingExplanation(
+                case_id=case.case_id,
+                scheduled=False,
+                decision_steps=steps,
+                final_reason="Case disposed, no longer eligible for scheduling"
+            )
+
+        steps.append(DecisionStep(
+            step_name="Case Status Check",
+            passed=True,
+            reason="Case active and eligible",
+            details={"status": case.status.value}
+        ))
+
+        # Step 2: Ripeness check
+        is_ripe = ripeness_status == "RIPE"
+        ripeness_detail = {}
+
+        if not is_ripe:
+            if "SUMMONS" in ripeness_status:
+                ripeness_detail["bottleneck"] = "Summons not yet served"
+                ripeness_detail["action_needed"] = "Wait for summons service confirmation"
+            elif "DEPENDENT" in ripeness_status:
+                ripeness_detail["bottleneck"] = "Dependent on another case"
+                ripeness_detail["action_needed"] = "Wait for dependent case resolution"
+            elif "PARTY" in ripeness_status:
+                ripeness_detail["bottleneck"] = "Party unavailable or unresponsive"
+                ripeness_detail["action_needed"] = "Wait for party availability confirmation"
+            else:
+                ripeness_detail["bottleneck"] = ripeness_status
+        else:
+            ripeness_detail["status"] = "All prerequisites met, ready for hearing"
+
+        if case.last_hearing_purpose:
+            ripeness_detail["last_hearing_purpose"] = case.last_hearing_purpose
+
+        steps.append(DecisionStep(
+            step_name="Ripeness Classification",
+            passed=is_ripe,
+            reason="Case is RIPE (ready for hearing)" if is_ripe else f"Case is UNRIPE ({ripeness_status})",
+            details=ripeness_detail
+        ))
+
+        if not is_ripe and not scheduled:
+            return SchedulingExplanation(
+                case_id=case.case_id,
+                scheduled=False,
+                decision_steps=steps,
+                final_reason=f"Case not scheduled: UNRIPE status blocks scheduling. {ripeness_detail.get('action_needed', 'Waiting for case to become ready')}"
+            )
+
+        # Step 3: Minimum gap check
+        min_gap_days = 7
+        days_since = case.days_since_last_hearing
+        meets_gap = case.last_hearing_date is None or days_since >= min_gap_days
+
+        gap_details = {
+            "days_since_last_hearing": days_since,
+            "minimum_required": min_gap_days
+        }
+
+        if case.last_hearing_date:
+            gap_details["last_hearing_date"] = str(case.last_hearing_date)
+
+        steps.append(DecisionStep(
+            step_name="Minimum Gap Check",
+            passed=meets_gap,
+            reason=f"{'Meets' if meets_gap else 'Does not meet'} minimum {min_gap_days}-day gap requirement",
+            details=gap_details
+        ))
+
+        if not meets_gap and not scheduled:
+            next_eligible = case.last_hearing_date.isoformat() if case.last_hearing_date else "unknown"
+            return SchedulingExplanation(
+                case_id=case.case_id,
+                scheduled=False,
+                decision_steps=steps,
+                final_reason=f"Case not scheduled: Only {days_since} days since last hearing (minimum {min_gap_days} required). Next eligible after {next_eligible}"
+            )
+
+        # Step 4: Priority calculation
+        if priority_score is not None:
+            age_component = min(case.age_days / 2000, 1.0) * 0.35
+            readiness_component = case.readiness_score * 0.25
+            urgency_component = (1.0 if case.is_urgent else 0.0) * 0.25
+
+            # Adjournment boost calculation
+            import math
+            adj_boost_value = 0.0
+            if case.status.value == "ADJOURNED" and case.hearing_count > 0:
+                adj_boost_value = math.exp(-case.days_since_last_hearing / 21)
+            adj_boost_component = adj_boost_value * 0.15
+
+            priority_breakdown = {
+                "Age": f"{age_component:.4f} (age={case.age_days}d, weight=0.35)",
+                "Readiness": f"{readiness_component:.4f} (score={case.readiness_score:.2f}, weight=0.25)",
+                "Urgency": f"{urgency_component:.4f} ({'URGENT' if case.is_urgent else 'normal'}, weight=0.25)",
+                "Adjournment Boost": f"{adj_boost_component:.4f} (days_since={days_since}, decay=exp(-{days_since}/21), weight=0.15)",
+                "TOTAL": f"{priority_score:.4f}"
+            }
+
+            steps.append(DecisionStep(
+                step_name="Priority Calculation",
+                passed=True,
+                reason=f"Priority score calculated: {priority_score:.4f}",
+                details=priority_breakdown
+            ))
+
+        # Step 5: Selection by policy
+        if scheduled:
+            if capacity_full:
+                steps.append(DecisionStep(
+                    step_name="Capacity Check",
+                    passed=True,
+                    reason="Selected despite full capacity (high priority override)",
+                    details={"priority_score": f"{priority_score:.4f}"}
+                ))
+            elif below_threshold:
+                steps.append(DecisionStep(
+                    step_name="Policy Selection",
+                    passed=True,
+                    reason="Selected by policy despite being below typical threshold",
+                    details={"reason": "Algorithm determined case should be scheduled"}
+                ))
+            else:
+                steps.append(DecisionStep(
+                    step_name="Policy Selection",
+                    passed=True,
+                    reason="Selected by scheduling policy among eligible cases",
+                    details={
+                        "priority_rank": "Top priority among eligible cases",
+                        "policy": "Readiness + Adjournment Boost"
+                    }
+                ))
+
+            # Courtroom assignment
+            if courtroom_id:
+                courtroom_reason = f"Assigned to Courtroom {courtroom_id} via load balancing (least loaded courtroom selected)"
+                steps.append(DecisionStep(
+                    step_name="Courtroom Assignment",
+                    passed=True,
+                    reason=courtroom_reason,
+                    details={"courtroom_id": courtroom_id}
+                ))
+
+            final_reason = f"Case SCHEDULED: Passed all checks, priority score {priority_score:.4f}, assigned to Courtroom {courtroom_id}"
+
+            return SchedulingExplanation(
+                case_id=case.case_id,
+                scheduled=True,
+                decision_steps=steps,
+                final_reason=final_reason,
+                priority_breakdown=priority_breakdown if priority_score else None,
+                courtroom_assignment_reason=courtroom_reason if courtroom_id else None
+            )
+        else:
+            # Not scheduled - determine why
+            if capacity_full:
+                steps.append(DecisionStep(
+                    step_name="Capacity Check",
+                    passed=False,
+                    reason="Daily capacity limit reached",
+                    details={
+                        "priority_score": f"{priority_score:.4f}" if priority_score else "N/A",
+                        "explanation": "Higher priority cases filled all available slots"
+                    }
+                ))
+                final_reason = f"Case NOT SCHEDULED: Capacity full. Priority score {priority_score:.4f} was not high enough to displace scheduled cases"
+            elif below_threshold:
+                steps.append(DecisionStep(
+                    step_name="Policy Selection",
+                    passed=False,
+                    reason="Priority below scheduling threshold",
+                    details={
+                        "priority_score": f"{priority_score:.4f}" if priority_score else "N/A",
+                        "explanation": "Other cases had higher priority scores"
+                    }
+                ))
+                final_reason = f"Case NOT SCHEDULED: Priority score {priority_score:.4f} below threshold. Wait for case to age or become more urgent"
+            else:
+                final_reason = "Case NOT SCHEDULED: Unknown reason (policy decision)"
+
+            return SchedulingExplanation(
+                case_id=case.case_id,
+                scheduled=False,
+                decision_steps=steps,
+                final_reason=final_reason,
+                priority_breakdown=priority_breakdown if priority_score else None
+            )
+
+    @staticmethod
+    def explain_why_not_scheduled(case: Case, current_date: date) -> str:
+        """Quick explanation for why a case wasn't scheduled.
+
+        Args:
+            case: Case to explain
+            current_date: Current date
+
+        Returns:
+            Human-readable reason
+        """
+        if case.is_disposed:
+            return f"Already disposed on {case.disposal_date}"
+
+        if case.ripeness_status != "RIPE":
+            bottleneck_reasons = {
+                "UNRIPE_SUMMONS": "Summons not served",
+                "UNRIPE_DEPENDENT": "Waiting for dependent case",
+                "UNRIPE_PARTY": "Party unavailable",
+                "UNRIPE_DOCUMENT": "Documents pending"
+            }
+            reason = bottleneck_reasons.get(case.ripeness_status, case.ripeness_status)
+            return f"UNRIPE: {reason}"
+
+        if case.last_hearing_date and case.days_since_last_hearing < 7:
+            return f"Too recent (last hearing {case.days_since_last_hearing} days ago, minimum 7 days)"
+
+        # If ripe and meets gap, then it's priority-based
+        priority = case.get_priority_score()
+        return f"Low priority (score {priority:.3f}) - other cases ranked higher"
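The DecisionStep/SchedulingExplanation pair above composes into the readable report. This standalone sketch inlines trimmed copies of the two dataclasses (dropping the breakdown fields and the ✓/✗ glyphs) so it runs without the scheduler package; the case ID and steps are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class DecisionStep:
    step_name: str
    passed: bool
    reason: str
    details: dict

@dataclass
class SchedulingExplanation:
    case_id: str
    scheduled: bool
    decision_steps: list
    final_reason: str

    def to_readable_text(self) -> str:
        # trimmed copy of the renderer in explainability.py
        lines = [f"Case {self.case_id}: {'SCHEDULED' if self.scheduled else 'NOT SCHEDULED'}",
                 "=" * 60]
        for i, step in enumerate(self.decision_steps, 1):
            status = "PASS" if step.passed else "FAIL"
            lines.append(f"Step {i}: {step.step_name} - {status}")
            lines.append(f"  Reason: {step.reason}")
        lines.append(f"Final Decision: {self.final_reason}")
        return "\n".join(lines)

# A case blocked at the ripeness step (IDs/values invented):
exp = SchedulingExplanation(
    case_id="CASE-0042",
    scheduled=False,
    decision_steps=[
        DecisionStep("Case Status Check", True, "Case active and eligible", {}),
        DecisionStep("Ripeness Classification", False, "Case is UNRIPE (UNRIPE_SUMMONS)", {}),
    ],
    final_reason="Case not scheduled: UNRIPE status blocks scheduling.",
)
print(exp.to_readable_text())
```

Because the engine appends steps in check order and returns early on the first failure, the last step in the report is always the one that blocked scheduling.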
|
|
@@ -0,0 +1,438 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""Judge override and intervention control system.
|
| 2 |
+
|
| 3 |
+
Allows judges to review, modify, and approve algorithmic scheduling suggestions.
|
| 4 |
+
System is suggestive, not prescriptive - judges retain final control.
|
| 5 |
+
"""
|
| 6 |
+
from dataclasses import dataclass, field
|
| 7 |
+
from datetime import date, datetime
|
| 8 |
+
from enum import Enum
|
| 9 |
+
from typing import Optional
|
| 10 |
+
import json
|
| 11 |
+
|
| 12 |
+
|
| 13 |
+
class OverrideType(Enum):
|
| 14 |
+
"""Types of overrides judges can make."""
|
| 15 |
+
RIPENESS = "ripeness" # Override ripeness classification
|
| 16 |
+
PRIORITY = "priority" # Adjust priority score or urgency
|
| 17 |
+
ADD_CASE = "add_case" # Manually add case to cause list
|
| 18 |
+
REMOVE_CASE = "remove_case" # Remove case from cause list
|
| 19 |
+
REORDER = "reorder" # Change sequence within day
|
| 20 |
+
CAPACITY = "capacity" # Adjust daily capacity
|
| 21 |
+
MIN_GAP = "min_gap" # Override minimum gap between hearings
|
| 22 |
+
COURTROOM = "courtroom" # Change courtroom assignment
|
| 23 |
+
|
| 24 |
+
|
| 25 |
+
@dataclass
|
| 26 |
+
class Override:
|
| 27 |
+
"""Single override action by a judge."""
|
| 28 |
+
override_id: str
|
| 29 |
+
override_type: OverrideType
|
| 30 |
+
case_id: str
|
| 31 |
+
judge_id: str
|
| 32 |
+
timestamp: datetime
|
| 33 |
+
old_value: Optional[str] = None
|
| 34 |
+
new_value: Optional[str] = None
|
| 35 |
+
reason: str = ""
|
| 36 |
+
date_affected: Optional[date] = None
|
| 37 |
+
courtroom_id: Optional[int] = None
|
| 38 |
+
|
| 39 |
+
def to_dict(self) -> dict:
|
| 40 |
+
"""Convert to dictionary for logging."""
|
| 41 |
+
return {
|
| 42 |
+
"override_id": self.override_id,
|
| 43 |
+
"type": self.override_type.value,
|
| 44 |
+
"case_id": self.case_id,
|
| 45 |
+
"judge_id": self.judge_id,
|
| 46 |
+
"timestamp": self.timestamp.isoformat(),
|
| 47 |
+
"old_value": self.old_value,
|
| 48 |
+
"new_value": self.new_value,
|
| 49 |
+
"reason": self.reason,
|
| 50 |
+
"date_affected": self.date_affected.isoformat() if self.date_affected else None,
|
| 51 |
+
"courtroom_id": self.courtroom_id
|
| 52 |
+
}
|
| 53 |
+
|
| 54 |
+
def to_readable_text(self) -> str:
|
| 55 |
+
"""Human-readable description of override."""
|
| 56 |
+
action_desc = {
|
| 57 |
+
OverrideType.RIPENESS: f"Changed ripeness from {self.old_value} to {self.new_value}",
|
| 58 |
+
OverrideType.PRIORITY: f"Adjusted priority from {self.old_value} to {self.new_value}",
|
| 59 |
+
OverrideType.ADD_CASE: f"Manually added case to cause list",
|
| 60 |
+
OverrideType.REMOVE_CASE: f"Removed case from cause list",
|
| 61 |
+
OverrideType.REORDER: f"Reordered from position {self.old_value} to {self.new_value}",
|
| 62 |
+
OverrideType.CAPACITY: f"Changed capacity from {self.old_value} to {self.new_value}",
|
| 63 |
+
OverrideType.MIN_GAP: f"Overrode min gap from {self.old_value} to {self.new_value} days",
|
| 64 |
+
OverrideType.COURTROOM: f"Changed courtroom from {self.old_value} to {self.new_value}"
|
| 65 |
+
}
|
| 66 |
+
|
| 67 |
+
action = action_desc.get(self.override_type, f"Override: {self.override_type.value}")
|
| 68 |
+
|
| 69 |
+
parts = [
|
| 70 |
+
f"[{self.timestamp.strftime('%Y-%m-%d %H:%M')}]",
|
| 71 |
+
f"Judge {self.judge_id}:",
|
| 72 |
+
action,
|
| 73 |
+
f"(Case {self.case_id})"
|
| 74 |
+
]
|
| 75 |
+
|
| 76 |
+
if self.reason:
|
| 77 |
+
parts.append(f"Reason: {self.reason}")
|
| 78 |
+
|
| 79 |
+
return " ".join(parts)
|
| 80 |
+
|
| 81 |
+
|
| 82 |
+
@dataclass
|
| 83 |
+
class JudgePreferences:
|
| 84 |
+
"""Judge-specific scheduling preferences."""
|
| 85 |
+
judge_id: str
|
| 86 |
+
daily_capacity_override: Optional[int] = None # Override default capacity
|
| 87 |
+
blocked_dates: list[date] = field(default_factory=list) # Vacation, illness
|
| 88 |
+
min_gap_overrides: dict[str, int] = field(default_factory=dict) # Per-case gap overrides
|
| 89 |
+
case_type_preferences: dict[str, list[str]] = field(default_factory=dict) # Day-of-week preferences
|
| 90 |
+
|
| 91 |
+
def to_dict(self) -> dict:
|
| 92 |
+
"""Convert to dictionary."""
|
| 93 |
+
return {
|
| 94 |
+
"judge_id": self.judge_id,
|
| 95 |
+
"daily_capacity_override": self.daily_capacity_override,
|
| 96 |
+
"blocked_dates": [d.isoformat() for d in self.blocked_dates],
|
| 97 |
+
"min_gap_overrides": self.min_gap_overrides,
|
| 98 |
+
"case_type_preferences": self.case_type_preferences
|
| 99 |
+
}
|
| 100 |
+
|
| 101 |
+
|
| 102 |
+
@dataclass
|
| 103 |
+
class CauseListDraft:
|
| 104 |
+
"""Draft cause list before judge approval."""
|
| 105 |
+
date: date
|
| 106 |
+
courtroom_id: int
|
| 107 |
+
judge_id: str
|
| 108 |
+
algorithm_suggested: list[str] # Case IDs suggested by algorithm
|
| 109 |
+
judge_approved: list[str] # Case IDs after judge review
|
| 110 |
+
overrides: list[Override]
|
| 111 |
+
created_at: datetime
|
| 112 |
+
finalized_at: Optional[datetime] = None
|
| 113 |
+
status: str = "DRAFT" # DRAFT, APPROVED, REJECTED
|
| 114 |
+
|
| 115 |
+
def get_acceptance_rate(self) -> float:
|
| 116 |
+
"""Calculate what % of suggestions were accepted."""
|
| 117 |
+
if not self.algorithm_suggested:
|
| 118 |
+
return 0.0
|
| 119 |
+
|
| 120 |
+
accepted = len(set(self.algorithm_suggested) & set(self.judge_approved))
|
| 121 |
+
return accepted / len(self.algorithm_suggested) * 100
|
| 122 |
+
|
| 123 |
+
def get_modifications_summary(self) -> dict:
|
| 124 |
+
"""Summarize modifications made."""
|
| 125 |
+
added = set(self.judge_approved) - set(self.algorithm_suggested)
|
| 126 |
+
removed = set(self.algorithm_suggested) - set(self.judge_approved)
|
| 127 |
+
|
| 128 |
+
override_counts = {}
|
| 129 |
+
for override in self.overrides:
|
| 130 |
+
override_type = override.override_type.value
|
| 131 |
+
override_counts[override_type] = override_counts.get(override_type, 0) + 1
|
| 132 |
+
|
| 133 |
+
return {
|
| 134 |
+
"cases_added": len(added),
|
| 135 |
+
"cases_removed": len(removed),
|
| 136 |
+
"cases_kept": len(set(self.algorithm_suggested) & set(self.judge_approved)),
|
| 137 |
+
"override_types": override_counts,
|
| 138 |
+
"acceptance_rate": self.get_acceptance_rate()
|
| 139 |
+
}
|
| 140 |
+
|
| 141 |
+
|
| 142 |
+
class OverrideValidator:
|
| 143 |
+
"""Validates override requests against constraints."""
|
| 144 |
+
|
| 145 |
+
@staticmethod
|
| 146 |
+
def validate_ripeness_override(
|
| 147 |
+
case_id: str,
|
| 148 |
+
old_status: str,
|
| 149 |
+
new_status: str,
|
| 150 |
+
reason: str
|
| 151 |
+
) -> tuple[bool, str]:
|
| 152 |
+
"""Validate ripeness override.
|
| 153 |
+
|
| 154 |
+
Args:
|
| 155 |
+
case_id: Case ID
|
| 156 |
+
old_status: Current ripeness status
|
| 157 |
+
new_status: Requested new status
|
| 158 |
+
reason: Reason for override
|
| 159 |
+
|
| 160 |
+
Returns:
|
| 161 |
+
(valid, error_message)
|
| 162 |
+
"""
|
| 163 |
+
valid_statuses = ["RIPE", "UNRIPE_SUMMONS", "UNRIPE_DEPENDENT", "UNRIPE_PARTY", "UNRIPE_DOCUMENT"]
|
| 164 |
+
|
| 165 |
+
if new_status not in valid_statuses:
|
| 166 |
+
return False, f"Invalid ripeness status: {new_status}"
|
| 167 |
+
|
| 168 |
+
if not reason:
|
| 169 |
+
return False, "Reason required for ripeness override"
|
| 170 |
+
|
| 171 |
+
if len(reason) < 10:
|
| 172 |
+
return False, "Reason must be at least 10 characters"
|
| 173 |
+
|
| 174 |
+
return True, ""
|
| 175 |
+
|
| 176 |
+
@staticmethod
|
| 177 |
+
def validate_capacity_override(
|
| 178 |
+
current_capacity: int,
|
| 179 |
+
new_capacity: int,
|
| 180 |
+
max_capacity: int = 200
|
| 181 |
+
) -> tuple[bool, str]:
|
| 182 |
+
"""Validate capacity override.
|
| 183 |
+
|
| 184 |
+
Args:
|
| 185 |
+
current_capacity: Current daily capacity
|
| 186 |
+
new_capacity: Requested new capacity
|
| 187 |
+
max_capacity: Maximum allowed capacity
|
| 188 |
+
|
| 189 |
+
Returns:
|
| 190 |
+
(valid, error_message)
|
| 191 |
+
"""
|
| 192 |
+
if new_capacity < 0:
|
| 193 |
+
return False, "Capacity cannot be negative"
|
| 194 |
+
|
| 195 |
+
if new_capacity > max_capacity:
|
| 196 |
+
return False, f"Capacity cannot exceed maximum ({max_capacity})"
|
| 197 |
+
|
| 198 |
+
if new_capacity == 0:
|
| 199 |
+
return False, "Capacity cannot be zero (use blocked dates for full closures)"
|
| 200 |
+
|
| 201 |
+
return True, ""
|
| 202 |
+
|
| 203 |
+
@staticmethod
|
| 204 |
+
def validate_add_case(
|
| 205 |
+
case_id: str,
|
| 206 |
+
current_schedule: list[str],
|
| 207 |
+
current_capacity: int,
|
| 208 |
+
max_capacity: int
|
| 209 |
+
) -> tuple[bool, str]:
|
| 210 |
+
"""Validate adding a case to cause list.
|
| 211 |
+
|
| 212 |
+
Args:
|
| 213 |
+
case_id: Case to add
|
| 214 |
+
current_schedule: Currently scheduled case IDs
|
| 215 |
+
current_capacity: Current number of scheduled cases
|
| 216 |
+
max_capacity: Maximum capacity
|
| 217 |
+
|
| 218 |
+
Returns:
|
| 219 |
+
(valid, error_message)
|
| 220 |
+
"""
|
| 221 |
+
if case_id in current_schedule:
|
| 222 |
+
return False, f"Case {case_id} already in schedule"
|
| 223 |
+
|
| 224 |
+
if current_capacity >= max_capacity:
|
| 225 |
+
return False, f"Schedule at capacity ({current_capacity}/{max_capacity})"
|
| 226 |
+
|
| 227 |
+
return True, ""
|
| 228 |
+
|
| 229 |
+
@staticmethod
|
| 230 |
+
def validate_remove_case(
|
| 231 |
+
case_id: str,
|
| 232 |
+
current_schedule: list[str]
|
| 233 |
+
) -> tuple[bool, str]:
|
| 234 |
+
"""Validate removing a case from cause list.
|
| 235 |
+
|
| 236 |
+
Args:
|
| 237 |
+
case_id: Case to remove
|
| 238 |
+
current_schedule: Currently scheduled case IDs
|
| 239 |
+
|
| 240 |
+
Returns:
|
| 241 |
+
(valid, error_message)
|
| 242 |
+
"""
|
| 243 |
+
if case_id not in current_schedule:
|
| 244 |
+
return False, f"Case {case_id} not in schedule"
|
| 245 |
+
|
| 246 |
+
return True, ""
|
| 247 |
+
|
| 248 |
+
|
| 249 |
+
class OverrideManager:
|
| 250 |
+
"""Manages judge overrides and interventions."""
|
| 251 |
+
|
| 252 |
+
def __init__(self):
|
| 253 |
+
self.overrides: list[Override] = []
|
| 254 |
+
self.drafts: list[CauseListDraft] = []
|
| 255 |
+
self.preferences: dict[str, JudgePreferences] = {}
|
| 256 |
+
|
| 257 |
+
def create_draft(
|
| 258 |
+
self,
|
| 259 |
+
date: date,
|
| 260 |
+
courtroom_id: int,
|
| 261 |
+
judge_id: str,
|
| 262 |
+
algorithm_suggested: list[str]
|
| 263 |
+
) -> CauseListDraft:
|
| 264 |
+
"""Create a draft cause list for judge review.
|
| 265 |
+
|
| 266 |
+
Args:
|
| 267 |
+
date: Date of cause list
|
| 268 |
+
courtroom_id: Courtroom ID
|
| 269 |
+
judge_id: Judge ID
|
| 270 |
+
algorithm_suggested: Case IDs suggested by algorithm
|
| 271 |
+
|
| 272 |
+
Returns:
|
| 273 |
+
Draft cause list
|
| 274 |
+
"""
|
| 275 |
+
draft = CauseListDraft(
|
| 276 |
+
date=date,
|
| 277 |
+
courtroom_id=courtroom_id,
|
| 278 |
+
judge_id=judge_id,
|
| 279 |
+
algorithm_suggested=algorithm_suggested.copy(),
|
| 280 |
+
judge_approved=[],
|
| 281 |
+
overrides=[],
|
| 282 |
+
created_at=datetime.now(),
|
| 283 |
+
status="DRAFT"
|
| 284 |
+
)
|
| 285 |
+
|
| 286 |
+
self.drafts.append(draft)
|
| 287 |
+
return draft
|
| 288 |
+
|
| 289 |
+
def apply_override(
|
| 290 |
+
self,
|
| 291 |
+
draft: CauseListDraft,
|
| 292 |
+
override: Override
|
| 293 |
+
) -> tuple[bool, str]:
|
| 294 |
+
"""Apply an override to a draft cause list.
|
| 295 |
+
|
| 296 |
+
Args:
|
| 297 |
+
draft: Draft to modify
|
| 298 |
+
override: Override to apply
|
| 299 |
+
|
| 300 |
+
Returns:
|
| 301 |
+
(success, error_message)
|
| 302 |
+
"""
|
| 303 |
+
# Validate based on type
|
| 304 |
+
if override.override_type == OverrideType.RIPENESS:
|
| 305 |
+
valid, error = OverrideValidator.validate_ripeness_override(
|
| 306 |
+
override.case_id,
|
| 307 |
+
override.old_value or "",
|
| 308 |
+
override.new_value or "",
|
| 309 |
+
override.reason
|
| 310 |
+
)
|
| 311 |
+
if not valid:
|
| 312 |
+
return False, error
|
| 313 |
+
|
| 314 |
+
elif override.override_type == OverrideType.ADD_CASE:
|
| 315 |
+
valid, error = OverrideValidator.validate_add_case(
|
| 316 |
+
override.case_id,
|
| 317 |
+
draft.judge_approved,
|
| 318 |
+
len(draft.judge_approved),
|
| 319 |
+
200 # Max capacity
|
| 320 |
+
)
|
| 321 |
+
if not valid:
|
| 322 |
+
return False, error
|
| 323 |
+
|
| 324 |
+
draft.judge_approved.append(override.case_id)
|
| 325 |
+
|
| 326 |
+
elif override.override_type == OverrideType.REMOVE_CASE:
|
| 327 |
+
valid, error = OverrideValidator.validate_remove_case(
|
| 328 |
+
override.case_id,
|
| 329 |
+
draft.judge_approved
|
| 330 |
+
)
|
| 331 |
+
if not valid:
|
| 332 |
+
return False, error
|
| 333 |
+
|
| 334 |
+
draft.judge_approved.remove(override.case_id)
|
| 335 |
+
|
| 336 |
+
# Record override
|
| 337 |
+
draft.overrides.append(override)
|
| 338 |
+
self.overrides.append(override)
|
| 339 |
+
|
| 340 |
+
return True, ""
|
| 341 |
+
|
| 342 |
+
def finalize_draft(self, draft: CauseListDraft) -> bool:
|
| 343 |
+
"""Finalize draft cause list (judge approval).
|
| 344 |
+
|
| 345 |
+
Args:
|
| 346 |
+
draft: Draft to finalize
|
| 347 |
+
|
| 348 |
+
Returns:
|
| 349 |
+
Success status
|
| 350 |
+
"""
|
| 351 |
+
if draft.status != "DRAFT":
|
| 352 |
+
return False
|
| 353 |
+
|
| 354 |
+
draft.status = "APPROVED"
|
| 355 |
+
draft.finalized_at = datetime.now()
|
| 356 |
+
|
| 357 |
+
return True
|
| 358 |
+
|
| 359 |
+
def get_judge_preferences(self, judge_id: str) -> JudgePreferences:
|
| 360 |
+
"""Get or create judge preferences.
|
| 361 |
+
|
| 362 |
+
Args:
|
| 363 |
+
judge_id: Judge ID
|
| 364 |
+
|
| 365 |
+
Returns:
|
| 366 |
+
Judge preferences
|
| 367 |
+
"""
|
| 368 |
+
if judge_id not in self.preferences:
|
| 369 |
+
self.preferences[judge_id] = JudgePreferences(judge_id=judge_id)
|
| 370 |
+
|
| 371 |
+
return self.preferences[judge_id]
|
| 372 |
+
|
| 373 |
+
def get_override_statistics(self, judge_id: Optional[str] = None) -> dict:
|
| 374 |
+
"""Get override statistics.
|
| 375 |
+
|
| 376 |
+
Args:
|
| 377 |
+
judge_id: Optional filter by judge
|
| 378 |
+
|
| 379 |
+
Returns:
|
| 380 |
+
Statistics dictionary
|
| 381 |
+
"""
|
| 382 |
+
relevant_overrides = self.overrides
|
| 383 |
+
if judge_id:
|
| 384 |
+
relevant_overrides = [o for o in self.overrides if o.judge_id == judge_id]
|
| 385 |
+
|
| 386 |
+
if not relevant_overrides:
|
| 387 |
+
return {
|
| 388 |
+
"total_overrides": 0,
|
| 389 |
+
"by_type": {},
|
| 390 |
+
"avg_per_day": 0
|
| 391 |
+
}
|
| 392 |
+
|
| 393 |
+
override_counts = {}
|
| 394 |
+
for override in relevant_overrides:
|
| 395 |
+
override_type = override.override_type.value
|
| 396 |
+
override_counts[override_type] = override_counts.get(override_type, 0) + 1
|
| 397 |
+
|
| 398 |
+
# Calculate acceptance rate from drafts
|
| 399 |
+
relevant_drafts = self.drafts
|
| 400 |
+
if judge_id:
|
| 401 |
+
relevant_drafts = [d for d in self.drafts if d.judge_id == judge_id]
|
| 402 |
+
|
| 403 |
+
acceptance_rates = [d.get_acceptance_rate() for d in relevant_drafts if d.status == "APPROVED"]
|
| 404 |
+
avg_acceptance = sum(acceptance_rates) / len(acceptance_rates) if acceptance_rates else 0
|
| 405 |
+
|
| 406 |
+
return {
|
| 407 |
+
"total_overrides": len(relevant_overrides),
|
| 408 |
+
"by_type": override_counts,
|
| 409 |
+
"total_drafts": len(relevant_drafts),
|
| 410 |
+
"approved_drafts": len([d for d in relevant_drafts if d.status == "APPROVED"]),
|
| 411 |
+
"avg_acceptance_rate": avg_acceptance,
|
| 412 |
+
"modification_rate": 100 - avg_acceptance if avg_acceptance else 0
|
| 413 |
+
}
|
| 414 |
+
|
| 415 |
+
def export_audit_trail(self, output_file: str):
|
| 416 |
+
"""Export complete audit trail to file.
|
| 417 |
+
|
| 418 |
+
Args:
|
| 419 |
+
output_file: Path to output file
|
| 420 |
+
"""
|
| 421 |
+
audit_data = {
|
| 422 |
+
"overrides": [o.to_dict() for o in self.overrides],
|
| 423 |
+
"drafts": [
|
| 424 |
+
{
|
| 425 |
+
"date": d.date.isoformat(),
|
| 426 |
+
"courtroom_id": d.courtroom_id,
|
| 427 |
+
"judge_id": d.judge_id,
|
| 428 |
+
"status": d.status,
|
| 429 |
+
"acceptance_rate": d.get_acceptance_rate(),
|
| 430 |
+
"modifications": d.get_modifications_summary()
|
| 431 |
+
}
|
| 432 |
+
for d in self.drafts
|
| 433 |
+
],
|
| 434 |
+
"statistics": self.get_override_statistics()
|
| 435 |
+
}
|
| 436 |
+
|
| 437 |
+
with open(output_file, 'w') as f:
|
| 438 |
+
json.dump(audit_data, f, indent=2)
|
File without changes

@@ -0,0 +1,331 @@
"""Case entity and lifecycle management.

This module defines the Case class which represents a single court case
progressing through various stages.
"""

from __future__ import annotations

import math
from dataclasses import dataclass, field
from datetime import date, datetime
from enum import Enum
from typing import List, Optional, TYPE_CHECKING

from scheduler.data.config import TERMINAL_STAGES

if TYPE_CHECKING:
    from scheduler.core.ripeness import RipenessStatus
else:
    # Not imported at runtime (avoids a circular import); ripeness is stored as a string
    RipenessStatus = None


class CaseStatus(Enum):
    """Status of a case in the system."""
    PENDING = "pending"      # Filed, awaiting first hearing
    ACTIVE = "active"        # Has had at least one hearing
    ADJOURNED = "adjourned"  # Last hearing was adjourned
    DISPOSED = "disposed"    # Final disposal/settlement reached


@dataclass
class Case:
    """Represents a single court case.

    Attributes:
        case_id: Unique identifier (like CNR number)
        case_type: Type of case (RSA, CRP, RFA, CA, CCC, CP, CMP)
        filed_date: Date when case was filed
        current_stage: Current stage in lifecycle
        status: Current status (PENDING, ACTIVE, ADJOURNED, DISPOSED)
        courtroom_id: Assigned courtroom (0-4 for 5 courtrooms)
        is_urgent: Whether case is marked urgent
        readiness_score: Computed readiness score (0-1)
        hearing_count: Number of hearings held
        last_hearing_date: Date of most recent hearing
        days_since_last_hearing: Days elapsed since last hearing
        age_days: Days since filing
        disposal_date: Date of disposal (if disposed)
        history: List of hearing dates and outcomes
    """
    case_id: str
    case_type: str
    filed_date: date
    current_stage: str = "ADMISSION"  # Default initial stage
    status: CaseStatus = CaseStatus.PENDING
    courtroom_id: int | None = None  # None = not yet assigned (valid IDs are 0-4)
    is_urgent: bool = False
    readiness_score: float = 0.0
    hearing_count: int = 0
    last_hearing_date: Optional[date] = None
    days_since_last_hearing: int = 0
    age_days: int = 0
    disposal_date: Optional[date] = None
    stage_start_date: Optional[date] = None
    days_in_stage: int = 0
    history: List[dict] = field(default_factory=list)

    # Ripeness tracking (NEW - for bottleneck detection)
    ripeness_status: str = "UNKNOWN"  # RipenessStatus enum value (stored as string to avoid circular import)
    bottleneck_reason: Optional[str] = None
    ripeness_updated_at: Optional[datetime] = None
    last_hearing_purpose: Optional[str] = None  # Purpose of last hearing (for classification)

    # No-case-left-behind tracking (NEW)
    last_scheduled_date: Optional[date] = None
    days_since_last_scheduled: int = 0

    def progress_to_stage(self, new_stage: str, current_date: date) -> None:
        """Progress case to a new stage.

        Args:
            new_stage: The stage to progress to
            current_date: Current simulation date
        """
        self.current_stage = new_stage
        self.stage_start_date = current_date
        self.days_in_stage = 0

        # Check if terminal stage (case disposed)
        if new_stage in TERMINAL_STAGES:
            self.status = CaseStatus.DISPOSED
            self.disposal_date = current_date

        # Record in history
        self.history.append({
            "date": current_date,
            "event": "stage_change",
            "stage": new_stage,
        })

    def record_hearing(self, hearing_date: date, was_heard: bool, outcome: str = "") -> None:
        """Record a hearing event.

        Args:
            hearing_date: Date of the hearing
            was_heard: Whether the hearing actually proceeded (not adjourned)
            outcome: Outcome description
        """
        self.hearing_count += 1
        self.last_hearing_date = hearing_date

        if was_heard:
            self.status = CaseStatus.ACTIVE
        else:
            self.status = CaseStatus.ADJOURNED

        # Record in history
        self.history.append({
            "date": hearing_date,
            "event": "hearing",
            "was_heard": was_heard,
            "outcome": outcome,
            "stage": self.current_stage,
        })

    def update_age(self, current_date: date) -> None:
        """Update age and days since last hearing.

        Args:
            current_date: Current simulation date
        """
        self.age_days = (current_date - self.filed_date).days

        if self.last_hearing_date:
            self.days_since_last_hearing = (current_date - self.last_hearing_date).days
        else:
            self.days_since_last_hearing = self.age_days

        if self.stage_start_date:
            self.days_in_stage = (current_date - self.stage_start_date).days
        else:
            self.days_in_stage = self.age_days

        # Update days since last scheduled (for no-case-left-behind tracking)
        if self.last_scheduled_date:
            self.days_since_last_scheduled = (current_date - self.last_scheduled_date).days
        else:
            self.days_since_last_scheduled = self.age_days

    def compute_readiness_score(self) -> float:
        """Compute readiness score based on hearings, gaps, and stage.

        Formula (from EDA):
            READINESS = (hearings_capped/50) * 0.4 +
                        (100/gap_clamped) * 0.3 +
                        (stage_advanced) * 0.3

        Returns:
            Readiness score (0-1, higher = more ready)
        """
        # Cap hearings at 50
        hearings_capped = min(self.hearing_count, 50)
        hearings_component = (hearings_capped / 50) * 0.4

        # Gap component (inverse of days since last hearing)
        gap_clamped = min(max(self.days_since_last_hearing, 1), 100)
        gap_component = (100 / gap_clamped) * 0.3

        # Stage component (advanced stages get higher score)
        advanced_stages = ["ARGUMENTS", "EVIDENCE", "ORDERS / JUDGMENT"]
        stage_component = 0.3 if self.current_stage in advanced_stages else 0.1

        readiness = hearings_component + gap_component + stage_component
        self.readiness_score = min(1.0, max(0.0, readiness))

        return self.readiness_score

    def is_ready_for_scheduling(self, min_gap_days: int = 7) -> bool:
        """Check if case is ready to be scheduled.

        Args:
            min_gap_days: Minimum days required since last hearing

        Returns:
            True if case can be scheduled
        """
        if self.status == CaseStatus.DISPOSED:
            return False

        if self.last_hearing_date is None:
            return True  # First hearing, always ready

        return self.days_since_last_hearing >= min_gap_days

    def needs_alert(self, max_gap_days: int = 90) -> bool:
        """Check if case needs alert due to long gap.

        Args:
            max_gap_days: Maximum allowed gap before alert

        Returns:
            True if alert should be triggered
        """
        if self.status == CaseStatus.DISPOSED:
            return False

        return self.days_since_last_hearing > max_gap_days

    def get_priority_score(self) -> float:
        """Get overall priority score for scheduling.

        Combines age, readiness, urgency, and adjournment boost into single score.

        Formula:
            priority = age*0.35 + readiness*0.25 + urgency*0.25 + adjournment_boost*0.15

        Adjournment boost: Recently adjourned cases get priority to avoid indefinite postponement.
        The boost decays exponentially: strongest immediately after adjournment, weaker over time.

        Returns:
            Priority score (higher = higher priority)
        """
        # Age component (normalize to 0-1, assuming max age ~2000 days)
        age_component = min(self.age_days / 2000, 1.0) * 0.35

        # Readiness component
        readiness_component = self.readiness_score * 0.25

        # Urgency component
        urgency_component = 1.0 if self.is_urgent else 0.0
        urgency_component *= 0.25

        # Adjournment boost (NEW - prevents cases from being repeatedly postponed)
        adjournment_boost = 0.0
        if self.status == CaseStatus.ADJOURNED and self.hearing_count > 0:
            # Boost starts at 1.0 immediately after adjournment, decays exponentially
            # Formula: boost = exp(-days_since_hearing / 21)
            # At 7 days: ~0.71 (strong boost)
            # At 14 days: ~0.50 (moderate boost)
            # At 21 days: ~0.37 (weak boost)
            # At 28 days: ~0.26 (very weak boost)
            decay_factor = 21  # Decay time constant in days (boost halves in ~14.6 days)
            adjournment_boost = math.exp(-self.days_since_last_hearing / decay_factor)
            adjournment_boost *= 0.15

        return age_component + readiness_component + urgency_component + adjournment_boost

    def mark_unripe(self, status, reason: str, current_date: datetime) -> None:
        """Mark case as unripe with bottleneck reason.

        Args:
            status: Ripeness status (UNRIPE_SUMMONS, UNRIPE_PARTY, etc.) - RipenessStatus enum
            reason: Human-readable reason for unripeness
            current_date: Current simulation date
        """
        # Store as string to avoid circular import
        self.ripeness_status = status.value if hasattr(status, 'value') else str(status)
        self.bottleneck_reason = reason
        self.ripeness_updated_at = current_date

        # Record in history
        self.history.append({
            "date": current_date,
            "event": "ripeness_change",
            "status": self.ripeness_status,
            "reason": reason,
        })

    def mark_ripe(self, current_date: datetime) -> None:
        """Mark case as ripe (ready for hearing).

        Args:
            current_date: Current simulation date
        """
        self.ripeness_status = "RIPE"
        self.bottleneck_reason = None
        self.ripeness_updated_at = current_date

        # Record in history
        self.history.append({
            "date": current_date,
            "event": "ripeness_change",
            "status": "RIPE",
            "reason": "Case became ripe",
        })

    def mark_scheduled(self, scheduled_date: date) -> None:
        """Mark case as scheduled for a hearing.

        Used for no-case-left-behind tracking.

        Args:
            scheduled_date: Date case was scheduled
        """
        self.last_scheduled_date = scheduled_date
        self.days_since_last_scheduled = 0

    @property
    def is_disposed(self) -> bool:
        """Check if case is disposed."""
        return self.status == CaseStatus.DISPOSED

    def __repr__(self) -> str:
        return (f"Case(id={self.case_id}, type={self.case_type}, "
                f"stage={self.current_stage}, status={self.status.value}, "
                f"hearings={self.hearing_count})")

    def to_dict(self) -> dict:
        """Convert case to dictionary for serialization."""
        return {
            "case_id": self.case_id,
            "case_type": self.case_type,
            "filed_date": self.filed_date.isoformat(),
            "current_stage": self.current_stage,
            "status": self.status.value,
            "courtroom_id": self.courtroom_id,
            "is_urgent": self.is_urgent,
            "readiness_score": self.readiness_score,
            "hearing_count": self.hearing_count,
            "last_hearing_date": self.last_hearing_date.isoformat() if self.last_hearing_date else None,
            "days_since_last_hearing": self.days_since_last_hearing,
            "age_days": self.age_days,
            "disposal_date": self.disposal_date.isoformat() if self.disposal_date else None,
            "ripeness_status": self.ripeness_status,
            "bottleneck_reason": self.bottleneck_reason,
            "last_hearing_purpose": self.last_hearing_purpose,
            "last_scheduled_date": self.last_scheduled_date.isoformat() if self.last_scheduled_date else None,
            "days_since_last_scheduled": self.days_since_last_scheduled,
            "history": self.history,
        }
@@ -0,0 +1,228 @@
| 1 |
+
"""Courtroom resource management.
|
| 2 |
+
|
| 3 |
+
This module defines the Courtroom class which represents a physical courtroom
|
| 4 |
+
with capacity constraints and daily scheduling.
|
| 5 |
+
"""
|
| 6 |
+
|
| 7 |
+
from dataclasses import dataclass, field
|
| 8 |
+
from datetime import date
|
| 9 |
+
from typing import Dict, List, Optional, Set
|
| 10 |
+
|
| 11 |
+
from scheduler.data.config import DEFAULT_DAILY_CAPACITY
|
| 12 |
+
|
| 13 |
+
|
| 14 |
+
@dataclass
|
| 15 |
+
class Courtroom:
|
| 16 |
+
"""Represents a courtroom resource.
|
| 17 |
+
|
| 18 |
+
Attributes:
|
| 19 |
+
courtroom_id: Unique identifier (0-4 for 5 courtrooms)
        judge_id: Currently assigned judge (optional)
        daily_capacity: Maximum cases that can be heard per day
        case_types: Types of cases handled by this courtroom
        schedule: Dict mapping dates to lists of case_ids scheduled
        hearings_held: Count of hearings held
        utilization_history: Track daily utilization rates
    """
    courtroom_id: int
    judge_id: Optional[str] = None
    daily_capacity: int = DEFAULT_DAILY_CAPACITY
    case_types: Set[str] = field(default_factory=set)
    schedule: Dict[date, List[str]] = field(default_factory=dict)
    hearings_held: int = 0
    utilization_history: List[Dict] = field(default_factory=list)

    def assign_judge(self, judge_id: str) -> None:
        """Assign a judge to this courtroom.

        Args:
            judge_id: Judge identifier
        """
        self.judge_id = judge_id

    def add_case_types(self, *case_types: str) -> None:
        """Add case types that this courtroom handles.

        Args:
            *case_types: One or more case type strings (e.g., 'RSA', 'CRP')
        """
        self.case_types.update(case_types)

    def can_schedule(self, hearing_date: date, case_id: str) -> bool:
        """Check if a case can be scheduled on a given date.

        Args:
            hearing_date: Date to check
            case_id: Case identifier

        Returns:
            True if slot available, False if at capacity
        """
        if hearing_date not in self.schedule:
            return True  # No hearings scheduled yet

        # Check if already scheduled
        if case_id in self.schedule[hearing_date]:
            return False  # Already scheduled

        # Check capacity
        return len(self.schedule[hearing_date]) < self.daily_capacity

    def schedule_case(self, hearing_date: date, case_id: str) -> bool:
        """Schedule a case for a hearing.

        Args:
            hearing_date: Date of hearing
            case_id: Case identifier

        Returns:
            True if successfully scheduled, False if at capacity
        """
        if not self.can_schedule(hearing_date, case_id):
            return False

        if hearing_date not in self.schedule:
            self.schedule[hearing_date] = []

        self.schedule[hearing_date].append(case_id)
        return True

    def unschedule_case(self, hearing_date: date, case_id: str) -> bool:
        """Remove a case from schedule (e.g., if adjourned).

        Args:
            hearing_date: Date of hearing
            case_id: Case identifier

        Returns:
            True if successfully removed, False if not found
        """
        if hearing_date not in self.schedule:
            return False

        if case_id in self.schedule[hearing_date]:
            self.schedule[hearing_date].remove(case_id)
            return True

        return False

    def get_daily_schedule(self, hearing_date: date) -> List[str]:
        """Get list of cases scheduled for a specific date.

        Args:
            hearing_date: Date to query

        Returns:
            List of case_ids scheduled (empty if none)
        """
        return self.schedule.get(hearing_date, [])

    def get_capacity_for_date(self, hearing_date: date) -> int:
        """Get remaining capacity for a specific date.

        Args:
            hearing_date: Date to query

        Returns:
            Number of available slots
        """
        scheduled_count = len(self.get_daily_schedule(hearing_date))
        return self.daily_capacity - scheduled_count

    def record_hearing_completed(self, hearing_date: date) -> None:
        """Record that a hearing was held.

        Args:
            hearing_date: Date of hearing
        """
        self.hearings_held += 1

    def compute_utilization(self, hearing_date: date) -> float:
        """Compute utilization rate for a specific date.

        Args:
            hearing_date: Date to compute for

        Returns:
            Utilization rate (0.0 to 1.0)
        """
        scheduled_count = len(self.get_daily_schedule(hearing_date))
        return scheduled_count / self.daily_capacity if self.daily_capacity > 0 else 0.0

    def record_daily_utilization(self, hearing_date: date, actual_hearings: int) -> None:
        """Record actual utilization for a day.

        Args:
            hearing_date: Date of hearings
            actual_hearings: Number of hearings actually held (not adjourned)
        """
        scheduled = len(self.get_daily_schedule(hearing_date))
        utilization = actual_hearings / self.daily_capacity if self.daily_capacity > 0 else 0.0

        self.utilization_history.append({
            "date": hearing_date,
            "scheduled": scheduled,
            "actual": actual_hearings,
            "capacity": self.daily_capacity,
            "utilization": utilization,
        })

    def get_average_utilization(self) -> float:
        """Calculate average utilization rate across all recorded days.

        Returns:
            Average utilization (0.0 to 1.0)
        """
        if not self.utilization_history:
            return 0.0

        total = sum(day["utilization"] for day in self.utilization_history)
        return total / len(self.utilization_history)

    def get_schedule_summary(self, start_date: date, end_date: date) -> Dict:
        """Get summary statistics for a date range.

        Args:
            start_date: Start of range
            end_date: End of range

        Returns:
            Dict with counts and utilization stats
        """
        days_in_range = [d for d in self.schedule.keys()
                         if start_date <= d <= end_date]

        total_scheduled = sum(len(self.schedule[d]) for d in days_in_range)
        days_with_hearings = len(days_in_range)

        return {
            "courtroom_id": self.courtroom_id,
            "days_with_hearings": days_with_hearings,
            "total_cases_scheduled": total_scheduled,
            "avg_cases_per_day": total_scheduled / days_with_hearings if days_with_hearings > 0 else 0,
            "total_capacity": days_with_hearings * self.daily_capacity,
            "utilization_rate": total_scheduled / (days_with_hearings * self.daily_capacity)
                if days_with_hearings > 0 else 0,
        }

    def clear_schedule(self) -> None:
        """Clear all scheduled hearings (for testing/reset)."""
        self.schedule.clear()
        self.utilization_history.clear()
        self.hearings_held = 0

    def __repr__(self) -> str:
        return (f"Courtroom(id={self.courtroom_id}, judge={self.judge_id}, "
                f"capacity={self.daily_capacity}, types={self.case_types})")

    def to_dict(self) -> dict:
        """Convert courtroom to dictionary for serialization."""
        return {
            "courtroom_id": self.courtroom_id,
            "judge_id": self.judge_id,
            "daily_capacity": self.daily_capacity,
            "case_types": list(self.case_types),
            "schedule_size": len(self.schedule),
            "hearings_held": self.hearings_held,
            "avg_utilization": self.get_average_utilization(),
        }
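The capacity-check pattern above (`can_schedule` guarding `schedule_case`, with a duplicate check before the capacity check) can be exercised with a trimmed stand-in. `MiniCourtroom` below is an illustrative sketch, not the real `Courtroom` class; it drops the judge, case-type, and utilization fields to isolate the scheduling logic:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List


@dataclass
class MiniCourtroom:
    """Trimmed stand-in mirroring Courtroom's capacity-checked scheduling."""
    daily_capacity: int = 2
    schedule: Dict[date, List[str]] = field(default_factory=dict)

    def can_schedule(self, d: date, case_id: str) -> bool:
        if d not in self.schedule:
            return True  # No hearings on this date yet
        if case_id in self.schedule[d]:
            return False  # Duplicate booking rejected before capacity check
        return len(self.schedule[d]) < self.daily_capacity

    def schedule_case(self, d: date, case_id: str) -> bool:
        if not self.can_schedule(d, case_id):
            return False
        self.schedule.setdefault(d, []).append(case_id)
        return True


room = MiniCourtroom(daily_capacity=2)
d = date(2024, 1, 15)
print(room.schedule_case(d, "C-1"))  # True
print(room.schedule_case(d, "C-1"))  # False (already scheduled)
print(room.schedule_case(d, "C-2"))  # True
print(room.schedule_case(d, "C-3"))  # False (at capacity)
```

Note the ordering: the duplicate check fires before the capacity check, so re-booking the same case never consumes a slot.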

scheduler/core/hearing.py
@@ -0,0 +1,134 @@
"""Hearing event entity and outcome tracking.

This module defines the Hearing class which represents a scheduled court hearing
with its outcome and associated metadata.
"""

from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class HearingOutcome(Enum):
    """Possible outcomes of a hearing."""
    SCHEDULED = "SCHEDULED"    # Future hearing
    HEARD = "HEARD"            # Completed successfully
    ADJOURNED = "ADJOURNED"    # Postponed
    DISPOSED = "DISPOSED"      # Case concluded
    NO_SHOW = "NO_SHOW"        # Party absent
    WITHDRAWN = "WITHDRAWN"    # Case withdrawn


@dataclass
class Hearing:
    """Represents a scheduled court hearing event.

    Attributes:
        hearing_id: Unique identifier
        case_id: Associated case
        scheduled_date: Date of hearing
        courtroom_id: Assigned courtroom
        judge_id: Presiding judge
        stage: Case stage at time of hearing
        outcome: Result of hearing
        actual_date: Actual date if rescheduled
        duration_minutes: Estimated duration
        notes: Optional notes
    """
    hearing_id: str
    case_id: str
    scheduled_date: date
    courtroom_id: int
    judge_id: str
    stage: str
    outcome: HearingOutcome = HearingOutcome.SCHEDULED
    actual_date: Optional[date] = None
    duration_minutes: int = 30
    notes: Optional[str] = None

    def mark_as_heard(self, actual_date: Optional[date] = None) -> None:
        """Mark hearing as successfully completed.

        Args:
            actual_date: Actual date if different from scheduled
        """
        self.outcome = HearingOutcome.HEARD
        self.actual_date = actual_date or self.scheduled_date

    def mark_as_adjourned(self, reason: str = "") -> None:
        """Mark hearing as adjourned.

        Args:
            reason: Reason for adjournment
        """
        self.outcome = HearingOutcome.ADJOURNED
        if reason:
            self.notes = reason

    def mark_as_disposed(self) -> None:
        """Mark hearing as final disposition."""
        self.outcome = HearingOutcome.DISPOSED
        self.actual_date = self.scheduled_date

    def mark_as_no_show(self, party: str = "") -> None:
        """Mark hearing as no-show.

        Args:
            party: Which party was absent
        """
        self.outcome = HearingOutcome.NO_SHOW
        if party:
            self.notes = f"No show: {party}"

    def reschedule(self, new_date: date) -> None:
        """Reschedule hearing to a new date.

        Args:
            new_date: New scheduled date
        """
        self.scheduled_date = new_date
        self.outcome = HearingOutcome.SCHEDULED

    def is_complete(self) -> bool:
        """Check if hearing has concluded.

        Returns:
            True if outcome is not SCHEDULED
        """
        return self.outcome != HearingOutcome.SCHEDULED

    def is_successful(self) -> bool:
        """Check if hearing was successfully held.

        Returns:
            True if outcome is HEARD or DISPOSED
        """
        return self.outcome in (HearingOutcome.HEARD, HearingOutcome.DISPOSED)

    def get_effective_date(self) -> date:
        """Get actual or scheduled date.

        Returns:
            actual_date if set, else scheduled_date
        """
        return self.actual_date or self.scheduled_date

    def __repr__(self) -> str:
        return (f"Hearing(id={self.hearing_id}, case={self.case_id}, "
                f"date={self.scheduled_date}, outcome={self.outcome.value})")

    def to_dict(self) -> dict:
        """Convert hearing to dictionary for serialization."""
        return {
            "hearing_id": self.hearing_id,
            "case_id": self.case_id,
            "scheduled_date": self.scheduled_date.isoformat(),
            "actual_date": self.actual_date.isoformat() if self.actual_date else None,
            "courtroom_id": self.courtroom_id,
            "judge_id": self.judge_id,
            "stage": self.stage,
            "outcome": self.outcome.value,
            "duration_minutes": self.duration_minutes,
            "notes": self.notes,
        }
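The hearing lifecycle above is a small state machine: `reschedule` resets the outcome to `SCHEDULED`, while the `mark_as_*` methods move it to a terminal state and fill in `actual_date`. A trimmed, self-contained sketch of just those transitions (`MiniHearing` is illustrative, not the real `Hearing` class):

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional


class Outcome(Enum):
    SCHEDULED = "SCHEDULED"
    HEARD = "HEARD"
    ADJOURNED = "ADJOURNED"


@dataclass
class MiniHearing:
    """Trimmed stand-in for the Hearing outcome lifecycle."""
    scheduled_date: date
    outcome: Outcome = Outcome.SCHEDULED
    actual_date: Optional[date] = None

    def mark_as_heard(self, actual: Optional[date] = None) -> None:
        self.outcome = Outcome.HEARD
        self.actual_date = actual or self.scheduled_date

    def reschedule(self, new_date: date) -> None:
        # Rescheduling resets the outcome back to SCHEDULED
        self.scheduled_date = new_date
        self.outcome = Outcome.SCHEDULED

    def get_effective_date(self) -> date:
        return self.actual_date or self.scheduled_date


h = MiniHearing(scheduled_date=date(2024, 3, 1))
h.reschedule(date(2024, 3, 15))
h.mark_as_heard()
print(h.outcome.value, h.get_effective_date())  # HEARD 2024-03-15
```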

scheduler/core/judge.py
@@ -0,0 +1,167 @@
"""Judge entity and workload management.

This module defines the Judge class which represents a judicial officer
presiding over hearings in a courtroom.
"""

from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List, Optional, Set


@dataclass
class Judge:
    """Represents a judge with workload tracking.

    Attributes:
        judge_id: Unique identifier
        name: Judge's name
        courtroom_id: Assigned courtroom (optional)
        preferred_case_types: Case types this judge specializes in
        cases_heard: Count of cases heard
        hearings_presided: Count of hearings presided
        workload_history: Daily workload tracking
    """
    judge_id: str
    name: str
    courtroom_id: Optional[int] = None
    preferred_case_types: Set[str] = field(default_factory=set)
    cases_heard: int = 0
    hearings_presided: int = 0
    workload_history: List[Dict] = field(default_factory=list)

    def assign_courtroom(self, courtroom_id: int) -> None:
        """Assign judge to a courtroom.

        Args:
            courtroom_id: Courtroom identifier
        """
        self.courtroom_id = courtroom_id

    def add_preferred_types(self, *case_types: str) -> None:
        """Add case types to judge's preferences.

        Args:
            *case_types: One or more case type strings
        """
        self.preferred_case_types.update(case_types)

    def record_hearing(self, hearing_date: date, case_id: str, case_type: str) -> None:
        """Record a hearing presided over.

        Args:
            hearing_date: Date of hearing
            case_id: Case identifier
            case_type: Type of case
        """
        self.hearings_presided += 1

    def record_daily_workload(self, hearing_date: date, cases_heard: int,
                              cases_adjourned: int) -> None:
        """Record workload for a specific day.

        Args:
            hearing_date: Date of hearings
            cases_heard: Number of cases actually heard
            cases_adjourned: Number of cases adjourned
        """
        self.workload_history.append({
            "date": hearing_date,
            "cases_heard": cases_heard,
            "cases_adjourned": cases_adjourned,
            "total_scheduled": cases_heard + cases_adjourned,
        })

        self.cases_heard += cases_heard

    def get_average_daily_workload(self) -> float:
        """Calculate average cases heard per day.

        Returns:
            Average number of cases per day
        """
        if not self.workload_history:
            return 0.0

        total = sum(day["cases_heard"] for day in self.workload_history)
        return total / len(self.workload_history)

    def get_adjournment_rate(self) -> float:
        """Calculate judge's adjournment rate.

        Returns:
            Proportion of cases adjourned (0.0 to 1.0)
        """
        if not self.workload_history:
            return 0.0

        total_adjourned = sum(day["cases_adjourned"] for day in self.workload_history)
        total_scheduled = sum(day["total_scheduled"] for day in self.workload_history)

        return total_adjourned / total_scheduled if total_scheduled > 0 else 0.0

    def get_workload_summary(self, start_date: date, end_date: date) -> Dict:
        """Get workload summary for a date range.

        Args:
            start_date: Start of range
            end_date: End of range

        Returns:
            Dict with workload statistics
        """
        days_in_range = [day for day in self.workload_history
                         if start_date <= day["date"] <= end_date]

        if not days_in_range:
            return {
                "judge_id": self.judge_id,
                "days_worked": 0,
                "total_cases_heard": 0,
                "avg_cases_per_day": 0.0,
                "adjournment_rate": 0.0,
            }

        total_heard = sum(day["cases_heard"] for day in days_in_range)
        total_adjourned = sum(day["cases_adjourned"] for day in days_in_range)
        total_scheduled = total_heard + total_adjourned

        return {
            "judge_id": self.judge_id,
            "days_worked": len(days_in_range),
            "total_cases_heard": total_heard,
            "total_cases_adjourned": total_adjourned,
            "avg_cases_per_day": total_heard / len(days_in_range),
            "adjournment_rate": total_adjourned / total_scheduled if total_scheduled > 0 else 0.0,
        }

    def is_specialized_in(self, case_type: str) -> bool:
        """Check if judge specializes in a case type.

        Args:
            case_type: Case type to check

        Returns:
            True if in preferred types or no preferences set
        """
        if not self.preferred_case_types:
            return True  # No preferences means handles all types

        return case_type in self.preferred_case_types

    def __repr__(self) -> str:
        return (f"Judge(id={self.judge_id}, courtroom={self.courtroom_id}, "
                f"hearings={self.hearings_presided})")

    def to_dict(self) -> dict:
        """Convert judge to dictionary for serialization."""
        return {
            "judge_id": self.judge_id,
            "name": self.name,
            "courtroom_id": self.courtroom_id,
            "preferred_case_types": list(self.preferred_case_types),
            "cases_heard": self.cases_heard,
            "hearings_presided": self.hearings_presided,
            "avg_daily_workload": self.get_average_daily_workload(),
            "adjournment_rate": self.get_adjournment_rate(),
        }
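The adjournment-rate arithmetic used by `get_adjournment_rate` and `get_workload_summary` reduces to two sums over the workload history, with a zero-guard for empty or all-zero data. A standalone sketch of just that computation (the free function here is an illustration, not part of the Judge API):

```python
def adjournment_rate(workload_history: list) -> float:
    """Mirrors Judge.get_adjournment_rate(): adjourned / scheduled, guarded for empty data."""
    if not workload_history:
        return 0.0
    total_adjourned = sum(day["cases_adjourned"] for day in workload_history)
    total_scheduled = sum(day["total_scheduled"] for day in workload_history)
    # Guard against division by zero when no cases were scheduled at all
    return total_adjourned / total_scheduled if total_scheduled > 0 else 0.0


history = [
    {"cases_heard": 6, "cases_adjourned": 2, "total_scheduled": 8},
    {"cases_heard": 3, "cases_adjourned": 1, "total_scheduled": 4},
]
print(adjournment_rate(history))  # 0.25  (3 adjourned out of 12 scheduled)
```

Because `record_daily_workload` stores `total_scheduled = cases_heard + cases_adjourned` at append time, the rate is a ratio of scheduled listings, not of distinct cases.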

scheduler/core/ripeness.py
@@ -0,0 +1,216 @@
"""Case ripeness classification for intelligent scheduling.

Ripe cases are ready for substantive judicial time.
Unripe cases have bottlenecks (summons, dependencies, parties, documents).

Based on analysis of historical PurposeOfHearing patterns (see scripts/analyze_ripeness_patterns.py).
"""
from __future__ import annotations

from enum import Enum
from typing import TYPE_CHECKING
from datetime import datetime, timedelta

if TYPE_CHECKING:
    from scheduler.core.case import Case


class RipenessStatus(Enum):
    """Status indicating whether a case is ready for hearing."""

    RIPE = "RIPE"                          # Ready for hearing
    UNRIPE_SUMMONS = "UNRIPE_SUMMONS"      # Waiting for summons service
    UNRIPE_DEPENDENT = "UNRIPE_DEPENDENT"  # Waiting for dependent case/order
    UNRIPE_PARTY = "UNRIPE_PARTY"          # Party/lawyer unavailable
    UNRIPE_DOCUMENT = "UNRIPE_DOCUMENT"    # Missing documents/evidence
    UNKNOWN = "UNKNOWN"                    # Cannot determine

    def is_ripe(self) -> bool:
        """Check if status indicates ripeness."""
        return self == RipenessStatus.RIPE

    def is_unripe(self) -> bool:
        """Check if status indicates unripeness."""
        return self in {
            RipenessStatus.UNRIPE_SUMMONS,
            RipenessStatus.UNRIPE_DEPENDENT,
            RipenessStatus.UNRIPE_PARTY,
            RipenessStatus.UNRIPE_DOCUMENT,
        }


# Keywords indicating bottlenecks (data-driven from analyze_ripeness_patterns.py)
UNRIPE_KEYWORDS = {
    "SUMMONS": RipenessStatus.UNRIPE_SUMMONS,
    "NOTICE": RipenessStatus.UNRIPE_SUMMONS,
    "ISSUE": RipenessStatus.UNRIPE_SUMMONS,
    "SERVICE": RipenessStatus.UNRIPE_SUMMONS,
    "STAY": RipenessStatus.UNRIPE_DEPENDENT,
    "PENDING": RipenessStatus.UNRIPE_DEPENDENT,
}

RIPE_KEYWORDS = ["ARGUMENTS", "HEARING", "FINAL", "JUDGMENT", "ORDERS", "DISPOSAL"]


class RipenessClassifier:
    """Classify cases as RIPE or UNRIPE for scheduling optimization."""

    # Stages that indicate case is ready for substantive hearing
    RIPE_STAGES = [
        "ARGUMENTS",
        "EVIDENCE",
        "ORDERS / JUDGMENT",
        "FINAL DISPOSAL",
    ]

    # Stages that indicate administrative/preliminary work
    UNRIPE_STAGES = [
        "PRE-ADMISSION",
        "ADMISSION",  # Most cases stuck here waiting for compliance
        "FRAMING OF CHARGES",
        "INTERLOCUTORY APPLICATION",
    ]

    @classmethod
    def classify(cls, case: Case, current_date: datetime | None = None) -> RipenessStatus:
        """Classify case ripeness status with bottleneck type.

        Args:
            case: Case to classify
            current_date: Current simulation date (defaults to now)

        Returns:
            RipenessStatus enum indicating ripeness and bottleneck type

        Algorithm:
            1. Check last hearing purpose for explicit bottleneck keywords
            2. Check stage (ADMISSION vs ORDERS/JUDGMENT)
            3. Check case maturity (days since filing, hearing count)
            4. Check if stuck (many hearings but no progress)
            5. Default to RIPE if no bottlenecks detected
        """
        if current_date is None:
            current_date = datetime.now()

        # 1. Check last hearing purpose for explicit bottleneck keywords
        if hasattr(case, "last_hearing_purpose") and case.last_hearing_purpose:
            purpose_upper = case.last_hearing_purpose.upper()

            for keyword, bottleneck_type in UNRIPE_KEYWORDS.items():
                if keyword in purpose_upper:
                    return bottleneck_type

        # 2. Check stage - ADMISSION stage with few hearings is likely unripe
        if case.current_stage == "ADMISSION":
            # New cases in ADMISSION (< 3 hearings) are often unripe
            if case.hearing_count < 3:
                return RipenessStatus.UNRIPE_SUMMONS

        # 3. Check if case is "stuck" (many hearings but no progress)
        if case.hearing_count > 10:
            # Calculate average days between hearings
            if case.age_days > 0:
                avg_gap = case.age_days / case.hearing_count

                # If average gap > 60 days, likely stuck due to bottleneck
                if avg_gap > 60:
                    return RipenessStatus.UNRIPE_PARTY

        # 4. Check stage-based ripeness (ripe stages are substantive)
        if case.current_stage in cls.RIPE_STAGES:
            return RipenessStatus.RIPE

        # 5. Default to RIPE if no bottlenecks detected
        # NOTE: Scheduling gap enforcement (MIN_GAP_BETWEEN_HEARINGS) is handled
        # by the simulation engine, not the ripeness classifier. Ripeness only
        # detects substantive bottlenecks (summons, dependencies, party issues).
        return RipenessStatus.RIPE

    @classmethod
    def get_ripeness_priority(cls, case: Case, current_date: datetime | None = None) -> float:
        """Get priority adjustment based on ripeness.

        Ripe cases should get judicial time priority over unripe cases
        when scheduling is tight.

        Returns:
            Priority multiplier (1.5 for RIPE, 0.7 for UNRIPE)
        """
        ripeness = cls.classify(case, current_date)
        return 1.5 if ripeness.is_ripe() else 0.7

    @classmethod
    def is_schedulable(cls, case: Case, current_date: datetime | None = None) -> bool:
        """Determine if a case can be scheduled for a hearing.

        A case is schedulable if:
        - It is RIPE (no bottlenecks)
        - Sufficient time has passed since its last hearing
        - It is not disposed

        Args:
            case: The case to check
            current_date: Current simulation date

        Returns:
            True if case can be scheduled, False otherwise
        """
        # Check disposal status
        if case.is_disposed:
            return False

        # Calculate current ripeness
        ripeness = cls.classify(case, current_date)

        # Only RIPE cases can be scheduled
        return ripeness.is_ripe()

    @classmethod
    def get_ripeness_reason(cls, ripeness_status: RipenessStatus) -> str:
        """Get human-readable explanation for ripeness status.

        Used in dashboard tooltips and reports.

        Args:
            ripeness_status: The status to explain

        Returns:
            Human-readable explanation string
        """
        reasons = {
            RipenessStatus.RIPE: "Case is ready for hearing (no bottlenecks detected)",
            RipenessStatus.UNRIPE_SUMMONS: "Waiting for summons service or notice response",
            RipenessStatus.UNRIPE_DEPENDENT: "Waiting for another case or court order",
            RipenessStatus.UNRIPE_PARTY: "Party or lawyer unavailable",
            RipenessStatus.UNRIPE_DOCUMENT: "Missing documents or evidence",
            RipenessStatus.UNKNOWN: "Insufficient data to determine ripeness",
        }
        return reasons.get(ripeness_status, "Unknown status")

    @classmethod
    def estimate_ripening_time(cls, case: Case, current_date: datetime) -> timedelta | None:
        """Estimate time until case becomes ripe.

        This is a heuristic based on bottleneck type and historical data.

        Args:
            case: The case to evaluate
            current_date: Current simulation date

        Returns:
            timedelta(0) if already ripe, an estimated timedelta for known
            bottleneck types, or None if the bottleneck is unknown
        """
        ripeness = cls.classify(case, current_date)

        if ripeness.is_ripe():
            return timedelta(0)

        # Heuristic estimates based on bottleneck type
        estimates = {
            RipenessStatus.UNRIPE_SUMMONS: timedelta(days=30),
            RipenessStatus.UNRIPE_DEPENDENT: timedelta(days=60),
            RipenessStatus.UNRIPE_PARTY: timedelta(days=14),
            RipenessStatus.UNRIPE_DOCUMENT: timedelta(days=21),
        }

        return estimates.get(ripeness, None)
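Step 1 of `classify` is a first-match substring scan over `UNRIPE_KEYWORDS`: the hearing purpose is uppercased and the first matching keyword determines the bottleneck type. A self-contained sketch of just that step (the `Status` enum and `classify_purpose` helper are trimmed illustrations, not the module's API):

```python
from enum import Enum


class Status(Enum):
    RIPE = "RIPE"
    UNRIPE_SUMMONS = "UNRIPE_SUMMONS"
    UNRIPE_DEPENDENT = "UNRIPE_DEPENDENT"


# Subset of the module's UNRIPE_KEYWORDS mapping
UNRIPE_KEYWORDS = {
    "SUMMONS": Status.UNRIPE_SUMMONS,
    "NOTICE": Status.UNRIPE_SUMMONS,
    "STAY": Status.UNRIPE_DEPENDENT,
}


def classify_purpose(purpose: str) -> Status:
    """First matching bottleneck keyword wins; otherwise treat as ripe."""
    purpose_upper = purpose.upper()
    for keyword, status in UNRIPE_KEYWORDS.items():
        if keyword in purpose_upper:
            return status
    return Status.RIPE


print(classify_purpose("Await service of summons").value)  # UNRIPE_SUMMONS
print(classify_purpose("Final arguments").value)           # RIPE
```

Since dicts preserve insertion order, a purpose mentioning both "SUMMONS" and "STAY" resolves to the summons bottleneck; keyword ordering in `UNRIPE_KEYWORDS` therefore encodes precedence.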
|
|
File without changes
|
|
@@ -0,0 +1,265 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```python
"""Synthetic case generator (Phase 2).

Generates Case objects between start_date and end_date using:
- CASE_TYPE_DISTRIBUTION
- Monthly seasonality factors
- Urgent case percentage
- Court working days (CourtCalendar)

Also provides CSV export/import helpers compatible with scripts.
"""
from __future__ import annotations

import csv
import math
import random
from dataclasses import dataclass
from datetime import date, timedelta
from pathlib import Path
from typing import Iterable, List, Tuple

from scheduler.core.case import Case
from scheduler.data.config import (
    CASE_TYPE_DISTRIBUTION,
    MONTHLY_SEASONALITY,
    URGENT_CASE_PERCENTAGE,
)
from scheduler.data.param_loader import load_parameters
from scheduler.utils.calendar import CourtCalendar


def _month_iter(start: date, end: date) -> Iterable[Tuple[int, int]]:
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        yield (y, m)
        if m == 12:
            y += 1
            m = 1
        else:
            m += 1


@dataclass
class CaseGenerator:
    start: date
    end: date
    seed: int = 42

    def generate(self, n_cases: int, stage_mix: dict | None = None, stage_mix_auto: bool = False) -> List[Case]:
        random.seed(self.seed)
        cal = CourtCalendar()
        if stage_mix_auto:
            params = load_parameters()
            stage_mix = params.get_stage_stationary_distribution()
        stage_mix = stage_mix or {"ADMISSION": 1.0}
        # Normalize explicitly so the mix sums to 1.0
        total_mix = sum(stage_mix.values()) or 1.0
        stage_mix = {k: v / total_mix for k, v in stage_mix.items()}
        # Precompute cumulative probabilities for stage sampling
        stage_items = list(stage_mix.items())
        scum = []
        acc = 0.0
        for _, p in stage_items:
            acc += p
            scum.append(acc)
        if scum:
            scum[-1] = 1.0

        def sample_stage() -> str:
            if not stage_items:
                return "ADMISSION"
            r = random.random()
            for i, (st, _) in enumerate(stage_items):
                if r <= scum[i]:
                    return st
            return stage_items[-1][0]

        # Duration sampling helper: lognormal parameterized via median & p90
        _dur_params: dict = {}

        def sample_stage_duration(stage: str) -> float:
            # Lazily load EDA parameters once, on first use
            if "params" not in _dur_params:
                _dur_params["params"] = load_parameters()
            params = _dur_params["params"]
            med = max(params.get_stage_duration(stage, "median"), 1e-3)
            p90 = max(params.get_stage_duration(stage, "p90"), med + 1e-6)
            z = 1.2815515655446004  # standard-normal z-score at the 90th percentile
            sigma = max(1e-6, math.log(p90) - math.log(med)) / z
            mu = math.log(med)
            # Box-Muller normal sample
            u1 = max(random.random(), 1e-9)
            u2 = max(random.random(), 1e-9)
            z0 = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
            return max(1.0, math.exp(mu + sigma * z0))

        # 1) Build monthly working-day lists and weights (seasonality * working days)
        month_days = {}
        month_weight = {}
        for (y, m) in _month_iter(self.start, self.end):
            days = cal.get_working_days_in_month(y, m)
            # Restrict to [start, end]
            days = [d for d in days if self.start <= d <= self.end]
            if not days:
                continue
            month_days[(y, m)] = days
            month_weight[(y, m)] = MONTHLY_SEASONALITY.get(m, 1.0) * len(days)

        total_w = sum(month_weight.values())
        if total_w == 0:
            return []

        # 2) Allocate case counts per month (round, then adjust for drift)
        alloc = {key: int(round(n_cases * (w / total_w))) for key, w in month_weight.items()}
        diff = n_cases - sum(alloc.values())
        if diff != 0:
            # Distribute the rounding difference deterministically by key order
            keys = sorted(alloc.keys())
            idx = 0
            step = 1 if diff > 0 else -1
            for _ in range(abs(diff)):
                alloc[keys[idx]] += step
                idx = (idx + 1) % len(keys)

        # 3) Case-type sampling via cumulative distribution
        type_items = list(CASE_TYPE_DISTRIBUTION.items())
        type_acc = []
        cum = 0.0
        for _, p in type_items:
            cum += p
            type_acc.append(cum)
        # Ensure the last value is exactly 1.0 in case of rounding issues
        if type_acc:
            type_acc[-1] = 1.0

        def sample_case_type() -> str:
            r = random.random()
            for i, (ct, _) in enumerate(type_items):
                if r <= type_acc[i]:
                    return ct
            return type_items[-1][0]

        # Hearing purposes used to seed ripeness classification;
        # roughly 20% of cases carry bottlenecks (unripe).
        bottleneck_purposes = [
            "ISSUE SUMMONS",
            "FOR NOTICE",
            "AWAIT SERVICE OF NOTICE",
            "STAY APPLICATION PENDING",
            "FOR ORDERS",
        ]
        ripe_purposes = [
            "ARGUMENTS",
            "HEARING",
            "FINAL ARGUMENTS",
            "FOR JUDGMENT",
            "EVIDENCE",
        ]

        cases: List[Case] = []
        seq = 0
        for key in sorted(alloc.keys()):
            days = month_days[key]
            if not days or alloc[key] <= 0:
                continue
            # Simple round-robin distribution across working days of the month
            for _ in range(alloc[key]):
                filed = days[seq % len(days)]
                seq += 1
                ct = sample_case_type()
                urgent = random.random() < URGENT_CASE_PERCENTAGE
                cid = f"{ct}/{filed.year}/{len(cases) + 1:05d}"
                init_stage = sample_stage()
                # Draw a nominal duration for the current stage. For simplicity
                # the case is treated as having just entered this stage on the
                # filing date, so the draw is informational only.
                dur_days = int(sample_stage_duration(init_stage))
                c = Case(
                    case_id=cid,
                    case_type=ct,
                    filed_date=filed,
                    current_stage=init_stage,
                    is_urgent=urgent,
                )
                c.stage_start_date = filed
                c.days_in_stage = 0
                # Initialize a realistic hearing history: spread last hearings
                # across the past 7-30 days so cases become eligible as a steady
                # stream rather than all at once at simulation start.
                days_since_filed = (self.end - filed).days
                if days_since_filed > 30:  # Only if filed at least 30 days before end
                    c.hearing_count = max(1, days_since_filed // 30)
                    # 7 days = just became eligible, 30 days = long overdue
                    days_before_end = random.randint(7, 30)
                    c.last_hearing_date = self.end - timedelta(days=days_before_end)
                    # Staggered eligibility at simulation start
                    c.days_since_last_hearing = days_before_end

                if init_stage == "ADMISSION" and c.hearing_count < 3:
                    # Early ADMISSION cases are more likely unripe
                    c.last_hearing_purpose = (
                        random.choice(bottleneck_purposes)
                        if random.random() < 0.4
                        else random.choice(ripe_purposes)
                    )
                elif init_stage in ["ARGUMENTS", "ORDERS / JUDGMENT", "FINAL DISPOSAL"]:
                    # Advanced stages are usually ripe
                    c.last_hearing_purpose = random.choice(ripe_purposes)
                else:
                    # Mixed
                    c.last_hearing_purpose = (
                        random.choice(bottleneck_purposes)
                        if random.random() < 0.2
                        else random.choice(ripe_purposes)
                    )

                cases.append(c)

        return cases

    # CSV helpers -----------------------------------------------------------
    @staticmethod
    def to_csv(cases: List[Case], out_path: Path) -> None:
        out_path.parent.mkdir(parents=True, exist_ok=True)
        with out_path.open("w", newline="") as f:
            w = csv.writer(f)
            w.writerow([
                "case_id", "case_type", "filed_date", "current_stage", "is_urgent",
                "hearing_count", "last_hearing_date", "days_since_last_hearing",
                "last_hearing_purpose",
            ])
            for c in cases:
                w.writerow([
                    c.case_id,
                    c.case_type,
                    c.filed_date.isoformat(),
                    c.current_stage,
                    1 if c.is_urgent else 0,
                    c.hearing_count,
                    c.last_hearing_date.isoformat() if c.last_hearing_date else "",
                    c.days_since_last_hearing,
                    c.last_hearing_purpose or "",
                ])

    @staticmethod
    def from_csv(path: Path) -> List[Case]:
        cases: List[Case] = []
        with path.open("r", newline="") as f:
            for row in csv.DictReader(f):
                c = Case(
                    case_id=row["case_id"],
                    case_type=row["case_type"],
                    filed_date=date.fromisoformat(row["filed_date"]),
                    current_stage=row.get("current_stage", "ADMISSION"),
                    is_urgent=str(row.get("is_urgent", "0")) in ("1", "true", "True"),
                )
                # Load hearing history if available
                if row.get("hearing_count"):
                    c.hearing_count = int(row["hearing_count"])
                if row.get("last_hearing_date"):
                    c.last_hearing_date = date.fromisoformat(row["last_hearing_date"])
                if row.get("days_since_last_hearing"):
                    c.days_since_last_hearing = int(row["days_since_last_hearing"])
                if row.get("last_hearing_purpose"):
                    c.last_hearing_purpose = row["last_hearing_purpose"]
                cases.append(c)
        return cases
```
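`sample_stage_duration` above fits a lognormal to each stage's median and p90 taken from the EDA tables. That parameterization can be checked in isolation; the sketch below repeats the same arithmetic standalone (the 30-day median and 90-day p90 are illustrative values, not EDA outputs):

```python
import math

Z90 = 1.2815515655446004  # standard-normal z-score at the 90th percentile


def lognormal_from_median_p90(median: float, p90: float) -> tuple[float, float]:
    """Recover (mu, sigma) so exp(mu) is the median and exp(mu + Z90*sigma) the p90."""
    median = max(median, 1e-3)
    p90 = max(p90, median + 1e-6)
    mu = math.log(median)
    sigma = max(1e-6, math.log(p90) - math.log(median)) / Z90
    return mu, sigma


# Hypothetical stage with a 30-day median and 90-day p90
mu, sigma = lognormal_from_median_p90(30.0, 90.0)
```

By construction `exp(mu)` recovers the median and `exp(mu + Z90 * sigma)` the 90th percentile, which is what ties the sampler back to the two EDA statistics.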
@@ -0,0 +1,122 @@
```python
"""Configuration constants for the court scheduling system.

This module contains all configuration parameters and constants used throughout
the scheduler implementation.
"""

from pathlib import Path

# Project paths
PROJECT_ROOT = Path(__file__).parent.parent.parent
REPORTS_DIR = PROJECT_ROOT / "reports" / "figures"


def get_latest_params_dir() -> Path:
    """Get the latest versioned parameters directory from EDA outputs."""
    if not REPORTS_DIR.exists():
        raise FileNotFoundError(f"Reports directory not found: {REPORTS_DIR}")

    version_dirs = [d for d in REPORTS_DIR.iterdir() if d.is_dir() and d.name.startswith("v")]
    if not version_dirs:
        raise FileNotFoundError(f"No versioned directories found in {REPORTS_DIR}")

    latest_dir = max(version_dirs, key=lambda d: d.stat().st_mtime)
    params_dir = latest_dir / "params"

    if not params_dir.exists():
        params_dir = latest_dir  # Fall back if the params/ subdirectory doesn't exist

    return params_dir


# Court operational constants
WORKING_DAYS_PER_YEAR = 192  # From the Karnataka High Court calendar
COURTROOMS = 5  # Number of courtrooms to simulate
SIMULATION_YEARS = 2  # Duration of the simulation
SIMULATION_DAYS = WORKING_DAYS_PER_YEAR * SIMULATION_YEARS  # 384 working days

# Case type distribution (from EDA)
CASE_TYPE_DISTRIBUTION = {
    "CRP": 0.201,  # Civil Revision Petition
    "CA": 0.200,   # Civil Appeal
    "RSA": 0.196,  # Regular Second Appeal
    "RFA": 0.167,  # Regular First Appeal
    "CCC": 0.111,  # Civil Contempt Petition
    "CP": 0.096,   # Civil Petition
    "CMP": 0.028,  # Civil Miscellaneous Petition
}

# Ordered list of case types
CASE_TYPES = list(CASE_TYPE_DISTRIBUTION.keys())

# Stage taxonomy (from EDA analysis)
STAGES = [
    "PRE-ADMISSION",
    "ADMISSION",
    "FRAMING OF CHARGES",
    "EVIDENCE",
    "ARGUMENTS",
    "INTERLOCUTORY APPLICATION",
    "SETTLEMENT",
    "ORDERS / JUDGMENT",
    "FINAL DISPOSAL",
    "OTHER",
    "NA",
]

# Terminal stages (a case is disposed after these).
# NA represents case closure in the historical data (the most common disposal path).
TERMINAL_STAGES = ["FINAL DISPOSAL", "SETTLEMENT", "NA"]

# Scheduling constraints.
# EDA shows median gaps of RSA=38 days, RFA=31 days, CRP=14 days (transitions.csv).
# The conservative 14-day CRP median is used for general scheduling, which allows
# more frequent hearings; stage-specific gaps are handled via transition
# probabilities in param_loader.
MIN_GAP_BETWEEN_HEARINGS = 14  # days (was 7; set to the CRP median)
MAX_GAP_WITHOUT_ALERT = 90  # days
URGENT_CASE_PERCENTAGE = 0.05  # 5% of cases marked urgent

# Multi-objective optimization weights
FAIRNESS_WEIGHT = 0.4
EFFICIENCY_WEIGHT = 0.3
URGENCY_WEIGHT = 0.3

# Daily capacity per courtroom (from EDA: median = 151)
DEFAULT_DAILY_CAPACITY = 151

# Filing rate (cases per year, derived from EDA)
ANNUAL_FILING_RATE = 6000  # ~500 per month
MONTHLY_FILING_RATE = ANNUAL_FILING_RATE // 12

# Seasonality factors (relative to average).
# Lower in May (summer vacation) and December-January (holidays).
MONTHLY_SEASONALITY = {
    1: 0.90,   # January (holidays)
    2: 1.15,   # February (peak)
    3: 1.15,   # March (peak)
    4: 1.10,   # April (peak)
    5: 0.70,   # May (summer vacation)
    6: 0.90,   # June (recovery)
    7: 1.10,   # July (peak)
    8: 1.10,   # August (peak)
    9: 1.10,   # September (peak)
    10: 1.10,  # October (peak)
    11: 1.05,  # November (peak)
    12: 0.85,  # December (holidays approaching)
}

# Alias for calendar-module compatibility
SEASONALITY_FACTORS = MONTHLY_SEASONALITY

# Success criteria thresholds
FAIRNESS_GINI_TARGET = 0.4  # Gini coefficient < 0.4
EFFICIENCY_UTILIZATION_TARGET = 0.85  # > 85% utilization
URGENCY_SCHEDULING_DAYS = 14  # High-readiness cases scheduled within 14 days
URGENT_SCHEDULING_DAYS = 7  # Urgent cases scheduled within 7 days

# Random seed for reproducibility
RANDOM_SEED = 42

# Logging configuration
LOG_LEVEL = "INFO"
LOG_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
```
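`MONTHLY_SEASONALITY` drives the generator's monthly case allocation: each month is weighted by its factor times its working-day count, counts are rounded, and the rounding drift is redistributed so the total comes out exact. A minimal standalone sketch of that step (`allocate` and the uniform 16 working days per month are illustrative assumptions, not part of the package API):

```python
MONTHLY_SEASONALITY = {1: 0.90, 2: 1.15, 3: 1.15, 4: 1.10, 5: 0.70, 6: 0.90,
                       7: 1.10, 8: 1.10, 9: 1.10, 10: 1.10, 11: 1.05, 12: 0.85}


def allocate(n_cases: int, working_days: dict[int, int]) -> dict[int, int]:
    """Split n_cases across months by seasonality * working days, fixing rounding drift."""
    weights = {m: MONTHLY_SEASONALITY[m] * d for m, d in working_days.items()}
    total = sum(weights.values())
    alloc = {m: round(n_cases * w / total) for m, w in weights.items()}
    # Redistribute the rounding difference deterministically by month order
    diff = n_cases - sum(alloc.values())
    step = 1 if diff > 0 else -1
    keys = sorted(alloc)
    for k in range(abs(diff)):
        alloc[keys[k % len(keys)]] += step
    return alloc


# Assume a uniform 16 working days per month, purely for illustration
counts = allocate(1000, {m: 16 for m in range(1, 13)})
assert sum(counts.values()) == 1000  # exact despite per-month rounding
```

With uniform working days, the low May factor (0.70) yields noticeably fewer filings than the February peak (1.15), which is the seasonality effect the table encodes.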
@@ -0,0 +1,343 @@
```python
"""Load parameters extracted from exploratory data analysis.

This module reads all parameter files generated by the EDA pipeline and makes
them available to the scheduler.
"""

import json
from pathlib import Path
from typing import Dict, List, Optional

import pandas as pd

from scheduler.data.config import get_latest_params_dir


class ParameterLoader:
    """Loads and manages parameters from EDA outputs.

    Performance notes:
    - Builds in-memory lookup caches to avoid repeated DataFrame filtering.
    """

    def __init__(self, params_dir: Optional[Path] = None):
        """Initialize the parameter loader.

        Args:
            params_dir: Directory containing parameter files. If None, uses the latest.
        """
        self.params_dir = params_dir or get_latest_params_dir()

        # Cached parameter tables
        self._transition_probs: Optional[pd.DataFrame] = None
        self._stage_duration: Optional[pd.DataFrame] = None
        self._court_capacity: Optional[Dict] = None
        self._adjournment_proxies: Optional[pd.DataFrame] = None
        self._case_type_summary: Optional[pd.DataFrame] = None
        self._transition_entropy: Optional[pd.DataFrame] = None
        # Lookup caches
        self._duration_map: Optional[Dict[str, Dict[str, float]]] = None  # stage -> {"median": x, "p90": y}
        self._transitions_map: Optional[Dict[str, List[tuple]]] = None  # stage_from -> [(stage_to, cum_p), ...]
        self._adj_map: Optional[Dict[str, Dict[str, float]]] = None  # stage -> {case_type: p_adj}

    @property
    def transition_probs(self) -> pd.DataFrame:
        """Stage transition probabilities.

        Returns:
            DataFrame with columns: STAGE_FROM, STAGE_TO, N, row_n, p
        """
        if self._transition_probs is None:
            self._transition_probs = pd.read_csv(self.params_dir / "stage_transition_probs.csv")
        return self._transition_probs

    def get_transition_prob(self, stage_from: str, stage_to: str) -> float:
        """Get the probability of transitioning from one stage to another.

        Args:
            stage_from: Current stage
            stage_to: Next stage

        Returns:
            Transition probability (0-1)
        """
        df = self.transition_probs
        match = df[(df["STAGE_FROM"] == stage_from) & (df["STAGE_TO"] == stage_to)]
        if len(match) == 0:
            return 0.0
        return float(match.iloc[0]["p"])

    def _build_transitions_map(self) -> None:
        if self._transitions_map is not None:
            return
        df = self.transition_probs
        self._transitions_map = {}
        # Group by STAGE_FROM and build cumulative probabilities for fast sampling
        for st_from, group in df.groupby("STAGE_FROM"):
            cum = 0.0
            lst = []
            for _, row in group.sort_values("p").iterrows():
                cum += float(row["p"])
                lst.append((str(row["STAGE_TO"]), cum))
            # Pin the last cumulative value to 1.0 to guard against rounding
            if lst:
                to_last, _ = lst[-1]
                lst[-1] = (to_last, 1.0)
            self._transitions_map[str(st_from)] = lst

    def get_stage_transitions(self, stage_from: str) -> pd.DataFrame:
        """Get all possible transitions from a given stage.

        Args:
            stage_from: Current stage

        Returns:
            DataFrame with STAGE_TO and p columns
        """
        df = self.transition_probs
        return df[df["STAGE_FROM"] == stage_from][["STAGE_TO", "p"]].reset_index(drop=True)

    def get_stage_transitions_fast(self, stage_from: str) -> List[tuple]:
        """Fast lookup: returns a list of (stage_to, cum_p)."""
        self._build_transitions_map()
        if not self._transitions_map:
            return []
        return self._transitions_map.get(stage_from, [])

    @property
    def stage_duration(self) -> pd.DataFrame:
        """Stage duration statistics.

        Returns:
            DataFrame with columns: STAGE, RUN_MEDIAN_DAYS, RUN_P90_DAYS,
            HEARINGS_PER_RUN_MED, N_RUNS
        """
        if self._stage_duration is None:
            self._stage_duration = pd.read_csv(self.params_dir / "stage_duration.csv")
        return self._stage_duration

    def _build_duration_map(self) -> None:
        if self._duration_map is not None:
            return
        self._duration_map = {}
        for _, row in self.stage_duration.iterrows():
            st = str(row["STAGE"])
            self._duration_map[st] = {
                "median": float(row["RUN_MEDIAN_DAYS"]),
                "p90": float(row["RUN_P90_DAYS"]),
            }

    def get_stage_duration(self, stage: str, percentile: str = "median") -> float:
        """Get the typical duration for a stage.

        Args:
            stage: Stage name
            percentile: 'median' or 'p90'

        Returns:
            Duration in days
        """
        self._build_duration_map()
        if not self._duration_map or stage not in self._duration_map:
            return 30.0
        p = "median" if percentile == "median" else "p90"
        return float(self._duration_map[stage].get(p, 30.0))

    @property
    def court_capacity(self) -> Dict:
        """Court capacity metrics.

        Returns:
            Dict with keys: slots_median_global, slots_p90_global
        """
        if self._court_capacity is None:
            file_path = self.params_dir / "court_capacity_global.json"
            with open(file_path, "r") as f:
                self._court_capacity = json.load(f)
        return self._court_capacity

    @property
    def daily_capacity_median(self) -> int:
        """Median daily capacity per courtroom."""
        return int(self.court_capacity["slots_median_global"])

    @property
    def daily_capacity_p90(self) -> int:
        """90th percentile daily capacity per courtroom."""
        return int(self.court_capacity["slots_p90_global"])

    @property
    def adjournment_proxies(self) -> pd.DataFrame:
        """Adjournment probabilities by stage and case type.

        Returns:
            DataFrame with columns: Remappedstages, casetype,
            p_adjourn_proxy, p_not_reached_proxy, n
        """
        if self._adjournment_proxies is None:
            self._adjournment_proxies = pd.read_csv(self.params_dir / "adjournment_proxies.csv")
        return self._adjournment_proxies

    def _build_adj_map(self) -> None:
        if self._adj_map is not None:
            return
        self._adj_map = {}
        for _, row in self.adjournment_proxies.iterrows():
            st = str(row["Remappedstages"])
            ct = str(row["casetype"])
            self._adj_map.setdefault(st, {})[ct] = float(row["p_adjourn_proxy"])

    def get_adjournment_prob(self, stage: str, case_type: str) -> float:
        """Get the probability of adjournment for a given stage and case type.

        Args:
            stage: Stage name
            case_type: Case type (e.g., 'RSA', 'CRP')

        Returns:
            Adjournment probability (0-1)
        """
        self._build_adj_map()
        if not self._adj_map:
            return 0.4
        if stage in self._adj_map and case_type in self._adj_map[stage]:
            return float(self._adj_map[stage][case_type])
        # Fallback: average across case types for this stage
        if stage in self._adj_map and self._adj_map[stage]:
            vals = list(self._adj_map[stage].values())
            return float(sum(vals) / len(vals))
        return 0.4

    @property
    def case_type_summary(self) -> pd.DataFrame:
        """Summary statistics by case type.

        Returns:
            DataFrame with columns: CASE_TYPE, n_cases, disp_median,
            disp_p90, hear_median, gap_median
        """
        if self._case_type_summary is None:
            self._case_type_summary = pd.read_csv(self.params_dir / "case_type_summary.csv")
        return self._case_type_summary

    def get_case_type_stats(self, case_type: str) -> Dict:
        """Get statistics for a specific case type.

        Args:
            case_type: Case type (e.g., 'RSA', 'CRP')

        Returns:
            Dict with disp_median, disp_p90, hear_median, gap_median
        """
        df = self.case_type_summary
        match = df[df["CASE_TYPE"] == case_type]
        if len(match) == 0:
            raise ValueError(f"Unknown case type: {case_type}")
        return match.iloc[0].to_dict()

    @property
    def transition_entropy(self) -> pd.DataFrame:
        """Stage transition entropy (predictability metric).

        Returns:
            DataFrame with columns: STAGE_FROM, entropy
        """
        if self._transition_entropy is None:
            self._transition_entropy = pd.read_csv(self.params_dir / "stage_transition_entropy.csv")
        return self._transition_entropy

    def get_stage_predictability(self, stage: str) -> float:
        """Get the predictability of transitions from a stage (inverse of entropy).

        Args:
            stage: Stage name

        Returns:
            Predictability score (0-1, higher = more predictable)
        """
        df = self.transition_entropy
        match = df[df["STAGE_FROM"] == stage]
        if len(match) == 0:
            return 0.5  # Default: medium predictability
        entropy = float(match.iloc[0]["entropy"])
        # Lower entropy means higher predictability; observed max entropy is
        # ~1.4, so normalize by 1.5 and clamp at 0.
        return max(0.0, 1.0 - (entropy / 1.5))

    def get_stage_stationary_distribution(self) -> Dict[str, float]:
        """Approximate the stationary distribution over stages from the transition matrix.

        Returns:
            Mapping of stage -> probability, summing to 1.0.
        """
        df = self.transition_probs.copy()
        # Drop nulls and coerce stage labels to strings
        df = df[df["STAGE_FROM"].notna() & df["STAGE_TO"].notna()]
        df["STAGE_FROM"] = df["STAGE_FROM"].astype(str)
        df["STAGE_TO"] = df["STAGE_TO"].astype(str)
        stages = sorted(set(df["STAGE_FROM"]).union(set(df["STAGE_TO"])))
        idx = {s: i for i, s in enumerate(stages)}
        n = len(stages)
        # Build a dense row-stochastic matrix
        P = [[0.0] * n for _ in range(n)]
        for _, row in df.iterrows():
            i = idx[row["STAGE_FROM"]]
            j = idx[row["STAGE_TO"]]
            P[i][j] += float(row["p"])
        # Make each row sum to 1: top up the self-loop if short, normalize if over
        for i in range(n):
            s = sum(P[i])
            if s < 0.999:
                P[i][i] += 1.0 - s
            elif s > 1.001:
                P[i] = [v / s for v in P[i]]
        # Power iteration: pi <- pi @ P until convergence
        pi = [1.0 / n] * n
        for _ in range(200):
            new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
            z = sum(new)
            if z == 0:
                break
            new = [v / z for v in new]
            if sum(abs(new[k] - pi[k]) for k in range(n)) < 1e-9:
                pi = new
                break
            pi = new
        return {stages[i]: pi[i] for i in range(n)}

    def __repr__(self) -> str:
        return f"ParameterLoader(params_dir={self.params_dir})"


# Convenience function for quick access
def load_parameters(params_dir: Optional[Path] = None) -> ParameterLoader:
    """Load parameters from EDA outputs.

    Args:
        params_dir: Directory containing parameter files. If None, uses the latest.

    Returns:
        ParameterLoader instance
    """
    return ParameterLoader(params_dir)
```
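`get_stage_stationary_distribution` relies on left power iteration: repeatedly apply `pi <- pi @ P` and renormalize until the vector stops moving. On a toy two-state chain the fixed point is known in closed form, which makes the method easy to verify (a standalone sketch of the same idea, not the package API):

```python
def stationary(P: list[list[float]], iters: int = 200, tol: float = 1e-9) -> list[float]:
    """Left power iteration on a row-stochastic matrix P."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        z = sum(new) or 1.0
        new = [v / z for v in new]  # renormalize to guard against drift
        if sum(abs(new[k] - pi[k]) for k in range(n)) < tol:
            return new
        pi = new
    return pi


# Two-state chain: P(A->B)=0.3, P(B->A)=0.1.
# Balance pi_A * 0.3 = pi_B * 0.1 gives the stationary vector (0.25, 0.75).
pi = stationary([[0.7, 0.3], [0.1, 0.9]])
```

The production method differs only in bookkeeping: it first repairs rows that do not sum to 1 (topping up the self-loop or normalizing) before iterating.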
|
File without changes
|
|
@@ -0,0 +1,62 @@
"""Basic metrics for scheduler evaluation.

These helpers avoid heavy dependencies and can be used by scripts.
"""
from __future__ import annotations

from typing import Iterable, List, Tuple


def gini(values: Iterable[float]) -> float:
    """Compute the Gini coefficient for a non-negative list of values.

    Args:
        values: Sequence of non-negative numbers

    Returns:
        Gini coefficient in [0, 1]
    """
    vals = [v for v in values if v is not None]
    n = len(vals)
    if n == 0:
        return 0.0
    if min(vals) < 0:
        raise ValueError("Gini expects non-negative values")
    sorted_vals = sorted(vals)
    cum = 0.0
    for i, x in enumerate(sorted_vals, start=1):
        cum += i * x
    total = sum(sorted_vals)
    if total == 0:
        return 0.0
    # Gini formula: (2*sum(i*x_i)/(n*sum(x)) - (n+1)/n)
    return (2 * cum) / (n * total) - (n + 1) / n


def utilization(total_scheduled: int, capacity: int) -> float:
    """Compute utilization as scheduled/capacity.

    Args:
        total_scheduled: Number of scheduled hearings
        capacity: Total available slots
    """
    if capacity <= 0:
        return 0.0
    return min(1.0, total_scheduled / capacity)


def urgency_sla(records: List[Tuple[bool, int]], days: int = 7) -> float:
    """Compute SLA for urgent cases.

    Args:
        records: List of tuples (is_urgent, working_day_delay)
        days: SLA threshold in working days

    Returns:
        Proportion of urgent cases within SLA (0..1)
    """
    urgent = [delay for is_urgent, delay in records if is_urgent]
    if not urgent:
        return 1.0
    within = sum(1 for d in urgent if d <= days)
    return within / len(urgent)
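As a sanity check on the Gini helper: perfect equality gives 0.0 and full concentration in one bucket approaches 1.0. This self-contained sketch repeats the same formula on two extreme inputs:

```python
def gini(values):
    # Same formula as the metrics helper: 2*sum(i*x_i)/(n*sum(x)) - (n+1)/n
    vals = sorted(v for v in values if v is not None)
    n, total = len(vals), sum(vals)
    if n == 0 or total == 0:
        return 0.0
    cum = sum(i * x for i, x in enumerate(vals, start=1))
    return (2 * cum) / (n * total) - (n + 1) / n

equal = gini([10, 10, 10, 10])   # every courtroom hears the same load
skewed = gini([0, 0, 0, 100])    # one courtroom holds all hearings
```

For n=4 the fully skewed case yields 0.75 (the formula's maximum for a sample of four), which is why fairness comparisons should hold the number of courtrooms fixed.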
File without changes

@@ -0,0 +1,5 @@
"""Output generation for court scheduling system."""

from .cause_list import CauseListGenerator, generate_cause_lists_from_sweep

__all__ = ['CauseListGenerator', 'generate_cause_lists_from_sweep']
@@ -0,0 +1,232 @@
"""Daily cause list generator for court scheduling system.

Generates machine-readable cause lists from simulation results with explainability.
"""
from pathlib import Path
from typing import Optional
import pandas as pd
from datetime import datetime


class CauseListGenerator:
    """Generates daily cause lists with explanations for scheduling decisions."""

    def __init__(self, events_file: Path):
        """Initialize with simulation events CSV.

        Args:
            events_file: Path to events.csv from simulation
        """
        self.events_file = events_file
        self.events = pd.read_csv(events_file)

    def generate_daily_lists(self, output_dir: Path) -> Path:
        """Generate daily cause lists for the entire simulation period.

        Args:
            output_dir: Directory to save cause list CSVs

        Returns:
            Path to compiled cause list CSV
        """
        output_dir.mkdir(parents=True, exist_ok=True)

        # Filter for 'scheduled' events (actual column name is 'type')
        scheduled = self.events[self.events['type'] == 'scheduled'].copy()

        if scheduled.empty:
            raise ValueError("No 'scheduled' events found in simulation")

        # Parse date column (handle different formats)
        scheduled['date'] = pd.to_datetime(scheduled['date'])

        # Add sequence number per courtroom per day.
        # Sort by date, courtroom, then case_id for consistency.
        scheduled = scheduled.sort_values(['date', 'courtroom_id', 'case_id'])
        scheduled['sequence_number'] = scheduled.groupby(['date', 'courtroom_id']).cumcount() + 1

        # Build cause list structure
        cause_list = pd.DataFrame({
            'Date': scheduled['date'].dt.strftime('%Y-%m-%d'),
            'Courtroom_ID': scheduled['courtroom_id'].fillna(1).astype(int),
            'Case_ID': scheduled['case_id'],
            'Case_Type': scheduled['case_type'],
            'Stage': scheduled['stage'],
            'Purpose': 'HEARING',  # Default purpose
            'Sequence_Number': scheduled['sequence_number'],
            'Explanation': scheduled.apply(self._generate_explanation, axis=1)
        })

        # Save compiled cause list
        compiled_path = output_dir / "compiled_cause_list.csv"
        cause_list.to_csv(compiled_path, index=False)

        # Generate daily summaries
        daily_summary = cause_list.groupby('Date').agg({
            'Case_ID': 'count',
            'Courtroom_ID': 'nunique'
        }).rename(columns={
            'Case_ID': 'Total_Hearings',
            'Courtroom_ID': 'Active_Courtrooms'
        })

        summary_path = output_dir / "daily_summaries.csv"
        daily_summary.to_csv(summary_path)

        print(f"Generated cause list: {compiled_path}")
        print(f"  Total hearings: {len(cause_list):,}")
        print(f"  Date range: {cause_list['Date'].min()} to {cause_list['Date'].max()}")
        print(f"  Unique cases: {cause_list['Case_ID'].nunique():,}")
        print(f"Daily summaries: {summary_path}")

        return compiled_path

    def _generate_explanation(self, row: pd.Series) -> str:
        """Generate a human-readable explanation for a scheduling decision.

        Args:
            row: Row from scheduled events DataFrame

        Returns:
            Explanation string
        """
        parts = []

        # Case type urgency (heuristic)
        case_type = row.get('case_type', '')
        if case_type in ['CCC', 'CP', 'CMP']:
            parts.append("HIGH URGENCY (criminal)")
        elif case_type in ['CA', 'CRP']:
            parts.append("MEDIUM urgency")
        else:
            parts.append("standard urgency")

        # Stage information
        stage = row.get('stage', '')
        if isinstance(stage, str):
            if 'JUDGMENT' in stage or 'ORDER' in stage:
                parts.append("ready for orders/judgment")
            elif 'ADMISSION' in stage:
                parts.append("admission stage")

        # Courtroom allocation
        courtroom = row.get('courtroom_id', 1)
        try:
            parts.append(f"assigned to Courtroom {int(courtroom)}")
        except Exception:
            parts.append("courtroom assigned")

        # Additional details
        detail = row.get('detail')
        if isinstance(detail, str) and detail:
            parts.append(detail)

        return " | ".join(parts) if parts else "Scheduled for hearing"

    def generate_no_case_left_behind_report(self, all_cases_file: Path, output_file: Path):
        """Verify no case was left unscheduled for too long.

        Args:
            all_cases_file: Path to CSV with all cases in simulation
            output_file: Path to save verification report
        """
        # Use the same 'type' == 'scheduled' schema as generate_daily_lists
        # (the events CSV has no 'event_type' column).
        scheduled = self.events[self.events['type'] == 'scheduled'].copy()
        scheduled['date'] = pd.to_datetime(scheduled['date'])

        # Get unique cases scheduled
        scheduled_cases = set(scheduled['case_id'].unique())

        # Load all cases
        all_cases = pd.read_csv(all_cases_file)
        all_case_ids = set(all_cases['case_id'].astype(str).unique())

        # Find never-scheduled cases
        never_scheduled = all_case_ids - scheduled_cases

        # Calculate gaps between hearings per case
        scheduled = scheduled.sort_values(['case_id', 'date'])
        scheduled['days_since_last'] = scheduled.groupby('case_id')['date'].diff().dt.days

        # Statistics
        coverage = len(scheduled_cases) / len(all_case_ids) * 100
        max_gap = scheduled['days_since_last'].max()
        avg_gap = scheduled['days_since_last'].mean()

        report = pd.DataFrame({
            'Metric': [
                'Total Cases',
                'Cases Scheduled At Least Once',
                'Coverage (%)',
                'Cases Never Scheduled',
                'Max Gap Between Hearings (days)',
                'Avg Gap Between Hearings (days)',
                'Cases with Gap > 60 days',
                'Cases with Gap > 90 days'
            ],
            'Value': [
                len(all_case_ids),
                len(scheduled_cases),
                f"{coverage:.2f}",
                len(never_scheduled),
                f"{max_gap:.0f}" if pd.notna(max_gap) else "N/A",
                f"{avg_gap:.1f}" if pd.notna(avg_gap) else "N/A",
                (scheduled['days_since_last'] > 60).sum(),
                (scheduled['days_since_last'] > 90).sum()
            ]
        })

        report.to_csv(output_file, index=False)
        print(f"\nNo-Case-Left-Behind Verification Report: {output_file}")
        print(report.to_string(index=False))

        return report


def generate_cause_lists_from_sweep(sweep_dir: Path, scenario: str, policy: str):
    """Generate cause lists from comprehensive sweep results.

    Args:
        sweep_dir: Path to sweep results directory
        scenario: Scenario name (e.g., 'baseline_10k')
        policy: Policy name (e.g., 'readiness')
    """
    results_dir = sweep_dir / f"{scenario}_{policy}"
    events_file = results_dir / "events.csv"

    if not events_file.exists():
        raise FileNotFoundError(f"Events file not found: {events_file}")

    output_dir = results_dir / "cause_lists"

    generator = CauseListGenerator(events_file)
    cause_list_path = generator.generate_daily_lists(output_dir)

    # Generate no-case-left-behind report if cases file exists.
    # This would need the original cases dataset - skip for now.
    # cases_file = sweep_dir / "datasets" / f"{scenario}_cases.csv"
    # if cases_file.exists():
    #     report_path = output_dir / "no_case_left_behind.csv"
    #     generator.generate_no_case_left_behind_report(cases_file, report_path)

    return cause_list_path


if __name__ == "__main__":
    # Example usage
    sweep_dir = Path("data/comprehensive_sweep_20251120_184341")

    # Generate for our algorithm
    print("="*70)
    print("Generating Cause Lists for Readiness Algorithm (Our Algorithm)")
    print("="*70)

    cause_list = generate_cause_lists_from_sweep(
        sweep_dir=sweep_dir,
        scenario="baseline_10k",
        policy="readiness"
    )

    print("\n" + "="*70)
    print("Cause List Generation Complete")
    print("="*70)
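The per-day, per-courtroom sequence numbering that `groupby(...).cumcount() + 1` produces can be sketched without pandas; this standalone version (hypothetical rows, not tied to the events schema) mirrors the sort-then-count logic:

```python
from collections import defaultdict

def assign_sequence_numbers(rows):
    # rows: iterable of (date, courtroom_id, case_id). Sorting first mirrors
    # the generator's sort by date, courtroom, then case_id; the counter
    # restarts for each (date, courtroom) pair.
    counters = defaultdict(int)
    numbered = []
    for date_, room, case in sorted(rows):
        counters[(date_, room)] += 1
        numbered.append((date_, room, case, counters[(date_, room)]))
    return numbered

rows = [("2024-01-02", 1, "C2"), ("2024-01-02", 1, "C1"), ("2024-01-02", 2, "C3")]
numbered = assign_sequence_numbers(rows)
```

Sorting before counting is what makes the numbering deterministic across runs.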
File without changes
@@ -115,8 +115,8 @@ class CourtroomAllocator:
                 self.capacity_rejections += 1
                 continue

-            # Track if courtroom changed
-            if case.courtroom_id is not None and case.courtroom_id != courtroom_id:
+            # Track if courtroom changed (only count actual switches, not initial assignments)
+            if case.courtroom_id is not None and case.courtroom_id != 0 and case.courtroom_id != courtroom_id:
                 self.allocation_changes += 1

             # Assign case to courtroom
@@ -279,10 +279,12 @@ class CourtSim:

         # Build allocation dict for compatibility with existing loop
         allocation: Dict[int, List[Case]] = {r.courtroom_id: [] for r in self.rooms}
+        seen_cases = set()  # Track seen case_ids to prevent duplicates
         for case in cases_to_allocate:
-            if case.case_id in case_to_courtroom:
+            if case.case_id in case_to_courtroom and case.case_id not in seen_cases:
                 courtroom_id = case_to_courtroom[case.case_id]
                 allocation[courtroom_id].append(case)
+                seen_cases.add(case.case_id)

         return allocation

@@ -336,11 +338,34 @@ class CourtSim:
             sw.writerow(["case_id", "courtroom_id", "policy", "age_days", "readiness_score", "urgent", "stage", "days_since_last_hearing", "stage_ready_date"])
         for room in self.rooms:
             for case in allocation[room.courtroom_id]:
+                # Skip if case already disposed (safety check)
+                if case.status == CaseStatus.DISPOSED:
+                    continue
+
                 if room.schedule_case(current, case.case_id):
                     # Mark case as scheduled (for no-case-left-behind tracking)
                     case.mark_scheduled(current)

-
+                    # Calculate adjournment boost for logging
+                    import math
+                    adj_boost = 0.0
+                    if case.status == CaseStatus.ADJOURNED and case.hearing_count > 0:
+                        adj_boost = math.exp(-case.days_since_last_hearing / 21)
+
+                    # Log with full decision metadata
+                    self._events.write(
+                        current, "scheduled", case.case_id,
+                        case_type=case.case_type,
+                        stage=case.current_stage,
+                        courtroom_id=room.courtroom_id,
+                        priority_score=case.get_priority_score(),
+                        age_days=case.age_days,
+                        readiness_score=case.readiness_score,
+                        is_urgent=case.is_urgent,
+                        adj_boost=adj_boost,
+                        ripeness_status=case.ripeness_status,
+                        days_since_hearing=case.days_since_last_hearing
+                    )
                     day_total += 1
                     self._hearings_total += 1
                     # log suggestive rationale

@@ -438,6 +463,32 @@ class CourtSim:
         # Generate courtroom allocation summary
         print(f"\n{self.allocator.get_courtroom_summary()}")

+        # Generate comprehensive case status breakdown
+        total_cases = len(self.cases)
+        disposed_cases = [c for c in self.cases if c.status == CaseStatus.DISPOSED]
+        scheduled_at_least_once = [c for c in self.cases if c.last_scheduled_date is not None]
+        never_scheduled = [c for c in self.cases if c.last_scheduled_date is None]
+        scheduled_but_not_disposed = [c for c in scheduled_at_least_once if c.status != CaseStatus.DISPOSED]
+
+        print(f"\n=== Case Status Breakdown ===")
+        print(f"Total cases in system: {total_cases:,}")
+        print(f"\nScheduling outcomes:")
+        print(f"  Scheduled at least once: {len(scheduled_at_least_once):,} ({len(scheduled_at_least_once)/total_cases*100:.1f}%)")
+        print(f"    - Disposed: {len(disposed_cases):,} ({len(disposed_cases)/total_cases*100:.1f}%)")
+        print(f"    - Active (not disposed): {len(scheduled_but_not_disposed):,} ({len(scheduled_but_not_disposed)/total_cases*100:.1f}%)")
+        print(f"  Never scheduled: {len(never_scheduled):,} ({len(never_scheduled)/total_cases*100:.1f}%)")
+
+        if scheduled_at_least_once:
+            avg_hearings = sum(c.hearing_count for c in scheduled_at_least_once) / len(scheduled_at_least_once)
+            print(f"\nAverage hearings per scheduled case: {avg_hearings:.1f}")
+
+        if disposed_cases:
+            avg_hearings_to_disposal = sum(c.hearing_count for c in disposed_cases) / len(disposed_cases)
+            avg_days_to_disposal = sum((c.disposal_date - c.filed_date).days for c in disposed_cases) / len(disposed_cases)
+            print(f"\nDisposal metrics:")
+            print(f"  Average hearings to disposal: {avg_hearings_to_disposal:.1f}")
+            print(f"  Average days to disposal: {avg_days_to_disposal:.0f}")
+
         return CourtSimResult(
             hearings_total=self._hearings_total,
             hearings_heard=self._hearings_heard,
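The adjournment boost logged by the engine is an exponential decay, `exp(-days_since_last_hearing / 21)`: a freshly adjourned case gets a boost near 1.0 that fades with a roughly three-week time constant. A standalone sketch of the same curve:

```python
import math

def adjournment_boost(days_since_last_hearing, tau=21.0):
    # Exponential decay with a 21-day time constant, matching the engine's
    # math.exp(-case.days_since_last_hearing / 21).
    return math.exp(-days_since_last_hearing / tau)

fresh = adjournment_boost(0)        # boost is 1.0 on the day of adjournment
three_weeks = adjournment_boost(21) # decays to 1/e (about 0.37)
six_weeks = adjournment_boost(42)   # and keeps shrinking monotonically
```

The effect is that adjourned cases are nudged back onto the list quickly, but the nudge does not accumulate for cases that have been idle a long time.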
@@ -0,0 +1,63 @@
"""Event schema and writer for simulation audit trail.

Each event is a flat dict suitable for CSV logging with a 'type' field.
Types:
- filing: a new case filed into the system
- scheduled: a case scheduled on a date
- outcome: hearing outcome (heard/adjourned)
- stage_change: case progresses to a new stage
- disposed: case disposed
"""
from __future__ import annotations

from dataclasses import dataclass
from datetime import date
from pathlib import Path
import csv
from typing import Dict, Any, Iterable


@dataclass
class EventWriter:
    path: Path

    def __post_init__(self) -> None:
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self._buffer = []  # in-memory rows to append
        if not self.path.exists():
            with self.path.open("w", newline="") as f:
                w = csv.writer(f)
                w.writerow([
                    "date", "type", "case_id", "case_type", "stage", "courtroom_id",
                    "detail", "extra",
                    "priority_score", "age_days", "readiness_score", "is_urgent",
                    "adj_boost", "ripeness_status", "days_since_hearing"
                ])

    def write(self, date_: date, type_: str, case_id: str, case_type: str = "",
              stage: str = "", courtroom_id: int | None = None,
              detail: str = "", extra: str = "",
              priority_score: float | None = None, age_days: int | None = None,
              readiness_score: float | None = None, is_urgent: bool | None = None,
              adj_boost: float | None = None, ripeness_status: str = "",
              days_since_hearing: int | None = None) -> None:
        self._buffer.append([
            date_.isoformat(), type_, case_id, case_type, stage,
            courtroom_id if courtroom_id is not None else "",
            detail, extra,
            f"{priority_score:.4f}" if priority_score is not None else "",
            age_days if age_days is not None else "",
            f"{readiness_score:.4f}" if readiness_score is not None else "",
            int(is_urgent) if is_urgent is not None else "",
            f"{adj_boost:.4f}" if adj_boost is not None else "",
            ripeness_status,
            days_since_hearing if days_since_hearing is not None else "",
        ])

    def flush(self) -> None:
        if not self._buffer:
            return
        with self.path.open("a", newline="") as f:
            w = csv.writer(f)
            w.writerows(self._buffer)
        self._buffer.clear()
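EventWriter writes the header once on creation, buffers rows in memory, and appends the batch on `flush()`. A minimal standalone sketch of that header-once / buffer / append-on-flush pattern (using a throwaway temp file, not the real events path):

```python
import csv
import tempfile
from pathlib import Path

# Header is written once when the file is created...
path = Path(tempfile.mkdtemp()) / "events.csv"
with path.open("w", newline="") as f:
    csv.writer(f).writerow(["date", "type", "case_id"])

# ...rows accumulate in memory...
buffer = []
buffer.append(["2024-01-02", "scheduled", "C1"])
buffer.append(["2024-01-02", "outcome", "C1"])

# ...and a flush appends the whole batch, then empties the buffer.
with path.open("a", newline="") as f:
    csv.writer(f).writerows(buffer)
buffer.clear()

rows = path.read_text().splitlines()
```

Batching keeps per-event overhead low during the simulation loop; the trade-off is that unflushed events are lost if the run aborts.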
@@ -0,0 +1,18 @@
"""Scheduling policy implementations."""
from scheduler.simulation.policies.fifo import FIFOPolicy
from scheduler.simulation.policies.age import AgeBasedPolicy
from scheduler.simulation.policies.readiness import ReadinessPolicy

POLICY_REGISTRY = {
    "fifo": FIFOPolicy,
    "age": AgeBasedPolicy,
    "readiness": ReadinessPolicy,
}


def get_policy(name: str):
    name_lower = name.lower()
    if name_lower not in POLICY_REGISTRY:
        raise ValueError(f"Unknown policy: {name}")
    return POLICY_REGISTRY[name_lower]()


__all__ = ["FIFOPolicy", "AgeBasedPolicy", "ReadinessPolicy", "get_policy"]
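The registry maps lowercase names to policy classes and instantiates on lookup, so callers (e.g. a CLI flag) can pass names case-insensitively. A self-contained sketch of the same pattern, with stub classes standing in for the real policies:

```python
class FIFOPolicy:
    def get_name(self):
        return "FIFO"

class AgeBasedPolicy:
    def get_name(self):
        return "Age-Based"

# Keys are lowercase; values are classes, not instances.
POLICY_REGISTRY = {"fifo": FIFOPolicy, "age": AgeBasedPolicy}

def get_policy(name: str):
    key = name.lower()
    if key not in POLICY_REGISTRY:
        raise ValueError(f"Unknown policy: {name}")
    return POLICY_REGISTRY[key]()  # fresh instance per lookup

policy = get_policy("FIFO")  # case-insensitive lookup
```

Storing classes rather than instances means each simulation run gets a fresh, stateless policy object.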
@@ -0,0 +1,38 @@
"""Age-based scheduling policy.

Prioritizes older cases to reduce maximum age and prevent starvation.
Uses case age (days since filing) as the primary criterion.
"""
from __future__ import annotations

from datetime import date
from typing import List

from scheduler.simulation.scheduler import SchedulerPolicy
from scheduler.core.case import Case


class AgeBasedPolicy(SchedulerPolicy):
    """Age-based scheduling: oldest cases scheduled first."""

    def prioritize(self, cases: List[Case], current_date: date) -> List[Case]:
        """Sort cases by age (oldest first).

        Args:
            cases: List of eligible cases
            current_date: Current simulation date

        Returns:
            Cases sorted by age_days (descending)
        """
        # Update ages first
        for c in cases:
            c.update_age(current_date)

        return sorted(cases, key=lambda c: c.age_days, reverse=True)

    def get_name(self) -> str:
        return "Age-Based"

    def requires_readiness_score(self) -> bool:
        return False
@@ -0,0 +1,34 @@
"""First-In-First-Out (FIFO) scheduling policy.

Schedules cases in the order they were filed, treating all cases equally.
This is the simplest baseline policy.
"""
from __future__ import annotations

from datetime import date
from typing import List

from scheduler.simulation.scheduler import SchedulerPolicy
from scheduler.core.case import Case


class FIFOPolicy(SchedulerPolicy):
    """FIFO scheduling: cases scheduled in filing order."""

    def prioritize(self, cases: List[Case], current_date: date) -> List[Case]:
        """Sort cases by filed_date (earliest first).

        Args:
            cases: List of eligible cases
            current_date: Current simulation date (unused)

        Returns:
            Cases sorted by filing date (oldest first)
        """
        return sorted(cases, key=lambda c: c.filed_date)

    def get_name(self) -> str:
        return "FIFO"

    def requires_readiness_score(self) -> bool:
        return False
@@ -0,0 +1,48 @@
"""Readiness-based scheduling policy.

Combines age, readiness score, and urgency into a composite priority score.
This is the most sophisticated policy, balancing fairness with efficiency.

Priority formula:
    priority = (age/2000) * 0.4 + readiness * 0.3 + urgent * 0.3
"""
from __future__ import annotations

from datetime import date
from typing import List

from scheduler.simulation.scheduler import SchedulerPolicy
from scheduler.core.case import Case


class ReadinessPolicy(SchedulerPolicy):
    """Readiness-based scheduling: composite priority score."""

    def prioritize(self, cases: List[Case], current_date: date) -> List[Case]:
        """Sort cases by composite priority score (highest first).

        The priority score combines:
        - Age (40% weight)
        - Readiness (30% weight)
        - Urgency (30% weight)

        Args:
            cases: List of eligible cases
            current_date: Current simulation date

        Returns:
            Cases sorted by priority score (descending)
        """
        # Update ages and compute readiness
        for c in cases:
            c.update_age(current_date)
            c.compute_readiness_score()

        # Sort by priority score (higher = more urgent)
        return sorted(cases, key=lambda c: c.get_priority_score(), reverse=True)

    def get_name(self) -> str:
        return "Readiness-Based"

    def requires_readiness_score(self) -> bool:
        return True
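The documented formula can be checked by hand. This standalone stand-in for `Case.get_priority_score()` (the real implementation lives in `scheduler/core/case.py` and is assumed to match the docstring, including the 2000-day age normalization) makes the 40/30/30 weighting concrete:

```python
def priority_score(age_days, readiness, is_urgent):
    # priority = (age/2000) * 0.4 + readiness * 0.3 + urgent * 0.3
    urgent = 1.0 if is_urgent else 0.0
    return (age_days / 2000) * 0.4 + readiness * 0.3 + urgent * 0.3

# A 2000-day-old, half-ready urgent case (0.4 + 0.15 + 0.3 = 0.85)
# outranks a young, nearly ready one (0.02 + 0.27 = 0.29).
old_urgent = priority_score(2000, 0.5, True)
young_ready = priority_score(100, 0.9, False)
```

Because age is capped implicitly by the 2000-day divisor, a very old non-urgent case tops out at 0.4 plus its readiness term, so urgency can still overtake pure age.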
@@ -0,0 +1,43 @@
"""Base scheduler interface for policy implementations.

This module defines the abstract interface that all scheduling policies must implement.
Each policy decides which cases to schedule on a given day based on different criteria.
"""
from __future__ import annotations

from abc import ABC, abstractmethod
from datetime import date
from typing import List

from scheduler.core.case import Case


class SchedulerPolicy(ABC):
    """Abstract base class for scheduling policies.

    All scheduling policies must implement the `prioritize` method which
    ranks cases for scheduling on a given day.
    """

    @abstractmethod
    def prioritize(self, cases: List[Case], current_date: date) -> List[Case]:
        """Prioritize cases for scheduling on the given date.

        Args:
            cases: List of eligible cases (already filtered for readiness, not disposed)
            current_date: Current simulation date

        Returns:
            Sorted list of cases in priority order (highest priority first)
        """
        pass

    @abstractmethod
    def get_name(self) -> str:
        """Get the policy name for logging/reporting."""
        pass

    @abstractmethod
    def requires_readiness_score(self) -> bool:
        """Return True if this policy requires readiness score computation."""
        pass
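Adding a policy means subclassing SchedulerPolicy and implementing the three abstract methods. This hypothetical example (an abridged copy of the interface plus plain dicts standing in for Case objects) shows the contract end to end:

```python
from abc import ABC, abstractmethod
from datetime import date

class SchedulerPolicy(ABC):
    # Abridged copy of the interface, so the sketch is self-contained.
    @abstractmethod
    def prioritize(self, cases, current_date):
        ...

    @abstractmethod
    def get_name(self):
        ...

    @abstractmethod
    def requires_readiness_score(self):
        ...

class NewestFirstPolicy(SchedulerPolicy):
    """Hypothetical policy: most recently filed cases first."""

    def prioritize(self, cases, current_date):
        return sorted(cases, key=lambda c: c["filed_date"], reverse=True)

    def get_name(self):
        return "Newest-First"

    def requires_readiness_score(self):
        return False

cases = [{"id": "A", "filed_date": date(2020, 1, 1)},
         {"id": "B", "filed_date": date(2023, 6, 1)}]
ranked = NewestFirstPolicy().prioritize(cases, date(2024, 1, 1))
```

Registering the new class in POLICY_REGISTRY would make it selectable by name like the built-in three.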
File without changes

@@ -0,0 +1,217 @@
"""Court calendar utilities with working days and seasonality.

This module provides utilities for calculating working days considering
court holidays, seasonality, and Karnataka High Court calendar.
"""

from datetime import date, timedelta
from typing import List, Set

from scheduler.data.config import (
    WORKING_DAYS_PER_YEAR,
    SEASONALITY_FACTORS,
)


class CourtCalendar:
    """Manages court working days and seasonality.

    Attributes:
        holidays: Set of holiday dates
        working_days_per_year: Expected working days annually
    """

    def __init__(self, working_days_per_year: int = WORKING_DAYS_PER_YEAR):
        """Initialize court calendar.

        Args:
            working_days_per_year: Annual working days (default 192)
        """
        self.working_days_per_year = working_days_per_year
        self.holidays: Set[date] = set()

    def add_holiday(self, holiday_date: date) -> None:
        """Add a holiday to the calendar.

        Args:
            holiday_date: Date to mark as holiday
        """
        self.holidays.add(holiday_date)

    def add_holidays(self, holiday_dates: List[date]) -> None:
        """Add multiple holidays.

        Args:
            holiday_dates: List of dates to mark as holidays
        """
        self.holidays.update(holiday_dates)

    def is_working_day(self, check_date: date) -> bool:
        """Check if a date is a working day.

        Args:
            check_date: Date to check

        Returns:
            True if date is a working day (not weekend or holiday)
        """
        # Saturday (5) and Sunday (6) are weekends
        if check_date.weekday() in (5, 6):
            return False

        if check_date in self.holidays:
            return False

        return True

    def next_working_day(self, start_date: date, days_ahead: int = 1) -> date:
        """Get the next working day after a given number of working days.

        Args:
            start_date: Starting date
            days_ahead: Number of working days to advance

        Returns:
            Next working day date
        """
        current = start_date
        working_days_found = 0

        while working_days_found < days_ahead:
            current += timedelta(days=1)
            if self.is_working_day(current):
                working_days_found += 1

        return current

    def working_days_between(self, start_date: date, end_date: date) -> int:
        """Count working days between two dates (inclusive).

        Args:
            start_date: Start of range
            end_date: End of range

        Returns:
            Number of working days
        """
        if start_date > end_date:
            return 0

        count = 0
        current = start_date

        while current <= end_date:
            if self.is_working_day(current):
                count += 1
            current += timedelta(days=1)

        return count

    def get_working_days_in_month(self, year: int, month: int) -> List[date]:
        """Get all working days in a specific month.

        Args:
            year: Year
            month: Month (1-12)

        Returns:
            List of working day dates
        """
        # Get first and last day of month
        first_day = date(year, month, 1)

        if month == 12:
            last_day = date(year, 12, 31)
        else:
            last_day = date(year, month + 1, 1) - timedelta(days=1)

        working_days = []
        current = first_day

        while current <= last_day:
            if self.is_working_day(current):
                working_days.append(current)
            current += timedelta(days=1)

        return working_days

    def get_working_days_in_year(self, year: int) -> List[date]:
        """Get all working days in a year.

        Args:
            year: Year

        Returns:
            List of working day dates
        """
        working_days = []

        for month in range(1, 13):
            working_days.extend(self.get_working_days_in_month(year, month))

        return working_days

    def get_seasonality_factor(self, check_date: date) -> float:
        """Get seasonality factor for a date based on month.

        Args:
            check_date: Date to check

        Returns:
            Seasonality multiplier (from config)
        """
        return SEASONALITY_FACTORS.get(check_date.month, 1.0)

    def get_expected_capacity(self, check_date: date, base_capacity: int) -> int:
        """Get expected capacity adjusted for seasonality.

        Args:
            check_date: Date to check
            base_capacity: Base daily capacity

        Returns:
            Adjusted capacity
        """
        factor = self.get_seasonality_factor(check_date)
        return int(base_capacity * factor)

    def generate_court_calendar(self, start_date: date, end_date: date) -> List[date]:
        """Generate list of all court working days in a date range.

        Args:
            start_date: Start of simulation
            end_date: End of simulation

        Returns:
            List of working day dates
        """
        working_days = []
        current = start_date

        while current <= end_date:
            if self.is_working_day(current):
                working_days.append(current)
            current += timedelta(days=1)

        return working_days

    def add_standard_holidays(self, year: int) -> None:
        """Add standard Indian national holidays for a year.

        This is a simplified set. In production, use actual court holiday calendar.

        Args:
            year: Year to add holidays for
        """
        # Standard national holidays (simplified)
        holidays = [
            date(year, 1, 26),   # Republic Day
            date(year, 8, 15),   # Independence Day
            date(year, 10, 2),   # Gandhi Jayanti
            date(year, 12, 25),  # Christmas
        ]

        self.add_holidays(holidays)

    def __repr__(self) -> str:
        return f"CourtCalendar(working_days/year={self.working_days_per_year}, holidays={len(self.holidays)})"
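To see the weekend/holiday logic in isolation without the `scheduler.data.config` dependency, here is a trimmed, self-contained version of the same checks; the holiday set is an assumed value for the sketch:

```python
from datetime import date, timedelta

# Assumed holiday set for the sketch (Independence Day 2023)
holidays = {date(2023, 8, 15)}


def is_working_day(d: date) -> bool:
    # weekday() returns 5 for Saturday and 6 for Sunday
    return d.weekday() not in (5, 6) and d not in holidays


def working_days_between(start: date, end: date) -> int:
    """Count working days between two dates, inclusive."""
    count, current = 0, start
    while current <= end:
        if is_working_day(current):
            count += 1
        current += timedelta(days=1)
    return count


# 2023-08-14 (Mon) .. 2023-08-18 (Fri): five weekdays minus the holiday
print(working_days_between(date(2023, 8, 14), date(2023, 8, 18)))  # 4
```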
File without changes

@@ -0,0 +1,378 @@
"""Demonstration of explainability and judge intervention controls.

Shows:
1. Step-by-step decision reasoning for scheduled/unscheduled cases
2. Judge override capabilities
3. Draft cause list review and approval process
4. Audit trail tracking
"""
from datetime import date, datetime
from pathlib import Path
import sys

# Add parent directory to path
sys.path.insert(0, str(Path(__file__).parent.parent))

from scheduler.core.case import Case, CaseStatus
from scheduler.control.explainability import ExplainabilityEngine
from scheduler.control.overrides import (
    OverrideManager,
    Override,
    OverrideType
)


def demo_explainability():
    """Demonstrate step-by-step decision reasoning."""
    print("=" * 80)
    print("DEMO 1: EXPLAINABILITY - STEP-BY-STEP DECISION REASONING")
    print("=" * 80)
    print()

    # Create a sample case
    case = Case(
        case_id="CRP/2023/01234",
        case_type="CRP",
        filed_date=date(2023, 1, 15),
        current_stage="ORDERS / JUDGMENT",
        is_urgent=True
    )

    # Simulate case progression
    case.age_days = 180
    case.hearing_count = 3
    case.days_since_last_hearing = 21
    case.last_hearing_date = date(2023, 6, 1)
    case.last_hearing_purpose = "ARGUMENTS"
    case.readiness_score = 0.85
    case.ripeness_status = "RIPE"
    case.status = CaseStatus.ADJOURNED

    # Calculate priority
    priority_score = case.get_priority_score()

    # Example 1: Case SCHEDULED
    print("Example 1: Case SCHEDULED")
    print("-" * 80)

    explanation = ExplainabilityEngine.explain_scheduling_decision(
        case=case,
        current_date=date(2023, 6, 22),
        scheduled=True,
        ripeness_status="RIPE",
        priority_score=priority_score,
        courtroom_id=3,
        capacity_full=False,
        below_threshold=False
    )

    print(explanation.to_readable_text())
    print()

    # Example 2: Case NOT SCHEDULED (capacity full)
    print("\n" + "=" * 80)
    print("Example 2: Case NOT SCHEDULED (Capacity Full)")
    print("-" * 80)

    explanation2 = ExplainabilityEngine.explain_scheduling_decision(
        case=case,
        current_date=date(2023, 6, 22),
        scheduled=False,
        ripeness_status="RIPE",
        priority_score=priority_score,
        courtroom_id=None,
        capacity_full=True,
        below_threshold=False
    )

    print(explanation2.to_readable_text())
    print()

    # Example 3: Case NOT SCHEDULED (unripe)
    print("\n" + "=" * 80)
    print("Example 3: Case NOT SCHEDULED (UNRIPE - Summons Pending)")
    print("-" * 80)

    case_unripe = Case(
        case_id="RSA/2023/05678",
        case_type="RSA",
        filed_date=date(2023, 5, 1),
        current_stage="ADMISSION",
        is_urgent=False
    )
    case_unripe.age_days = 50
    case_unripe.readiness_score = 0.2
    case_unripe.ripeness_status = "UNRIPE_SUMMONS"
    case_unripe.last_hearing_purpose = "ISSUE SUMMONS"

    explanation3 = ExplainabilityEngine.explain_scheduling_decision(
        case=case_unripe,
        current_date=date(2023, 6, 22),
        scheduled=False,
        ripeness_status="UNRIPE_SUMMONS",
        priority_score=None,
        courtroom_id=None,
        capacity_full=False,
        below_threshold=False
    )

    print(explanation3.to_readable_text())
    print()


def demo_judge_overrides():
    """Demonstrate judge intervention controls."""
    print("\n" + "=" * 80)
    print("DEMO 2: JUDGE INTERVENTION CONTROLS")
    print("=" * 80)
    print()

    # Create override manager
    manager = OverrideManager()

    # Create a draft cause list
    print("Step 1: Algorithm generates draft cause list")
    print("-" * 80)

    algorithm_suggested = [
        "CRP/2023/00101",
        "CRP/2023/00102",
        "RSA/2023/00201",
        "CA/2023/00301",
        "CCC/2023/00401"
    ]

    draft = manager.create_draft(
        date=date(2023, 6, 22),
        courtroom_id=3,
        judge_id="J001",
        algorithm_suggested=algorithm_suggested
    )

    print(f"Draft created for {draft.date}")
    print(f"Courtroom: {draft.courtroom_id}")
    print(f"Judge: {draft.judge_id}")
    print(f"Algorithm suggested {len(algorithm_suggested)} cases:")
    for i, case_id in enumerate(algorithm_suggested, 1):
        print(f"  {i}. {case_id}")
    print()

    # Judge starts with algorithm suggestions
    draft.judge_approved = algorithm_suggested.copy()

    # Step 2: Judge makes overrides
    print("\nStep 2: Judge reviews and makes modifications")
    print("-" * 80)

    # Override 1: Judge adds an urgent case
    print("\nOverride 1: Judge adds urgent case")
    override1 = Override(
        override_id="OV001",
        override_type=OverrideType.ADD_CASE,
        case_id="CCC/2023/00999",
        judge_id="J001",
        timestamp=datetime.now(),
        reason="Medical emergency case, party has critical health condition"
    )

    success, error = manager.apply_override(draft, override1)
    if success:
        print(f"  ✓ {override1.to_readable_text()}")
    else:
        print(f"  ✗ Failed: {error}")
    print()

    # Override 2: Judge removes a case
    print("Override 2: Judge removes a case")
    override2 = Override(
        override_id="OV002",
        override_type=OverrideType.REMOVE_CASE,
        case_id="RSA/2023/00201",
        judge_id="J001",
        timestamp=datetime.now(),
        reason="Party requested postponement due to family emergency"
    )

    success, error = manager.apply_override(draft, override2)
    if success:
        print(f"  ✓ {override2.to_readable_text()}")
    else:
        print(f"  ✗ Failed: {error}")
    print()

    # Override 3: Judge overrides ripeness
    print("Override 3: Judge overrides ripeness status")
    override3 = Override(
        override_id="OV003",
        override_type=OverrideType.RIPENESS,
        case_id="CRP/2023/00102",
        judge_id="J001",
        timestamp=datetime.now(),
        old_value="UNRIPE_SUMMONS",
        new_value="RIPE",
        reason="Summons served yesterday, confirmation received this morning"
    )

    success, error = manager.apply_override(draft, override3)
    if success:
        print(f"  ✓ {override3.to_readable_text()}")
    else:
        print(f"  ✗ Failed: {error}")
    print()

    # Step 3: Judge approves final list
    print("\nStep 3: Judge finalizes cause list")
    print("-" * 80)

    manager.finalize_draft(draft)

    print(f"Status: {draft.status}")
    print(f"Finalized at: {draft.finalized_at.strftime('%Y-%m-%d %H:%M') if draft.finalized_at else 'N/A'}")
    print()

    # Show modifications summary
    print("Modifications Summary:")
    summary = draft.get_modifications_summary()
    print(f"  Cases added: {summary['cases_added']}")
    print(f"  Cases removed: {summary['cases_removed']}")
    print(f"  Cases kept: {summary['cases_kept']}")
    print(f"  Acceptance rate: {summary['acceptance_rate']:.1f}%")
    print(f"  Override types: {summary['override_types']}")
    print()

    # Show final list
    print("Final Approved Cases:")
    for i, case_id in enumerate(draft.judge_approved, 1):
        marker = " [NEW]" if case_id not in algorithm_suggested else ""
        print(f"  {i}. {case_id}{marker}")
    print()


def demo_judge_preferences():
    """Demonstrate judge-specific preferences."""
    print("\n" + "=" * 80)
    print("DEMO 3: JUDGE PREFERENCES")
    print("=" * 80)
    print()

    manager = OverrideManager()

    # Set judge preferences
    prefs = manager.get_judge_preferences("J001")

    print("Judge J001 Preferences:")
    print("-" * 80)

    # Set capacity override
    prefs.daily_capacity_override = 120
    print(f"Daily capacity override: {prefs.daily_capacity_override} (default: 151)")
    print("  Reason: Judge works half-days on Fridays")
    print()

    # Block dates
    prefs.blocked_dates = [
        date(2023, 7, 10),
        date(2023, 7, 11),
        date(2023, 7, 12)
    ]
    print("Blocked dates:")
    for blocked in prefs.blocked_dates:
        print(f"  - {blocked} (vacation)")
    print()

    # Case type preferences
    prefs.case_type_preferences = {
        "Monday": ["CRP", "CA"],
        "Wednesday": ["RSA", "RFA"]
    }
    print("Case type preferences by day:")
    for day, types in prefs.case_type_preferences.items():
        print(f"  {day}: {', '.join(types)}")
    print()


def demo_audit_trail():
    """Demonstrate audit trail export."""
    print("\n" + "=" * 80)
    print("DEMO 4: AUDIT TRAIL")
    print("=" * 80)
    print()

    manager = OverrideManager()

    # Simulate some activity
    draft1 = manager.create_draft(
        date=date(2023, 6, 22),
        courtroom_id=1,
        judge_id="J001",
        algorithm_suggested=["CRP/001", "CA/002", "RSA/003"]
    )
    draft1.judge_approved = ["CRP/001", "CA/002"]  # Removed one
    draft1.status = "APPROVED"

    override = Override(
        override_id="OV001",
        override_type=OverrideType.REMOVE_CASE,
        case_id="RSA/003",
        judge_id="J001",
        timestamp=datetime.now(),
        reason="Party unavailable"
    )
    draft1.overrides.append(override)
    manager.overrides.append(override)

    # Get statistics
    stats = manager.get_override_statistics()

    print("Override Statistics:")
    print("-" * 80)
    print(f"Total overrides: {stats['total_overrides']}")
    print(f"Total drafts: {stats['total_drafts']}")
    print(f"Approved drafts: {stats['approved_drafts']}")
    print(f"Average acceptance rate: {stats['avg_acceptance_rate']:.1f}%")
    print(f"Modification rate: {stats['modification_rate']:.1f}%")
    print(f"By type: {stats['by_type']}")
    print()

    # Export audit trail
    output_file = "demo_audit_trail.json"
    manager.export_audit_trail(output_file)
    print(f"✓ Audit trail exported to: {output_file}")
    print()


def main():
    """Run all demonstrations."""
    print("\n")
    print("#" * 80)
    print("# COURT SCHEDULING SYSTEM - EXPLAINABILITY & CONTROLS DEMO")
    print("# Demonstrating step-by-step reasoning and judge intervention")
    print("#" * 80)
    print()

    demo_explainability()
    demo_judge_overrides()
    demo_judge_preferences()
    demo_audit_trail()

    print("\n" + "=" * 80)
    print("DEMO COMPLETE")
    print("=" * 80)
    print()
    print("Key Takeaways:")
    print("1. Every scheduling decision has step-by-step explanation")
    print("2. Judges can override ANY algorithmic decision with reasoning")
    print("3. All overrides are tracked in audit trail")
    print("4. System is SUGGESTIVE, not prescriptive")
    print("5. Judge preferences are respected (capacity, blocked dates, etc.)")
    print()
    print("This demonstrates compliance with hackathon requirements:")
    print("  - Decision transparency (Phase 6.5 requirement)")
    print("  - User control and overrides (Phase 6.5 requirement)")
    print("  - Explainability for each step (Step 3 compliance)")
    print("  - Audit trail tracking (Phase 6.5 requirement)")
    print()


if __name__ == "__main__":
    main()
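The acceptance-rate figure printed in Demo 2 reduces to set arithmetic over the suggested and approved case lists. The snippet below shows one plausible formulation; the actual `get_modifications_summary` lives in `scheduler/control/overrides.py` and may differ in detail.

```python
def modifications_summary(suggested: list, approved: list) -> dict:
    """Compare an algorithm-suggested cause list against the judge-approved one."""
    s, a = set(suggested), set(approved)
    kept = s & a
    return {
        "cases_added": len(a - s),      # judge additions
        "cases_removed": len(s - a),    # judge removals
        "cases_kept": len(kept),
        # share of the algorithm's suggestions the judge kept
        "acceptance_rate": 100.0 * len(kept) / len(s) if s else 100.0,
    }


summary = modifications_summary(
    ["CRP/001", "CA/002", "RSA/003"],   # algorithm suggested
    ["CRP/001", "CA/002", "CCC/999"],   # judge approved (one removed, one added)
)
print(summary)  # acceptance_rate: 2 of 3 suggestions kept -> ~66.7%
```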
@@ -0,0 +1,261 @@
"""Generate cause lists for all scenarios and policies from comprehensive sweep.

Analyzes distribution and statistics of daily generated cause lists across scenarios and policies.
"""
from pathlib import Path
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scheduler.output.cause_list import CauseListGenerator

# Set style
plt.style.use('seaborn-v0_8-darkgrid')
sns.set_palette("husl")

# Find latest sweep directory
data_dir = Path("data")
sweep_dirs = sorted([d for d in data_dir.glob("comprehensive_sweep_*")], reverse=True)
if not sweep_dirs:
    raise FileNotFoundError("No sweep directories found")

sweep_dir = sweep_dirs[0]
print(f"Processing sweep: {sweep_dir.name}")
print("=" * 80)

# Get all result directories
result_dirs = [d for d in sweep_dir.iterdir() if d.is_dir() and d.name != "datasets"]

# Generate cause lists for each
all_stats = []

for result_dir in result_dirs:
    events_file = result_dir / "events.csv"
    if not events_file.exists():
        continue

    # Parse scenario and policy from directory name
    parts = result_dir.name.rsplit('_', 1)
    if len(parts) != 2:
        continue
    scenario, policy = parts

    print(f"\n{scenario} - {policy}")
    print("-" * 60)

    try:
        # Generate cause list
        output_dir = result_dir / "cause_lists"
        generator = CauseListGenerator(events_file)
        cause_list_path = generator.generate_daily_lists(output_dir)

        # Load and analyze
        cause_list = pd.read_csv(cause_list_path)

        # Daily statistics
        daily_stats = cause_list.groupby('Date').agg({
            'Case_ID': 'count',
            'Courtroom_ID': 'nunique',
            'Sequence_Number': 'max'
        }).rename(columns={
            'Case_ID': 'hearings',
            'Courtroom_ID': 'active_courtrooms',
            'Sequence_Number': 'max_sequence'
        })

        # Overall statistics
        stats = {
            'scenario': scenario,
            'policy': policy,
            'total_hearings': len(cause_list),
            'unique_cases': cause_list['Case_ID'].nunique(),
            'total_days': cause_list['Date'].nunique(),
            'avg_hearings_per_day': daily_stats['hearings'].mean(),
            'std_hearings_per_day': daily_stats['hearings'].std(),
            'min_hearings_per_day': daily_stats['hearings'].min(),
            'max_hearings_per_day': daily_stats['hearings'].max(),
            'avg_courtrooms_per_day': daily_stats['active_courtrooms'].mean(),
            'avg_cases_per_courtroom': daily_stats['hearings'].mean() / daily_stats['active_courtrooms'].mean()
        }

        all_stats.append(stats)

        print(f"  Total hearings: {stats['total_hearings']:,}")
        print(f"  Unique cases: {stats['unique_cases']:,}")
        print(f"  Days: {stats['total_days']}")
        print(f"  Avg hearings/day: {stats['avg_hearings_per_day']:.1f} ± {stats['std_hearings_per_day']:.1f}")
        print(f"  Avg cases/courtroom: {stats['avg_cases_per_courtroom']:.1f}")

    except Exception as e:
        print(f"  ERROR: {e}")

# Convert to DataFrame
stats_df = pd.DataFrame(all_stats)
stats_df.to_csv(sweep_dir / "cause_list_statistics.csv", index=False)

print("\n" + "=" * 80)
print(f"Generated {len(all_stats)} cause lists")
print(f"Statistics saved to: {sweep_dir / 'cause_list_statistics.csv'}")

# Generate comparative visualizations
print("\nGenerating visualizations...")

viz_dir = sweep_dir / "visualizations"
viz_dir.mkdir(exist_ok=True)

# 1. Average daily hearings by policy and scenario
fig, ax = plt.subplots(figsize=(16, 8))

scenarios = stats_df['scenario'].unique()
policies = ['fifo', 'age', 'readiness']
x = range(len(scenarios))
width = 0.25

for i, policy in enumerate(policies):
    policy_data = stats_df[stats_df['policy'] == policy].set_index('scenario')
    values = [policy_data.loc[s, 'avg_hearings_per_day'] if s in policy_data.index else 0 for s in scenarios]

    label = {
        'fifo': 'FIFO (Baseline)',
        'age': 'Age-Based (Baseline)',
        'readiness': 'Our Algorithm (Readiness)'
    }[policy]

    bars = ax.bar([xi + i*width for xi in x], values, width,
                  label=label, alpha=0.8, edgecolor='black', linewidth=1.2)

    # Add value labels
    for j, v in enumerate(values):
        if v > 0:
            ax.text(x[j] + i*width, v + 5, f'{v:.0f}',
                    ha='center', va='bottom', fontsize=9)

ax.set_xlabel('Scenario', fontsize=13, fontweight='bold')
ax.set_ylabel('Average Hearings per Day', fontsize=13, fontweight='bold')
ax.set_title('Daily Cause List Size: Comparison Across Policies and Scenarios',
             fontsize=15, fontweight='bold', pad=20)
ax.set_xticks([xi + width for xi in x])
ax.set_xticklabels(scenarios, rotation=45, ha='right')
ax.legend(fontsize=11)
ax.grid(axis='y', alpha=0.3)

plt.tight_layout()
plt.savefig(viz_dir / "cause_list_daily_size_comparison.png", dpi=300, bbox_inches='tight')
print(f"  Saved: {viz_dir / 'cause_list_daily_size_comparison.png'}")

# 2. Variability (std dev) comparison
fig, ax = plt.subplots(figsize=(16, 8))

for i, policy in enumerate(policies):
    policy_data = stats_df[stats_df['policy'] == policy].set_index('scenario')
    values = [policy_data.loc[s, 'std_hearings_per_day'] if s in policy_data.index else 0 for s in scenarios]

    label = {
        'fifo': 'FIFO',
        'age': 'Age',
        'readiness': 'Readiness (Ours)'
    }[policy]

    bars = ax.bar([xi + i*width for xi in x], values, width,
                  label=label, alpha=0.8, edgecolor='black', linewidth=1.2)

    for j, v in enumerate(values):
        if v > 0:
            ax.text(x[j] + i*width, v + 0.5, f'{v:.1f}',
                    ha='center', va='bottom', fontsize=9)

ax.set_xlabel('Scenario', fontsize=13, fontweight='bold')
ax.set_ylabel('Std Dev of Daily Hearings', fontsize=13, fontweight='bold')
ax.set_title('Cause List Consistency: Lower is More Predictable',
             fontsize=15, fontweight='bold', pad=20)
ax.set_xticks([xi + width for xi in x])
ax.set_xticklabels(scenarios, rotation=45, ha='right')
ax.legend(fontsize=11)
ax.grid(axis='y', alpha=0.3)

plt.tight_layout()
plt.savefig(viz_dir / "cause_list_variability.png", dpi=300, bbox_inches='tight')
print(f"  Saved: {viz_dir / 'cause_list_variability.png'}")

# 3. Cases per courtroom efficiency
fig, ax = plt.subplots(figsize=(16, 8))

for i, policy in enumerate(policies):
    policy_data = stats_df[stats_df['policy'] == policy].set_index('scenario')
    values = [policy_data.loc[s, 'avg_cases_per_courtroom'] if s in policy_data.index else 0 for s in scenarios]

    label = {
        'fifo': 'FIFO',
        'age': 'Age',
        'readiness': 'Readiness (Ours)'
    }[policy]

    bars = ax.bar([xi + i*width for xi in x], values, width,
                  label=label, alpha=0.8, edgecolor='black', linewidth=1.2)

    for j, v in enumerate(values):
        if v > 0:
            ax.text(x[j] + i*width, v + 0.5, f'{v:.1f}',
                    ha='center', va='bottom', fontsize=9)

ax.set_xlabel('Scenario', fontsize=13, fontweight='bold')
|
| 201 |
+
ax.set_ylabel('Avg Cases per Courtroom per Day', fontsize=13, fontweight='bold')
|
| 202 |
+
ax.set_title('Courtroom Load Balance: Cases per Courtroom',
|
| 203 |
+
fontsize=15, fontweight='bold', pad=20)
|
| 204 |
+
ax.set_xticks([xi + width for xi in x])
|
| 205 |
+
ax.set_xticklabels(scenarios, rotation=45, ha='right')
|
| 206 |
+
ax.legend(fontsize=11)
|
| 207 |
+
ax.grid(axis='y', alpha=0.3)
|
| 208 |
+
|
| 209 |
+
plt.tight_layout()
|
| 210 |
+
plt.savefig(viz_dir / "cause_list_courtroom_load.png", dpi=300, bbox_inches='tight')
|
| 211 |
+
print(f" Saved: {viz_dir / 'cause_list_courtroom_load.png'}")
|
| 212 |
+
|
| 213 |
+
# 4. Statistical summary table
|
| 214 |
+
fig, ax = plt.subplots(figsize=(14, 10))
|
| 215 |
+
ax.axis('tight')
|
| 216 |
+
ax.axis('off')
|
| 217 |
+
|
| 218 |
+
# Create summary table
|
| 219 |
+
summary_data = []
|
| 220 |
+
for policy in policies:
|
| 221 |
+
policy_stats = stats_df[stats_df['policy'] == policy]
|
| 222 |
+
summary_data.append([
|
| 223 |
+
{'fifo': 'FIFO', 'age': 'Age', 'readiness': 'Readiness (OURS)'}[policy],
|
| 224 |
+
f"{policy_stats['avg_hearings_per_day'].mean():.1f}",
|
| 225 |
+
f"{policy_stats['std_hearings_per_day'].mean():.2f}",
|
| 226 |
+
f"{policy_stats['avg_cases_per_courtroom'].mean():.1f}",
|
| 227 |
+
f"{policy_stats['unique_cases'].mean():.0f}",
|
| 228 |
+
f"{policy_stats['total_hearings'].mean():.0f}"
|
| 229 |
+
])
|
| 230 |
+
|
| 231 |
+
table = ax.table(cellText=summary_data,
|
| 232 |
+
colLabels=['Policy', 'Avg Hearings/Day', 'Std Dev',
|
| 233 |
+
'Cases/Courtroom', 'Avg Unique Cases', 'Avg Total Hearings'],
|
| 234 |
+
cellLoc='center',
|
| 235 |
+
loc='center',
|
| 236 |
+
colWidths=[0.2, 0.15, 0.15, 0.15, 0.15, 0.15])
|
| 237 |
+
|
| 238 |
+
table.auto_set_font_size(False)
|
| 239 |
+
table.set_fontsize(12)
|
| 240 |
+
table.scale(1, 3)
|
| 241 |
+
|
| 242 |
+
# Style header
|
| 243 |
+
for i in range(6):
|
| 244 |
+
table[(0, i)].set_facecolor('#4CAF50')
|
| 245 |
+
table[(0, i)].set_text_props(weight='bold', color='white')
|
| 246 |
+
|
| 247 |
+
# Highlight our algorithm
|
| 248 |
+
table[(3, 0)].set_facecolor('#E8F5E9')
|
| 249 |
+
for i in range(1, 6):
|
| 250 |
+
table[(3, i)].set_facecolor('#E8F5E9')
|
| 251 |
+
table[(3, i)].set_text_props(weight='bold')
|
| 252 |
+
|
| 253 |
+
plt.title('Cause List Statistics Summary: Average Across All Scenarios',
|
| 254 |
+
fontsize=14, fontweight='bold', pad=20)
|
| 255 |
+
plt.savefig(viz_dir / "cause_list_summary_table.png", dpi=300, bbox_inches='tight')
|
| 256 |
+
print(f" Saved: {viz_dir / 'cause_list_summary_table.png'}")
|
| 257 |
+
|
| 258 |
+
print("\n" + "=" * 80)
|
| 259 |
+
print("CAUSE LIST GENERATION AND ANALYSIS COMPLETE!")
|
| 260 |
+
print(f"All visualizations saved to: {viz_dir}")
|
| 261 |
+
print("=" * 80)
|