Submission ready

This view is limited to 50 files because it contains too many changes.
- Data/quick_demo/COMPARISON_REPORT.md +0 -19
- Data/quick_demo/EXECUTIVE_SUMMARY.md +0 -47
- Data/quick_demo/trained_rl_agent.pkl +0 -0
- Data/quick_demo/visualizations/performance_charts.md +0 -7
- HACKATHON_SUBMISSION.md +0 -264
- README.md +30 -15
- cli/commands/__init__.py +0 -1
- cli/config.py +0 -1
- cli/main.py +175 -146
- docs/DASHBOARD.md +25 -388
- docs/ENHANCEMENT_PLAN.md +0 -311
- models/intensive_trained_rl_agent.pkl +0 -0
- models/latest.pkl +0 -1
- models/trained_rl_agent.pkl +0 -0
- outputs/runs/run_20251127_054834/reports/COMPARISON_REPORT.md +0 -19
- outputs/runs/run_20251127_054834/reports/EXECUTIVE_SUMMARY.md +0 -47
- outputs/runs/run_20251127_054834/reports/visualizations/performance_charts.md +0 -7
- outputs/runs/run_20251127_054834/training/agent.pkl +0 -0
- pyproject.toml +8 -2
- report.txt +0 -56
- rl/README.md +0 -110
- rl/__init__.py +0 -12
- rl/config.py +0 -115
- rl/rewards.py +0 -127
- rl/simple_agent.py +0 -291
- rl/training.py +0 -515
- run_comprehensive_sweep.ps1 +0 -316
- runs/baseline/report.txt +0 -56
- runs/baseline_comparison/report.txt +0 -56
- runs/baseline_large_data/report.txt +0 -56
- runs/rl_final_test/report.txt +0 -56
- runs/rl_intensive/report.txt +0 -56
- runs/rl_large_data/report.txt +0 -56
- runs/rl_untrained/report.txt +0 -56
- runs/rl_vs_baseline/comparison_report.md +0 -29
- runs/rl_vs_baseline/readiness/report.txt +0 -56
- runs/rl_vs_baseline/rl/report.txt +0 -56
- scheduler/control/__init__.py +5 -10
- scheduler/control/explainability.py +207 -144
- scheduler/control/overrides.py +84 -87
- scheduler/core/algorithm.py +50 -51
- scheduler/core/case.py +55 -55
- scheduler/core/courtroom.py +47 -47
- scheduler/core/hearing.py +19 -19
- scheduler/core/judge.py +31 -31
- scheduler/core/policy.py +7 -7
- scheduler/core/ripeness.py +33 -35
- scheduler/dashboard/app.py +127 -135
- scheduler/dashboard/pages/1_EDA_Analysis.py +0 -273
- scheduler/dashboard/pages/2_Ripeness_Classifier.py +132 -161
Data/quick_demo/COMPARISON_REPORT.md
DELETED
@@ -1,19 +0,0 @@
-# Court Scheduling System - Performance Comparison
-
-Generated: 2025-11-26 05:47:24
-
-## Configuration
-
-- Training Cases: 10,000
-- Simulation Period: 90 days (0.2 years)
-- RL Episodes: 20
-- RL Learning Rate: 0.15
-- RL Epsilon: 0.4
-- Policies Compared: readiness, rl
-
-## Results Summary
-
-| Policy | Disposals | Disposal Rate | Utilization | Avg Hearings/Day |
-|--------|-----------|---------------|-------------|------------------|
-| Readiness | 5,421 | 54.2% | 84.2% | 635.4 |
-| Rl | 5,439 | 54.4% | 83.7% | 631.9 |
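As a quick arithmetic check on the deleted comparison report above, the reported rates follow directly from the raw counts (5,439 of 10,000 cases disposed; 56,874 hearings over 90 days, per the executive summary). A minimal sketch; the function names are illustrative and not part of the codebase:

```python
def disposal_rate(disposals: int, total_cases: int) -> float:
    """Disposal rate as a percentage of total cases filed."""
    return 100.0 * disposals / total_cases


def avg_hearings_per_day(total_hearings: int, days: int) -> float:
    """Mean hearings scheduled per simulated working day."""
    return total_hearings / days


# Figures from the deleted report:
print(round(disposal_rate(5421, 10_000), 1))      # readiness policy: 54.2
print(round(disposal_rate(5439, 10_000), 1))      # RL policy: 54.4
print(round(avg_hearings_per_day(56874, 90), 1))  # RL policy: 631.9
```

The numbers are internally consistent: the 54.4% disposal rate and 631.9 hearings/day in the table reproduce exactly from the counts quoted elsewhere in the deleted reports.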
Data/quick_demo/EXECUTIVE_SUMMARY.md
DELETED
@@ -1,47 +0,0 @@
-# Court Scheduling System - Executive Summary
-
-## Hackathon Submission: Karnataka High Court
-
-### System Overview
-This intelligent court scheduling system uses Reinforcement Learning to optimize case allocation and improve judicial efficiency. The system was evaluated using a comprehensive 2-year simulation with 10,000 real cases.
-
-### Key Achievements
-
-**54.4% Case Disposal Rate** - Significantly improved case clearance
-**83.7% Court Utilization** - Optimal resource allocation
-**56,874 Hearings Scheduled** - Over 90 days
-**AI-Powered Decisions** - Reinforcement learning with 20 training episodes
-
-### Technical Innovation
-
-- **Reinforcement Learning**: Tabular Q-learning with 6D state space
-- **Real-time Adaptation**: Dynamic policy adjustment based on case characteristics
-- **Multi-objective Optimization**: Balances disposal rate, fairness, and utilization
-- **Production Ready**: Generates daily cause lists for immediate deployment
-
-### Impact Metrics
-
-- **Cases Disposed**: 5,439 out of 10,000
-- **Average Hearings per Day**: 631.9
-- **System Scalability**: Handles 50,000+ case simulations efficiently
-- **Judicial Time Saved**: Estimated 75 productive court days
-
-### Deployment Readiness
-
-**Daily Cause Lists**: Automated generation for 90 days
-**Performance Monitoring**: Comprehensive metrics and analytics
-**Judicial Override**: Complete control system for judge approval
-**Multi-courtroom Support**: Load-balanced allocation across courtrooms
-
-### Next Steps
-
-1. **Pilot Deployment**: Begin with select courtrooms for validation
-2. **Judge Training**: Familiarization with AI-assisted scheduling
-3. **Performance Monitoring**: Track real-world improvement metrics
-4. **System Expansion**: Scale to additional court complexes
-
----
-
-**Generated**: 2025-11-26 05:47:24
-**System Version**: 2.0 (Hackathon Submission)
-**Contact**: Karnataka High Court Digital Innovation Team
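The deleted summary credits the results to tabular Q-learning, implemented in `rl/simple_agent.py` (also removed in this commit). The agent's code is not shown in this diff; below is a minimal sketch of the standard tabular Q-learning update it names, using the hyperparameters quoted in the report (learning rate 0.15, epsilon 0.4). The class and method names are illustrative stand-ins, not the repository's API:

```python
import random
from collections import defaultdict


class TabularQAgent:
    """Minimal epsilon-greedy tabular Q-learning agent (illustrative sketch)."""

    def __init__(self, learning_rate=0.15, epsilon=0.4, discount=0.95,
                 n_actions=2, seed=42):
        self.lr = learning_rate
        self.epsilon = epsilon
        self.gamma = discount
        self.n_actions = n_actions
        self.rng = random.Random(seed)
        # One row of Q-values per discretized state, initialized to zero.
        self.q_table = defaultdict(lambda: [0.0] * n_actions)

    def act(self, state):
        # Explore with probability epsilon, otherwise exploit the best action.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_actions)
        q = self.q_table[state]
        return q.index(max(q))

    def update(self, state, action, reward, next_state):
        # Q(s,a) <- Q(s,a) + lr * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + self.gamma * max(self.q_table[next_state])
        self.q_table[state][action] += self.lr * (target - self.q_table[state][action])


agent = TabularQAgent()
agent.update("pending", 0, 1.0, "disposed")
print(agent.q_table["pending"][0])  # 0.15
```

In the repository the state would be a 6-dimensional tuple of case characteristics (per the "6D state space" claim); here a string key stands in for it.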
Data/quick_demo/trained_rl_agent.pkl
DELETED
Binary file (4.32 kB)
Data/quick_demo/visualizations/performance_charts.md
DELETED
@@ -1,7 +0,0 @@
-# Performance Visualizations
-
-Generated charts showing:
-- Daily disposal rates
-- Court utilization over time
-- Case type performance
-- Load balancing effectiveness
HACKATHON_SUBMISSION.md
DELETED
@@ -1,264 +0,0 @@
-# Hackathon Submission Guide
-## Intelligent Court Scheduling System with Reinforcement Learning
-
-### Quick Start - Hackathon Demo
-
-#### Option 1: Full Workflow (Recommended)
-```bash
-# Run complete pipeline: generate cases + simulate
-uv run court-scheduler workflow --cases 50000 --days 730
-```
-
-This executes:
-- EDA parameter extraction (if needed)
-- Case generation with realistic distributions
-- Multi-year simulation with policy comparison
-- Performance analysis and reporting
-
-#### Option 2: Quick Demo
-```bash
-# 90-day quick demo with 10,000 cases
-uv run court-scheduler workflow --cases 10000 --days 90
-```
-
-#### Option 3: Step-by-Step
-```bash
-# 1. Extract parameters from historical data
-uv run court-scheduler eda
-
-# 2. Generate synthetic cases
-uv run court-scheduler generate --cases 50000
-
-# 3. Train RL agent (optional)
-uv run court-scheduler train --episodes 100
-
-# 4. Run simulation
-uv run court-scheduler simulate --cases data/cases.csv --days 730 --policy readiness
-```
-
-### What the Pipeline Does
-
-The comprehensive pipeline executes 7 automated steps:
-
-**Step 1: EDA & Parameter Extraction**
-- Analyzes 739K+ historical hearings
-- Extracts transition probabilities, duration statistics
-- Generates simulation parameters
-
-**Step 2: Data Generation**
-- Creates realistic synthetic case dataset
-- Configurable size (default: 50,000 cases)
-- Diverse case types and complexity levels
-
-**Step 3: RL Training**
-- Trains Tabular Q-learning agent
-- Real-time progress monitoring with reward tracking
-- Configurable episodes and hyperparameters
-
-**Step 4: 2-Year Simulation**
-- Runs 730-day court scheduling simulation
-- Compares RL agent vs baseline algorithms
-- Tracks disposal rates, utilization, fairness metrics
-
-**Step 5: Daily Cause List Generation**
-- Generates production-ready daily cause lists
-- Exports for all simulation days
-- Court-room wise scheduling details
-
-**Step 6: Performance Analysis**
-- Comprehensive comparison reports
-- Performance visualizations
-- Statistical analysis of all metrics
-
-**Step 7: Executive Summary**
-- Hackathon-ready summary document
-- Key achievements and impact metrics
-- Deployment readiness checklist
-
-### Expected Output
-
-After completion, you'll find in your output directory:
-
-```
-data/hackathon_run/
-|-- pipeline_config.json    # Full configuration used
-|-- training_cases.csv      # Generated case dataset
-|-- trained_rl_agent.pkl    # Trained RL model
-|-- EXECUTIVE_SUMMARY.md    # Hackathon submission summary
-|-- COMPARISON_REPORT.md    # Detailed performance comparison
-|-- simulation_rl/          # RL policy results
-|-- events.csv
-|-- metrics.csv
-|-- report.txt
-|-- cause_lists/
-|-- daily_cause_list.csv    # 730 days of cause lists
-|-- simulation_readiness/   # Baseline results
-|-- ...
-|-- visualizations/         # Performance charts
-|-- performance_charts.md
-```
-
-### Hackathon Winning Features
-
-#### 1. Real-World Impact
-- **52%+ Disposal Rate**: Demonstrable case clearance improvement
-- **730 Days of Cause Lists**: Ready for immediate court deployment
-- **Multi-Courtroom Support**: Load-balanced allocation across 5+ courtrooms
-- **Scalability**: Tested with 50,000+ cases
-
-#### 2. Technical Innovation
-- **Reinforcement Learning**: AI-powered adaptive scheduling
-- **6D State Space**: Comprehensive case characteristic modeling
-- **Hybrid Architecture**: Combines RL intelligence with rule-based constraints
-- **Real-time Learning**: Continuous improvement through experience
-
-#### 3. Production Readiness
-- **Interactive CLI**: User-friendly parameter configuration
-- **Comprehensive Reporting**: Executive summaries and detailed analytics
-- **Quality Assurance**: Validated against baseline algorithms
-- **Professional Output**: Court-ready cause lists and reports
-
-#### 4. Judicial Integration
-- **Ripeness Classification**: Filters unready cases (40%+ efficiency gain)
-- **Fairness Metrics**: Low Gini coefficient for equitable distribution
-- **Transparency**: Explainable decision-making process
-- **Override Capability**: Complete judicial control maintained
-
-### Performance Benchmarks
-
-Based on comprehensive testing:
-
-| Metric | RL Agent | Baseline | Advantage |
-|--------|----------|----------|-----------|
-| Disposal Rate | 52.1% | 51.9% | +0.4% |
-| Court Utilization | 85%+ | 85%+ | Comparable |
-| Load Balance (Gini) | 0.248 | 0.243 | Comparable |
-| Scalability | 50K cases | 50K cases | Yes |
-| Adaptability | High | Fixed | High |
-
-### Customization Options
-
-#### For Hackathon Judges
-```bash
-# Large-scale impressive demo
-uv run court-scheduler workflow --cases 100000 --days 730
-
-# With all policies compared
-uv run court-scheduler simulate --cases data/cases.csv --days 730 --policy readiness
-uv run court-scheduler simulate --cases data/cases.csv --days 730 --policy fifo
-uv run court-scheduler simulate --cases data/cases.csv --days 730 --policy age
-```
-
-#### For Technical Evaluation
-```bash
-# Focus on RL training quality
-uv run court-scheduler train --episodes 200 --lr 0.12 --cases 500 --output models/intensive_agent.pkl
-
-# Then simulate with trained agent
-uv run court-scheduler simulate --cases data/cases.csv --days 730 --policy rl --agent models/intensive_agent.pkl
-```
-
-#### For Quick Demo/Testing
-```bash
-# Fast proof-of-concept
-uv run court-scheduler workflow --cases 10000 --days 90
-
-# Pre-configured:
-# - 10,000 cases
-# - 90 days simulation
-# - ~5-10 minutes runtime
-```
-
-### Tips for Winning Presentation
-
-1. **Start with the Problem**
-   - Show Karnataka High Court case pendency statistics
-   - Explain judicial efficiency challenges
-   - Highlight manual scheduling limitations
-
-2. **Demonstrate the Solution**
-   - Run the interactive pipeline live
-   - Show real-time RL training progress
-   - Display generated cause lists
-
-3. **Present the Results**
-   - Open EXECUTIVE_SUMMARY.md
-   - Highlight key achievements from comparison table
-   - Show actual cause list files (730 days ready)
-
-4. **Emphasize Innovation**
-   - Reinforcement Learning for judicial scheduling (novel)
-   - Production-ready from day 1 (practical)
-   - Scalable to entire court system (impactful)
-
-5. **Address Concerns**
-   - Judicial oversight: Complete override capability
-   - Fairness: Low Gini coefficients, transparent metrics
-   - Reliability: Tested against proven baselines
-   - Deployment: Ready-to-use cause lists generated
-
-### System Requirements
-
-- **Python**: 3.10+ with UV
-- **Memory**: 8GB+ RAM (16GB recommended for 50K cases)
-- **Storage**: 2GB+ for full pipeline outputs
-- **Runtime**:
-  - Quick demo: 5-10 minutes
-  - Full 2-year sim (50K cases): 30-60 minutes
-  - Large-scale (100K cases): 1-2 hours
-
-### Troubleshooting
-
-**Issue**: Out of memory during simulation
-**Solution**: Reduce n_cases to 10,000-20,000 or increase system RAM
-
-**Issue**: RL training very slow
-**Solution**: Reduce episodes to 50 or cases_per_episode to 500
-
-**Issue**: EDA parameters not found
-**Solution**: Run `uv run court-scheduler eda` first
-
-**Issue**: Import errors
-**Solution**: Ensure UV environment is activated, run `uv sync`
-
-### Advanced Configuration
-
-For fine-tuned control, use configuration files:
-
-```bash
-# Create configs/ directory with TOML files
-# Example: configs/generate_config.toml
-# [generation]
-# n_cases = 50000
-# start_date = "2022-01-01"
-# end_date = "2023-12-31"
-
-# Then run with config
-uv run court-scheduler generate --config configs/generate_config.toml
-uv run court-scheduler simulate --config configs/simulate_config.toml
-```
-
-Or use command-line options:
-```bash
-# Full customization
-uv run court-scheduler workflow \
-  --cases 50000 \
-  --days 730 \
-  --start 2022-01-01 \
-  --end 2023-12-31 \
-  --output data/custom_run \
-  --seed 42
-```
-
-### Contact & Support
-
-For hackathon questions or technical support:
-- Review PIPELINE.md for detailed architecture
-- Check README.md for system overview
-- See rl/README.md for RL-specific documentation
-
----
-
-**Good luck with your hackathon submission!**
-
-This system represents a genuine breakthrough in applying AI to judicial efficiency. The combination of production-ready cause lists, proven performance metrics, and innovative RL architecture positions this as a compelling winning submission.
README.md
CHANGED
@@ -75,9 +75,31 @@ This project delivers a **comprehensive** court scheduling system featuring:
 
 ## Quick Start
 
-### 
-
+### Interactive Dashboard (Primary Interface)
+**For submission/demo, use the dashboard - it's fully self-contained:**
+
+```bash
+# Launch dashboard
+uv run streamlit run scheduler/dashboard/app.py
+
+# Open browser to http://localhost:8501
+```
+
+**The dashboard handles everything:**
+1. Run EDA pipeline (processes raw data, extracts parameters, generates visualizations)
+2. Explore historical data and parameters
+3. Test ripeness classification
+4. Generate cases and run simulations
+5. Review cause lists with judge override capability
+6. Train RL models
+7. Compare performance and generate reports
+
+**No CLI commands required** - everything is accessible through the web interface.
+
+### Alternative: Command Line Interface
+
+For automation or scripting, all operations available via CLI:
 
 ```bash
 # See all available commands
@@ -282,17 +304,10 @@ These fixes ensure that RL training is reproducible, aligned with evaluation con
 
 ## Documentation
 
-
-- `
-
-- `COMPREHENSIVE_ANALYSIS.md` - EDA findings and insights
-- `RIPENESS_VALIDATION.md` - Ripeness system validation results
-- `PIPELINE.md` - Complete development and deployment pipeline
-- `rl/README.md` - Reinforcement learning module documentation
-
-- `reports/figures/` - Parameter visualizations
-- `data/sim_runs/` - Simulation outputs and metrics
-- `configs/` - RL training configurations and profiles
+**Primary**: This README (complete user guide)
+**Additional**: `docs/` folder contains:
+- `DASHBOARD.md` - Dashboard usage and architecture
+- `CONFIGURATION.md` - Configuration system reference
+- `HACKATHON_SUBMISSION.md` - Hackathon-specific submission guide
 
+**Scripts**: See `scripts/README.md` for analysis utilities
cli/commands/__init__.py
DELETED
@@ -1 +0,0 @@
-"""CLI command modules."""
cli/config.py
CHANGED
@@ -10,7 +10,6 @@ from typing import Any, Dict, Optional
 
 from pydantic import BaseModel, Field, field_validator
 
-
 # Configuration Models
 
 class GenerateConfig(BaseModel):
cli/main.py
CHANGED
|
@@ -1,17 +1,15 @@
|
|
| 1 |
"""Unified CLI for Court Scheduling System.
|
| 2 |
|
| 3 |
-
This module provides a single entry point for
|
| 4 |
- EDA pipeline execution
|
| 5 |
- Case generation
|
| 6 |
-
- Simulation runs
|
| 7 |
-
- RL training
|
| 8 |
- Full workflow orchestration
|
| 9 |
"""
|
| 10 |
|
| 11 |
from __future__ import annotations
|
| 12 |
|
| 13 |
import sys
|
| 14 |
-
from datetime import date
|
| 15 |
from pathlib import Path
|
| 16 |
|
| 17 |
import typer
|
|
@@ -20,13 +18,20 @@ from rich.progress import Progress, SpinnerColumn, TextColumn
|
|
| 20 |
|
| 21 |
from cli import __version__
|
| 22 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 23 |
# Initialize Typer app and console
|
| 24 |
app = typer.Typer(
|
| 25 |
name="court-scheduler",
|
| 26 |
help="Court Scheduling System for Karnataka High Court",
|
| 27 |
add_completion=False,
|
| 28 |
)
|
| 29 |
-
|
|
|
|
| 30 |
|
| 31 |
|
| 32 |
@app.command()
|
|
@@ -37,15 +42,14 @@ def eda(
|
|
| 37 |
) -> None:
|
| 38 |
"""Run the EDA pipeline (load, explore, extract parameters)."""
|
| 39 |
console.print("[bold blue]Running EDA Pipeline[/bold blue]")
|
| 40 |
-
|
| 41 |
try:
|
| 42 |
# Import here to avoid loading heavy dependencies if not needed
|
| 43 |
-
from
|
| 44 |
-
from
|
| 45 |
-
from
|
| 46 |
-
|
| 47 |
with Progress(
|
| 48 |
-
SpinnerColumn(),
|
| 49 |
TextColumn("[progress.description]{task.description}"),
|
| 50 |
console=console,
|
| 51 |
) as progress:
|
|
@@ -53,23 +57,23 @@ def eda(
|
|
| 53 |
task = progress.add_task("Step 1/3: Load and clean data...", total=None)
|
| 54 |
run_load_and_clean()
|
| 55 |
progress.update(task, completed=True)
|
| 56 |
-
console.print("
|
| 57 |
-
|
| 58 |
if not skip_viz:
|
| 59 |
task = progress.add_task("Step 2/3: Generate visualizations...", total=None)
|
| 60 |
run_exploration()
|
| 61 |
progress.update(task, completed=True)
|
| 62 |
-
console.print("
|
| 63 |
-
|
| 64 |
if not skip_params:
|
| 65 |
task = progress.add_task("Step 3/3: Extract parameters...", total=None)
|
| 66 |
run_parameter_export()
|
| 67 |
progress.update(task, completed=True)
|
| 68 |
-
console.print("
|
| 69 |
-
|
| 70 |
-
console.print("\n[bold
|
| 71 |
console.print("Outputs: reports/figures/")
|
| 72 |
-
|
| 73 |
except Exception as e:
|
| 74 |
console.print(f"[bold red]Error:[/bold red] {e}")
|
| 75 |
raise typer.Exit(code=1)
|
|
@@ -77,21 +81,41 @@ def eda(
|
|
| 77 |
|
| 78 |
@app.command()
|
| 79 |
def generate(
|
| 80 |
-
config: Path = typer.Option(
|
| 81 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 82 |
n_cases: int = typer.Option(10000, "--cases", "-n", help="Number of cases to generate"),
|
| 83 |
start_date: str = typer.Option("2022-01-01", "--start", help="Start date (YYYY-MM-DD)"),
|
| 84 |
end_date: str = typer.Option("2023-12-31", "--end", help="End date (YYYY-MM-DD)"),
|
| 85 |
-
output: str = typer.Option(
|
|
|
|
|
|
|
| 86 |
seed: int = typer.Option(42, "--seed", help="Random seed for reproducibility"),
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 87 |
) -> None:
|
| 88 |
"""Generate synthetic test cases for simulation."""
|
| 89 |
console.print(f"[bold blue]Generating {n_cases:,} test cases[/bold blue]")
|
| 90 |
-
|
| 91 |
try:
|
| 92 |
from datetime import date as date_cls
|
|
|
|
|
|
|
| 93 |
from scheduler.data.case_generator import CaseGenerator
|
| 94 |
-
from cli.config import load_generate_config, GenerateConfig
|
| 95 |
|
| 96 |
# Resolve parameters: config -> interactive -> flags
|
| 97 |
if config:
|
|
@@ -115,23 +139,58 @@ def generate(
|
|
| 115 |
end = cfg.end
|
| 116 |
output_path = cfg.output
|
| 117 |
output_path.parent.mkdir(parents=True, exist_ok=True)
|
| 118 |
-
|
| 119 |
with Progress(
|
| 120 |
-
SpinnerColumn(),
|
| 121 |
TextColumn("[progress.description]{task.description}"),
|
| 122 |
console=console,
|
| 123 |
) as progress:
|
| 124 |
task = progress.add_task("Generating cases...", total=None)
|
| 125 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 126 |
gen = CaseGenerator(start=start, end=end, seed=seed)
|
| 127 |
-
cases = gen.generate(n_cases, stage_mix_auto=True)
|
|
|
|
| 128 |
CaseGenerator.to_csv(cases, output_path)
|
| 129 |
-
|
|
|
|
|
|
|
|
|
|
| 130 |
progress.update(task, completed=True)
|
| 131 |
-
|
| 132 |
-
console.print(f"
|
| 133 |
-
console.print(f"
|
| 134 |
-
|
|
|
|
| 135 |
except Exception as e:
|
| 136 |
console.print(f"[bold red]Error:[/bold red] {e}")
|
| 137 |
raise typer.Exit(code=1)
|
|
@@ -139,43 +198,60 @@ def generate(
|
|
| 139 |
|
| 140 |
@app.command()
|
| 141 |
def simulate(
|
| 142 |
-
config: Path = typer.Option(
|
| 143 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 144 |
cases_csv: str = typer.Option("data/generated/cases.csv", "--cases", help="Input cases CSV"),
|
| 145 |
days: int = typer.Option(384, "--days", "-d", help="Number of working days to simulate"),
|
| 146 |
start_date: str = typer.Option(None, "--start", help="Simulation start date (YYYY-MM-DD)"),
|
| 147 |
-
policy: str = typer.Option(
|
|
|
|
|
|
|
| 148 |
seed: int = typer.Option(42, "--seed", help="Random seed"),
|
| 149 |
log_dir: str = typer.Option(None, "--log-dir", "-o", help="Output directory for logs"),
|
| 150 |
) -> None:
|
| 151 |
"""Run court scheduling simulation."""
|
| 152 |
console.print(f"[bold blue]Running {days}-day simulation[/bold blue]")
|
| 153 |
-
|
| 154 |
try:
|
| 155 |
from datetime import date as date_cls
|
|
|
|
|
|
|
| 156 |
from scheduler.core.case import CaseStatus
|
| 157 |
from scheduler.data.case_generator import CaseGenerator
|
| 158 |
from scheduler.metrics.basic import gini
|
| 159 |
from scheduler.simulation.engine import CourtSim, CourtSimConfig
|
| 160 |
-
|
| 161 |
-
|
| 162 |
# Resolve parameters: config -> interactive -> flags
|
| 163 |
if config:
|
| 164 |
scfg = load_simulate_config(config)
|
| 165 |
# CLI flags override config if provided
|
| 166 |
-
scfg = scfg.model_copy(
|
| 167 |
-
|
| 168 |
-
|
| 169 |
-
|
| 170 |
-
|
| 171 |
-
|
| 172 |
-
|
| 173 |
-
|
|
|
|
|
|
|
| 174 |
else:
|
| 175 |
if interactive:
|
| 176 |
cases_csv = typer.prompt("Cases CSV", default=cases_csv)
|
| 177 |
days = typer.prompt("Days to simulate", default=days)
|
| 178 |
-
start_date =
|
|
|
|
|
|
|
|
|
|
| 179 |
policy = typer.prompt("Policy [readiness|fifo|age]", default=policy)
|
| 180 |
seed = typer.prompt("Random seed", default=seed)
|
| 181 |
log_dir = typer.prompt("Log dir (or blank)", default=log_dir or "") or None
|
|
@@ -198,7 +274,7 @@ def simulate(
|
|
| 198 |
start = scfg.start or date_cls.today().replace(day=1)
|
| 199 |
gen = CaseGenerator(start=start, end=start.replace(day=28), seed=scfg.seed)
|
| 200 |
cases = gen.generate(n_cases=5 * 151)
|
| 201 |
-
|
| 202 |
# Run simulation
|
| 203 |
cfg = CourtSimConfig(
|
| 204 |
start=start,
|
|
@@ -208,7 +284,7 @@ def simulate(
|
|
| 208 |
duration_percentile="median",
|
| 209 |
log_dir=scfg.log_dir,
|
| 210 |
)
|
| 211 |
-
|
| 212 |
with Progress(
|
| 213 |
SpinnerColumn(),
|
| 214 |
TextColumn("[progress.description]{task.description}"),
|
|
@@ -218,94 +294,46 @@ def simulate(
|
|
| 218 |
sim = CourtSim(cfg, cases)
|
| 219 |
res = sim.run()
|
| 220 |
progress.update(task, completed=True)
|
| 221 |
-
|
| 222 |
# Display results
|
| 223 |
console.print("\n[bold green]Simulation Complete![/bold green]")
|
| 224 |
-
console.print(f"\nHorizon: {cfg.start}
|
| 225 |
-
console.print(
|
| 226 |
console.print(f" Total: {res.hearings_total:,}")
|
| 227 |
-
console.print(
|
| 228 |
-
|
| 229 |
-
|
| 230 |
-
|
| 231 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 232 |
```diff
             gini_disp = gini(disp_times) if disp_times else 0.0
-
-            console.print(
-            console.print(f"  Cases disposed: {res.disposals:,} ({res.disposals/len(cases):.1%})")
             console.print(f"  Gini coefficient: {gini_disp:.3f}")
-
-            console.print(
             console.print(f"  Utilization: {res.utilization:.1%}")
-            console.print(f"  Avg hearings/day: {res.hearings_total/days:.1f}")
-
         if log_dir:
-            console.print(
             console.print(f"  - {log_dir}/report.txt")
             console.print(f"  - {log_dir}/metrics.csv")
             console.print(f"  - {log_dir}/events.csv")
-
     except Exception as e:
         console.print(f"[bold red]Error:[/bold red] {e}")
         raise typer.Exit(code=1)
 
 
-
-def train(
-    episodes: int = typer.Option(20, "--episodes", "-e", help="Number of training episodes"),
-    cases_per_episode: int = typer.Option(200, "--cases", "-n", help="Cases per episode"),
-    learning_rate: float = typer.Option(0.15, "--lr", help="Learning rate"),
-    epsilon: float = typer.Option(0.4, "--epsilon", help="Initial epsilon for exploration"),
-    output: str = typer.Option("models/rl_agent.pkl", "--output", "-o", help="Output model file"),
-    seed: int = typer.Option(42, "--seed", help="Random seed"),
-) -> None:
-    """Train RL agent for case scheduling."""
-    console.print(f"[bold blue]Training RL Agent ({episodes} episodes)[/bold blue]")
-
-    try:
-        from rl.simple_agent import TabularQAgent
-        from rl.training import train_agent
-        from rl.config import RLTrainingConfig
-        import pickle
-
-        # Create agent
-        agent = TabularQAgent(learning_rate=learning_rate, epsilon=epsilon, discount=0.95)
-
-        # Configure training
-        config = RLTrainingConfig(
-            episodes=episodes,
-            cases_per_episode=cases_per_episode,
-            training_seed=seed,
-            initial_epsilon=epsilon,
-            learning_rate=learning_rate,
-        )
-
-        with Progress(
-            SpinnerColumn(),
-            TextColumn("[progress.description]{task.description}"),
-            console=console,
-        ) as progress:
-            task = progress.add_task(f"Training {episodes} episodes...", total=None)
-            stats = train_agent(agent, rl_config=config, verbose=False)
-            progress.update(task, completed=True)
-
-        # Save model
-        output_path = Path(output)
-        output_path.parent.mkdir(parents=True, exist_ok=True)
-        with output_path.open("wb") as f:
-            pickle.dump(agent, f)
-
-        console.print("\n[bold green]\u2713 Training Complete![/bold green]")
-        console.print(f"\nFinal Statistics:")
-        console.print(f"  Episodes: {len(stats['episodes'])}")
-        console.print(f"  Final disposal rate: {stats['disposal_rates'][-1]:.1%}")
-        console.print(f"  States explored: {stats['states_explored'][-1]:,}")
-        console.print(f"  Q-table size: {len(agent.q_table):,}")
-        console.print(f"\nModel saved to: {output_path}")
-
-    except Exception as e:
-        console.print(f"[bold red]Error:[/bold red] {e}")
-        raise typer.Exit(code=1)
```
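The removed `train` command relied on `rl.simple_agent.TabularQAgent`, whose implementation is not shown in this diff. For context, the core update a tabular Q-learning agent performs on each step can be sketched as follows; the names here are hypothetical, only the learning-rate (0.15) and discount (0.95) defaults come from the removed command:

```python
from collections import defaultdict

def q_update(q_table, state, action, reward, next_state, next_actions,
             learning_rate=0.15, discount=0.95):
    """One tabular Q-learning step: Q(s,a) += lr * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
    # Best estimated value of the successor state over the available actions.
    best_next = max((q_table[(next_state, a)] for a in next_actions), default=0.0)
    td_target = reward + discount * best_next
    q_table[(state, action)] += learning_rate * (td_target - q_table[(state, action)])
    return q_table[(state, action)]

# defaultdict(float) gives unseen (state, action) pairs an initial Q of 0.0.
q = defaultdict(float)
q_update(q, "backlog_high", "schedule_ripe", reward=1.0,
         next_state="backlog_mid", next_actions=["schedule_ripe", "schedule_fifo"])
```

With an empty table, the target is `1.0 + 0.95 * 0.0`, so the updated entry is `0.15 * 1.0 = 0.15`.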
```diff
 @app.command()
@@ -317,33 +345,34 @@ def workflow(
 ) -> None:
     """Run full workflow: EDA -> Generate -> Simulate -> Report."""
     console.print("[bold blue]Running Full Workflow[/bold blue]\n")
-
     output_path = Path(output_dir)
     output_path.mkdir(parents=True, exist_ok=True)
-
     try:
         # Step 1: EDA (skip if already done recently)
         console.print("[bold]Step 1/3:[/bold] EDA Pipeline")
         console.print("  Skipping (use 'court-scheduler eda' to regenerate)\n")
-
         # Step 2: Generate cases
         console.print("[bold]Step 2/3:[/bold] Generate Cases")
         cases_file = output_path / "cases.csv"
         from datetime import date as date_cls
         from scheduler.data.case_generator import CaseGenerator
-
         start = date_cls(2022, 1, 1)
         end = date_cls(2023, 12, 31)
-
         gen = CaseGenerator(start=start, end=end, seed=seed)
         cases = gen.generate(n_cases, stage_mix_auto=True)
         CaseGenerator.to_csv(cases, cases_file)
-        console.print(f"
-
         # Step 3: Run simulation
         console.print("[bold]Step 3/3:[/bold] Run Simulation")
         from scheduler.simulation.engine import CourtSim, CourtSimConfig
-
         sim_start = max(c.filed_date for c in cases)
         cfg = CourtSimConfig(
             start=sim_start,
@@ -352,19 +381,19 @@ def workflow(
             policy="readiness",
             log_dir=output_path,
         )
-
         sim = CourtSim(cfg, cases)
-
-        console.print(
-
         # Summary
-        console.print("[bold
         console.print(f"\nResults: {output_path}/")
         console.print(f"  - cases.csv ({len(cases):,} cases)")
-        console.print(
-        console.print(
-        console.print(
-
     except Exception as e:
         console.print(f"[bold red]Error:[/bold red] {e}")
         raise typer.Exit(code=1)
@@ -379,18 +408,18 @@ def dashboard(
     console.print("[bold blue]Launching Interactive Dashboard[/bold blue]")
     console.print(f"Dashboard will be available at: http://{host}:{port}")
     console.print("Press Ctrl+C to stop the dashboard\n")
-
     try:
         import subprocess
         import sys
-
         # Get path to dashboard app
         app_path = Path(__file__).parent.parent / "scheduler" / "dashboard" / "app.py"
-
         if not app_path.exists():
             console.print(f"[bold red]Error:[/bold red] Dashboard app not found at {app_path}")
             raise typer.Exit(code=1)
-
         # Run streamlit
         cmd = [
             sys.executable,
@@ -405,9 +434,9 @@ def dashboard(
             "--browser.gatherUsageStats",
             "false",
         ]
-
        subprocess.run(cmd)
-
    except KeyboardInterrupt:
        console.print("\n[yellow]Dashboard stopped[/yellow]")
    except Exception as e:
```
```diff
 """Unified CLI for Court Scheduling System.
 
+This module provides a single entry point for key court scheduling operations:
 - EDA pipeline execution
 - Case generation
+- Simulation runs
 - Full workflow orchestration
 """
 
 from __future__ import annotations
 
 import sys
 from pathlib import Path
 
 import typer
@@
 from cli import __version__
 
+try:
+    sys.stdout.reconfigure(encoding="utf-8")
+    sys.stderr.reconfigure(encoding="utf-8")
+except Exception:
+    pass
+
 # Initialize Typer app and console
 app = typer.Typer(
     name="court-scheduler",
     help="Court Scheduling System for Karnataka High Court",
     add_completion=False,
 )
+# Use legacy_windows=False to avoid legacy Windows rendering issues with Unicode
+console = Console(legacy_windows=False)
 
 
 @app.command()
```
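The bare `try/except` around the stream reconfiguration above exists because `reconfigure` is only available on `io.TextIOWrapper` streams (Python 3.7+); when stdout or stderr has been replaced, for example by a test runner's capture object, the call raises. A minimal sketch of the same guard as a helper (the function name is illustrative, not from the codebase):

```python
import sys

def force_utf8_streams() -> bool:
    """Best-effort switch of stdout/stderr to UTF-8.

    Returns True if both streams were reconfigured, False if either
    stream does not support reconfigure (e.g. it was replaced by a
    capture or redirection object without that method).
    """
    try:
        sys.stdout.reconfigure(encoding="utf-8")
        sys.stderr.reconfigure(encoding="utf-8")
        return True
    except Exception:
        return False

ok = force_utf8_streams()
```

Swallowing the failure is deliberate: UTF-8 output is a nicety, and the CLI should still run when it cannot be arranged.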
```diff
@@
 ) -> None:
     """Run the EDA pipeline (load, explore, extract parameters)."""
     console.print("[bold blue]Running EDA Pipeline[/bold blue]")
+
     try:
         # Import here to avoid loading heavy dependencies if not needed
+        from eda.exploration import run_exploration
+        from eda.load_clean import run_load_and_clean
+        from eda.parameters import run_parameter_export
+
         with Progress(
             TextColumn("[progress.description]{task.description}"),
             console=console,
         ) as progress:
@@
             task = progress.add_task("Step 1/3: Load and clean data...", total=None)
             run_load_and_clean()
             progress.update(task, completed=True)
+            console.print("Data loaded and cleaned")
+
             if not skip_viz:
                 task = progress.add_task("Step 2/3: Generate visualizations...", total=None)
                 run_exploration()
                 progress.update(task, completed=True)
+                console.print("Visualizations generated")
+
             if not skip_params:
                 task = progress.add_task("Step 3/3: Extract parameters...", total=None)
                 run_parameter_export()
                 progress.update(task, completed=True)
+                console.print("Parameters extracted")
+
+        console.print("\n[bold]EDA Pipeline Complete[/bold]")
         console.print("Outputs: reports/figures/")
+
     except Exception as e:
         console.print(f"[bold red]Error:[/bold red] {e}")
         raise typer.Exit(code=1)
```
```diff
 @app.command()
 def generate(
+    config: Path = typer.Option(  # noqa: B008
+        None,
+        "--config",
+        exists=True,
+        dir_okay=False,
+        readable=True,
+        help="Path to config (.toml or .json)",
+    ),
+    interactive: bool = typer.Option(
+        False, "--interactive", help="Prompt for parameters interactively"
+    ),
     n_cases: int = typer.Option(10000, "--cases", "-n", help="Number of cases to generate"),
     start_date: str = typer.Option("2022-01-01", "--start", help="Start date (YYYY-MM-DD)"),
     end_date: str = typer.Option("2023-12-31", "--end", help="End date (YYYY-MM-DD)"),
+    output: str = typer.Option(
+        "data/generated/cases.csv", "--output", "-o", help="Output CSV file"
+    ),
     seed: int = typer.Option(42, "--seed", help="Random seed for reproducibility"),
+    case_type_dist: str = typer.Option(
+        None,
+        "--case-type-dist",
+        help=(
+            'Custom case type distribution. Accepts JSON (e.g., \'{"Writ":0.6,"Civil":0.4}\') '
+            "or comma-separated pairs 'Writ:0.6,Civil:0.4'. Defaults to historical distribution."
+        ),
+    ),
 ) -> None:
     """Generate synthetic test cases for simulation."""
     console.print(f"[bold blue]Generating {n_cases:,} test cases[/bold blue]")
+
     try:
         from datetime import date as date_cls
+
+        from cli.config import GenerateConfig, load_generate_config
         from scheduler.data.case_generator import CaseGenerator
 
         # Resolve parameters: config -> interactive -> flags
         if config:
@@
             end = cfg.end
             output_path = cfg.output
         output_path.parent.mkdir(parents=True, exist_ok=True)
+
         with Progress(
             TextColumn("[progress.description]{task.description}"),
             console=console,
         ) as progress:
             task = progress.add_task("Generating cases...", total=None)
+
+            # Parse optional custom case type distribution
+            def _parse_case_type_dist(s: str | None) -> dict | None:
+                if not s:
+                    return None
+                s = s.strip()
+                try:
+                    import json
+
+                    obj = json.loads(s)
+                    if isinstance(obj, dict):
+                        return obj
+                except Exception:
+                    pass
+                # Try comma-separated pairs format
+                parts = [p.strip() for p in s.split(",") if p.strip()]
+                dist: dict[str, float] = {}
+                for part in parts:
+                    if ":" not in part:
+                        continue
+                    k, v = part.split(":", 1)
+                    k = k.strip()
+                    try:
+                        val = float(v)
+                    except ValueError:
+                        continue
+                    if k:
+                        dist[k] = val
+                return dist or None
+
+            user_dist = _parse_case_type_dist(case_type_dist)
+
             gen = CaseGenerator(start=start, end=end, seed=seed)
+            cases = gen.generate(n_cases, stage_mix_auto=True, case_type_distribution=user_dist)
+            # Write primary cases file
             CaseGenerator.to_csv(cases, output_path)
+            # Also write detailed hearings history alongside, for the dashboard/classifier
+            hearings_path = output_path.parent / "hearings.csv"
+            CaseGenerator.to_hearings_csv(cases, hearings_path)
+
             progress.update(task, completed=True)
+
+        console.print(f"Generated {len(cases):,} cases")
+        console.print(f"Saved to: {output_path}")
+        console.print(f"Hearings history: {hearings_path}")
+
     except Exception as e:
         console.print(f"[bold red]Error:[/bold red] {e}")
         raise typer.Exit(code=1)
```
```diff
 @app.command()
 def simulate(
+    config: Path = typer.Option(
+        None,
+        "--config",
+        exists=True,
+        dir_okay=False,
+        readable=True,
+        help="Path to config (.toml or .json)",
+    ),
+    interactive: bool = typer.Option(
+        False, "--interactive", help="Prompt for parameters interactively"
+    ),
     cases_csv: str = typer.Option("data/generated/cases.csv", "--cases", help="Input cases CSV"),
     days: int = typer.Option(384, "--days", "-d", help="Number of working days to simulate"),
     start_date: str = typer.Option(None, "--start", help="Simulation start date (YYYY-MM-DD)"),
+    policy: str = typer.Option(
+        "readiness", "--policy", "-p", help="Scheduling policy (fifo/age/readiness)"
+    ),
     seed: int = typer.Option(42, "--seed", help="Random seed"),
     log_dir: str = typer.Option(None, "--log-dir", "-o", help="Output directory for logs"),
 ) -> None:
     """Run court scheduling simulation."""
     console.print(f"[bold blue]Running {days}-day simulation[/bold blue]")
+
     try:
         from datetime import date as date_cls
+
+        from cli.config import SimulateConfig, load_simulate_config
         from scheduler.core.case import CaseStatus
         from scheduler.data.case_generator import CaseGenerator
         from scheduler.metrics.basic import gini
         from scheduler.simulation.engine import CourtSim, CourtSimConfig
+
         # Resolve parameters: config -> interactive -> flags
         if config:
             scfg = load_simulate_config(config)
             # CLI flags override config if provided
+            scfg = scfg.model_copy(
+                update={
+                    "cases": Path(cases_csv) if cases_csv else scfg.cases,
+                    "days": days if days else scfg.days,
+                    "start": (date_cls.fromisoformat(start_date) if start_date else scfg.start),
+                    "policy": policy if policy else scfg.policy,
+                    "seed": seed if seed else scfg.seed,
+                    "log_dir": (Path(log_dir) if log_dir else scfg.log_dir),
+                }
+            )
         else:
             if interactive:
                 cases_csv = typer.prompt("Cases CSV", default=cases_csv)
                 days = typer.prompt("Days to simulate", default=days)
+                start_date = (
+                    typer.prompt("Start date (YYYY-MM-DD) or blank", default=start_date or "")
+                    or None
+                )
                 policy = typer.prompt("Policy [readiness|fifo|age]", default=policy)
                 seed = typer.prompt("Random seed", default=seed)
                 log_dir = typer.prompt("Log dir (or blank)", default=log_dir or "") or None
```
```diff
@@
             start = scfg.start or date_cls.today().replace(day=1)
             gen = CaseGenerator(start=start, end=start.replace(day=28), seed=scfg.seed)
             cases = gen.generate(n_cases=5 * 151)
+
         # Run simulation
         cfg = CourtSimConfig(
             start=start,
@@
             duration_percentile="median",
             log_dir=scfg.log_dir,
         )
+
         with Progress(
             SpinnerColumn(),
             TextColumn("[progress.description]{task.description}"),
@@
             sim = CourtSim(cfg, cases)
             res = sim.run()
             progress.update(task, completed=True)
+
         # Display results
         console.print("\n[bold green]Simulation Complete![/bold green]")
+        console.print(f"\nHorizon: {cfg.start} -> {res.end_date} ({days} days)")
+        console.print("\n[bold]Hearing Metrics:[/bold]")
         console.print(f"  Total: {res.hearings_total:,}")
+        console.print(
+            f"  Heard: {res.hearings_heard:,} ({res.hearings_heard / max(1, res.hearings_total):.1%})"
+        )
+        console.print(
+            f"  Adjourned: {res.hearings_adjourned:,} ({res.hearings_adjourned / max(1, res.hearings_total):.1%})"
+        )
+
+        disp_times = [
+            (c.disposal_date - c.filed_date).days
+            for c in cases
+            if c.disposal_date is not None and c.status == CaseStatus.DISPOSED
+        ]
         gini_disp = gini(disp_times) if disp_times else 0.0
+
+        console.print("\n[bold]Disposal Metrics:[/bold]")
+        console.print(f"  Cases disposed: {res.disposals:,} ({res.disposals / len(cases):.1%})")
         console.print(f"  Gini coefficient: {gini_disp:.3f}")
+
+        console.print("\n[bold]Efficiency:[/bold]")
         console.print(f"  Utilization: {res.utilization:.1%}")
+        console.print(f"  Avg hearings/day: {res.hearings_total / days:.1f}")
+
         if log_dir:
+            console.print("\n[bold cyan]Output Files:[/bold cyan]")
             console.print(f"  - {log_dir}/report.txt")
             console.print(f"  - {log_dir}/metrics.csv")
             console.print(f"  - {log_dir}/events.csv")
+
     except Exception as e:
         console.print(f"[bold red]Error:[/bold red] {e}")
         raise typer.Exit(code=1)
 
 
+# RL training command removed
```
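The disposal report computes a Gini coefficient over days-to-disposal via `scheduler.metrics.basic.gini`, whose implementation is not shown in this diff. For reference, a standard sample Gini coefficient over non-negative values looks like this (a sketch; the project's version may differ in edge-case handling):

```python
def gini(values: list[float]) -> float:
    """Gini coefficient of non-negative values (0 = perfect equality)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = (2 * sum_i i*x_i) / (n * sum x) - (n + 1) / n, with i = 1..n over sorted x
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n
```

In this context a low value means disposal times are spread evenly across cases, while a high value means a few cases wait far longer than the rest, which is why the CLI guards with `gini(disp_times) if disp_times else 0.0`.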
```diff
 @app.command()
@@
 ) -> None:
     """Run full workflow: EDA -> Generate -> Simulate -> Report."""
     console.print("[bold blue]Running Full Workflow[/bold blue]\n")
+
     output_path = Path(output_dir)
     output_path.mkdir(parents=True, exist_ok=True)
+
     try:
         # Step 1: EDA (skip if already done recently)
         console.print("[bold]Step 1/3:[/bold] EDA Pipeline")
         console.print("  Skipping (use 'court-scheduler eda' to regenerate)\n")
+
         # Step 2: Generate cases
         console.print("[bold]Step 2/3:[/bold] Generate Cases")
         cases_file = output_path / "cases.csv"
         from datetime import date as date_cls
+
         from scheduler.data.case_generator import CaseGenerator
+
         start = date_cls(2022, 1, 1)
         end = date_cls(2023, 12, 31)
+
         gen = CaseGenerator(start=start, end=end, seed=seed)
         cases = gen.generate(n_cases, stage_mix_auto=True)
         CaseGenerator.to_csv(cases, cases_file)
+        console.print(f"  Generated {len(cases):,} cases\n")
+
         # Step 3: Run simulation
         console.print("[bold]Step 3/3:[/bold] Run Simulation")
         from scheduler.simulation.engine import CourtSim, CourtSimConfig
+
         sim_start = max(c.filed_date for c in cases)
         cfg = CourtSimConfig(
             start=sim_start,
@@
             policy="readiness",
             log_dir=output_path,
         )
+
         sim = CourtSim(cfg, cases)
+        sim.run()
+        console.print("  Simulation complete\n")
+
         # Summary
+        console.print("[bold]Workflow Complete[/bold]")
         console.print(f"\nResults: {output_path}/")
         console.print(f"  - cases.csv ({len(cases):,} cases)")
+        console.print("  - report.txt (simulation summary)")
+        console.print("  - metrics.csv (daily metrics)")
+        console.print("  - events.csv (event log)")
+
     except Exception as e:
         console.print(f"[bold red]Error:[/bold red] {e}")
         raise typer.Exit(code=1)
@@
     console.print("[bold blue]Launching Interactive Dashboard[/bold blue]")
     console.print(f"Dashboard will be available at: http://{host}:{port}")
     console.print("Press Ctrl+C to stop the dashboard\n")
+
     try:
         import subprocess
         import sys
+
         # Get path to dashboard app
         app_path = Path(__file__).parent.parent / "scheduler" / "dashboard" / "app.py"
+
         if not app_path.exists():
             console.print(f"[bold red]Error:[/bold red] Dashboard app not found at {app_path}")
             raise typer.Exit(code=1)
+
         # Run streamlit
         cmd = [
             sys.executable,
@@
             "--browser.gatherUsageStats",
             "false",
         ]
+
         subprocess.run(cmd)
+
     except KeyboardInterrupt:
         console.print("\n[yellow]Dashboard stopped[/yellow]")
     except Exception as e:
```
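The `dashboard` command shells out to Streamlit with `subprocess.run` rather than importing it, which keeps Streamlit out of the CLI's import path and makes Ctrl+C handling straightforward. The general pattern, using a `python -c` payload as a stand-in so the sketch runs without Streamlit installed:

```python
import subprocess
import sys

# The real command swaps the -c payload for something like:
#   [sys.executable, "-m", "streamlit", "run", str(app_path),
#    "--server.port", str(port), "--browser.gatherUsageStats", "false"]
cmd = [sys.executable, "-c", "print('dashboard placeholder')"]

# capture_output/text are for the sketch only; the CLI lets Streamlit
# inherit the terminal so the user sees its startup banner.
result = subprocess.run(cmd, capture_output=True, text=True)
```

Using `sys.executable` rather than a bare `streamlit` on PATH ensures the subprocess runs in the same interpreter (and virtual environment) as the CLI itself.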
docs/DASHBOARD.md
CHANGED

````diff
@@ -1,404 +1,41 @@
-# Interactive Dashboard
 
-**Last Updated**: 2025-11-
-**Status**:
-**Version**:
 
-##
 
-This document tracks the design decisions, architecture, usage patterns, and evolution of the Interactive Multi-Page Dashboard for the Court Scheduling System.
-
-## Purpose and Goals
-
-The dashboard provides three key functionalities:
-1. **EDA Analysis** - Visualize and explore court case data patterns
-2. **Ripeness Classifier** - Interactive explainability and threshold tuning
-3. **RL Training** - Train and visualize reinforcement learning agents
-
-### Design Philosophy
-- Transparency: Every algorithm decision should be explainable
-- Interactivity: Users can adjust parameters and see immediate impact
-- Efficiency: Data caching to minimize load times
-- Integration: Seamless integration with existing CLI and modules
-
-## Architecture
-
-### Technology Stack
-
-**Framework**: Streamlit 1.28+
-- Chosen for rapid prototyping and native multi-page support
-- Built-in state management via `st.session_state`
-- Excellent integration with Plotly and Pandas/Polars
-
-**Visualization**: Plotly
-- Interactive charts (zoom, pan, hover)
-- Better aesthetics than Matplotlib for dashboards
-- Native Streamlit support
-
-**Data Processing**:
-- Polars for fast CSV loading
-- Pandas for compatibility with existing code
-- Caching with `@st.cache_data` decorator
-
````
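The removed document leaned on `@st.cache_data`, which memoizes a function by its arguments and serializes the return value. Outside Streamlit the core idea can be approximated with `functools.lru_cache`; this is a sketch only, since `st.cache_data` additionally hashes file contents, supports TTLs, and copies results to prevent mutation:

```python
from functools import lru_cache

CALLS = {"count": 0}  # instrumentation for the sketch, to show the cache working

@lru_cache(maxsize=8)
def load_cases(path: str) -> list[str]:
    """Stand-in for a cached CSV loader: same path -> cached result, no re-read."""
    CALLS["count"] += 1
    return [f"case-{i}" for i in range(3)]  # pretend this parsed `path`

first = load_cases("data/generated/cases.csv")
second = load_cases("data/generated/cases.csv")  # served from cache
```

This is why the pages feel fast after first load: repeated re-renders with the same inputs never touch disk again until the cache entry expires or is evicted.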
| 43 |
-
### Directory Structure
|
| 44 |
-
|
| 45 |
-
```
|
| 46 |
-
scheduler/
|
| 47 |
-
dashboard/
|
| 48 |
-
__init__.py # Package initialization
|
| 49 |
-
app.py # Main entry point (home page)
|
| 50 |
-
utils/
|
| 51 |
-
__init__.py
|
| 52 |
-
data_loader.py # Cached data loading functions
|
| 53 |
-
pages/
|
| 54 |
-
1_EDA_Analysis.py # EDA visualizations
|
| 55 |
-
2_Ripeness_Classifier.py # Ripeness explainability
|
| 56 |
-
3_RL_Training.py # RL training interface
|
| 57 |
-
```
|
| 58 |
-
|
| 59 |
-
### Module Reuse Strategy
|
| 60 |
-
|
| 61 |
-
The dashboard reuses existing components without duplication:
|
| 62 |
-
- `scheduler.data.param_loader.ParameterLoader` - Load EDA-derived parameters
|
| 63 |
-
- `scheduler.data.case_generator.CaseGenerator` - Load generated cases
|
| 64 |
-
- `scheduler.core.ripeness.RipenessClassifier` - Classification logic
|
| 65 |
-
- `scheduler.core.case.Case` - Case data structure
|
| 66 |
-
- `rl.training.train_agent()` - RL training (future integration)
|
| 67 |
-
|
| 68 |
-
## Page Implementations
|
| 69 |
-
|
| 70 |
-
### Page 1: EDA Analysis
|
| 71 |
-
|
| 72 |
-
**Features**:
|
| 73 |
-
- Key metrics dashboard (total cases, adjournment rates, stages)
|
| 74 |
-
- Interactive filters (case type, stage)
|
| 75 |
-
- Multiple visualizations:
|
| 76 |
-
- Case distribution by type (bar chart + pie chart)
|
| 77 |
-
- Stage analysis (bar chart + pie chart)
|
| 78 |
-
- Adjournment patterns (bar charts by type and stage)
|
| 79 |
-
- Adjournment probability heatmap (stage × case type)
|
| 80 |
-
- Raw data viewer with download capability
|
| 81 |
-
|
| 82 |
-
**Data Sources**:
|
| 83 |
-
- `Data/processed/cleaned_cases.csv` - Cleaned case data from EDA pipeline
|
| 84 |
-
- `configs/parameters/` - Pre-computed parameters from ParameterLoader
|
| 85 |
-
|
| 86 |
-
**Design Decisions**:
|
| 87 |
-
- Use tabs instead of separate sections for better organization
|
| 88 |
-
- Show top 10/15 items in charts to avoid clutter
|
| 89 |
-
- Provide download button for filtered data
|
| 90 |
-
- Cache data with 1-hour TTL to balance freshness and performance
|
| 91 |
-
|
| 92 |
-
### Page 2: Ripeness Classifier
|
| 93 |
-
|
| 94 |
-
**Features**:
|
| 95 |
-
- **Tab 1: Configuration**
|
| 96 |
-
- Display current thresholds
|
| 97 |
-
- Stage-specific rules table
|
| 98 |
-
- Decision tree logic explanation
|
| 99 |
-
- **Tab 2: Interactive Testing**
|
| 100 |
-
- Synthetic case creation
|
| 101 |
-
- Real-time classification with explanations
|
| 102 |
-
- Feature importance visualization
|
| 103 |
-
- Criteria pass/fail breakdown
|
| 104 |
-
- **Tab 3: Batch Classification**
|
| 105 |
-
- Load generated test cases
|
| 106 |
-
- Classify all with current thresholds
|
| 107 |
-
- Show distribution (RIPE/UNRIPE/UNKNOWN)
|
| 108 |
-
|
| 109 |
-
**State Management**:
|
| 110 |
-
- Thresholds stored in `st.session_state`
|
| 111 |
-
- Sidebar sliders for real-time adjustment
|
| 112 |
-
- Reset button to restore defaults
|
| 113 |
-
- Session-based (not persisted to disk)
|
| 114 |
-
|
| 115 |
-
**Explainability Approach**:
|
| 116 |
-
- Clear criteria breakdown (service hearings, case age, stage days, keywords)
|
| 117 |
-
- Visual indicators (✓/✗) for pass/fail
|
| 118 |
-
- Feature importance bar chart
|
| 119 |
-
- Before/after comparison capability
|
| 120 |
-
|
| 121 |
-
**Design Decisions**:
|
| 122 |
-
- Simplified classification logic for demo (uses basic criteria)
|
| 123 |
-
- Future: Integrate actual RipenessClassifier.classify_case()
|
| 124 |
-
- Stage-specific rules hardcoded for now (future: load from config)
|
| 125 |
-
- Color coding: green (RIPE), orange (UNKNOWN), red (UNRIPE)
|
| 126 |
-
|
| 127 |
-
### Page 3: RL Training
|
| 128 |
-
|
| 129 |
-
**Features**:
|
| 130 |
-
- **Tab 1: Train Agent**
|
| 131 |
-
- Configuration form (episodes, learning rate, epsilon, etc.)
|
| 132 |
-
- Training progress visualization (demo mode)
|
| 133 |
-
- Multiple live charts (disposal rate, rewards, states, epsilon decay)
|
| 134 |
-
- Command generation for CLI training
|
| 135 |
-
- **Tab 2: Training History**
|
| 136 |
-
- Load and display previous training runs
|
| 137 |
-
- Plot historical performance
|
| 138 |
-
- **Tab 3: Model Comparison**
|
| 139 |
-
- Load saved models from models/ directory
|
| 140 |
-
- Compare Q-table sizes and hyperparameters
|
| 141 |
-
- Visualization of model differences
|
| 142 |
-
|
| 143 |
-
**Demo Mode**:
|
| 144 |
-
- Current implementation simulates training results
|
| 145 |
-
- Generates synthetic stats for visualization
|
| 146 |
-
- Shows CLI command for actual training
|
| 147 |
-
- Future: Integrate real-time training with rl.training.train_agent()
|
| 148 |
-
|
| 149 |
-
**Design Decisions**:
|
| 150 |
-
- Demo mode chosen for initial release (no blocking UI during training)
|
| 151 |
-
- Future: Add async training with progress updates
|
| 152 |
-
- Hyperparameter guide in expander for educational value
|
| 153 |
-
- Model persistence via pickle (existing pattern)
|
| 154 |
-
|
| 155 |
-
## CLI Integration
|
| 156 |
-
|
| 157 |
-
### Command
|
| 158 |
```bash
|
| 159 |
-
uv run
|
|
|
|
| 160 |
```
|
| 161 |
|
| 162 |
-
|
| 163 |
-
|
| 164 |
-
**Implementation**:
|
| 165 |
-
- Added to `cli/main.py` as `@app.command()`
|
| 166 |
-
- Uses subprocess to launch Streamlit
|
| 167 |
-
- Validates dashboard app.py exists before launching
|
| 168 |
-
- Handles KeyboardInterrupt gracefully
|
| 169 |
-
|
| 170 |
-
**Usage Example**:
|
| 171 |
-
```bash
|
| 172 |
-
# Launch on default port
|
| 173 |
-
uv run court-scheduler dashboard
|
| 174 |
-
|
| 175 |
-
# Custom port
|
| 176 |
-
uv run court-scheduler dashboard --port 8080
|
| 177 |
-
|
| 178 |
-
# Bind to all interfaces
|
| 179 |
-
uv run court-scheduler dashboard --host 0.0.0.0 --port 8080
|
| 180 |
-
```
|
| 181 |
-
|
| 182 |
-
## Data Flow
|
| 183 |
-
|
| 184 |
-
### Loading Sequence
|
| 185 |
-
1. User launches dashboard via CLI
|
| 186 |
-
2. `app.py` loads, displays home page and system status
|
| 187 |
-
3. User navigates to a page (e.g., EDA Analysis)
|
| 188 |
-
4. Page imports data_loader utilities
|
| 189 |
-
5. `@st.cache_data` checks cache for data
|
| 190 |
-
6. If not cached, load from disk and cache
|
| 191 |
-
7. Data processed and visualized
|
| 192 |
-
8. User interactions trigger re-renders (cached data reused)
|
| 193 |
|
| 194 |
-
|
| 195 |
-
|
| 196 |
-
|
| 197 |
-
|
|
|
|
|
|
|
| 198 |
|
| 199 |
-
|
| 200 |
-
- Polars for fast CSV loading
|
| 201 |
-
- Limit DataFrame display to first 100 rows
|
| 202 |
-
- Top N filtering for visualizations (top 10/15)
|
| 203 |
-
- Lazy loading (pages only load data when accessed)
|
| 204 |
|
| 205 |
-
|
|
|
|
|
|
|
| 206 |
|
| 207 |
-
|
| 208 |
-
1. Run EDA pipeline: `uv run court-scheduler eda`
|
| 209 |
-
2. Launch dashboard: `uv run court-scheduler dashboard`
|
| 210 |
-
3. Navigate to EDA Analysis page
|
| 211 |
-
4. Apply filters (case type, stage)
|
| 212 |
-
5. Explore visualizations
|
| 213 |
-
6. Download filtered data if needed
|
| 214 |
|
| 215 |
-
|
| 216 |
-
|
| 217 |
-
|
| 218 |
-
3. Navigate to Ripeness Classifier page
|
| 219 |
-
4. Adjust thresholds in sidebar
|
| 220 |
-
5. Test with synthetic case (Tab 2)
|
| 221 |
-
6. Run batch classification (Tab 3)
|
| 222 |
-
7. Analyze impact on RIPE/UNRIPE distribution
|
| 223 |
-
|
| 224 |
-
### Typical Workflow 3: RL Training
|
| 225 |
-
1. Launch dashboard: `uv run court-scheduler dashboard`
|
| 226 |
-
2. Navigate to RL Training page
|
| 227 |
-
3. Configure hyperparameters (Tab 1)
|
| 228 |
-
4. Copy CLI command and run separately (or use demo)
|
| 229 |
-
5. Return to dashboard, view history (Tab 2)
|
| 230 |
-
6. Compare models (Tab 3)
|
| 231 |
-
|
| 232 |
-
## Future Enhancements
|
| 233 |
-
|
| 234 |
-
### Planned Features
|
| 235 |
-
- [ ] Real-time RL training integration (non-blocking)
|
| 236 |
-
- [ ] RipenessCalibrator integration (auto-suggest thresholds)
|
| 237 |
-
- [ ] RipenessMetrics tracking (false positive/negative rates)
|
| 238 |
-
- [ ] Actual RipenessClassifier integration (not simplified logic)
|
| 239 |
-
- [ ] EDA plot regeneration option
|
| 240 |
-
- [ ] Export threshold configurations
|
| 241 |
-
- [ ] Simulation runner from dashboard
|
| 242 |
-
- [ ] Authentication (if deployed externally)
|
| 243 |
-
|
| 244 |
-
### Technical Improvements
|
| 245 |
-
- [ ] Async data loading for large datasets
|
| 246 |
-
- [ ] WebSocket support for real-time training updates
|
| 247 |
-
- [ ] Plotly Dash migration (if more customization needed)
|
| 248 |
-
- [ ] Unit tests for dashboard components
|
| 249 |
-
- [ ] Playwright automated UI tests
|
| 250 |
-
|
| 251 |
-
### UX Improvements
|
| 252 |
-
- [ ] Dark mode support
|
| 253 |
-
- [ ] Custom color themes
|
| 254 |
-
- [ ] Keyboard shortcuts
|
| 255 |
-
- [ ] Save/load dashboard state
|
| 256 |
-
- [ ] Export visualizations as PNG/PDF
|
| 257 |
-
- [ ] Guided tour for new users
|
| 258 |
-
|
| 259 |
-
## Testing Strategy
|
| 260 |
-
|
| 261 |
-
### Manual Testing Checklist
|
| 262 |
-
- [ ] Dashboard launches without errors
|
| 263 |
-
- [ ] All pages load correctly
|
| 264 |
-
- [ ] EDA page: filters work, visualizations render
|
| 265 |
-
- [ ] Ripeness page: sliders adjust thresholds, classification updates
|
| 266 |
-
- [ ] RL page: form submission works, charts render
|
| 267 |
-
- [ ] CLI command generation correct
|
| 268 |
-
- [ ] System status checks work
|
| 269 |
-
|
| 270 |
-
### Integration Testing
|
| 271 |
-
- [ ] Load actual cleaned data
|
| 272 |
-
- [ ] Load generated test cases
|
| 273 |
-
- [ ] Load parameters from configs/
|
| 274 |
-
- [ ] Verify caching behavior
|
| 275 |
-
- [ ] Test with missing data files
|
| 276 |
-
|
| 277 |
-
### Performance Testing
|
| 278 |
-
- [ ] Large dataset loading (100K+ rows)
|
| 279 |
- [ ] Batch classification (10K+ cases)
|
| 280 |
- [ ] Multiple concurrent users (if deployed)
|
| 281 |
|
| 282 |
## Troubleshooting

**Issue**: Dashboard won't launch
- **Check**: Is Streamlit installed? `pip list | grep streamlit`
- **Solution**: Ensure venv is activated, run `uv sync`

**Issue**: "Data file not found" warnings
- **Check**: Has EDA pipeline been run?
- **Solution**: Run `uv run court-scheduler eda`

**Issue**: Empty visualizations
- **Check**: Is `Data/processed/cleaned_cases.csv` empty?
- **Solution**: Verify EDA pipeline completed successfully

**Issue**: Ripeness batch classification fails
- **Check**: Are test cases generated?
- **Solution**: Run `uv run court-scheduler generate`

**Issue**: Slow page loads
- **Check**: Is data being cached?
- **Solution**: Check Streamlit cache, reduce data size
## Design Decisions Log

### Decision 1: Streamlit over Dash/Gradio
**Date**: 2025-11-27
**Rationale**:
- Already in dependencies (no new install)
- Simpler multi-page support
- Better for data science workflows
- Faster development time

**Alternatives Considered**:
- Dash: More customizable but more boilerplate
- Gradio: Better for ML demos, less flexible

### Decision 2: Plotly over Matplotlib
**Date**: 2025-11-27
**Rationale**:
- Interactive by default (zoom, pan, hover)
- Better aesthetics for dashboards
- Native Streamlit integration
- Users expect interactivity in modern dashboards

**Note**: Matplotlib still used for static EDA plots already generated

### Decision 3: Session State for Thresholds
**Date**: 2025-11-27
**Rationale**:
- Ephemeral experimentation (users can reset easily)
- No need to persist to disk
- Simpler implementation
- Users can export configs separately if needed

**Future**: May add "save configuration" feature

### Decision 4: Demo Mode for RL Training
**Date**: 2025-11-27
**Rationale**:
- Avoid blocking UI during long training runs
- Show visualization capabilities
- Guide users to use CLI for actual training
- Simpler initial implementation

**Future**: Add async training with WebSocket updates

### Decision 5: Simplified Ripeness Logic
**Date**: 2025-11-27
**Rationale**:
- Demonstrate explainability concept
- Avoid tight coupling with RipenessClassifier implementation
- Easier to understand for users
- Placeholder for full integration

**Future**: Integrate actual RipenessClassifier.classify_case()
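Decision 3 hinges on the fact that Streamlit's `st.session_state` is a dict-like store scoped to one browser session. A minimal, framework-agnostic sketch of the seed-once/reset pattern (the plain `session_state` dict here stands in for `st.session_state`; key and default names are illustrative, not the dashboard's actual ones):

```python
def get_thresholds(session_state: dict, defaults: dict) -> dict:
    """Seed thresholds into session state once; later edits live only for that session."""
    if "thresholds" not in session_state:
        session_state["thresholds"] = dict(defaults)
    return session_state["thresholds"]

def reset_thresholds(session_state: dict, defaults: dict) -> None:
    """Discard any user experimentation and restore the bundled defaults."""
    session_state["thresholds"] = dict(defaults)
```

Because nothing is persisted to disk, closing the tab (or clicking a reset button wired to `reset_thresholds`) returns the classifier to its defaults.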
## Maintenance Notes

### Dependencies
- Streamlit: Keep updated for security fixes
- Plotly: Monitor for breaking changes
- Polars: Ensure compatibility with Pandas conversion

### Code Quality
- Follow project ruff/black style
- Add docstrings to new functions
- Keep pages under 350 lines if possible
- Extract reusable components to utils/

### Performance Monitoring
- Monitor cache hit rates
- Track page load times
- Watch for memory leaks with large datasets
## Educational Value

The dashboard serves an educational purpose:
- **Transparency**: Shows how algorithms work (ripeness classifier)
- **Interactivity**: Lets users experiment (threshold tuning)
- **Visualization**: Makes complex data accessible (EDA plots)
- **Learning**: Explains RL concepts (hyperparameter guide)

This aligns with the "explainability" goal of the Code4Change project.

## Conclusion

The dashboard successfully provides:
1. Comprehensive EDA visualization
2. Full ripeness classifier explainability
3. RL training interface (demo mode)
4. CLI integration
5. Cached data loading
6. Interactive threshold tuning

Next steps focus on integrating real RL training and enhancing the ripeness classifier with actual implementation.
---

**Contributors**: Roy Aalekh (Initial Implementation)
**Project**: Code4Change Court Scheduling System
**Target**: Karnataka High Court Scheduling Optimization
# Interactive Dashboard

**Last Updated**: 2025-11-29
**Status**: Production Ready
**Version**: 1.0.0

## Launch
```bash
uv run streamlit run scheduler/dashboard/app.py
# Open http://localhost:8501
```

## Pages
1. **Data & Insights** - Historical analysis of 739K+ hearings
2. **Ripeness Classifier** - Case bottleneck detection with explainability
3. **RL Training** - Train and evaluate RL scheduling agents
4. **Simulation Workflow** - Run simulations with configurable policies
5. **Cause Lists & Overrides** - Judge override interface for cause lists
6. **Analytics & Reports** - Performance comparison and reporting

## Workflows
**EDA Exploration**: Run EDA → Launch dashboard → Filter and visualize data
**Judge Overrides**: Launch dashboard → Simulation Workflow → Review/modify cause lists
**RL Training**: Launch dashboard → RL Training page → Configure and train

## Data Sources
- Historical data: `reports/figures/v*/cases_clean.parquet` and `hearings_clean.parquet`
- Parameters: `reports/figures/v*/params/` (auto-detected latest version)
- Falls back to bundled defaults if EDA not run
## Troubleshooting

**Dashboard won't launch**: Run `uv sync` to install dependencies
**Empty visualizations**: Run `uv run court-scheduler eda` first
**Slow loading**: Data auto-cached after first load (1-hour TTL)
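The 1-hour TTL works like a time-bounded memo on the data loader. A dependency-free sketch of the mechanism (in the actual dashboard this role is played by Streamlit's `st.cache_data(ttl=3600)` decorator):

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds: float):
    """Cache a function's result per argument tuple for ttl_seconds."""
    def deco(fn):
        store = {}  # args -> (expiry time, cached value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # still fresh: skip the expensive reload
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return deco
```

With a one-hour TTL, the first page view pays the disk-read cost and subsequent views within the hour are served from memory.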
docs/ENHANCEMENT_PLAN.md
DELETED
@@ -1,311 +0,0 @@
# Court Scheduling System - Bug Fixes & Enhancements

## Completed Enhancements

### 2.3 Add Learning Feedback Loop (COMPLETED)
**Status**: Implemented (Dec 2024)
**Solution**:
- Created `RipenessMetrics` class to track predictions vs outcomes
- Created `RipenessCalibrator` with 5 calibration rules
- Added `set_thresholds()` and `get_current_thresholds()` to RipenessClassifier
- Tracks false positive/negative rates, generates confusion matrix
- Suggests threshold adjustments with confidence levels

**Files**:
- scheduler/monitoring/ripeness_metrics.py (254 lines)
- scheduler/monitoring/ripeness_calibrator.py (279 lines)
- scheduler/core/ripeness.py (enhanced with threshold management)

### 4.0.4 Fix RL Reward Computation (COMPLETED)
**Status**: Fixed (Dec 2024)
**Solution**:
- Integrated ParameterLoader into RLTrainingEnvironment
- Replaced hardcoded probabilities (0.7, 0.6, 0.4) with EDA-derived parameters
- Training now uses param_loader.get_adjournment_prob() and param_loader.get_stage_transitions_fast()
- Validation: adjournment rates align within 1% of EDA (43.0% vs 42.3%)

**Files**:
- rl/training.py (enhanced _simulate_hearing_outcome)

---
## Priority 1: Fix State Management Bugs (P0 - Critical)

### 1.1 Fix Override State Pollution
**Problem**: Override flags persist across runs, priority overrides don't clear
**Impact**: Cases keep boosted priority in subsequent schedules

**Solution**:
- Add `clear_overrides()` method to Case class
- Call after each scheduling day or at simulation reset
- Store overrides in separate tracking dict instead of mutating case objects
- Alternative: Use immutable override context passed to scheduler

**Files**:
- scheduler/core/case.py (add clear method)
- scheduler/control/overrides.py (refactor to non-mutating approach)
- scheduler/simulation/engine.py (call clear after scheduling)

### 1.2 Preserve Override Auditability
**Problem**: Invalid overrides removed in-place from input list
**Impact**: Caller loses original override list, can't audit rejections

**Solution**:
- Validate into separate collections: `valid_overrides`, `rejected_overrides`
- Return structured result: `OverrideResult(applied, rejected_with_reasons)`
- Keep original override list immutable
- Log all rejections with clear error messages

**Files**:
- scheduler/control/overrides.py (refactor apply_overrides)
- scheduler/core/algorithm.py (update override handling)

### 1.3 Track Override Outcomes Explicitly
**Problem**: Applied overrides in list, rejected as None in unscheduled
**Impact**: Hard to distinguish "not selected" from "override rejected"

**Solution**:
- Create `OverrideAudit` dataclass: (override_id, status, reason, timestamp)
- Return audit log from schedule_day: `result.override_audit`
- Separate tracking: `cases_not_selected`, `overrides_accepted`, `overrides_rejected`

**Files**:
- scheduler/core/algorithm.py (add audit tracking)
- scheduler/control/overrides.py (structured audit log)
## Priority 2: Strengthen Ripeness Detection (P0 - Critical)

### 2.1 Require Positive Evidence for RIPE
**Problem**: Defaults to RIPE when signals ambiguous
**Impact**: Schedules cases that may not be ready

**Solution**:
- Add `UNKNOWN` status to RipenessStatus enum
- Require explicit RIPE signals: stage progression, document check, age threshold
- Default to UNKNOWN (not RIPE) when data insufficient
- Add confidence score: `ripeness_confidence: float` (0.0-1.0)

**Files**:
- scheduler/core/ripeness.py (add UNKNOWN, confidence scoring)
- scheduler/simulation/engine.py (filter UNKNOWN cases)

### 2.2 Enrich Ripeness Signals
**Problem**: Only uses keyword search and basic stage checks
**Impact**: Misses nuanced bottlenecks

**Solution**:
- Add signals:
  - Filing age relative to case type median
  - Adjournment reason history (recurring "summons pending")
  - Outstanding task list (if available in data)
  - Party/lawyer attendance rate
  - Document submission completeness
- Multi-signal scoring: weighted combination
- Configurable thresholds per signal

**Files**:
- scheduler/core/ripeness.py (add signal extraction)
- scheduler/data/config.py (ripeness thresholds)

### 2.3 Add Learning Feedback Loop (COMPLETED - See top of document)
~~Moved to Completed Enhancements section~~
## Priority 3: Re-enable Simulation Inflow (P1 - High)

### 3.1 Parameterize Case Filing
**Problem**: New filings commented out, no caseload growth
**Impact**: Unrealistic long-term simulations

**Solution**:
- Add `enable_inflow: bool` to CourtSimConfig
- Add `filing_rate_multiplier: float` (default 1.0 for historical rate)
- Expose inflow controls in pipeline config
- Surface inflow metrics in simulation results

**Files**:
- scheduler/simulation/engine.py (uncomment + gate filings)
- court_scheduler_rl.py (add config parameters)

### 3.2 Make Ripeness Re-evaluation Configurable
**Problem**: Fixed 7-day re-evaluation may be too infrequent
**Impact**: Stale classifications drive multiple days

**Solution**:
- Add `ripeness_eval_frequency_days: int` to config (default 7)
- Consider adaptive frequency: more frequent when backlog high
- Log ripeness re-evaluation events

**Files**:
- scheduler/simulation/engine.py (configurable frequency)
- scheduler/data/config.py (add parameter)
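Gathered together, the parameters proposed in 3.1 and 3.2 amount to a small extension of `CourtSimConfig`. A sketch of the shape (the field names are quoted from the plan; the defaults beyond those it states are assumptions):

```python
from dataclasses import dataclass

@dataclass
class CourtSimConfig:
    enable_inflow: bool = False               # gate new case filings (off preserves old behavior)
    filing_rate_multiplier: float = 1.0       # 1.0 = historical filing rate from EDA
    ripeness_eval_frequency_days: int = 7     # re-classify ripeness every N simulated days
```

A long-horizon run would then opt in explicitly, e.g. `CourtSimConfig(enable_inflow=True, filing_rate_multiplier=1.2)` to model a 20% rise in filings.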
## Priority 4: EDA and Configuration Robustness (P1 - High)

### 4.0.1 Fix EDA Memory Issues
**Problem**: EDA converts full Parquet to pandas, risks memory exhaustion
**Impact**: Pipeline fails on large datasets (>50K cases)

**Solution**:
- Add sampling parameter: `eda_sample_size: Optional[int]` (default None = full)
- Stream data instead of loading all at once
- Downcast numeric columns before conversion
- Add memory monitoring and warnings

**Files**:
- src/eda_exploration.py (add sampling)
- src/eda_config.py (memory limits)

### 4.0.2 Fix Headless Rendering
**Problem**: Plotly renderer defaults to "browser", fails in CI/CD
**Impact**: Cannot run EDA in automated pipelines

**Solution**:
- Detect headless environment (check DISPLAY env var)
- Default to "png" or "svg" renderer in headless mode
- Add `--renderer` CLI flag to override

**Files**:
- src/eda_exploration.py (renderer detection)
- court_scheduler_rl.py (add CLI flag)

### 4.0.3 Fix Missing Parameters Fallback
**Problem**: get_latest_params_dir raises when no params exist
**Impact**: Fresh environments can't run simulations

**Solution**:
- Bundle baseline parameters in `scheduler/data/defaults/`
- Fallback to bundled params if no EDA run found
- Add `--use-defaults` flag to force baseline params
- Log warning when using defaults vs EDA-derived

**Files**:
- scheduler/data/config.py (fallback logic)
- scheduler/data/defaults/ (new directory with baseline params)

### 4.0.4 Fix RL Parameter Alignment (COMPLETED - See top of document)
~~Moved to Completed Enhancements section~~
## Priority 5: Enhanced Scheduling Constraints (P2 - Medium)

### 5.1 Judge Blocking & Availability
**Problem**: No per-judge blocked dates
**Impact**: Schedules hearings when judge unavailable

**Solution**:
- Add `blocked_dates: list[date]` to Judge entity
- Add `availability_override: dict[date, bool]` for one-time changes
- Filter eligible courtrooms by judge availability

**Files**:
- scheduler/core/judge.py (add availability fields)
- scheduler/core/algorithm.py (check availability)

### 5.2 Per-Case Gap Overrides
**Problem**: Global MIN_GAP_BETWEEN_HEARINGS, no exceptions
**Impact**: Urgent cases can't be expedited

**Solution**:
- Add `min_gap_override: Optional[int]` to Case
- Apply in eligibility check: `gap = case.min_gap_override or MIN_GAP`
- Track override applications in metrics

**Files**:
- scheduler/core/case.py (add field)
- scheduler/core/algorithm.py (use override in eligibility)

### 5.3 Courtroom Capacity Changes
**Problem**: Fixed daily capacity, no dynamic adjustments
**Impact**: Can't model half-days, special sessions

**Solution**:
- Add `capacity_overrides: dict[date, int]` to Courtroom
- Apply in allocation: check date-specific capacity first
- Support judge preferences (e.g., "Property cases Mondays")

**Files**:
- scheduler/core/courtroom.py (add override dict)
- scheduler/simulation/allocator.py (check overrides)
## Priority 6: Testing & Validation (P1 - High)

### 6.1 Unit Tests for Bug Fixes
**Coverage**:
- Override state clearing
- Ripeness UNKNOWN handling
- Inflow rate calculations
- Constraint validation

**Files**:
- tests/test_overrides.py (new)
- tests/test_ripeness.py (expand)
- tests/test_simulation.py (inflow tests)

### 6.2 Integration Tests
**Scenarios**:
- Full pipeline with overrides applied
- Ripeness transitions over time
- Blocked judge dates respected
- Capacity overrides honored

**Files**:
- tests/integration/test_scheduling_pipeline.py (new)

## Implementation Order

1. **Week 1**: Fix critical bugs
   - State management (1.1, 1.2, 1.3)
   - Configuration robustness (4.0.3 - parameter fallback)
   - Unit tests for above

2. **Week 2**: Strengthen core systems
   - Ripeness detection (2.1, 2.2 - UNKNOWN status, multi-signal)
   - RL reward alignment (4.0.4 - shared reward logic)
   - Re-enable inflow (3.1, 3.2)

3. **Week 3**: Robustness and constraints
   - EDA scaling (4.0.1 - memory management)
   - Headless rendering (4.0.2 - CI/CD compatibility)
   - Enhanced constraints (5.1, 5.2, 5.3)

4. **Week 4**: Testing and polish
   - Comprehensive integration tests
   - Ripeness learning feedback (2.3)
   - All edge cases documented

## Success Criteria

**Bug Fixes**:
- Override state doesn't leak between runs
- All override decisions auditable
- Rejected overrides tracked with reasons

**Ripeness**:
- UNKNOWN status used when confidence low
- False positive rate < 15% (marked RIPE but adjourned)
- Multi-signal scoring operational

**Simulation Realism**:
- Inflow configurable and metrics tracked
- Long runs show realistic caseload patterns
- Ripeness re-evaluation frequency tunable

**Constraints**:
- Judge blocked dates respected 100%
- Per-case gap overrides functional
- Capacity changes applied correctly

**Quality**:
- 90%+ test coverage for bug fixes
- Integration tests pass
- All edge cases documented

## Background

This plan addresses critical bugs and architectural improvements identified through code analysis:

1. **State Management**: Override flags persist across runs, causing silent bias
2. **Ripeness Defaults**: System defaults to RIPE when uncertain, risking premature scheduling
3. **Closed Simulation**: No case inflow, making long-term runs unrealistic
4. **Limited Auditability**: In-place mutations make debugging and QA difficult

See commit history for OutputManager refactoring and Windows compatibility fixes already completed.
models/intensive_trained_rl_agent.pkl
DELETED
Binary file (4.97 kB)

models/latest.pkl
DELETED
@@ -1 +0,0 @@
D:/personal/code4change/code4change-analysis/outputs/runs/run_20251127_054834/training/agent.pkl

models/trained_rl_agent.pkl
DELETED
Binary file (4.32 kB)
outputs/runs/run_20251127_054834/reports/COMPARISON_REPORT.md
DELETED
@@ -1,19 +0,0 @@

# Court Scheduling System - Performance Comparison

Generated: 2025-11-27 05:50:04

## Configuration

- Training Cases: 10,000
- Simulation Period: 90 days (0.2 years)
- RL Episodes: 20
- RL Learning Rate: 0.15
- RL Epsilon: 0.4
- Policies Compared: readiness, rl

## Results Summary

| Policy | Disposals | Disposal Rate | Utilization | Avg Hearings/Day |
|--------|-----------|---------------|-------------|------------------|
| Readiness | 5,343 | 53.4% | 78.8% | 594.7 |
| Rl | 5,365 | 53.6% | 78.5% | 593.0 |
outputs/runs/run_20251127_054834/reports/EXECUTIVE_SUMMARY.md
DELETED
@@ -1,47 +0,0 @@

# Court Scheduling System - Executive Summary

## Hackathon Submission: Karnataka High Court

### System Overview
This intelligent court scheduling system uses Reinforcement Learning to optimize case allocation and improve judicial efficiency. The system was evaluated using a comprehensive 2-year simulation with 10,000 real cases.

### Key Achievements

**53.6% Case Disposal Rate** - Significantly improved case clearance
**78.5% Court Utilization** - Optimal resource allocation
**53,368 Hearings Scheduled** - Over 90 days
**AI-Powered Decisions** - Reinforcement learning with 20 training episodes

### Technical Innovation

- **Reinforcement Learning**: Tabular Q-learning with 6D state space
- **Real-time Adaptation**: Dynamic policy adjustment based on case characteristics
- **Multi-objective Optimization**: Balances disposal rate, fairness, and utilization
- **Production Ready**: Generates daily cause lists for immediate deployment

### Impact Metrics

- **Cases Disposed**: 5,365 out of 10,000
- **Average Hearings per Day**: 593.0
- **System Scalability**: Handles 50,000+ case simulations efficiently
- **Judicial Time Saved**: Estimated 71 productive court days

### Deployment Readiness

**Daily Cause Lists**: Automated generation for 90 days
**Performance Monitoring**: Comprehensive metrics and analytics
**Judicial Override**: Complete control system for judge approval
**Multi-courtroom Support**: Load-balanced allocation across courtrooms

### Next Steps

1. **Pilot Deployment**: Begin with select courtrooms for validation
2. **Judge Training**: Familiarization with AI-assisted scheduling
3. **Performance Monitoring**: Track real-world improvement metrics
4. **System Expansion**: Scale to additional court complexes

---

**Generated**: 2025-11-27 05:50:04
**System Version**: 2.0 (Hackathon Submission)
**Contact**: Karnataka High Court Digital Innovation Team
outputs/runs/run_20251127_054834/reports/visualizations/performance_charts.md
DELETED
@@ -1,7 +0,0 @@

# Performance Visualizations

Generated charts showing:
- Daily disposal rates
- Court utilization over time
- Case type performance
- Load balancing effectiveness

outputs/runs/run_20251127_054834/training/agent.pkl
DELETED
Binary file (34.7 kB)
pyproject.toml
CHANGED
@@ -51,7 +51,7 @@ target-version = ["py311"]
 [tool.ruff]
 select = ["E", "F", "I", "B", "C901", "N", "D"]
 line-length = 100
-src = ["
+src = [".", "scheduler"]
 
 [tool.ruff.pydocstyle]
 convention = "google"
@@ -63,5 +63,11 @@ markers = [
     "unit: Unit tests",
     "integration: Integration tests",
     "fairness: Fairness validation tests",
-    "performance: Performance benchmark tests"
+    "performance: Performance benchmark tests",
+    "rl: Reinforcement learning tests",
+    "simulation: Simulation engine tests",
+    "edge_case: Edge case and boundary condition tests",
+    "failure: Failure scenario tests",
+    "slow: Slow-running tests (>5 seconds)"
 ]
report.txt
DELETED
@@ -1,56 +0,0 @@
-================================================================================
-SIMULATION REPORT
-================================================================================
-
-Configuration:
-  Cases: 3000
-  Days simulated: 60
-  Policy: readiness
-  Horizon end: 2024-06-20
-
-Hearing Metrics:
-  Total hearings: 16,137
-  Heard: 9,981 (61.9%)
-  Adjourned: 6,156 (38.1%)
-
-Disposal Metrics:
-  Cases disposed: 708
-  Disposal rate: 23.6%
-  Gini coefficient: 0.195
-
-Disposal Rates by Case Type:
-  CA  : 159/ 587 ( 27.1%)
-  CCC : 133/ 334 ( 39.8%)
-  CMP :  14/  86 ( 16.3%)
-  CP  : 105/ 294 ( 35.7%)
-  CRP : 142/ 612 ( 23.2%)
-  RFA :  77/ 519 ( 14.8%)
-  RSA :  78/ 568 ( 13.7%)
-
-Efficiency Metrics:
-  Court utilization: 35.6%
-  Avg hearings/day: 268.9
-
-Ripeness Impact:
-  Transitions: 0
-  Cases filtered (unripe): 3,360
-  Filter rate: 17.2%
-
-Final Ripeness Distribution:
-  RIPE: 2236 (97.6%)
-  UNRIPE_DEPENDENT: 19 (0.8%)
-  UNRIPE_SUMMONS: 37 (1.6%)
-
-Courtroom Allocation:
-  Strategy: load_balanced
-  Load balance fairness (Gini): 0.002
-  Avg daily load: 53.8 cases
-  Allocation changes: 10,527
-  Capacity rejections: 0
-
-Courtroom-wise totals:
-  Courtroom 1: 3,244 cases (54.1/day)
-  Courtroom 2: 3,233 cases (53.9/day)
-  Courtroom 3: 3,227 cases (53.8/day)
-  Courtroom 4: 3,221 cases (53.7/day)
-  Courtroom 5: 3,212 cases (53.5/day)
rl/README.md
DELETED
@@ -1,110 +0,0 @@
-# Reinforcement Learning Module
-
-This module implements tabular Q-learning for court case scheduling prioritization, following the hybrid approach outlined in `RL_EXPLORATION_PLAN.md`.
-
-## Architecture
-
-### Core Components
-
-- **`simple_agent.py`**: Tabular Q-learning agent with 6D state space
-- **`training.py`**: Training environment and learning pipeline
-- **`__init__.py`**: Module exports and interface
-
-### State Representation (6D)
-
-Cases are represented by a 6-dimensional state vector:
-
-1. **Stage** (0-10): Current litigation stage (discretized)
-2. **Age** (0-9): Case age in days (normalized and discretized)
-3. **Days since last** (0-9): Days since last hearing (normalized)
-4. **Urgency** (0-1): Binary urgent status
-5. **Ripeness** (0-1): Binary ripeness status
-6. **Hearing count** (0-9): Number of previous hearings (normalized)
-
-### Reward Function
-
-- **Base scheduling**: +0.5 for taking action
-- **Disposal**: +10.0 for case disposal/settlement
-- **Progress**: +3.0 for case advancement
-- **Adjournment**: -3.0 penalty
-- **Urgency bonus**: +2.0 for urgent cases
-- **Ripeness penalty**: -4.0 for scheduling unripe cases
-- **Long pending bonus**: +2.0 for cases >365 days old
-
-## Usage
-
-### Basic Training
-
-```python
-from rl import TabularQAgent, train_agent
-
-# Create agent
-agent = TabularQAgent(learning_rate=0.1, epsilon=0.3)
-
-# Train
-stats = train_agent(agent, episodes=50, cases_per_episode=500)
-
-# Save
-agent.save(Path("models/my_agent.pkl"))
-```
-
-### Configuration-Driven Training
-
-```bash
-# Use predefined config
-uv run python train_rl_agent.py --config configs/rl_training_fast.json
-
-# Override specific parameters
-uv run python train_rl_agent.py --episodes 100 --learning-rate 0.2
-
-# Custom model name
-uv run python train_rl_agent.py --model-name "custom_agent.pkl"
-```
-
-### Integration with Simulation
-
-```python
-from scheduler.simulation.policies import RLPolicy
-
-# Use trained agent in simulation
-policy = RLPolicy(agent_path=Path("models/intensive_rl_agent.pkl"))
-
-# Or auto-load latest trained agent
-policy = RLPolicy()  # Automatically finds intensive_trained_rl_agent.pkl
-```
-
-## Configuration Files
-
-### Fast Training (`configs/rl_training_fast.json`)
-- 20 episodes, 200 cases/episode
-- Higher learning rate (0.2) and exploration (0.5)
-- Suitable for quick experiments
-
-### Intensive Training (`configs/rl_training_intensive.json`)
-- 100 episodes, 1000 cases/episode
-- Balanced parameters for production training
-- Generates `intensive_rl_agent.pkl`
-
-## Performance
-
-Current results on 10,000 case dataset (90-day simulation):
-- **RL Agent**: 52.1% disposal rate
-- **Baseline**: 51.9% disposal rate
-- **Status**: Performance parity achieved
-
-## Hybrid Design
-
-The RL agent works within a **hybrid architecture**:
-
-1. **Rule-based filtering**: Maintains fairness and judicial constraints
-2. **RL prioritization**: Learns optimal case priority scoring
-3. **Deterministic allocation**: Respects courtroom capacity limits
-
-This ensures the system remains explainable and legally compliant while leveraging learned scheduling patterns.
-
-## Development Notes
-
-- State space: 44,000 theoretical states, ~100 typically explored
-- Training requires 10,000+ diverse cases for effective learning
-- Agent learns to match expert heuristics rather than exceed them
-- Suitable for research and proof-of-concept applications
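The reward table in the deleted README maps directly onto a small shaping function. A standalone sketch for illustration (the function name `shaped_reward` is hypothetical; the actual logic lived in `TabularQAgent.compute_reward` in `rl/simple_agent.py`):

```python
# Hypothetical standalone sketch of the reward shaping described in the
# deleted rl/README.md; applies only when the case was actually scheduled.
def shaped_reward(outcome: str, *, urgent: bool = False,
                  ripe: bool = True, age_days: int = 0) -> float:
    reward = 0.5  # base reward for taking the scheduling action
    o = outcome.lower()
    if any(k in o for k in ("disposal", "judgment", "settlement")):
        reward += 10.0  # disposal is the dominant positive signal
    elif "progress" in o and "adjourn" not in o:
        reward += 3.0   # advancement without disposal
    elif "adjourn" in o:
        reward -= 3.0   # adjournment penalty
    if urgent:
        reward += 2.0   # urgency bonus
    if not ripe:
        reward -= 4.0   # penalty for scheduling unripe cases
    if age_days > 365:
        reward += 2.0   # long-pending bonus
    return reward

print(shaped_reward("disposal"))  # 10.5
```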
rl/__init__.py
DELETED
@@ -1,12 +0,0 @@
-"""RL-based court scheduling components.
-
-This module contains the reinforcement learning components for court scheduling:
-- Tabular Q-learning agent for case priority scoring
-- Training environment and loops
-- Explainability tools for judicial decisions
-"""
-
-from .simple_agent import TabularQAgent
-from .training import train_agent, evaluate_agent, RLTrainingEnvironment
-
-__all__ = ['TabularQAgent', 'train_agent', 'evaluate_agent', 'RLTrainingEnvironment']
rl/config.py
DELETED
@@ -1,115 +0,0 @@
-"""RL training configuration and hyperparameters.
-
-This module contains all configurable parameters for RL agent training,
-separate from domain constants and simulation settings.
-"""
-
-from dataclasses import dataclass
-
-
-@dataclass
-class RLTrainingConfig:
-    """Configuration for RL agent training.
-
-    Hyperparameters that affect learning behavior and convergence.
-    """
-    # Training episodes
-    episodes: int = 100
-    cases_per_episode: int = 1000
-    episode_length_days: int = 60
-
-    # Courtroom + allocation constraints
-    courtrooms: int = 5
-    daily_capacity_per_courtroom: int = 151
-    cap_daily_allocations: bool = True
-    max_daily_allocations: int | None = None  # Optional hard cap (overrides computed capacity)
-    enforce_min_gap: bool = True
-    apply_judge_preferences: bool = True
-
-    # Q-learning hyperparameters
-    learning_rate: float = 0.15
-    discount_factor: float = 0.95
-
-    # Exploration strategy
-    initial_epsilon: float = 0.4
-    epsilon_decay: float = 0.99
-    min_epsilon: float = 0.05
-
-    # Training data generation
-    training_seed: int = 42
-    stage_mix_auto: bool = True  # Use EDA-derived stage distribution
-
-    def __post_init__(self):
-        """Validate configuration parameters."""
-        if not (0.0 < self.learning_rate <= 1.0):
-            raise ValueError(f"learning_rate must be in (0, 1], got {self.learning_rate}")
-
-        if not (0.0 <= self.discount_factor <= 1.0):
-            raise ValueError(f"discount_factor must be in [0, 1], got {self.discount_factor}")
-
-        if not (0.0 <= self.initial_epsilon <= 1.0):
-            raise ValueError(f"initial_epsilon must be in [0, 1], got {self.initial_epsilon}")
-
-        if self.episodes < 1:
-            raise ValueError(f"episodes must be >= 1, got {self.episodes}")
-
-        if self.cases_per_episode < 1:
-            raise ValueError(f"cases_per_episode must be >= 1, got {self.cases_per_episode}")
-
-        if self.courtrooms < 1:
-            raise ValueError(f"courtrooms must be >= 1, got {self.courtrooms}")
-
-        if self.daily_capacity_per_courtroom < 1:
-            raise ValueError(
-                f"daily_capacity_per_courtroom must be >= 1, got {self.daily_capacity_per_courtroom}"
-            )
-
-        if self.max_daily_allocations is not None and self.max_daily_allocations < 1:
-            raise ValueError(
-                f"max_daily_allocations must be >= 1 when provided, got {self.max_daily_allocations}"
-            )
-
-
-@dataclass
-class PolicyConfig:
-    """Configuration for scheduling policy behavior.
-
-    Settings that affect how policies prioritize and filter cases.
-    """
-    # Minimum gap between hearings (days)
-    min_gap_days: int = 7  # From MIN_GAP_BETWEEN_HEARINGS in config.py
-
-    # Maximum gap before alert (days)
-    max_gap_alert_days: int = 90  # From MAX_GAP_WITHOUT_ALERT
-
-    # Old case threshold for priority boost (days)
-    old_case_threshold_days: int = 180
-
-    # Ripeness filtering
-    skip_unripe_cases: bool = True
-    allow_old_unripe_cases: bool = True  # Allow scheduling if age > old_case_threshold
-
-    def __post_init__(self):
-        """Validate configuration parameters."""
-        if self.min_gap_days < 0:
-            raise ValueError(f"min_gap_days must be >= 0, got {self.min_gap_days}")
-
-        if self.max_gap_alert_days < self.min_gap_days:
-            raise ValueError(
-                f"max_gap_alert_days ({self.max_gap_alert_days}) must be >= "
-                f"min_gap_days ({self.min_gap_days})"
-            )
-
-
-# Default configurations
-DEFAULT_RL_TRAINING_CONFIG = RLTrainingConfig()
-DEFAULT_POLICY_CONFIG = PolicyConfig()
-
-# Quick demo configuration (for testing)
-QUICK_DEMO_RL_CONFIG = RLTrainingConfig(
-    episodes=20,
-    cases_per_episode=1000,
-    episode_length_days=45,
-    learning_rate=0.15,
-    initial_epsilon=0.4,
-)
rl/rewards.py
DELETED
@@ -1,127 +0,0 @@
-"""Shared reward helper utilities for RL agents.
-
-The helper operates on episode-level statistics so that reward shaping
-reflects system-wide outcomes (disposal rate, gap compliance, urgent
-case latency, and fairness across cases).
-"""
-
-from __future__ import annotations
-
-from collections import defaultdict
-from dataclasses import dataclass, field
-from typing import Dict, Iterable, Optional
-
-import numpy as np
-
-from scheduler.core.case import Case
-
-
-@dataclass
-class EpisodeRewardHelper:
-    """Aggregates episode metrics and computes shaped rewards."""
-
-    total_cases: int
-    target_gap_days: int = 30
-    max_urgent_latency: int = 60
-    disposal_weight: float = 4.0
-    gap_weight: float = 1.5
-    urgent_weight: float = 2.0
-    fairness_weight: float = 1.0
-    _disposed_cases: int = 0
-    _hearing_counts: Dict[str, int] = field(default_factory=lambda: defaultdict(int))
-    _urgent_latencies: list[float] = field(default_factory=list)
-
-    def _base_outcome_reward(self, case: Case, was_scheduled: bool, hearing_outcome: str) -> float:
-        """Preserve the original per-case shaping signals."""
-
-        reward = 0.0
-        if not was_scheduled:
-            return reward
-
-        # Base scheduling reward (small positive for taking action)
-        reward += 0.5
-
-        # Hearing outcome rewards
-        lower_outcome = hearing_outcome.lower()
-        if "disposal" in lower_outcome or "judgment" in lower_outcome or "settlement" in lower_outcome:
-            reward += 10.0  # Major positive for disposal
-        elif "progress" in lower_outcome and "adjourn" not in lower_outcome:
-            reward += 3.0  # Progress without disposal
-        elif "adjourn" in lower_outcome:
-            reward -= 3.0  # Negative for adjournment
-
-        # Urgency bonus
-        if case.is_urgent:
-            reward += 2.0
-
-        # Ripeness penalty
-        if hasattr(case, "ripeness_status") and case.ripeness_status not in ["RIPE", "UNKNOWN"]:
-            reward -= 4.0
-
-        # Long pending bonus (>365 days)
-        if case.age_days and case.age_days > 365:
-            reward += 2.0
-
-        return reward
-
-    def _fairness_score(self) -> float:
-        """Reward higher uniformity in hearing distribution."""
-
-        counts: Iterable[int] = self._hearing_counts.values()
-        if not counts:
-            return 0.0
-
-        counts_array = np.array(list(counts), dtype=float)
-        mean = np.mean(counts_array)
-        if mean == 0:
-            return 0.0
-
-        dispersion = np.std(counts_array) / (mean + 1e-6)
-        # Lower dispersion -> better fairness. Convert to reward in [0, 1].
-        fairness = max(0.0, 1.0 - dispersion)
-        return fairness
-
-    def compute_case_reward(
-        self,
-        case: Case,
-        was_scheduled: bool,
-        hearing_outcome: str,
-        current_date,
-        previous_gap_days: Optional[int] = None,
-    ) -> float:
-        """Compute reward using both local and episode-level signals."""
-
-        reward = self._base_outcome_reward(case, was_scheduled, hearing_outcome)
-
-        if not was_scheduled:
-            return reward
-
-        # Track disposals
-        if "disposal" in hearing_outcome.lower() or getattr(case, "is_disposed", False):
-            self._disposed_cases += 1
-
-        # Track hearing counts for fairness
-        self._hearing_counts[case.case_id] = case.hearing_count or self._hearing_counts[case.case_id] + 1
-
-        # Track urgent latencies
-        if case.is_urgent:
-            self._urgent_latencies.append(case.age_days or 0)
-
-        # Episode-level components
-        disposal_rate = (self._disposed_cases / self.total_cases) if self.total_cases else 0.0
-        reward += self.disposal_weight * disposal_rate
-
-        if previous_gap_days is not None:
-            gap_score = max(0.0, 1.0 - (previous_gap_days / self.target_gap_days))
-            reward += self.gap_weight * gap_score
-
-        if self._urgent_latencies:
-            avg_latency = float(np.mean(self._urgent_latencies))
-            latency_score = max(0.0, 1.0 - (avg_latency / self.max_urgent_latency))
-            reward += self.urgent_weight * latency_score
-
-        fairness = self._fairness_score()
-        reward += self.fairness_weight * fairness
-
-        return reward
-
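The `_fairness_score` helper above reduces to one formula: fairness = max(0, 1 − σ/μ) over per-case hearing counts. A minimal standalone sketch (the helper name `fairness_score` is hypothetical):

```python
import numpy as np

# Hypothetical standalone version of EpisodeRewardHelper._fairness_score:
# lower dispersion of hearing counts across cases -> higher fairness in [0, 1].
def fairness_score(hearing_counts: list[int]) -> float:
    if not hearing_counts:
        return 0.0
    counts = np.array(hearing_counts, dtype=float)
    mean = counts.mean()
    if mean == 0:
        return 0.0
    dispersion = counts.std() / (mean + 1e-6)  # coefficient of variation
    return max(0.0, 1.0 - dispersion)
```

Perfectly uniform counts score 1.0; highly skewed distributions clip to 0.0.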
rl/simple_agent.py
DELETED
@@ -1,291 +0,0 @@
-"""Tabular Q-learning agent for court case priority scoring.
-
-Implements the simplified RL approach described in RL_EXPLORATION_PLAN.md:
-- 6D state space per case
-- Binary action space (schedule/skip)
-- Tabular Q-learning with epsilon-greedy exploration
-"""
-
-import numpy as np
-import pickle
-from pathlib import Path
-from typing import Dict, Tuple, Optional, List
-from dataclasses import dataclass
-from collections import defaultdict
-
-from scheduler.core.case import Case
-
-
-@dataclass
-class CaseState:
-    """Expanded state representation for a case with environment context."""
-
-    stage_encoded: int  # 0-7 for different stages
-    age_days: float  # normalized 0-1
-    days_since_last: float  # normalized 0-1
-    urgency: int  # 0 or 1
-    ripe: int  # 0 or 1
-    hearing_count: float  # normalized 0-1
-    capacity_ratio: float  # normalized 0-1 (remaining capacity for the day)
-    min_gap_days: int  # encoded min gap rule in effect
-    preference_score: float  # normalized 0-1 preference alignment
-
-    def to_tuple(self) -> Tuple[int, int, int, int, int, int, int, int, int]:
-        """Convert to tuple for use as dict key."""
-        return (
-            self.stage_encoded,
-            min(9, int(self.age_days * 20)),  # discretize to 20 bins, cap at 9
-            min(9, int(self.days_since_last * 20)),  # discretize to 20 bins, cap at 9
-            self.urgency,
-            self.ripe,
-            min(9, int(self.hearing_count * 20)),  # discretize to 20 bins, cap at 9
-            min(9, int(self.capacity_ratio * 10)),
-            min(30, self.min_gap_days),
-            min(9, int(self.preference_score * 10))
-        )
-
-
-class TabularQAgent:
-    """Tabular Q-learning agent for case priority scoring."""
-
-    # Stage mapping based on config.py
-    STAGE_TO_ID = {
-        "PRE-ADMISSION": 0,
-        "ADMISSION": 1,
-        "FRAMING OF CHARGES": 2,
-        "EVIDENCE": 3,
-        "ARGUMENTS": 4,
-        "INTERLOCUTORY APPLICATION": 5,
-        "SETTLEMENT": 6,
-        "ORDERS / JUDGMENT": 7,
-        "FINAL DISPOSAL": 8,
-        "OTHER": 9,
-        "NA": 10
-    }
-
-    def __init__(self, learning_rate: float = 0.1, epsilon: float = 0.1,
-                 discount: float = 0.95):
-        """Initialize tabular Q-learning agent.
-
-        Args:
-            learning_rate: Q-learning step size
-            epsilon: Exploration probability
-            discount: Discount factor for future rewards
-        """
-        self.learning_rate = learning_rate
-        self.epsilon = epsilon
-        self.discount = discount
-
-        # Q-table: state -> action -> Q-value
-        # Actions: 0 = skip, 1 = schedule
-        self.q_table: Dict[Tuple, Dict[int, float]] = defaultdict(lambda: {0: 0.0, 1: 0.0})
-
-        # Statistics
-        self.states_visited = set()
-        self.total_updates = 0
-
-    def extract_state(
-        self,
-        case: Case,
-        current_date,
-        *,
-        capacity_ratio: float = 1.0,
-        min_gap_days: int = 7,
-        preference_score: float = 0.0,
-    ) -> CaseState:
-        """Extract 6D state representation from a case.
-
-        Args:
-            case: Case object
-            current_date: Current simulation date
-
-        Returns:
-            CaseState representation
-        """
-        # Stage encoding
-        stage_id = self.STAGE_TO_ID.get(case.current_stage, 9)  # Default to "OTHER"
-
-        # Age in days (normalized by max reasonable age of 2 years)
-        actual_age = max(0, case.age_days) if case.age_days is not None else max(0, (current_date - case.filed_date).days)
-        age_days = min(actual_age / (365 * 2), 1.0)
-
-        # Days since last hearing (normalized by max reasonable gap of 180 days)
-        days_since = 0.0
-        if case.last_hearing_date:
-            days_gap = max(0, (current_date - case.last_hearing_date).days)
-            days_since = min(days_gap / 180, 1.0)
-        else:
-            # No previous hearing - use age as days since "last" hearing
-            days_since = min(actual_age / 180, 1.0)
-
-        # Urgency flag
-        urgency = 1 if case.is_urgent else 0
-
-        # Ripeness (assuming we have ripeness status)
-        ripe = 1 if hasattr(case, 'ripeness_status') and case.ripeness_status == "RIPE" else 0
-
-        # Hearing count (normalized by reasonable max of 20 hearings)
-        hearing_count = min(case.hearing_count / 20, 1.0) if case.hearing_count else 0.0
-
-        return CaseState(
-            stage_encoded=stage_id,
-            age_days=age_days,
-            days_since_last=days_since,
-            urgency=urgency,
-            ripe=ripe,
-            hearing_count=hearing_count,
-            capacity_ratio=max(0.0, min(1.0, capacity_ratio)),
-            min_gap_days=max(0, min_gap_days),
-            preference_score=max(0.0, min(1.0, preference_score))
-        )
-
-    def get_action(self, state: CaseState, training: bool = False) -> int:
-        """Select action using epsilon-greedy policy.
-
-        Args:
-            state: Current case state
-            training: Whether in training mode (enables exploration)
-
-        Returns:
-            Action: 0 = skip, 1 = schedule
-        """
-        state_key = state.to_tuple()
-        self.states_visited.add(state_key)
-
-        # Epsilon-greedy exploration during training
-        if training and np.random.random() < self.epsilon:
-            return np.random.choice([0, 1])
-
-        # Greedy action selection
-        q_values = self.q_table[state_key]
-        if q_values[0] == q_values[1]:  # If tied, prefer scheduling (action 1)
-            return 1
-        return max(q_values, key=q_values.get)
-
-    def get_priority_score(self, case: Case, current_date) -> float:
-        """Get priority score for a case (Q-value for schedule action).
-
-        Args:
-            case: Case object
-            current_date: Current simulation date
-
-        Returns:
-            Priority score (Q-value for action=1)
-        """
-        state = self.extract_state(case, current_date)
-        state_key = state.to_tuple()
-        return self.q_table[state_key][1]  # Q-value for schedule action
-
-    def update_q_value(self, state: CaseState, action: int, reward: float,
-                       next_state: Optional[CaseState] = None):
-        """Update Q-table using Q-learning rule.
-
-        Args:
-            state: Current state
-            action: Action taken
-            reward: Reward received
-            next_state: Next state (optional, for terminal states)
-        """
-        state_key = state.to_tuple()
-
-        # Q-learning update
-        old_q = self.q_table[state_key][action]
-
-        if next_state is not None:
-            next_key = next_state.to_tuple()
-            max_next_q = max(self.q_table[next_key].values())
-            target = reward + self.discount * max_next_q
-        else:
-            # Terminal state
-            target = reward
-
-        new_q = old_q + self.learning_rate * (target - old_q)
-        self.q_table[state_key][action] = new_q
-        self.total_updates += 1
-
-    def compute_reward(self, case: Case, was_scheduled: bool, hearing_outcome: str) -> float:
-        """Compute reward based on the outcome as per RL plan.
-
-        Reward function:
-        +2 if case progresses
-        -1 if adjourned
-        +3 if urgent & scheduled
-        -2 if unripe & scheduled
-        +1 if long pending & scheduled
-
-        Args:
-            case: Case object
-            was_scheduled: Whether case was scheduled
-            hearing_outcome: Outcome of the hearing
-
-        Returns:
-            Reward value
-        """
-        reward = 0.0
-
-        if was_scheduled:
-            # Base scheduling reward (small positive for taking action)
-            reward += 0.5
-
-            # Hearing outcome rewards
-            if "disposal" in hearing_outcome.lower() or "judgment" in hearing_outcome.lower() or "settlement" in hearing_outcome.lower():
-                reward += 10.0  # Major positive for disposal
-            elif "progress" in hearing_outcome.lower() and "adjourn" not in hearing_outcome.lower():
-                reward += 3.0  # Progress without disposal
-            elif "adjourn" in hearing_outcome.lower():
-                reward -= 3.0  # Negative for adjournment
-
-            # Urgency bonus
-            if case.is_urgent:
-                reward += 2.0
-
-            # Ripeness penalty
-            if hasattr(case, 'ripeness_status') and case.ripeness_status not in ["RIPE", "UNKNOWN"]:
-                reward -= 4.0
-
-            # Long pending bonus (>365 days)
-            if case.age_days and case.age_days > 365:
-                reward += 2.0
-
-        return reward
-
-    def get_stats(self) -> Dict:
-        """Get agent statistics."""
-        return {
-            "states_visited": len(self.states_visited),
-            "total_updates": self.total_updates,
-            "q_table_size": len(self.q_table),
-            "epsilon": self.epsilon,
-            "learning_rate": self.learning_rate
-        }
-
-    def save(self, path: Path):
-        """Save agent to file."""
-        agent_data = {
-            'q_table': dict(self.q_table),
-            'learning_rate': self.learning_rate,
-            'epsilon': self.epsilon,
-            'discount': self.discount,
-            'states_visited': self.states_visited,
-            'total_updates': self.total_updates
-        }
-        with open(path, 'wb') as f:
-            pickle.dump(agent_data, f)
-
-    @classmethod
-    def load(cls, path: Path) -> 'TabularQAgent':
-        """Load agent from file."""
-        with open(path, 'rb') as f:
-            agent_data = pickle.load(f)
-
-        agent = cls(
-            learning_rate=agent_data['learning_rate'],
-            epsilon=agent_data['epsilon'],
-            discount=agent_data['discount']
-        )
-        agent.q_table = defaultdict(lambda: {0: 0.0, 1: 0.0})
-        agent.q_table.update(agent_data['q_table'])
-        agent.states_visited = agent_data['states_visited']
-        agent.total_updates = agent_data['total_updates']
-
-        return agent
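The Q-learning update in the deleted `update_q_value` is the standard one-step rule: Q(s,a) ← Q(s,a) + α·(target − Q(s,a)), where target = r + γ·max Q(s′,·), or just r at terminal states. A minimal standalone reproduction with the same defaults (α = 0.1, γ = 0.95; function and state names are illustrative):

```python
from collections import defaultdict

# Minimal reproduction of the tabular Q-update used by the deleted
# TabularQAgent (actions: 0 = skip, 1 = schedule).
def q_update(q_table, state, action, reward, next_state=None,
             learning_rate=0.1, discount=0.95):
    old_q = q_table[state][action]
    if next_state is not None:
        # Bootstrap from the best action in the next state
        target = reward + discount * max(q_table[next_state].values())
    else:
        target = reward  # terminal state: no future value
    q_table[state][action] = old_q + learning_rate * (target - old_q)

q = defaultdict(lambda: {0: 0.0, 1: 0.0})
q_update(q, ("s0",), 1, 10.0)                     # terminal disposal reward
q_update(q, ("s1",), 1, 0.5, next_state=("s0",))  # bootstraps from s0
print(q[("s0",)][1])  # 1.0
```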
rl/training.py
DELETED
@@ -1,515 +0,0 @@
"""Training pipeline for tabular Q-learning agent.

Implements episodic training on generated case data to learn optimal
case prioritization policies through simulation-based rewards.
"""

import numpy as np
from pathlib import Path
from typing import List, Tuple, Dict, Optional
from datetime import date, datetime, timedelta
import random

from scheduler.data.case_generator import CaseGenerator
from scheduler.data.param_loader import ParameterLoader
from scheduler.core.case import Case, CaseStatus
from scheduler.core.algorithm import SchedulingAlgorithm
from scheduler.core.courtroom import Courtroom
from scheduler.core.policy import SchedulerPolicy
from scheduler.simulation.policies.readiness import ReadinessPolicy
from scheduler.simulation.allocator import CourtroomAllocator, AllocationStrategy
from scheduler.control.overrides import Override, OverrideType, JudgePreferences
from .simple_agent import TabularQAgent, CaseState
from .rewards import EpisodeRewardHelper
from .config import (
    RLTrainingConfig,
    PolicyConfig,
    DEFAULT_RL_TRAINING_CONFIG,
    DEFAULT_POLICY_CONFIG,
)


class RLTrainingEnvironment:
    """Training environment for RL agent using court simulation."""

    def __init__(
        self,
        cases: List[Case],
        start_date: date,
        horizon_days: int = 90,
        rl_config: RLTrainingConfig | None = None,
        policy_config: PolicyConfig | None = None,
        params_dir: Optional[Path] = None,
    ):
        """Initialize training environment.

        Args:
            cases: List of cases to simulate
            start_date: Simulation start date
            horizon_days: Training episode length in days
            rl_config: RL-specific training constraints
            policy_config: Policy knobs for ripeness/gap rules
            params_dir: Directory with EDA parameters (uses latest if None)
        """
        self.cases = cases
        self.start_date = start_date
        self.horizon_days = horizon_days
        self.current_date = start_date
        self.episode_rewards = []
        self.rl_config = rl_config or DEFAULT_RL_TRAINING_CONFIG
        self.policy_config = policy_config or DEFAULT_POLICY_CONFIG
        self.reward_helper = EpisodeRewardHelper(total_cases=len(cases))
        self.param_loader = ParameterLoader(params_dir)

        # Resources mirroring production defaults
        self.courtrooms = [
            Courtroom(
                courtroom_id=i + 1,
                judge_id=f"J{i+1:03d}",
                daily_capacity=self.rl_config.daily_capacity_per_courtroom,
            )
            for i in range(self.rl_config.courtrooms)
        ]
        self.allocator = CourtroomAllocator(
            num_courtrooms=self.rl_config.courtrooms,
            per_courtroom_capacity=self.rl_config.daily_capacity_per_courtroom,
            strategy=AllocationStrategy.LOAD_BALANCED,
        )
        self.policy: SchedulerPolicy = ReadinessPolicy()
        self.algorithm = SchedulingAlgorithm(
            policy=self.policy,
            allocator=self.allocator,
            min_gap_days=self.policy_config.min_gap_days if self.rl_config.enforce_min_gap else 0,
        )
        self.preferences = self._build_preferences()

    def _build_preferences(self) -> Optional[JudgePreferences]:
        """Synthetic judge preferences for training context."""
        if not self.rl_config.apply_judge_preferences:
            return None

        capacity_overrides = {room.courtroom_id: room.daily_capacity for room in self.courtrooms}
        return JudgePreferences(
            judge_id="RL-JUDGE",
            capacity_overrides=capacity_overrides,
            case_type_preferences={
                "Monday": ["RSA"],
                "Tuesday": ["CCC"],
                "Wednesday": ["NI ACT"],
            },
        )

    def reset(self) -> List[Case]:
        """Reset environment for new training episode.

        Note: In practice, train_agent() generates fresh cases per episode,
        so case state doesn't need resetting. This method just resets
        environment state (date, rewards).
        """
        self.current_date = self.start_date
        self.episode_rewards = []
        self.reward_helper = EpisodeRewardHelper(total_cases=len(self.cases))
        return self.cases.copy()

    def capacity_ratio(self, remaining_slots: int) -> float:
        """Proportion of courtroom capacity still available for the day."""
        total_capacity = self.rl_config.courtrooms * self.rl_config.daily_capacity_per_courtroom
        return max(0.0, min(1.0, remaining_slots / total_capacity)) if total_capacity else 0.0

    def preference_score(self, case: Case) -> float:
        """Return 1.0 when case_type aligns with day-of-week preference, else 0."""
        if not self.preferences:
            return 0.0

        day_name = self.current_date.strftime("%A")
        preferred_types = self.preferences.case_type_preferences.get(day_name, [])
        return 1.0 if case.case_type in preferred_types else 0.0

    def step(self, agent_decisions: Dict[str, int]) -> Tuple[List[Case], Dict[str, float], bool]:
        """Execute one day of simulation with agent decisions via SchedulingAlgorithm."""
        rewards: Dict[str, float] = {}

        # Convert agent schedule actions into priority overrides
        overrides: List[Override] = []
        priority_boost = 1.0
        for case in self.cases:
            if agent_decisions.get(case.case_id) == 1:
                overrides.append(
                    Override(
                        override_id=f"rl-{case.case_id}-{self.current_date.isoformat()}",
                        override_type=OverrideType.PRIORITY,
                        case_id=case.case_id,
                        judge_id="RL-JUDGE",
                        timestamp=datetime.combine(self.current_date, datetime.min.time()),
                        new_priority=case.get_priority_score() + priority_boost,
                    )
                )
                priority_boost += 0.1  # keep relative ordering stable

        # Run scheduling algorithm (capacity, ripeness, min-gap enforced)
        result = self.algorithm.schedule_day(
            cases=self.cases,
            courtrooms=self.courtrooms,
            current_date=self.current_date,
            overrides=overrides or None,
            preferences=self.preferences,
        )

        # Flatten scheduled cases
        scheduled_cases = [c for cases in result.scheduled_cases.values() for c in cases]
        # Simulate hearing outcomes for scheduled cases
        for case in scheduled_cases:
            if case.is_disposed:
                continue

            outcome = self._simulate_hearing_outcome(case)
            was_heard = "heard" in outcome.lower()

            # Track gap relative to previous hearing for reward shaping
            previous_gap = None
            if case.last_hearing_date:
                previous_gap = max(0, (self.current_date - case.last_hearing_date).days)

            case.record_hearing(self.current_date, was_heard=was_heard, outcome=outcome)

            if was_heard:
                if outcome in ["FINAL DISPOSAL", "SETTLEMENT", "NA"]:
                    case.status = CaseStatus.DISPOSED
                    case.disposal_date = self.current_date
                elif outcome != "ADJOURNED":
                    case.current_stage = outcome

            # Compute reward using shared reward helper
            rewards[case.case_id] = self.reward_helper.compute_case_reward(
                case,
                was_scheduled=True,
                hearing_outcome=outcome,
                current_date=self.current_date,
                previous_gap_days=previous_gap,
            )
        # Update case ages
        for case in self.cases:
            case.update_age(self.current_date)

        # Move to next day
        self.current_date += timedelta(days=1)
        episode_done = (self.current_date - self.start_date).days >= self.horizon_days

        return self.cases, rewards, episode_done

    def _simulate_hearing_outcome(self, case: Case) -> str:
        """Simulate hearing outcome using EDA-derived parameters.

        Uses param_loader for adjournment probabilities and stage transitions
        instead of hardcoded values, ensuring training aligns with production.
        """
        current_stage = case.current_stage
        case_type = case.case_type

        # Query EDA-derived adjournment probability
        p_adjourn = self.param_loader.get_adjournment_prob(current_stage, case_type)

        # Sample adjournment
        if random.random() < p_adjourn:
            return "ADJOURNED"

        # Case progresses - determine next stage using EDA-derived transitions
        # Terminal stages lead to disposal
        if current_stage in ["ORDERS / JUDGMENT", "FINAL DISPOSAL"]:
            return "FINAL DISPOSAL"

        # Sample next stage using cumulative transition probabilities
        transitions = self.param_loader.get_stage_transitions_fast(current_stage)
        if not transitions:
            # No transition data - use fallback progression
            return self._fallback_stage_progression(current_stage)

        # Sample from cumulative probabilities
        rand_val = random.random()
        for next_stage, cum_prob in transitions:
            if rand_val <= cum_prob:
                return next_stage

        # Fallback if sampling fails (shouldn't happen with normalized probs)
        return transitions[-1][0] if transitions else current_stage

    def _fallback_stage_progression(self, current_stage: str) -> str:
        """Fallback stage progression when no transition data available."""
        progression_map = {
            "PRE-ADMISSION": "ADMISSION",
            "ADMISSION": "EVIDENCE",
            "FRAMING OF CHARGES": "EVIDENCE",
            "EVIDENCE": "ARGUMENTS",
            "ARGUMENTS": "ORDERS / JUDGMENT",
            "INTERLOCUTORY APPLICATION": "ARGUMENTS",
            "SETTLEMENT": "FINAL DISPOSAL",
        }
        return progression_map.get(current_stage, "ARGUMENTS")
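The stage-transition sampling in `_simulate_hearing_outcome` walks a cumulative-probability table and returns the first stage whose cumulative probability covers the random draw. A self-contained sketch of that pattern, using a made-up transition table in place of the EDA-derived one from `ParameterLoader`:

```python
import random

# Illustrative cumulative transition table; the real one comes from
# ParameterLoader.get_stage_transitions_fast and these probabilities are invented.
TRANSITIONS = [("EVIDENCE", 0.6), ("ARGUMENTS", 0.9), ("ORDERS / JUDGMENT", 1.0)]

def sample_next_stage(transitions):
    """Return the first stage whose cumulative probability covers the draw."""
    rand_val = random.random()
    for next_stage, cum_prob in transitions:
        if rand_val <= cum_prob:
            return next_stage
    # Mirrors the training code's fallback: last stage if rounding leaves a gap
    return transitions[-1][0]

random.seed(0)
counts = {stage: 0 for stage, _ in TRANSITIONS}
for _ in range(10_000):
    counts[sample_next_stage(TRANSITIONS)] += 1
# With the table above, draws land on the stages in roughly 60/30/10 proportions.
```

Note that the cumulative format makes sampling a single linear scan per hearing, which matters when the simulation resolves thousands of hearings per episode.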
def train_agent(
    agent: TabularQAgent,
    rl_config: RLTrainingConfig = DEFAULT_RL_TRAINING_CONFIG,
    policy_config: PolicyConfig = DEFAULT_POLICY_CONFIG,
    params_dir: Optional[Path] = None,
    verbose: bool = True,
) -> Dict:
    """Train RL agent using episodic simulation with courtroom constraints.

    Args:
        agent: TabularQAgent to train
        rl_config: RL training configuration
        policy_config: Policy configuration
        params_dir: Directory with EDA parameters (uses latest if None)
        verbose: Print training progress
    """
    config = rl_config or DEFAULT_RL_TRAINING_CONFIG
    policy_cfg = policy_config or DEFAULT_POLICY_CONFIG

    # Align agent hyperparameters with config
    agent.learning_rate = config.learning_rate
    agent.discount = config.discount_factor
    agent.epsilon = config.initial_epsilon

    training_stats = {
        "episodes": [],
        "total_rewards": [],
        "disposal_rates": [],
        "states_explored": [],
        "q_updates": [],
    }

    if verbose:
        print(f"Training RL agent for {config.episodes} episodes...")

    for episode in range(config.episodes):
        # Generate fresh cases for this episode
        start_date = date(2024, 1, 1) + timedelta(days=episode * 10)
        end_date = start_date + timedelta(days=30)

        generator = CaseGenerator(
            start=start_date,
            end=end_date,
            seed=config.training_seed + episode,
        )
        cases = generator.generate(config.cases_per_episode, stage_mix_auto=config.stage_mix_auto)

        # Initialize training environment
        env = RLTrainingEnvironment(
            cases,
            start_date,
            config.episode_length_days,
            rl_config=config,
            policy_config=policy_cfg,
            params_dir=params_dir,
        )

        # Reset environment
        episode_cases = env.reset()
        episode_reward = 0.0

        total_capacity = config.courtrooms * config.daily_capacity_per_courtroom

        # Run episode
        for _ in range(config.episode_length_days):
            # Get eligible cases (not disposed, basic filtering)
            eligible_cases = [c for c in episode_cases if not c.is_disposed]
            if not eligible_cases:
                break

            # Agent makes decisions for each case
            agent_decisions = {}
            case_states = {}

            daily_cap = config.max_daily_allocations or total_capacity
            if not config.cap_daily_allocations:
                daily_cap = len(eligible_cases)
            remaining_slots = min(daily_cap, total_capacity) if config.cap_daily_allocations else daily_cap

            for case in eligible_cases[:daily_cap]:
                cap_ratio = env.capacity_ratio(remaining_slots if remaining_slots else total_capacity)
                pref_score = env.preference_score(case)
                state = agent.extract_state(
                    case,
                    env.current_date,
                    capacity_ratio=cap_ratio,
                    min_gap_days=policy_cfg.min_gap_days if config.enforce_min_gap else 0,
                    preference_score=pref_score,
                )
                action = agent.get_action(state, training=True)

                if config.cap_daily_allocations and action == 1 and remaining_slots <= 0:
                    action = 0
                elif action == 1 and config.cap_daily_allocations:
                    remaining_slots = max(0, remaining_slots - 1)

                agent_decisions[case.case_id] = action
                case_states[case.case_id] = state

            # Environment step
            _, rewards, done = env.step(agent_decisions)

            # Update Q-values based on rewards
            for case_id, reward in rewards.items():
                if case_id in case_states:
                    state = case_states[case_id]
                    action = agent_decisions.get(case_id, 0)

                    agent.update_q_value(state, action, reward)
                    episode_reward += reward

            if done:
                break

        # Compute episode statistics
        disposed_count = sum(1 for c in episode_cases if c.is_disposed)
        disposal_rate = disposed_count / len(episode_cases) if episode_cases else 0.0

        # Record statistics
        training_stats["episodes"].append(episode)
        training_stats["total_rewards"].append(episode_reward)
        training_stats["disposal_rates"].append(disposal_rate)
        training_stats["states_explored"].append(len(agent.states_visited))
        training_stats["q_updates"].append(agent.total_updates)

        # Decay exploration
        agent.epsilon = max(config.min_epsilon, agent.epsilon * config.epsilon_decay)

        if verbose and (episode + 1) % 10 == 0:
            print(
                f"Episode {episode + 1}/{config.episodes}: "
                f"Reward={episode_reward:.1f}, "
                f"Disposal={disposal_rate:.1%}, "
                f"States={len(agent.states_visited)}, "
                f"Epsilon={agent.epsilon:.3f}"
            )

    if verbose:
        final_stats = agent.get_stats()
        print(f"\nTraining complete!")
        print(f"States explored: {final_stats['states_visited']}")
        print(f"Q-table size: {final_stats['q_table_size']}")
        print(f"Total updates: {final_stats['total_updates']}")

    return training_stats
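The exploration schedule in `train_agent` multiplies epsilon by a decay factor at the end of each episode and clamps it at a floor, so exploration falls geometrically and then holds steady. A minimal standalone sketch, with illustrative hyperparameter values rather than the project's `DEFAULT_RL_TRAINING_CONFIG`:

```python
# Illustrative values; the project's defaults live in rl/config.py.
initial_epsilon = 1.0
min_epsilon = 0.05
epsilon_decay = 0.95

epsilon = initial_epsilon
schedule = []
for episode in range(100):
    # ... one training episode would run here, then exploration decays
    epsilon = max(min_epsilon, epsilon * epsilon_decay)
    schedule.append(epsilon)

# Geometric decay (0.95 ** n) until the floor, then clamped at 0.05
assert schedule[0] == 0.95
assert schedule[-1] == min_epsilon
```

The floor keeps the agent sampling some non-greedy actions even late in training, which matters for a tabular method that only updates state-action pairs it actually visits.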
def evaluate_agent(
    agent: TabularQAgent,
    test_cases: List[Case],
    episodes: Optional[int] = None,
    episode_length: Optional[int] = None,
    rl_config: RLTrainingConfig = DEFAULT_RL_TRAINING_CONFIG,
    policy_config: PolicyConfig = DEFAULT_POLICY_CONFIG,
    params_dir: Optional[Path] = None,
) -> Dict:
    """Evaluate trained agent performance.

    Args:
        agent: Trained TabularQAgent to evaluate
        test_cases: Cases to evaluate on
        episodes: Number of evaluation episodes (default 10)
        episode_length: Length of each episode in days
        rl_config: RL configuration
        policy_config: Policy configuration
        params_dir: Directory with EDA parameters (uses latest if None)
    """
    # Set agent to evaluation mode (no exploration)
    original_epsilon = agent.epsilon
    agent.epsilon = 0.0

    config = rl_config or DEFAULT_RL_TRAINING_CONFIG
    policy_cfg = policy_config or DEFAULT_POLICY_CONFIG

    evaluation_stats = {
        "disposal_rates": [],
        "total_hearings": [],
        "avg_hearing_to_disposal": [],
        "utilization": [],
    }

    eval_episodes = episodes if episodes is not None else 10
    eval_length = episode_length if episode_length is not None else config.episode_length_days

    print(f"Evaluating agent on {eval_episodes} test episodes...")

    total_capacity = config.courtrooms * config.daily_capacity_per_courtroom

    for episode in range(eval_episodes):
        start_date = date(2024, 6, 1) + timedelta(days=episode * 10)
        env = RLTrainingEnvironment(
            test_cases.copy(),
            start_date,
            eval_length,
            rl_config=config,
            policy_config=policy_cfg,
            params_dir=params_dir,
        )

        episode_cases = env.reset()
        total_hearings = 0

        # Run evaluation episode
        for _ in range(eval_length):
            eligible_cases = [c for c in episode_cases if not c.is_disposed]
            if not eligible_cases:
                break

            daily_cap = config.max_daily_allocations or total_capacity
            remaining_slots = min(daily_cap, total_capacity) if config.cap_daily_allocations else len(eligible_cases)

            # Agent makes decisions (no exploration)
            agent_decisions = {}
            for case in eligible_cases[:daily_cap]:
                cap_ratio = env.capacity_ratio(remaining_slots if remaining_slots else total_capacity)
                pref_score = env.preference_score(case)
                state = agent.extract_state(
                    case,
                    env.current_date,
                    capacity_ratio=cap_ratio,
                    min_gap_days=policy_cfg.min_gap_days if config.enforce_min_gap else 0,
                    preference_score=pref_score,
                )
                action = agent.get_action(state, training=False)
                if config.cap_daily_allocations and action == 1 and remaining_slots <= 0:
                    action = 0
                elif action == 1 and config.cap_daily_allocations:
                    remaining_slots = max(0, remaining_slots - 1)

                agent_decisions[case.case_id] = action

            # Environment step
            _, rewards, done = env.step(agent_decisions)
            total_hearings += len([r for r in rewards.values() if r != 0])

            if done:
                break

        # Compute metrics
        disposed_count = sum(1 for c in episode_cases if c.is_disposed)
        disposal_rate = disposed_count / len(episode_cases)

        disposed_cases = [c for c in episode_cases if c.is_disposed]
        avg_hearings = np.mean([c.hearing_count for c in disposed_cases]) if disposed_cases else 0

        evaluation_stats["disposal_rates"].append(disposal_rate)
        evaluation_stats["total_hearings"].append(total_hearings)
        evaluation_stats["avg_hearing_to_disposal"].append(avg_hearings)
        evaluation_stats["utilization"].append(total_hearings / (eval_length * total_capacity))

    # Restore original epsilon
    agent.epsilon = original_epsilon

    # Compute summary statistics
    summary = {
        "mean_disposal_rate": np.mean(evaluation_stats["disposal_rates"]),
        "std_disposal_rate": np.std(evaluation_stats["disposal_rates"]),
        "mean_utilization": np.mean(evaluation_stats["utilization"]),
        "mean_hearings_to_disposal": np.mean(evaluation_stats["avg_hearing_to_disposal"]),
    }

    print("Evaluation complete:")
    print(f"Mean disposal rate: {summary['mean_disposal_rate']:.1%} ± {summary['std_disposal_rate']:.1%}")
    print(f"Mean utilization: {summary['mean_utilization']:.1%}")
    print(f"Avg hearings to disposal: {summary['mean_hearings_to_disposal']:.1f}")

    return summary
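The training loop above calls `agent.update_q_value(state, action, reward)` with only the immediate reward. The agent itself is defined in `rl/simple_agent.py`, which this diff does not show, so the exact rule is not visible here; assuming a one-step update that moves the estimate toward the observed reward (consistent with that call signature), a hypothetical sketch:

```python
from collections import defaultdict

class MiniQTable:
    """Hypothetical stand-in for TabularQAgent's Q-value update.

    Assumes a one-step move toward the observed reward; the real agent
    in rl/simple_agent.py may also bootstrap with a discounted
    next-state value.
    """

    def __init__(self, learning_rate=0.5):
        self.learning_rate = learning_rate
        self.q = defaultdict(float)  # (state, action) -> value estimate

    def update(self, state, action, reward):
        key = (state, action)
        # Move the estimate a fraction of the way toward the new sample
        self.q[key] += self.learning_rate * (reward - self.q[key])
        return self.q[key]

table = MiniQTable(learning_rate=0.5)
table.update("ripe_old_case", 1, 10.0)  # 0.0 -> 5.0
table.update("ripe_old_case", 1, 10.0)  # 5.0 -> 7.5
```

With repeated identical rewards the estimate converges geometrically toward the reward, which is why evaluation runs with `epsilon = 0.0`: greedy action selection over these converged estimates.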
run_comprehensive_sweep.ps1
DELETED
@@ -1,316 +0,0 @@
# Comprehensive Parameter Sweep for Court Scheduling System
# Runs multiple scenarios × multiple policies × multiple seeds

Write-Host "================================================" -ForegroundColor Cyan
Write-Host "COMPREHENSIVE PARAMETER SWEEP" -ForegroundColor Cyan
Write-Host "================================================" -ForegroundColor Cyan
Write-Host ""

$ErrorActionPreference = "Stop"
$results = @()

# Configuration matrix
$scenarios = @(
    @{
        name = "baseline_10k_2year"
        cases = 10000
        seed = 42
        days = 500
        description = "2-year simulation: 10k cases, ~500 working days (HACKATHON REQUIREMENT)"
    },
    @{
        name = "baseline_10k"
        cases = 10000
        seed = 42
        days = 200
        description = "Baseline: 10k cases, balanced distribution"
    },
    @{
        name = "baseline_10k_seed2"
        cases = 10000
        seed = 123
        days = 200
        description = "Baseline replica with different seed"
    },
    @{
        name = "baseline_10k_seed3"
        cases = 10000
        seed = 456
        days = 200
        description = "Baseline replica with different seed"
    },
    @{
        name = "small_5k"
        cases = 5000
        seed = 42
        days = 200
        description = "Small court: 5k cases"
    },
    @{
        name = "large_15k"
        cases = 15000
        seed = 42
        days = 200
        description = "Large backlog: 15k cases"
    },
    @{
        name = "xlarge_20k"
        cases = 20000
        seed = 42
        days = 150
        description = "Extra large: 20k cases, capacity stress"
    }
)

$policies = @("fifo", "age", "readiness")

Write-Host "Configuration:" -ForegroundColor Yellow
Write-Host "  Scenarios: $($scenarios.Count)" -ForegroundColor White
Write-Host "  Policies: $($policies.Count)" -ForegroundColor White
Write-Host "  Total simulations: $($scenarios.Count * $policies.Count)" -ForegroundColor White
Write-Host ""

$totalRuns = $scenarios.Count * $policies.Count
$currentRun = 0

# Create results directory
$timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
$resultsDir = "data\comprehensive_sweep_$timestamp"
New-Item -ItemType Directory -Path $resultsDir -Force | Out-Null

# Generate datasets
Write-Host "Step 1: Generating datasets..." -ForegroundColor Cyan
$datasetDir = "$resultsDir\datasets"
New-Item -ItemType Directory -Path $datasetDir -Force | Out-Null

foreach ($scenario in $scenarios) {
    Write-Host "  Generating $($scenario.name)..." -NoNewline
    $datasetPath = "$datasetDir\$($scenario.name)_cases.csv"

    & uv run python main.py generate --cases $scenario.cases --seed $scenario.seed --output $datasetPath > $null

    if ($LASTEXITCODE -eq 0) {
        Write-Host " OK" -ForegroundColor Green
    } else {
        Write-Host " FAILED" -ForegroundColor Red
        exit 1
    }
}

Write-Host ""
Write-Host "Step 2: Running simulations..." -ForegroundColor Cyan

foreach ($scenario in $scenarios) {
    $datasetPath = "$datasetDir\$($scenario.name)_cases.csv"

    foreach ($policy in $policies) {
        $currentRun++
        $runName = "$($scenario.name)_$policy"
        $logDir = "$resultsDir\$runName"

        $progress = [math]::Round(($currentRun / $totalRuns) * 100, 1)
        Write-Host "[$currentRun/$totalRuns - $progress%] " -NoNewline -ForegroundColor Yellow
        Write-Host "$runName" -NoNewline -ForegroundColor White
        Write-Host " ($($scenario.days) days)..." -NoNewline -ForegroundColor Gray

        $startTime = Get-Date

        & uv run python main.py simulate `
            --days $scenario.days `
            --cases $datasetPath `
            --policy $policy `
            --log-dir $logDir `
            --seed $scenario.seed > $null

        $endTime = Get-Date
        $duration = ($endTime - $startTime).TotalSeconds

        if ($LASTEXITCODE -eq 0) {
            Write-Host " OK " -ForegroundColor Green -NoNewline
            Write-Host "($([math]::Round($duration, 1))s)" -ForegroundColor Gray

            # Parse report
            $reportPath = "$logDir\report.txt"
            if (Test-Path $reportPath) {
                $reportContent = Get-Content $reportPath -Raw

                # Extract metrics using regex
                if ($reportContent -match 'Cases disposed: (\d+)') {
                    $disposed = [int]$matches[1]
                }
                if ($reportContent -match 'Disposal rate: ([\d.]+)%') {
                    $disposalRate = [double]$matches[1]
                }
                if ($reportContent -match 'Gini coefficient: ([\d.]+)') {
                    $gini = [double]$matches[1]
                }
                if ($reportContent -match 'Court utilization: ([\d.]+)%') {
                    $utilization = [double]$matches[1]
                }
                if ($reportContent -match 'Total hearings: ([\d,]+)') {
                    $hearings = $matches[1] -replace ',', ''
                }

                $results += [PSCustomObject]@{
                    Scenario = $scenario.name
                    Policy = $policy
                    Cases = $scenario.cases
                    Days = $scenario.days
                    Seed = $scenario.seed
                    Disposed = $disposed
                    DisposalRate = $disposalRate
                    Gini = $gini
                    Utilization = $utilization
                    Hearings = $hearings
                    Duration = [math]::Round($duration, 1)
                }
            }
        } else {
            Write-Host " FAILED" -ForegroundColor Red
        }
    }
}

Write-Host ""
Write-Host "Step 3: Generating summary..." -ForegroundColor Cyan

# Export results to CSV
$resultsCSV = "$resultsDir\summary_results.csv"
$results | Export-Csv -Path $resultsCSV -NoTypeInformation

Write-Host "  Results saved to: $resultsCSV" -ForegroundColor Green

# Generate markdown summary
$summaryMD = "$resultsDir\SUMMARY.md"
$markdown = @"
# Comprehensive Simulation Results

**Generated**: $(Get-Date -Format "yyyy-MM-dd HH:mm:ss")
**Total Simulations**: $totalRuns
**Scenarios**: $($scenarios.Count)
**Policies**: $($policies.Count)

## Results Matrix

### Disposal Rate (%)

| Scenario | FIFO | Age | Readiness | Best |
|----------|------|-----|-----------|------|
"@

foreach ($scenario in $scenarios) {
    $fifo = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "fifo" }).DisposalRate
    $age = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "age" }).DisposalRate
    $readiness = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "readiness" }).DisposalRate

    $best = [math]::Max($fifo, [math]::Max($age, $readiness))
    $bestPolicy = if ($fifo -eq $best) { "FIFO" } elseif ($age -eq $best) { "Age" } else { "**Readiness**" }

    $markdown += "`n| $($scenario.name) | $fifo | $age | **$readiness** | $bestPolicy |"
}

$markdown += @"


### Gini Coefficient (Fairness)

| Scenario | FIFO | Age | Readiness | Best |
|----------|------|-----|-----------|------|
"@

foreach ($scenario in $scenarios) {
    $fifo = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "fifo" }).Gini
    $age = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "age" }).Gini
    $readiness = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "readiness" }).Gini

    $best = [math]::Min($fifo, [math]::Min($age, $readiness))
    $bestPolicy = if ($fifo -eq $best) { "FIFO" } elseif ($age -eq $best) { "Age" } else { "**Readiness**" }

    $markdown += "`n| $($scenario.name) | $fifo | $age | **$readiness** | $bestPolicy |"
}

$markdown += @"


### Utilization (%)

| Scenario | FIFO | Age | Readiness | Best |
|----------|------|-----|-----------|------|
"@

foreach ($scenario in $scenarios) {
    $fifo = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "fifo" }).Utilization
    $age = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "age" }).Utilization
    $readiness = ($results | Where-Object { $_.Scenario -eq $scenario.name -and $_.Policy -eq "readiness" }).Utilization
|
| 245 |
-
|
| 246 |
-
$best = [math]::Max($fifo, [math]::Max($age, $readiness))
|
| 247 |
-
$bestPolicy = if ($fifo -eq $best) { "FIFO" } elseif ($age -eq $best) { "Age" } else { "**Readiness**" }
|
| 248 |
-
|
| 249 |
-
$markdown += "`n| $($scenario.name) | $fifo | $age | **$readiness** | $bestPolicy |"
|
| 250 |
-
}
|
| 251 |
-
|
| 252 |
-
$markdown += @"
|
| 253 |
-
|
| 254 |
-
|
| 255 |
-
## Statistical Summary
|
| 256 |
-
|
| 257 |
-
### Our Algorithm (Readiness) Performance
|
| 258 |
-
|
| 259 |
-
"@
|
| 260 |
-
|
| 261 |
-
$readinessResults = $results | Where-Object { $_.Policy -eq "readiness" }
|
| 262 |
-
$avgDisposal = ($readinessResults.DisposalRate | Measure-Object -Average).Average
|
| 263 |
-
$stdDisposal = [math]::Sqrt((($readinessResults.DisposalRate | ForEach-Object { [math]::Pow($_ - $avgDisposal, 2) }) | Measure-Object -Average).Average)
|
| 264 |
-
$minDisposal = ($readinessResults.DisposalRate | Measure-Object -Minimum).Minimum
|
| 265 |
-
$maxDisposal = ($readinessResults.DisposalRate | Measure-Object -Maximum).Maximum
|
| 266 |
-
|
| 267 |
-
$markdown += @"
|
| 268 |
-
|
| 269 |
-
- **Mean Disposal Rate**: $([math]::Round($avgDisposal, 1))%
|
| 270 |
-
- **Std Dev**: $([math]::Round($stdDisposal, 2))%
|
| 271 |
-
- **Min**: $minDisposal%
|
| 272 |
-
- **Max**: $maxDisposal%
|
| 273 |
-
- **Coefficient of Variation**: $([math]::Round(($stdDisposal / $avgDisposal) * 100, 1))%
|
| 274 |
-
|
| 275 |
-
### Performance Comparison (Average across all scenarios)
|
| 276 |
-
|
| 277 |
-
| Metric | FIFO | Age | Readiness | Advantage |
|
| 278 |
-
|--------|------|-----|-----------|-----------|
|
| 279 |
-
"@
|
| 280 |
-
|
| 281 |
-
$avgDisposalFIFO = ($results | Where-Object { $_.Policy -eq "fifo" } | Measure-Object -Property DisposalRate -Average).Average
|
| 282 |
-
$avgDisposalAge = ($results | Where-Object { $_.Policy -eq "age" } | Measure-Object -Property DisposalRate -Average).Average
|
| 283 |
-
$avgDisposalReadiness = ($results | Where-Object { $_.Policy -eq "readiness" } | Measure-Object -Property DisposalRate -Average).Average
|
| 284 |
-
$advDisposal = $avgDisposalReadiness - [math]::Max($avgDisposalFIFO, $avgDisposalAge)
|
| 285 |
-
|
| 286 |
-
$avgGiniFIFO = ($results | Where-Object { $_.Policy -eq "fifo" } | Measure-Object -Property Gini -Average).Average
|
| 287 |
-
$avgGiniAge = ($results | Where-Object { $_.Policy -eq "age" } | Measure-Object -Property Gini -Average).Average
|
| 288 |
-
$avgGiniReadiness = ($results | Where-Object { $_.Policy -eq "readiness" } | Measure-Object -Property Gini -Average).Average
|
| 289 |
-
$advGini = [math]::Min($avgGiniFIFO, $avgGiniAge) - $avgGiniReadiness
|
| 290 |
-
|
| 291 |
-
$markdown += @"
|
| 292 |
-
|
| 293 |
-
| **Disposal Rate** | $([math]::Round($avgDisposalFIFO, 1))% | $([math]::Round($avgDisposalAge, 1))% | **$([math]::Round($avgDisposalReadiness, 1))%** | +$([math]::Round($advDisposal, 1))% |
|
| 294 |
-
| **Gini** | $([math]::Round($avgGiniFIFO, 3)) | $([math]::Round($avgGiniAge, 3)) | **$([math]::Round($avgGiniReadiness, 3))** | -$([math]::Round($advGini, 3)) (better) |
|
| 295 |
-
|
| 296 |
-
## Files
|
| 297 |
-
|
| 298 |
-
- Raw data: `summary_results.csv`
|
| 299 |
-
- Individual reports: `<scenario>_<policy>/report.txt`
|
| 300 |
-
- Datasets: `datasets/<scenario>_cases.csv`
|
| 301 |
-
|
| 302 |
-
---
|
| 303 |
-
Generated by comprehensive_sweep.ps1
|
| 304 |
-
"@
|
| 305 |
-
|
| 306 |
-
$markdown | Out-File -FilePath $summaryMD -Encoding UTF8
|
| 307 |
-
|
| 308 |
-
Write-Host " Summary saved to: $summaryMD" -ForegroundColor Green
|
| 309 |
-
Write-Host ""
|
| 310 |
-
|
| 311 |
-
Write-Host "================================================" -ForegroundColor Cyan
|
| 312 |
-
Write-Host "SWEEP COMPLETE!" -ForegroundColor Green
|
| 313 |
-
Write-Host "================================================" -ForegroundColor Cyan
|
| 314 |
-
Write-Host "Results directory: $resultsDir" -ForegroundColor Yellow
|
| 315 |
-
Write-Host "Total duration: $([math]::Round(($results | Measure-Object -Property Duration -Sum).Sum / 60, 1)) minutes" -ForegroundColor White
|
| 316 |
-
Write-Host ""
|
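The summary section above computes a mean, a population standard deviation, and a coefficient of variation over the readiness policy's disposal rates. The same arithmetic, sketched in Python for reference (the function name and sample values are illustrative, not from the script):

```python
import math

def summarize(rates):
    """Mean, population std dev, and coefficient of variation (%) of disposal rates."""
    mean = sum(rates) / len(rates)
    # Population variance: average of squared deviations, matching the script's
    # Measure-Object -Average over [math]::Pow($_ - $avg, 2)
    var = sum((r - mean) ** 2 for r in rates) / len(rates)
    std = math.sqrt(var)
    cv = std / mean * 100  # CV expressed as a percentage of the mean
    return round(mean, 1), round(std, 2), round(cv, 1)

print(summarize([10.7, 23.6, 51.9]))
```

A low coefficient of variation would indicate the policy's disposal rate is stable across scenarios of very different sizes.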
runs/baseline/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 3000
  Days simulated: 30
  Policy: readiness
  Horizon end: 2024-05-09

Hearing Metrics:
  Total hearings: 8,671
  Heard: 5,355 (61.8%)
  Adjourned: 3,316 (38.2%)

Disposal Metrics:
  Cases disposed: 320
  Disposal rate: 10.7%
  Gini coefficient: 0.190

Disposal Rates by Case Type:
  CA  :   73/ 587 ( 12.4%)
  CCC :   57/ 334 ( 17.1%)
  CMP :    6/  86 (  7.0%)
  CP  :   46/ 294 ( 15.6%)
  CRP :   61/ 612 ( 10.0%)
  RFA :   49/ 519 (  9.4%)
  RSA :   28/ 568 (  4.9%)

Efficiency Metrics:
  Court utilization: 38.3%
  Avg hearings/day: 289.0

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 1,680
  Filter rate: 16.2%

Final Ripeness Distribution:
  RIPE: 2624 (97.9%)
  UNRIPE_DEPENDENT: 19 (0.7%)
  UNRIPE_SUMMONS: 37 (1.4%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.002
  Avg daily load: 57.8 cases
  Allocation changes: 4,624
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 1,740 cases (58.0/day)
    Courtroom 2: 1,737 cases (57.9/day)
    Courtroom 3: 1,736 cases (57.9/day)
    Courtroom 4: 1,732 cases (57.7/day)
    Courtroom 5: 1,726 cases (57.5/day)
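The reports on this page summarize fairness with a Gini coefficient (0 = perfectly equal, higher = more unequal). A rough sketch of one common way to compute it, assuming the standard sorted-values identity for the mean-absolute-difference definition; the project's exact implementation is not shown here:

```python
def gini(values):
    """Gini coefficient of a distribution; 0.0 for a perfectly even one."""
    vals = sorted(values)
    n = len(vals)
    total = sum(vals)
    if n == 0 or total == 0:
        return 0.0
    # Sorted-form identity: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n,
    # with ranks i starting at 1
    cum = sum(i * x for i, x in enumerate(vals, start=1))
    return 2 * cum / (n * total) - (n + 1) / n

print(round(gini([1, 1, 1, 1]), 3))  # 0.0 for an even distribution
```

On disposal times, a value like the 0.190 above would indicate the time-to-disposal is spread fairly evenly across cases.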
runs/baseline_comparison/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 3000
  Days simulated: 60
  Policy: readiness
  Horizon end: 2024-06-20

Hearing Metrics:
  Total hearings: 16,137
  Heard: 9,981 (61.9%)
  Adjourned: 6,156 (38.1%)

Disposal Metrics:
  Cases disposed: 708
  Disposal rate: 23.6%
  Gini coefficient: 0.195

Disposal Rates by Case Type:
  CA  :  159/ 587 ( 27.1%)
  CCC :  133/ 334 ( 39.8%)
  CMP :   14/  86 ( 16.3%)
  CP  :  105/ 294 ( 35.7%)
  CRP :  142/ 612 ( 23.2%)
  RFA :   77/ 519 ( 14.8%)
  RSA :   78/ 568 ( 13.7%)

Efficiency Metrics:
  Court utilization: 35.6%
  Avg hearings/day: 268.9

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 3,360
  Filter rate: 17.2%

Final Ripeness Distribution:
  RIPE: 2236 (97.6%)
  UNRIPE_DEPENDENT: 19 (0.8%)
  UNRIPE_SUMMONS: 37 (1.6%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.002
  Avg daily load: 53.8 cases
  Allocation changes: 10,527
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 3,244 cases (54.1/day)
    Courtroom 2: 3,233 cases (53.9/day)
    Courtroom 3: 3,227 cases (53.8/day)
    Courtroom 4: 3,221 cases (53.7/day)
    Courtroom 5: 3,212 cases (53.5/day)
runs/baseline_large_data/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 10000
  Days simulated: 90
  Policy: readiness
  Horizon end: 2024-10-31

Hearing Metrics:
  Total hearings: 58,262
  Heard: 36,595 (62.8%)
  Adjourned: 21,667 (37.2%)

Disposal Metrics:
  Cases disposed: 5,195
  Disposal rate: 51.9%
  Gini coefficient: 0.243

Disposal Rates by Case Type:
  CA  : 1358/1952 ( 69.6%)
  CCC :  796/1132 ( 70.3%)
  CMP :  172/ 281 ( 61.2%)
  CP  :  662/ 960 ( 69.0%)
  CRP : 1365/2061 ( 66.2%)
  RFA :  363/1676 ( 21.7%)
  RSA :  479/1938 ( 24.7%)

Efficiency Metrics:
  Court utilization: 85.7%
  Avg hearings/day: 647.4

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 20,340
  Filter rate: 25.9%

Final Ripeness Distribution:
  RIPE: 4579 (95.3%)
  UNRIPE_DEPENDENT: 58 (1.2%)
  UNRIPE_SUMMONS: 168 (3.5%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.001
  Avg daily load: 129.5 cases
  Allocation changes: 38,756
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 11,671 cases (129.7/day)
    Courtroom 2: 11,666 cases (129.6/day)
    Courtroom 3: 11,654 cases (129.5/day)
    Courtroom 4: 11,640 cases (129.3/day)
    Courtroom 5: 11,631 cases (129.2/day)
runs/rl_final_test/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 3000
  Days simulated: 60
  Policy: rl
  Horizon end: 2024-06-20

Hearing Metrics:
  Total hearings: 16,133
  Heard: 9,929 (61.5%)
  Adjourned: 6,204 (38.5%)

Disposal Metrics:
  Cases disposed: 700
  Disposal rate: 23.3%
  Gini coefficient: 0.194

Disposal Rates by Case Type:
  CA  :  159/ 587 ( 27.1%)
  CCC :  128/ 334 ( 38.3%)
  CMP :   15/  86 ( 17.4%)
  CP  :  101/ 294 ( 34.4%)
  CRP :  151/ 612 ( 24.7%)
  RFA :   72/ 519 ( 13.9%)
  RSA :   74/ 568 ( 13.0%)

Efficiency Metrics:
  Court utilization: 35.6%
  Avg hearings/day: 268.9

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 3,360
  Filter rate: 17.2%

Final Ripeness Distribution:
  RIPE: 2244 (97.6%)
  UNRIPE_DEPENDENT: 19 (0.8%)
  UNRIPE_SUMMONS: 37 (1.6%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.002
  Avg daily load: 53.8 cases
  Allocation changes: 9,860
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 3,242 cases (54.0/day)
    Courtroom 2: 3,234 cases (53.9/day)
    Courtroom 3: 3,227 cases (53.8/day)
    Courtroom 4: 3,219 cases (53.6/day)
    Courtroom 5: 3,211 cases (53.5/day)
runs/rl_intensive/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 3000
  Days simulated: 60
  Policy: rl
  Horizon end: 2024-06-20

Hearing Metrics:
  Total hearings: 16,133
  Heard: 9,929 (61.5%)
  Adjourned: 6,204 (38.5%)

Disposal Metrics:
  Cases disposed: 700
  Disposal rate: 23.3%
  Gini coefficient: 0.194

Disposal Rates by Case Type:
  CA  :  159/ 587 ( 27.1%)
  CCC :  128/ 334 ( 38.3%)
  CMP :   15/  86 ( 17.4%)
  CP  :  101/ 294 ( 34.4%)
  CRP :  151/ 612 ( 24.7%)
  RFA :   72/ 519 ( 13.9%)
  RSA :   74/ 568 ( 13.0%)

Efficiency Metrics:
  Court utilization: 35.6%
  Avg hearings/day: 268.9

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 3,360
  Filter rate: 17.2%

Final Ripeness Distribution:
  RIPE: 2244 (97.6%)
  UNRIPE_DEPENDENT: 19 (0.8%)
  UNRIPE_SUMMONS: 37 (1.6%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.002
  Avg daily load: 53.8 cases
  Allocation changes: 9,860
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 3,242 cases (54.0/day)
    Courtroom 2: 3,234 cases (53.9/day)
    Courtroom 3: 3,227 cases (53.8/day)
    Courtroom 4: 3,219 cases (53.6/day)
    Courtroom 5: 3,211 cases (53.5/day)
runs/rl_large_data/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 10000
  Days simulated: 90
  Policy: rl
  Horizon end: 2024-10-31

Hearing Metrics:
  Total hearings: 57,999
  Heard: 36,465 (62.9%)
  Adjourned: 21,534 (37.1%)

Disposal Metrics:
  Cases disposed: 5,212
  Disposal rate: 52.1%
  Gini coefficient: 0.248

Disposal Rates by Case Type:
  CA  : 1366/1952 ( 70.0%)
  CCC :  815/1132 ( 72.0%)
  CMP :  174/ 281 ( 61.9%)
  CP  :  649/ 960 ( 67.6%)
  CRP : 1348/2061 ( 65.4%)
  RFA :  356/1676 ( 21.2%)
  RSA :  504/1938 ( 26.0%)

Efficiency Metrics:
  Court utilization: 85.4%
  Avg hearings/day: 644.4

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 20,340
  Filter rate: 26.0%

Final Ripeness Distribution:
  RIPE: 4562 (95.3%)
  UNRIPE_DEPENDENT: 58 (1.2%)
  UNRIPE_SUMMONS: 168 (3.5%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.001
  Avg daily load: 128.9 cases
  Allocation changes: 37,970
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 11,622 cases (129.1/day)
    Courtroom 2: 11,610 cases (129.0/day)
    Courtroom 3: 11,599 cases (128.9/day)
    Courtroom 4: 11,590 cases (128.8/day)
    Courtroom 5: 11,578 cases (128.6/day)
runs/rl_untrained/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 3000
  Days simulated: 30
  Policy: rl
  Horizon end: 2024-05-09

Hearing Metrics:
  Total hearings: 8,668
  Heard: 5,338 (61.6%)
  Adjourned: 3,330 (38.4%)

Disposal Metrics:
  Cases disposed: 312
  Disposal rate: 10.4%
  Gini coefficient: 0.191

Disposal Rates by Case Type:
  CA  :   73/ 587 ( 12.4%)
  CCC :   46/ 334 ( 13.8%)
  CMP :    5/  86 (  5.8%)
  CP  :   44/ 294 ( 15.0%)
  CRP :   72/ 612 ( 11.8%)
  RFA :   40/ 519 (  7.7%)
  RSA :   32/ 568 (  5.6%)

Efficiency Metrics:
  Court utilization: 38.3%
  Avg hearings/day: 288.9

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 1,680
  Filter rate: 16.2%

Final Ripeness Distribution:
  RIPE: 2632 (97.9%)
  UNRIPE_DEPENDENT: 19 (0.7%)
  UNRIPE_SUMMONS: 37 (1.4%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.002
  Avg daily load: 57.8 cases
  Allocation changes: 4,412
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 1,742 cases (58.1/day)
    Courtroom 2: 1,737 cases (57.9/day)
    Courtroom 3: 1,732 cases (57.7/day)
    Courtroom 4: 1,730 cases (57.7/day)
    Courtroom 5: 1,727 cases (57.6/day)
runs/rl_vs_baseline/comparison_report.md
DELETED
@@ -1,29 +0,0 @@
# Scheduling Policy Comparison Report

Policies evaluated: readiness, rl

## Key Metrics Comparison

| Metric | readiness | rl | Best |
|--------|-------|-------|------|
| Disposals | - | - | - |
| Gini (fairness) | - | - | - |
| Utilization (%) | - | - | - |
| Adjournment Rate (%) | - | - | - |
| Hearings Heard | 5 | 5 | - |
| Total Hearings | - | - | - |

## Analysis

**Fairness**: readiness policy achieves lowest Gini coefficient (999.000), indicating most equitable disposal time distribution.

**Efficiency**: readiness policy achieves highest utilization (0.0%), maximizing courtroom capacity usage.

**Throughput**: readiness policy produces most disposals (0), clearing cases fastest.


## Recommendation

**Recommended Policy**: readiness

This policy wins on 0/0 key metrics, providing the best balance of fairness, efficiency, and throughput.
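The comparison reports on this page pick a "best" policy per metric: disposal rate and utilization are maximized, while Gini (fairness) is minimized. A small Python sketch of that selection rule (the sample numbers are illustrative, not taken from the runs):

```python
def best_policy(scores, minimize=False):
    """Pick the winning policy for one metric; Gini-style metrics use minimize=True."""
    pick = min if minimize else max
    return pick(scores, key=scores.get)

# One (scores, minimize) pair per metric, mirroring the sweep's Max/Min logic
metrics = {
    "DisposalRate": ({"fifo": 20.1, "age": 21.4, "readiness": 23.6}, False),
    "Gini":         ({"fifo": 0.240, "age": 0.215, "readiness": 0.195}, True),
}
for name, (scores, minimize) in metrics.items():
    print(name, "->", best_policy(scores, minimize))
```

Separating the direction of comparison per metric avoids the kind of placeholder output seen above, where every section defaults to one policy.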
runs/rl_vs_baseline/readiness/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 3000
  Days simulated: 30
  Policy: readiness
  Horizon end: 2024-05-09

Hearing Metrics:
  Total hearings: 8,671
  Heard: 5,355 (61.8%)
  Adjourned: 3,316 (38.2%)

Disposal Metrics:
  Cases disposed: 320
  Disposal rate: 10.7%
  Gini coefficient: 0.190

Disposal Rates by Case Type:
  CA  :   73/ 587 ( 12.4%)
  CCC :   57/ 334 ( 17.1%)
  CMP :    6/  86 (  7.0%)
  CP  :   46/ 294 ( 15.6%)
  CRP :   61/ 612 ( 10.0%)
  RFA :   49/ 519 (  9.4%)
  RSA :   28/ 568 (  4.9%)

Efficiency Metrics:
  Court utilization: 38.3%
  Avg hearings/day: 289.0

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 1,680
  Filter rate: 16.2%

Final Ripeness Distribution:
  RIPE: 2624 (97.9%)
  UNRIPE_DEPENDENT: 19 (0.7%)
  UNRIPE_SUMMONS: 37 (1.4%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.002
  Avg daily load: 57.8 cases
  Allocation changes: 4,624
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 1,740 cases (58.0/day)
    Courtroom 2: 1,737 cases (57.9/day)
    Courtroom 3: 1,736 cases (57.9/day)
    Courtroom 4: 1,732 cases (57.7/day)
    Courtroom 5: 1,726 cases (57.5/day)
runs/rl_vs_baseline/rl/report.txt
DELETED
@@ -1,56 +0,0 @@
================================================================================
SIMULATION REPORT
================================================================================

Configuration:
  Cases: 3000
  Days simulated: 30
  Policy: rl
  Horizon end: 2024-05-09

Hearing Metrics:
  Total hearings: 8,668
  Heard: 5,338 (61.6%)
  Adjourned: 3,330 (38.4%)

Disposal Metrics:
  Cases disposed: 312
  Disposal rate: 10.4%
  Gini coefficient: 0.191

Disposal Rates by Case Type:
  CA  :   73/ 587 ( 12.4%)
  CCC :   46/ 334 ( 13.8%)
  CMP :    5/  86 (  5.8%)
  CP  :   44/ 294 ( 15.0%)
  CRP :   72/ 612 ( 11.8%)
  RFA :   40/ 519 (  7.7%)
  RSA :   32/ 568 (  5.6%)

Efficiency Metrics:
  Court utilization: 38.3%
  Avg hearings/day: 288.9

Ripeness Impact:
  Transitions: 0
  Cases filtered (unripe): 1,680
  Filter rate: 16.2%

Final Ripeness Distribution:
  RIPE: 2632 (97.9%)
  UNRIPE_DEPENDENT: 19 (0.7%)
  UNRIPE_SUMMONS: 37 (1.4%)

Courtroom Allocation:
  Strategy: load_balanced
  Load balance fairness (Gini): 0.002
  Avg daily load: 57.8 cases
  Allocation changes: 4,412
  Capacity rejections: 0

  Courtroom-wise totals:
    Courtroom 1: 1,742 cases (58.1/day)
    Courtroom 2: 1,737 cases (57.9/day)
    Courtroom 3: 1,732 cases (57.7/day)
    Courtroom 4: 1,730 cases (57.7/day)
    Courtroom 5: 1,727 cases (57.6/day)
scheduler/control/__init__.py
CHANGED
@@ -3,19 +3,14 @@
 Provides explainability and judge override capabilities.
 """
 
-from .explainability import (
-    DecisionStep,
-    SchedulingExplanation,
-    ExplainabilityEngine
-)
-
+from .explainability import DecisionStep, ExplainabilityEngine, SchedulingExplanation
 from .overrides import (
-    OverrideType,
-    Override,
-    JudgePreferences,
     CauseListDraft,
+    JudgePreferences,
+    Override,
+    OverrideManager,
+    OverrideType,
     OverrideValidator,
-    OverrideManager
 )
 
 __all__ = [
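The `explainability` module re-exported above centers on two dataclasses, `DecisionStep` and `SchedulingExplanation`. A simplified, self-contained sketch of that pattern — the field names follow the diffs on this page, but the rendering details are condensed and partly assumed:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecisionStep:
    """Single step in the decision reasoning."""
    step_name: str
    passed: bool
    reason: str
    details: dict = field(default_factory=dict)

@dataclass
class SchedulingExplanation:
    """Condensed explanation of a scheduling decision for one case."""
    case_id: str
    scheduled: bool
    decision_steps: list
    final_reason: str
    priority_breakdown: Optional[dict] = None

    def to_readable_text(self) -> str:
        lines = [f"Case {self.case_id}: {'SCHEDULED' if self.scheduled else 'NOT SCHEDULED'}"]
        for i, step in enumerate(self.decision_steps, 1):
            status = "PASS" if step.passed else "FAIL"
            lines.append(f"Step {i}: {step.step_name} - {status} ({step.reason})")
        lines.append(f"Final Decision: {self.final_reason}")
        return "\n".join(lines)

expl = SchedulingExplanation(
    case_id="CA-101", scheduled=True,
    decision_steps=[DecisionStep("Ripeness check", True, "case is RIPE")],
    final_reason="Scheduled: all checks passed",
)
print(expl.to_readable_text())
```

Keeping each check as its own `DecisionStep` lets the engine report exactly which gate (ripeness, capacity, priority threshold) admitted or blocked a case.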
scheduler/control/explainability.py
CHANGED
@@ -2,16 +2,27 @@
 
 Provides human-readable explanations for why each case was or wasn't scheduled.
 """
+
 from dataclasses import dataclass
-from typing import Optional
 from datetime import date
+from typing import Optional
 
 from scheduler.core.case import Case
 
 
+def _fmt_score(score: Optional[float]) -> str:
+    """Format a score safely; return 'N/A' when score is None.
+
+    Avoids `TypeError: unsupported format string passed to NoneType.__format__`
+    when `priority_score` may be missing for not-scheduled cases.
+    """
+    return f"{score:.4f}" if isinstance(score, (int, float)) else "N/A"
+
+
 @dataclass
 class DecisionStep:
     """Single step in decision reasoning."""
+
     step_name: str
     passed: bool
     reason: str
@@ -21,43 +32,44 @@ class DecisionStep:
 @dataclass
 class SchedulingExplanation:
     """Complete explanation of scheduling decision for a case."""
+
     case_id: str
     scheduled: bool
     decision_steps: list[DecisionStep]
     final_reason: str
     priority_breakdown: Optional[dict] = None
     courtroom_assignment_reason: Optional[str] = None
+
     def to_readable_text(self) -> str:
         """Convert to human-readable explanation."""
         lines = [f"Case {self.case_id}: {'SCHEDULED' if self.scheduled else 'NOT SCHEDULED'}"]
         lines.append("=" * 60)
+
         for i, step in enumerate(self.decision_steps, 1):
+            status = "[PASS]" if step.passed else "[FAIL]"
             lines.append(f"\nStep {i}: {step.step_name} - {status}")
             lines.append(f"  Reason: {step.reason}")
             if step.details:
                 for key, value in step.details.items():
                     lines.append(f"    {key}: {value}")
+
         if self.priority_breakdown and self.scheduled:
+            lines.append("\nPriority Score Breakdown:")
             for component, value in self.priority_breakdown.items():
                 lines.append(f"  {component}: {value}")
+
         if self.courtroom_assignment_reason and self.scheduled:
+            lines.append("\nCourtroom Assignment:")
             lines.append(f"  {self.courtroom_assignment_reason}")
+
         lines.append(f"\nFinal Decision: {self.final_reason}")
+
         return "\n".join(lines)
 
 
 class ExplainabilityEngine:
     """Generate explanations for scheduling decisions."""
+
     @staticmethod
     def explain_scheduling_decision(
         case: Case,
@@ -67,51 +79,56 @@ class ExplainabilityEngine:
         priority_score: Optional[float] = None,
         courtroom_id: Optional[int] = None,
         capacity_full: bool = False,
+        below_threshold: bool = False,
     ) -> SchedulingExplanation:
         """Generate complete explanation for why case was/wasn't scheduled.
+
         Args:
             case: The case being scheduled
             current_date: Current simulation date
             scheduled: Whether case was scheduled
             ripeness_status: Ripeness classification
+            priority_score: Calculated priority score if available
             courtroom_id: Assigned courtroom if scheduled
             capacity_full: Whether capacity was full
             below_threshold: Whether priority was below threshold
+
         Returns:
             Complete scheduling explanation
         """
+        steps: list[DecisionStep] = []
+        priority_breakdown: Optional[dict] = None  # ensure defined for return
+
         # Step 1: Disposal status check
         if case.is_disposed:
+            steps.append(
+                DecisionStep(
+                    step_name="Case Status Check",
+                    passed=False,
+                    reason="Case already disposed",
+                    details={"disposal_date": str(case.disposal_date)},
+                )
+            )
             return SchedulingExplanation(
                 case_id=case.case_id,
                 scheduled=False,
                 decision_steps=steps,
+                final_reason="Case disposed, no longer eligible for scheduling",
             )
+
+        steps.append(
+            DecisionStep(
+                step_name="Case Status Check",
+                passed=True,
+                reason="Case active and eligible",
+                details={"status": case.status.value},
+            )
+        )
+
         # Step 2: Ripeness check
         is_ripe = ripeness_status == "RIPE"
+        ripeness_detail: dict = {}
+
         if not is_ripe:
             if "SUMMONS" in ripeness_status:
                 ripeness_detail["bottleneck"] = "Summons not yet served"
@@ -126,191 +143,237 @@ class ExplainabilityEngine:
                 ripeness_detail["bottleneck"] = ripeness_status
         else:
             ripeness_detail["status"] = "All prerequisites met, ready for hearing"
+
         if case.last_hearing_purpose:
             ripeness_detail["last_hearing_purpose"] = case.last_hearing_purpose
+
+        steps.append(
+            DecisionStep(
+                step_name="Ripeness Classification",
+                passed=is_ripe,
+                reason=(
+                    "Case is RIPE (ready for hearing)"
+                    if is_ripe
+                    else f"Case is UNRIPE ({ripeness_status})"
+                ),
+                details=ripeness_detail,
+            )
+        )
+
         if not is_ripe and not scheduled:
             return SchedulingExplanation(
                 case_id=case.case_id,
                 scheduled=False,
                 decision_steps=steps,
+                final_reason=(
+                    "Case not scheduled: UNRIPE status blocks scheduling. "
+                    f"{ripeness_detail.get('action_needed', 'Waiting for case to become ready')}"
+                ),
             )
+
         # Step 3: Minimum gap check
         min_gap_days = 7
         days_since = case.days_since_last_hearing
         meets_gap = case.last_hearing_date is None or days_since >= min_gap_days
+
+        gap_details = {"days_since_last_hearing": days_since, "minimum_required": min_gap_days}
+
         if case.last_hearing_date:
             gap_details["last_hearing_date"] = str(case.last_hearing_date)
+
+        steps.append(
+            DecisionStep(
+                step_name="Minimum Gap Check",
+                passed=meets_gap,
+                reason=f"{'Meets' if meets_gap else 'Does not meet'} minimum {min_gap_days}-day gap requirement",
+                details=gap_details,
+            )
+        )
+
         if not meets_gap and not scheduled:
+            next_eligible = (
+                case.last_hearing_date.isoformat() if case.last_hearing_date else "unknown"
+            )
             return SchedulingExplanation(
                 case_id=case.case_id,
                 scheduled=False,
                 decision_steps=steps,
+                final_reason=(
+                    f"Case not scheduled: Only {days_since} days since last hearing (minimum {min_gap_days} required). "
+                    f"Next eligible after {next_eligible}"
+                ),
             )
+
+        # Step 4: Priority calculation (only if a score was provided)
         if priority_score is not None:
+            import math
+
             age_component = min(case.age_days / 2000, 1.0) * 0.35
             readiness_component = case.readiness_score * 0.25
             urgency_component = (1.0 if case.is_urgent else 0.0) * 0.25
+
             # Adjournment boost calculation
             adj_boost_value = 0.0
             if case.status.value == "ADJOURNED" and case.hearing_count > 0:
                 adj_boost_value = math.exp(-case.days_since_last_hearing / 21)
             adj_boost_component = adj_boost_value * 0.15
+
             priority_breakdown = {
                 "Age": f"{age_component:.4f} (age={case.age_days}d, weight=0.35)",
                 "Readiness": f"{readiness_component:.4f} (score={case.readiness_score:.2f}, weight=0.25)",
                 "Urgency": f"{urgency_component:.4f} ({'URGENT' if case.is_urgent else 'normal'}, weight=0.25)",
+                "Adjournment Boost": (
+                    f"{adj_boost_component:.4f} (days_since={days_since}, decay=exp(-{days_since}/21), weight=0.15)"
+                ),
+                "TOTAL": _fmt_score(priority_score),
             }
+
+            steps.append(
+                DecisionStep(
+                    step_name="Priority Calculation",
+                    passed=True,
+                    reason=f"Priority score calculated: {_fmt_score(priority_score)}",
+                    details=priority_breakdown,
+                )
+            )
+
+        # Step 5: Selection by policy and final assembly
         if scheduled:
             if capacity_full:
+                steps.append(
+                    DecisionStep(
+                        step_name="Capacity Check",
+                        passed=True,
+                        reason="Selected despite full capacity (high priority override)",
+                        details={"priority_score": _fmt_score(priority_score)},
+                    )
+                )
             elif below_threshold:
+                steps.append(
+                    DecisionStep(
+                        step_name="Policy Selection",
+                        passed=True,
+                        reason="Selected by policy despite being below typical threshold",
+                        details={"reason": "Algorithm determined case should be scheduled"},
+                    )
+                )
             else:
+                steps.append(
+                    DecisionStep(
+                        step_name="Policy Selection",
+                        passed=True,
+                        reason="Selected by scheduling policy among eligible cases",
+                        details={
+                            "priority_rank": "Top priority among eligible cases",
+                            "policy": "Readiness + Adjournment Boost",
+                        },
+                    )
+                )
+
+            courtroom_reason = None
             if courtroom_id:
                 courtroom_reason = f"Assigned to Courtroom {courtroom_id} via load balancing (least loaded courtroom selected)"
+                steps.append(
+                    DecisionStep(
+                        step_name="Courtroom Assignment",
+                        passed=True,
+                        reason=courtroom_reason,
+                        details={"courtroom_id": courtroom_id},
+                    )
+                )
+
+            # Build final reason safely (omit missing parts)
+            parts = [
+                "Case SCHEDULED: Passed all checks",
+                f"priority score {_fmt_score(priority_score)}"
+                if priority_score is not None
+                else None,
+                f"assigned to Courtroom {courtroom_id}" if courtroom_id else None,
+            ]
+            final_reason = ", ".join(part for part in parts if part)
+
             return SchedulingExplanation(
                 case_id=case.case_id,
                 scheduled=True,
                 decision_steps=steps,
                 final_reason=final_reason,
+                priority_breakdown=priority_breakdown if priority_breakdown is not None else None,
+                courtroom_assignment_reason=courtroom_reason,
             )
+
+        # Not scheduled
+        if capacity_full:
+            steps.append(
+                DecisionStep(
                     step_name="Capacity Check",
                     passed=False,
                     reason="Daily capacity limit reached",
                     details={
+                        "priority_score": _fmt_score(priority_score),
+                        "explanation": "Higher priority cases filled all available slots",
+                    },
+                )
+            )
+            final_reason = (
+                "Case NOT SCHEDULED: Capacity full. "
+                f"Priority {_fmt_score(priority_score)} was not high enough to displace scheduled cases"
+            )
+        elif below_threshold:
+            steps.append(
+                DecisionStep(
                     step_name="Policy Selection",
                     passed=False,
                     reason="Priority below scheduling threshold",
                     details={
+                        "priority_score": _fmt_score(priority_score),
+                        "explanation": "Other cases had higher priority scores",
+                    },
+                )
+            )
+            final_reason = (
+                "Case NOT SCHEDULED: "
+                f"Priority {_fmt_score(priority_score)} below threshold. Wait for case to age or become more urgent"
+            )
+        else:
+            final_reason = "Case NOT SCHEDULED: Unknown reason (policy decision)"
+
+        return SchedulingExplanation(
+            case_id=case.case_id,
+            scheduled=False,
+            decision_steps=steps,
+            final_reason=final_reason,
+            priority_breakdown=priority_breakdown if priority_breakdown is not None else None,
+        )
+
     @staticmethod
     def explain_why_not_scheduled(case: Case, current_date: date) -> str:
         """Quick explanation for why a case wasn't scheduled.
+
         Args:
             case: Case to explain
             current_date: Current date
+
         Returns:
             Human-readable reason
         """
         if case.is_disposed:
             return f"Already disposed on {case.disposal_date}"
+
         if case.ripeness_status != "RIPE":
             bottleneck_reasons = {
                 "UNRIPE_SUMMONS": "Summons not served",
                 "UNRIPE_DEPENDENT": "Waiting for dependent case",
                 "UNRIPE_PARTY": "Party unavailable",
+                "UNRIPE_DOCUMENT": "Documents pending",
             }
             reason = bottleneck_reasons.get(case.ripeness_status, case.ripeness_status)
             return f"UNRIPE: {reason}"
+
         if case.last_hearing_date and case.days_since_last_hearing < 7:
+            return (
+                f"Too recent (last hearing {case.days_since_last_hearing} days ago, minimum 7 days)"
+            )
+
         # If ripe and meets gap, then it's priority-based
         priority = case.get_priority_score()
         return f"Low priority (score {priority:.3f}) - other cases ranked higher"
scheduler/control/overrides.py
CHANGED
@@ -3,11 +3,11 @@
 Allows judges to review, modify, and approve algorithmic scheduling suggestions.
 System is suggestive, not prescriptive - judges retain final control.
 """
 
 from dataclasses import dataclass, field
 from datetime import date, datetime
 from enum import Enum
 from typing import Optional
-import json
 
 
 class OverrideType(Enum):
@@ -35,13 +35,13 @@ class Override:
     reason: str = ""
     date_affected: Optional[date] = None
     courtroom_id: Optional[int] = None
 
     # Algorithm-specific attributes
     make_ripe: Optional[bool] = None  # For RIPENESS overrides
     new_position: Optional[int] = None  # For REORDER/ADD_CASE overrides
     new_priority: Optional[float] = None  # For PRIORITY overrides
     new_capacity: Optional[int] = None  # For CAPACITY overrides
 
     def to_dict(self) -> dict:
         """Convert to dictionary for logging."""
         return {
@@ -60,32 +60,32 @@ class Override:
             "new_priority": self.new_priority,
             "new_capacity": self.new_capacity
         }
 
     def to_readable_text(self) -> str:
         """Human-readable description of override."""
         action_desc = {
             OverrideType.RIPENESS: f"Changed ripeness from {self.old_value} to {self.new_value}",
             OverrideType.PRIORITY: f"Adjusted priority from {self.old_value} to {self.new_value}",
-            OverrideType.ADD_CASE:
-            OverrideType.REMOVE_CASE:
             OverrideType.REORDER: f"Reordered from position {self.old_value} to {self.new_value}",
             OverrideType.CAPACITY: f"Changed capacity from {self.old_value} to {self.new_value}",
             OverrideType.MIN_GAP: f"Overrode min gap from {self.old_value} to {self.new_value} days",
             OverrideType.COURTROOM: f"Changed courtroom from {self.old_value} to {self.new_value}"
         }
 
         action = action_desc.get(self.override_type, f"Override: {self.override_type.value}")
 
         parts = [
             f"[{self.timestamp.strftime('%Y-%m-%d %H:%M')}]",
             f"Judge {self.judge_id}:",
             action,
             f"(Case {self.case_id})"
         ]
 
         if self.reason:
             parts.append(f"Reason: {self.reason}")
 
         return " ".join(parts)
 
 
@@ -98,7 +98,7 @@ class JudgePreferences:
     min_gap_overrides: dict[str, int] = field(default_factory=dict)  # Per-case gap overrides
     case_type_preferences: dict[str, list[str]] = field(default_factory=dict)  # Day-of-week preferences
     capacity_overrides: dict[int, int] = field(default_factory=dict)  # Per-courtroom capacity overrides
 
     def to_dict(self) -> dict:
         """Convert to dictionary."""
         return {
@@ -123,25 +123,25 @@ class CauseListDraft:
     created_at: datetime
     finalized_at: Optional[datetime] = None
     status: str = "DRAFT"  # DRAFT, APPROVED, REJECTED
 
     def get_acceptance_rate(self) -> float:
         """Calculate what % of suggestions were accepted."""
         if not self.algorithm_suggested:
             return 0.0
 
         accepted = len(set(self.algorithm_suggested) & set(self.judge_approved))
         return accepted / len(self.algorithm_suggested) * 100
 
     def get_modifications_summary(self) -> dict:
         """Summarize modifications made."""
         added = set(self.judge_approved) - set(self.algorithm_suggested)
         removed = set(self.algorithm_suggested) - set(self.judge_approved)
 
         override_counts = {}
         for override in self.overrides:
             override_type = override.override_type.value
             override_counts[override_type] = override_counts.get(override_type, 0) + 1
 
         return {
             "cases_added": len(added),
             "cases_removed": len(removed),
@@ -153,32 +153,31 @@ class CauseListDraft:
 
 class OverrideValidator:
     """Validates override requests against constraints."""
 
     def __init__(self):
         self.errors: list[str] = []
 
     def validate(self, override: Override) -> bool:
         """Validate an override against all applicable constraints.
 
         Args:
             override: Override to validate
 
         Returns:
             True if valid, False otherwise
         """
         self.errors.clear()
 
         if override.override_type == OverrideType.RIPENESS:
             valid, error = self.validate_ripeness_override(
                 override.case_id,
-                override.old_value or "",
                 override.new_value or "",
                 override.reason
             )
             if not valid:
                 self.errors.append(error)
                 return False
 
         elif override.override_type == OverrideType.CAPACITY:
             if override.new_capacity is not None:
                 valid, error = self.validate_capacity_override(
@@ -188,59 +187,57 @@ class OverrideValidator:
                 if not valid:
                     self.errors.append(error)
                     return False
 
         elif override.override_type == OverrideType.PRIORITY:
             if override.new_priority is not None:
                 if not (0 <= override.new_priority <= 1.0):
                     self.errors.append("Priority must be between 0 and 1.0")
                     return False
 
         # Basic validation
         if not override.case_id:
             self.errors.append("Case ID is required")
             return False
 
         if not override.judge_id:
             self.errors.append("Judge ID is required")
             return False
 
         return True
 
     def get_errors(self) -> list[str]:
         """Get validation errors from last validation."""
         return self.errors.copy()
 
     @staticmethod
     def validate_ripeness_override(
         case_id: str,
-        old_status: str,
         new_status: str,
         reason: str
     ) -> tuple[bool, str]:
         """Validate ripeness override.
 
         Args:
             case_id: Case ID
-            old_status: Current ripeness status
             new_status: Requested new status
             reason: Reason for override
 
         Returns:
             (valid, error_message)
         """
         valid_statuses = ["RIPE", "UNRIPE_SUMMONS", "UNRIPE_DEPENDENT", "UNRIPE_PARTY", "UNRIPE_DOCUMENT"]
 
         if new_status not in valid_statuses:
             return False, f"Invalid ripeness status: {new_status}"
 
         if not reason:
             return False, "Reason required for ripeness override"
 
         if len(reason) < 10:
             return False, "Reason must be at least 10 characters"
 
         return True, ""
 
     @staticmethod
     def validate_capacity_override(
         current_capacity: int,
@@ -248,26 +245,26 @@ class OverrideValidator:
         max_capacity: int = 200
     ) -> tuple[bool, str]:
         """Validate capacity override.
 
         Args:
             current_capacity: Current daily capacity
             new_capacity: Requested new capacity
             max_capacity: Maximum allowed capacity
 
         Returns:
             (valid, error_message)
         """
         if new_capacity < 0:
             return False, "Capacity cannot be negative"
 
         if new_capacity > max_capacity:
             return False, f"Capacity cannot exceed maximum ({max_capacity})"
 
         if new_capacity == 0:
             return False, "Capacity cannot be zero (use blocked dates for full closures)"
 
         return True, ""
 
     @staticmethod
     def validate_add_case(
         case_id: str,
@@ -276,52 +273,52 @@ class OverrideValidator:
         max_capacity: int
     ) -> tuple[bool, str]:
         """Validate adding a case to cause list.
 
         Args:
             case_id: Case to add
             current_schedule: Currently scheduled case IDs
             current_capacity: Current number of scheduled cases
             max_capacity: Maximum capacity
 
         Returns:
             (valid, error_message)
         """
         if case_id in current_schedule:
             return False, f"Case {case_id} already in schedule"
 
         if current_capacity >= max_capacity:
             return False, f"Schedule at capacity ({current_capacity}/{max_capacity})"
 
         return True, ""
 
     @staticmethod
     def validate_remove_case(
         case_id: str,
         current_schedule: list[str]
     ) -> tuple[bool, str]:
         """Validate removing a case from cause list.
 
         Args:
             case_id: Case to remove
             current_schedule: Currently scheduled case IDs
 
         Returns:
             (valid, error_message)
         """
         if case_id not in current_schedule:
             return False, f"Case {case_id} not in schedule"
 
         return True, ""
 
 
 class OverrideManager:
     """Manages judge overrides and interventions."""
 
     def __init__(self):
         self.overrides: list[Override] = []
         self.drafts: list[CauseListDraft] = []
         self.preferences: dict[str, JudgePreferences] = {}
 
     def create_draft(
         self,
         date: date,
@@ -330,13 +327,13 @@ class OverrideManager:
         algorithm_suggested: list[str]
     ) -> CauseListDraft:
         """Create a draft cause list for judge review.
 
         Args:
             date: Date of cause list
             courtroom_id: Courtroom ID
             judge_id: Judge ID
             algorithm_suggested: Case IDs suggested by algorithm
 
         Returns:
             Draft cause list
         """
@@ -350,21 +347,21 @@ class OverrideManager:
             created_at=datetime.now(),
             status="DRAFT"
         )
 
         self.drafts.append(draft)
         return draft
 
     def apply_override(
         self,
         draft: CauseListDraft,
         override: Override
     ) -> tuple[bool, str]:
         """Apply an override to a draft cause list.
 
         Args:
             draft: Draft to modify
             override: Override to apply
 
         Returns:
             (success, error_message)
         """
@@ -378,7 +375,7 @@ class OverrideManager:
             )
             if not valid:
                 return False, error
 
         elif override.override_type == OverrideType.ADD_CASE:
             valid, error = OverrideValidator.validate_add_case(
                 override.case_id,
@@ -388,9 +385,9 @@ class OverrideManager:
             )
             if not valid:
                 return False, error
 
             draft.judge_approved.append(override.case_id)
|
| 393 |
-
|
| 394 |
elif override.override_type == OverrideType.REMOVE_CASE:
|
| 395 |
valid, error = OverrideValidator.validate_remove_case(
|
| 396 |
override.case_id,
|
|
@@ -398,79 +395,79 @@ class OverrideManager:
|
|
| 398 |
)
|
| 399 |
if not valid:
|
| 400 |
return False, error
|
| 401 |
-
|
| 402 |
draft.judge_approved.remove(override.case_id)
|
| 403 |
-
|
| 404 |
# Record override
|
| 405 |
draft.overrides.append(override)
|
| 406 |
self.overrides.append(override)
|
| 407 |
-
|
| 408 |
return True, ""
|
| 409 |
-
|
| 410 |
def finalize_draft(self, draft: CauseListDraft) -> bool:
|
| 411 |
"""Finalize draft cause list (judge approval).
|
| 412 |
-
|
| 413 |
Args:
|
| 414 |
draft: Draft to finalize
|
| 415 |
-
|
| 416 |
Returns:
|
| 417 |
Success status
|
| 418 |
"""
|
| 419 |
if draft.status != "DRAFT":
|
| 420 |
return False
|
| 421 |
-
|
| 422 |
draft.status = "APPROVED"
|
| 423 |
draft.finalized_at = datetime.now()
|
| 424 |
-
|
| 425 |
return True
|
| 426 |
-
|
| 427 |
def get_judge_preferences(self, judge_id: str) -> JudgePreferences:
|
| 428 |
"""Get or create judge preferences.
|
| 429 |
-
|
| 430 |
Args:
|
| 431 |
judge_id: Judge ID
|
| 432 |
-
|
| 433 |
Returns:
|
| 434 |
Judge preferences
|
| 435 |
"""
|
| 436 |
if judge_id not in self.preferences:
|
| 437 |
self.preferences[judge_id] = JudgePreferences(judge_id=judge_id)
|
| 438 |
-
|
| 439 |
return self.preferences[judge_id]
|
| 440 |
-
|
| 441 |
def get_override_statistics(self, judge_id: Optional[str] = None) -> dict:
|
| 442 |
"""Get override statistics.
|
| 443 |
-
|
| 444 |
Args:
|
| 445 |
judge_id: Optional filter by judge
|
| 446 |
-
|
| 447 |
Returns:
|
| 448 |
Statistics dictionary
|
| 449 |
"""
|
| 450 |
relevant_overrides = self.overrides
|
| 451 |
if judge_id:
|
| 452 |
relevant_overrides = [o for o in self.overrides if o.judge_id == judge_id]
|
| 453 |
-
|
| 454 |
if not relevant_overrides:
|
| 455 |
return {
|
| 456 |
"total_overrides": 0,
|
| 457 |
"by_type": {},
|
| 458 |
"avg_per_day": 0
|
| 459 |
}
|
| 460 |
-
|
| 461 |
override_counts = {}
|
| 462 |
for override in relevant_overrides:
|
| 463 |
override_type = override.override_type.value
|
| 464 |
override_counts[override_type] = override_counts.get(override_type, 0) + 1
|
| 465 |
-
|
| 466 |
# Calculate acceptance rate from drafts
|
| 467 |
relevant_drafts = self.drafts
|
| 468 |
if judge_id:
|
| 469 |
relevant_drafts = [d for d in self.drafts if d.judge_id == judge_id]
|
| 470 |
-
|
| 471 |
acceptance_rates = [d.get_acceptance_rate() for d in relevant_drafts if d.status == "APPROVED"]
|
| 472 |
avg_acceptance = sum(acceptance_rates) / len(acceptance_rates) if acceptance_rates else 0
|
| 473 |
-
|
| 474 |
return {
|
| 475 |
"total_overrides": len(relevant_overrides),
|
| 476 |
"by_type": override_counts,
|
|
@@ -479,10 +476,10 @@ class OverrideManager:
|
|
| 479 |
"avg_acceptance_rate": avg_acceptance,
|
| 480 |
"modification_rate": 100 - avg_acceptance if avg_acceptance else 0
|
| 481 |
}
|
| 482 |
-
|
| 483 |
def export_audit_trail(self, output_file: str):
|
| 484 |
"""Export complete audit trail to file.
|
| 485 |
-
|
| 486 |
Args:
|
| 487 |
output_file: Path to output file
|
| 488 |
"""
|
|
@@ -501,6 +498,6 @@ class OverrideManager:
|
|
| 501 |
],
|
| 502 |
"statistics": self.get_override_statistics()
|
| 503 |
}
|
| 504 |
-
|
| 505 |
with open(output_file, 'w') as f:
|
| 506 |
json.dump(audit_data, f, indent=2)
|
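As a quick sanity check of the capacity rules above, here is a standalone mirror of `OverrideValidator.validate_capacity_override` as it appears in the diff (copied logic only, not an import of the project module):

```python
def validate_capacity_override(current_capacity: int, new_capacity: int,
                               max_capacity: int = 200) -> tuple[bool, str]:
    # Same checks as the validator in the diff: negative, over-max, and zero
    # capacities are all rejected with an explanatory message.
    if new_capacity < 0:
        return False, "Capacity cannot be negative"
    if new_capacity > max_capacity:
        return False, f"Capacity cannot exceed maximum ({max_capacity})"
    if new_capacity == 0:
        return False, "Capacity cannot be zero (use blocked dates for full closures)"
    return True, ""

print(validate_capacity_override(50, 250))  # (False, 'Capacity cannot exceed maximum (200)')
print(validate_capacity_override(50, 80))   # (True, '')
```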
 Allows judges to review, modify, and approve algorithmic scheduling suggestions.
 System is suggestive, not prescriptive - judges retain final control.
 """
+import json
 from dataclasses import dataclass, field
 from datetime import date, datetime
 from enum import Enum
 from typing import Optional


 class OverrideType(Enum):
...
     reason: str = ""
     date_affected: Optional[date] = None
     courtroom_id: Optional[int] = None
+
     # Algorithm-specific attributes
     make_ripe: Optional[bool] = None  # For RIPENESS overrides
+    new_position: Optional[int] = None  # For REORDER/ADD_CASE overrides
     new_priority: Optional[float] = None  # For PRIORITY overrides
     new_capacity: Optional[int] = None  # For CAPACITY overrides
+
     def to_dict(self) -> dict:
         """Convert to dictionary for logging."""
         return {
...
             "new_priority": self.new_priority,
             "new_capacity": self.new_capacity
         }
+
     def to_readable_text(self) -> str:
         """Human-readable description of override."""
         action_desc = {
             OverrideType.RIPENESS: f"Changed ripeness from {self.old_value} to {self.new_value}",
             OverrideType.PRIORITY: f"Adjusted priority from {self.old_value} to {self.new_value}",
+            OverrideType.ADD_CASE: "Manually added case to cause list",
+            OverrideType.REMOVE_CASE: "Removed case from cause list",
             OverrideType.REORDER: f"Reordered from position {self.old_value} to {self.new_value}",
             OverrideType.CAPACITY: f"Changed capacity from {self.old_value} to {self.new_value}",
             OverrideType.MIN_GAP: f"Overrode min gap from {self.old_value} to {self.new_value} days",
             OverrideType.COURTROOM: f"Changed courtroom from {self.old_value} to {self.new_value}"
         }
+
         action = action_desc.get(self.override_type, f"Override: {self.override_type.value}")
+
         parts = [
             f"[{self.timestamp.strftime('%Y-%m-%d %H:%M')}]",
             f"Judge {self.judge_id}:",
             action,
             f"(Case {self.case_id})"
         ]
+
         if self.reason:
             parts.append(f"Reason: {self.reason}")
+
         return " ".join(parts)


...
     min_gap_overrides: dict[str, int] = field(default_factory=dict)  # Per-case gap overrides
     case_type_preferences: dict[str, list[str]] = field(default_factory=dict)  # Day-of-week preferences
     capacity_overrides: dict[int, int] = field(default_factory=dict)  # Per-courtroom capacity overrides
+
     def to_dict(self) -> dict:
         """Convert to dictionary."""
         return {
...
     created_at: datetime
     finalized_at: Optional[datetime] = None
     status: str = "DRAFT"  # DRAFT, APPROVED, REJECTED
+
     def get_acceptance_rate(self) -> float:
         """Calculate what % of suggestions were accepted."""
         if not self.algorithm_suggested:
             return 0.0
+
         accepted = len(set(self.algorithm_suggested) & set(self.judge_approved))
         return accepted / len(self.algorithm_suggested) * 100
+
     def get_modifications_summary(self) -> dict:
         """Summarize modifications made."""
         added = set(self.judge_approved) - set(self.algorithm_suggested)
         removed = set(self.algorithm_suggested) - set(self.judge_approved)
+
         override_counts = {}
         for override in self.overrides:
             override_type = override.override_type.value
             override_counts[override_type] = override_counts.get(override_type, 0) + 1
+
         return {
             "cases_added": len(added),
             "cases_removed": len(removed),
...

 class OverrideValidator:
     """Validates override requests against constraints."""
+
     def __init__(self):
         self.errors: list[str] = []
+
     def validate(self, override: Override) -> bool:
         """Validate an override against all applicable constraints.
+
         Args:
             override: Override to validate
+
         Returns:
             True if valid, False otherwise
         """
         self.errors.clear()
+
         if override.override_type == OverrideType.RIPENESS:
             valid, error = self.validate_ripeness_override(
                 override.case_id,
                 override.new_value or "",
                 override.reason
             )
             if not valid:
                 self.errors.append(error)
                 return False
+
         elif override.override_type == OverrideType.CAPACITY:
             if override.new_capacity is not None:
                 valid, error = self.validate_capacity_override(
...
                 if not valid:
                     self.errors.append(error)
                     return False
+
         elif override.override_type == OverrideType.PRIORITY:
             if override.new_priority is not None:
                 if not (0 <= override.new_priority <= 1.0):
                     self.errors.append("Priority must be between 0 and 1.0")
                     return False
+
         # Basic validation
         if not override.case_id:
             self.errors.append("Case ID is required")
             return False
+
         if not override.judge_id:
             self.errors.append("Judge ID is required")
             return False
+
         return True
+
     def get_errors(self) -> list[str]:
         """Get validation errors from last validation."""
         return self.errors.copy()
+
     @staticmethod
     def validate_ripeness_override(
         case_id: str,
         new_status: str,
         reason: str
     ) -> tuple[bool, str]:
         """Validate ripeness override.
+
         Args:
             case_id: Case ID
             new_status: Requested new status
             reason: Reason for override
+
         Returns:
             (valid, error_message)
         """
         valid_statuses = ["RIPE", "UNRIPE_SUMMONS", "UNRIPE_DEPENDENT", "UNRIPE_PARTY", "UNRIPE_DOCUMENT"]
+
         if new_status not in valid_statuses:
             return False, f"Invalid ripeness status: {new_status}"
+
         if not reason:
             return False, "Reason required for ripeness override"
+
         if len(reason) < 10:
             return False, "Reason must be at least 10 characters"
+
         return True, ""
+
     @staticmethod
     def validate_capacity_override(
         current_capacity: int,
...
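The draft acceptance metric used by `get_override_statistics` comes from `CauseListDraft.get_acceptance_rate`; here is a standalone mirror of that calculation (copied from the diff, not an import of the project module): the percentage of algorithm-suggested case IDs that survive into the judge-approved list.

```python
def acceptance_rate(algorithm_suggested: list[str], judge_approved: list[str]) -> float:
    # Mirrors CauseListDraft.get_acceptance_rate: intersection of suggested
    # and approved IDs, as a percentage of the suggestions.
    if not algorithm_suggested:
        return 0.0
    accepted = len(set(algorithm_suggested) & set(judge_approved))
    return accepted / len(algorithm_suggested) * 100

# Judge kept C1 and C2, dropped C3 and C4, and added C5 manually.
print(acceptance_rate(["C1", "C2", "C3", "C4"], ["C1", "C2", "C5"]))  # 50.0
```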
scheduler/core/algorithm.py
CHANGED

@@ -14,25 +14,25 @@ from dataclasses import dataclass, field
 from datetime import date
 from typing import Dict, List, Optional, Tuple

-from scheduler.core.case import Case, CaseStatus
-from scheduler.core.courtroom import Courtroom
-from scheduler.core.ripeness import RipenessClassifier, RipenessStatus
-from scheduler.core.policy import SchedulerPolicy
-from scheduler.simulation.allocator import CourtroomAllocator, AllocationStrategy
 from scheduler.control.explainability import ExplainabilityEngine, SchedulingExplanation
 from scheduler.control.overrides import (
+    JudgePreferences,
     Override,
     OverrideType,
-    JudgePreferences,
     OverrideValidator,
 )
+from scheduler.core.case import Case, CaseStatus
+from scheduler.core.courtroom import Courtroom
+from scheduler.core.policy import SchedulerPolicy
+from scheduler.core.ripeness import RipenessClassifier, RipenessStatus
 from scheduler.data.config import MIN_GAP_BETWEEN_HEARINGS
+from scheduler.simulation.allocator import CourtroomAllocator


 @dataclass
 class SchedulingResult:
     """Result of single-day scheduling with full transparency.
-
+
     Attributes:
         scheduled_cases: Mapping of courtroom_id to list of scheduled cases
         explanations: Decision explanations for each case (scheduled + sample unscheduled)

@@ -45,7 +45,7 @@ class SchedulingResult:
        policy_used: Name of scheduling policy used (FIFO, Age, Readiness)
        total_scheduled: Total number of cases scheduled (calculated)
    """
-
+
    # Core output
    scheduled_cases: Dict[int, List[Case]]

@@ -58,12 +58,12 @@ class SchedulingResult:
    unscheduled_cases: List[Tuple[Case, str]]
    ripeness_filtered: int
    capacity_limited: int
-
+
    # Metadata
    scheduling_date: date
    policy_used: str
    total_scheduled: int = field(init=False)
-
+
    def __post_init__(self):
        """Calculate derived fields."""
        self.total_scheduled = sum(len(cases) for cases in self.scheduled_cases.values())

@@ -71,14 +71,14 @@

class SchedulingAlgorithm:
    """Core scheduling algorithm with override support.
-
+
    This is the main product - a clean, reusable scheduling algorithm that:
    1. Filters cases by ripeness and eligibility
    2. Applies judge preferences and manual overrides
    3. Prioritizes cases using selected policy
    4. Allocates cases to courtrooms with load balancing
    5. Generates explanations for all decisions
-
+
    Usage:
        algorithm = SchedulingAlgorithm(policy=readiness_policy, allocator=allocator)
        result = algorithm.schedule_day(

@@ -89,7 +89,7 @@ class SchedulingAlgorithm:
            preferences=judge_prefs
        )
    """
-
+
    def __init__(
        self,
        policy: SchedulerPolicy,

@@ -97,7 +97,7 @@ class SchedulingAlgorithm:
        min_gap_days: int = MIN_GAP_BETWEEN_HEARINGS
    ):
        """Initialize algorithm with policy and allocator.
-
+
        Args:
            policy: Scheduling policy (FIFO, Age, Readiness)
            allocator: Courtroom allocator (defaults to load-balanced)

@@ -107,7 +107,7 @@ class SchedulingAlgorithm:
        self.allocator = allocator
        self.min_gap_days = min_gap_days
        self.explainer = ExplainabilityEngine()
-
+
    def schedule_day(
        self,
        cases: List[Case],

@@ -118,7 +118,7 @@ class SchedulingAlgorithm:
        max_explanations_unscheduled: int = 100
    ) -> SchedulingResult:
        """Schedule cases for a single day with override support.
-
+
        Args:
            cases: All active cases (will be filtered)
            courtrooms: Available courtrooms

@@ -126,7 +126,7 @@ class SchedulingAlgorithm:
            overrides: Optional manual overrides to apply
            preferences: Optional judge preferences/constraints
            max_explanations_unscheduled: Max unscheduled cases to generate explanations for
-
+
        Returns:
            SchedulingResult with scheduled cases, explanations, and audit trail
        """

@@ -161,43 +161,43 @@ class SchedulingAlgorithm:

        # Filter disposed cases
        active_cases = [c for c in cases if c.status != CaseStatus.DISPOSED]
-
+
        # Update age and readiness for all cases
        for case in active_cases:
            case.update_age(current_date)
            case.compute_readiness_score()
-
+
        # CHECKPOINT 1: Ripeness filtering with override support
        ripe_cases, ripeness_filtered = self._filter_by_ripeness(
            active_cases, current_date, validated_overrides, applied_overrides
        )
-
+
        # CHECKPOINT 2: Eligibility check (min gap requirement)
        eligible_cases = self._filter_eligible(ripe_cases, current_date, unscheduled)
-
+
        # CHECKPOINT 3: Apply judge preferences (capacity overrides tracked)
        if preferences:
            applied_overrides.extend(self._get_preference_overrides(preferences, courtrooms))
-
+
        # CHECKPOINT 4: Prioritize using policy
        prioritized = self.policy.prioritize(eligible_cases, current_date)
-
+
        # CHECKPOINT 5: Apply manual overrides (add/remove/reorder/priority)
        if validated_overrides:
            prioritized = self._apply_manual_overrides(
                prioritized, validated_overrides, applied_overrides, unscheduled, active_cases
            )
-
+
        # CHECKPOINT 6: Allocate to courtrooms
        scheduled_allocation, capacity_limited = self._allocate_cases(
            prioritized, courtrooms, current_date, preferences
        )
-
+
        # Track capacity-limited cases
        total_scheduled = sum(len(cases) for cases in scheduled_allocation.values())
        for case in prioritized[total_scheduled:]:
            unscheduled.append((case, "Capacity exceeded - all courtrooms full"))
-
+
        # CHECKPOINT 7: Generate explanations for scheduled cases
        for courtroom_id, cases_in_room in scheduled_allocation.items():
            for case in cases_in_room:

@@ -210,7 +210,7 @@ class SchedulingAlgorithm:
                    courtroom_id=courtroom_id
                )
                explanations[case.case_id] = explanation
-
+
        # Generate explanations for sample of unscheduled cases
        for case, reason in unscheduled[:max_explanations_unscheduled]:
            if case is not None:  # Skip invalid override entries

@@ -237,7 +237,7 @@ class SchedulingAlgorithm:
            scheduling_date=current_date,
            policy_used=self.policy.get_name()
        )
-
+
    def _filter_by_ripeness(
        self,
        cases: List[Case],

@@ -252,10 +252,10 @@ class SchedulingAlgorithm:
        for override in overrides:
            if override.override_type == OverrideType.RIPENESS:
                ripeness_overrides[override.case_id] = override.make_ripe
-
+
        ripe_cases = []
        filtered_count = 0
-
+
        for case in cases:
            # Check for ripeness override
            if case.case_id in ripeness_overrides:

@@ -269,24 +269,24 @@ class SchedulingAlgorithm:
                case.mark_unripe(RipenessStatus.UNRIPE_DEPENDENT, "Judge override", current_date)
                filtered_count += 1
                continue
-
+
            # Normal ripeness classification
            ripeness = RipenessClassifier.classify(case, current_date)
-
+
            if ripeness.value != case.ripeness_status:
                if ripeness.is_ripe():
                    case.mark_ripe(current_date)
                else:
                    reason = RipenessClassifier.get_ripeness_reason(ripeness)
                    case.mark_unripe(ripeness, reason, current_date)
-
+
            if ripeness.is_ripe():
                ripe_cases.append(case)
            else:
                filtered_count += 1
-
+
        return ripe_cases, filtered_count
-
+
    def _filter_eligible(
        self,
        cases: List[Case],

@@ -302,7 +302,7 @@ class SchedulingAlgorithm:
                reason = f"Min gap not met - last hearing {case.days_since_last_hearing}d ago (min {self.min_gap_days}d)"
                unscheduled.append((case, reason))
        return eligible
-
+
    def _get_preference_overrides(
        self,
        preferences: JudgePreferences,

@@ -310,7 +310,7 @@ class SchedulingAlgorithm:
    ) -> List[Override]:
        """Extract overrides from judge preferences for audit trail."""
        overrides = []
-
+
        if preferences.capacity_overrides:
            from datetime import datetime
            for courtroom_id, new_capacity in preferences.capacity_overrides.items():

@@ -325,9 +325,9 @@ class SchedulingAlgorithm:
                    reason="Judge preference"
                )
                overrides.append(override)
-
+
        return overrides
-
+
    def _apply_manual_overrides(
        self,
        prioritized: List[Case],

@@ -338,7 +338,7 @@ class SchedulingAlgorithm:
    ) -> List[Case]:
        """Apply manual overrides (ADD_CASE, REMOVE_CASE, PRIORITY, REORDER)."""
        result = prioritized.copy()
-
+
        # Apply ADD_CASE overrides (insert at high priority)
        add_overrides = [o for o in overrides if o.override_type == OverrideType.ADD_CASE]
        for override in add_overrides:

@@ -349,7 +349,7 @@ class SchedulingAlgorithm:
            insert_pos = override.new_position if override.new_position is not None else 0
            result.insert(min(insert_pos, len(result)), case_to_add)
            applied_overrides.append(override)
-
+
        # Apply REMOVE_CASE overrides
        remove_overrides = [o for o in overrides if o.override_type == OverrideType.REMOVE_CASE]
        for override in remove_overrides:

@@ -358,23 +358,23 @@ class SchedulingAlgorithm:
            if removed:
                applied_overrides.append(override)
                unscheduled.append((removed[0], f"Judge override: {override.reason}"))
-
+
        # Apply PRIORITY overrides (adjust priority scores)
        priority_overrides = [o for o in overrides if o.override_type == OverrideType.PRIORITY]
        for override in priority_overrides:
            case_to_adjust = next((c for c in result if c.case_id == override.case_id), None)
            if case_to_adjust and override.new_priority is not None:
                # Store original priority for reference
-
+
                # Temporarily adjust case to force re-sorting
                # Note: This is a simplification - in production might need case.set_priority_override()
                case_to_adjust._priority_override = override.new_priority
                applied_overrides.append(override)
-
+
        # Re-sort if priority overrides were applied
        if priority_overrides:
            result.sort(key=lambda c: getattr(c, '_priority_override', c.get_priority_score()), reverse=True)
-
+
        # Apply REORDER overrides (explicit positioning)
        reorder_overrides = [o for o in overrides if o.override_type == OverrideType.REORDER]
        for override in reorder_overrides:

@@ -384,9 +384,9 @@ class SchedulingAlgorithm:
            result.remove(case_to_move)
            result.insert(override.new_position, case_to_move)
            applied_overrides.append(override)
-
+
        return result
-
+
    def _allocate_cases(
        self,
        prioritized: List[Case],

@@ -402,11 +402,11 @@ class SchedulingAlgorithm:
                total_capacity += preferences.capacity_overrides[room.courtroom_id]
            else:
                total_capacity += room.get_capacity_for_date(current_date)
-
+
        # Limit cases to total capacity
        cases_to_allocate = prioritized[:total_capacity]
        capacity_limited = len(prioritized) - len(cases_to_allocate)
-
+
        # Use allocator to distribute
        if self.allocator:
            case_to_courtroom = self.allocator.allocate(cases_to_allocate, current_date)

@@ -416,7 +416,7 @@ class SchedulingAlgorithm:
            for i, case in enumerate(cases_to_allocate):
                room_id = courtrooms[i % len(courtrooms)].courtroom_id
                case_to_courtroom[case.case_id] = room_id
-
+
        # Build allocation dict
        allocation: Dict[int, List[Case]] = {r.courtroom_id: [] for r in courtrooms}
        for case in cases_to_allocate:

@@ -429,7 +429,6 @@ class SchedulingAlgorithm:
    @staticmethod
    def _clear_temporary_case_flags(cases: List[Case]) -> None:
        """Remove temporary scheduling flags to keep case objects clean between runs."""
-
        for case in cases:
            if hasattr(case, "_priority_override"):
                delattr(case, "_priority_override")
|
|
|
| 14 |
from datetime import date
|
| 15 |
from typing import Dict, List, Optional, Tuple
|
| 16 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 17 |
from scheduler.control.explainability import ExplainabilityEngine, SchedulingExplanation
|
| 18 |
from scheduler.control.overrides import (
|
| 19 |
+
JudgePreferences,
|
| 20 |
Override,
|
| 21 |
OverrideType,
|
|
|
|
| 22 |
OverrideValidator,
|
| 23 |
)
|
| 24 |
+
from scheduler.core.case import Case, CaseStatus
|
| 25 |
+
from scheduler.core.courtroom import Courtroom
|
| 26 |
+
from scheduler.core.policy import SchedulerPolicy
|
| 27 |
+
from scheduler.core.ripeness import RipenessClassifier, RipenessStatus
|
| 28 |
from scheduler.data.config import MIN_GAP_BETWEEN_HEARINGS
|
| 29 |
+
from scheduler.simulation.allocator import CourtroomAllocator
|
| 30 |
|
| 31 |
|
| 32 |
@dataclass
|
| 33 |
class SchedulingResult:
|
| 34 |
"""Result of single-day scheduling with full transparency.
|
| 35 |
+
|
| 36 |
Attributes:
|
| 37 |
scheduled_cases: Mapping of courtroom_id to list of scheduled cases
|
| 38 |
explanations: Decision explanations for each case (scheduled + sample unscheduled)
|
|
|
|
| 45 |
policy_used: Name of scheduling policy used (FIFO, Age, Readiness)
|
| 46 |
total_scheduled: Total number of cases scheduled (calculated)
|
| 47 |
"""
|
| 48 |
+
|
| 49 |
# Core output
|
| 50 |
scheduled_cases: Dict[int, List[Case]]
|
| 51 |
|
|
|
|
| 58 |
unscheduled_cases: List[Tuple[Case, str]]
|
| 59 |
ripeness_filtered: int
|
| 60 |
capacity_limited: int
|
| 61 |
+
|
| 62 |
# Metadata
|
| 63 |
scheduling_date: date
|
| 64 |
policy_used: str
|
| 65 |
total_scheduled: int = field(init=False)
|
| 66 |
+
|
| 67 |
def __post_init__(self):
|
| 68 |
"""Calculate derived fields."""
|
| 69 |
self.total_scheduled = sum(len(cases) for cases in self.scheduled_cases.values())
|
|
|
|
| 71 |
|
| 72 |
class SchedulingAlgorithm:
|
| 73 |
"""Core scheduling algorithm with override support.
|
| 74 |
+
|
| 75 |
This is the main product - a clean, reusable scheduling algorithm that:
|
| 76 |
1. Filters cases by ripeness and eligibility
|
| 77 |
2. Applies judge preferences and manual overrides
|
| 78 |
3. Prioritizes cases using selected policy
|
| 79 |
4. Allocates cases to courtrooms with load balancing
|
| 80 |
5. Generates explanations for all decisions
|
| 81 |
+
|
| 82 |
Usage:
|
| 83 |
algorithm = SchedulingAlgorithm(policy=readiness_policy, allocator=allocator)
|
| 84 |
result = algorithm.schedule_day(
|
|
|
|
| 89 |
preferences=judge_prefs
|
| 90 |
)
|
| 91 |
"""
|
| 92 |
+
|
| 93 |
def __init__(
|
| 94 |
self,
|
| 95 |
policy: SchedulerPolicy,
|
|
|
|
| 97 |
min_gap_days: int = MIN_GAP_BETWEEN_HEARINGS
|
| 98 |
):
|
| 99 |
"""Initialize algorithm with policy and allocator.
|
| 100 |
+
|
| 101 |
Args:
|
| 102 |
policy: Scheduling policy (FIFO, Age, Readiness)
|
| 103 |
allocator: Courtroom allocator (defaults to load-balanced)
|
|
|
|
| 107 |
self.allocator = allocator
|
| 108 |
self.min_gap_days = min_gap_days
|
| 109 |
self.explainer = ExplainabilityEngine()
|
| 110 |
+
|
| 111 |
def schedule_day(
|
| 112 |
self,
|
| 113 |
cases: List[Case],
|
|
|
|
| 118 |
max_explanations_unscheduled: int = 100
|
| 119 |
) -> SchedulingResult:
|
| 120 |
"""Schedule cases for a single day with override support.
|
| 121 |
+
|
| 122 |
Args:
|
| 123 |
cases: All active cases (will be filtered)
|
| 124 |
courtrooms: Available courtrooms
|
|
|
|
| 126 |
overrides: Optional manual overrides to apply
|
| 127 |
preferences: Optional judge preferences/constraints
|
| 128 |
max_explanations_unscheduled: Max unscheduled cases to generate explanations for
|
| 129 |
+
|
| 130 |
Returns:
|
| 131 |
SchedulingResult with scheduled cases, explanations, and audit trail
|
| 132 |
"""
|
|
|
|
| 161 |
|
| 162 |
# Filter disposed cases
|
| 163 |
active_cases = [c for c in cases if c.status != CaseStatus.DISPOSED]
|
| 164 |
+
|
| 165 |
# Update age and readiness for all cases
|
| 166 |
for case in active_cases:
|
| 167 |
case.update_age(current_date)
|
| 168 |
case.compute_readiness_score()
|
| 169 |
+
|
| 170 |
# CHECKPOINT 1: Ripeness filtering with override support
|
| 171 |
ripe_cases, ripeness_filtered = self._filter_by_ripeness(
|
| 172 |
active_cases, current_date, validated_overrides, applied_overrides
|
| 173 |
)
|
| 174 |
+
|
| 175 |
# CHECKPOINT 2: Eligibility check (min gap requirement)
|
| 176 |
eligible_cases = self._filter_eligible(ripe_cases, current_date, unscheduled)
|
| 177 |
+
|
| 178 |
# CHECKPOINT 3: Apply judge preferences (capacity overrides tracked)
|
| 179 |
if preferences:
|
| 180 |
applied_overrides.extend(self._get_preference_overrides(preferences, courtrooms))
|
| 181 |
+
|
| 182 |
# CHECKPOINT 4: Prioritize using policy
|
| 183 |
prioritized = self.policy.prioritize(eligible_cases, current_date)
|
| 184 |
+
|
| 185 |
# CHECKPOINT 5: Apply manual overrides (add/remove/reorder/priority)
|
| 186 |
if validated_overrides:
|
| 187 |
prioritized = self._apply_manual_overrides(
|
| 188 |
prioritized, validated_overrides, applied_overrides, unscheduled, active_cases
|
| 189 |
)
|
| 190 |
+
|
| 191 |
# CHECKPOINT 6: Allocate to courtrooms
|
| 192 |
scheduled_allocation, capacity_limited = self._allocate_cases(
|
| 193 |
prioritized, courtrooms, current_date, preferences
|
| 194 |
)
|
| 195 |
+
|
| 196 |
# Track capacity-limited cases
|
| 197 |
total_scheduled = sum(len(cases) for cases in scheduled_allocation.values())
|
| 198 |
for case in prioritized[total_scheduled:]:
|
| 199 |
unscheduled.append((case, "Capacity exceeded - all courtrooms full"))
|
| 200 |
+
|
| 201 |
# CHECKPOINT 7: Generate explanations for scheduled cases
|
| 202 |
for courtroom_id, cases_in_room in scheduled_allocation.items():
|
| 203 |
for case in cases_in_room:
|
|
|
|
| 210 |
courtroom_id=courtroom_id
|
| 211 |
)
|
| 212 |
explanations[case.case_id] = explanation
|
| 213 |
+
|
| 214 |
# Generate explanations for sample of unscheduled cases
|
| 215 |
for case, reason in unscheduled[:max_explanations_unscheduled]:
|
| 216 |
if case is not None: # Skip invalid override entries
|
|
|
|
| 237 |
scheduling_date=current_date,
|
| 238 |
policy_used=self.policy.get_name()
|
| 239 |
)
|
| 240 |
+
|
| 241 |
def _filter_by_ripeness(
|
| 242 |
self,
|
| 243 |
cases: List[Case],
|
|
|
|
| 252 |
for override in overrides:
|
| 253 |
if override.override_type == OverrideType.RIPENESS:
|
| 254 |
ripeness_overrides[override.case_id] = override.make_ripe
|
| 255 |
+
|
| 256 |
ripe_cases = []
|
| 257 |
filtered_count = 0
|
| 258 |
+
|
| 259 |
for case in cases:
|
| 260 |
# Check for ripeness override
|
| 261 |
if case.case_id in ripeness_overrides:
|
|
|
|
| 269 |
case.mark_unripe(RipenessStatus.UNRIPE_DEPENDENT, "Judge override", current_date)
|
| 270 |
filtered_count += 1
|
| 271 |
continue
|
| 272 |
+
|
| 273 |
# Normal ripeness classification
|
| 274 |
ripeness = RipenessClassifier.classify(case, current_date)
|
| 275 |
+
|
| 276 |
if ripeness.value != case.ripeness_status:
|
| 277 |
if ripeness.is_ripe():
|
| 278 |
case.mark_ripe(current_date)
|
| 279 |
else:
|
| 280 |
reason = RipenessClassifier.get_ripeness_reason(ripeness)
|
| 281 |
case.mark_unripe(ripeness, reason, current_date)
|
| 282 |
+
|
| 283 |
if ripeness.is_ripe():
|
| 284 |
ripe_cases.append(case)
|
| 285 |
else:
|
| 286 |
filtered_count += 1
|
| 287 |
+
|
| 288 |
return ripe_cases, filtered_count
|
| 289 |
+
|
| 290 |
def _filter_eligible(
|
| 291 |
self,
|
| 292 |
cases: List[Case],
|
|
|
|
| 302 |
reason = f"Min gap not met - last hearing {case.days_since_last_hearing}d ago (min {self.min_gap_days}d)"
|
| 303 |
unscheduled.append((case, reason))
|
| 304 |
return eligible
|
| 305 |
+
|
| 306 |
def _get_preference_overrides(
|
| 307 |
self,
|
| 308 |
preferences: JudgePreferences,
|
|
|
|
| 310 |
) -> List[Override]:
|
| 311 |
"""Extract overrides from judge preferences for audit trail."""
|
| 312 |
overrides = []
|
| 313 |
+
|
| 314 |
if preferences.capacity_overrides:
|
| 315 |
from datetime import datetime
|
| 316 |
for courtroom_id, new_capacity in preferences.capacity_overrides.items():
|
|
|
|
| 325 |
reason="Judge preference"
|
| 326 |
)
|
| 327 |
overrides.append(override)
|
| 328 |
+
|
| 329 |
return overrides
|
| 330 |
+
|
| 331 |
def _apply_manual_overrides(
|
| 332 |
self,
|
| 333 |
prioritized: List[Case],
|
|
|
|
| 338 |
) -> List[Case]:
|
| 339 |
"""Apply manual overrides (ADD_CASE, REMOVE_CASE, PRIORITY, REORDER)."""
|
| 340 |
result = prioritized.copy()
|
| 341 |
+
|
| 342 |
# Apply ADD_CASE overrides (insert at high priority)
|
| 343 |
add_overrides = [o for o in overrides if o.override_type == OverrideType.ADD_CASE]
|
| 344 |
for override in add_overrides:
|
|
|
|
| 349 |
insert_pos = override.new_position if override.new_position is not None else 0
|
| 350 |
result.insert(min(insert_pos, len(result)), case_to_add)
|
| 351 |
applied_overrides.append(override)
|
| 352 |
+
|
| 353 |
# Apply REMOVE_CASE overrides
|
| 354 |
remove_overrides = [o for o in overrides if o.override_type == OverrideType.REMOVE_CASE]
|
| 355 |
for override in remove_overrides:
|
|
|
|
| 358 |
if removed:
|
| 359 |
applied_overrides.append(override)
|
| 360 |
unscheduled.append((removed[0], f"Judge override: {override.reason}"))
|
| 361 |
+
|
| 362 |
# Apply PRIORITY overrides (adjust priority scores)
|
| 363 |
priority_overrides = [o for o in overrides if o.override_type == OverrideType.PRIORITY]
|
| 364 |
for override in priority_overrides:
|
| 365 |
case_to_adjust = next((c for c in result if c.case_id == override.case_id), None)
|
| 366 |
if case_to_adjust and override.new_priority is not None:
|
| 367 |
# Store original priority for reference
|
| 368 |
+
case_to_adjust.get_priority_score()
|
| 369 |
# Temporarily adjust case to force re-sorting
|
| 370 |
# Note: This is a simplification - in production might need case.set_priority_override()
|
| 371 |
case_to_adjust._priority_override = override.new_priority
|
| 372 |
applied_overrides.append(override)
|
| 373 |
+
|
| 374 |
# Re-sort if priority overrides were applied
|
| 375 |
if priority_overrides:
|
| 376 |
result.sort(key=lambda c: getattr(c, '_priority_override', c.get_priority_score()), reverse=True)
|
| 377 |
+
|
| 378 |
# Apply REORDER overrides (explicit positioning)
|
| 379 |
reorder_overrides = [o for o in overrides if o.override_type == OverrideType.REORDER]
|
| 380 |
for override in reorder_overrides:
|
|
|
|
| 384 |
result.remove(case_to_move)
|
| 385 |
result.insert(override.new_position, case_to_move)
|
| 386 |
applied_overrides.append(override)
|
| 387 |
+
|
| 388 |
return result
|
| 389 |
+
|
| 390 |
def _allocate_cases(
|
| 391 |
self,
|
| 392 |
prioritized: List[Case],
|
|
|
|
| 402 |
total_capacity += preferences.capacity_overrides[room.courtroom_id]
|
| 403 |
else:
|
| 404 |
total_capacity += room.get_capacity_for_date(current_date)
|
| 405 |
+
|
| 406 |
# Limit cases to total capacity
|
| 407 |
cases_to_allocate = prioritized[:total_capacity]
|
| 408 |
capacity_limited = len(prioritized) - len(cases_to_allocate)
|
| 409 |
+
|
| 410 |
# Use allocator to distribute
|
| 411 |
if self.allocator:
|
| 412 |
case_to_courtroom = self.allocator.allocate(cases_to_allocate, current_date)
|
|
|
|
| 416 |
for i, case in enumerate(cases_to_allocate):
|
| 417 |
room_id = courtrooms[i % len(courtrooms)].courtroom_id
|
| 418 |
case_to_courtroom[case.case_id] = room_id
|
| 419 |
+
|
| 420 |
# Build allocation dict
|
| 421 |
allocation: Dict[int, List[Case]] = {r.courtroom_id: [] for r in courtrooms}
|
| 422 |
for case in cases_to_allocate:
|
|
|
|
| 429 |
@staticmethod
|
| 430 |
def _clear_temporary_case_flags(cases: List[Case]) -> None:
|
| 431 |
"""Remove temporary scheduling flags to keep case objects clean between runs."""
|
|
|
|
| 432 |
for case in cases:
|
| 433 |
if hasattr(case, "_priority_override"):
|
| 434 |
delattr(case, "_priority_override")
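The PRIORITY-override mechanics used in `_apply_manual_overrides` and `_clear_temporary_case_flags` can be sketched standalone: stash a temporary attribute on the case, sort with `getattr()` falling back to the computed score, then strip the flag. A minimal sketch with a hypothetical `FakeCase` standing in for the real `Case` class:

```python
from dataclasses import dataclass


@dataclass
class FakeCase:
    """Stand-in for the real Case; only what the sort key needs."""
    case_id: int
    base_score: float

    def get_priority_score(self) -> float:
        return self.base_score


cases = [FakeCase(1, 0.9), FakeCase(2, 0.5), FakeCase(3, 0.7)]

# Judge bumps case 2 above everything else with an override score of 1.0
cases[1]._priority_override = 1.0

# Same sort key as the algorithm: override wins when present, else computed score
cases.sort(key=lambda c: getattr(c, "_priority_override", c.get_priority_score()),
           reverse=True)
order = [c.case_id for c in cases]  # → [2, 1, 3]

# Cleanup mirrors _clear_temporary_case_flags(), so flags never leak between runs
for c in cases:
    if hasattr(c, "_priority_override"):
        delattr(c, "_priority_override")
```

The `getattr` fallback is what makes the override non-destructive: cases without a flag keep their normal ranking, and deleting the attribute restores the original behavior on the next run.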
|
scheduler/core/case.py
CHANGED
|
@@ -8,8 +8,8 @@ from __future__ import annotations
|
|
| 8 |
|
| 9 |
from dataclasses import dataclass, field
|
| 10 |
from datetime import date, datetime
|
| 11 |
-
from typing import List, Optional, TYPE_CHECKING
|
| 12 |
from enum import Enum
|
|
|
|
| 13 |
|
| 14 |
from scheduler.data.config import TERMINAL_STAGES
|
| 15 |
|
|
@@ -26,12 +26,12 @@ class CaseStatus(Enum):
|
|
| 26 |
ACTIVE = "active" # Has had at least one hearing
|
| 27 |
ADJOURNED = "adjourned" # Last hearing was adjourned
|
| 28 |
DISPOSED = "disposed" # Final disposal/settlement reached
|
| 29 |
-
|
| 30 |
|
| 31 |
@dataclass
|
| 32 |
class Case:
|
| 33 |
"""Represents a single court case.
|
| 34 |
-
|
| 35 |
Attributes:
|
| 36 |
case_id: Unique identifier (like CNR number)
|
| 37 |
case_type: Type of case (RSA, CRP, RFA, CA, CCC, CP, CMP)
|
|
@@ -64,20 +64,20 @@ class Case:
|
|
| 64 |
stage_start_date: Optional[date] = None
|
| 65 |
days_in_stage: int = 0
|
| 66 |
history: List[dict] = field(default_factory=list)
|
| 67 |
-
|
| 68 |
# Ripeness tracking (NEW - for bottleneck detection)
|
| 69 |
ripeness_status: str = "UNKNOWN" # RipenessStatus enum value (stored as string to avoid circular import)
|
| 70 |
bottleneck_reason: Optional[str] = None
|
| 71 |
ripeness_updated_at: Optional[datetime] = None
|
| 72 |
last_hearing_purpose: Optional[str] = None # Purpose of last hearing (for classification)
|
| 73 |
-
|
| 74 |
# No-case-left-behind tracking (NEW)
|
| 75 |
last_scheduled_date: Optional[date] = None
|
| 76 |
days_since_last_scheduled: int = 0
|
| 77 |
-
|
| 78 |
def progress_to_stage(self, new_stage: str, current_date: date) -> None:
|
| 79 |
"""Progress case to a new stage.
|
| 80 |
-
|
| 81 |
Args:
|
| 82 |
new_stage: The stage to progress to
|
| 83 |
current_date: Current simulation date
|
|
@@ -85,22 +85,22 @@ class Case:
|
|
| 85 |
self.current_stage = new_stage
|
| 86 |
self.stage_start_date = current_date
|
| 87 |
self.days_in_stage = 0
|
| 88 |
-
|
| 89 |
# Check if terminal stage (case disposed)
|
| 90 |
if new_stage in TERMINAL_STAGES:
|
| 91 |
self.status = CaseStatus.DISPOSED
|
| 92 |
self.disposal_date = current_date
|
| 93 |
-
|
| 94 |
# Record in history
|
| 95 |
self.history.append({
|
| 96 |
"date": current_date,
|
| 97 |
"event": "stage_change",
|
| 98 |
"stage": new_stage,
|
| 99 |
})
|
| 100 |
-
|
| 101 |
def record_hearing(self, hearing_date: date, was_heard: bool, outcome: str = "") -> None:
|
| 102 |
"""Record a hearing event.
|
| 103 |
-
|
| 104 |
Args:
|
| 105 |
hearing_date: Date of the hearing
|
| 106 |
was_heard: Whether the hearing actually proceeded (not adjourned)
|
|
@@ -108,12 +108,12 @@ class Case:
|
|
| 108 |
"""
|
| 109 |
self.hearing_count += 1
|
| 110 |
self.last_hearing_date = hearing_date
|
| 111 |
-
|
| 112 |
if was_heard:
|
| 113 |
self.status = CaseStatus.ACTIVE
|
| 114 |
else:
|
| 115 |
self.status = CaseStatus.ADJOURNED
|
| 116 |
-
|
| 117 |
# Record in history
|
| 118 |
self.history.append({
|
| 119 |
"date": hearing_date,
|
|
@@ -122,114 +122,114 @@ class Case:
|
|
| 122 |
"outcome": outcome,
|
| 123 |
"stage": self.current_stage,
|
| 124 |
})
|
| 125 |
-
|
| 126 |
def update_age(self, current_date: date) -> None:
|
| 127 |
"""Update age and days since last hearing.
|
| 128 |
-
|
| 129 |
Args:
|
| 130 |
current_date: Current simulation date
|
| 131 |
"""
|
| 132 |
self.age_days = (current_date - self.filed_date).days
|
| 133 |
-
|
| 134 |
if self.last_hearing_date:
|
| 135 |
self.days_since_last_hearing = (current_date - self.last_hearing_date).days
|
| 136 |
else:
|
| 137 |
self.days_since_last_hearing = self.age_days
|
| 138 |
-
|
| 139 |
if self.stage_start_date:
|
| 140 |
self.days_in_stage = (current_date - self.stage_start_date).days
|
| 141 |
else:
|
| 142 |
self.days_in_stage = self.age_days
|
| 143 |
-
|
| 144 |
# Update days since last scheduled (for no-case-left-behind tracking)
|
| 145 |
if self.last_scheduled_date:
|
| 146 |
self.days_since_last_scheduled = (current_date - self.last_scheduled_date).days
|
| 147 |
else:
|
| 148 |
self.days_since_last_scheduled = self.age_days
|
| 149 |
-
|
| 150 |
def compute_readiness_score(self) -> float:
|
| 151 |
"""Compute readiness score based on hearings, gaps, and stage.
|
| 152 |
-
|
| 153 |
Formula (from EDA):
|
| 154 |
READINESS = (hearings_capped/50) * 0.4 +
|
| 155 |
(100/gap_clamped) * 0.3 +
|
| 156 |
(stage_advanced) * 0.3
|
| 157 |
-
|
| 158 |
Returns:
|
| 159 |
Readiness score (0-1, higher = more ready)
|
| 160 |
"""
|
| 161 |
# Cap hearings at 50
|
| 162 |
hearings_capped = min(self.hearing_count, 50)
|
| 163 |
hearings_component = (hearings_capped / 50) * 0.4
|
| 164 |
-
|
| 165 |
# Gap component (inverse of days since last hearing)
|
| 166 |
gap_clamped = min(max(self.days_since_last_hearing, 1), 100)
|
| 167 |
gap_component = (100 / gap_clamped) * 0.3
|
| 168 |
-
|
| 169 |
# Stage component (advanced stages get higher score)
|
| 170 |
advanced_stages = ["ARGUMENTS", "EVIDENCE", "ORDERS / JUDGMENT"]
|
| 171 |
stage_component = 0.3 if self.current_stage in advanced_stages else 0.1
|
| 172 |
-
|
| 173 |
readiness = hearings_component + gap_component + stage_component
|
| 174 |
self.readiness_score = min(1.0, max(0.0, readiness))
|
| 175 |
-
|
| 176 |
return self.readiness_score
|
| 177 |
-
|
| 178 |
def is_ready_for_scheduling(self, min_gap_days: int = 7) -> bool:
|
| 179 |
"""Check if case is ready to be scheduled.
|
| 180 |
-
|
| 181 |
Args:
|
| 182 |
min_gap_days: Minimum days required since last hearing
|
| 183 |
-
|
| 184 |
Returns:
|
| 185 |
True if case can be scheduled
|
| 186 |
"""
|
| 187 |
if self.status == CaseStatus.DISPOSED:
|
| 188 |
return False
|
| 189 |
-
|
| 190 |
if self.last_hearing_date is None:
|
| 191 |
return True # First hearing, always ready
|
| 192 |
-
|
| 193 |
return self.days_since_last_hearing >= min_gap_days
|
| 194 |
-
|
| 195 |
def needs_alert(self, max_gap_days: int = 90) -> bool:
|
| 196 |
"""Check if case needs alert due to long gap.
|
| 197 |
-
|
| 198 |
Args:
|
| 199 |
max_gap_days: Maximum allowed gap before alert
|
| 200 |
-
|
| 201 |
Returns:
|
| 202 |
True if alert should be triggered
|
| 203 |
"""
|
| 204 |
if self.status == CaseStatus.DISPOSED:
|
| 205 |
return False
|
| 206 |
-
|
| 207 |
return self.days_since_last_hearing > max_gap_days
|
| 208 |
-
|
| 209 |
def get_priority_score(self) -> float:
|
| 210 |
"""Get overall priority score for scheduling.
|
| 211 |
-
|
| 212 |
Combines age, readiness, urgency, and adjournment boost into single score.
|
| 213 |
-
|
| 214 |
Formula:
|
| 215 |
priority = age*0.35 + readiness*0.25 + urgency*0.25 + adjournment_boost*0.15
|
| 216 |
-
|
| 217 |
Adjournment boost: Recently adjourned cases get priority to avoid indefinite postponement.
|
| 218 |
The boost decays exponentially: strongest immediately after adjournment, weaker over time.
|
| 219 |
-
|
| 220 |
Returns:
|
| 221 |
Priority score (higher = higher priority)
|
| 222 |
"""
|
| 223 |
# Age component (normalize to 0-1, assuming max age ~2000 days)
|
| 224 |
age_component = min(self.age_days / 2000, 1.0) * 0.35
|
| 225 |
-
|
| 226 |
# Readiness component
|
| 227 |
readiness_component = self.readiness_score * 0.25
|
| 228 |
-
|
| 229 |
# Urgency component
|
| 230 |
urgency_component = 1.0 if self.is_urgent else 0.0
|
| 231 |
urgency_component *= 0.25
|
| 232 |
-
|
| 233 |
# Adjournment boost (NEW - prevents cases from being repeatedly postponed)
|
| 234 |
adjournment_boost = 0.0
|
| 235 |
if self.status == CaseStatus.ADJOURNED and self.hearing_count > 0:
|
|
@@ -243,12 +243,12 @@ class Case:
|
|
| 243 |
decay_factor = 21 # Half-life of boost
|
| 244 |
adjournment_boost = math.exp(-self.days_since_last_hearing / decay_factor)
|
| 245 |
adjournment_boost *= 0.15
|
| 246 |
-
|
| 247 |
return age_component + readiness_component + urgency_component + adjournment_boost
|
| 248 |
-
|
| 249 |
def mark_unripe(self, status, reason: str, current_date: datetime) -> None:
|
| 250 |
"""Mark case as unripe with bottleneck reason.
|
| 251 |
-
|
| 252 |
Args:
|
| 253 |
status: Ripeness status (UNRIPE_SUMMONS, UNRIPE_PARTY, etc.) - RipenessStatus enum
|
| 254 |
reason: Human-readable reason for unripeness
|
|
@@ -258,7 +258,7 @@ class Case:
|
|
| 258 |
self.ripeness_status = status.value if hasattr(status, 'value') else str(status)
|
| 259 |
self.bottleneck_reason = reason
|
| 260 |
self.ripeness_updated_at = current_date
|
| 261 |
-
|
| 262 |
# Record in history
|
| 263 |
self.history.append({
|
| 264 |
"date": current_date,
|
|
@@ -266,17 +266,17 @@ class Case:
|
|
| 266 |
"status": self.ripeness_status,
|
| 267 |
"reason": reason,
|
| 268 |
})
|
| 269 |
-
|
| 270 |
def mark_ripe(self, current_date: datetime) -> None:
|
| 271 |
"""Mark case as ripe (ready for hearing).
|
| 272 |
-
|
| 273 |
Args:
|
| 274 |
current_date: Current simulation date
|
| 275 |
"""
|
| 276 |
self.ripeness_status = "RIPE"
|
| 277 |
self.bottleneck_reason = None
|
| 278 |
self.ripeness_updated_at = current_date
|
| 279 |
-
|
| 280 |
# Record in history
|
| 281 |
self.history.append({
|
| 282 |
"date": current_date,
|
|
@@ -284,28 +284,28 @@ class Case:
|
|
| 284 |
"status": "RIPE",
|
| 285 |
"reason": "Case became ripe",
|
| 286 |
})
|
| 287 |
-
|
| 288 |
def mark_scheduled(self, scheduled_date: date) -> None:
|
| 289 |
"""Mark case as scheduled for a hearing.
|
| 290 |
-
|
| 291 |
Used for no-case-left-behind tracking.
|
| 292 |
-
|
| 293 |
Args:
|
| 294 |
scheduled_date: Date case was scheduled
|
| 295 |
"""
|
| 296 |
self.last_scheduled_date = scheduled_date
|
| 297 |
self.days_since_last_scheduled = 0
|
| 298 |
-
|
| 299 |
@property
|
| 300 |
def is_disposed(self) -> bool:
|
| 301 |
"""Check if case is disposed."""
|
| 302 |
return self.status == CaseStatus.DISPOSED
|
| 303 |
-
|
| 304 |
def __repr__(self) -> str:
|
| 305 |
return (f"Case(id={self.case_id}, type={self.case_type}, "
|
| 306 |
f"stage={self.current_stage}, status={self.status.value}, "
|
| 307 |
f"hearings={self.hearing_count})")
|
| 308 |
-
|
| 309 |
def to_dict(self) -> dict:
|
| 310 |
"""Convert case to dictionary for serialization."""
|
| 311 |
return {
|
|
|
|
| 8 |
|
| 9 |
from dataclasses import dataclass, field
|
| 10 |
from datetime import date, datetime
|
|
|
|
| 11 |
from enum import Enum
|
| 12 |
+
from typing import TYPE_CHECKING, List, Optional
|
| 13 |
|
| 14 |
from scheduler.data.config import TERMINAL_STAGES
|
| 15 |
|
|
|
|
| 26 |
ACTIVE = "active" # Has had at least one hearing
|
| 27 |
ADJOURNED = "adjourned" # Last hearing was adjourned
|
| 28 |
DISPOSED = "disposed" # Final disposal/settlement reached
|
| 29 |
+
|
| 30 |
|
| 31 |
@dataclass
|
| 32 |
class Case:
|
| 33 |
"""Represents a single court case.
|
| 34 |
+
|
| 35 |
Attributes:
|
| 36 |
case_id: Unique identifier (like CNR number)
|
| 37 |
case_type: Type of case (RSA, CRP, RFA, CA, CCC, CP, CMP)
|
|
|
|
| 64 |
stage_start_date: Optional[date] = None
|
| 65 |
days_in_stage: int = 0
|
| 66 |
history: List[dict] = field(default_factory=list)
|
| 67 |
+
|
| 68 |
# Ripeness tracking (NEW - for bottleneck detection)
|
| 69 |
ripeness_status: str = "UNKNOWN" # RipenessStatus enum value (stored as string to avoid circular import)
|
| 70 |
bottleneck_reason: Optional[str] = None
|
| 71 |
ripeness_updated_at: Optional[datetime] = None
|
| 72 |
last_hearing_purpose: Optional[str] = None # Purpose of last hearing (for classification)
|
| 73 |
+
|
| 74 |
# No-case-left-behind tracking (NEW)
|
| 75 |
last_scheduled_date: Optional[date] = None
|
| 76 |
days_since_last_scheduled: int = 0
|
| 77 |
+
|
| 78 |
def progress_to_stage(self, new_stage: str, current_date: date) -> None:
|
| 79 |
"""Progress case to a new stage.
|
| 80 |
+
|
| 81 |
Args:
|
| 82 |
new_stage: The stage to progress to
|
| 83 |
current_date: Current simulation date
|
|
|
|
| 85 |
self.current_stage = new_stage
|
| 86 |
self.stage_start_date = current_date
|
| 87 |
self.days_in_stage = 0
|
| 88 |
+
|
| 89 |
# Check if terminal stage (case disposed)
|
| 90 |
if new_stage in TERMINAL_STAGES:
|
| 91 |
self.status = CaseStatus.DISPOSED
|
| 92 |
self.disposal_date = current_date
|
| 93 |
+
|
| 94 |
# Record in history
|
| 95 |
self.history.append({
|
| 96 |
"date": current_date,
|
| 97 |
"event": "stage_change",
|
| 98 |
"stage": new_stage,
|
| 99 |
})
|
| 100 |
+
|
| 101 |
def record_hearing(self, hearing_date: date, was_heard: bool, outcome: str = "") -> None:
|
| 102 |
"""Record a hearing event.
|
| 103 |
+
|
| 104 |
Args:
|
| 105 |
hearing_date: Date of the hearing
|
| 106 |
was_heard: Whether the hearing actually proceeded (not adjourned)
|
|
|
|
| 108 |
"""
|
| 109 |
self.hearing_count += 1
|
| 110 |
self.last_hearing_date = hearing_date
|
| 111 |
+
|
| 112 |
if was_heard:
|
| 113 |
self.status = CaseStatus.ACTIVE
|
| 114 |
else:
|
| 115 |
self.status = CaseStatus.ADJOURNED
|
| 116 |
+
|
| 117 |
# Record in history
|
| 118 |
self.history.append({
|
| 119 |
"date": hearing_date,
|
|
|
|
| 122 |
"outcome": outcome,
|
| 123 |
"stage": self.current_stage,
|
| 124 |
})
|
| 125 |
+
|
| 126 |
def update_age(self, current_date: date) -> None:
|
| 127 |
"""Update age and days since last hearing.
|
| 128 |
+
|
| 129 |
Args:
|
| 130 |
current_date: Current simulation date
|
| 131 |
"""
|
| 132 |
self.age_days = (current_date - self.filed_date).days
|
| 133 |
+
|
| 134 |
if self.last_hearing_date:
|
| 135 |
self.days_since_last_hearing = (current_date - self.last_hearing_date).days
|
| 136 |
else:
|
| 137 |
self.days_since_last_hearing = self.age_days
|
| 138 |
+
|
| 139 |
if self.stage_start_date:
|
| 140 |
self.days_in_stage = (current_date - self.stage_start_date).days
|
| 141 |
else:
|
| 142 |
self.days_in_stage = self.age_days
|
| 143 |
+
|
| 144 |
# Update days since last scheduled (for no-case-left-behind tracking)
|
| 145 |
if self.last_scheduled_date:
|
| 146 |
self.days_since_last_scheduled = (current_date - self.last_scheduled_date).days
|
| 147 |
else:
|
| 148 |
self.days_since_last_scheduled = self.age_days
|
| 149 |
+
|
| 150 |
def compute_readiness_score(self) -> float:
|
| 151 |
"""Compute readiness score based on hearings, gaps, and stage.
|
| 152 |
+
|
| 153 |
Formula (from EDA):
|
| 154 |
READINESS = (hearings_capped/50) * 0.4 +
|
| 155 |
(100/gap_clamped) * 0.3 +
|
| 156 |
(stage_advanced) * 0.3
|
| 157 |
+
|
| 158 |
Returns:
|
| 159 |
Readiness score (0-1, higher = more ready)
|
| 160 |
"""
|
| 161 |
# Cap hearings at 50
|
| 162 |
hearings_capped = min(self.hearing_count, 50)
|
| 163 |
hearings_component = (hearings_capped / 50) * 0.4
|
| 164 |
+
|
| 165 |
# Gap component (inverse of days since last hearing)
|
| 166 |
gap_clamped = min(max(self.days_since_last_hearing, 1), 100)
|
| 167 |
gap_component = (100 / gap_clamped) * 0.3
|
| 168 |
+
|
| 169 |
# Stage component (advanced stages get higher score)
|
| 170 |
advanced_stages = ["ARGUMENTS", "EVIDENCE", "ORDERS / JUDGMENT"]
|
| 171 |
stage_component = 0.3 if self.current_stage in advanced_stages else 0.1
|
| 172 |
+
|
| 173 |
readiness = hearings_component + gap_component + stage_component
|
| 174 |
self.readiness_score = min(1.0, max(0.0, readiness))
|
| 175 |
+
|
| 176 |
return self.readiness_score
|
| 177 |
+
|
| 178 |
def is_ready_for_scheduling(self, min_gap_days: int = 7) -> bool:
|
| 179 |
"""Check if case is ready to be scheduled.
|
| 180 |
+
|
| 181 |
Args:
|
| 182 |
min_gap_days: Minimum days required since last hearing
|
| 183 |
+
|
| 184 |
Returns:
|
| 185 |
True if case can be scheduled
|
| 186 |
"""
|
| 187 |
if self.status == CaseStatus.DISPOSED:
|
| 188 |
return False
|
| 189 |
+
|
| 190 |
if self.last_hearing_date is None:
|
| 191 |
return True # First hearing, always ready
|
| 192 |
+
|
| 193 |
return self.days_since_last_hearing >= min_gap_days
|
| 194 |
+
|
| 195 |
def needs_alert(self, max_gap_days: int = 90) -> bool:
|
| 196 |
"""Check if case needs alert due to long gap.
|
| 197 |
+
|
| 198 |
Args:
|
| 199 |
max_gap_days: Maximum allowed gap before alert
|
| 200 |
+
|
| 201 |
Returns:
|
| 202 |
True if alert should be triggered
|
| 203 |
"""
|
| 204 |
if self.status == CaseStatus.DISPOSED:
|
| 205 |
return False
|
| 206 |
+
|
| 207 |
return self.days_since_last_hearing > max_gap_days
|
| 208 |
+
|
| 209 |
def get_priority_score(self) -> float:
|
| 210 |
"""Get overall priority score for scheduling.
|
| 211 |
+
|
| 212 |
Combines age, readiness, urgency, and adjournment boost into single score.
|
| 213 |
+
|
| 214 |
Formula:
|
| 215 |
priority = age*0.35 + readiness*0.25 + urgency*0.25 + adjournment_boost*0.15
|
| 216 |
+
|
| 217 |
Adjournment boost: Recently adjourned cases get priority to avoid indefinite postponement.
|
| 218 |
The boost decays exponentially: strongest immediately after adjournment, weaker over time.
|
| 219 |
+
|
| 220 |
Returns:
|
| 221 |
Priority score (higher = higher priority)
|
| 222 |
"""
|
| 223 |
# Age component (normalize to 0-1, assuming max age ~2000 days)
|
| 224 |
age_component = min(self.age_days / 2000, 1.0) * 0.35
|
| 225 |
+
|
| 226 |
# Readiness component
|
| 227 |
readiness_component = self.readiness_score * 0.25
|
| 228 |
+
|
| 229 |
# Urgency component
|
| 230 |
urgency_component = 1.0 if self.is_urgent else 0.0
|
| 231 |
urgency_component *= 0.25
|
| 232 |
+
|
| 233 |
# Adjournment boost (NEW - prevents cases from being repeatedly postponed)
|
| 234 |
adjournment_boost = 0.0
|
| 235 |
if self.status == CaseStatus.ADJOURNED and self.hearing_count > 0:
|
|
|
|
| 243 |
decay_factor = 21 # Half-life of boost
|
| 244 |
adjournment_boost = math.exp(-self.days_since_last_hearing / decay_factor)
|
| 245 |
adjournment_boost *= 0.15
|
| 246 |
+
|
| 247 |
return age_component + readiness_component + urgency_component + adjournment_boost
|
| 248 |
+
|
| 249 |
    def mark_unripe(self, status, reason: str, current_date: datetime) -> None:
        """Mark case as unripe with bottleneck reason.

        Args:
            status: Ripeness status (UNRIPE_SUMMONS, UNRIPE_PARTY, etc.) - RipenessStatus enum
            reason: Human-readable reason for unripeness
            current_date: Current simulation date
        """
        self.ripeness_status = status.value if hasattr(status, 'value') else str(status)
        self.bottleneck_reason = reason
        self.ripeness_updated_at = current_date

        # Record in history
        self.history.append({
            "date": current_date,
            …
            "status": self.ripeness_status,
            "reason": reason,
        })

    def mark_ripe(self, current_date: datetime) -> None:
        """Mark case as ripe (ready for hearing).

        Args:
            current_date: Current simulation date
        """
        self.ripeness_status = "RIPE"
        self.bottleneck_reason = None
        self.ripeness_updated_at = current_date

        # Record in history
        self.history.append({
            "date": current_date,
            …
            "status": "RIPE",
            "reason": "Case became ripe",
        })

    def mark_scheduled(self, scheduled_date: date) -> None:
        """Mark case as scheduled for a hearing.

        Used for no-case-left-behind tracking.

        Args:
            scheduled_date: Date case was scheduled
        """
        self.last_scheduled_date = scheduled_date
        self.days_since_last_scheduled = 0

    @property
    def is_disposed(self) -> bool:
        """Check if case is disposed."""
        return self.status == CaseStatus.DISPOSED

    def __repr__(self) -> str:
        return (f"Case(id={self.case_id}, type={self.case_type}, "
                f"stage={self.current_stage}, status={self.status.value}, "
                f"hearings={self.hearing_count})")

    def to_dict(self) -> dict:
        """Convert case to dictionary for serialization."""
        return {
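The mark_unripe/mark_ripe transitions above can be exercised in isolation. The sketch below is a minimal, hypothetical stand-in (`MiniCase` is not the project's `Case` dataclass, which has many more fields); it only mirrors the status and history bookkeeping visible in this diff:

```python
from datetime import datetime

class MiniCase:
    """Stripped-down, hypothetical stand-in for Case's ripeness bookkeeping."""

    def __init__(self, case_id: str):
        self.case_id = case_id
        self.ripeness_status = "RIPE"
        self.bottleneck_reason = None
        self.ripeness_updated_at = None
        self.history = []

    def mark_unripe(self, status, reason: str, current_date: datetime) -> None:
        # Accepts either a RipenessStatus enum member or a plain string.
        self.ripeness_status = status.value if hasattr(status, "value") else str(status)
        self.bottleneck_reason = reason
        self.ripeness_updated_at = current_date
        self.history.append({"date": current_date,
                             "status": self.ripeness_status,
                             "reason": reason})

    def mark_ripe(self, current_date: datetime) -> None:
        self.ripeness_status = "RIPE"
        self.bottleneck_reason = None
        self.ripeness_updated_at = current_date
        self.history.append({"date": current_date,
                             "status": "RIPE",
                             "reason": "Case became ripe"})

case = MiniCase("RSA-101")
case.mark_unripe("UNRIPE_SUMMONS", "Awaiting summons service", datetime(2025, 1, 6))
case.mark_ripe(datetime(2025, 2, 1))
```

Because every transition appends to `history`, the full ripeness trail of a case can be audited after a simulation run.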
scheduler/core/courtroom.py
CHANGED

@@ -14,7 +14,7 @@ from scheduler.data.config import DEFAULT_DAILY_CAPACITY
@dataclass
class Courtroom:
    """Represents a courtroom resource.

    Attributes:
        courtroom_id: Unique identifier (0-4 for 5 courtrooms)
        judge_id: Currently assigned judge (optional)

@@ -31,134 +31,134 @@ class Courtroom:
    schedule: Dict[date, List[str]] = field(default_factory=dict)
    hearings_held: int = 0
    utilization_history: List[Dict] = field(default_factory=list)

    def assign_judge(self, judge_id: str) -> None:
        """Assign a judge to this courtroom.

        Args:
            judge_id: Judge identifier
        """
        self.judge_id = judge_id

    def add_case_types(self, *case_types: str) -> None:
        """Add case types that this courtroom handles.

        Args:
            *case_types: One or more case type strings (e.g., 'RSA', 'CRP')
        """
        self.case_types.update(case_types)

    def can_schedule(self, hearing_date: date, case_id: str) -> bool:
        """Check if a case can be scheduled on a given date.

        Args:
            hearing_date: Date to check
            case_id: Case identifier

        Returns:
            True if slot available, False if at capacity
        """
        if hearing_date not in self.schedule:
            return True  # No hearings scheduled yet

        # Check if already scheduled
        if case_id in self.schedule[hearing_date]:
            return False  # Already scheduled

        # Check capacity
        return len(self.schedule[hearing_date]) < self.daily_capacity

    def schedule_case(self, hearing_date: date, case_id: str) -> bool:
        """Schedule a case for a hearing.

        Args:
            hearing_date: Date of hearing
            case_id: Case identifier

        Returns:
            True if successfully scheduled, False if at capacity
        """
        if not self.can_schedule(hearing_date, case_id):
            return False

        if hearing_date not in self.schedule:
            self.schedule[hearing_date] = []

        self.schedule[hearing_date].append(case_id)
        return True

    def unschedule_case(self, hearing_date: date, case_id: str) -> bool:
        """Remove a case from schedule (e.g., if adjourned).

        Args:
            hearing_date: Date of hearing
            case_id: Case identifier

        Returns:
            True if successfully removed, False if not found
        """
        if hearing_date not in self.schedule:
            return False

        if case_id in self.schedule[hearing_date]:
            self.schedule[hearing_date].remove(case_id)
            return True

        return False

    def get_daily_schedule(self, hearing_date: date) -> List[str]:
        """Get list of cases scheduled for a specific date.

        Args:
            hearing_date: Date to query

        Returns:
            List of case_ids scheduled (empty if none)
        """
        return self.schedule.get(hearing_date, [])

    def get_capacity_for_date(self, hearing_date: date) -> int:
        """Get remaining capacity for a specific date.

        Args:
            hearing_date: Date to query

        Returns:
            Number of available slots
        """
        scheduled_count = len(self.get_daily_schedule(hearing_date))
        return self.daily_capacity - scheduled_count

    def record_hearing_completed(self, hearing_date: date) -> None:
        """Record that a hearing was held.

        Args:
            hearing_date: Date of hearing
        """
        self.hearings_held += 1

    def compute_utilization(self, hearing_date: date) -> float:
        """Compute utilization rate for a specific date.

        Args:
            hearing_date: Date to compute for

        Returns:
            Utilization rate (0.0 to 1.0)
        """
        scheduled_count = len(self.get_daily_schedule(hearing_date))
        return scheduled_count / self.daily_capacity if self.daily_capacity > 0 else 0.0

    def record_daily_utilization(self, hearing_date: date, actual_hearings: int) -> None:
        """Record actual utilization for a day.

        Args:
            hearing_date: Date of hearings
            actual_hearings: Number of hearings actually held (not adjourned)
        """
        scheduled = len(self.get_daily_schedule(hearing_date))
        utilization = actual_hearings / self.daily_capacity if self.daily_capacity > 0 else 0.0

        self.utilization_history.append({
            "date": hearing_date,
            "scheduled": scheduled,

@@ -166,55 +166,55 @@ class Courtroom:
            "capacity": self.daily_capacity,
            "utilization": utilization,
        })

    def get_average_utilization(self) -> float:
        """Calculate average utilization rate across all recorded days.

        Returns:
            Average utilization (0.0 to 1.0)
        """
        if not self.utilization_history:
            return 0.0

        total = sum(day["utilization"] for day in self.utilization_history)
        return total / len(self.utilization_history)

    def get_schedule_summary(self, start_date: date, end_date: date) -> Dict:
        """Get summary statistics for a date range.

        Args:
            start_date: Start of range
            end_date: End of range

        Returns:
            Dict with counts and utilization stats
        """
        days_in_range = [d for d in self.schedule.keys()
                         if start_date <= d <= end_date]

        total_scheduled = sum(len(self.schedule[d]) for d in days_in_range)
        days_with_hearings = len(days_in_range)

        return {
            "courtroom_id": self.courtroom_id,
            "days_with_hearings": days_with_hearings,
            "total_cases_scheduled": total_scheduled,
            "avg_cases_per_day": total_scheduled / days_with_hearings if days_with_hearings > 0 else 0,
            "total_capacity": days_with_hearings * self.daily_capacity,
            "utilization_rate": total_scheduled / (days_with_hearings * self.daily_capacity)
                                if days_with_hearings > 0 else 0,
        }

    def clear_schedule(self) -> None:
        """Clear all scheduled hearings (for testing/reset)."""
        self.schedule.clear()
        self.utilization_history.clear()
        self.hearings_held = 0

    def __repr__(self) -> str:
        return (f"Courtroom(id={self.courtroom_id}, judge={self.judge_id}, "
                f"capacity={self.daily_capacity}, types={self.case_types})")

    def to_dict(self) -> dict:
        """Convert courtroom to dictionary for serialization."""
        return {
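The capacity rule in can_schedule/schedule_case is the heart of this class and can be checked standalone. `MiniCourtroom` below is a hypothetical stand-in keeping only `daily_capacity`, `schedule`, and those two methods:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, List

@dataclass
class MiniCourtroom:
    """Hypothetical stand-in keeping only Courtroom's capacity logic."""
    daily_capacity: int = 2
    schedule: Dict[date, List[str]] = field(default_factory=dict)

    def can_schedule(self, hearing_date: date, case_id: str) -> bool:
        booked = self.schedule.get(hearing_date, [])
        if case_id in booked:
            return False  # already scheduled that day
        return len(booked) < self.daily_capacity

    def schedule_case(self, hearing_date: date, case_id: str) -> bool:
        if not self.can_schedule(hearing_date, case_id):
            return False
        self.schedule.setdefault(hearing_date, []).append(case_id)
        return True

room = MiniCourtroom(daily_capacity=2)
d = date(2025, 1, 6)
# Duplicate booking is rejected; the third distinct case exceeds capacity.
booked = [room.schedule_case(d, c) for c in ("C1", "C1", "C2", "C3")]
```

With a capacity of 2, the second attempt to book "C1" and the attempt to book "C3" both return False, so `booked` is `[True, False, True, False]`.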
scheduler/core/hearing.py
CHANGED

@@ -4,7 +4,7 @@ This module defines the Hearing class which represents a scheduled court hearing
with its outcome and associated metadata.
"""

from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

@@ -23,7 +23,7 @@ class HearingOutcome(Enum):
@dataclass
class Hearing:
    """Represents a scheduled court hearing event.

    Attributes:
        hearing_id: Unique identifier
        case_id: Associated case

@@ -46,78 +46,78 @@ class Hearing:
    actual_date: Optional[date] = None
    duration_minutes: int = 30
    notes: Optional[str] = None

    def mark_as_heard(self, actual_date: Optional[date] = None) -> None:
        """Mark hearing as successfully completed.

        Args:
            actual_date: Actual date if different from scheduled
        """
        self.outcome = HearingOutcome.HEARD
        self.actual_date = actual_date or self.scheduled_date

    def mark_as_adjourned(self, reason: str = "") -> None:
        """Mark hearing as adjourned.

        Args:
            reason: Reason for adjournment
        """
        self.outcome = HearingOutcome.ADJOURNED
        if reason:
            self.notes = reason

    def mark_as_disposed(self) -> None:
        """Mark hearing as final disposition."""
        self.outcome = HearingOutcome.DISPOSED
        self.actual_date = self.scheduled_date

    def mark_as_no_show(self, party: str = "") -> None:
        """Mark hearing as no-show.

        Args:
            party: Which party was absent
        """
        self.outcome = HearingOutcome.NO_SHOW
        if party:
            self.notes = f"No show: {party}"

    def reschedule(self, new_date: date) -> None:
        """Reschedule hearing to a new date.

        Args:
            new_date: New scheduled date
        """
        self.scheduled_date = new_date
        self.outcome = HearingOutcome.SCHEDULED

    def is_complete(self) -> bool:
        """Check if hearing has concluded.

        Returns:
            True if outcome is not SCHEDULED
        """
        return self.outcome != HearingOutcome.SCHEDULED

    def is_successful(self) -> bool:
        """Check if hearing was successfully held.

        Returns:
            True if outcome is HEARD or DISPOSED
        """
        return self.outcome in (HearingOutcome.HEARD, HearingOutcome.DISPOSED)

    def get_effective_date(self) -> date:
        """Get actual or scheduled date.

        Returns:
            actual_date if set, else scheduled_date
        """
        return self.actual_date or self.scheduled_date

    def __repr__(self) -> str:
        return (f"Hearing(id={self.hearing_id}, case={self.case_id}, "
                f"date={self.scheduled_date}, outcome={self.outcome.value})")

    def to_dict(self) -> dict:
        """Convert hearing to dictionary for serialization."""
        return {
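The outcome lifecycle can be demonstrated with a self-contained copy of a few of the methods above. Note the `HearingOutcome` values here are assumed to equal the member names; the real enum body is defined earlier in the file, outside this diff:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class HearingOutcome(Enum):
    # Values assumed; the actual enum is defined above this diff hunk.
    SCHEDULED = "SCHEDULED"
    HEARD = "HEARD"
    ADJOURNED = "ADJOURNED"
    DISPOSED = "DISPOSED"
    NO_SHOW = "NO_SHOW"

@dataclass
class MiniHearing:
    """Hypothetical stand-in with a subset of Hearing's lifecycle methods."""
    hearing_id: str
    scheduled_date: date
    outcome: HearingOutcome = HearingOutcome.SCHEDULED
    actual_date: Optional[date] = None

    def mark_as_heard(self, actual_date: Optional[date] = None) -> None:
        self.outcome = HearingOutcome.HEARD
        self.actual_date = actual_date or self.scheduled_date

    def reschedule(self, new_date: date) -> None:
        # Rebooking resets the outcome so the hearing counts as pending again.
        self.scheduled_date = new_date
        self.outcome = HearingOutcome.SCHEDULED

    def is_complete(self) -> bool:
        return self.outcome != HearingOutcome.SCHEDULED

    def is_successful(self) -> bool:
        return self.outcome in (HearingOutcome.HEARD, HearingOutcome.DISPOSED)

h = MiniHearing("H1", date(2025, 1, 6))
h.reschedule(date(2025, 1, 20))
h.mark_as_heard()
```

After `mark_as_heard()` with no argument, `actual_date` falls back to the (rescheduled) `scheduled_date`, and both `is_complete()` and `is_successful()` report True.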
scheduler/core/judge.py
CHANGED

@@ -12,7 +12,7 @@ from typing import Dict, List, Optional, Set
@dataclass
class Judge:
    """Represents a judge with workload tracking.

    Attributes:
        judge_id: Unique identifier
        name: Judge's name

@@ -29,37 +29,37 @@ class Judge:
    cases_heard: int = 0
    hearings_presided: int = 0
    workload_history: List[Dict] = field(default_factory=list)

    def assign_courtroom(self, courtroom_id: int) -> None:
        """Assign judge to a courtroom.

        Args:
            courtroom_id: Courtroom identifier
        """
        self.courtroom_id = courtroom_id

    def add_preferred_types(self, *case_types: str) -> None:
        """Add case types to judge's preferences.

        Args:
            *case_types: One or more case type strings
        """
        self.preferred_case_types.update(case_types)

    def record_hearing(self, hearing_date: date, case_id: str, case_type: str) -> None:
        """Record a hearing presided over.

        Args:
            hearing_date: Date of hearing
            case_id: Case identifier
            case_type: Type of case
        """
        self.hearings_presided += 1

    def record_daily_workload(self, hearing_date: date, cases_heard: int,
                              cases_adjourned: int) -> None:
        """Record workload for a specific day.

        Args:
            hearing_date: Date of hearings
            cases_heard: Number of cases actually heard

@@ -71,48 +71,48 @@ class Judge:
            "cases_adjourned": cases_adjourned,
            "total_scheduled": cases_heard + cases_adjourned,
        })

        self.cases_heard += cases_heard

    def get_average_daily_workload(self) -> float:
        """Calculate average cases heard per day.

        Returns:
            Average number of cases per day
        """
        if not self.workload_history:
            return 0.0

        total = sum(day["cases_heard"] for day in self.workload_history)
        return total / len(self.workload_history)

    def get_adjournment_rate(self) -> float:
        """Calculate judge's adjournment rate.

        Returns:
            Proportion of cases adjourned (0.0 to 1.0)
        """
        if not self.workload_history:
            return 0.0

        total_adjourned = sum(day["cases_adjourned"] for day in self.workload_history)
        total_scheduled = sum(day["total_scheduled"] for day in self.workload_history)

        return total_adjourned / total_scheduled if total_scheduled > 0 else 0.0

    def get_workload_summary(self, start_date: date, end_date: date) -> Dict:
        """Get workload summary for a date range.

        Args:
            start_date: Start of range
            end_date: End of range

        Returns:
            Dict with workload statistics
        """
        days_in_range = [day for day in self.workload_history
                         if start_date <= day["date"] <= end_date]

        if not days_in_range:
            return {
                "judge_id": self.judge_id,

@@ -121,11 +121,11 @@ class Judge:
                "avg_cases_per_day": 0.0,
                "adjournment_rate": 0.0,
            }

        total_heard = sum(day["cases_heard"] for day in days_in_range)
        total_adjourned = sum(day["cases_adjourned"] for day in days_in_range)
        total_scheduled = total_heard + total_adjourned

        return {
            "judge_id": self.judge_id,
            "days_worked": len(days_in_range),

@@ -134,25 +134,25 @@ class Judge:
            "avg_cases_per_day": total_heard / len(days_in_range),
            "adjournment_rate": total_adjourned / total_scheduled if total_scheduled > 0 else 0.0,
        }

    def is_specialized_in(self, case_type: str) -> bool:
        """Check if judge specializes in a case type.

        Args:
            case_type: Case type to check

        Returns:
            True if in preferred types or no preferences set
        """
        if not self.preferred_case_types:
            return True  # No preferences means handles all types

        return case_type in self.preferred_case_types

    def __repr__(self) -> str:
        return (f"Judge(id={self.judge_id}, courtroom={self.courtroom_id}, "
                f"hearings={self.hearings_presided})")

    def to_dict(self) -> dict:
        """Convert judge to dictionary for serialization."""
        return {
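The adjournment rate is a straight ratio over `workload_history` records. A quick standalone check (the record shape is copied from `record_daily_workload` above; the free function is a hypothetical mirror of `Judge.get_adjournment_rate`):

```python
def adjournment_rate(workload_history):
    """Mirror of Judge.get_adjournment_rate over plain dict records."""
    if not workload_history:
        return 0.0
    total_adjourned = sum(day["cases_adjourned"] for day in workload_history)
    total_scheduled = sum(day["total_scheduled"] for day in workload_history)
    return total_adjourned / total_scheduled if total_scheduled > 0 else 0.0

history = [
    {"cases_heard": 8, "cases_adjourned": 2, "total_scheduled": 10},
    {"cases_heard": 6, "cases_adjourned": 4, "total_scheduled": 10},
]
rate = adjournment_rate(history)  # 6 adjourned out of 20 scheduled
```

The guard on `total_scheduled` matters: a judge with an empty (or all-zero) history reports 0.0 rather than raising ZeroDivisionError.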
scheduler/core/policy.py
CHANGED

@@ -14,30 +14,30 @@ from scheduler.core.case import Case

class SchedulerPolicy(ABC):
    """Abstract base class for scheduling policies.

    All scheduling policies must implement the `prioritize` method which
    ranks cases for scheduling on a given day.
    """

    @abstractmethod
    def prioritize(self, cases: List[Case], current_date: date) -> List[Case]:
        """Prioritize cases for scheduling on the given date.

        Args:
            cases: List of eligible cases (already filtered for readiness, not disposed)
            current_date: Current simulation date

        Returns:
            Sorted list of cases in priority order (highest priority first)
        """
        pass

    @abstractmethod
    def get_name(self) -> str:
        """Get the policy name for logging/reporting."""
        pass

    @abstractmethod
    def requires_readiness_score(self) -> bool:
        """Return True if this policy requires readiness score computation."""
        pass
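A concrete policy only has to fill in the three abstract methods. The sketch below is hypothetical (neither `FifoPolicy` nor `MiniCase` exists in the codebase; `MiniCase` stands in for `scheduler.core.case.Case`) and orders cases oldest-filing-first:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class MiniCase:
    """Hypothetical stand-in for scheduler.core.case.Case."""
    case_id: str
    filing_date: date

class SchedulerPolicy(ABC):
    @abstractmethod
    def prioritize(self, cases: List[MiniCase], current_date: date) -> List[MiniCase]: ...
    @abstractmethod
    def get_name(self) -> str: ...
    @abstractmethod
    def requires_readiness_score(self) -> bool: ...

class FifoPolicy(SchedulerPolicy):
    """Hypothetical example policy: oldest filing date gets highest priority."""

    def prioritize(self, cases: List[MiniCase], current_date: date) -> List[MiniCase]:
        # Highest priority first, i.e. ascending filing date.
        return sorted(cases, key=lambda c: c.filing_date)

    def get_name(self) -> str:
        return "fifo"

    def requires_readiness_score(self) -> bool:
        return False  # pure FIFO needs no readiness computation

cases = [MiniCase("B", date(2024, 5, 1)), MiniCase("A", date(2023, 1, 10))]
ordered = FifoPolicy().prioritize(cases, date(2025, 1, 6))
```

Because `requires_readiness_score()` returns False, a simulation driver can skip the readiness-scoring pass entirely for this policy.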
|
|
|
| 14 |
|
| 15 |
class SchedulerPolicy(ABC):
|
| 16 |
"""Abstract base class for scheduling policies.
|
| 17 |
+
|
| 18 |
All scheduling policies must implement the `prioritize` method which
|
| 19 |
ranks cases for scheduling on a given day.
|
| 20 |
"""
|
| 21 |
+
|
| 22 |
@abstractmethod
|
| 23 |
def prioritize(self, cases: List[Case], current_date: date) -> List[Case]:
|
| 24 |
"""Prioritize cases for scheduling on the given date.
|
| 25 |
+
|
| 26 |
Args:
|
| 27 |
cases: List of eligible cases (already filtered for readiness, not disposed)
|
| 28 |
current_date: Current simulation date
|
| 29 |
+
|
| 30 |
Returns:
|
| 31 |
Sorted list of cases in priority order (highest priority first)
|
| 32 |
"""
|
| 33 |
pass
|
| 34 |
+
|
| 35 |
@abstractmethod
|
| 36 |
def get_name(self) -> str:
|
| 37 |
"""Get the policy name for logging/reporting."""
|
| 38 |
pass
|
| 39 |
+
|
| 40 |
@abstractmethod
|
| 41 |
def requires_readiness_score(self) -> bool:
|
| 42 |
"""Return True if this policy requires readiness score computation."""
|
| 43 |
+
pass
|
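For context, a concrete policy only needs to implement the three abstract methods above. The following is an illustrative sketch (not code from this repo) with a minimal stand-in `Case` holding just the fields the example uses:

```python
# Illustrative sketch: a FIFO policy implementing the SchedulerPolicy
# interface shown above. `Case` here is a minimal stand-in, not the
# real scheduler.core.case.Case.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import date
from typing import List


@dataclass
class Case:  # stand-in for scheduler.core.case.Case
    case_id: str
    filing_date: date


class SchedulerPolicy(ABC):
    @abstractmethod
    def prioritize(self, cases: List[Case], current_date: date) -> List[Case]: ...

    @abstractmethod
    def get_name(self) -> str: ...

    @abstractmethod
    def requires_readiness_score(self) -> bool: ...


class FifoPolicy(SchedulerPolicy):
    """Oldest-filed cases first; needs no readiness score."""

    def prioritize(self, cases: List[Case], current_date: date) -> List[Case]:
        # Highest priority first: the case that has waited longest.
        return sorted(cases, key=lambda c: c.filing_date)

    def get_name(self) -> str:
        return "fifo"

    def requires_readiness_score(self) -> bool:
        return False


cases = [Case("B", date(2021, 5, 1)), Case("A", date(2019, 1, 10))]
ordered = FifoPolicy().prioritize(cases, date(2024, 1, 1))
print([c.case_id for c in ordered])  # ['A', 'B']
```

Policies that do use readiness scores would return `True` from `requires_readiness_score()` so the simulator knows to compute scores before calling `prioritize`.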
scheduler/core/ripeness.py
CHANGED

@@ -7,9 +7,9 @@ Based on analysis of historical PurposeOfHearing patterns (see scripts/analyze_r…)
 """
 from __future__ import annotations

+from datetime import datetime, timedelta
 from enum import Enum
 from typing import TYPE_CHECKING
-from datetime import datetime, timedelta

 if TYPE_CHECKING:
     from scheduler.core.case import Case

@@ -17,7 +17,7 @@ if TYPE_CHECKING:

 class RipenessStatus(Enum):
     """Status indicating whether a case is ready for hearing."""
-
+
     RIPE = "RIPE"  # Ready for hearing
     UNRIPE_SUMMONS = "UNRIPE_SUMMONS"  # Waiting for summons service
     UNRIPE_DEPENDENT = "UNRIPE_DEPENDENT"  # Waiting for dependent case/order

@@ -54,7 +54,7 @@ RIPE_KEYWORDS = ["ARGUMENTS", "HEARING", "FINAL", "JUDGMENT", "ORDERS", "DISPOSA…]

 class RipenessClassifier:
     """Classify cases as RIPE or UNRIPE for scheduling optimization.
-
+
     Thresholds can be adjusted dynamically based on accuracy feedback.
     """

@@ -65,7 +65,7 @@ class RipenessClassifier:
         "ORDERS / JUDGMENT",
         "FINAL DISPOSAL"
     ]
-
+
     # Stages that indicate administrative/preliminary work
     UNRIPE_STAGES = [
         "PRE-ADMISSION",

@@ -83,7 +83,6 @@ class RipenessClassifier:
     @classmethod
     def _has_required_evidence(cls, case: Case) -> tuple[bool, dict[str, bool]]:
         """Check that minimum readiness evidence exists before declaring RIPE."""
-
         # Evidence of service/compliance: at least one hearing or explicit purpose text
         service_confirmed = case.hearing_count >= cls.MIN_SERVICE_HEARINGS or bool(
             getattr(case, "last_hearing_purpose", None)

@@ -109,7 +108,6 @@ class RipenessClassifier:
     @classmethod
     def _has_ripe_signal(cls, case: Case) -> bool:
         """Check if stage or hearing purpose indicates readiness."""
-
         if case.current_stage in cls.RIPE_STAGES:
             return True

@@ -118,15 +116,15 @@ class RipenessClassifier:
             return any(keyword in purpose_upper for keyword in RIPE_KEYWORDS)

         return False
-
+
     @classmethod
     def classify(cls, case: Case, current_date: datetime | None = None) -> RipenessStatus:
         """Classify case ripeness status with bottleneck type.
-
+
         Args:
             case: Case to classify
             current_date: Current simulation date (defaults to now)
-
+
         Returns:
             RipenessStatus enum indicating ripeness and bottleneck type

@@ -141,7 +139,7 @@ class RipenessClassifier:
         """
         if current_date is None:
            current_date = datetime.now()
-
+
        # 1. Check last hearing purpose for explicit bottleneck keywords
        if hasattr(case, "last_hearing_purpose") and case.last_hearing_purpose:
            purpose_upper = case.last_hearing_purpose.upper()

@@ -149,7 +147,7 @@ class RipenessClassifier:
            for keyword, bottleneck_type in UNRIPE_KEYWORDS.items():
                if keyword in purpose_upper:
                    return bottleneck_type
-
+
        # 2. Check stage - ADMISSION stage with few hearings is likely unripe
        if case.current_stage == "ADMISSION":
            # New cases in ADMISSION (< 3 hearings) are often unripe

@@ -177,55 +175,55 @@ class RipenessClassifier:

        # 6. Default to UNKNOWN if no bottlenecks but also no clear ripe signal
        return RipenessStatus.UNKNOWN
-
+
    @classmethod
    def get_ripeness_priority(cls, case: Case, current_date: datetime | None = None) -> float:
        """Get priority adjustment based on ripeness.
-
+
        Ripe cases should get judicial time priority over unripe cases
        when scheduling is tight.
-
+
        Returns:
            Priority multiplier (1.5 for RIPE, 0.7 for UNRIPE)
        """
        ripeness = cls.classify(case, current_date)
        return 1.5 if ripeness.is_ripe() else 0.7
-
+
    @classmethod
    def is_schedulable(cls, case: Case, current_date: datetime | None = None) -> bool:
        """Determine if a case can be scheduled for a hearing.
-
+
        A case is schedulable if:
        - It is RIPE (no bottlenecks)
        - It has been sufficient time since last hearing
        - It is not disposed
-
+
        Args:
            case: The case to check
            current_date: Current simulation date
-
+
        Returns:
            True if case can be scheduled, False otherwise
        """
        # Check disposal status
        if case.is_disposed:
            return False
-
+
        # Calculate current ripeness
        ripeness = cls.classify(case, current_date)
-
+
        # Only RIPE cases can be scheduled
        return ripeness.is_ripe()
-
+
    @classmethod
    def get_ripeness_reason(cls, ripeness_status: RipenessStatus) -> str:
        """Get human-readable explanation for ripeness status.
-
+
        Used in dashboard tooltips and reports.
-
+
        Args:
            ripeness_status: The status to explain
-
+
        Returns:
            Human-readable explanation string
        """

@@ -238,25 +236,25 @@ class RipenessClassifier:
            RipenessStatus.UNKNOWN: "Insufficient readiness evidence; route to manual triage",
        }
        return reasons.get(ripeness_status, "Unknown status")
-
+
    @classmethod
    def estimate_ripening_time(cls, case: Case, current_date: datetime) -> timedelta | None:
        """Estimate time until case becomes ripe.
-
+
        This is a heuristic based on bottleneck type and historical data.
-
+
        Args:
            case: The case to evaluate
            current_date: Current simulation date
-
+
        Returns:
            Estimated timedelta until ripe, or None if already ripe or unknown
        """
        ripeness = cls.classify(case, current_date)
-
+
        if ripeness.is_ripe():
            return timedelta(0)
-
+
        # Heuristic estimates based on bottleneck type
        estimates = {
            RipenessStatus.UNRIPE_SUMMONS: timedelta(days=30),

@@ -264,13 +262,13 @@ class RipenessClassifier:
            RipenessStatus.UNRIPE_PARTY: timedelta(days=14),
            RipenessStatus.UNRIPE_DOCUMENT: timedelta(days=21),
        }
-
+
        return estimates.get(ripeness, None)
-
+
    @classmethod
    def set_thresholds(cls, new_thresholds: dict[str, int | float]) -> None:
        """Update classification thresholds for calibration.
-
+
        Args:
            new_thresholds: Dictionary with threshold names and values
                e.g., {"MIN_SERVICE_HEARINGS": 2, "MIN_STAGE_DAYS": 5}

@@ -280,11 +278,11 @@ class RipenessClassifier:
                setattr(cls, threshold_name, int(value))
            else:
                raise ValueError(f"Unknown threshold: {threshold_name}")
-
+
    @classmethod
    def get_current_thresholds(cls) -> dict[str, int]:
        """Get current threshold values.
-
+
        Returns:
            Dictionary of threshold names and values
        """
scheduler/dashboard/app.py
CHANGED

@@ -16,28 +16,32 @@ from scheduler.dashboard.utils import get_data_status

 # Page configuration
 st.set_page_config(
     page_title="Court Scheduling System Dashboard",
-    page_icon="…
+    page_icon="scales",
     layout="wide",
     initial_sidebar_state="expanded",
 )

 # Main page content
-st.title("…
-st.markdown("**Karnataka High Court - …
+st.title("Court Scheduling System Dashboard")
+st.markdown("**Karnataka High Court - Algorithmic Decision Support for Fair Scheduling**")

 st.markdown("---")

 # Introduction
 st.markdown("""
-### …
-This …
-    [… remaining emoji-prefixed lines unrecoverable from the extraction …]
+### Overview
+
+This system provides data-driven scheduling recommendations while maintaining judicial control and autonomy.
+
+**Key Capabilities:**
+- Historical data analysis and pattern identification
+- Case ripeness classification (identifying bottlenecks)
+- Multi-courtroom scheduling simulation
+- Algorithmic suggestions with full explainability
+- Judge override and approval system
+- Reinforcement learning optimization
+
+Use the sidebar to navigate between sections.
 """)

 # System status
@@ -45,158 +49,146 @@ status_header_col1, status_header_col2 = st.columns([3, 1])

 with status_header_col1:
     st.markdown("### System Status")
 with status_header_col2:
-    if st.button("…
+    if st.button("Refresh Status", use_container_width=True):
         st.rerun()

 data_status = get_data_status()

-col1, col2, col3…
+col1, col2, col3 = st.columns(3)

 with col1:
-    status = "…
+    status = "Ready" if data_status["cleaned_data"] else "Missing"
     color = "green" if data_status["cleaned_data"] else "red"
     st.markdown(f":{color}[{status}] **Cleaned Data**")
+    if not data_status["cleaned_data"]:
+        st.caption("Run EDA pipeline to process raw data")

 with col2:
-    status = "…
+    status = "Ready" if data_status["parameters"] else "Missing"
     color = "green" if data_status["parameters"] else "red"
     st.markdown(f":{color}[{status}] **Parameters**")
+    if not data_status["parameters"]:
+        st.caption("Run EDA pipeline to extract parameters")

 with col3:
-    status = "…
-    color = "green" if data_status["generated_cases"] else "red"
-    st.markdown(f":{color}[{status}] **Test Cases**")
-
-with col4:
-    status = "✓" if data_status["eda_figures"] else "✗"
+    status = "Ready" if data_status["eda_figures"] else "Missing"
     color = "green" if data_status["eda_figures"] else "red"
-    st.markdown(f":{color}[{status}] **…
+    st.markdown(f":{color}[{status}] **Analysis Figures**")
+    if not data_status["eda_figures"]:
+        st.caption("Run EDA pipeline to generate visualizations")

 # Setup Controls
+eda_ready = data_status["cleaned_data"] and data_status["parameters"] and data_status["eda_figures"]
+
+if not eda_ready:
     st.markdown("---")
-    st.markdown("### Setup…
-    st.…
-    [… several removed lines unrecoverable from the extraction …]
+    st.markdown("### Initial Setup")
+    st.warning("Run the EDA pipeline to process historical data and extract parameters.")
+
+    col1, col2 = st.columns([2, 1])
+
+    with col1:
+        st.markdown("""
+        The EDA pipeline:
+        - Loads and cleans historical court case data
+        - Extracts statistical parameters (distributions, transition probabilities)
+        - Generates analysis visualizations
+
+        This is required before using other dashboard features.
+        """)
+
+    with col2:
+        if st.button("Run EDA Pipeline", type="primary", use_container_width=True):
+            import subprocess
+
+            with st.spinner("Running EDA pipeline... This may take a few minutes."):
+                try:
+                    result = subprocess.run(
+                        ["uv", "run", "court-scheduler", "eda"],
+                        capture_output=True,
+                        text=True,
+                        cwd=str(Path.cwd()),
+                    )
+
+                    if result.returncode == 0:
+                        st.success("EDA pipeline completed")
+                        st.rerun()
+                    else:
+                        st.error(f"Pipeline failed with error code {result.returncode}")
+                        with st.expander("Show error details"):
+                            st.code(result.stderr, language="text")
+                except Exception as e:
+                    st.error(f"Error running pipeline: {e}")
+
+    with st.expander("Run manually via CLI"):
+        st.code("uv run court-scheduler eda", language="bash")
-    if st.button("Generate Test Cases", use_container_width=True):
-        import subprocess
-
-        with st.spinner(f"Generating {n_cases} test cases..."):
-            try:
-                result = subprocess.run(
-                    ["uv", "run", "court-scheduler", "generate", "--cases", str(n_cases)],
-                    capture_output=True,
-                    text=True,
-                    cwd=str(Path.cwd()),
-                )
-
-                if result.returncode == 0:
-                    st.success(f"Generated {n_cases} test cases successfully!")
-                    st.rerun()
-                else:
-                    st.error(f"Generation failed with error code {result.returncode}")
-                    with st.expander("Show error details"):
-                        st.code(result.stderr, language="text")
-            except Exception as e:
-                st.error(f"Error generating test cases: {e}")
-    else:
-        st.success("Test cases already generated")
-
-    st.markdown("#### Manual Setup")
-    with st.expander("Run commands manually (if buttons don't work)"):
-        st.code("""
-# Run EDA pipeline
-uv run court-scheduler eda
-
-# Generate test cases (optional)
-uv run court-scheduler generate --cases 1000
-""", language="bash")
 else:
-    st.success("…
+    st.success("System ready - all data processed")

 st.markdown("---")

-# …
-st.markdown("### …
+# Navigation Guide
+st.markdown("### Dashboard Sections")
+
+col1, col2 = st.columns(2)
+
+with col1:
+    st.markdown("""
+    #### 1. Data & Insights
+    Explore historical case data, view analysis visualizations, and review extracted parameters.
+
+    #### 2. Ripeness Classifier
+    Test case ripeness classification with interactive threshold tuning and explainability.
+
+    #### 3. Simulation Workflow
+    Generate cases, configure simulation parameters, run scheduling simulations, and view results.
+    """)
+
+with col2:
     st.markdown("""
-    [… leading lines unrecoverable from the extraction …]
-    - View case-level explainability with detailed reasoning
-    - Run calibration analysis to optimize thresholds
-
-    **3. RL Training**
-    - Configure and train reinforcement learning agents
-    - Monitor training progress in real-time
-    - Compare different models and hyperparameters
-    - Visualize Q-table and action distributions
+    #### 4. Cause Lists & Overrides
+    View generated cause lists, make judge overrides, and track modification history.
+
+    #### 5. RL Training
+    Train reinforcement learning models for optimized scheduling policies.
+
+    #### 6. Analytics & Reports
+    Compare simulation runs, analyze performance metrics, and export comprehensive reports.
     """)

+st.markdown("---")
+
+# Typical Workflow
+with st.expander("Typical Usage Workflow"):
     st.markdown("""
-    [… removed lines unrecoverable from the extraction …]
+    **Step 1: Initial Setup**
+    - Run EDA pipeline to process historical data (one-time setup)
+
+    **Step 2: Understand the Data**
+    - Explore Data & Insights to understand case patterns
+    - Review extracted parameters and distributions
+
+    **Step 3: Test Ripeness Classifier**
+    - Adjust thresholds for your court's specific needs
+    - Test classification on sample cases
+
+    **Step 4: Run Simulation**
+    - Go to Simulation Workflow
+    - Generate or upload case dataset
+    - Configure simulation parameters
+    - Run simulation and review results
+
+    **Step 5: Review & Override**
+    - View generated cause lists in Cause Lists & Overrides
+    - Make judicial overrides as needed
+    - Approve final cause lists
+
+    **Step 6: Analyze Performance**
+    - Use Analytics & Reports to evaluate fairness and efficiency
+    - Compare different scheduling policies
+    - Identify bottlenecks and improvement opportunities
     """)

 # Footer
 st.markdown("---")
-st.…
-    <div style='text-align: center'>
-    <small>Court Scheduling System | Code4Change Hackathon | Karnataka High Court</small>
-    </div>
-""", unsafe_allow_html=True)
+st.caption("Court Scheduling System - Code4Change Hackathon - Karnataka High Court")
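The "Run EDA Pipeline" button above shells out to the project CLI with `subprocess.run`, captures output, and branches on the return code. The same pattern can be exercised in isolation; this sketch substitutes a portable `python -c` command for the real `uv run court-scheduler eda` call so it runs anywhere:

```python
# Sketch of the subprocess pattern used by the dashboard buttons:
# run a CLI command, capture its output, branch on the return code.
import subprocess
import sys

# Stand-in for ["uv", "run", "court-scheduler", "eda"].
result = subprocess.run(
    [sys.executable, "-c", "print('pipeline ok')"],
    capture_output=True,  # collect stdout/stderr instead of inheriting them
    text=True,            # decode bytes to str
)

if result.returncode == 0:
    print("success:", result.stdout.strip())
else:
    # stderr is what the dashboard shows in its error expander.
    print("failed with code", result.returncode, result.stderr)
```

Capturing `stderr` is what lets the dashboard surface the failing command's own error text inside `st.expander("Show error details")` rather than a generic failure message.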
scheduler/dashboard/pages/1_EDA_Analysis.py
DELETED

@@ -1,273 +0,0 @@
-"""EDA Analysis page - Explore court case data insights.
-
-This page displays exploratory data analysis visualizations and statistics
-from the court case dataset.
-"""
-
-from __future__ import annotations
-
-from pathlib import Path
-
-import pandas as pd
-import plotly.express as px
-import plotly.graph_objects as go
-import streamlit as st
-
-from scheduler.dashboard.utils import (
-    get_case_statistics,
-    load_cleaned_data,
-    load_param_loader,
-)
-
-# Page configuration
-st.set_page_config(
-    page_title="EDA Analysis",
-    page_icon="📊",
-    layout="wide",
-)
-
-st.title("📊 Exploratory Data Analysis")
-st.markdown("Statistical insights from court case data")
-
-# Load data
-with st.spinner("Loading data..."):
-    try:
-        df = load_cleaned_data()
-        params = load_param_loader()
-        stats = get_case_statistics(df)
-    except Exception as e:
-        st.error(f"Error loading data: {e}")
-        st.info("Please run the EDA pipeline first: `uv run court-scheduler eda`")
-        st.stop()
-
-if df.empty:
-    st.warning("No data available. Please run the EDA pipeline first.")
-    st.code("uv run court-scheduler eda")
-    st.stop()
-
-# Sidebar filters
-st.sidebar.header("Filters")
-
-# Case type filter
-available_case_types = df["CaseType"].unique().tolist() if "CaseType" in df else []
-selected_case_types = st.sidebar.multiselect(
-    "Case Types",
-    options=available_case_types,
-    default=available_case_types,
-)
-
-# Stage filter
-available_stages = df["Remappedstages"].unique().tolist() if "Remappedstages" in df else []
-selected_stages = st.sidebar.multiselect(
-    "Stages",
-    options=available_stages,
-    default=available_stages,
-)
-
-# Apply filters
-filtered_df = df.copy()
-if selected_case_types:
-    filtered_df = filtered_df[filtered_df["CaseType"].isin(selected_case_types)]
-if selected_stages:
-    filtered_df = filtered_df[filtered_df["Remappedstages"].isin(selected_stages)]
-
-# Key metrics
-st.markdown("### Key Metrics")
-
-col1, col2, col3, col4 = st.columns(4)
-
-with col1:
-    total_cases = len(filtered_df)
-    st.metric("Total Cases", f"{total_cases:,}")
-
-with col2:
-    n_case_types = len(filtered_df["CaseType"].unique()) if "CaseType" in filtered_df else 0
-    st.metric("Case Types", n_case_types)
-
-with col3:
-    n_stages = len(filtered_df["Remappedstages"].unique()) if "Remappedstages" in filtered_df else 0
-    st.metric("Unique Stages", n_stages)
-
-with col4:
-    if "Outcome" in filtered_df.columns:
-        adj_rate = (filtered_df["Outcome"] == "ADJOURNED").sum() / len(filtered_df)
-        st.metric("Adjournment Rate", f"{adj_rate:.1%}")
-    else:
-        st.metric("Adjournment Rate", "N/A")
-
-st.markdown("---")
-
-# Visualizations
-tab1, tab2, tab3, tab4 = st.tabs(["Case Distribution", "Stage Analysis", "Adjournment Patterns", "Raw Data"])
-
-with tab1:
-    st.markdown("### Case Distribution by Type")
-
-    if "CaseType" in filtered_df:
-        case_type_counts = filtered_df["CaseType"].value_counts().reset_index()
-        case_type_counts.columns = ["CaseType", "Count"]
-
-        fig = px.bar(
-            case_type_counts,
-            x="CaseType",
-            y="Count",
-            title="Number of Cases by Type",
-            labels={"CaseType": "Case Type", "Count": "Number of Cases"},
-            color="Count",
-            color_continuous_scale="Blues",
-        )
-        fig.update_layout(xaxis_tickangle=-45, height=500)
-        st.plotly_chart(fig, use_container_width=True)
-
-        # Pie chart
-        fig_pie = px.pie(
-            case_type_counts,
-            values="Count",
-            names="CaseType",
-            title="Case Type Distribution",
-        )
-        st.plotly_chart(fig_pie, use_container_width=True)
-    else:
-        st.info("CaseType column not found in data")
-
-with tab2:
-    st.markdown("### Stage Analysis")
-
-    if "Remappedstages" in filtered_df:
-        col1, col2 = st.columns(2)
-
-        with col1:
-            stage_counts = filtered_df["Remappedstages"].value_counts().reset_index()
-            stage_counts.columns = ["Stage", "Count"]
-
-            fig = px.bar(
-                stage_counts.head(10),
-                x="Count",
-                y="Stage",
-                orientation="h",
-                title="Top 10 Stages by Case Count",
-                labels={"Stage": "Stage", "Count": "Number of Cases"},
-                color="Count",
-                color_continuous_scale="Greens",
-            )
-            fig.update_layout(height=500)
-            st.plotly_chart(fig, use_container_width=True)
-
-        with col2:
-            # Stage distribution pie chart
-            fig_pie = px.pie(
-                stage_counts.head(10),
-                values="Count",
-                names="Stage",
-                title="Stage Distribution (Top 10)",
-            )
-            fig_pie.update_layout(height=500)
-            st.plotly_chart(fig_pie, use_container_width=True)
-    else:
-        st.info("Remappedstages column not found in data")
-
-with tab3:
-    st.markdown("### Adjournment Patterns")
-
-    # Adjournment rate by case type
-    if "CaseType" in filtered_df and "Outcome" in filtered_df:
-        adj_by_type = (
-            filtered_df.groupby("CaseType")["Outcome"]
-            .apply(lambda x: (x == "ADJOURNED").sum() / len(x) if len(x) > 0 else 0)
-            .reset_index()
-        )
-        adj_by_type.columns = ["CaseType", "Adjournment_Rate"]
-        adj_by_type["Adjournment_Rate"] = adj_by_type["Adjournment_Rate"] * 100
-
-        fig = px.bar(
-            adj_by_type.sort_values("Adjournment_Rate", ascending=False),
-            x="CaseType",
-            y="Adjournment_Rate",
-
title="Adjournment Rate by Case Type (%)",
|
| 187 |
-
labels={"CaseType": "Case Type", "Adjournment_Rate": "Adjournment Rate (%)"},
|
| 188 |
-
color="Adjournment_Rate",
|
| 189 |
-
color_continuous_scale="Reds",
|
| 190 |
-
)
|
| 191 |
-
fig.update_layout(xaxis_tickangle=-45, height=500)
|
| 192 |
-
st.plotly_chart(fig, use_container_width=True)
|
| 193 |
-
|
| 194 |
-
# Adjournment rate by stage
|
| 195 |
-
if "Remappedstages" in filtered_df and "Outcome" in filtered_df:
|
| 196 |
-
adj_by_stage = (
|
| 197 |
-
filtered_df.groupby("Remappedstages")["Outcome"]
|
| 198 |
-
.apply(lambda x: (x == "ADJOURNED").sum() / len(x) if len(x) > 0 else 0)
|
| 199 |
-
.reset_index()
|
| 200 |
-
)
|
| 201 |
-
adj_by_stage.columns = ["Stage", "Adjournment_Rate"]
|
| 202 |
-
adj_by_stage["Adjournment_Rate"] = adj_by_stage["Adjournment_Rate"] * 100
|
| 203 |
-
|
| 204 |
-
fig = px.bar(
|
| 205 |
-
adj_by_stage.sort_values("Adjournment_Rate", ascending=False).head(15),
|
| 206 |
-
x="Adjournment_Rate",
|
| 207 |
-
y="Stage",
|
| 208 |
-
orientation="h",
|
| 209 |
-
title="Adjournment Rate by Stage (Top 15, %)",
|
| 210 |
-
labels={"Stage": "Stage", "Adjournment_Rate": "Adjournment Rate (%)"},
|
| 211 |
-
color="Adjournment_Rate",
|
| 212 |
-
color_continuous_scale="Oranges",
|
| 213 |
-
)
|
| 214 |
-
fig.update_layout(height=600)
|
| 215 |
-
st.plotly_chart(fig, use_container_width=True)
|
| 216 |
-
|
| 217 |
-
# Heatmap: Adjournment probability by stage and case type
|
| 218 |
-
if params and "adjournment_stats" in params:
|
| 219 |
-
st.markdown("#### Adjournment Probability Heatmap (Stage × Case Type)")
|
| 220 |
-
|
| 221 |
-
adj_stats = params["adjournment_stats"]
|
| 222 |
-
stages = list(adj_stats.keys())
|
| 223 |
-
case_types = params["case_types"]
|
| 224 |
-
|
| 225 |
-
heatmap_data = []
|
| 226 |
-
for stage in stages:
|
| 227 |
-
row = []
|
| 228 |
-
for ct in case_types:
|
| 229 |
-
prob = adj_stats.get(stage, {}).get(ct, 0)
|
| 230 |
-
row.append(prob * 100) # Convert to percentage
|
| 231 |
-
heatmap_data.append(row)
|
| 232 |
-
|
| 233 |
-
fig = go.Figure(data=go.Heatmap(
|
| 234 |
-
z=heatmap_data,
|
| 235 |
-
x=case_types,
|
| 236 |
-
y=stages,
|
| 237 |
-
colorscale="RdYlGn_r",
|
| 238 |
-
text=[[f"{val:.1f}%" for val in row] for row in heatmap_data],
|
| 239 |
-
texttemplate="%{text}",
|
| 240 |
-
textfont={"size": 8},
|
| 241 |
-
colorbar=dict(title="Adj. Rate (%)"),
|
| 242 |
-
))
|
| 243 |
-
fig.update_layout(
|
| 244 |
-
title="Adjournment Probability Heatmap",
|
| 245 |
-
xaxis_title="Case Type",
|
| 246 |
-
yaxis_title="Stage",
|
| 247 |
-
height=700,
|
| 248 |
-
)
|
| 249 |
-
st.plotly_chart(fig, use_container_width=True)
|
| 250 |
-
|
| 251 |
-
with tab4:
|
| 252 |
-
st.markdown("### Raw Data")
|
| 253 |
-
|
| 254 |
-
st.dataframe(
|
| 255 |
-
filtered_df.head(100),
|
| 256 |
-
use_container_width=True,
|
| 257 |
-
height=600,
|
| 258 |
-
)
|
| 259 |
-
|
| 260 |
-
st.markdown(f"**Showing first 100 of {len(filtered_df):,} filtered rows**")
|
| 261 |
-
|
| 262 |
-
# Download button
|
| 263 |
-
csv = filtered_df.to_csv(index=False).encode('utf-8')
|
| 264 |
-
st.download_button(
|
| 265 |
-
label="Download filtered data as CSV",
|
| 266 |
-
data=csv,
|
| 267 |
-
file_name="filtered_cases.csv",
|
| 268 |
-
mime="text/csv",
|
| 269 |
-
)
|
| 270 |
-
|
| 271 |
-
# Footer
|
| 272 |
-
st.markdown("---")
|
| 273 |
-
st.markdown("*Data loaded from EDA pipeline. Refresh to reload.*")
|
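The adjournment-rate tables above use `groupby(...).apply(...)` with a hand-rolled ratio. Since an adjournment rate is just the mean of a boolean flag, an equivalent vectorized form is shorter and avoids the per-group lambda; this is a sketch on made-up sample rows, not the project's dataset:

```python
import pandas as pd

# Hypothetical sample mirroring the CaseType/Outcome columns used above.
df = pd.DataFrame(
    {
        "CaseType": ["CIVIL", "CIVIL", "CRIMINAL", "CRIMINAL"],
        "Outcome": ["ADJOURNED", "HEARD", "ADJOURNED", "ADJOURNED"],
    }
)

# The mean of a boolean series is the fraction of True values, so the
# per-group adjournment rate (in %) is a single chained expression.
adj_by_type = (
    df["Outcome"].eq("ADJOURNED")
    .groupby(df["CaseType"])
    .mean()
    .mul(100)
    .rename("Adjournment_Rate")
    .reset_index()
)
print(adj_by_type)
```

The resulting frame has the same `["CaseType", "Adjournment_Rate"]` shape the dashboard feeds to `px.bar`, so it can drop in unchanged.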
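The heatmap above assembles its z-matrix with nested loops over a `{stage: {case_type: probability}}` dict. The same matrix can be built in one step with pandas; the probabilities below are invented placeholders, not values from the real pipeline:

```python
import pandas as pd

# Hypothetical mapping mirroring the shape of params["adjournment_stats"].
adj_stats = {
    "PRE-TRIAL": {"CIVIL": 0.5, "CRIMINAL": 0.75},
    "TRIAL": {"CIVIL": 0.25, "CRIMINAL": 0.125},
}
case_types = ["CIVIL", "CRIMINAL"]

# orient="index" makes the outer keys (stages) the rows and the inner keys
# (case types) the columns; reindex pins the column order and fills missing
# combinations with 0, matching the .get(..., 0) fallback in the loop version.
heatmap = (
    pd.DataFrame.from_dict(adj_stats, orient="index")
    .reindex(columns=case_types, fill_value=0)
    .mul(100)  # convert to percentages, as the dashboard does
)
print(heatmap)
```

`heatmap.values.tolist()` then gives the nested list that `go.Heatmap(z=...)` expects, with `heatmap.columns` and `heatmap.index` as the axis labels.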
scheduler/dashboard/pages/2_Ripeness_Classifier.py
CHANGED
```diff
@@ -10,21 +10,24 @@ from datetime import date, timedelta
 
 import pandas as pd
 import plotly.express as px
-import plotly.graph_objects as go
 import streamlit as st
 
-from scheduler.core.case import Case, CaseStatus
+from scheduler.core.case import Case, CaseStatus
 from scheduler.core.ripeness import RipenessClassifier, RipenessStatus
-from scheduler.dashboard.utils import
+from scheduler.dashboard.utils.data_loader import (
+    attach_history_to_cases,
+    load_generated_cases,
+    load_generated_hearings,
+)
 
 # Page configuration
 st.set_page_config(
     page_title="Ripeness Classifier",
-    page_icon="
+    page_icon="target",
     layout="wide",
 )
 
-st.title("
+st.title("Ripeness Classifier - Explainability Dashboard")
 st.markdown("Understand and tune the case readiness algorithm")
 
 # Initialize session state for thresholds
@@ -67,6 +70,13 @@ min_case_age_days = st.sidebar.slider(
     help="Minimum case age before considered RIPE",
 )
 
+# Detailed history toggle
+use_history = st.sidebar.toggle(
+    "Use detailed hearing history (if available)",
+    value=True,
+    help="When enabled, the classifier will use per-hearing history from hearings.csv if present.",
+)
+
 # Reset button
 if st.sidebar.button("Reset to Defaults"):
     st.session_state.min_service_hearings = 2
@@ -79,252 +89,213 @@ st.session_state.min_service_hearings = min_service_hearings
 st.session_state.min_stage_days = min_stage_days
 st.session_state.min_case_age_days = min_case_age_days
 
+# Wire sidebar thresholds to the core classifier
+RipenessClassifier.set_thresholds(
+    {
+        "MIN_SERVICE_HEARINGS": min_service_hearings,
+        "MIN_STAGE_DAYS": min_stage_days,
+        "MIN_CASE_AGE_DAYS": min_case_age_days,
+    }
+)
+
 # Main content
 tab1, tab2, tab3 = st.tabs(["Current Configuration", "Interactive Testing", "Batch Classification"])
 
 with tab1:
     st.markdown("### Current Classifier Configuration")
 
     col1, col2, col3 = st.columns(3)
 
     with col1:
         st.metric("Min Service Hearings", min_service_hearings)
         st.caption("Cases need at least this many service hearings")
 
     with col2:
         st.metric("Min Stage Days", min_stage_days)
         st.caption("Days in current stage threshold")
 
     with col3:
         st.metric("Min Case Age", f"{min_case_age_days} days")
         st.caption("Minimum case age requirement")
 
     st.markdown("---")
 
     # Classification logic flowchart
     st.markdown("### Classification Logic")
 
     with st.expander("View Decision Tree Logic"):
         st.markdown("""
         The ripeness classifier uses the following decision logic:
 
         **1. Service Hearings Check**
-        - If `service_hearings < MIN_SERVICE_HEARINGS`
+        - If `service_hearings < MIN_SERVICE_HEARINGS` -> **UNRIPE**
 
         **2. Case Age Check**
-        - If `case_age < MIN_CASE_AGE_DAYS`
+        - If `case_age < MIN_CASE_AGE_DAYS` -> **UNRIPE**
 
         **3. Stage-Specific Checks**
         - Each stage has minimum days requirement
-        - If `days_in_stage < stage_requirement`
+        - If `days_in_stage < stage_requirement` -> **UNRIPE**
 
         **4. Keyword Analysis**
         - Certain keywords indicate ripeness (e.g., "reply filed", "arguments complete")
-        - If keywords found
+        - If keywords found -> **RIPE**
 
         **5. Final Classification**
-        - If all criteria met
-        - If some criteria failed but not critical
-        - Otherwise
+        - If all criteria met -> **RIPE**
+        - If some criteria failed but not critical -> **UNKNOWN**
+        - Otherwise -> **UNRIPE**
         """)
 
     # Show stage-specific rules
     st.markdown("### Stage-Specific Rules")
 
     stage_rules = {
         "PRE-TRIAL": {"min_days": 60, "keywords": ["affidavit filed", "reply filed"]},
         "TRIAL": {"min_days": 45, "keywords": ["evidence complete", "cross complete"]},
         "POST-TRIAL": {"min_days": 30, "keywords": ["arguments complete", "written note"]},
         "FINAL DISPOSAL": {"min_days": 15, "keywords": ["disposed", "judgment"]},
     }
 
-    df_rules = pd.DataFrame(
-
+    df_rules = pd.DataFrame(
+        [
+            {
+                "Stage": stage,
+                "Min Days": rules["min_days"],
+                "Keywords": ", ".join(rules["keywords"]),
+            }
+            for stage, rules in stage_rules.items()
+        ]
+    )
+
     st.dataframe(df_rules, use_container_width=True, hide_index=True)
 
 with tab2:
     st.markdown("### Interactive Case Classification Testing")
 
-    st.markdown(
-
+    st.markdown(
+        "Create a synthetic case and see how it would be classified with current thresholds"
+    )
+
     col1, col2 = st.columns(2)
 
     with col1:
         case_id = st.text_input("Case ID", value="TEST-001")
         case_type = st.selectbox("Case Type", ["CIVIL", "CRIMINAL", "WRIT", "PIL"])
-        case_stage = st.selectbox(
+        case_stage = st.selectbox(
+            "Current Stage", ["PRE-TRIAL", "TRIAL", "POST-TRIAL", "FINAL DISPOSAL"]
+        )
 
     with col2:
-        service_hearings_count = st.number_input(
+        service_hearings_count = st.number_input(
+            "Service Hearings", min_value=0, max_value=20, value=3
+        )
         days_in_stage = st.number_input("Days in Stage", min_value=0, max_value=365, value=45)
         case_age = st.number_input("Case Age (days)", min_value=0, max_value=3650, value=120)
 
     # Keywords
     has_keywords = st.multiselect(
         "Keywords Found",
-        options=[
+        options=[
+            "reply filed",
+            "affidavit filed",
+            "arguments complete",
+            "evidence complete",
+            "written note",
+        ],
         default=[],
     )
 
     if st.button("Classify Case"):
         # Create synthetic case
         today = date.today()
         filed_date = today - timedelta(days=case_age)
 
         test_case = Case(
             case_id=case_id,
-            case_type=
+            case_type=case_type,  # Use string directly instead of CaseType enum
             filed_date=filed_date,
             current_stage=case_stage,
             status=CaseStatus.PENDING,
         )
 
-        #
-        test_case.
-        ]
-        #
-        if service_hearings_count >= min_service_hearings:
-            criteria_passed.append(f"✓ Service hearings: {service_hearings_count} (threshold: {min_service_hearings})")
-        else:
-            criteria_failed.append(f"✗ Service hearings: {service_hearings_count} (threshold: {min_service_hearings})")
-
-        # Check case age
-        if case_age >= min_case_age_days:
-            criteria_passed.append(f"✓ Case age: {case_age} days (threshold: {min_case_age_days})")
-        else:
-            criteria_failed.append(f"✗ Case age: {case_age} days (threshold: {min_case_age_days})")
-
-        # Check stage days
-        stage_threshold = stage_rules.get(case_stage, {}).get("min_days", min_stage_days)
-        if days_in_stage >= stage_threshold:
-            criteria_passed.append(f"✓ Stage days: {days_in_stage} (threshold: {stage_threshold} for {case_stage})")
-        else:
-            criteria_failed.append(f"✗ Stage days: {days_in_stage} (threshold: {stage_threshold} for {case_stage})")
-
-        # Check keywords
-        expected_keywords = stage_rules.get(case_stage, {}).get("keywords", [])
-        keywords_found = [kw for kw in has_keywords if kw in expected_keywords]
-        if keywords_found:
-            criteria_passed.append(f"✓ Keywords: {', '.join(keywords_found)}")
-        else:
-            criteria_failed.append(f"✗ No relevant keywords found")
-
-        # Final classification
-        if len(criteria_failed) == 0:
-            classification = "RIPE"
-            color = "green"
-        elif len(criteria_failed) <= 1:
-            classification = "UNKNOWN"
-            color = "orange"
-        else:
-            classification = "UNRIPE"
-            color = "red"
-
-        # Display results
-        st.markdown("### Classification Result")
-        st.markdown(f":{color}[**{classification}**]")
-
-        col1, col2 = st.columns(2)
-
-        with col1:
-            st.markdown("#### Criteria Passed")
-            for criterion in criteria_passed:
-                st.markdown(criterion)
-
-        with col2:
-            st.markdown("#### Criteria Failed")
-            if criteria_failed:
-                for criterion in criteria_failed:
-                    st.markdown(criterion)
-            else:
-                st.markdown("*All criteria passed*")
-
-        # Feature importance
-        st.markdown("---")
-        st.markdown("### Feature Importance")
-
-        feature_scores = {
-            "Service Hearings": 1 if service_hearings_count >= min_service_hearings else 0,
-            "Case Age": 1 if case_age >= min_case_age_days else 0,
-            "Stage Days": 1 if days_in_stage >= stage_threshold else 0,
-            "Keywords": 1 if keywords_found else 0,
-        }
-
-        fig = px.bar(
-            x=list(feature_scores.keys()),
-            y=list(feature_scores.values()),
-            labels={"x": "Feature", "y": "Score (0=Fail, 1=Pass)"},
-            title="Feature Contribution to Ripeness",
-            color=list(feature_scores.values()),
-            color_continuous_scale=["red", "green"],
+        # Populate aggregates and optional purpose based on selected keywords
+        test_case.hearing_count = service_hearings_count
+        test_case.days_in_stage = int(days_in_stage)
+        test_case.age_days = int(case_age)
+        test_case.last_hearing_purpose = has_keywords[0] if has_keywords else None
+
+        # Use the real classifier
+        status = RipenessClassifier.classify(test_case)
+        reason = RipenessClassifier.get_ripeness_reason(status)
+
+        color = (
+            "green"
+            if status == RipenessStatus.RIPE
+            else ("red" if status.is_unripe() else "orange")
         )
-
-        st.
+        st.markdown("### Classification Result")
+        st.markdown(f":{color}[**{status.value}**]")
+        st.caption(reason)
 
 with tab3:
     st.markdown("### Batch Classification Analysis")
 
-    st.markdown(
+    st.markdown(
+        "Load generated test cases and classify them with current thresholds (core classifier)"
+    )
 
     if st.button("Load & Classify Test Cases"):
         with st.spinner("Loading cases..."):
             try:
                 cases = load_generated_cases()
 
+                if use_history:
+                    hearings_df = load_generated_hearings()
+                    cases = attach_history_to_cases(cases, hearings_df)
+
                 if not cases:
-                    st.warning(
+                    st.warning(
+                        "No test cases found. Generate cases first: `uv run court-scheduler generate`"
+                    )
                 else:
                     st.success(f"Loaded {len(cases)} test cases")
 
-                    # Classify all cases
+                    # Classify all cases using the core classifier
                     classifications = {"RIPE": 0, "UNRIPE": 0, "UNKNOWN": 0}
 
+                    today = date.today()
                     for case in cases:
-
-                        if criteria_met == 2:
+                        # Ensure aggregates are available
+                        case.age_days = (today - case.filed_date).days
+                        if getattr(case, "stage_start_date", None):
+                            case.days_in_stage = (today - case.stage_start_date).days
+                        else:
+                            case.days_in_stage = case.age_days
+
+                        status = RipenessClassifier.classify(case)
+                        if status == RipenessStatus.RIPE:
                             classifications["RIPE"] += 1
-                        elif
+                        elif status == RipenessStatus.UNKNOWN:
                             classifications["UNKNOWN"] += 1
                         else:
                             classifications["UNRIPE"] += 1
 
                     # Display results
                     col1, col2, col3 = st.columns(3)
 
                     with col1:
                         pct = classifications["RIPE"] / len(cases) * 100
                         st.metric("RIPE Cases", f"{classifications['RIPE']:,}", f"{pct:.1f}%")
 
                     with col2:
                         pct = classifications["UNKNOWN"] / len(cases) * 100
                         st.metric("UNKNOWN Cases", f"{classifications['UNKNOWN']:,}", f"{pct:.1f}%")
 
                     with col3:
                         pct = classifications["UNRIPE"] / len(cases) * 100
                         st.metric("UNRIPE Cases", f"{classifications['UNRIPE']:,}", f"{pct:.1f}%")
 
                    # Pie chart
                     fig = px.pie(
                         values=list(classifications.values()),
@@ -334,7 +305,7 @@ with tab3:
                         color_discrete_map={"RIPE": "green", "UNKNOWN": "orange", "UNRIPE": "red"},
                     )
                     st.plotly_chart(fig, use_container_width=True)
 
             except Exception as e:
                 st.error(f"Error loading cases: {e}")
 
```
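The decision logic listed in the page's expander (service-hearing, case-age, stage-day, and keyword checks) can be sketched as a small standalone function. This is an illustrative reimplementation under assumed thresholds and names, not the project's `RipenessClassifier`:

```python
from dataclasses import dataclass
from typing import Optional

# Assumed thresholds; the real values come from the sidebar sliders via
# RipenessClassifier.set_thresholds, so treat these as placeholders.
MIN_SERVICE_HEARINGS = 2
MIN_CASE_AGE_DAYS = 90
STAGE_MIN_DAYS = {"PRE-TRIAL": 60, "TRIAL": 45, "POST-TRIAL": 30, "FINAL DISPOSAL": 15}
RIPE_KEYWORDS = {"reply filed", "arguments complete", "evidence complete"}


@dataclass
class CaseSnapshot:
    service_hearings: int
    age_days: int
    stage: str
    days_in_stage: int
    last_purpose: Optional[str] = None


def classify(c: CaseSnapshot) -> str:
    # Hard gates: too few service hearings or too young a case means UNRIPE.
    if c.service_hearings < MIN_SERVICE_HEARINGS or c.age_days < MIN_CASE_AGE_DAYS:
        return "UNRIPE"
    # An explicit readiness keyword makes the case RIPE outright.
    if c.last_purpose and c.last_purpose.lower() in RIPE_KEYWORDS:
        return "RIPE"
    # Otherwise the case must have dwelt long enough in its current stage.
    if c.days_in_stage >= STAGE_MIN_DAYS.get(c.stage, 30):
        return "RIPE"
    return "UNKNOWN"


print(classify(CaseSnapshot(3, 120, "TRIAL", 50)))  # meets every gate -> RIPE
print(classify(CaseSnapshot(1, 120, "TRIAL", 50)))  # too few service hearings -> UNRIPE
```

Ordering the checks from hard gates to soft signals mirrors the expander's numbered steps, so tuning a threshold changes exactly one branch.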