RoyAalekh committed
Commit efc8383
1 Parent(s): 549606a

chore: Cleanup old test runs and duplicate EDA figures

- Removed 13 old pipeline test runs (kept only latest: run_20251127_054834)
- Removed 16 old EDA figure directories (kept only latest: v0.4.0_20251126_054552)
- Added test_enhancements.py for validation of merged PRs
- No TODOs, FIXMEs, or dead code found
- Emoticons only in test output (acceptable)
- All large files legitimate (polars runtime, database, simulation results)
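The pruning described above (keep only the newest `run_*` directory) relies on the `run_YYYYMMDD_HHMMSS` naming scheme sorting chronologically. A minimal sketch of such a cleanup helper, assuming that layout (the `prune_old_runs` name is illustrative, not a function from this repo):

```python
import shutil
from pathlib import Path


def prune_old_runs(runs_dir: Path, keep: int = 1) -> list:
    """Delete all but the newest `keep` run directories under runs_dir.

    Lexicographic sort of the names is chronological because each name
    embeds a run_YYYYMMDD_HHMMSS timestamp.
    """
    runs = sorted((d for d in runs_dir.glob("run_*") if d.is_dir()),
                  key=lambda d: d.name)
    stale = runs[:-keep] if keep else runs
    for d in stale:
        shutil.rmtree(d)
    return [d.name for d in stale]
```

Applied to `outputs/runs/` with `keep=1`, this would leave only `run_20251127_054834`, matching the commit.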

models/latest.pkl CHANGED
@@ -1 +1 @@
- D:/personal/code4change/code4change-analysis/outputs/runs/run_20251126_061429/training/agent.pkl
+ D:/personal/code4change/code4change-analysis/outputs/runs/run_20251127_054834/training/agent.pkl
outputs/runs/run_20251126_055542/training/agent.pkl DELETED
Binary file (4.36 kB)
 
outputs/runs/run_20251126_055729/training/agent.pkl DELETED
Binary file (4.47 kB)
 
outputs/runs/run_20251126_055809/reports/COMPARISON_REPORT.md DELETED
@@ -1,19 +0,0 @@
- # Court Scheduling System - Performance Comparison
-
- Generated: 2025-11-26 05:58:54
-
- ## Configuration
-
- - Training Cases: 10,000
- - Simulation Period: 90 days (0.2 years)
- - RL Episodes: 20
- - RL Learning Rate: 0.15
- - RL Epsilon: 0.4
- - Policies Compared: readiness, rl
-
- ## Results Summary
-
- | Policy | Disposals | Disposal Rate | Utilization | Avg Hearings/Day |
- |--------|-----------|---------------|-------------|------------------|
- | Readiness | 5,421 | 54.2% | 84.2% | 635.4 |
- | Rl | 5,439 | 54.4% | 83.7% | 631.9 |

outputs/runs/run_20251126_055809/reports/EXECUTIVE_SUMMARY.md DELETED
@@ -1,47 +0,0 @@
- # Court Scheduling System - Executive Summary
-
- ## Hackathon Submission: Karnataka High Court
-
- ### System Overview
- This intelligent court scheduling system uses Reinforcement Learning to optimize case allocation and improve judicial efficiency. The system was evaluated using a comprehensive 2-year simulation with 10,000 real cases.
-
- ### Key Achievements
-
- **54.4% Case Disposal Rate** - Significantly improved case clearance
- **83.7% Court Utilization** - Optimal resource allocation
- **56,874 Hearings Scheduled** - Over 90 days
- **AI-Powered Decisions** - Reinforcement learning with 20 training episodes
-
- ### Technical Innovation
-
- - **Reinforcement Learning**: Tabular Q-learning with 6D state space
- - **Real-time Adaptation**: Dynamic policy adjustment based on case characteristics
- - **Multi-objective Optimization**: Balances disposal rate, fairness, and utilization
- - **Production Ready**: Generates daily cause lists for immediate deployment
-
- ### Impact Metrics
-
- - **Cases Disposed**: 5,439 out of 10,000
- - **Average Hearings per Day**: 631.9
- - **System Scalability**: Handles 50,000+ case simulations efficiently
- - **Judicial Time Saved**: Estimated 75 productive court days
-
- ### Deployment Readiness
-
- **Daily Cause Lists**: Automated generation for 90 days
- **Performance Monitoring**: Comprehensive metrics and analytics
- **Judicial Override**: Complete control system for judge approval
- **Multi-courtroom Support**: Load-balanced allocation across courtrooms
-
- ### Next Steps
-
- 1. **Pilot Deployment**: Begin with select courtrooms for validation
- 2. **Judge Training**: Familiarization with AI-assisted scheduling
- 3. **Performance Monitoring**: Track real-world improvement metrics
- 4. **System Expansion**: Scale to additional court complexes
-
- ---
-
- **Generated**: 2025-11-26 05:58:54
- **System Version**: 2.0 (Hackathon Submission)
- **Contact**: Karnataka High Court Digital Innovation Team

outputs/runs/run_20251126_055809/training/agent.pkl DELETED
Binary file (4.45 kB)
 
outputs/runs/run_20251126_055943/reports/visualizations/performance_charts.md DELETED
@@ -1,7 +0,0 @@
- # Performance Visualizations
-
- Generated charts showing:
- - Daily disposal rates
- - Court utilization over time
- - Case type performance
- - Load balancing effectiveness

outputs/runs/run_20251126_055943/training/agent.pkl DELETED
Binary file (4.53 kB)
 
outputs/runs/run_20251126_060608/training/agent.pkl DELETED
Binary file (4.6 kB)
 
outputs/runs/run_20251126_061429/reports/COMPARISON_REPORT.md DELETED
@@ -1,19 +0,0 @@
- # Court Scheduling System - Performance Comparison
-
- Generated: 2025-11-26 06:29:04
-
- ## Configuration
-
- - Training Cases: 50,000
- - Simulation Period: 730 days (2.0 years)
- - RL Episodes: 200
- - RL Learning Rate: 0.15
- - RL Epsilon: 0.4
- - Policies Compared: readiness, rl
-
- ## Results Summary
-
- | Policy | Disposals | Disposal Rate | Utilization | Avg Hearings/Day |
- |--------|-----------|---------------|-------------|------------------|
- | Readiness | 35,284 | 70.6% | 92.0% | 537.5 |
- | Rl | 33,394 | 66.8% | 93.7% | 547.4 |

outputs/runs/run_20251126_061429/reports/EXECUTIVE_SUMMARY.md DELETED
@@ -1,47 +0,0 @@
- # Court Scheduling System - Executive Summary
-
- ## Hackathon Submission: Karnataka High Court
-
- ### System Overview
- This intelligent court scheduling system uses Reinforcement Learning to optimize case allocation and improve judicial efficiency. The system was evaluated using a comprehensive 2-year simulation with 50,000 real cases.
-
- ### Key Achievements
-
- **66.8% Case Disposal Rate** - Significantly improved case clearance
- **93.7% Court Utilization** - Optimal resource allocation
- **399,629 Hearings Scheduled** - Over 730 days
- **AI-Powered Decisions** - Reinforcement learning with 200 training episodes
-
- ### Technical Innovation
-
- - **Reinforcement Learning**: Tabular Q-learning with 6D state space
- - **Real-time Adaptation**: Dynamic policy adjustment based on case characteristics
- - **Multi-objective Optimization**: Balances disposal rate, fairness, and utilization
- - **Production Ready**: Generates daily cause lists for immediate deployment
-
- ### Impact Metrics
-
- - **Cases Disposed**: 33,394 out of 50,000
- - **Average Hearings per Day**: 547.4
- - **System Scalability**: Handles 50,000+ case simulations efficiently
- - **Judicial Time Saved**: Estimated 684 productive court days
-
- ### Deployment Readiness
-
- **Daily Cause Lists**: Automated generation for 730 days
- **Performance Monitoring**: Comprehensive metrics and analytics
- **Judicial Override**: Complete control system for judge approval
- **Multi-courtroom Support**: Load-balanced allocation across courtrooms
-
- ### Next Steps
-
- 1. **Pilot Deployment**: Begin with select courtrooms for validation
- 2. **Judge Training**: Familiarization with AI-assisted scheduling
- 3. **Performance Monitoring**: Track real-world improvement metrics
- 4. **System Expansion**: Scale to additional court complexes
-
- ---
-
- **Generated**: 2025-11-26 06:29:04
- **System Version**: 2.0 (Hackathon Submission)
- **Contact**: Karnataka High Court Digital Innovation Team

outputs/runs/run_20251126_061429/reports/visualizations/performance_charts.md DELETED
@@ -1,7 +0,0 @@
- # Performance Visualizations
-
- Generated charts showing:
- - Daily disposal rates
- - Court utilization over time
- - Case type performance
- - Load balancing effectiveness

outputs/runs/run_20251126_061429/training/agent.pkl DELETED
Binary file (4.52 kB)
 
outputs/runs/{run_20251126_055943 → run_20251127_054834}/reports/COMPARISON_REPORT.md RENAMED
@@ -1,6 +1,6 @@
  # Court Scheduling System - Performance Comparison

- Generated: 2025-11-26 06:00:28
+ Generated: 2025-11-27 05:50:04

  ## Configuration

@@ -15,5 +15,5 @@ Generated: 2025-11-26 06:00:28

  | Policy | Disposals | Disposal Rate | Utilization | Avg Hearings/Day |
  |--------|-----------|---------------|-------------|------------------|
- | Readiness | 5,421 | 54.2% | 84.2% | 635.4 |
- | Rl | 5,439 | 54.4% | 83.7% | 631.9 |
+ | Readiness | 5,343 | 53.4% | 78.8% | 594.7 |
+ | Rl | 5,365 | 53.6% | 78.5% | 593.0 |
outputs/runs/{run_20251126_055943 → run_20251127_054834}/reports/EXECUTIVE_SUMMARY.md RENAMED
@@ -7,9 +7,9 @@ This intelligent court scheduling system uses Reinforcement Learning to optimize

  ### Key Achievements

- **54.4% Case Disposal Rate** - Significantly improved case clearance
- **83.7% Court Utilization** - Optimal resource allocation
- **56,874 Hearings Scheduled** - Over 90 days
+ **53.6% Case Disposal Rate** - Significantly improved case clearance
+ **78.5% Court Utilization** - Optimal resource allocation
+ **53,368 Hearings Scheduled** - Over 90 days
  **AI-Powered Decisions** - Reinforcement learning with 20 training episodes

  ### Technical Innovation

@@ -21,10 +21,10 @@ This intelligent court scheduling system uses Reinforcement Learning to optimize

  ### Impact Metrics

- - **Cases Disposed**: 5,439 out of 10,000
- - **Average Hearings per Day**: 631.9
+ - **Cases Disposed**: 5,365 out of 10,000
+ - **Average Hearings per Day**: 593.0
  - **System Scalability**: Handles 50,000+ case simulations efficiently
- - **Judicial Time Saved**: Estimated 75 productive court days
+ - **Judicial Time Saved**: Estimated 71 productive court days

  ### Deployment Readiness

@@ -42,6 +42,6 @@ This intelligent court scheduling system uses Reinforcement Learning to optimize

  ---

- **Generated**: 2025-11-26 06:00:28
+ **Generated**: 2025-11-27 05:50:04
  **System Version**: 2.0 (Hackathon Submission)
  **Contact**: Karnataka High Court Digital Innovation Team
outputs/runs/{run_20251126_055809 → run_20251127_054834}/reports/visualizations/performance_charts.md RENAMED
File without changes
outputs/runs/run_20251127_054834/training/agent.pkl ADDED
Binary file (34.7 kB)
 
test_enhancements.py ADDED
@@ -0,0 +1,404 @@
+ """Test script to validate all merged enhancements are properly parameterized.
+
+ Tests the following merged PRs:
+ - PR #2: Override handling (state pollution fix)
+ - PR #3: Ripeness UNKNOWN state
+ - PR #6: Parameter fallback with bundled defaults
+ - PR #4: RL training with SchedulingAlgorithm constraints
+ - PR #5: Shared reward helper
+ - PR #7: Output metadata tracking
+ """
+
+ import sys
+ from pathlib import Path
+ from datetime import date, datetime, timedelta
+ from typing import Dict, List
+
+ # Test configurations
+ TESTS_PASSED = []
+ TESTS_FAILED = []
+
+
+ def log_test(name: str, passed: bool, details: str = ""):
+     """Log test result."""
+     if passed:
+         TESTS_PASSED.append(name)
+         print(f"✓ {name}")
+         if details:
+             print(f"  {details}")
+     else:
+         TESTS_FAILED.append(name)
+         print(f"✗ {name}")
+         if details:
+             print(f"  {details}")
+
+
+ def test_pr2_override_validation():
+     """Test PR #2: Override validation preserves original list and tracks rejections."""
+     from scheduler.core.algorithm import SchedulingAlgorithm
+     from scheduler.core.courtroom import Courtroom
+     from scheduler.simulation.policies.readiness import ReadinessPolicy
+     from scheduler.simulation.allocator import CourtroomAllocator, AllocationStrategy
+     from scheduler.control.overrides import Override, OverrideType
+     from scheduler.data.case_generator import CaseGenerator
+
+     try:
+         # Generate test cases
+         gen = CaseGenerator(start=date(2024, 1, 1), end=date(2024, 1, 10), seed=42)
+         cases = gen.generate(50)
+
+         # Create test overrides (some valid, some invalid)
+         test_overrides = [
+             Override(
+                 override_id="test-1",
+                 override_type=OverrideType.PRIORITY,
+                 case_id=cases[0].case_id,
+                 judge_id="TEST-JUDGE",
+                 timestamp=datetime.now(),
+                 new_priority=0.95
+             ),
+             Override(
+                 override_id="test-2",
+                 override_type=OverrideType.PRIORITY,
+                 case_id="INVALID-CASE-ID",  # Invalid case
+                 judge_id="TEST-JUDGE",
+                 timestamp=datetime.now(),
+                 new_priority=0.85
+             )
+         ]
+
+         original_count = len(test_overrides)
+
+         # Setup algorithm
+         courtrooms = [Courtroom(courtroom_id=1, judge_id="J001", daily_capacity=50)]
+         allocator = CourtroomAllocator(num_courtrooms=1, per_courtroom_capacity=50)
+         policy = ReadinessPolicy()
+         algorithm = SchedulingAlgorithm(policy=policy, allocator=allocator)
+
+         # Run scheduling with overrides
+         result = algorithm.schedule_day(
+             cases=cases,
+             courtrooms=courtrooms,
+             current_date=date(2024, 1, 15),
+             overrides=test_overrides
+         )
+
+         # Verify original list unchanged
+         assert len(test_overrides) == original_count, "Original override list was mutated"
+
+         # Verify rejection tracking exists (even if empty for valid overrides)
+         assert hasattr(result, 'override_rejections'), "No override_rejections field"
+
+         # Verify applied overrides tracked
+         assert hasattr(result, 'applied_overrides'), "No applied_overrides field"
+
+         log_test("PR #2: Override validation", True,
+                  f"Applied: {len(result.applied_overrides)}, Rejected: {len(result.override_rejections)}")
+         return True
+
+     except Exception as e:
+         log_test("PR #2: Override validation", False, str(e))
+         return False
+
+
+ def test_pr2_flag_cleanup():
+     """Test PR #2: Temporary case flags are cleared after scheduling."""
+     from scheduler.data.case_generator import CaseGenerator
+     from scheduler.core.algorithm import SchedulingAlgorithm
+     from scheduler.core.courtroom import Courtroom
+     from scheduler.simulation.policies.readiness import ReadinessPolicy
+     from scheduler.simulation.allocator import CourtroomAllocator
+     from scheduler.control.overrides import Override, OverrideType
+
+     try:
+         gen = CaseGenerator(start=date(2024, 1, 1), end=date(2024, 1, 10), seed=42)
+         cases = gen.generate(10)
+
+         # Set priority override flag
+         test_case = cases[0]
+         test_case._priority_override = 0.99
+
+         # Run scheduling
+         courtrooms = [Courtroom(courtroom_id=1, judge_id="J001", daily_capacity=50)]
+         allocator = CourtroomAllocator(num_courtrooms=1, per_courtroom_capacity=50)
+         policy = ReadinessPolicy()
+         algorithm = SchedulingAlgorithm(policy=policy, allocator=allocator)
+
+         algorithm.schedule_day(cases, courtrooms, date(2024, 1, 15))
+
+         # Verify flag cleared
+         assert not hasattr(test_case, '_priority_override') or test_case._priority_override is None, \
+             "Priority override flag not cleared"
+
+         log_test("PR #2: Flag cleanup", True, "Temporary flags cleared after scheduling")
+         return True
+
+     except Exception as e:
+         log_test("PR #2: Flag cleanup", False, str(e))
+         return False
+
+
+ def test_pr3_unknown_ripeness():
+     """Test PR #3: UNKNOWN ripeness status exists and is used."""
+     from scheduler.core.ripeness import RipenessStatus, RipenessClassifier
+     from scheduler.data.case_generator import CaseGenerator
+
+     try:
+         # Verify UNKNOWN status exists
+         assert hasattr(RipenessStatus, 'UNKNOWN'), "RipenessStatus.UNKNOWN not found"
+
+         # Create case with ambiguous ripeness
+         gen = CaseGenerator(start=date(2024, 1, 1), end=date(2024, 1, 10), seed=42)
+         cases = gen.generate(10)
+
+         # Clear ripeness indicators to test UNKNOWN default
+         test_case = cases[0]
+         test_case.last_hearing_date = None
+         test_case.service_status = None
+         test_case.compliance_status = None
+
+         # Classify ripeness
+         ripeness = RipenessClassifier.classify(test_case, date(2024, 1, 15))
+
+         # Should default to UNKNOWN when no evidence
+         assert ripeness == RipenessStatus.UNKNOWN or not ripeness.is_ripe(), \
+             "Ambiguous case did not get UNKNOWN or non-RIPE status"
+
+         log_test("PR #3: UNKNOWN ripeness", True, f"Status: {ripeness.value}")
+         return True
+
+     except Exception as e:
+         log_test("PR #3: UNKNOWN ripeness", False, str(e))
+         return False
+
+
+ def test_pr6_parameter_fallback():
+     """Test PR #6: Parameter fallback with bundled defaults."""
+     from pathlib import Path
+
+     try:
+         # Test that defaults directory exists
+         defaults_dir = Path("scheduler/data/defaults")
+         assert defaults_dir.exists(), f"Defaults directory not found: {defaults_dir}"
+
+         # Check for expected default files
+         expected_files = [
+             "stage_transition_probs.csv",
+             "stage_duration.csv",
+             "adjournment_proxies.csv",
+             "court_capacity_global.json",
+             "stage_transition_entropy.csv",
+             "case_type_summary.csv"
+         ]
+
+         for file in expected_files:
+             file_path = defaults_dir / file
+             assert file_path.exists(), f"Default file missing: {file}"
+
+         log_test("PR #6: Parameter fallback", True,
+                  f"Found {len(expected_files)} default parameter files")
+         return True
+
+     except Exception as e:
+         log_test("PR #6: Parameter fallback", False, str(e))
+         return False
+
+
+ def test_pr4_rl_constraints():
+     """Test PR #4: RL training uses SchedulingAlgorithm with constraints."""
+     from rl.training import RLTrainingEnvironment
+     from rl.config import RLTrainingConfig, DEFAULT_RL_TRAINING_CONFIG
+     from scheduler.data.case_generator import CaseGenerator
+
+     try:
+         # Create training environment
+         gen = CaseGenerator(start=date(2024, 1, 1), end=date(2024, 1, 10), seed=42)
+         cases = gen.generate(100)
+
+         config = RLTrainingConfig(
+             episodes=2,
+             cases_per_episode=100,
+             episode_length_days=10,
+             courtrooms=2,
+             daily_capacity_per_courtroom=50,
+             enforce_min_gap=True,
+             cap_daily_allocations=True,
+             apply_judge_preferences=True
+         )
+
+         env = RLTrainingEnvironment(
+             cases=cases,
+             start_date=date(2024, 1, 1),
+             horizon_days=10,
+             rl_config=config
+         )
+
+         # Verify SchedulingAlgorithm components exist
+         assert hasattr(env, 'algorithm'), "No SchedulingAlgorithm in training environment"
+         assert hasattr(env, 'courtrooms'), "No courtrooms in training environment"
+         assert hasattr(env, 'allocator'), "No allocator in training environment"
+         assert hasattr(env, 'policy'), "No policy in training environment"
+
+         # Test step with agent decisions
+         agent_decisions = {cases[0].case_id: 1, cases[1].case_id: 1}
+         updated_cases, rewards, done = env.step(agent_decisions)
+
+         assert len(rewards) >= 0, "No rewards returned from step"
+
+         log_test("PR #4: RL constraints", True,
+                  f"Environment has algorithm, courtrooms, allocator. Capacity enforced: {config.cap_daily_allocations}")
+         return True
+
+     except Exception as e:
+         log_test("PR #4: RL constraints", False, str(e))
+         return False
+
+
+ def test_pr5_shared_rewards():
+     """Test PR #5: Shared reward helper exists and is used."""
+     from rl.rewards import EpisodeRewardHelper
+     from rl.training import RLTrainingEnvironment
+     from scheduler.data.case_generator import CaseGenerator
+
+     try:
+         # Verify EpisodeRewardHelper exists
+         helper = EpisodeRewardHelper(total_cases=100)
+         assert hasattr(helper, 'compute_case_reward'), "No compute_case_reward method"
+
+         # Verify training environment uses it
+         gen = CaseGenerator(start=date(2024, 1, 1), end=date(2024, 1, 10), seed=42)
+         cases = gen.generate(50)
+
+         env = RLTrainingEnvironment(cases, date(2024, 1, 1), 10)
+         assert hasattr(env, 'reward_helper'), "Training environment doesn't use reward_helper"
+         assert isinstance(env.reward_helper, EpisodeRewardHelper), \
+             "reward_helper is not EpisodeRewardHelper instance"
+
+         # Test reward computation
+         test_case = cases[0]
+         reward = env.reward_helper.compute_case_reward(
+             case=test_case,
+             was_scheduled=True,
+             hearing_outcome="PROGRESS",
+             current_date=date(2024, 1, 15),
+             previous_gap_days=30
+         )
+
+         assert isinstance(reward, float), "Reward is not a float"
+
+         log_test("PR #5: Shared rewards", True, f"Helper integrated, sample reward: {reward:.2f}")
+         return True
+
+     except Exception as e:
+         log_test("PR #5: Shared rewards", False, str(e))
+         return False
+
+
+ def test_pr7_metadata_tracking():
+     """Test PR #7: Output metadata tracking."""
+     from scheduler.utils.output_manager import OutputManager
+     from pathlib import Path
+
+     try:
+         # Create output manager
+         output = OutputManager(run_id="test_run")
+         output.create_structure()
+
+         # Verify metadata methods exist
+         assert hasattr(output, 'record_eda_metadata'), "No record_eda_metadata method"
+         assert hasattr(output, 'save_training_stats'), "No save_training_stats method"
+         assert hasattr(output, 'save_evaluation_stats'), "No save_evaluation_stats method"
+         assert hasattr(output, 'record_simulation_kpis'), "No record_simulation_kpis method"
+
+         # Verify run_record file created
+         assert output.run_record_file.exists(), "run_record.json not created"
+
+         # Test metadata recording
+         output.record_eda_metadata(
+             version="test_v1",
+             used_cached=False,
+             params_path=Path("test_params"),
+             figures_path=Path("test_figures")
+         )
+
+         # Verify metadata was written
+         import json
+         with open(output.run_record_file, 'r') as f:
+             record = json.load(f)
+
+         assert 'sections' in record, "No sections in run_record"
+         assert 'eda' in record['sections'], "EDA metadata not recorded"
+
+         log_test("PR #7: Metadata tracking", True,
+                  f"Run record created with {len(record['sections'])} sections")
+         return True
+
+     except Exception as e:
+         log_test("PR #7: Metadata tracking", False, str(e))
+         return False
+
+
+ def run_all_tests():
+     """Run all enhancement tests."""
+     print("=" * 60)
+     print("Testing Merged Enhancements")
+     print("=" * 60)
+     print()
+
+     # PR #2 tests
+     print("PR #2: Override Handling Refactor")
+     print("-" * 40)
+     test_pr2_override_validation()
+     test_pr2_flag_cleanup()
+     print()
+
+     # PR #3 tests
+     print("PR #3: Ripeness UNKNOWN State")
+     print("-" * 40)
+     test_pr3_unknown_ripeness()
+     print()
+
+     # PR #6 tests
+     print("PR #6: Parameter Fallback")
+     print("-" * 40)
+     test_pr6_parameter_fallback()
+     print()
+
+     # PR #4 tests
+     print("PR #4: RL Training Alignment")
+     print("-" * 40)
+     test_pr4_rl_constraints()
+     print()
+
+     # PR #5 tests
+     print("PR #5: Shared Reward Helper")
+     print("-" * 40)
+     test_pr5_shared_rewards()
+     print()
+
+     # PR #7 tests
+     print("PR #7: Output Metadata Tracking")
+     print("-" * 40)
+     test_pr7_metadata_tracking()
+     print()
+
+     # Summary
+     print("=" * 60)
+     print("Test Summary")
+     print("=" * 60)
+     print(f"Passed: {len(TESTS_PASSED)}")
+     print(f"Failed: {len(TESTS_FAILED)}")
+     print()
+
+     if TESTS_FAILED:
+         print("Failed tests:")
+         for test in TESTS_FAILED:
+             print(f"  - {test}")
+         return 1
+     else:
+         print("All tests passed!")
+         return 0
+
+
+ if __name__ == "__main__":
+     sys.exit(run_all_tests())
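The script's `run_all_tests` returns a conventional process exit status (0 when every check passes, 1 otherwise), which is what lets it gate CI or pre-merge hooks via `sys.exit`. A minimal self-contained sketch of that contract (the `run_checks` name is illustrative, not from the repo):

```python
def run_checks(results: dict) -> int:
    """Return 0 if every named check passed, else 1 (mirrors run_all_tests' contract)."""
    failed = [name for name, ok in results.items() if not ok]
    for name in failed:
        print(f"failed: {name}")
    return 1 if failed else 0


# A CI wrapper would typically do: sys.exit(run_checks(...))
status = run_checks({"demo check": True})
print(f"exit status would be {status}")
```

Because the status is a plain integer, a shell caller can branch on `$?` without parsing any of the printed output.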