Arpit-Bansal committed on
Commit 01795ee · 1 Parent(s): 51de58f

NSGA-II tuning

api/greedyoptim_api.py CHANGED
@@ -791,4 +791,4 @@ async def validate_data(request: ScheduleOptimizationRequest):
 
 if __name__ == "__main__":
     import uvicorn
-    uvicorn.run(app, host="0.0.0.0", port=7860)
+    uvicorn.run("api.greedyoptim_api:app", host="0.0.0.0", port=7860, reload=True)
docs/OPTIMIZER_TUNING.md ADDED
@@ -0,0 +1,183 @@
# Optimizer Tuning Guide

This document describes the tuning changes made to optimize service train selection across all optimization methods.

## Problem Statement

The optimizers were initially selecting too few trains for service (as few as 1-13), even though 21-22 healthy trainsets were available. This was caused by:

1. **Synthetic data issues**: only 12% of trainsets were healthy
2. **Fitness function priorities**: branding compliance was weighted too heavily
3. **NSGA-II specific issues**: no elitism, random initialization, equal objective weights

## Changes Made

### 1. Synthetic Data Generator (`DataService/enhanced_generator.py`)

**Problem**: Only 3/25 trainsets (12%) were healthy enough for service.

**Solution**: Increased the healthy trainset ratio to 85%.

```python
# Before: equal probability of healthy/unhealthy components
# After: 85% healthy trainsets with wear capped at 60% of threshold

healthy_trainset_count = int(self.num_trainsets * 0.85)  # 85% healthy
max_healthy_wear = comp_info["wear_threshold"] * 0.60    # 60% cap
```

### 2. Fitness Function Weights (`greedyOptim/evaluator.py`)

**Problem**: Branding compliance was weighted too heavily, causing the optimizers to prefer fewer trains with better branding over more trains in service.

**Solution**: Rebalanced the weights to prioritize operational needs.

| Objective | Old Weight | New Weight | Priority |
|-----------|------------|------------|----------|
| service_availability | 2.0 | **5.0** | HIGHEST |
| constraint_penalty | 5.0 | **10.0** | CRITICAL |
| mileage_balance | 1.5 | 1.5 | Medium |
| maintenance_cost | 1.0 | 1.0 | Medium |
| branding_compliance | 1.5 | **0.2** | LOW |

**Buffer bonus**: Added a bonus for trains beyond the minimum requirement:

```python
# Reward having more than the minimum number of trains for smooth operations
buffer = max(0, len(service_trains) - self.config.required_service_trains)
objectives['service_availability'] += buffer * 3.0  # Bonus per extra train
```
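To see why the rebalanced weights change the outcome, the weighted sum can be sketched as a small standalone function. The objective values below are purely illustrative, not taken from a real run:

```python
# Minimal sketch of the rebalanced weighted-sum fitness.
# Maximization objectives enter with negative weights, so lower fitness is better.

WEIGHTS = {
    'service_availability': 5.0,
    'mileage_balance': 1.5,
    'maintenance_cost': 1.0,
    'branding_compliance': 0.2,
    'constraint_penalty': 10.0,
}

def fitness(obj: dict) -> float:
    return (
        -obj['service_availability'] * WEIGHTS['service_availability']
        - obj['mileage_balance'] * WEIGHTS['mileage_balance']
        - obj['maintenance_cost'] * WEIGHTS['maintenance_cost']
        - obj['branding_compliance'] * WEIGHTS['branding_compliance']
        + obj['constraint_penalty'] * WEIGHTS['constraint_penalty']
    )

# With the new weights, more service trains outweighs better branding:
many_trains = {'service_availability': 110.0, 'mileage_balance': 80.0,
               'maintenance_cost': 70.0, 'branding_compliance': 40.0,
               'constraint_penalty': 0.0}
few_trains = {'service_availability': 60.0, 'mileage_balance': 80.0,
              'maintenance_cost': 70.0, 'branding_compliance': 100.0,
              'constraint_penalty': 0.0}
assert fitness(many_trains) < fitness(few_trains)
```

Under the old weights (2.0 for availability, 1.5 for branding) the branding gap could flip this comparison; under the new ones it cannot.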

### 3. NSGA-II Optimizer (`greedyOptim/hybrid_optimizers.py`)

#### 3.1 Weighted Dominance Comparison

**Problem**: All objectives were treated equally in the Pareto dominance check.

**Solution**: Apply weights to the objectives before the dominance comparison.

```python
self.objective_weights = {
    'service_availability': 5.0,   # HIGHEST
    'mileage_balance': 1.5,
    'maintenance_cost': 1.0,
    'branding_compliance': 0.2,    # LOW
    'constraint_penalty': 10.0     # CRITICAL
}

# In dominates():
obj1 = [
    -solution1['service_availability'] * w['service_availability'],
    # ... other objectives with weights
]
```
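Filled out as a self-contained sketch (assuming all five objective keys are always present, as `calculate_objectives` produces them):

```python
# Self-contained version of the weighted dominance check.

OBJECTIVE_WEIGHTS = {
    'service_availability': 5.0,
    'mileage_balance': 1.5,
    'maintenance_cost': 1.0,
    'branding_compliance': 0.2,
    'constraint_penalty': 10.0,
}

def to_min_vector(sol: dict) -> list:
    # Maximization objectives are negated so that lower is better everywhere.
    w = OBJECTIVE_WEIGHTS
    return [
        -sol['service_availability'] * w['service_availability'],
        -sol['mileage_balance'] * w['mileage_balance'],
        -sol['maintenance_cost'] * w['maintenance_cost'],
        -sol['branding_compliance'] * w['branding_compliance'],
        sol['constraint_penalty'] * w['constraint_penalty'],
    ]

def dominates(s1: dict, s2: dict) -> bool:
    o1, o2 = to_min_vector(s1), to_min_vector(s2)
    all_better_equal = all(a <= b for a, b in zip(o1, o2))
    any_strictly_better = any(a < b for a, b in zip(o1, o2))
    return all_better_equal and any_strictly_better

a = {'service_availability': 110.0, 'mileage_balance': 80.0,
     'maintenance_cost': 70.0, 'branding_compliance': 40.0,
     'constraint_penalty': 0.0}
b = dict(a, service_availability=100.0)  # identical except fewer service trains
assert dominates(a, b) and not dominates(b, a)
```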

#### 3.2 Smart Initialization

**Problem**: Random initialization created many invalid solutions.

**Solution**: Seed the population with constraint-aware solutions.

```python
def _create_smart_initial_solution(self):
    solution = np.zeros(self.n_genes, dtype=int)  # All service
    standby_count = 0
    for i, ts_id in enumerate(self.evaluator.trainsets):
        valid, _ = self.evaluator.check_hard_constraints(ts_id)
        if not valid:
            solution[i] = 2  # Maintenance for invalid
        elif standby_count < self.config.min_standby:
            solution[i] = 1  # Reserve for standby
            standby_count += 1
    return solution

# Mix: 20% smart solutions + 80% biased random
```

#### 3.3 Biased Random Solutions

**Problem**: Equal probability of service/depot/maintenance (33% each).

**Solution**: Bias the assignment toward service.

```python
# Initial population: 65% service, 20% depot, 15% maintenance
solution = np.random.choice([0, 1, 2], size=n, p=[0.65, 0.20, 0.15])

# Mutation: 55% service, 30% depot, 15% maintenance
child[i] = np.random.choice([0, 1, 2], p=[0.55, 0.30, 0.15])
```
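A quick sanity check of the bias (state 0 = service, 1 = depot, 2 = maintenance; the fleet size of 25 matches this repo, the 100,000-draw estimate is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# One biased initial solution for a 25-trainset fleet.
solution = rng.choice([0, 1, 2], size=25, p=[0.65, 0.20, 0.15])
assert len(solution) == 25

# Over many draws, the service share converges to the configured 65%.
draws = rng.choice([0, 1, 2], size=100_000, p=[0.65, 0.20, 0.15])
service_share = float(np.mean(draws == 0))
assert 0.63 < service_share < 0.67
```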

#### 3.4 Elitism with Combined Population

**Problem**: Offspring replaced the parents completely, losing good solutions.

**Solution**: Combine parents and offspring, then select the best via non-dominated sorting.

```python
# Combine parents and offspring
combined_population = new_population + offspring

# Re-evaluate and sort
combined_fronts = self.fast_non_dominated_sort(combined_objectives)

# Select best from combined (preserves good solutions)
for front in combined_fronts:
    # Add to next generation up to population_size
```
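This is (mu + lambda) elitism. A minimal single-objective sketch of the combine-then-truncate step, with a plain sort standing in for the actual non-dominated sorting and crowding-distance selection:

```python
# Simplified (mu + lambda) elitism: parents and offspring compete together,
# so the best solutions can never be lost between generations.

def select_next_generation(parents, offspring, fitness, pop_size):
    combined = parents + offspring  # parents compete with offspring
    combined.sort(key=fitness)      # lower fitness is better (minimization)
    return combined[:pop_size]      # truncate back to population size

parents = [5.0, 3.0, 9.0]
offspring = [7.0, 1.0, 4.0]
survivors = select_next_generation(parents, offspring, lambda x: x, 3)
# The good parent (3.0) survives, which plain generational replacement would lose.
assert survivors == [1.0, 3.0, 4.0]
```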

#### 3.5 Service-Prioritized Final Selection

**Problem**: The final solution was picked from the Pareto front without prioritizing service availability.

**Solution**: Explicitly select the solution with the highest service availability.

```python
# Among zero-penalty solutions, choose highest service_availability
valid_solutions = [(i, sol, obj) for i, (sol, obj) in enumerate(best_solutions)
                   if obj.get('constraint_penalty', 0) == 0]

if valid_solutions:
    best_idx = max(valid_solutions,
                   key=lambda x: x[2].get('service_availability', 0))[0]
```
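Put together as a runnable sketch, where the `best_solutions` entries are illustrative `(solution, objectives)` pairs:

```python
# Service-prioritized final selection over an illustrative Pareto front.
best_solutions = [
    ("sol_a", {'constraint_penalty': 0, 'service_availability': 100.0}),
    ("sol_b", {'constraint_penalty': 0, 'service_availability': 115.0}),
    ("sol_c", {'constraint_penalty': 400, 'service_availability': 120.0}),
]

# Keep only solutions that violate no hard constraints...
valid_solutions = [(i, sol, obj) for i, (sol, obj) in enumerate(best_solutions)
                   if obj.get('constraint_penalty', 0) == 0]

# ...then pick the one with the most trains in service.
if valid_solutions:
    best_idx = max(valid_solutions,
                   key=lambda x: x[2].get('service_availability', 0))[0]

assert best_idx == 1  # sol_b: feasible, with the highest service availability
```

Note that `sol_c` has the highest raw availability but is excluded for violating constraints.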

## Results

### Before Tuning

| Method | Service Trains | Notes |
|--------|----------------|-------|
| GA | 1-13 | Poor due to unhealthy data |
| PSO | 1-13 | Same issue |
| SA | 1-13 | Same issue |
| CMA-ES | 1-13 | Same issue |
| NSGA2 | 12-13 | Worst performer |

### After Tuning

| Method | Service Trains | Notes |
|--------|----------------|-------|
| GA | 21-22 | Excellent |
| SA | 21-22 | Excellent |
| CMA-ES | 19-20 | Good |
| NSGA2 | 21-22 | **Fixed!** |
| PSO | 15-18 | Acceptable |

## Recommendations

1. **Use GA or SA** for the best results in single-objective optimization
2. **Use NSGA2** when you need to explore trade-offs between objectives
3. **PSO** may need further tuning for this problem domain
4. **CMA-ES** provides a good balance between quality and exploration

## Configuration Parameters

Recommended settings for Kochi Metro (25 trainsets, 106 blocks):

```python
config = OptimizationConfig(
    required_service_trains=15,      # Minimum for service
    min_standby=2,                   # Safety buffer
    population_size=50,              # Larger = better but slower
    generations=100,                 # More = better convergence
    mutation_rate=0.1,               # Standard
    crossover_rate=0.8,              # Standard
    optimize_block_assignment=True   # Enable block optimization
)
```
docs/block_optimization_fix.md ADDED
@@ -0,0 +1,71 @@
# Block Optimization Fix Summary

## The Problem

The NSGA-II optimizer was only producing **33-42 blocks** instead of the expected **106 blocks**.

## Root Causes

### 1. Reference vs Copy Issue

When storing the best solutions from the Pareto front, we stored references instead of copies:

```python
# WRONG - stores references that get overwritten
best_solutions = [(population[i], objectives[i]) for i in fronts[0]]
best_block_solutions = [block_population[i] for i in fronts[0]]
```

Since `population` and `block_population` are replaced each generation with `offspring`, the stored references pointed to stale or corrupted data.
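A toy demonstration of the hazard with NumPy arrays (variable names illustrative): a stored reference tracks any later in-place mutation, while `.copy()` takes an independent snapshot:

```python
import numpy as np

# Storing a reference to an array that is later mutated in place
# silently changes the "saved" best solution as well.
population = [np.array([0, 0, 1]), np.array([2, 0, 0])]

best_ref = population[0]          # reference: shares the same buffer
best_copy = population[0].copy()  # snapshot: independent buffer

population[0][:] = 2              # in-place update during a later generation

assert best_ref.tolist() == [2, 2, 2]   # stale reference got overwritten
assert best_copy.tolist() == [0, 0, 1]  # copy preserved the original
```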

### 2. Block-Trainset Mismatch

Even with copies, the stored block assignments were created for a *different* trainset selection. When the best solution evolved to use different service trainsets, the old block assignment still mapped to the old trainset indices.

Example:
- Generation 50: the best solution has trainsets [0, 2, 5] → blocks assigned to indices 0, 2, 5
- Generation 150: the best solution evolves to trainsets [1, 3, 7] → but the block assignment still references 0, 2, 5
- Result: many blocks map to non-service trainsets → lost blocks

## The Fix

**Always create a fresh block assignment for the final best solution:**

```python
# Select best solution from Pareto front
if best_solutions:
    best_idx = min(range(len(best_solutions)),
                   key=lambda i: self.evaluator.fitness_function(best_solutions[i][0]))
    best_solution, best_objectives = best_solutions[best_idx]
    if self.optimize_blocks:
        # Always create fresh block assignment for the best solution
        # to ensure all 106 blocks are properly assigned
        best_block_sol = self._create_block_assignment(best_solution)
```

`_create_block_assignment` distributes all blocks evenly across the current service trainsets:

```python
def _create_block_assignment(self, trainset_sol: np.ndarray) -> np.ndarray:
    service_indices = np.where(trainset_sol == 0)[0]

    if len(service_indices) == 0:
        return np.full(self.n_blocks, -1, dtype=int)

    # Distribute blocks evenly across service trains
    block_sol = np.zeros(self.n_blocks, dtype=int)
    for i in range(self.n_blocks):
        block_sol[i] = service_indices[i % len(service_indices)]

    return block_sol
```
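A standalone version of the same round-robin logic, using the 106-block count from this repo's Kochi Metro configuration and an illustrative 5-trainset solution:

```python
import numpy as np

def create_block_assignment(trainset_sol: np.ndarray, n_blocks: int = 106) -> np.ndarray:
    """Round-robin all blocks over trainsets assigned to service (state 0)."""
    service_indices = np.where(trainset_sol == 0)[0]
    if len(service_indices) == 0:
        return np.full(n_blocks, -1, dtype=int)
    block_sol = np.zeros(n_blocks, dtype=int)
    for i in range(n_blocks):
        block_sol[i] = service_indices[i % len(service_indices)]
    return block_sol

# 5 trainsets; indices 0, 2, 4 are in service (0), 1 is standby, 3 is maintenance.
blocks = create_block_assignment(np.array([0, 1, 0, 2, 0]))
counts = np.bincount(blocks, minlength=5)

assert len(blocks) == 106                      # every block gets a trainset
assert set(blocks.tolist()) == {0, 2, 4}       # only service trainsets are used
assert counts.tolist() == [36, 0, 35, 0, 35]   # near-even round-robin split
```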

## Result

| Optimizer | Before Fix | After Fix |
|-----------|------------|-----------|
| GA | 106 ✓ | 106 ✓ |
| CMA-ES | 106 ✓ | 106 ✓ |
| PSO | 106 ✓ | 106 ✓ |
| SA | 106 ✓ | 106 ✓ |
| NSGA-II | 33-42 ✗ | 106 ✓ |

All optimizers now correctly assign all 106 service blocks.
greedyOptim/evaluator.py CHANGED
@@ -219,10 +219,19 @@ class TrainsetSchedulingEvaluator:
             maint_trains.append(ts_id)
 
         # Objective 1: Service Availability (maximize)
-        availability = len(service_trains) / self.config.required_service_trains
-        if len(service_trains) < self.config.required_service_trains:
-            objectives['constraint_penalty'] += (self.config.required_service_trains - len(service_trains)) * 100.0
-        objectives['service_availability'] = min(availability, 1.0) * 100.0
+        # Reward having MORE than minimum required (smooth operations)
+        num_service = len(service_trains)
+        if num_service < self.config.required_service_trains:
+            # Heavy penalty for not meeting minimum
+            objectives['constraint_penalty'] += (self.config.required_service_trains - num_service) * 200.0
+            objectives['service_availability'] = (num_service / self.config.required_service_trains) * 100.0
+        else:
+            # Reward additional trains beyond minimum (up to 50% more for full fleet coverage)
+            # This encourages smooth operations with more trains available
+            bonus_trains = num_service - self.config.required_service_trains
+            max_bonus = int(self.config.required_service_trains * 0.5)  # Up to 50% more
+            bonus_score = min(bonus_trains / max_bonus, 1.0) * 20.0 if max_bonus > 0 else 0
+            objectives['service_availability'] = 100.0 + bonus_score
 
         # Objective 2: Mileage Balance (maximize via minimizing std dev)
         mileages = [self.status_map[ts].get('total_mileage_km', 0) for ts in service_trains]
@@ -232,7 +241,7 @@ class TrainsetSchedulingEvaluator:
         else:
             objectives['mileage_balance'] = 100.0
 
-        # Objective 3: Branding Compliance (maximize)
+        # Objective 3: Branding Compliance (low priority - nice to have)
         brand_scores = []
         for ts_id in service_trains:
             if ts_id in self.brand_map:
@@ -270,16 +279,25 @@ class TrainsetSchedulingEvaluator:
         return objectives
 
     def fitness_function(self, solution: np.ndarray) -> float:
-        """Aggregate fitness function for minimization."""
+        """Aggregate fitness function for minimization.
+
+        Priority order (highest to lowest):
+        1. Meeting minimum service trains (hard constraint)
+        2. Having MORE trains for smooth operations
+        3. Mileage balance across fleet
+        4. Maintenance cost optimization
+        5. Branding compliance (low priority, nice-to-have)
+        """
         obj = self.calculate_objectives(solution)
 
         # Weighted sum (convert maximization objectives to minimization)
+        # Higher weight = more important
         fitness = (
-            -obj['service_availability'] * 2.0 +  # Maximize (negative weight)
-            -obj['branding_compliance'] * 1.5 +   # Maximize
-            -obj['mileage_balance'] * 1.0 +       # Maximize
-            -obj['maintenance_cost'] * 1.0 +      # Maximize
-            obj['constraint_penalty'] * 5.0       # Minimize (positive weight)
+            -obj['service_availability'] * 5.0 +  # HIGHEST: Maximize trains in service
+            -obj['mileage_balance'] * 1.5 +       # Medium: Fleet wear balance
+            -obj['maintenance_cost'] * 1.0 +      # Medium: Avoid overdue maintenance
+            -obj['branding_compliance'] * 0.2 +   # LOW: Branding is nice-to-have
+            obj['constraint_penalty'] * 10.0      # CRITICAL: Hard constraints must be met
         )
 
         return fitness
greedyOptim/hybrid_optimizers.py CHANGED
@@ -24,15 +24,38 @@ class MultiObjectiveOptimizer:
         self.n_blocks = evaluator.num_blocks
         self.optimize_blocks = self.config.optimize_block_assignment
 
+        # Objective weights for dominance comparison
+        # Higher weight = more important in determining dominance
+        self.objective_weights = {
+            'service_availability': 5.0,   # HIGHEST: More trains = better operations
+            'mileage_balance': 1.5,        # Medium: Fleet wear balance
+            'maintenance_cost': 1.0,       # Medium: Avoid overdue maintenance
+            'branding_compliance': 0.2,    # LOW: Nice-to-have
+            'constraint_penalty': 10.0     # CRITICAL: Hard constraints
+        }
+
     def dominates(self, solution1: Dict[str, float], solution2: Dict[str, float]) -> bool:
-        """Check if solution1 dominates solution2 in multi-objective sense."""
+        """Check if solution1 dominates solution2 in multi-objective sense.
+
+        Uses weighted objectives to prioritize service availability over branding.
+        """
         # Convert maximization objectives to minimization (lower is better)
-        obj1 = [-solution1['service_availability'], -solution1['branding_compliance'],
-                -solution1['mileage_balance'], -solution1['maintenance_cost'],
-                solution1['constraint_penalty']]
-        obj2 = [-solution2['service_availability'], -solution2['branding_compliance'],
-                -solution2['mileage_balance'], -solution2['maintenance_cost'],
-                solution2['constraint_penalty']]
+        # Apply weights to emphasize important objectives
+        w = self.objective_weights
+        obj1 = [
+            -solution1['service_availability'] * w['service_availability'],
+            -solution1['mileage_balance'] * w['mileage_balance'],
+            -solution1['maintenance_cost'] * w['maintenance_cost'],
+            -solution1['branding_compliance'] * w['branding_compliance'],
+            solution1['constraint_penalty'] * w['constraint_penalty']
+        ]
+        obj2 = [
+            -solution2['service_availability'] * w['service_availability'],
+            -solution2['mileage_balance'] * w['mileage_balance'],
+            -solution2['maintenance_cost'] * w['maintenance_cost'],
+            -solution2['branding_compliance'] * w['branding_compliance'],
+            solution2['constraint_penalty'] * w['constraint_penalty']
+        ]
 
         # Check if all objectives are better or equal, with at least one strictly better
         all_better_equal = all(o1 <= o2 for o1, o2 in zip(obj1, obj2))
@@ -133,13 +156,44 @@ class MultiObjectiveOptimizer:
 
         return mutated
 
+    def _create_smart_initial_solution(self) -> np.ndarray:
+        """Create a smart initial solution that respects constraints."""
+        solution = np.zeros(self.n_genes, dtype=int)  # Start with all service
+
+        standby_count = 0
+        for i, ts_id in enumerate(self.evaluator.trainsets):
+            valid, _ = self.evaluator.check_hard_constraints(ts_id)
+            if not valid:
+                solution[i] = 2  # Put constraint-violating trainsets in maintenance
+            elif standby_count < self.config.min_standby:
+                solution[i] = 1  # Reserve some healthy ones for standby
+                standby_count += 1
+
+        return solution
+
     def optimize(self) -> OptimizationResult:
         """Run NSGA-II multi-objective optimization."""
         # Initialize population with trainset solutions and block assignments
+        # Mix of smart and random solutions for diversity
         population = []
        block_population = []
-        for _ in range(self.config.population_size):
-            solution = np.random.randint(0, 3, self.n_genes)
+
+        # First, add some smart solutions (constraint-aware)
+        num_smart = min(10, self.config.population_size // 5)
+        for _ in range(num_smart):
+            solution = self._create_smart_initial_solution()
+            # Add some random mutation to create diversity
+            for i in range(self.n_genes):
+                if np.random.random() < 0.1:  # 10% mutation
+                    solution[i] = np.random.choice([0, 1, 2], p=[0.70, 0.20, 0.10])
+            population.append(solution)
+            if self.optimize_blocks:
+                block_sol = self._create_block_assignment(solution)
+                block_population.append(block_sol)
+
+        # Fill rest with biased random (favor service)
+        for _ in range(self.config.population_size - num_smart):
+            solution = np.random.choice([0, 1, 2], size=self.n_genes, p=[0.65, 0.20, 0.15])
             population.append(solution)
             if self.optimize_blocks:
                 block_sol = self._create_block_assignment(solution)
@@ -210,10 +264,11 @@ class MultiObjectiveOptimizer:
                 else:
                     child = parent1.copy()
 
-                # Mutation
+                # Mutation with bias towards service (0)
                 for i in range(self.n_genes):
                     if random.random() < self.config.mutation_rate:
-                        child[i] = random.randint(0, 2)
+                        # 55% chance to mutate to service, 30% depot, 15% maintenance
+                        child[i] = np.random.choice([0, 1, 2], p=[0.55, 0.30, 0.15])
 
                 offspring.append(child)
 
@@ -238,23 +293,66 @@ class MultiObjectiveOptimizer:
 
                     offspring_blocks.append(block_child)
 
-                population = offspring
-                if self.optimize_blocks:
-                    block_population = offspring_blocks
+                # ELITISM: Combine parents and offspring, then select best
+                combined_population = new_population + offspring
+                combined_blocks = (new_block_population + offspring_blocks) if self.optimize_blocks else None
+
+                # Evaluate combined population
+                combined_objectives = []
+                for sol in combined_population:
+                    combined_objectives.append(self.evaluator.calculate_objectives(sol))
+
+                # Non-dominated sorting on combined population
+                combined_fronts = self.fast_non_dominated_sort(combined_objectives)
+
+                # Select best individuals for next generation
+                population = []
+                block_population = [] if self.optimize_blocks else None
+
+                for front in combined_fronts:
+                    if len(population) + len(front) <= self.config.population_size:
+                        population.extend([combined_population[i].copy() for i in front])
+                        if self.optimize_blocks:
+                            block_population.extend([combined_blocks[i].copy() for i in front])
+                    else:
+                        # Use crowding distance for this front
+                        distances = self.crowding_distance(front, combined_objectives)
+                        sorted_front = sorted(zip(front, distances), key=lambda x: x[1], reverse=True)
+                        remaining = self.config.population_size - len(population)
+                        population.extend([combined_population[i].copy() for i, _ in sorted_front[:remaining]])
+                        if self.optimize_blocks:
+                            block_population.extend([combined_blocks[i].copy() for i, _ in sorted_front[:remaining]])
+                        break
 
                 if gen % 50 == 0:
-                    print(f"Generation {gen}: {len(fronts)} fronts, best front size: {len(fronts[0]) if fronts else 0}")
+                    best_service = max(obj.get('service_availability', 0) for obj in combined_objectives)
+                    min_penalty = min(obj.get('constraint_penalty', 9999) for obj in combined_objectives)
+                    print(f"Generation {gen}: {len(combined_fronts)} fronts, best service: {best_service:.1f}, min penalty: {min_penalty:.0f}")
 
             except Exception as e:
                 print(f"Error in NSGA-II generation {gen}: {e}")
                 break
 
-        # Select best solution from Pareto front
+        # Select best solution from Pareto front - prioritize service availability
         best_block_sol = None
         if best_solutions:
-            # Choose solution with best overall fitness
-            best_idx = min(range(len(best_solutions)),
-                           key=lambda i: self.evaluator.fitness_function(best_solutions[i][0]))
+            # First, find solutions with zero constraint penalty
+            valid_solutions = [(i, sol, obj) for i, (sol, obj) in enumerate(best_solutions)
+                               if obj.get('constraint_penalty', 0) == 0]
+
+            if valid_solutions:
+                # Among valid solutions, choose the one with highest service_availability
+                # (which means more trains in service)
+                best_idx = max(valid_solutions,
+                               key=lambda x: x[2].get('service_availability', 0))[0]
+            else:
+                # Fall back to lowest constraint penalty + highest service
+                best_idx = max(range(len(best_solutions)),
+                               key=lambda i: (
+                                   -best_solutions[i][1].get('constraint_penalty', float('inf')),
+                                   best_solutions[i][1].get('service_availability', 0)
+                               ))
+
             best_solution, best_objectives = best_solutions[best_idx]
             if self.optimize_blocks:
                 # Always create fresh block assignment for the best solution