NextGenC committed
Commit 6f35956 · verified · 1 Parent(s): bc67601

Upload 9 files

best_evolved_model_trained.keras ADDED
Binary file (33.3 kB).
 
config.json ADDED
@@ -0,0 +1,17 @@
+ {
+     "output_base_dir": "./my_neuroevolution_results",
+     "seq_length": 10,
+     "train_samples": 5000,
+     "test_samples": 1000,
+     "pop_size": 80,
+     "generations": 100,
+     "mutation_rate": 0.5,
+     "weight_mut_rate": 0.8,
+     "activation_mut_rate": 0.2,
+     "mutation_strength": 0.1,
+     "tournament_size": 5,
+     "elitism_count": 2,
+     "batch_size": 64,
+     "epochs_final_train": 100,
+     "seed": 123
+ }
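For context, these keys map one-to-one onto the script's argparse options. A minimal, self-contained sketch of loading such a config back (the JSON is inlined here rather than read from disk, so the snippet runs on its own):

```python
import json

# A subset of the uploaded config, inlined so the example is self-contained.
config_text = '''{
    "pop_size": 80,
    "generations": 100,
    "mutation_rate": 0.5,
    "seed": 123
}'''

config = json.loads(config_text)

# These values drive the run: 80 individuals evolved for 100 generations,
# with a 50% chance of mutating each selected child.
print(config["pop_size"], config["generations"])
```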
evolution.log ADDED
@@ -0,0 +1,150 @@
+ 2025-03-29 16:23:07,752 - INFO - Starting EvoNet Pipeline Run: 20250329_162307
+ 2025-03-29 16:23:07,752 - INFO - Output directory: ./my_neuroevolution_results\evorun_20250329_162307
+ 2025-03-29 16:23:07,752 - INFO - Configuration:
+ 2025-03-29 16:23:07,752 - INFO - output_base_dir: ./my_neuroevolution_results
+ 2025-03-29 16:23:07,752 - INFO - seq_length: 10
+ 2025-03-29 16:23:07,752 - INFO - train_samples: 5000
+ 2025-03-29 16:23:07,752 - INFO - test_samples: 1000
+ 2025-03-29 16:23:07,753 - INFO - pop_size: 80
+ 2025-03-29 16:23:07,753 - INFO - generations: 100
+ 2025-03-29 16:23:07,753 - INFO - mutation_rate: 0.5
+ 2025-03-29 16:23:07,753 - INFO - weight_mut_rate: 0.8
+ 2025-03-29 16:23:07,753 - INFO - activation_mut_rate: 0.2
+ 2025-03-29 16:23:07,753 - INFO - mutation_strength: 0.1
+ 2025-03-29 16:23:07,753 - INFO - tournament_size: 5
+ 2025-03-29 16:23:07,753 - INFO - elitism_count: 2
+ 2025-03-29 16:23:07,753 - INFO - batch_size: 64
+ 2025-03-29 16:23:07,753 - INFO - epochs_final_train: 100
+ 2025-03-29 16:23:07,753 - INFO - seed: 123
+ 2025-03-29 16:23:07,754 - INFO - Configuration saved to ./my_neuroevolution_results\evorun_20250329_162307\config.json
+ 2025-03-29 16:23:07,755 - INFO - Using random seed: 123
+ 2025-03-29 16:23:07,766 - WARNING - GPU not found. Using CPU.
+ 2025-03-29 16:23:07,766 - INFO - Generating 5000 samples with sequence length 10...
+ 2025-03-29 16:23:07,768 - INFO - Data generation complete.
+ 2025-03-29 16:23:07,768 - INFO - Generating 1000 samples with sequence length 10...
+ 2025-03-29 16:23:07,768 - INFO - Data generation complete.
+ 2025-03-29 16:23:07,769 - INFO - Initializing population of 80 individuals...
+ 2025-03-29 16:23:09,618 - INFO - Population initialized.
+ 2025-03-29 16:23:09,619 - INFO - Starting evolution for 100 generations...
+ 2025-03-29 16:23:09,815 - WARNING - 5 out of the last 5 calls to <function get_predictions at 0x0000029FA1CB5090> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
+ 2025-03-29 16:23:09,837 - WARNING - 6 out of the last 6 calls to <function get_predictions at 0x0000029FA1CB5090> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has reduce_retracing=True option that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
+ 2025-03-29 16:23:11,240 - INFO - Generation 1: New overall best fitness: 0.0005
+ 2025-03-29 16:23:11,241 - INFO - Generation 1/100 - Best Fitness: 0.0005, Avg Fitness: 0.0003
+ 2025-03-29 16:23:15,416 - INFO - Generation 2: New overall best fitness: 0.0005
+ 2025-03-29 16:23:15,416 - INFO - Generation 2/100 - Best Fitness: 0.0005, Avg Fitness: 0.0003
+ 2025-03-29 16:23:19,031 - INFO - Generation 3: New overall best fitness: 0.0006
+ 2025-03-29 16:23:19,032 - INFO - Generation 3/100 - Best Fitness: 0.0006, Avg Fitness: 0.0003
+ 2025-03-29 16:23:22,543 - INFO - Generation 4: New overall best fitness: 0.0007
+ 2025-03-29 16:23:22,544 - INFO - Generation 4/100 - Best Fitness: 0.0007, Avg Fitness: 0.0004
+ 2025-03-29 16:23:26,214 - INFO - Generation 5: New overall best fitness: 0.0007
+ 2025-03-29 16:23:26,214 - INFO - Generation 5/100 - Best Fitness: 0.0007, Avg Fitness: 0.0005
+ 2025-03-29 16:23:31,401 - INFO - Generation 6: New overall best fitness: 0.0007
+ 2025-03-29 16:23:31,402 - INFO - Generation 6/100 - Best Fitness: 0.0007, Avg Fitness: 0.0005
+ 2025-03-29 16:23:36,194 - INFO - Generation 7/100 - Best Fitness: 0.0007, Avg Fitness: 0.0006
+ 2025-03-29 16:23:40,653 - INFO - Generation 8/100 - Best Fitness: 0.0007, Avg Fitness: 0.0006
+ 2025-03-29 16:23:45,833 - INFO - Generation 9: New overall best fitness: 0.0009
+ 2025-03-29 16:23:45,834 - INFO - Generation 9/100 - Best Fitness: 0.0009, Avg Fitness: 0.0006
+ 2025-03-29 16:23:51,287 - INFO - Generation 10/100 - Best Fitness: 0.0009, Avg Fitness: 0.0006
+ 2025-03-29 16:23:57,811 - INFO - Generation 11: New overall best fitness: 0.0010
+ 2025-03-29 16:23:57,812 - INFO - Generation 11/100 - Best Fitness: 0.0010, Avg Fitness: 0.0006
+ 2025-03-29 16:24:03,660 - INFO - Generation 12/100 - Best Fitness: 0.0010, Avg Fitness: 0.0007
+ 2025-03-29 16:24:08,561 - INFO - Generation 13/100 - Best Fitness: 0.0010, Avg Fitness: 0.0007
+ 2025-03-29 16:24:13,156 - INFO - Generation 14/100 - Best Fitness: 0.0010, Avg Fitness: 0.0007
+ 2025-03-29 16:24:18,199 - INFO - Generation 15/100 - Best Fitness: 0.0010, Avg Fitness: 0.0008
+ 2025-03-29 16:24:23,022 - INFO - Generation 16/100 - Best Fitness: 0.0010, Avg Fitness: 0.0008
+ 2025-03-29 16:24:27,709 - INFO - Generation 17/100 - Best Fitness: 0.0010, Avg Fitness: 0.0009
+ 2025-03-29 16:24:32,992 - INFO - Generation 18/100 - Best Fitness: 0.0010, Avg Fitness: 0.0008
+ 2025-03-29 16:24:38,218 - INFO - Generation 19/100 - Best Fitness: 0.0010, Avg Fitness: 0.0008
+ 2025-03-29 16:24:43,234 - INFO - Generation 20/100 - Best Fitness: 0.0010, Avg Fitness: 0.0007
+ 2025-03-29 16:24:48,349 - INFO - Generation 21/100 - Best Fitness: 0.0010, Avg Fitness: 0.0008
+ 2025-03-29 16:24:53,961 - INFO - Generation 22/100 - Best Fitness: 0.0010, Avg Fitness: 0.0008
+ 2025-03-29 16:24:59,377 - INFO - Generation 23/100 - Best Fitness: 0.0010, Avg Fitness: 0.0008
+ 2025-03-29 16:25:05,294 - INFO - Generation 24/100 - Best Fitness: 0.0010, Avg Fitness: 0.0008
+ 2025-03-29 16:25:11,954 - INFO - Generation 25: New overall best fitness: 0.0011
+ 2025-03-29 16:25:11,954 - INFO - Generation 25/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:25:18,394 - INFO - Generation 26/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:25:24,756 - INFO - Generation 27/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:25:30,985 - INFO - Generation 28/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:25:37,146 - INFO - Generation 29/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:25:42,858 - INFO - Generation 30/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:25:48,828 - INFO - Generation 31/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:25:54,665 - INFO - Generation 32/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:26:01,264 - INFO - Generation 33/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:26:07,385 - INFO - Generation 34/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:26:13,600 - INFO - Generation 35/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:26:20,388 - INFO - Generation 36/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:26:27,847 - INFO - Generation 37/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:26:34,794 - INFO - Generation 38/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:26:41,511 - INFO - Generation 39/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:26:48,222 - INFO - Generation 40/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:26:55,038 - INFO - Generation 41/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:27:03,038 - INFO - Generation 42/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:27:10,563 - INFO - Generation 43/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:27:18,195 - INFO - Generation 44/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:27:26,225 - INFO - Generation 45/100 - Best Fitness: 0.0011, Avg Fitness: 0.0007
+ 2025-03-29 16:27:33,738 - INFO - Generation 46/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:27:41,358 - INFO - Generation 47/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:27:48,662 - INFO - Generation 48/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:27:57,800 - INFO - Generation 49/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:28:05,729 - INFO - Generation 50/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:28:13,621 - INFO - Generation 51/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:28:22,441 - INFO - Generation 52/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:28:30,788 - INFO - Generation 53/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:28:39,245 - INFO - Generation 54/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:28:48,128 - INFO - Generation 55/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:28:56,611 - INFO - Generation 56/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:29:07,827 - INFO - Generation 57/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:29:17,239 - INFO - Generation 58/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:29:28,206 - INFO - Generation 59/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:29:37,831 - INFO - Generation 60/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:29:46,407 - INFO - Generation 61/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:29:55,415 - INFO - Generation 62/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:30:05,102 - INFO - Generation 63/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:30:14,019 - INFO - Generation 64/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:30:23,684 - INFO - Generation 65/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:30:33,262 - INFO - Generation 66/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:30:42,568 - INFO - Generation 67/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:30:53,168 - INFO - Generation 68/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:31:04,197 - INFO - Generation 69/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:31:17,892 - INFO - Generation 70/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:31:30,381 - INFO - Generation 71/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:31:42,523 - INFO - Generation 72/100 - Best Fitness: 0.0011, Avg Fitness: 0.0007
+ 2025-03-29 16:31:54,222 - INFO - Generation 73/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:32:05,454 - INFO - Generation 74/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:32:16,454 - INFO - Generation 75/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:32:28,258 - INFO - Generation 76/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:32:39,477 - INFO - Generation 77/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:32:50,192 - INFO - Generation 78/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:33:01,331 - INFO - Generation 79/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:33:13,647 - INFO - Generation 80/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:33:27,063 - INFO - Generation 81/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:33:40,751 - INFO - Generation 82/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:33:53,452 - INFO - Generation 83/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:34:08,825 - INFO - Generation 84/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:34:20,926 - INFO - Generation 85/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:34:32,460 - INFO - Generation 86/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:34:44,641 - INFO - Generation 87/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:34:56,168 - INFO - Generation 88/100 - Best Fitness: 0.0011, Avg Fitness: 0.0007
+ 2025-03-29 16:35:08,763 - INFO - Generation 89/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:35:20,565 - INFO - Generation 90/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:35:33,487 - INFO - Generation 91/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:35:49,718 - INFO - Generation 92/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:36:05,621 - INFO - Generation 93/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:36:19,000 - INFO - Generation 94/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:36:31,437 - INFO - Generation 95/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:36:44,000 - INFO - Generation 96/100 - Best Fitness: 0.0011, Avg Fitness: 0.0008
+ 2025-03-29 16:36:56,637 - INFO - Generation 97/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:37:10,609 - INFO - Generation 98/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:37:25,139 - INFO - Generation 99/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:37:43,057 - INFO - Generation 100/100 - Best Fitness: 0.0011, Avg Fitness: 0.0009
+ 2025-03-29 16:37:45,655 - INFO - Evolution complete.
+ 2025-03-29 16:37:45,659 - INFO - Fitness history data saved to ./my_neuroevolution_results\evorun_20250329_162307\fitness_history.csv
+ 2025-03-29 16:37:47,223 - INFO - Fitness history plot saved to ./my_neuroevolution_results\evorun_20250329_162307\fitness_history.png
+ 2025-03-29 16:37:47,223 - INFO - Starting final training of the best evolved model...
+ 2025-03-29 16:37:59,419 - INFO - Final training complete.
+ 2025-03-29 16:37:59,419 - INFO - Evaluating final model on test data...
+ 2025-03-29 16:37:59,569 - INFO - Final Test MSE: 38.610789
+ 2025-03-29 16:37:59,585 - INFO - Average Kendall's Tau (on 100 samples): 0.9964
+ 2025-03-29 16:37:59,630 - INFO - Final trained model saved to ./my_neuroevolution_results\evorun_20250329_162307\best_evolved_model_trained.keras
+ 2025-03-29 16:37:59,635 - INFO - Final results saved to ./my_neuroevolution_results\evorun_20250329_162307\final_results.json
+ 2025-03-29 16:37:59,635 - INFO - Pipeline finished successfully!
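Since the script defines fitness as 1 / (MSE + 1e-8), the logged fitness values can be inverted back to an approximate training MSE. A quick back-of-the-envelope sketch (not part of the uploaded files):

```python
def fitness_to_mse(fitness: float, eps: float = 1e-8) -> float:
    """Invert the script's fitness = 1 / (mse + eps) definition."""
    return 1.0 / fitness - eps

# The plateaued best fitness of 0.0011 corresponds to a raw MSE of roughly 909
# for the best evolved (untrained) network; the log then shows final gradient
# training reducing this to ~38.6 on the test set.
print(round(fitness_to_mse(0.0011)))  # → 909
```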
evonet_optimizer.py ADDED
@@ -0,0 +1,500 @@
+ import os
+ import subprocess
+ import sys
+ import argparse
+ import random
+ import logging
+ from datetime import datetime
+ import json
+ from typing import List, Tuple, Dict, Any
+
+ import numpy as np
+ import tensorflow as tf
+ from tensorflow.keras.models import Sequential, load_model, clone_model
+ from tensorflow.keras.layers import Dense, Input
+ from tensorflow.keras.optimizers import Adam
+ from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
+ import matplotlib.pyplot as plt
+ from scipy.stats import kendalltau
+
+ # --- Constants ---
+ DEFAULT_SEQ_LENGTH = 10
+ DEFAULT_POP_SIZE = 50
+ DEFAULT_GENERATIONS = 50
+ DEFAULT_MUTATION_RATE = 0.4  # Probability of applying any mutation to an individual
+ DEFAULT_WEIGHT_MUT_RATE = 0.8  # If mutation occurs, probability of weight perturbation
+ DEFAULT_ACTIVATION_MUT_RATE = 0.2  # If mutation occurs, probability of activation change
+ DEFAULT_MUTATION_STRENGTH = 0.1  # Magnitude of weight perturbation
+ DEFAULT_TOURNAMENT_SIZE = 5
+ DEFAULT_ELITISM_COUNT = 2  # Keep top N individuals directly
+ DEFAULT_EPOCHS_FINAL_TRAIN = 100
+ DEFAULT_BATCH_SIZE = 64
+
+ # --- Logging Setup ---
+ def setup_logging(log_dir: str, log_level=logging.INFO) -> None:
+     """Configures logging to file and console."""
+     log_filename = os.path.join(log_dir, 'evolution.log')
+     logging.basicConfig(
+         level=log_level,
+         format='%(asctime)s - %(levelname)s - %(message)s',
+         handlers=[
+             logging.FileHandler(log_filename),
+             logging.StreamHandler(sys.stdout)  # Also print to console
+         ]
+     )
+
+ # --- GPU Check ---
+ def check_gpu() -> bool:
+     """Checks for GPU availability and sets memory growth."""
+     gpus = tf.config.list_physical_devices('GPU')
+     if gpus:
+         try:
+             # Currently, memory growth needs to be the same across GPUs
+             for gpu in gpus:
+                 tf.config.experimental.set_memory_growth(gpu, True)
+             logical_gpus = tf.config.list_logical_devices('GPU')
+             logging.info(f"{len(gpus)} Physical GPUs, {len(logical_gpus)} Logical GPUs found.")
+             logging.info(f"Using GPU: {gpus[0].name}")
+             return True
+         except RuntimeError as e:
+             # Memory growth must be set before GPUs have been initialized
+             logging.error(f"Error setting memory growth: {e}")
+             return False
+     else:
+         logging.warning("GPU not found. Using CPU.")
+         return False
+
+ # --- Data Generation ---
+ def generate_data(num_samples: int, seq_length: int) -> Tuple[np.ndarray, np.ndarray]:
+     """Generates random sequences and their sorted versions."""
+     logging.info(f"Generating {num_samples} samples with sequence length {seq_length}...")
+     X = np.random.rand(num_samples, seq_length) * 100
+     y = np.sort(X, axis=1)
+     logging.info("Data generation complete.")
+     return X, y
+
+ # --- Neuroevolution Core ---
+ def create_individual(seq_length: int) -> Sequential:
+     """Creates a Keras Sequential model with random architecture."""
+     model = Sequential(name=f"model_random_{random.randint(1000, 9999)}")
+     num_hidden_layers = random.randint(1, 4)  # Reduced max layers for simplicity
+     neurons_per_layer = [random.randint(8, 64) for _ in range(num_hidden_layers)]
+     activations = [random.choice(['relu', 'tanh', 'sigmoid']) for _ in range(num_hidden_layers)]
+
+     # Input Layer
+     model.add(Input(shape=(seq_length,)))
+
+     # Hidden Layers
+     for i in range(num_hidden_layers):
+         model.add(Dense(neurons_per_layer[i], activation=activations[i]))
+
+     # Output Layer - must match sequence length for sorting
+     model.add(Dense(seq_length, activation='linear'))  # Linear activation for regression output
+
+     # Compile the model immediately for weight manipulation capabilities
+     # Use a standard optimizer; learning rate might be adjusted during final training
+     model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
+     return model
+
+ @tf.function  # Potentially speeds up prediction; note: passing a different model object
+ # each call triggers a retrace (the source of the retracing warnings in evolution.log)
+ def get_predictions(model: Sequential, X: np.ndarray, batch_size: int) -> tf.Tensor:
+     """Gets model predictions using tf.function."""
+     return model(X, training=False)  # Use __call__ inside tf.function
+
+ def calculate_fitness(individual: Sequential, X: np.ndarray, y: np.ndarray, batch_size: int) -> float:
+     """Calculates fitness based on inverse MSE. Handles potential errors."""
+     try:
+         # Ensure data is float32 for TensorFlow
+         X_tf = tf.cast(X, tf.float32)
+         y_tf = tf.cast(y, tf.float32)
+
+         # Use the tf.function decorated prediction function
+         y_pred_tf = get_predictions(individual, X_tf, batch_size)
+
+         # Calculate MSE using TensorFlow operations for potential GPU acceleration
+         mse = tf.reduce_mean(tf.square(y_tf - y_pred_tf))
+         mse_val = mse.numpy()  # Get the numpy value
+
+         # Fitness: Inverse MSE (add small epsilon to avoid division by zero)
+         fitness_score = 1.0 / (mse_val + 1e-8)
+
+         # Handle potential NaN or Inf values in fitness
+         if not np.isfinite(fitness_score):
+             logging.warning(f"Non-finite fitness detected ({fitness_score}) for model {individual.name}. Assigning low fitness.")
+             return 1e-8  # Assign a very low fitness
+
+         return float(fitness_score)
+
+     except Exception as e:
+         logging.error(f"Error during fitness calculation for model {individual.name}: {e}", exc_info=True)
+         return 1e-8  # Return minimal fitness on error
+
+
+ def mutate_individual(individual: Sequential, weight_mut_rate: float, act_mut_rate: float, mut_strength: float) -> Sequential:
+     """Applies mutations (weight perturbation, activation change) to an individual."""
+     mutated_model = clone_model(individual)
+     mutated_model.set_weights(individual.get_weights())  # Crucial: Copy weights
+
+     mutated = False
+     # 1. Weight Mutation
+     if random.random() < weight_mut_rate:
+         mutated = True
+         for layer in mutated_model.layers:
+             if isinstance(layer, Dense):
+                 weights_biases = layer.get_weights()
+                 new_weights_biases = []
+                 for wb in weights_biases:
+                     noise = np.random.normal(0, mut_strength, wb.shape)
+                     new_weights_biases.append(wb + noise)
+                 if new_weights_biases:  # Ensure layer had weights
+                     layer.set_weights(new_weights_biases)
+         # logging.debug(f"Applied weight mutation to {mutated_model.name}")
+
+     # 2. Activation Mutation (Applied independently)
+     if random.random() < act_mut_rate:
+         # Find Dense layers eligible for activation change (not the output layer)
+         dense_layers = [layer for layer in mutated_model.layers if isinstance(layer, Dense)]
+         if len(dense_layers) > 1:  # Ensure there's at least one hidden layer
+             mutated = True
+             layer_to_mutate = random.choice(dense_layers[:-1])  # Exclude output layer
+             current_activation = layer_to_mutate.get_config().get('activation', 'linear')
+             possible_activations = ['relu', 'tanh', 'sigmoid']
+             if current_activation in possible_activations:
+                 possible_activations.remove(current_activation)
+             new_activation = random.choice(possible_activations)
+
+             # Rebuild the model config with the new activation
+             # This is safer than trying to modify layer activation in-place
+             config = mutated_model.get_config()
+             for layer_config in config['layers']:
+                 if layer_config['config']['name'] == layer_to_mutate.name:
+                     layer_config['config']['activation'] = new_activation
+                     # logging.debug(f"Changed activation of layer {layer_to_mutate.name} to {new_activation} in {mutated_model.name}")
+                     break  # Found the layer
+
+             # Create a new model from the modified config
+             # Important: Need to re-compile after structural changes from config
+             try:
+                 mutated_model_new_act = Sequential.from_config(config)
+                 mutated_model_new_act.set_weights(mutated_model.get_weights())  # Preserve weights: from_config reinitializes them
+                 mutated_model_new_act.compile(optimizer=Adam(learning_rate=0.001), loss='mse')  # Re-compile
+                 mutated_model = mutated_model_new_act  # Replace the old model
+             except Exception as e:
+                 logging.error(f"Error rebuilding model after activation mutation for {mutated_model.name}: {e}")
+                 # Revert mutation if rebuilding fails
+
+     # Re-compile the final mutated model to ensure optimizer state is fresh
+     if mutated:
+         mutated_model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
+         mutated_model._name = f"mutated_{individual.name}"  # Rename
+
+     return mutated_model
+
+
+ def tournament_selection(population: List[Sequential], fitness_scores: List[float], k: int) -> Sequential:
+     """Selects the best individual from a randomly chosen tournament group."""
+     tournament_indices = random.sample(range(len(population)), k)
+     tournament_fitness = [fitness_scores[i] for i in tournament_indices]
+     winner_index_in_tournament = np.argmax(tournament_fitness)
+     winner_original_index = tournament_indices[winner_index_in_tournament]
+     return population[winner_original_index]
+
+ def evolve_population(population: List[Sequential], X: np.ndarray, y: np.ndarray, generations: int,
+                       mutation_rate: float, weight_mut_rate: float, act_mut_rate: float, mut_strength: float,
+                       tournament_size: int, elitism_count: int, batch_size: int) -> Tuple[Sequential, List[float], List[float]]:
+     """Runs the evolutionary process."""
+     best_fitness_history = []
+     avg_fitness_history = []
+     best_model_overall = None
+     best_fitness_overall = -1.0
+
+     for gen in range(generations):
+         # 1. Evaluate Fitness
+         fitness_scores = [calculate_fitness(ind, X, y, batch_size) for ind in population]
+
+         # Track overall best
+         current_best_idx = np.argmax(fitness_scores)
+         current_best_fitness = fitness_scores[current_best_idx]
+         if current_best_fitness > best_fitness_overall:
+             best_fitness_overall = current_best_fitness
+             # Keep a copy of the best model structure and weights
+             best_model_overall = clone_model(population[current_best_idx])
+             best_model_overall.set_weights(population[current_best_idx].get_weights())
+             best_model_overall.compile(optimizer=Adam(), loss='mse')  # Re-compile just in case
+             logging.info(f"Generation {gen+1}: New overall best fitness: {best_fitness_overall:.4f}")
+
+         avg_fitness = np.mean(fitness_scores)
+         best_fitness_history.append(current_best_fitness)
+         avg_fitness_history.append(avg_fitness)
+
+         logging.info(f"Generation {gen+1}/{generations} - Best Fitness: {current_best_fitness:.4f}, Avg Fitness: {avg_fitness:.4f}")
+
+         new_population = []
+
+         # 2. Elitism: Carry over the best individuals
+         if elitism_count > 0:
+             elite_indices = np.argsort(fitness_scores)[-elitism_count:]
+             for idx in elite_indices:
+                 # Clone elite models to avoid modifications affecting originals if selected again
+                 elite_clone = clone_model(population[idx])
+                 elite_clone.set_weights(population[idx].get_weights())
+                 elite_clone.compile(optimizer=Adam(), loss='mse')  # Ensure compiled
+                 new_population.append(elite_clone)
+
+         # 3. Selection & Reproduction for the rest of the population
+         while len(new_population) < len(population):
+             # Select parent(s) using tournament selection
+             parent = tournament_selection(population, fitness_scores, tournament_size)
+
+             # Create child through mutation (crossover could be added here)
+             child = parent  # Start with the parent
+             if random.random() < mutation_rate:
+                 # Clone parent before mutation to avoid modifying the original selected parent
+                 parent_clone = clone_model(parent)
+                 parent_clone.set_weights(parent.get_weights())
+                 parent_clone.compile(optimizer=Adam(), loss='mse')  # Ensure compiled
+                 child = mutate_individual(parent_clone, weight_mut_rate, act_mut_rate, mut_strength)
+             else:
+                 # If no mutation, still clone the parent to ensure new population has distinct objects
+                 child = clone_model(parent)
+                 child.set_weights(parent.get_weights())
+                 child.compile(optimizer=Adam(), loss='mse')  # Ensure compiled
+
+             new_population.append(child)
+
+         population = new_population[:len(population)]  # Ensure population size is maintained
+
+     if best_model_overall is None:  # Handle case where no improvement was ever found
+         best_idx = np.argmax([calculate_fitness(ind, X, y, batch_size) for ind in population])
+         best_model_overall = population[best_idx]
+
+     return best_model_overall, best_fitness_history, avg_fitness_history
+
+
+ # --- Plotting ---
+ def plot_fitness_history(history_best: List[float], history_avg: List[float], output_dir: str) -> None:
+     """Plots and saves the fitness history."""
+     plt.figure(figsize=(12, 6))
+     plt.plot(history_best, label="Best Fitness per Generation", marker='o', linestyle='-')
+     plt.plot(history_avg, label="Average Fitness per Generation", marker='x', linestyle='--')
+     plt.xlabel("Generation")
+     plt.ylabel("Fitness Score (1 / MSE)")
+     plt.title("Evolutionary Process Fitness History")
+     plt.legend()
+     plt.grid(True)
+     plt.tight_layout()
+     plot_path = os.path.join(output_dir, "fitness_history.png")
+     plt.savefig(plot_path)
+     plt.close()
+     logging.info(f"Fitness history plot saved to {plot_path}")
+
+ # --- Evaluation ---
+ def evaluate_model(model: Sequential, X_test: np.ndarray, y_test: np.ndarray, batch_size: int) -> Dict[str, float]:
+     """Evaluates the final model on the test set."""
+     logging.info("Evaluating final model on test data...")
+     y_pred = model.predict(X_test, batch_size=batch_size, verbose=0)
+     test_mse = np.mean(np.square(y_test - y_pred))
+     logging.info(f"Final Test MSE: {test_mse:.6f}")
+
+     # Calculate Kendall's Tau for a sample (can be slow for large datasets)
+     sample_size = min(100, X_test.shape[0])
+     taus = []
+     indices = np.random.choice(X_test.shape[0], sample_size, replace=False)
+     for i in indices:
+         tau, _ = kendalltau(y_test[i], y_pred[i])
+         if not np.isnan(tau):  # Handle potential NaN if predictions are constant
+             taus.append(tau)
+     avg_kendall_tau = np.mean(taus) if taus else 0.0
+     logging.info(f"Average Kendall's Tau (on {sample_size} samples): {avg_kendall_tau:.4f}")
+
+     return {
+         "test_mse": float(test_mse),
+         "avg_kendall_tau": float(avg_kendall_tau)
+     }
+
+ # --- Main Pipeline ---
+ def run_pipeline(args: argparse.Namespace):
+     """Executes the complete neuroevolution pipeline."""
+
+     # Create unique output directory for this run
+     timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+     output_dir = os.path.join(args.output_base_dir, f"evorun_{timestamp}")
+     os.makedirs(output_dir, exist_ok=True)
+
+     # Setup logging for this run
+     setup_logging(output_dir)
+     logging.info(f"Starting EvoNet Pipeline Run: {timestamp}")
+     logging.info(f"Output directory: {output_dir}")
+
+     # Log arguments/configuration
+     logging.info("Configuration:")
+     args_dict = vars(args)
+     for k, v in args_dict.items():
+         logging.info(f"  {k}: {v}")
+     # Save config to file
+     config_path = os.path.join(output_dir, "config.json")
+     with open(config_path, 'w') as f:
+         json.dump(args_dict, f, indent=4)
+     logging.info(f"Configuration saved to {config_path}")
+
+     # Set random seeds for reproducibility
+     random.seed(args.seed)
+     np.random.seed(args.seed)
+     tf.random.set_seed(args.seed)
+     logging.info(f"Using random seed: {args.seed}")
+
+     # Check GPU
+     check_gpu()
+
+     # Generate Data
+     X_train, y_train = generate_data(args.train_samples, args.seq_length)
+     X_test, y_test = generate_data(args.test_samples, args.seq_length)
+
+     # Initialize Population
+     logging.info(f"Initializing population of {args.pop_size} individuals...")
+     population = [create_individual(args.seq_length) for _ in range(args.pop_size)]
+     logging.info("Population initialized.")
+
+     # Run Evolution
+     logging.info(f"Starting evolution for {args.generations} generations...")
+     best_model_unevolved, best_fitness_hist, avg_fitness_hist = evolve_population(
+         population, X_train, y_train, args.generations,
+         args.mutation_rate, args.weight_mut_rate, args.activation_mut_rate, args.mutation_strength,
+         args.tournament_size, args.elitism_count, args.batch_size
+     )
+     logging.info("Evolution complete.")
+
+     # Save fitness history data
372
+ history_path = os.path.join(output_dir, "fitness_history.csv")
373
+ history_data = np.array([best_fitness_hist, avg_fitness_hist]).T
374
+ np.savetxt(history_path, history_data, delimiter=',', header='BestFitness,AvgFitness', comments='')
375
+ logging.info(f"Fitness history data saved to {history_path}")
376
+
377
+ # Plot fitness history
378
+ plot_fitness_history(best_fitness_hist, avg_fitness_hist, output_dir)
379
+
380
+ # Final Training of the Best Model
381
+ logging.info("Starting final training of the best evolved model...")
382
+ # Clone the best model again to ensure we don't modify the original reference unintentionally
383
+ final_model = clone_model(best_model_unevolved)
384
+ final_model.set_weights(best_model_unevolved.get_weights())
385
+ # Use a fresh optimizer instance for final training
386
+ final_model.compile(optimizer=Adam(learning_rate=0.001), loss='mse', metrics=['mae'])
387
+
388
+ # Callbacks for efficient training
389
+ early_stopping = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True, verbose=1)
390
+ reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=1e-6, verbose=1)
391
+
392
+ # Use a portion of training data for validation during final training
393
+ history = final_model.fit(
394
+ X_train, y_train,
395
+ epochs=args.epochs_final_train,
396
+ batch_size=args.batch_size,
397
+ validation_split=0.2, # Use 20% of training data for validation
398
+ callbacks=[early_stopping, reduce_lr],
399
+ verbose=2 # Show one line per epoch
400
+ )
401
+ logging.info("Final training complete.")
402
+
403
+ # Evaluate the TRAINED final model
404
+ final_metrics = evaluate_model(final_model, X_test, y_test, args.batch_size)
405
+
406
+ # Save the TRAINED final model
407
+ model_path = os.path.join(output_dir, "best_evolved_model_trained.keras") # Use .keras format
408
+ final_model.save(model_path)
409
+ logging.info(f"Final trained model saved to {model_path}")
410
+
411
+ # Save final results
412
+ results = {
413
+ "config": args_dict,
414
+ "final_evaluation": final_metrics,
415
+ "evolution_summary": {
416
+ "best_fitness_overall": best_fitness_hist[-1] if best_fitness_hist else None,
417
+ "avg_fitness_final_gen": avg_fitness_hist[-1] if avg_fitness_hist else None,
418
+ },
419
+ "training_history": history.history # Include loss/val_loss history from final training
420
+ }
421
+ results_path = os.path.join(output_dir, "final_results.json")
422
+ # Convert numpy types in history to native Python types for JSON serialization
423
+ for key in results['training_history']:
424
+ results['training_history'][key] = [float(v) for v in results['training_history'][key]]
425
+
426
+ with open(results_path, 'w') as f:
427
+ json.dump(results, f, indent=4)
428
+ logging.info(f"Final results saved to {results_path}")
429
+ logging.info("Pipeline finished successfully!")
430
+
431
+
432
+ # --- Argument Parser ---
433
+ def parse_arguments() -> argparse.Namespace:
434
+ parser = argparse.ArgumentParser(description="EvoNet: Neuroevolution for Sorting Task")
435
+
436
+ # --- Directory ---
437
+ parser.add_argument('--output_base_dir', type=str, default=os.path.join(os.getcwd(), "evonet_runs"),
438
+ help='Base directory to store run results.')
439
+
440
+ # --- Data ---
441
+ parser.add_argument('--seq_length', type=int, default=DEFAULT_SEQ_LENGTH,
442
+ help='Length of the sequences to sort.')
443
+ parser.add_argument('--train_samples', type=int, default=5000, help='Number of training samples.')
444
+ parser.add_argument('--test_samples', type=int, default=1000, help='Number of test samples.')
445
+
446
+ # --- Evolution Parameters ---
447
+ parser.add_argument('--pop_size', type=int, default=DEFAULT_POP_SIZE, help='Population size.')
448
+ parser.add_argument('--generations', type=int, default=DEFAULT_GENERATIONS, help='Number of generations.')
449
+ parser.add_argument('--mutation_rate', type=float, default=DEFAULT_MUTATION_RATE,
450
+ help='Overall probability of mutating an individual.')
451
+ parser.add_argument('--weight_mut_rate', type=float, default=DEFAULT_WEIGHT_MUT_RATE,
452
+ help='Probability of weight perturbation if mutation occurs.')
453
+ parser.add_argument('--activation_mut_rate', type=float, default=DEFAULT_ACTIVATION_MUT_RATE,
454
+ help='Probability of activation change if mutation occurs.')
455
+ parser.add_argument('--mutation_strength', type=float, default=DEFAULT_MUTATION_STRENGTH,
456
+ help='Standard deviation of Gaussian noise for weight mutation.')
457
+ parser.add_argument('--tournament_size', type=int, default=DEFAULT_TOURNAMENT_SIZE,
458
+ help='Number of individuals participating in tournament selection.')
459
+ parser.add_argument('--elitism_count', type=int, default=DEFAULT_ELITISM_COUNT,
460
+ help='Number of best individuals to carry over directly.')
461
+
462
+ # --- Training & Evaluation ---
463
+ parser.add_argument('--batch_size', type=int, default=DEFAULT_BATCH_SIZE, help='Batch size for predictions and training.')
464
+ parser.add_argument('--epochs_final_train', type=int, default=DEFAULT_EPOCHS_FINAL_TRAIN,
465
+ help='Max epochs for final training of the best model.')
466
+
467
+ # --- Reproducibility ---
468
+ parser.add_argument('--seed', type=int, default=None, help='Random seed for reproducibility (default: random).')
469
+
470
+ args = parser.parse_args()
471
+
472
+ # If seed is not provided, generate one
473
+ if args.seed is None:
474
+ args.seed = random.randint(0, 2**32 - 1)
475
+
476
+ return args
477
+
478
+
479
+ # --- Main Execution ---
480
+ if __name__ == "__main__":
481
+ # 1. Parse Command Line Arguments
482
+ cli_args = parse_arguments()
483
+
484
+ # Ensure output directory exists
485
+ os.makedirs(cli_args.output_base_dir, exist_ok=True)
486
+
487
+ # 2. Run the Pipeline
488
+ try:
489
+ run_pipeline(cli_args)
490
+ except Exception as e:
491
+ # Log any uncaught exceptions during the pipeline execution
492
+ # The logger might not be set up if error is early, so print as fallback
493
+ print(f"FATAL ERROR in pipeline execution: {e}", file=sys.stderr)
494
+ # Attempt to log if logger was initialized
495
+ if logging.getLogger().hasHandlers():
496
+ logging.critical("FATAL ERROR in pipeline execution:", exc_info=True)
497
+ else:
498
+ import traceback
499
+ print(traceback.format_exc(), file=sys.stderr)
500
+ sys.exit(1) # Exit with error code
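As the fitness plot's y-axis label above states, the fitness score used during evolution is the inverse of the training MSE. A minimal standalone sketch of that scoring; `fitness_from_predictions` is a hypothetical helper name and the `eps` guard against division by zero is an assumption, not part of the upload:

```python
import numpy as np

def fitness_from_predictions(y_true, y_pred, eps=1e-9):
    # Inverse-MSE fitness (higher is better); eps is an assumed guard
    # against a zero MSE, not taken from the original code.
    mse = float(np.mean(np.square(np.asarray(y_true) - np.asarray(y_pred))))
    return 1.0 / (mse + eps)

y_true = np.array([[1.0, 2.0, 3.0]])
y_pred = np.array([[1.0, 2.0, 4.0]])
print(round(fitness_from_predictions(y_true, y_pred), 3))  # → 3.0
```

With this definition the best fitness of ~1.07e-03 reported below corresponds to a training MSE of roughly 932, before the final gradient-based training brings the MSE down further.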
final_results.json ADDED
@@ -0,0 +1,539 @@
+ {
+     "config": {
+         "output_base_dir": "./my_neuroevolution_results",
+         "seq_length": 10,
+         "train_samples": 5000,
+         "test_samples": 1000,
+         "pop_size": 80,
+         "generations": 100,
+         "mutation_rate": 0.5,
+         "weight_mut_rate": 0.8,
+         "activation_mut_rate": 0.2,
+         "mutation_strength": 0.1,
+         "tournament_size": 5,
+         "elitism_count": 2,
+         "batch_size": 64,
+         "epochs_final_train": 100,
+         "seed": 123
+     },
+     "final_evaluation": {
+         "test_mse": 38.61078889492296,
+         "avg_kendall_tau": 0.9964444444444444
+     },
+     "evolution_summary": {
+         "best_fitness_overall": 0.0010725238619096907,
+         "avg_fitness_final_gen": 0.0008708313889016431
+     },
+     "training_history": {
+         "loss": [
+             546.2453002929688,
+             259.88385009765625,
+             160.8056182861328,
+             121.15694427490234,
+             103.09577941894531,
+             93.04764556884766,
+             86.2327880859375,
+             81.19600677490234,
+             76.95745849609375,
+             73.48666381835938,
+             69.82528686523438,
+             66.72440338134766,
+             64.06942749023438,
+             61.33348083496094,
+             59.45895767211914,
+             57.71091842651367,
+             55.872772216796875,
+             54.16001892089844,
+             52.91276550292969,
+             51.56428146362305,
+             50.49087142944336,
+             49.65439987182617,
+             48.75180435180664,
+             48.34307861328125,
+             47.3436279296875,
+             46.7180061340332,
+             46.27332305908203,
+             45.95494079589844,
+             45.99394607543945,
+             45.554264068603516,
+             45.08098220825195,
+             45.12644958496094,
+             44.66641616821289,
+             44.55065155029297,
+             44.405067443847656,
+             44.24126052856445,
+             44.15995788574219,
+             44.05458068847656,
+             43.82747268676758,
+             43.609779357910156,
+             43.547706604003906,
+             43.29800033569336,
+             43.19523620605469,
+             43.16196060180664,
+             42.95027160644531,
+             42.81744384765625,
+             43.091712951660156,
+             42.707786560058594,
+             42.47276306152344,
+             42.355682373046875,
+             42.49977111816406,
+             42.08684158325195,
+             42.29066467285156,
+             41.80927276611328,
+             41.90619659423828,
+             41.8099250793457,
+             41.549163818359375,
+             41.396934509277344,
+             41.21516036987305,
+             41.11375427246094,
+             40.800689697265625,
+             40.44392013549805,
+             40.39072036743164,
+             40.210655212402344,
+             40.23362350463867,
+             40.23074722290039,
+             39.86910629272461,
+             39.74824905395508,
+             39.67206954956055,
+             39.67225646972656,
+             39.297149658203125,
+             39.36761474609375,
+             39.19108200073242,
+             38.89718246459961,
+             38.87799835205078,
+             38.871070861816406,
+             38.60243225097656,
+             38.5323486328125,
+             38.74274444580078,
+             38.49137496948242,
+             38.1933479309082,
+             38.089134216308594,
+             38.048973083496094,
+             37.9232177734375,
+             37.71906661987305,
+             37.57664489746094,
+             37.903602600097656,
+             37.46571350097656,
+             37.543052673339844,
+             37.541290283203125,
+             37.1763916015625,
+             36.93217849731445,
+             36.88774108886719,
+             36.782711029052734,
+             36.954158782958984,
+             36.48606872558594,
+             36.34406280517578,
+             36.34116744995117,
+             36.13870620727539,
+             35.894798278808594
+         ],
+         "mae": [
+             18.270145416259766,
+             12.641241073608398,
+             9.978185653686523,
+             8.655787467956543,
+             7.988434791564941,
+             7.588530540466309,
+             7.30443000793457,
+             7.094987869262695,
+             6.908982753753662,
+             6.748968601226807,
+             6.582261085510254,
+             6.4301300048828125,
+             6.297377109527588,
+             6.160496711730957,
+             6.065600395202637,
+             5.973959922790527,
+             5.875831127166748,
+             5.78226375579834,
+             5.717526912689209,
+             5.645669460296631,
+             5.583215236663818,
+             5.5392632484436035,
+             5.489008903503418,
+             5.464126110076904,
+             5.406632900238037,
+             5.367351055145264,
+             5.343441009521484,
+             5.327566623687744,
+             5.331641674041748,
+             5.305015563964844,
+             5.273298740386963,
+             5.281223773956299,
+             5.2487711906433105,
+             5.242580413818359,
+             5.235714912414551,
+             5.22819185256958,
+             5.223107814788818,
+             5.215035915374756,
+             5.200500965118408,
+             5.18730354309082,
+             5.1813459396362305,
+             5.171675682067871,
+             5.160656452178955,
+             5.158697605133057,
+             5.1457648277282715,
+             5.143444538116455,
+             5.157040119171143,
+             5.133112907409668,
+             5.117641448974609,
+             5.114692687988281,
+             5.1225385665893555,
+             5.094179153442383,
+             5.106266975402832,
+             5.074340343475342,
+             5.085508346557617,
+             5.074405193328857,
+             5.0639567375183105,
+             5.0499067306518555,
+             5.040680408477783,
+             5.036265850067139,
+             5.013760089874268,
+             4.9970011711120605,
+             4.989516735076904,
+             4.973187446594238,
+             4.977268218994141,
+             4.97721004486084,
+             4.951179504394531,
+             4.944492816925049,
+             4.943199157714844,
+             4.94016170501709,
+             4.913013458251953,
+             4.921325206756592,
+             4.903163909912109,
+             4.890426158905029,
+             4.887558937072754,
+             4.888551235198975,
+             4.870633125305176,
+             4.86383581161499,
+             4.8799238204956055,
+             4.856790542602539,
+             4.84244441986084,
+             4.841256141662598,
+             4.829007148742676,
+             4.817996501922607,
+             4.810219764709473,
+             4.801018714904785,
+             4.82061243057251,
+             4.793266296386719,
+             4.801506042480469,
+             4.7967915534973145,
+             4.774496555328369,
+             4.759250640869141,
+             4.757969379425049,
+             4.748015880584717,
+             4.763484477996826,
+             4.726821422576904,
+             4.717569828033447,
+             4.720791339874268,
+             4.704056739807129,
+             4.687249183654785
+         ],
+         "val_loss": [
+             355.85028076171875,
+             198.4627227783203,
+             138.49102783203125,
+             112.78740692138672,
+             99.88117980957031,
+             91.56582641601562,
+             85.2259750366211,
+             80.70155334472656,
+             77.23670959472656,
+             73.29762268066406,
+             69.72806549072266,
+             67.10668182373047,
+             64.06816101074219,
+             62.02535629272461,
+             60.76095199584961,
+             58.276878356933594,
+             56.67515182495117,
+             55.075042724609375,
+             53.78398513793945,
+             52.859310150146484,
+             51.962432861328125,
+             50.62075424194336,
+             51.23684310913086,
+             49.018898010253906,
+             48.78486633300781,
+             48.1305046081543,
+             48.10685729980469,
+             47.723087310791016,
+             47.94329071044922,
+             47.151859283447266,
+             47.02169418334961,
+             46.635581970214844,
+             45.934898376464844,
+             46.0821418762207,
+             45.82994842529297,
+             46.47359848022461,
+             45.441795349121094,
+             45.479679107666016,
+             45.500396728515625,
+             45.16574478149414,
+             45.373207092285156,
+             44.835838317871094,
+             45.28779602050781,
+             45.18010330200195,
+             44.22176742553711,
+             44.56682586669922,
+             44.70293045043945,
+             45.086891174316406,
+             44.14433288574219,
+             43.95674133300781,
+             44.348411560058594,
+             45.011810302734375,
+             43.68522644042969,
+             44.200740814208984,
+             43.5811653137207,
+             43.36353302001953,
+             43.99952697753906,
+             42.700592041015625,
+             42.49974822998047,
+             43.01130294799805,
+             42.199161529541016,
+             41.973419189453125,
+             41.9033203125,
+             41.96733474731445,
+             43.144779205322266,
+             41.36708068847656,
+             41.23895263671875,
+             41.200260162353516,
+             41.32600402832031,
+             40.975711822509766,
+             40.65323257446289,
+             40.83009719848633,
+             40.8490104675293,
+             40.55123519897461,
+             40.89341354370117,
+             40.507877349853516,
+             40.25646209716797,
+             40.46010208129883,
+             40.98613357543945,
+             40.15856170654297,
+             40.59947967529297,
+             40.15495681762695,
+             40.13038635253906,
+             39.703712463378906,
+             40.23557662963867,
+             39.47261428833008,
+             39.48463439941406,
+             39.428077697753906,
+             39.35265350341797,
+             38.96087646484375,
+             39.11784744262695,
+             38.510807037353516,
+             38.77788543701172,
+             38.9130973815918,
+             38.90206527709961,
+             38.353179931640625,
+             38.32897186279297,
+             38.58757400512695,
+             38.03741455078125,
+             38.96195602416992
+         ],
+         "val_mae": [
+             14.81877613067627,
+             11.014273643493652,
+             9.209857940673828,
+             8.31789779663086,
+             7.8487067222595215,
+             7.5107526779174805,
+             7.277787685394287,
+             7.0798516273498535,
+             6.912877559661865,
+             6.746294975280762,
+             6.584061622619629,
+             6.454700469970703,
+             6.3074188232421875,
+             6.200479984283447,
+             6.125111103057861,
+             6.004154205322266,
+             5.9107279777526855,
+             5.826013565063477,
+             5.759653091430664,
+             5.69955587387085,
+             5.671257495880127,
+             5.592477321624756,
+             5.6359992027282715,
+             5.5066704750061035,
+             5.480874538421631,
+             5.46126651763916,
+             5.460818767547607,
+             5.435495376586914,
+             5.446165084838867,
+             5.404515743255615,
+             5.398741722106934,
+             5.376246929168701,
+             5.3371262550354,
+             5.347751617431641,
+             5.340644836425781,
+             5.379054069519043,
+             5.310771942138672,
+             5.311794281005859,
+             5.3252387046813965,
+             5.307089805603027,
+             5.31290864944458,
+             5.2750959396362305,
+             5.326480865478516,
+             5.314337730407715,
+             5.258793354034424,
+             5.263396263122559,
+             5.2734599113464355,
+             5.282870769500732,
+             5.236101150512695,
+             5.2455010414123535,
+             5.255309581756592,
+             5.277646541595459,
+             5.214536666870117,
+             5.260697841644287,
+             5.2117719650268555,
+             5.1837873458862305,
+             5.225375175476074,
+             5.150074481964111,
+             5.1272478103637695,
+             5.172309875488281,
+             5.12300968170166,
+             5.10131311416626,
+             5.11003303527832,
+             5.106845855712891,
+             5.181298732757568,
+             5.072604656219482,
+             5.0620012283325195,
+             5.0632781982421875,
+             5.061835765838623,
+             5.043211460113525,
+             5.014331817626953,
+             5.024109840393066,
+             5.031244277954102,
+             5.010079860687256,
+             5.016538619995117,
+             5.0130205154418945,
+             4.99027681350708,
+             5.008115291595459,
+             5.03699254989624,
+             4.9775004386901855,
+             5.0150980949401855,
+             4.961390018463135,
+             4.971592426300049,
+             4.9428887367248535,
+             4.9885478019714355,
+             4.943417549133301,
+             4.943925380706787,
+             4.9367451667785645,
+             4.91554594039917,
+             4.90092134475708,
+             4.91062593460083,
+             4.876039028167725,
+             4.889153957366943,
+             4.889742374420166,
+             4.894570350646973,
+             4.847760200500488,
+             4.861299514770508,
+             4.878693103790283,
+             4.834355354309082,
+             4.890781879425049
+         ],
+         "lr": [
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513,
+             0.0010000000474974513
+         ]
+     }
+ }
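The `avg_kendall_tau` value above is produced in `evaluate_model` by applying `scipy.stats.kendalltau` to each predicted sequence against its sorted target. For tie-free sequences scipy's tau-b reduces to the plain pairwise tau, which can be sketched in numpy alone (the helper name is illustrative, not from the upload):

```python
import numpy as np

def kendall_tau(a, b):
    # Pairwise Kendall's tau for tie-free sequences (sketch).
    # Counts concordant minus discordant pairs, normalized by n*(n-1)/2.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n = len(a)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign(a[j] - a[i]) * np.sign(b[j] - b[i])
    return 2.0 * s / (n * (n - 1))

print(kendall_tau([1, 2, 3, 4], [1, 2, 4, 3]))  # one discordant pair out of six
```

A tau near 1.0, as reported here (0.996), means the network almost always predicts values in the correct relative order even when the absolute MSE is still sizeable.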
fitness_history.csv ADDED
@@ -0,0 +1,101 @@
+ BestFitness,AvgFitness
+ 4.879378336679981065e-04,2.923409132461042646e-04
+ 5.120372187027128581e-04,3.102480528539362900e-04
+ 5.961601522501598378e-04,3.354280512428634421e-04
+ 6.632602251899356437e-04,3.921150275019214849e-04
+ 6.656605376610913146e-04,4.989973818539260222e-04
+ 7.395930847630745753e-04,5.248300942283199181e-04
+ 7.395930847630745753e-04,5.502964959263820233e-04
+ 7.395930847630745753e-04,5.866856208493637656e-04
+ 8.535108813708519853e-04,6.384033667292426288e-04
+ 8.535108813708519853e-04,6.492234966614883994e-04
+ 1.025801949338753022e-03,6.200751622285075254e-04
+ 1.025801949338753022e-03,6.713093422606043314e-04
+ 1.025801949338753022e-03,6.627829985415179155e-04
+ 1.025801949338753022e-03,6.980553544763470897e-04
+ 1.025801949338753022e-03,8.000164903113022663e-04
+ 1.025801949338753022e-03,7.926606643373794231e-04
+ 1.025801949338753022e-03,8.537321248410459796e-04
+ 1.025801949338753022e-03,7.865953276882823924e-04
+ 1.025801949338753022e-03,7.954519342879059086e-04
+ 1.025801949338753022e-03,7.495992568831947190e-04
+ 1.025801949338753022e-03,8.129212712032358448e-04
+ 1.025801949338753022e-03,8.320175320381643924e-04
+ 1.025801949338753022e-03,8.283260998181871332e-04
+ 1.025801949338753022e-03,8.094471947653295463e-04
+ 1.072523861909690677e-03,8.034063857733104302e-04
+ 1.072523861909690677e-03,8.078584674860950308e-04
+ 1.072523861909690677e-03,8.126480465242214490e-04
+ 1.072523861909690677e-03,8.406919351322233568e-04
+ 1.072523861909690677e-03,8.421425704229778837e-04
+ 1.072523861909690677e-03,8.266319397254315651e-04
+ 1.072523861909690677e-03,8.189656164316591385e-04
+ 1.072523861909690677e-03,8.671482166059202518e-04
+ 1.072523861909690677e-03,8.636414180249784092e-04
+ 1.072523861909690677e-03,8.666964492040071322e-04
+ 1.072523861909690677e-03,8.248040059764021525e-04
+ 1.072523861909690677e-03,7.558637790243980057e-04
+ 1.072523861909690677e-03,8.607504905122149086e-04
+ 1.072523861909690677e-03,7.852067503426332857e-04
+ 1.072523861909690677e-03,7.922704422613266028e-04
+ 1.072523861909690677e-03,8.327999675597427820e-04
+ 1.072523861909690677e-03,8.109242161579308323e-04
+ 1.072523861909690677e-03,8.236296535256398387e-04
+ 1.072523861909690677e-03,8.268113823558676662e-04
+ 1.072523861909690677e-03,8.363130032633112661e-04
+ 1.072523861909690677e-03,7.378171369123170823e-04
+ 1.072523861909690677e-03,8.482781612404125676e-04
+ 1.072523861909690677e-03,8.403266371703604909e-04
+ 1.072523861909690677e-03,8.268147025820285143e-04
+ 1.072523861909690677e-03,8.879933677382103327e-04
+ 1.072523861909690677e-03,7.699174581584162158e-04
+ 1.072523861909690677e-03,8.077807061097061004e-04
+ 1.072523861909690677e-03,8.427391672955050414e-04
+ 1.072523861909690677e-03,8.554911134751854102e-04
+ 1.072523861909690677e-03,8.291372182286990819e-04
+ 1.072523861909690677e-03,7.750175427182952412e-04
+ 1.072523861909690677e-03,8.376154870339909322e-04
+ 1.072523861909690677e-03,7.886120355178166111e-04
+ 1.072523861909690677e-03,8.278253787101065922e-04
+ 1.072523861909690677e-03,8.515559088346543858e-04
+ 1.072523861909690677e-03,8.587190911354431345e-04
+ 1.072523861909690677e-03,8.255115092803182392e-04
+ 1.072523861909690677e-03,8.642214780840772959e-04
+ 1.072523861909690677e-03,8.628927131915506074e-04
+ 1.072523861909690677e-03,8.631160812412311169e-04
+ 1.072523861909690677e-03,8.044881614619446304e-04
+ 1.072523861909690677e-03,8.570926279075916435e-04
+ 1.072523861909690677e-03,8.749681438229552294e-04
+ 1.072523861909690677e-03,8.278706554081565394e-04
+ 1.072523861909690677e-03,8.157551330855907640e-04
+ 1.072523861909690677e-03,7.964370127076100207e-04
+ 1.072523861909690677e-03,7.918045828701765054e-04
+ 1.072523861909690677e-03,7.336502007874407894e-04
+ 1.072523861909690677e-03,8.215698327851003952e-04
+ 1.072523861909690677e-03,8.380718853258096448e-04
+ 1.072523861909690677e-03,7.967431312469321980e-04
+ 1.072523861909690677e-03,8.511079970992240775e-04
+ 1.072523861909690677e-03,8.306266038751997354e-04
+ 1.072523861909690677e-03,8.281458760471336732e-04
+ 1.072523861909690677e-03,7.722702630573293143e-04
+ 1.072523861909690677e-03,8.440294699035297624e-04
+ 1.072523861909690677e-03,8.662165717125309972e-04
+ 1.072523861909690677e-03,8.175639995972565225e-04
+ 1.072523861909690677e-03,8.118812444553594665e-04
+ 1.072523861909690677e-03,8.510362443319926824e-04
+ 1.072523861909690677e-03,8.722723885450754536e-04
+ 1.072523861909690677e-03,7.916319241258804170e-04
+ 1.072523861909690677e-03,8.069695097822701538e-04
+ 1.072523861909690677e-03,7.472721163610672630e-04
+ 1.072523861909690677e-03,8.339170941398657078e-04
+ 1.072523861909690677e-03,8.487886671014362172e-04
+ 1.072523861909690677e-03,7.984410213554164531e-04
+ 1.072523861909690677e-03,8.079140745054025699e-04
+ 1.072523861909690677e-03,8.537223872123788151e-04
+ 1.072523861909690677e-03,8.168824582432641923e-04
+ 1.072523861909690677e-03,8.591290731223225757e-04
+ 1.072523861909690677e-03,8.346388265996646525e-04
+ 1.072523861909690677e-03,8.873753424933579275e-04
+ 1.072523861909690677e-03,8.757968718670166002e-04
+ 1.072523861909690677e-03,8.716829100268331122e-04
+ 1.072523861909690677e-03,8.708313889016430645e-04
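This file mirrors the pipeline's `np.savetxt(..., header='BestFitness,AvgFitness', comments='')` call, so it can be reloaded with `np.genfromtxt`. A small self-contained sketch over an inline excerpt of the data above:

```python
import io
import numpy as np

# Two-column CSV with a plain-text header line, as written by the pipeline.
csv_text = (
    "BestFitness,AvgFitness\n"
    "4.879378336679981065e-04,2.923409132461042646e-04\n"
    "1.072523861909690677e-03,8.708313889016430645e-04\n"
)
data = np.genfromtxt(io.StringIO(csv_text), delimiter=",", skip_header=1)
best, avg = data[:, 0], data[:, 1]
print(data.shape, float(best.max()))
```

Reading the real file is the same call with a path instead of the `StringIO` buffer.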
fitness_history.png ADDED
v2.py ADDED
@@ -0,0 +1,643 @@
+ # ==============================================================================
+ # EvoNet Optimizer 2 - Revised and Improved Code
+ # Description: This code implements a neuroevolution process that evolves
+ # neural networks with random topologies to learn a sorting task.
+ # It includes more robust error handling, configuration, logging, and
+ # improved evolutionary operators.
+ # ==============================================================================
+
+ import os
+ import subprocess
+ import sys
+ import argparse
+ import random
+ import logging
+ from datetime import datetime
+ import json
+ from typing import List, Tuple, Dict, Any
+
+ import numpy as np
+ import tensorflow as tf
+ from tensorflow.keras.models import Sequential, load_model, clone_model
+ from tensorflow.keras.layers import Dense, Input
+ from tensorflow.keras.optimizers import Adam
+ from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
+ import matplotlib.pyplot as plt
+ from scipy.stats import kendalltau
+
+ # --- Constants and Default Values ---
+ DEFAULT_SEQ_LENGTH = 10
+ DEFAULT_POP_SIZE = 50
+ DEFAULT_GENERATIONS = 50
+ DEFAULT_MUTATION_RATE = 0.4  # Probability of applying mutation to an individual
+ DEFAULT_WEIGHT_MUT_RATE = 0.8  # If mutation occurs, probability of weight perturbation
+ DEFAULT_ACTIVATION_MUT_RATE = 0.2  # If mutation occurs, probability of an activation change
+ DEFAULT_MUTATION_STRENGTH = 0.1  # Magnitude (std dev) of the weight perturbation
+ DEFAULT_TOURNAMENT_SIZE = 5  # Number of individuals per tournament
+ DEFAULT_ELITISM_COUNT = 2  # Number of best individuals carried directly into the next generation
+ DEFAULT_EPOCHS_FINAL_TRAIN = 100  # Max epochs when finally training the best model
+ DEFAULT_BATCH_SIZE = 64  # Batch size for prediction and training
+ DEFAULT_OUTPUT_BASE_DIR = os.path.join(os.getcwd(), "evonet_runs_revised")  # Base output directory
+
+ # --- Logging Setup ---
+ def setup_logging(log_dir: str, log_level=logging.INFO) -> None:
+     """Configures logging to both a file and the console."""
+     log_filename = os.path.join(log_dir, 'evolution_run.log')
+     # Remove existing handlers (important when re-running in environments like Jupyter)
+     for handler in logging.root.handlers[:]:
+         logging.root.removeHandler(handler)
+     # Install the new handlers
+     logging.basicConfig(
+         level=log_level,
+         format='%(asctime)s - %(levelname)-8s - %(message)s',
+         handlers=[
+             logging.FileHandler(log_filename, mode='w'),  # 'w' mode overwrites the log on each run
+             logging.StreamHandler(sys.stdout)
+         ]
+     )
+     logging.info("Logging setup complete.")
+
+ # --- GPU Check ---
+ def check_gpu() -> bool:
+     """Checks for GPU availability and enables memory growth."""
+     gpus = tf.config.list_physical_devices('GPU')
+     if gpus:
+         try:
+             for gpu in gpus:
+                 tf.config.experimental.set_memory_growth(gpu, True)
+             logical_gpus = tf.config.list_logical_devices('GPU')
+             logging.info(f"{len(gpus)} Physical GPUs, {len(logical_gpus)} Logical GPUs found.")
+             if logical_gpus:
+                 logging.info(f"Using GPU: {tf.config.experimental.get_device_details(gpus[0])['device_name']}")
+             return True
+         except RuntimeError as e:
+             logging.error(f"Error setting memory growth for GPU: {e}", exc_info=True)
+             return False
+     else:
+         logging.warning("GPU not found. Using CPU.")
+         return False
+
+ # --- Data Generation ---
+ def generate_data(num_samples: int, seq_length: int) -> Tuple[np.ndarray, np.ndarray]:
+     """Generates random sequences and their sorted counterparts."""
+     logging.info(f"Generating {num_samples} samples with sequence length {seq_length}...")
+     try:
+         X = np.random.rand(num_samples, seq_length).astype(np.float32) * 100
+         y = np.sort(X, axis=1).astype(np.float32)
+         logging.info("Data generation successful.")
+         return X, y
+     except Exception as e:
+         logging.error(f"Error during data generation: {e}", exc_info=True)
+         raise  # Re-raise so the caller can handle the failure
+
+ # --- Neuroevolution Core ---
+ def create_individual(seq_length: int, input_shape: Tuple) -> Sequential:
+     """Builds and compiles a Keras Sequential model with a random architecture."""
+     try:
+         model = Sequential(name=f"model_random_{random.randint(10000, 99999)}")
+         num_hidden_layers = random.randint(1, 4)
+         neurons_per_layer = [random.randint(8, 64) for _ in range(num_hidden_layers)]
+         activations = [random.choice(['relu', 'tanh', 'sigmoid']) for _ in range(num_hidden_layers)]
+
+         model.add(Input(shape=input_shape))  # Input layer
+
+         for i in range(num_hidden_layers):  # Hidden layers
+             model.add(Dense(neurons_per_layer[i], activation=activations[i]))
+
+         model.add(Dense(seq_length, activation='linear'))  # Output layer
+
+         # Compile the model for weight manipulation and potential training
+         model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
+         #logging.debug(f"Created individual: {model.name} with {len(model.layers)} layers.")
+         return model
+     except Exception as e:
+         logging.error(f"Error creating individual model: {e}", exc_info=True)
+         raise
+
+ @tf.function  # Graph-compile the forward pass for a potential speedup (each distinct model instance triggers a retrace)
+ def get_predictions(model: Sequential, X: tf.Tensor) -> tf.Tensor:
+     """Gets model predictions via tf.function."""
+     return model(X, training=False)
+
+ def calculate_fitness(individual: Sequential, X: np.ndarray, y: np.ndarray, batch_size: int) -> float:
+     """Computes an individual's fitness (1/MSE), handling errors gracefully."""
+     if not isinstance(X, tf.Tensor): X = tf.cast(X, tf.float32)
+     if not isinstance(y, tf.Tensor): y = tf.cast(y, tf.float32)
+
+     try:
+         y_pred_tf = get_predictions(individual, X)  # Full-batch forward pass
+         mse = tf.reduce_mean(tf.square(y - y_pred_tf))
+         mse_val = mse.numpy()
+
+         # Fitness: inverse MSE (add epsilon to avoid division by zero)
+         fitness_score = 1.0 / (mse_val + 1e-8)
+
+         if not np.isfinite(fitness_score) or fitness_score < 0:
+             logging.warning(f"Non-finite or negative fitness detected ({fitness_score:.4g}) for model {individual.name}. Assigning minimal fitness.")
+             return 1e-8  # Assign a very low fitness
+
+         #logging.debug(f"Fitness for {individual.name}: {fitness_score:.4f} (MSE: {mse_val:.4f})")
+         return float(fitness_score)
+
+     except tf.errors.InvalidArgumentError as e:
+         logging.error(f"TensorFlow InvalidArgumentError during fitness calculation for model {individual.name} (potential shape mismatch?): {e}")
+         return 1e-8
+     except Exception as e:
+         logging.error(f"Unhandled error during fitness calculation for model {individual.name}: {e}", exc_info=True)
+         return 1e-8  # Return minimal fitness on error
+
+
+ def mutate_individual(individual: Sequential, weight_mut_rate: float, act_mut_rate: float, mut_strength: float) -> Sequential:
+     """Applies mutations to an individual (weight perturbation, activation change)."""
+     try:
+         # Clone the model before mutating; never modify the original
+         mutated_model = clone_model(individual)
+         mutated_model.set_weights(individual.get_weights())
+
+         mutated = False
+         # 1. Weight mutation
+         if random.random() < weight_mut_rate:
+             mutated = True
+             for layer in mutated_model.layers:
+                 if isinstance(layer, Dense) and layer.get_weights():  # Only Dense layers that carry weights
+                     weights_biases = layer.get_weights()
+                     new_weights_biases = []
+                     for wb in weights_biases:
+                         noise = np.random.normal(0, mut_strength, wb.shape).astype(np.float32)
+                         new_weights_biases.append(wb + noise)
+                     layer.set_weights(new_weights_biases)
+
+         # 2. Activation mutation (independent probability)
+         if random.random() < act_mut_rate:
+             dense_layers = [layer for layer in mutated_model.layers if isinstance(layer, Dense)]
+             if len(dense_layers) > 1:  # Only if there is at least one hidden layer
+                 layer_to_mutate = random.choice(dense_layers[:-1])  # Exclude the output layer
+                 current_activation_name = tf.keras.activations.serialize(layer_to_mutate.activation)
+                 possible_activations = ['relu', 'tanh', 'sigmoid']
+                 if current_activation_name in possible_activations:
+                     possible_activations.remove(current_activation_name)
+                 if possible_activations:  # If there is another activation to switch to
+                     new_activation = random.choice(possible_activations)
+                     # Updating the layer config is the safer route
+                     layer_config = layer_to_mutate.get_config()
+                     layer_config['activation'] = new_activation
+                     # Build a new layer from the config and transfer the weights
+                     try:
+                         new_layer = Dense.from_config(layer_config)
+                         # Rebuilding the whole model would be more robust than swapping a
+                         # layer in place, and changing an activation may require re-building
+                         # the layer. This part is tricky, so for now we only log the attempt.
+                         logging.debug(f"Attempting activation change on layer {layer_to_mutate.name} to {new_activation} (implementation needs robust handling).")
+                         # In a real application, rebuilding the model would be preferable.
+                         # The focus stays on weight mutation; activation mutation remains experimental.
+                         mutated = True  # Mark that an activation mutation was attempted
+                     except Exception as e:
+                         logging.warning(f"Could not directly modify/rebuild layer for activation change: {e}")
+
+
+         # Recompile if mutated (the optimizer state may be reset)
+         if mutated:
+             mutated_model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
+             mutated_model._name = f"mutated_{individual.name}_{random.randint(1000,9999)}"  # Update the name
+             #logging.debug(f"Mutated model {individual.name} -> {mutated_model.name}")
+
+         return mutated_model
+     except Exception as e:
+         logging.error(f"Error during mutation of model {individual.name}: {e}", exc_info=True)
+         return individual  # On error, return the original individual
+
+
+ def tournament_selection(population: List[Sequential], fitness_scores: List[float], k: int) -> Sequential:
+     """Selects the best individual from a randomly drawn tournament group."""
+     if not population:
+         raise ValueError("Population cannot be empty for selection.")
+     if len(population) < k:
+         logging.warning(f"Tournament size {k} is larger than population size {len(population)}. Using population size.")
+         k = len(population)
+     try:
+         tournament_indices = random.sample(range(len(population)), k)
+         tournament_fitness = [fitness_scores[i] for i in tournament_indices]
+         winner_local_idx = np.argmax(tournament_fitness)
+         winner_global_idx = tournament_indices[winner_local_idx]
+         #logging.debug(f"Tournament winner: Index {winner_global_idx}, Fitness: {fitness_scores[winner_global_idx]:.4f}")
+         return population[winner_global_idx]
+     except Exception as e:
+         logging.error(f"Error during tournament selection: {e}", exc_info=True)
+         # Falling back to a random individual is a reasonable alternative on error
+         return random.choice(population)
+
+
+ def evolve_population(population: List[Sequential], X: np.ndarray, y: np.ndarray, generations: int,
+                       mutation_rate: float, weight_mut_rate: float, act_mut_rate: float, mut_strength: float,
+                       tournament_size: int, elitism_count: int, batch_size: int) -> Tuple[Sequential, List[float], List[float]]:
+     """Runs the evolutionary process; returns the best model and the fitness history."""
+     best_fitness_history = []
+     avg_fitness_history = []
+     best_model_overall = None
+     best_fitness_overall = -np.inf  # Start at negative infinity
+
+     # Convert the data to TensorFlow tensors once, outside the loop
+     X_tf = tf.cast(X, tf.float32)
+     y_tf = tf.cast(y, tf.float32)
+
+     for gen in range(generations):
+         generation_start_time = datetime.now()
+         # 1. Fitness evaluation
+         try:
+             # Compute fitness for the whole population
+             fitness_scores = [calculate_fitness(ind, X_tf, y_tf, batch_size) for ind in population]
+         except Exception as e:
+             logging.critical(f"Error calculating fitness for population in Generation {gen+1}: {e}", exc_info=True)
+             # This is critical: either stop the run or continue with the previous population.
+             # For now, return the best model so far and exit.
+             if best_model_overall: return best_model_overall, best_fitness_history, avg_fitness_history
+             else: raise  # No good model yet, re-raise
+
+         # 2. Statistics and tracking the best individual
+         current_best_idx = np.argmax(fitness_scores)
+         current_best_fitness = fitness_scores[current_best_idx]
+         avg_fitness = np.mean(fitness_scores)
+         best_fitness_history.append(current_best_fitness)
+         avg_fitness_history.append(avg_fitness)
+
+         if current_best_fitness > best_fitness_overall:
+             best_fitness_overall = current_best_fitness
+             try:
+                 # Safely copy the best model's architecture and weights
+                 best_model_overall = clone_model(population[current_best_idx])
+                 best_model_overall.set_weights(population[current_best_idx].get_weights())
+                 best_model_overall.compile(optimizer=Adam(), loss='mse')  # Recompile
+                 logging.info(f"Generation {gen+1}: *** New overall best fitness found: {best_fitness_overall:.6f} ***")
+             except Exception as e:
+                 logging.error(f"Could not clone or set weights for the new best model: {e}", exc_info=True)
+                 # Continue even if cloning failed, but the tracked model may be stale.
+                 best_fitness_overall = current_best_fitness  # Still update the fitness
+
+         generation_time = (datetime.now() - generation_start_time).total_seconds()
+         logging.info(f"Generation {gen+1}/{generations} | Best Fitness: {current_best_fitness:.6f} | Avg Fitness: {avg_fitness:.6f} | Time: {generation_time:.2f}s")
+
+         # 3. Build the new population
+         new_population = []
+
+         # 3a. Elitism
+         if elitism_count > 0 and len(population) >= elitism_count:
+             try:
+                 elite_indices = np.argsort(fitness_scores)[-elitism_count:]
+                 for idx in elite_indices:
+                     elite_clone = clone_model(population[idx])
+                     elite_clone.set_weights(population[idx].get_weights())
+                     elite_clone.compile(optimizer=Adam(), loss='mse')
+                     new_population.append(elite_clone)
+                     #logging.debug(f"Added elite model {elite_clone.name} (Index: {idx}, Fitness: {fitness_scores[idx]:.4f})")
+             except Exception as e:
+                 logging.error(f"Error during elitism: {e}", exc_info=True)
+
+
+         # 3b. Selection and reproduction (for the remaining slots)
+         num_to_generate = len(population) - len(new_population)
+         offspring_population = []
+         while len(offspring_population) < num_to_generate:
+             try:
+                 # Select a parent
+                 parent = tournament_selection(population, fitness_scores, tournament_size)
+
+                 # Produce a child (with or without mutation)
+                 if random.random() < mutation_rate:
+                     child = mutate_individual(parent, weight_mut_rate, act_mut_rate, mut_strength)
+                 else:
+                     # No mutation: still clone so the child is not the same object reference
+                     child = clone_model(parent)
+                     child.set_weights(parent.get_weights())
+                     child.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
+                     child._name = f"cloned_{parent.name}_{random.randint(1000,9999)}"  # Update the name
+
+                 offspring_population.append(child)
+             except Exception as e:
+                 logging.error(f"Error during selection/reproduction cycle: {e}", exc_info=True)
+                 # On error, one could break the loop or add a random individual.
+                 # For now keep looping; it may succeed on the next attempt.
+                 if len(offspring_population) < num_to_generate:  # Add a random individual to avoid a shortfall
+                     logging.warning("Adding random individual due to reproduction error.")
+                     offspring_population.append(create_individual(y.shape[1], X.shape[1:]))
+
+
+         new_population.extend(offspring_population)
+         population = new_population  # Update the population
+
+     # After the loop, return the best model
+     if best_model_overall is None and population:  # No improvement was tracked, or cloning failed, or no elitism
+         logging.warning("No overall best model tracked (or cloning failed). Returning best from final population.")
+         final_fitness_scores = [calculate_fitness(ind, X_tf, y_tf, batch_size) for ind in population]
+         best_idx_final = np.argmax(final_fitness_scores)
+         best_model_overall = population[best_idx_final]
+     elif not population:
+         logging.error("Evolution finished with an empty population!")
+         return None, best_fitness_history, avg_fitness_history
+
+
+     logging.info(f"Evolution finished. Best fitness achieved: {best_fitness_overall:.6f}")
+     return best_model_overall, best_fitness_history, avg_fitness_history
+
+
344
+ def plot_fitness_history(history_best: List[float], history_avg: List[float], output_dir: str) -> None:
345
+ """Fitness geçmişini çizer ve kaydeder."""
346
+ if not history_best or not history_avg:
347
+ logging.warning("Fitness history is empty, cannot plot.")
348
+ return
349
+ try:
350
+ plt.figure(figsize=(12, 7))
351
+ plt.plot(history_best, label="Best Fitness per Generation", marker='o', linestyle='-', linewidth=2)
352
+ plt.plot(history_avg, label="Average Fitness per Generation", marker='x', linestyle='--', alpha=0.7)
353
+ plt.xlabel("Generation")
354
+ plt.ylabel("Fitness Score (1 / MSE)")
355
+ plt.title("Evolutionary Process Fitness History")
356
+ plt.legend()
357
+ plt.grid(True, which='both', linestyle='--', linewidth=0.5)
358
+ plt.tight_layout()
359
+ plot_path = os.path.join(output_dir, "fitness_history.png")
360
+ plt.savefig(plot_path)
361
+ plt.close() # Bellekte figürü kapat
362
+ logging.info(f"Fitness history plot saved to {plot_path}")
363
+ except Exception as e:
364
+ logging.error(f"Error plotting fitness history: {e}", exc_info=True)
365
+
366
+ # --- Değerlendirme ---
367
+ def evaluate_model(model: Sequential, X_test: np.ndarray, y_test: np.ndarray, batch_size: int) -> Dict[str, float]:
368
+ """Son modeli test verisi üzerinde değerlendirir."""
369
+ if model is None:
370
+ logging.error("Cannot evaluate a None model.")
371
+ return {"test_mse": np.inf, "avg_kendall_tau": 0.0}
372
+ logging.info("Evaluating final model on test data...")
373
+ try:
374
+ y_pred = model.predict(X_test, batch_size=batch_size, verbose=0)
375
+ test_mse = np.mean(np.square(y_test - y_pred))
376
+ logging.info(f"Final Test MSE: {test_mse:.6f}")
377
+
378
+ # Kendall's Tau (örneklem üzerinde)
379
+ sample_size = min(500, X_test.shape[0]) # Örneklem boyutunu ayarla
380
+ taus = []
381
+ indices = np.random.choice(X_test.shape[0], sample_size, replace=False)
382
+ for i in indices:
383
+ try:
384
+ tau, _ = kendalltau(y_test[i], y_pred[i])
385
+ if not np.isnan(tau): taus.append(tau)
386
+ except ValueError as ve: # Eğer y_pred sabit değerler içeriyorsa
387
+ logging.debug(f"Kendall tau ValueError for sample {i}: {ve}")
388
+
389
+ avg_kendall_tau = np.mean(taus) if taus else 0.0
390
+ logging.info(f"Average Kendall's Tau (on {sample_size} samples): {avg_kendall_tau:.4f}")
391
+
392
+ return {
393
+ "test_mse": float(test_mse),
394
+ "avg_kendall_tau": float(avg_kendall_tau)
395
+ }
396
+ except Exception as e:
397
+ logging.error(f"Error during final model evaluation: {e}", exc_info=True)
398
+ return {"test_mse": np.inf, "avg_kendall_tau": 0.0} # Hata durumunda kötü değerler döndür
399
+
+ # --- Main Workflow ---
+ def run_pipeline(args: argparse.Namespace):
+     """Runs the full neuroevolution workflow."""
+
+     # Create a unique output directory
+     timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
+     run_name = f"evorun_{timestamp}_gen{args.generations}_pop{args.pop_size}"
+     output_dir = os.path.join(args.output_base_dir, run_name)
+     try:
+         os.makedirs(output_dir, exist_ok=True)
+     except OSError as e:
+         print(f"FATAL: Could not create output directory: {output_dir}. Error: {e}", file=sys.stderr)
+         sys.exit(1)
+
+     # Set up logging
+     setup_logging(output_dir)
+     logging.info(f"========== Starting EvoNet Pipeline Run: {run_name} ==========")
+     logging.info(f"Output directory: {output_dir}")
+
+     # Log and save the arguments
+     logging.info("--- Configuration ---")
+     args_dict = vars(args)
+     for k, v in args_dict.items():
+         logging.info(f" {k:<20}: {v}")
+     logging.info("---------------------")
+     config_path = os.path.join(output_dir, "config.json")
+     try:
+         with open(config_path, 'w') as f:
+             json.dump(args_dict, f, indent=4, sort_keys=True)
+         logging.info(f"Configuration saved to {config_path}")
+     except Exception as e:
+         logging.error(f"Failed to save configuration: {e}", exc_info=True)
+
+
+     # Set the random seeds
+     try:
+         random.seed(args.seed)
+         np.random.seed(args.seed)
+         tf.random.set_seed(args.seed)
+         logging.info(f"Using random seed: {args.seed}")
+         # Deterministic ops (TensorFlow >= 2.8): optional; may reduce performance but improves reproducibility
+         # tf.config.experimental.enable_op_determinism()
+     except Exception as e:
+         logging.warning(f"Could not set all random seeds: {e}")
+
+
+     # GPU check
+     is_gpu_available = check_gpu()
+
+     # Data generation
+     try:
+         X_train, y_train = generate_data(args.train_samples, args.seq_length)
+         X_test, y_test = generate_data(args.test_samples, args.seq_length)
+         input_shape = X_train.shape[1:]  # Input shape for model creation
+     except Exception:
+         logging.critical("Failed to generate data. Exiting.")
+         sys.exit(1)
+
+
+     # Population initialization
+     logging.info(f"--- Initializing Population (Size: {args.pop_size}) ---")
+     try:
+         population = [create_individual(args.seq_length, input_shape) for _ in range(args.pop_size)]
+         logging.info("Population initialized successfully.")
+     except Exception:
+         logging.critical("Failed to initialize population. Exiting.")
+         sys.exit(1)
+
+     # Evolution
+     logging.info(f"--- Starting Evolution ({args.generations} Generations) ---")
+     try:
+         best_model_unevolved, best_fitness_hist, avg_fitness_hist = evolve_population(
+             population, X_train, y_train, args.generations,
+             args.mutation_rate, args.weight_mut_rate, args.activation_mut_rate, args.mutation_strength,
+             args.tournament_size, args.elitism_count, args.batch_size
+         )
+     except Exception as e:
+         logging.critical(f"Fatal error during evolution process: {e}", exc_info=True)
+         sys.exit(1)
+     logging.info("--- Evolution Complete ---")
+
+     # Save and plot the fitness history
+     if best_fitness_hist and avg_fitness_hist:
+         history_path = os.path.join(output_dir, "fitness_history.csv")
+         try:
+             history_data = np.array([np.arange(1, len(best_fitness_hist) + 1), best_fitness_hist, avg_fitness_hist]).T
+             np.savetxt(history_path, history_data, delimiter=',', header='Generation,BestFitness,AvgFitness', comments='', fmt=['%d', '%.8f', '%.8f'])
+             logging.info(f"Fitness history data saved to {history_path}")
+         except Exception as e:
+             logging.error(f"Could not save fitness history data: {e}", exc_info=True)
+         plot_fitness_history(best_fitness_hist, avg_fitness_hist, output_dir)
+     else:
+         logging.warning("Fitness history is empty, skipping saving/plotting.")
+
+
+     # Final training of the best model
+     if best_model_unevolved is None:
+         logging.error("Evolution did not yield a best model. Skipping final training and evaluation.")
+         final_metrics = {"test_mse": np.inf, "avg_kendall_tau": 0.0}
+         final_model_path = None
+         training_summary = {}
+     else:
+         logging.info("--- Starting Final Training of Best Evolved Model ---")
+         try:
+             # Clone and recompile the best model once more (for safety)
+             final_model = clone_model(best_model_unevolved)
+             final_model.set_weights(best_model_unevolved.get_weights())
+             # A different learning rate could be tried for the final training
+             final_model.compile(optimizer=Adam(learning_rate=0.001), loss='mse', metrics=['mae'])
+             logging.info("Model Summary of Best Evolved (Untrained):")
+             final_model.summary(print_fn=logging.info)
+
+
+             # Callbacks
+             early_stopping = EarlyStopping(monitor='val_loss', patience=15, restore_best_weights=True, verbose=1)  # Slightly higher patience
+             reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=7, min_lr=1e-7, verbose=1)  # Tuned factor and patience
+
+             history = final_model.fit(
+                 X_train, y_train,
+                 epochs=args.epochs_final_train,
+                 batch_size=args.batch_size,
+                 validation_split=0.2,  # 20% of the training data for validation
+                 callbacks=[early_stopping, reduce_lr],
+                 verbose=2  # One log line per epoch
+             )
+             logging.info("Final training complete.")
+             training_summary = {
+                 "epochs_run": len(history.history['loss']),
+                 "final_train_loss": history.history['loss'][-1],
+                 "final_val_loss": history.history['val_loss'][-1]
+             }
+
+             # Evaluate the trained model
+             final_metrics = evaluate_model(final_model, X_test, y_test, args.batch_size)
+
+             # Save the trained model
+             final_model_path = os.path.join(output_dir, "best_evolved_model_trained.keras")
+             final_model.save(final_model_path)
+             logging.info(f"Final trained model saved to {final_model_path}")
+
+         except Exception as e:
+             logging.error(f"Error during final training or evaluation: {e}", exc_info=True)
+             final_metrics = {"test_mse": np.inf, "avg_kendall_tau": 0.0}
+             final_model_path = None
+             training_summary = {"error": str(e)}
+
+
+     # Save the results
+     logging.info("--- Saving Final Results ---")
+     final_results = {
+         "run_info": {
+             "run_name": run_name,
+             "timestamp": timestamp,
+             "output_directory": output_dir,
+             "gpu_used": is_gpu_available,
+         },
+         "config": args_dict,
+         "evolution_summary": {
+             "generations_run": len(best_fitness_hist) if best_fitness_hist else 0,
+             # Note: derived from the history; the original referenced best_fitness_overall,
+             # which is local to evolve_population and undefined here (NameError).
+             "best_fitness_achieved": max(best_fitness_hist) if best_fitness_hist else None,
+             "best_fitness_final_gen": best_fitness_hist[-1] if best_fitness_hist else None,
+             "avg_fitness_final_gen": avg_fitness_hist[-1] if avg_fitness_hist else None,
+         },
+         "final_training_summary": training_summary,
+         "final_evaluation_on_test": final_metrics,
+         "saved_model_path": final_model_path
+     }
+     results_path = os.path.join(output_dir, "final_results.json")
+     try:
+         # Convert NumPy types when saving to JSON
+         def convert_numpy_types(obj):
+             if isinstance(obj, np.integer): return int(obj)
+             elif isinstance(obj, np.floating): return float(obj)
+             elif isinstance(obj, np.ndarray): return obj.tolist()
+             return obj
+         with open(results_path, 'w') as f:
+             json.dump(final_results, f, indent=4, default=convert_numpy_types)  # Add a default handler
+         logging.info(f"Final results summary saved to {results_path}")
+     except Exception as e:
+         logging.error(f"Failed to save final results JSON: {e}", exc_info=True)
+
+     logging.info(f"========== Pipeline Run {run_name} Finished ==========")
+
+
+ # --- Argument Parser ---
+ def parse_arguments() -> argparse.Namespace:
+     parser = argparse.ArgumentParser(description="EvoNet Revised: Neuroevolution for Sorting Task")
+
+     # --- Directories ---
+     parser.add_argument('--output_base_dir', type=str, default=DEFAULT_OUTPUT_BASE_DIR,
+                         help='Base directory to store run results.')
+
+     # --- Data settings ---
+     parser.add_argument('--seq_length', type=int, default=DEFAULT_SEQ_LENGTH, help='Length of sequences.')
+     parser.add_argument('--train_samples', type=int, default=5000, help='Number of training samples.')
+     parser.add_argument('--test_samples', type=int, default=1000, help='Number of test samples.')
+
+     # --- Evolution parameters ---
+     parser.add_argument('--pop_size', type=int, default=DEFAULT_POP_SIZE, help='Population size.')
+     parser.add_argument('--generations', type=int, default=DEFAULT_GENERATIONS, help='Number of generations.')
+     parser.add_argument('--mutation_rate', type=float, default=DEFAULT_MUTATION_RATE, help='Overall mutation probability.')
+     parser.add_argument('--weight_mut_rate', type=float, default=DEFAULT_WEIGHT_MUT_RATE, help='Weight mutation probability (if mutation occurs).')
+     parser.add_argument('--activation_mut_rate', type=float, default=DEFAULT_ACTIVATION_MUT_RATE, help='Activation mutation probability (if mutation occurs).')
+     parser.add_argument('--mutation_strength', type=float, default=DEFAULT_MUTATION_STRENGTH, help='Std dev for weight mutation noise.')
+     parser.add_argument('--tournament_size', type=int, default=DEFAULT_TOURNAMENT_SIZE, help='Number of individuals in tournament selection.')
+     parser.add_argument('--elitism_count', type=int, default=DEFAULT_ELITISM_COUNT, help='Number of elite individuals to carry over.')
+
+     # --- Training and evaluation ---
+     parser.add_argument('--batch_size', type=int, default=DEFAULT_BATCH_SIZE, help='Batch size for predictions and final training.')
+     parser.add_argument('--epochs_final_train', type=int, default=DEFAULT_EPOCHS_FINAL_TRAIN, help='Max epochs for final training.')
+
+     # --- Reproducibility ---
+     parser.add_argument('--seed', type=int, default=None, help='Random seed (default: random).')
+
+     args = parser.parse_args()
+
+     # Pick a default seed if none was given
+     if args.seed is None:
+         args.seed = random.randint(0, 2**32 - 1)
+         print(f"Generated random seed: {args.seed}")  # Print before logging is configured
+
+     return args
+
+
+ # --- Main Entry Point ---
+ if __name__ == "__main__":
+     # Parse the arguments
+     cli_args = parse_arguments()
+
+     # Run the main workflow
+     try:
+         run_pipeline(cli_args)
+     except SystemExit:  # Catch sys.exit() calls and exit normally
+         pass
+     except Exception as e:
+         # Try to print the error even if logging was never configured
+         print(f"\nFATAL UNHANDLED ERROR in main execution block: {e}", file=sys.stderr)
+         # Also log it if logging has been set up
+         if logging.getLogger().hasHandlers():
+             logging.critical("FATAL UNHANDLED ERROR in main execution block:", exc_info=True)
+         else:
+             import traceback
+             print(traceback.format_exc(), file=sys.stderr)
+         sys.exit(1)  # Exit with an error code
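The fitness measure used throughout the script above is the inverse MSE between a model's output and the sorted target, with an epsilon guard against division by zero. The core arithmetic can be sketched without Keras; the `inverse_mse_fitness` helper below is illustrative and not part of the uploaded file:

```python
import numpy as np

def inverse_mse_fitness(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-8) -> float:
    """Fitness = 1 / (MSE + eps); lower prediction error means higher fitness."""
    mse = float(np.mean(np.square(y_true - y_pred)))
    return 1.0 / (mse + eps)

# A perfect prediction saturates at 1/eps; a uniformly offset one scores low.
y = np.sort(np.random.rand(4, 10).astype(np.float32) * 100, axis=1)
perfect = inverse_mse_fitness(y, y)       # MSE = 0, so fitness = 1/eps
noisy = inverse_mse_fitness(y, y + 10.0)  # MSE ~ 100, so fitness ~ 0.01
```

The epsilon both avoids a zero division and caps the fitness of a (practically impossible) zero-error model, which keeps tournament comparisons well defined.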
v3.py ADDED
@@ -0,0 +1,784 @@
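The v3 file adds checkpointing (note the `pickle` and `time` imports and `DEFAULT_CHECKPOINT_INTERVAL` below); the diff shown here is truncated before the checkpoint logic itself. A minimal sketch of the idea, persisting per-model weight lists rather than live Keras objects (`save_checkpoint` and `load_checkpoint` are hypothetical names, not from the file), could look like:

```python
import os
import pickle
import tempfile

def save_checkpoint(path: str, generation: int, weights_per_model: list, histories: dict) -> None:
    """Persist evolution state; weights-only, so it is independent of live Keras objects."""
    state = {"generation": generation, "weights": weights_per_model, "histories": histories}
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_checkpoint(path: str) -> dict:
    """Restore a previously saved evolution state."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip example with toy state
path = os.path.join(tempfile.mkdtemp(), "evo_ckpt.pkl")
save_checkpoint(path, 10, [[0.1, 0.2]], {"best": [1.5]})
state = load_checkpoint(path)
```

On resume, each weight list would be loaded back into a freshly built model with `model.set_weights(...)`; this pairs with the `'a'` (append) logging mode the v3 `setup_logging` switches to.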
# ==============================================================================
# EvoNet Optimizer - v3 - Further Improvements
# Description: Adds crossover and checkpointing, with conceptual notes on
# adaptive mutation and a richer fitness function.
# ==============================================================================

import os
import subprocess
import sys
import argparse
import random
import logging
from datetime import datetime
import json
import pickle  # for checkpointing
import time  # for checkpointing
from typing import List, Tuple, Dict, Any, Optional

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential, load_model, clone_model
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
import matplotlib.pyplot as plt
from scipy.stats import kendalltau

# --- Constants and Defaults ---
DEFAULT_SEQ_LENGTH = 10
DEFAULT_POP_SIZE = 50
DEFAULT_GENERATIONS = 50
DEFAULT_CROSSOVER_RATE = 0.6  # probability of applying crossover
DEFAULT_MUTATION_RATE = 0.4  # probability of applying mutation (when crossover is not applied)
DEFAULT_WEIGHT_MUT_RATE = 0.8
DEFAULT_ACTIVATION_MUT_RATE = 0.2  # activation mutation is still experimental
DEFAULT_MUTATION_STRENGTH = 0.1
DEFAULT_TOURNAMENT_SIZE = 5
DEFAULT_ELITISM_COUNT = 2
DEFAULT_EPOCHS_FINAL_TRAIN = 100
DEFAULT_BATCH_SIZE = 64
DEFAULT_OUTPUT_BASE_DIR = os.path.join(os.getcwd(), "evonet_runs_v3")
DEFAULT_CHECKPOINT_INTERVAL = 10  # checkpoint every N generations (0 = disabled)

# --- Logging Setup ---
# (setup_logging is unchanged from the previous version)
def setup_logging(log_dir: str, log_level=logging.INFO) -> None:
    log_filename = os.path.join(log_dir, 'evolution_run.log')
    for handler in logging.root.handlers[:]: logging.root.removeHandler(handler)
    logging.basicConfig(
        level=log_level,
        format='%(asctime)s - %(levelname)-8s - %(message)s',
        handlers=[
            logging.FileHandler(log_filename, mode='a'),  # append mode supports resuming
            logging.StreamHandler(sys.stdout)
        ]
    )
    logging.info("Logging setup complete.")

# --- GPU Check ---
# (check_gpu is unchanged from the previous version)
def check_gpu() -> bool:
    gpus = tf.config.list_physical_devices('GPU')
    if gpus:
        try:
            for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True)
            logical_gpus = tf.config.list_logical_devices('GPU')
            logging.info(f"{len(gpus)} Physical GPUs, {len(logical_gpus)} Logical GPUs found.")
            if logical_gpus: logging.info(f"Using GPU: {tf.config.experimental.get_device_details(gpus[0])['device_name']}")
            return True
        except RuntimeError as e:
            logging.error(f"Error setting memory growth for GPU: {e}", exc_info=True)
            return False
    else:
        logging.warning("GPU not found. Using CPU.")
        return False

# --- Data Generation ---
# (generate_data is unchanged from the previous version)
def generate_data(num_samples: int, seq_length: int) -> Tuple[np.ndarray, np.ndarray]:
    logging.info(f"Generating {num_samples} samples with sequence length {seq_length}...")
    try:
        X = np.random.rand(num_samples, seq_length).astype(np.float32) * 100
        y = np.sort(X, axis=1).astype(np.float32)
        logging.info("Data generation successful.")
        return X, y
    except Exception as e:
        logging.error(f"Error during data generation: {e}", exc_info=True)
        raise

# --- Neuroevolution Core ---

def create_individual(seq_length: int, input_shape: Tuple) -> Sequential:
    """Builds and compiles a Keras Sequential model with a random architecture."""
    # (Largely unchanged from the previous version; only the naming was revised)
    try:
        model = Sequential(name=f"model_rnd_{random.randint(10000, 99999)}")
        num_hidden_layers = random.randint(1, 4)
        neurons_per_layer = [random.randint(8, 64) for _ in range(num_hidden_layers)]
        activations = [random.choice(['relu', 'tanh', 'sigmoid']) for _ in range(num_hidden_layers)]
        model.add(Input(shape=input_shape))
        for i in range(num_hidden_layers):
            model.add(Dense(neurons_per_layer[i], activation=activations[i]))
        model.add(Dense(seq_length, activation='linear'))
        model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
        return model
    except Exception as e:
        logging.error(f"Error creating individual model: {e}", exc_info=True)
        raise

@tf.function
def get_predictions(model: Sequential, X: tf.Tensor) -> tf.Tensor:
    """Runs model inference inside a tf.function."""
    return model(X, training=False)

def calculate_fitness(individual: Sequential, X: np.ndarray, y: np.ndarray, batch_size: int, fitness_params: Dict = None) -> float:
    """Computes an individual's fitness. Includes notes for a richer fitness function."""
    # --- CONCEPTUAL: richer fitness function ---
    # Only MSE is used here. For a more informative fitness:
    # 1. Compute additional metrics (e.g., Kendall's tau).
    # 2. Compute model complexity (e.g., parameter count).
    # 3. Combine these values with a weighted formula.
    # fitness_params = fitness_params or {}
    # w_mse = fitness_params.get('w_mse', 1.0)
    # w_tau = fitness_params.get('w_tau', 0.1)
    # w_comp = fitness_params.get('w_comp', 0.0001)
    # --------------------------------------------
    if not isinstance(X, tf.Tensor): X = tf.cast(X, tf.float32)
    if not isinstance(y, tf.Tensor): y = tf.cast(y, tf.float32)
    try:
        y_pred_tf = get_predictions(individual, X)
        mse = tf.reduce_mean(tf.square(y - y_pred_tf))
        mse_val = mse.numpy()
        fitness_score = 1.0 / (mse_val + 1e-8)  # base fitness

        # --- CONCEPTUAL: combined fitness computation ---
        # if w_tau > 0 or w_comp > 0:
        #     # Compute Kendall's tau (can be costly; sampling may be needed)
        #     tau_val = calculate_avg_kendall_tau(y.numpy(), y_pred_tf.numpy(), sample_size=100)  # example helper
        #     # Compute complexity
        #     complexity = individual.count_params()
        #     # Combined fitness
        #     fitness_score = w_mse * fitness_score + w_tau * tau_val - w_comp * complexity
        # --------------------------------------------

        if not np.isfinite(fitness_score) or fitness_score < -1e6:  # guard: a combined fitness may be negative
            logging.warning(f"Non-finite or very low fitness ({fitness_score:.4g}) for model {individual.name}. Assigning minimal fitness.")
            return -1e7  # lower floor, since a combined fitness can go negative
        return float(fitness_score)
    except Exception as e:
        logging.error(f"Error during fitness calculation for model {individual.name}: {e}", exc_info=True)
        return -1e7

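The conceptual comments above can be made concrete. Below is a hedged sketch of the two pieces they reference: `calculate_avg_kendall_tau` (named in the comment but not defined anywhere in this file) and a weighted combination of inverse-MSE, rank correlation, and a complexity penalty. The weight defaults and the sampling size are illustrative assumptions, not tuned values from this project.

```python
import numpy as np
from scipy.stats import kendalltau

def calculate_avg_kendall_tau(y_true: np.ndarray, y_pred: np.ndarray, sample_size: int = 100) -> float:
    """Average Kendall's tau over a random subsample of rows (tau is O(n log n) per row)."""
    n = y_true.shape[0]
    idx = np.random.choice(n, min(sample_size, n), replace=False)
    taus = []
    for i in idx:
        tau, _ = kendalltau(y_true[i], y_pred[i])
        if not np.isnan(tau):  # tau is NaN when a row is constant
            taus.append(tau)
    return float(np.mean(taus)) if taus else 0.0

def combined_fitness(mse: float, tau: float, n_params: int,
                     w_mse: float = 1.0, w_tau: float = 0.1, w_comp: float = 1e-4) -> float:
    """Weighted multi-objective fitness: accuracy plus ordering quality minus a complexity penalty."""
    return w_mse * (1.0 / (mse + 1e-8)) + w_tau * tau - w_comp * n_params
```

With these helpers, the commented-out block inside `calculate_fitness` would only need to pass `y`, `y_pred_tf.numpy()`, and `individual.count_params()` through `combined_fitness`.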
# (Activation mutation is still experimental; the main focus is weight mutation)
def mutate_individual(individual: Sequential, weight_mut_rate: float, mut_strength: float) -> Sequential:
    """Applies weight-perturbation mutation to an individual."""
    try:
        mutated_model = clone_model(individual)
        mutated_model.set_weights(individual.get_weights())
        mutated = False
        if random.random() < weight_mut_rate:  # weight-mutation probability (could be combined with the global rate)
            mutated = True
            for layer in mutated_model.layers:
                if isinstance(layer, Dense) and layer.get_weights():
                    weights_biases = layer.get_weights()
                    new_weights_biases = [wb + np.random.normal(0, mut_strength, wb.shape).astype(np.float32) for wb in weights_biases]
                    layer.set_weights(new_weights_biases)

        if mutated:
            mutated_model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
            mutated_model._name = f"mutated_{individual.name}_{random.randint(1000,9999)}"
        return mutated_model
    except Exception as e:
        logging.error(f"Error during mutation of model {individual.name}: {e}", exc_info=True)
        return individual


def check_architecture_compatibility(model1: Sequential, model2: Sequential) -> bool:
    """Checks whether two models are compatible for simple crossover (layer count and types)."""
    if len(model1.layers) != len(model2.layers):
        return False
    for l1, l2 in zip(model1.layers, model2.layers):
        if type(l1) != type(l2):
            return False
    # A stricter check (neuron counts, etc.) could be added; keep it simple for now.
    return True

def crossover_individuals(parent1: Sequential, parent2: Sequential) -> Tuple[Optional[Sequential], Optional[Sequential]]:
    """Creates children from two parents by simple weight averaging/mixing."""
    # Check architecture compatibility (simple version)
    if not check_architecture_compatibility(parent1, parent2):
        logging.debug("Skipping crossover due to incompatible architectures.")
        return None, None  # do not cross incompatible architectures

    try:
        # Initialize the children by cloning the parents
        child1 = clone_model(parent1)
        child2 = clone_model(parent2)
        child1.set_weights(parent1.get_weights())  # assign initial weights
        child2.set_weights(parent2.get_weights())

        p1_weights = parent1.get_weights()
        p2_weights = parent2.get_weights()
        child1_new_weights = []
        child2_new_weights = []

        # Cross the weights array by array
        for i in range(len(p1_weights)):  # loop over weight matrices / bias vectors
            w1 = p1_weights[i]
            w2 = p2_weights[i]
            # Simple averaging or random selection (here: random selection)
            mask = np.random.rand(*w1.shape) < 0.5
            cw1 = np.where(mask, w1, w2)
            cw2 = np.where(mask, w2, w1)  # complementary mask
            # Or simple averaging: cw1 = (w1 + w2) / 2.0; cw2 = cw1
            child1_new_weights.append(cw1.astype(np.float32))
            child2_new_weights.append(cw2.astype(np.float32))

        child1.set_weights(child1_new_weights)
        child2.set_weights(child2_new_weights)

        # Compile the children
        child1.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
        child2.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
        child1._name = f"xover_{parent1.name[:10]}_{parent2.name[:10]}_c1_{random.randint(1000,9999)}"
        child2._name = f"xover_{parent1.name[:10]}_{parent2.name[:10]}_c2_{random.randint(1000,9999)}"
        # logging.debug(f"Crossover performed between {parent1.name} and {parent2.name}")
        return child1, child2

    except Exception as e:
        logging.error(f"Error during crossover between {parent1.name} and {parent2.name}: {e}", exc_info=True)
        return None, None  # produce no children on error

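As a sanity check on the masked-selection scheme used above: because the two children are built from one shared mask and its complement, every parent weight ends up in exactly one child, so the element-wise sum of the children equals the sum of the parents. A minimal, framework-free sketch (plain NumPy arrays standing in for layer weights; the function name is illustrative):

```python
import numpy as np

def uniform_crossover(w1: np.ndarray, w2: np.ndarray, p: float = 0.5):
    """Mix two weight arrays using one shared random mask and its complement."""
    mask = np.random.rand(*w1.shape) < p  # True -> child 1 takes this weight from parent 1
    cw1 = np.where(mask, w1, w2)
    cw2 = np.where(mask, w2, w1)  # complementary choice
    return cw1, cw2
```

Each element of `cw1` comes from one parent and the matching element of `cw2` from the other, so no weight is duplicated or lost across the pair of children.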
# (tournament_selection is unchanged from the previous version)
def tournament_selection(population: List[Sequential], fitness_scores: List[float], k: int) -> Sequential:
    if not population: raise ValueError("Population cannot be empty.")
    if len(population) < k: k = len(population)
    try:
        tournament_indices = random.sample(range(len(population)), k)
        tournament_fitness = [fitness_scores[i] for i in tournament_indices]
        winner_local_idx = np.argmax(tournament_fitness)
        winner_global_idx = tournament_indices[winner_local_idx]
        return population[winner_global_idx]
    except Exception as e:
        logging.error(f"Error during tournament selection: {e}", exc_info=True)
        return random.choice(population)

# --- Checkpointing ---
def save_checkpoint(output_dir: str, generation: int, population: List[Sequential], rnd_state: Tuple, np_rnd_state: Tuple, tf_rnd_state: Any):
    """Saves the evolution state."""
    checkpoint_dir = os.path.join(output_dir, "checkpoints")
    os.makedirs(checkpoint_dir, exist_ok=True)
    checkpoint_file = os.path.join(checkpoint_dir, f"evo_gen_{generation}.pkl")
    logging.info(f"Saving checkpoint for generation {generation} to {checkpoint_file}...")
    try:
        # Capture weights and configs so the models can be restored
        population_state = []
        for model in population:
            try:
                # Option 1: save each model to disk first (can be more robust, but slow)
                # model_path = os.path.join(checkpoint_dir, f"model_gen{generation}_{model.name}.keras")
                # model.save(model_path)
                # population_state.append({"config": model.get_config(), "saved_path": model_path})

                # Option 2: embed weights and config in the pickle (riskier)
                population_state.append({
                    "name": model.name,
                    "config": model.get_config(),
                    "weights": model.get_weights()
                })
            except Exception as e:
                logging.error(f"Could not serialize model {model.name} for checkpoint: {e}")
                population_state.append(None)  # append None on failure

        state = {
            "generation": generation,
            "population_state": [p for p in population_state if p is not None],  # drop entries that failed to serialize
            "random_state": rnd_state,
            "numpy_random_state": np_rnd_state,
            "tensorflow_random_state": tf_rnd_state,  # pickling TensorFlow random state can be problematic
            "timestamp": datetime.now().isoformat()
        }
        with open(checkpoint_file, 'wb') as f:
            pickle.dump(state, f)
        logging.info(f"Checkpoint saved successfully for generation {generation}.")
    except Exception as e:
        logging.error(f"Failed to save checkpoint for generation {generation}: {e}", exc_info=True)


def load_checkpoint(checkpoint_path: str) -> Optional[Dict]:
    """Loads a saved evolution state."""
    if not os.path.exists(checkpoint_path):
        logging.error(f"Checkpoint file not found: {checkpoint_path}")
        return None
    logging.info(f"Loading checkpoint from {checkpoint_path}...")
    try:
        with open(checkpoint_path, 'rb') as f:
            state = pickle.load(f)

        population = []
        for model_state in state["population_state"]:
            try:
                # If the model was saved separately:
                # model = load_model(model_state["saved_path"])
                # population.append(model)

                # If it was embedded in the pickle:
                model = Sequential.from_config(model_state["config"])
                model.set_weights(model_state["weights"])
                # The model MUST be recompiled!
                model.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
                model._name = model_state.get("name", f"model_loaded_{random.randint(1000,9999)}")  # restore the name
                population.append(model)
            except Exception as e:
                logging.error(f"Failed to load model state from checkpoint for model {model_state.get('name', 'UNKNOWN')}: {e}")

        # Keep only the models that loaded successfully
        state["population"] = population
        if not population:
            logging.error("Failed to load any model from the checkpoint population state.")
            return None  # the checkpoint is invalid if no model could be loaded

        logging.info(f"Checkpoint loaded successfully. Resuming from generation {state['generation'] + 1}.")
        return state
    except Exception as e:
        logging.error(f"Failed to load checkpoint from {checkpoint_path}: {e}", exc_info=True)
        return None

def find_latest_checkpoint(output_dir: str) -> Optional[str]:
    """Finds the most recent checkpoint file in the given directory."""
    checkpoint_dir = os.path.join(output_dir, "checkpoints")
    if not os.path.isdir(checkpoint_dir):
        return None
    checkpoints = [f for f in os.listdir(checkpoint_dir) if f.startswith("evo_gen_") and f.endswith(".pkl")]
    if not checkpoints:
        return None
    # Parse the generation number from each filename and keep the highest
    latest_gen = -1
    latest_file = None
    for cp in checkpoints:
        try:
            gen_num = int(cp.split('_')[2].split('.')[0])
            if gen_num > latest_gen:
                latest_gen = gen_num
                latest_file = os.path.join(checkpoint_dir, cp)
        except (IndexError, ValueError):
            logging.warning(f"Could not parse generation number from checkpoint file: {cp}")
            continue
    return latest_file

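The checkpoint above pickles the states of Python's `random` module and NumPy's global RNG. The snippet below is a small, self-contained sketch showing that both states survive a pickle round trip and reproduce the same draws after restoration; the helper names are illustrative and not part of the script itself.

```python
import pickle
import random
import numpy as np

def capture_rng_state() -> dict:
    """Snapshot the states of Python's and NumPy's global RNGs."""
    return {"py": random.getstate(), "np": np.random.get_state()}

def restore_rng_state(state: dict) -> None:
    """Restore both RNGs from a snapshot (e.g., one loaded from a checkpoint)."""
    random.setstate(state["py"])
    np.random.set_state(state["np"])
```

Restoring these two states on resume is what makes a checkpointed run reproduce the exact selection and mutation decisions it would have made without the interruption (the TensorFlow state remains the open issue noted in the comments).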
# --- Main Evolution Loop (with Checkpointing and Crossover) ---
def evolve_population_v3(population: List[Sequential], X: np.ndarray, y: np.ndarray, start_generation: int, total_generations: int,
                         crossover_rate: float, mutation_rate: float, weight_mut_rate: float, mut_strength: float,
                         tournament_size: int, elitism_count: int, batch_size: int,
                         output_dir: str, checkpoint_interval: int) -> Tuple[Optional[Sequential], List[float], List[float]]:
    """Runs the evolutionary process (with checkpointing and crossover)."""
    best_fitness_history = []
    avg_fitness_history = []
    best_model_overall = None
    best_fitness_overall = -np.inf

    X_tf = tf.cast(X, tf.float32)
    y_tf = tf.cast(y, tf.float32)

    # --- CONCEPTUAL: adaptive mutation rate ---
    # current_mutation_rate = mutation_rate  # initial value
    # stagnation_counter = 0
    # --------------------------------------------

    for gen in range(start_generation, total_generations):
        generation_start_time = datetime.now()
        # 1. Fitness evaluation
        try:
            fitness_scores = [calculate_fitness(ind, X_tf, y_tf, batch_size) for ind in population]
        except Exception as e:
            logging.critical(f"Error calculating fitness for population in Generation {gen+1}: {e}", exc_info=True)
            if best_model_overall: return best_model_overall, best_fitness_history, avg_fitness_history
            else: raise

        # 2. Statistics and best-model tracking
        current_best_idx = np.argmax(fitness_scores)
        current_best_fitness = fitness_scores[current_best_idx]
        avg_fitness = np.mean(fitness_scores)
        best_fitness_history.append(current_best_fitness)
        avg_fitness_history.append(avg_fitness)

        new_best_found = False
        if current_best_fitness > best_fitness_overall:
            best_fitness_overall = current_best_fitness
            new_best_found = True
            try:
                best_model_overall = clone_model(population[current_best_idx])
                best_model_overall.set_weights(population[current_best_idx].get_weights())
                best_model_overall.compile(optimizer=Adam(), loss='mse')
                logging.info(f"Generation {gen+1}: *** New overall best fitness found: {best_fitness_overall:.6f} ***")
            except Exception as e:
                logging.error(f"Could not clone new best model: {e}", exc_info=True)
                best_fitness_overall = current_best_fitness  # update only the fitness value

        generation_time = (datetime.now() - generation_start_time).total_seconds()
        logging.info(f"Generation {gen+1}/{total_generations} | Best Fitness: {current_best_fitness:.6f} | Avg Fitness: {avg_fitness:.6f} | Time: {generation_time:.2f}s")

        # --- CONCEPTUAL: adaptive mutation rate update ---
        # if new_best_found:
        #     stagnation_counter = 0
        #     # current_mutation_rate = max(min_mutation_rate, current_mutation_rate * 0.98)  # decay
        # else:
        #     stagnation_counter += 1
        #     if stagnation_counter > stagnation_limit:
        #         # current_mutation_rate = min(max_mutation_rate, current_mutation_rate * 1.1)  # boost
        #         stagnation_counter = 0  # reset the counter
        # logging.debug(f"Current mutation rate: {current_mutation_rate:.4f}")
        # --------------------------------------------

        # 3. Build the new population
        new_population = []

        # 3a. Elitism
        if elitism_count > 0 and len(population) >= elitism_count:
            try:
                elite_indices = np.argsort(fitness_scores)[-elitism_count:]
                for idx in elite_indices:
                    elite_clone = clone_model(population[idx])
                    elite_clone.set_weights(population[idx].get_weights())
                    elite_clone.compile(optimizer=Adam(), loss='mse')
                    new_population.append(elite_clone)
            except Exception as e:
                logging.error(f"Error during elitism: {e}", exc_info=True)

        # 3b. Selection, crossover, and mutation
        num_to_generate = len(population) - len(new_population)
        generated_count = 0
        while generated_count < num_to_generate:
            try:
                # Select two parents
                parent1 = tournament_selection(population, fitness_scores, tournament_size)
                parent2 = tournament_selection(population, fitness_scores, tournament_size)

                child1, child2 = None, None  # initialize children

                # Apply crossover (with some probability)
                if random.random() < crossover_rate and parent1 is not parent2:
                    child1, child2 = crossover_individuals(parent1, parent2)

                # If crossover was skipped or failed, fall back to mutation
                if child1 is None:  # no first child was produced
                    # Mutate one of the parents
                    parent_to_mutate = parent1  # or parent2, or a random one
                    if random.random() < mutation_rate:  # global mutation-rate check
                        child1 = mutate_individual(parent_to_mutate, weight_mut_rate, mut_strength)
                    else:  # if mutation is skipped too, clone the parent
                        child1 = clone_model(parent_to_mutate); child1.set_weights(parent_to_mutate.get_weights())
                        child1.compile(optimizer=Adam(learning_rate=0.001), loss='mse')
                        child1._name = f"cloned_{parent_to_mutate.name}_{random.randint(1000,9999)}"

                    # Add to the new population
                    if child1:
                        new_population.append(child1)
                        generated_count += 1
                        if generated_count >= num_to_generate: break  # stop once enough individuals exist

                else:  # crossover succeeded (child1 and child2 exist)
                    # Optionally, the children could also be mutated after crossover:
                    # if random.random() < post_crossover_mutation_rate: child1 = mutate(...)
                    # if random.random() < post_crossover_mutation_rate: child2 = mutate(...)

                    new_population.append(child1)
                    generated_count += 1
                    if generated_count >= num_to_generate: break

                    if child2:  # add the second child if present
                        new_population.append(child2)
                        generated_count += 1
                        if generated_count >= num_to_generate: break

            except Exception as e:
                logging.error(f"Error during selection/reproduction cycle: {e}", exc_info=True)
                if generated_count < num_to_generate:  # fill the gap with a random individual
                    logging.warning("Adding random individual due to reproduction error.")
                    new_population.append(create_individual(y.shape[1], X.shape[1:]))
                    generated_count += 1

        population = new_population[:len(population)]  # enforce the population size

        # 4. Checkpointing
        if checkpoint_interval > 0 and (gen + 1) % checkpoint_interval == 0:
            try:
                # Capture random states
                rnd_state = random.getstate()
                np_rnd_state = np.random.get_state()
                # tf_rnd_state = tf.random.get_global_generator().state  # saving TF state can be tricky
                tf_rnd_state = None  # None for now
                save_checkpoint(output_dir, gen + 1, population, rnd_state, np_rnd_state, tf_rnd_state)
            except Exception as e:
                logging.error(f"Failed to execute checkpoint saving for generation {gen+1}: {e}", exc_info=True)

    # End of loop
    if best_model_overall is None and population:
        logging.warning("No overall best model tracked. Returning best from final population.")
        final_fitness_scores = [calculate_fitness(ind, X_tf, y_tf, batch_size) for ind in population]
        best_idx_final = np.argmax(final_fitness_scores)
        best_model_overall = population[best_idx_final]
    elif not population:
        logging.error("Evolution finished with an empty population!")
        return None, best_fitness_history, avg_fitness_history

    logging.info(f"Evolution finished. Best fitness achieved: {best_fitness_overall:.6f}")
    return best_model_overall, best_fitness_history, avg_fitness_history

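The adaptive-mutation idea sketched in the conceptual comments above can be packaged as a small pure function. The decay/boost factors, the stagnation limit, and the rate bounds below are illustrative defaults rather than values used by this script:

```python
def adapt_mutation_rate(rate: float, improved: bool, stagnation: int,
                        stagnation_limit: int = 5,
                        min_rate: float = 0.05, max_rate: float = 0.9):
    """Decay the mutation rate after an improvement; boost it after prolonged stagnation.

    Returns the new (rate, stagnation_counter) pair.
    """
    if improved:
        return max(min_rate, rate * 0.98), 0  # exploit: mutate a little less
    stagnation += 1
    if stagnation > stagnation_limit:
        return min(max_rate, rate * 1.1), 0  # explore: mutate more, reset the counter
    return rate, stagnation
```

Called once per generation with `new_best_found` as the `improved` flag, it would replace the commented-out block inside `evolve_population_v3`.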
# --- Plotting (unchanged from the previous version) ---
def plot_fitness_history(history_best: List[float], history_avg: List[float], output_dir: str) -> None:
    if not history_best or not history_avg:
        logging.warning("Fitness history is empty, cannot plot.")
        return
    try:
        plt.figure(figsize=(12, 7)); plt.plot(history_best, label="Best Fitness", marker='o', linestyle='-', linewidth=2)
        plt.plot(history_avg, label="Average Fitness", marker='x', linestyle='--', alpha=0.7); plt.xlabel("Generation")
        plt.ylabel("Fitness Score"); plt.title("Evolutionary Fitness History"); plt.legend(); plt.grid(True); plt.tight_layout()
        plot_path = os.path.join(output_dir, "fitness_history.png"); plt.savefig(plot_path); plt.close()
        logging.info(f"Fitness history plot saved to {plot_path}")
    except Exception as e: logging.error(f"Error plotting fitness history: {e}", exc_info=True)

# --- Evaluation (unchanged from the previous version) ---
def evaluate_model(model: Sequential, X_test: np.ndarray, y_test: np.ndarray, batch_size: int) -> Dict[str, float]:
    if model is None: return {"test_mse": np.inf, "avg_kendall_tau": 0.0}
    logging.info("Evaluating final model on test data...")
    try:
        y_pred = model.predict(X_test, batch_size=batch_size, verbose=0)
        test_mse = np.mean(np.square(y_test - y_pred))
        logging.info(f"Final Test MSE: {test_mse:.6f}")
        sample_size = min(500, X_test.shape[0]); taus = []; indices = np.random.choice(X_test.shape[0], sample_size, replace=False)
        for i in indices:
            try:
                tau, _ = kendalltau(y_test[i], y_pred[i])
                if not np.isnan(tau): taus.append(tau)
            except ValueError: pass  # handle the constant-prediction case
        avg_kendall_tau = np.mean(taus) if taus else 0.0
        logging.info(f"Average Kendall's Tau (on {sample_size} samples): {avg_kendall_tau:.4f}")
        return {"test_mse": float(test_mse), "avg_kendall_tau": float(avg_kendall_tau)}
    except Exception as e:
        logging.error(f"Error during final model evaluation: {e}", exc_info=True)
        return {"test_mse": np.inf, "avg_kendall_tau": 0.0}

545
+
546
+ # --- Ana İş Akışı (Checkpoint Yükleme ile) ---
547
+ def run_pipeline_v3(args: argparse.Namespace):
548
+ """Checkpoint ve Crossover içeren ana iş akışı."""
549
+
550
+ # Çalıştırma adı ve çıktı klasörü
551
+ timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
552
+ run_name = f"evorun_{timestamp}_gen{args.generations}_pop{args.pop_size}"
553
+ # Eğer resume path verilmişse, o klasörü kullan
554
+ output_dir = args.resume_from if args.resume_from else os.path.join(args.output_base_dir, run_name)
555
+ resume_run = bool(args.resume_from)
556
+ if resume_run:
557
+ run_name = os.path.basename(output_dir) # Klasör adını kullan
558
+ logging.info(f"Attempting to resume run from: {output_dir}")
559
+ else:
560
+ try: os.makedirs(output_dir, exist_ok=True)
561
+ except OSError as e: print(f"FATAL: Could not create output directory: {output_dir}. Error: {e}", file=sys.stderr); sys.exit(1)
562
+
563
+ # Loglamayı ayarla ('a' modu ile devam etmeye uygun)
564
+ setup_logging(output_dir)
565
+ logging.info(f"========== Starting/Resuming EvoNet Pipeline Run: {run_name} ==========")
566
+ logging.info(f"Output directory: {output_dir}")
567
+
568
+ # --- Checkpoint Yükleme ---
569
+ start_generation = 0
570
+ population = []
571
+ initial_state_loaded = False
572
+ latest_checkpoint_path = find_latest_checkpoint(output_dir) if resume_run else None
573
+
574
+ if latest_checkpoint_path:
575
+ loaded_state = load_checkpoint(latest_checkpoint_path)
576
+ if loaded_state:
577
+ start_generation = loaded_state['generation'] # Kaldığı nesilden başla
578
+ population = loaded_state['population']
579
+ # Rastgele durumları geri yükle
580
+ try:
581
+ random.setstate(loaded_state['random_state'])
582
+ np.random.set_state(loaded_state['numpy_random_state'])
583
+ # tf.random.set_global_generator(tf.random.Generator.from_state(loaded_state['tensorflow_random_state'])) # TF state sorunlu olabilir
584
+ logging.info(f"Random states restored from checkpoint.")
585
+ except Exception as e:
586
+ logging.warning(f"Could not fully restore random states from checkpoint: {e}")
587
+ initial_state_loaded = True
588
+ logging.info(f"Resuming from Generation {start_generation + 1} with {len(population)} individuals.")
589
+ else:
590
+ logging.error("Failed to load checkpoint. Starting from scratch.")
591
+ resume_run = False # Checkpoint yüklenemediyse sıfırdan başla
592
+ elif resume_run:
593
+ logging.warning(f"Resume requested but no valid checkpoint found in {output_dir}. Starting from scratch.")
594
+ resume_run = False # Checkpoint yoksa sıfırdan başla
595
+
596
+
597
+ # --- Sıfırdan Başlama veya Devam Etme Ayarları ---
598
+ if not initial_state_loaded:
599
+ # Argümanları logla ve kaydet (sadece sıfırdan başlarken)
600
+ logging.info("--- Configuration ---")
601
+ args_dict = vars(args)
602
+ for k, v in args_dict.items(): logging.info(f" {k:<20}: {v}")
603
+ logging.info("---------------------")
604
+ config_path = os.path.join(output_dir, "config.json")
605
+ try:
606
+ with open(config_path, 'w') as f: json.dump(args_dict, f, indent=4, sort_keys=True)
607
+ logging.info(f"Configuration saved to {config_path}")
608
+ except Exception as e: logging.error(f"Failed to save configuration: {e}", exc_info=True)
609
+
610
+ # Rastgele tohumları ayarla
611
+ try:
612
+ random.seed(args.seed); np.random.seed(args.seed); tf.random.set_seed(args.seed)
613
+ logging.info(f"Using random seed: {args.seed}")
614
+ except Exception as e: logging.warning(f"Could not set all random seeds: {e}")
615
+
616
+ # GPU kontrolü
617
+ is_gpu_available = check_gpu()
618
+
619
+ # Veri Üretimi
620
+ try:
621
+ X_train, y_train = generate_data(args.train_samples, args.seq_length)
622
+ X_test, y_test = generate_data(args.test_samples, args.seq_length)
623
+ input_shape = X_train.shape[1:]
624
+ except Exception: logging.critical("Failed to generate data. Exiting."); sys.exit(1)
625
+
626
+ # Popülasyon Başlatma
627
+ logging.info(f"--- Initializing Population (Size: {args.pop_size}) ---")
628
+ try:
629
+ population = [create_individual(args.seq_length, input_shape) for _ in range(args.pop_size)]
630
+ logging.info("Population initialized successfully.")
631
+ except Exception: logging.critical("Failed to initialize population. Exiting."); sys.exit(1)
632
+ else:
633
+ # Checkpoint'ten devam ediliyorsa, veriyi yeniden üretmemiz gerekebilir
634
+ # veya checkpoint'e veriyi de dahil edebiliriz (büyük olabilir).
635
+ # Şimdilik veriyi yeniden üretelim.
636
+ logging.info("Reloading data for resumed run...")
637
+ is_gpu_available = check_gpu() # GPU durumunu tekrar kontrol et
638
+ try:
639
+ X_train, y_train = generate_data(args.train_samples, args.seq_length)
640
+ X_test, y_test = generate_data(args.test_samples, args.seq_length)
641
+ except Exception: logging.critical("Failed to reload data for resumed run. Exiting."); sys.exit(1)
642
+ # Config dosyasını tekrar okuyup loglayabiliriz
643
+ config_path = os.path.join(output_dir, "config.json")
644
+ try:
645
+ with open(config_path, 'r') as f: args_dict = json.load(f)
646
+ logging.info("--- Loaded Configuration (from resumed run) ---")
647
+ for k, v in args_dict.items(): logging.info(f" {k:<20}: {v}")
648
+ logging.info("-----------------------------------------------")
649
+ except Exception as e:
650
+ logging.warning(f"Could not reload config.json: {e}")
651
+ args_dict = vars(args) # Argümanları kullan
652
+
+
+ # Evolution process
+ logging.info(f"--- Starting/Resuming Evolution ({args.generations} Total Generations) ---")
+ if start_generation >= args.generations:
+ logging.warning(f"Loaded checkpoint generation ({start_generation}) is already >= total generations ({args.generations}). Skipping evolution.")
+ best_model_unevolved = population[0] if population else None # Ideally the best model should be taken from the checkpoint
+ best_fitness_hist, avg_fitness_hist = [], [] # The history should be loaded as well
+ # TODO: Also load the best model and the fitness history from the checkpoint
+ # Simplified for now - evolution is skipped
+ else:
+ try:
+ best_model_unevolved, best_fitness_hist, avg_fitness_hist = evolve_population_v3(
+ population, X_train, y_train, start_generation, args.generations,
+ args.crossover_rate, args.mutation_rate, args.weight_mut_rate, args.mutation_strength,
+ args.tournament_size, args.elitism_count, args.batch_size,
+ output_dir, args.checkpoint_interval
+ )
+ except Exception as e:
+ logging.critical(f"Fatal error during evolution process: {e}", exc_info=True)
+ sys.exit(1)
+ logging.info("--- Evolution Complete ---")
+
+ # (Saving and plotting fitness history - same as before)
+ if best_fitness_hist or avg_fitness_hist: # Only if the lists are not empty
+ # The history may also need to be loaded from the checkpoint and merged in.
+ # For now we only save/plot the portion from this run.
+ # TODO: Merge with the history loaded from the checkpoint.
+ plot_fitness_history(best_fitness_hist, avg_fitness_hist, output_dir)
+ history_path = os.path.join(output_dir, "fitness_history_run.csv") # Different name?
+ try:
+ history_data = np.array([np.arange(start_generation + 1, start_generation + len(best_fitness_hist) + 1), best_fitness_hist, avg_fitness_hist]).T
+ np.savetxt(history_path, history_data, delimiter=',', header='Generation,BestFitness,AvgFitness', comments='', fmt=['%d', '%.8f', '%.8f'])
+ logging.info(f"Fitness history (this run) saved to {history_path}")
+ except Exception as e: logging.error(f"Could not save fitness history data: {e}")
+ else: logging.warning("Fitness history is empty, skipping saving/plotting.")
+
+ # (Final training of the best model, evaluation, and saving results - same as before)
+ if best_model_unevolved is None:
+ logging.error("Evolution did not yield a best model. Skipping final training and evaluation.")
+ final_metrics = {"test_mse": np.inf, "avg_kendall_tau": 0.0}; final_model_path = None; training_summary = {}
+ else:
+ logging.info("--- Starting Final Training of Best Evolved Model ---")
+ try:
+ final_model = clone_model(best_model_unevolved); final_model.set_weights(best_model_unevolved.get_weights())
+ final_model.compile(optimizer=Adam(learning_rate=0.001), loss='mse', metrics=['mae'])
+ logging.info("Model Summary of Best Evolved (Untrained):"); final_model.summary(print_fn=logging.info)
+ early_stopping = EarlyStopping(monitor='val_loss', patience=15, restore_best_weights=True, verbose=1)
+ reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=7, min_lr=1e-7, verbose=1)
+ history = final_model.fit(X_train, y_train, epochs=args.epochs_final_train, batch_size=args.batch_size, validation_split=0.2, callbacks=[early_stopping, reduce_lr], verbose=2)
+ logging.info("Final training complete.")
+ training_summary = {"epochs_run": len(history.history['loss']), "final_train_loss": history.history['loss'][-1], "final_val_loss": history.history['val_loss'][-1]}
+ final_metrics = evaluate_model(final_model, X_test, y_test, args.batch_size)
+ final_model_path = os.path.join(output_dir, "best_evolved_model_trained.keras")
+ final_model.save(final_model_path); logging.info(f"Final trained model saved to {final_model_path}")
+ except Exception as e:
+ logging.error(f"Error during final training or evaluation: {e}", exc_info=True)
+ final_metrics = {"test_mse": np.inf, "avg_kendall_tau": 0.0}; final_model_path = None; training_summary = {"error": str(e)}
+
+ logging.info("--- Saving Final Results ---")
+ final_results = { # ... (same result structure as before) ...
+ "run_info": {"run_name": run_name, "timestamp": timestamp, "output_directory": output_dir, "gpu_used": is_gpu_available, "resumed": resume_run},
+ "config": args_dict,
+ "evolution_summary": { # TODO: Should be merged with the history loaded from the checkpoint
+ "generations_run_this_session": len(best_fitness_hist) if best_fitness_hist else 0,
+ "best_fitness_achieved_overall": best_fitness_overall if best_fitness_overall > -np.inf else None,
+ "best_fitness_final_gen": best_fitness_hist[-1] if best_fitness_hist else None,
+ "avg_fitness_final_gen": avg_fitness_hist[-1] if avg_fitness_hist else None, },
+ "final_training_summary": training_summary, "final_evaluation_on_test": final_metrics, "saved_model_path": final_model_path }
+ results_path = os.path.join(output_dir, "final_results.json")
+ try:
+ def convert_numpy_types(obj):
+ if isinstance(obj, np.integer): return int(obj)
+ elif isinstance(obj, np.floating): return float(obj)
+ elif isinstance(obj, np.ndarray): return obj.tolist()
+ return obj
+ with open(results_path, 'w') as f: json.dump(final_results, f, indent=4, default=convert_numpy_types)
+ logging.info(f"Final results summary saved to {results_path}")
+ except Exception as e: logging.error(f"Failed to save final results JSON: {e}", exc_info=True)
+
+ logging.info(f"========== Pipeline Run {run_name} Finished ==========")
+
+
+ # --- Argument Parser (New Arguments Added) ---
+ def parse_arguments_v3() -> argparse.Namespace:
+ parser = argparse.ArgumentParser(description="EvoNet v3: Neuroevolution with Crossover & Checkpointing")
+
+ # --- Directories and Control ---
+ parser.add_argument('--output_base_dir', type=str, default=DEFAULT_OUTPUT_BASE_DIR, help='Base directory for new runs.')
+ parser.add_argument('--resume_from', type=str, default=None, help='Path to a previous run directory to resume from.')
+ parser.add_argument('--checkpoint_interval', type=int, default=DEFAULT_CHECKPOINT_INTERVAL, help='Save checkpoint every N generations (0 to disable).')
+
+ # --- Data Settings ---
+ parser.add_argument('--seq_length', type=int, default=DEFAULT_SEQ_LENGTH, help='Length of sequences.')
+ parser.add_argument('--train_samples', type=int, default=5000, help='Number of training samples.')
+ parser.add_argument('--test_samples', type=int, default=1000, help='Number of test samples.')
+
+ # --- Evolution Parameters ---
+ parser.add_argument('--pop_size', type=int, default=DEFAULT_POP_SIZE, help='Population size.')
+ parser.add_argument('--generations', type=int, default=DEFAULT_GENERATIONS, help='Total number of generations.')
+ parser.add_argument('--crossover_rate', type=float, default=DEFAULT_CROSSOVER_RATE, help='Probability of applying crossover.')
+ parser.add_argument('--mutation_rate', type=float, default=DEFAULT_MUTATION_RATE, help='Probability of applying mutation (if crossover is not applied).')
+ parser.add_argument('--weight_mut_rate', type=float, default=DEFAULT_WEIGHT_MUT_RATE, help='Weight mutation probability within mutation.')
+ # parser.add_argument('--activation_mut_rate', type=float, default=DEFAULT_ACTIVATION_MUT_RATE, help='Activation mutation probability (experimental).')
+ parser.add_argument('--mutation_strength', type=float, default=DEFAULT_MUTATION_STRENGTH, help='Std dev for weight mutation noise.')
+ parser.add_argument('--tournament_size', type=int, default=DEFAULT_TOURNAMENT_SIZE, help='Tournament selection size.')
+ parser.add_argument('--elitism_count', type=int, default=DEFAULT_ELITISM_COUNT, help='Number of elite individuals.')
+
+ # --- Training and Evaluation ---
+ parser.add_argument('--batch_size', type=int, default=DEFAULT_BATCH_SIZE, help='Batch size.')
+ parser.add_argument('--epochs_final_train', type=int, default=DEFAULT_EPOCHS_FINAL_TRAIN, help='Max epochs for final training.')
+
+ # --- Reproducibility ---
+ parser.add_argument('--seed', type=int, default=None, help='Random seed (default: random).')
+
+ args = parser.parse_args()
+ if args.seed is None: args.seed = random.randint(0, 2**32 - 1); print(f"Generated random seed: {args.seed}")
+ # Simple check: crossover + mutation rate should not exceed 1 (technically it can, but logically only one should be selected)
+ # if args.crossover_rate + args.mutation_rate > 1.0: logging.warning("Sum of crossover and mutation rates exceeds 1.0")
+ return args
+
+
+ # --- Main Execution Block ---
+ if __name__ == "__main__":
+ cli_args = parse_arguments_v3()
+ try:
+ run_pipeline_v3(cli_args)
+ except SystemExit: pass
+ except Exception as e:
+ print(f"\nFATAL UNHANDLED ERROR in main execution block: {e}", file=sys.stderr)
+ if logging.getLogger().hasHandlers(): logging.critical("FATAL UNHANDLED ERROR:", exc_info=True)
+ else: import traceback; print(traceback.format_exc(), file=sys.stderr)
+ sys.exit(1)
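
The `convert_numpy_types` helper defined just before `json.dump` in the diff above exists because `final_results` mixes plain Python values with NumPy scalars and arrays, which the standard `json` encoder rejects. A minimal standalone sketch of the same pattern follows; the sample `results` dict is illustrative, not taken from an actual run:

```python
import json
import numpy as np

def convert_numpy_types(obj):
    # Fallback converter passed to json.dump/json.dumps via `default`:
    # it is invoked only for objects the encoder cannot serialize natively.
    if isinstance(obj, np.integer):
        return int(obj)
    elif isinstance(obj, np.floating):
        return float(obj)
    elif isinstance(obj, np.ndarray):
        return obj.tolist()
    return obj  # other types are passed back unchanged

# Hypothetical results dict mixing NumPy and plain Python types
results = {
    "generations": np.int64(100),
    "best_fitness": 0.93,
    "history": np.array([0.1, 0.5, 0.9]),
}
print(json.dumps(results, default=convert_numpy_types))
```

Note that `json` only calls `default` for types it cannot handle itself, so plain `int`/`float` values never reach the converter; this keeps the hook cheap for mostly-native dictionaries like `final_results`.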