diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,2369 @@
+[2023-09-26 04:48:41,679][06561] Saving configuration to ./train_atari/atari_enduro/config.json...
+[2023-09-26 04:48:41,946][06561] Rollout worker 0 uses device cpu
+[2023-09-26 04:48:41,947][06561] Rollout worker 1 uses device cpu
+[2023-09-26 04:48:41,947][06561] Rollout worker 2 uses device cpu
+[2023-09-26 04:48:41,948][06561] Rollout worker 3 uses device cpu
+[2023-09-26 04:48:41,948][06561] Rollout worker 4 uses device cpu
+[2023-09-26 04:48:41,949][06561] Rollout worker 5 uses device cpu
+[2023-09-26 04:48:41,949][06561] Rollout worker 6 uses device cpu
+[2023-09-26 04:48:41,950][06561] Rollout worker 7 uses device cpu
+[2023-09-26 04:48:41,950][06561] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1
+[2023-09-26 04:48:41,996][06561] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 04:48:41,996][06561] InferenceWorker_p0-w0: min num requests: 1
+[2023-09-26 04:48:42,000][06561] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-26 04:48:42,000][06561] InferenceWorker_p1-w0: min num requests: 1
+[2023-09-26 04:48:42,024][06561] Starting all processes...
+[2023-09-26 04:48:42,024][06561] Starting process learner_proc0
+[2023-09-26 04:48:43,619][06561] Starting process learner_proc1
+[2023-09-26 04:48:43,623][07269] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 04:48:43,624][07269] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-09-26 04:48:43,642][07269] Num visible devices: 1
+[2023-09-26 04:48:43,667][07269] Starting seed is not provided
+[2023-09-26 04:48:43,667][07269] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 04:48:43,667][07269] Initializing actor-critic model on device cuda:0
+[2023-09-26 04:48:43,668][07269] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-26 04:48:43,668][07269] RunningMeanStd input shape: (1,)
+[2023-09-26 04:48:43,680][07269] ConvEncoder: input_channels=4
+[2023-09-26 04:48:43,859][07269] Conv encoder output size: 512
+[2023-09-26 04:48:43,861][07269] Created Actor Critic model with architecture:
+[2023-09-26 04:48:43,861][07269] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): MultiInputEncoder(
+    (encoders): ModuleDict(
+      (obs): ConvEncoder(
+        (enc): RecursiveScriptModule(
+          original_name=ConvEncoderImpl
+          (conv_head): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Conv2d)
+            (1): RecursiveScriptModule(original_name=ReLU)
+            (2): RecursiveScriptModule(original_name=Conv2d)
+            (3): RecursiveScriptModule(original_name=ReLU)
+            (4): RecursiveScriptModule(original_name=Conv2d)
+            (5): RecursiveScriptModule(original_name=ReLU)
+          )
+          (mlp_layers): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Linear)
+            (1): RecursiveScriptModule(original_name=ReLU)
+          )
+        )
+      )
+    )
+  )
+  (core): ModelCoreIdentity()
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=9, bias=True)
+  )
+)
+[2023-09-26 04:48:44,450][07269] Using optimizer
+[2023-09-26 04:48:44,450][07269] No checkpoints found
+[2023-09-26 04:48:44,451][07269] Did not load from checkpoint, starting from scratch!
+[2023-09-26 04:48:44,451][07269] Initialized policy 0 weights for model version 0
+[2023-09-26 04:48:44,452][07269] LearnerWorker_p0 finished initialization!
+[2023-09-26 04:48:44,453][07269] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 04:48:45,270][06561] Starting all processes...
+[2023-09-26 04:48:45,274][07486] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-26 04:48:45,274][07486] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1
+[2023-09-26 04:48:45,278][06561] Starting process inference_proc0-0
+[2023-09-26 04:48:45,278][06561] Starting process inference_proc1-0
+[2023-09-26 04:48:45,278][06561] Starting process rollout_proc0
+[2023-09-26 04:48:45,278][06561] Starting process rollout_proc1
+[2023-09-26 04:48:45,293][07486] Num visible devices: 1
+[2023-09-26 04:48:45,279][06561] Starting process rollout_proc2
+[2023-09-26 04:48:45,279][06561] Starting process rollout_proc3
+[2023-09-26 04:48:45,283][06561] Starting process rollout_proc4
+[2023-09-26 04:48:45,284][06561] Starting process rollout_proc5
+[2023-09-26 04:48:45,320][07486] Starting seed is not provided
+[2023-09-26 04:48:45,287][06561] Starting process rollout_proc6
+[2023-09-26 04:48:45,320][07486] Using GPUs [0] for process 1 (actually maps to GPUs [1])
+[2023-09-26 04:48:45,320][07486] Initializing actor-critic model on device cuda:0
+[2023-09-26 04:48:45,321][07486] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-26 04:48:45,321][07486] RunningMeanStd input shape: (1,)
+[2023-09-26 04:48:45,288][06561] Starting process rollout_proc7
+[2023-09-26 04:48:45,333][07486] ConvEncoder: input_channels=4
+[2023-09-26 04:48:45,681][07486] Conv encoder output size: 512
+[2023-09-26 04:48:45,684][07486] Created Actor Critic model with architecture:
+[2023-09-26 04:48:45,684][07486] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): MultiInputEncoder(
+    (encoders): ModuleDict(
+      (obs): ConvEncoder(
+        (enc): RecursiveScriptModule(
+          original_name=ConvEncoderImpl
+          (conv_head): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Conv2d)
+            (1): RecursiveScriptModule(original_name=ReLU)
+            (2): RecursiveScriptModule(original_name=Conv2d)
+            (3): RecursiveScriptModule(original_name=ReLU)
+            (4): RecursiveScriptModule(original_name=Conv2d)
+            (5): RecursiveScriptModule(original_name=ReLU)
+          )
+          (mlp_layers): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Linear)
+            (1): RecursiveScriptModule(original_name=ReLU)
+          )
+        )
+      )
+    )
+  )
+  (core): ModelCoreIdentity()
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=9, bias=True)
+  )
+)
+[2023-09-26 04:48:46,304][07486] Using optimizer
+[2023-09-26 04:48:46,304][07486] No checkpoints found
+[2023-09-26 04:48:46,304][07486] Did not load from checkpoint, starting from scratch!
+[2023-09-26 04:48:46,305][07486] Initialized policy 1 weights for model version 0
+[2023-09-26 04:48:46,306][07486] LearnerWorker_p1 finished initialization!
+[2023-09-26 04:48:46,307][07486] Using GPUs [0] for process 1 (actually maps to GPUs [1])
+[2023-09-26 04:48:47,202][07696] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 04:48:47,202][07696] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-09-26 04:48:47,213][07759] Worker 7 uses CPU cores [28, 29, 30, 31]
+[2023-09-26 04:48:47,220][07696] Num visible devices: 1
+[2023-09-26 04:48:47,222][07757] Worker 6 uses CPU cores [24, 25, 26, 27]
+[2023-09-26 04:48:47,234][07753] Worker 2 uses CPU cores [8, 9, 10, 11]
+[2023-09-26 04:48:47,242][07751] Worker 0 uses CPU cores [0, 1, 2, 3]
+[2023-09-26 04:48:47,249][07697] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-26 04:48:47,249][07697] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1
+[2023-09-26 04:48:47,274][07755] Worker 4 uses CPU cores [16, 17, 18, 19]
+[2023-09-26 04:48:47,277][07752] Worker 1 uses CPU cores [4, 5, 6, 7]
+[2023-09-26 04:48:47,290][07756] Worker 3 uses CPU cores [12, 13, 14, 15]
+[2023-09-26 04:48:47,294][07697] Num visible devices: 1
+[2023-09-26 04:48:47,317][07758] Worker 5 uses CPU cores [20, 21, 22, 23]
+[2023-09-26 04:48:47,811][07696] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-26 04:48:47,812][07696] RunningMeanStd input shape: (1,)
+[2023-09-26 04:48:47,823][07696] ConvEncoder: input_channels=4
+[2023-09-26 04:48:47,837][06561] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-09-26 04:48:47,890][07697] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-26 04:48:47,891][07697] RunningMeanStd input shape: (1,)
+[2023-09-26 04:48:47,902][07697] ConvEncoder: input_channels=4
+[2023-09-26 04:48:47,925][07696] Conv encoder output size: 512
+[2023-09-26 04:48:47,931][06561] Inference worker 0-0 is ready!
+[2023-09-26 04:48:48,002][07697] Conv encoder output size: 512
+[2023-09-26 04:48:48,008][06561] Inference worker 1-0 is ready!
+[2023-09-26 04:48:48,009][06561] All inference workers are ready! Signal rollout workers to start!
+[2023-09-26 04:48:48,481][07755] Decorrelating experience for 0 frames...
+[2023-09-26 04:48:48,489][07758] Decorrelating experience for 0 frames...
+[2023-09-26 04:48:48,489][07752] Decorrelating experience for 0 frames...
+[2023-09-26 04:48:48,489][07751] Decorrelating experience for 0 frames...
+[2023-09-26 04:48:48,521][07757] Decorrelating experience for 0 frames...
+[2023-09-26 04:48:48,530][07759] Decorrelating experience for 0 frames...
+[2023-09-26 04:48:48,530][07753] Decorrelating experience for 0 frames...
+[2023-09-26 04:48:48,598][07756] Decorrelating experience for 0 frames...
+[2023-09-26 04:48:52,837][06561] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 8192. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 04:48:57,837][06561] Fps is (10 sec: 3276.9, 60 sec: 3276.9, 300 sec: 3276.9). Total num frames: 32768. Throughput: 0: 384.4, 1: 387.9. Samples: 7723. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:49:01,984][06561] Heartbeat connected on Batcher_0
+[2023-09-26 04:49:01,987][06561] Heartbeat connected on LearnerWorker_p0
+[2023-09-26 04:49:01,990][06561] Heartbeat connected on Batcher_1
+[2023-09-26 04:49:01,992][06561] Heartbeat connected on LearnerWorker_p1
+[2023-09-26 04:49:01,999][06561] Heartbeat connected on InferenceWorker_p0-w0
+[2023-09-26 04:49:02,002][06561] Heartbeat connected on InferenceWorker_p1-w0
+[2023-09-26 04:49:02,005][06561] Heartbeat connected on RolloutWorker_w0
+[2023-09-26 04:49:02,008][06561] Heartbeat connected on RolloutWorker_w1
+[2023-09-26 04:49:02,009][06561] Heartbeat connected on RolloutWorker_w2
+[2023-09-26 04:49:02,012][06561] Heartbeat connected on RolloutWorker_w3
+[2023-09-26 04:49:02,017][06561] Heartbeat connected on RolloutWorker_w4
+[2023-09-26 04:49:02,019][06561] Heartbeat connected on RolloutWorker_w5
+[2023-09-26 04:49:02,021][06561] Heartbeat connected on RolloutWorker_w6
+[2023-09-26 04:49:02,024][06561] Heartbeat connected on RolloutWorker_w7
+[2023-09-26 04:49:02,837][06561] Fps is (10 sec: 5734.5, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 65536. Throughput: 0: 409.6, 1: 409.6. Samples: 12288. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:49:05,407][07696] Updated weights for policy 0, policy_version 160 (0.0015)
+[2023-09-26 04:49:05,407][07697] Updated weights for policy 1, policy_version 160 (0.0018)
+[2023-09-26 04:49:07,837][06561] Fps is (10 sec: 5734.3, 60 sec: 4505.6, 300 sec: 4505.6). Total num frames: 90112. Throughput: 0: 536.2, 1: 538.9. Samples: 21501. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 04:49:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 122880. Throughput: 0: 617.4, 1: 619.8. Samples: 30930. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:49:12,838][06561] Avg episode reward: [(0, '8.500'), (1, '0.000')]
+[2023-09-26 04:49:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 5188.3, 300 sec: 5188.3). Total num frames: 155648. Throughput: 0: 591.8, 1: 594.4. Samples: 35585. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:49:17,838][06561] Avg episode reward: [(0, '8.500'), (1, '0.000')]
+[2023-09-26 04:49:18,368][07696] Updated weights for policy 0, policy_version 320 (0.0016)
+[2023-09-26 04:49:18,369][07697] Updated weights for policy 1, policy_version 320 (0.0019)
+[2023-09-26 04:49:22,837][06561] Fps is (10 sec: 6553.5, 60 sec: 5383.3, 300 sec: 5383.3). Total num frames: 188416. Throughput: 0: 643.8, 1: 644.7. Samples: 45095. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:49:22,838][06561] Avg episode reward: [(0, '8.500'), (1, '0.000')]
+[2023-09-26 04:49:27,837][06561] Fps is (10 sec: 6553.5, 60 sec: 5529.6, 300 sec: 5529.6). Total num frames: 221184. Throughput: 0: 674.9, 1: 676.7. Samples: 54066. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:49:27,838][06561] Avg episode reward: [(0, '5.125'), (1, '0.000')]
+[2023-09-26 04:49:27,839][07269] Saving new best policy, reward=5.125!
+[2023-09-26 04:49:27,839][07486] Saving new best policy, reward=0.000!
+[2023-09-26 04:49:31,621][07696] Updated weights for policy 0, policy_version 480 (0.0016)
+[2023-09-26 04:49:31,621][07697] Updated weights for policy 1, policy_version 480 (0.0017)
+[2023-09-26 04:49:32,837][06561] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5461.3). Total num frames: 245760. Throughput: 0: 655.7, 1: 657.8. Samples: 59109. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 04:49:32,838][06561] Avg episode reward: [(0, '5.125'), (1, '0.000')]
+[2023-09-26 04:49:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 5570.6, 300 sec: 5570.6). Total num frames: 278528. Throughput: 0: 738.2, 1: 740.0. Samples: 68571. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 04:49:37,838][06561] Avg episode reward: [(0, '5.125'), (1, '0.000')]
+[2023-09-26 04:49:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 5659.9, 300 sec: 5659.9). Total num frames: 311296. Throughput: 0: 779.3, 1: 778.5. Samples: 77824. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 04:49:42,838][06561] Avg episode reward: [(0, '5.125'), (1, '0.000')]
+[2023-09-26 04:49:44,879][07696] Updated weights for policy 0, policy_version 640 (0.0019)
+[2023-09-26 04:49:44,879][07697] Updated weights for policy 1, policy_version 640 (0.0020)
+[2023-09-26 04:49:47,837][06561] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 344064. Throughput: 0: 774.7, 1: 776.4. Samples: 82085. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:49:47,838][06561] Avg episode reward: [(0, '5.750'), (1, '0.000')]
+[2023-09-26 04:49:47,839][07269] Saving new best policy, reward=5.750!
+[2023-09-26 04:49:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 5671.4). Total num frames: 368640. Throughput: 0: 777.4, 1: 778.9. Samples: 91531. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:49:52,838][06561] Avg episode reward: [(0, '5.750'), (1, '0.000')]
+[2023-09-26 04:49:57,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 5734.4). Total num frames: 401408. Throughput: 0: 776.4, 1: 777.0. Samples: 100830. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 04:49:57,838][06561] Avg episode reward: [(0, '5.750'), (1, '0.000')]
+[2023-09-26 04:49:58,004][07696] Updated weights for policy 0, policy_version 800 (0.0016)
+[2023-09-26 04:49:58,004][07697] Updated weights for policy 1, policy_version 800 (0.0017)
+[2023-09-26 04:50:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 5789.0). Total num frames: 434176. Throughput: 0: 779.2, 1: 779.1. Samples: 105710. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:50:02,838][06561] Avg episode reward: [(0, '5.375'), (1, '0.000')]
+[2023-09-26 04:50:07,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 5836.8). Total num frames: 466944. Throughput: 0: 773.6, 1: 773.4. Samples: 114711. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:50:07,837][06561] Avg episode reward: [(0, '5.375'), (1, '0.000')]
+[2023-09-26 04:50:11,323][07696] Updated weights for policy 0, policy_version 960 (0.0017)
+[2023-09-26 04:50:11,324][07697] Updated weights for policy 1, policy_version 960 (0.0018)
+[2023-09-26 04:50:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5879.0). Total num frames: 499712. Throughput: 0: 778.9, 1: 778.6. Samples: 124151. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:50:12,838][06561] Avg episode reward: [(0, '5.375'), (1, '0.000')]
+[2023-09-26 04:50:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 5825.4). Total num frames: 524288. Throughput: 0: 773.6, 1: 775.8. Samples: 128828. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 04:50:17,838][06561] Avg episode reward: [(0, '5.250'), (1, '0.000')]
+[2023-09-26 04:50:22,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 5863.7). Total num frames: 557056. Throughput: 0: 769.2, 1: 769.6. Samples: 137818. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:50:22,838][06561] Avg episode reward: [(0, '5.250'), (1, '0.000')]
+[2023-09-26 04:50:24,545][07696] Updated weights for policy 0, policy_version 1120 (0.0017)
+[2023-09-26 04:50:24,545][07697] Updated weights for policy 1, policy_version 1120 (0.0014)
+[2023-09-26 04:50:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 5898.2). Total num frames: 589824. Throughput: 0: 773.7, 1: 773.7. Samples: 147457. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 04:50:27,838][06561] Avg episode reward: [(0, '5.250'), (1, '0.000')]
+[2023-09-26 04:50:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 5929.5). Total num frames: 622592. Throughput: 0: 776.5, 1: 776.3. Samples: 151962. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 04:50:32,838][06561] Avg episode reward: [(0, '5.250'), (1, '0.000')]
+[2023-09-26 04:50:37,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 5883.4). Total num frames: 647168. Throughput: 0: 770.1, 1: 769.9. Samples: 160834. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:50:37,837][06561] Avg episode reward: [(0, '5.292'), (1, '0.000')]
+[2023-09-26 04:50:37,841][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000001264_323584.pth...
+[2023-09-26 04:50:37,842][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000001264_323584.pth...
+[2023-09-26 04:50:38,193][07696] Updated weights for policy 0, policy_version 1280 (0.0017)
+[2023-09-26 04:50:38,193][07697] Updated weights for policy 1, policy_version 1280 (0.0017)
+[2023-09-26 04:50:42,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 5912.5). Total num frames: 679936. Throughput: 0: 768.8, 1: 767.2. Samples: 169951. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:50:42,838][06561] Avg episode reward: [(0, '5.292'), (1, '0.000')]
+[2023-09-26 04:50:47,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 5939.2). Total num frames: 712704. Throughput: 0: 762.4, 1: 762.4. Samples: 174329. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:50:47,837][06561] Avg episode reward: [(0, '5.292'), (1, '0.000')]
+[2023-09-26 04:50:51,367][07696] Updated weights for policy 0, policy_version 1440 (0.0017)
+[2023-09-26 04:50:51,367][07697] Updated weights for policy 1, policy_version 1440 (0.0018)
+[2023-09-26 04:50:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5963.8). Total num frames: 745472. Throughput: 0: 769.8, 1: 770.2. Samples: 184013. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 04:50:52,838][06561] Avg episode reward: [(0, '5.500'), (1, '0.000')]
+[2023-09-26 04:50:57,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 5986.5). Total num frames: 778240. Throughput: 0: 769.7, 1: 770.4. Samples: 193458. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 04:50:57,838][06561] Avg episode reward: [(0, '5.500'), (1, '0.000')]
+[2023-09-26 04:51:02,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 5946.8). Total num frames: 802816. Throughput: 0: 773.2, 1: 770.8. Samples: 198310. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 04:51:02,838][06561] Avg episode reward: [(0, '5.500'), (1, '0.000')]
+[2023-09-26 04:51:04,234][07696] Updated weights for policy 0, policy_version 1600 (0.0015)
+[2023-09-26 04:51:04,234][07697] Updated weights for policy 1, policy_version 1600 (0.0018)
+[2023-09-26 04:51:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 5968.5). Total num frames: 835584. Throughput: 0: 774.6, 1: 774.0. Samples: 207503. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:07,838][06561] Avg episode reward: [(0, '5.500'), (1, '0.000')]
+[2023-09-26 04:51:12,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 5988.6). Total num frames: 868352. Throughput: 0: 773.7, 1: 773.7. Samples: 217088. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 04:51:12,838][06561] Avg episode reward: [(0, '5.875'), (1, '0.000')]
+[2023-09-26 04:51:12,838][07269] Saving new best policy, reward=5.875!
+[2023-09-26 04:51:17,590][07696] Updated weights for policy 0, policy_version 1760 (0.0018)
+[2023-09-26 04:51:17,590][07697] Updated weights for policy 1, policy_version 1760 (0.0017)
+[2023-09-26 04:51:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6007.5). Total num frames: 901120. Throughput: 0: 772.3, 1: 772.8. Samples: 221491. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:17,837][06561] Avg episode reward: [(0, '5.875'), (1, '0.000')]
+[2023-09-26 04:51:22,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6025.1). Total num frames: 933888. Throughput: 0: 780.6, 1: 780.2. Samples: 231071. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:22,838][06561] Avg episode reward: [(0, '5.875'), (1, '0.000')]
+[2023-09-26 04:51:27,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 5990.4). Total num frames: 958464. Throughput: 0: 780.5, 1: 781.3. Samples: 240234. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:27,838][06561] Avg episode reward: [(0, '8.694'), (1, '0.000')]
+[2023-09-26 04:51:27,920][07269] Saving new best policy, reward=8.694!
+[2023-09-26 04:51:30,687][07696] Updated weights for policy 0, policy_version 1920 (0.0014)
+[2023-09-26 04:51:30,687][07697] Updated weights for policy 1, policy_version 1920 (0.0017)
+[2023-09-26 04:51:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6007.5). Total num frames: 991232. Throughput: 0: 783.4, 1: 783.3. Samples: 244832. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:32,838][06561] Avg episode reward: [(0, '8.694'), (1, '0.000')]
+[2023-09-26 04:51:37,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6023.5). Total num frames: 1024000. Throughput: 0: 780.3, 1: 780.4. Samples: 254247. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:37,838][06561] Avg episode reward: [(0, '8.694'), (1, '0.000')]
+[2023-09-26 04:51:41,241][07269] Early stopping after 3 epochs (12 sgd steps), loss delta 0.0000003
+[2023-09-26 04:51:42,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6038.7). Total num frames: 1056768. Throughput: 0: 779.3, 1: 779.0. Samples: 263584. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:42,838][06561] Avg episode reward: [(0, '8.694'), (1, '0.000')]
+[2023-09-26 04:51:43,937][07697] Updated weights for policy 1, policy_version 2080 (0.0018)
+[2023-09-26 04:51:43,938][07696] Updated weights for policy 0, policy_version 2076 (0.0017)
+[2023-09-26 04:51:47,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6007.5). Total num frames: 1081344. Throughput: 0: 777.2, 1: 776.2. Samples: 268217. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:47,838][06561] Avg episode reward: [(0, '8.550'), (1, '0.000')]
+[2023-09-26 04:51:52,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6022.2). Total num frames: 1114112. Throughput: 0: 775.6, 1: 775.0. Samples: 277276. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 04:51:52,838][06561] Avg episode reward: [(0, '8.550'), (1, '0.000')]
+[2023-09-26 04:51:57,138][07696] Updated weights for policy 0, policy_version 2236 (0.0019)
+[2023-09-26 04:51:57,138][07697] Updated weights for policy 1, policy_version 2240 (0.0018)
+[2023-09-26 04:51:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6036.2). Total num frames: 1146880. Throughput: 0: 773.7, 1: 773.7. Samples: 286720. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:51:57,838][06561] Avg episode reward: [(0, '8.550'), (1, '0.000')]
+[2023-09-26 04:52:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6049.5). Total num frames: 1179648. Throughput: 0: 772.9, 1: 772.6. Samples: 291038. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:52:02,838][06561] Avg episode reward: [(0, '9.273'), (1, '0.000')]
+[2023-09-26 04:52:02,839][07269] Saving new best policy, reward=9.273!
+[2023-09-26 04:52:07,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6021.1). Total num frames: 1204224. Throughput: 0: 769.6, 1: 768.2. Samples: 300272. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 04:52:07,838][06561] Avg episode reward: [(0, '9.273'), (1, '0.000')]
+[2023-09-26 04:52:10,549][07697] Updated weights for policy 1, policy_version 2400 (0.0017)
+[2023-09-26 04:52:10,549][07696] Updated weights for policy 0, policy_version 2396 (0.0018)
+[2023-09-26 04:52:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6034.1). Total num frames: 1236992. Throughput: 0: 768.8, 1: 769.1. Samples: 309437. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 04:52:12,838][06561] Avg episode reward: [(0, '9.273'), (1, '0.000')]
+[2023-09-26 04:52:17,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6046.5). Total num frames: 1269760. Throughput: 0: 773.3, 1: 772.4. Samples: 314388. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:52:17,838][06561] Avg episode reward: [(0, '9.553'), (1, '0.000')]
+[2023-09-26 04:52:17,839][07269] Saving new best policy, reward=9.553!
+[2023-09-26 04:52:22,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6058.3). Total num frames: 1302528. Throughput: 0: 771.0, 1: 771.1. Samples: 323641. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:52:22,837][06561] Avg episode reward: [(0, '9.354'), (1, '0.000')]
+[2023-09-26 04:52:23,466][07696] Updated weights for policy 0, policy_version 2556 (0.0016)
+[2023-09-26 04:52:23,466][07697] Updated weights for policy 1, policy_version 2560 (0.0017)
+[2023-09-26 04:52:27,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6069.5). Total num frames: 1335296. Throughput: 0: 777.6, 1: 777.9. Samples: 333579. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:52:27,837][06561] Avg episode reward: [(0, '9.354'), (1, '0.000')]
+[2023-09-26 04:52:32,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6080.3). Total num frames: 1368064. Throughput: 0: 776.2, 1: 776.8. Samples: 338102. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 04:52:32,838][06561] Avg episode reward: [(0, '9.354'), (1, '0.000')]
+[2023-09-26 04:52:36,286][07696] Updated weights for policy 0, policy_version 2716 (0.0014)
+[2023-09-26 04:52:36,287][07697] Updated weights for policy 1, policy_version 2720 (0.0017)
+[2023-09-26 04:52:37,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6090.6). Total num frames: 1400832. Throughput: 0: 785.1, 1: 785.8. Samples: 347967. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:52:37,838][06561] Avg episode reward: [(0, '8.769'), (1, '0.000')]
+[2023-09-26 04:52:37,843][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000002732_700416.pth...
+[2023-09-26 04:52:37,843][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000002736_700416.pth...
+[2023-09-26 04:52:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6065.6). Total num frames: 1425408. Throughput: 0: 779.0, 1: 780.2. Samples: 356885. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 04:52:42,838][06561] Avg episode reward: [(0, '8.769'), (1, '0.000')]
+[2023-09-26 04:52:47,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6075.7). Total num frames: 1458176. Throughput: 0: 786.3, 1: 785.4. Samples: 361768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:52:47,838][06561] Avg episode reward: [(0, '8.769'), (1, '0.000')]
+[2023-09-26 04:52:49,451][07696] Updated weights for policy 0, policy_version 2876 (0.0017)
+[2023-09-26 04:52:49,451][07697] Updated weights for policy 1, policy_version 2880 (0.0019)
+[2023-09-26 04:52:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6085.5). Total num frames: 1490944. Throughput: 0: 787.0, 1: 788.1. Samples: 371150. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:52:52,838][06561] Avg episode reward: [(0, '8.768'), (1, '0.000')]
+[2023-09-26 04:52:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6094.8). Total num frames: 1523712. Throughput: 0: 794.0, 1: 793.6. Samples: 380882. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:52:57,838][06561] Avg episode reward: [(0, '8.768'), (1, '0.000')]
+[2023-09-26 04:53:02,629][07696] Updated weights for policy 0, policy_version 3036 (0.0016)
+[2023-09-26 04:53:02,630][07697] Updated weights for policy 1, policy_version 3040 (0.0016)
+[2023-09-26 04:53:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6103.8). Total num frames: 1556480. Throughput: 0: 786.7, 1: 787.6. Samples: 385233. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 04:53:02,838][06561] Avg episode reward: [(0, '8.768'), (1, '0.000')]
+[2023-09-26 04:53:07,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6081.0). Total num frames: 1581056. Throughput: 0: 786.1, 1: 789.2. Samples: 394532. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:07,839][06561] Avg episode reward: [(0, '8.768'), (1, '0.000')]
+[2023-09-26 04:53:12,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6089.9). Total num frames: 1613824. Throughput: 0: 777.4, 1: 775.5. Samples: 403460. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:12,838][06561] Avg episode reward: [(0, '8.283'), (1, '0.000')]
+[2023-09-26 04:53:15,883][07696] Updated weights for policy 0, policy_version 3196 (0.0013)
+[2023-09-26 04:53:15,883][07697] Updated weights for policy 1, policy_version 3200 (0.0018)
+[2023-09-26 04:53:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6098.5). Total num frames: 1646592. Throughput: 0: 779.9, 1: 779.7. Samples: 408286. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:17,838][06561] Avg episode reward: [(0, '8.283'), (1, '0.000')]
+[2023-09-26 04:53:22,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6106.8). Total num frames: 1679360. Throughput: 0: 775.5, 1: 775.0. Samples: 417739. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 04:53:22,838][06561] Avg episode reward: [(0, '8.283'), (1, '0.000')]
+[2023-09-26 04:53:27,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6114.7). Total num frames: 1712128. Throughput: 0: 777.8, 1: 778.1. Samples: 426901. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:27,838][06561] Avg episode reward: [(0, '8.234'), (1, '0.000')]
+[2023-09-26 04:53:29,094][07696] Updated weights for policy 0, policy_version 3356 (0.0017)
+[2023-09-26 04:53:29,094][07697] Updated weights for policy 1, policy_version 3360 (0.0017)
+[2023-09-26 04:53:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6093.7). Total num frames: 1736704. Throughput: 0: 775.6, 1: 778.3. Samples: 431696. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:32,838][06561] Avg episode reward: [(0, '8.234'), (1, '0.000')]
+[2023-09-26 04:53:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6101.6). Total num frames: 1769472. Throughput: 0: 773.6, 1: 773.5. Samples: 440769. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:37,838][06561] Avg episode reward: [(0, '8.234'), (1, '0.000')]
+[2023-09-26 04:53:42,340][07697] Updated weights for policy 1, policy_version 3520 (0.0019)
+[2023-09-26 04:53:42,340][07696] Updated weights for policy 0, policy_version 3516 (0.0017)
+[2023-09-26 04:53:42,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6109.3). Total num frames: 1802240. Throughput: 0: 771.7, 1: 772.1. Samples: 450352. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 04:53:42,838][06561] Avg episode reward: [(0, '8.234'), (1, '0.000')]
+[2023-09-26 04:53:47,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 1835008. Throughput: 0: 772.1, 1: 770.6. Samples: 454656. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:47,838][06561] Avg episode reward: [(0, '8.338'), (1, '0.000')]
+[2023-09-26 04:53:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 1867776. Throughput: 0: 776.2, 1: 773.6. Samples: 464276. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:52,838][06561] Avg episode reward: [(0, '8.338'), (1, '0.000')]
+[2023-09-26 04:53:55,454][07696] Updated weights for policy 0, policy_version 3676 (0.0017)
+[2023-09-26 04:53:55,455][07697] Updated weights for policy 1, policy_version 3680 (0.0018)
+[2023-09-26 04:53:57,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 1892352. Throughput: 0: 777.3, 1: 778.5. Samples: 473471. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:53:57,838][06561] Avg episode reward: [(0, '8.338'), (1, '0.000')]
+[2023-09-26 04:54:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 1925120. Throughput: 0: 776.9, 1: 777.3. Samples: 478226. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 04:54:02,838][06561] Avg episode reward: [(0, '8.292'), (1, '0.000')]
+[2023-09-26 04:54:07,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 1957888. Throughput: 0: 774.9, 1: 773.8. Samples: 487428. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:54:07,837][06561] Avg episode reward: [(0, '8.292'), (1, '0.000')]
+[2023-09-26 04:54:08,775][07696] Updated weights for policy 0, policy_version 3836 (0.0018)
+[2023-09-26 04:54:08,775][07697] Updated weights for policy 1, policy_version 3840 (0.0019)
+[2023-09-26 04:54:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 1990656. Throughput: 0: 776.2, 1: 777.1. Samples: 496803. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 04:54:12,838][06561] Avg episode reward: [(0, '8.292'), (1, '0.000')]
+[2023-09-26 04:54:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 2015232. Throughput: 0: 778.2, 1: 776.0. Samples: 501632. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 04:54:17,838][06561] Avg episode reward: [(0, '8.434'), (1, '0.000')]
+[2023-09-26 04:54:21,869][07697] Updated weights for policy 1, policy_version 4000 (0.0019)
+[2023-09-26 04:54:21,869][07696] Updated weights for policy 0, policy_version 3996 (0.0018)
+[2023-09-26 04:54:22,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 2048000. Throughput: 0: 776.5, 1: 776.3. Samples: 510644. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 04:54:22,838][06561] Avg episode reward: [(0, '8.434'), (1, '0.000')]
+[2023-09-26 04:54:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2080768. Throughput: 0: 776.1, 1: 775.3. Samples: 520164.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:54:27,838][06561] Avg episode reward: [(0, '8.434'), (1, '0.000')] +[2023-09-26 04:54:32,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2113536. Throughput: 0: 778.7, 1: 780.0. Samples: 524795. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 04:54:32,838][06561] Avg episode reward: [(0, '8.434'), (1, '0.000')] +[2023-09-26 04:54:35,103][07697] Updated weights for policy 1, policy_version 4160 (0.0016) +[2023-09-26 04:54:35,105][07696] Updated weights for policy 0, policy_version 4156 (0.0017) +[2023-09-26 04:54:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2146304. Throughput: 0: 776.9, 1: 776.3. Samples: 534172. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 04:54:37,838][06561] Avg episode reward: [(0, '8.625'), (1, '0.000')] +[2023-09-26 04:54:37,842][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000004192_1073152.pth... +[2023-09-26 04:54:37,843][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000004188_1073152.pth... +[2023-09-26 04:54:37,870][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000001264_323584.pth +[2023-09-26 04:54:37,877][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000001264_323584.pth +[2023-09-26 04:54:42,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 2170880. Throughput: 0: 776.2, 1: 776.2. Samples: 543332. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:54:42,838][06561] Avg episode reward: [(0, '8.625'), (1, '0.000')] +[2023-09-26 04:54:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2203648. Throughput: 0: 775.0, 1: 776.3. Samples: 548037. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:54:47,838][06561] Avg episode reward: [(0, '8.625'), (1, '0.000')] +[2023-09-26 04:54:48,336][07696] Updated weights for policy 0, policy_version 4316 (0.0018) +[2023-09-26 04:54:48,336][07697] Updated weights for policy 1, policy_version 4320 (0.0017) +[2023-09-26 04:54:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2236416. Throughput: 0: 773.8, 1: 775.1. Samples: 557129. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 04:54:52,838][06561] Avg episode reward: [(0, '8.583'), (1, '0.000')] +[2023-09-26 04:54:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2269184. Throughput: 0: 775.8, 1: 773.4. Samples: 566517. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:54:57,838][06561] Avg episode reward: [(0, '8.583'), (1, '0.000')] +[2023-09-26 04:55:01,640][07696] Updated weights for policy 0, policy_version 4476 (0.0016) +[2023-09-26 04:55:01,640][07697] Updated weights for policy 1, policy_version 4480 (0.0018) +[2023-09-26 04:55:02,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 2293760. Throughput: 0: 772.1, 1: 773.2. Samples: 571172. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:55:02,837][06561] Avg episode reward: [(0, '8.583'), (1, '0.000')] +[2023-09-26 04:55:07,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 2326528. Throughput: 0: 775.4, 1: 775.1. Samples: 580416. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 04:55:07,838][06561] Avg episode reward: [(0, '8.583'), (1, '0.000')] +[2023-09-26 04:55:12,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2359296. Throughput: 0: 774.3, 1: 773.8. Samples: 589827. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:55:12,838][06561] Avg episode reward: [(0, '8.443'), (1, '0.000')] +[2023-09-26 04:55:14,635][07697] Updated weights for policy 1, policy_version 4640 (0.0017) +[2023-09-26 04:55:14,635][07696] Updated weights for policy 0, policy_version 4636 (0.0017) +[2023-09-26 04:55:17,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2392064. Throughput: 0: 775.0, 1: 774.9. Samples: 594540. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 04:55:17,838][06561] Avg episode reward: [(0, '8.443'), (1, '0.000')] +[2023-09-26 04:55:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2424832. Throughput: 0: 778.1, 1: 777.1. Samples: 604157. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:55:22,838][06561] Avg episode reward: [(0, '8.443'), (1, '0.000')] +[2023-09-26 04:55:27,680][07696] Updated weights for policy 0, policy_version 4796 (0.0016) +[2023-09-26 04:55:27,680][07697] Updated weights for policy 1, policy_version 4800 (0.0018) +[2023-09-26 04:55:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2457600. Throughput: 0: 780.4, 1: 780.5. Samples: 613570. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:55:27,838][06561] Avg episode reward: [(0, '8.402'), (1, '0.000')] +[2023-09-26 04:55:32,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2482176. Throughput: 0: 781.3, 1: 778.2. Samples: 618214. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 04:55:32,838][06561] Avg episode reward: [(0, '8.402'), (1, '0.000')] +[2023-09-26 04:55:37,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2514944. Throughput: 0: 777.8, 1: 778.1. Samples: 627142. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 04:55:37,838][06561] Avg episode reward: [(0, '8.402'), (1, '0.000')] +[2023-09-26 04:55:40,887][07697] Updated weights for policy 1, policy_version 4960 (0.0017) +[2023-09-26 04:55:40,888][07696] Updated weights for policy 0, policy_version 4956 (0.0019) +[2023-09-26 04:55:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2547712. Throughput: 0: 782.4, 1: 782.3. Samples: 636928. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 04:55:42,838][06561] Avg episode reward: [(0, '8.490'), (1, '0.000')] +[2023-09-26 04:55:47,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2580480. Throughput: 0: 780.5, 1: 779.2. Samples: 641362. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:55:47,838][06561] Avg episode reward: [(0, '8.490'), (1, '0.000')] +[2023-09-26 04:55:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2613248. Throughput: 0: 783.9, 1: 783.5. Samples: 650948. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 04:55:52,838][06561] Avg episode reward: [(0, '8.490'), (1, '0.000')] +[2023-09-26 04:55:53,923][07697] Updated weights for policy 1, policy_version 5120 (0.0018) +[2023-09-26 04:55:53,923][07696] Updated weights for policy 0, policy_version 5116 (0.0017) +[2023-09-26 04:55:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2646016. Throughput: 0: 784.5, 1: 786.8. Samples: 660532. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:55:57,837][06561] Avg episode reward: [(0, '8.490'), (1, '0.000')] +[2023-09-26 04:56:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2670592. Throughput: 0: 785.7, 1: 786.1. Samples: 665271. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:56:02,838][06561] Avg episode reward: [(0, '8.730'), (1, '0.000')] +[2023-09-26 04:56:06,845][07696] Updated weights for policy 0, policy_version 5276 (0.0016) +[2023-09-26 04:56:06,846][07697] Updated weights for policy 1, policy_version 5280 (0.0017) +[2023-09-26 04:56:07,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2703360. Throughput: 0: 782.2, 1: 785.0. Samples: 674684. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:56:07,838][06561] Avg episode reward: [(0, '8.730'), (1, '0.000')] +[2023-09-26 04:56:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2736128. Throughput: 0: 783.7, 1: 784.0. Samples: 684116. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:56:12,838][06561] Avg episode reward: [(0, '8.730'), (1, '0.000')] +[2023-09-26 04:56:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2768896. Throughput: 0: 783.3, 1: 783.8. Samples: 688734. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 04:56:17,838][06561] Avg episode reward: [(0, '9.120'), (1, '0.000')] +[2023-09-26 04:56:19,967][07697] Updated weights for policy 1, policy_version 5440 (0.0017) +[2023-09-26 04:56:19,967][07696] Updated weights for policy 0, policy_version 5436 (0.0016) +[2023-09-26 04:56:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2801664. Throughput: 0: 791.1, 1: 790.4. Samples: 698311. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:56:22,838][06561] Avg episode reward: [(0, '9.120'), (1, '0.000')] +[2023-09-26 04:56:27,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2826240. Throughput: 0: 782.2, 1: 784.0. Samples: 707410. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:56:27,838][06561] Avg episode reward: [(0, '9.120'), (1, '0.000')] +[2023-09-26 04:56:32,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2859008. Throughput: 0: 783.4, 1: 784.2. Samples: 711900. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:56:32,838][06561] Avg episode reward: [(0, '9.120'), (1, '0.000')] +[2023-09-26 04:56:33,264][07696] Updated weights for policy 0, policy_version 5596 (0.0018) +[2023-09-26 04:56:33,264][07697] Updated weights for policy 1, policy_version 5600 (0.0016) +[2023-09-26 04:56:37,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2891776. Throughput: 0: 777.8, 1: 778.2. Samples: 720969. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 04:56:37,837][06561] Avg episode reward: [(0, '9.580'), (1, '0.000')] +[2023-09-26 04:56:37,846][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000005648_1445888.pth... +[2023-09-26 04:56:37,847][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000005644_1445888.pth... +[2023-09-26 04:56:37,883][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000002732_700416.pth +[2023-09-26 04:56:37,884][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000002736_700416.pth +[2023-09-26 04:56:37,886][07269] Saving new best policy, reward=9.580! +[2023-09-26 04:56:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 2924544. Throughput: 0: 781.9, 1: 780.2. Samples: 730827. 
Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 04:56:42,838][06561] Avg episode reward: [(0, '9.580'), (1, '0.000')] +[2023-09-26 04:56:46,531][07696] Updated weights for policy 0, policy_version 5756 (0.0017) +[2023-09-26 04:56:46,531][07697] Updated weights for policy 1, policy_version 5760 (0.0018) +[2023-09-26 04:56:47,843][06561] Fps is (10 sec: 6549.4, 60 sec: 6279.9, 300 sec: 6248.0). Total num frames: 2957312. Throughput: 0: 778.0, 1: 776.4. Samples: 735229. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:56:47,846][06561] Avg episode reward: [(0, '9.580'), (1, '0.000')] +[2023-09-26 04:56:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2981888. Throughput: 0: 773.9, 1: 772.5. Samples: 744271. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 04:56:52,838][06561] Avg episode reward: [(0, '10.380'), (1, '0.000')] +[2023-09-26 04:56:52,849][07269] Saving new best policy, reward=10.380! +[2023-09-26 04:56:57,837][06561] Fps is (10 sec: 5738.0, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3014656. Throughput: 0: 773.6, 1: 772.1. Samples: 753669. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 04:56:57,838][06561] Avg episode reward: [(0, '10.380'), (1, '0.000')] +[2023-09-26 04:56:59,625][07697] Updated weights for policy 1, policy_version 5920 (0.0016) +[2023-09-26 04:56:59,625][07696] Updated weights for policy 0, policy_version 5916 (0.0018) +[2023-09-26 04:57:02,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3047424. Throughput: 0: 774.2, 1: 774.9. Samples: 758441. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 04:57:02,838][06561] Avg episode reward: [(0, '10.380'), (1, '0.000')] +[2023-09-26 04:57:07,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 3080192. Throughput: 0: 772.1, 1: 773.8. Samples: 767879. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:57:07,837][06561] Avg episode reward: [(0, '10.380'), (1, '0.000')] +[2023-09-26 04:57:12,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 3108864. Throughput: 0: 773.6, 1: 772.9. Samples: 777002. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 04:57:12,838][06561] Avg episode reward: [(0, '11.260'), (1, '0.000')] +[2023-09-26 04:57:12,839][07269] Saving new best policy, reward=11.260! +[2023-09-26 04:57:12,869][07697] Updated weights for policy 1, policy_version 6080 (0.0016) +[2023-09-26 04:57:12,869][07696] Updated weights for policy 0, policy_version 6076 (0.0015) +[2023-09-26 04:57:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3137536. Throughput: 0: 778.0, 1: 777.2. Samples: 781882. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 04:57:17,838][06561] Avg episode reward: [(0, '11.260'), (1, '0.000')] +[2023-09-26 04:57:22,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3170304. Throughput: 0: 779.2, 1: 779.6. Samples: 791112. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 04:57:22,838][06561] Avg episode reward: [(0, '11.260'), (1, '0.000')] +[2023-09-26 04:57:25,956][07696] Updated weights for policy 0, policy_version 6236 (0.0018) +[2023-09-26 04:57:25,957][07697] Updated weights for policy 1, policy_version 6240 (0.0019) +[2023-09-26 04:57:27,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3203072. Throughput: 0: 776.7, 1: 775.6. Samples: 800678. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:57:27,838][06561] Avg episode reward: [(0, '12.260'), (1, '0.000')] +[2023-09-26 04:57:27,839][07269] Saving new best policy, reward=12.260! +[2023-09-26 04:57:32,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3235840. Throughput: 0: 773.9, 1: 774.0. 
Samples: 804872. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 04:57:32,838][06561] Avg episode reward: [(0, '12.260'), (1, '0.000')] +[2023-09-26 04:57:37,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3268608. Throughput: 0: 782.5, 1: 782.0. Samples: 814674. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 04:57:37,838][06561] Avg episode reward: [(0, '12.260'), (1, '0.000')] +[2023-09-26 04:57:39,031][07696] Updated weights for policy 0, policy_version 6396 (0.0015) +[2023-09-26 04:57:39,031][07697] Updated weights for policy 1, policy_version 6400 (0.0018) +[2023-09-26 04:57:42,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3293184. Throughput: 0: 778.2, 1: 780.2. Samples: 823797. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 04:57:42,837][06561] Avg episode reward: [(0, '13.180'), (1, '0.000')] +[2023-09-26 04:57:42,838][07269] Saving new best policy, reward=13.180! +[2023-09-26 04:57:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.7, 300 sec: 6220.4). Total num frames: 3325952. Throughput: 0: 776.4, 1: 777.8. Samples: 828379. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:57:47,838][06561] Avg episode reward: [(0, '13.180'), (1, '0.000')] +[2023-09-26 04:57:52,410][07696] Updated weights for policy 0, policy_version 6556 (0.0016) +[2023-09-26 04:57:52,410][07697] Updated weights for policy 1, policy_version 6560 (0.0018) +[2023-09-26 04:57:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 3358720. Throughput: 0: 776.4, 1: 773.7. Samples: 837632. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:57:52,838][06561] Avg episode reward: [(0, '13.180'), (1, '0.000')] +[2023-09-26 04:57:57,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3391488. Throughput: 0: 775.8, 1: 776.0. Samples: 846833. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 04:57:57,838][06561] Avg episode reward: [(0, '13.180'), (1, '0.000')] +[2023-09-26 04:58:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3416064. Throughput: 0: 774.9, 1: 773.3. Samples: 851550. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 04:58:02,838][06561] Avg episode reward: [(0, '14.120'), (1, '0.000')] +[2023-09-26 04:58:03,008][07269] Saving new best policy, reward=14.120! +[2023-09-26 04:58:05,697][07696] Updated weights for policy 0, policy_version 6716 (0.0018) +[2023-09-26 04:58:05,697][07697] Updated weights for policy 1, policy_version 6720 (0.0017) +[2023-09-26 04:58:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3448832. Throughput: 0: 772.3, 1: 773.1. Samples: 860658. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 04:58:07,838][06561] Avg episode reward: [(0, '14.120'), (1, '0.000')] +[2023-09-26 04:58:12,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 3481600. Throughput: 0: 766.2, 1: 769.7. Samples: 869794. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:58:12,838][06561] Avg episode reward: [(0, '14.120'), (1, '0.000')] +[2023-09-26 04:58:17,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3514368. Throughput: 0: 773.6, 1: 773.5. Samples: 874492. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 04:58:17,838][06561] Avg episode reward: [(0, '15.070'), (1, '0.000')] +[2023-09-26 04:58:17,839][07269] Saving new best policy, reward=15.070! +[2023-09-26 04:58:19,277][07696] Updated weights for policy 0, policy_version 6876 (0.0017) +[2023-09-26 04:58:19,277][07697] Updated weights for policy 1, policy_version 6880 (0.0016) +[2023-09-26 04:58:22,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 3538944. Throughput: 0: 761.5, 1: 762.7. 
Samples: 883262. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 04:58:22,837][06561] Avg episode reward: [(0, '15.070'), (1, '0.000')] +[2023-09-26 04:58:27,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3571712. Throughput: 0: 764.3, 1: 762.9. Samples: 892520. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:58:27,837][06561] Avg episode reward: [(0, '15.070'), (1, '0.000')] +[2023-09-26 04:58:32,723][07696] Updated weights for policy 0, policy_version 7036 (0.0017) +[2023-09-26 04:58:32,723][07697] Updated weights for policy 1, policy_version 7040 (0.0018) +[2023-09-26 04:58:32,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3604480. Throughput: 0: 764.0, 1: 761.5. Samples: 897024. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:58:32,838][06561] Avg episode reward: [(0, '15.070'), (1, '0.000')] +[2023-09-26 04:58:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6192.6). Total num frames: 3629056. Throughput: 0: 762.3, 1: 765.0. Samples: 906363. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 04:58:37,837][06561] Avg episode reward: [(0, '15.350'), (1, '0.000')] +[2023-09-26 04:58:37,846][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000007088_1814528.pth... +[2023-09-26 04:58:37,846][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000007084_1814528.pth... +[2023-09-26 04:58:37,880][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000004188_1073152.pth +[2023-09-26 04:58:37,883][07269] Saving new best policy, reward=15.350! +[2023-09-26 04:58:37,884][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000004192_1073152.pth +[2023-09-26 04:58:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 3661824. Throughput: 0: 763.2, 1: 761.8. Samples: 915456. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 04:58:42,838][06561] Avg episode reward: [(0, '15.350'), (1, '0.000')] +[2023-09-26 04:58:46,064][07697] Updated weights for policy 1, policy_version 7200 (0.0017) +[2023-09-26 04:58:46,064][07696] Updated weights for policy 0, policy_version 7196 (0.0018) +[2023-09-26 04:58:47,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 3694592. Throughput: 0: 758.0, 1: 760.0. Samples: 919864. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 04:58:47,838][06561] Avg episode reward: [(0, '15.350'), (1, '0.000')] +[2023-09-26 04:58:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3727360. Throughput: 0: 765.5, 1: 763.7. Samples: 929471. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:58:52,838][06561] Avg episode reward: [(0, '17.330'), (1, '0.000')] +[2023-09-26 04:58:52,848][07269] Saving new best policy, reward=17.330! +[2023-09-26 04:58:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6192.6). Total num frames: 3751936. Throughput: 0: 763.9, 1: 762.4. Samples: 938481. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 04:58:57,838][06561] Avg episode reward: [(0, '17.330'), (1, '0.000')] +[2023-09-26 04:58:59,336][07696] Updated weights for policy 0, policy_version 7356 (0.0017) +[2023-09-26 04:58:59,337][07697] Updated weights for policy 1, policy_version 7360 (0.0015) +[2023-09-26 04:59:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 3784704. Throughput: 0: 763.8, 1: 765.4. Samples: 943306. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 04:59:02,838][06561] Avg episode reward: [(0, '17.330'), (1, '0.000')] +[2023-09-26 04:59:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 3817472. Throughput: 0: 768.4, 1: 767.1. Samples: 952358. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 04:59:07,838][06561] Avg episode reward: [(0, '17.330'), (1, '0.000')] +[2023-09-26 04:59:12,537][07696] Updated weights for policy 0, policy_version 7516 (0.0016) +[2023-09-26 04:59:12,539][07697] Updated weights for policy 1, policy_version 7520 (0.0020) +[2023-09-26 04:59:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3850240. Throughput: 0: 772.1, 1: 771.2. Samples: 961967. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 04:59:12,838][06561] Avg episode reward: [(0, '18.410'), (1, '0.000')] +[2023-09-26 04:59:12,839][07269] Saving new best policy, reward=18.410! +[2023-09-26 04:59:17,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3883008. Throughput: 0: 772.9, 1: 773.7. Samples: 966621. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 04:59:17,837][06561] Avg episode reward: [(0, '18.410'), (1, '0.000')] +[2023-09-26 04:59:22,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 3907584. Throughput: 0: 773.8, 1: 771.7. Samples: 975910. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 04:59:22,837][06561] Avg episode reward: [(0, '18.410'), (1, '0.000')] +[2023-09-26 04:59:25,696][07696] Updated weights for policy 0, policy_version 7676 (0.0018) +[2023-09-26 04:59:25,696][07697] Updated weights for policy 1, policy_version 7680 (0.0018) +[2023-09-26 04:59:27,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 3940352. Throughput: 0: 773.7, 1: 773.7. Samples: 985089. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 04:59:27,838][06561] Avg episode reward: [(0, '19.050'), (1, '0.000')] +[2023-09-26 04:59:27,839][07269] Saving new best policy, reward=19.050! +[2023-09-26 04:59:32,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 3973120. 
Throughput: 0: 775.5, 1: 775.4. Samples: 989654. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 04:59:32,838][06561] Avg episode reward: [(0, '19.050'), (1, '0.000')]
+[2023-09-26 04:59:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 4005888. Throughput: 0: 775.6, 1: 776.5. Samples: 999315. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 04:59:37,838][06561] Avg episode reward: [(0, '19.050'), (1, '0.000')]
+[2023-09-26 04:59:38,787][07696] Updated weights for policy 0, policy_version 7836 (0.0019)
+[2023-09-26 04:59:38,787][07697] Updated weights for policy 1, policy_version 7840 (0.0019)
+[2023-09-26 04:59:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 4038656. Throughput: 0: 780.8, 1: 780.3. Samples: 1008731. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:59:42,838][06561] Avg episode reward: [(0, '19.340'), (1, '0.000')]
+[2023-09-26 04:59:42,839][07269] Saving new best policy, reward=19.340!
+[2023-09-26 04:59:47,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4063232. Throughput: 0: 780.4, 1: 780.1. Samples: 1013527. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:59:47,838][06561] Avg episode reward: [(0, '20.350'), (1, '0.000')]
+[2023-09-26 04:59:47,906][07269] Saving new best policy, reward=20.350!
+[2023-09-26 04:59:51,835][07696] Updated weights for policy 0, policy_version 7996 (0.0018)
+[2023-09-26 04:59:51,836][07697] Updated weights for policy 1, policy_version 8000 (0.0017)
+[2023-09-26 04:59:52,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4096000. Throughput: 0: 781.9, 1: 782.2. Samples: 1022743. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 04:59:52,838][06561] Avg episode reward: [(0, '20.350'), (1, '0.000')]
+[2023-09-26 04:59:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 4128768. Throughput: 0: 780.2, 1: 780.3. Samples: 1032192. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 04:59:57,838][06561] Avg episode reward: [(0, '20.350'), (1, '0.000')]
+[2023-09-26 05:00:02,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 4161536. Throughput: 0: 779.9, 1: 780.7. Samples: 1036846. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:00:02,838][06561] Avg episode reward: [(0, '21.440'), (1, '0.000')]
+[2023-09-26 05:00:02,839][07269] Saving new best policy, reward=21.440!
+[2023-09-26 05:00:04,991][07697] Updated weights for policy 1, policy_version 8160 (0.0017)
+[2023-09-26 05:00:04,991][07696] Updated weights for policy 0, policy_version 8156 (0.0018)
+[2023-09-26 05:00:07,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 4194304. Throughput: 0: 782.6, 1: 781.8. Samples: 1046311. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:00:07,837][06561] Avg episode reward: [(0, '21.440'), (1, '0.000')]
+[2023-09-26 05:00:12,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4218880. Throughput: 0: 778.8, 1: 780.5. Samples: 1055256. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:12,837][06561] Avg episode reward: [(0, '21.440'), (1, '0.000')]
+[2023-09-26 05:00:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4251648. Throughput: 0: 781.0, 1: 780.6. Samples: 1059924. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:17,838][06561] Avg episode reward: [(0, '23.050'), (1, '0.000')]
+[2023-09-26 05:00:17,839][07269] Saving new best policy, reward=23.050!
+[2023-09-26 05:00:18,285][07696] Updated weights for policy 0, policy_version 8316 (0.0016)
+[2023-09-26 05:00:18,285][07697] Updated weights for policy 1, policy_version 8320 (0.0018)
+[2023-09-26 05:00:22,837][06561] Fps is (10 sec: 6553.3, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 4284416. Throughput: 0: 775.8, 1: 774.9. Samples: 1069097. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:22,838][06561] Avg episode reward: [(0, '23.050'), (1, '0.000')]
+[2023-09-26 05:00:27,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 4317184. Throughput: 0: 773.4, 1: 774.2. Samples: 1078370. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:27,837][06561] Avg episode reward: [(0, '23.050'), (1, '0.000')]
+[2023-09-26 05:00:31,763][07697] Updated weights for policy 1, policy_version 8480 (0.0016)
+[2023-09-26 05:00:31,763][07696] Updated weights for policy 0, policy_version 8476 (0.0018)
+[2023-09-26 05:00:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4341760. Throughput: 0: 773.2, 1: 772.0. Samples: 1083060. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:32,838][06561] Avg episode reward: [(0, '23.050'), (1, '0.000')]
+[2023-09-26 05:00:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4374528. Throughput: 0: 770.4, 1: 771.0. Samples: 1092106. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 05:00:37,837][06561] Avg episode reward: [(0, '24.160'), (1, '0.000')]
+[2023-09-26 05:00:37,846][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000008544_2187264.pth...
+[2023-09-26 05:00:37,846][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000008540_2187264.pth...
+[2023-09-26 05:00:37,881][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000005644_1445888.pth
+[2023-09-26 05:00:37,884][07269] Saving new best policy, reward=24.160!
+[2023-09-26 05:00:37,887][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000005648_1445888.pth
+[2023-09-26 05:00:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4407296. Throughput: 0: 772.3, 1: 773.7. Samples: 1101762. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:42,838][06561] Avg episode reward: [(0, '24.160'), (1, '0.000')]
+[2023-09-26 05:00:44,857][07697] Updated weights for policy 1, policy_version 8640 (0.0018)
+[2023-09-26 05:00:44,858][07696] Updated weights for policy 0, policy_version 8636 (0.0019)
+[2023-09-26 05:00:47,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 4440064. Throughput: 0: 769.7, 1: 770.0. Samples: 1106130. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:47,838][06561] Avg episode reward: [(0, '24.160'), (1, '0.000')]
+[2023-09-26 05:00:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 4472832. Throughput: 0: 772.6, 1: 773.4. Samples: 1115878. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:52,837][06561] Avg episode reward: [(0, '26.600'), (1, '0.000')]
+[2023-09-26 05:00:52,845][07269] Saving new best policy, reward=26.600!
+[2023-09-26 05:00:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4497408. Throughput: 0: 773.2, 1: 772.7. Samples: 1124823. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:00:57,838][06561] Avg episode reward: [(0, '26.600'), (1, '0.000')]
+[2023-09-26 05:00:57,998][07697] Updated weights for policy 1, policy_version 8800 (0.0019)
+[2023-09-26 05:00:57,999][07696] Updated weights for policy 0, policy_version 8796 (0.0017)
+[2023-09-26 05:01:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4530176. Throughput: 0: 774.8, 1: 774.5. Samples: 1129641. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:01:02,837][06561] Avg episode reward: [(0, '26.600'), (1, '0.000')]
+[2023-09-26 05:01:07,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4562944. Throughput: 0: 774.1, 1: 774.9. Samples: 1138803. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 05:01:07,837][06561] Avg episode reward: [(0, '26.600'), (1, '0.000')]
+[2023-09-26 05:01:11,193][07696] Updated weights for policy 0, policy_version 8956 (0.0015)
+[2023-09-26 05:01:11,194][07697] Updated weights for policy 1, policy_version 8960 (0.0018)
+[2023-09-26 05:01:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 4595712. Throughput: 0: 778.8, 1: 778.7. Samples: 1148460. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 05:01:12,838][06561] Avg episode reward: [(0, '28.790'), (1, '0.000')]
+[2023-09-26 05:01:12,839][07269] Saving new best policy, reward=28.790!
+[2023-09-26 05:01:17,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 4628480. Throughput: 0: 777.5, 1: 777.3. Samples: 1153025. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:01:17,838][06561] Avg episode reward: [(0, '28.790'), (1, '0.000')]
+[2023-09-26 05:01:22,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4653056. Throughput: 0: 779.6, 1: 779.1. Samples: 1162248. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:01:22,838][06561] Avg episode reward: [(0, '28.790'), (1, '0.000')]
+[2023-09-26 05:01:24,352][07696] Updated weights for policy 0, policy_version 9116 (0.0016)
+[2023-09-26 05:01:24,352][07697] Updated weights for policy 1, policy_version 9120 (0.0017)
+[2023-09-26 05:01:27,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4685824. Throughput: 0: 775.2, 1: 775.3. Samples: 1171536. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:01:27,838][06561] Avg episode reward: [(0, '29.760'), (1, '0.000')]
+[2023-09-26 05:01:27,839][07269] Saving new best policy, reward=29.760!
+[2023-09-26 05:01:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 4718592. Throughput: 0: 782.2, 1: 782.5. Samples: 1176542. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:01:32,838][06561] Avg episode reward: [(0, '29.760'), (1, '0.000')]
+[2023-09-26 05:01:37,523][07697] Updated weights for policy 1, policy_version 9280 (0.0016)
+[2023-09-26 05:01:37,523][07696] Updated weights for policy 0, policy_version 9276 (0.0018)
+[2023-09-26 05:01:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 4751360. Throughput: 0: 777.2, 1: 776.5. Samples: 1185792. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:01:37,838][06561] Avg episode reward: [(0, '29.760'), (1, '0.000')]
+[2023-09-26 05:01:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.7). Total num frames: 4784128. Throughput: 0: 782.6, 1: 782.8. Samples: 1195265. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:01:42,837][06561] Avg episode reward: [(0, '31.370'), (1, '0.000')]
+[2023-09-26 05:01:42,838][07269] Saving new best policy, reward=31.370!
+[2023-09-26 05:01:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4808704. Throughput: 0: 779.0, 1: 779.6. Samples: 1199780. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 05:01:47,838][06561] Avg episode reward: [(0, '31.370'), (1, '0.000')]
+[2023-09-26 05:01:50,751][07697] Updated weights for policy 1, policy_version 9440 (0.0019)
+[2023-09-26 05:01:50,751][07696] Updated weights for policy 0, policy_version 9436 (0.0019)
+[2023-09-26 05:01:52,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4841472. Throughput: 0: 777.0, 1: 776.3. Samples: 1208701. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 05:01:52,838][06561] Avg episode reward: [(0, '31.370'), (1, '0.000')]
+[2023-09-26 05:01:57,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 4874240. Throughput: 0: 774.9, 1: 773.4. Samples: 1218134. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:01:57,837][06561] Avg episode reward: [(0, '31.370'), (1, '0.000')]
+[2023-09-26 05:02:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 4907008. Throughput: 0: 773.4, 1: 773.7. Samples: 1222642. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:02:02,838][06561] Avg episode reward: [(0, '31.960'), (1, '0.000')]
+[2023-09-26 05:02:02,839][07269] Saving new best policy, reward=31.960!
+[2023-09-26 05:02:04,026][07696] Updated weights for policy 0, policy_version 9596 (0.0016)
+[2023-09-26 05:02:04,027][07697] Updated weights for policy 1, policy_version 9600 (0.0018)
+[2023-09-26 05:02:07,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 4931584. Throughput: 0: 775.1, 1: 776.1. Samples: 1232052. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:02:07,838][06561] Avg episode reward: [(0, '31.960'), (1, '0.000')]
+[2023-09-26 05:02:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4964352. Throughput: 0: 777.5, 1: 777.6. Samples: 1241516. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:02:12,838][06561] Avg episode reward: [(0, '31.960'), (1, '0.000')]
+[2023-09-26 05:02:16,940][07697] Updated weights for policy 1, policy_version 9760 (0.0019)
+[2023-09-26 05:02:16,940][07696] Updated weights for policy 0, policy_version 9756 (0.0018)
+[2023-09-26 05:02:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 4997120. Throughput: 0: 777.1, 1: 776.1. Samples: 1246436. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:02:17,838][06561] Avg episode reward: [(0, '32.860'), (1, '0.000')]
+[2023-09-26 05:02:17,839][07269] Saving new best policy, reward=32.860!
+[2023-09-26 05:02:22,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 5029888. Throughput: 0: 776.6, 1: 778.1. Samples: 1255754. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:02:22,838][06561] Avg episode reward: [(0, '32.860'), (1, '0.000')]
+[2023-09-26 05:02:27,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 5062656. Throughput: 0: 779.7, 1: 778.7. Samples: 1265394. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:02:27,837][06561] Avg episode reward: [(0, '32.860'), (1, '0.000')]
+[2023-09-26 05:02:29,949][07696] Updated weights for policy 0, policy_version 9916 (0.0017)
+[2023-09-26 05:02:29,949][07697] Updated weights for policy 1, policy_version 9920 (0.0017)
+[2023-09-26 05:02:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 5095424. Throughput: 0: 778.4, 1: 777.9. Samples: 1269810. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:02:32,838][06561] Avg episode reward: [(0, '32.860'), (1, '0.000')]
+[2023-09-26 05:02:37,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5128192. Throughput: 0: 787.4, 1: 788.3. Samples: 1279607. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:02:37,838][06561] Avg episode reward: [(0, '34.410'), (1, '0.000')]
+[2023-09-26 05:02:37,849][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000010016_2564096.pth...
+[2023-09-26 05:02:37,849][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000010012_2564096.pth...
+[2023-09-26 05:02:37,878][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000007088_1814528.pth
+[2023-09-26 05:02:37,885][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000007084_1814528.pth
+[2023-09-26 05:02:37,888][07269] Saving new best policy, reward=34.410!
+[2023-09-26 05:02:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 5152768. Throughput: 0: 782.7, 1: 784.7. Samples: 1288666. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:02:42,838][06561] Avg episode reward: [(0, '34.410'), (1, '0.000')]
+[2023-09-26 05:02:43,002][07696] Updated weights for policy 0, policy_version 10076 (0.0017)
+[2023-09-26 05:02:43,002][07697] Updated weights for policy 1, policy_version 10080 (0.0017)
+[2023-09-26 05:02:47,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 5185536. Throughput: 0: 788.0, 1: 787.4. Samples: 1293539. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 05:02:47,839][06561] Avg episode reward: [(0, '34.410'), (1, '0.000')]
+[2023-09-26 05:02:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 5218304. Throughput: 0: 784.2, 1: 782.0. Samples: 1302532. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 05:02:52,838][06561] Avg episode reward: [(0, '37.930'), (1, '0.000')]
+[2023-09-26 05:02:52,846][07269] Saving new best policy, reward=37.930!
+[2023-09-26 05:02:56,425][07696] Updated weights for policy 0, policy_version 10236 (0.0019)
+[2023-09-26 05:02:56,425][07697] Updated weights for policy 1, policy_version 10240 (0.0017)
+[2023-09-26 05:02:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5251072. Throughput: 0: 781.3, 1: 780.4. Samples: 1311793. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:02:57,838][06561] Avg episode reward: [(0, '37.930'), (1, '0.000')]
+[2023-09-26 05:03:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 5275648. Throughput: 0: 780.0, 1: 780.6. Samples: 1316660. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:03:02,838][06561] Avg episode reward: [(0, '37.930'), (1, '0.000')]
+[2023-09-26 05:03:07,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 5308416. Throughput: 0: 779.8, 1: 779.4. Samples: 1325920. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:03:07,838][06561] Avg episode reward: [(0, '39.190'), (1, '0.000')]
+[2023-09-26 05:03:07,847][07269] Saving new best policy, reward=39.190!
+[2023-09-26 05:03:09,401][07697] Updated weights for policy 1, policy_version 10400 (0.0017)
+[2023-09-26 05:03:09,402][07696] Updated weights for policy 0, policy_version 10396 (0.0018)
+[2023-09-26 05:03:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 5341184. Throughput: 0: 776.9, 1: 776.5. Samples: 1335296. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 05:03:12,838][06561] Avg episode reward: [(0, '40.410'), (1, '0.000')]
+[2023-09-26 05:03:12,839][07269] Saving new best policy, reward=40.410!
+[2023-09-26 05:03:17,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5373952. Throughput: 0: 778.4, 1: 779.0. Samples: 1339893. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 05:03:17,838][06561] Avg episode reward: [(0, '40.410'), (1, '0.000')]
+[2023-09-26 05:03:22,350][07697] Updated weights for policy 1, policy_version 10560 (0.0013)
+[2023-09-26 05:03:22,351][07696] Updated weights for policy 0, policy_version 10556 (0.0016)
+[2023-09-26 05:03:22,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 5406720. Throughput: 0: 779.0, 1: 777.2. Samples: 1349632. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:03:22,838][06561] Avg episode reward: [(0, '40.410'), (1, '0.000')]
+[2023-09-26 05:03:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5439488. Throughput: 0: 783.4, 1: 783.0. Samples: 1359151. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:03:27,838][06561] Avg episode reward: [(0, '41.460'), (1, '0.000')]
+[2023-09-26 05:03:27,838][07269] Saving new best policy, reward=41.460!
+[2023-09-26 05:03:32,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5472256. Throughput: 0: 782.0, 1: 782.7. Samples: 1363952. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:03:32,838][06561] Avg episode reward: [(0, '41.460'), (1, '0.000')]
+[2023-09-26 05:03:35,387][07696] Updated weights for policy 0, policy_version 10716 (0.0017)
+[2023-09-26 05:03:35,387][07697] Updated weights for policy 1, policy_version 10720 (0.0018)
+[2023-09-26 05:03:37,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 5496832. Throughput: 0: 784.5, 1: 786.6. Samples: 1373233. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:03:37,838][06561] Avg episode reward: [(0, '41.460'), (1, '0.000')]
+[2023-09-26 05:03:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5529600. Throughput: 0: 784.9, 1: 784.2. Samples: 1382405. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:03:42,838][06561] Avg episode reward: [(0, '44.700'), (1, '0.000')]
+[2023-09-26 05:03:42,840][07269] Saving new best policy, reward=44.700!
+[2023-09-26 05:03:47,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 5562368. Throughput: 0: 783.8, 1: 783.4. Samples: 1387186. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:03:47,838][06561] Avg episode reward: [(0, '44.700'), (1, '0.000')]
+[2023-09-26 05:03:48,604][07696] Updated weights for policy 0, policy_version 10876 (0.0018)
+[2023-09-26 05:03:48,604][07697] Updated weights for policy 1, policy_version 10880 (0.0019)
+[2023-09-26 05:03:52,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5595136. Throughput: 0: 787.1, 1: 786.3. Samples: 1396725. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:03:52,838][06561] Avg episode reward: [(0, '44.700'), (1, '0.000')]
+[2023-09-26 05:03:57,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5627904. Throughput: 0: 782.5, 1: 784.1. Samples: 1405793. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:03:57,838][06561] Avg episode reward: [(0, '44.700'), (1, '0.000')]
+[2023-09-26 05:04:01,761][07696] Updated weights for policy 0, policy_version 11036 (0.0017)
+[2023-09-26 05:04:01,761][07697] Updated weights for policy 1, policy_version 11040 (0.0017)
+[2023-09-26 05:04:02,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5652480. Throughput: 0: 786.8, 1: 787.1. Samples: 1410721. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:04:02,838][06561] Avg episode reward: [(0, '48.160'), (1, '0.000')]
+[2023-09-26 05:04:02,839][07269] Saving new best policy, reward=48.160!
+[2023-09-26 05:04:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5685248. Throughput: 0: 776.5, 1: 778.5. Samples: 1419605. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:04:07,838][06561] Avg episode reward: [(0, '48.160'), (1, '0.000')]
+[2023-09-26 05:04:12,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5718016. Throughput: 0: 780.5, 1: 780.1. Samples: 1429378. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:04:12,838][06561] Avg episode reward: [(0, '48.160'), (1, '0.000')]
+[2023-09-26 05:04:14,916][07697] Updated weights for policy 1, policy_version 11200 (0.0017)
+[2023-09-26 05:04:14,916][07696] Updated weights for policy 0, policy_version 11196 (0.0018)
+[2023-09-26 05:04:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5750784. Throughput: 0: 774.3, 1: 775.3. Samples: 1433684. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:04:17,838][06561] Avg episode reward: [(0, '51.640'), (1, '0.000')]
+[2023-09-26 05:04:17,838][07269] Saving new best policy, reward=51.640!
+[2023-09-26 05:04:22,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 5779456. Throughput: 0: 776.4, 1: 775.7. Samples: 1443076. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:04:22,838][06561] Avg episode reward: [(0, '51.640'), (1, '0.000')]
+[2023-09-26 05:04:27,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 5808128. Throughput: 0: 774.0, 1: 775.6. Samples: 1452136. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:04:27,838][06561] Avg episode reward: [(0, '51.640'), (1, '0.000')]
+[2023-09-26 05:04:28,247][07696] Updated weights for policy 0, policy_version 11356 (0.0018)
+[2023-09-26 05:04:28,247][07697] Updated weights for policy 1, policy_version 11360 (0.0016)
+[2023-09-26 05:04:32,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 5840896. Throughput: 0: 774.9, 1: 774.6. Samples: 1456913. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:04:32,838][06561] Avg episode reward: [(0, '51.640'), (1, '0.000')]
+[2023-09-26 05:04:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 5873664. Throughput: 0: 771.4, 1: 772.6. Samples: 1466203. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:04:37,839][06561] Avg episode reward: [(0, '54.810'), (1, '0.000')]
+[2023-09-26 05:04:37,850][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000011468_2936832.pth...
+[2023-09-26 05:04:37,850][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000011472_2936832.pth...
+[2023-09-26 05:04:37,885][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000008540_2187264.pth
+[2023-09-26 05:04:37,889][07269] Saving new best policy, reward=54.810!
+[2023-09-26 05:04:37,893][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000008544_2187264.pth
+[2023-09-26 05:04:41,581][07697] Updated weights for policy 1, policy_version 11520 (0.0017)
+[2023-09-26 05:04:41,581][07696] Updated weights for policy 0, policy_version 11516 (0.0016)
+[2023-09-26 05:04:42,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 5902336. Throughput: 0: 772.3, 1: 772.0. Samples: 1475286. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:04:42,838][06561] Avg episode reward: [(0, '54.810'), (1, '0.000')]
+[2023-09-26 05:04:47,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 5931008. Throughput: 0: 771.6, 1: 770.8. Samples: 1480130. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:04:47,837][06561] Avg episode reward: [(0, '54.810'), (1, '0.000')]
+[2023-09-26 05:04:52,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 5963776. Throughput: 0: 775.6, 1: 774.6. Samples: 1489367. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:04:52,838][06561] Avg episode reward: [(0, '56.050'), (1, '0.000')]
+[2023-09-26 05:04:52,849][07269] Saving new best policy, reward=56.050!
+[2023-09-26 05:04:54,713][07697] Updated weights for policy 1, policy_version 11680 (0.0016)
+[2023-09-26 05:04:54,713][07696] Updated weights for policy 0, policy_version 11676 (0.0018)
+[2023-09-26 05:04:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 5996544. Throughput: 0: 773.8, 1: 774.2. Samples: 1499035. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:04:57,838][06561] Avg episode reward: [(0, '56.050'), (1, '0.000')]
+[2023-09-26 05:05:02,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 6029312. Throughput: 0: 774.3, 1: 774.3. Samples: 1503372. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:02,837][06561] Avg episode reward: [(0, '56.050'), (1, '0.000')]
+[2023-09-26 05:05:07,722][07696] Updated weights for policy 0, policy_version 11836 (0.0018)
+[2023-09-26 05:05:07,722][07697] Updated weights for policy 1, policy_version 11840 (0.0018)
+[2023-09-26 05:05:07,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6062080. Throughput: 0: 778.3, 1: 777.3. Samples: 1513080. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:07,839][06561] Avg episode reward: [(0, '58.930'), (1, '0.000')]
+[2023-09-26 05:05:07,849][07269] Saving new best policy, reward=58.930!
+[2023-09-26 05:05:12,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 6090752. Throughput: 0: 781.0, 1: 780.3. Samples: 1522396. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:05:12,838][06561] Avg episode reward: [(0, '58.930'), (1, '0.000')]
+[2023-09-26 05:05:17,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6119424. Throughput: 0: 780.2, 1: 782.3. Samples: 1527224. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:05:17,838][06561] Avg episode reward: [(0, '58.930'), (1, '0.000')]
+[2023-09-26 05:05:20,757][07697] Updated weights for policy 1, policy_version 12000 (0.0017)
+[2023-09-26 05:05:20,757][07696] Updated weights for policy 0, policy_version 11996 (0.0018)
+[2023-09-26 05:05:22,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 6152192. Throughput: 0: 779.3, 1: 779.9. Samples: 1536366. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:22,837][06561] Avg episode reward: [(0, '58.930'), (1, '0.000')]
+[2023-09-26 05:05:27,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6184960. Throughput: 0: 787.6, 1: 787.0. Samples: 1546142. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:27,838][06561] Avg episode reward: [(0, '61.420'), (1, '0.000')]
+[2023-09-26 05:05:27,839][07269] Saving new best policy, reward=61.420!
+[2023-09-26 05:05:32,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6217728. Throughput: 0: 781.0, 1: 781.4. Samples: 1550441. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:32,838][06561] Avg episode reward: [(0, '61.420'), (1, '0.000')]
+[2023-09-26 05:05:33,930][07696] Updated weights for policy 0, policy_version 12156 (0.0015)
+[2023-09-26 05:05:33,930][07697] Updated weights for policy 1, policy_version 12160 (0.0016)
+[2023-09-26 05:05:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6242304. Throughput: 0: 783.4, 1: 782.8. Samples: 1559847. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:37,838][06561] Avg episode reward: [(0, '61.420'), (1, '0.000')]
+[2023-09-26 05:05:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 6275072. Throughput: 0: 775.8, 1: 774.8. Samples: 1568812. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:42,838][06561] Avg episode reward: [(0, '63.900'), (1, '0.000')]
+[2023-09-26 05:05:42,840][07269] Saving new best policy, reward=63.900!
+[2023-09-26 05:05:47,233][07696] Updated weights for policy 0, policy_version 12316 (0.0018)
+[2023-09-26 05:05:47,233][07697] Updated weights for policy 1, policy_version 12320 (0.0019)
+[2023-09-26 05:05:47,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 6307840. Throughput: 0: 780.0, 1: 779.9. Samples: 1573565. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:47,838][06561] Avg episode reward: [(0, '63.900'), (1, '0.000')]
+[2023-09-26 05:05:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 6340608. Throughput: 0: 777.2, 1: 777.8. Samples: 1583053. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:52,838][06561] Avg episode reward: [(0, '63.900'), (1, '0.000')]
+[2023-09-26 05:05:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6373376. Throughput: 0: 776.5, 1: 777.2. Samples: 1592312. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:05:57,838][06561] Avg episode reward: [(0, '63.900'), (1, '0.000')]
+[2023-09-26 05:06:00,532][07697] Updated weights for policy 1, policy_version 12480 (0.0017)
+[2023-09-26 05:06:00,532][07696] Updated weights for policy 0, policy_version 12476 (0.0017)
+[2023-09-26 05:06:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6397952. Throughput: 0: 775.7, 1: 773.0. Samples: 1596917. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:02,837][06561] Avg episode reward: [(0, '65.370'), (1, '0.000')]
+[2023-09-26 05:06:02,838][07269] Saving new best policy, reward=65.370!
+[2023-09-26 05:06:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6430720. Throughput: 0: 770.7, 1: 769.8. Samples: 1605691. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:07,838][06561] Avg episode reward: [(0, '65.370'), (1, '0.000')]
+[2023-09-26 05:06:12,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 6463488. Throughput: 0: 771.3, 1: 772.1. Samples: 1615594. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:12,838][06561] Avg episode reward: [(0, '65.370'), (1, '0.000')]
+[2023-09-26 05:06:13,684][07696] Updated weights for policy 0, policy_version 12636 (0.0017)
+[2023-09-26 05:06:13,684][07697] Updated weights for policy 1, policy_version 12640 (0.0016)
+[2023-09-26 05:06:17,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6496256. Throughput: 0: 773.3, 1: 771.8. Samples: 1619968. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:17,838][06561] Avg episode reward: [(0, '67.940'), (1, '0.000')]
+[2023-09-26 05:06:17,839][07269] Saving new best policy, reward=67.940!
+[2023-09-26 05:06:22,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6520832. Throughput: 0: 771.5, 1: 772.3. Samples: 1629319. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:22,838][06561] Avg episode reward: [(0, '67.940'), (1, '0.000')]
+[2023-09-26 05:06:26,952][07696] Updated weights for policy 0, policy_version 12796 (0.0017)
+[2023-09-26 05:06:26,953][07697] Updated weights for policy 1, policy_version 12800 (0.0019)
+[2023-09-26 05:06:27,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6553600. Throughput: 0: 773.7, 1: 773.9. Samples: 1638453. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:27,838][06561] Avg episode reward: [(0, '67.940'), (1, '0.000')]
+[2023-09-26 05:06:32,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6586368. Throughput: 0: 773.4, 1: 774.5. Samples: 1643222. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:32,838][06561] Avg episode reward: [(0, '67.940'), (1, '0.000')]
+[2023-09-26 05:06:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 6619136. Throughput: 0: 772.9, 1: 773.7. Samples: 1652651. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 05:06:37,838][06561] Avg episode reward: [(0, '71.590'), (1, '0.000')]
+[2023-09-26 05:06:37,847][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000012924_3309568.pth...
+[2023-09-26 05:06:37,848][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000012928_3309568.pth...
+[2023-09-26 05:06:37,878][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000010012_2564096.pth
+[2023-09-26 05:06:37,881][07269] Saving new best policy, reward=71.590!
+[2023-09-26 05:06:37,882][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000010016_2564096.pth
+[2023-09-26 05:06:40,091][07696] Updated weights for policy 0, policy_version 12956 (0.0018)
+[2023-09-26 05:06:40,091][07697] Updated weights for policy 1, policy_version 12960 (0.0018)
+[2023-09-26 05:06:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6651904. Throughput: 0: 775.5, 1: 775.2. Samples: 1662096. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 05:06:42,838][06561] Avg episode reward: [(0, '71.590'), (1, '0.000')]
+[2023-09-26 05:06:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6676480. Throughput: 0: 775.4, 1: 776.1. Samples: 1666737. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:47,838][06561] Avg episode reward: [(0, '71.590'), (1, '0.000')]
+[2023-09-26 05:06:52,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6709248. Throughput: 0: 778.4, 1: 779.4. Samples: 1675789. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:06:52,837][06561] Avg episode reward: [(0, '74.410'), (1, '0.000')]
+[2023-09-26 05:06:52,846][07269] Saving new best policy, reward=74.410!
+[2023-09-26 05:06:53,262][07696] Updated weights for policy 0, policy_version 13116 (0.0016)
+[2023-09-26 05:06:53,263][07697] Updated weights for policy 1, policy_version 13120 (0.0016)
+[2023-09-26 05:06:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6742016. Throughput: 0: 775.6, 1: 775.6. Samples: 1685397. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 05:06:57,838][06561] Avg episode reward: [(0, '74.410'), (1, '0.000')]
+[2023-09-26 05:07:02,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6774784. Throughput: 0: 775.5, 1: 777.0. Samples: 1689832. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 05:07:02,838][06561] Avg episode reward: [(0, '74.410'), (1, '0.000')]
+[2023-09-26 05:07:06,373][07697] Updated weights for policy 1, policy_version 13280 (0.0018)
+[2023-09-26 05:07:06,374][07696] Updated weights for policy 0, policy_version 13276 (0.0016)
+[2023-09-26 05:07:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6807552. Throughput: 0: 780.0, 1: 779.9. Samples: 1699514. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 05:07:07,838][06561] Avg episode reward: [(0, '78.900'), (1, '0.000')]
+[2023-09-26 05:07:07,849][07269] Saving new best policy, reward=78.900!
+[2023-09-26 05:07:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6832128. Throughput: 0: 779.6, 1: 780.2. Samples: 1708644. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 05:07:12,838][06561] Avg episode reward: [(0, '78.900'), (1, '0.000')]
+[2023-09-26 05:07:17,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6864896. Throughput: 0: 780.6, 1: 778.8. Samples: 1713394. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:07:17,838][06561] Avg episode reward: [(0, '78.900'), (1, '0.000')]
+[2023-09-26 05:07:19,577][07697] Updated weights for policy 1, policy_version 13440 (0.0018)
+[2023-09-26 05:07:19,577][07696] Updated weights for policy 0, policy_version 13436 (0.0017)
+[2023-09-26 05:07:22,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 6897664. Throughput: 0: 776.9, 1: 776.5. Samples: 1722553. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:07:22,837][06561] Avg episode reward: [(0, '78.900'), (1, '0.000')]
+[2023-09-26 05:07:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 6930432. Throughput: 0: 776.9, 1: 776.4. Samples: 1731996. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:07:27,838][06561] Avg episode reward: [(0, '82.100'), (1, '0.000')]
+[2023-09-26 05:07:27,839][07269] Saving new best policy, reward=82.100!
+[2023-09-26 05:07:32,809][07697] Updated weights for policy 1, policy_version 13600 (0.0018)
+[2023-09-26 05:07:32,809][07696] Updated weights for policy 0, policy_version 13596 (0.0017)
+[2023-09-26 05:07:32,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 6963200. Throughput: 0: 778.0, 1: 776.8. Samples: 1736704. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:07:32,838][06561] Avg episode reward: [(0, '82.100'), (1, '0.000')]
+[2023-09-26 05:07:37,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 6991872. Throughput: 0: 782.0, 1: 781.2. Samples: 1746133. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:07:37,838][06561] Avg episode reward: [(0, '82.100'), (1, '0.000')]
+[2023-09-26 05:07:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7020544. Throughput: 0: 775.6, 1: 774.9.
Samples: 1755171. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:07:42,838][06561] Avg episode reward: [(0, '85.680'), (1, '0.000')] +[2023-09-26 05:07:42,839][07269] Saving new best policy, reward=85.680! +[2023-09-26 05:07:45,895][07697] Updated weights for policy 1, policy_version 13760 (0.0018) +[2023-09-26 05:07:45,895][07696] Updated weights for policy 0, policy_version 13756 (0.0016) +[2023-09-26 05:07:47,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7053312. Throughput: 0: 780.0, 1: 779.6. Samples: 1760013. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 05:07:47,838][06561] Avg episode reward: [(0, '85.680'), (1, '0.000')] +[2023-09-26 05:07:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7086080. Throughput: 0: 777.9, 1: 776.7. Samples: 1769472. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 05:07:52,838][06561] Avg episode reward: [(0, '85.680'), (1, '0.000')] +[2023-09-26 05:07:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 7118848. Throughput: 0: 778.2, 1: 776.8. Samples: 1778622. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:07:57,838][06561] Avg episode reward: [(0, '85.680'), (1, '0.000')] +[2023-09-26 05:07:59,132][07696] Updated weights for policy 0, policy_version 13916 (0.0017) +[2023-09-26 05:07:59,132][07697] Updated weights for policy 1, policy_version 13920 (0.0017) +[2023-09-26 05:08:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7143424. Throughput: 0: 776.6, 1: 776.0. Samples: 1783264. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:08:02,838][06561] Avg episode reward: [(0, '88.200'), (1, '0.000')] +[2023-09-26 05:08:02,839][07269] Saving new best policy, reward=88.200! +[2023-09-26 05:08:07,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7176192. 
Throughput: 0: 773.9, 1: 774.1. Samples: 1792214. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:08:07,838][06561] Avg episode reward: [(0, '88.200'), (1, '0.000')] +[2023-09-26 05:08:12,408][07696] Updated weights for policy 0, policy_version 14076 (0.0019) +[2023-09-26 05:08:12,408][07697] Updated weights for policy 1, policy_version 14080 (0.0018) +[2023-09-26 05:08:12,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 7208960. Throughput: 0: 778.7, 1: 776.9. Samples: 1801999. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:08:12,837][06561] Avg episode reward: [(0, '88.200'), (1, '0.000')] +[2023-09-26 05:08:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7241728. Throughput: 0: 773.7, 1: 773.7. Samples: 1806337. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:08:17,838][06561] Avg episode reward: [(0, '92.110'), (1, '0.000')] +[2023-09-26 05:08:17,839][07269] Saving new best policy, reward=92.110! +[2023-09-26 05:08:22,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7266304. Throughput: 0: 770.4, 1: 770.9. Samples: 1815493. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:08:22,838][06561] Avg episode reward: [(0, '92.110'), (1, '0.000')] +[2023-09-26 05:08:25,727][07696] Updated weights for policy 0, policy_version 14236 (0.0017) +[2023-09-26 05:08:25,728][07697] Updated weights for policy 1, policy_version 14240 (0.0017) +[2023-09-26 05:08:27,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7299072. Throughput: 0: 773.7, 1: 772.9. Samples: 1824768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:08:27,838][06561] Avg episode reward: [(0, '92.110'), (1, '0.000')] +[2023-09-26 05:08:32,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7331840. Throughput: 0: 766.8, 1: 767.5. 
Samples: 1829057. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 05:08:32,838][06561] Avg episode reward: [(0, '92.110'), (1, '0.000')] +[2023-09-26 05:08:37,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6144.0, 300 sec: 6206.5). Total num frames: 7360512. Throughput: 0: 766.8, 1: 768.0. Samples: 1838536. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 05:08:37,838][06561] Avg episode reward: [(0, '96.140'), (1, '0.000')] +[2023-09-26 05:08:37,850][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000014380_3682304.pth... +[2023-09-26 05:08:37,864][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000014384_3682304.pth... +[2023-09-26 05:08:37,885][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000011468_2936832.pth +[2023-09-26 05:08:37,889][07269] Saving new best policy, reward=96.140! +[2023-09-26 05:08:37,894][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000011472_2936832.pth +[2023-09-26 05:08:39,196][07696] Updated weights for policy 0, policy_version 14396 (0.0015) +[2023-09-26 05:08:39,196][07697] Updated weights for policy 1, policy_version 14400 (0.0017) +[2023-09-26 05:08:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7389184. Throughput: 0: 763.3, 1: 764.5. Samples: 1847376. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:08:42,838][06561] Avg episode reward: [(0, '96.140'), (1, '0.000')] +[2023-09-26 05:08:47,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7421952. Throughput: 0: 767.0, 1: 767.7. Samples: 1852326. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:08:47,838][06561] Avg episode reward: [(0, '96.140'), (1, '0.000')] +[2023-09-26 05:08:52,354][07696] Updated weights for policy 0, policy_version 14556 (0.0018) +[2023-09-26 05:08:52,355][07697] Updated weights for policy 1, policy_version 14560 (0.0018) +[2023-09-26 05:08:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7454720. Throughput: 0: 772.2, 1: 770.5. Samples: 1861632. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:08:52,837][06561] Avg episode reward: [(0, '99.340'), (1, '0.000')] +[2023-09-26 05:08:52,845][07269] Saving new best policy, reward=99.340! +[2023-09-26 05:08:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7487488. Throughput: 0: 769.3, 1: 771.1. Samples: 1871318. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:08:57,837][06561] Avg episode reward: [(0, '99.340'), (1, '0.000')] +[2023-09-26 05:09:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7520256. Throughput: 0: 773.7, 1: 773.7. Samples: 1875968. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:09:02,838][06561] Avg episode reward: [(0, '99.340'), (1, '0.000')] +[2023-09-26 05:09:05,402][07696] Updated weights for policy 0, policy_version 14716 (0.0019) +[2023-09-26 05:09:05,402][07697] Updated weights for policy 1, policy_version 14720 (0.0018) +[2023-09-26 05:09:07,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7544832. Throughput: 0: 774.5, 1: 774.6. Samples: 1885201. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:09:07,838][06561] Avg episode reward: [(0, '103.410'), (1, '0.000')] +[2023-09-26 05:09:07,849][07269] Saving new best policy, reward=103.410! +[2023-09-26 05:09:12,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7577600. 
Throughput: 0: 773.7, 1: 773.7. Samples: 1894400. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:09:12,838][06561] Avg episode reward: [(0, '103.410'), (1, '0.000')] +[2023-09-26 05:09:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6206.5). Total num frames: 7610368. Throughput: 0: 774.1, 1: 773.4. Samples: 1898691. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 05:09:17,838][06561] Avg episode reward: [(0, '103.410'), (1, '0.000')] +[2023-09-26 05:09:18,888][07697] Updated weights for policy 1, policy_version 14880 (0.0018) +[2023-09-26 05:09:18,888][07696] Updated weights for policy 0, policy_version 14876 (0.0018) +[2023-09-26 05:09:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7643136. Throughput: 0: 774.2, 1: 774.5. Samples: 1908228. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 05:09:22,838][06561] Avg episode reward: [(0, '103.410'), (1, '0.000')] +[2023-09-26 05:09:27,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7667712. Throughput: 0: 778.1, 1: 777.8. Samples: 1917392. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:09:27,837][06561] Avg episode reward: [(0, '107.320'), (1, '0.000')] +[2023-09-26 05:09:27,962][07269] Saving new best policy, reward=107.320! +[2023-09-26 05:09:31,997][07696] Updated weights for policy 0, policy_version 15036 (0.0016) +[2023-09-26 05:09:31,997][07697] Updated weights for policy 1, policy_version 15040 (0.0017) +[2023-09-26 05:09:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7700480. Throughput: 0: 775.1, 1: 775.4. Samples: 1922101. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:09:32,838][06561] Avg episode reward: [(0, '107.320'), (1, '0.000')] +[2023-09-26 05:09:37,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 7733248. Throughput: 0: 773.7, 1: 773.7. 
Samples: 1931264. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:09:37,838][06561] Avg episode reward: [(0, '107.320'), (1, '0.000')] +[2023-09-26 05:09:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7766016. Throughput: 0: 773.2, 1: 771.4. Samples: 1940823. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:09:42,838][06561] Avg episode reward: [(0, '111.290'), (1, '0.000')] +[2023-09-26 05:09:42,838][07269] Saving new best policy, reward=111.290! +[2023-09-26 05:09:45,286][07696] Updated weights for policy 0, policy_version 15196 (0.0018) +[2023-09-26 05:09:45,286][07697] Updated weights for policy 1, policy_version 15200 (0.0016) +[2023-09-26 05:09:47,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 7794688. Throughput: 0: 770.8, 1: 772.4. Samples: 1945413. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:09:47,838][06561] Avg episode reward: [(0, '111.290'), (1, '0.000')] +[2023-09-26 05:09:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7823360. Throughput: 0: 774.8, 1: 774.4. Samples: 1954912. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:09:52,838][06561] Avg episode reward: [(0, '111.290'), (1, '0.000')] +[2023-09-26 05:09:57,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7856128. Throughput: 0: 773.7, 1: 773.7. Samples: 1964033. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:09:57,838][06561] Avg episode reward: [(0, '111.290'), (1, '0.000')] +[2023-09-26 05:09:58,318][07697] Updated weights for policy 1, policy_version 15360 (0.0017) +[2023-09-26 05:09:58,318][07696] Updated weights for policy 0, policy_version 15356 (0.0017) +[2023-09-26 05:10:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7888896. Throughput: 0: 777.0, 1: 777.5. Samples: 1968642. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:02,838][06561] Avg episode reward: [(0, '113.490'), (1, '0.000')] +[2023-09-26 05:10:02,839][07269] Saving new best policy, reward=113.490! +[2023-09-26 05:10:07,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6206.5). Total num frames: 7921664. Throughput: 0: 779.4, 1: 778.5. Samples: 1978336. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:07,838][06561] Avg episode reward: [(0, '113.490'), (1, '0.000')] +[2023-09-26 05:10:11,563][07696] Updated weights for policy 0, policy_version 15516 (0.0017) +[2023-09-26 05:10:11,563][07697] Updated weights for policy 1, policy_version 15520 (0.0015) +[2023-09-26 05:10:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7954432. Throughput: 0: 776.9, 1: 777.6. Samples: 1987344. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:12,838][06561] Avg episode reward: [(0, '113.490'), (1, '0.000')] +[2023-09-26 05:10:17,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 7979008. Throughput: 0: 778.4, 1: 778.8. Samples: 1992177. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:17,838][06561] Avg episode reward: [(0, '117.280'), (1, '0.000')] +[2023-09-26 05:10:17,838][07269] Saving new best policy, reward=117.280! +[2023-09-26 05:10:22,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8011776. Throughput: 0: 775.4, 1: 777.2. Samples: 2001127. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:22,839][06561] Avg episode reward: [(0, '117.280'), (1, '0.000')] +[2023-09-26 05:10:24,935][07697] Updated weights for policy 1, policy_version 15680 (0.0018) +[2023-09-26 05:10:24,935][07696] Updated weights for policy 0, policy_version 15676 (0.0019) +[2023-09-26 05:10:27,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 8044544. 
Throughput: 0: 772.7, 1: 776.1. Samples: 2010521. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:27,838][06561] Avg episode reward: [(0, '117.280'), (1, '0.000')] +[2023-09-26 05:10:32,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6212.2, 300 sec: 6206.5). Total num frames: 8073216. Throughput: 0: 776.6, 1: 775.0. Samples: 2015232. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:32,838][06561] Avg episode reward: [(0, '117.280'), (1, '0.000')] +[2023-09-26 05:10:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8101888. Throughput: 0: 769.9, 1: 770.9. Samples: 2024248. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:37,838][06561] Avg episode reward: [(0, '121.700'), (1, '0.000')] +[2023-09-26 05:10:37,851][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000015820_4050944.pth... +[2023-09-26 05:10:37,851][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000015824_4050944.pth... +[2023-09-26 05:10:37,886][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000012924_3309568.pth +[2023-09-26 05:10:37,887][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000012928_3309568.pth +[2023-09-26 05:10:37,889][07269] Saving new best policy, reward=121.700! +[2023-09-26 05:10:38,227][07696] Updated weights for policy 0, policy_version 15836 (0.0017) +[2023-09-26 05:10:38,227][07697] Updated weights for policy 1, policy_version 15840 (0.0019) +[2023-09-26 05:10:42,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8134656. Throughput: 0: 773.7, 1: 774.3. Samples: 2033694. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:10:42,838][06561] Avg episode reward: [(0, '121.700'), (1, '0.000')] +[2023-09-26 05:10:47,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6212.3, 300 sec: 6192.6). Total num frames: 8167424. Throughput: 0: 774.7, 1: 774.4. Samples: 2038348. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:10:47,838][06561] Avg episode reward: [(0, '121.700'), (1, '0.000')] +[2023-09-26 05:10:51,107][07696] Updated weights for policy 0, policy_version 15996 (0.0017) +[2023-09-26 05:10:51,107][07697] Updated weights for policy 1, policy_version 16000 (0.0017) +[2023-09-26 05:10:52,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 8200192. Throughput: 0: 774.3, 1: 773.8. Samples: 2048000. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:52,838][06561] Avg episode reward: [(0, '124.480'), (1, '0.000')] +[2023-09-26 05:10:52,849][07269] Saving new best policy, reward=124.480! +[2023-09-26 05:10:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8232960. Throughput: 0: 778.2, 1: 777.3. Samples: 2057343. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:10:57,838][06561] Avg episode reward: [(0, '124.480'), (1, '0.000')] +[2023-09-26 05:11:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8265728. Throughput: 0: 778.0, 1: 778.5. Samples: 2062220. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:02,838][06561] Avg episode reward: [(0, '124.480'), (1, '0.000')] +[2023-09-26 05:11:04,133][07696] Updated weights for policy 0, policy_version 16156 (0.0018) +[2023-09-26 05:11:04,133][07697] Updated weights for policy 1, policy_version 16160 (0.0019) +[2023-09-26 05:11:07,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8290304. Throughput: 0: 780.7, 1: 778.5. Samples: 2071286. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:07,837][06561] Avg episode reward: [(0, '126.390'), (1, '0.000')] +[2023-09-26 05:11:07,846][07269] Saving new best policy, reward=126.390! +[2023-09-26 05:11:12,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8323072. 
Throughput: 0: 772.8, 1: 773.6. Samples: 2080110. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:12,837][06561] Avg episode reward: [(0, '126.390'), (1, '0.000')] +[2023-09-26 05:11:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8347648. Throughput: 0: 768.8, 1: 771.1. Samples: 2084529. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:17,838][06561] Avg episode reward: [(0, '126.390'), (1, '0.000')] +[2023-09-26 05:11:17,937][07696] Updated weights for policy 0, policy_version 16316 (0.0017) +[2023-09-26 05:11:17,937][07697] Updated weights for policy 1, policy_version 16320 (0.0016) +[2023-09-26 05:11:22,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8380416. Throughput: 0: 773.1, 1: 772.5. Samples: 2093800. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:22,839][06561] Avg episode reward: [(0, '126.390'), (1, '0.000')] +[2023-09-26 05:11:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8413184. Throughput: 0: 773.7, 1: 773.0. Samples: 2103296. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:27,838][06561] Avg episode reward: [(0, '131.260'), (1, '0.000')] +[2023-09-26 05:11:27,839][07269] Saving new best policy, reward=131.260! +[2023-09-26 05:11:30,923][07696] Updated weights for policy 0, policy_version 16476 (0.0016) +[2023-09-26 05:11:30,923][07697] Updated weights for policy 1, policy_version 16480 (0.0017) +[2023-09-26 05:11:32,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6212.3, 300 sec: 6192.6). Total num frames: 8445952. Throughput: 0: 773.6, 1: 773.2. Samples: 2107954. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:32,838][06561] Avg episode reward: [(0, '131.260'), (1, '0.000')] +[2023-09-26 05:11:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 8478720. Throughput: 0: 772.4, 1: 772.9. 
Samples: 2117539. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:37,838][06561] Avg episode reward: [(0, '131.260'), (1, '0.000')] +[2023-09-26 05:11:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8511488. Throughput: 0: 770.3, 1: 771.0. Samples: 2126701. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:42,838][06561] Avg episode reward: [(0, '132.990'), (1, '0.000')] +[2023-09-26 05:11:42,840][07269] Saving new best policy, reward=132.990! +[2023-09-26 05:11:44,073][07696] Updated weights for policy 0, policy_version 16636 (0.0017) +[2023-09-26 05:11:44,073][07697] Updated weights for policy 1, policy_version 16640 (0.0018) +[2023-09-26 05:11:47,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8536064. Throughput: 0: 770.9, 1: 770.0. Samples: 2131560. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:11:47,838][06561] Avg episode reward: [(0, '132.990'), (1, '0.000')] +[2023-09-26 05:11:52,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8568832. Throughput: 0: 768.3, 1: 770.4. Samples: 2140527. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 05:11:52,838][06561] Avg episode reward: [(0, '132.990'), (1, '0.000')] +[2023-09-26 05:11:57,283][07696] Updated weights for policy 0, policy_version 16796 (0.0017) +[2023-09-26 05:11:57,283][07697] Updated weights for policy 1, policy_version 16800 (0.0017) +[2023-09-26 05:11:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8601600. Throughput: 0: 781.3, 1: 778.4. Samples: 2150297. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 05:11:57,838][06561] Avg episode reward: [(0, '132.990'), (1, '0.000')] +[2023-09-26 05:12:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8634368. Throughput: 0: 778.6, 1: 776.4. Samples: 2154503. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 05:12:02,837][06561] Avg episode reward: [(0, '135.760'), (1, '0.000')] +[2023-09-26 05:12:02,838][07269] Saving new best policy, reward=135.760! +[2023-09-26 05:12:07,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6212.2, 300 sec: 6206.5). Total num frames: 8663040. Throughput: 0: 782.7, 1: 782.4. Samples: 2164230. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:12:07,839][06561] Avg episode reward: [(0, '135.760'), (1, '0.000')] +[2023-09-26 05:12:10,556][07696] Updated weights for policy 0, policy_version 16956 (0.0017) +[2023-09-26 05:12:10,556][07697] Updated weights for policy 1, policy_version 16960 (0.0018) +[2023-09-26 05:12:12,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8691712. Throughput: 0: 773.8, 1: 775.3. Samples: 2173005. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:12:12,838][06561] Avg episode reward: [(0, '135.760'), (1, '0.000')] +[2023-09-26 05:12:17,837][06561] Fps is (10 sec: 6144.2, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 8724480. Throughput: 0: 775.8, 1: 776.0. Samples: 2177786. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:12:17,838][06561] Avg episode reward: [(0, '141.240'), (1, '0.000')] +[2023-09-26 05:12:17,839][07269] Saving new best policy, reward=141.240! +[2023-09-26 05:12:22,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 8757248. Throughput: 0: 774.9, 1: 774.6. Samples: 2187266. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:12:22,838][06561] Avg episode reward: [(0, '141.240'), (1, '0.000')] +[2023-09-26 05:12:23,731][07696] Updated weights for policy 0, policy_version 17116 (0.0017) +[2023-09-26 05:12:23,731][07697] Updated weights for policy 1, policy_version 17120 (0.0016) +[2023-09-26 05:12:27,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 8790016. 
Throughput: 0: 777.4, 1: 776.9. Samples: 2196643. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:12:27,838][06561] Avg episode reward: [(0, '141.240'), (1, '0.000')] +[2023-09-26 05:12:32,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 8814592. Throughput: 0: 776.6, 1: 772.9. Samples: 2201287. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:12:32,838][06561] Avg episode reward: [(0, '141.240'), (1, '0.000')] +[2023-09-26 05:12:37,150][07697] Updated weights for policy 1, policy_version 17280 (0.0018) +[2023-09-26 05:12:37,150][07696] Updated weights for policy 0, policy_version 17276 (0.0016) +[2023-09-26 05:12:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8847360. Throughput: 0: 771.2, 1: 771.1. Samples: 2209930. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:12:37,839][06561] Avg episode reward: [(0, '142.730'), (1, '0.000')] +[2023-09-26 05:12:37,852][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000017280_4423680.pth... +[2023-09-26 05:12:37,852][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000017276_4423680.pth... +[2023-09-26 05:12:37,886][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000014380_3682304.pth +[2023-09-26 05:12:37,890][07269] Saving new best policy, reward=142.730! +[2023-09-26 05:12:37,893][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000014384_3682304.pth +[2023-09-26 05:12:42,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8880128. Throughput: 0: 772.4, 1: 772.8. Samples: 2219830. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:12:42,838][06561] Avg episode reward: [(0, '142.730'), (1, '0.000')] +[2023-09-26 05:12:47,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 8912896. Throughput: 0: 773.7, 1: 774.1. Samples: 2224155. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:12:47,838][06561] Avg episode reward: [(0, '142.730'), (1, '0.000')]
+[2023-09-26 05:12:50,211][07696] Updated weights for policy 0, policy_version 17436 (0.0017)
+[2023-09-26 05:12:50,211][07697] Updated weights for policy 1, policy_version 17440 (0.0018)
+[2023-09-26 05:12:52,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6178.7). Total num frames: 8941568. Throughput: 0: 771.8, 1: 771.6. Samples: 2233683. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:12:52,837][06561] Avg episode reward: [(0, '144.000'), (1, '0.000')]
+[2023-09-26 05:12:52,849][07269] Saving new best policy, reward=144.000!
+[2023-09-26 05:12:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 8970240. Throughput: 0: 777.7, 1: 778.2. Samples: 2243021. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:12:57,838][06561] Avg episode reward: [(0, '144.000'), (1, '0.000')]
+[2023-09-26 05:13:02,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9003008. Throughput: 0: 776.5, 1: 776.6. Samples: 2247674. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:13:02,837][06561] Avg episode reward: [(0, '144.000'), (1, '0.000')]
+[2023-09-26 05:13:03,418][07697] Updated weights for policy 1, policy_version 17600 (0.0015)
+[2023-09-26 05:13:03,419][07696] Updated weights for policy 0, policy_version 17596 (0.0016)
+[2023-09-26 05:13:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6212.3, 300 sec: 6192.6). Total num frames: 9035776. Throughput: 0: 773.7, 1: 773.6. Samples: 2256896. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:13:07,838][06561] Avg episode reward: [(0, '146.330'), (1, '0.000')]
+[2023-09-26 05:13:07,849][07269] Saving new best policy, reward=146.330!
+[2023-09-26 05:13:12,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 9068544. Throughput: 0: 774.6, 1: 772.4. Samples: 2266255. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:13:12,838][06561] Avg episode reward: [(0, '146.330'), (1, '0.000')]
+[2023-09-26 05:13:16,828][07696] Updated weights for policy 0, policy_version 17756 (0.0017)
+[2023-09-26 05:13:16,828][07697] Updated weights for policy 1, policy_version 17760 (0.0017)
+[2023-09-26 05:13:17,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9093120. Throughput: 0: 767.6, 1: 773.2. Samples: 2270624. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:13:17,838][06561] Avg episode reward: [(0, '146.330'), (1, '0.000')]
+[2023-09-26 05:13:22,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9125888. Throughput: 0: 779.1, 1: 779.4. Samples: 2280063. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:13:22,838][06561] Avg episode reward: [(0, '146.330'), (1, '0.000')]
+[2023-09-26 05:13:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9158656. Throughput: 0: 771.5, 1: 773.6. Samples: 2289358. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 05:13:27,838][06561] Avg episode reward: [(0, '150.090'), (1, '0.000')]
+[2023-09-26 05:13:27,839][07269] Saving new best policy, reward=150.090!
+[2023-09-26 05:13:30,293][07697] Updated weights for policy 1, policy_version 17920 (0.0019)
+[2023-09-26 05:13:30,293][07696] Updated weights for policy 0, policy_version 17916 (0.0018)
+[2023-09-26 05:13:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 9183232. Throughput: 0: 771.7, 1: 772.3. Samples: 2293634. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 05:13:32,838][06561] Avg episode reward: [(0, '150.090'), (1, '0.000')]
+[2023-09-26 05:13:37,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9216000. Throughput: 0: 762.1, 1: 761.9. Samples: 2302265. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:13:37,838][06561] Avg episode reward: [(0, '150.090'), (1, '0.000')]
+[2023-09-26 05:13:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9248768. Throughput: 0: 765.8, 1: 766.0. Samples: 2311954. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:13:42,838][06561] Avg episode reward: [(0, '151.780'), (1, '0.000')]
+[2023-09-26 05:13:42,838][07269] Saving new best policy, reward=151.780!
+[2023-09-26 05:13:43,605][07697] Updated weights for policy 1, policy_version 18080 (0.0017)
+[2023-09-26 05:13:43,606][07696] Updated weights for policy 0, policy_version 18076 (0.0019)
+[2023-09-26 05:13:47,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9281536. Throughput: 0: 764.0, 1: 764.2. Samples: 2316441. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:13:47,838][06561] Avg episode reward: [(0, '151.780'), (1, '0.000')]
+[2023-09-26 05:13:52,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6212.2, 300 sec: 6192.6). Total num frames: 9314304. Throughput: 0: 769.7, 1: 771.4. Samples: 2326244. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:13:52,838][06561] Avg episode reward: [(0, '151.780'), (1, '0.000')]
+[2023-09-26 05:13:56,686][07696] Updated weights for policy 0, policy_version 18236 (0.0014)
+[2023-09-26 05:13:56,686][07697] Updated weights for policy 1, policy_version 18240 (0.0015)
+[2023-09-26 05:13:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 9338880. Throughput: 0: 765.5, 1: 767.9. Samples: 2335261. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:13:57,838][06561] Avg episode reward: [(0, '151.780'), (1, '0.000')]
+[2023-09-26 05:14:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9371648. Throughput: 0: 772.0, 1: 770.4. Samples: 2340029. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:14:02,838][06561] Avg episode reward: [(0, '152.480'), (1, '0.000')]
+[2023-09-26 05:14:02,839][07269] Saving new best policy, reward=152.480!
+[2023-09-26 05:14:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9404416. Throughput: 0: 771.5, 1: 770.8. Samples: 2349470. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:14:07,838][06561] Avg episode reward: [(0, '152.480'), (1, '0.000')]
+[2023-09-26 05:14:09,721][07696] Updated weights for policy 0, policy_version 18396 (0.0017)
+[2023-09-26 05:14:09,721][07697] Updated weights for policy 1, policy_version 18400 (0.0017)
+[2023-09-26 05:14:12,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9437184. Throughput: 0: 776.4, 1: 775.2. Samples: 2359183. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:14:12,837][06561] Avg episode reward: [(0, '152.480'), (1, '0.000')]
+[2023-09-26 05:14:17,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 9469952. Throughput: 0: 776.1, 1: 776.4. Samples: 2363498. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:14:17,838][06561] Avg episode reward: [(0, '156.790'), (1, '0.000')]
+[2023-09-26 05:14:17,839][07269] Saving new best policy, reward=156.790!
+[2023-09-26 05:14:22,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9494528. Throughput: 0: 785.1, 1: 784.5. Samples: 2372899. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:14:22,838][06561] Avg episode reward: [(0, '156.790'), (1, '0.000')]
+[2023-09-26 05:14:22,917][07696] Updated weights for policy 0, policy_version 18556 (0.0016)
+[2023-09-26 05:14:22,918][07697] Updated weights for policy 1, policy_version 18560 (0.0017)
+[2023-09-26 05:14:27,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9527296. Throughput: 0: 777.5, 1: 775.5. Samples: 2381836. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:14:27,838][06561] Avg episode reward: [(0, '156.790'), (1, '0.000')]
+[2023-09-26 05:14:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 9560064. Throughput: 0: 776.9, 1: 777.7. Samples: 2386398. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:14:32,838][06561] Avg episode reward: [(0, '156.790'), (1, '0.000')]
+[2023-09-26 05:14:36,354][07697] Updated weights for policy 1, policy_version 18720 (0.0017)
+[2023-09-26 05:14:36,355][07696] Updated weights for policy 0, policy_version 18716 (0.0019)
+[2023-09-26 05:14:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 9592832. Throughput: 0: 773.7, 1: 773.5. Samples: 2395867. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:14:37,838][06561] Avg episode reward: [(0, '158.280'), (1, '0.000')]
+[2023-09-26 05:14:37,846][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000018732_4796416.pth...
+[2023-09-26 05:14:37,847][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000018736_4796416.pth...
+[2023-09-26 05:14:37,882][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000015824_4050944.pth
+[2023-09-26 05:14:37,884][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000015820_4050944.pth
+[2023-09-26 05:14:37,887][07269] Saving new best policy, reward=158.280!
+[2023-09-26 05:14:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 9617408. Throughput: 0: 774.6, 1: 774.2. Samples: 2404956. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:14:42,838][06561] Avg episode reward: [(0, '158.280'), (1, '0.000')]
+[2023-09-26 05:14:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9650176. Throughput: 0: 775.4, 1: 776.5. Samples: 2409864. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:14:47,838][06561] Avg episode reward: [(0, '158.280'), (1, '0.000')]
+[2023-09-26 05:14:49,478][07697] Updated weights for policy 1, policy_version 18880 (0.0015)
+[2023-09-26 05:14:49,478][07696] Updated weights for policy 0, policy_version 18876 (0.0017)
+[2023-09-26 05:14:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9682944. Throughput: 0: 773.2, 1: 773.3. Samples: 2419066. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:14:52,838][06561] Avg episode reward: [(0, '160.370'), (1, '0.000')]
+[2023-09-26 05:14:52,850][07269] Saving new best policy, reward=160.370!
+[2023-09-26 05:14:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 9715712. Throughput: 0: 773.5, 1: 773.8. Samples: 2428814. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 05:14:57,838][06561] Avg episode reward: [(0, '160.370'), (1, '0.000')]
+[2023-09-26 05:15:02,434][07696] Updated weights for policy 0, policy_version 19036 (0.0016)
+[2023-09-26 05:15:02,435][07697] Updated weights for policy 1, policy_version 19040 (0.0020)
+[2023-09-26 05:15:02,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 9748480. Throughput: 0: 775.1, 1: 775.3. Samples: 2433266. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 05:15:02,837][06561] Avg episode reward: [(0, '160.370'), (1, '0.000')]
+[2023-09-26 05:15:07,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 9781248. Throughput: 0: 779.0, 1: 779.0. Samples: 2443010. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 05:15:07,838][06561] Avg episode reward: [(0, '162.940'), (1, '0.000')]
+[2023-09-26 05:15:07,849][07269] Saving new best policy, reward=162.940!
+[2023-09-26 05:15:12,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9805824. Throughput: 0: 777.4, 1: 778.6. Samples: 2451854. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:15:12,838][06561] Avg episode reward: [(0, '163.600'), (1, '0.000')]
+[2023-09-26 05:15:12,997][07269] Saving new best policy, reward=163.600!
+[2023-09-26 05:15:15,655][07696] Updated weights for policy 0, policy_version 19196 (0.0017)
+[2023-09-26 05:15:15,655][07697] Updated weights for policy 1, policy_version 19200 (0.0017)
+[2023-09-26 05:15:17,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9838592. Throughput: 0: 781.7, 1: 781.3. Samples: 2456733. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:15:17,838][06561] Avg episode reward: [(0, '163.600'), (1, '0.000')]
+[2023-09-26 05:15:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 9871360. Throughput: 0: 777.8, 1: 777.8. Samples: 2465867. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:15:22,838][06561] Avg episode reward: [(0, '163.600'), (1, '0.000')]
+[2023-09-26 05:15:27,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6206.5). Total num frames: 9904128. Throughput: 0: 784.1, 1: 784.9. Samples: 2475560. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:15:27,838][06561] Avg episode reward: [(0, '164.790'), (1, '0.000')]
+[2023-09-26 05:15:27,839][07269] Saving new best policy, reward=164.790!
+[2023-09-26 05:15:28,740][07696] Updated weights for policy 0, policy_version 19356 (0.0018)
+[2023-09-26 05:15:28,740][07697] Updated weights for policy 1, policy_version 19360 (0.0017)
+[2023-09-26 05:15:32,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 9936896. Throughput: 0: 781.8, 1: 779.8. Samples: 2480134. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:15:32,837][06561] Avg episode reward: [(0, '164.790'), (1, '0.000')]
+[2023-09-26 05:15:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 9961472. Throughput: 0: 786.8, 1: 785.1. Samples: 2489799. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:15:37,838][06561] Avg episode reward: [(0, '164.790'), (1, '0.000')]
+[2023-09-26 05:15:42,114][07697] Updated weights for policy 1, policy_version 19520 (0.0017)
+[2023-09-26 05:15:42,114][07696] Updated weights for policy 0, policy_version 19516 (0.0017)
+[2023-09-26 05:15:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 9994240. Throughput: 0: 776.0, 1: 773.9. Samples: 2498560. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:15:42,838][06561] Avg episode reward: [(0, '167.420'), (1, '0.000')]
+[2023-09-26 05:15:42,838][07269] Saving new best policy, reward=167.420!
+[2023-09-26 05:15:47,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 10027008. Throughput: 0: 776.3, 1: 775.9. Samples: 2503115. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:15:47,838][06561] Avg episode reward: [(0, '167.420'), (1, '0.000')]
+[2023-09-26 05:15:52,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 10059776. Throughput: 0: 774.3, 1: 775.0. Samples: 2512726. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:15:52,838][06561] Avg episode reward: [(0, '167.420'), (1, '0.000')]
+[2023-09-26 05:15:55,216][07696] Updated weights for policy 0, policy_version 19676 (0.0018)
+[2023-09-26 05:15:55,217][07697] Updated weights for policy 1, policy_version 19680 (0.0018)
+[2023-09-26 05:15:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 10092544. Throughput: 0: 779.9, 1: 780.1. Samples: 2522055. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:15:57,838][06561] Avg episode reward: [(0, '167.420'), (1, '0.000')]
+[2023-09-26 05:16:02,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 10121216. Throughput: 0: 781.2, 1: 781.3. Samples: 2527045. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:16:02,838][06561] Avg episode reward: [(0, '171.360'), (1, '0.000')]
+[2023-09-26 05:16:02,839][07269] Saving new best policy, reward=171.360!
+[2023-09-26 05:16:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 10149888. Throughput: 0: 785.4, 1: 785.0. Samples: 2536538. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:16:07,838][06561] Avg episode reward: [(0, '171.360'), (1, '0.000')]
+[2023-09-26 05:16:08,035][07697] Updated weights for policy 1, policy_version 19840 (0.0016)
+[2023-09-26 05:16:08,035][07696] Updated weights for policy 0, policy_version 19836 (0.0017)
+[2023-09-26 05:16:12,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10182656. Throughput: 0: 779.8, 1: 778.2. Samples: 2545668. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:16:12,838][06561] Avg episode reward: [(0, '171.360'), (1, '0.000')]
+[2023-09-26 05:16:17,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10215424. Throughput: 0: 780.7, 1: 781.8. Samples: 2550448. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:16:17,838][06561] Avg episode reward: [(0, '173.530'), (1, '0.000')]
+[2023-09-26 05:16:17,839][07269] Saving new best policy, reward=173.530!
+[2023-09-26 05:16:21,244][07697] Updated weights for policy 1, policy_version 20000 (0.0016)
+[2023-09-26 05:16:21,245][07696] Updated weights for policy 0, policy_version 19996 (0.0018)
+[2023-09-26 05:16:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10248192. Throughput: 0: 778.9, 1: 780.1. Samples: 2559954. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:16:22,838][06561] Avg episode reward: [(0, '173.530'), (1, '0.000')]
+[2023-09-26 05:16:27,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 10272768. Throughput: 0: 779.6, 1: 781.2. Samples: 2568796. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:16:27,838][06561] Avg episode reward: [(0, '173.530'), (1, '0.000')]
+[2023-09-26 05:16:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 10305536. Throughput: 0: 780.5, 1: 782.5. Samples: 2573449. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:16:32,837][06561] Avg episode reward: [(0, '173.530'), (1, '0.000')]
+[2023-09-26 05:16:34,798][07696] Updated weights for policy 0, policy_version 20156 (0.0014)
+[2023-09-26 05:16:34,798][07697] Updated weights for policy 1, policy_version 20160 (0.0017)
+[2023-09-26 05:16:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 10338304. Throughput: 0: 776.2, 1: 774.9. Samples: 2582528. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:16:37,838][06561] Avg episode reward: [(0, '176.840'), (1, '0.000')]
+[2023-09-26 05:16:37,847][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000020188_5169152.pth...
+[2023-09-26 05:16:37,847][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000020192_5169152.pth...
+[2023-09-26 05:16:37,885][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000017276_4423680.pth
+[2023-09-26 05:16:37,890][07269] Saving new best policy, reward=176.840!
+[2023-09-26 05:16:37,890][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000017280_4423680.pth
+[2023-09-26 05:16:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10371072. Throughput: 0: 780.2, 1: 779.2. Samples: 2592230. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:16:42,837][06561] Avg episode reward: [(0, '176.840'), (1, '0.000')]
+[2023-09-26 05:16:47,654][07696] Updated weights for policy 0, policy_version 20316 (0.0018)
+[2023-09-26 05:16:47,654][07697] Updated weights for policy 1, policy_version 20320 (0.0018)
+[2023-09-26 05:16:47,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10403840. Throughput: 0: 776.9, 1: 774.6. Samples: 2596864. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:16:47,838][06561] Avg episode reward: [(0, '176.840'), (1, '0.000')]
+[2023-09-26 05:16:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 10428416. Throughput: 0: 775.5, 1: 775.5. Samples: 2606333. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:16:52,838][06561] Avg episode reward: [(0, '180.880'), (1, '0.000')]
+[2023-09-26 05:16:52,862][07269] Saving new best policy, reward=180.880!
+[2023-09-26 05:16:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 10461184. Throughput: 0: 779.0, 1: 780.4. Samples: 2615842. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:16:57,838][06561] Avg episode reward: [(0, '180.880'), (1, '0.000')]
+[2023-09-26 05:17:00,692][07696] Updated weights for policy 0, policy_version 20476 (0.0015)
+[2023-09-26 05:17:00,692][07697] Updated weights for policy 1, policy_version 20480 (0.0015)
+[2023-09-26 05:17:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 10493952. Throughput: 0: 779.3, 1: 779.5. Samples: 2620595. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:17:02,838][06561] Avg episode reward: [(0, '180.880'), (1, '0.000')]
+[2023-09-26 05:17:07,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10526720. Throughput: 0: 775.0, 1: 775.8. Samples: 2629742. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:17:07,838][06561] Avg episode reward: [(0, '181.290'), (1, '0.000')]
+[2023-09-26 05:17:07,847][07269] Saving new best policy, reward=181.290!
+[2023-09-26 05:17:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10559488. Throughput: 0: 787.4, 1: 787.2. Samples: 2639650. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:17:12,838][06561] Avg episode reward: [(0, '181.290'), (1, '0.000')]
+[2023-09-26 05:17:13,688][07697] Updated weights for policy 1, policy_version 20640 (0.0017)
+[2023-09-26 05:17:13,688][07696] Updated weights for policy 0, policy_version 20636 (0.0017)
+[2023-09-26 05:17:17,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10592256. Throughput: 0: 785.3, 1: 782.8. Samples: 2644010. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:17:17,838][06561] Avg episode reward: [(0, '181.290'), (1, '0.000')]
+[2023-09-26 05:17:22,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 10616832. Throughput: 0: 785.7, 1: 788.3. Samples: 2653357. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:17:22,838][06561] Avg episode reward: [(0, '181.290'), (1, '0.000')]
+[2023-09-26 05:17:26,939][07697] Updated weights for policy 1, policy_version 20800 (0.0017)
+[2023-09-26 05:17:26,940][07696] Updated weights for policy 0, policy_version 20796 (0.0017)
+[2023-09-26 05:17:27,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10649600. Throughput: 0: 780.1, 1: 780.7. Samples: 2662466. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:17:27,837][06561] Avg episode reward: [(0, '189.310'), (1, '0.000')]
+[2023-09-26 05:17:27,838][07269] Saving new best policy, reward=189.310!
+[2023-09-26 05:17:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10682368. Throughput: 0: 778.2, 1: 779.5. Samples: 2666964. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:17:32,838][06561] Avg episode reward: [(0, '189.310'), (1, '0.000')]
+[2023-09-26 05:17:37,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10715136. Throughput: 0: 780.9, 1: 781.1. Samples: 2676624. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 05:17:37,838][06561] Avg episode reward: [(0, '189.310'), (1, '0.000')]
+[2023-09-26 05:17:40,163][07696] Updated weights for policy 0, policy_version 20956 (0.0018)
+[2023-09-26 05:17:40,163][07697] Updated weights for policy 1, policy_version 20960 (0.0017)
+[2023-09-26 05:17:42,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10747904. Throughput: 0: 777.9, 1: 777.7. Samples: 2685845. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 05:17:42,838][06561] Avg episode reward: [(0, '193.420'), (1, '0.000')]
+[2023-09-26 05:17:42,839][07269] Saving new best policy, reward=193.420!
+[2023-09-26 05:17:47,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6206.5). Total num frames: 10772480. Throughput: 0: 779.6, 1: 781.2. Samples: 2690829. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 05:17:47,838][06561] Avg episode reward: [(0, '193.420'), (1, '0.000')]
+[2023-09-26 05:17:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10805248. Throughput: 0: 777.0, 1: 777.3. Samples: 2699683. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:17:52,838][06561] Avg episode reward: [(0, '193.420'), (1, '0.000')]
+[2023-09-26 05:17:53,252][07696] Updated weights for policy 0, policy_version 21116 (0.0016)
+[2023-09-26 05:17:53,252][07697] Updated weights for policy 1, policy_version 21120 (0.0016)
+[2023-09-26 05:17:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 10838016. Throughput: 0: 776.8, 1: 775.5. Samples: 2709504. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:17:57,837][06561] Avg episode reward: [(0, '193.420'), (1, '0.000')]
+[2023-09-26 05:18:02,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 10870784. Throughput: 0: 774.2, 1: 774.9. Samples: 2713722. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:18:02,837][06561] Avg episode reward: [(0, '196.120'), (1, '0.000')]
+[2023-09-26 05:18:02,838][07269] Saving new best policy, reward=196.120!
+[2023-09-26 05:18:06,407][07696] Updated weights for policy 0, policy_version 21276 (0.0018)
+[2023-09-26 05:18:06,407][07697] Updated weights for policy 1, policy_version 21280 (0.0016)
+[2023-09-26 05:18:07,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10903552. Throughput: 0: 779.3, 1: 778.0. Samples: 2723434. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 05:18:07,838][06561] Avg episode reward: [(0, '196.120'), (1, '0.000')]
+[2023-09-26 05:18:12,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 10936320. Throughput: 0: 782.1, 1: 782.6. Samples: 2732877. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 05:18:12,838][06561] Avg episode reward: [(0, '196.120'), (1, '0.000')]
+[2023-09-26 05:18:17,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 10960896. Throughput: 0: 787.3, 1: 787.0. Samples: 2737807. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 05:18:17,838][06561] Avg episode reward: [(0, '201.230'), (1, '0.000')]
+[2023-09-26 05:18:17,839][07269] Saving new best policy, reward=201.230!
+[2023-09-26 05:18:19,379][07696] Updated weights for policy 0, policy_version 21436 (0.0016)
+[2023-09-26 05:18:19,380][07697] Updated weights for policy 1, policy_version 21440 (0.0019)
+[2023-09-26 05:18:22,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 10993664. Throughput: 0: 780.6, 1: 781.1. Samples: 2746900. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 05:18:22,838][06561] Avg episode reward: [(0, '201.230'), (1, '0.000')]
+[2023-09-26 05:18:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11026432. Throughput: 0: 786.9, 1: 785.6. Samples: 2756608. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 05:18:27,838][06561] Avg episode reward: [(0, '201.230'), (1, '0.000')]
+[2023-09-26 05:18:32,447][07696] Updated weights for policy 0, policy_version 21596 (0.0018)
+[2023-09-26 05:18:32,447][07697] Updated weights for policy 1, policy_version 21600 (0.0018)
+[2023-09-26 05:18:32,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11059200. Throughput: 0: 780.1, 1: 778.6. Samples: 2760970. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 05:18:32,838][06561] Avg episode reward: [(0, '201.230'), (1, '0.000')]
+[2023-09-26 05:18:37,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11091968. Throughput: 0: 788.7, 1: 787.6. Samples: 2770620. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:18:37,838][06561] Avg episode reward: [(0, '201.230'), (1, '0.000')]
+[2023-09-26 05:18:37,851][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000021660_5545984.pth...
+[2023-09-26 05:18:37,851][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000021664_5545984.pth...
+[2023-09-26 05:18:37,886][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000018732_4796416.pth
+[2023-09-26 05:18:37,886][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000018736_4796416.pth
+[2023-09-26 05:18:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11116544. Throughput: 0: 779.9, 1: 781.0. Samples: 2779745. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:18:42,838][06561] Avg episode reward: [(0, '201.230'), (1, '0.000')]
+[2023-09-26 05:18:45,731][07696] Updated weights for policy 0, policy_version 21756 (0.0017)
+[2023-09-26 05:18:45,731][07697] Updated weights for policy 1, policy_version 21760 (0.0016)
+[2023-09-26 05:18:47,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11149312. Throughput: 0: 785.5, 1: 784.9. Samples: 2784389. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:18:47,837][06561] Avg episode reward: [(0, '201.230'), (1, '0.000')]
+[2023-09-26 05:18:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11182080. Throughput: 0: 779.0, 1: 778.8. Samples: 2793531. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:18:52,838][06561] Avg episode reward: [(0, '209.550'), (1, '0.000')]
+[2023-09-26 05:18:52,850][07269] Saving new best policy, reward=209.550!
+[2023-09-26 05:18:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11214848. Throughput: 0: 775.5, 1: 775.7. Samples: 2802682. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:18:57,837][06561] Avg episode reward: [(0, '209.550'), (1, '0.000')]
+[2023-09-26 05:18:59,046][07696] Updated weights for policy 0, policy_version 21916 (0.0017)
+[2023-09-26 05:18:59,046][07697] Updated weights for policy 1, policy_version 21920 (0.0017)
+[2023-09-26 05:19:02,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11239424. Throughput: 0: 773.1, 1: 772.9. Samples: 2807374. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:19:02,838][06561] Avg episode reward: [(0, '209.550'), (1, '0.000')]
+[2023-09-26 05:19:07,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11272192. Throughput: 0: 771.2, 1: 771.1. Samples: 2816305. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:19:07,838][06561] Avg episode reward: [(0, '209.790'), (1, '0.000')]
+[2023-09-26 05:19:07,846][07269] Saving new best policy, reward=209.790!
+[2023-09-26 05:19:12,529][07696] Updated weights for policy 0, policy_version 22076 (0.0016)
+[2023-09-26 05:19:12,530][07697] Updated weights for policy 1, policy_version 22080 (0.0017)
+[2023-09-26 05:19:12,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11304960. Throughput: 0: 765.5, 1: 768.3. Samples: 2825632. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:19:12,838][06561] Avg episode reward: [(0, '209.790'), (1, '0.000')]
+[2023-09-26 05:19:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11329536. Throughput: 0: 770.0, 1: 770.0. Samples: 2830269. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:19:17,838][06561] Avg episode reward: [(0, '209.790'), (1, '0.000')]
+[2023-09-26 05:19:22,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11362304. Throughput: 0: 760.4, 1: 761.0. Samples: 2839085. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 05:19:22,838][06561] Avg episode reward: [(0, '209.790'), (1, '0.000')]
+[2023-09-26 05:19:26,006][07696] Updated weights for policy 0, policy_version 22236 (0.0017)
+[2023-09-26 05:19:26,006][07697] Updated weights for policy 1, policy_version 22240 (0.0016)
+[2023-09-26 05:19:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11395072. Throughput: 0: 765.0, 1: 766.0. Samples: 2848640. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:19:27,838][06561] Avg episode reward: [(0, '216.300'), (1, '0.000')]
+[2023-09-26 05:19:27,839][07269] Saving new best policy, reward=216.300!
+[2023-09-26 05:19:32,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11427840. Throughput: 0: 762.9, 1: 763.8. Samples: 2853091. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:19:32,838][06561] Avg episode reward: [(0, '216.300'), (1, '0.000')]
+[2023-09-26 05:19:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6220.4). Total num frames: 11452416. Throughput: 0: 762.8, 1: 762.4. Samples: 2862164. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:19:37,838][06561] Avg episode reward: [(0, '216.300'), (1, '0.000')]
+[2023-09-26 05:19:39,364][07696] Updated weights for policy 0, policy_version 22396 (0.0018)
+[2023-09-26 05:19:39,364][07697] Updated weights for policy 1, policy_version 22400 (0.0019)
+[2023-09-26 05:19:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11485184. Throughput: 0: 763.4, 1: 762.1. Samples: 2871326. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:19:42,838][06561] Avg episode reward: [(0, '219.430'), (1, '0.000')]
+[2023-09-26 05:19:42,839][07269] Saving new best policy, reward=219.430!
+[2023-09-26 05:19:47,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11517952. Throughput: 0: 763.8, 1: 764.6. Samples: 2876153. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:19:47,838][06561] Avg episode reward: [(0, '219.430'), (1, '0.000')]
+[2023-09-26 05:19:52,563][07697] Updated weights for policy 1, policy_version 22560 (0.0017)
+[2023-09-26 05:19:52,563][07696] Updated weights for policy 0, policy_version 22556 (0.0017)
+[2023-09-26 05:19:52,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11550720. Throughput: 0: 770.9, 1: 768.7. Samples: 2885589. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 05:19:52,838][06561] Avg episode reward: [(0, '219.430'), (1, '0.000')]
+[2023-09-26 05:19:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11583488. Throughput: 0: 769.3, 1: 767.0. Samples: 2894766. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:19:57,838][06561] Avg episode reward: [(0, '219.430'), (1, '0.000')]
+[2023-09-26 05:20:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 11608064. Throughput: 0: 766.8, 1: 767.7. Samples: 2899319. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:20:02,838][06561] Avg episode reward: [(0, '224.750'), (1, '0.000')]
+[2023-09-26 05:20:02,839][07269] Saving new best policy, reward=224.750!
+[2023-09-26 05:20:05,686][07696] Updated weights for policy 0, policy_version 22716 (0.0017)
+[2023-09-26 05:20:05,686][07697] Updated weights for policy 1, policy_version 22720 (0.0018)
+[2023-09-26 05:20:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11640832. Throughput: 0: 773.7, 1: 773.2. Samples: 2908697. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:20:07,838][06561] Avg episode reward: [(0, '224.750'), (1, '0.000')]
+[2023-09-26 05:20:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11673600. Throughput: 0: 773.4, 1: 773.4. Samples: 2918245. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:20:12,838][06561] Avg episode reward: [(0, '224.750'), (1, '0.000')]
+[2023-09-26 05:20:17,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11706368. Throughput: 0: 774.3, 1: 774.1. Samples: 2922770. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:20:17,838][06561] Avg episode reward: [(0, '227.430'), (1, '0.000')]
+[2023-09-26 05:20:17,839][07269] Saving new best policy, reward=227.430!
+[2023-09-26 05:20:18,763][07697] Updated weights for policy 1, policy_version 22880 (0.0017)
+[2023-09-26 05:20:18,763][07696] Updated weights for policy 0, policy_version 22876 (0.0017)
+[2023-09-26 05:20:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11739136. Throughput: 0: 778.4, 1: 779.5. Samples: 2932272. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:20:22,838][06561] Avg episode reward: [(0, '227.430'), (1, '0.000')]
+[2023-09-26 05:20:27,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 11771904. Throughput: 0: 783.3, 1: 784.0. Samples: 2941854. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 05:20:27,837][06561] Avg episode reward: [(0, '227.430'), (1, '0.000')]
+[2023-09-26 05:20:31,769][07696] Updated weights for policy 0, policy_version 23036 (0.0018)
+[2023-09-26 05:20:31,769][07697] Updated weights for policy 1, policy_version 23040 (0.0016)
+[2023-09-26 05:20:32,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 11796480. Throughput: 0: 783.4, 1: 784.8. Samples: 2946725. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:20:32,837][06561] Avg episode reward: [(0, '232.940'), (1, '0.000')]
+[2023-09-26 05:20:32,839][07269] Saving new best policy, reward=232.940!
+[2023-09-26 05:20:37,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11829248. Throughput: 0: 779.6, 1: 781.6. Samples: 2955843. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:20:37,838][06561] Avg episode reward: [(0, '232.940'), (1, '0.000')]
+[2023-09-26 05:20:37,848][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000023104_5914624.pth...
+[2023-09-26 05:20:37,848][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000023100_5914624.pth...
+[2023-09-26 05:20:37,883][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000020188_5169152.pth
+[2023-09-26 05:20:37,885][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000020192_5169152.pth
+[2023-09-26 05:20:42,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11862016. Throughput: 0: 784.7, 1: 785.5. Samples: 2965425. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:20:42,838][06561] Avg episode reward: [(0, '232.940'), (1, '0.000')]
+[2023-09-26 05:20:44,917][07697] Updated weights for policy 1, policy_version 23200 (0.0016)
+[2023-09-26 05:20:44,917][07696] Updated weights for policy 0, policy_version 23196 (0.0016)
+[2023-09-26 05:20:47,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11894784. Throughput: 0: 782.3, 1: 781.5. Samples: 2969692. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 05:20:47,838][06561] Avg episode reward: [(0, '232.940'), (1, '0.000')]
+[2023-09-26 05:20:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 11919360. Throughput: 0: 780.7, 1: 783.3. Samples: 2979076. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 05:20:52,839][06561] Avg episode reward: [(0, '235.210'), (1, '0.000')]
+[2023-09-26 05:20:52,867][07269] Saving new best policy, reward=235.210!
+[2023-09-26 05:20:57,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6206.5). Total num frames: 11952128. Throughput: 0: 776.5, 1: 775.9. Samples: 2988104. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 05:20:57,837][06561] Avg episode reward: [(0, '235.210'), (1, '0.000')]
+[2023-09-26 05:20:58,291][07696] Updated weights for policy 0, policy_version 23356 (0.0018)
+[2023-09-26 05:20:58,291][07697] Updated weights for policy 1, policy_version 23360 (0.0018)
+[2023-09-26 05:21:02,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11984896. Throughput: 0: 779.8, 1: 780.1. Samples: 2992966. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:21:02,838][06561] Avg episode reward: [(0, '235.210'), (1, '0.000')]
+[2023-09-26 05:21:07,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 12017664. Throughput: 0: 779.8, 1: 777.9.
Throughput: 0: 784.7, 1: 785.5. Samples: 2965425. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:20:42,838][06561] Avg episode reward: [(0, '232.940'), (1, '0.000')] +[2023-09-26 05:20:44,917][07697] Updated weights for policy 1, policy_version 23200 (0.0016) +[2023-09-26 05:20:44,917][07696] Updated weights for policy 0, policy_version 23196 (0.0016) +[2023-09-26 05:20:47,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11894784. Throughput: 0: 782.3, 1: 781.5. Samples: 2969692. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:20:47,838][06561] Avg episode reward: [(0, '232.940'), (1, '0.000')] +[2023-09-26 05:20:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 11919360. Throughput: 0: 780.7, 1: 783.3. Samples: 2979076. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:20:52,839][06561] Avg episode reward: [(0, '235.210'), (1, '0.000')] +[2023-09-26 05:20:52,867][07269] Saving new best policy, reward=235.210! +[2023-09-26 05:20:57,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6206.5). Total num frames: 11952128. Throughput: 0: 776.5, 1: 775.9. Samples: 2988104. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:20:57,837][06561] Avg episode reward: [(0, '235.210'), (1, '0.000')] +[2023-09-26 05:20:58,291][07696] Updated weights for policy 0, policy_version 23356 (0.0018) +[2023-09-26 05:20:58,291][07697] Updated weights for policy 1, policy_version 23360 (0.0018) +[2023-09-26 05:21:02,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 11984896. Throughput: 0: 779.8, 1: 780.1. Samples: 2992966. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:21:02,838][06561] Avg episode reward: [(0, '235.210'), (1, '0.000')] +[2023-09-26 05:21:07,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 12017664. Throughput: 0: 779.8, 1: 777.9. 
Samples: 3002368. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:21:07,838][06561] Avg episode reward: [(0, '239.640'), (1, '0.000')] +[2023-09-26 05:21:07,849][07269] Saving new best policy, reward=239.640! +[2023-09-26 05:21:11,470][07696] Updated weights for policy 0, policy_version 23516 (0.0017) +[2023-09-26 05:21:11,470][07697] Updated weights for policy 1, policy_version 23520 (0.0015) +[2023-09-26 05:21:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 12050432. Throughput: 0: 774.5, 1: 774.8. Samples: 3011572. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:21:12,838][06561] Avg episode reward: [(0, '239.640'), (1, '0.000')] +[2023-09-26 05:21:17,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 12075008. Throughput: 0: 774.0, 1: 773.7. Samples: 3016372. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:21:17,838][06561] Avg episode reward: [(0, '239.640'), (1, '0.000')] +[2023-09-26 05:21:22,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 12107776. Throughput: 0: 771.8, 1: 772.2. Samples: 3025324. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:21:22,838][06561] Avg episode reward: [(0, '239.640'), (1, '0.000')] +[2023-09-26 05:21:24,789][07696] Updated weights for policy 0, policy_version 23676 (0.0016) +[2023-09-26 05:21:24,789][07697] Updated weights for policy 1, policy_version 23680 (0.0015) +[2023-09-26 05:21:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 12140544. Throughput: 0: 770.4, 1: 771.2. Samples: 3034797. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:21:27,838][06561] Avg episode reward: [(0, '242.060'), (1, '0.000')] +[2023-09-26 05:21:27,839][07269] Saving new best policy, reward=242.060! +[2023-09-26 05:21:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). 
Total num frames: 12173312. Throughput: 0: 773.5, 1: 771.9. Samples: 3039232. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:21:32,837][06561] Avg episode reward: [(0, '242.060'), (1, '0.000')] +[2023-09-26 05:21:37,749][07697] Updated weights for policy 1, policy_version 23840 (0.0017) +[2023-09-26 05:21:37,750][07696] Updated weights for policy 0, policy_version 23836 (0.0017) +[2023-09-26 05:21:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 12206080. Throughput: 0: 777.5, 1: 774.9. Samples: 3048932. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:21:37,838][06561] Avg episode reward: [(0, '242.060'), (1, '0.000')] +[2023-09-26 05:21:42,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 12234752. Throughput: 0: 780.9, 1: 781.0. Samples: 3058392. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:21:42,838][06561] Avg episode reward: [(0, '246.690'), (1, '0.000')] +[2023-09-26 05:21:42,839][07269] Saving new best policy, reward=246.690! +[2023-09-26 05:21:47,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 12263424. Throughput: 0: 779.0, 1: 781.4. Samples: 3063186. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:21:47,838][06561] Avg episode reward: [(0, '246.690'), (1, '0.000')] +[2023-09-26 05:21:50,825][07696] Updated weights for policy 0, policy_version 23996 (0.0016) +[2023-09-26 05:21:50,825][07697] Updated weights for policy 1, policy_version 24000 (0.0015) +[2023-09-26 05:21:52,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 12296192. Throughput: 0: 775.8, 1: 777.6. Samples: 3072272. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 05:21:52,838][06561] Avg episode reward: [(0, '246.690'), (1, '0.000')] +[2023-09-26 05:21:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 12328960. 
Throughput: 0: 783.4, 1: 783.2. Samples: 3082067. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 05:21:57,838][06561] Avg episode reward: [(0, '246.690'), (1, '0.000')] +[2023-09-26 05:22:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 12361728. Throughput: 0: 780.5, 1: 779.5. Samples: 3086574. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 05:22:02,838][06561] Avg episode reward: [(0, '249.020'), (1, '0.000')] +[2023-09-26 05:22:02,839][07269] Saving new best policy, reward=249.020! +[2023-09-26 05:22:03,742][07696] Updated weights for policy 0, policy_version 24156 (0.0017) +[2023-09-26 05:22:03,742][07697] Updated weights for policy 1, policy_version 24160 (0.0017) +[2023-09-26 05:22:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 12394496. Throughput: 0: 787.5, 1: 787.6. Samples: 3096203. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 05:22:07,838][06561] Avg episode reward: [(0, '249.020'), (1, '0.000')] +[2023-09-26 05:22:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 12419072. Throughput: 0: 781.2, 1: 780.7. Samples: 3105082. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:22:12,838][06561] Avg episode reward: [(0, '249.020'), (1, '0.000')] +[2023-09-26 05:22:17,278][07696] Updated weights for policy 0, policy_version 24316 (0.0015) +[2023-09-26 05:22:17,278][07697] Updated weights for policy 1, policy_version 24320 (0.0017) +[2023-09-26 05:22:17,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 12451840. Throughput: 0: 784.4, 1: 785.0. Samples: 3109853. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:22:17,837][06561] Avg episode reward: [(0, '253.730'), (1, '0.000')] +[2023-09-26 05:22:17,838][07269] Saving new best policy, reward=253.730! 
+[2023-09-26 05:22:22,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 12476416. Throughput: 0: 767.7, 1: 767.7. Samples: 3118024. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:22:22,838][06561] Avg episode reward: [(0, '253.730'), (1, '0.000')] +[2023-09-26 05:22:27,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 12509184. Throughput: 0: 759.5, 1: 759.0. Samples: 3126723. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 05:22:27,838][06561] Avg episode reward: [(0, '253.730'), (1, '0.000')] +[2023-09-26 05:22:32,068][07696] Updated weights for policy 0, policy_version 24476 (0.0013) +[2023-09-26 05:22:32,069][07697] Updated weights for policy 1, policy_version 24480 (0.0015) +[2023-09-26 05:22:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6007.5, 300 sec: 6164.8). Total num frames: 12533760. Throughput: 0: 752.6, 1: 749.4. Samples: 3130774. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 05:22:32,838][06561] Avg episode reward: [(0, '253.730'), (1, '0.000')] +[2023-09-26 05:22:37,837][06561] Fps is (10 sec: 4915.1, 60 sec: 5870.9, 300 sec: 6137.1). Total num frames: 12558336. Throughput: 0: 739.7, 1: 738.9. Samples: 3138809. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 05:22:37,838][06561] Avg episode reward: [(0, '254.890'), (1, '0.000')] +[2023-09-26 05:22:37,849][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000024528_6279168.pth... +[2023-09-26 05:22:37,849][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000024524_6279168.pth... +[2023-09-26 05:22:37,878][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000021664_5545984.pth +[2023-09-26 05:22:37,883][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000021660_5545984.pth +[2023-09-26 05:22:37,886][07269] Saving new best policy, reward=254.890! 
+[2023-09-26 05:22:41,030][07269] Early stopping after 2 epochs (8 sgd steps), loss delta 0.0000003 +[2023-09-26 05:22:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 5939.2, 300 sec: 6164.8). Total num frames: 12591104. Throughput: 0: 723.8, 1: 724.4. Samples: 3147239. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:22:42,838][06561] Avg episode reward: [(0, '254.890'), (1, '0.000')] +[2023-09-26 05:22:46,918][07696] Updated weights for policy 0, policy_version 24628 (0.0017) +[2023-09-26 05:22:46,919][07697] Updated weights for policy 1, policy_version 24640 (0.0015) +[2023-09-26 05:22:47,837][06561] Fps is (10 sec: 5734.6, 60 sec: 5871.0, 300 sec: 6137.1). Total num frames: 12615680. Throughput: 0: 720.1, 1: 720.1. Samples: 3151381. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:22:47,837][06561] Avg episode reward: [(0, '254.890'), (1, '0.000')] +[2023-09-26 05:22:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 5870.9, 300 sec: 6137.1). Total num frames: 12648448. Throughput: 0: 706.4, 1: 707.5. Samples: 3159829. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:22:52,838][06561] Avg episode reward: [(0, '254.890'), (1, '0.000')] +[2023-09-26 05:22:57,837][06561] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 6109.3). Total num frames: 12673024. Throughput: 0: 710.1, 1: 710.4. Samples: 3169007. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:22:57,838][06561] Avg episode reward: [(0, '260.780'), (1, '0.000')] +[2023-09-26 05:22:57,853][07269] Saving new best policy, reward=260.780! +[2023-09-26 05:23:00,443][07697] Updated weights for policy 1, policy_version 24800 (0.0013) +[2023-09-26 05:23:00,444][07696] Updated weights for policy 0, policy_version 24788 (0.0018) +[2023-09-26 05:23:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 6109.3). Total num frames: 12705792. Throughput: 0: 711.5, 1: 712.5. Samples: 3173931. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:23:02,838][06561] Avg episode reward: [(0, '260.780'), (1, '0.000')] +[2023-09-26 05:23:07,837][06561] Fps is (10 sec: 6553.8, 60 sec: 5734.4, 300 sec: 6109.3). Total num frames: 12738560. Throughput: 0: 718.0, 1: 717.0. Samples: 3182597. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:23:07,837][06561] Avg episode reward: [(0, '260.780'), (1, '0.000')] +[2023-09-26 05:23:12,837][06561] Fps is (10 sec: 6553.7, 60 sec: 5870.9, 300 sec: 6137.1). Total num frames: 12771328. Throughput: 0: 725.0, 1: 724.5. Samples: 3191949. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:23:12,838][06561] Avg episode reward: [(0, '261.980'), (1, '0.000')] +[2023-09-26 05:23:12,838][07269] Saving new best policy, reward=261.980! +[2023-09-26 05:23:14,017][07697] Updated weights for policy 1, policy_version 24960 (0.0019) +[2023-09-26 05:23:14,017][07696] Updated weights for policy 0, policy_version 24948 (0.0018) +[2023-09-26 05:23:17,837][06561] Fps is (10 sec: 5734.2, 60 sec: 5734.4, 300 sec: 6109.3). Total num frames: 12795904. Throughput: 0: 732.0, 1: 733.0. Samples: 3196697. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:23:17,838][06561] Avg episode reward: [(0, '261.980'), (1, '0.000')] +[2023-09-26 05:23:22,837][06561] Fps is (10 sec: 5734.2, 60 sec: 5870.9, 300 sec: 6109.3). Total num frames: 12828672. Throughput: 0: 743.0, 1: 744.7. Samples: 3205757. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:23:22,839][06561] Avg episode reward: [(0, '261.980'), (1, '0.000')] +[2023-09-26 05:23:27,365][07696] Updated weights for policy 0, policy_version 25108 (0.0017) +[2023-09-26 05:23:27,366][07697] Updated weights for policy 1, policy_version 25120 (0.0018) +[2023-09-26 05:23:27,837][06561] Fps is (10 sec: 6553.7, 60 sec: 5870.9, 300 sec: 6109.3). Total num frames: 12861440. Throughput: 0: 753.1, 1: 753.3. Samples: 3215027. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:23:27,837][06561] Avg episode reward: [(0, '261.980'), (1, '0.000')] +[2023-09-26 05:23:32,837][06561] Fps is (10 sec: 6553.9, 60 sec: 6007.5, 300 sec: 6109.3). Total num frames: 12894208. Throughput: 0: 757.1, 1: 755.7. Samples: 3219460. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:23:32,837][06561] Avg episode reward: [(0, '266.740'), (1, '0.000')] +[2023-09-26 05:23:32,838][07269] Saving new best policy, reward=266.740! +[2023-09-26 05:23:37,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6075.7, 300 sec: 6123.2). Total num frames: 12922880. Throughput: 0: 768.5, 1: 767.4. Samples: 3228944. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:23:37,838][06561] Avg episode reward: [(0, '266.740'), (1, '0.000')] +[2023-09-26 05:23:40,445][07696] Updated weights for policy 0, policy_version 25268 (0.0019) +[2023-09-26 05:23:40,445][07697] Updated weights for policy 1, policy_version 25280 (0.0018) +[2023-09-26 05:23:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6007.5, 300 sec: 6109.3). Total num frames: 12951552. Throughput: 0: 770.4, 1: 770.1. Samples: 3238330. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:23:42,837][06561] Avg episode reward: [(0, '266.740'), (1, '0.000')] +[2023-09-26 05:23:47,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 12984320. Throughput: 0: 769.1, 1: 769.8. Samples: 3243179. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:23:47,838][06561] Avg episode reward: [(0, '266.740'), (1, '0.000')] +[2023-09-26 05:23:52,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 13017088. Throughput: 0: 775.1, 1: 776.9. Samples: 3252436. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:23:52,838][06561] Avg episode reward: [(0, '266.740'), (1, '0.000')] +[2023-09-26 05:23:53,536][07696] Updated weights for policy 0, policy_version 25428 (0.0014) +[2023-09-26 05:23:53,537][07697] Updated weights for policy 1, policy_version 25440 (0.0015) +[2023-09-26 05:23:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13049856. Throughput: 0: 779.0, 1: 779.9. Samples: 3262099. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:23:57,838][06561] Avg episode reward: [(0, '266.740'), (1, '0.000')] +[2023-09-26 05:24:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13082624. Throughput: 0: 777.3, 1: 775.3. Samples: 3266564. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:24:02,838][06561] Avg episode reward: [(0, '274.210'), (1, '0.000')] +[2023-09-26 05:24:02,839][07269] Saving new best policy, reward=274.210! +[2023-09-26 05:24:06,696][07696] Updated weights for policy 0, policy_version 25588 (0.0017) +[2023-09-26 05:24:06,696][07697] Updated weights for policy 1, policy_version 25600 (0.0018) +[2023-09-26 05:24:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 13107200. Throughput: 0: 779.1, 1: 777.7. Samples: 3275810. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:24:07,838][06561] Avg episode reward: [(0, '274.210'), (1, '0.000')] +[2023-09-26 05:24:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 13139968. Throughput: 0: 779.6, 1: 779.1. Samples: 3285166. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:24:12,838][06561] Avg episode reward: [(0, '274.210'), (1, '0.000')] +[2023-09-26 05:24:17,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6137.1). Total num frames: 13172736. Throughput: 0: 785.3, 1: 786.8. Samples: 3290207. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:24:17,838][06561] Avg episode reward: [(0, '274.210'), (1, '0.000')] +[2023-09-26 05:24:19,763][07696] Updated weights for policy 0, policy_version 25748 (0.0015) +[2023-09-26 05:24:19,764][07697] Updated weights for policy 1, policy_version 25760 (0.0016) +[2023-09-26 05:24:22,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6137.1). Total num frames: 13205504. Throughput: 0: 783.0, 1: 781.2. Samples: 3299332. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:24:22,837][06561] Avg episode reward: [(0, '274.310'), (1, '0.000')] +[2023-09-26 05:24:22,846][07269] Saving new best policy, reward=274.310! +[2023-09-26 05:24:27,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6123.2). Total num frames: 13234176. Throughput: 0: 779.2, 1: 780.5. Samples: 3308518. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:24:27,838][06561] Avg episode reward: [(0, '274.310'), (1, '0.000')] +[2023-09-26 05:24:32,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 13262848. Throughput: 0: 775.9, 1: 776.6. Samples: 3313044. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:24:32,838][06561] Avg episode reward: [(0, '274.310'), (1, '0.000')] +[2023-09-26 05:24:33,246][07697] Updated weights for policy 1, policy_version 25920 (0.0016) +[2023-09-26 05:24:33,247][07696] Updated weights for policy 0, policy_version 25908 (0.0017) +[2023-09-26 05:24:37,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6137.1). Total num frames: 13295616. Throughput: 0: 773.8, 1: 773.6. Samples: 3322066. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:24:37,837][06561] Avg episode reward: [(0, '280.470'), (1, '0.000')] +[2023-09-26 05:24:37,848][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000025956_6647808.pth... 
+[2023-09-26 05:24:37,848][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000025968_6647808.pth... +[2023-09-26 05:24:37,883][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000023104_5914624.pth +[2023-09-26 05:24:37,883][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000023100_5914624.pth +[2023-09-26 05:24:37,887][07269] Saving new best policy, reward=280.470! +[2023-09-26 05:24:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13328384. Throughput: 0: 773.2, 1: 774.8. Samples: 3331758. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:24:42,838][06561] Avg episode reward: [(0, '280.470'), (1, '0.000')] +[2023-09-26 05:24:46,439][07696] Updated weights for policy 0, policy_version 26068 (0.0019) +[2023-09-26 05:24:46,439][07697] Updated weights for policy 1, policy_version 26080 (0.0019) +[2023-09-26 05:24:47,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13361152. Throughput: 0: 773.7, 1: 773.6. Samples: 3336192. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:24:47,838][06561] Avg episode reward: [(0, '280.470'), (1, '0.000')] +[2023-09-26 05:24:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 13385728. Throughput: 0: 770.8, 1: 770.9. Samples: 3345183. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:24:52,838][06561] Avg episode reward: [(0, '280.470'), (1, '0.000')] +[2023-09-26 05:24:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 13418496. Throughput: 0: 772.6, 1: 770.9. Samples: 3354624. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:24:57,838][06561] Avg episode reward: [(0, '283.460'), (1, '0.000')] +[2023-09-26 05:24:57,839][07269] Saving new best policy, reward=283.460! 
+[2023-09-26 05:24:59,794][07697] Updated weights for policy 1, policy_version 26240 (0.0016) +[2023-09-26 05:24:59,794][07696] Updated weights for policy 0, policy_version 26228 (0.0017) +[2023-09-26 05:25:02,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 13451264. Throughput: 0: 764.9, 1: 764.9. Samples: 3359050. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:25:02,838][06561] Avg episode reward: [(0, '283.460'), (1, '0.000')] +[2023-09-26 05:25:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13484032. Throughput: 0: 773.7, 1: 773.6. Samples: 3368960. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:25:07,838][06561] Avg episode reward: [(0, '283.460'), (1, '0.000')] +[2023-09-26 05:25:12,397][07697] Updated weights for policy 1, policy_version 26400 (0.0016) +[2023-09-26 05:25:12,397][07696] Updated weights for policy 0, policy_version 26388 (0.0017) +[2023-09-26 05:25:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13516800. Throughput: 0: 782.0, 1: 780.2. Samples: 3378820. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:25:12,838][06561] Avg episode reward: [(0, '285.630'), (1, '0.000')] +[2023-09-26 05:25:12,839][07269] Saving new best policy, reward=285.630! +[2023-09-26 05:25:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13549568. Throughput: 0: 782.1, 1: 779.1. Samples: 3383296. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 05:25:17,838][06561] Avg episode reward: [(0, '285.630'), (1, '0.000')] +[2023-09-26 05:25:22,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 13574144. Throughput: 0: 781.6, 1: 780.8. Samples: 3392370. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 05:25:22,838][06561] Avg episode reward: [(0, '285.630'), (1, '0.000')] +[2023-09-26 05:25:25,874][07697] Updated weights for policy 1, policy_version 26560 (0.0015) +[2023-09-26 05:25:25,875][07696] Updated weights for policy 0, policy_version 26548 (0.0018) +[2023-09-26 05:25:27,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6212.3, 300 sec: 6137.1). Total num frames: 13606912. Throughput: 0: 778.8, 1: 776.0. Samples: 3401723. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 05:25:27,838][06561] Avg episode reward: [(0, '290.600'), (1, '0.000')] +[2023-09-26 05:25:27,839][07269] Saving new best policy, reward=290.600! +[2023-09-26 05:25:32,837][06561] Fps is (10 sec: 6553.9, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13639680. Throughput: 0: 775.8, 1: 778.0. Samples: 3406110. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 05:25:32,837][06561] Avg episode reward: [(0, '290.030'), (1, '0.000')] +[2023-09-26 05:25:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13672448. Throughput: 0: 783.0, 1: 782.8. Samples: 3415643. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:25:37,838][06561] Avg episode reward: [(0, '290.030'), (1, '0.000')] +[2023-09-26 05:25:39,033][07696] Updated weights for policy 0, policy_version 26708 (0.0017) +[2023-09-26 05:25:39,033][07697] Updated weights for policy 1, policy_version 26720 (0.0017) +[2023-09-26 05:25:42,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 13697024. Throughput: 0: 780.4, 1: 782.6. Samples: 3424955. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:25:42,838][06561] Avg episode reward: [(0, '290.030'), (1, '0.000')] +[2023-09-26 05:25:47,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 13729792. Throughput: 0: 786.7, 1: 786.0. Samples: 3429818. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:25:47,837][06561] Avg episode reward: [(0, '292.830'), (1, '0.000')] +[2023-09-26 05:25:47,838][07269] Saving new best policy, reward=292.830! +[2023-09-26 05:25:51,977][07697] Updated weights for policy 1, policy_version 26880 (0.0016) +[2023-09-26 05:25:51,977][07696] Updated weights for policy 0, policy_version 26868 (0.0017) +[2023-09-26 05:25:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6137.1). Total num frames: 13762560. Throughput: 0: 777.9, 1: 779.9. Samples: 3439062. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:25:52,837][06561] Avg episode reward: [(0, '292.830'), (1, '0.000')] +[2023-09-26 05:25:57,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13795328. Throughput: 0: 776.2, 1: 777.2. Samples: 3448723. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 05:25:57,838][06561] Avg episode reward: [(0, '292.830'), (1, '0.000')] +[2023-09-26 05:26:02,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13828096. Throughput: 0: 776.2, 1: 777.9. Samples: 3453230. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 05:26:02,838][06561] Avg episode reward: [(0, '297.390'), (1, '0.000')] +[2023-09-26 05:26:02,839][07269] Saving new best policy, reward=297.390! +[2023-09-26 05:26:05,168][07697] Updated weights for policy 1, policy_version 27040 (0.0017) +[2023-09-26 05:26:05,168][07696] Updated weights for policy 0, policy_version 27028 (0.0017) +[2023-09-26 05:26:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6137.1). Total num frames: 13860864. Throughput: 0: 780.1, 1: 781.6. Samples: 3462648. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 05:26:07,838][06561] Avg episode reward: [(0, '297.390'), (1, '0.000')] +[2023-09-26 05:26:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 13885440. 
Throughput: 0: 778.6, 1: 780.5. Samples: 3471880. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:12,838][06561] Avg episode reward: [(0, '297.390'), (1, '0.000')] +[2023-09-26 05:26:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 13918208. Throughput: 0: 786.0, 1: 786.0. Samples: 3476852. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:17,838][06561] Avg episode reward: [(0, '297.390'), (1, '0.000')] +[2023-09-26 05:26:18,187][07696] Updated weights for policy 0, policy_version 27188 (0.0016) +[2023-09-26 05:26:18,187][07697] Updated weights for policy 1, policy_version 27200 (0.0018) +[2023-09-26 05:26:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 13950976. Throughput: 0: 780.9, 1: 781.5. Samples: 3485952. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:22,838][06561] Avg episode reward: [(0, '302.480'), (1, '0.000')] +[2023-09-26 05:26:22,848][07269] Saving new best policy, reward=302.480! +[2023-09-26 05:26:27,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6137.1). Total num frames: 13983744. Throughput: 0: 788.8, 1: 787.4. Samples: 3495883. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:27,837][06561] Avg episode reward: [(0, '302.480'), (1, '0.000')] +[2023-09-26 05:26:31,310][07697] Updated weights for policy 1, policy_version 27360 (0.0018) +[2023-09-26 05:26:31,310][07696] Updated weights for policy 0, policy_version 27348 (0.0017) +[2023-09-26 05:26:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 14016512. Throughput: 0: 780.8, 1: 781.2. Samples: 3500107. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:32,838][06561] Avg episode reward: [(0, '302.480'), (1, '0.000')] +[2023-09-26 05:26:37,837][06561] Fps is (10 sec: 6143.8, 60 sec: 6212.3, 300 sec: 6137.1). Total num frames: 14045184. 
Throughput: 0: 784.4, 1: 782.9. Samples: 3509592. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:37,838][06561] Avg episode reward: [(0, '307.200'), (1, '0.000')] +[2023-09-26 05:26:37,850][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000027428_7024640.pth... +[2023-09-26 05:26:37,875][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000027440_7024640.pth... +[2023-09-26 05:26:37,882][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000024524_6279168.pth +[2023-09-26 05:26:37,886][07269] Saving new best policy, reward=307.200! +[2023-09-26 05:26:37,913][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000024528_6279168.pth +[2023-09-26 05:26:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 14073856. Throughput: 0: 777.8, 1: 777.2. Samples: 3518701. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:42,838][06561] Avg episode reward: [(0, '307.200'), (1, '0.000')] +[2023-09-26 05:26:44,533][07697] Updated weights for policy 1, policy_version 27520 (0.0017) +[2023-09-26 05:26:44,534][07696] Updated weights for policy 0, policy_version 27508 (0.0017) +[2023-09-26 05:26:47,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 14106624. Throughput: 0: 779.0, 1: 778.3. Samples: 3523308. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:47,838][06561] Avg episode reward: [(0, '307.200'), (1, '0.000')] +[2023-09-26 05:26:52,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 14139392. Throughput: 0: 780.0, 1: 778.2. Samples: 3532769. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:52,838][06561] Avg episode reward: [(0, '307.200'), (1, '0.000')] +[2023-09-26 05:26:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 14163968. Throughput: 0: 773.4, 1: 772.9. Samples: 3541467. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:26:57,838][06561] Avg episode reward: [(0, '308.440'), (1, '0.000')] +[2023-09-26 05:26:57,839][07269] Saving new best policy, reward=308.440! +[2023-09-26 05:26:58,081][07696] Updated weights for policy 0, policy_version 27668 (0.0016) +[2023-09-26 05:26:58,082][07697] Updated weights for policy 1, policy_version 27680 (0.0016) +[2023-09-26 05:27:02,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 14196736. Throughput: 0: 771.9, 1: 770.3. Samples: 3546251. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:27:02,838][06561] Avg episode reward: [(0, '308.440'), (1, '0.000')] +[2023-09-26 05:27:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 14229504. Throughput: 0: 771.7, 1: 770.0. Samples: 3555328. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:27:07,838][06561] Avg episode reward: [(0, '308.440'), (1, '0.000')] +[2023-09-26 05:27:11,331][07697] Updated weights for policy 1, policy_version 27840 (0.0016) +[2023-09-26 05:27:11,332][07696] Updated weights for policy 0, policy_version 27828 (0.0017) +[2023-09-26 05:27:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 14262272. Throughput: 0: 765.3, 1: 766.3. Samples: 3564806. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:27:12,838][06561] Avg episode reward: [(0, '314.090'), (1, '0.000')] +[2023-09-26 05:27:12,839][07269] Saving new best policy, reward=314.090! +[2023-09-26 05:27:17,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6164.8). Total num frames: 14295040. Throughput: 0: 772.3, 1: 772.1. Samples: 3569605. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:27:17,837][06561] Avg episode reward: [(0, '314.090'), (1, '0.000')] +[2023-09-26 05:27:22,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6137.1). Total num frames: 14319616. 
Throughput: 0: 771.2, 1: 772.6. Samples: 3579063. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:27:22,837][06561] Avg episode reward: [(0, '314.090'), (1, '0.000')] +[2023-09-26 05:27:24,198][07697] Updated weights for policy 1, policy_version 28000 (0.0014) +[2023-09-26 05:27:24,199][07696] Updated weights for policy 0, policy_version 27988 (0.0015) +[2023-09-26 05:27:27,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 14352384. Throughput: 0: 773.6, 1: 773.1. Samples: 3588304. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 05:27:27,838][06561] Avg episode reward: [(0, '319.540'), (1, '0.000')] +[2023-09-26 05:27:27,839][07269] Saving new best policy, reward=319.540! +[2023-09-26 05:27:32,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 14385152. Throughput: 0: 773.0, 1: 774.0. Samples: 3592923. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 05:27:32,838][06561] Avg episode reward: [(0, '319.540'), (1, '0.000')] +[2023-09-26 05:27:37,496][07697] Updated weights for policy 1, policy_version 28160 (0.0016) +[2023-09-26 05:27:37,496][07696] Updated weights for policy 0, policy_version 28148 (0.0017) +[2023-09-26 05:27:37,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6212.3, 300 sec: 6192.6). Total num frames: 14417920. Throughput: 0: 774.4, 1: 773.7. Samples: 3602432. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 05:27:37,837][06561] Avg episode reward: [(0, '319.540'), (1, '0.000')] +[2023-09-26 05:27:42,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14450688. Throughput: 0: 781.4, 1: 781.3. Samples: 3611789. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 05:27:42,838][06561] Avg episode reward: [(0, '319.540'), (1, '0.000')] +[2023-09-26 05:27:47,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 14475264. Throughput: 0: 782.0, 1: 783.3. 
Samples: 3616687. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 05:27:47,838][06561] Avg episode reward: [(0, '321.330'), (1, '0.000')] +[2023-09-26 05:27:47,852][07269] Saving new best policy, reward=321.330! +[2023-09-26 05:27:50,496][07697] Updated weights for policy 1, policy_version 28320 (0.0019) +[2023-09-26 05:27:50,496][07696] Updated weights for policy 0, policy_version 28308 (0.0018) +[2023-09-26 05:27:52,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14508032. Throughput: 0: 783.3, 1: 784.9. Samples: 3625900. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 05:27:52,837][06561] Avg episode reward: [(0, '321.330'), (1, '0.000')] +[2023-09-26 05:27:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 14540800. Throughput: 0: 785.2, 1: 785.2. Samples: 3635478. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 05:27:57,838][06561] Avg episode reward: [(0, '321.330'), (1, '0.000')] +[2023-09-26 05:28:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 14573568. Throughput: 0: 786.6, 1: 786.6. Samples: 3640403. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 05:28:02,837][06561] Avg episode reward: [(0, '321.690'), (1, '0.000')] +[2023-09-26 05:28:02,838][07269] Saving new best policy, reward=321.690! +[2023-09-26 05:28:03,474][07697] Updated weights for policy 1, policy_version 28480 (0.0017) +[2023-09-26 05:28:03,475][07696] Updated weights for policy 0, policy_version 28468 (0.0017) +[2023-09-26 05:28:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 14606336. Throughput: 0: 782.5, 1: 782.2. Samples: 3649471. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:28:07,838][06561] Avg episode reward: [(0, '321.690'), (1, '0.000')] +[2023-09-26 05:28:12,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 14635008. 
Throughput: 0: 781.6, 1: 783.7. Samples: 3658745. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:28:12,838][06561] Avg episode reward: [(0, '321.690'), (1, '0.000')] +[2023-09-26 05:28:16,894][07696] Updated weights for policy 0, policy_version 28628 (0.0017) +[2023-09-26 05:28:16,894][07697] Updated weights for policy 1, policy_version 28640 (0.0016) +[2023-09-26 05:28:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14663680. Throughput: 0: 780.7, 1: 782.6. Samples: 3663268. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:28:17,838][06561] Avg episode reward: [(0, '321.690'), (1, '0.000')] +[2023-09-26 05:28:22,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14696448. Throughput: 0: 773.8, 1: 774.9. Samples: 3672121. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:28:22,838][06561] Avg episode reward: [(0, '328.360'), (1, '0.000')] +[2023-09-26 05:28:22,849][07269] Saving new best policy, reward=328.360! +[2023-09-26 05:28:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14729216. Throughput: 0: 774.2, 1: 774.5. Samples: 3681482. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:28:27,838][06561] Avg episode reward: [(0, '328.360'), (1, '0.000')] +[2023-09-26 05:28:30,255][07696] Updated weights for policy 0, policy_version 28788 (0.0018) +[2023-09-26 05:28:30,255][07697] Updated weights for policy 1, policy_version 28800 (0.0017) +[2023-09-26 05:28:32,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 14757888. Throughput: 0: 774.7, 1: 773.1. Samples: 3686339. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:28:32,838][06561] Avg episode reward: [(0, '328.360'), (1, '0.000')] +[2023-09-26 05:28:37,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14786560. 
Throughput: 0: 773.0, 1: 773.4. Samples: 3695491. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:28:37,838][06561] Avg episode reward: [(0, '331.090'), (1, '0.000')] +[2023-09-26 05:28:37,851][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000028880_7393280.pth... +[2023-09-26 05:28:37,851][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000028868_7393280.pth... +[2023-09-26 05:28:37,882][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000025968_6647808.pth +[2023-09-26 05:28:37,888][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000025956_6647808.pth +[2023-09-26 05:28:37,892][07269] Saving new best policy, reward=331.090! +[2023-09-26 05:28:42,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14819328. Throughput: 0: 771.5, 1: 769.7. Samples: 3704832. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:28:42,838][06561] Avg episode reward: [(0, '331.090'), (1, '0.000')] +[2023-09-26 05:28:43,398][07696] Updated weights for policy 0, policy_version 28948 (0.0017) +[2023-09-26 05:28:43,399][07697] Updated weights for policy 1, policy_version 28960 (0.0018) +[2023-09-26 05:28:47,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14852096. Throughput: 0: 768.0, 1: 768.5. Samples: 3709544. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:28:47,838][06561] Avg episode reward: [(0, '331.090'), (1, '0.000')] +[2023-09-26 05:28:52,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14884864. Throughput: 0: 775.1, 1: 773.7. Samples: 3719168. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:28:52,838][06561] Avg episode reward: [(0, '332.800'), (1, '0.000')] +[2023-09-26 05:28:52,847][07269] Saving new best policy, reward=332.800! 
+[2023-09-26 05:28:56,288][07696] Updated weights for policy 0, policy_version 29108 (0.0016) +[2023-09-26 05:28:56,289][07697] Updated weights for policy 1, policy_version 29120 (0.0017) +[2023-09-26 05:28:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14917632. Throughput: 0: 777.2, 1: 776.3. Samples: 3728651. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:28:57,838][06561] Avg episode reward: [(0, '336.870'), (1, '0.000')] +[2023-09-26 05:28:57,839][07269] Saving new best policy, reward=336.870! +[2023-09-26 05:29:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14942208. Throughput: 0: 778.4, 1: 776.2. Samples: 3733227. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:29:02,838][06561] Avg episode reward: [(0, '336.870'), (1, '0.000')] +[2023-09-26 05:29:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14974976. Throughput: 0: 783.0, 1: 783.3. Samples: 3742603. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:29:07,838][06561] Avg episode reward: [(0, '336.870'), (1, '0.000')] +[2023-09-26 05:29:09,454][07696] Updated weights for policy 0, policy_version 29268 (0.0016) +[2023-09-26 05:29:09,454][07697] Updated weights for policy 1, policy_version 29280 (0.0016) +[2023-09-26 05:29:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 15007744. Throughput: 0: 783.6, 1: 782.0. Samples: 3751936. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:29:12,838][06561] Avg episode reward: [(0, '337.450'), (1, '0.000')] +[2023-09-26 05:29:12,839][07269] Saving new best policy, reward=337.450! +[2023-09-26 05:29:17,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15040512. Throughput: 0: 777.2, 1: 778.5. Samples: 3756345. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 05:29:17,838][06561] Avg episode reward: [(0, '337.450'), (1, '0.000')] +[2023-09-26 05:29:22,753][07696] Updated weights for policy 0, policy_version 29428 (0.0018) +[2023-09-26 05:29:22,753][07697] Updated weights for policy 1, policy_version 29440 (0.0017) +[2023-09-26 05:29:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6234.2). Total num frames: 15073280. Throughput: 0: 782.5, 1: 781.4. Samples: 3765870. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:29:22,838][06561] Avg episode reward: [(0, '337.450'), (1, '0.000')] +[2023-09-26 05:29:27,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15097856. Throughput: 0: 776.7, 1: 778.1. Samples: 3774795. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:29:27,838][06561] Avg episode reward: [(0, '345.150'), (1, '0.000')] +[2023-09-26 05:29:27,838][07269] Saving new best policy, reward=345.150! +[2023-09-26 05:29:32,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 15130624. Throughput: 0: 778.7, 1: 778.1. Samples: 3779601. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:29:32,838][06561] Avg episode reward: [(0, '345.150'), (1, '0.000')] +[2023-09-26 05:29:36,278][07696] Updated weights for policy 0, policy_version 29588 (0.0015) +[2023-09-26 05:29:36,278][07697] Updated weights for policy 1, policy_version 29600 (0.0015) +[2023-09-26 05:29:37,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15163392. Throughput: 0: 771.4, 1: 773.7. Samples: 3788698. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:29:37,838][06561] Avg episode reward: [(0, '345.150'), (1, '0.000')] +[2023-09-26 05:29:42,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 15187968. Throughput: 0: 764.9, 1: 764.0. Samples: 3797452. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:29:42,838][06561] Avg episode reward: [(0, '345.150'), (1, '0.000')] +[2023-09-26 05:29:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15220736. Throughput: 0: 765.1, 1: 765.6. Samples: 3802106. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:29:47,838][06561] Avg episode reward: [(0, '344.400'), (1, '0.000')] +[2023-09-26 05:29:49,781][07696] Updated weights for policy 0, policy_version 29748 (0.0019) +[2023-09-26 05:29:49,781][07697] Updated weights for policy 1, policy_version 29760 (0.0019) +[2023-09-26 05:29:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15253504. Throughput: 0: 764.3, 1: 762.9. Samples: 3811328. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:29:52,838][06561] Avg episode reward: [(0, '344.400'), (1, '0.000')] +[2023-09-26 05:29:57,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15286272. Throughput: 0: 762.9, 1: 765.0. Samples: 3820692. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:29:57,838][06561] Avg episode reward: [(0, '344.400'), (1, '0.000')] +[2023-09-26 05:30:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 15310848. Throughput: 0: 765.5, 1: 768.0. Samples: 3825353. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 05:30:02,838][06561] Avg episode reward: [(0, '348.340'), (1, '0.000')] +[2023-09-26 05:30:02,838][07269] Saving new best policy, reward=348.340! +[2023-09-26 05:30:03,295][07697] Updated weights for policy 1, policy_version 29920 (0.0016) +[2023-09-26 05:30:03,296][07696] Updated weights for policy 0, policy_version 29908 (0.0016) +[2023-09-26 05:30:07,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 15343616. Throughput: 0: 756.0, 1: 756.6. Samples: 3833938. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:07,837][06561] Avg episode reward: [(0, '348.340'), (1, '0.000')] +[2023-09-26 05:30:12,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 15376384. Throughput: 0: 767.9, 1: 767.4. Samples: 3843882. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:12,838][06561] Avg episode reward: [(0, '348.340'), (1, '0.000')] +[2023-09-26 05:30:16,114][07696] Updated weights for policy 0, policy_version 30068 (0.0017) +[2023-09-26 05:30:16,114][07697] Updated weights for policy 1, policy_version 30080 (0.0018) +[2023-09-26 05:30:17,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15409152. Throughput: 0: 764.0, 1: 764.6. Samples: 3848387. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:17,838][06561] Avg episode reward: [(0, '348.340'), (1, '0.000')] +[2023-09-26 05:30:22,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15441920. Throughput: 0: 773.0, 1: 772.9. Samples: 3858265. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:22,837][06561] Avg episode reward: [(0, '348.340'), (1, '0.000')] +[2023-09-26 05:30:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15474688. Throughput: 0: 780.5, 1: 780.1. Samples: 3867680. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:27,838][06561] Avg episode reward: [(0, '348.340'), (1, '0.000')] +[2023-09-26 05:30:29,016][07696] Updated weights for policy 0, policy_version 30228 (0.0017) +[2023-09-26 05:30:29,016][07697] Updated weights for policy 1, policy_version 30240 (0.0018) +[2023-09-26 05:30:32,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 15499264. Throughput: 0: 782.2, 1: 780.6. Samples: 3872435. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:32,838][06561] Avg episode reward: [(0, '348.340'), (1, '0.000')] +[2023-09-26 05:30:37,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15532032. Throughput: 0: 781.0, 1: 782.7. Samples: 3881694. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:37,838][06561] Avg episode reward: [(0, '356.170'), (1, '0.000')] +[2023-09-26 05:30:37,847][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000030324_7766016.pth... +[2023-09-26 05:30:37,848][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000030336_7766016.pth... +[2023-09-26 05:30:37,877][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000027440_7024640.pth +[2023-09-26 05:30:37,884][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000027428_7024640.pth +[2023-09-26 05:30:37,889][07269] Saving new best policy, reward=356.170! +[2023-09-26 05:30:42,203][07697] Updated weights for policy 1, policy_version 30400 (0.0016) +[2023-09-26 05:30:42,204][07696] Updated weights for policy 0, policy_version 30388 (0.0016) +[2023-09-26 05:30:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15564800. Throughput: 0: 783.5, 1: 782.4. Samples: 3891157. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:42,838][06561] Avg episode reward: [(0, '356.170'), (1, '0.000')] +[2023-09-26 05:30:47,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15597568. Throughput: 0: 780.9, 1: 778.8. Samples: 3895537. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:47,838][06561] Avg episode reward: [(0, '356.170'), (1, '0.000')] +[2023-09-26 05:30:52,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6212.2, 300 sec: 6206.5). Total num frames: 15626240. Throughput: 0: 789.3, 1: 788.9. Samples: 3904956. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:52,838][06561] Avg episode reward: [(0, '356.170'), (1, '0.000')] +[2023-09-26 05:30:55,413][07696] Updated weights for policy 0, policy_version 30548 (0.0018) +[2023-09-26 05:30:55,413][07697] Updated weights for policy 1, policy_version 30560 (0.0019) +[2023-09-26 05:30:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 15654912. Throughput: 0: 782.1, 1: 783.0. Samples: 3914312. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:30:57,838][06561] Avg episode reward: [(0, '356.170'), (1, '0.000')] +[2023-09-26 05:31:02,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 15687680. Throughput: 0: 789.2, 1: 789.0. Samples: 3919407. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:31:02,838][06561] Avg episode reward: [(0, '356.170'), (1, '0.000')] +[2023-09-26 05:31:07,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15720448. Throughput: 0: 785.6, 1: 785.0. Samples: 3928945. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:31:07,838][06561] Avg episode reward: [(0, '356.170'), (1, '0.000')] +[2023-09-26 05:31:08,147][07696] Updated weights for policy 0, policy_version 30708 (0.0016) +[2023-09-26 05:31:08,147][07697] Updated weights for policy 1, policy_version 30720 (0.0017) +[2023-09-26 05:31:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15753216. Throughput: 0: 785.2, 1: 784.2. Samples: 3938304. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:31:12,838][06561] Avg episode reward: [(0, '362.940'), (1, '0.000')] +[2023-09-26 05:31:12,839][07269] Saving new best policy, reward=362.940! +[2023-09-26 05:31:17,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15785984. Throughput: 0: 781.1, 1: 782.0. Samples: 3942777. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:31:17,838][06561] Avg episode reward: [(0, '362.940'), (1, '0.000')] +[2023-09-26 05:31:21,283][07696] Updated weights for policy 0, policy_version 30868 (0.0017) +[2023-09-26 05:31:21,284][07697] Updated weights for policy 1, policy_version 30880 (0.0015) +[2023-09-26 05:31:22,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15818752. Throughput: 0: 787.1, 1: 786.4. Samples: 3952501. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:31:22,838][06561] Avg episode reward: [(0, '362.940'), (1, '0.000')] +[2023-09-26 05:31:27,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 15843328. Throughput: 0: 779.7, 1: 780.8. Samples: 3961378. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:31:27,837][06561] Avg episode reward: [(0, '362.940'), (1, '0.000')] +[2023-09-26 05:31:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6206.5). Total num frames: 15876096. Throughput: 0: 782.8, 1: 782.0. Samples: 3965956. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:31:32,838][06561] Avg episode reward: [(0, '362.940'), (1, '0.000')] +[2023-09-26 05:31:34,783][07696] Updated weights for policy 0, policy_version 31028 (0.0017) +[2023-09-26 05:31:34,783][07697] Updated weights for policy 1, policy_version 31040 (0.0014) +[2023-09-26 05:31:37,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15908864. Throughput: 0: 780.7, 1: 779.6. Samples: 3975169. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:31:37,838][06561] Avg episode reward: [(0, '362.940'), (1, '0.000')] +[2023-09-26 05:31:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 15941632. Throughput: 0: 778.4, 1: 778.6. Samples: 3984380. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:31:42,837][06561] Avg episode reward: [(0, '362.940'), (1, '0.000')] +[2023-09-26 05:31:47,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 15966208. Throughput: 0: 773.1, 1: 774.7. Samples: 3989059. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:31:47,838][06561] Avg episode reward: [(0, '363.680'), (1, '0.000')] +[2023-09-26 05:31:47,995][07269] Saving new best policy, reward=363.680! +[2023-09-26 05:31:48,056][07696] Updated weights for policy 0, policy_version 31188 (0.0014) +[2023-09-26 05:31:48,057][07697] Updated weights for policy 1, policy_version 31200 (0.0017) +[2023-09-26 05:31:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 15998976. Throughput: 0: 769.8, 1: 769.5. Samples: 3998213. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:31:52,838][06561] Avg episode reward: [(0, '363.680'), (1, '0.000')] +[2023-09-26 05:31:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 16031744. Throughput: 0: 772.8, 1: 773.7. Samples: 4007898. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:31:57,838][06561] Avg episode reward: [(0, '363.680'), (1, '0.000')] +[2023-09-26 05:32:01,198][07696] Updated weights for policy 0, policy_version 31348 (0.0017) +[2023-09-26 05:32:01,198][07697] Updated weights for policy 1, policy_version 31360 (0.0016) +[2023-09-26 05:32:02,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 16064512. Throughput: 0: 770.7, 1: 770.7. Samples: 4012141. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:32:02,838][06561] Avg episode reward: [(0, '372.800'), (1, '0.000')] +[2023-09-26 05:32:02,839][07269] Saving new best policy, reward=372.800! +[2023-09-26 05:32:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16089088. 
Throughput: 0: 765.7, 1: 766.3. Samples: 4021442. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:32:07,838][06561] Avg episode reward: [(0, '372.800'), (1, '0.000')] +[2023-09-26 05:32:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16121856. Throughput: 0: 768.7, 1: 767.7. Samples: 4030518. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:32:12,838][06561] Avg episode reward: [(0, '372.800'), (1, '0.000')] +[2023-09-26 05:32:14,469][07697] Updated weights for policy 1, policy_version 31520 (0.0017) +[2023-09-26 05:32:14,469][07696] Updated weights for policy 0, policy_version 31508 (0.0018) +[2023-09-26 05:32:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 16154624. Throughput: 0: 772.4, 1: 773.2. Samples: 4035509. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:32:17,838][06561] Avg episode reward: [(0, '372.800'), (1, '0.000')] +[2023-09-26 05:32:22,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 16187392. Throughput: 0: 773.8, 1: 775.3. Samples: 4044880. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:32:22,838][06561] Avg episode reward: [(0, '375.220'), (1, '0.000')] +[2023-09-26 05:32:22,849][07269] Saving new best policy, reward=375.220! +[2023-09-26 05:32:27,629][07696] Updated weights for policy 0, policy_version 31668 (0.0016) +[2023-09-26 05:32:27,629][07697] Updated weights for policy 1, policy_version 31680 (0.0016) +[2023-09-26 05:32:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 16220160. Throughput: 0: 776.1, 1: 775.4. Samples: 4054200. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 05:32:27,838][06561] Avg episode reward: [(0, '375.220'), (1, '0.000')] +[2023-09-26 05:32:32,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 16252928. 
Throughput: 0: 778.3, 1: 777.1. Samples: 4059049. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:32:32,837][06561] Avg episode reward: [(0, '375.220'), (1, '0.000')] +[2023-09-26 05:32:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16277504. Throughput: 0: 776.5, 1: 778.4. Samples: 4068184. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:32:37,838][06561] Avg episode reward: [(0, '376.570'), (1, '0.060')] +[2023-09-26 05:32:37,847][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000031780_8138752.pth... +[2023-09-26 05:32:37,847][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000031792_8138752.pth... +[2023-09-26 05:32:37,882][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000028880_7393280.pth +[2023-09-26 05:32:37,882][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000028868_7393280.pth +[2023-09-26 05:32:37,885][07486] Saving new best policy, reward=0.060! +[2023-09-26 05:32:37,885][07269] Saving new best policy, reward=376.570! +[2023-09-26 05:32:40,963][07696] Updated weights for policy 0, policy_version 31828 (0.0017) +[2023-09-26 05:32:40,963][07697] Updated weights for policy 1, policy_version 31840 (0.0017) +[2023-09-26 05:32:42,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 16310272. Throughput: 0: 772.5, 1: 773.6. Samples: 4077473. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:32:42,838][06561] Avg episode reward: [(0, '376.570'), (1, '0.060')] +[2023-09-26 05:32:47,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 16343040. Throughput: 0: 773.2, 1: 772.2. Samples: 4081682. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:32:47,838][06561] Avg episode reward: [(0, '376.570'), (1, '0.060')] +[2023-09-26 05:32:52,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6206.5). 
Total num frames: 16371712. Throughput: 0: 774.0, 1: 773.9. Samples: 4091098. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:32:52,838][06561] Avg episode reward: [(0, '376.570'), (1, '1.000')] +[2023-09-26 05:32:52,923][07486] Saving new best policy, reward=1.000! +[2023-09-26 05:32:54,323][07696] Updated weights for policy 0, policy_version 31988 (0.0017) +[2023-09-26 05:32:54,323][07697] Updated weights for policy 1, policy_version 32000 (0.0018) +[2023-09-26 05:32:57,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16400384. Throughput: 0: 773.7, 1: 772.9. Samples: 4100113. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:32:57,838][06561] Avg episode reward: [(0, '377.400'), (1, '1.000')] +[2023-09-26 05:32:57,839][07269] Saving new best policy, reward=377.400! +[2023-09-26 05:33:02,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16433152. Throughput: 0: 770.7, 1: 771.2. Samples: 4104894. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:33:02,838][06561] Avg episode reward: [(0, '377.400'), (1, '1.000')] +[2023-09-26 05:33:07,564][07697] Updated weights for policy 1, policy_version 32160 (0.0018) +[2023-09-26 05:33:07,565][07696] Updated weights for policy 0, policy_version 32148 (0.0018) +[2023-09-26 05:33:07,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6206.5). Total num frames: 16465920. Throughput: 0: 773.5, 1: 772.1. Samples: 4114432. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:33:07,837][06561] Avg episode reward: [(0, '377.400'), (1, '1.000')] +[2023-09-26 05:33:12,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16490496. Throughput: 0: 767.8, 1: 767.8. Samples: 4123303. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 05:33:12,838][06561] Avg episode reward: [(0, '377.740'), (1, '1.440')] +[2023-09-26 05:33:12,871][07269] Saving new best policy, reward=377.740! +[2023-09-26 05:33:12,899][07486] Saving new best policy, reward=1.440! +[2023-09-26 05:33:17,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16523264. Throughput: 0: 768.8, 1: 767.7. Samples: 4128190. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:33:17,837][06561] Avg episode reward: [(0, '377.740'), (1, '1.440')] +[2023-09-26 05:33:20,634][07697] Updated weights for policy 1, policy_version 32320 (0.0017) +[2023-09-26 05:33:20,634][07696] Updated weights for policy 0, policy_version 32308 (0.0018) +[2023-09-26 05:33:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16556032. Throughput: 0: 772.3, 1: 771.0. Samples: 4137630. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:33:22,838][06561] Avg episode reward: [(0, '377.740'), (1, '1.440')] +[2023-09-26 05:33:27,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6206.5). Total num frames: 16588800. Throughput: 0: 775.4, 1: 773.7. Samples: 4147182. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:33:27,838][06561] Avg episode reward: [(0, '380.290'), (1, '1.960')] +[2023-09-26 05:33:27,839][07269] Saving new best policy, reward=380.290! +[2023-09-26 05:33:27,839][07486] Saving new best policy, reward=1.960! +[2023-09-26 05:33:32,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 16621568. Throughput: 0: 776.1, 1: 777.2. Samples: 4151581. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:33:32,837][06561] Avg episode reward: [(0, '380.290'), (1, '1.960')]
+[2023-09-26 05:33:33,722][07696] Updated weights for policy 0, policy_version 32468 (0.0019)
+[2023-09-26 05:33:33,722][07697] Updated weights for policy 1, policy_version 32480 (0.0017)
+[2023-09-26 05:33:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 16654336. Throughput: 0: 777.0, 1: 778.2. Samples: 4161083. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 05:33:37,838][06561] Avg episode reward: [(0, '380.290'), (1, '1.960')]
+[2023-09-26 05:33:42,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16678912. Throughput: 0: 778.2, 1: 779.7. Samples: 4170222. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 05:33:42,838][06561] Avg episode reward: [(0, '380.290'), (1, '1.960')]
+[2023-09-26 05:33:47,021][07696] Updated weights for policy 0, policy_version 32628 (0.0018)
+[2023-09-26 05:33:47,021][07697] Updated weights for policy 1, policy_version 32640 (0.0018)
+[2023-09-26 05:33:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16711680. Throughput: 0: 777.6, 1: 777.6. Samples: 4174880. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 05:33:47,838][06561] Avg episode reward: [(0, '380.600'), (1, '2.730')]
+[2023-09-26 05:33:47,839][07486] Saving new best policy, reward=2.730!
+[2023-09-26 05:33:47,839][07269] Saving new best policy, reward=380.600!
+[2023-09-26 05:33:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6212.3, 300 sec: 6192.6). Total num frames: 16744448. Throughput: 0: 773.9, 1: 775.3. Samples: 4184145. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 05:33:52,838][06561] Avg episode reward: [(0, '380.600'), (1, '2.730')]
+[2023-09-26 05:33:57,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 16777216.
Throughput: 0: 782.8, 1: 783.3. Samples: 4193775. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:33:57,838][06561] Avg episode reward: [(0, '380.600'), (1, '2.730')]
+[2023-09-26 05:34:00,222][07696] Updated weights for policy 0, policy_version 32788 (0.0017)
+[2023-09-26 05:34:00,222][07697] Updated weights for policy 1, policy_version 32800 (0.0017)
+[2023-09-26 05:34:02,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 16809984. Throughput: 0: 779.2, 1: 779.7. Samples: 4198337. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:02,838][06561] Avg episode reward: [(0, '374.760'), (1, '3.570')]
+[2023-09-26 05:34:02,839][07486] Saving new best policy, reward=3.570!
+[2023-09-26 05:34:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16834560. Throughput: 0: 777.2, 1: 777.8. Samples: 4207607. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:07,838][06561] Avg episode reward: [(0, '374.760'), (1, '3.570')]
+[2023-09-26 05:34:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 16867328. Throughput: 0: 774.1, 1: 773.7. Samples: 4216832. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:12,838][06561] Avg episode reward: [(0, '374.760'), (1, '3.570')]
+[2023-09-26 05:34:13,451][07697] Updated weights for policy 1, policy_version 32960 (0.0016)
+[2023-09-26 05:34:13,451][07696] Updated weights for policy 0, policy_version 32948 (0.0017)
+[2023-09-26 05:34:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 16900096. Throughput: 0: 773.7, 1: 773.5. Samples: 4221207. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:17,838][06561] Avg episode reward: [(0, '374.760'), (1, '3.720')]
+[2023-09-26 05:34:17,839][07486] Saving new best policy, reward=3.720!
+[2023-09-26 05:34:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 16932864. Throughput: 0: 775.0, 1: 774.3. Samples: 4230800. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:22,838][06561] Avg episode reward: [(0, '372.870'), (1, '4.450')]
+[2023-09-26 05:34:22,849][07486] Saving new best policy, reward=4.450!
+[2023-09-26 05:34:26,767][07697] Updated weights for policy 1, policy_version 33120 (0.0015)
+[2023-09-26 05:34:26,768][07696] Updated weights for policy 0, policy_version 33108 (0.0016)
+[2023-09-26 05:34:27,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16957440. Throughput: 0: 772.6, 1: 772.5. Samples: 4239750. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:27,838][06561] Avg episode reward: [(0, '372.870'), (1, '4.450')]
+[2023-09-26 05:34:32,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 16990208. Throughput: 0: 775.3, 1: 775.1. Samples: 4244650. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:32,838][06561] Avg episode reward: [(0, '372.870'), (1, '4.450')]
+[2023-09-26 05:34:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 17022976. Throughput: 0: 774.8, 1: 775.1. Samples: 4253890. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:37,838][06561] Avg episode reward: [(0, '374.780'), (1, '5.160')]
+[2023-09-26 05:34:37,847][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000033236_8511488.pth...
+[2023-09-26 05:34:37,847][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000033248_8511488.pth...
+[2023-09-26 05:34:37,883][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000030324_7766016.pth
+[2023-09-26 05:34:37,884][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000030336_7766016.pth
+[2023-09-26 05:34:37,888][07486] Saving new best policy, reward=5.160!
+[2023-09-26 05:34:39,730][07696] Updated weights for policy 0, policy_version 33268 (0.0015)
+[2023-09-26 05:34:39,731][07697] Updated weights for policy 1, policy_version 33280 (0.0018)
+[2023-09-26 05:34:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 17055744. Throughput: 0: 778.6, 1: 776.3. Samples: 4263743. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:42,838][06561] Avg episode reward: [(0, '374.780'), (1, '5.160')]
+[2023-09-26 05:34:47,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 17088512. Throughput: 0: 775.1, 1: 773.8. Samples: 4268037. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:47,837][06561] Avg episode reward: [(0, '374.780'), (1, '5.160')]
+[2023-09-26 05:34:52,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17113088. Throughput: 0: 774.2, 1: 773.6. Samples: 4277257. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:52,838][06561] Avg episode reward: [(0, '374.780'), (1, '5.720')]
+[2023-09-26 05:34:52,850][07486] Saving new best policy, reward=5.720!
+[2023-09-26 05:34:53,183][07696] Updated weights for policy 0, policy_version 33428 (0.0016)
+[2023-09-26 05:34:53,183][07697] Updated weights for policy 1, policy_version 33440 (0.0018)
+[2023-09-26 05:34:57,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 17145856. Throughput: 0: 773.7, 1: 773.7. Samples: 4286465.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:34:57,838][06561] Avg episode reward: [(0, '376.800'), (1, '5.720')]
+[2023-09-26 05:35:02,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 17178624. Throughput: 0: 775.7, 1: 775.8. Samples: 4291026. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:35:02,838][06561] Avg episode reward: [(0, '376.800'), (1, '5.720')]
+[2023-09-26 05:35:06,306][07697] Updated weights for policy 1, policy_version 33600 (0.0015)
+[2023-09-26 05:35:06,306][07696] Updated weights for policy 0, policy_version 33588 (0.0018)
+[2023-09-26 05:35:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 17211392. Throughput: 0: 775.9, 1: 775.2. Samples: 4300599. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:35:07,838][06561] Avg episode reward: [(0, '376.800'), (1, '5.720')]
+[2023-09-26 05:35:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 17244160. Throughput: 0: 778.8, 1: 778.6. Samples: 4309834. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:35:12,838][06561] Avg episode reward: [(0, '377.190'), (1, '6.370')]
+[2023-09-26 05:35:12,839][07486] Saving new best policy, reward=6.370!
+[2023-09-26 05:35:17,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17268736. Throughput: 0: 777.9, 1: 777.7. Samples: 4314654. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:35:17,838][06561] Avg episode reward: [(0, '377.190'), (1, '6.370')]
+[2023-09-26 05:35:19,344][07696] Updated weights for policy 0, policy_version 33748 (0.0017)
+[2023-09-26 05:35:19,344][07697] Updated weights for policy 1, policy_version 33760 (0.0018)
+[2023-09-26 05:35:22,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17301504. Throughput: 0: 776.9, 1: 776.5. Samples: 4323796.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:35:22,838][06561] Avg episode reward: [(0, '377.190'), (1, '6.370')]
+[2023-09-26 05:35:27,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 17334272. Throughput: 0: 775.0, 1: 776.0. Samples: 4333539. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 05:35:27,838][06561] Avg episode reward: [(0, '377.490'), (1, '6.850')]
+[2023-09-26 05:35:27,840][07486] Saving new best policy, reward=6.850!
+[2023-09-26 05:35:32,634][07697] Updated weights for policy 1, policy_version 33920 (0.0018)
+[2023-09-26 05:35:32,634][07696] Updated weights for policy 0, policy_version 33908 (0.0016)
+[2023-09-26 05:35:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 17367040. Throughput: 0: 773.9, 1: 775.2. Samples: 4337748. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 05:35:32,838][06561] Avg episode reward: [(0, '377.490'), (1, '6.850')]
+[2023-09-26 05:35:37,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 17395712. Throughput: 0: 778.0, 1: 778.0. Samples: 4347277. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 05:35:37,838][06561] Avg episode reward: [(0, '377.490'), (1, '6.850')]
+[2023-09-26 05:35:42,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17424384. Throughput: 0: 773.7, 1: 774.0. Samples: 4356109. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 05:35:42,838][06561] Avg episode reward: [(0, '377.490'), (1, '6.850')]
+[2023-09-26 05:35:45,843][07696] Updated weights for policy 0, policy_version 34068 (0.0017)
+[2023-09-26 05:35:45,844][07697] Updated weights for policy 1, policy_version 34080 (0.0018)
+[2023-09-26 05:35:47,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6144.0, 300 sec: 6206.5). Total num frames: 17457152. Throughput: 0: 777.0, 1: 778.0. Samples: 4361000.
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 05:35:47,838][06561] Avg episode reward: [(0, '381.030'), (1, '7.540')]
+[2023-09-26 05:35:47,839][07269] Saving new best policy, reward=381.030!
+[2023-09-26 05:35:47,839][07486] Saving new best policy, reward=7.540!
+[2023-09-26 05:35:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 17489920. Throughput: 0: 776.7, 1: 775.2. Samples: 4370432. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 05:35:52,837][06561] Avg episode reward: [(0, '381.030'), (1, '7.540')]
+[2023-09-26 05:35:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 17522688. Throughput: 0: 777.4, 1: 777.8. Samples: 4379819. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 05:35:57,838][06561] Avg episode reward: [(0, '381.030'), (1, '7.540')]
+[2023-09-26 05:35:58,929][07696] Updated weights for policy 0, policy_version 34228 (0.0017)
+[2023-09-26 05:35:58,929][07697] Updated weights for policy 1, policy_version 34240 (0.0017)
+[2023-09-26 05:36:02,837][06561] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 17551360. Throughput: 0: 778.8, 1: 778.0. Samples: 4384709. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 05:36:02,838][06561] Avg episode reward: [(0, '381.990'), (1, '8.300')]
+[2023-09-26 05:36:02,839][07269] Saving new best policy, reward=381.990!
+[2023-09-26 05:36:02,841][07486] Saving new best policy, reward=8.300!
+[2023-09-26 05:36:07,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17580032. Throughput: 0: 776.2, 1: 776.6. Samples: 4393672.
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 05:36:07,838][06561] Avg episode reward: [(0, '381.990'), (1, '8.300')]
+[2023-09-26 05:36:12,268][07696] Updated weights for policy 0, policy_version 34388 (0.0016)
+[2023-09-26 05:36:12,268][07697] Updated weights for policy 1, policy_version 34400 (0.0018)
+[2023-09-26 05:36:12,837][06561] Fps is (10 sec: 6144.0, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17612800. Throughput: 0: 774.3, 1: 773.3. Samples: 4403180. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:36:12,838][06561] Avg episode reward: [(0, '381.990'), (1, '8.300')]
+[2023-09-26 05:36:17,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 17645568. Throughput: 0: 773.5, 1: 772.0. Samples: 4407296. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:36:17,838][06561] Avg episode reward: [(0, '381.990'), (1, '8.300')]
+[2023-09-26 05:36:22,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 17678336. Throughput: 0: 774.5, 1: 773.8. Samples: 4416949. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:36:22,839][06561] Avg episode reward: [(0, '389.570'), (1, '8.610')]
+[2023-09-26 05:36:22,849][07269] Saving new best policy, reward=389.570!
+[2023-09-26 05:36:22,849][07486] Saving new best policy, reward=8.610!
+[2023-09-26 05:36:25,428][07696] Updated weights for policy 0, policy_version 34548 (0.0017)
+[2023-09-26 05:36:25,428][07697] Updated weights for policy 1, policy_version 34560 (0.0017)
+[2023-09-26 05:36:27,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17702912. Throughput: 0: 776.7, 1: 777.7. Samples: 4426058. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:36:27,838][06561] Avg episode reward: [(0, '389.570'), (1, '8.610')]
+[2023-09-26 05:36:32,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17735680.
Throughput: 0: 776.3, 1: 775.4. Samples: 4430826. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 05:36:32,838][06561] Avg episode reward: [(0, '389.570'), (1, '8.610')]
+[2023-09-26 05:36:37,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6212.3, 300 sec: 6192.6). Total num frames: 17768448. Throughput: 0: 772.0, 1: 773.5. Samples: 4439977. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:36:37,838][06561] Avg episode reward: [(0, '389.590'), (1, '9.270')]
+[2023-09-26 05:36:37,845][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000034704_8884224.pth...
+[2023-09-26 05:36:37,845][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000034692_8884224.pth...
+[2023-09-26 05:36:37,880][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000031780_8138752.pth
+[2023-09-26 05:36:37,881][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000031792_8138752.pth
+[2023-09-26 05:36:37,884][07269] Saving new best policy, reward=389.590!
+[2023-09-26 05:36:37,886][07486] Saving new best policy, reward=9.270!
+[2023-09-26 05:36:38,943][07696] Updated weights for policy 0, policy_version 34708 (0.0017)
+[2023-09-26 05:36:38,943][07697] Updated weights for policy 1, policy_version 34720 (0.0018)
+[2023-09-26 05:36:42,837][06561] Fps is (10 sec: 6144.2, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 17797120. Throughput: 0: 768.8, 1: 768.3. Samples: 4448986. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:36:42,837][06561] Avg episode reward: [(0, '389.590'), (1, '9.270')]
+[2023-09-26 05:36:47,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17825792. Throughput: 0: 769.0, 1: 768.7. Samples: 4453906.
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:36:47,837][06561] Avg episode reward: [(0, '389.590'), (1, '9.270')]
+[2023-09-26 05:36:52,183][07697] Updated weights for policy 1, policy_version 34880 (0.0016)
+[2023-09-26 05:36:52,183][07696] Updated weights for policy 0, policy_version 34868 (0.0018)
+[2023-09-26 05:36:52,837][06561] Fps is (10 sec: 6143.8, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17858560. Throughput: 0: 767.2, 1: 767.2. Samples: 4462721. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 05:36:52,838][06561] Avg episode reward: [(0, '389.590'), (1, '9.680')]
+[2023-09-26 05:36:52,846][07486] Saving new best policy, reward=9.680!
+[2023-09-26 05:36:57,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17891328. Throughput: 0: 768.2, 1: 769.7. Samples: 4472385. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:36:57,837][06561] Avg episode reward: [(0, '395.380'), (1, '9.680')]
+[2023-09-26 05:36:57,838][07269] Saving new best policy, reward=395.380!
+[2023-09-26 05:37:02,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 17924096. Throughput: 0: 773.7, 1: 773.7. Samples: 4476928. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:02,837][06561] Avg episode reward: [(0, '395.380'), (1, '9.680')]
+[2023-09-26 05:37:05,425][07697] Updated weights for policy 1, policy_version 35040 (0.0018)
+[2023-09-26 05:37:05,425][07696] Updated weights for policy 0, policy_version 35028 (0.0017)
+[2023-09-26 05:37:07,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17948672. Throughput: 0: 768.5, 1: 769.4. Samples: 4486152. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:07,838][06561] Avg episode reward: [(0, '395.380'), (1, '9.680')]
+[2023-09-26 05:37:12,837][06561] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 17981440.
Throughput: 0: 770.4, 1: 769.4. Samples: 4495348. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:12,838][06561] Avg episode reward: [(0, '392.300'), (1, '9.970')]
+[2023-09-26 05:37:12,840][07486] Saving new best policy, reward=9.970!
+[2023-09-26 05:37:17,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18014208. Throughput: 0: 765.1, 1: 765.4. Samples: 4499696. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:17,838][06561] Avg episode reward: [(0, '392.300'), (1, '9.970')]
+[2023-09-26 05:37:18,823][07696] Updated weights for policy 0, policy_version 35188 (0.0018)
+[2023-09-26 05:37:18,823][07697] Updated weights for policy 1, policy_version 35200 (0.0017)
+[2023-09-26 05:37:22,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 6164.8). Total num frames: 18038784. Throughput: 0: 767.3, 1: 767.3. Samples: 4509035. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:22,839][06561] Avg episode reward: [(0, '392.300'), (1, '9.970')]
+[2023-09-26 05:37:27,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 18071552. Throughput: 0: 766.4, 1: 766.2. Samples: 4517956. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:27,838][06561] Avg episode reward: [(0, '392.300'), (1, '10.300')]
+[2023-09-26 05:37:27,839][07486] Saving new best policy, reward=10.300!
+[2023-09-26 05:37:32,385][07696] Updated weights for policy 0, policy_version 35348 (0.0017)
+[2023-09-26 05:37:32,385][07697] Updated weights for policy 1, policy_version 35360 (0.0019)
+[2023-09-26 05:37:32,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18104320. Throughput: 0: 761.9, 1: 762.6. Samples: 4522505.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:32,838][06561] Avg episode reward: [(0, '392.670'), (1, '10.300')]
+[2023-09-26 05:37:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18137088. Throughput: 0: 772.6, 1: 771.0. Samples: 4532184. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:37,838][06561] Avg episode reward: [(0, '392.670'), (1, '10.300')]
+[2023-09-26 05:37:42,837][06561] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 18165760. Throughput: 0: 764.0, 1: 764.5. Samples: 4541168. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:42,838][06561] Avg episode reward: [(0, '392.670'), (1, '10.300')]
+[2023-09-26 05:37:45,487][07696] Updated weights for policy 0, policy_version 35508 (0.0016)
+[2023-09-26 05:37:45,487][07697] Updated weights for policy 1, policy_version 35520 (0.0018)
+[2023-09-26 05:37:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 18194432. Throughput: 0: 767.5, 1: 768.3. Samples: 4546037. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:47,838][06561] Avg episode reward: [(0, '397.240'), (1, '10.490')]
+[2023-09-26 05:37:47,993][07486] Saving new best policy, reward=10.490!
+[2023-09-26 05:37:48,034][07269] Saving new best policy, reward=397.240!
+[2023-09-26 05:37:52,837][06561] Fps is (10 sec: 6143.8, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18227200. Throughput: 0: 769.7, 1: 769.0. Samples: 4555393. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:52,839][06561] Avg episode reward: [(0, '397.240'), (1, '10.490')]
+[2023-09-26 05:37:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18259968. Throughput: 0: 771.9, 1: 772.8. Samples: 4564860.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:37:57,838][06561] Avg episode reward: [(0, '397.240'), (1, '10.490')]
+[2023-09-26 05:37:58,566][07697] Updated weights for policy 1, policy_version 35680 (0.0018)
+[2023-09-26 05:37:58,567][07696] Updated weights for policy 0, policy_version 35668 (0.0017)
+[2023-09-26 05:38:02,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18292736. Throughput: 0: 772.4, 1: 772.5. Samples: 4569216. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:02,838][06561] Avg episode reward: [(0, '397.140'), (1, '10.760')]
+[2023-09-26 05:38:02,840][07486] Saving new best policy, reward=10.760!
+[2023-09-26 05:38:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18325504. Throughput: 0: 774.5, 1: 773.5. Samples: 4578694. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:07,838][06561] Avg episode reward: [(0, '397.140'), (1, '10.760')]
+[2023-09-26 05:38:11,618][07696] Updated weights for policy 0, policy_version 35828 (0.0016)
+[2023-09-26 05:38:11,618][07697] Updated weights for policy 1, policy_version 35840 (0.0017)
+[2023-09-26 05:38:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18350080. Throughput: 0: 780.5, 1: 779.8. Samples: 4588173. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:12,838][06561] Avg episode reward: [(0, '397.140'), (1, '10.760')]
+[2023-09-26 05:38:17,837][06561] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18382848. Throughput: 0: 782.5, 1: 781.6. Samples: 4592886. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:17,838][06561] Avg episode reward: [(0, '397.140'), (1, '10.760')]
+[2023-09-26 05:38:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 18415616. Throughput: 0: 774.4, 1: 775.5. Samples: 4601931.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:22,838][06561] Avg episode reward: [(0, '402.560'), (1, '11.270')]
+[2023-09-26 05:38:22,850][07269] Saving new best policy, reward=402.560!
+[2023-09-26 05:38:22,850][07486] Saving new best policy, reward=11.270!
+[2023-09-26 05:38:24,834][07696] Updated weights for policy 0, policy_version 35988 (0.0016)
+[2023-09-26 05:38:24,834][07697] Updated weights for policy 1, policy_version 36000 (0.0017)
+[2023-09-26 05:38:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 18448384. Throughput: 0: 783.6, 1: 783.8. Samples: 4611701. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:27,837][06561] Avg episode reward: [(0, '402.560'), (1, '11.270')]
+[2023-09-26 05:38:32,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 18481152. Throughput: 0: 780.0, 1: 780.5. Samples: 4616262. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:32,837][06561] Avg episode reward: [(0, '402.560'), (1, '11.270')]
+[2023-09-26 05:38:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18505728. Throughput: 0: 781.5, 1: 781.2. Samples: 4625714. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:37,838][06561] Avg episode reward: [(0, '402.980'), (1, '11.700')]
+[2023-09-26 05:38:37,845][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000036160_9256960.pth...
+[2023-09-26 05:38:37,845][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000036148_9256960.pth...
+[2023-09-26 05:38:37,847][07696] Updated weights for policy 0, policy_version 36148 (0.0017)
+[2023-09-26 05:38:37,847][07697] Updated weights for policy 1, policy_version 36160 (0.0017)
+[2023-09-26 05:38:37,874][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000033248_8511488.pth
+[2023-09-26 05:38:37,874][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000033236_8511488.pth
+[2023-09-26 05:38:37,877][07269] Saving new best policy, reward=402.980!
+[2023-09-26 05:38:37,878][07486] Saving new best policy, reward=11.700!
+[2023-09-26 05:38:42,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6212.3, 300 sec: 6192.6). Total num frames: 18538496. Throughput: 0: 779.6, 1: 779.8. Samples: 4635031. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:42,838][06561] Avg episode reward: [(0, '402.980'), (1, '11.700')]
+[2023-09-26 05:38:47,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 18571264. Throughput: 0: 786.1, 1: 785.8. Samples: 4639952. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:47,838][06561] Avg episode reward: [(0, '402.980'), (1, '11.700')]
+[2023-09-26 05:38:50,967][07697] Updated weights for policy 1, policy_version 36320 (0.0017)
+[2023-09-26 05:38:50,968][07696] Updated weights for policy 0, policy_version 36308 (0.0016)
+[2023-09-26 05:38:52,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 18604032. Throughput: 0: 781.0, 1: 781.3. Samples: 4648996. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:38:52,837][06561] Avg episode reward: [(0, '402.980'), (1, '12.760')]
+[2023-09-26 05:38:52,846][07486] Saving new best policy, reward=12.760!
+[2023-09-26 05:38:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 18636800. Throughput: 0: 781.0, 1: 780.6. Samples: 4658448.
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:38:57,838][06561] Avg episode reward: [(0, '404.520'), (1, '12.760')]
+[2023-09-26 05:38:57,839][07269] Saving new best policy, reward=404.520!
+[2023-09-26 05:39:02,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18661376. Throughput: 0: 776.7, 1: 779.0. Samples: 4662891. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:39:02,838][06561] Avg episode reward: [(0, '404.520'), (1, '12.760')]
+[2023-09-26 05:39:04,492][07697] Updated weights for policy 1, policy_version 36480 (0.0019)
+[2023-09-26 05:39:04,492][07696] Updated weights for policy 0, policy_version 36468 (0.0018)
+[2023-09-26 05:39:07,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18694144. Throughput: 0: 776.1, 1: 776.9. Samples: 4671815. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:39:07,838][06561] Avg episode reward: [(0, '404.520'), (1, '12.760')]
+[2023-09-26 05:39:12,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 18726912. Throughput: 0: 777.8, 1: 775.8. Samples: 4681615. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:39:12,838][06561] Avg episode reward: [(0, '405.300'), (1, '13.770')]
+[2023-09-26 05:39:12,839][07269] Saving new best policy, reward=405.300!
+[2023-09-26 05:39:12,839][07486] Saving new best policy, reward=13.770!
+[2023-09-26 05:39:17,576][07697] Updated weights for policy 1, policy_version 36640 (0.0015)
+[2023-09-26 05:39:17,576][07696] Updated weights for policy 0, policy_version 36628 (0.0017)
+[2023-09-26 05:39:17,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 18759680. Throughput: 0: 773.7, 1: 773.0. Samples: 4685860.
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:39:17,838][06561] Avg episode reward: [(0, '405.300'), (1, '13.770')]
+[2023-09-26 05:39:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18792448. Throughput: 0: 774.0, 1: 773.7. Samples: 4695363. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:39:22,838][06561] Avg episode reward: [(0, '405.300'), (1, '13.770')]
+[2023-09-26 05:39:27,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18817024. Throughput: 0: 774.7, 1: 774.6. Samples: 4704749. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:39:27,838][06561] Avg episode reward: [(0, '408.520'), (1, '14.760')]
+[2023-09-26 05:39:27,963][07269] Saving new best policy, reward=408.520!
+[2023-09-26 05:39:28,001][07486] Saving new best policy, reward=14.760!
+[2023-09-26 05:39:30,716][07697] Updated weights for policy 1, policy_version 36800 (0.0018)
+[2023-09-26 05:39:30,716][07696] Updated weights for policy 0, policy_version 36788 (0.0017)
+[2023-09-26 05:39:32,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18849792. Throughput: 0: 772.5, 1: 773.6. Samples: 4709526. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:39:32,838][06561] Avg episode reward: [(0, '408.520'), (1, '14.760')]
+[2023-09-26 05:39:37,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 18882560. Throughput: 0: 773.7, 1: 773.0. Samples: 4718596. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 05:39:37,838][06561] Avg episode reward: [(0, '408.520'), (1, '14.760')]
+[2023-09-26 05:39:42,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 18915328. Throughput: 0: 770.3, 1: 772.3. Samples: 4727865.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:39:42,838][06561] Avg episode reward: [(0, '408.520'), (1, '14.760')]
+[2023-09-26 05:39:44,126][07696] Updated weights for policy 0, policy_version 36948 (0.0016)
+[2023-09-26 05:39:44,126][07697] Updated weights for policy 1, policy_version 36960 (0.0016)
+[2023-09-26 05:39:47,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18939904. Throughput: 0: 772.6, 1: 771.8. Samples: 4732393. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:39:47,838][06561] Avg episode reward: [(0, '408.290'), (1, '15.100')]
+[2023-09-26 05:39:47,839][07486] Saving new best policy, reward=15.100!
+[2023-09-26 05:39:52,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 18972672. Throughput: 0: 774.8, 1: 774.0. Samples: 4741512. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:39:52,838][06561] Avg episode reward: [(0, '408.290'), (1, '15.100')]
+[2023-09-26 05:39:57,386][07696] Updated weights for policy 0, policy_version 37108 (0.0017)
+[2023-09-26 05:39:57,386][07697] Updated weights for policy 1, policy_version 37120 (0.0017)
+[2023-09-26 05:39:57,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19005440. Throughput: 0: 769.6, 1: 771.9. Samples: 4750982. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:39:57,838][06561] Avg episode reward: [(0, '408.290'), (1, '15.100')]
+[2023-09-26 05:40:02,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 19038208. Throughput: 0: 773.6, 1: 773.0. Samples: 4755457. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:40:02,837][06561] Avg episode reward: [(0, '410.530'), (1, '14.690')]
+[2023-09-26 05:40:02,838][07269] Saving new best policy, reward=410.530!
+[2023-09-26 05:40:07,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 19070976.
Throughput: 0: 774.2, 1: 774.6. Samples: 4765057. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:40:07,838][06561] Avg episode reward: [(0, '410.530'), (1, '14.690')] +[2023-09-26 05:40:10,476][07696] Updated weights for policy 0, policy_version 37268 (0.0018) +[2023-09-26 05:40:10,476][07697] Updated weights for policy 1, policy_version 37280 (0.0018) +[2023-09-26 05:40:12,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19095552. Throughput: 0: 773.4, 1: 773.4. Samples: 4774352. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:40:12,837][06561] Avg episode reward: [(0, '410.530'), (1, '14.690')] +[2023-09-26 05:40:17,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19128320. Throughput: 0: 774.5, 1: 773.8. Samples: 4779200. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:40:17,838][06561] Avg episode reward: [(0, '410.530'), (1, '14.690')] +[2023-09-26 05:40:22,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19161088. Throughput: 0: 773.7, 1: 773.6. Samples: 4788224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:40:22,838][06561] Avg episode reward: [(0, '410.940'), (1, '14.370')] +[2023-09-26 05:40:22,848][07269] Saving new best policy, reward=410.940! +[2023-09-26 05:40:23,749][07696] Updated weights for policy 0, policy_version 37428 (0.0017) +[2023-09-26 05:40:23,749][07697] Updated weights for policy 1, policy_version 37440 (0.0018) +[2023-09-26 05:40:27,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 19193856. Throughput: 0: 773.2, 1: 772.4. Samples: 4797415. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:40:27,838][06561] Avg episode reward: [(0, '410.940'), (1, '14.370')] +[2023-09-26 05:40:32,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 19218432. 
Throughput: 0: 775.4, 1: 775.8. Samples: 4802199. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 05:40:32,838][06561] Avg episode reward: [(0, '410.940'), (1, '14.370')] +[2023-09-26 05:40:37,101][07696] Updated weights for policy 0, policy_version 37588 (0.0017) +[2023-09-26 05:40:37,102][07697] Updated weights for policy 1, policy_version 37600 (0.0017) +[2023-09-26 05:40:37,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19251200. Throughput: 0: 774.6, 1: 774.4. Samples: 4811217. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 05:40:37,838][06561] Avg episode reward: [(0, '414.180'), (1, '14.080')] +[2023-09-26 05:40:37,849][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000037588_9625600.pth... +[2023-09-26 05:40:37,849][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000037600_9625600.pth... +[2023-09-26 05:40:37,885][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000034704_8884224.pth +[2023-09-26 05:40:37,896][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000034692_8884224.pth +[2023-09-26 05:40:37,901][07269] Saving new best policy, reward=414.180! +[2023-09-26 05:40:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19283968. Throughput: 0: 770.7, 1: 771.0. Samples: 4820360. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 05:40:42,838][06561] Avg episode reward: [(0, '414.180'), (1, '14.080')] +[2023-09-26 05:40:47,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 19308544. Throughput: 0: 770.5, 1: 773.4. Samples: 4824931. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 05:40:47,838][06561] Avg episode reward: [(0, '414.180'), (1, '14.080')] +[2023-09-26 05:40:50,848][07696] Updated weights for policy 0, policy_version 37748 (0.0017) +[2023-09-26 05:40:50,849][07697] Updated weights for policy 1, policy_version 37760 (0.0017) +[2023-09-26 05:40:52,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 19341312. Throughput: 0: 760.4, 1: 760.5. Samples: 4833501. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 05:40:52,838][06561] Avg episode reward: [(0, '414.180'), (1, '14.080')] +[2023-09-26 05:40:57,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 19374080. Throughput: 0: 766.0, 1: 766.0. Samples: 4843294. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:40:57,838][06561] Avg episode reward: [(0, '411.530'), (1, '13.500')] +[2023-09-26 05:41:02,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19406848. Throughput: 0: 761.3, 1: 760.6. Samples: 4847683. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:02,838][06561] Avg episode reward: [(0, '411.530'), (1, '13.500')] +[2023-09-26 05:41:03,893][07696] Updated weights for policy 0, policy_version 37908 (0.0016) +[2023-09-26 05:41:03,895][07697] Updated weights for policy 1, policy_version 37920 (0.0019) +[2023-09-26 05:41:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19439616. Throughput: 0: 767.8, 1: 768.7. Samples: 4857366. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:07,838][06561] Avg episode reward: [(0, '411.530'), (1, '13.500')] +[2023-09-26 05:41:12,837][06561] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 19464192. Throughput: 0: 766.0, 1: 766.1. Samples: 4866361. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:12,838][06561] Avg episode reward: [(0, '411.520'), (1, '13.050')] +[2023-09-26 05:41:17,050][07696] Updated weights for policy 0, policy_version 38068 (0.0018) +[2023-09-26 05:41:17,050][07697] Updated weights for policy 1, policy_version 38080 (0.0018) +[2023-09-26 05:41:17,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 19496960. Throughput: 0: 767.2, 1: 767.9. Samples: 4871279. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:17,838][06561] Avg episode reward: [(0, '411.520'), (1, '13.050')] +[2023-09-26 05:41:22,837][06561] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19529728. Throughput: 0: 769.2, 1: 767.9. Samples: 4880388. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:22,838][06561] Avg episode reward: [(0, '411.520'), (1, '13.050')] +[2023-09-26 05:41:27,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19562496. Throughput: 0: 771.8, 1: 771.2. Samples: 4889794. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:27,838][06561] Avg episode reward: [(0, '411.520'), (1, '13.060')] +[2023-09-26 05:41:30,301][07696] Updated weights for policy 0, policy_version 38228 (0.0016) +[2023-09-26 05:41:30,301][07697] Updated weights for policy 1, policy_version 38240 (0.0017) +[2023-09-26 05:41:32,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 19587072. Throughput: 0: 774.0, 1: 772.7. Samples: 4894531. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:32,838][06561] Avg episode reward: [(0, '411.060'), (1, '12.790')] +[2023-09-26 05:41:37,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6178.7). Total num frames: 19619840. Throughput: 0: 780.5, 1: 781.0. Samples: 4903768. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:37,838][06561] Avg episode reward: [(0, '411.060'), (1, '12.790')] +[2023-09-26 05:41:42,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19652608. Throughput: 0: 776.9, 1: 775.5. Samples: 4913152. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:41:42,838][06561] Avg episode reward: [(0, '411.060'), (1, '12.790')] +[2023-09-26 05:41:43,450][07696] Updated weights for policy 0, policy_version 38388 (0.0018) +[2023-09-26 05:41:43,450][07697] Updated weights for policy 1, policy_version 38400 (0.0019) +[2023-09-26 05:41:47,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 19685376. Throughput: 0: 775.9, 1: 776.1. Samples: 4917523. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:41:47,838][06561] Avg episode reward: [(0, '413.030'), (1, '12.940')] +[2023-09-26 05:41:52,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 19718144. Throughput: 0: 774.6, 1: 774.8. Samples: 4927089. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:41:52,838][06561] Avg episode reward: [(0, '413.030'), (1, '12.940')] +[2023-09-26 05:41:56,655][07697] Updated weights for policy 1, policy_version 38560 (0.0015) +[2023-09-26 05:41:56,656][07696] Updated weights for policy 0, policy_version 38548 (0.0017) +[2023-09-26 05:41:57,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6164.8). Total num frames: 19742720. Throughput: 0: 777.1, 1: 777.4. Samples: 4936315. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:41:57,838][06561] Avg episode reward: [(0, '413.030'), (1, '12.940')] +[2023-09-26 05:42:02,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19775488. Throughput: 0: 778.7, 1: 776.6. Samples: 4941266. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:42:02,838][06561] Avg episode reward: [(0, '415.350'), (1, '13.450')] +[2023-09-26 05:42:02,840][07269] Saving new best policy, reward=415.350! +[2023-09-26 05:42:07,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19808256. Throughput: 0: 776.0, 1: 777.8. Samples: 4950310. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 05:42:07,838][06561] Avg episode reward: [(0, '415.710'), (1, '13.450')] +[2023-09-26 05:42:07,850][07269] Saving new best policy, reward=415.710! +[2023-09-26 05:42:09,654][07696] Updated weights for policy 0, policy_version 38708 (0.0018) +[2023-09-26 05:42:09,654][07697] Updated weights for policy 1, policy_version 38720 (0.0016) +[2023-09-26 05:42:12,837][06561] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 19841024. Throughput: 0: 783.6, 1: 782.0. Samples: 4960248. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:42:12,838][06561] Avg episode reward: [(0, '415.710'), (1, '13.450')] +[2023-09-26 05:42:17,837][06561] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 19873792. Throughput: 0: 777.2, 1: 777.3. Samples: 4964482. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:42:17,837][06561] Avg episode reward: [(0, '415.710'), (1, '13.450')] +[2023-09-26 05:42:22,714][07696] Updated weights for policy 0, policy_version 38868 (0.0017) +[2023-09-26 05:42:22,714][07697] Updated weights for policy 1, policy_version 38880 (0.0015) +[2023-09-26 05:42:22,837][06561] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 19906560. Throughput: 0: 782.2, 1: 781.8. Samples: 4974149. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 05:42:22,838][06561] Avg episode reward: [(0, '415.850'), (1, '14.610')] +[2023-09-26 05:42:22,847][07269] Saving new best policy, reward=415.850! 
+[2023-09-26 05:42:27,837][06561] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 19931136. Throughput: 0: 778.2, 1: 779.8. Samples: 4983261. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:42:27,838][06561] Avg episode reward: [(0, '415.850'), (1, '14.610')]
+[2023-09-26 05:42:32,837][06561] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6192.6). Total num frames: 19963904. Throughput: 0: 783.7, 1: 783.3. Samples: 4988035. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:42:32,838][06561] Avg episode reward: [(0, '415.850'), (1, '14.610')]
+[2023-09-26 05:42:36,176][07696] Updated weights for policy 0, policy_version 39028 (0.0017)
+[2023-09-26 05:42:36,176][07697] Updated weights for policy 1, policy_version 39040 (0.0017)
+[2023-09-26 05:42:37,837][06561] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6206.5). Total num frames: 19996672. Throughput: 0: 778.6, 1: 777.6. Samples: 4997120. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 05:42:37,838][06561] Avg episode reward: [(0, '414.850'), (1, '16.300')]
+[2023-09-26 05:42:37,847][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000039044_9998336.pth...
+[2023-09-26 05:42:37,848][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000039056_9998336.pth...
+[2023-09-26 05:42:37,881][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000036160_9256960.pth
+[2023-09-26 05:42:37,884][07486] Saving new best policy, reward=16.300!
+[2023-09-26 05:42:37,885][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000036148_9256960.pth
+[2023-09-26 05:42:39,965][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-26 05:42:39,966][07757] Stopping RolloutWorker_w6...
+[2023-09-26 05:42:39,966][07755] Stopping RolloutWorker_w4...
+[2023-09-26 05:42:39,966][07751] Stopping RolloutWorker_w0...
+[2023-09-26 05:42:39,966][07753] Stopping RolloutWorker_w2...
+[2023-09-26 05:42:39,966][07758] Stopping RolloutWorker_w5...
+[2023-09-26 05:42:39,966][07751] Loop rollout_proc0_evt_loop terminating...
+[2023-09-26 05:42:39,966][07757] Loop rollout_proc6_evt_loop terminating...
+[2023-09-26 05:42:39,966][07756] Stopping RolloutWorker_w3...
+[2023-09-26 05:42:39,966][07759] Stopping RolloutWorker_w7...
+[2023-09-26 05:42:39,966][07752] Stopping RolloutWorker_w1...
+[2023-09-26 05:42:39,966][07755] Loop rollout_proc4_evt_loop terminating...
+[2023-09-26 05:42:39,966][06561] Component RolloutWorker_w6 stopped!
+[2023-09-26 05:42:39,966][07753] Loop rollout_proc2_evt_loop terminating...
+[2023-09-26 05:42:39,967][07758] Loop rollout_proc5_evt_loop terminating...
+[2023-09-26 05:42:39,966][07269] Stopping Batcher_0...
+[2023-09-26 05:42:39,967][07759] Loop rollout_proc7_evt_loop terminating...
+[2023-09-26 05:42:39,967][07756] Loop rollout_proc3_evt_loop terminating...
+[2023-09-26 05:42:39,967][07752] Loop rollout_proc1_evt_loop terminating...
+[2023-09-26 05:42:39,967][06561] Component RolloutWorker_w4 stopped!
+[2023-09-26 05:42:39,967][07269] Loop batcher_evt_loop terminating...
+[2023-09-26 05:42:39,968][06561] Component RolloutWorker_w2 stopped!
+[2023-09-26 05:42:39,968][06561] Component RolloutWorker_w5 stopped!
+[2023-09-26 05:42:39,969][06561] Component RolloutWorker_w3 stopped!
+[2023-09-26 05:42:39,969][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000039076_10006528.pth...
+[2023-09-26 05:42:39,969][06561] Component RolloutWorker_w7 stopped!
+[2023-09-26 05:42:39,970][06561] Component RolloutWorker_w0 stopped!
+[2023-09-26 05:42:39,970][06561] Component RolloutWorker_w1 stopped!
+[2023-09-26 05:42:39,971][06561] Component Batcher_0 stopped!
+[2023-09-26 05:42:39,977][06561] Component Batcher_1 stopped!
+[2023-09-26 05:42:39,986][07486] Stopping Batcher_1...
+[2023-09-26 05:42:39,996][07486] Loop batcher_evt_loop terminating...
+[2023-09-26 05:42:39,996][07486] Removing ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000037600_9625600.pth
+[2023-09-26 05:42:40,000][07486] Saving ./train_atari/atari_enduro/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-26 05:42:40,010][07269] Removing ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000037588_9625600.pth
+[2023-09-26 05:42:40,011][07696] Weights refcount: 2 0
+[2023-09-26 05:42:40,012][07696] Stopping InferenceWorker_p0-w0...
+[2023-09-26 05:42:40,013][07696] Loop inference_proc0-0_evt_loop terminating...
+[2023-09-26 05:42:40,013][06561] Component InferenceWorker_p0-w0 stopped!
+[2023-09-26 05:42:40,015][07269] Saving ./train_atari/atari_enduro/checkpoint_p0/checkpoint_000039076_10006528.pth...
+[2023-09-26 05:42:40,035][07697] Weights refcount: 2 0
+[2023-09-26 05:42:40,037][07697] Stopping InferenceWorker_p1-w0...
+[2023-09-26 05:42:40,037][07697] Loop inference_proc1-0_evt_loop terminating...
+[2023-09-26 05:42:40,037][06561] Component InferenceWorker_p1-w0 stopped!
+[2023-09-26 05:42:40,037][07486] Stopping LearnerWorker_p1...
+[2023-09-26 05:42:40,037][07486] Loop learner_proc1_evt_loop terminating...
+[2023-09-26 05:42:40,038][06561] Component LearnerWorker_p1 stopped!
+[2023-09-26 05:42:40,071][07269] Stopping LearnerWorker_p0...
+[2023-09-26 05:42:40,071][07269] Loop learner_proc0_evt_loop terminating...
+[2023-09-26 05:42:40,071][06561] Component LearnerWorker_p0 stopped!
+[2023-09-26 05:42:40,072][06561] Waiting for process learner_proc0 to stop...
+[2023-09-26 05:42:40,843][06561] Waiting for process learner_proc1 to stop...
+[2023-09-26 05:42:40,844][06561] Waiting for process inference_proc0-0 to join...
+[2023-09-26 05:42:40,844][06561] Waiting for process inference_proc1-0 to join...
+[2023-09-26 05:42:40,845][06561] Waiting for process rollout_proc0 to join...
+[2023-09-26 05:42:40,846][06561] Waiting for process rollout_proc1 to join...
+[2023-09-26 05:42:40,846][06561] Waiting for process rollout_proc2 to join...
+[2023-09-26 05:42:40,847][06561] Waiting for process rollout_proc3 to join...
+[2023-09-26 05:42:40,848][06561] Waiting for process rollout_proc4 to join...
+[2023-09-26 05:42:40,848][06561] Waiting for process rollout_proc5 to join...
+[2023-09-26 05:42:40,849][06561] Waiting for process rollout_proc6 to join...
+[2023-09-26 05:42:40,849][06561] Waiting for process rollout_proc7 to join...
+[2023-09-26 05:42:40,850][06561] Batcher 0 profile tree view:
+batching: 21.1520, releasing_batches: 1.8587
+[2023-09-26 05:42:40,851][06561] Batcher 1 profile tree view:
+batching: 21.0712, releasing_batches: 1.7636
+[2023-09-26 05:42:40,851][06561] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0052
+ wait_policy_total: 694.8925
+update_model: 37.3916
+ weight_update: 0.0017
+one_step: 0.0011
+ handle_policy_step: 2295.0612
+ deserialize: 68.9995, stack: 15.9002, obs_to_device_normalize: 558.1964, forward: 1106.6455, send_messages: 93.3056
+ prepare_outputs: 305.6835
+ to_cpu: 152.8315
+[2023-09-26 05:42:40,851][06561] InferenceWorker_p1-w0 profile tree view:
+wait_policy: 0.0052
+ wait_policy_total: 673.7081
+update_model: 37.2847
+ weight_update: 0.0017
+one_step: 0.0011
+ handle_policy_step: 2314.7805
+ deserialize: 68.5547, stack: 15.9517, obs_to_device_normalize: 559.1499, forward: 1120.9699, send_messages: 95.3196
+ prepare_outputs: 305.3434
+ to_cpu: 153.2679
+[2023-09-26 05:42:40,852][06561] Learner 0 profile tree view:
+misc: 0.0152, prepare_batch: 32.2360
+train: 458.8388
+ epoch_init: 0.1030, minibatch_init: 3.1381, losses_postprocess: 62.7119, kl_divergence: 5.4332, after_optimizer: 21.8715
+ calculate_losses: 44.0475
+ losses_init: 0.0984, forward_head: 13.3577, bptt_initial: 0.4425, bptt: 0.4513, tail: 10.3819, advantages_returns: 3.0514, losses: 12.6724
+ update: 317.4412
+ clip: 165.7342
+[2023-09-26 05:42:40,852][06561] Learner 1 profile tree view:
+misc: 0.0165, prepare_batch: 32.6580
+train: 454.7151
+ epoch_init: 0.1011, minibatch_init: 3.1159, losses_postprocess: 61.9693, kl_divergence: 5.4213, after_optimizer: 22.2724
+ calculate_losses: 44.7709
+ losses_init: 0.1043, forward_head: 14.2841, bptt_initial: 0.4463, bptt: 0.4417, tail: 10.2292, advantages_returns: 3.0697, losses: 12.6178
+ update: 312.9933
+ clip: 161.6013
+[2023-09-26 05:42:40,852][06561] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.4080, enqueue_policy_requests: 43.5685, env_step: 1282.7242, overhead: 29.0170, complete_rollouts: 1.1003
+save_policy_outputs: 54.5821
+ split_output_tensors: 18.4879
+[2023-09-26 05:42:40,852][06561] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.4138, enqueue_policy_requests: 43.3454, env_step: 1260.9724, overhead: 29.5796, complete_rollouts: 1.0831
+save_policy_outputs: 54.3208
+ split_output_tensors: 18.9361
+[2023-09-26 05:42:40,853][06561] Loop Runner_EvtLoop terminating...
+[2023-09-26 05:42:40,853][06561] Runner profile tree view:
+main_loop: 3238.8295
+[2023-09-26 05:42:40,854][06561] Collected {0: 10006528, 1: 10006528}, FPS: 6179.1