diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,2255 @@
+[2023-09-26 01:51:01,872][62705] Saving configuration to ./train_atari/atari_defender/config.json...
+[2023-09-26 01:51:02,190][62705] Rollout worker 0 uses device cpu
+[2023-09-26 01:51:02,190][62705] Rollout worker 1 uses device cpu
+[2023-09-26 01:51:02,191][62705] Rollout worker 2 uses device cpu
+[2023-09-26 01:51:02,192][62705] Rollout worker 3 uses device cpu
+[2023-09-26 01:51:02,192][62705] Rollout worker 4 uses device cpu
+[2023-09-26 01:51:02,193][62705] Rollout worker 5 uses device cpu
+[2023-09-26 01:51:02,193][62705] Rollout worker 6 uses device cpu
+[2023-09-26 01:51:02,194][62705] Rollout worker 7 uses device cpu
+[2023-09-26 01:51:02,194][62705] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1
+[2023-09-26 01:51:02,240][62705] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 01:51:02,241][62705] InferenceWorker_p0-w0: min num requests: 1
+[2023-09-26 01:51:02,244][62705] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-26 01:51:02,244][62705] InferenceWorker_p1-w0: min num requests: 1
+[2023-09-26 01:51:02,268][62705] Starting all processes...
+[2023-09-26 01:51:02,268][62705] Starting process learner_proc0 +[2023-09-26 01:51:03,841][62705] Starting process learner_proc1 +[2023-09-26 01:51:03,844][63291] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 01:51:03,844][63291] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2023-09-26 01:51:03,862][63291] Num visible devices: 1 +[2023-09-26 01:51:03,878][63291] Starting seed is not provided +[2023-09-26 01:51:03,878][63291] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 01:51:03,878][63291] Initializing actor-critic model on device cuda:0 +[2023-09-26 01:51:03,878][63291] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 01:51:03,879][63291] RunningMeanStd input shape: (1,) +[2023-09-26 01:51:03,891][63291] ConvEncoder: input_channels=4 +[2023-09-26 01:51:04,050][63291] Conv encoder output size: 512 +[2023-09-26 01:51:04,052][63291] Created Actor Critic model with architecture: +[2023-09-26 01:51:04,052][63291] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): MultiInputEncoder( + (encoders): ModuleDict( + (obs): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ReLU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ReLU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ReLU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ReLU) + ) + ) + ) + ) + 
) + (core): ModelCoreIdentity() + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=18, bias=True) + ) +) +[2023-09-26 01:51:04,629][63291] Using optimizer +[2023-09-26 01:51:04,629][63291] No checkpoints found +[2023-09-26 01:51:04,629][63291] Did not load from checkpoint, starting from scratch! +[2023-09-26 01:51:04,630][63291] Initialized policy 0 weights for model version 0 +[2023-09-26 01:51:04,631][63291] LearnerWorker_p0 finished initialization! +[2023-09-26 01:51:04,631][63291] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 01:51:05,472][62705] Starting all processes... +[2023-09-26 01:51:05,476][63410] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-26 01:51:05,477][63410] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 +[2023-09-26 01:51:05,480][62705] Starting process inference_proc0-0 +[2023-09-26 01:51:05,480][62705] Starting process inference_proc1-0 +[2023-09-26 01:51:05,480][62705] Starting process rollout_proc0 +[2023-09-26 01:51:05,480][62705] Starting process rollout_proc1 +[2023-09-26 01:51:05,495][63410] Num visible devices: 1 +[2023-09-26 01:51:05,481][62705] Starting process rollout_proc2 +[2023-09-26 01:51:05,481][62705] Starting process rollout_proc3 +[2023-09-26 01:51:05,482][62705] Starting process rollout_proc4 +[2023-09-26 01:51:05,520][63410] Starting seed is not provided +[2023-09-26 01:51:05,520][63410] Using GPUs [0] for process 1 (actually maps to GPUs [1]) +[2023-09-26 01:51:05,520][63410] Initializing actor-critic model on device cuda:0 +[2023-09-26 01:51:05,521][63410] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 01:51:05,521][63410] RunningMeanStd input shape: (1,) +[2023-09-26 01:51:05,484][62705] Starting process rollout_proc5 +[2023-09-26 
01:51:05,485][62705] Starting process rollout_proc6 +[2023-09-26 01:51:05,485][62705] Starting process rollout_proc7 +[2023-09-26 01:51:05,535][63410] ConvEncoder: input_channels=4 +[2023-09-26 01:51:05,892][63410] Conv encoder output size: 512 +[2023-09-26 01:51:05,894][63410] Created Actor Critic model with architecture: +[2023-09-26 01:51:05,895][63410] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): MultiInputEncoder( + (encoders): ModuleDict( + (obs): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ReLU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ReLU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ReLU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ReLU) + ) + ) + ) + ) + ) + (core): ModelCoreIdentity() + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=18, bias=True) + ) +) +[2023-09-26 01:51:06,492][63410] Using optimizer +[2023-09-26 01:51:06,493][63410] No checkpoints found +[2023-09-26 01:51:06,493][63410] Did not load from checkpoint, starting from scratch! +[2023-09-26 01:51:06,493][63410] Initialized policy 1 weights for model version 0 +[2023-09-26 01:51:06,495][63410] LearnerWorker_p1 finished initialization! 
+[2023-09-26 01:51:06,495][63410] Using GPUs [0] for process 1 (actually maps to GPUs [1]) +[2023-09-26 01:51:07,457][63637] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-26 01:51:07,457][63637] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 +[2023-09-26 01:51:07,457][63673] Worker 1 uses CPU cores [4, 5, 6, 7] +[2023-09-26 01:51:07,474][63675] Worker 3 uses CPU cores [12, 13, 14, 15] +[2023-09-26 01:51:07,475][63637] Num visible devices: 1 +[2023-09-26 01:51:07,475][63677] Worker 4 uses CPU cores [16, 17, 18, 19] +[2023-09-26 01:51:07,484][63638] Worker 0 uses CPU cores [0, 1, 2, 3] +[2023-09-26 01:51:07,521][63678] Worker 5 uses CPU cores [20, 21, 22, 23] +[2023-09-26 01:51:07,599][63676] Worker 2 uses CPU cores [8, 9, 10, 11] +[2023-09-26 01:51:07,615][63679] Worker 6 uses CPU cores [24, 25, 26, 27] +[2023-09-26 01:51:07,617][63636] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 01:51:07,617][63636] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2023-09-26 01:51:07,636][63636] Num visible devices: 1 +[2023-09-26 01:51:07,662][63680] Worker 7 uses CPU cores [28, 29, 30, 31] +[2023-09-26 01:51:08,029][62705] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-09-26 01:51:08,072][63637] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 01:51:08,073][63637] RunningMeanStd input shape: (1,) +[2023-09-26 01:51:08,084][63637] ConvEncoder: input_channels=4 +[2023-09-26 01:51:08,182][63637] Conv encoder output size: 512 +[2023-09-26 01:51:08,188][62705] Inference worker 1-0 is ready! 
+[2023-09-26 01:51:08,209][63636] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 01:51:08,210][63636] RunningMeanStd input shape: (1,) +[2023-09-26 01:51:08,221][63636] ConvEncoder: input_channels=4 +[2023-09-26 01:51:08,318][63636] Conv encoder output size: 512 +[2023-09-26 01:51:08,323][62705] Inference worker 0-0 is ready! +[2023-09-26 01:51:08,324][62705] All inference workers are ready! Signal rollout workers to start! +[2023-09-26 01:51:08,769][63680] Decorrelating experience for 0 frames... +[2023-09-26 01:51:08,771][63675] Decorrelating experience for 0 frames... +[2023-09-26 01:51:08,813][63673] Decorrelating experience for 0 frames... +[2023-09-26 01:51:08,814][63678] Decorrelating experience for 0 frames... +[2023-09-26 01:51:08,855][63677] Decorrelating experience for 0 frames... +[2023-09-26 01:51:08,869][63638] Decorrelating experience for 0 frames... +[2023-09-26 01:51:08,893][63679] Decorrelating experience for 0 frames... +[2023-09-26 01:51:08,912][63676] Decorrelating experience for 0 frames... +[2023-09-26 01:51:13,029][62705] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 8192. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:51:13,029][62705] Avg episode reward: [(1, '1.333')] +[2023-09-26 01:51:18,029][62705] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 32768. Throughput: 0: 409.6, 1: 406.9. Samples: 8165. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:51:18,029][62705] Avg episode reward: [(0, '1.500'), (1, '3.556')] +[2023-09-26 01:51:22,228][62705] Heartbeat connected on Batcher_0 +[2023-09-26 01:51:22,231][62705] Heartbeat connected on LearnerWorker_p0 +[2023-09-26 01:51:22,234][62705] Heartbeat connected on Batcher_1 +[2023-09-26 01:51:22,237][62705] Heartbeat connected on LearnerWorker_p1 +[2023-09-26 01:51:22,242][62705] Heartbeat connected on InferenceWorker_p0-w0 +[2023-09-26 01:51:22,246][62705] Heartbeat connected on InferenceWorker_p1-w0 +[2023-09-26 01:51:22,247][62705] Heartbeat connected on RolloutWorker_w0 +[2023-09-26 01:51:22,250][62705] Heartbeat connected on RolloutWorker_w1 +[2023-09-26 01:51:22,253][62705] Heartbeat connected on RolloutWorker_w2 +[2023-09-26 01:51:22,258][62705] Heartbeat connected on RolloutWorker_w3 +[2023-09-26 01:51:22,259][62705] Heartbeat connected on RolloutWorker_w4 +[2023-09-26 01:51:22,262][62705] Heartbeat connected on RolloutWorker_w5 +[2023-09-26 01:51:22,266][62705] Heartbeat connected on RolloutWorker_w6 +[2023-09-26 01:51:22,267][62705] Heartbeat connected on RolloutWorker_w7 +[2023-09-26 01:51:23,029][62705] Fps is (10 sec: 5734.3, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 65536. Throughput: 0: 431.7, 1: 422.5. Samples: 12812. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:51:23,030][62705] Avg episode reward: [(0, '3.500'), (1, '3.857')] +[2023-09-26 01:51:25,048][63636] Updated weights for policy 0, policy_version 160 (0.0018) +[2023-09-26 01:51:25,048][63637] Updated weights for policy 1, policy_version 160 (0.0016) +[2023-09-26 01:51:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 98304. Throughput: 0: 563.4, 1: 563.2. Samples: 22532. 
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:51:28,030][62705] Avg episode reward: [(0, '5.300'), (1, '3.611')] +[2023-09-26 01:51:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 5242.9, 300 sec: 5242.9). Total num frames: 131072. Throughput: 0: 642.4, 1: 635.9. Samples: 31957. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:51:33,030][62705] Avg episode reward: [(0, '5.471'), (1, '3.724')] +[2023-09-26 01:51:37,848][63636] Updated weights for policy 0, policy_version 320 (0.0019) +[2023-09-26 01:51:37,848][63637] Updated weights for policy 1, policy_version 320 (0.0019) +[2023-09-26 01:51:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 5461.3, 300 sec: 5461.3). Total num frames: 163840. Throughput: 0: 614.4, 1: 614.4. Samples: 36863. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:51:38,030][62705] Avg episode reward: [(0, '5.091'), (1, '4.405')] +[2023-09-26 01:51:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 5617.3, 300 sec: 5617.3). Total num frames: 196608. Throughput: 0: 671.7, 1: 668.5. Samples: 46907. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:51:43,030][62705] Avg episode reward: [(0, '5.400'), (1, '4.404')] +[2023-09-26 01:51:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 229376. Throughput: 0: 708.5, 1: 704.6. Samples: 56522. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:51:48,030][62705] Avg episode reward: [(0, '5.048'), (1, '4.365')] +[2023-09-26 01:51:48,031][63291] Saving new best policy, reward=5.048! +[2023-09-26 01:51:48,031][63410] Saving new best policy, reward=4.365! +[2023-09-26 01:51:50,436][63637] Updated weights for policy 1, policy_version 480 (0.0018) +[2023-09-26 01:51:50,436][63636] Updated weights for policy 0, policy_version 480 (0.0016) +[2023-09-26 01:51:53,029][62705] Fps is (10 sec: 6553.8, 60 sec: 5825.4, 300 sec: 5825.4). Total num frames: 262144. Throughput: 0: 682.7, 1: 682.6. Samples: 61435. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:51:53,030][62705] Avg episode reward: [(0, '5.019'), (1, '4.483')] +[2023-09-26 01:51:53,032][63410] Saving new best policy, reward=4.483! +[2023-09-26 01:51:58,029][62705] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 286720. Throughput: 0: 768.4, 1: 765.7. Samples: 71084. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:51:58,030][62705] Avg episode reward: [(0, '5.089'), (1, '4.547')] +[2023-09-26 01:51:58,049][63291] Saving new best policy, reward=5.089! +[2023-09-26 01:51:58,106][63410] Saving new best policy, reward=4.547! +[2023-09-26 01:52:02,952][63410] High loss value: l:33.6898 pl:-0.0048 vl:33.7195 exp_l:-0.0249 kl_l:0.0000 (recommended to adjust the --reward_scale parameter) +[2023-09-26 01:52:03,002][63410] High loss value: l:32.9575 pl:-0.0090 vl:32.9917 exp_l:-0.0251 kl_l:0.0000 (recommended to adjust the --reward_scale parameter) +[2023-09-26 01:52:03,029][62705] Fps is (10 sec: 5734.5, 60 sec: 5808.9, 300 sec: 5808.9). Total num frames: 319488. Throughput: 0: 805.1, 1: 803.8. Samples: 80562. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:52:03,029][62705] Avg episode reward: [(0, '4.940'), (1, '5.286')] +[2023-09-26 01:52:03,051][63410] High loss value: l:32.9888 pl:0.0228 vl:32.9902 exp_l:-0.0242 kl_l:0.0000 (recommended to adjust the --reward_scale parameter) +[2023-09-26 01:52:03,100][63410] High loss value: l:32.9478 pl:0.0221 vl:32.9499 exp_l:-0.0242 kl_l:0.0000 (recommended to adjust the --reward_scale parameter) +[2023-09-26 01:52:03,126][63410] Saving new best policy, reward=5.286! +[2023-09-26 01:52:03,301][63636] Updated weights for policy 0, policy_version 640 (0.0019) +[2023-09-26 01:52:03,301][63637] Updated weights for policy 1, policy_version 640 (0.0017) +[2023-09-26 01:52:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 5871.0, 300 sec: 5871.0). Total num frames: 352256. Throughput: 0: 804.8, 1: 805.4. 
Samples: 85272. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:52:08,029][62705] Avg episode reward: [(0, '5.179'), (1, '5.679')] +[2023-09-26 01:52:08,030][63410] Saving new best policy, reward=5.679! +[2023-09-26 01:52:08,030][63291] Saving new best policy, reward=5.179! +[2023-09-26 01:52:13,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5923.5). Total num frames: 385024. Throughput: 0: 802.2, 1: 799.3. Samples: 94599. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:52:13,029][62705] Avg episode reward: [(0, '4.977'), (1, '5.652')] +[2023-09-26 01:52:16,150][63637] Updated weights for policy 1, policy_version 800 (0.0014) +[2023-09-26 01:52:16,150][63636] Updated weights for policy 0, policy_version 800 (0.0016) +[2023-09-26 01:52:18,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 5968.5). Total num frames: 417792. Throughput: 0: 803.7, 1: 807.2. Samples: 104448. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 01:52:18,029][62705] Avg episode reward: [(0, '4.880'), (1, '5.773')] +[2023-09-26 01:52:18,030][63410] Saving new best policy, reward=5.773! +[2023-09-26 01:52:23,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6007.5). Total num frames: 450560. Throughput: 0: 806.2, 1: 803.2. Samples: 109284. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:52:23,029][62705] Avg episode reward: [(0, '4.950'), (1, '6.140')] +[2023-09-26 01:52:23,030][63410] Saving new best policy, reward=6.140! +[2023-09-26 01:52:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6041.6). Total num frames: 483328. Throughput: 0: 801.5, 1: 800.7. Samples: 119003. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:52:28,030][62705] Avg episode reward: [(0, '4.960'), (1, '6.350')] +[2023-09-26 01:52:28,037][63410] Saving new best policy, reward=6.350! 
+[2023-09-26 01:52:28,623][63637] Updated weights for policy 1, policy_version 960 (0.0018) +[2023-09-26 01:52:28,623][63636] Updated weights for policy 0, policy_version 960 (0.0016) +[2023-09-26 01:52:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6071.7). Total num frames: 516096. Throughput: 0: 803.8, 1: 806.8. Samples: 129003. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:52:33,029][62705] Avg episode reward: [(0, '4.810'), (1, '6.580')] +[2023-09-26 01:52:33,030][63410] Saving new best policy, reward=6.580! +[2023-09-26 01:52:38,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6098.5). Total num frames: 548864. Throughput: 0: 804.1, 1: 801.4. Samples: 133679. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:52:38,029][62705] Avg episode reward: [(0, '5.240'), (1, '6.710')] +[2023-09-26 01:52:38,030][63291] Saving new best policy, reward=5.240! +[2023-09-26 01:52:38,030][63410] Saving new best policy, reward=6.710! +[2023-09-26 01:52:41,305][63636] Updated weights for policy 0, policy_version 1120 (0.0016) +[2023-09-26 01:52:41,305][63637] Updated weights for policy 1, policy_version 1120 (0.0018) +[2023-09-26 01:52:43,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6122.4). Total num frames: 581632. Throughput: 0: 802.0, 1: 804.4. Samples: 143375. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:52:43,030][62705] Avg episode reward: [(0, '5.310'), (1, '7.210')] +[2023-09-26 01:52:43,039][63291] Saving new best policy, reward=5.310! +[2023-09-26 01:52:43,039][63410] Saving new best policy, reward=7.210! +[2023-09-26 01:52:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6144.0). Total num frames: 614400. Throughput: 0: 808.8, 1: 808.3. Samples: 153331. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:52:48,029][62705] Avg episode reward: [(0, '5.480'), (1, '7.430')] +[2023-09-26 01:52:48,030][63291] Saving new best policy, reward=5.480! 
+[2023-09-26 01:52:48,030][63410] Saving new best policy, reward=7.430! +[2023-09-26 01:52:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6163.5). Total num frames: 647168. Throughput: 0: 807.6, 1: 807.3. Samples: 157942. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 01:52:53,029][62705] Avg episode reward: [(0, '5.270'), (1, '7.930')] +[2023-09-26 01:52:53,030][63410] Saving new best policy, reward=7.930! +[2023-09-26 01:52:53,878][63637] Updated weights for policy 1, policy_version 1280 (0.0017) +[2023-09-26 01:52:53,878][63636] Updated weights for policy 0, policy_version 1280 (0.0017) +[2023-09-26 01:52:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6181.2). Total num frames: 679936. Throughput: 0: 813.4, 1: 816.4. Samples: 167936. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:52:58,029][62705] Avg episode reward: [(0, '5.650'), (1, '8.160')] +[2023-09-26 01:52:58,035][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000001328_339968.pth... +[2023-09-26 01:52:58,036][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000001328_339968.pth... +[2023-09-26 01:52:58,071][63291] Saving new best policy, reward=5.650! +[2023-09-26 01:52:58,072][63410] Saving new best policy, reward=8.160! +[2023-09-26 01:53:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6197.4). Total num frames: 712704. Throughput: 0: 814.0, 1: 810.8. Samples: 177567. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:53:03,030][62705] Avg episode reward: [(0, '5.360'), (1, '8.460')] +[2023-09-26 01:53:03,032][63410] Saving new best policy, reward=8.460! +[2023-09-26 01:53:06,621][63637] Updated weights for policy 1, policy_version 1440 (0.0016) +[2023-09-26 01:53:06,622][63636] Updated weights for policy 0, policy_version 1440 (0.0017) +[2023-09-26 01:53:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6212.3). Total num frames: 745472. 
Throughput: 0: 810.0, 1: 812.5. Samples: 182299. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:53:08,030][62705] Avg episode reward: [(0, '5.480'), (1, '8.780')] +[2023-09-26 01:53:08,031][63410] Saving new best policy, reward=8.780! +[2023-09-26 01:53:13,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6225.9). Total num frames: 778240. Throughput: 0: 810.6, 1: 812.8. Samples: 192056. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:53:13,029][62705] Avg episode reward: [(0, '5.650'), (1, '9.020')] +[2023-09-26 01:53:13,037][63410] Saving new best policy, reward=9.020! +[2023-09-26 01:53:18,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6175.5). Total num frames: 802816. Throughput: 0: 808.0, 1: 805.8. Samples: 201625. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:53:18,030][62705] Avg episode reward: [(0, '6.080'), (1, '8.580')] +[2023-09-26 01:53:18,038][63291] Saving new best policy, reward=6.080! +[2023-09-26 01:53:19,357][63637] Updated weights for policy 1, policy_version 1600 (0.0017) +[2023-09-26 01:53:19,357][63636] Updated weights for policy 0, policy_version 1600 (0.0015) +[2023-09-26 01:53:23,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6189.5). Total num frames: 835584. Throughput: 0: 809.7, 1: 810.0. Samples: 206568. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:53:23,030][62705] Avg episode reward: [(0, '6.140'), (1, '8.830')] +[2023-09-26 01:53:23,060][63291] Saving new best policy, reward=6.140! +[2023-09-26 01:53:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6202.5). Total num frames: 868352. Throughput: 0: 812.2, 1: 809.4. Samples: 216343. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:53:28,030][62705] Avg episode reward: [(0, '6.410'), (1, '9.000')] +[2023-09-26 01:53:28,176][63291] Saving new best policy, reward=6.410! 
+[2023-09-26 01:53:31,935][63637] Updated weights for policy 1, policy_version 1760 (0.0017) +[2023-09-26 01:53:31,935][63636] Updated weights for policy 0, policy_version 1760 (0.0016) +[2023-09-26 01:53:33,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6214.6). Total num frames: 901120. Throughput: 0: 806.9, 1: 805.8. Samples: 225900. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:53:33,030][62705] Avg episode reward: [(0, '6.350'), (1, '9.390')] +[2023-09-26 01:53:33,032][63410] Saving new best policy, reward=9.390! +[2023-09-26 01:53:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6225.9). Total num frames: 933888. Throughput: 0: 809.0, 1: 809.4. Samples: 230771. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:53:38,029][62705] Avg episode reward: [(0, '6.250'), (1, '9.470')] +[2023-09-26 01:53:38,030][63410] Saving new best policy, reward=9.470! +[2023-09-26 01:53:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6236.5). Total num frames: 966656. Throughput: 0: 808.6, 1: 805.8. Samples: 240586. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:53:43,030][62705] Avg episode reward: [(0, '6.460'), (1, '10.130')] +[2023-09-26 01:53:43,036][63291] Saving new best policy, reward=6.460! +[2023-09-26 01:53:43,038][63410] Saving new best policy, reward=10.130! +[2023-09-26 01:53:44,558][63636] Updated weights for policy 0, policy_version 1920 (0.0018) +[2023-09-26 01:53:44,558][63637] Updated weights for policy 1, policy_version 1920 (0.0017) +[2023-09-26 01:53:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6246.4). Total num frames: 999424. Throughput: 0: 806.0, 1: 805.9. Samples: 250105. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:53:48,030][62705] Avg episode reward: [(0, '6.120'), (1, '10.080')] +[2023-09-26 01:53:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6255.7). Total num frames: 1032192. 
Throughput: 0: 809.9, 1: 807.5. Samples: 255079. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:53:53,030][62705] Avg episode reward: [(0, '6.140'), (1, '10.680')] +[2023-09-26 01:53:53,031][63410] Saving new best policy, reward=10.680! +[2023-09-26 01:53:57,185][63637] Updated weights for policy 1, policy_version 2080 (0.0017) +[2023-09-26 01:53:57,185][63636] Updated weights for policy 0, policy_version 2080 (0.0014) +[2023-09-26 01:53:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6264.5). Total num frames: 1064960. Throughput: 0: 809.2, 1: 807.1. Samples: 264790. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:53:58,029][62705] Avg episode reward: [(0, '6.320'), (1, '11.030')] +[2023-09-26 01:53:58,035][63410] Saving new best policy, reward=11.030! +[2023-09-26 01:54:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6272.7). Total num frames: 1097728. Throughput: 0: 807.6, 1: 810.0. Samples: 274416. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:54:03,030][62705] Avg episode reward: [(0, '6.480'), (1, '10.920')] +[2023-09-26 01:54:03,031][63291] Saving new best policy, reward=6.480! +[2023-09-26 01:54:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6280.5). Total num frames: 1130496. Throughput: 0: 804.0, 1: 803.1. Samples: 278887. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:54:08,029][62705] Avg episode reward: [(0, '6.640'), (1, '10.850')] +[2023-09-26 01:54:08,030][63291] Saving new best policy, reward=6.640! +[2023-09-26 01:54:10,087][63637] Updated weights for policy 1, policy_version 2240 (0.0020) +[2023-09-26 01:54:10,088][63636] Updated weights for policy 0, policy_version 2240 (0.0019) +[2023-09-26 01:54:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6287.9). Total num frames: 1163264. Throughput: 0: 803.3, 1: 806.3. Samples: 288773. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 01:54:13,030][62705] Avg episode reward: [(0, '6.800'), (1, '10.600')] +[2023-09-26 01:54:13,037][63291] Saving new best policy, reward=6.800! +[2023-09-26 01:54:18,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6294.9). Total num frames: 1196032. Throughput: 0: 809.4, 1: 810.6. Samples: 298798. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:54:18,030][62705] Avg episode reward: [(0, '6.960'), (1, '10.240')] +[2023-09-26 01:54:18,031][63291] Saving new best policy, reward=6.960! +[2023-09-26 01:54:22,600][63636] Updated weights for policy 0, policy_version 2400 (0.0018) +[2023-09-26 01:54:22,600][63637] Updated weights for policy 1, policy_version 2400 (0.0017) +[2023-09-26 01:54:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6301.5). Total num frames: 1228800. Throughput: 0: 807.8, 1: 807.4. Samples: 303452. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 01:54:23,030][62705] Avg episode reward: [(0, '6.600'), (1, '10.420')] +[2023-09-26 01:54:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6307.8). Total num frames: 1261568. Throughput: 0: 807.0, 1: 809.8. Samples: 313344. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:54:28,030][62705] Avg episode reward: [(0, '6.390'), (1, '10.560')] +[2023-09-26 01:54:33,033][62705] Fps is (10 sec: 6551.2, 60 sec: 6553.2, 300 sec: 6313.7). Total num frames: 1294336. Throughput: 0: 804.8, 1: 805.1. Samples: 322556. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 01:54:33,034][62705] Avg episode reward: [(0, '6.380'), (1, '10.410')] +[2023-09-26 01:54:35,607][63637] Updated weights for policy 1, policy_version 2560 (0.0017) +[2023-09-26 01:54:35,607][63636] Updated weights for policy 0, policy_version 2560 (0.0016) +[2023-09-26 01:54:38,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6280.5). Total num frames: 1318912. Throughput: 0: 804.5, 1: 804.4. 
Samples: 327482. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:54:38,030][62705] Avg episode reward: [(0, '6.500'), (1, '10.190')]
+[2023-09-26 01:54:43,029][62705] Fps is (10 sec: 5736.5, 60 sec: 6417.1, 300 sec: 6286.9). Total num frames: 1351680. Throughput: 0: 803.4, 1: 803.6. Samples: 337105. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:54:43,030][62705] Avg episode reward: [(0, '6.460'), (1, '10.040')]
+[2023-09-26 01:54:48,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6292.9). Total num frames: 1384448. Throughput: 0: 806.0, 1: 802.2. Samples: 346784. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:54:48,030][62705] Avg episode reward: [(0, '6.500'), (1, '10.260')]
+[2023-09-26 01:54:48,261][63637] Updated weights for policy 1, policy_version 2720 (0.0016)
+[2023-09-26 01:54:48,261][63636] Updated weights for policy 0, policy_version 2720 (0.0016)
+[2023-09-26 01:54:53,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6298.7). Total num frames: 1417216. Throughput: 0: 808.1, 1: 808.1. Samples: 351616. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:54:53,029][62705] Avg episode reward: [(0, '6.320'), (1, '10.140')]
+[2023-09-26 01:54:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6304.3). Total num frames: 1449984. Throughput: 0: 807.6, 1: 803.8. Samples: 361286. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:54:58,030][62705] Avg episode reward: [(0, '6.520'), (1, '10.170')]
+[2023-09-26 01:54:58,039][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000002832_724992.pth...
+[2023-09-26 01:54:58,039][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000002832_724992.pth...
+[2023-09-26 01:55:00,856][63637] Updated weights for policy 1, policy_version 2880 (0.0015)
+[2023-09-26 01:55:00,857][63636] Updated weights for policy 0, policy_version 2880 (0.0016)
+[2023-09-26 01:55:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6309.6). Total num frames: 1482752. Throughput: 0: 802.0, 1: 801.3. Samples: 370947. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:55:03,030][62705] Avg episode reward: [(0, '6.120'), (1, '10.170')]
+[2023-09-26 01:55:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6314.7). Total num frames: 1515520. Throughput: 0: 804.7, 1: 804.4. Samples: 375864. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 01:55:08,030][62705] Avg episode reward: [(0, '6.170'), (1, '10.280')]
+[2023-09-26 01:55:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6319.5). Total num frames: 1548288. Throughput: 0: 803.2, 1: 800.0. Samples: 385487. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:55:13,030][62705] Avg episode reward: [(0, '6.000'), (1, '10.280')]
+[2023-09-26 01:55:13,507][63637] Updated weights for policy 1, policy_version 3040 (0.0017)
+[2023-09-26 01:55:13,508][63636] Updated weights for policy 0, policy_version 3040 (0.0017)
+[2023-09-26 01:55:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6324.2). Total num frames: 1581056. Throughput: 0: 806.5, 1: 809.4. Samples: 395268. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 01:55:18,029][62705] Avg episode reward: [(0, '6.270'), (1, '10.280')]
+[2023-09-26 01:55:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6328.7). Total num frames: 1613824. Throughput: 0: 805.1, 1: 805.0. Samples: 399936. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 01:55:23,030][62705] Avg episode reward: [(0, '6.230'), (1, '10.140')]
+[2023-09-26 01:55:26,294][63637] Updated weights for policy 1, policy_version 3200 (0.0018)
+[2023-09-26 01:55:26,294][63636] Updated weights for policy 0, policy_version 3200 (0.0018)
+[2023-09-26 01:55:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6333.0). Total num frames: 1646592. Throughput: 0: 804.5, 1: 807.0. Samples: 409623. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:55:28,030][62705] Avg episode reward: [(0, '6.420'), (1, '10.130')]
+[2023-09-26 01:55:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.5, 300 sec: 6337.2). Total num frames: 1679360. Throughput: 0: 809.7, 1: 811.6. Samples: 419741. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 01:55:33,029][62705] Avg episode reward: [(0, '6.500'), (1, '10.400')]
+[2023-09-26 01:55:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6341.2). Total num frames: 1712128. Throughput: 0: 807.3, 1: 807.9. Samples: 424301. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:55:38,030][62705] Avg episode reward: [(0, '6.460'), (1, '9.890')]
+[2023-09-26 01:55:38,940][63636] Updated weights for policy 0, policy_version 3360 (0.0018)
+[2023-09-26 01:55:38,941][63637] Updated weights for policy 1, policy_version 3360 (0.0018)
+[2023-09-26 01:55:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6345.1). Total num frames: 1744896. Throughput: 0: 808.0, 1: 809.8. Samples: 434086. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 01:55:43,029][62705] Avg episode reward: [(0, '6.450'), (1, '9.790')]
+[2023-09-26 01:55:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6348.8). Total num frames: 1777664. Throughput: 0: 808.7, 1: 808.5. Samples: 443719. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 01:55:48,030][62705] Avg episode reward: [(0, '6.460'), (1, '9.740')]
+[2023-09-26 01:55:51,655][63637] Updated weights for policy 1, policy_version 3520 (0.0017)
+[2023-09-26 01:55:51,655][63636] Updated weights for policy 0, policy_version 3520 (0.0017)
+[2023-09-26 01:55:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6352.4). Total num frames: 1810432. Throughput: 0: 805.6, 1: 808.8. Samples: 448512. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:55:53,029][62705] Avg episode reward: [(0, '6.590'), (1, '9.540')]
+[2023-09-26 01:55:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6355.9). Total num frames: 1843200. Throughput: 0: 809.8, 1: 810.0. Samples: 458380. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:55:58,029][62705] Avg episode reward: [(0, '6.860'), (1, '9.370')]
+[2023-09-26 01:56:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6359.2). Total num frames: 1875968. Throughput: 0: 811.3, 1: 807.2. Samples: 468104. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:56:03,029][62705] Avg episode reward: [(0, '6.870'), (1, '10.490')]
+[2023-09-26 01:56:04,234][63636] Updated weights for policy 0, policy_version 3680 (0.0017)
+[2023-09-26 01:56:04,234][63637] Updated weights for policy 1, policy_version 3680 (0.0019)
+[2023-09-26 01:56:08,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 1908736. Throughput: 0: 811.2, 1: 813.3. Samples: 473039. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 01:56:08,029][62705] Avg episode reward: [(0, '6.830'), (1, '10.630')]
+[2023-09-26 01:56:13,029][62705] Fps is (10 sec: 6143.9, 60 sec: 6485.3, 300 sec: 6456.4). Total num frames: 1937408. Throughput: 0: 814.3, 1: 811.9. Samples: 482802. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:56:13,030][62705] Avg episode reward: [(0, '7.030'), (1, '10.590')]
+[2023-09-26 01:56:13,042][63291] Saving new best policy, reward=7.030!
+[2023-09-26 01:56:16,882][63637] Updated weights for policy 1, policy_version 3840 (0.0017)
+[2023-09-26 01:56:16,883][63636] Updated weights for policy 0, policy_version 3840 (0.0015)
+[2023-09-26 01:56:18,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 1966080. Throughput: 0: 806.4, 1: 805.5. Samples: 492276. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:56:18,029][62705] Avg episode reward: [(0, '6.930'), (1, '11.010')]
+[2023-09-26 01:56:23,029][62705] Fps is (10 sec: 6143.9, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 1998848. Throughput: 0: 811.1, 1: 811.8. Samples: 497330. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:56:23,030][62705] Avg episode reward: [(0, '6.780'), (1, '11.080')]
+[2023-09-26 01:56:23,124][63410] Saving new best policy, reward=11.080!
+[2023-09-26 01:56:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2031616. Throughput: 0: 812.0, 1: 811.3. Samples: 507135. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:56:28,030][62705] Avg episode reward: [(0, '6.990'), (1, '11.020')]
+[2023-09-26 01:56:29,426][63636] Updated weights for policy 0, policy_version 4000 (0.0017)
+[2023-09-26 01:56:29,426][63637] Updated weights for policy 1, policy_version 4000 (0.0016)
+[2023-09-26 01:56:33,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2064384. Throughput: 0: 810.3, 1: 810.4. Samples: 516651. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 01:56:33,029][62705] Avg episode reward: [(0, '6.970'), (1, '11.090')]
+[2023-09-26 01:56:33,212][63410] Saving new best policy, reward=11.090!
+[2023-09-26 01:56:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2097152. Throughput: 0: 815.7, 1: 812.6. Samples: 521784. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:56:38,029][62705] Avg episode reward: [(0, '6.760'), (1, '11.080')]
+[2023-09-26 01:56:42,186][63637] Updated weights for policy 1, policy_version 4160 (0.0016)
+[2023-09-26 01:56:42,187][63636] Updated weights for policy 0, policy_version 4160 (0.0017)
+[2023-09-26 01:56:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2129920. Throughput: 0: 808.5, 1: 808.1. Samples: 531127. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 01:56:43,029][62705] Avg episode reward: [(0, '7.290'), (1, '11.080')]
+[2023-09-26 01:56:43,036][63291] Saving new best policy, reward=7.290!
+[2023-09-26 01:56:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 2162688. Throughput: 0: 806.9, 1: 808.5. Samples: 540797. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:56:48,030][62705] Avg episode reward: [(0, '7.120'), (1, '10.950')]
+[2023-09-26 01:56:53,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 2195456. Throughput: 0: 810.7, 1: 809.0. Samples: 545923. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 01:56:53,030][62705] Avg episode reward: [(0, '7.140'), (1, '10.730')]
+[2023-09-26 01:56:54,663][63636] Updated weights for policy 0, policy_version 4320 (0.0017)
+[2023-09-26 01:56:54,663][63637] Updated weights for policy 1, policy_version 4320 (0.0016)
+[2023-09-26 01:56:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 2228224. Throughput: 0: 809.1, 1: 809.2. Samples: 555624. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:56:58,030][62705] Avg episode reward: [(0, '6.910'), (1, '10.600')]
+[2023-09-26 01:56:58,040][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000004352_1114112.pth...
+[2023-09-26 01:56:58,040][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000004352_1114112.pth...
+[2023-09-26 01:56:58,071][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000001328_339968.pth
+[2023-09-26 01:56:58,078][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000001328_339968.pth
+[2023-09-26 01:57:03,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 2260992. Throughput: 0: 811.9, 1: 812.4. Samples: 565369. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:57:03,029][62705] Avg episode reward: [(0, '6.760'), (1, '10.600')]
+[2023-09-26 01:57:07,196][63637] Updated weights for policy 1, policy_version 4480 (0.0018)
+[2023-09-26 01:57:07,196][63636] Updated weights for policy 0, policy_version 4480 (0.0016)
+[2023-09-26 01:57:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 2293760. Throughput: 0: 813.4, 1: 812.6. Samples: 570498. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 01:57:08,030][62705] Avg episode reward: [(0, '6.920'), (1, '10.480')]
+[2023-09-26 01:57:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6485.3, 300 sec: 6470.3). Total num frames: 2326528. Throughput: 0: 812.8, 1: 812.4. Samples: 580271. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:57:13,030][62705] Avg episode reward: [(0, '7.060'), (1, '10.660')]
+[2023-09-26 01:57:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2359296. Throughput: 0: 813.1, 1: 814.6. Samples: 589894. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 01:57:18,029][62705] Avg episode reward: [(0, '6.830'), (1, '10.900')]
+[2023-09-26 01:57:19,751][63636] Updated weights for policy 0, policy_version 4640 (0.0015)
+[2023-09-26 01:57:19,751][63637] Updated weights for policy 1, policy_version 4640 (0.0015)
+[2023-09-26 01:57:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2392064. Throughput: 0: 812.6, 1: 812.6. Samples: 594915. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:57:23,030][62705] Avg episode reward: [(0, '6.780'), (1, '10.570')]
+[2023-09-26 01:57:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2424832. Throughput: 0: 811.6, 1: 813.3. Samples: 604247. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:57:28,029][62705] Avg episode reward: [(0, '6.650'), (1, '10.180')]
+[2023-09-26 01:57:32,690][63636] Updated weights for policy 0, policy_version 4800 (0.0017)
+[2023-09-26 01:57:32,693][63637] Updated weights for policy 1, policy_version 4800 (0.0019)
+[2023-09-26 01:57:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2457600. Throughput: 0: 813.2, 1: 812.6. Samples: 613957. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:57:33,029][62705] Avg episode reward: [(0, '6.590'), (1, '10.280')]
+[2023-09-26 01:57:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2490368. Throughput: 0: 807.1, 1: 807.8. Samples: 618595. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:57:38,029][62705] Avg episode reward: [(0, '6.680'), (1, '10.120')]
+[2023-09-26 01:57:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2523136. Throughput: 0: 810.9, 1: 812.3. Samples: 628668. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:57:43,030][62705] Avg episode reward: [(0, '6.630'), (1, '9.850')]
+[2023-09-26 01:57:45,297][63637] Updated weights for policy 1, policy_version 4960 (0.0021)
+[2023-09-26 01:57:45,297][63636] Updated weights for policy 0, policy_version 4960 (0.0020)
+[2023-09-26 01:57:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2555904. Throughput: 0: 812.1, 1: 811.6. Samples: 638435. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:57:48,030][62705] Avg episode reward: [(0, '6.850'), (1, '9.980')]
+[2023-09-26 01:57:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2588672. Throughput: 0: 806.1, 1: 807.8. Samples: 643126. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 01:57:53,030][62705] Avg episode reward: [(0, '6.710'), (1, '10.150')]
+[2023-09-26 01:57:57,759][63636] Updated weights for policy 0, policy_version 5120 (0.0017)
+[2023-09-26 01:57:57,759][63637] Updated weights for policy 1, policy_version 5120 (0.0016)
+[2023-09-26 01:57:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2621440. Throughput: 0: 810.0, 1: 811.5. Samples: 653236. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:57:58,029][62705] Avg episode reward: [(0, '6.960'), (1, '10.640')]
+[2023-09-26 01:58:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2654208. Throughput: 0: 810.5, 1: 808.9. Samples: 662769. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:58:03,030][62705] Avg episode reward: [(0, '6.880'), (1, '10.660')]
+[2023-09-26 01:58:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 2686976. Throughput: 0: 806.6, 1: 809.7. Samples: 667649. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:58:08,030][62705] Avg episode reward: [(0, '7.080'), (1, '10.790')]
+[2023-09-26 01:58:10,415][63636] Updated weights for policy 0, policy_version 5280 (0.0019)
+[2023-09-26 01:58:10,415][63637] Updated weights for policy 1, policy_version 5280 (0.0018)
+[2023-09-26 01:58:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 2719744. Throughput: 0: 815.1, 1: 814.0. Samples: 677556. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:58:13,030][62705] Avg episode reward: [(0, '6.800'), (1, '10.610')]
+[2023-09-26 01:58:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 2752512. Throughput: 0: 813.1, 1: 813.4. Samples: 687152. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:58:18,029][62705] Avg episode reward: [(0, '6.840'), (1, '10.910')]
+[2023-09-26 01:58:22,995][63636] Updated weights for policy 0, policy_version 5440 (0.0017)
+[2023-09-26 01:58:22,995][63637] Updated weights for policy 1, policy_version 5440 (0.0018)
+[2023-09-26 01:58:23,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 2785280. Throughput: 0: 817.0, 1: 818.4. Samples: 692187. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:58:23,029][62705] Avg episode reward: [(0, '7.260'), (1, '10.720')]
+[2023-09-26 01:58:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 2818048. Throughput: 0: 814.0, 1: 813.1. Samples: 701886. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:58:28,030][62705] Avg episode reward: [(0, '7.350'), (1, '10.050')]
+[2023-09-26 01:58:28,042][63291] Saving new best policy, reward=7.350!
+[2023-09-26 01:58:33,029][62705] Fps is (10 sec: 6143.9, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 2846720. Throughput: 0: 813.2, 1: 813.0. Samples: 711614. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:58:33,030][62705] Avg episode reward: [(0, '7.370'), (1, '10.090')]
+[2023-09-26 01:58:33,031][63291] Saving new best policy, reward=7.370!
+[2023-09-26 01:58:35,551][63636] Updated weights for policy 0, policy_version 5600 (0.0020)
+[2023-09-26 01:58:35,551][63637] Updated weights for policy 1, policy_version 5600 (0.0017)
+[2023-09-26 01:58:38,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 2875392. Throughput: 0: 817.8, 1: 816.5. Samples: 716668. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:58:38,030][62705] Avg episode reward: [(0, '7.640'), (1, '9.890')]
+[2023-09-26 01:58:38,085][63291] Saving new best policy, reward=7.640!
+[2023-09-26 01:58:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 2912256. Throughput: 0: 813.4, 1: 812.0. Samples: 726376. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:58:43,030][62705] Avg episode reward: [(0, '7.300'), (1, '9.990')]
+[2023-09-26 01:58:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 2940928. Throughput: 0: 811.4, 1: 811.8. Samples: 735812. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 01:58:48,030][62705] Avg episode reward: [(0, '7.330'), (1, '10.250')]
+[2023-09-26 01:58:48,197][63637] Updated weights for policy 1, policy_version 5760 (0.0017)
+[2023-09-26 01:58:48,197][63636] Updated weights for policy 0, policy_version 5760 (0.0016)
+[2023-09-26 01:58:53,029][62705] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 2973696. Throughput: 0: 814.9, 1: 812.9. Samples: 740902. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:58:53,030][62705] Avg episode reward: [(0, '7.180'), (1, '9.870')]
+[2023-09-26 01:58:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 3006464. Throughput: 0: 811.6, 1: 811.7. Samples: 750606. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 01:58:58,030][62705] Avg episode reward: [(0, '7.780'), (1, '9.610')]
+[2023-09-26 01:58:58,211][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000005888_1507328.pth...
+[2023-09-26 01:58:58,237][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000005888_1507328.pth...
+[2023-09-26 01:58:58,238][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000002832_724992.pth
+[2023-09-26 01:58:58,264][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000002832_724992.pth
+[2023-09-26 01:58:58,267][63291] Saving new best policy, reward=7.780!
+[2023-09-26 01:59:00,757][63636] Updated weights for policy 0, policy_version 5920 (0.0017)
+[2023-09-26 01:59:00,757][63637] Updated weights for policy 1, policy_version 5920 (0.0017)
+[2023-09-26 01:59:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3039232. Throughput: 0: 813.1, 1: 812.8. Samples: 760318. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 01:59:03,030][62705] Avg episode reward: [(0, '7.790'), (1, '9.400')]
+[2023-09-26 01:59:03,031][63291] Saving new best policy, reward=7.790!
+[2023-09-26 01:59:08,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3072000. Throughput: 0: 813.5, 1: 811.6. Samples: 765316. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:59:08,030][62705] Avg episode reward: [(0, '8.120'), (1, '9.760')]
+[2023-09-26 01:59:08,031][63291] Saving new best policy, reward=8.120!
+[2023-09-26 01:59:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3104768. Throughput: 0: 812.5, 1: 812.1. Samples: 774992. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:59:13,030][62705] Avg episode reward: [(0, '7.560'), (1, '9.680')]
+[2023-09-26 01:59:13,336][63636] Updated weights for policy 0, policy_version 6080 (0.0017)
+[2023-09-26 01:59:13,336][63637] Updated weights for policy 1, policy_version 6080 (0.0016)
+[2023-09-26 01:59:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 3137536. Throughput: 0: 811.1, 1: 811.7. Samples: 784640. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:59:18,030][62705] Avg episode reward: [(0, '7.630'), (1, '9.690')]
+[2023-09-26 01:59:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 3170304. Throughput: 0: 812.2, 1: 811.8. Samples: 789748. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:59:23,030][62705] Avg episode reward: [(0, '7.740'), (1, '10.000')]
+[2023-09-26 01:59:25,935][63636] Updated weights for policy 0, policy_version 6240 (0.0014)
+[2023-09-26 01:59:25,935][63637] Updated weights for policy 1, policy_version 6240 (0.0014)
+[2023-09-26 01:59:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.4). Total num frames: 3203072. Throughput: 0: 811.1, 1: 811.1. Samples: 799375. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:59:28,030][62705] Avg episode reward: [(0, '7.910'), (1, '10.000')]
+[2023-09-26 01:59:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6485.3, 300 sec: 6498.1). Total num frames: 3235840. Throughput: 0: 814.6, 1: 814.5. Samples: 809119. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:59:33,029][62705] Avg episode reward: [(0, '8.010'), (1, '10.000')]
+[2023-09-26 01:59:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3268608. Throughput: 0: 813.2, 1: 812.3. Samples: 814049. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:59:38,030][62705] Avg episode reward: [(0, '8.470'), (1, '10.220')]
+[2023-09-26 01:59:38,031][63291] Saving new best policy, reward=8.470!
+[2023-09-26 01:59:38,507][63637] Updated weights for policy 1, policy_version 6400 (0.0016)
+[2023-09-26 01:59:38,507][63636] Updated weights for policy 0, policy_version 6400 (0.0016)
+[2023-09-26 01:59:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6485.3, 300 sec: 6498.1). Total num frames: 3301376. Throughput: 0: 812.5, 1: 812.3. Samples: 823722. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:59:43,030][62705] Avg episode reward: [(0, '8.520'), (1, '10.410')]
+[2023-09-26 01:59:43,040][63291] Saving new best policy, reward=8.520!
+[2023-09-26 01:59:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3334144. Throughput: 0: 812.1, 1: 815.1. Samples: 833540. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:59:48,030][62705] Avg episode reward: [(0, '8.950'), (1, '10.820')]
+[2023-09-26 01:59:48,031][63291] Saving new best policy, reward=8.950!
+[2023-09-26 01:59:51,087][63637] Updated weights for policy 1, policy_version 6560 (0.0019)
+[2023-09-26 01:59:51,087][63636] Updated weights for policy 0, policy_version 6560 (0.0018)
+[2023-09-26 01:59:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3366912. Throughput: 0: 812.5, 1: 812.4. Samples: 838437. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:59:53,030][62705] Avg episode reward: [(0, '8.960'), (1, '10.940')]
+[2023-09-26 01:59:53,031][63291] Saving new best policy, reward=8.960!
+[2023-09-26 01:59:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3399680. Throughput: 0: 812.0, 1: 811.7. Samples: 848055. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:59:58,029][62705] Avg episode reward: [(0, '8.850'), (1, '10.850')]
+[2023-09-26 02:00:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3432448. Throughput: 0: 815.0, 1: 816.7. Samples: 858065. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:00:03,030][62705] Avg episode reward: [(0, '8.580'), (1, '10.850')]
+[2023-09-26 02:00:03,935][63636] Updated weights for policy 0, policy_version 6720 (0.0019)
+[2023-09-26 02:00:03,936][63637] Updated weights for policy 1, policy_version 6720 (0.0018)
+[2023-09-26 02:00:08,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3465216. Throughput: 0: 806.0, 1: 806.7. Samples: 862317. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:00:08,029][62705] Avg episode reward: [(0, '8.600'), (1, '11.270')]
+[2023-09-26 02:00:08,030][63410] Saving new best policy, reward=11.270!
+[2023-09-26 02:00:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3497984. Throughput: 0: 810.4, 1: 811.6. Samples: 872364. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:00:13,030][62705] Avg episode reward: [(0, '8.890'), (1, '11.060')]
+[2023-09-26 02:00:16,537][63637] Updated weights for policy 1, policy_version 6880 (0.0015)
+[2023-09-26 02:00:16,537][63636] Updated weights for policy 0, policy_version 6880 (0.0017)
+[2023-09-26 02:00:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3530752. Throughput: 0: 810.5, 1: 810.6. Samples: 882069. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:00:18,029][62705] Avg episode reward: [(0, '8.920'), (1, '11.580')]
+[2023-09-26 02:00:18,030][63410] Saving new best policy, reward=11.580!
+[2023-09-26 02:00:23,029][62705] Fps is (10 sec: 6144.1, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 3559424. Throughput: 0: 806.8, 1: 809.6. Samples: 886788. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:00:23,030][62705] Avg episode reward: [(0, '9.090'), (1, '11.740')]
+[2023-09-26 02:00:23,031][63410] Saving new best policy, reward=11.740!
+[2023-09-26 02:00:23,031][63291] Saving new best policy, reward=9.090!
+[2023-09-26 02:00:28,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3588096. Throughput: 0: 806.9, 1: 807.1. Samples: 896350. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:00:28,030][62705] Avg episode reward: [(0, '8.850'), (1, '11.970')]
+[2023-09-26 02:00:28,106][63410] Saving new best policy, reward=11.970!
+[2023-09-26 02:00:29,461][63637] Updated weights for policy 1, policy_version 7040 (0.0017)
+[2023-09-26 02:00:29,462][63636] Updated weights for policy 0, policy_version 7040 (0.0017)
+[2023-09-26 02:00:33,029][62705] Fps is (10 sec: 6143.9, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 3620864. Throughput: 0: 802.5, 1: 799.4. Samples: 905627. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 02:00:33,030][62705] Avg episode reward: [(0, '8.970'), (1, '11.890')]
+[2023-09-26 02:00:38,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3653632. Throughput: 0: 802.2, 1: 802.1. Samples: 910631. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 02:00:38,030][62705] Avg episode reward: [(0, '9.110'), (1, '12.410')]
+[2023-09-26 02:00:38,031][63291] Saving new best policy, reward=9.110!
+[2023-09-26 02:00:38,031][63410] Saving new best policy, reward=12.410!
+[2023-09-26 02:00:42,100][63637] Updated weights for policy 1, policy_version 7200 (0.0016)
+[2023-09-26 02:00:42,102][63636] Updated weights for policy 0, policy_version 7200 (0.0018)
+[2023-09-26 02:00:43,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3686400. Throughput: 0: 803.1, 1: 803.2. Samples: 920341. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:00:43,029][62705] Avg episode reward: [(0, '8.960'), (1, '11.020')]
+[2023-09-26 02:00:48,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3719168. Throughput: 0: 801.0, 1: 799.0. Samples: 930062. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 02:00:48,029][62705] Avg episode reward: [(0, '8.810'), (1, '11.060')]
+[2023-09-26 02:00:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3751936. Throughput: 0: 809.3, 1: 808.7. Samples: 935129. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 02:00:53,029][62705] Avg episode reward: [(0, '8.800'), (1, '10.890')]
+[2023-09-26 02:00:54,646][63637] Updated weights for policy 1, policy_version 7360 (0.0015)
+[2023-09-26 02:00:54,646][63636] Updated weights for policy 0, policy_version 7360 (0.0017)
+[2023-09-26 02:00:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3784704. Throughput: 0: 806.3, 1: 805.2. Samples: 944878. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 02:00:58,029][62705] Avg episode reward: [(0, '8.870'), (1, '11.170')]
+[2023-09-26 02:00:58,039][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000007392_1892352.pth...
+[2023-09-26 02:00:58,039][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000007392_1892352.pth...
+[2023-09-26 02:00:58,081][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000004352_1114112.pth
+[2023-09-26 02:00:58,081][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000004352_1114112.pth
+[2023-09-26 02:01:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 3817472. Throughput: 0: 803.6, 1: 804.7. Samples: 954446. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:01:03,030][62705] Avg episode reward: [(0, '9.040'), (1, '11.380')]
+[2023-09-26 02:01:07,256][63637] Updated weights for policy 1, policy_version 7520 (0.0017)
+[2023-09-26 02:01:07,256][63636] Updated weights for policy 0, policy_version 7520 (0.0016)
+[2023-09-26 02:01:08,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6484.2). Total num frames: 3850240. Throughput: 0: 809.9, 1: 807.3. Samples: 959560. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:01:08,029][62705] Avg episode reward: [(0, '9.660'), (1, '12.230')]
+[2023-09-26 02:01:08,030][63291] Saving new best policy, reward=9.660!
+[2023-09-26 02:01:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 3883008. Throughput: 0: 808.7, 1: 808.7. Samples: 969134. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:01:13,030][62705] Avg episode reward: [(0, '9.570'), (1, '12.270')]
+[2023-09-26 02:01:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 3915776. Throughput: 0: 813.2, 1: 816.2. Samples: 978948. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:01:18,029][62705] Avg episode reward: [(0, '10.250'), (1, '12.290')]
+[2023-09-26 02:01:18,030][63291] Saving new best policy, reward=10.250!
+[2023-09-26 02:01:19,913][63636] Updated weights for policy 0, policy_version 7680 (0.0017)
+[2023-09-26 02:01:19,914][63637] Updated weights for policy 1, policy_version 7680 (0.0018)
+[2023-09-26 02:01:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6485.3, 300 sec: 6498.1). Total num frames: 3948544. Throughput: 0: 812.2, 1: 812.0. Samples: 983718. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:01:23,030][62705] Avg episode reward: [(0, '10.380'), (1, '12.650')]
+[2023-09-26 02:01:23,031][63291] Saving new best policy, reward=10.380!
+[2023-09-26 02:01:23,031][63410] Saving new best policy, reward=12.650!
+[2023-09-26 02:01:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 3981312. Throughput: 0: 810.7, 1: 812.0. Samples: 993361. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 02:01:28,030][62705] Avg episode reward: [(0, '10.460'), (1, '12.550')]
+[2023-09-26 02:01:28,038][63291] Saving new best policy, reward=10.460!
+[2023-09-26 02:01:32,605][63636] Updated weights for policy 0, policy_version 7840 (0.0016)
+[2023-09-26 02:01:32,606][63637] Updated weights for policy 1, policy_version 7840 (0.0015)
+[2023-09-26 02:01:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4014080. Throughput: 0: 813.4, 1: 813.9. Samples: 1003289. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 02:01:33,030][62705] Avg episode reward: [(0, '10.810'), (1, '12.140')]
+[2023-09-26 02:01:33,031][63291] Saving new best policy, reward=10.810!
+[2023-09-26 02:01:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4046848. Throughput: 0: 808.7, 1: 808.4. Samples: 1007900. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 02:01:38,029][62705] Avg episode reward: [(0, '10.630'), (1, '12.020')]
+[2023-09-26 02:01:43,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4079616. Throughput: 0: 809.4, 1: 812.0. Samples: 1017842. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:01:43,029][62705] Avg episode reward: [(0, '10.800'), (1, '11.760')]
+[2023-09-26 02:01:45,298][63636] Updated weights for policy 0, policy_version 8000 (0.0015)
+[2023-09-26 02:01:45,300][63637] Updated weights for policy 1, policy_version 8000 (0.0018)
+[2023-09-26 02:01:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4112384. Throughput: 0: 812.6, 1: 811.2. Samples: 1027518. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:01:48,029][62705] Avg episode reward: [(0, '11.190'), (1, '11.900')]
+[2023-09-26 02:01:48,030][63291] Saving new best policy, reward=11.190!
+[2023-09-26 02:01:53,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4145152. Throughput: 0: 807.7, 1: 808.4. Samples: 1032285. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:01:53,030][62705] Avg episode reward: [(0, '11.550'), (1, '13.050')]
+[2023-09-26 02:01:53,031][63291] Saving new best policy, reward=11.550!
+[2023-09-26 02:01:53,031][63410] Saving new best policy, reward=13.050!
+[2023-09-26 02:01:57,797][63637] Updated weights for policy 1, policy_version 8160 (0.0018)
+[2023-09-26 02:01:57,797][63636] Updated weights for policy 0, policy_version 8160 (0.0016)
+[2023-09-26 02:01:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4177920. Throughput: 0: 812.4, 1: 812.2. Samples: 1042239. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 02:01:58,029][62705] Avg episode reward: [(0, '11.400'), (1, '12.880')]
+[2023-09-26 02:02:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4210688. Throughput: 0: 813.2, 1: 810.6. Samples: 1052021. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:02:03,030][62705] Avg episode reward: [(0, '11.830'), (1, '12.950')]
+[2023-09-26 02:02:03,031][63291] Saving new best policy, reward=11.830!
+[2023-09-26 02:02:08,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4243456. Throughput: 0: 810.2, 1: 813.2. Samples: 1056772. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:02:08,030][62705] Avg episode reward: [(0, '12.020'), (1, '12.710')]
+[2023-09-26 02:02:08,031][63291] Saving new best policy, reward=12.020!
+[2023-09-26 02:02:10,539][63636] Updated weights for policy 0, policy_version 8320 (0.0018)
+[2023-09-26 02:02:10,539][63637] Updated weights for policy 1, policy_version 8320 (0.0017)
+[2023-09-26 02:02:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4276224. Throughput: 0: 812.0, 1: 811.2. Samples: 1066404. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 02:02:13,030][62705] Avg episode reward: [(0, '11.840'), (1, '11.710')]
+[2023-09-26 02:02:18,029][62705] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 4304896. Throughput: 0: 808.9, 1: 809.6. Samples: 1076121. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:02:18,030][62705] Avg episode reward: [(0, '12.070'), (1, '10.580')]
+[2023-09-26 02:02:18,166][63291] Saving new best policy, reward=12.070!
+[2023-09-26 02:02:23,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4333568. Throughput: 0: 809.9, 1: 810.3. Samples: 1080811. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:02:23,030][62705] Avg episode reward: [(0, '11.800'), (1, '10.100')]
+[2023-09-26 02:02:23,200][63637] Updated weights for policy 1, policy_version 8480 (0.0017)
+[2023-09-26 02:02:23,201][63636] Updated weights for policy 0, policy_version 8480 (0.0017)
+[2023-09-26 02:02:28,029][62705] Fps is (10 sec: 6144.1, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4366336. Throughput: 0: 810.1, 1: 807.7. Samples: 1090644. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:02:28,029][62705] Avg episode reward: [(0, '11.430'), (1, '10.230')]
+[2023-09-26 02:02:33,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4399104. Throughput: 0: 807.8, 1: 807.9. Samples: 1100226.
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:02:33,029][62705] Avg episode reward: [(0, '11.610'), (1, '10.670')] +[2023-09-26 02:02:35,780][63636] Updated weights for policy 0, policy_version 8640 (0.0018) +[2023-09-26 02:02:35,781][63637] Updated weights for policy 1, policy_version 8640 (0.0017) +[2023-09-26 02:02:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4431872. Throughput: 0: 811.8, 1: 811.0. Samples: 1105308. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:02:38,029][62705] Avg episode reward: [(0, '11.380'), (1, '11.050')] +[2023-09-26 02:02:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 4464640. Throughput: 0: 808.5, 1: 809.1. Samples: 1115034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:02:43,030][62705] Avg episode reward: [(0, '11.790'), (1, '10.380')] +[2023-09-26 02:02:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4497408. Throughput: 0: 809.2, 1: 808.5. Samples: 1124819. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:02:48,029][62705] Avg episode reward: [(0, '12.000'), (1, '10.240')] +[2023-09-26 02:02:48,276][63637] Updated weights for policy 1, policy_version 8800 (0.0018) +[2023-09-26 02:02:48,276][63636] Updated weights for policy 0, policy_version 8800 (0.0018) +[2023-09-26 02:02:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4530176. Throughput: 0: 814.5, 1: 811.0. Samples: 1129921. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:02:53,030][62705] Avg episode reward: [(0, '11.870'), (1, '10.210')] +[2023-09-26 02:02:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 4562944. Throughput: 0: 810.8, 1: 811.2. Samples: 1139397. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:02:58,029][62705] Avg episode reward: [(0, '11.480'), (1, '10.730')] +[2023-09-26 02:02:58,038][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000008912_2281472.pth... +[2023-09-26 02:02:58,038][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000008912_2281472.pth... +[2023-09-26 02:02:58,077][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000005888_1507328.pth +[2023-09-26 02:02:58,077][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000005888_1507328.pth +[2023-09-26 02:03:00,893][63636] Updated weights for policy 0, policy_version 8960 (0.0015) +[2023-09-26 02:03:00,894][63637] Updated weights for policy 1, policy_version 8960 (0.0016) +[2023-09-26 02:03:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4595712. Throughput: 0: 811.0, 1: 810.0. Samples: 1149064. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:03:03,030][62705] Avg episode reward: [(0, '10.820'), (1, '10.740')] +[2023-09-26 02:03:08,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4628480. Throughput: 0: 811.5, 1: 811.5. Samples: 1153848. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:03:08,030][62705] Avg episode reward: [(0, '10.860'), (1, '11.560')] +[2023-09-26 02:03:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4661248. Throughput: 0: 810.1, 1: 809.8. Samples: 1163538. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:03:13,030][62705] Avg episode reward: [(0, '11.070'), (1, '11.540')] +[2023-09-26 02:03:13,629][63636] Updated weights for policy 0, policy_version 9120 (0.0017) +[2023-09-26 02:03:13,630][63637] Updated weights for policy 1, policy_version 9120 (0.0016) +[2023-09-26 02:03:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6485.3, 300 sec: 6470.3). 
Total num frames: 4694016. Throughput: 0: 812.7, 1: 813.7. Samples: 1173415. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 02:03:18,030][62705] Avg episode reward: [(0, '11.270'), (1, '11.570')] +[2023-09-26 02:03:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 4726784. Throughput: 0: 806.1, 1: 805.9. Samples: 1177845. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 02:03:23,030][62705] Avg episode reward: [(0, '11.720'), (1, '10.750')] +[2023-09-26 02:03:26,508][63636] Updated weights for policy 0, policy_version 9280 (0.0018) +[2023-09-26 02:03:26,508][63637] Updated weights for policy 1, policy_version 9280 (0.0017) +[2023-09-26 02:03:28,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6484.2). Total num frames: 4759552. Throughput: 0: 807.7, 1: 808.8. Samples: 1187779. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:03:28,029][62705] Avg episode reward: [(0, '11.770'), (1, '10.580')] +[2023-09-26 02:03:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4792320. Throughput: 0: 804.8, 1: 805.5. Samples: 1197280. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:03:33,029][62705] Avg episode reward: [(0, '11.880'), (1, '11.040')] +[2023-09-26 02:03:38,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6484.2). Total num frames: 4825088. Throughput: 0: 801.1, 1: 804.6. Samples: 1202177. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:03:38,030][62705] Avg episode reward: [(0, '11.930'), (1, '12.010')] +[2023-09-26 02:03:39,084][63637] Updated weights for policy 1, policy_version 9440 (0.0017) +[2023-09-26 02:03:39,085][63636] Updated weights for policy 0, policy_version 9440 (0.0017) +[2023-09-26 02:03:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 4857856. Throughput: 0: 809.4, 1: 809.2. Samples: 1212234. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:03:43,030][62705] Avg episode reward: [(0, '12.330'), (1, '12.610')] +[2023-09-26 02:03:43,041][63291] Saving new best policy, reward=12.330! +[2023-09-26 02:03:48,029][62705] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4882432. Throughput: 0: 804.6, 1: 803.8. Samples: 1221446. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:03:48,029][62705] Avg episode reward: [(0, '11.630'), (1, '12.450')] +[2023-09-26 02:03:51,855][63636] Updated weights for policy 0, policy_version 9600 (0.0019) +[2023-09-26 02:03:51,855][63637] Updated weights for policy 1, policy_version 9600 (0.0018) +[2023-09-26 02:03:53,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4915200. Throughput: 0: 807.2, 1: 807.6. Samples: 1226515. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:03:53,030][62705] Avg episode reward: [(0, '11.530'), (1, '11.830')] +[2023-09-26 02:03:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4947968. Throughput: 0: 807.0, 1: 807.0. Samples: 1236164. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:03:58,030][62705] Avg episode reward: [(0, '11.670'), (1, '12.390')] +[2023-09-26 02:04:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 4980736. Throughput: 0: 807.4, 1: 806.0. Samples: 1246016. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:04:03,029][62705] Avg episode reward: [(0, '11.580'), (1, '12.770')] +[2023-09-26 02:04:04,359][63637] Updated weights for policy 1, policy_version 9760 (0.0016) +[2023-09-26 02:04:04,360][63636] Updated weights for policy 0, policy_version 9760 (0.0016) +[2023-09-26 02:04:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5013504. Throughput: 0: 814.0, 1: 813.0. Samples: 1251062. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:04:08,029][62705] Avg episode reward: [(0, '11.890'), (1, '12.990')] +[2023-09-26 02:04:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5046272. Throughput: 0: 810.0, 1: 808.2. Samples: 1260600. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:04:13,030][62705] Avg episode reward: [(0, '12.000'), (1, '13.250')] +[2023-09-26 02:04:13,204][63410] Saving new best policy, reward=13.250! +[2023-09-26 02:04:17,001][63636] Updated weights for policy 0, policy_version 9920 (0.0018) +[2023-09-26 02:04:17,001][63637] Updated weights for policy 1, policy_version 9920 (0.0019) +[2023-09-26 02:04:18,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5079040. Throughput: 0: 811.0, 1: 810.7. Samples: 1270257. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:04:18,030][62705] Avg episode reward: [(0, '11.240'), (1, '13.220')] +[2023-09-26 02:04:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5111808. Throughput: 0: 815.7, 1: 812.3. Samples: 1275437. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:04:23,030][62705] Avg episode reward: [(0, '11.280'), (1, '13.900')] +[2023-09-26 02:04:23,172][63410] Saving new best policy, reward=13.900! +[2023-09-26 02:04:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 5144576. Throughput: 0: 812.4, 1: 811.4. Samples: 1285308. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 02:04:28,030][62705] Avg episode reward: [(0, '11.200'), (1, '14.110')] +[2023-09-26 02:04:28,163][63410] Saving new best policy, reward=14.110! 
+[2023-09-26 02:04:29,504][63636] Updated weights for policy 0, policy_version 10080 (0.0015) +[2023-09-26 02:04:29,505][63637] Updated weights for policy 1, policy_version 10080 (0.0017) +[2023-09-26 02:04:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 5177344. Throughput: 0: 814.8, 1: 815.5. Samples: 1294806. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 02:04:33,030][62705] Avg episode reward: [(0, '11.020'), (1, '13.640')] +[2023-09-26 02:04:38,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5210112. Throughput: 0: 811.7, 1: 812.4. Samples: 1299600. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:04:38,029][62705] Avg episode reward: [(0, '10.870'), (1, '12.970')] +[2023-09-26 02:04:42,216][63637] Updated weights for policy 1, policy_version 10240 (0.0016) +[2023-09-26 02:04:42,216][63636] Updated weights for policy 0, policy_version 10240 (0.0016) +[2023-09-26 02:04:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5242880. Throughput: 0: 811.5, 1: 811.7. Samples: 1309209. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:04:43,030][62705] Avg episode reward: [(0, '10.960'), (1, '12.990')] +[2023-09-26 02:04:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 5275648. Throughput: 0: 810.9, 1: 811.8. Samples: 1319034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:04:48,029][62705] Avg episode reward: [(0, '11.350'), (1, '12.680')] +[2023-09-26 02:04:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 5308416. Throughput: 0: 810.8, 1: 811.6. Samples: 1324070. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:04:53,030][62705] Avg episode reward: [(0, '11.230'), (1, '12.230')] +[2023-09-26 02:04:54,771][63637] Updated weights for policy 1, policy_version 10400 (0.0018) +[2023-09-26 02:04:54,771][63636] Updated weights for policy 0, policy_version 10400 (0.0018) +[2023-09-26 02:04:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 5341184. Throughput: 0: 811.8, 1: 811.6. Samples: 1333656. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:04:58,030][62705] Avg episode reward: [(0, '11.600'), (1, '12.370')] +[2023-09-26 02:04:58,040][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000010432_2670592.pth... +[2023-09-26 02:04:58,040][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000010432_2670592.pth... +[2023-09-26 02:04:58,069][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000007392_1892352.pth +[2023-09-26 02:04:58,076][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000007392_1892352.pth +[2023-09-26 02:05:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 5373952. Throughput: 0: 812.2, 1: 815.2. Samples: 1343493. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:05:03,029][62705] Avg episode reward: [(0, '12.020'), (1, '12.390')] +[2023-09-26 02:05:07,404][63637] Updated weights for policy 1, policy_version 10560 (0.0017) +[2023-09-26 02:05:07,404][63636] Updated weights for policy 0, policy_version 10560 (0.0017) +[2023-09-26 02:05:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 5406720. Throughput: 0: 809.4, 1: 810.0. Samples: 1348313. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:05:08,030][62705] Avg episode reward: [(0, '12.250'), (1, '12.090')] +[2023-09-26 02:05:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). 
Total num frames: 5439488. Throughput: 0: 808.5, 1: 808.8. Samples: 1358084. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:05:13,030][62705] Avg episode reward: [(0, '12.180'), (1, '11.750')] +[2023-09-26 02:05:18,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6484.2). Total num frames: 5472256. Throughput: 0: 812.5, 1: 815.2. Samples: 1368052. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:05:18,030][62705] Avg episode reward: [(0, '12.640'), (1, '12.270')] +[2023-09-26 02:05:18,031][63291] Saving new best policy, reward=12.640! +[2023-09-26 02:05:19,991][63637] Updated weights for policy 1, policy_version 10720 (0.0018) +[2023-09-26 02:05:19,991][63636] Updated weights for policy 0, policy_version 10720 (0.0018) +[2023-09-26 02:05:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 5505024. Throughput: 0: 812.5, 1: 811.4. Samples: 1372679. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:05:23,030][62705] Avg episode reward: [(0, '13.710'), (1, '13.130')] +[2023-09-26 02:05:23,031][63291] Saving new best policy, reward=13.710! +[2023-09-26 02:05:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 5537792. Throughput: 0: 812.0, 1: 814.7. Samples: 1382408. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:05:28,030][62705] Avg episode reward: [(0, '13.760'), (1, '14.020')] +[2023-09-26 02:05:28,042][63291] Saving new best policy, reward=13.760! +[2023-09-26 02:05:32,666][63637] Updated weights for policy 1, policy_version 10880 (0.0017) +[2023-09-26 02:05:32,667][63636] Updated weights for policy 0, policy_version 10880 (0.0018) +[2023-09-26 02:05:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 5570560. Throughput: 0: 814.6, 1: 815.9. Samples: 1392405. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:05:33,030][62705] Avg episode reward: [(0, '13.360'), (1, '13.950')] +[2023-09-26 02:05:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 5603328. Throughput: 0: 807.9, 1: 809.1. Samples: 1396834. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:05:38,030][62705] Avg episode reward: [(0, '13.260'), (1, '13.910')] +[2023-09-26 02:05:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 5636096. Throughput: 0: 812.2, 1: 812.4. Samples: 1406764. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:05:43,030][62705] Avg episode reward: [(0, '13.660'), (1, '15.080')] +[2023-09-26 02:05:43,042][63410] Saving new best policy, reward=15.080! +[2023-09-26 02:05:45,320][63636] Updated weights for policy 0, policy_version 11040 (0.0018) +[2023-09-26 02:05:45,320][63637] Updated weights for policy 1, policy_version 11040 (0.0016) +[2023-09-26 02:05:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 5668864. Throughput: 0: 812.4, 1: 809.3. Samples: 1416470. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:05:48,030][62705] Avg episode reward: [(0, '14.010'), (1, '14.850')] +[2023-09-26 02:05:48,031][63291] Saving new best policy, reward=14.010! +[2023-09-26 02:05:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 5701632. Throughput: 0: 809.7, 1: 812.5. Samples: 1421313. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:05:53,030][62705] Avg episode reward: [(0, '13.920'), (1, '14.710')] +[2023-09-26 02:05:58,004][63637] Updated weights for policy 1, policy_version 11200 (0.0017) +[2023-09-26 02:05:58,005][63636] Updated weights for policy 0, policy_version 11200 (0.0018) +[2023-09-26 02:05:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 5734400. 
Throughput: 0: 810.7, 1: 810.8. Samples: 1431050. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:05:58,030][62705] Avg episode reward: [(0, '14.520'), (1, '14.760')] +[2023-09-26 02:05:58,041][63291] Saving new best policy, reward=14.520! +[2023-09-26 02:06:03,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5758976. Throughput: 0: 805.9, 1: 803.4. Samples: 1440470. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:06:03,030][62705] Avg episode reward: [(0, '13.510'), (1, '15.450')] +[2023-09-26 02:06:03,111][63410] Saving new best policy, reward=15.450! +[2023-09-26 02:06:08,029][62705] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5791744. Throughput: 0: 809.6, 1: 809.7. Samples: 1445547. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:06:08,030][62705] Avg episode reward: [(0, '13.690'), (1, '15.160')] +[2023-09-26 02:06:10,670][63637] Updated weights for policy 1, policy_version 11360 (0.0018) +[2023-09-26 02:06:10,670][63636] Updated weights for policy 0, policy_version 11360 (0.0017) +[2023-09-26 02:06:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5824512. Throughput: 0: 810.7, 1: 808.0. Samples: 1455249. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:06:13,030][62705] Avg episode reward: [(0, '13.500'), (1, '15.910')] +[2023-09-26 02:06:13,173][63410] Saving new best policy, reward=15.910! +[2023-09-26 02:06:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5857280. Throughput: 0: 807.0, 1: 805.2. Samples: 1464957. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:06:18,030][62705] Avg episode reward: [(0, '12.710'), (1, '15.860')] +[2023-09-26 02:06:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5890048. Throughput: 0: 814.0, 1: 812.4. Samples: 1470018. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:06:23,030][62705] Avg episode reward: [(0, '13.420'), (1, '16.370')] +[2023-09-26 02:06:23,190][63410] Saving new best policy, reward=16.370! +[2023-09-26 02:06:23,193][63636] Updated weights for policy 0, policy_version 11520 (0.0015) +[2023-09-26 02:06:23,193][63637] Updated weights for policy 1, policy_version 11520 (0.0019) +[2023-09-26 02:06:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5922816. Throughput: 0: 807.8, 1: 808.8. Samples: 1479512. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:06:28,030][62705] Avg episode reward: [(0, '13.570'), (1, '16.180')] +[2023-09-26 02:06:33,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5955584. Throughput: 0: 805.3, 1: 806.4. Samples: 1488995. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:06:33,029][62705] Avg episode reward: [(0, '14.250'), (1, '15.070')] +[2023-09-26 02:06:35,962][63636] Updated weights for policy 0, policy_version 11680 (0.0016) +[2023-09-26 02:06:35,963][63637] Updated weights for policy 1, policy_version 11680 (0.0018) +[2023-09-26 02:06:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 5988352. Throughput: 0: 810.1, 1: 807.9. Samples: 1494123. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:06:38,030][62705] Avg episode reward: [(0, '14.780'), (1, '15.540')] +[2023-09-26 02:06:38,031][63291] Saving new best policy, reward=14.780! +[2023-09-26 02:06:43,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 6021120. Throughput: 0: 807.8, 1: 807.7. Samples: 1503750. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 02:06:43,030][62705] Avg episode reward: [(0, '14.960'), (1, '15.430')] +[2023-09-26 02:06:43,040][63291] Saving new best policy, reward=14.960! 
+[2023-09-26 02:06:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 6053888. Throughput: 0: 812.5, 1: 812.7. Samples: 1513601. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 02:06:48,030][62705] Avg episode reward: [(0, '15.160'), (1, '15.180')] +[2023-09-26 02:06:48,031][63291] Saving new best policy, reward=15.160! +[2023-09-26 02:06:48,430][63637] Updated weights for policy 1, policy_version 11840 (0.0015) +[2023-09-26 02:06:48,432][63636] Updated weights for policy 0, policy_version 11840 (0.0019) +[2023-09-26 02:06:53,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 6086656. Throughput: 0: 813.4, 1: 813.1. Samples: 1518741. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:06:53,029][62705] Avg episode reward: [(0, '14.850'), (1, '15.650')] +[2023-09-26 02:06:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 6119424. Throughput: 0: 814.5, 1: 814.4. Samples: 1528551. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:06:58,030][62705] Avg episode reward: [(0, '14.700'), (1, '15.480')] +[2023-09-26 02:06:58,040][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000011952_3059712.pth... +[2023-09-26 02:06:58,041][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000011952_3059712.pth... +[2023-09-26 02:06:58,075][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000008912_2281472.pth +[2023-09-26 02:06:58,082][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000008912_2281472.pth +[2023-09-26 02:07:00,821][63636] Updated weights for policy 0, policy_version 12000 (0.0018) +[2023-09-26 02:07:00,821][63637] Updated weights for policy 1, policy_version 12000 (0.0017) +[2023-09-26 02:07:03,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 6152192. Throughput: 0: 816.1, 1: 816.0. 
Samples: 1538402. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:07:03,030][62705] Avg episode reward: [(0, '15.130'), (1, '15.920')] +[2023-09-26 02:07:08,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 6184960. Throughput: 0: 815.4, 1: 816.7. Samples: 1543462. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:07:08,030][62705] Avg episode reward: [(0, '15.290'), (1, '16.520')] +[2023-09-26 02:07:08,031][63291] Saving new best policy, reward=15.290! +[2023-09-26 02:07:08,031][63410] Saving new best policy, reward=16.520! +[2023-09-26 02:07:13,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6484.2). Total num frames: 6217728. Throughput: 0: 818.2, 1: 816.7. Samples: 1553082. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:07:13,029][62705] Avg episode reward: [(0, '15.580'), (1, '15.320')] +[2023-09-26 02:07:13,037][63291] Saving new best policy, reward=15.580! +[2023-09-26 02:07:13,412][63637] Updated weights for policy 1, policy_version 12160 (0.0019) +[2023-09-26 02:07:13,412][63636] Updated weights for policy 0, policy_version 12160 (0.0019) +[2023-09-26 02:07:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6250496. Throughput: 0: 820.0, 1: 819.3. Samples: 1562763. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:07:18,030][62705] Avg episode reward: [(0, '16.300'), (1, '15.530')] +[2023-09-26 02:07:18,031][63291] Saving new best policy, reward=16.300! +[2023-09-26 02:07:23,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6283264. Throughput: 0: 819.0, 1: 817.6. Samples: 1567773. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:07:23,030][62705] Avg episode reward: [(0, '16.700'), (1, '15.270')] +[2023-09-26 02:07:23,031][63291] Saving new best policy, reward=16.700! 
+[2023-09-26 02:07:26,020][63637] Updated weights for policy 1, policy_version 12320 (0.0014) +[2023-09-26 02:07:26,020][63636] Updated weights for policy 0, policy_version 12320 (0.0016) +[2023-09-26 02:07:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6316032. Throughput: 0: 818.4, 1: 818.7. Samples: 1577420. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:07:28,030][62705] Avg episode reward: [(0, '16.870'), (1, '15.460')] +[2023-09-26 02:07:28,040][63291] Saving new best policy, reward=16.870! +[2023-09-26 02:07:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6348800. Throughput: 0: 817.0, 1: 819.1. Samples: 1587223. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:07:33,030][62705] Avg episode reward: [(0, '16.770'), (1, '15.490')] +[2023-09-26 02:07:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6381568. Throughput: 0: 814.6, 1: 815.2. Samples: 1592084. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:07:38,030][62705] Avg episode reward: [(0, '16.200'), (1, '15.120')] +[2023-09-26 02:07:38,620][63636] Updated weights for policy 0, policy_version 12480 (0.0016) +[2023-09-26 02:07:38,621][63637] Updated weights for policy 1, policy_version 12480 (0.0018) +[2023-09-26 02:07:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6414336. Throughput: 0: 814.6, 1: 814.6. Samples: 1601864. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:07:43,030][62705] Avg episode reward: [(0, '15.690'), (1, '14.590')] +[2023-09-26 02:07:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6447104. Throughput: 0: 814.5, 1: 816.8. Samples: 1611809. 
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:07:48,030][62705] Avg episode reward: [(0, '15.890'), (1, '15.140')] +[2023-09-26 02:07:51,035][63636] Updated weights for policy 0, policy_version 12640 (0.0019) +[2023-09-26 02:07:51,035][63637] Updated weights for policy 1, policy_version 12640 (0.0017) +[2023-09-26 02:07:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6479872. Throughput: 0: 815.2, 1: 814.7. Samples: 1616808. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:07:53,030][62705] Avg episode reward: [(0, '16.710'), (1, '14.730')] +[2023-09-26 02:07:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6512640. Throughput: 0: 816.4, 1: 817.2. Samples: 1626594. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:07:58,030][62705] Avg episode reward: [(0, '16.480'), (1, '15.000')] +[2023-09-26 02:08:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6545408. Throughput: 0: 817.4, 1: 819.1. Samples: 1636407. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 02:08:03,030][62705] Avg episode reward: [(0, '16.790'), (1, '14.730')] +[2023-09-26 02:08:03,497][63636] Updated weights for policy 0, policy_version 12800 (0.0019) +[2023-09-26 02:08:03,497][63637] Updated weights for policy 1, policy_version 12800 (0.0019) +[2023-09-26 02:08:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6578176. Throughput: 0: 818.5, 1: 819.4. Samples: 1641479. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 02:08:08,029][62705] Avg episode reward: [(0, '16.300'), (1, '13.150')] +[2023-09-26 02:08:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6610944. Throughput: 0: 817.9, 1: 817.8. Samples: 1651025. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:08:13,030][62705] Avg episode reward: [(0, '16.280'), (1, '13.430')] +[2023-09-26 02:08:16,084][63637] Updated weights for policy 1, policy_version 12960 (0.0017) +[2023-09-26 02:08:16,084][63636] Updated weights for policy 0, policy_version 12960 (0.0018) +[2023-09-26 02:08:18,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6643712. Throughput: 0: 818.8, 1: 819.2. Samples: 1660934. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:08:18,030][62705] Avg episode reward: [(0, '16.580'), (1, '14.030')] +[2023-09-26 02:08:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6676480. Throughput: 0: 821.3, 1: 820.7. Samples: 1665974. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:08:23,030][62705] Avg episode reward: [(0, '16.170'), (1, '13.540')] +[2023-09-26 02:08:28,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6709248. Throughput: 0: 818.7, 1: 818.9. Samples: 1675557. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:08:28,029][62705] Avg episode reward: [(0, '14.780'), (1, '13.210')] +[2023-09-26 02:08:28,593][63636] Updated weights for policy 0, policy_version 13120 (0.0016) +[2023-09-26 02:08:28,594][63637] Updated weights for policy 1, policy_version 13120 (0.0017) +[2023-09-26 02:08:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6742016. Throughput: 0: 818.5, 1: 819.2. Samples: 1685503. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:08:33,029][62705] Avg episode reward: [(0, '14.860'), (1, '13.680')] +[2023-09-26 02:08:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 6774784. Throughput: 0: 814.2, 1: 814.1. Samples: 1690081. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:08:38,030][62705] Avg episode reward: [(0, '16.300'), (1, '14.480')] +[2023-09-26 02:08:41,332][63637] Updated weights for policy 1, policy_version 13280 (0.0017) +[2023-09-26 02:08:41,332][63636] Updated weights for policy 0, policy_version 13280 (0.0017) +[2023-09-26 02:08:43,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6807552. Throughput: 0: 812.5, 1: 815.3. Samples: 1699845. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:08:43,030][62705] Avg episode reward: [(0, '16.150'), (1, '14.770')] +[2023-09-26 02:08:48,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6840320. Throughput: 0: 816.4, 1: 814.4. Samples: 1709795. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:08:48,029][62705] Avg episode reward: [(0, '16.190'), (1, '14.690')] +[2023-09-26 02:08:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6873088. Throughput: 0: 811.2, 1: 810.7. Samples: 1714462. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:08:53,030][62705] Avg episode reward: [(0, '16.040'), (1, '15.000')] +[2023-09-26 02:08:53,954][63636] Updated weights for policy 0, policy_version 13440 (0.0016) +[2023-09-26 02:08:53,955][63637] Updated weights for policy 1, policy_version 13440 (0.0014) +[2023-09-26 02:08:58,029][62705] Fps is (10 sec: 6553.3, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6905856. Throughput: 0: 814.0, 1: 813.2. Samples: 1724247. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:08:58,030][62705] Avg episode reward: [(0, '14.960'), (1, '16.210')] +[2023-09-26 02:08:58,041][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000013488_3452928.pth... +[2023-09-26 02:08:58,041][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000013488_3452928.pth... 
+[2023-09-26 02:08:58,077][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000010432_2670592.pth +[2023-09-26 02:08:58,078][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000010432_2670592.pth +[2023-09-26 02:09:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6938624. Throughput: 0: 813.0, 1: 810.2. Samples: 1733979. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:09:03,030][62705] Avg episode reward: [(0, '15.150'), (1, '15.590')] +[2023-09-26 02:09:06,578][63637] Updated weights for policy 1, policy_version 13600 (0.0018) +[2023-09-26 02:09:06,579][63636] Updated weights for policy 0, policy_version 13600 (0.0017) +[2023-09-26 02:09:08,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6971392. Throughput: 0: 807.2, 1: 810.2. Samples: 1738756. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:09:08,030][62705] Avg episode reward: [(0, '14.560'), (1, '16.310')] +[2023-09-26 02:09:13,029][62705] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 6995968. Throughput: 0: 808.5, 1: 808.7. Samples: 1748330. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:09:13,030][62705] Avg episode reward: [(0, '15.350'), (1, '16.290')] +[2023-09-26 02:09:18,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7028736. Throughput: 0: 806.4, 1: 802.9. Samples: 1757922. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:09:18,030][62705] Avg episode reward: [(0, '14.290'), (1, '16.160')] +[2023-09-26 02:09:19,454][63636] Updated weights for policy 0, policy_version 13760 (0.0018) +[2023-09-26 02:09:19,454][63637] Updated weights for policy 1, policy_version 13760 (0.0016) +[2023-09-26 02:09:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7061504. Throughput: 0: 808.6, 1: 808.5. Samples: 1762850. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:09:23,029][62705] Avg episode reward: [(0, '14.130'), (1, '15.160')] +[2023-09-26 02:09:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 7094272. Throughput: 0: 806.4, 1: 804.0. Samples: 1772313. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:09:28,030][62705] Avg episode reward: [(0, '14.330'), (1, '16.010')] +[2023-09-26 02:09:32,208][63637] Updated weights for policy 1, policy_version 13920 (0.0020) +[2023-09-26 02:09:32,209][63636] Updated weights for policy 0, policy_version 13920 (0.0020) +[2023-09-26 02:09:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7127040. Throughput: 0: 800.2, 1: 801.3. Samples: 1781865. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:09:33,029][62705] Avg episode reward: [(0, '14.620'), (1, '15.210')] +[2023-09-26 02:09:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7159808. Throughput: 0: 804.1, 1: 804.8. Samples: 1786862. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:09:38,030][62705] Avg episode reward: [(0, '13.860'), (1, '14.850')] +[2023-09-26 02:09:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7192576. Throughput: 0: 804.2, 1: 804.7. Samples: 1796648. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:09:43,029][62705] Avg episode reward: [(0, '14.610'), (1, '15.590')] +[2023-09-26 02:09:44,784][63637] Updated weights for policy 1, policy_version 14080 (0.0015) +[2023-09-26 02:09:44,784][63636] Updated weights for policy 0, policy_version 14080 (0.0017) +[2023-09-26 02:09:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7225344. Throughput: 0: 804.1, 1: 805.5. Samples: 1806408. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:09:48,030][62705] Avg episode reward: [(0, '14.180'), (1, '15.980')] +[2023-09-26 02:09:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7258112. Throughput: 0: 808.8, 1: 805.6. Samples: 1811404. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:09:53,029][62705] Avg episode reward: [(0, '14.320'), (1, '15.590')] +[2023-09-26 02:09:57,305][63637] Updated weights for policy 1, policy_version 14240 (0.0015) +[2023-09-26 02:09:57,305][63636] Updated weights for policy 0, policy_version 14240 (0.0018) +[2023-09-26 02:09:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7290880. Throughput: 0: 807.9, 1: 807.5. Samples: 1821023. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:09:58,030][62705] Avg episode reward: [(0, '14.380'), (1, '15.320')] +[2023-09-26 02:10:03,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7323648. Throughput: 0: 810.4, 1: 812.8. Samples: 1830963. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:10:03,030][62705] Avg episode reward: [(0, '14.890'), (1, '15.530')] +[2023-09-26 02:10:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 7356416. Throughput: 0: 810.2, 1: 810.1. Samples: 1835761. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:10:08,029][62705] Avg episode reward: [(0, '15.960'), (1, '15.060')] +[2023-09-26 02:10:09,823][63636] Updated weights for policy 0, policy_version 14400 (0.0018) +[2023-09-26 02:10:09,823][63637] Updated weights for policy 1, policy_version 14400 (0.0018) +[2023-09-26 02:10:13,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7389184. Throughput: 0: 814.8, 1: 814.4. Samples: 1845627. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:10:13,029][62705] Avg episode reward: [(0, '15.710'), (1, '15.110')] +[2023-09-26 02:10:18,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7421952. Throughput: 0: 817.0, 1: 819.1. Samples: 1855492. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:10:18,030][62705] Avg episode reward: [(0, '15.820'), (1, '16.230')] +[2023-09-26 02:10:22,380][63637] Updated weights for policy 1, policy_version 14560 (0.0017) +[2023-09-26 02:10:22,380][63636] Updated weights for policy 0, policy_version 14560 (0.0018) +[2023-09-26 02:10:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7454720. Throughput: 0: 816.2, 1: 815.1. Samples: 1860271. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:10:23,029][62705] Avg episode reward: [(0, '16.360'), (1, '16.580')] +[2023-09-26 02:10:23,030][63410] Saving new best policy, reward=16.580! +[2023-09-26 02:10:28,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7487488. Throughput: 0: 815.3, 1: 815.5. Samples: 1870034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:10:28,029][62705] Avg episode reward: [(0, '16.670'), (1, '16.860')] +[2023-09-26 02:10:28,038][63410] Saving new best policy, reward=16.860! +[2023-09-26 02:10:33,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7520256. Throughput: 0: 817.7, 1: 819.2. Samples: 1880068. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:10:33,030][62705] Avg episode reward: [(0, '17.140'), (1, '17.710')] +[2023-09-26 02:10:33,031][63291] Saving new best policy, reward=17.140! +[2023-09-26 02:10:33,031][63410] Saving new best policy, reward=17.710! 
+[2023-09-26 02:10:34,979][63637] Updated weights for policy 1, policy_version 14720 (0.0018) +[2023-09-26 02:10:34,979][63636] Updated weights for policy 0, policy_version 14720 (0.0019) +[2023-09-26 02:10:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7553024. Throughput: 0: 813.8, 1: 814.7. Samples: 1884686. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:10:38,029][62705] Avg episode reward: [(0, '17.560'), (1, '17.070')] +[2023-09-26 02:10:38,030][63291] Saving new best policy, reward=17.560! +[2023-09-26 02:10:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7585792. Throughput: 0: 813.8, 1: 816.1. Samples: 1894368. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:10:43,030][62705] Avg episode reward: [(0, '17.830'), (1, '16.780')] +[2023-09-26 02:10:43,041][63291] Saving new best policy, reward=17.830! +[2023-09-26 02:10:47,693][63637] Updated weights for policy 1, policy_version 14880 (0.0017) +[2023-09-26 02:10:47,693][63636] Updated weights for policy 0, policy_version 14880 (0.0017) +[2023-09-26 02:10:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7618560. Throughput: 0: 815.1, 1: 813.6. Samples: 1904254. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:10:48,030][62705] Avg episode reward: [(0, '17.220'), (1, '18.240')] +[2023-09-26 02:10:48,031][63410] Saving new best policy, reward=18.240! +[2023-09-26 02:10:53,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 7651328. Throughput: 0: 811.9, 1: 812.5. Samples: 1908857. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:10:53,029][62705] Avg episode reward: [(0, '17.370'), (1, '18.620')] +[2023-09-26 02:10:53,030][63410] Saving new best policy, reward=18.620! +[2023-09-26 02:10:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7684096. 
Throughput: 0: 813.5, 1: 816.0. Samples: 1918957. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:10:58,030][62705] Avg episode reward: [(0, '18.130'), (1, '18.430')] +[2023-09-26 02:10:58,041][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000015008_3842048.pth... +[2023-09-26 02:10:58,041][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000015008_3842048.pth... +[2023-09-26 02:10:58,076][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000011952_3059712.pth +[2023-09-26 02:10:58,076][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000011952_3059712.pth +[2023-09-26 02:10:58,080][63291] Saving new best policy, reward=18.130! +[2023-09-26 02:11:00,367][63637] Updated weights for policy 1, policy_version 15040 (0.0018) +[2023-09-26 02:11:00,367][63636] Updated weights for policy 0, policy_version 15040 (0.0017) +[2023-09-26 02:11:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7716864. Throughput: 0: 812.7, 1: 809.8. Samples: 1928505. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:11:03,029][62705] Avg episode reward: [(0, '18.330'), (1, '18.420')] +[2023-09-26 02:11:03,030][63291] Saving new best policy, reward=18.330! +[2023-09-26 02:11:08,035][62705] Fps is (10 sec: 6549.9, 60 sec: 6553.0, 300 sec: 6525.7). Total num frames: 7749632. Throughput: 0: 809.9, 1: 813.2. Samples: 1933321. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:11:08,035][62705] Avg episode reward: [(0, '18.430'), (1, '19.100')] +[2023-09-26 02:11:08,036][63291] Saving new best policy, reward=18.430! +[2023-09-26 02:11:08,036][63410] Saving new best policy, reward=19.100! 
+[2023-09-26 02:11:12,793][63637] Updated weights for policy 1, policy_version 15200 (0.0017) +[2023-09-26 02:11:12,793][63636] Updated weights for policy 0, policy_version 15200 (0.0017) +[2023-09-26 02:11:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7782400. Throughput: 0: 815.2, 1: 815.6. Samples: 1943421. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:11:13,030][62705] Avg episode reward: [(0, '17.780'), (1, '19.700')] +[2023-09-26 02:11:13,042][63410] Saving new best policy, reward=19.700! +[2023-09-26 02:11:18,029][62705] Fps is (10 sec: 6557.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7815168. Throughput: 0: 813.5, 1: 810.6. Samples: 1953152. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:11:18,029][62705] Avg episode reward: [(0, '18.760'), (1, '19.790')] +[2023-09-26 02:11:18,030][63410] Saving new best policy, reward=19.790! +[2023-09-26 02:11:18,030][63291] Saving new best policy, reward=18.760! +[2023-09-26 02:11:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7847936. Throughput: 0: 812.7, 1: 814.5. Samples: 1957913. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:11:23,030][62705] Avg episode reward: [(0, '19.730'), (1, '19.960')] +[2023-09-26 02:11:23,031][63291] Saving new best policy, reward=19.730! +[2023-09-26 02:11:23,031][63410] Saving new best policy, reward=19.960! +[2023-09-26 02:11:25,358][63636] Updated weights for policy 0, policy_version 15360 (0.0019) +[2023-09-26 02:11:25,359][63637] Updated weights for policy 1, policy_version 15360 (0.0019) +[2023-09-26 02:11:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7880704. Throughput: 0: 818.1, 1: 815.9. Samples: 1967898. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:11:28,029][62705] Avg episode reward: [(0, '20.390'), (1, '19.430')] +[2023-09-26 02:11:28,037][63291] Saving new best policy, reward=20.390! +[2023-09-26 02:11:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7913472. Throughput: 0: 815.0, 1: 814.1. Samples: 1977567. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:11:33,030][62705] Avg episode reward: [(0, '20.360'), (1, '20.150')] +[2023-09-26 02:11:33,032][63410] Saving new best policy, reward=20.150! +[2023-09-26 02:11:37,883][63636] Updated weights for policy 0, policy_version 15520 (0.0016) +[2023-09-26 02:11:37,883][63637] Updated weights for policy 1, policy_version 15520 (0.0015) +[2023-09-26 02:11:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7946240. Throughput: 0: 816.6, 1: 819.1. Samples: 1982465. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:11:38,029][62705] Avg episode reward: [(0, '20.310'), (1, '19.700')] +[2023-09-26 02:11:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7979008. Throughput: 0: 817.7, 1: 815.1. Samples: 1992435. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:11:43,030][62705] Avg episode reward: [(0, '19.650'), (1, '19.380')] +[2023-09-26 02:11:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8011776. Throughput: 0: 817.3, 1: 817.8. Samples: 2002083. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:11:48,029][62705] Avg episode reward: [(0, '19.550'), (1, '19.770')] +[2023-09-26 02:11:50,414][63636] Updated weights for policy 0, policy_version 15680 (0.0021) +[2023-09-26 02:11:50,415][63637] Updated weights for policy 1, policy_version 15680 (0.0019) +[2023-09-26 02:11:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8044544. 
Throughput: 0: 819.1, 1: 819.2. Samples: 2007037. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:11:53,030][62705] Avg episode reward: [(0, '19.090'), (1, '19.800')] +[2023-09-26 02:11:58,029][62705] Fps is (10 sec: 6553.3, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8077312. Throughput: 0: 817.3, 1: 817.1. Samples: 2016971. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:11:58,031][62705] Avg episode reward: [(0, '19.370'), (1, '20.030')] +[2023-09-26 02:12:02,927][63637] Updated weights for policy 1, policy_version 15840 (0.0015) +[2023-09-26 02:12:02,927][63636] Updated weights for policy 0, policy_version 15840 (0.0020) +[2023-09-26 02:12:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8110080. Throughput: 0: 816.6, 1: 816.6. Samples: 2026642. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:12:03,029][62705] Avg episode reward: [(0, '19.150'), (1, '19.200')] +[2023-09-26 02:12:08,029][62705] Fps is (10 sec: 6553.9, 60 sec: 6554.3, 300 sec: 6525.8). Total num frames: 8142848. Throughput: 0: 818.7, 1: 818.4. Samples: 2031578. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:12:08,029][62705] Avg episode reward: [(0, '19.200'), (1, '18.710')] +[2023-09-26 02:12:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8175616. Throughput: 0: 817.6, 1: 817.5. Samples: 2041481. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:12:13,030][62705] Avg episode reward: [(0, '18.500'), (1, '17.910')] +[2023-09-26 02:12:15,455][63636] Updated weights for policy 0, policy_version 16000 (0.0015) +[2023-09-26 02:12:15,456][63637] Updated weights for policy 1, policy_version 16000 (0.0017) +[2023-09-26 02:12:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8208384. Throughput: 0: 817.1, 1: 818.3. Samples: 2051159. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:12:18,029][62705] Avg episode reward: [(0, '18.700'), (1, '17.780')] +[2023-09-26 02:12:23,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 8232960. Throughput: 0: 819.2, 1: 818.0. Samples: 2056139. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:12:23,030][62705] Avg episode reward: [(0, '18.210'), (1, '17.650')] +[2023-09-26 02:12:28,029][62705] Fps is (10 sec: 5734.1, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 8265728. Throughput: 0: 814.8, 1: 815.0. Samples: 2065777. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:12:28,031][62705] Avg episode reward: [(0, '17.300'), (1, '16.770')] +[2023-09-26 02:12:28,058][63636] Updated weights for policy 0, policy_version 16160 (0.0017) +[2023-09-26 02:12:28,058][63637] Updated weights for policy 1, policy_version 16160 (0.0015) +[2023-09-26 02:12:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 8298496. Throughput: 0: 814.0, 1: 813.6. Samples: 2075328. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:12:33,030][62705] Avg episode reward: [(0, '17.290'), (1, '16.480')] +[2023-09-26 02:12:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 8331264. Throughput: 0: 816.6, 1: 814.1. Samples: 2080419. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:12:38,030][62705] Avg episode reward: [(0, '17.580'), (1, '16.720')] +[2023-09-26 02:12:40,719][63636] Updated weights for policy 0, policy_version 16320 (0.0018) +[2023-09-26 02:12:40,719][63637] Updated weights for policy 1, policy_version 16320 (0.0017) +[2023-09-26 02:12:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 8364032. Throughput: 0: 812.0, 1: 812.0. Samples: 2090050. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:12:43,030][62705] Avg episode reward: [(0, '17.240'), (1, '16.850')] +[2023-09-26 02:12:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 8396800. Throughput: 0: 810.6, 1: 810.9. Samples: 2099612. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:12:48,030][62705] Avg episode reward: [(0, '17.490'), (1, '16.830')] +[2023-09-26 02:12:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 8429568. Throughput: 0: 812.1, 1: 810.5. Samples: 2104592. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:12:53,030][62705] Avg episode reward: [(0, '17.750'), (1, '16.640')] +[2023-09-26 02:12:53,418][63637] Updated weights for policy 1, policy_version 16480 (0.0015) +[2023-09-26 02:12:53,419][63636] Updated weights for policy 0, policy_version 16480 (0.0017) +[2023-09-26 02:12:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 8462336. Throughput: 0: 808.3, 1: 807.8. Samples: 2114209. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 02:12:58,029][62705] Avg episode reward: [(0, '18.810'), (1, '16.280')] +[2023-09-26 02:12:58,036][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000016528_4231168.pth... +[2023-09-26 02:12:58,037][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000016528_4231168.pth... +[2023-09-26 02:12:58,065][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000013488_3452928.pth +[2023-09-26 02:12:58,072][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000013488_3452928.pth +[2023-09-26 02:13:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 8495104. Throughput: 0: 809.1, 1: 808.4. Samples: 2123950. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:03,030][62705] Avg episode reward: [(0, '18.920'), (1, '15.190')] +[2023-09-26 02:13:06,014][63637] Updated weights for policy 1, policy_version 16640 (0.0020) +[2023-09-26 02:13:06,014][63636] Updated weights for policy 0, policy_version 16640 (0.0019) +[2023-09-26 02:13:08,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 8527872. Throughput: 0: 809.3, 1: 807.2. Samples: 2128884. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:08,030][62705] Avg episode reward: [(0, '19.750'), (1, '15.310')] +[2023-09-26 02:13:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 8560640. Throughput: 0: 805.6, 1: 805.4. Samples: 2138269. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:13,030][62705] Avg episode reward: [(0, '19.170'), (1, '15.790')] +[2023-09-26 02:13:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 8593408. Throughput: 0: 810.0, 1: 812.7. Samples: 2148348. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:18,030][62705] Avg episode reward: [(0, '18.970'), (1, '15.960')] +[2023-09-26 02:13:18,677][63636] Updated weights for policy 0, policy_version 16800 (0.0017) +[2023-09-26 02:13:18,678][63637] Updated weights for policy 1, policy_version 16800 (0.0016) +[2023-09-26 02:13:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8626176. Throughput: 0: 807.9, 1: 807.6. Samples: 2153114. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:23,030][62705] Avg episode reward: [(0, '19.570'), (1, '16.120')] +[2023-09-26 02:13:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8658944. Throughput: 0: 807.4, 1: 808.6. Samples: 2162770. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:28,030][62705] Avg episode reward: [(0, '19.420'), (1, '15.930')] +[2023-09-26 02:13:31,302][63636] Updated weights for policy 0, policy_version 16960 (0.0016) +[2023-09-26 02:13:31,302][63637] Updated weights for policy 1, policy_version 16960 (0.0017) +[2023-09-26 02:13:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8691712. Throughput: 0: 812.3, 1: 813.1. Samples: 2172757. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:33,030][62705] Avg episode reward: [(0, '21.170'), (1, '16.070')] +[2023-09-26 02:13:33,031][63291] Saving new best policy, reward=21.170! +[2023-09-26 02:13:38,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8724480. Throughput: 0: 806.4, 1: 806.1. Samples: 2177155. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:38,030][62705] Avg episode reward: [(0, '21.200'), (1, '16.190')] +[2023-09-26 02:13:38,031][63291] Saving new best policy, reward=21.200! +[2023-09-26 02:13:43,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8757248. Throughput: 0: 809.9, 1: 812.5. Samples: 2187217. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:43,030][62705] Avg episode reward: [(0, '20.610'), (1, '16.990')] +[2023-09-26 02:13:43,980][63636] Updated weights for policy 0, policy_version 17120 (0.0013) +[2023-09-26 02:13:43,980][63637] Updated weights for policy 1, policy_version 17120 (0.0016) +[2023-09-26 02:13:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8790016. Throughput: 0: 812.4, 1: 813.0. Samples: 2197093. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:13:48,030][62705] Avg episode reward: [(0, '20.100'), (1, '16.900')] +[2023-09-26 02:13:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8822784. 
Throughput: 0: 806.5, 1: 809.6. Samples: 2201610. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:13:53,029][62705] Avg episode reward: [(0, '19.970'), (1, '16.430')] +[2023-09-26 02:13:56,592][63637] Updated weights for policy 1, policy_version 17280 (0.0017) +[2023-09-26 02:13:56,592][63636] Updated weights for policy 0, policy_version 17280 (0.0019) +[2023-09-26 02:13:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8855552. Throughput: 0: 814.9, 1: 815.3. Samples: 2211629. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:13:58,030][62705] Avg episode reward: [(0, '19.780'), (1, '17.510')] +[2023-09-26 02:14:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 8888320. Throughput: 0: 812.0, 1: 809.1. Samples: 2221299. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:14:03,030][62705] Avg episode reward: [(0, '20.270'), (1, '17.740')] +[2023-09-26 02:14:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8921088. Throughput: 0: 810.4, 1: 813.2. Samples: 2226176. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:08,030][62705] Avg episode reward: [(0, '19.100'), (1, '17.880')] +[2023-09-26 02:14:09,111][63636] Updated weights for policy 0, policy_version 17440 (0.0019) +[2023-09-26 02:14:09,111][63637] Updated weights for policy 1, policy_version 17440 (0.0019) +[2023-09-26 02:14:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8953856. Throughput: 0: 815.8, 1: 814.4. Samples: 2236130. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:13,030][62705] Avg episode reward: [(0, '17.790'), (1, '18.170')] +[2023-09-26 02:14:18,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8986624. Throughput: 0: 811.6, 1: 810.5. Samples: 2245750. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:18,029][62705] Avg episode reward: [(0, '17.610'), (1, '19.440')] +[2023-09-26 02:14:21,693][63636] Updated weights for policy 0, policy_version 17600 (0.0018) +[2023-09-26 02:14:21,693][63637] Updated weights for policy 1, policy_version 17600 (0.0018) +[2023-09-26 02:14:23,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9019392. Throughput: 0: 816.4, 1: 819.0. Samples: 2250750. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:23,030][62705] Avg episode reward: [(0, '17.200'), (1, '18.650')] +[2023-09-26 02:14:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9052160. Throughput: 0: 816.4, 1: 814.5. Samples: 2260608. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:28,030][62705] Avg episode reward: [(0, '17.470'), (1, '18.920')] +[2023-09-26 02:14:33,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 9076736. Throughput: 0: 811.6, 1: 811.0. Samples: 2270114. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:33,030][62705] Avg episode reward: [(0, '17.510'), (1, '17.810')] +[2023-09-26 02:14:34,399][63636] Updated weights for policy 0, policy_version 17760 (0.0013) +[2023-09-26 02:14:34,400][63637] Updated weights for policy 1, policy_version 17760 (0.0017) +[2023-09-26 02:14:38,029][62705] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 9109504. Throughput: 0: 817.2, 1: 813.4. Samples: 2274989. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:38,029][62705] Avg episode reward: [(0, '17.770'), (1, '18.950')] +[2023-09-26 02:14:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 9142272. Throughput: 0: 807.8, 1: 807.5. Samples: 2284320. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:43,029][62705] Avg episode reward: [(0, '17.800'), (1, '18.210')] +[2023-09-26 02:14:47,130][63637] Updated weights for policy 1, policy_version 17920 (0.0017) +[2023-09-26 02:14:47,131][63636] Updated weights for policy 0, policy_version 17920 (0.0017) +[2023-09-26 02:14:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 9175040. Throughput: 0: 807.8, 1: 807.8. Samples: 2294002. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:48,030][62705] Avg episode reward: [(0, '18.590'), (1, '17.710')] +[2023-09-26 02:14:53,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 9207808. Throughput: 0: 806.7, 1: 804.8. Samples: 2298694. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:14:53,030][62705] Avg episode reward: [(0, '19.360'), (1, '17.770')] +[2023-09-26 02:14:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 9240576. Throughput: 0: 798.1, 1: 801.2. Samples: 2308096. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:14:58,030][62705] Avg episode reward: [(0, '19.100'), (1, '18.750')] +[2023-09-26 02:14:58,041][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000018048_4620288.pth... +[2023-09-26 02:14:58,042][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000018048_4620288.pth... +[2023-09-26 02:14:58,077][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000015008_3842048.pth +[2023-09-26 02:14:58,081][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000015008_3842048.pth +[2023-09-26 02:15:00,404][63636] Updated weights for policy 0, policy_version 18080 (0.0018) +[2023-09-26 02:15:00,404][63637] Updated weights for policy 1, policy_version 18080 (0.0018) +[2023-09-26 02:15:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). 
Total num frames: 9273344. Throughput: 0: 797.3, 1: 796.9. Samples: 2317490. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:15:03,030][62705] Avg episode reward: [(0, '18.360'), (1, '18.940')] +[2023-09-26 02:15:08,029][62705] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6470.3). Total num frames: 9297920. Throughput: 0: 796.4, 1: 793.0. Samples: 2322276. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:15:08,030][62705] Avg episode reward: [(0, '19.310'), (1, '17.310')] +[2023-09-26 02:15:13,029][62705] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6470.3). Total num frames: 9330688. Throughput: 0: 790.8, 1: 791.0. Samples: 2331791. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:15:13,029][62705] Avg episode reward: [(0, '19.810'), (1, '18.390')] +[2023-09-26 02:15:13,243][63637] Updated weights for policy 1, policy_version 18240 (0.0017) +[2023-09-26 02:15:13,243][63636] Updated weights for policy 0, policy_version 18240 (0.0017) +[2023-09-26 02:15:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6470.3). Total num frames: 9363456. Throughput: 0: 791.4, 1: 791.4. Samples: 2341340. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:15:18,030][62705] Avg episode reward: [(0, '20.710'), (1, '18.280')] +[2023-09-26 02:15:23,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6470.3). Total num frames: 9396224. Throughput: 0: 793.5, 1: 794.4. Samples: 2346444. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:15:23,030][62705] Avg episode reward: [(0, '21.140'), (1, '18.840')] +[2023-09-26 02:15:25,835][63636] Updated weights for policy 0, policy_version 18400 (0.0010) +[2023-09-26 02:15:25,837][63637] Updated weights for policy 1, policy_version 18400 (0.0017) +[2023-09-26 02:15:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6470.3). Total num frames: 9428992. Throughput: 0: 796.8, 1: 796.2. Samples: 2356009. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:15:28,030][62705] Avg episode reward: [(0, '21.070'), (1, '17.640')] +[2023-09-26 02:15:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 9461760. Throughput: 0: 796.2, 1: 796.2. Samples: 2365660. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:15:33,029][62705] Avg episode reward: [(0, '20.660'), (1, '17.510')] +[2023-09-26 02:15:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 9494528. Throughput: 0: 800.9, 1: 800.9. Samples: 2370777. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:15:38,030][62705] Avg episode reward: [(0, '19.940'), (1, '17.720')] +[2023-09-26 02:15:38,487][63636] Updated weights for policy 0, policy_version 18560 (0.0017) +[2023-09-26 02:15:38,487][63637] Updated weights for policy 1, policy_version 18560 (0.0015) +[2023-09-26 02:15:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 9527296. Throughput: 0: 804.3, 1: 801.4. Samples: 2380352. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:15:43,030][62705] Avg episode reward: [(0, '19.650'), (1, '17.900')] +[2023-09-26 02:15:48,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 9560064. Throughput: 0: 804.2, 1: 807.5. Samples: 2390020. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:15:48,029][62705] Avg episode reward: [(0, '20.620'), (1, '17.830')] +[2023-09-26 02:15:51,095][63636] Updated weights for policy 0, policy_version 18720 (0.0016) +[2023-09-26 02:15:51,095][63637] Updated weights for policy 1, policy_version 18720 (0.0018) +[2023-09-26 02:15:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 9592832. Throughput: 0: 806.4, 1: 807.0. Samples: 2394880. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:15:53,030][62705] Avg episode reward: [(0, '19.930'), (1, '18.090')] +[2023-09-26 02:15:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 9625600. Throughput: 0: 808.8, 1: 808.7. Samples: 2404577. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:15:58,029][62705] Avg episode reward: [(0, '20.410'), (1, '18.510')] +[2023-09-26 02:16:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.4). Total num frames: 9658368. Throughput: 0: 812.4, 1: 814.6. Samples: 2414555. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:16:03,029][62705] Avg episode reward: [(0, '20.820'), (1, '18.430')] +[2023-09-26 02:16:03,689][63636] Updated weights for policy 0, policy_version 18880 (0.0016) +[2023-09-26 02:16:03,691][63637] Updated weights for policy 1, policy_version 18880 (0.0018) +[2023-09-26 02:16:08,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9691136. Throughput: 0: 808.5, 1: 808.3. Samples: 2419202. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:16:08,030][62705] Avg episode reward: [(0, '20.510'), (1, '18.370')] +[2023-09-26 02:16:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9723904. Throughput: 0: 811.2, 1: 812.2. Samples: 2429063. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:16:13,030][62705] Avg episode reward: [(0, '19.940'), (1, '18.360')] +[2023-09-26 02:16:16,159][63637] Updated weights for policy 1, policy_version 19040 (0.0018) +[2023-09-26 02:16:16,159][63636] Updated weights for policy 0, policy_version 19040 (0.0014) +[2023-09-26 02:16:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9756672. Throughput: 0: 815.3, 1: 818.2. Samples: 2439169. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:16:18,030][62705] Avg episode reward: [(0, '20.410'), (1, '17.590')] +[2023-09-26 02:16:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9789440. Throughput: 0: 813.0, 1: 812.4. Samples: 2443921. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:16:23,030][62705] Avg episode reward: [(0, '20.170'), (1, '17.740')] +[2023-09-26 02:16:28,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9822208. Throughput: 0: 812.9, 1: 814.3. Samples: 2453571. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:16:28,029][62705] Avg episode reward: [(0, '19.950'), (1, '18.300')] +[2023-09-26 02:16:28,790][63636] Updated weights for policy 0, policy_version 19200 (0.0018) +[2023-09-26 02:16:28,790][63637] Updated weights for policy 1, policy_version 19200 (0.0017) +[2023-09-26 02:16:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9854976. Throughput: 0: 819.0, 1: 815.4. Samples: 2463564. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:16:33,030][62705] Avg episode reward: [(0, '20.220'), (1, '18.000')] +[2023-09-26 02:16:38,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9887744. Throughput: 0: 812.5, 1: 812.4. Samples: 2468002. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:16:38,030][62705] Avg episode reward: [(0, '20.240'), (1, '18.390')] +[2023-09-26 02:16:41,510][63636] Updated weights for policy 0, policy_version 19360 (0.0017) +[2023-09-26 02:16:41,510][63637] Updated weights for policy 1, policy_version 19360 (0.0018) +[2023-09-26 02:16:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9920512. Throughput: 0: 815.3, 1: 816.9. Samples: 2478027. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:16:43,030][62705] Avg episode reward: [(0, '20.700'), (1, '18.220')] +[2023-09-26 02:16:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9953280. Throughput: 0: 813.4, 1: 811.6. Samples: 2487681. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:16:48,030][62705] Avg episode reward: [(0, '20.650'), (1, '18.300')] +[2023-09-26 02:16:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 9986048. Throughput: 0: 812.4, 1: 815.1. Samples: 2492441. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:16:53,030][62705] Avg episode reward: [(0, '20.080'), (1, '17.090')] +[2023-09-26 02:16:54,107][63637] Updated weights for policy 1, policy_version 19520 (0.0018) +[2023-09-26 02:16:54,107][63636] Updated weights for policy 0, policy_version 19520 (0.0015) +[2023-09-26 02:16:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10018816. Throughput: 0: 813.7, 1: 813.7. Samples: 2502298. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:16:58,030][62705] Avg episode reward: [(0, '20.250'), (1, '17.690')] +[2023-09-26 02:16:58,044][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000019568_5009408.pth... +[2023-09-26 02:16:58,044][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000019568_5009408.pth... +[2023-09-26 02:16:58,074][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000016528_4231168.pth +[2023-09-26 02:16:58,079][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000016528_4231168.pth +[2023-09-26 02:17:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10051584. Throughput: 0: 809.7, 1: 806.8. Samples: 2511913. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:03,030][62705] Avg episode reward: [(0, '20.080'), (1, '17.240')] +[2023-09-26 02:17:06,778][63636] Updated weights for policy 0, policy_version 19680 (0.0017) +[2023-09-26 02:17:06,778][63637] Updated weights for policy 1, policy_version 19680 (0.0017) +[2023-09-26 02:17:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10084352. Throughput: 0: 810.7, 1: 811.8. Samples: 2516933. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:08,030][62705] Avg episode reward: [(0, '19.500'), (1, '18.170')] +[2023-09-26 02:17:13,033][62705] Fps is (10 sec: 6551.0, 60 sec: 6553.2, 300 sec: 6470.2). Total num frames: 10117120. Throughput: 0: 812.7, 1: 810.2. Samples: 2526609. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:13,035][62705] Avg episode reward: [(0, '18.970'), (1, '18.250')] +[2023-09-26 02:17:18,029][62705] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10141696. Throughput: 0: 807.7, 1: 808.2. Samples: 2536283. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:18,030][62705] Avg episode reward: [(0, '19.650'), (1, '17.770')] +[2023-09-26 02:17:19,347][63636] Updated weights for policy 0, policy_version 19840 (0.0017) +[2023-09-26 02:17:19,347][63637] Updated weights for policy 1, policy_version 19840 (0.0017) +[2023-09-26 02:17:23,029][62705] Fps is (10 sec: 5736.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10174464. Throughput: 0: 814.6, 1: 815.2. Samples: 2541340. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:23,030][62705] Avg episode reward: [(0, '19.260'), (1, '18.260')] +[2023-09-26 02:17:28,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10207232. Throughput: 0: 810.7, 1: 810.1. Samples: 2550966. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:28,029][62705] Avg episode reward: [(0, '19.240'), (1, '18.550')] +[2023-09-26 02:17:32,165][63636] Updated weights for policy 0, policy_version 20000 (0.0017) +[2023-09-26 02:17:32,165][63637] Updated weights for policy 1, policy_version 20000 (0.0017) +[2023-09-26 02:17:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10240000. Throughput: 0: 805.6, 1: 805.3. Samples: 2560173. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:33,030][62705] Avg episode reward: [(0, '19.510'), (1, '18.660')] +[2023-09-26 02:17:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10272768. Throughput: 0: 810.8, 1: 808.4. Samples: 2565304. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:38,029][62705] Avg episode reward: [(0, '19.030'), (1, '18.610')] +[2023-09-26 02:17:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10305536. Throughput: 0: 809.4, 1: 809.2. Samples: 2575139. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:43,030][62705] Avg episode reward: [(0, '19.790'), (1, '20.240')] +[2023-09-26 02:17:43,041][63410] Saving new best policy, reward=20.240! +[2023-09-26 02:17:44,609][63636] Updated weights for policy 0, policy_version 20160 (0.0018) +[2023-09-26 02:17:44,610][63637] Updated weights for policy 1, policy_version 20160 (0.0018) +[2023-09-26 02:17:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10338304. Throughput: 0: 809.4, 1: 809.2. Samples: 2584746. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:48,030][62705] Avg episode reward: [(0, '18.760'), (1, '19.930')] +[2023-09-26 02:17:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10371072. Throughput: 0: 811.0, 1: 809.6. Samples: 2589860. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:53,029][62705] Avg episode reward: [(0, '18.570'), (1, '21.200')] +[2023-09-26 02:17:53,030][63410] Saving new best policy, reward=21.200! +[2023-09-26 02:17:57,439][63637] Updated weights for policy 1, policy_version 20320 (0.0017) +[2023-09-26 02:17:57,440][63636] Updated weights for policy 0, policy_version 20320 (0.0018) +[2023-09-26 02:17:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10403840. Throughput: 0: 804.0, 1: 805.5. Samples: 2599029. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:17:58,030][62705] Avg episode reward: [(0, '18.690'), (1, '21.270')] +[2023-09-26 02:17:58,039][63410] Saving new best policy, reward=21.270! +[2023-09-26 02:18:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10436608. Throughput: 0: 808.1, 1: 811.1. Samples: 2609147. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:18:03,029][62705] Avg episode reward: [(0, '19.000'), (1, '20.870')] +[2023-09-26 02:18:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 10469376. Throughput: 0: 806.1, 1: 805.3. Samples: 2613851. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:18:08,030][62705] Avg episode reward: [(0, '19.110'), (1, '20.740')] +[2023-09-26 02:18:09,929][63636] Updated weights for policy 0, policy_version 20480 (0.0014) +[2023-09-26 02:18:09,929][63637] Updated weights for policy 1, policy_version 20480 (0.0018) +[2023-09-26 02:18:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.5, 300 sec: 6470.3). Total num frames: 10502144. Throughput: 0: 808.7, 1: 807.3. Samples: 2623685. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:18:13,030][62705] Avg episode reward: [(0, '19.470'), (1, '20.370')] +[2023-09-26 02:18:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10534912. 
Throughput: 0: 815.8, 1: 818.8. Samples: 2633729. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:18:18,029][62705] Avg episode reward: [(0, '18.660'), (1, '20.250')] +[2023-09-26 02:18:22,380][63637] Updated weights for policy 1, policy_version 20640 (0.0017) +[2023-09-26 02:18:22,380][63636] Updated weights for policy 0, policy_version 20640 (0.0013) +[2023-09-26 02:18:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10567680. Throughput: 0: 813.7, 1: 813.8. Samples: 2638542. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:18:23,030][62705] Avg episode reward: [(0, '19.370'), (1, '19.540')] +[2023-09-26 02:18:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10600448. Throughput: 0: 812.4, 1: 812.3. Samples: 2648254. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:18:28,030][62705] Avg episode reward: [(0, '18.970'), (1, '18.790')] +[2023-09-26 02:18:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10633216. Throughput: 0: 815.7, 1: 818.9. Samples: 2658304. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:18:33,030][62705] Avg episode reward: [(0, '18.320'), (1, '19.930')] +[2023-09-26 02:18:34,977][63637] Updated weights for policy 1, policy_version 20800 (0.0016) +[2023-09-26 02:18:34,977][63636] Updated weights for policy 0, policy_version 20800 (0.0018) +[2023-09-26 02:18:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10665984. Throughput: 0: 812.2, 1: 811.9. Samples: 2662945. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 02:18:38,030][62705] Avg episode reward: [(0, '17.300'), (1, '18.630')] +[2023-09-26 02:18:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10698752. Throughput: 0: 818.1, 1: 819.1. Samples: 2672701. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:18:43,030][62705] Avg episode reward: [(0, '17.830'), (1, '19.620')] +[2023-09-26 02:18:47,516][63636] Updated weights for policy 0, policy_version 20960 (0.0018) +[2023-09-26 02:18:47,517][63637] Updated weights for policy 1, policy_version 20960 (0.0015) +[2023-09-26 02:18:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10731520. Throughput: 0: 819.2, 1: 817.4. Samples: 2682795. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:18:48,030][62705] Avg episode reward: [(0, '18.110'), (1, '19.200')] +[2023-09-26 02:18:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10764288. Throughput: 0: 816.9, 1: 817.0. Samples: 2687375. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:18:53,030][62705] Avg episode reward: [(0, '19.480'), (1, '19.820')] +[2023-09-26 02:18:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10797056. Throughput: 0: 815.5, 1: 818.6. Samples: 2697220. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:18:58,029][62705] Avg episode reward: [(0, '19.020'), (1, '19.230')] +[2023-09-26 02:18:58,036][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000021088_5398528.pth... +[2023-09-26 02:18:58,036][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000021088_5398528.pth... +[2023-09-26 02:18:58,074][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000018048_4620288.pth +[2023-09-26 02:18:58,075][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000018048_4620288.pth +[2023-09-26 02:19:00,081][63637] Updated weights for policy 1, policy_version 21120 (0.0015) +[2023-09-26 02:19:00,081][63636] Updated weights for policy 0, policy_version 21120 (0.0017) +[2023-09-26 02:19:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). 
Total num frames: 10829824. Throughput: 0: 818.5, 1: 815.0. Samples: 2707234. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:03,030][62705] Avg episode reward: [(0, '19.060'), (1, '19.320')] +[2023-09-26 02:19:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10862592. Throughput: 0: 812.8, 1: 812.8. Samples: 2711698. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:08,030][62705] Avg episode reward: [(0, '19.350'), (1, '19.600')] +[2023-09-26 02:19:12,661][63636] Updated weights for policy 0, policy_version 21280 (0.0016) +[2023-09-26 02:19:12,662][63637] Updated weights for policy 1, policy_version 21280 (0.0018) +[2023-09-26 02:19:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10895360. Throughput: 0: 815.6, 1: 818.6. Samples: 2721793. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:13,030][62705] Avg episode reward: [(0, '19.170'), (1, '20.140')] +[2023-09-26 02:19:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10928128. Throughput: 0: 816.2, 1: 813.4. Samples: 2731634. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:18,030][62705] Avg episode reward: [(0, '20.130'), (1, '20.200')] +[2023-09-26 02:19:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 10960896. Throughput: 0: 812.8, 1: 814.7. Samples: 2736182. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:23,030][62705] Avg episode reward: [(0, '19.550'), (1, '20.690')] +[2023-09-26 02:19:25,272][63637] Updated weights for policy 1, policy_version 21440 (0.0017) +[2023-09-26 02:19:25,272][63636] Updated weights for policy 0, policy_version 21440 (0.0019) +[2023-09-26 02:19:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 10993664. Throughput: 0: 817.8, 1: 817.1. Samples: 2746272. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:28,030][62705] Avg episode reward: [(0, '19.640'), (1, '20.530')] +[2023-09-26 02:19:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11026432. Throughput: 0: 813.7, 1: 812.8. Samples: 2755987. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:33,030][62705] Avg episode reward: [(0, '20.020'), (1, '19.550')] +[2023-09-26 02:19:37,793][63637] Updated weights for policy 1, policy_version 21600 (0.0017) +[2023-09-26 02:19:37,793][63636] Updated weights for policy 0, policy_version 21600 (0.0018) +[2023-09-26 02:19:38,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11059200. Throughput: 0: 813.8, 1: 816.3. Samples: 2760730. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:38,029][62705] Avg episode reward: [(0, '20.200'), (1, '18.690')] +[2023-09-26 02:19:43,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11091968. Throughput: 0: 818.3, 1: 815.2. Samples: 2770731. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:43,030][62705] Avg episode reward: [(0, '21.620'), (1, '18.730')] +[2023-09-26 02:19:43,038][63291] Saving new best policy, reward=21.620! +[2023-09-26 02:19:48,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11124736. Throughput: 0: 812.5, 1: 812.4. Samples: 2780357. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:48,030][62705] Avg episode reward: [(0, '21.200'), (1, '18.000')] +[2023-09-26 02:19:50,512][63637] Updated weights for policy 1, policy_version 21760 (0.0019) +[2023-09-26 02:19:50,512][63636] Updated weights for policy 0, policy_version 21760 (0.0019) +[2023-09-26 02:19:53,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11149312. Throughput: 0: 816.1, 1: 818.3. Samples: 2785247. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:19:53,030][62705] Avg episode reward: [(0, '20.170'), (1, '18.310')] +[2023-09-26 02:19:58,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 11182080. Throughput: 0: 811.8, 1: 808.7. Samples: 2794714. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:19:58,030][62705] Avg episode reward: [(0, '19.760'), (1, '17.800')] +[2023-09-26 02:20:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 11214848. Throughput: 0: 807.5, 1: 807.4. Samples: 2804308. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:20:03,030][62705] Avg episode reward: [(0, '20.230'), (1, '17.600')] +[2023-09-26 02:20:03,208][63636] Updated weights for policy 0, policy_version 21920 (0.0018) +[2023-09-26 02:20:03,209][63637] Updated weights for policy 1, policy_version 21920 (0.0018) +[2023-09-26 02:20:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 11247616. Throughput: 0: 815.1, 1: 813.4. Samples: 2809465. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:20:08,030][62705] Avg episode reward: [(0, '20.040'), (1, '17.910')] +[2023-09-26 02:20:13,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 11280384. Throughput: 0: 809.4, 1: 809.1. Samples: 2819102. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:20:13,029][62705] Avg episode reward: [(0, '19.820'), (1, '17.890')] +[2023-09-26 02:20:15,880][63637] Updated weights for policy 1, policy_version 22080 (0.0018) +[2023-09-26 02:20:15,881][63636] Updated weights for policy 0, policy_version 22080 (0.0017) +[2023-09-26 02:20:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 11313152. Throughput: 0: 806.0, 1: 805.7. Samples: 2828515. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:20:18,029][62705] Avg episode reward: [(0, '19.920'), (1, '18.330')] +[2023-09-26 02:20:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 11345920. Throughput: 0: 808.9, 1: 806.4. Samples: 2833422. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:20:23,029][62705] Avg episode reward: [(0, '19.870'), (1, '18.940')] +[2023-09-26 02:20:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 11378688. Throughput: 0: 804.1, 1: 804.3. Samples: 2843106. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:20:28,030][62705] Avg episode reward: [(0, '19.780'), (1, '18.860')] +[2023-09-26 02:20:28,544][63637] Updated weights for policy 1, policy_version 22240 (0.0018) +[2023-09-26 02:20:28,544][63636] Updated weights for policy 0, policy_version 22240 (0.0018) +[2023-09-26 02:20:33,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 11411456. Throughput: 0: 803.8, 1: 807.4. Samples: 2852864. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:20:33,030][62705] Avg episode reward: [(0, '19.590'), (1, '18.930')] +[2023-09-26 02:20:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 11444224. Throughput: 0: 804.7, 1: 802.3. Samples: 2857561. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:20:38,030][62705] Avg episode reward: [(0, '20.140'), (1, '19.700')] +[2023-09-26 02:20:41,230][63636] Updated weights for policy 0, policy_version 22400 (0.0017) +[2023-09-26 02:20:41,230][63637] Updated weights for policy 1, policy_version 22400 (0.0015) +[2023-09-26 02:20:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 11476992. Throughput: 0: 805.5, 1: 807.1. Samples: 2867280. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 02:20:43,030][62705] Avg episode reward: [(0, '19.910'), (1, '19.800')] +[2023-09-26 02:20:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 11509760. Throughput: 0: 810.2, 1: 810.3. Samples: 2877231. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 02:20:48,030][62705] Avg episode reward: [(0, '19.470'), (1, '19.570')] +[2023-09-26 02:20:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11542528. Throughput: 0: 802.2, 1: 802.4. Samples: 2881672. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 02:20:53,030][62705] Avg episode reward: [(0, '19.100'), (1, '20.630')] +[2023-09-26 02:20:54,026][63636] Updated weights for policy 0, policy_version 22560 (0.0016) +[2023-09-26 02:20:54,027][63637] Updated weights for policy 1, policy_version 22560 (0.0018) +[2023-09-26 02:20:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11575296. Throughput: 0: 806.3, 1: 806.1. Samples: 2891659. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:20:58,030][62705] Avg episode reward: [(0, '19.170'), (1, '21.470')] +[2023-09-26 02:20:58,043][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000022608_5787648.pth... +[2023-09-26 02:20:58,044][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000022608_5787648.pth... +[2023-09-26 02:20:58,079][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000019568_5009408.pth +[2023-09-26 02:20:58,082][63410] Saving new best policy, reward=21.470! +[2023-09-26 02:20:58,083][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000019568_5009408.pth +[2023-09-26 02:21:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11608064. Throughput: 0: 807.7, 1: 808.1. Samples: 2901227. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:21:03,030][62705] Avg episode reward: [(0, '18.370'), (1, '21.010')] +[2023-09-26 02:21:06,601][63637] Updated weights for policy 1, policy_version 22720 (0.0015) +[2023-09-26 02:21:06,601][63636] Updated weights for policy 0, policy_version 22720 (0.0017) +[2023-09-26 02:21:08,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11640832. Throughput: 0: 806.2, 1: 809.2. Samples: 2906113. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:21:08,029][62705] Avg episode reward: [(0, '19.610'), (1, '20.680')] +[2023-09-26 02:21:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 11673600. Throughput: 0: 810.1, 1: 809.8. Samples: 2915999. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:21:13,030][62705] Avg episode reward: [(0, '19.840'), (1, '21.120')] +[2023-09-26 02:21:18,029][62705] Fps is (10 sec: 6143.9, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 11702272. Throughput: 0: 808.1, 1: 806.3. Samples: 2925513. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:21:18,030][62705] Avg episode reward: [(0, '19.820'), (1, '20.930')] +[2023-09-26 02:21:19,399][63636] Updated weights for policy 0, policy_version 22880 (0.0017) +[2023-09-26 02:21:19,400][63637] Updated weights for policy 1, policy_version 22880 (0.0018) +[2023-09-26 02:21:23,029][62705] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11730944. Throughput: 0: 808.3, 1: 808.3. Samples: 2930311. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:21:23,029][62705] Avg episode reward: [(0, '19.240'), (1, '22.130')] +[2023-09-26 02:21:23,200][63410] Saving new best policy, reward=22.130! +[2023-09-26 02:21:28,029][62705] Fps is (10 sec: 6144.1, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11763712. Throughput: 0: 808.0, 1: 806.7. Samples: 2939940. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:21:28,029][62705] Avg episode reward: [(0, '20.840'), (1, '22.100')] +[2023-09-26 02:21:32,024][63636] Updated weights for policy 0, policy_version 23040 (0.0014) +[2023-09-26 02:21:32,025][63637] Updated weights for policy 1, policy_version 23040 (0.0015) +[2023-09-26 02:21:33,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11796480. Throughput: 0: 804.1, 1: 803.6. Samples: 2949578. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:21:33,030][62705] Avg episode reward: [(0, '21.430'), (1, '23.470')] +[2023-09-26 02:21:33,031][63410] Saving new best policy, reward=23.470! +[2023-09-26 02:21:38,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11829248. Throughput: 0: 810.8, 1: 810.8. Samples: 2954640. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:21:38,030][62705] Avg episode reward: [(0, '20.270'), (1, '23.180')] +[2023-09-26 02:21:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11862016. Throughput: 0: 806.8, 1: 806.7. Samples: 2964267. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:21:43,030][62705] Avg episode reward: [(0, '19.860'), (1, '22.790')] +[2023-09-26 02:21:44,670][63636] Updated weights for policy 0, policy_version 23200 (0.0017) +[2023-09-26 02:21:44,671][63637] Updated weights for policy 1, policy_version 23200 (0.0017) +[2023-09-26 02:21:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11894784. Throughput: 0: 805.6, 1: 806.7. Samples: 2973780. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:21:48,030][62705] Avg episode reward: [(0, '19.960'), (1, '22.680')] +[2023-09-26 02:21:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11927552. Throughput: 0: 807.2, 1: 804.2. Samples: 2978627. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:21:53,030][62705] Avg episode reward: [(0, '19.800'), (1, '22.440')] +[2023-09-26 02:21:57,433][63636] Updated weights for policy 0, policy_version 23360 (0.0017) +[2023-09-26 02:21:57,434][63637] Updated weights for policy 1, policy_version 23360 (0.0017) +[2023-09-26 02:21:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11960320. Throughput: 0: 801.6, 1: 802.2. Samples: 2988168. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:21:58,030][62705] Avg episode reward: [(0, '21.420'), (1, '20.270')] +[2023-09-26 02:22:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 11993088. Throughput: 0: 807.5, 1: 806.8. Samples: 2998157. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:03,030][62705] Avg episode reward: [(0, '21.290'), (1, '20.730')] +[2023-09-26 02:22:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.4). Total num frames: 12025856. Throughput: 0: 804.2, 1: 804.2. Samples: 3002690. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:08,030][62705] Avg episode reward: [(0, '22.250'), (1, '20.110')] +[2023-09-26 02:22:08,031][63291] Saving new best policy, reward=22.250! +[2023-09-26 02:22:10,163][63636] Updated weights for policy 0, policy_version 23520 (0.0015) +[2023-09-26 02:22:10,164][63637] Updated weights for policy 1, policy_version 23520 (0.0017) +[2023-09-26 02:22:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 12058624. Throughput: 0: 806.0, 1: 808.8. Samples: 3012608. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:13,029][62705] Avg episode reward: [(0, '21.520'), (1, '20.010')] +[2023-09-26 02:22:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6485.3, 300 sec: 6498.1). Total num frames: 12091392. Throughput: 0: 810.6, 1: 810.9. Samples: 3022545. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:18,030][62705] Avg episode reward: [(0, '21.010'), (1, '18.860')] +[2023-09-26 02:22:22,608][63636] Updated weights for policy 0, policy_version 23680 (0.0017) +[2023-09-26 02:22:22,608][63637] Updated weights for policy 1, policy_version 23680 (0.0017) +[2023-09-26 02:22:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 12124160. Throughput: 0: 806.0, 1: 805.7. Samples: 3027168. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:23,030][62705] Avg episode reward: [(0, '21.980'), (1, '17.860')] +[2023-09-26 02:22:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 12156928. Throughput: 0: 808.8, 1: 811.5. Samples: 3037184. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:28,029][62705] Avg episode reward: [(0, '21.130'), (1, '18.030')] +[2023-09-26 02:22:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 12189696. Throughput: 0: 814.0, 1: 812.6. Samples: 3046976. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:33,029][62705] Avg episode reward: [(0, '21.410'), (1, '17.720')] +[2023-09-26 02:22:35,225][63636] Updated weights for policy 0, policy_version 23840 (0.0018) +[2023-09-26 02:22:35,225][63637] Updated weights for policy 1, policy_version 23840 (0.0018) +[2023-09-26 02:22:38,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 12222464. Throughput: 0: 810.3, 1: 811.5. Samples: 3051609. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:38,030][62705] Avg episode reward: [(0, '21.270'), (1, '18.380')] +[2023-09-26 02:22:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 12255232. Throughput: 0: 816.3, 1: 817.9. Samples: 3061706. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:43,030][62705] Avg episode reward: [(0, '22.040'), (1, '19.140')] +[2023-09-26 02:22:47,961][63636] Updated weights for policy 0, policy_version 24000 (0.0017) +[2023-09-26 02:22:47,962][63637] Updated weights for policy 1, policy_version 24000 (0.0016) +[2023-09-26 02:22:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 12288000. Throughput: 0: 810.6, 1: 809.6. Samples: 3071065. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:48,030][62705] Avg episode reward: [(0, '22.130'), (1, '18.920')] +[2023-09-26 02:22:53,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12312576. Throughput: 0: 813.9, 1: 814.8. Samples: 3075982. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:53,030][62705] Avg episode reward: [(0, '21.470'), (1, '19.360')] +[2023-09-26 02:22:58,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12345344. Throughput: 0: 809.5, 1: 806.4. Samples: 3085326. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:22:58,030][62705] Avg episode reward: [(0, '20.710'), (1, '18.680')] +[2023-09-26 02:22:58,040][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000024112_6172672.pth... +[2023-09-26 02:22:58,076][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000021088_5398528.pth +[2023-09-26 02:22:58,222][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000024128_6176768.pth... +[2023-09-26 02:22:58,249][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000021088_5398528.pth +[2023-09-26 02:23:00,915][63636] Updated weights for policy 0, policy_version 24160 (0.0017) +[2023-09-26 02:23:00,915][63637] Updated weights for policy 1, policy_version 24160 (0.0017) +[2023-09-26 02:23:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). 
Total num frames: 12378112. Throughput: 0: 799.4, 1: 801.3. Samples: 3094578. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:03,029][62705] Avg episode reward: [(0, '19.950'), (1, '19.360')] +[2023-09-26 02:23:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12410880. Throughput: 0: 802.9, 1: 803.2. Samples: 3099444. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:08,030][62705] Avg episode reward: [(0, '21.050'), (1, '19.400')] +[2023-09-26 02:23:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12443648. Throughput: 0: 801.0, 1: 797.7. Samples: 3109125. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:13,029][62705] Avg episode reward: [(0, '20.040'), (1, '19.690')] +[2023-09-26 02:23:13,626][63636] Updated weights for policy 0, policy_version 24320 (0.0016) +[2023-09-26 02:23:13,627][63637] Updated weights for policy 1, policy_version 24320 (0.0016) +[2023-09-26 02:23:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12476416. Throughput: 0: 799.8, 1: 803.0. Samples: 3119104. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:18,029][62705] Avg episode reward: [(0, '19.770'), (1, '19.920')] +[2023-09-26 02:23:23,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12509184. Throughput: 0: 803.2, 1: 802.6. Samples: 3123872. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:23,030][62705] Avg episode reward: [(0, '19.960'), (1, '20.090')] +[2023-09-26 02:23:26,301][63637] Updated weights for policy 1, policy_version 24480 (0.0015) +[2023-09-26 02:23:26,302][63636] Updated weights for policy 0, policy_version 24480 (0.0017) +[2023-09-26 02:23:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12541952. Throughput: 0: 796.8, 1: 797.6. Samples: 3133457. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:28,029][62705] Avg episode reward: [(0, '20.160'), (1, '19.340')] +[2023-09-26 02:23:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12574720. Throughput: 0: 799.0, 1: 799.5. Samples: 3142997. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:33,029][62705] Avg episode reward: [(0, '18.410'), (1, '20.130')] +[2023-09-26 02:23:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12607488. Throughput: 0: 797.0, 1: 798.9. Samples: 3147797. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:38,029][62705] Avg episode reward: [(0, '17.370'), (1, '19.100')] +[2023-09-26 02:23:39,055][63637] Updated weights for policy 1, policy_version 24640 (0.0019) +[2023-09-26 02:23:39,055][63636] Updated weights for policy 0, policy_version 24640 (0.0018) +[2023-09-26 02:23:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 12640256. Throughput: 0: 805.1, 1: 804.8. Samples: 3157773. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:43,029][62705] Avg episode reward: [(0, '16.950'), (1, '19.470')] +[2023-09-26 02:23:48,029][62705] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6456.4). Total num frames: 12668928. Throughput: 0: 806.4, 1: 806.0. Samples: 3167136. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:48,030][62705] Avg episode reward: [(0, '17.000'), (1, '18.770')] +[2023-09-26 02:23:51,927][63636] Updated weights for policy 0, policy_version 24800 (0.0017) +[2023-09-26 02:23:51,927][63637] Updated weights for policy 1, policy_version 24800 (0.0015) +[2023-09-26 02:23:53,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12697600. Throughput: 0: 805.9, 1: 805.3. Samples: 3171949. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:53,030][62705] Avg episode reward: [(0, '18.120'), (1, '19.160')] +[2023-09-26 02:23:58,029][62705] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12730368. Throughput: 0: 804.7, 1: 804.9. Samples: 3181556. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:23:58,030][62705] Avg episode reward: [(0, '16.930'), (1, '18.810')] +[2023-09-26 02:24:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12763136. Throughput: 0: 802.6, 1: 799.5. Samples: 3191200. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:24:03,030][62705] Avg episode reward: [(0, '16.810'), (1, '19.010')] +[2023-09-26 02:24:04,535][63636] Updated weights for policy 0, policy_version 24960 (0.0018) +[2023-09-26 02:24:04,535][63637] Updated weights for policy 1, policy_version 24960 (0.0015) +[2023-09-26 02:24:08,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12795904. Throughput: 0: 805.3, 1: 804.8. Samples: 3196325. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:24:08,029][62705] Avg episode reward: [(0, '17.170'), (1, '19.680')] +[2023-09-26 02:24:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 12828672. Throughput: 0: 807.8, 1: 805.5. Samples: 3206055. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:24:13,030][62705] Avg episode reward: [(0, '17.790'), (1, '19.670')] +[2023-09-26 02:24:17,109][63637] Updated weights for policy 1, policy_version 25120 (0.0017) +[2023-09-26 02:24:17,109][63636] Updated weights for policy 0, policy_version 25120 (0.0018) +[2023-09-26 02:24:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12861440. Throughput: 0: 807.1, 1: 807.1. Samples: 3215637. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:24:18,029][62705] Avg episode reward: [(0, '18.720'), (1, '20.420')] +[2023-09-26 02:24:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12894208. Throughput: 0: 811.4, 1: 808.6. Samples: 3220700. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:24:23,030][62705] Avg episode reward: [(0, '17.980'), (1, '20.530')] +[2023-09-26 02:24:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 12926976. Throughput: 0: 803.4, 1: 804.0. Samples: 3230107. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:24:28,030][62705] Avg episode reward: [(0, '18.860'), (1, '21.140')] +[2023-09-26 02:24:29,844][63636] Updated weights for policy 0, policy_version 25280 (0.0016) +[2023-09-26 02:24:29,844][63637] Updated weights for policy 1, policy_version 25280 (0.0016) +[2023-09-26 02:24:33,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 12959744. Throughput: 0: 808.1, 1: 809.7. Samples: 3239937. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:24:33,030][62705] Avg episode reward: [(0, '18.510'), (1, '21.040')] +[2023-09-26 02:24:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 12992512. Throughput: 0: 807.2, 1: 807.9. Samples: 3244629. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:24:38,029][62705] Avg episode reward: [(0, '18.650'), (1, '20.600')] +[2023-09-26 02:24:42,623][63637] Updated weights for policy 1, policy_version 25440 (0.0018) +[2023-09-26 02:24:42,624][63636] Updated weights for policy 0, policy_version 25440 (0.0018) +[2023-09-26 02:24:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 13025280. Throughput: 0: 806.5, 1: 809.5. Samples: 3254277. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:24:43,030][62705] Avg episode reward: [(0, '18.490'), (1, '19.960')] +[2023-09-26 02:24:48,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6485.4, 300 sec: 6470.3). Total num frames: 13058048. Throughput: 0: 812.4, 1: 812.2. Samples: 3264308. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:24:48,029][62705] Avg episode reward: [(0, '18.600'), (1, '21.570')] +[2023-09-26 02:24:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13090816. Throughput: 0: 805.9, 1: 806.1. Samples: 3268866. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:24:53,029][62705] Avg episode reward: [(0, '18.200'), (1, '21.850')] +[2023-09-26 02:24:55,217][63637] Updated weights for policy 1, policy_version 25600 (0.0016) +[2023-09-26 02:24:55,218][63636] Updated weights for policy 0, policy_version 25600 (0.0017) +[2023-09-26 02:24:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13123584. Throughput: 0: 807.5, 1: 808.8. Samples: 3278787. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:24:58,029][62705] Avg episode reward: [(0, '18.090'), (1, '21.980')] +[2023-09-26 02:24:58,040][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000025632_6561792.pth... +[2023-09-26 02:24:58,040][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000025632_6561792.pth... +[2023-09-26 02:24:58,075][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000022608_5787648.pth +[2023-09-26 02:24:58,076][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000022608_5787648.pth +[2023-09-26 02:25:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13156352. Throughput: 0: 808.0, 1: 806.9. Samples: 3288308. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:25:03,029][62705] Avg episode reward: [(0, '18.650'), (1, '21.890')] +[2023-09-26 02:25:07,886][63637] Updated weights for policy 1, policy_version 25760 (0.0018) +[2023-09-26 02:25:07,887][63636] Updated weights for policy 0, policy_version 25760 (0.0018) +[2023-09-26 02:25:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13189120. Throughput: 0: 803.8, 1: 807.0. Samples: 3293184. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:25:08,030][62705] Avg episode reward: [(0, '18.240'), (1, '21.090')] +[2023-09-26 02:25:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13221888. Throughput: 0: 811.2, 1: 810.3. Samples: 3303077. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:25:13,030][62705] Avg episode reward: [(0, '17.340'), (1, '21.420')] +[2023-09-26 02:25:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13254656. Throughput: 0: 809.6, 1: 807.5. Samples: 3312703. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:25:18,030][62705] Avg episode reward: [(0, '17.060'), (1, '21.030')] +[2023-09-26 02:25:20,463][63637] Updated weights for policy 1, policy_version 25920 (0.0017) +[2023-09-26 02:25:20,464][63636] Updated weights for policy 0, policy_version 25920 (0.0018) +[2023-09-26 02:25:23,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13279232. Throughput: 0: 811.2, 1: 812.7. Samples: 3317703. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:25:23,030][62705] Avg episode reward: [(0, '16.870'), (1, '20.070')] +[2023-09-26 02:25:28,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13312000. Throughput: 0: 809.5, 1: 808.3. Samples: 3327077. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:25:28,030][62705] Avg episode reward: [(0, '16.950'), (1, '20.520')] +[2023-09-26 02:25:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13344768. Throughput: 0: 799.9, 1: 800.4. Samples: 3336323. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:25:33,029][62705] Avg episode reward: [(0, '17.640'), (1, '20.380')] +[2023-09-26 02:25:33,533][63637] Updated weights for policy 1, policy_version 26080 (0.0015) +[2023-09-26 02:25:33,534][63636] Updated weights for policy 0, policy_version 26080 (0.0018) +[2023-09-26 02:25:38,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13377536. Throughput: 0: 803.8, 1: 804.1. Samples: 3341223. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:25:38,029][62705] Avg episode reward: [(0, '18.850'), (1, '20.140')] +[2023-09-26 02:25:43,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13410304. Throughput: 0: 802.4, 1: 801.1. Samples: 3350941. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:25:43,029][62705] Avg episode reward: [(0, '18.410'), (1, '21.090')] +[2023-09-26 02:25:46,179][63637] Updated weights for policy 1, policy_version 26240 (0.0017) +[2023-09-26 02:25:46,179][63636] Updated weights for policy 0, policy_version 26240 (0.0017) +[2023-09-26 02:25:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13443072. Throughput: 0: 803.1, 1: 807.2. Samples: 3360768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:25:48,029][62705] Avg episode reward: [(0, '17.780'), (1, '20.210')] +[2023-09-26 02:25:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13475840. Throughput: 0: 805.9, 1: 802.8. Samples: 3365574. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:25:53,029][62705] Avg episode reward: [(0, '19.010'), (1, '20.360')] +[2023-09-26 02:25:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13508608. Throughput: 0: 800.8, 1: 802.3. Samples: 3375218. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:25:58,029][62705] Avg episode reward: [(0, '19.610'), (1, '19.660')] +[2023-09-26 02:25:58,756][63637] Updated weights for policy 1, policy_version 26400 (0.0018) +[2023-09-26 02:25:58,756][63636] Updated weights for policy 0, policy_version 26400 (0.0015) +[2023-09-26 02:26:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13541376. Throughput: 0: 805.7, 1: 805.2. Samples: 3385192. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:26:03,029][62705] Avg episode reward: [(0, '19.130'), (1, '19.900')] +[2023-09-26 02:26:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13574144. Throughput: 0: 802.2, 1: 800.5. Samples: 3389828. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:26:08,030][62705] Avg episode reward: [(0, '19.170'), (1, '20.760')] +[2023-09-26 02:26:11,417][63637] Updated weights for policy 1, policy_version 26560 (0.0018) +[2023-09-26 02:26:11,418][63636] Updated weights for policy 0, policy_version 26560 (0.0018) +[2023-09-26 02:26:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6456.4). Total num frames: 13606912. Throughput: 0: 806.1, 1: 807.3. Samples: 3399677. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:26:13,029][62705] Avg episode reward: [(0, '19.740'), (1, '20.100')] +[2023-09-26 02:26:18,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 13639680. Throughput: 0: 811.9, 1: 811.6. Samples: 3409381. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 02:26:18,029][62705] Avg episode reward: [(0, '18.640'), (1, '19.560')] +[2023-09-26 02:26:23,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13672448. Throughput: 0: 807.7, 1: 810.1. Samples: 3414021. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:26:23,030][62705] Avg episode reward: [(0, '18.970'), (1, '19.540')] +[2023-09-26 02:26:24,042][63636] Updated weights for policy 0, policy_version 26720 (0.0019) +[2023-09-26 02:26:24,042][63637] Updated weights for policy 1, policy_version 26720 (0.0019) +[2023-09-26 02:26:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13705216. Throughput: 0: 811.5, 1: 811.6. Samples: 3423983. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:26:28,030][62705] Avg episode reward: [(0, '18.890'), (1, '19.250')] +[2023-09-26 02:26:33,029][62705] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6456.4). Total num frames: 13733888. Throughput: 0: 808.2, 1: 805.3. Samples: 3433374. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:26:33,030][62705] Avg episode reward: [(0, '17.960'), (1, '20.210')] +[2023-09-26 02:26:36,798][63637] Updated weights for policy 1, policy_version 26880 (0.0020) +[2023-09-26 02:26:36,798][63636] Updated weights for policy 0, policy_version 26880 (0.0021) +[2023-09-26 02:26:38,029][62705] Fps is (10 sec: 5734.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13762560. Throughput: 0: 809.8, 1: 809.6. Samples: 3438444. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:26:38,029][62705] Avg episode reward: [(0, '18.520'), (1, '18.880')] +[2023-09-26 02:26:43,029][62705] Fps is (10 sec: 6963.1, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13803520. Throughput: 0: 812.1, 1: 811.5. Samples: 3448282. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:26:43,030][62705] Avg episode reward: [(0, '18.500'), (1, '19.150')] +[2023-09-26 02:26:48,029][62705] Fps is (10 sec: 7372.8, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13836288. Throughput: 0: 810.1, 1: 810.0. Samples: 3458099. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:26:48,029][62705] Avg episode reward: [(0, '18.090'), (1, '19.430')] +[2023-09-26 02:26:49,212][63636] Updated weights for policy 0, policy_version 27040 (0.0016) +[2023-09-26 02:26:49,212][63637] Updated weights for policy 1, policy_version 27040 (0.0017) +[2023-09-26 02:26:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 13869056. Throughput: 0: 813.4, 1: 815.7. Samples: 3463139. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:26:53,030][62705] Avg episode reward: [(0, '18.160'), (1, '20.720')] +[2023-09-26 02:26:58,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 13893632. Throughput: 0: 813.9, 1: 811.5. Samples: 3472821. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:26:58,030][62705] Avg episode reward: [(0, '17.830'), (1, '20.080')] +[2023-09-26 02:26:58,042][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000027152_6950912.pth... +[2023-09-26 02:26:58,060][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000027152_6950912.pth... +[2023-09-26 02:26:58,068][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000024112_6172672.pth +[2023-09-26 02:26:58,088][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000024128_6176768.pth +[2023-09-26 02:27:01,796][63637] Updated weights for policy 1, policy_version 27200 (0.0017) +[2023-09-26 02:27:01,796][63636] Updated weights for policy 0, policy_version 27200 (0.0017) +[2023-09-26 02:27:03,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6442.5). 
Total num frames: 13926400. Throughput: 0: 813.0, 1: 812.9. Samples: 3482544. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:03,030][62705] Avg episode reward: [(0, '18.480'), (1, '20.200')] +[2023-09-26 02:27:08,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 13959168. Throughput: 0: 816.8, 1: 815.1. Samples: 3487460. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:08,030][62705] Avg episode reward: [(0, '19.720'), (1, '20.700')] +[2023-09-26 02:27:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 13991936. Throughput: 0: 813.1, 1: 812.5. Samples: 3497134. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:13,030][62705] Avg episode reward: [(0, '19.740'), (1, '20.240')] +[2023-09-26 02:27:14,513][63636] Updated weights for policy 0, policy_version 27360 (0.0017) +[2023-09-26 02:27:14,513][63637] Updated weights for policy 1, policy_version 27360 (0.0018) +[2023-09-26 02:27:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6442.5). Total num frames: 14024704. Throughput: 0: 814.4, 1: 814.2. Samples: 3506658. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:18,030][62705] Avg episode reward: [(0, '20.650'), (1, '20.100')] +[2023-09-26 02:27:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14057472. Throughput: 0: 814.4, 1: 814.4. Samples: 3511739. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:23,029][62705] Avg episode reward: [(0, '20.970'), (1, '20.660')] +[2023-09-26 02:27:27,148][63636] Updated weights for policy 0, policy_version 27520 (0.0016) +[2023-09-26 02:27:27,148][63637] Updated weights for policy 1, policy_version 27520 (0.0017) +[2023-09-26 02:27:28,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14090240. Throughput: 0: 810.3, 1: 810.1. Samples: 3521200. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:28,029][62705] Avg episode reward: [(0, '20.820'), (1, '21.510')] +[2023-09-26 02:27:33,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6485.3, 300 sec: 6442.5). Total num frames: 14123008. Throughput: 0: 807.9, 1: 808.7. Samples: 3530846. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:33,030][62705] Avg episode reward: [(0, '20.870'), (1, '21.960')] +[2023-09-26 02:27:38,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 14155776. Throughput: 0: 809.2, 1: 807.9. Samples: 3535906. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:38,030][62705] Avg episode reward: [(0, '21.230'), (1, '21.100')] +[2023-09-26 02:27:40,025][63636] Updated weights for policy 0, policy_version 27680 (0.0018) +[2023-09-26 02:27:40,026][63637] Updated weights for policy 1, policy_version 27680 (0.0017) +[2023-09-26 02:27:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 14188544. Throughput: 0: 803.1, 1: 804.2. Samples: 3545149. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:43,030][62705] Avg episode reward: [(0, '21.480'), (1, '21.120')] +[2023-09-26 02:27:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 14221312. Throughput: 0: 807.2, 1: 807.9. Samples: 3555223. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:48,030][62705] Avg episode reward: [(0, '22.140'), (1, '21.580')] +[2023-09-26 02:27:52,523][63636] Updated weights for policy 0, policy_version 27840 (0.0017) +[2023-09-26 02:27:52,524][63637] Updated weights for policy 1, policy_version 27840 (0.0018) +[2023-09-26 02:27:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 14254080. Throughput: 0: 805.8, 1: 804.8. Samples: 3559935. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:53,030][62705] Avg episode reward: [(0, '22.140'), (1, '22.430')] +[2023-09-26 02:27:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14286848. Throughput: 0: 806.3, 1: 807.5. Samples: 3569755. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:27:58,029][62705] Avg episode reward: [(0, '22.090'), (1, '22.230')] +[2023-09-26 02:28:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14319616. Throughput: 0: 812.3, 1: 814.9. Samples: 3579882. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:28:03,030][62705] Avg episode reward: [(0, '23.500'), (1, '22.890')] +[2023-09-26 02:28:03,031][63291] Saving new best policy, reward=23.500! +[2023-09-26 02:28:04,939][63637] Updated weights for policy 1, policy_version 28000 (0.0018) +[2023-09-26 02:28:04,940][63636] Updated weights for policy 0, policy_version 28000 (0.0018) +[2023-09-26 02:28:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14352384. Throughput: 0: 809.8, 1: 810.0. Samples: 3584632. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:28:08,030][62705] Avg episode reward: [(0, '23.790'), (1, '22.770')] +[2023-09-26 02:28:08,031][63291] Saving new best policy, reward=23.790! +[2023-09-26 02:28:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14385152. Throughput: 0: 812.4, 1: 813.2. Samples: 3594355. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:28:13,030][62705] Avg episode reward: [(0, '23.520'), (1, '22.890')] +[2023-09-26 02:28:17,437][63637] Updated weights for policy 1, policy_version 28160 (0.0018) +[2023-09-26 02:28:17,437][63636] Updated weights for policy 0, policy_version 28160 (0.0018) +[2023-09-26 02:28:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14417920. 
Throughput: 0: 817.2, 1: 818.7. Samples: 3604461. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:28:18,030][62705] Avg episode reward: [(0, '23.090'), (1, '24.600')] +[2023-09-26 02:28:18,031][63410] Saving new best policy, reward=24.600! +[2023-09-26 02:28:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14450688. Throughput: 0: 814.7, 1: 813.4. Samples: 3609170. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:28:23,030][62705] Avg episode reward: [(0, '22.910'), (1, '23.630')] +[2023-09-26 02:28:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14483456. Throughput: 0: 819.2, 1: 819.2. Samples: 3618875. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:28:28,030][62705] Avg episode reward: [(0, '22.910'), (1, '24.040')] +[2023-09-26 02:28:29,966][63637] Updated weights for policy 1, policy_version 28320 (0.0017) +[2023-09-26 02:28:29,967][63636] Updated weights for policy 0, policy_version 28320 (0.0016) +[2023-09-26 02:28:33,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14516224. Throughput: 0: 819.2, 1: 820.6. Samples: 3629012. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:28:33,029][62705] Avg episode reward: [(0, '22.530'), (1, '24.070')] +[2023-09-26 02:28:38,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 14548992. Throughput: 0: 819.8, 1: 819.5. Samples: 3633704. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:28:38,029][62705] Avg episode reward: [(0, '23.000'), (1, '23.110')] +[2023-09-26 02:28:42,651][63636] Updated weights for policy 0, policy_version 28480 (0.0016) +[2023-09-26 02:28:42,651][63637] Updated weights for policy 1, policy_version 28480 (0.0016) +[2023-09-26 02:28:43,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6484.2). Total num frames: 14581760. Throughput: 0: 817.4, 1: 819.1. 
Samples: 3643396. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:28:43,030][62705] Avg episode reward: [(0, '23.270'), (1, '23.900')] +[2023-09-26 02:28:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 14614528. Throughput: 0: 814.6, 1: 812.4. Samples: 3653098. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:28:48,030][62705] Avg episode reward: [(0, '20.250'), (1, '24.430')] +[2023-09-26 02:28:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 14647296. Throughput: 0: 812.2, 1: 813.7. Samples: 3657797. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:28:53,029][62705] Avg episode reward: [(0, '21.470'), (1, '24.450')] +[2023-09-26 02:28:55,236][63637] Updated weights for policy 1, policy_version 28640 (0.0018) +[2023-09-26 02:28:55,237][63636] Updated weights for policy 0, policy_version 28640 (0.0016) +[2023-09-26 02:28:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 14680064. Throughput: 0: 816.7, 1: 817.6. Samples: 3667902. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:28:58,030][62705] Avg episode reward: [(0, '20.980'), (1, '24.120')] +[2023-09-26 02:28:58,041][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000028672_7340032.pth... +[2023-09-26 02:28:58,041][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000028672_7340032.pth... +[2023-09-26 02:28:58,076][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000025632_6561792.pth +[2023-09-26 02:28:58,076][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000025632_6561792.pth +[2023-09-26 02:29:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 14712832. Throughput: 0: 812.3, 1: 810.4. Samples: 3677479. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:29:03,030][62705] Avg episode reward: [(0, '19.990'), (1, '23.260')] +[2023-09-26 02:29:07,808][63637] Updated weights for policy 1, policy_version 28800 (0.0019) +[2023-09-26 02:29:07,809][63636] Updated weights for policy 0, policy_version 28800 (0.0019) +[2023-09-26 02:29:08,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 14745600. Throughput: 0: 811.1, 1: 814.2. Samples: 3682309. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:29:08,029][62705] Avg episode reward: [(0, '20.370'), (1, '22.430')] +[2023-09-26 02:29:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 14778368. Throughput: 0: 817.7, 1: 815.4. Samples: 3692363. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:13,030][62705] Avg episode reward: [(0, '19.330'), (1, '22.590')] +[2023-09-26 02:29:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 14811136. Throughput: 0: 811.0, 1: 810.3. Samples: 3701972. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:18,029][62705] Avg episode reward: [(0, '18.740'), (1, '20.850')] +[2023-09-26 02:29:20,498][63636] Updated weights for policy 0, policy_version 28960 (0.0020) +[2023-09-26 02:29:20,498][63637] Updated weights for policy 1, policy_version 28960 (0.0017) +[2023-09-26 02:29:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 14843904. Throughput: 0: 811.5, 1: 813.3. Samples: 3706820. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:23,030][62705] Avg episode reward: [(0, '19.120'), (1, '20.260')] +[2023-09-26 02:29:28,029][62705] Fps is (10 sec: 6143.9, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 14872576. Throughput: 0: 814.8, 1: 811.1. Samples: 3716562. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:28,030][62705] Avg episode reward: [(0, '19.200'), (1, '20.030')] +[2023-09-26 02:29:33,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 14901248. Throughput: 0: 812.1, 1: 811.6. Samples: 3726162. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:33,030][62705] Avg episode reward: [(0, '19.430'), (1, '20.140')] +[2023-09-26 02:29:33,096][63636] Updated weights for policy 0, policy_version 29120 (0.0017) +[2023-09-26 02:29:33,096][63637] Updated weights for policy 1, policy_version 29120 (0.0017) +[2023-09-26 02:29:38,029][62705] Fps is (10 sec: 6144.0, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 14934016. Throughput: 0: 816.4, 1: 815.0. Samples: 3731208. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:38,030][62705] Avg episode reward: [(0, '20.190'), (1, '20.200')] +[2023-09-26 02:29:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 14966784. Throughput: 0: 811.2, 1: 809.1. Samples: 3740815. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:43,030][62705] Avg episode reward: [(0, '19.900'), (1, '19.660')] +[2023-09-26 02:29:45,767][63636] Updated weights for policy 0, policy_version 29280 (0.0015) +[2023-09-26 02:29:45,768][63637] Updated weights for policy 1, policy_version 29280 (0.0018) +[2023-09-26 02:29:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 14999552. Throughput: 0: 810.3, 1: 809.6. Samples: 3750375. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:48,030][62705] Avg episode reward: [(0, '20.940'), (1, '19.640')] +[2023-09-26 02:29:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 15032320. Throughput: 0: 813.5, 1: 811.1. Samples: 3755415. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:53,030][62705] Avg episode reward: [(0, '20.040'), (1, '19.820')] +[2023-09-26 02:29:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 15065088. Throughput: 0: 807.1, 1: 807.2. Samples: 3765007. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:29:58,029][62705] Avg episode reward: [(0, '20.180'), (1, '19.770')] +[2023-09-26 02:29:58,457][63637] Updated weights for policy 1, policy_version 29440 (0.0014) +[2023-09-26 02:29:58,457][63636] Updated weights for policy 0, policy_version 29440 (0.0017) +[2023-09-26 02:30:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 15097856. Throughput: 0: 804.8, 1: 806.3. Samples: 3774473. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:30:03,029][62705] Avg episode reward: [(0, '21.150'), (1, '21.090')] +[2023-09-26 02:30:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 15130624. Throughput: 0: 808.4, 1: 806.5. Samples: 3779493. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:30:08,030][62705] Avg episode reward: [(0, '20.720'), (1, '20.130')] +[2023-09-26 02:30:10,955][63637] Updated weights for policy 1, policy_version 29600 (0.0018) +[2023-09-26 02:30:10,955][63636] Updated weights for policy 0, policy_version 29600 (0.0018) +[2023-09-26 02:30:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 15163392. Throughput: 0: 809.0, 1: 809.8. Samples: 3789411. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:30:13,030][62705] Avg episode reward: [(0, '21.480'), (1, '20.200')] +[2023-09-26 02:30:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 15196160. Throughput: 0: 808.2, 1: 811.4. Samples: 3799045. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:30:18,030][62705] Avg episode reward: [(0, '21.450'), (1, '20.680')] +[2023-09-26 02:30:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15228928. Throughput: 0: 808.4, 1: 808.4. Samples: 3803961. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:30:23,030][62705] Avg episode reward: [(0, '22.760'), (1, '20.090')] +[2023-09-26 02:30:23,679][63637] Updated weights for policy 1, policy_version 29760 (0.0017) +[2023-09-26 02:30:23,679][63636] Updated weights for policy 0, policy_version 29760 (0.0016) +[2023-09-26 02:30:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6485.3, 300 sec: 6498.1). Total num frames: 15261696. Throughput: 0: 807.7, 1: 808.5. Samples: 3813541. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:30:28,030][62705] Avg episode reward: [(0, '19.820'), (1, '21.410')] +[2023-09-26 02:30:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15294464. Throughput: 0: 812.3, 1: 814.4. Samples: 3823576. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:30:33,029][62705] Avg episode reward: [(0, '19.670'), (1, '21.090')] +[2023-09-26 02:30:36,320][63637] Updated weights for policy 1, policy_version 29920 (0.0015) +[2023-09-26 02:30:36,320][63636] Updated weights for policy 0, policy_version 29920 (0.0017) +[2023-09-26 02:30:38,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15327232. Throughput: 0: 808.0, 1: 807.4. Samples: 3828106. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:30:38,029][62705] Avg episode reward: [(0, '20.360'), (1, '20.950')] +[2023-09-26 02:30:43,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15360000. Throughput: 0: 808.9, 1: 812.2. Samples: 3837957. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:30:43,030][62705] Avg episode reward: [(0, '20.510'), (1, '20.070')] +[2023-09-26 02:30:48,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15392768. Throughput: 0: 817.6, 1: 815.9. Samples: 3847981. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 02:30:48,030][62705] Avg episode reward: [(0, '18.890'), (1, '20.590')] +[2023-09-26 02:30:48,885][63636] Updated weights for policy 0, policy_version 30080 (0.0018) +[2023-09-26 02:30:48,885][63637] Updated weights for policy 1, policy_version 30080 (0.0017) +[2023-09-26 02:30:53,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15425536. Throughput: 0: 810.9, 1: 811.1. Samples: 3852483. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:30:53,029][62705] Avg episode reward: [(0, '19.580'), (1, '19.760')] +[2023-09-26 02:30:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15458304. Throughput: 0: 810.9, 1: 812.4. Samples: 3862460. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:30:58,030][62705] Avg episode reward: [(0, '20.700'), (1, '18.220')] +[2023-09-26 02:30:58,041][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000030192_7729152.pth... +[2023-09-26 02:30:58,041][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000030192_7729152.pth... +[2023-09-26 02:30:58,077][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000027152_6950912.pth +[2023-09-26 02:30:58,077][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000027152_6950912.pth +[2023-09-26 02:31:01,507][63636] Updated weights for policy 0, policy_version 30240 (0.0019) +[2023-09-26 02:31:01,507][63637] Updated weights for policy 1, policy_version 30240 (0.0019) +[2023-09-26 02:31:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). 
Total num frames: 15491072. Throughput: 0: 814.8, 1: 811.9. Samples: 3872244. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:03,029][62705] Avg episode reward: [(0, '20.260'), (1, '18.920')] +[2023-09-26 02:31:08,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15523840. Throughput: 0: 809.7, 1: 811.5. Samples: 3876915. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:08,029][62705] Avg episode reward: [(0, '20.920'), (1, '18.460')] +[2023-09-26 02:31:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15556608. Throughput: 0: 816.0, 1: 817.8. Samples: 3887059. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:13,030][62705] Avg episode reward: [(0, '20.400'), (1, '18.610')] +[2023-09-26 02:31:13,975][63636] Updated weights for policy 0, policy_version 30400 (0.0016) +[2023-09-26 02:31:13,975][63637] Updated weights for policy 1, policy_version 30400 (0.0017) +[2023-09-26 02:31:18,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15589376. Throughput: 0: 813.8, 1: 812.9. Samples: 3896779. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:18,029][62705] Avg episode reward: [(0, '20.930'), (1, '19.620')] +[2023-09-26 02:31:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15622144. Throughput: 0: 813.3, 1: 816.4. Samples: 3901444. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:23,030][62705] Avg episode reward: [(0, '20.150'), (1, '18.640')] +[2023-09-26 02:31:26,555][63636] Updated weights for policy 0, policy_version 30560 (0.0016) +[2023-09-26 02:31:26,555][63637] Updated weights for policy 1, policy_version 30560 (0.0016) +[2023-09-26 02:31:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6511.9). Total num frames: 15654912. Throughput: 0: 818.9, 1: 816.2. Samples: 3911535. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:28,030][62705] Avg episode reward: [(0, '20.360'), (1, '18.560')] +[2023-09-26 02:31:33,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15687680. Throughput: 0: 814.5, 1: 812.7. Samples: 3921205. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:33,029][62705] Avg episode reward: [(0, '21.000'), (1, '18.510')] +[2023-09-26 02:31:38,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15720448. Throughput: 0: 815.5, 1: 818.6. Samples: 3926016. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:38,029][62705] Avg episode reward: [(0, '21.710'), (1, '18.840')] +[2023-09-26 02:31:39,090][63637] Updated weights for policy 1, policy_version 30720 (0.0018) +[2023-09-26 02:31:39,090][63636] Updated weights for policy 0, policy_version 30720 (0.0015) +[2023-09-26 02:31:43,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15753216. Throughput: 0: 818.1, 1: 817.0. Samples: 3936042. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:43,030][62705] Avg episode reward: [(0, '23.180'), (1, '19.150')] +[2023-09-26 02:31:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 15785984. Throughput: 0: 818.0, 1: 818.3. Samples: 3945874. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:48,030][62705] Avg episode reward: [(0, '22.920'), (1, '19.540')] +[2023-09-26 02:31:51,493][63637] Updated weights for policy 1, policy_version 30880 (0.0017) +[2023-09-26 02:31:51,494][63636] Updated weights for policy 0, policy_version 30880 (0.0015) +[2023-09-26 02:31:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15818752. Throughput: 0: 819.6, 1: 819.2. Samples: 3950661. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:53,030][62705] Avg episode reward: [(0, '23.150'), (1, '20.780')] +[2023-09-26 02:31:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15851520. Throughput: 0: 819.2, 1: 817.6. Samples: 3960717. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:31:58,030][62705] Avg episode reward: [(0, '24.670'), (1, '21.250')] +[2023-09-26 02:31:58,040][63291] Saving new best policy, reward=24.670! +[2023-09-26 02:32:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15884288. Throughput: 0: 814.8, 1: 814.8. Samples: 3970112. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:32:03,029][62705] Avg episode reward: [(0, '23.160'), (1, '21.400')] +[2023-09-26 02:32:04,290][63637] Updated weights for policy 1, policy_version 31040 (0.0019) +[2023-09-26 02:32:04,290][63636] Updated weights for policy 0, policy_version 31040 (0.0019) +[2023-09-26 02:32:08,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 15908864. Throughput: 0: 819.0, 1: 816.1. Samples: 3975020. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:32:08,030][62705] Avg episode reward: [(0, '23.160'), (1, '21.570')] +[2023-09-26 02:32:13,029][62705] Fps is (10 sec: 5734.2, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15941632. Throughput: 0: 812.6, 1: 812.8. Samples: 3984675. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:32:13,030][62705] Avg episode reward: [(0, '23.950'), (1, '22.080')] +[2023-09-26 02:32:16,975][63636] Updated weights for policy 0, policy_version 31200 (0.0015) +[2023-09-26 02:32:16,975][63637] Updated weights for policy 1, policy_version 31200 (0.0017) +[2023-09-26 02:32:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15974400. Throughput: 0: 810.2, 1: 811.0. Samples: 3994160. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:32:18,030][62705] Avg episode reward: [(0, '23.530'), (1, '19.770')] +[2023-09-26 02:32:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16007168. Throughput: 0: 815.2, 1: 812.5. Samples: 3999265. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:32:23,030][62705] Avg episode reward: [(0, '22.720'), (1, '20.380')] +[2023-09-26 02:32:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16039936. Throughput: 0: 811.1, 1: 811.4. Samples: 4009053. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:32:28,030][62705] Avg episode reward: [(0, '22.700'), (1, '21.080')] +[2023-09-26 02:32:29,431][63637] Updated weights for policy 1, policy_version 31360 (0.0015) +[2023-09-26 02:32:29,431][63636] Updated weights for policy 0, policy_version 31360 (0.0017) +[2023-09-26 02:32:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 16072704. Throughput: 0: 809.3, 1: 809.2. Samples: 4018705. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 02:32:33,030][62705] Avg episode reward: [(0, '21.160'), (1, '20.430')] +[2023-09-26 02:32:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 16105472. Throughput: 0: 812.8, 1: 810.4. Samples: 4023704. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:32:38,030][62705] Avg episode reward: [(0, '19.150'), (1, '20.500')] +[2023-09-26 02:32:42,123][63637] Updated weights for policy 1, policy_version 31520 (0.0017) +[2023-09-26 02:32:42,123][63636] Updated weights for policy 0, policy_version 31520 (0.0018) +[2023-09-26 02:32:43,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16138240. Throughput: 0: 807.0, 1: 806.3. Samples: 4033317. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:32:43,029][62705] Avg episode reward: [(0, '19.800'), (1, '20.750')] +[2023-09-26 02:32:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16171008. Throughput: 0: 809.7, 1: 808.5. Samples: 4042931. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:32:48,030][62705] Avg episode reward: [(0, '18.170'), (1, '19.270')] +[2023-09-26 02:32:53,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16203776. Throughput: 0: 811.3, 1: 811.2. Samples: 4048035. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:32:53,030][62705] Avg episode reward: [(0, '19.060'), (1, '20.150')] +[2023-09-26 02:32:54,677][63636] Updated weights for policy 0, policy_version 31680 (0.0016) +[2023-09-26 02:32:54,677][63637] Updated weights for policy 1, policy_version 31680 (0.0017) +[2023-09-26 02:32:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16236544. Throughput: 0: 812.6, 1: 812.2. Samples: 4057791. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:32:58,030][62705] Avg episode reward: [(0, '19.480'), (1, '19.350')] +[2023-09-26 02:32:58,040][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000031712_8118272.pth... +[2023-09-26 02:32:58,040][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000031712_8118272.pth... +[2023-09-26 02:32:58,074][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000028672_7340032.pth +[2023-09-26 02:32:58,080][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000028672_7340032.pth +[2023-09-26 02:33:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 16269312. Throughput: 0: 814.4, 1: 814.6. Samples: 4067466. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:33:03,030][62705] Avg episode reward: [(0, '18.570'), (1, '19.570')] +[2023-09-26 02:33:07,238][63636] Updated weights for policy 0, policy_version 31840 (0.0016) +[2023-09-26 02:33:07,238][63637] Updated weights for policy 1, policy_version 31840 (0.0015) +[2023-09-26 02:33:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16302080. Throughput: 0: 813.2, 1: 813.1. Samples: 4072449. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:33:08,030][62705] Avg episode reward: [(0, '17.910'), (1, '19.600')] +[2023-09-26 02:33:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16334848. Throughput: 0: 811.8, 1: 811.0. Samples: 4082079. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:33:13,030][62705] Avg episode reward: [(0, '18.210'), (1, '21.460')] +[2023-09-26 02:33:18,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16367616. Throughput: 0: 812.0, 1: 814.7. Samples: 4091908. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:33:18,030][62705] Avg episode reward: [(0, '18.170'), (1, '21.010')] +[2023-09-26 02:33:19,845][63636] Updated weights for policy 0, policy_version 32000 (0.0016) +[2023-09-26 02:33:19,846][63637] Updated weights for policy 1, policy_version 32000 (0.0016) +[2023-09-26 02:33:23,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16400384. Throughput: 0: 811.8, 1: 812.6. Samples: 4096801. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:33:23,029][62705] Avg episode reward: [(0, '18.900'), (1, '20.830')] +[2023-09-26 02:33:28,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16433152. Throughput: 0: 808.6, 1: 811.6. Samples: 4106227. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:33:28,029][62705] Avg episode reward: [(0, '19.400'), (1, '21.480')] +[2023-09-26 02:33:32,739][63636] Updated weights for policy 0, policy_version 32160 (0.0018) +[2023-09-26 02:33:32,740][63637] Updated weights for policy 1, policy_version 32160 (0.0019) +[2023-09-26 02:33:33,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16465920. Throughput: 0: 811.5, 1: 811.4. Samples: 4115960. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:33:33,030][62705] Avg episode reward: [(0, '19.510'), (1, '22.210')] +[2023-09-26 02:33:38,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16498688. Throughput: 0: 806.0, 1: 807.6. Samples: 4120646. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:33:38,030][62705] Avg episode reward: [(0, '19.690'), (1, '21.900')] +[2023-09-26 02:33:43,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16531456. Throughput: 0: 807.8, 1: 808.2. Samples: 4130512. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:33:43,029][62705] Avg episode reward: [(0, '18.870'), (1, '22.920')] +[2023-09-26 02:33:45,411][63636] Updated weights for policy 0, policy_version 32320 (0.0016) +[2023-09-26 02:33:45,412][63637] Updated weights for policy 1, policy_version 32320 (0.0017) +[2023-09-26 02:33:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16564224. Throughput: 0: 808.6, 1: 808.4. Samples: 4140229. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:33:48,030][62705] Avg episode reward: [(0, '19.400'), (1, '21.910')] +[2023-09-26 02:33:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16596992. Throughput: 0: 806.4, 1: 809.2. Samples: 4145152. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:33:53,029][62705] Avg episode reward: [(0, '18.640'), (1, '22.260')] +[2023-09-26 02:33:58,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 16621568. Throughput: 0: 807.2, 1: 807.1. Samples: 4154722. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 02:33:58,030][62705] Avg episode reward: [(0, '19.310'), (1, '23.910')] +[2023-09-26 02:33:58,056][63636] Updated weights for policy 0, policy_version 32480 (0.0016) +[2023-09-26 02:33:58,056][63637] Updated weights for policy 1, policy_version 32480 (0.0017) +[2023-09-26 02:34:03,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 16654336. Throughput: 0: 806.9, 1: 804.3. Samples: 4164413. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:34:03,030][62705] Avg episode reward: [(0, '19.500'), (1, '23.050')] +[2023-09-26 02:34:08,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 16687104. Throughput: 0: 807.9, 1: 808.4. Samples: 4169535. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:34:08,030][62705] Avg episode reward: [(0, '19.940'), (1, '22.670')] +[2023-09-26 02:34:10,763][63636] Updated weights for policy 0, policy_version 32640 (0.0016) +[2023-09-26 02:34:10,764][63637] Updated weights for policy 1, policy_version 32640 (0.0017) +[2023-09-26 02:34:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 16719872. Throughput: 0: 808.9, 1: 805.6. Samples: 4178880. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:34:13,030][62705] Avg episode reward: [(0, '19.900'), (1, '22.040')] +[2023-09-26 02:34:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 16752640. Throughput: 0: 803.2, 1: 803.9. Samples: 4188279. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 02:34:18,029][62705] Avg episode reward: [(0, '21.150'), (1, '22.310')] +[2023-09-26 02:34:23,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6484.2). Total num frames: 16785408. Throughput: 0: 807.0, 1: 805.6. Samples: 4193211. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:34:23,029][62705] Avg episode reward: [(0, '20.130'), (1, '22.580')] +[2023-09-26 02:34:23,611][63637] Updated weights for policy 1, policy_version 32800 (0.0018) +[2023-09-26 02:34:23,611][63636] Updated weights for policy 0, policy_version 32800 (0.0016) +[2023-09-26 02:34:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16818176. Throughput: 0: 803.6, 1: 803.4. Samples: 4202827. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:34:28,029][62705] Avg episode reward: [(0, '18.850'), (1, '23.300')] +[2023-09-26 02:34:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16850944. Throughput: 0: 804.2, 1: 807.1. Samples: 4212736. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:34:33,029][62705] Avg episode reward: [(0, '18.460'), (1, '23.720')] +[2023-09-26 02:34:36,176][63637] Updated weights for policy 1, policy_version 32960 (0.0016) +[2023-09-26 02:34:36,176][63636] Updated weights for policy 0, policy_version 32960 (0.0018) +[2023-09-26 02:34:38,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16883712. Throughput: 0: 804.8, 1: 801.8. Samples: 4217449. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:34:38,030][62705] Avg episode reward: [(0, '18.550'), (1, '23.240')] +[2023-09-26 02:34:43,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 16916480. Throughput: 0: 802.4, 1: 805.4. Samples: 4227073. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:34:43,030][62705] Avg episode reward: [(0, '18.010'), (1, '24.300')] +[2023-09-26 02:34:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16949248. Throughput: 0: 803.6, 1: 803.3. Samples: 4236723. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:34:48,030][62705] Avg episode reward: [(0, '17.840'), (1, '23.760')] +[2023-09-26 02:34:49,079][63636] Updated weights for policy 0, policy_version 33120 (0.0017) +[2023-09-26 02:34:49,081][63637] Updated weights for policy 1, policy_version 33120 (0.0016) +[2023-09-26 02:34:53,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16982016. Throughput: 0: 797.5, 1: 800.0. Samples: 4241424. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:34:53,029][62705] Avg episode reward: [(0, '18.260'), (1, '23.030')] +[2023-09-26 02:34:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 17014784. Throughput: 0: 806.5, 1: 808.1. Samples: 4251538. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:34:58,030][62705] Avg episode reward: [(0, '16.510'), (1, '23.020')] +[2023-09-26 02:34:58,042][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000033232_8507392.pth... +[2023-09-26 02:34:58,042][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000033232_8507392.pth... +[2023-09-26 02:34:58,077][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000030192_7729152.pth +[2023-09-26 02:34:58,077][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000030192_7729152.pth +[2023-09-26 02:35:01,606][63636] Updated weights for policy 0, policy_version 33280 (0.0018) +[2023-09-26 02:35:01,607][63637] Updated weights for policy 1, policy_version 33280 (0.0018) +[2023-09-26 02:35:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). 
Total num frames: 17047552. Throughput: 0: 809.7, 1: 808.9. Samples: 4261119. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:35:03,029][62705] Avg episode reward: [(0, '15.690'), (1, '22.860')] +[2023-09-26 02:35:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 17080320. Throughput: 0: 807.1, 1: 808.5. Samples: 4265915. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:35:08,030][62705] Avg episode reward: [(0, '15.900'), (1, '22.900')] +[2023-09-26 02:35:13,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17104896. Throughput: 0: 808.2, 1: 808.2. Samples: 4275565. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:35:13,030][62705] Avg episode reward: [(0, '16.800'), (1, '22.530')] +[2023-09-26 02:35:14,361][63637] Updated weights for policy 1, policy_version 33440 (0.0017) +[2023-09-26 02:35:14,361][63636] Updated weights for policy 0, policy_version 33440 (0.0018) +[2023-09-26 02:35:18,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 17137664. Throughput: 0: 804.2, 1: 800.9. Samples: 4284968. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:35:18,030][62705] Avg episode reward: [(0, '17.380'), (1, '22.240')] +[2023-09-26 02:35:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17170432. Throughput: 0: 805.5, 1: 805.0. Samples: 4289920. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:35:23,029][62705] Avg episode reward: [(0, '17.200'), (1, '22.740')] +[2023-09-26 02:35:27,019][63636] Updated weights for policy 0, policy_version 33600 (0.0019) +[2023-09-26 02:35:27,019][63637] Updated weights for policy 1, policy_version 33600 (0.0019) +[2023-09-26 02:35:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 17203200. Throughput: 0: 808.7, 1: 805.3. Samples: 4299702. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 02:35:28,030][62705] Avg episode reward: [(0, '17.480'), (1, '22.660')] +[2023-09-26 02:35:33,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 17235968. Throughput: 0: 805.2, 1: 805.2. Samples: 4309190. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:35:33,030][62705] Avg episode reward: [(0, '17.630'), (1, '22.020')] +[2023-09-26 02:35:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17268736. Throughput: 0: 811.3, 1: 808.5. Samples: 4314313. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:35:38,030][62705] Avg episode reward: [(0, '17.500'), (1, '22.130')] +[2023-09-26 02:35:39,664][63637] Updated weights for policy 1, policy_version 33760 (0.0018) +[2023-09-26 02:35:39,665][63636] Updated weights for policy 0, policy_version 33760 (0.0019) +[2023-09-26 02:35:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17301504. Throughput: 0: 806.0, 1: 805.2. Samples: 4324041. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:35:43,030][62705] Avg episode reward: [(0, '16.750'), (1, '22.410')] +[2023-09-26 02:35:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17334272. Throughput: 0: 805.2, 1: 806.6. Samples: 4333651. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:35:48,030][62705] Avg episode reward: [(0, '17.690'), (1, '22.020')] +[2023-09-26 02:35:52,254][63636] Updated weights for policy 0, policy_version 33920 (0.0016) +[2023-09-26 02:35:52,254][63637] Updated weights for policy 1, policy_version 33920 (0.0017) +[2023-09-26 02:35:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17367040. Throughput: 0: 808.7, 1: 806.9. Samples: 4338614. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:35:53,030][62705] Avg episode reward: [(0, '17.410'), (1, '22.160')] +[2023-09-26 02:35:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17399808. Throughput: 0: 809.8, 1: 809.5. Samples: 4348434. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:35:58,030][62705] Avg episode reward: [(0, '17.690'), (1, '22.460')] +[2023-09-26 02:36:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 17432576. Throughput: 0: 813.5, 1: 814.8. Samples: 4358244. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:03,030][62705] Avg episode reward: [(0, '17.690'), (1, '22.730')] +[2023-09-26 02:36:04,819][63636] Updated weights for policy 0, policy_version 34080 (0.0016) +[2023-09-26 02:36:04,819][63637] Updated weights for policy 1, policy_version 34080 (0.0018) +[2023-09-26 02:36:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17465344. Throughput: 0: 812.9, 1: 813.4. Samples: 4363104. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:08,030][62705] Avg episode reward: [(0, '17.770'), (1, '23.650')] +[2023-09-26 02:36:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17498112. Throughput: 0: 807.0, 1: 810.4. Samples: 4372481. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:13,030][62705] Avg episode reward: [(0, '19.000'), (1, '23.380')] +[2023-09-26 02:36:17,640][63636] Updated weights for policy 0, policy_version 34240 (0.0016) +[2023-09-26 02:36:17,640][63637] Updated weights for policy 1, policy_version 34240 (0.0015) +[2023-09-26 02:36:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17530880. Throughput: 0: 813.6, 1: 813.5. Samples: 4382408. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:18,030][62705] Avg episode reward: [(0, '19.030'), (1, '22.860')] +[2023-09-26 02:36:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17563648. Throughput: 0: 807.1, 1: 807.3. Samples: 4386961. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:23,030][62705] Avg episode reward: [(0, '19.940'), (1, '23.150')] +[2023-09-26 02:36:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17596416. Throughput: 0: 809.8, 1: 810.8. Samples: 4396968. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:28,030][62705] Avg episode reward: [(0, '19.000'), (1, '23.310')] +[2023-09-26 02:36:30,383][63636] Updated weights for policy 0, policy_version 34400 (0.0018) +[2023-09-26 02:36:30,384][63637] Updated weights for policy 1, policy_version 34400 (0.0016) +[2023-09-26 02:36:33,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17629184. Throughput: 0: 809.2, 1: 808.0. Samples: 4406424. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:33,029][62705] Avg episode reward: [(0, '18.250'), (1, '22.250')] +[2023-09-26 02:36:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17661952. Throughput: 0: 807.0, 1: 809.4. Samples: 4411350. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:38,029][62705] Avg episode reward: [(0, '18.740'), (1, '21.470')] +[2023-09-26 02:36:43,023][63636] Updated weights for policy 0, policy_version 34560 (0.0017) +[2023-09-26 02:36:43,024][63637] Updated weights for policy 1, policy_version 34560 (0.0016) +[2023-09-26 02:36:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17694720. Throughput: 0: 806.7, 1: 807.2. Samples: 4421059. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:43,029][62705] Avg episode reward: [(0, '18.650'), (1, '21.660')] +[2023-09-26 02:36:48,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17727488. Throughput: 0: 806.8, 1: 806.0. Samples: 4430821. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:48,030][62705] Avg episode reward: [(0, '19.090'), (1, '21.750')] +[2023-09-26 02:36:53,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17760256. Throughput: 0: 808.1, 1: 809.8. Samples: 4435913. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:53,030][62705] Avg episode reward: [(0, '19.280'), (1, '19.900')] +[2023-09-26 02:36:55,488][63636] Updated weights for policy 0, policy_version 34720 (0.0017) +[2023-09-26 02:36:55,488][63637] Updated weights for policy 1, policy_version 34720 (0.0018) +[2023-09-26 02:36:58,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 17793024. Throughput: 0: 814.9, 1: 812.2. Samples: 4445699. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:36:58,030][62705] Avg episode reward: [(0, '19.350'), (1, '19.850')] +[2023-09-26 02:36:58,039][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000034752_8896512.pth... +[2023-09-26 02:36:58,039][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000034752_8896512.pth... +[2023-09-26 02:36:58,074][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000031712_8118272.pth +[2023-09-26 02:36:58,076][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000031712_8118272.pth +[2023-09-26 02:37:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 17825792. Throughput: 0: 811.0, 1: 811.3. Samples: 4455412. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:37:03,030][62705] Avg episode reward: [(0, '20.570'), (1, '19.300')] +[2023-09-26 02:37:08,029][62705] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 17854464. Throughput: 0: 816.1, 1: 817.2. Samples: 4460459. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:37:08,030][62705] Avg episode reward: [(0, '20.520'), (1, '20.030')] +[2023-09-26 02:37:08,032][63636] Updated weights for policy 0, policy_version 34880 (0.0018) +[2023-09-26 02:37:08,032][63637] Updated weights for policy 1, policy_version 34880 (0.0017) +[2023-09-26 02:37:13,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17883136. Throughput: 0: 813.8, 1: 813.1. Samples: 4470178. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:37:13,030][62705] Avg episode reward: [(0, '20.460'), (1, '19.750')] +[2023-09-26 02:37:18,029][62705] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17915904. Throughput: 0: 815.5, 1: 815.5. Samples: 4479820. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 02:37:18,030][62705] Avg episode reward: [(0, '19.480'), (1, '19.350')] +[2023-09-26 02:37:20,591][63637] Updated weights for policy 1, policy_version 35040 (0.0014) +[2023-09-26 02:37:20,592][63636] Updated weights for policy 0, policy_version 35040 (0.0017) +[2023-09-26 02:37:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17948672. Throughput: 0: 818.5, 1: 817.0. Samples: 4484950. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:37:23,030][62705] Avg episode reward: [(0, '19.620'), (1, '20.290')] +[2023-09-26 02:37:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 17981440. Throughput: 0: 816.0, 1: 815.6. Samples: 4494481. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:37:28,030][62705] Avg episode reward: [(0, '19.810'), (1, '20.530')] +[2023-09-26 02:37:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 18014208. Throughput: 0: 815.8, 1: 815.6. Samples: 4504232. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:37:33,030][62705] Avg episode reward: [(0, '19.820'), (1, '19.830')] +[2023-09-26 02:37:33,151][63636] Updated weights for policy 0, policy_version 35200 (0.0018) +[2023-09-26 02:37:33,151][63637] Updated weights for policy 1, policy_version 35200 (0.0015) +[2023-09-26 02:37:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18046976. Throughput: 0: 815.0, 1: 815.2. Samples: 4509274. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:37:38,030][62705] Avg episode reward: [(0, '19.930'), (1, '19.380')] +[2023-09-26 02:37:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6470.3). Total num frames: 18079744. Throughput: 0: 813.3, 1: 813.3. Samples: 4518895. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 02:37:43,030][62705] Avg episode reward: [(0, '19.700'), (1, '20.200')] +[2023-09-26 02:37:46,164][63636] Updated weights for policy 0, policy_version 35360 (0.0018) +[2023-09-26 02:37:46,164][63637] Updated weights for policy 1, policy_version 35360 (0.0018) +[2023-09-26 02:37:48,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18112512. Throughput: 0: 806.5, 1: 809.1. Samples: 4528116. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:37:48,029][62705] Avg episode reward: [(0, '18.770'), (1, '20.480')] +[2023-09-26 02:37:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18145280. Throughput: 0: 804.2, 1: 803.2. Samples: 4532794. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:37:53,030][62705] Avg episode reward: [(0, '17.550'), (1, '19.790')] +[2023-09-26 02:37:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18178048. Throughput: 0: 801.8, 1: 804.1. Samples: 4542447. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:37:58,030][62705] Avg episode reward: [(0, '17.480'), (1, '20.080')] +[2023-09-26 02:37:58,918][63636] Updated weights for policy 0, policy_version 35520 (0.0016) +[2023-09-26 02:37:58,919][63637] Updated weights for policy 1, policy_version 35520 (0.0018) +[2023-09-26 02:38:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18210816. Throughput: 0: 804.8, 1: 804.9. Samples: 4552256. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 02:38:03,029][62705] Avg episode reward: [(0, '17.150'), (1, '20.540')] +[2023-09-26 02:38:08,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6485.4, 300 sec: 6470.3). Total num frames: 18243584. Throughput: 0: 800.0, 1: 799.6. Samples: 4556932. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:08,029][62705] Avg episode reward: [(0, '17.840'), (1, '21.300')] +[2023-09-26 02:38:11,566][63636] Updated weights for policy 0, policy_version 35680 (0.0016) +[2023-09-26 02:38:11,566][63637] Updated weights for policy 1, policy_version 35680 (0.0018) +[2023-09-26 02:38:13,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18276352. Throughput: 0: 804.4, 1: 805.3. Samples: 4566918. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:13,030][62705] Avg episode reward: [(0, '17.820'), (1, '22.440')] +[2023-09-26 02:38:18,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18309120. Throughput: 0: 802.8, 1: 801.7. Samples: 4576434. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:18,030][62705] Avg episode reward: [(0, '18.700'), (1, '22.590')] +[2023-09-26 02:38:23,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18341888. Throughput: 0: 800.6, 1: 801.7. Samples: 4581376. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:23,029][62705] Avg episode reward: [(0, '18.060'), (1, '22.740')] +[2023-09-26 02:38:24,154][63637] Updated weights for policy 1, policy_version 35840 (0.0017) +[2023-09-26 02:38:24,155][63636] Updated weights for policy 0, policy_version 35840 (0.0018) +[2023-09-26 02:38:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18374656. Throughput: 0: 805.0, 1: 804.8. Samples: 4591335. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:28,030][62705] Avg episode reward: [(0, '19.440'), (1, '23.160')] +[2023-09-26 02:38:33,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18407424. Throughput: 0: 810.1, 1: 808.0. Samples: 4600931. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:33,030][62705] Avg episode reward: [(0, '18.520'), (1, '23.510')] +[2023-09-26 02:38:36,787][63637] Updated weights for policy 1, policy_version 36000 (0.0016) +[2023-09-26 02:38:36,787][63636] Updated weights for policy 0, policy_version 36000 (0.0018) +[2023-09-26 02:38:38,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18432000. Throughput: 0: 811.4, 1: 811.5. Samples: 4605825. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:38,030][62705] Avg episode reward: [(0, '18.870'), (1, '23.890')] +[2023-09-26 02:38:43,029][62705] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18464768. Throughput: 0: 811.5, 1: 809.1. Samples: 4615377. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:43,030][62705] Avg episode reward: [(0, '19.250'), (1, '23.230')] +[2023-09-26 02:38:48,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6442.5). Total num frames: 18497536. Throughput: 0: 811.1, 1: 811.2. Samples: 4625261. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:48,029][62705] Avg episode reward: [(0, '20.240'), (1, '22.950')] +[2023-09-26 02:38:49,326][63637] Updated weights for policy 1, policy_version 36160 (0.0017) +[2023-09-26 02:38:49,326][63636] Updated weights for policy 0, policy_version 36160 (0.0016) +[2023-09-26 02:38:53,029][62705] Fps is (10 sec: 6963.2, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 18534400. Throughput: 0: 815.9, 1: 815.4. Samples: 4630338. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:53,030][62705] Avg episode reward: [(0, '20.540'), (1, '22.720')] +[2023-09-26 02:38:58,029][62705] Fps is (10 sec: 6963.0, 60 sec: 6485.3, 300 sec: 6484.2). Total num frames: 18567168. Throughput: 0: 813.6, 1: 813.6. Samples: 4640140. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:38:58,030][62705] Avg episode reward: [(0, '20.190'), (1, '22.550')] +[2023-09-26 02:38:58,041][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000036272_9285632.pth... +[2023-09-26 02:38:58,074][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000033232_8507392.pth +[2023-09-26 02:38:58,150][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000036272_9285632.pth... +[2023-09-26 02:38:58,178][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000033232_8507392.pth +[2023-09-26 02:39:01,943][63636] Updated weights for policy 0, policy_version 36320 (0.0017) +[2023-09-26 02:39:01,943][63637] Updated weights for policy 1, policy_version 36320 (0.0018) +[2023-09-26 02:39:03,029][62705] Fps is (10 sec: 6144.0, 60 sec: 6417.0, 300 sec: 6470.3). 
Total num frames: 18595840. Throughput: 0: 812.5, 1: 813.4. Samples: 4649599. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:39:03,030][62705] Avg episode reward: [(0, '20.890'), (1, '22.600')] +[2023-09-26 02:39:08,029][62705] Fps is (10 sec: 6144.2, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18628608. Throughput: 0: 817.1, 1: 813.1. Samples: 4654738. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:39:08,029][62705] Avg episode reward: [(0, '20.730'), (1, '22.110')] +[2023-09-26 02:39:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18661376. Throughput: 0: 811.7, 1: 811.5. Samples: 4664382. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:39:13,030][62705] Avg episode reward: [(0, '20.840'), (1, '21.770')] +[2023-09-26 02:39:14,441][63636] Updated weights for policy 0, policy_version 36480 (0.0017) +[2023-09-26 02:39:14,441][63637] Updated weights for policy 1, policy_version 36480 (0.0016) +[2023-09-26 02:39:18,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18694144. Throughput: 0: 815.1, 1: 814.2. Samples: 4674249. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:39:18,030][62705] Avg episode reward: [(0, '20.640'), (1, '21.790')] +[2023-09-26 02:39:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18726912. Throughput: 0: 815.2, 1: 815.1. Samples: 4679189. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:39:23,030][62705] Avg episode reward: [(0, '20.800'), (1, '21.630')] +[2023-09-26 02:39:26,921][63637] Updated weights for policy 1, policy_version 36640 (0.0017) +[2023-09-26 02:39:26,922][63636] Updated weights for policy 0, policy_version 36640 (0.0016) +[2023-09-26 02:39:28,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18759680. Throughput: 0: 818.6, 1: 818.3. Samples: 4689039. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:39:28,030][62705] Avg episode reward: [(0, '20.980'), (1, '21.500')] +[2023-09-26 02:39:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6470.3). Total num frames: 18792448. Throughput: 0: 817.1, 1: 816.8. Samples: 4698788. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:39:33,030][62705] Avg episode reward: [(0, '22.150'), (1, '20.760')] +[2023-09-26 02:39:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18825216. Throughput: 0: 816.9, 1: 816.9. Samples: 4703860. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:39:38,029][62705] Avg episode reward: [(0, '22.340'), (1, '19.900')] +[2023-09-26 02:39:39,419][63637] Updated weights for policy 1, policy_version 36800 (0.0018) +[2023-09-26 02:39:39,420][63636] Updated weights for policy 0, policy_version 36800 (0.0018) +[2023-09-26 02:39:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18857984. Throughput: 0: 817.3, 1: 816.5. Samples: 4713660. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 02:39:43,030][62705] Avg episode reward: [(0, '22.510'), (1, '20.720')] +[2023-09-26 02:39:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18890752. Throughput: 0: 815.7, 1: 816.0. Samples: 4723029. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:39:48,029][62705] Avg episode reward: [(0, '21.480'), (1, '20.860')] +[2023-09-26 02:39:52,100][63636] Updated weights for policy 0, policy_version 36960 (0.0016) +[2023-09-26 02:39:52,100][63637] Updated weights for policy 1, policy_version 36960 (0.0015) +[2023-09-26 02:39:53,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6485.4, 300 sec: 6470.3). Total num frames: 18923520. Throughput: 0: 814.3, 1: 815.9. Samples: 4728096. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:39:53,029][62705] Avg episode reward: [(0, '20.580'), (1, '21.230')] +[2023-09-26 02:39:58,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6485.3, 300 sec: 6470.3). Total num frames: 18956288. Throughput: 0: 814.8, 1: 815.3. Samples: 4737738. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:39:58,030][62705] Avg episode reward: [(0, '20.640'), (1, '20.960')] +[2023-09-26 02:40:03,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6470.3). Total num frames: 18989056. Throughput: 0: 809.7, 1: 812.9. Samples: 4747265. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:40:03,030][62705] Avg episode reward: [(0, '20.010'), (1, '21.160')] +[2023-09-26 02:40:04,886][63637] Updated weights for policy 1, policy_version 37120 (0.0017) +[2023-09-26 02:40:04,887][63636] Updated weights for policy 0, policy_version 37120 (0.0017) +[2023-09-26 02:40:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19021824. Throughput: 0: 809.8, 1: 810.3. Samples: 4752096. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:40:08,029][62705] Avg episode reward: [(0, '19.460'), (1, '20.170')] +[2023-09-26 02:40:13,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19054592. Throughput: 0: 806.5, 1: 807.8. Samples: 4761683. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:40:13,029][62705] Avg episode reward: [(0, '19.380'), (1, '19.120')] +[2023-09-26 02:40:17,518][63636] Updated weights for policy 0, policy_version 37280 (0.0016) +[2023-09-26 02:40:17,519][63637] Updated weights for policy 1, policy_version 37280 (0.0015) +[2023-09-26 02:40:18,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19087360. Throughput: 0: 810.2, 1: 811.4. Samples: 4771757. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:40:18,030][62705] Avg episode reward: [(0, '19.280'), (1, '20.450')] +[2023-09-26 02:40:23,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19120128. Throughput: 0: 806.1, 1: 806.5. Samples: 4776428. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:40:23,030][62705] Avg episode reward: [(0, '19.390'), (1, '19.530')] +[2023-09-26 02:40:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19152896. Throughput: 0: 804.3, 1: 807.2. Samples: 4786176. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:40:28,030][62705] Avg episode reward: [(0, '19.630'), (1, '20.190')] +[2023-09-26 02:40:30,137][63637] Updated weights for policy 1, policy_version 37440 (0.0018) +[2023-09-26 02:40:30,138][63636] Updated weights for policy 0, policy_version 37440 (0.0020) +[2023-09-26 02:40:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19185664. Throughput: 0: 812.2, 1: 812.1. Samples: 4796123. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 02:40:33,030][62705] Avg episode reward: [(0, '20.290'), (1, '20.950')] +[2023-09-26 02:40:38,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19218432. Throughput: 0: 807.2, 1: 806.9. Samples: 4800728. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 02:40:38,030][62705] Avg episode reward: [(0, '19.600'), (1, '20.880')] +[2023-09-26 02:40:42,720][63636] Updated weights for policy 0, policy_version 37600 (0.0017) +[2023-09-26 02:40:42,720][63637] Updated weights for policy 1, policy_version 37600 (0.0016) +[2023-09-26 02:40:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19251200. Throughput: 0: 809.9, 1: 812.1. Samples: 4810727. 
Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 02:40:43,030][62705] Avg episode reward: [(0, '20.460'), (1, '22.370')] +[2023-09-26 02:40:48,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19283968. Throughput: 0: 815.6, 1: 812.9. Samples: 4820545. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 02:40:48,030][62705] Avg episode reward: [(0, '21.110'), (1, '21.660')] +[2023-09-26 02:40:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19316736. Throughput: 0: 812.3, 1: 812.4. Samples: 4825208. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 02:40:53,030][62705] Avg episode reward: [(0, '21.350'), (1, '22.370')] +[2023-09-26 02:40:55,163][63636] Updated weights for policy 0, policy_version 37760 (0.0018) +[2023-09-26 02:40:55,163][63637] Updated weights for policy 1, policy_version 37760 (0.0017) +[2023-09-26 02:40:58,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19349504. Throughput: 0: 817.4, 1: 819.2. Samples: 4835329. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 02:40:58,029][62705] Avg episode reward: [(0, '21.430'), (1, '22.600')] +[2023-09-26 02:40:58,039][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000037792_9674752.pth... +[2023-09-26 02:40:58,039][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000037792_9674752.pth... +[2023-09-26 02:40:58,072][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000034752_8896512.pth +[2023-09-26 02:40:58,073][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000034752_8896512.pth +[2023-09-26 02:41:03,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19382272. Throughput: 0: 816.2, 1: 815.5. Samples: 4845183. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:03,030][62705] Avg episode reward: [(0, '21.560'), (1, '23.000')]
+[2023-09-26 02:41:07,727][63637] Updated weights for policy 1, policy_version 37920 (0.0017)
+[2023-09-26 02:41:07,727][63636] Updated weights for policy 0, policy_version 37920 (0.0017)
+[2023-09-26 02:41:08,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19415040. Throughput: 0: 814.2, 1: 815.2. Samples: 4849749. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:08,030][62705] Avg episode reward: [(0, '21.400'), (1, '23.240')]
+[2023-09-26 02:41:13,029][62705] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19447808. Throughput: 0: 819.2, 1: 818.5. Samples: 4859872. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:13,030][62705] Avg episode reward: [(0, '22.530'), (1, '23.680')]
+[2023-09-26 02:41:18,029][62705] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19480576. Throughput: 0: 817.5, 1: 817.3. Samples: 4869689. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:18,029][62705] Avg episode reward: [(0, '22.060'), (1, '23.340')]
+[2023-09-26 02:41:20,266][63636] Updated weights for policy 0, policy_version 38080 (0.0019)
+[2023-09-26 02:41:20,267][63637] Updated weights for policy 1, policy_version 38080 (0.0017)
+[2023-09-26 02:41:23,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19513344. Throughput: 0: 816.6, 1: 818.2. Samples: 4874291. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:23,030][62705] Avg episode reward: [(0, '22.660'), (1, '23.290')]
+[2023-09-26 02:41:28,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19546112. Throughput: 0: 819.2, 1: 817.3. Samples: 4884370. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:28,030][62705] Avg episode reward: [(0, '22.660'), (1, '22.140')]
+[2023-09-26 02:41:32,792][63637] Updated weights for policy 1, policy_version 38240 (0.0014)
+[2023-09-26 02:41:32,793][63636] Updated weights for policy 0, policy_version 38240 (0.0014)
+[2023-09-26 02:41:33,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19578880. Throughput: 0: 817.1, 1: 816.7. Samples: 4894063. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:33,030][62705] Avg episode reward: [(0, '21.800'), (1, '22.390')]
+[2023-09-26 02:41:38,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19611648. Throughput: 0: 817.0, 1: 819.1. Samples: 4898833. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:38,029][62705] Avg episode reward: [(0, '21.570'), (1, '23.930')]
+[2023-09-26 02:41:43,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19644416. Throughput: 0: 817.6, 1: 815.3. Samples: 4908812. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:43,030][62705] Avg episode reward: [(0, '21.190'), (1, '24.260')]
+[2023-09-26 02:41:45,380][63637] Updated weights for policy 1, policy_version 38400 (0.0017)
+[2023-09-26 02:41:45,381][63636] Updated weights for policy 0, policy_version 38400 (0.0017)
+[2023-09-26 02:41:48,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19677184. Throughput: 0: 815.3, 1: 815.2. Samples: 4918553. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:48,029][62705] Avg episode reward: [(0, '22.320'), (1, '23.180')]
+[2023-09-26 02:41:53,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19709952. Throughput: 0: 817.4, 1: 819.2. Samples: 4923393. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:53,030][62705] Avg episode reward: [(0, '22.360'), (1, '22.630')]
+[2023-09-26 02:41:57,858][63637] Updated weights for policy 1, policy_version 38560 (0.0016)
+[2023-09-26 02:41:57,858][63636] Updated weights for policy 0, policy_version 38560 (0.0018)
+[2023-09-26 02:41:58,029][62705] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 19742720. Throughput: 0: 819.0, 1: 816.5. Samples: 4933470. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:41:58,030][62705] Avg episode reward: [(0, '22.120'), (1, '23.180')]
+[2023-09-26 02:42:03,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6511.9). Total num frames: 19775488. Throughput: 0: 814.9, 1: 815.1. Samples: 4943041. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:42:03,030][62705] Avg episode reward: [(0, '21.890'), (1, '22.660')]
+[2023-09-26 02:42:08,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 19808256. Throughput: 0: 818.1, 1: 819.2. Samples: 4947968. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:42:08,030][62705] Avg episode reward: [(0, '22.030'), (1, '23.460')]
+[2023-09-26 02:42:10,378][63636] Updated weights for policy 0, policy_version 38720 (0.0015)
+[2023-09-26 02:42:10,379][63637] Updated weights for policy 1, policy_version 38720 (0.0016)
+[2023-09-26 02:42:13,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 19841024. Throughput: 0: 816.1, 1: 817.0. Samples: 4957857. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 02:42:13,030][62705] Avg episode reward: [(0, '22.190'), (1, '22.790')]
+[2023-09-26 02:42:18,029][62705] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 19873792. Throughput: 0: 815.1, 1: 815.3. Samples: 4967432. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 02:42:18,029][62705] Avg episode reward: [(0, '20.870'), (1, '22.770')]
+[2023-09-26 02:42:22,962][63637] Updated weights for policy 1, policy_version 38880 (0.0017)
+[2023-09-26 02:42:22,963][63636] Updated weights for policy 0, policy_version 38880 (0.0016)
+[2023-09-26 02:42:23,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 19906560. Throughput: 0: 818.8, 1: 817.8. Samples: 4972479. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 02:42:23,030][62705] Avg episode reward: [(0, '21.100'), (1, '23.320')]
+[2023-09-26 02:42:28,029][62705] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 19939328. Throughput: 0: 815.8, 1: 816.1. Samples: 4982247. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 02:42:28,029][62705] Avg episode reward: [(0, '21.540'), (1, '23.710')]
+[2023-09-26 02:42:33,029][62705] Fps is (10 sec: 6144.1, 60 sec: 6485.4, 300 sec: 6512.0). Total num frames: 19968000. Throughput: 0: 815.5, 1: 814.5. Samples: 4991903. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 02:42:33,029][62705] Avg episode reward: [(0, '22.870'), (1, '23.530')]
+[2023-09-26 02:42:35,694][63636] Updated weights for policy 0, policy_version 39040 (0.0018)
+[2023-09-26 02:42:35,694][63637] Updated weights for policy 1, policy_version 39040 (0.0016)
+[2023-09-26 02:42:38,029][62705] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 19996672. Throughput: 0: 816.0, 1: 813.0. Samples: 4996697. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 02:42:38,030][62705] Avg episode reward: [(0, '22.760'), (1, '22.720')]
+[2023-09-26 02:42:39,406][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-26 02:42:39,406][63673] Stopping RolloutWorker_w1...
+[2023-09-26 02:42:39,406][63678] Stopping RolloutWorker_w5...
+[2023-09-26 02:42:39,406][63679] Stopping RolloutWorker_w6...
+[2023-09-26 02:42:39,406][63673] Loop rollout_proc1_evt_loop terminating...
+[2023-09-26 02:42:39,406][63638] Stopping RolloutWorker_w0...
+[2023-09-26 02:42:39,406][63677] Stopping RolloutWorker_w4...
+[2023-09-26 02:42:39,406][63675] Stopping RolloutWorker_w3...
+[2023-09-26 02:42:39,407][63678] Loop rollout_proc5_evt_loop terminating...
+[2023-09-26 02:42:39,407][63680] Stopping RolloutWorker_w7...
+[2023-09-26 02:42:39,407][63676] Stopping RolloutWorker_w2...
+[2023-09-26 02:42:39,407][63679] Loop rollout_proc6_evt_loop terminating...
+[2023-09-26 02:42:39,407][62705] Component RolloutWorker_w1 stopped!
+[2023-09-26 02:42:39,407][63638] Loop rollout_proc0_evt_loop terminating...
+[2023-09-26 02:42:39,407][63677] Loop rollout_proc4_evt_loop terminating...
+[2023-09-26 02:42:39,407][63675] Loop rollout_proc3_evt_loop terminating...
+[2023-09-26 02:42:39,407][63676] Loop rollout_proc2_evt_loop terminating...
+[2023-09-26 02:42:39,407][63680] Loop rollout_proc7_evt_loop terminating...
+[2023-09-26 02:42:39,407][62705] Component RolloutWorker_w6 stopped!
+[2023-09-26 02:42:39,408][62705] Component RolloutWorker_w5 stopped!
+[2023-09-26 02:42:39,409][63410] Stopping Batcher_1...
+[2023-09-26 02:42:39,409][62705] Component RolloutWorker_w3 stopped!
+[2023-09-26 02:42:39,409][63410] Loop batcher_evt_loop terminating...
+[2023-09-26 02:42:39,409][62705] Component RolloutWorker_w4 stopped!
+[2023-09-26 02:42:39,410][62705] Component RolloutWorker_w0 stopped!
+[2023-09-26 02:42:39,410][62705] Component RolloutWorker_w7 stopped!
+[2023-09-26 02:42:39,411][62705] Component RolloutWorker_w2 stopped!
+[2023-09-26 02:42:39,411][62705] Component Batcher_1 stopped!
+[2023-09-26 02:42:39,413][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-26 02:42:39,427][62705] Component Batcher_0 stopped!
+[2023-09-26 02:42:39,435][63291] Stopping Batcher_0...
+[2023-09-26 02:42:39,438][63291] Loop batcher_evt_loop terminating...
+[2023-09-26 02:42:39,438][63291] Removing ./train_atari/atari_defender/checkpoint_p0/checkpoint_000036272_9285632.pth
+[2023-09-26 02:42:39,444][63291] Saving ./train_atari/atari_defender/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-26 02:42:39,445][63410] Removing ./train_atari/atari_defender/checkpoint_p1/checkpoint_000036272_9285632.pth
+[2023-09-26 02:42:39,449][63410] Saving ./train_atari/atari_defender/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-26 02:42:39,469][63636] Weights refcount: 2 0
+[2023-09-26 02:42:39,470][63636] Stopping InferenceWorker_p0-w0...
+[2023-09-26 02:42:39,470][63636] Loop inference_proc0-0_evt_loop terminating...
+[2023-09-26 02:42:39,470][62705] Component InferenceWorker_p0-w0 stopped!
+[2023-09-26 02:42:39,474][63637] Weights refcount: 2 0
+[2023-09-26 02:42:39,475][63637] Stopping InferenceWorker_p1-w0...
+[2023-09-26 02:42:39,476][62705] Component InferenceWorker_p1-w0 stopped!
+[2023-09-26 02:42:39,476][63637] Loop inference_proc1-0_evt_loop terminating...
+[2023-09-26 02:42:39,479][63291] Stopping LearnerWorker_p0...
+[2023-09-26 02:42:39,480][63291] Loop learner_proc0_evt_loop terminating...
+[2023-09-26 02:42:39,481][62705] Component LearnerWorker_p0 stopped!
+[2023-09-26 02:42:39,484][63410] Stopping LearnerWorker_p1...
+[2023-09-26 02:42:39,484][63410] Loop learner_proc1_evt_loop terminating...
+[2023-09-26 02:42:39,484][62705] Component LearnerWorker_p1 stopped!
+[2023-09-26 02:42:39,485][62705] Waiting for process learner_proc0 to stop...
+[2023-09-26 02:42:40,164][62705] Waiting for process learner_proc1 to stop...
+[2023-09-26 02:42:40,244][62705] Waiting for process inference_proc0-0 to join...
+[2023-09-26 02:42:40,245][62705] Waiting for process inference_proc1-0 to join...
+[2023-09-26 02:42:40,246][62705] Waiting for process rollout_proc0 to join...
+[2023-09-26 02:42:40,246][62705] Waiting for process rollout_proc1 to join...
+[2023-09-26 02:42:40,247][62705] Waiting for process rollout_proc2 to join...
+[2023-09-26 02:42:40,248][62705] Waiting for process rollout_proc3 to join...
+[2023-09-26 02:42:40,248][62705] Waiting for process rollout_proc4 to join...
+[2023-09-26 02:42:40,249][62705] Waiting for process rollout_proc5 to join...
+[2023-09-26 02:42:40,250][62705] Waiting for process rollout_proc6 to join...
+[2023-09-26 02:42:40,251][62705] Waiting for process rollout_proc7 to join...
+[2023-09-26 02:42:40,251][62705] Batcher 0 profile tree view:
+batching: 20.5102, releasing_batches: 1.7735
+[2023-09-26 02:42:40,252][62705] Batcher 1 profile tree view:
+batching: 20.3853, releasing_batches: 1.9669
+[2023-09-26 02:42:40,252][62705] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0051
+ wait_policy_total: 611.3834
+update_model: 36.4543
+ weight_update: 0.0016
+one_step: 0.0012
+ handle_policy_step: 2241.6364
+ deserialize: 65.1910, stack: 15.6022, obs_to_device_normalize: 550.0739, forward: 1072.8709, send_messages: 95.2200
+ prepare_outputs: 300.5524
+ to_cpu: 150.3891
+[2023-09-26 02:42:40,253][62705] InferenceWorker_p1-w0 profile tree view:
+wait_policy: 0.0051
+ wait_policy_total: 632.9509
+update_model: 36.0978
+ weight_update: 0.0017
+one_step: 0.0011
+ handle_policy_step: 2221.7408
+ deserialize: 66.4171, stack: 15.9044, obs_to_device_normalize: 540.5989, forward: 1068.1016, send_messages: 92.5030
+ prepare_outputs: 294.2774
+ to_cpu: 147.4074
+[2023-09-26 02:42:40,254][62705] Learner 0 profile tree view:
+misc: 0.0146, prepare_batch: 32.1408
+train: 459.7318
+ epoch_init: 0.1081, minibatch_init: 3.1205, losses_postprocess: 62.8743, kl_divergence: 5.4358, after_optimizer: 21.4994
+ calculate_losses: 44.9436
+ losses_init: 0.1019, forward_head: 14.2969, bptt_initial: 0.4395, bptt: 0.4493, tail: 10.2872, advantages_returns: 3.0555, losses: 12.7336
+ update: 317.6536
+ clip: 165.9991
+[2023-09-26 02:42:40,255][62705] Learner 1 profile tree view:
+misc: 0.0146, prepare_batch: 31.9131
+train: 459.2804
+ epoch_init: 0.1027, minibatch_init: 3.0376, losses_postprocess: 62.3148, kl_divergence: 5.4238, after_optimizer: 21.6696
+ calculate_losses: 45.0589
+ losses_init: 0.1058, forward_head: 14.4016, bptt_initial: 0.4434, bptt: 0.4508, tail: 10.2847, advantages_returns: 3.0806, losses: 12.6788
+ update: 317.5751
+ clip: 165.1858
+[2023-09-26 02:42:40,255][62705] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3922, enqueue_policy_requests: 42.9486, env_step: 996.9622, overhead: 30.0111, complete_rollouts: 1.0901
+save_policy_outputs: 54.0278
+ split_output_tensors: 18.5274
+[2023-09-26 02:42:40,256][62705] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.4079, enqueue_policy_requests: 43.2610, env_step: 1015.3878, overhead: 29.6419, complete_rollouts: 1.0554
+save_policy_outputs: 53.3152
+ split_output_tensors: 18.2824
+[2023-09-26 02:42:40,257][62705] Loop Runner_EvtLoop terminating...
+[2023-09-26 02:42:40,257][62705] Runner profile tree view:
+main_loop: 3097.9897
+[2023-09-26 02:42:40,258][62705] Collected {0: 10006528, 1: 10006528}, FPS: 6460.0