diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,2271 @@
+[2023-09-26 00:54:43,810][128642] Saving configuration to ./train_atari/atari_crazyclimber/config.json...
+[2023-09-26 00:54:44,128][128642] Rollout worker 0 uses device cpu
+[2023-09-26 00:54:44,128][128642] Rollout worker 1 uses device cpu
+[2023-09-26 00:54:44,129][128642] Rollout worker 2 uses device cpu
+[2023-09-26 00:54:44,129][128642] Rollout worker 3 uses device cpu
+[2023-09-26 00:54:44,130][128642] Rollout worker 4 uses device cpu
+[2023-09-26 00:54:44,130][128642] Rollout worker 5 uses device cpu
+[2023-09-26 00:54:44,131][128642] Rollout worker 6 uses device cpu
+[2023-09-26 00:54:44,131][128642] Rollout worker 7 uses device cpu
+[2023-09-26 00:54:44,132][128642] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1
+[2023-09-26 00:54:44,182][128642] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 00:54:44,182][128642] InferenceWorker_p0-w0: min num requests: 1
+[2023-09-26 00:54:44,185][128642] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-26 00:54:44,186][128642] InferenceWorker_p1-w0: min num requests: 1
+[2023-09-26 00:54:44,209][128642] Starting all processes...
+[2023-09-26 00:54:44,210][128642] Starting process learner_proc0
+[2023-09-26 00:54:45,800][128642] Starting process learner_proc1
+[2023-09-26 00:54:45,804][129304] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 00:54:45,804][129304] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-09-26 00:54:45,822][129304] Num visible devices: 1
+[2023-09-26 00:54:45,839][129304] Starting seed is not provided
+[2023-09-26 00:54:45,839][129304] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 00:54:45,839][129304] Initializing actor-critic model on device cuda:0
+[2023-09-26 00:54:45,839][129304] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-26 00:54:45,840][129304] RunningMeanStd input shape: (1,)
+[2023-09-26 00:54:45,851][129304] ConvEncoder: input_channels=4
+[2023-09-26 00:54:46,009][129304] Conv encoder output size: 512
+[2023-09-26 00:54:46,011][129304] Created Actor Critic model with architecture:
+[2023-09-26 00:54:46,011][129304] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): MultiInputEncoder(
+    (encoders): ModuleDict(
+      (obs): ConvEncoder(
+        (enc): RecursiveScriptModule(
+          original_name=ConvEncoderImpl
+          (conv_head): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Conv2d)
+            (1): RecursiveScriptModule(original_name=ReLU)
+            (2): RecursiveScriptModule(original_name=Conv2d)
+            (3): RecursiveScriptModule(original_name=ReLU)
+            (4): RecursiveScriptModule(original_name=Conv2d)
+            (5): RecursiveScriptModule(original_name=ReLU)
+          )
+          (mlp_layers): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Linear)
+            (1): RecursiveScriptModule(original_name=ReLU)
+          )
+        )
+      )
+    )
+  )
+  (core): ModelCoreIdentity()
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=9, bias=True)
+  )
+)
+[2023-09-26 00:54:46,584][129304] Using optimizer
+[2023-09-26 00:54:46,585][129304] No checkpoints found
+[2023-09-26 00:54:46,585][129304] Did not load from checkpoint, starting from scratch!
+[2023-09-26 00:54:46,585][129304] Initialized policy 0 weights for model version 0
+[2023-09-26 00:54:46,587][129304] LearnerWorker_p0 finished initialization!
+[2023-09-26 00:54:46,588][129304] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 00:54:47,423][128642] Starting all processes...
+[2023-09-26 00:54:47,427][129382] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-26 00:54:47,427][129382] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1
+[2023-09-26 00:54:47,431][128642] Starting process inference_proc0-0
+[2023-09-26 00:54:47,431][128642] Starting process inference_proc1-0
+[2023-09-26 00:54:47,432][128642] Starting process rollout_proc0
+[2023-09-26 00:54:47,432][128642] Starting process rollout_proc1
+[2023-09-26 00:54:47,445][129382] Num visible devices: 1
+[2023-09-26 00:54:47,432][128642] Starting process rollout_proc2
+[2023-09-26 00:54:47,467][129382] Starting seed is not provided
+[2023-09-26 00:54:47,467][129382] Using GPUs [0] for process 1 (actually maps to GPUs [1])
+[2023-09-26 00:54:47,467][129382] Initializing actor-critic model on device cuda:0
+[2023-09-26 00:54:47,435][128642] Starting process rollout_proc3
+[2023-09-26 00:54:47,468][129382] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-26 00:54:47,468][129382] RunningMeanStd input shape: (1,)
+[2023-09-26 00:54:47,437][128642] Starting process rollout_proc4
+[2023-09-26 00:54:47,440][128642] Starting process rollout_proc5
+[2023-09-26 00:54:47,442][128642] Starting process rollout_proc6
+[2023-09-26 00:54:47,448][128642] Starting process rollout_proc7
+[2023-09-26 00:54:47,481][129382] ConvEncoder: input_channels=4
+[2023-09-26 00:54:47,860][129382] Conv encoder output size: 512
+[2023-09-26 00:54:47,863][129382] Created Actor Critic model with architecture:
+[2023-09-26 00:54:47,863][129382] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): MultiInputEncoder(
+    (encoders): ModuleDict(
+      (obs): ConvEncoder(
+        (enc): RecursiveScriptModule(
+          original_name=ConvEncoderImpl
+          (conv_head): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Conv2d)
+            (1): RecursiveScriptModule(original_name=ReLU)
+            (2): RecursiveScriptModule(original_name=Conv2d)
+            (3): RecursiveScriptModule(original_name=ReLU)
+            (4): RecursiveScriptModule(original_name=Conv2d)
+            (5): RecursiveScriptModule(original_name=ReLU)
+          )
+          (mlp_layers): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Linear)
+            (1): RecursiveScriptModule(original_name=ReLU)
+          )
+        )
+      )
+    )
+  )
+  (core): ModelCoreIdentity()
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=9, bias=True)
+  )
+)
+[2023-09-26 00:54:48,553][129382] Using optimizer
+[2023-09-26 00:54:48,553][129382] No checkpoints found
+[2023-09-26 00:54:48,553][129382] Did not load from checkpoint, starting from scratch!
+[2023-09-26 00:54:48,554][129382] Initialized policy 1 weights for model version 0
+[2023-09-26 00:54:48,555][129382] LearnerWorker_p1 finished initialization!
+[2023-09-26 00:54:48,555][129382] Using GPUs [0] for process 1 (actually maps to GPUs [1])
+[2023-09-26 00:54:49,375][129536] Worker 7 uses CPU cores [28, 29, 30, 31]
+[2023-09-26 00:54:49,398][129496] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-26 00:54:49,398][129496] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1
+[2023-09-26 00:54:49,398][129534] Worker 4 uses CPU cores [16, 17, 18, 19]
+[2023-09-26 00:54:49,417][129496] Num visible devices: 1
+[2023-09-26 00:54:49,433][129529] Worker 0 uses CPU cores [0, 1, 2, 3]
+[2023-09-26 00:54:49,478][129535] Worker 6 uses CPU cores [24, 25, 26, 27]
+[2023-09-26 00:54:49,492][129531] Worker 2 uses CPU cores [8, 9, 10, 11]
+[2023-09-26 00:54:49,507][129533] Worker 5 uses CPU cores [20, 21, 22, 23]
+[2023-09-26 00:54:49,599][129528] Worker 1 uses CPU cores [4, 5, 6, 7]
+[2023-09-26 00:54:49,631][129532] Worker 3 uses CPU cores [12, 13, 14, 15]
+[2023-09-26 00:54:49,674][129495] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 00:54:49,675][129495] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-09-26 00:54:49,693][129495] Num visible devices: 1
+[2023-09-26 00:54:50,061][129496] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-26 00:54:50,062][129496] RunningMeanStd input shape: (1,)
+[2023-09-26 00:54:50,073][129496] ConvEncoder: input_channels=4
+[2023-09-26 00:54:50,082][128642] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-09-26 00:54:50,171][129496] Conv encoder output size: 512
+[2023-09-26 00:54:50,177][128642] Inference worker 1-0 is ready!
+[2023-09-26 00:54:50,264][129495] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-26 00:54:50,264][129495] RunningMeanStd input shape: (1,)
+[2023-09-26 00:54:50,275][129495] ConvEncoder: input_channels=4
+[2023-09-26 00:54:50,376][129495] Conv encoder output size: 512
+[2023-09-26 00:54:50,381][128642] Inference worker 0-0 is ready!
+[2023-09-26 00:54:50,382][128642] All inference workers are ready! Signal rollout workers to start!
+[2023-09-26 00:54:50,831][129531] Decorrelating experience for 0 frames...
+[2023-09-26 00:54:50,831][129528] Decorrelating experience for 0 frames...
+[2023-09-26 00:54:50,831][129536] Decorrelating experience for 0 frames...
+[2023-09-26 00:54:50,831][129535] Decorrelating experience for 0 frames...
+[2023-09-26 00:54:50,834][129533] Decorrelating experience for 0 frames...
+[2023-09-26 00:54:50,836][129532] Decorrelating experience for 0 frames...
+[2023-09-26 00:54:50,866][129534] Decorrelating experience for 0 frames...
+[2023-09-26 00:54:50,879][129529] Decorrelating experience for 0 frames...
+[2023-09-26 00:54:55,082][128642] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 8192. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 00:54:55,083][128642] Avg episode reward: [(0, '5.000')]
+[2023-09-26 00:55:00,082][128642] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 32768. Throughput: 0: 410.1, 1: 409.6. Samples: 8197. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 00:55:00,083][128642] Avg episode reward: [(0, '15.000'), (1, '15.286')]
+[2023-09-26 00:55:04,167][128642] Heartbeat connected on Batcher_0
+[2023-09-26 00:55:04,171][128642] Heartbeat connected on LearnerWorker_p0
+[2023-09-26 00:55:04,175][128642] Heartbeat connected on Batcher_1
+[2023-09-26 00:55:04,189][128642] Heartbeat connected on RolloutWorker_w0
+[2023-09-26 00:55:04,192][128642] Heartbeat connected on RolloutWorker_w1
+[2023-09-26 00:55:04,195][128642] Heartbeat connected on RolloutWorker_w2
+[2023-09-26 00:55:04,198][128642] Heartbeat connected on RolloutWorker_w3
+[2023-09-26 00:55:04,201][128642] Heartbeat connected on RolloutWorker_w4
+[2023-09-26 00:55:04,203][128642] Heartbeat connected on RolloutWorker_w5
+[2023-09-26 00:55:04,206][128642] Heartbeat connected on RolloutWorker_w6
+[2023-09-26 00:55:04,208][128642] Heartbeat connected on RolloutWorker_w7
+[2023-09-26 00:55:04,217][128642] Heartbeat connected on InferenceWorker_p0-w0
+[2023-09-26 00:55:04,234][128642] Heartbeat connected on InferenceWorker_p1-w0
+[2023-09-26 00:55:04,370][128642] Heartbeat connected on LearnerWorker_p1
+[2023-09-26 00:55:05,082][128642] Fps is (10 sec: 5734.4, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 65536. Throughput: 0: 443.5, 1: 429.5. Samples: 13095. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:55:05,083][128642] Avg episode reward: [(0, '17.333'), (1, '16.083')]
+[2023-09-26 00:55:06,850][129495] Updated weights for policy 0, policy_version 160 (0.0017)
+[2023-09-26 00:55:06,851][129496] Updated weights for policy 1, policy_version 160 (0.0018)
+[2023-09-26 00:55:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 98304. Throughput: 0: 577.7, 1: 567.2. Samples: 22897. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:55:10,083][128642] Avg episode reward: [(0, '21.263'), (1, '18.950')]
+[2023-09-26 00:55:15,082][128642] Fps is (10 sec: 6553.7, 60 sec: 5242.9, 300 sec: 5242.9). Total num frames: 131072. Throughput: 0: 655.6, 1: 655.4. Samples: 32773. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 00:55:15,082][128642] Avg episode reward: [(0, '23.077'), (1, '20.519')]
+[2023-09-26 00:55:19,278][129496] Updated weights for policy 1, policy_version 320 (0.0017)
+[2023-09-26 00:55:19,278][129495] Updated weights for policy 0, policy_version 320 (0.0018)
+[2023-09-26 00:55:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 5461.4, 300 sec: 5461.4). Total num frames: 163840. Throughput: 0: 633.1, 1: 626.0. Samples: 37772. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:55:20,083][128642] Avg episode reward: [(0, '23.294'), (1, '21.889')]
+[2023-09-26 00:55:25,082][128642] Fps is (10 sec: 6553.4, 60 sec: 5617.4, 300 sec: 5617.4). Total num frames: 196608. Throughput: 0: 681.9, 1: 675.7. Samples: 47516. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:55:25,083][128642] Avg episode reward: [(0, '23.795'), (1, '23.167')]
+[2023-09-26 00:55:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 229376. Throughput: 0: 719.1, 1: 716.8. Samples: 57436. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:55:30,083][128642] Avg episode reward: [(0, '23.208'), (1, '23.740')]
+[2023-09-26 00:55:30,084][129304] Saving new best policy, reward=23.208!
+[2023-09-26 00:55:30,084][129382] Saving new best policy, reward=23.740!
+[2023-09-26 00:55:31,779][129495] Updated weights for policy 0, policy_version 480 (0.0018)
+[2023-09-26 00:55:31,779][129496] Updated weights for policy 1, policy_version 480 (0.0018)
+[2023-09-26 00:55:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 5825.4, 300 sec: 5825.4). Total num frames: 262144. Throughput: 0: 695.9, 1: 691.2. Samples: 62420. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 00:55:35,083][128642] Avg episode reward: [(0, '23.857'), (1, '24.509')]
+[2023-09-26 00:55:35,084][129304] Saving new best policy, reward=23.857!
+[2023-09-26 00:55:35,084][129382] Saving new best policy, reward=24.509!
+[2023-09-26 00:55:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 5898.3, 300 sec: 5898.3). Total num frames: 294912. Throughput: 0: 777.5, 1: 773.7. Samples: 71853. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 00:55:40,082][128642] Avg episode reward: [(0, '24.065'), (1, '23.203')]
+[2023-09-26 00:55:40,085][129304] Saving new best policy, reward=24.065!
+[2023-09-26 00:55:44,441][129495] Updated weights for policy 0, policy_version 640 (0.0016)
+[2023-09-26 00:55:44,441][129496] Updated weights for policy 1, policy_version 640 (0.0017)
+[2023-09-26 00:55:45,082][128642] Fps is (10 sec: 6553.4, 60 sec: 5957.8, 300 sec: 5957.8). Total num frames: 327680. Throughput: 0: 819.1, 1: 818.5. Samples: 81890. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:55:45,083][128642] Avg episode reward: [(0, '23.286'), (1, '23.552')]
+[2023-09-26 00:55:50,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6007.5, 300 sec: 6007.5). Total num frames: 360448. Throughput: 0: 817.5, 1: 817.9. Samples: 86685. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 00:55:50,083][128642] Avg episode reward: [(0, '24.423'), (1, '23.342')]
+[2023-09-26 00:55:50,084][129304] Saving new best policy, reward=24.423!
+[2023-09-26 00:55:55,082][128642] Fps is (10 sec: 6553.9, 60 sec: 6417.1, 300 sec: 6049.5). Total num frames: 393216. Throughput: 0: 814.0, 1: 817.4. Samples: 96312. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:55:55,083][128642] Avg episode reward: [(0, '25.610'), (1, '23.851')]
+[2023-09-26 00:55:55,085][129304] Saving new best policy, reward=25.610!
+[2023-09-26 00:55:57,068][129495] Updated weights for policy 0, policy_version 800 (0.0016)
+[2023-09-26 00:55:57,069][129496] Updated weights for policy 1, policy_version 800 (0.0017)
+[2023-09-26 00:56:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6085.5). Total num frames: 425984. Throughput: 0: 819.1, 1: 815.3. Samples: 106322. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:56:00,083][128642] Avg episode reward: [(0, '27.180'), (1, '24.075')]
+[2023-09-26 00:56:00,083][129304] Saving new best policy, reward=27.180!
+[2023-09-26 00:56:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6116.7). Total num frames: 458752. Throughput: 0: 815.1, 1: 815.2. Samples: 111134. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:56:05,083][128642] Avg episode reward: [(0, '28.368'), (1, '24.280')]
+[2023-09-26 00:56:05,084][129304] Saving new best policy, reward=28.368!
+[2023-09-26 00:56:09,524][129495] Updated weights for policy 0, policy_version 960 (0.0019)
+[2023-09-26 00:56:09,525][129496] Updated weights for policy 1, policy_version 960 (0.0019)
+[2023-09-26 00:56:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6144.0). Total num frames: 491520. Throughput: 0: 814.7, 1: 817.0. Samples: 120945. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 00:56:10,083][128642] Avg episode reward: [(0, '30.030'), (1, '25.280')]
+[2023-09-26 00:56:10,091][129304] Saving new best policy, reward=30.030!
+[2023-09-26 00:56:10,091][129382] Saving new best policy, reward=25.280!
+[2023-09-26 00:56:15,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6168.1). Total num frames: 524288. Throughput: 0: 817.2, 1: 817.2. Samples: 130983. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 00:56:15,083][128642] Avg episode reward: [(0, '30.660'), (1, '26.300')]
+[2023-09-26 00:56:15,083][129304] Saving new best policy, reward=30.660!
+[2023-09-26 00:56:15,084][129382] Saving new best policy, reward=26.300!
+[2023-09-26 00:56:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6189.5). Total num frames: 557056. Throughput: 0: 815.7, 1: 815.5. Samples: 135826. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 00:56:20,083][128642] Avg episode reward: [(0, '31.110'), (1, '27.370')]
+[2023-09-26 00:56:20,084][129304] Saving new best policy, reward=31.110!
+[2023-09-26 00:56:20,085][129382] Saving new best policy, reward=27.370!
+[2023-09-26 00:56:21,891][129495] Updated weights for policy 0, policy_version 1120 (0.0015)
+[2023-09-26 00:56:21,891][129496] Updated weights for policy 1, policy_version 1120 (0.0018)
+[2023-09-26 00:56:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6208.7). Total num frames: 589824. Throughput: 0: 820.6, 1: 819.8. Samples: 145670. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:56:25,082][128642] Avg episode reward: [(0, '33.300'), (1, '27.520')]
+[2023-09-26 00:56:25,090][129304] Saving new best policy, reward=33.300!
+[2023-09-26 00:56:25,091][129382] Saving new best policy, reward=27.520!
+[2023-09-26 00:56:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6225.9). Total num frames: 622592. Throughput: 0: 819.2, 1: 818.0. Samples: 155563. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:56:30,083][128642] Avg episode reward: [(0, '33.210'), (1, '28.710')]
+[2023-09-26 00:56:30,083][129382] Saving new best policy, reward=28.710!
+[2023-09-26 00:56:34,447][129495] Updated weights for policy 0, policy_version 1280 (0.0018)
+[2023-09-26 00:56:34,447][129496] Updated weights for policy 1, policy_version 1280 (0.0017)
+[2023-09-26 00:56:35,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6241.5). Total num frames: 655360. Throughput: 0: 819.0, 1: 818.5. Samples: 160376. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 00:56:35,084][128642] Avg episode reward: [(0, '34.670'), (1, '29.420')]
+[2023-09-26 00:56:35,085][129304] Saving new best policy, reward=34.670!
+[2023-09-26 00:56:35,085][129382] Saving new best policy, reward=29.420!
+[2023-09-26 00:56:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6255.7). Total num frames: 688128. Throughput: 0: 818.0, 1: 818.9. Samples: 169970. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 00:56:40,083][128642] Avg episode reward: [(0, '36.090'), (1, '29.600')]
+[2023-09-26 00:56:40,089][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000001344_344064.pth...
+[2023-09-26 00:56:40,090][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000001344_344064.pth...
+[2023-09-26 00:56:40,124][129304] Saving new best policy, reward=36.090!
+[2023-09-26 00:56:40,131][129382] Saving new best policy, reward=29.600!
+[2023-09-26 00:56:45,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6268.7). Total num frames: 720896. Throughput: 0: 816.8, 1: 815.8. Samples: 179789. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 00:56:45,083][128642] Avg episode reward: [(0, '37.440'), (1, '31.160')]
+[2023-09-26 00:56:45,084][129304] Saving new best policy, reward=37.440!
+[2023-09-26 00:56:45,085][129382] Saving new best policy, reward=31.160!
+[2023-09-26 00:56:47,206][129496] Updated weights for policy 1, policy_version 1440 (0.0016)
+[2023-09-26 00:56:47,206][129495] Updated weights for policy 0, policy_version 1440 (0.0017)
+[2023-09-26 00:56:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6280.5). Total num frames: 753664. Throughput: 0: 814.7, 1: 815.6. Samples: 184497. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:56:50,083][128642] Avg episode reward: [(0, '38.410'), (1, '32.430')]
+[2023-09-26 00:56:50,084][129304] Saving new best policy, reward=38.410!
+[2023-09-26 00:56:50,084][129382] Saving new best policy, reward=32.430!
+[2023-09-26 00:56:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6291.5). Total num frames: 786432. Throughput: 0: 816.7, 1: 818.0. Samples: 194505. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:56:55,083][128642] Avg episode reward: [(0, '39.280'), (1, '34.320')]
+[2023-09-26 00:56:55,089][129304] Saving new best policy, reward=39.280!
+[2023-09-26 00:56:55,090][129382] Saving new best policy, reward=34.320!
+[2023-09-26 00:56:59,689][129495] Updated weights for policy 0, policy_version 1600 (0.0013)
+[2023-09-26 00:56:59,690][129496] Updated weights for policy 1, policy_version 1600 (0.0015)
+[2023-09-26 00:57:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6301.6). Total num frames: 819200. Throughput: 0: 817.7, 1: 814.7. Samples: 204443. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:00,083][128642] Avg episode reward: [(0, '40.800'), (1, '35.490')]
+[2023-09-26 00:57:00,083][129382] Saving new best policy, reward=35.490!
+[2023-09-26 00:57:00,083][129304] Saving new best policy, reward=40.800!
+[2023-09-26 00:57:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6310.9). Total num frames: 851968. Throughput: 0: 812.5, 1: 814.4. Samples: 209038. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 00:57:05,083][128642] Avg episode reward: [(0, '42.220'), (1, '36.250')]
+[2023-09-26 00:57:05,083][129304] Saving new best policy, reward=42.220!
+[2023-09-26 00:57:05,084][129382] Saving new best policy, reward=36.250!
+[2023-09-26 00:57:10,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6319.5). Total num frames: 884736. Throughput: 0: 814.2, 1: 818.6. Samples: 219144. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:10,083][128642] Avg episode reward: [(0, '42.640'), (1, '37.800')]
+[2023-09-26 00:57:10,092][129304] Saving new best policy, reward=42.640!
+[2023-09-26 00:57:10,094][129382] Saving new best policy, reward=37.800!
+[2023-09-26 00:57:12,121][129495] Updated weights for policy 0, policy_version 1760 (0.0017)
+[2023-09-26 00:57:12,121][129496] Updated weights for policy 1, policy_version 1760 (0.0015)
+[2023-09-26 00:57:15,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6327.6). Total num frames: 917504. Throughput: 0: 818.4, 1: 816.7. Samples: 229140. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:15,083][128642] Avg episode reward: [(0, '43.700'), (1, '38.790')]
+[2023-09-26 00:57:15,084][129304] Saving new best policy, reward=43.700!
+[2023-09-26 00:57:15,085][129382] Saving new best policy, reward=38.790!
+[2023-09-26 00:57:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6335.2). Total num frames: 950272. Throughput: 0: 814.7, 1: 815.0. Samples: 233712. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 00:57:20,083][128642] Avg episode reward: [(0, '44.180'), (1, '40.600')]
+[2023-09-26 00:57:20,084][129304] Saving new best policy, reward=44.180!
+[2023-09-26 00:57:20,084][129382] Saving new best policy, reward=40.600!
+[2023-09-26 00:57:24,651][129495] Updated weights for policy 0, policy_version 1920 (0.0019)
+[2023-09-26 00:57:24,651][129496] Updated weights for policy 1, policy_version 1920 (0.0019)
+[2023-09-26 00:57:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6342.2). Total num frames: 983040. Throughput: 0: 819.3, 1: 819.5. Samples: 243718. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:25,083][128642] Avg episode reward: [(0, '45.370'), (1, '40.940')]
+[2023-09-26 00:57:25,093][129304] Saving new best policy, reward=45.370!
+[2023-09-26 00:57:25,093][129382] Saving new best policy, reward=40.940!
+[2023-09-26 00:57:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6348.8). Total num frames: 1015808. Throughput: 0: 821.5, 1: 821.3. Samples: 253714. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:30,083][128642] Avg episode reward: [(0, '46.850'), (1, '41.440')]
+[2023-09-26 00:57:30,083][129304] Saving new best policy, reward=46.850!
+[2023-09-26 00:57:30,084][129382] Saving new best policy, reward=41.440!
+[2023-09-26 00:57:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6355.0). Total num frames: 1048576. Throughput: 0: 820.8, 1: 820.0. Samples: 258336. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 00:57:35,083][128642] Avg episode reward: [(0, '46.870'), (1, '41.080')]
+[2023-09-26 00:57:35,083][129304] Saving new best policy, reward=46.870!
+[2023-09-26 00:57:37,147][129496] Updated weights for policy 1, policy_version 2080 (0.0015)
+[2023-09-26 00:57:37,147][129495] Updated weights for policy 0, policy_version 2080 (0.0018)
+[2023-09-26 00:57:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6360.8). Total num frames: 1081344. Throughput: 0: 819.4, 1: 820.4. Samples: 268296. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:40,083][128642] Avg episode reward: [(0, '47.490'), (1, '41.430')]
+[2023-09-26 00:57:40,090][129304] Saving new best policy, reward=47.490!
+[2023-09-26 00:57:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6366.4). Total num frames: 1114112. Throughput: 0: 819.2, 1: 818.4. Samples: 278139. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:45,083][128642] Avg episode reward: [(0, '46.630'), (1, '41.970')]
+[2023-09-26 00:57:45,084][129382] Saving new best policy, reward=41.970!
+[2023-09-26 00:57:49,713][129496] Updated weights for policy 1, policy_version 2240 (0.0016)
+[2023-09-26 00:57:49,713][129495] Updated weights for policy 0, policy_version 2240 (0.0017)
+[2023-09-26 00:57:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6371.6). Total num frames: 1146880. Throughput: 0: 820.1, 1: 819.2. Samples: 282809. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:50,083][128642] Avg episode reward: [(0, '47.510'), (1, '42.690')]
+[2023-09-26 00:57:50,083][129304] Saving new best policy, reward=47.510!
+[2023-09-26 00:57:50,083][129382] Saving new best policy, reward=42.690!
+[2023-09-26 00:57:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6376.5). Total num frames: 1179648. Throughput: 0: 819.0, 1: 816.7. Samples: 292753. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:57:55,083][128642] Avg episode reward: [(0, '47.970'), (1, '42.930')]
+[2023-09-26 00:57:55,090][129304] Saving new best policy, reward=47.970!
+[2023-09-26 00:57:55,090][129382] Saving new best policy, reward=42.930!
+[2023-09-26 00:58:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6381.1). Total num frames: 1212416. Throughput: 0: 817.2, 1: 816.0. Samples: 302633. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 00:58:00,082][128642] Avg episode reward: [(0, '48.660'), (1, '40.250')]
+[2023-09-26 00:58:00,083][129304] Saving new best policy, reward=48.660!
+[2023-09-26 00:58:02,213][129495] Updated weights for policy 0, policy_version 2400 (0.0018)
+[2023-09-26 00:58:02,214][129496] Updated weights for policy 1, policy_version 2400 (0.0018)
+[2023-09-26 00:58:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6385.6). Total num frames: 1245184. Throughput: 0: 817.9, 1: 818.8. Samples: 307364. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:58:05,083][128642] Avg episode reward: [(0, '51.360'), (1, '42.200')]
+[2023-09-26 00:58:05,084][129304] Saving new best policy, reward=51.360!
+[2023-09-26 00:58:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6389.8). Total num frames: 1277952. Throughput: 0: 819.2, 1: 819.2. Samples: 317445. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:58:10,082][128642] Avg episode reward: [(0, '51.330'), (1, '42.770')]
+[2023-09-26 00:58:14,719][129496] Updated weights for policy 1, policy_version 2560 (0.0017)
+[2023-09-26 00:58:14,721][129495] Updated weights for policy 0, policy_version 2560 (0.0018)
+[2023-09-26 00:58:15,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6393.8). Total num frames: 1310720. Throughput: 0: 816.6, 1: 817.5. Samples: 327249. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 00:58:15,082][128642] Avg episode reward: [(0, '51.140'), (1, '42.320')]
+[2023-09-26 00:58:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6397.6). Total num frames: 1343488. Throughput: 0: 818.5, 1: 818.5. Samples: 332000. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 00:58:20,083][128642] Avg episode reward: [(0, '51.410'), (1, '42.720')]
+[2023-09-26 00:58:20,083][129304] Saving new best policy, reward=51.410!
+[2023-09-26 00:58:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6401.2). Total num frames: 1376256. Throughput: 0: 819.4, 1: 819.2. Samples: 342031. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:58:25,082][128642] Avg episode reward: [(0, '49.820'), (1, '41.650')]
+[2023-09-26 00:58:27,023][129495] Updated weights for policy 0, policy_version 2720 (0.0017)
+[2023-09-26 00:58:27,024][129496] Updated weights for policy 1, policy_version 2720 (0.0016)
+[2023-09-26 00:58:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6404.7). Total num frames: 1409024. Throughput: 0: 820.6, 1: 824.4. Samples: 352164. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:58:30,083][128642] Avg episode reward: [(0, '50.050'), (1, '41.660')]
+[2023-09-26 00:58:35,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6408.0). Total num frames: 1441792. Throughput: 0: 824.2, 1: 823.4. Samples: 356953. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:58:35,083][128642] Avg episode reward: [(0, '49.190'), (1, '42.700')]
+[2023-09-26 00:58:39,400][129495] Updated weights for policy 0, policy_version 2880 (0.0013)
+[2023-09-26 00:58:39,401][129496] Updated weights for policy 1, policy_version 2880 (0.0019)
+[2023-09-26 00:58:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6411.1). Total num frames: 1474560. Throughput: 0: 823.9, 1: 821.9. Samples: 366813. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 00:58:40,083][128642] Avg episode reward: [(0, '50.330'), (1, '43.570')]
+[2023-09-26 00:58:40,091][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000002880_737280.pth...
+[2023-09-26 00:58:40,091][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000002880_737280.pth...
+[2023-09-26 00:58:40,128][129382] Saving new best policy, reward=43.570!
+[2023-09-26 00:58:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6414.2). Total num frames: 1507328. Throughput: 0: 822.1, 1: 826.7. Samples: 376829. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:58:45,083][128642] Avg episode reward: [(0, '49.090'), (1, '44.960')]
+[2023-09-26 00:58:45,084][129382] Saving new best policy, reward=44.960!
+[2023-09-26 00:58:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6417.1). Total num frames: 1540096. Throughput: 0: 827.2, 1: 826.0. Samples: 381758. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 00:58:50,083][128642] Avg episode reward: [(0, '50.810'), (1, '43.990')]
+[2023-09-26 00:58:51,902][129495] Updated weights for policy 0, policy_version 3040 (0.0017)
+[2023-09-26 00:58:51,902][129496] Updated weights for policy 1, policy_version 3040 (0.0018)
+[2023-09-26 00:58:55,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6419.8). Total num frames: 1572864. Throughput: 0: 824.1, 1: 820.3. Samples: 391443. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 00:58:55,084][128642] Avg episode reward: [(0, '51.780'), (1, '44.790')]
+[2023-09-26 00:58:55,090][129304] Saving new best policy, reward=51.780!
+[2023-09-26 00:59:00,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6422.5). Total num frames: 1605632. Throughput: 0: 822.8, 1: 826.1. Samples: 401446. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 00:59:00,082][128642] Avg episode reward: [(0, '50.130'), (1, '44.660')]
+[2023-09-26 00:59:04,234][129496] Updated weights for policy 1, policy_version 3200 (0.0018)
+[2023-09-26 00:59:04,234][129495] Updated weights for policy 0, policy_version 3200 (0.0018)
+[2023-09-26 00:59:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6425.1). Total num frames: 1638400. Throughput: 0: 827.6, 1: 827.5. Samples: 406483. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 00:59:05,083][128642] Avg episode reward: [(0, '52.820'), (1, '44.420')]
+[2023-09-26 00:59:05,084][129304] Saving new best policy, reward=52.820!
+[2023-09-26 00:59:10,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6427.6). Total num frames: 1671168. Throughput: 0: 827.3, 1: 823.0. Samples: 416295. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:59:10,083][128642] Avg episode reward: [(0, '50.530'), (1, '44.400')]
+[2023-09-26 00:59:15,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6430.0). Total num frames: 1703936. Throughput: 0: 822.5, 1: 821.3. Samples: 426133. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 00:59:15,083][128642] Avg episode reward: [(0, '52.690'), (1, '44.670')]
+[2023-09-26 00:59:16,685][129495] Updated weights for policy 0, policy_version 3360 (0.0018)
+[2023-09-26 00:59:16,685][129496] Updated weights for policy 1, policy_version 3360 (0.0016)
+[2023-09-26 00:59:20,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6432.2). Total num frames: 1736704. Throughput: 0: 825.8, 1: 826.2. Samples: 431293. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 00:59:20,082][128642] Avg episode reward: [(0, '52.390'), (1, '45.860')]
+[2023-09-26 00:59:20,083][129382] Saving new best policy, reward=45.860!
+[2023-09-26 00:59:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6434.5). Total num frames: 1769472. Throughput: 0: 826.4, 1: 826.3. Samples: 441185. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:59:25,082][128642] Avg episode reward: [(0, '51.630'), (1, '45.480')]
+[2023-09-26 00:59:29,058][129495] Updated weights for policy 0, policy_version 3520 (0.0017)
+[2023-09-26 00:59:29,058][129496] Updated weights for policy 1, policy_version 3520 (0.0016)
+[2023-09-26 00:59:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6436.6). Total num frames: 1802240. Throughput: 0: 826.2, 1: 821.8. Samples: 450989. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 00:59:30,082][128642] Avg episode reward: [(0, '52.970'), (1, '47.860')]
+[2023-09-26 00:59:30,083][129382] Saving new best policy, reward=47.860!
+[2023-09-26 00:59:30,083][129304] Saving new best policy, reward=52.970!
+[2023-09-26 00:59:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6438.6). Total num frames: 1835008. Throughput: 0: 825.3, 1: 826.3. Samples: 456079. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 00:59:35,083][128642] Avg episode reward: [(0, '53.270'), (1, '46.500')]
+[2023-09-26 00:59:35,083][129304] Saving new best policy, reward=53.270!
+[2023-09-26 00:59:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6440.6). Total num frames: 1867776. Throughput: 0: 827.4, 1: 826.7. Samples: 465877.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 00:59:40,083][128642] Avg episode reward: [(0, '52.940'), (1, '46.720')] +[2023-09-26 00:59:41,671][129495] Updated weights for policy 0, policy_version 3680 (0.0016) +[2023-09-26 00:59:41,671][129496] Updated weights for policy 1, policy_version 3680 (0.0016) +[2023-09-26 00:59:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 1900544. Throughput: 0: 824.0, 1: 820.2. Samples: 475435. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 00:59:45,083][128642] Avg episode reward: [(0, '53.100'), (1, '47.250')] +[2023-09-26 00:59:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 1933312. Throughput: 0: 821.7, 1: 822.6. Samples: 480474. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 00:59:50,082][128642] Avg episode reward: [(0, '52.950'), (1, '46.100')] +[2023-09-26 00:59:54,207][129495] Updated weights for policy 0, policy_version 3840 (0.0017) +[2023-09-26 00:59:54,208][129496] Updated weights for policy 1, policy_version 3840 (0.0016) +[2023-09-26 00:59:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 1966080. Throughput: 0: 819.6, 1: 819.5. Samples: 490057. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 00:59:55,083][128642] Avg episode reward: [(0, '52.750'), (1, '46.830')] +[2023-09-26 01:00:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 1998848. Throughput: 0: 816.0, 1: 819.2. Samples: 499717. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 01:00:00,082][128642] Avg episode reward: [(0, '52.180'), (1, '46.910')] +[2023-09-26 01:00:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2031616. Throughput: 0: 814.3, 1: 813.5. Samples: 504545. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:00:05,082][128642] Avg episode reward: [(0, '52.370'), (1, '47.300')] +[2023-09-26 01:00:06,911][129495] Updated weights for policy 0, policy_version 4000 (0.0018) +[2023-09-26 01:00:06,911][129496] Updated weights for policy 1, policy_version 4000 (0.0017) +[2023-09-26 01:00:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2064384. Throughput: 0: 812.8, 1: 812.6. Samples: 514329. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:00:10,083][128642] Avg episode reward: [(0, '51.010'), (1, '48.320')] +[2023-09-26 01:00:10,090][129382] Saving new best policy, reward=48.320! +[2023-09-26 01:00:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2097152. Throughput: 0: 812.7, 1: 816.7. Samples: 524312. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:00:15,083][128642] Avg episode reward: [(0, '51.290'), (1, '49.550')] +[2023-09-26 01:00:15,085][129382] Saving new best policy, reward=49.550! +[2023-09-26 01:00:19,339][129495] Updated weights for policy 0, policy_version 4160 (0.0016) +[2023-09-26 01:00:19,339][129496] Updated weights for policy 1, policy_version 4160 (0.0016) +[2023-09-26 01:00:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2129920. Throughput: 0: 812.9, 1: 811.9. Samples: 529196. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:00:20,083][128642] Avg episode reward: [(0, '50.060'), (1, '50.850')] +[2023-09-26 01:00:20,084][129382] Saving new best policy, reward=50.850! +[2023-09-26 01:00:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2162688. Throughput: 0: 810.9, 1: 811.5. Samples: 538883. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:00:25,083][128642] Avg episode reward: [(0, '50.490'), (1, '49.260')] +[2023-09-26 01:00:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2195456. Throughput: 0: 813.7, 1: 818.2. Samples: 548868. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:00:30,083][128642] Avg episode reward: [(0, '49.740'), (1, '50.140')] +[2023-09-26 01:00:31,964][129495] Updated weights for policy 0, policy_version 4320 (0.0016) +[2023-09-26 01:00:31,964][129496] Updated weights for policy 1, policy_version 4320 (0.0016) +[2023-09-26 01:00:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2228224. Throughput: 0: 812.9, 1: 812.0. Samples: 553593. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:00:35,083][128642] Avg episode reward: [(0, '48.860'), (1, '51.040')] +[2023-09-26 01:00:35,085][129382] Saving new best policy, reward=51.040! +[2023-09-26 01:00:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2260992. Throughput: 0: 812.8, 1: 815.1. Samples: 563311. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 01:00:40,083][128642] Avg episode reward: [(0, '50.450'), (1, '51.070')] +[2023-09-26 01:00:40,095][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000004416_1130496.pth... +[2023-09-26 01:00:40,095][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000004416_1130496.pth... +[2023-09-26 01:00:40,130][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000001344_344064.pth +[2023-09-26 01:00:40,132][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000001344_344064.pth +[2023-09-26 01:00:40,135][129382] Saving new best policy, reward=51.070! 
+[2023-09-26 01:00:44,482][129495] Updated weights for policy 0, policy_version 4480 (0.0014) +[2023-09-26 01:00:44,484][129496] Updated weights for policy 1, policy_version 4480 (0.0017) +[2023-09-26 01:00:45,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2293760. Throughput: 0: 819.1, 1: 817.8. Samples: 573378. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:00:45,082][128642] Avg episode reward: [(0, '50.450'), (1, '52.920')] +[2023-09-26 01:00:45,083][129382] Saving new best policy, reward=52.920! +[2023-09-26 01:00:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2326528. Throughput: 0: 817.3, 1: 817.7. Samples: 578120. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:00:50,083][128642] Avg episode reward: [(0, '51.580'), (1, '54.270')] +[2023-09-26 01:00:50,083][129382] Saving new best policy, reward=54.270! +[2023-09-26 01:00:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2359296. Throughput: 0: 817.7, 1: 818.5. Samples: 587959. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:00:55,083][128642] Avg episode reward: [(0, '52.760'), (1, '54.910')] +[2023-09-26 01:00:55,092][129382] Saving new best policy, reward=54.910! +[2023-09-26 01:00:57,019][129496] Updated weights for policy 1, policy_version 4640 (0.0018) +[2023-09-26 01:00:57,019][129495] Updated weights for policy 0, policy_version 4640 (0.0017) +[2023-09-26 01:01:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2392064. Throughput: 0: 818.7, 1: 816.8. Samples: 597910. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:01:00,083][128642] Avg episode reward: [(0, '52.890'), (1, '55.710')] +[2023-09-26 01:01:00,084][129382] Saving new best policy, reward=55.710! +[2023-09-26 01:01:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2424832. 
Throughput: 0: 817.0, 1: 817.0. Samples: 602725. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:01:05,083][128642] Avg episode reward: [(0, '54.030'), (1, '55.880')] +[2023-09-26 01:01:05,084][129304] Saving new best policy, reward=54.030! +[2023-09-26 01:01:05,085][129382] Saving new best policy, reward=55.880! +[2023-09-26 01:01:09,376][129495] Updated weights for policy 0, policy_version 4800 (0.0016) +[2023-09-26 01:01:09,377][129496] Updated weights for policy 1, policy_version 4800 (0.0015) +[2023-09-26 01:01:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2457600. Throughput: 0: 819.3, 1: 818.7. Samples: 612592. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:01:10,083][128642] Avg episode reward: [(0, '57.530'), (1, '57.600')] +[2023-09-26 01:01:10,092][129304] Saving new best policy, reward=57.530! +[2023-09-26 01:01:10,093][129382] Saving new best policy, reward=57.600! +[2023-09-26 01:01:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2490368. Throughput: 0: 821.2, 1: 819.2. Samples: 622684. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:01:15,083][128642] Avg episode reward: [(0, '55.090'), (1, '56.220')] +[2023-09-26 01:01:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2523136. Throughput: 0: 825.6, 1: 825.1. Samples: 627873. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:01:20,082][128642] Avg episode reward: [(0, '56.040'), (1, '55.660')] +[2023-09-26 01:01:21,640][129495] Updated weights for policy 0, policy_version 4960 (0.0018) +[2023-09-26 01:01:21,640][129496] Updated weights for policy 1, policy_version 4960 (0.0017) +[2023-09-26 01:01:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2555904. Throughput: 0: 828.0, 1: 825.9. Samples: 637737. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:01:25,083][128642] Avg episode reward: [(0, '54.140'), (1, '55.200')] +[2023-09-26 01:01:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2588672. Throughput: 0: 824.1, 1: 821.1. Samples: 647413. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:01:30,083][128642] Avg episode reward: [(0, '54.340'), (1, '54.300')] +[2023-09-26 01:01:34,227][129496] Updated weights for policy 1, policy_version 5120 (0.0017) +[2023-09-26 01:01:34,228][129495] Updated weights for policy 0, policy_version 5120 (0.0017) +[2023-09-26 01:01:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2621440. Throughput: 0: 826.7, 1: 826.9. Samples: 652529. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:01:35,082][128642] Avg episode reward: [(0, '53.580'), (1, '54.000')] +[2023-09-26 01:01:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2654208. Throughput: 0: 823.5, 1: 822.8. Samples: 662044. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:01:40,082][128642] Avg episode reward: [(0, '56.430'), (1, '53.580')] +[2023-09-26 01:01:45,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2686976. Throughput: 0: 823.6, 1: 821.8. Samples: 671951. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:01:45,083][128642] Avg episode reward: [(0, '55.050'), (1, '53.070')] +[2023-09-26 01:01:46,619][129495] Updated weights for policy 0, policy_version 5280 (0.0014) +[2023-09-26 01:01:46,620][129496] Updated weights for policy 1, policy_version 5280 (0.0016) +[2023-09-26 01:01:50,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2719744. Throughput: 0: 827.4, 1: 827.5. Samples: 677196. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:01:50,083][128642] Avg episode reward: [(0, '55.590'), (1, '50.410')] +[2023-09-26 01:01:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2752512. Throughput: 0: 826.8, 1: 826.6. Samples: 686993. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:01:55,083][128642] Avg episode reward: [(0, '56.810'), (1, '49.220')] +[2023-09-26 01:01:58,992][129495] Updated weights for policy 0, policy_version 5440 (0.0017) +[2023-09-26 01:01:58,993][129496] Updated weights for policy 1, policy_version 5440 (0.0019) +[2023-09-26 01:02:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2785280. Throughput: 0: 826.0, 1: 823.3. Samples: 696906. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:02:00,083][128642] Avg episode reward: [(0, '54.900'), (1, '48.000')] +[2023-09-26 01:02:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2818048. Throughput: 0: 821.8, 1: 822.8. Samples: 701879. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:02:05,083][128642] Avg episode reward: [(0, '56.020'), (1, '48.730')] +[2023-09-26 01:02:10,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2850816. Throughput: 0: 820.8, 1: 820.2. Samples: 711586. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:02:10,084][128642] Avg episode reward: [(0, '51.450'), (1, '47.990')] +[2023-09-26 01:02:11,555][129495] Updated weights for policy 0, policy_version 5600 (0.0017) +[2023-09-26 01:02:11,555][129496] Updated weights for policy 1, policy_version 5600 (0.0016) +[2023-09-26 01:02:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2883584. Throughput: 0: 821.9, 1: 821.8. Samples: 721380. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:02:15,083][128642] Avg episode reward: [(0, '53.190'), (1, '47.570')] +[2023-09-26 01:02:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2916352. Throughput: 0: 821.6, 1: 821.4. Samples: 726465. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:02:20,083][128642] Avg episode reward: [(0, '55.510'), (1, '47.920')] +[2023-09-26 01:02:23,967][129496] Updated weights for policy 1, policy_version 5760 (0.0015) +[2023-09-26 01:02:23,967][129495] Updated weights for policy 0, policy_version 5760 (0.0017) +[2023-09-26 01:02:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2949120. Throughput: 0: 825.9, 1: 825.9. Samples: 736374. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:02:25,083][128642] Avg episode reward: [(0, '54.740'), (1, '49.100')] +[2023-09-26 01:02:30,082][128642] Fps is (10 sec: 6963.2, 60 sec: 6621.9, 300 sec: 6567.5). Total num frames: 2985984. Throughput: 0: 826.1, 1: 826.1. Samples: 746299. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:02:30,083][128642] Avg episode reward: [(0, '56.220'), (1, '48.770')] +[2023-09-26 01:02:35,090][128642] Fps is (10 sec: 6957.7, 60 sec: 6621.0, 300 sec: 6567.3). Total num frames: 3018752. Throughput: 0: 824.4, 1: 824.9. Samples: 751429. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:02:35,092][128642] Avg episode reward: [(0, '56.160'), (1, '50.490')] +[2023-09-26 01:02:36,321][129495] Updated weights for policy 0, policy_version 5920 (0.0018) +[2023-09-26 01:02:36,321][129496] Updated weights for policy 1, policy_version 5920 (0.0016) +[2023-09-26 01:02:40,082][128642] Fps is (10 sec: 6963.1, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 3055616. Throughput: 0: 825.9, 1: 825.9. Samples: 761323. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:02:40,083][128642] Avg episode reward: [(0, '58.130'), (1, '52.290')] +[2023-09-26 01:02:40,095][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000005968_1527808.pth... +[2023-09-26 01:02:40,095][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000005968_1527808.pth... +[2023-09-26 01:02:40,127][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000002880_737280.pth +[2023-09-26 01:02:40,130][129304] Saving new best policy, reward=58.130! +[2023-09-26 01:02:40,132][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000002880_737280.pth +[2023-09-26 01:02:45,082][128642] Fps is (10 sec: 6968.9, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 3088384. Throughput: 0: 824.7, 1: 824.7. Samples: 771128. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:02:45,083][128642] Avg episode reward: [(0, '57.510'), (1, '52.910')] +[2023-09-26 01:02:48,634][129496] Updated weights for policy 1, policy_version 6080 (0.0018) +[2023-09-26 01:02:48,634][129495] Updated weights for policy 0, policy_version 6080 (0.0018) +[2023-09-26 01:02:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 3121152. Throughput: 0: 823.6, 1: 827.3. Samples: 776172. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:02:50,083][128642] Avg episode reward: [(0, '56.500'), (1, '53.680')] +[2023-09-26 01:02:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 3153920. Throughput: 0: 828.0, 1: 828.3. Samples: 786120. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:02:55,083][128642] Avg episode reward: [(0, '58.550'), (1, '53.670')] +[2023-09-26 01:02:55,093][129304] Saving new best policy, reward=58.550! +[2023-09-26 01:03:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 3186688. 
Throughput: 0: 827.8, 1: 827.3. Samples: 795862. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:03:00,082][128642] Avg episode reward: [(0, '58.540'), (1, '55.190')] +[2023-09-26 01:03:01,172][129496] Updated weights for policy 1, policy_version 6240 (0.0018) +[2023-09-26 01:03:01,172][129495] Updated weights for policy 0, policy_version 6240 (0.0018) +[2023-09-26 01:03:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 3219456. Throughput: 0: 823.3, 1: 828.0. Samples: 800773. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:03:05,083][128642] Avg episode reward: [(0, '60.040'), (1, '56.070')] +[2023-09-26 01:03:05,084][129304] Saving new best policy, reward=60.040! +[2023-09-26 01:03:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 3252224. Throughput: 0: 827.0, 1: 827.6. Samples: 810832. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:03:10,083][128642] Avg episode reward: [(0, '63.680'), (1, '56.830')] +[2023-09-26 01:03:10,093][129304] Saving new best policy, reward=63.680! +[2023-09-26 01:03:13,505][129495] Updated weights for policy 0, policy_version 6400 (0.0017) +[2023-09-26 01:03:13,505][129496] Updated weights for policy 1, policy_version 6400 (0.0019) +[2023-09-26 01:03:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 3284992. Throughput: 0: 828.2, 1: 827.4. Samples: 820799. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:03:15,083][128642] Avg episode reward: [(0, '62.510'), (1, '57.030')] +[2023-09-26 01:03:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 3317760. Throughput: 0: 824.0, 1: 823.6. Samples: 825559. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:03:20,083][128642] Avg episode reward: [(0, '63.810'), (1, '57.700')] +[2023-09-26 01:03:20,084][129304] Saving new best policy, reward=63.810! 
+[2023-09-26 01:03:20,085][129382] Saving new best policy, reward=57.700! +[2023-09-26 01:03:25,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 3350528. Throughput: 0: 824.2, 1: 827.6. Samples: 835653. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:03:25,082][128642] Avg episode reward: [(0, '64.610'), (1, '57.720')] +[2023-09-26 01:03:25,089][129304] Saving new best policy, reward=64.610! +[2023-09-26 01:03:25,089][129382] Saving new best policy, reward=57.720! +[2023-09-26 01:03:25,772][129496] Updated weights for policy 1, policy_version 6560 (0.0018) +[2023-09-26 01:03:25,772][129495] Updated weights for policy 0, policy_version 6560 (0.0016) +[2023-09-26 01:03:30,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6621.9, 300 sec: 6581.4). Total num frames: 3383296. Throughput: 0: 827.6, 1: 827.4. Samples: 845601. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:03:30,082][128642] Avg episode reward: [(0, '64.050'), (1, '58.780')] +[2023-09-26 01:03:30,083][129382] Saving new best policy, reward=58.780! +[2023-09-26 01:03:35,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6622.7, 300 sec: 6581.4). Total num frames: 3416064. Throughput: 0: 825.6, 1: 821.9. Samples: 850311. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:03:35,083][128642] Avg episode reward: [(0, '66.120'), (1, '58.010')] +[2023-09-26 01:03:35,084][129304] Saving new best policy, reward=66.120! +[2023-09-26 01:03:38,256][129495] Updated weights for policy 0, policy_version 6720 (0.0017) +[2023-09-26 01:03:38,256][129496] Updated weights for policy 1, policy_version 6720 (0.0017) +[2023-09-26 01:03:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3448832. Throughput: 0: 823.0, 1: 825.0. Samples: 860281. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:03:40,083][128642] Avg episode reward: [(0, '65.130'), (1, '59.340')] +[2023-09-26 01:03:40,095][129382] Saving new best policy, reward=59.340! +[2023-09-26 01:03:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3481600. Throughput: 0: 825.8, 1: 829.0. Samples: 870328. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:03:45,083][128642] Avg episode reward: [(0, '64.900'), (1, '59.310')] +[2023-09-26 01:03:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3514368. Throughput: 0: 827.8, 1: 823.6. Samples: 875084. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:03:50,083][128642] Avg episode reward: [(0, '66.410'), (1, '57.680')] +[2023-09-26 01:03:50,083][129304] Saving new best policy, reward=66.410! +[2023-09-26 01:03:50,704][129495] Updated weights for policy 0, policy_version 6880 (0.0017) +[2023-09-26 01:03:50,705][129496] Updated weights for policy 1, policy_version 6880 (0.0016) +[2023-09-26 01:03:55,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3547136. Throughput: 0: 822.9, 1: 823.2. Samples: 884906. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:03:55,082][128642] Avg episode reward: [(0, '67.010'), (1, '57.220')] +[2023-09-26 01:03:55,089][129304] Saving new best policy, reward=67.010! +[2023-09-26 01:04:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3579904. Throughput: 0: 821.7, 1: 825.2. Samples: 894911. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 01:04:00,083][128642] Avg episode reward: [(0, '67.120'), (1, '58.900')] +[2023-09-26 01:04:00,084][129304] Saving new best policy, reward=67.120! 
+[2023-09-26 01:04:03,206][129495] Updated weights for policy 0, policy_version 7040 (0.0016) +[2023-09-26 01:04:03,206][129496] Updated weights for policy 1, policy_version 7040 (0.0017) +[2023-09-26 01:04:05,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3612672. Throughput: 0: 824.2, 1: 824.1. Samples: 899729. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 01:04:05,083][128642] Avg episode reward: [(0, '66.300'), (1, '56.440')] +[2023-09-26 01:04:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3645440. Throughput: 0: 823.8, 1: 820.6. Samples: 909650. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 01:04:10,083][128642] Avg episode reward: [(0, '65.110'), (1, '55.270')] +[2023-09-26 01:04:15,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3678208. Throughput: 0: 821.3, 1: 824.2. Samples: 919647. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:04:15,082][128642] Avg episode reward: [(0, '63.730'), (1, '54.350')] +[2023-09-26 01:04:15,514][129495] Updated weights for policy 0, policy_version 7200 (0.0015) +[2023-09-26 01:04:15,514][129496] Updated weights for policy 1, policy_version 7200 (0.0016) +[2023-09-26 01:04:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3710976. Throughput: 0: 825.9, 1: 825.6. Samples: 924628. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:04:20,083][128642] Avg episode reward: [(0, '62.960'), (1, '55.760')] +[2023-09-26 01:04:25,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3743744. Throughput: 0: 824.4, 1: 822.4. Samples: 934389. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:04:25,083][128642] Avg episode reward: [(0, '62.220'), (1, '55.710')] +[2023-09-26 01:04:27,976][129495] Updated weights for policy 0, policy_version 7360 (0.0018) +[2023-09-26 01:04:27,977][129496] Updated weights for policy 1, policy_version 7360 (0.0019) +[2023-09-26 01:04:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3776512. Throughput: 0: 823.0, 1: 820.8. Samples: 944300. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:04:30,083][128642] Avg episode reward: [(0, '61.640'), (1, '57.520')] +[2023-09-26 01:04:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3809280. Throughput: 0: 825.3, 1: 825.1. Samples: 949353. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:04:35,083][128642] Avg episode reward: [(0, '62.780'), (1, '57.760')] +[2023-09-26 01:04:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3842048. Throughput: 0: 824.4, 1: 823.0. Samples: 959041. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:04:40,083][128642] Avg episode reward: [(0, '60.670'), (1, '56.640')] +[2023-09-26 01:04:40,093][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000007504_1921024.pth... +[2023-09-26 01:04:40,093][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000007504_1921024.pth... 
+[2023-09-26 01:04:40,122][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000004416_1130496.pth +[2023-09-26 01:04:40,129][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000004416_1130496.pth +[2023-09-26 01:04:40,492][129495] Updated weights for policy 0, policy_version 7520 (0.0014) +[2023-09-26 01:04:40,492][129496] Updated weights for policy 1, policy_version 7520 (0.0017) +[2023-09-26 01:04:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3874816. Throughput: 0: 821.9, 1: 820.6. Samples: 968827. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:04:45,082][128642] Avg episode reward: [(0, '65.070'), (1, '58.260')] +[2023-09-26 01:04:50,082][128642] Fps is (10 sec: 6553.9, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3907584. Throughput: 0: 821.8, 1: 821.9. Samples: 973696. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:04:50,083][128642] Avg episode reward: [(0, '64.780'), (1, '59.830')] +[2023-09-26 01:04:50,084][129382] Saving new best policy, reward=59.830! +[2023-09-26 01:04:53,017][129496] Updated weights for policy 1, policy_version 7680 (0.0017) +[2023-09-26 01:04:53,017][129495] Updated weights for policy 0, policy_version 7680 (0.0018) +[2023-09-26 01:04:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3940352. Throughput: 0: 821.3, 1: 821.4. Samples: 983572. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:04:55,083][128642] Avg episode reward: [(0, '64.360'), (1, '60.490')] +[2023-09-26 01:04:55,091][129382] Saving new best policy, reward=60.490! +[2023-09-26 01:05:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3973120. Throughput: 0: 822.4, 1: 819.8. Samples: 993544. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:05:00,083][128642] Avg episode reward: [(0, '64.460'), (1, '62.860')] +[2023-09-26 01:05:00,085][129382] Saving new best policy, reward=62.860! +[2023-09-26 01:05:05,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4005888. Throughput: 0: 821.3, 1: 822.5. Samples: 998598. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:05:05,083][128642] Avg episode reward: [(0, '64.970'), (1, '62.190')] +[2023-09-26 01:05:05,450][129495] Updated weights for policy 0, policy_version 7840 (0.0017) +[2023-09-26 01:05:05,450][129496] Updated weights for policy 1, policy_version 7840 (0.0016) +[2023-09-26 01:05:10,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4038656. Throughput: 0: 819.8, 1: 820.0. Samples: 1008183. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:05:10,083][128642] Avg episode reward: [(0, '66.310'), (1, '62.850')] +[2023-09-26 01:05:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4071424. Throughput: 0: 818.4, 1: 819.2. Samples: 1017990. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:05:15,083][128642] Avg episode reward: [(0, '68.090'), (1, '64.620')] +[2023-09-26 01:05:15,084][129304] Saving new best policy, reward=68.090! +[2023-09-26 01:05:15,085][129382] Saving new best policy, reward=64.620! +[2023-09-26 01:05:17,969][129496] Updated weights for policy 1, policy_version 8000 (0.0018) +[2023-09-26 01:05:17,969][129495] Updated weights for policy 0, policy_version 8000 (0.0019) +[2023-09-26 01:05:20,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4104192. Throughput: 0: 818.7, 1: 818.3. Samples: 1023018. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:05:20,083][128642] Avg episode reward: [(0, '70.780'), (1, '64.950')] +[2023-09-26 01:05:20,084][129304] Saving new best policy, reward=70.780! +[2023-09-26 01:05:20,084][129382] Saving new best policy, reward=64.950! +[2023-09-26 01:05:25,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4136960. Throughput: 0: 819.6, 1: 819.9. Samples: 1032817. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:05:25,083][128642] Avg episode reward: [(0, '71.440'), (1, '69.330')] +[2023-09-26 01:05:25,090][129304] Saving new best policy, reward=71.440! +[2023-09-26 01:05:25,091][129382] Saving new best policy, reward=69.330! +[2023-09-26 01:05:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4169728. Throughput: 0: 821.2, 1: 819.4. Samples: 1042654. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:05:30,083][128642] Avg episode reward: [(0, '74.280'), (1, '69.710')] +[2023-09-26 01:05:30,084][129304] Saving new best policy, reward=74.280! +[2023-09-26 01:05:30,084][129382] Saving new best policy, reward=69.710! +[2023-09-26 01:05:30,405][129496] Updated weights for policy 1, policy_version 8160 (0.0017) +[2023-09-26 01:05:30,406][129495] Updated weights for policy 0, policy_version 8160 (0.0019) +[2023-09-26 01:05:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4202496. Throughput: 0: 822.6, 1: 821.8. Samples: 1047694. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:05:35,083][128642] Avg episode reward: [(0, '75.200'), (1, '69.180')] +[2023-09-26 01:05:35,084][129304] Saving new best policy, reward=75.200! +[2023-09-26 01:05:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4235264. Throughput: 0: 819.0, 1: 818.8. Samples: 1057273. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:05:40,083][128642] Avg episode reward: [(0, '79.900'), (1, '73.140')] +[2023-09-26 01:05:40,092][129304] Saving new best policy, reward=79.900! +[2023-09-26 01:05:40,092][129382] Saving new best policy, reward=73.140! +[2023-09-26 01:05:43,138][129496] Updated weights for policy 1, policy_version 8320 (0.0018) +[2023-09-26 01:05:43,138][129495] Updated weights for policy 0, policy_version 8320 (0.0016) +[2023-09-26 01:05:45,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4268032. Throughput: 0: 814.2, 1: 818.6. Samples: 1067018. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:05:45,082][128642] Avg episode reward: [(0, '78.570'), (1, '72.560')] +[2023-09-26 01:05:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4300800. Throughput: 0: 816.6, 1: 815.6. Samples: 1072046. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:05:50,083][128642] Avg episode reward: [(0, '81.240'), (1, '77.400')] +[2023-09-26 01:05:50,084][129304] Saving new best policy, reward=81.240! +[2023-09-26 01:05:50,084][129382] Saving new best policy, reward=77.400! +[2023-09-26 01:05:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4333568. Throughput: 0: 820.6, 1: 820.6. Samples: 1082034. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:05:55,083][128642] Avg episode reward: [(0, '79.960'), (1, '77.750')] +[2023-09-26 01:05:55,096][129382] Saving new best policy, reward=77.750! +[2023-09-26 01:05:55,393][129495] Updated weights for policy 0, policy_version 8480 (0.0017) +[2023-09-26 01:05:55,394][129496] Updated weights for policy 1, policy_version 8480 (0.0018) +[2023-09-26 01:06:00,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4366336. Throughput: 0: 819.8, 1: 819.3. Samples: 1091750. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:00,082][128642] Avg episode reward: [(0, '83.100'), (1, '81.920')] +[2023-09-26 01:06:00,083][129304] Saving new best policy, reward=83.100! +[2023-09-26 01:06:00,083][129382] Saving new best policy, reward=81.920! +[2023-09-26 01:06:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4399104. Throughput: 0: 820.4, 1: 820.8. Samples: 1096872. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:05,083][128642] Avg episode reward: [(0, '81.530'), (1, '83.160')] +[2023-09-26 01:06:05,083][129382] Saving new best policy, reward=83.160! +[2023-09-26 01:06:07,990][129495] Updated weights for policy 0, policy_version 8640 (0.0017) +[2023-09-26 01:06:07,990][129496] Updated weights for policy 1, policy_version 8640 (0.0016) +[2023-09-26 01:06:10,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4431872. Throughput: 0: 818.8, 1: 818.8. Samples: 1106506. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:10,083][128642] Avg episode reward: [(0, '82.670'), (1, '83.530')] +[2023-09-26 01:06:10,093][129382] Saving new best policy, reward=83.530! +[2023-09-26 01:06:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4464640. Throughput: 0: 815.4, 1: 819.0. Samples: 1116203. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:06:15,082][128642] Avg episode reward: [(0, '85.000'), (1, '84.180')] +[2023-09-26 01:06:15,083][129382] Saving new best policy, reward=84.180! +[2023-09-26 01:06:15,083][129304] Saving new best policy, reward=85.000! +[2023-09-26 01:06:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4497408. Throughput: 0: 814.4, 1: 815.2. Samples: 1121023. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:20,083][128642] Avg episode reward: [(0, '84.430'), (1, '86.090')] +[2023-09-26 01:06:20,084][129382] Saving new best policy, reward=86.090! +[2023-09-26 01:06:20,587][129496] Updated weights for policy 1, policy_version 8800 (0.0019) +[2023-09-26 01:06:20,587][129495] Updated weights for policy 0, policy_version 8800 (0.0019) +[2023-09-26 01:06:25,082][128642] Fps is (10 sec: 6553.3, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4530176. Throughput: 0: 819.2, 1: 819.4. Samples: 1131007. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:25,083][128642] Avg episode reward: [(0, '85.550'), (1, '85.140')] +[2023-09-26 01:06:25,092][129304] Saving new best policy, reward=85.550! +[2023-09-26 01:06:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4562944. Throughput: 0: 821.0, 1: 819.2. Samples: 1140827. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:30,083][128642] Avg episode reward: [(0, '82.790'), (1, '85.520')] +[2023-09-26 01:06:33,063][129495] Updated weights for policy 0, policy_version 8960 (0.0017) +[2023-09-26 01:06:33,064][129496] Updated weights for policy 1, policy_version 8960 (0.0016) +[2023-09-26 01:06:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4595712. Throughput: 0: 819.7, 1: 818.8. Samples: 1145782. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:35,083][128642] Avg episode reward: [(0, '82.280'), (1, '85.250')] +[2023-09-26 01:06:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4628480. Throughput: 0: 815.6, 1: 815.5. Samples: 1155432. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:40,083][128642] Avg episode reward: [(0, '79.950'), (1, '85.070')] +[2023-09-26 01:06:40,092][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000009040_2314240.pth... +[2023-09-26 01:06:40,092][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000009040_2314240.pth... +[2023-09-26 01:06:40,125][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000005968_1527808.pth +[2023-09-26 01:06:40,127][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000005968_1527808.pth +[2023-09-26 01:06:45,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4661248. Throughput: 0: 815.7, 1: 819.1. Samples: 1165314. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:06:45,083][128642] Avg episode reward: [(0, '74.960'), (1, '85.290')] +[2023-09-26 01:06:45,656][129495] Updated weights for policy 0, policy_version 9120 (0.0014) +[2023-09-26 01:06:45,656][129496] Updated weights for policy 1, policy_version 9120 (0.0017) +[2023-09-26 01:06:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4694016. Throughput: 0: 814.5, 1: 814.5. Samples: 1170175. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:50,082][128642] Avg episode reward: [(0, '75.340'), (1, '83.870')] +[2023-09-26 01:06:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4726784. Throughput: 0: 817.5, 1: 817.6. Samples: 1180087. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:06:55,083][128642] Avg episode reward: [(0, '73.380'), (1, '85.450')] +[2023-09-26 01:06:58,079][129496] Updated weights for policy 1, policy_version 9280 (0.0017) +[2023-09-26 01:06:58,079][129495] Updated weights for policy 0, policy_version 9280 (0.0018) +[2023-09-26 01:07:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4759552. Throughput: 0: 819.9, 1: 819.2. Samples: 1189963. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:07:00,082][128642] Avg episode reward: [(0, '73.740'), (1, '81.940')] +[2023-09-26 01:07:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4792320. Throughput: 0: 819.8, 1: 821.2. Samples: 1194871. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:07:05,082][128642] Avg episode reward: [(0, '77.880'), (1, '79.010')] +[2023-09-26 01:07:10,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4825088. Throughput: 0: 817.7, 1: 817.5. Samples: 1204590. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:07:10,083][128642] Avg episode reward: [(0, '77.550'), (1, '81.700')] +[2023-09-26 01:07:10,606][129496] Updated weights for policy 1, policy_version 9440 (0.0018) +[2023-09-26 01:07:10,606][129495] Updated weights for policy 0, policy_version 9440 (0.0017) +[2023-09-26 01:07:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4857856. Throughput: 0: 818.1, 1: 819.2. Samples: 1214507. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:07:15,083][128642] Avg episode reward: [(0, '78.610'), (1, '77.690')] +[2023-09-26 01:07:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4890624. Throughput: 0: 818.0, 1: 819.9. Samples: 1219485. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:07:20,083][128642] Avg episode reward: [(0, '76.640'), (1, '75.930')] +[2023-09-26 01:07:23,159][129495] Updated weights for policy 0, policy_version 9600 (0.0016) +[2023-09-26 01:07:23,159][129496] Updated weights for policy 1, policy_version 9600 (0.0018) +[2023-09-26 01:07:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6567.5). Total num frames: 4923392. Throughput: 0: 817.8, 1: 817.8. Samples: 1229036. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:07:25,083][128642] Avg episode reward: [(0, '81.940'), (1, '73.740')] +[2023-09-26 01:07:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6567.7). Total num frames: 4956160. Throughput: 0: 819.1, 1: 817.0. Samples: 1238941. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:07:30,083][128642] Avg episode reward: [(0, '76.630'), (1, '77.330')] +[2023-09-26 01:07:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 4988928. Throughput: 0: 815.5, 1: 815.4. Samples: 1243563. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 01:07:35,083][128642] Avg episode reward: [(0, '77.840'), (1, '75.710')] +[2023-09-26 01:07:35,783][129495] Updated weights for policy 0, policy_version 9760 (0.0019) +[2023-09-26 01:07:35,783][129496] Updated weights for policy 1, policy_version 9760 (0.0018) +[2023-09-26 01:07:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5021696. Throughput: 0: 815.0, 1: 816.6. Samples: 1253513. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 01:07:40,083][128642] Avg episode reward: [(0, '78.800'), (1, '77.400')] +[2023-09-26 01:07:45,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5054464. Throughput: 0: 817.8, 1: 819.2. Samples: 1263626. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:07:45,083][128642] Avg episode reward: [(0, '78.950'), (1, '77.340')] +[2023-09-26 01:07:48,227][129495] Updated weights for policy 0, policy_version 9920 (0.0016) +[2023-09-26 01:07:48,227][129496] Updated weights for policy 1, policy_version 9920 (0.0015) +[2023-09-26 01:07:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5087232. Throughput: 0: 818.4, 1: 817.9. Samples: 1268505. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:07:50,082][128642] Avg episode reward: [(0, '80.680'), (1, '80.970')] +[2023-09-26 01:07:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5120000. Throughput: 0: 815.9, 1: 817.5. Samples: 1278094. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:07:55,083][128642] Avg episode reward: [(0, '81.650'), (1, '82.670')] +[2023-09-26 01:08:00,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5152768. Throughput: 0: 818.3, 1: 819.2. Samples: 1288195. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 01:08:00,083][128642] Avg episode reward: [(0, '81.570'), (1, '83.570')] +[2023-09-26 01:08:00,776][129496] Updated weights for policy 1, policy_version 10080 (0.0017) +[2023-09-26 01:08:00,777][129495] Updated weights for policy 0, policy_version 10080 (0.0018) +[2023-09-26 01:08:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5185536. Throughput: 0: 815.7, 1: 814.4. Samples: 1292841. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 01:08:05,082][128642] Avg episode reward: [(0, '82.080'), (1, '85.350')] +[2023-09-26 01:08:10,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5218304. Throughput: 0: 817.6, 1: 819.0. Samples: 1302681. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:08:10,082][128642] Avg episode reward: [(0, '85.620'), (1, '86.210')] +[2023-09-26 01:08:10,093][129304] Saving new best policy, reward=85.620! +[2023-09-26 01:08:10,093][129382] Saving new best policy, reward=86.210! +[2023-09-26 01:08:13,211][129496] Updated weights for policy 1, policy_version 10240 (0.0016) +[2023-09-26 01:08:13,211][129495] Updated weights for policy 0, policy_version 10240 (0.0017) +[2023-09-26 01:08:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5251072. Throughput: 0: 819.2, 1: 820.8. Samples: 1312743. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:08:15,083][128642] Avg episode reward: [(0, '84.290'), (1, '83.570')] +[2023-09-26 01:08:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5283840. Throughput: 0: 820.0, 1: 820.5. Samples: 1317382. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:08:20,082][128642] Avg episode reward: [(0, '86.380'), (1, '84.580')] +[2023-09-26 01:08:20,083][129304] Saving new best policy, reward=86.380! +[2023-09-26 01:08:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5316608. Throughput: 0: 816.6, 1: 819.2. Samples: 1327126. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:08:25,083][128642] Avg episode reward: [(0, '88.050'), (1, '83.610')] +[2023-09-26 01:08:25,095][129304] Saving new best policy, reward=88.050! +[2023-09-26 01:08:25,967][129496] Updated weights for policy 1, policy_version 10400 (0.0019) +[2023-09-26 01:08:25,967][129495] Updated weights for policy 0, policy_version 10400 (0.0018) +[2023-09-26 01:08:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5349376. Throughput: 0: 816.7, 1: 811.3. Samples: 1336885. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:08:30,083][128642] Avg episode reward: [(0, '86.040'), (1, '83.250')] +[2023-09-26 01:08:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5382144. Throughput: 0: 811.3, 1: 812.1. Samples: 1341562. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:08:35,083][128642] Avg episode reward: [(0, '86.800'), (1, '80.570')] +[2023-09-26 01:08:38,650][129495] Updated weights for policy 0, policy_version 10560 (0.0016) +[2023-09-26 01:08:38,650][129496] Updated weights for policy 1, policy_version 10560 (0.0017) +[2023-09-26 01:08:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5414912. Throughput: 0: 816.0, 1: 814.0. Samples: 1351447. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:08:40,083][128642] Avg episode reward: [(0, '88.370'), (1, '84.920')] +[2023-09-26 01:08:40,094][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000010576_2707456.pth... +[2023-09-26 01:08:40,094][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000010576_2707456.pth... +[2023-09-26 01:08:40,127][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000007504_1921024.pth +[2023-09-26 01:08:40,130][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000007504_1921024.pth +[2023-09-26 01:08:40,135][129304] Saving new best policy, reward=88.370! +[2023-09-26 01:08:45,082][128642] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5439488. Throughput: 0: 805.5, 1: 800.9. Samples: 1360482. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:08:45,083][128642] Avg episode reward: [(0, '86.600'), (1, '84.120')] +[2023-09-26 01:08:50,082][128642] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 5472256. Throughput: 0: 807.5, 1: 807.4. Samples: 1365512. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:08:50,083][128642] Avg episode reward: [(0, '84.780'), (1, '83.210')] +[2023-09-26 01:08:51,499][129495] Updated weights for policy 0, policy_version 10720 (0.0015) +[2023-09-26 01:08:51,500][129496] Updated weights for policy 1, policy_version 10720 (0.0017) +[2023-09-26 01:08:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5505024. Throughput: 0: 808.0, 1: 806.2. Samples: 1375319. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:08:55,082][128642] Avg episode reward: [(0, '86.950'), (1, '84.190')] +[2023-09-26 01:09:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5537792. Throughput: 0: 807.2, 1: 803.1. Samples: 1385208. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:09:00,083][128642] Avg episode reward: [(0, '85.060'), (1, '85.060')] +[2023-09-26 01:09:03,852][129496] Updated weights for policy 1, policy_version 10880 (0.0018) +[2023-09-26 01:09:03,852][129495] Updated weights for policy 0, policy_version 10880 (0.0019) +[2023-09-26 01:09:05,082][128642] Fps is (10 sec: 6963.1, 60 sec: 6485.3, 300 sec: 6539.7). Total num frames: 5574656. Throughput: 0: 811.4, 1: 811.4. Samples: 1390406. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:09:05,083][128642] Avg episode reward: [(0, '86.120'), (1, '87.060')] +[2023-09-26 01:09:05,084][129382] Saving new best policy, reward=87.060! +[2023-09-26 01:09:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 5603328. Throughput: 0: 813.2, 1: 808.1. Samples: 1400084. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:09:10,083][128642] Avg episode reward: [(0, '85.940'), (1, '89.770')] +[2023-09-26 01:09:10,098][129382] Saving new best policy, reward=89.770! +[2023-09-26 01:09:15,082][128642] Fps is (10 sec: 6143.9, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5636096. 
Throughput: 0: 810.2, 1: 810.5. Samples: 1409816. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:09:15,083][128642] Avg episode reward: [(0, '86.850'), (1, '90.280')] +[2023-09-26 01:09:15,148][129382] Saving new best policy, reward=90.280! +[2023-09-26 01:09:16,406][129495] Updated weights for policy 0, policy_version 11040 (0.0015) +[2023-09-26 01:09:16,407][129496] Updated weights for policy 1, policy_version 11040 (0.0018) +[2023-09-26 01:09:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5668864. Throughput: 0: 815.8, 1: 813.6. Samples: 1414885. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 01:09:20,083][128642] Avg episode reward: [(0, '86.020'), (1, '88.180')] +[2023-09-26 01:09:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5701632. Throughput: 0: 812.5, 1: 813.2. Samples: 1424604. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 01:09:25,083][128642] Avg episode reward: [(0, '87.080'), (1, '86.550')] +[2023-09-26 01:09:28,803][129495] Updated weights for policy 0, policy_version 11200 (0.0018) +[2023-09-26 01:09:28,803][129496] Updated weights for policy 1, policy_version 11200 (0.0017) +[2023-09-26 01:09:30,082][128642] Fps is (10 sec: 7372.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5742592. Throughput: 0: 823.2, 1: 822.8. Samples: 1434555. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:09:30,083][128642] Avg episode reward: [(0, '83.500'), (1, '87.400')] +[2023-09-26 01:09:35,082][128642] Fps is (10 sec: 7372.9, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5775360. Throughput: 0: 822.4, 1: 823.4. Samples: 1439574. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:09:35,083][128642] Avg episode reward: [(0, '84.200'), (1, '88.140')] +[2023-09-26 01:09:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 5808128. 
Throughput: 0: 824.4, 1: 825.0. Samples: 1449545. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:09:40,082][128642] Avg episode reward: [(0, '79.770'), (1, '88.980')] +[2023-09-26 01:09:41,182][129495] Updated weights for policy 0, policy_version 11360 (0.0017) +[2023-09-26 01:09:41,183][129496] Updated weights for policy 1, policy_version 11360 (0.0017) +[2023-09-26 01:09:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6690.2, 300 sec: 6553.6). Total num frames: 5840896. Throughput: 0: 824.6, 1: 823.9. Samples: 1459389. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:09:45,082][128642] Avg episode reward: [(0, '77.680'), (1, '89.450')] +[2023-09-26 01:09:50,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 5873664. Throughput: 0: 819.2, 1: 823.3. Samples: 1464320. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:09:50,083][128642] Avg episode reward: [(0, '77.470'), (1, '89.170')] +[2023-09-26 01:09:53,697][129496] Updated weights for policy 1, policy_version 11520 (0.0016) +[2023-09-26 01:09:53,697][129495] Updated weights for policy 0, policy_version 11520 (0.0015) +[2023-09-26 01:09:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 5906432. Throughput: 0: 824.3, 1: 823.7. Samples: 1474245. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 01:09:55,083][128642] Avg episode reward: [(0, '77.300'), (1, '88.130')] +[2023-09-26 01:10:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 5939200. Throughput: 0: 823.7, 1: 824.1. Samples: 1483967. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:10:00,083][128642] Avg episode reward: [(0, '76.370'), (1, '87.390')] +[2023-09-26 01:10:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6621.9, 300 sec: 6553.6). Total num frames: 5971968. Throughput: 0: 820.3, 1: 824.8. Samples: 1488917. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:10:05,082][128642] Avg episode reward: [(0, '74.490'), (1, '85.860')] +[2023-09-26 01:10:06,049][129495] Updated weights for policy 0, policy_version 11680 (0.0018) +[2023-09-26 01:10:06,049][129496] Updated weights for policy 1, policy_version 11680 (0.0018) +[2023-09-26 01:10:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 6004736. Throughput: 0: 825.9, 1: 828.3. Samples: 1499041. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:10:10,083][128642] Avg episode reward: [(0, '73.610'), (1, '83.900')] +[2023-09-26 01:10:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6690.2, 300 sec: 6553.6). Total num frames: 6037504. Throughput: 0: 820.8, 1: 821.0. Samples: 1508436. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:10:15,082][128642] Avg episode reward: [(0, '73.420'), (1, '84.940')] +[2023-09-26 01:10:18,675][129495] Updated weights for policy 0, policy_version 11840 (0.0018) +[2023-09-26 01:10:18,676][129496] Updated weights for policy 1, policy_version 11840 (0.0018) +[2023-09-26 01:10:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 6070272. Throughput: 0: 819.3, 1: 822.3. Samples: 1513445. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:10:20,083][128642] Avg episode reward: [(0, '73.390'), (1, '86.450')] +[2023-09-26 01:10:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 6103040. Throughput: 0: 821.8, 1: 821.7. Samples: 1523503. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:10:25,083][128642] Avg episode reward: [(0, '70.340'), (1, '89.110')] +[2023-09-26 01:10:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 6135808. Throughput: 0: 818.4, 1: 819.2. Samples: 1533083. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:10:30,083][128642] Avg episode reward: [(0, '70.740'), (1, '88.930')] +[2023-09-26 01:10:31,194][129495] Updated weights for policy 0, policy_version 12000 (0.0018) +[2023-09-26 01:10:31,194][129496] Updated weights for policy 1, policy_version 12000 (0.0017) +[2023-09-26 01:10:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 6168576. Throughput: 0: 819.2, 1: 818.9. Samples: 1538036. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:10:35,083][128642] Avg episode reward: [(0, '69.180'), (1, '85.400')] +[2023-09-26 01:10:40,082][128642] Fps is (10 sec: 6143.9, 60 sec: 6485.3, 300 sec: 6539.7). Total num frames: 6197248. Throughput: 0: 816.1, 1: 817.5. Samples: 1547759. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:10:40,083][128642] Avg episode reward: [(0, '69.520'), (1, '86.320')] +[2023-09-26 01:10:40,093][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000012112_3100672.pth... +[2023-09-26 01:10:40,124][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000009040_2314240.pth +[2023-09-26 01:10:40,129][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000012112_3100672.pth... +[2023-09-26 01:10:40,157][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000009040_2314240.pth +[2023-09-26 01:10:43,906][129496] Updated weights for policy 1, policy_version 12160 (0.0018) +[2023-09-26 01:10:43,906][129495] Updated weights for policy 0, policy_version 12160 (0.0019) +[2023-09-26 01:10:45,082][128642] Fps is (10 sec: 5734.4, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 6225920. Throughput: 0: 812.7, 1: 814.2. Samples: 1557176. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:10:45,083][128642] Avg episode reward: [(0, '70.010'), (1, '83.660')]
+[2023-09-26 01:10:50,082][128642] Fps is (10 sec: 6143.9, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 6258688. Throughput: 0: 817.2, 1: 812.8. Samples: 1562269. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:10:50,084][128642] Avg episode reward: [(0, '69.770'), (1, '83.420')]
+[2023-09-26 01:10:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 6291456. Throughput: 0: 810.6, 1: 808.1. Samples: 1571882. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:10:55,083][128642] Avg episode reward: [(0, '73.110'), (1, '84.790')]
+[2023-09-26 01:10:56,445][129496] Updated weights for policy 1, policy_version 12320 (0.0015)
+[2023-09-26 01:10:56,446][129495] Updated weights for policy 0, policy_version 12320 (0.0016)
+[2023-09-26 01:11:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 6324224. Throughput: 0: 814.5, 1: 814.5. Samples: 1581739. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:11:00,083][128642] Avg episode reward: [(0, '72.880'), (1, '85.610')]
+[2023-09-26 01:11:05,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 6356992. Throughput: 0: 819.1, 1: 815.8. Samples: 1587014. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:11:05,083][128642] Avg episode reward: [(0, '72.370'), (1, '87.050')]
+[2023-09-26 01:11:08,967][129495] Updated weights for policy 0, policy_version 12480 (0.0015)
+[2023-09-26 01:11:08,967][129496] Updated weights for policy 1, policy_version 12480 (0.0017)
+[2023-09-26 01:11:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 6389760. Throughput: 0: 811.3, 1: 811.6. Samples: 1596533. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:11:10,083][128642] Avg episode reward: [(0, '73.040'), (1, '85.680')]
+[2023-09-26 01:11:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 6422528. Throughput: 0: 806.0, 1: 808.4. Samples: 1605731. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:11:15,083][128642] Avg episode reward: [(0, '76.090'), (1, '86.730')]
+[2023-09-26 01:11:20,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 6455296. Throughput: 0: 810.4, 1: 805.9. Samples: 1610770. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:11:20,082][128642] Avg episode reward: [(0, '77.340'), (1, '88.270')]
+[2023-09-26 01:11:21,789][129495] Updated weights for policy 0, policy_version 12640 (0.0016)
+[2023-09-26 01:11:21,789][129496] Updated weights for policy 1, policy_version 12640 (0.0017)
+[2023-09-26 01:11:25,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 6488064. Throughput: 0: 806.0, 1: 805.8. Samples: 1620289. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:11:25,082][128642] Avg episode reward: [(0, '77.480'), (1, '92.550')]
+[2023-09-26 01:11:25,093][129382] Saving new best policy, reward=92.550!
+[2023-09-26 01:11:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 6520832. Throughput: 0: 811.9, 1: 813.1. Samples: 1630302. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 01:11:30,083][128642] Avg episode reward: [(0, '76.010'), (1, '90.750')]
+[2023-09-26 01:11:34,300][129496] Updated weights for policy 1, policy_version 12800 (0.0016)
+[2023-09-26 01:11:34,301][129495] Updated weights for policy 0, policy_version 12800 (0.0015)
+[2023-09-26 01:11:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 6553600. Throughput: 0: 810.3, 1: 810.3. Samples: 1635192. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 01:11:35,082][128642] Avg episode reward: [(0, '75.060'), (1, '92.300')]
+[2023-09-26 01:11:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6485.3, 300 sec: 6525.8). Total num frames: 6586368. Throughput: 0: 812.7, 1: 812.8. Samples: 1645030. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:11:40,083][128642] Avg episode reward: [(0, '74.960'), (1, '92.400')]
+[2023-09-26 01:11:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6619136. Throughput: 0: 809.6, 1: 814.1. Samples: 1654805. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:11:45,083][128642] Avg episode reward: [(0, '76.130'), (1, '93.360')]
+[2023-09-26 01:11:45,084][129382] Saving new best policy, reward=93.360!
+[2023-09-26 01:11:46,865][129495] Updated weights for policy 0, policy_version 12960 (0.0016)
+[2023-09-26 01:11:46,865][129496] Updated weights for policy 1, policy_version 12960 (0.0017)
+[2023-09-26 01:11:50,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6651904. Throughput: 0: 808.2, 1: 807.5. Samples: 1659721. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 01:11:50,083][128642] Avg episode reward: [(0, '73.800'), (1, '89.810')]
+[2023-09-26 01:11:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6684672. Throughput: 0: 808.2, 1: 808.8. Samples: 1669298. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 01:11:55,083][128642] Avg episode reward: [(0, '73.670'), (1, '88.010')]
+[2023-09-26 01:11:59,507][129496] Updated weights for policy 1, policy_version 13120 (0.0015)
+[2023-09-26 01:11:59,507][129495] Updated weights for policy 0, policy_version 13120 (0.0018)
+[2023-09-26 01:12:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6717440. Throughput: 0: 817.0, 1: 816.8. Samples: 1679250. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:12:00,083][128642] Avg episode reward: [(0, '73.180'), (1, '88.000')]
+[2023-09-26 01:12:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6750208. Throughput: 0: 815.3, 1: 815.2. Samples: 1684141. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:12:05,083][128642] Avg episode reward: [(0, '74.200'), (1, '90.670')]
+[2023-09-26 01:12:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6782976. Throughput: 0: 819.1, 1: 819.2. Samples: 1694014. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:12:10,083][128642] Avg episode reward: [(0, '73.570'), (1, '89.230')]
+[2023-09-26 01:12:11,855][129496] Updated weights for policy 1, policy_version 13280 (0.0017)
+[2023-09-26 01:12:11,855][129495] Updated weights for policy 0, policy_version 13280 (0.0017)
+[2023-09-26 01:12:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6815744. Throughput: 0: 817.5, 1: 819.2. Samples: 1703952. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:12:15,083][128642] Avg episode reward: [(0, '75.990'), (1, '89.980')]
+[2023-09-26 01:12:20,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6848512. Throughput: 0: 818.8, 1: 819.5. Samples: 1708916. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:12:20,083][128642] Avg episode reward: [(0, '72.940'), (1, '92.090')]
+[2023-09-26 01:12:24,469][129496] Updated weights for policy 1, policy_version 13440 (0.0015)
+[2023-09-26 01:12:24,469][129495] Updated weights for policy 0, policy_version 13440 (0.0016)
+[2023-09-26 01:12:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6881280. Throughput: 0: 817.3, 1: 816.9. Samples: 1718569. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:12:25,083][128642] Avg episode reward: [(0, '71.800'), (1, '91.770')]
+[2023-09-26 01:12:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6914048. Throughput: 0: 818.7, 1: 817.2. Samples: 1728422. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:12:30,083][128642] Avg episode reward: [(0, '71.630'), (1, '91.870')]
+[2023-09-26 01:12:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6946816. Throughput: 0: 817.2, 1: 817.3. Samples: 1733274. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:12:35,083][128642] Avg episode reward: [(0, '71.400'), (1, '91.160')]
+[2023-09-26 01:12:36,850][129496] Updated weights for policy 1, policy_version 13600 (0.0017)
+[2023-09-26 01:12:36,850][129495] Updated weights for policy 0, policy_version 13600 (0.0016)
+[2023-09-26 01:12:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6979584. Throughput: 0: 822.6, 1: 821.6. Samples: 1743287. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:12:40,083][128642] Avg episode reward: [(0, '70.680'), (1, '89.230')]
+[2023-09-26 01:12:40,092][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000013632_3489792.pth...
+[2023-09-26 01:12:40,093][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000013632_3489792.pth...
+[2023-09-26 01:12:40,128][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000010576_2707456.pth
+[2023-09-26 01:12:40,129][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000010576_2707456.pth
+[2023-09-26 01:12:45,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7012352. Throughput: 0: 823.0, 1: 821.7. Samples: 1753258. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:12:45,083][128642] Avg episode reward: [(0, '74.070'), (1, '94.640')]
+[2023-09-26 01:12:45,084][129382] Saving new best policy, reward=94.640!
+[2023-09-26 01:12:49,230][129496] Updated weights for policy 1, policy_version 13760 (0.0017)
+[2023-09-26 01:12:49,230][129495] Updated weights for policy 0, policy_version 13760 (0.0016)
+[2023-09-26 01:12:50,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7045120. Throughput: 0: 825.7, 1: 825.7. Samples: 1758455. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:12:50,083][128642] Avg episode reward: [(0, '77.110'), (1, '95.810')]
+[2023-09-26 01:12:50,085][129382] Saving new best policy, reward=95.810!
+[2023-09-26 01:12:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7077888. Throughput: 0: 822.2, 1: 821.9. Samples: 1767995. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:12:55,083][128642] Avg episode reward: [(0, '77.740'), (1, '95.250')]
+[2023-09-26 01:13:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7110656. Throughput: 0: 822.1, 1: 819.2. Samples: 1777811. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:00,082][128642] Avg episode reward: [(0, '80.310'), (1, '96.550')]
+[2023-09-26 01:13:00,083][129382] Saving new best policy, reward=96.550!
+[2023-09-26 01:13:01,857][129495] Updated weights for policy 0, policy_version 13920 (0.0016)
+[2023-09-26 01:13:01,857][129496] Updated weights for policy 1, policy_version 13920 (0.0017)
+[2023-09-26 01:13:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7143424. Throughput: 0: 818.4, 1: 818.6. Samples: 1782584. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:05,083][128642] Avg episode reward: [(0, '83.860'), (1, '97.860')]
+[2023-09-26 01:13:05,084][129382] Saving new best policy, reward=97.860!
+[2023-09-26 01:13:10,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7176192. Throughput: 0: 817.1, 1: 818.4. Samples: 1792164. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:10,083][128642] Avg episode reward: [(0, '84.460'), (1, '98.640')]
+[2023-09-26 01:13:10,094][129382] Saving new best policy, reward=98.640!
+[2023-09-26 01:13:14,523][129496] Updated weights for policy 1, policy_version 14080 (0.0016)
+[2023-09-26 01:13:14,523][129495] Updated weights for policy 0, policy_version 14080 (0.0017)
+[2023-09-26 01:13:15,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7208960. Throughput: 0: 819.2, 1: 818.9. Samples: 1802138. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:13:15,082][128642] Avg episode reward: [(0, '88.580'), (1, '99.850')]
+[2023-09-26 01:13:15,083][129382] Saving new best policy, reward=99.850!
+[2023-09-26 01:13:15,083][129304] Saving new best policy, reward=88.580!
+[2023-09-26 01:13:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7241728. Throughput: 0: 817.8, 1: 817.8. Samples: 1806874. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:13:20,083][128642] Avg episode reward: [(0, '87.740'), (1, '99.520')]
+[2023-09-26 01:13:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7274496. Throughput: 0: 814.8, 1: 816.6. Samples: 1816701. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:25,083][128642] Avg episode reward: [(0, '87.160'), (1, '104.200')]
+[2023-09-26 01:13:25,091][129382] Saving new best policy, reward=104.200!
+[2023-09-26 01:13:26,959][129495] Updated weights for policy 0, policy_version 14240 (0.0015)
+[2023-09-26 01:13:26,960][129496] Updated weights for policy 1, policy_version 14240 (0.0016)
+[2023-09-26 01:13:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7307264. Throughput: 0: 815.5, 1: 818.8. Samples: 1826799. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:30,083][128642] Avg episode reward: [(0, '90.230'), (1, '106.030')]
+[2023-09-26 01:13:30,083][129304] Saving new best policy, reward=90.230!
+[2023-09-26 01:13:30,083][129382] Saving new best policy, reward=106.030!
+[2023-09-26 01:13:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7340032. Throughput: 0: 813.0, 1: 813.0. Samples: 1831626. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:35,083][128642] Avg episode reward: [(0, '89.460'), (1, '106.120')]
+[2023-09-26 01:13:35,084][129382] Saving new best policy, reward=106.120!
+[2023-09-26 01:13:39,488][129495] Updated weights for policy 0, policy_version 14400 (0.0016)
+[2023-09-26 01:13:39,488][129496] Updated weights for policy 1, policy_version 14400 (0.0017)
+[2023-09-26 01:13:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 7372800. Throughput: 0: 813.3, 1: 815.4. Samples: 1841287. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:40,083][128642] Avg episode reward: [(0, '90.970'), (1, '101.570')]
+[2023-09-26 01:13:40,092][129304] Saving new best policy, reward=90.970!
+[2023-09-26 01:13:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 7405568. Throughput: 0: 816.0, 1: 818.1. Samples: 1851346. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:45,082][128642] Avg episode reward: [(0, '92.080'), (1, '104.520')]
+[2023-09-26 01:13:45,083][129304] Saving new best policy, reward=92.080!
+[2023-09-26 01:13:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 7438336. Throughput: 0: 819.2, 1: 818.6. Samples: 1856285. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:50,082][128642] Avg episode reward: [(0, '93.640'), (1, '104.440')]
+[2023-09-26 01:13:50,083][129304] Saving new best policy, reward=93.640!
+[2023-09-26 01:13:51,930][129496] Updated weights for policy 1, policy_version 14560 (0.0017)
+[2023-09-26 01:13:51,931][129495] Updated weights for policy 0, policy_version 14560 (0.0018)
+[2023-09-26 01:13:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 7471104. Throughput: 0: 820.6, 1: 819.4. Samples: 1865964. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:13:55,083][128642] Avg episode reward: [(0, '92.420'), (1, '105.860')]
+[2023-09-26 01:14:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6539.7). Total num frames: 7503872. Throughput: 0: 820.7, 1: 821.5. Samples: 1876036. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:14:00,083][128642] Avg episode reward: [(0, '92.350'), (1, '101.630')]
+[2023-09-26 01:14:04,318][129495] Updated weights for policy 0, policy_version 14720 (0.0018)
+[2023-09-26 01:14:04,318][129496] Updated weights for policy 1, policy_version 14720 (0.0018)
+[2023-09-26 01:14:05,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 7536640. Throughput: 0: 823.3, 1: 823.5. Samples: 1880978. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:14:05,082][128642] Avg episode reward: [(0, '92.640'), (1, '102.270')]
+[2023-09-26 01:14:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 7569408. Throughput: 0: 823.3, 1: 821.1. Samples: 1890698. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:10,082][128642] Avg episode reward: [(0, '93.360'), (1, '101.690')]
+[2023-09-26 01:14:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 7602176. Throughput: 0: 819.2, 1: 817.8. Samples: 1900463. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:15,082][128642] Avg episode reward: [(0, '89.530'), (1, '98.080')]
+[2023-09-26 01:14:16,988][129496] Updated weights for policy 1, policy_version 14880 (0.0020)
+[2023-09-26 01:14:16,988][129495] Updated weights for policy 0, policy_version 14880 (0.0018)
+[2023-09-26 01:14:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 7634944. Throughput: 0: 817.8, 1: 817.9. Samples: 1905233. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:20,083][128642] Avg episode reward: [(0, '89.650'), (1, '96.820')]
+[2023-09-26 01:14:25,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7667712. Throughput: 0: 820.5, 1: 819.3. Samples: 1915076. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:25,083][128642] Avg episode reward: [(0, '88.490'), (1, '97.780')]
+[2023-09-26 01:14:29,352][129496] Updated weights for policy 1, policy_version 15040 (0.0016)
+[2023-09-26 01:14:29,352][129495] Updated weights for policy 0, policy_version 15040 (0.0016)
+[2023-09-26 01:14:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7700480. Throughput: 0: 819.8, 1: 820.2. Samples: 1925146. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:30,082][128642] Avg episode reward: [(0, '84.300'), (1, '98.720')]
+[2023-09-26 01:14:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7733248. Throughput: 0: 820.1, 1: 819.8. Samples: 1930081. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:35,083][128642] Avg episode reward: [(0, '79.890'), (1, '100.780')]
+[2023-09-26 01:14:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7766016. Throughput: 0: 821.4, 1: 821.2. Samples: 1939881. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:40,084][128642] Avg episode reward: [(0, '83.280'), (1, '101.950')]
+[2023-09-26 01:14:40,094][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000015168_3883008.pth...
+[2023-09-26 01:14:40,095][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000015168_3883008.pth...
+[2023-09-26 01:14:40,125][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000012112_3100672.pth
+[2023-09-26 01:14:40,129][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000012112_3100672.pth
+[2023-09-26 01:14:41,826][129495] Updated weights for policy 0, policy_version 15200 (0.0016)
+[2023-09-26 01:14:41,827][129496] Updated weights for policy 1, policy_version 15200 (0.0018)
+[2023-09-26 01:14:45,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7798784. Throughput: 0: 817.8, 1: 819.2. Samples: 1949701. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:45,082][128642] Avg episode reward: [(0, '83.530'), (1, '100.800')]
+[2023-09-26 01:14:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7831552. Throughput: 0: 813.5, 1: 813.2. Samples: 1954183. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:50,082][128642] Avg episode reward: [(0, '85.390'), (1, '96.970')]
+[2023-09-26 01:14:54,741][129495] Updated weights for policy 0, policy_version 15360 (0.0017)
+[2023-09-26 01:14:54,741][129496] Updated weights for policy 1, policy_version 15360 (0.0015)
+[2023-09-26 01:14:55,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7864320. Throughput: 0: 812.4, 1: 816.0. Samples: 1963974. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:14:55,083][128642] Avg episode reward: [(0, '85.280'), (1, '97.980')]
+[2023-09-26 01:15:00,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7897088. Throughput: 0: 818.1, 1: 814.4. Samples: 1973928. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:15:00,083][128642] Avg episode reward: [(0, '79.720'), (1, '97.240')]
+[2023-09-26 01:15:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7929856. Throughput: 0: 813.1, 1: 815.0. Samples: 1978498. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:15:05,083][128642] Avg episode reward: [(0, '81.910'), (1, '96.330')]
+[2023-09-26 01:15:07,261][129496] Updated weights for policy 1, policy_version 15520 (0.0014)
+[2023-09-26 01:15:07,262][129495] Updated weights for policy 0, policy_version 15520 (0.0018)
+[2023-09-26 01:15:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7962624. Throughput: 0: 814.9, 1: 816.7. Samples: 1988501. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:15:10,083][128642] Avg episode reward: [(0, '77.600'), (1, '97.140')]
+[2023-09-26 01:15:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 7995392. Throughput: 0: 815.9, 1: 810.7. Samples: 1998346. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:15:15,083][128642] Avg episode reward: [(0, '78.280'), (1, '95.480')]
+[2023-09-26 01:15:19,945][129496] Updated weights for policy 1, policy_version 15680 (0.0016)
+[2023-09-26 01:15:19,945][129495] Updated weights for policy 0, policy_version 15680 (0.0016)
+[2023-09-26 01:15:20,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8028160. Throughput: 0: 807.2, 1: 811.3. Samples: 2002916. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:15:20,082][128642] Avg episode reward: [(0, '82.240'), (1, '96.140')]
+[2023-09-26 01:15:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8060928. Throughput: 0: 811.2, 1: 811.5. Samples: 2012906. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:15:25,083][128642] Avg episode reward: [(0, '79.180'), (1, '96.020')]
+[2023-09-26 01:15:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8093696. Throughput: 0: 811.0, 1: 806.7. Samples: 2022497. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 01:15:30,083][128642] Avg episode reward: [(0, '80.010'), (1, '97.520')]
+[2023-09-26 01:15:32,507][129496] Updated weights for policy 1, policy_version 15840 (0.0013)
+[2023-09-26 01:15:32,507][129495] Updated weights for policy 0, policy_version 15840 (0.0017)
+[2023-09-26 01:15:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6539.7). Total num frames: 8126464. Throughput: 0: 812.5, 1: 815.7. Samples: 2027451. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 01:15:35,082][128642] Avg episode reward: [(0, '80.150'), (1, '94.580')]
+[2023-09-26 01:15:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8159232. Throughput: 0: 816.3, 1: 812.8. Samples: 2037281. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 01:15:40,083][128642] Avg episode reward: [(0, '80.820'), (1, '92.880')]
+[2023-09-26 01:15:44,935][129496] Updated weights for policy 1, policy_version 16000 (0.0016)
+[2023-09-26 01:15:44,935][129495] Updated weights for policy 0, policy_version 16000 (0.0017)
+[2023-09-26 01:15:45,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8192000. Throughput: 0: 812.9, 1: 813.4. Samples: 2047115. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 01:15:45,083][128642] Avg episode reward: [(0, '80.550'), (1, '92.220')]
+[2023-09-26 01:15:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8224768. Throughput: 0: 816.4, 1: 819.2. Samples: 2052101. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:15:50,083][128642] Avg episode reward: [(0, '79.610'), (1, '94.660')]
+[2023-09-26 01:15:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8257536. Throughput: 0: 817.2, 1: 814.7. Samples: 2061937. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:15:55,083][128642] Avg episode reward: [(0, '86.770'), (1, '92.860')]
+[2023-09-26 01:15:57,392][129495] Updated weights for policy 0, policy_version 16160 (0.0016)
+[2023-09-26 01:15:57,393][129496] Updated weights for policy 1, policy_version 16160 (0.0017)
+[2023-09-26 01:16:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8290304. Throughput: 0: 816.2, 1: 816.8. Samples: 2071831. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:00,083][128642] Avg episode reward: [(0, '83.080'), (1, '96.110')]
+[2023-09-26 01:16:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8323072. Throughput: 0: 820.3, 1: 819.8. Samples: 2076720. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:05,083][128642] Avg episode reward: [(0, '83.090'), (1, '96.520')]
+[2023-09-26 01:16:09,801][129495] Updated weights for policy 0, policy_version 16320 (0.0018)
+[2023-09-26 01:16:09,801][129496] Updated weights for policy 1, policy_version 16320 (0.0018)
+[2023-09-26 01:16:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8355840. Throughput: 0: 819.7, 1: 820.5. Samples: 2086715. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:10,083][128642] Avg episode reward: [(0, '81.440'), (1, '95.940')]
+[2023-09-26 01:16:15,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8388608. Throughput: 0: 823.6, 1: 821.5. Samples: 2096528. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:15,083][128642] Avg episode reward: [(0, '85.190'), (1, '98.790')]
+[2023-09-26 01:16:20,082][128642] Fps is (10 sec: 6143.9, 60 sec: 6485.3, 300 sec: 6539.7). Total num frames: 8417280. Throughput: 0: 819.2, 1: 819.5. Samples: 2101192. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:20,083][128642] Avg episode reward: [(0, '83.990'), (1, '97.450')]
+[2023-09-26 01:16:22,706][129495] Updated weights for policy 0, policy_version 16480 (0.0014)
+[2023-09-26 01:16:22,707][129496] Updated weights for policy 1, policy_version 16480 (0.0018)
+[2023-09-26 01:16:25,082][128642] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 8445952. Throughput: 0: 814.1, 1: 814.9. Samples: 2110585. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:25,083][128642] Avg episode reward: [(0, '88.130'), (1, '98.660')]
+[2023-09-26 01:16:30,082][128642] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 8478720. Throughput: 0: 810.9, 1: 811.2. Samples: 2120111. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:30,083][128642] Avg episode reward: [(0, '87.240'), (1, '97.260')]
+[2023-09-26 01:16:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 8511488. Throughput: 0: 815.2, 1: 810.6. Samples: 2125262. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:35,083][128642] Avg episode reward: [(0, '88.890'), (1, '100.460')]
+[2023-09-26 01:16:35,273][129496] Updated weights for policy 1, policy_version 16640 (0.0016)
+[2023-09-26 01:16:35,273][129495] Updated weights for policy 0, policy_version 16640 (0.0016)
+[2023-09-26 01:16:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 8544256. Throughput: 0: 811.6, 1: 811.8. Samples: 2134987. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 01:16:40,082][128642] Avg episode reward: [(0, '87.380'), (1, '96.560')]
+[2023-09-26 01:16:40,090][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000016688_4272128.pth...
+[2023-09-26 01:16:40,117][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000013632_3489792.pth
+[2023-09-26 01:16:40,258][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000016704_4276224.pth...
+[2023-09-26 01:16:40,286][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000013632_3489792.pth
+[2023-09-26 01:16:45,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 8577024. Throughput: 0: 811.1, 1: 811.5. Samples: 2144846. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 01:16:45,084][128642] Avg episode reward: [(0, '86.950'), (1, '95.430')]
+[2023-09-26 01:16:47,712][129495] Updated weights for policy 0, policy_version 16800 (0.0015)
+[2023-09-26 01:16:47,712][129496] Updated weights for policy 1, policy_version 16800 (0.0017)
+[2023-09-26 01:16:50,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 8609792. Throughput: 0: 816.0, 1: 812.0. Samples: 2149980. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 01:16:50,083][128642] Avg episode reward: [(0, '85.860'), (1, '94.070')]
+[2023-09-26 01:16:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 8642560. Throughput: 0: 813.3, 1: 812.7. Samples: 2159884. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:16:55,083][128642] Avg episode reward: [(0, '85.980'), (1, '96.690')]
+[2023-09-26 01:17:00,082][128642] Fps is (10 sec: 6963.1, 60 sec: 6485.3, 300 sec: 6539.7). Total num frames: 8679424. Throughput: 0: 811.2, 1: 814.4. Samples: 2169680. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:17:00,083][128642] Avg episode reward: [(0, '84.500'), (1, '98.220')]
+[2023-09-26 01:17:00,102][129495] Updated weights for policy 0, policy_version 16960 (0.0016)
+[2023-09-26 01:17:00,103][129496] Updated weights for policy 1, policy_version 16960 (0.0017)
+[2023-09-26 01:17:05,082][128642] Fps is (10 sec: 6963.3, 60 sec: 6485.3, 300 sec: 6539.7). Total num frames: 8712192. Throughput: 0: 818.8, 1: 817.3. Samples: 2174818. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:17:05,083][128642] Avg episode reward: [(0, '88.460'), (1, '100.210')]
+[2023-09-26 01:17:10,082][128642] Fps is (10 sec: 6144.1, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 8740864. Throughput: 0: 820.2, 1: 820.1. Samples: 2184398. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:17:10,083][128642] Avg episode reward: [(0, '88.490'), (1, '101.570')]
+[2023-09-26 01:17:12,682][129495] Updated weights for policy 0, policy_version 17120 (0.0019)
+[2023-09-26 01:17:12,684][129496] Updated weights for policy 1, policy_version 17120 (0.0019)
+[2023-09-26 01:17:15,082][128642] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 8773632. Throughput: 0: 822.7, 1: 822.5. Samples: 2194144. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:17:15,083][128642] Avg episode reward: [(0, '81.830'), (1, '99.900')]
+[2023-09-26 01:17:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6485.3, 300 sec: 6525.8). Total num frames: 8806400. Throughput: 0: 820.9, 1: 822.1. Samples: 2199199. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:17:20,083][128642] Avg episode reward: [(0, '81.120'), (1, '94.870')]
+[2023-09-26 01:17:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8839168. Throughput: 0: 818.8, 1: 818.9. Samples: 2208684. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:17:25,083][128642] Avg episode reward: [(0, '81.790'), (1, '88.450')]
+[2023-09-26 01:17:25,393][129495] Updated weights for policy 0, policy_version 17280 (0.0015)
+[2023-09-26 01:17:25,393][129496] Updated weights for policy 1, policy_version 17280 (0.0016)
+[2023-09-26 01:17:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8871936. Throughput: 0: 816.9, 1: 816.8. Samples: 2218361. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:17:30,083][128642] Avg episode reward: [(0, '84.090'), (1, '87.700')]
+[2023-09-26 01:17:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8904704. Throughput: 0: 814.6, 1: 815.2. Samples: 2223324. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:17:35,083][128642] Avg episode reward: [(0, '79.340'), (1, '84.620')]
+[2023-09-26 01:17:37,862][129495] Updated weights for policy 0, policy_version 17440 (0.0015)
+[2023-09-26 01:17:37,863][129496] Updated weights for policy 1, policy_version 17440 (0.0017)
+[2023-09-26 01:17:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8937472. Throughput: 0: 814.5, 1: 814.1. Samples: 2233169. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:17:40,083][128642] Avg episode reward: [(0, '79.850'), (1, '83.560')]
+[2023-09-26 01:17:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8970240. Throughput: 0: 811.9, 1: 811.5. Samples: 2242734. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 01:17:45,083][128642] Avg episode reward: [(0, '81.490'), (1, '84.010')]
+[2023-09-26 01:17:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9003008. Throughput: 0: 812.4, 1: 810.1. Samples: 2247828. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 01:17:50,083][128642] Avg episode reward: [(0, '81.750'), (1, '81.170')]
+[2023-09-26 01:17:50,527][129496] Updated weights for policy 1, policy_version 17600 (0.0015)
+[2023-09-26 01:17:50,527][129495] Updated weights for policy 0, policy_version 17600 (0.0016)
+[2023-09-26 01:17:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9035776. Throughput: 0: 812.8, 1: 812.2. Samples: 2257521. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:17:55,083][128642] Avg episode reward: [(0, '85.620'), (1, '80.390')]
+[2023-09-26 01:18:00,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6485.3, 300 sec: 6525.8). Total num frames: 9068544. Throughput: 0: 812.5, 1: 813.5. Samples: 2267314. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:18:00,084][128642] Avg episode reward: [(0, '85.160'), (1, '80.050')]
+[2023-09-26 01:18:03,001][129495] Updated weights for policy 0, policy_version 17760 (0.0017)
+[2023-09-26 01:18:03,001][129496] Updated weights for policy 1, policy_version 17760 (0.0017)
+[2023-09-26 01:18:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6485.3, 300 sec: 6525.8). Total num frames: 9101312. Throughput: 0: 812.6, 1: 811.6. Samples: 2272288. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:18:05,083][128642] Avg episode reward: [(0, '85.220'), (1, '77.920')]
+[2023-09-26 01:18:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9134080. Throughput: 0: 814.6, 1: 814.2. Samples: 2281982. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:18:10,083][128642] Avg episode reward: [(0, '88.500'), (1, '78.290')]
+[2023-09-26 01:18:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9166848. Throughput: 0: 812.7, 1: 817.4. Samples: 2291717. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:18:15,083][128642] Avg episode reward: [(0, '89.310'), (1, '74.800')]
+[2023-09-26 01:18:15,658][129495] Updated weights for policy 0, policy_version 17920 (0.0016)
+[2023-09-26 01:18:15,658][129496] Updated weights for policy 1, policy_version 17920 (0.0015)
+[2023-09-26 01:18:20,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9199616. Throughput: 0: 814.3, 1: 813.9. Samples: 2296590. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:18:20,082][128642] Avg episode reward: [(0, '89.440'), (1, '75.670')]
+[2023-09-26 01:18:25,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9232384. Throughput: 0: 812.1, 1: 812.3. Samples: 2306268. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:18:25,083][128642] Avg episode reward: [(0, '85.410'), (1, '77.320')]
+[2023-09-26 01:18:28,119][129495] Updated weights for policy 0, policy_version 18080 (0.0017)
+[2023-09-26 01:18:28,119][129496] Updated weights for policy 1, policy_version 18080 (0.0016)
+[2023-09-26 01:18:30,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9265152. Throughput: 0: 815.8, 1: 819.1. Samples: 2316303. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:18:30,083][128642] Avg episode reward: [(0, '85.290'), (1, '76.070')]
+[2023-09-26 01:18:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9297920. Throughput: 0: 814.4, 1: 814.5. Samples: 2321128. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:18:35,082][128642] Avg episode reward: [(0, '86.530'), (1, '77.650')]
+[2023-09-26 01:18:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9330688. Throughput: 0: 817.0, 1: 816.9. Samples: 2331049. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:18:40,083][128642] Avg episode reward: [(0, '89.830'), (1, '80.030')]
+[2023-09-26 01:18:40,093][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000018224_4665344.pth...
+[2023-09-26 01:18:40,094][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000018224_4665344.pth...
+[2023-09-26 01:18:40,123][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000015168_3883008.pth
+[2023-09-26 01:18:40,124][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000015168_3883008.pth
+[2023-09-26 01:18:40,563][129496] Updated weights for policy 1, policy_version 18240 (0.0016)
+[2023-09-26 01:18:40,563][129495] Updated weights for policy 0, policy_version 18240 (0.0016)
+[2023-09-26 01:18:45,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9363456. Throughput: 0: 818.3, 1: 819.1. Samples: 2340998. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:18:45,083][128642] Avg episode reward: [(0, '93.310'), (1, '82.860')]
+[2023-09-26 01:18:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9396224. Throughput: 0: 821.3, 1: 820.9. Samples: 2346189. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:18:50,083][128642] Avg episode reward: [(0, '93.530'), (1, '83.150')]
+[2023-09-26 01:18:52,838][129496] Updated weights for policy 1, policy_version 18400 (0.0014)
+[2023-09-26 01:18:52,839][129495] Updated weights for policy 0, policy_version 18400 (0.0017)
+[2023-09-26 01:18:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9428992. Throughput: 0: 823.1, 1: 823.6. Samples: 2356084. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:18:55,083][128642] Avg episode reward: [(0, '94.040'), (1, '85.620')]
+[2023-09-26 01:18:55,096][129304] Saving new best policy, reward=94.040!
+[2023-09-26 01:19:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9461760. Throughput: 0: 824.1, 1: 820.1. Samples: 2365708.
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:19:00,083][128642] Avg episode reward: [(0, '92.920'), (1, '86.220')] +[2023-09-26 01:19:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9494528. Throughput: 0: 824.9, 1: 825.5. Samples: 2370859. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:19:05,083][128642] Avg episode reward: [(0, '94.720'), (1, '85.320')] +[2023-09-26 01:19:05,084][129304] Saving new best policy, reward=94.720! +[2023-09-26 01:19:05,346][129496] Updated weights for policy 1, policy_version 18560 (0.0014) +[2023-09-26 01:19:05,347][129495] Updated weights for policy 0, policy_version 18560 (0.0018) +[2023-09-26 01:19:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9527296. Throughput: 0: 827.3, 1: 827.3. Samples: 2380726. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:19:10,082][128642] Avg episode reward: [(0, '92.680'), (1, '84.500')] +[2023-09-26 01:19:15,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9560064. Throughput: 0: 826.2, 1: 821.9. Samples: 2390464. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:19:15,083][128642] Avg episode reward: [(0, '89.280'), (1, '86.410')] +[2023-09-26 01:19:17,843][129495] Updated weights for policy 0, policy_version 18720 (0.0017) +[2023-09-26 01:19:17,844][129496] Updated weights for policy 1, policy_version 18720 (0.0016) +[2023-09-26 01:19:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9592832. Throughput: 0: 827.1, 1: 826.6. Samples: 2395547. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:19:20,083][128642] Avg episode reward: [(0, '91.930'), (1, '86.860')] +[2023-09-26 01:19:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9625600. Throughput: 0: 822.0, 1: 821.9. Samples: 2405023. 
Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:19:25,083][128642] Avg episode reward: [(0, '91.930'), (1, '88.840')] +[2023-09-26 01:19:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9658368. Throughput: 0: 820.4, 1: 819.3. Samples: 2414784. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:19:30,083][128642] Avg episode reward: [(0, '94.890'), (1, '87.460')] +[2023-09-26 01:19:30,084][129304] Saving new best policy, reward=94.890! +[2023-09-26 01:19:30,499][129495] Updated weights for policy 0, policy_version 18880 (0.0017) +[2023-09-26 01:19:30,499][129496] Updated weights for policy 1, policy_version 18880 (0.0016) +[2023-09-26 01:19:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9691136. Throughput: 0: 816.3, 1: 816.4. Samples: 2419660. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:19:35,083][128642] Avg episode reward: [(0, '95.860'), (1, '89.010')] +[2023-09-26 01:19:35,084][129304] Saving new best policy, reward=95.860! +[2023-09-26 01:19:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9723904. Throughput: 0: 814.3, 1: 814.1. Samples: 2429362. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:19:40,083][128642] Avg episode reward: [(0, '94.570'), (1, '92.070')] +[2023-09-26 01:19:43,032][129495] Updated weights for policy 0, policy_version 19040 (0.0017) +[2023-09-26 01:19:43,032][129496] Updated weights for policy 1, policy_version 19040 (0.0017) +[2023-09-26 01:19:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9756672. Throughput: 0: 814.3, 1: 818.3. Samples: 2439173. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:19:45,083][128642] Avg episode reward: [(0, '96.280'), (1, '89.050')] +[2023-09-26 01:19:45,084][129304] Saving new best policy, reward=96.280! 
+[2023-09-26 01:19:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9789440. Throughput: 0: 812.6, 1: 812.0. Samples: 2443967. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:19:50,083][128642] Avg episode reward: [(0, '95.660'), (1, '88.690')] +[2023-09-26 01:19:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9822208. Throughput: 0: 812.2, 1: 812.0. Samples: 2453814. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 01:19:55,083][128642] Avg episode reward: [(0, '96.650'), (1, '88.110')] +[2023-09-26 01:19:55,093][129304] Saving new best policy, reward=96.650! +[2023-09-26 01:19:55,595][129496] Updated weights for policy 1, policy_version 19200 (0.0017) +[2023-09-26 01:19:55,595][129495] Updated weights for policy 0, policy_version 19200 (0.0016) +[2023-09-26 01:20:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9854976. Throughput: 0: 812.0, 1: 816.5. Samples: 2463749. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:20:00,082][128642] Avg episode reward: [(0, '97.440'), (1, '85.890')] +[2023-09-26 01:20:00,083][129304] Saving new best policy, reward=97.440! +[2023-09-26 01:20:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9887744. Throughput: 0: 813.3, 1: 813.8. Samples: 2468765. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:20:05,083][128642] Avg episode reward: [(0, '99.720'), (1, '83.420')] +[2023-09-26 01:20:05,083][129304] Saving new best policy, reward=99.720! +[2023-09-26 01:20:08,061][129495] Updated weights for policy 0, policy_version 19360 (0.0013) +[2023-09-26 01:20:08,062][129496] Updated weights for policy 1, policy_version 19360 (0.0017) +[2023-09-26 01:20:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9920512. Throughput: 0: 816.4, 1: 816.8. Samples: 2478518. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:10,083][128642] Avg episode reward: [(0, '98.050'), (1, '86.450')] +[2023-09-26 01:20:15,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9953280. Throughput: 0: 815.0, 1: 818.8. Samples: 2488306. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:15,083][128642] Avg episode reward: [(0, '97.350'), (1, '86.590')] +[2023-09-26 01:20:20,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 9986048. Throughput: 0: 815.0, 1: 815.4. Samples: 2493032. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:20,084][128642] Avg episode reward: [(0, '95.450'), (1, '87.190')] +[2023-09-26 01:20:20,758][129496] Updated weights for policy 1, policy_version 19520 (0.0016) +[2023-09-26 01:20:20,758][129495] Updated weights for policy 0, policy_version 19520 (0.0017) +[2023-09-26 01:20:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10018816. Throughput: 0: 812.2, 1: 816.7. Samples: 2502664. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:25,083][128642] Avg episode reward: [(0, '95.430'), (1, '87.200')] +[2023-09-26 01:20:30,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10051584. Throughput: 0: 817.6, 1: 813.3. Samples: 2512563. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:30,083][128642] Avg episode reward: [(0, '96.790'), (1, '88.500')] +[2023-09-26 01:20:33,473][129495] Updated weights for policy 0, policy_version 19680 (0.0016) +[2023-09-26 01:20:33,474][129496] Updated weights for policy 1, policy_version 19680 (0.0016) +[2023-09-26 01:20:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10084352. Throughput: 0: 812.4, 1: 813.8. Samples: 2517143. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:35,083][128642] Avg episode reward: [(0, '101.910'), (1, '85.380')] +[2023-09-26 01:20:35,084][129304] Saving new best policy, reward=101.910! +[2023-09-26 01:20:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10117120. Throughput: 0: 813.5, 1: 818.1. Samples: 2527236. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:40,083][128642] Avg episode reward: [(0, '97.370'), (1, '86.410')] +[2023-09-26 01:20:40,092][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000019760_5058560.pth... +[2023-09-26 01:20:40,092][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000019760_5058560.pth... +[2023-09-26 01:20:40,128][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000016704_4276224.pth +[2023-09-26 01:20:40,129][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000016688_4272128.pth +[2023-09-26 01:20:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10149888. Throughput: 0: 818.1, 1: 814.6. Samples: 2537221. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:45,083][128642] Avg episode reward: [(0, '97.800'), (1, '82.880')] +[2023-09-26 01:20:45,877][129495] Updated weights for policy 0, policy_version 19840 (0.0016) +[2023-09-26 01:20:45,878][129496] Updated weights for policy 1, policy_version 19840 (0.0016) +[2023-09-26 01:20:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10182656. Throughput: 0: 811.5, 1: 811.9. Samples: 2541817. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:50,083][128642] Avg episode reward: [(0, '97.150'), (1, '81.650')] +[2023-09-26 01:20:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10215424. Throughput: 0: 812.0, 1: 815.7. Samples: 2551763. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:20:55,083][128642] Avg episode reward: [(0, '94.010'), (1, '83.610')] +[2023-09-26 01:20:58,403][129496] Updated weights for policy 1, policy_version 20000 (0.0018) +[2023-09-26 01:20:58,403][129495] Updated weights for policy 0, policy_version 20000 (0.0016) +[2023-09-26 01:21:00,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10248192. Throughput: 0: 816.4, 1: 814.1. Samples: 2561680. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:00,082][128642] Avg episode reward: [(0, '93.390'), (1, '81.160')] +[2023-09-26 01:21:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10280960. Throughput: 0: 813.8, 1: 814.7. Samples: 2566314. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:05,083][128642] Avg episode reward: [(0, '93.990'), (1, '83.100')] +[2023-09-26 01:21:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 10313728. Throughput: 0: 819.0, 1: 818.2. Samples: 2576340. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:10,082][128642] Avg episode reward: [(0, '95.240'), (1, '85.830')] +[2023-09-26 01:21:10,919][129495] Updated weights for policy 0, policy_version 20160 (0.0017) +[2023-09-26 01:21:10,919][129496] Updated weights for policy 1, policy_version 20160 (0.0020) +[2023-09-26 01:21:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6539.7). Total num frames: 10346496. Throughput: 0: 819.9, 1: 819.2. Samples: 2586321. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:15,083][128642] Avg episode reward: [(0, '92.430'), (1, '87.330')] +[2023-09-26 01:21:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10379264. Throughput: 0: 821.8, 1: 820.4. Samples: 2591046. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:20,083][128642] Avg episode reward: [(0, '97.290'), (1, '89.950')] +[2023-09-26 01:21:23,298][129495] Updated weights for policy 0, policy_version 20320 (0.0016) +[2023-09-26 01:21:23,298][129496] Updated weights for policy 1, policy_version 20320 (0.0017) +[2023-09-26 01:21:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10412032. Throughput: 0: 820.7, 1: 819.2. Samples: 2601031. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:21:25,083][128642] Avg episode reward: [(0, '94.500'), (1, '92.570')] +[2023-09-26 01:21:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10444800. Throughput: 0: 820.1, 1: 820.0. Samples: 2611027. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:21:30,083][128642] Avg episode reward: [(0, '94.560'), (1, '91.730')] +[2023-09-26 01:21:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10477568. Throughput: 0: 819.3, 1: 819.0. Samples: 2615539. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 01:21:35,083][128642] Avg episode reward: [(0, '94.940'), (1, '93.840')] +[2023-09-26 01:21:35,854][129496] Updated weights for policy 1, policy_version 20480 (0.0017) +[2023-09-26 01:21:35,854][129495] Updated weights for policy 0, policy_version 20480 (0.0015) +[2023-09-26 01:21:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10510336. Throughput: 0: 821.0, 1: 820.2. Samples: 2625619. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:40,083][128642] Avg episode reward: [(0, '97.270'), (1, '91.820')] +[2023-09-26 01:21:45,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10543104. Throughput: 0: 821.9, 1: 823.7. Samples: 2635734. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:45,083][128642] Avg episode reward: [(0, '97.600'), (1, '93.830')] +[2023-09-26 01:21:48,119][129495] Updated weights for policy 0, policy_version 20640 (0.0018) +[2023-09-26 01:21:48,119][129496] Updated weights for policy 1, policy_version 20640 (0.0016) +[2023-09-26 01:21:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10575872. Throughput: 0: 826.3, 1: 825.2. Samples: 2640629. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:50,083][128642] Avg episode reward: [(0, '99.040'), (1, '96.150')] +[2023-09-26 01:21:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6539.7). Total num frames: 10608640. Throughput: 0: 825.4, 1: 821.7. Samples: 2650458. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:21:55,083][128642] Avg episode reward: [(0, '95.900'), (1, '98.860')] +[2023-09-26 01:22:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6539.7). Total num frames: 10641408. Throughput: 0: 820.1, 1: 825.1. Samples: 2660356. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:00,083][128642] Avg episode reward: [(0, '94.670'), (1, '99.970')] +[2023-09-26 01:22:00,656][129495] Updated weights for policy 0, policy_version 20800 (0.0017) +[2023-09-26 01:22:00,656][129496] Updated weights for policy 1, policy_version 20800 (0.0015) +[2023-09-26 01:22:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10674176. Throughput: 0: 823.2, 1: 823.3. Samples: 2665136. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:05,082][128642] Avg episode reward: [(0, '95.330'), (1, '103.370')] +[2023-09-26 01:22:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10706944. Throughput: 0: 822.5, 1: 819.4. Samples: 2674915. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:10,083][128642] Avg episode reward: [(0, '97.240'), (1, '103.180')] +[2023-09-26 01:22:13,187][129495] Updated weights for policy 0, policy_version 20960 (0.0017) +[2023-09-26 01:22:13,187][129496] Updated weights for policy 1, policy_version 20960 (0.0016) +[2023-09-26 01:22:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10739712. Throughput: 0: 819.3, 1: 822.4. Samples: 2684903. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:15,083][128642] Avg episode reward: [(0, '97.580'), (1, '108.540')] +[2023-09-26 01:22:15,084][129382] Saving new best policy, reward=108.540! +[2023-09-26 01:22:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10772480. Throughput: 0: 823.8, 1: 824.0. Samples: 2689689. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:20,083][128642] Avg episode reward: [(0, '97.650'), (1, '107.790')] +[2023-09-26 01:22:25,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10805248. Throughput: 0: 821.3, 1: 819.3. Samples: 2699444. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:25,084][128642] Avg episode reward: [(0, '96.790'), (1, '109.060')] +[2023-09-26 01:22:25,097][129382] Saving new best policy, reward=109.060! +[2023-09-26 01:22:25,667][129495] Updated weights for policy 0, policy_version 21120 (0.0017) +[2023-09-26 01:22:25,669][129496] Updated weights for policy 1, policy_version 21120 (0.0019) +[2023-09-26 01:22:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10838016. Throughput: 0: 819.3, 1: 819.8. Samples: 2709494. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:30,083][128642] Avg episode reward: [(0, '94.160'), (1, '105.290')] +[2023-09-26 01:22:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). 
Total num frames: 10870784. Throughput: 0: 816.3, 1: 817.1. Samples: 2714129. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:35,083][128642] Avg episode reward: [(0, '92.870'), (1, '104.190')] +[2023-09-26 01:22:38,335][129495] Updated weights for policy 0, policy_version 21280 (0.0017) +[2023-09-26 01:22:38,335][129496] Updated weights for policy 1, policy_version 21280 (0.0015) +[2023-09-26 01:22:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10903552. Throughput: 0: 813.9, 1: 817.7. Samples: 2723880. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:40,083][128642] Avg episode reward: [(0, '93.280'), (1, '99.170')] +[2023-09-26 01:22:40,091][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000021296_5451776.pth... +[2023-09-26 01:22:40,091][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000021296_5451776.pth... +[2023-09-26 01:22:40,124][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000018224_4665344.pth +[2023-09-26 01:22:40,125][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000018224_4665344.pth +[2023-09-26 01:22:45,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10936320. Throughput: 0: 818.7, 1: 814.1. Samples: 2733832. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:45,082][128642] Avg episode reward: [(0, '90.880'), (1, '101.000')] +[2023-09-26 01:22:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10969088. Throughput: 0: 815.9, 1: 815.7. Samples: 2738555. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:50,083][128642] Avg episode reward: [(0, '93.960'), (1, '97.930')] +[2023-09-26 01:22:50,859][129495] Updated weights for policy 0, policy_version 21440 (0.0018) +[2023-09-26 01:22:50,859][129496] Updated weights for policy 1, policy_version 21440 (0.0016) +[2023-09-26 01:22:55,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11001856. Throughput: 0: 815.5, 1: 819.0. Samples: 2748469. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:22:55,083][128642] Avg episode reward: [(0, '93.470'), (1, '100.860')] +[2023-09-26 01:23:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11034624. Throughput: 0: 819.2, 1: 817.4. Samples: 2758549. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:00,082][128642] Avg episode reward: [(0, '91.490'), (1, '96.970')] +[2023-09-26 01:23:03,211][129495] Updated weights for policy 0, policy_version 21600 (0.0016) +[2023-09-26 01:23:03,211][129496] Updated weights for policy 1, policy_version 21600 (0.0014) +[2023-09-26 01:23:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11067392. Throughput: 0: 818.5, 1: 818.9. Samples: 2763373. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:05,083][128642] Avg episode reward: [(0, '87.650'), (1, '98.730')] +[2023-09-26 01:23:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11100160. Throughput: 0: 818.4, 1: 819.1. Samples: 2773132. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:10,083][128642] Avg episode reward: [(0, '87.230'), (1, '99.280')] +[2023-09-26 01:23:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11132928. Throughput: 0: 819.2, 1: 819.1. Samples: 2783216. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:15,083][128642] Avg episode reward: [(0, '87.630'), (1, '97.140')] +[2023-09-26 01:23:15,641][129495] Updated weights for policy 0, policy_version 21760 (0.0019) +[2023-09-26 01:23:15,641][129496] Updated weights for policy 1, policy_version 21760 (0.0017) +[2023-09-26 01:23:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11165696. Throughput: 0: 822.5, 1: 821.6. Samples: 2788113. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:20,083][128642] Avg episode reward: [(0, '90.040'), (1, '96.260')] +[2023-09-26 01:23:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11198464. Throughput: 0: 825.9, 1: 822.2. Samples: 2798044. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:25,083][128642] Avg episode reward: [(0, '87.540'), (1, '94.930')] +[2023-09-26 01:23:28,029][129496] Updated weights for policy 1, policy_version 21920 (0.0015) +[2023-09-26 01:23:28,029][129495] Updated weights for policy 0, policy_version 21920 (0.0015) +[2023-09-26 01:23:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11231232. Throughput: 0: 822.0, 1: 824.3. Samples: 2807915. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:30,082][128642] Avg episode reward: [(0, '87.380'), (1, '96.400')] +[2023-09-26 01:23:35,082][128642] Fps is (10 sec: 6553.9, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11264000. Throughput: 0: 825.5, 1: 826.7. Samples: 2812906. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:35,082][128642] Avg episode reward: [(0, '89.250'), (1, '90.500')] +[2023-09-26 01:23:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11296768. Throughput: 0: 825.7, 1: 822.0. Samples: 2822612. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:40,083][128642] Avg episode reward: [(0, '91.020'), (1, '89.870')] +[2023-09-26 01:23:40,508][129496] Updated weights for policy 1, policy_version 22080 (0.0017) +[2023-09-26 01:23:40,509][129495] Updated weights for policy 0, policy_version 22080 (0.0018) +[2023-09-26 01:23:45,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11329536. Throughput: 0: 822.6, 1: 821.6. Samples: 2832538. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:45,083][128642] Avg episode reward: [(0, '93.460'), (1, '89.530')] +[2023-09-26 01:23:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11362304. Throughput: 0: 825.2, 1: 823.8. Samples: 2837574. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:23:50,082][128642] Avg episode reward: [(0, '92.220'), (1, '90.700')] +[2023-09-26 01:23:53,127][129495] Updated weights for policy 0, policy_version 22240 (0.0017) +[2023-09-26 01:23:53,128][129496] Updated weights for policy 1, policy_version 22240 (0.0018) +[2023-09-26 01:23:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11395072. Throughput: 0: 821.8, 1: 820.2. Samples: 2847021. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:23:55,083][128642] Avg episode reward: [(0, '95.600'), (1, '91.110')] +[2023-09-26 01:24:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11427840. Throughput: 0: 819.1, 1: 817.9. Samples: 2856880. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:24:00,083][128642] Avg episode reward: [(0, '98.350'), (1, '92.180')] +[2023-09-26 01:24:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11460608. Throughput: 0: 815.8, 1: 816.0. Samples: 2861541. 
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 01:24:05,083][128642] Avg episode reward: [(0, '97.550'), (1, '91.270')] +[2023-09-26 01:24:05,853][129496] Updated weights for policy 1, policy_version 22400 (0.0017) +[2023-09-26 01:24:05,853][129495] Updated weights for policy 0, policy_version 22400 (0.0016) +[2023-09-26 01:24:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11493376. Throughput: 0: 812.4, 1: 816.2. Samples: 2871330. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:24:10,083][128642] Avg episode reward: [(0, '98.150'), (1, '94.760')] +[2023-09-26 01:24:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11526144. Throughput: 0: 815.9, 1: 815.9. Samples: 2881346. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:24:15,083][128642] Avg episode reward: [(0, '93.060'), (1, '91.750')] +[2023-09-26 01:24:18,472][129495] Updated weights for policy 0, policy_version 22560 (0.0017) +[2023-09-26 01:24:18,472][129496] Updated weights for policy 1, policy_version 22560 (0.0014) +[2023-09-26 01:24:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11558912. Throughput: 0: 810.2, 1: 810.0. Samples: 2885815. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:24:20,083][128642] Avg episode reward: [(0, '91.890'), (1, '90.710')] +[2023-09-26 01:24:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11591680. Throughput: 0: 811.7, 1: 816.4. Samples: 2895877. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:24:25,083][128642] Avg episode reward: [(0, '93.030'), (1, '93.920')] +[2023-09-26 01:24:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11624448. Throughput: 0: 812.2, 1: 811.4. Samples: 2905598. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 01:24:30,083][128642] Avg episode reward: [(0, '93.440'), (1, '87.740')]
+[2023-09-26 01:24:31,023][129495] Updated weights for policy 0, policy_version 22720 (0.0017)
+[2023-09-26 01:24:31,023][129496] Updated weights for policy 1, policy_version 22720 (0.0018)
+[2023-09-26 01:24:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11657216. Throughput: 0: 806.0, 1: 809.8. Samples: 2910285. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 01:24:35,082][128642] Avg episode reward: [(0, '93.350'), (1, '89.250')]
+[2023-09-26 01:24:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11689984. Throughput: 0: 813.5, 1: 815.6. Samples: 2920333. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 01:24:40,083][128642] Avg episode reward: [(0, '93.290'), (1, '87.710')]
+[2023-09-26 01:24:40,090][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000022832_5844992.pth...
+[2023-09-26 01:24:40,091][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000022832_5844992.pth...
+[2023-09-26 01:24:40,123][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000019760_5058560.pth
+[2023-09-26 01:24:40,125][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000019760_5058560.pth
+[2023-09-26 01:24:43,459][129496] Updated weights for policy 1, policy_version 22880 (0.0016)
+[2023-09-26 01:24:43,459][129495] Updated weights for policy 0, policy_version 22880 (0.0018)
+[2023-09-26 01:24:45,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11722752. Throughput: 0: 817.4, 1: 814.4. Samples: 2930314. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:24:45,083][128642] Avg episode reward: [(0, '90.720'), (1, '89.540')]
+[2023-09-26 01:24:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11755520. Throughput: 0: 816.6, 1: 816.4. Samples: 2935024. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:24:50,082][128642] Avg episode reward: [(0, '90.790'), (1, '87.530')]
+[2023-09-26 01:24:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11788288. Throughput: 0: 819.1, 1: 819.2. Samples: 2945053. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:24:55,082][128642] Avg episode reward: [(0, '91.690'), (1, '88.940')]
+[2023-09-26 01:24:55,986][129495] Updated weights for policy 0, policy_version 23040 (0.0015)
+[2023-09-26 01:24:55,988][129496] Updated weights for policy 1, policy_version 23040 (0.0018)
+[2023-09-26 01:25:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11821056. Throughput: 0: 819.0, 1: 816.0. Samples: 2954919. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:25:00,083][128642] Avg episode reward: [(0, '89.540'), (1, '90.630')]
+[2023-09-26 01:25:05,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11853824. Throughput: 0: 819.9, 1: 819.2. Samples: 2959577. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:25:05,082][128642] Avg episode reward: [(0, '86.450'), (1, '91.530')]
+[2023-09-26 01:25:08,532][129495] Updated weights for policy 0, policy_version 23200 (0.0017)
+[2023-09-26 01:25:08,532][129496] Updated weights for policy 1, policy_version 23200 (0.0017)
+[2023-09-26 01:25:10,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11886592. Throughput: 0: 819.1, 1: 816.4. Samples: 2969472. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:25:10,083][128642] Avg episode reward: [(0, '85.870'), (1, '90.540')]
+[2023-09-26 01:25:15,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11919360. Throughput: 0: 816.3, 1: 816.0. Samples: 2979050. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:15,083][128642] Avg episode reward: [(0, '83.170'), (1, '90.120')]
+[2023-09-26 01:25:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11952128. Throughput: 0: 818.7, 1: 819.2. Samples: 2983989. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:20,083][128642] Avg episode reward: [(0, '83.130'), (1, '88.690')]
+[2023-09-26 01:25:21,058][129495] Updated weights for policy 0, policy_version 23360 (0.0017)
+[2023-09-26 01:25:21,058][129496] Updated weights for policy 1, policy_version 23360 (0.0016)
+[2023-09-26 01:25:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11984896. Throughput: 0: 819.2, 1: 818.2. Samples: 2994016. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:25,083][128642] Avg episode reward: [(0, '83.980'), (1, '89.480')]
+[2023-09-26 01:25:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12017664. Throughput: 0: 815.6, 1: 815.4. Samples: 3003710. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:30,083][128642] Avg episode reward: [(0, '84.530'), (1, '86.150')]
+[2023-09-26 01:25:33,753][129495] Updated weights for policy 0, policy_version 23520 (0.0017)
+[2023-09-26 01:25:33,753][129496] Updated weights for policy 1, policy_version 23520 (0.0018)
+[2023-09-26 01:25:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12050432. Throughput: 0: 814.2, 1: 818.5. Samples: 3008495. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:35,083][128642] Avg episode reward: [(0, '84.790'), (1, '87.010')]
+[2023-09-26 01:25:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12083200. Throughput: 0: 814.2, 1: 809.9. Samples: 3018135. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:40,083][128642] Avg episode reward: [(0, '84.640'), (1, '87.570')]
+[2023-09-26 01:25:45,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12115968. Throughput: 0: 811.3, 1: 812.6. Samples: 3027993. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:45,083][128642] Avg episode reward: [(0, '90.000'), (1, '88.640')]
+[2023-09-26 01:25:46,259][129495] Updated weights for policy 0, policy_version 23680 (0.0017)
+[2023-09-26 01:25:46,260][129496] Updated weights for policy 1, policy_version 23680 (0.0017)
+[2023-09-26 01:25:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12148736. Throughput: 0: 814.5, 1: 816.9. Samples: 3032989. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:50,082][128642] Avg episode reward: [(0, '90.550'), (1, '90.940')]
+[2023-09-26 01:25:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12181504. Throughput: 0: 816.3, 1: 815.1. Samples: 3042883. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:25:55,083][128642] Avg episode reward: [(0, '94.880'), (1, '92.940')]
+[2023-09-26 01:25:58,831][129496] Updated weights for policy 1, policy_version 23840 (0.0017)
+[2023-09-26 01:25:58,831][129495] Updated weights for policy 0, policy_version 23840 (0.0016)
+[2023-09-26 01:26:00,082][128642] Fps is (10 sec: 6143.8, 60 sec: 6485.3, 300 sec: 6539.7). Total num frames: 12210176. Throughput: 0: 815.4, 1: 814.9. Samples: 3052412. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:26:00,083][128642] Avg episode reward: [(0, '93.920'), (1, '93.190')]
+[2023-09-26 01:26:05,082][128642] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 12238848. Throughput: 0: 817.4, 1: 814.2. Samples: 3057410. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 01:26:05,083][128642] Avg episode reward: [(0, '97.300'), (1, '95.540')]
+[2023-09-26 01:26:10,082][128642] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12271616. Throughput: 0: 813.8, 1: 812.8. Samples: 3067214. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 01:26:10,083][128642] Avg episode reward: [(0, '95.990'), (1, '95.590')]
+[2023-09-26 01:26:11,379][129495] Updated weights for policy 0, policy_version 24000 (0.0017)
+[2023-09-26 01:26:11,379][129496] Updated weights for policy 1, policy_version 24000 (0.0017)
+[2023-09-26 01:26:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12304384. Throughput: 0: 812.0, 1: 812.7. Samples: 3076823. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 01:26:15,083][128642] Avg episode reward: [(0, '97.350'), (1, '98.200')]
+[2023-09-26 01:26:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12337152. Throughput: 0: 817.4, 1: 812.5. Samples: 3081841. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:26:20,083][128642] Avg episode reward: [(0, '96.870'), (1, '94.850')]
+[2023-09-26 01:26:24,010][129495] Updated weights for policy 0, policy_version 24160 (0.0013)
+[2023-09-26 01:26:24,011][129496] Updated weights for policy 1, policy_version 24160 (0.0017)
+[2023-09-26 01:26:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12369920. Throughput: 0: 814.6, 1: 815.1. Samples: 3091471. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:26:25,083][128642] Avg episode reward: [(0, '99.990'), (1, '95.570')]
+[2023-09-26 01:26:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12402688. Throughput: 0: 813.3, 1: 812.4. Samples: 3101151. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:26:30,083][128642] Avg episode reward: [(0, '98.630'), (1, '94.940')]
+[2023-09-26 01:26:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12435456. Throughput: 0: 814.5, 1: 811.9. Samples: 3106180. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:26:35,083][128642] Avg episode reward: [(0, '102.940'), (1, '97.820')]
+[2023-09-26 01:26:35,083][129304] Saving new best policy, reward=102.940!
+[2023-09-26 01:26:36,553][129495] Updated weights for policy 0, policy_version 24320 (0.0017)
+[2023-09-26 01:26:36,553][129496] Updated weights for policy 1, policy_version 24320 (0.0017)
+[2023-09-26 01:26:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 12468224. Throughput: 0: 812.7, 1: 812.1. Samples: 3116000. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:26:40,083][128642] Avg episode reward: [(0, '100.300'), (1, '97.550')]
+[2023-09-26 01:26:40,220][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000024368_6238208.pth...
+[2023-09-26 01:26:40,248][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000021296_5451776.pth
+[2023-09-26 01:26:40,264][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000024368_6238208.pth...
+[2023-09-26 01:26:40,304][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000021296_5451776.pth
+[2023-09-26 01:26:45,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12500992. Throughput: 0: 816.0, 1: 816.7. Samples: 3125886. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:26:45,083][128642] Avg episode reward: [(0, '101.550'), (1, '100.820')]
+[2023-09-26 01:26:48,881][129495] Updated weights for policy 0, policy_version 24480 (0.0014)
+[2023-09-26 01:26:48,882][129496] Updated weights for policy 1, policy_version 24480 (0.0015)
+[2023-09-26 01:26:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 12533760. Throughput: 0: 818.7, 1: 818.4. Samples: 3131077. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:26:50,083][128642] Avg episode reward: [(0, '100.980'), (1, '105.090')]
+[2023-09-26 01:26:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12566528. Throughput: 0: 819.5, 1: 818.7. Samples: 3140934. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:26:55,083][128642] Avg episode reward: [(0, '104.000'), (1, '104.120')]
+[2023-09-26 01:26:55,116][129304] Saving new best policy, reward=104.000!
+[2023-09-26 01:27:00,082][128642] Fps is (10 sec: 7372.9, 60 sec: 6621.9, 300 sec: 6553.6). Total num frames: 12607488. Throughput: 0: 821.3, 1: 821.4. Samples: 3150745. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:27:00,083][128642] Avg episode reward: [(0, '103.750'), (1, '105.960')]
+[2023-09-26 01:27:01,267][129496] Updated weights for policy 1, policy_version 24640 (0.0018)
+[2023-09-26 01:27:01,267][129495] Updated weights for policy 0, policy_version 24640 (0.0019)
+[2023-09-26 01:27:05,082][128642] Fps is (10 sec: 7373.0, 60 sec: 6690.2, 300 sec: 6553.6). Total num frames: 12640256. Throughput: 0: 821.0, 1: 823.9. Samples: 3155859. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:27:05,082][128642] Avg episode reward: [(0, '102.170'), (1, '108.400')]
+[2023-09-26 01:27:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 12673024. Throughput: 0: 826.7, 1: 826.4. Samples: 3165858. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:27:10,083][128642] Avg episode reward: [(0, '103.100'), (1, '108.220')]
+[2023-09-26 01:27:13,746][129495] Updated weights for policy 0, policy_version 24800 (0.0016)
+[2023-09-26 01:27:13,746][129496] Updated weights for policy 1, policy_version 24800 (0.0015)
+[2023-09-26 01:27:15,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 12705792. Throughput: 0: 825.6, 1: 825.6. Samples: 3175454. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:27:15,083][128642] Avg episode reward: [(0, '98.360'), (1, '106.240')]
+[2023-09-26 01:27:20,082][128642] Fps is (10 sec: 5734.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 12730368. Throughput: 0: 823.7, 1: 824.8. Samples: 3180360. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:27:20,083][128642] Avg episode reward: [(0, '97.940'), (1, '103.280')]
+[2023-09-26 01:27:25,082][128642] Fps is (10 sec: 5734.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 12763136. Throughput: 0: 820.0, 1: 820.1. Samples: 3189803. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 01:27:25,083][128642] Avg episode reward: [(0, '99.070'), (1, '102.260')]
+[2023-09-26 01:27:26,469][129495] Updated weights for policy 0, policy_version 24960 (0.0017)
+[2023-09-26 01:27:26,469][129496] Updated weights for policy 1, policy_version 24960 (0.0014)
+[2023-09-26 01:27:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 12795904. Throughput: 0: 818.9, 1: 818.0. Samples: 3199547. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:27:30,083][128642] Avg episode reward: [(0, '95.490'), (1, '102.730')]
+[2023-09-26 01:27:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 12828672. Throughput: 0: 816.9, 1: 817.0. Samples: 3204602. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:27:35,083][128642] Avg episode reward: [(0, '97.140'), (1, '104.830')]
+[2023-09-26 01:27:39,097][129495] Updated weights for policy 0, policy_version 25120 (0.0018)
+[2023-09-26 01:27:39,097][129496] Updated weights for policy 1, policy_version 25120 (0.0020)
+[2023-09-26 01:27:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 12861440. Throughput: 0: 814.1, 1: 815.5. Samples: 3214265. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:27:40,082][128642] Avg episode reward: [(0, '96.830'), (1, '104.840')]
+[2023-09-26 01:27:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 12894208. Throughput: 0: 813.8, 1: 813.4. Samples: 3223970. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:27:45,083][128642] Avg episode reward: [(0, '94.420'), (1, '102.660')]
+[2023-09-26 01:27:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 12926976. Throughput: 0: 815.1, 1: 812.6. Samples: 3229102. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:27:50,083][128642] Avg episode reward: [(0, '90.480'), (1, '103.720')]
+[2023-09-26 01:27:51,518][129495] Updated weights for policy 0, policy_version 25280 (0.0017)
+[2023-09-26 01:27:51,518][129496] Updated weights for policy 1, policy_version 25280 (0.0017)
+[2023-09-26 01:27:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 12959744. Throughput: 0: 811.9, 1: 812.2. Samples: 3238943. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:27:55,083][128642] Avg episode reward: [(0, '92.970'), (1, '105.390')]
+[2023-09-26 01:28:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 12992512. Throughput: 0: 815.1, 1: 815.5. Samples: 3248832. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:28:00,083][128642] Avg episode reward: [(0, '89.720'), (1, '103.880')]
+[2023-09-26 01:28:03,926][129496] Updated weights for policy 1, policy_version 25440 (0.0017)
+[2023-09-26 01:28:03,927][129495] Updated weights for policy 0, policy_version 25440 (0.0016)
+[2023-09-26 01:28:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 13025280. Throughput: 0: 819.0, 1: 817.0. Samples: 3253976. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:28:05,083][128642] Avg episode reward: [(0, '86.520'), (1, '104.520')]
+[2023-09-26 01:28:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 13058048. Throughput: 0: 820.4, 1: 819.9. Samples: 3263617. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:28:10,083][128642] Avg episode reward: [(0, '89.040'), (1, '100.850')]
+[2023-09-26 01:28:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 13090816. Throughput: 0: 820.8, 1: 821.2. Samples: 3273436. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:28:15,083][128642] Avg episode reward: [(0, '90.520'), (1, '99.570')]
+[2023-09-26 01:28:16,383][129495] Updated weights for policy 0, policy_version 25600 (0.0016)
+[2023-09-26 01:28:16,383][129496] Updated weights for policy 1, policy_version 25600 (0.0017)
+[2023-09-26 01:28:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13123584. Throughput: 0: 822.6, 1: 822.8. Samples: 3278645. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:28:20,083][128642] Avg episode reward: [(0, '90.640'), (1, '95.020')]
+[2023-09-26 01:28:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13156352. Throughput: 0: 822.0, 1: 821.1. Samples: 3288203. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:28:25,083][128642] Avg episode reward: [(0, '92.470'), (1, '95.220')]
+[2023-09-26 01:28:28,973][129495] Updated weights for policy 0, policy_version 25760 (0.0018)
+[2023-09-26 01:28:28,974][129496] Updated weights for policy 1, policy_version 25760 (0.0019)
+[2023-09-26 01:28:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13189120. Throughput: 0: 821.6, 1: 821.4. Samples: 3297907. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:28:30,083][128642] Avg episode reward: [(0, '93.500'), (1, '95.750')]
+[2023-09-26 01:28:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13221888. Throughput: 0: 821.0, 1: 821.6. Samples: 3303019. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:28:35,083][128642] Avg episode reward: [(0, '93.490'), (1, '92.480')]
+[2023-09-26 01:28:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13254656. Throughput: 0: 822.0, 1: 820.1. Samples: 3312837. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:28:40,083][128642] Avg episode reward: [(0, '91.570'), (1, '93.370')]
+[2023-09-26 01:28:40,093][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000025888_6627328.pth...
+[2023-09-26 01:28:40,127][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000022832_5844992.pth
+[2023-09-26 01:28:40,247][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000025904_6631424.pth...
+[2023-09-26 01:28:40,274][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000022832_5844992.pth
+[2023-09-26 01:28:41,569][129495] Updated weights for policy 0, policy_version 25920 (0.0015)
+[2023-09-26 01:28:41,569][129496] Updated weights for policy 1, policy_version 25920 (0.0017)
+[2023-09-26 01:28:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13287424. Throughput: 0: 815.2, 1: 815.2. Samples: 3322198. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:28:45,083][128642] Avg episode reward: [(0, '90.440'), (1, '94.610')]
+[2023-09-26 01:28:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13320192. Throughput: 0: 814.7, 1: 816.0. Samples: 3327356. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:28:50,083][128642] Avg episode reward: [(0, '91.200'), (1, '96.000')]
+[2023-09-26 01:28:54,214][129495] Updated weights for policy 0, policy_version 26080 (0.0017)
+[2023-09-26 01:28:54,215][129496] Updated weights for policy 1, policy_version 26080 (0.0016)
+[2023-09-26 01:28:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13352960. Throughput: 0: 813.6, 1: 813.7. Samples: 3336842. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:28:55,083][128642] Avg episode reward: [(0, '94.480'), (1, '96.210')]
+[2023-09-26 01:29:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13385728. Throughput: 0: 814.1, 1: 813.9. Samples: 3346697. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:29:00,083][128642] Avg episode reward: [(0, '94.830'), (1, '97.070')]
+[2023-09-26 01:29:05,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13418496. Throughput: 0: 812.4, 1: 811.5. Samples: 3351720. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:29:05,083][128642] Avg episode reward: [(0, '94.560'), (1, '96.940')]
+[2023-09-26 01:29:06,651][129495] Updated weights for policy 0, policy_version 26240 (0.0017)
+[2023-09-26 01:29:06,651][129496] Updated weights for policy 1, policy_version 26240 (0.0018)
+[2023-09-26 01:29:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13451264. Throughput: 0: 814.6, 1: 814.7. Samples: 3361520. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:29:10,083][128642] Avg episode reward: [(0, '94.290'), (1, '96.570')]
+[2023-09-26 01:29:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13484032. Throughput: 0: 813.9, 1: 814.7. Samples: 3371197. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:29:15,083][128642] Avg episode reward: [(0, '96.950'), (1, '94.640')]
+[2023-09-26 01:29:19,154][129495] Updated weights for policy 0, policy_version 26400 (0.0017)
+[2023-09-26 01:29:19,154][129496] Updated weights for policy 1, policy_version 26400 (0.0017)
+[2023-09-26 01:29:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13516800. Throughput: 0: 814.5, 1: 814.2. Samples: 3376313. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:29:20,083][128642] Avg episode reward: [(0, '96.110'), (1, '94.740')]
+[2023-09-26 01:29:25,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13549568. Throughput: 0: 811.1, 1: 813.4. Samples: 3385942. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:29:25,083][128642] Avg episode reward: [(0, '95.790'), (1, '95.520')]
+[2023-09-26 01:29:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13582336. Throughput: 0: 815.6, 1: 817.6. Samples: 3395695. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 01:29:30,083][128642] Avg episode reward: [(0, '96.290'), (1, '95.580')]
+[2023-09-26 01:29:31,764][129496] Updated weights for policy 1, policy_version 26560 (0.0017)
+[2023-09-26 01:29:31,764][129495] Updated weights for policy 0, policy_version 26560 (0.0017)
+[2023-09-26 01:29:35,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13615104. Throughput: 0: 815.4, 1: 815.0. Samples: 3400721. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:29:35,083][128642] Avg episode reward: [(0, '101.250'), (1, '97.530')]
+[2023-09-26 01:29:40,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13647872. Throughput: 0: 815.9, 1: 816.3. Samples: 3410292. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:29:40,083][128642] Avg episode reward: [(0, '99.790'), (1, '94.930')]
+[2023-09-26 01:29:44,723][129495] Updated weights for policy 0, policy_version 26720 (0.0018)
+[2023-09-26 01:29:44,723][129496] Updated weights for policy 1, policy_version 26720 (0.0017)
+[2023-09-26 01:29:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13680640. Throughput: 0: 811.5, 1: 811.7. Samples: 3419741. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 01:29:45,083][128642] Avg episode reward: [(0, '99.780'), (1, '95.200')]
+[2023-09-26 01:29:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13713408. Throughput: 0: 806.8, 1: 808.8. Samples: 3424418. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:29:50,083][128642] Avg episode reward: [(0, '99.340'), (1, '94.400')]
+[2023-09-26 01:29:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13746176. Throughput: 0: 808.7, 1: 813.2. Samples: 3434502. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:29:55,083][128642] Avg episode reward: [(0, '98.550'), (1, '93.860')]
+[2023-09-26 01:29:57,204][129496] Updated weights for policy 1, policy_version 26880 (0.0015)
+[2023-09-26 01:29:57,204][129495] Updated weights for policy 0, policy_version 26880 (0.0018)
+[2023-09-26 01:30:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13778944. Throughput: 0: 812.9, 1: 811.6. Samples: 3444300. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:30:00,083][128642] Avg episode reward: [(0, '97.510'), (1, '94.790')]
+[2023-09-26 01:30:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13811712. Throughput: 0: 804.7, 1: 808.1. Samples: 3448889. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 01:30:05,083][128642] Avg episode reward: [(0, '97.990'), (1, '93.520')]
+[2023-09-26 01:30:09,769][129496] Updated weights for policy 1, policy_version 27040 (0.0016)
+[2023-09-26 01:30:09,769][129495] Updated weights for policy 0, policy_version 27040 (0.0018)
+[2023-09-26 01:30:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13844480. Throughput: 0: 810.6, 1: 811.6. Samples: 3458939. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:30:10,083][128642] Avg episode reward: [(0, '99.560'), (1, '93.980')]
+[2023-09-26 01:30:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13877248. Throughput: 0: 812.9, 1: 810.9. Samples: 3468768. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:30:15,083][128642] Avg episode reward: [(0, '98.650'), (1, '93.830')]
+[2023-09-26 01:30:20,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13910016. Throughput: 0: 808.1, 1: 810.0. Samples: 3473535. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:30:20,082][128642] Avg episode reward: [(0, '101.430'), (1, '92.460')]
+[2023-09-26 01:30:22,197][129496] Updated weights for policy 1, policy_version 27200 (0.0016)
+[2023-09-26 01:30:22,197][129495] Updated weights for policy 0, policy_version 27200 (0.0017)
+[2023-09-26 01:30:25,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13942784. Throughput: 0: 812.8, 1: 816.2. Samples: 3483596. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 01:30:25,082][128642] Avg episode reward: [(0, '101.100'), (1, '91.880')]
+[2023-09-26 01:30:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 13975552. Throughput: 0: 816.8, 1: 816.9. Samples: 3493256. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:30:30,083][128642] Avg episode reward: [(0, '100.900'), (1, '91.770')]
+[2023-09-26 01:30:34,739][129495] Updated weights for policy 0, policy_version 27360 (0.0016)
+[2023-09-26 01:30:34,740][129496] Updated weights for policy 1, policy_version 27360 (0.0017)
+[2023-09-26 01:30:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14008320. Throughput: 0: 818.1, 1: 819.1. Samples: 3498092. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:30:35,083][128642] Avg episode reward: [(0, '102.490'), (1, '92.760')]
+[2023-09-26 01:30:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14041088. Throughput: 0: 819.1, 1: 818.5. Samples: 3508196. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:30:40,083][128642] Avg episode reward: [(0, '100.100'), (1, '93.430')]
+[2023-09-26 01:30:40,093][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000027424_7020544.pth...
+[2023-09-26 01:30:40,093][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000027424_7020544.pth...
+[2023-09-26 01:30:40,124][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000024368_6238208.pth
+[2023-09-26 01:30:40,132][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000024368_6238208.pth
+[2023-09-26 01:30:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14073856. Throughput: 0: 820.6, 1: 821.2. Samples: 3518179. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:30:45,083][128642] Avg episode reward: [(0, '100.720'), (1, '93.950')]
+[2023-09-26 01:30:47,157][129496] Updated weights for policy 1, policy_version 27520 (0.0018)
+[2023-09-26 01:30:47,158][129495] Updated weights for policy 0, policy_version 27520 (0.0017)
+[2023-09-26 01:30:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14106624. Throughput: 0: 822.8, 1: 819.5. Samples: 3522794. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:30:50,083][128642] Avg episode reward: [(0, '101.010'), (1, '95.440')]
+[2023-09-26 01:30:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6539.7). Total num frames: 14139392. Throughput: 0: 821.5, 1: 822.2. Samples: 3532903. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:30:55,083][128642] Avg episode reward: [(0, '102.180'), (1, '92.360')]
+[2023-09-26 01:30:59,452][129495] Updated weights for policy 0, policy_version 27680 (0.0016)
+[2023-09-26 01:30:59,453][129496] Updated weights for policy 1, policy_version 27680 (0.0016)
+[2023-09-26 01:31:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14172160. Throughput: 0: 823.0, 1: 826.8. Samples: 3543010. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 01:31:00,083][128642] Avg episode reward: [(0, '102.530'), (1, '94.120')]
+[2023-09-26 01:31:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14204928. Throughput: 0: 826.0, 1: 825.2. Samples: 3547840. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:31:05,083][128642] Avg episode reward: [(0, '98.230'), (1, '93.150')]
+[2023-09-26 01:31:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14237696. Throughput: 0: 821.8, 1: 820.4. Samples: 3557495. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:31:10,083][128642] Avg episode reward: [(0, '102.200'), (1, '96.830')]
+[2023-09-26 01:31:12,011][129495] Updated weights for policy 0, policy_version 27840 (0.0016)
+[2023-09-26 01:31:12,011][129496] Updated weights for policy 1, policy_version 27840 (0.0017)
+[2023-09-26 01:31:15,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14270464. Throughput: 0: 824.0, 1: 826.8. Samples: 3567538. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:31:15,082][128642] Avg episode reward: [(0, '101.020'), (1, '100.050')]
+[2023-09-26 01:31:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14303232. Throughput: 0: 825.9, 1: 824.2. Samples: 3572346. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:31:20,082][128642] Avg episode reward: [(0, '105.950'), (1, '100.620')]
+[2023-09-26 01:31:20,083][129304] Saving new best policy, reward=105.950!
+[2023-09-26 01:31:24,418][129496] Updated weights for policy 1, policy_version 28000 (0.0012)
+[2023-09-26 01:31:24,419][129495] Updated weights for policy 0, policy_version 28000 (0.0016)
+[2023-09-26 01:31:25,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14336000. Throughput: 0: 824.0, 1: 820.1. Samples: 3582178. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 01:31:25,083][128642] Avg episode reward: [(0, '99.660'), (1, '98.780')]
+[2023-09-26 01:31:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14368768. Throughput: 0: 820.0, 1: 823.6. Samples: 3592144. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 01:31:30,082][128642] Avg episode reward: [(0, '102.630'), (1, '97.190')]
+[2023-09-26 01:31:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14401536. Throughput: 0: 824.9, 1: 824.5. Samples: 3597016. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 01:31:35,083][128642] Avg episode reward: [(0, '103.690'), (1, '95.340')]
+[2023-09-26 01:31:36,838][129495] Updated weights for policy 0, policy_version 28160 (0.0016)
+[2023-09-26 01:31:36,839][129496] Updated weights for policy 1, policy_version 28160 (0.0017)
+[2023-09-26 01:31:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14434304. Throughput: 0: 823.6, 1: 821.2. Samples: 3606920. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 01:31:40,083][128642] Avg episode reward: [(0, '106.530'), (1, '90.880')]
+[2023-09-26 01:31:40,096][129304] Saving new best policy, reward=106.530!
+[2023-09-26 01:31:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14467072. Throughput: 0: 819.3, 1: 819.8. Samples: 3616768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:31:45,083][128642] Avg episode reward: [(0, '105.080'), (1, '91.930')]
+[2023-09-26 01:31:49,286][129495] Updated weights for policy 0, policy_version 28320 (0.0018)
+[2023-09-26 01:31:49,286][129496] Updated weights for policy 1, policy_version 28320 (0.0016)
+[2023-09-26 01:31:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14499840. Throughput: 0: 821.8, 1: 820.7. Samples: 3621754. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:31:50,083][128642] Avg episode reward: [(0, '103.330'), (1, '95.150')]
+[2023-09-26 01:31:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14532608. Throughput: 0: 820.7, 1: 819.3. Samples: 3631294. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:31:55,083][128642] Avg episode reward: [(0, '102.110'), (1, '94.980')]
+[2023-09-26 01:32:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14565376. Throughput: 0: 818.6, 1: 817.1. Samples: 3641145. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:32:00,083][128642] Avg episode reward: [(0, '102.560'), (1, '93.830')]
+[2023-09-26 01:32:02,133][129495] Updated weights for policy 0, policy_version 28480 (0.0016)
+[2023-09-26 01:32:02,134][129496] Updated weights for policy 1, policy_version 28480 (0.0016)
+[2023-09-26 01:32:05,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14598144. Throughput: 0: 815.6, 1: 815.1. Samples: 3645729. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:32:05,083][128642] Avg episode reward: [(0, '102.680'), (1, '89.290')]
+[2023-09-26 01:32:10,082][128642] Fps is (10 sec: 6553.3, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14630912. Throughput: 0: 815.8, 1: 819.0. Samples: 3655744. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:32:10,084][128642] Avg episode reward: [(0, '107.270'), (1, '89.910')]
+[2023-09-26 01:32:10,094][129304] Saving new best policy, reward=107.270!
+[2023-09-26 01:32:14,621][129495] Updated weights for policy 0, policy_version 28640 (0.0016)
+[2023-09-26 01:32:14,621][129496] Updated weights for policy 1, policy_version 28640 (0.0017)
+[2023-09-26 01:32:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14663680. Throughput: 0: 819.2, 1: 816.2. Samples: 3665735. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:32:15,083][128642] Avg episode reward: [(0, '107.300'), (1, '87.130')]
+[2023-09-26 01:32:15,085][129304] Saving new best policy, reward=107.300!
+[2023-09-26 01:32:20,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14696448. Throughput: 0: 815.9, 1: 816.0. Samples: 3670453. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:32:20,083][128642] Avg episode reward: [(0, '104.850'), (1, '87.870')]
+[2023-09-26 01:32:25,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14729216. Throughput: 0: 812.8, 1: 817.2. Samples: 3680270. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:32:25,083][128642] Avg episode reward: [(0, '104.190'), (1, '92.310')]
+[2023-09-26 01:32:27,140][129495] Updated weights for policy 0, policy_version 28800 (0.0016)
+[2023-09-26 01:32:27,140][129496] Updated weights for policy 1, policy_version 28800 (0.0017)
+[2023-09-26 01:32:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14761984. Throughput: 0: 818.9, 1: 814.5. Samples: 3690270. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:32:30,083][128642] Avg episode reward: [(0, '104.370'), (1, '90.200')]
+[2023-09-26 01:32:35,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14794752. Throughput: 0: 813.4, 1: 813.5. Samples: 3694967. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:32:35,083][128642] Avg episode reward: [(0, '107.260'), (1, '88.080')]
+[2023-09-26 01:32:39,521][129496] Updated weights for policy 1, policy_version 28960 (0.0016)
+[2023-09-26 01:32:39,521][129495] Updated weights for policy 0, policy_version 28960 (0.0017)
+[2023-09-26 01:32:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14827520. Throughput: 0: 817.4, 1: 819.1. Samples: 3704935. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 01:32:40,083][128642] Avg episode reward: [(0, '108.300'), (1, '84.340')]
+[2023-09-26 01:32:40,093][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000028960_7413760.pth...
+[2023-09-26 01:32:40,093][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000028960_7413760.pth... +[2023-09-26 01:32:40,129][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000025888_6627328.pth +[2023-09-26 01:32:40,136][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000025904_6631424.pth +[2023-09-26 01:32:40,141][129304] Saving new best policy, reward=108.300! +[2023-09-26 01:32:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14860288. Throughput: 0: 819.8, 1: 821.0. Samples: 3714980. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:32:45,083][128642] Avg episode reward: [(0, '105.620'), (1, '82.760')] +[2023-09-26 01:32:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14893056. Throughput: 0: 821.3, 1: 821.5. Samples: 3719656. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:32:50,083][128642] Avg episode reward: [(0, '99.810'), (1, '85.750')] +[2023-09-26 01:32:51,942][129496] Updated weights for policy 1, policy_version 29120 (0.0016) +[2023-09-26 01:32:51,942][129495] Updated weights for policy 0, policy_version 29120 (0.0015) +[2023-09-26 01:32:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14925824. Throughput: 0: 822.0, 1: 819.3. Samples: 3729601. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 01:32:55,083][128642] Avg episode reward: [(0, '100.590'), (1, '86.050')] +[2023-09-26 01:33:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14958592. Throughput: 0: 819.8, 1: 823.3. Samples: 3739676. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:33:00,083][128642] Avg episode reward: [(0, '99.180'), (1, '88.360')] +[2023-09-26 01:33:04,419][129495] Updated weights for policy 0, policy_version 29280 (0.0018) +[2023-09-26 01:33:04,419][129496] Updated weights for policy 1, policy_version 29280 (0.0019) +[2023-09-26 01:33:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14991360. Throughput: 0: 824.2, 1: 823.7. Samples: 3744611. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:33:05,083][128642] Avg episode reward: [(0, '101.180'), (1, '92.230')] +[2023-09-26 01:33:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15024128. Throughput: 0: 824.6, 1: 820.2. Samples: 3754286. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:33:10,083][128642] Avg episode reward: [(0, '100.770'), (1, '93.120')] +[2023-09-26 01:33:15,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15056896. Throughput: 0: 820.8, 1: 824.0. Samples: 3764288. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:33:15,083][128642] Avg episode reward: [(0, '101.950'), (1, '92.080')] +[2023-09-26 01:33:16,828][129495] Updated weights for policy 0, policy_version 29440 (0.0017) +[2023-09-26 01:33:16,828][129496] Updated weights for policy 1, policy_version 29440 (0.0016) +[2023-09-26 01:33:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15089664. Throughput: 0: 824.6, 1: 825.2. Samples: 3769212. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:33:20,083][128642] Avg episode reward: [(0, '103.550'), (1, '92.770')] +[2023-09-26 01:33:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15122432. Throughput: 0: 817.9, 1: 819.2. Samples: 3778606. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:33:25,083][128642] Avg episode reward: [(0, '104.180'), (1, '97.950')] +[2023-09-26 01:33:29,693][129495] Updated weights for policy 0, policy_version 29600 (0.0018) +[2023-09-26 01:33:29,693][129496] Updated weights for policy 1, policy_version 29600 (0.0016) +[2023-09-26 01:33:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15155200. Throughput: 0: 817.9, 1: 815.0. Samples: 3788464. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:33:30,082][128642] Avg episode reward: [(0, '99.030'), (1, '97.970')] +[2023-09-26 01:33:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15187968. Throughput: 0: 816.2, 1: 816.1. Samples: 3793112. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:33:35,082][128642] Avg episode reward: [(0, '99.240'), (1, '97.380')] +[2023-09-26 01:33:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15220736. Throughput: 0: 815.0, 1: 818.4. Samples: 3803104. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:33:40,083][128642] Avg episode reward: [(0, '94.980'), (1, '100.140')] +[2023-09-26 01:33:42,150][129495] Updated weights for policy 0, policy_version 29760 (0.0014) +[2023-09-26 01:33:42,150][129496] Updated weights for policy 1, policy_version 29760 (0.0015) +[2023-09-26 01:33:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15253504. Throughput: 0: 817.9, 1: 813.9. Samples: 3813109. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:33:45,083][128642] Avg episode reward: [(0, '102.500'), (1, '101.690')] +[2023-09-26 01:33:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15286272. Throughput: 0: 811.6, 1: 812.4. Samples: 3817692. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:33:50,082][128642] Avg episode reward: [(0, '99.810'), (1, '103.120')] +[2023-09-26 01:33:54,602][129496] Updated weights for policy 1, policy_version 29920 (0.0017) +[2023-09-26 01:33:54,602][129495] Updated weights for policy 0, policy_version 29920 (0.0017) +[2023-09-26 01:33:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15319040. Throughput: 0: 814.4, 1: 818.2. Samples: 3827751. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 01:33:55,082][128642] Avg episode reward: [(0, '97.320'), (1, '103.590')] +[2023-09-26 01:34:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15351808. Throughput: 0: 817.8, 1: 816.2. Samples: 3837815. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:00,083][128642] Avg episode reward: [(0, '91.760'), (1, '102.400')] +[2023-09-26 01:34:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15384576. Throughput: 0: 816.2, 1: 815.6. Samples: 3842640. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:05,083][128642] Avg episode reward: [(0, '94.360'), (1, '103.310')] +[2023-09-26 01:34:06,900][129496] Updated weights for policy 1, policy_version 30080 (0.0018) +[2023-09-26 01:34:06,901][129495] Updated weights for policy 0, policy_version 30080 (0.0016) +[2023-09-26 01:34:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15417344. Throughput: 0: 823.0, 1: 819.6. Samples: 3852524. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:10,082][128642] Avg episode reward: [(0, '95.140'), (1, '103.510')] +[2023-09-26 01:34:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15450112. Throughput: 0: 820.6, 1: 825.2. Samples: 3862526. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:15,083][128642] Avg episode reward: [(0, '97.100'), (1, '104.250')] +[2023-09-26 01:34:19,341][129495] Updated weights for policy 0, policy_version 30240 (0.0017) +[2023-09-26 01:34:19,341][129496] Updated weights for policy 1, policy_version 30240 (0.0016) +[2023-09-26 01:34:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15482880. Throughput: 0: 825.5, 1: 825.4. Samples: 3867406. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:20,083][128642] Avg episode reward: [(0, '100.580'), (1, '103.970')] +[2023-09-26 01:34:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15515648. Throughput: 0: 823.8, 1: 820.0. Samples: 3877079. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:25,083][128642] Avg episode reward: [(0, '100.860'), (1, '103.450')] +[2023-09-26 01:34:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15548416. Throughput: 0: 819.9, 1: 824.5. Samples: 3887105. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:30,083][128642] Avg episode reward: [(0, '103.560'), (1, '106.270')] +[2023-09-26 01:34:32,000][129495] Updated weights for policy 0, policy_version 30400 (0.0016) +[2023-09-26 01:34:32,000][129496] Updated weights for policy 1, policy_version 30400 (0.0020) +[2023-09-26 01:34:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15581184. Throughput: 0: 822.9, 1: 822.7. Samples: 3891743. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:35,083][128642] Avg episode reward: [(0, '102.840'), (1, '105.220')] +[2023-09-26 01:34:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15613952. Throughput: 0: 823.5, 1: 819.7. Samples: 3901695. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:40,083][128642] Avg episode reward: [(0, '100.480'), (1, '103.740')] +[2023-09-26 01:34:40,095][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000030496_7806976.pth... +[2023-09-26 01:34:40,095][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000030496_7806976.pth... +[2023-09-26 01:34:40,133][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000027424_7020544.pth +[2023-09-26 01:34:40,134][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000027424_7020544.pth +[2023-09-26 01:34:44,393][129496] Updated weights for policy 1, policy_version 30560 (0.0017) +[2023-09-26 01:34:44,393][129495] Updated weights for policy 0, policy_version 30560 (0.0017) +[2023-09-26 01:34:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15646720. Throughput: 0: 819.3, 1: 822.2. Samples: 3911685. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:45,083][128642] Avg episode reward: [(0, '98.530'), (1, '104.100')] +[2023-09-26 01:34:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15679488. Throughput: 0: 821.2, 1: 821.1. Samples: 3916542. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:50,083][128642] Avg episode reward: [(0, '98.080'), (1, '106.840')] +[2023-09-26 01:34:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15712256. Throughput: 0: 820.3, 1: 819.9. Samples: 3926333. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:34:55,083][128642] Avg episode reward: [(0, '95.360'), (1, '108.430')] +[2023-09-26 01:34:56,824][129495] Updated weights for policy 0, policy_version 30720 (0.0016) +[2023-09-26 01:34:56,824][129496] Updated weights for policy 1, policy_version 30720 (0.0017) +[2023-09-26 01:35:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15745024. Throughput: 0: 819.2, 1: 819.3. Samples: 3936256. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:00,082][128642] Avg episode reward: [(0, '97.080'), (1, '104.080')] +[2023-09-26 01:35:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15777792. Throughput: 0: 819.3, 1: 818.9. Samples: 3941125. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:05,082][128642] Avg episode reward: [(0, '96.110'), (1, '103.910')] +[2023-09-26 01:35:09,353][129495] Updated weights for policy 0, policy_version 30880 (0.0017) +[2023-09-26 01:35:09,354][129496] Updated weights for policy 1, policy_version 30880 (0.0017) +[2023-09-26 01:35:10,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15810560. Throughput: 0: 820.7, 1: 820.6. Samples: 3950937. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:10,083][128642] Avg episode reward: [(0, '98.310'), (1, '103.260')] +[2023-09-26 01:35:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15843328. Throughput: 0: 821.3, 1: 819.2. Samples: 3960927. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:15,083][128642] Avg episode reward: [(0, '93.990'), (1, '104.240')] +[2023-09-26 01:35:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15876096. Throughput: 0: 823.7, 1: 823.0. Samples: 3965842. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:20,083][128642] Avg episode reward: [(0, '97.640'), (1, '104.460')] +[2023-09-26 01:35:22,177][129496] Updated weights for policy 1, policy_version 31040 (0.0017) +[2023-09-26 01:35:22,177][129495] Updated weights for policy 0, policy_version 31040 (0.0018) +[2023-09-26 01:35:25,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15908864. Throughput: 0: 814.1, 1: 818.4. Samples: 3975156. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:25,082][128642] Avg episode reward: [(0, '99.870'), (1, '105.360')] +[2023-09-26 01:35:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15941632. Throughput: 0: 819.1, 1: 815.7. Samples: 3985249. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:30,083][128642] Avg episode reward: [(0, '104.850'), (1, '105.590')] +[2023-09-26 01:35:34,632][129496] Updated weights for policy 1, policy_version 31200 (0.0017) +[2023-09-26 01:35:34,632][129495] Updated weights for policy 0, policy_version 31200 (0.0017) +[2023-09-26 01:35:35,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 15974400. Throughput: 0: 814.2, 1: 814.3. Samples: 3989828. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:35,083][128642] Avg episode reward: [(0, '106.240'), (1, '104.330')] +[2023-09-26 01:35:40,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16007168. Throughput: 0: 813.5, 1: 818.1. Samples: 3999752. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:40,082][128642] Avg episode reward: [(0, '104.260'), (1, '103.470')] +[2023-09-26 01:35:45,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16039936. Throughput: 0: 819.1, 1: 816.1. Samples: 4009841. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:45,083][128642] Avg episode reward: [(0, '104.310'), (1, '107.750')] +[2023-09-26 01:35:47,141][129496] Updated weights for policy 1, policy_version 31360 (0.0019) +[2023-09-26 01:35:47,141][129495] Updated weights for policy 0, policy_version 31360 (0.0017) +[2023-09-26 01:35:50,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16072704. Throughput: 0: 813.4, 1: 813.8. Samples: 4014351. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:50,083][128642] Avg episode reward: [(0, '102.810'), (1, '103.880')] +[2023-09-26 01:35:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16105472. Throughput: 0: 813.0, 1: 815.9. Samples: 4024239. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:35:55,083][128642] Avg episode reward: [(0, '102.470'), (1, '107.530')] +[2023-09-26 01:35:59,749][129495] Updated weights for policy 0, policy_version 31520 (0.0017) +[2023-09-26 01:35:59,749][129496] Updated weights for policy 1, policy_version 31520 (0.0017) +[2023-09-26 01:36:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16138240. Throughput: 0: 814.0, 1: 811.6. Samples: 4034076. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:36:00,083][128642] Avg episode reward: [(0, '103.280'), (1, '107.370')] +[2023-09-26 01:36:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16171008. Throughput: 0: 808.4, 1: 811.8. Samples: 4038751. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:36:05,082][128642] Avg episode reward: [(0, '99.550'), (1, '104.520')] +[2023-09-26 01:36:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16203776. Throughput: 0: 819.2, 1: 816.1. Samples: 4048742. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:36:10,083][128642] Avg episode reward: [(0, '103.480'), (1, '102.680')] +[2023-09-26 01:36:12,388][129495] Updated weights for policy 0, policy_version 31680 (0.0016) +[2023-09-26 01:36:12,388][129496] Updated weights for policy 1, policy_version 31680 (0.0017) +[2023-09-26 01:36:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16236544. Throughput: 0: 811.1, 1: 810.8. Samples: 4058232. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:36:15,083][128642] Avg episode reward: [(0, '101.950'), (1, '105.090')] +[2023-09-26 01:36:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16269312. Throughput: 0: 813.3, 1: 817.2. Samples: 4063202. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:36:20,083][128642] Avg episode reward: [(0, '104.080'), (1, '104.410')] +[2023-09-26 01:36:24,940][129496] Updated weights for policy 1, policy_version 31840 (0.0017) +[2023-09-26 01:36:24,940][129495] Updated weights for policy 0, policy_version 31840 (0.0017) +[2023-09-26 01:36:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16302080. Throughput: 0: 817.3, 1: 812.3. Samples: 4073085. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:36:25,083][128642] Avg episode reward: [(0, '103.340'), (1, '102.900')] +[2023-09-26 01:36:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16334848. Throughput: 0: 812.7, 1: 811.5. Samples: 4082929. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:36:30,083][128642] Avg episode reward: [(0, '106.480'), (1, '103.330')] +[2023-09-26 01:36:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16367616. Throughput: 0: 814.2, 1: 818.5. Samples: 4087823. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:36:35,083][128642] Avg episode reward: [(0, '107.780'), (1, '105.400')] +[2023-09-26 01:36:37,406][129496] Updated weights for policy 1, policy_version 32000 (0.0017) +[2023-09-26 01:36:37,406][129495] Updated weights for policy 0, policy_version 32000 (0.0018) +[2023-09-26 01:36:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16400384. Throughput: 0: 818.4, 1: 815.2. Samples: 4097752. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:36:40,083][128642] Avg episode reward: [(0, '110.670'), (1, '105.490')] +[2023-09-26 01:36:40,094][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000032032_8200192.pth... +[2023-09-26 01:36:40,094][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000032032_8200192.pth... +[2023-09-26 01:36:40,130][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000028960_7413760.pth +[2023-09-26 01:36:40,134][129304] Saving new best policy, reward=110.670! +[2023-09-26 01:36:40,135][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000028960_7413760.pth +[2023-09-26 01:36:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16433152. Throughput: 0: 815.0, 1: 814.6. Samples: 4107408. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:36:45,083][128642] Avg episode reward: [(0, '112.770'), (1, '103.380')] +[2023-09-26 01:36:45,084][129304] Saving new best policy, reward=112.770! +[2023-09-26 01:36:49,968][129495] Updated weights for policy 0, policy_version 32160 (0.0017) +[2023-09-26 01:36:49,968][129496] Updated weights for policy 1, policy_version 32160 (0.0016) +[2023-09-26 01:36:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16465920. Throughput: 0: 817.1, 1: 818.3. Samples: 4112342. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:36:50,082][128642] Avg episode reward: [(0, '111.630'), (1, '101.730')] +[2023-09-26 01:36:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 16498688. Throughput: 0: 816.8, 1: 815.1. Samples: 4122178. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:36:55,083][128642] Avg episode reward: [(0, '108.640'), (1, '102.900')] +[2023-09-26 01:37:00,082][128642] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16523264. Throughput: 0: 816.4, 1: 815.7. Samples: 4131677. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:37:00,083][128642] Avg episode reward: [(0, '101.590'), (1, '103.420')] +[2023-09-26 01:37:02,732][129495] Updated weights for policy 0, policy_version 32320 (0.0019) +[2023-09-26 01:37:02,732][129496] Updated weights for policy 1, policy_version 32320 (0.0019) +[2023-09-26 01:37:05,082][128642] Fps is (10 sec: 5734.3, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 16556032. Throughput: 0: 819.2, 1: 813.7. Samples: 4136682. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:37:05,083][128642] Avg episode reward: [(0, '104.800'), (1, '104.340')] +[2023-09-26 01:37:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16588800. Throughput: 0: 811.9, 1: 812.3. Samples: 4146175. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 01:37:10,082][128642] Avg episode reward: [(0, '103.060'), (1, '103.210')] +[2023-09-26 01:37:15,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16621568. Throughput: 0: 812.0, 1: 812.1. Samples: 4156013. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 01:37:15,083][128642] Avg episode reward: [(0, '103.090'), (1, '105.690')] +[2023-09-26 01:37:15,253][129495] Updated weights for policy 0, policy_version 32480 (0.0016) +[2023-09-26 01:37:15,253][129496] Updated weights for policy 1, policy_version 32480 (0.0017) +[2023-09-26 01:37:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16654336. Throughput: 0: 816.7, 1: 812.5. Samples: 4161135. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 01:37:20,083][128642] Avg episode reward: [(0, '104.550'), (1, '107.670')] +[2023-09-26 01:37:25,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16687104. Throughput: 0: 807.4, 1: 807.8. Samples: 4170433. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 01:37:25,083][128642] Avg episode reward: [(0, '104.630'), (1, '108.030')] +[2023-09-26 01:37:27,893][129495] Updated weights for policy 0, policy_version 32640 (0.0015) +[2023-09-26 01:37:27,893][129496] Updated weights for policy 1, policy_version 32640 (0.0016) +[2023-09-26 01:37:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16719872. Throughput: 0: 808.9, 1: 809.1. Samples: 4180217. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 01:37:30,083][128642] Avg episode reward: [(0, '105.020'), (1, '106.600')] +[2023-09-26 01:37:35,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16752640. Throughput: 0: 812.9, 1: 809.2. Samples: 4185337. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:37:35,083][128642] Avg episode reward: [(0, '105.850'), (1, '106.540')] +[2023-09-26 01:37:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16785408. Throughput: 0: 808.8, 1: 810.2. Samples: 4195033. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:37:40,083][128642] Avg episode reward: [(0, '106.930'), (1, '109.260')] +[2023-09-26 01:37:40,094][129382] Saving new best policy, reward=109.260! +[2023-09-26 01:37:40,457][129495] Updated weights for policy 0, policy_version 32800 (0.0018) +[2023-09-26 01:37:40,457][129496] Updated weights for policy 1, policy_version 32800 (0.0016) +[2023-09-26 01:37:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16818176. Throughput: 0: 811.3, 1: 812.0. Samples: 4204724. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:37:45,083][128642] Avg episode reward: [(0, '108.680'), (1, '108.750')] +[2023-09-26 01:37:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 16850944. Throughput: 0: 811.1, 1: 812.2. Samples: 4209730. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:37:50,083][128642] Avg episode reward: [(0, '108.990'), (1, '107.690')] +[2023-09-26 01:37:53,187][129496] Updated weights for policy 1, policy_version 32960 (0.0017) +[2023-09-26 01:37:53,187][129495] Updated weights for policy 0, policy_version 32960 (0.0015) +[2023-09-26 01:37:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 16883712. Throughput: 0: 809.8, 1: 810.3. Samples: 4219080. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:37:55,083][128642] Avg episode reward: [(0, '107.750'), (1, '105.640')] +[2023-09-26 01:38:00,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16916480. Throughput: 0: 810.1, 1: 812.8. Samples: 4229047. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:38:00,083][128642] Avg episode reward: [(0, '105.940'), (1, '106.820')] +[2023-09-26 01:38:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16949248. Throughput: 0: 808.2, 1: 807.9. Samples: 4233858. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:38:05,082][128642] Avg episode reward: [(0, '108.090'), (1, '105.650')] +[2023-09-26 01:38:05,622][129495] Updated weights for policy 0, policy_version 33120 (0.0017) +[2023-09-26 01:38:05,622][129496] Updated weights for policy 1, policy_version 33120 (0.0017) +[2023-09-26 01:38:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16982016. Throughput: 0: 814.8, 1: 814.9. Samples: 4243772. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 01:38:10,082][128642] Avg episode reward: [(0, '105.720'), (1, '108.020')] +[2023-09-26 01:38:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17014784. Throughput: 0: 814.1, 1: 815.9. Samples: 4253566. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:15,083][128642] Avg episode reward: [(0, '105.160'), (1, '106.880')] +[2023-09-26 01:38:18,292][129495] Updated weights for policy 0, policy_version 33280 (0.0014) +[2023-09-26 01:38:18,293][129496] Updated weights for policy 1, policy_version 33280 (0.0017) +[2023-09-26 01:38:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17047552. Throughput: 0: 809.2, 1: 809.3. Samples: 4258173. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:20,083][128642] Avg episode reward: [(0, '104.770'), (1, '107.840')] +[2023-09-26 01:38:25,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17080320. Throughput: 0: 813.1, 1: 813.1. Samples: 4268213. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:25,083][128642] Avg episode reward: [(0, '107.770'), (1, '107.890')] +[2023-09-26 01:38:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17113088. Throughput: 0: 815.3, 1: 818.2. Samples: 4278230. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:30,083][128642] Avg episode reward: [(0, '106.720'), (1, '106.830')] +[2023-09-26 01:38:30,701][129495] Updated weights for policy 0, policy_version 33440 (0.0016) +[2023-09-26 01:38:30,701][129496] Updated weights for policy 1, policy_version 33440 (0.0016) +[2023-09-26 01:38:35,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17145856. Throughput: 0: 814.6, 1: 814.9. Samples: 4283057. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:35,083][128642] Avg episode reward: [(0, '105.070'), (1, '104.750')] +[2023-09-26 01:38:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17178624. Throughput: 0: 817.3, 1: 819.1. Samples: 4292718. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:40,083][128642] Avg episode reward: [(0, '105.490'), (1, '107.230')] +[2023-09-26 01:38:40,094][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000033552_8589312.pth... +[2023-09-26 01:38:40,094][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000033552_8589312.pth... +[2023-09-26 01:38:40,125][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000030496_7806976.pth +[2023-09-26 01:38:40,130][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000030496_7806976.pth +[2023-09-26 01:38:43,227][129496] Updated weights for policy 1, policy_version 33600 (0.0017) +[2023-09-26 01:38:43,227][129495] Updated weights for policy 0, policy_version 33600 (0.0017) +[2023-09-26 01:38:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17211392. Throughput: 0: 819.2, 1: 819.7. Samples: 4302799. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:45,083][128642] Avg episode reward: [(0, '107.110'), (1, '107.840')] +[2023-09-26 01:38:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17244160. Throughput: 0: 820.0, 1: 820.3. Samples: 4307672. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:50,083][128642] Avg episode reward: [(0, '111.340'), (1, '105.780')] +[2023-09-26 01:38:55,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17276928. Throughput: 0: 819.5, 1: 819.5. Samples: 4317527. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:38:55,082][128642] Avg episode reward: [(0, '109.990'), (1, '104.050')] +[2023-09-26 01:38:55,623][129495] Updated weights for policy 0, policy_version 33760 (0.0017) +[2023-09-26 01:38:55,623][129496] Updated weights for policy 1, policy_version 33760 (0.0018) +[2023-09-26 01:39:00,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17309696. Throughput: 0: 819.2, 1: 821.6. Samples: 4327402. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:00,083][128642] Avg episode reward: [(0, '108.300'), (1, '108.280')] +[2023-09-26 01:39:05,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17342464. Throughput: 0: 822.3, 1: 823.0. Samples: 4332213. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:05,083][128642] Avg episode reward: [(0, '106.100'), (1, '104.570')] +[2023-09-26 01:39:08,263][129496] Updated weights for policy 1, policy_version 33920 (0.0018) +[2023-09-26 01:39:08,263][129495] Updated weights for policy 0, policy_version 33920 (0.0019) +[2023-09-26 01:39:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17375232. Throughput: 0: 817.7, 1: 819.1. Samples: 4341870. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:10,083][128642] Avg episode reward: [(0, '107.530'), (1, '106.910')] +[2023-09-26 01:39:15,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17408000. Throughput: 0: 819.2, 1: 819.0. Samples: 4351947. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:15,082][128642] Avg episode reward: [(0, '103.650'), (1, '104.310')] +[2023-09-26 01:39:20,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17440768. Throughput: 0: 817.6, 1: 817.6. Samples: 4356640. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:20,082][128642] Avg episode reward: [(0, '101.700'), (1, '104.300')] +[2023-09-26 01:39:20,735][129496] Updated weights for policy 1, policy_version 34080 (0.0018) +[2023-09-26 01:39:20,736][129495] Updated weights for policy 0, policy_version 34080 (0.0018) +[2023-09-26 01:39:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17473536. Throughput: 0: 820.8, 1: 819.3. Samples: 4366522. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:25,083][128642] Avg episode reward: [(0, '99.330'), (1, '107.240')] +[2023-09-26 01:39:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17506304. Throughput: 0: 819.2, 1: 818.5. Samples: 4376493. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:30,083][128642] Avg episode reward: [(0, '99.380'), (1, '104.060')] +[2023-09-26 01:39:33,293][129495] Updated weights for policy 0, policy_version 34240 (0.0017) +[2023-09-26 01:39:33,294][129496] Updated weights for policy 1, policy_version 34240 (0.0017) +[2023-09-26 01:39:35,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17539072. Throughput: 0: 817.2, 1: 817.0. Samples: 4381210. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:35,083][128642] Avg episode reward: [(0, '102.970'), (1, '102.290')] +[2023-09-26 01:39:40,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17571840. Throughput: 0: 815.0, 1: 817.6. Samples: 4390996. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:40,083][128642] Avg episode reward: [(0, '101.060'), (1, '103.020')] +[2023-09-26 01:39:45,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17604608. Throughput: 0: 819.2, 1: 819.3. Samples: 4401135. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:45,083][128642] Avg episode reward: [(0, '102.910'), (1, '104.350')] +[2023-09-26 01:39:45,756][129495] Updated weights for policy 0, policy_version 34400 (0.0016) +[2023-09-26 01:39:45,756][129496] Updated weights for policy 1, policy_version 34400 (0.0015) +[2023-09-26 01:39:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17637376. Throughput: 0: 817.7, 1: 816.7. Samples: 4405761. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:50,083][128642] Avg episode reward: [(0, '101.390'), (1, '103.070')] +[2023-09-26 01:39:55,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17670144. Throughput: 0: 817.0, 1: 819.2. Samples: 4415501. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:39:55,083][128642] Avg episode reward: [(0, '103.470'), (1, '107.540')] +[2023-09-26 01:39:58,364][129496] Updated weights for policy 1, policy_version 34560 (0.0016) +[2023-09-26 01:39:58,364][129495] Updated weights for policy 0, policy_version 34560 (0.0017) +[2023-09-26 01:40:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17702912. Throughput: 0: 819.0, 1: 815.6. Samples: 4425507. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:40:00,083][128642] Avg episode reward: [(0, '103.720'), (1, '104.690')] +[2023-09-26 01:40:05,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17735680. Throughput: 0: 815.8, 1: 815.8. Samples: 4430059. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:40:05,082][128642] Avg episode reward: [(0, '104.860'), (1, '104.890')] +[2023-09-26 01:40:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17768448. Throughput: 0: 815.2, 1: 818.1. Samples: 4440018. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:40:10,083][128642] Avg episode reward: [(0, '106.060'), (1, '103.160')] +[2023-09-26 01:40:10,889][129495] Updated weights for policy 0, policy_version 34720 (0.0016) +[2023-09-26 01:40:10,889][129496] Updated weights for policy 1, policy_version 34720 (0.0014) +[2023-09-26 01:40:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17801216. Throughput: 0: 819.2, 1: 817.3. Samples: 4450135. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:40:15,082][128642] Avg episode reward: [(0, '108.270'), (1, '104.450')] +[2023-09-26 01:40:20,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17833984. Throughput: 0: 817.7, 1: 817.7. Samples: 4454801. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:40:20,083][128642] Avg episode reward: [(0, '107.840'), (1, '104.270')] +[2023-09-26 01:40:23,295][129495] Updated weights for policy 0, policy_version 34880 (0.0018) +[2023-09-26 01:40:23,295][129496] Updated weights for policy 1, policy_version 34880 (0.0018) +[2023-09-26 01:40:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17866752. Throughput: 0: 819.2, 1: 819.2. Samples: 4464723. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:40:25,082][128642] Avg episode reward: [(0, '104.780'), (1, '108.060')] +[2023-09-26 01:40:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17899520. Throughput: 0: 815.3, 1: 810.9. Samples: 4474317. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:40:30,083][128642] Avg episode reward: [(0, '106.060'), (1, '107.010')] +[2023-09-26 01:40:35,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17932288. Throughput: 0: 814.1, 1: 815.9. Samples: 4479111. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:40:35,083][128642] Avg episode reward: [(0, '103.680'), (1, '104.240')] +[2023-09-26 01:40:35,991][129495] Updated weights for policy 0, policy_version 35040 (0.0017) +[2023-09-26 01:40:35,992][129496] Updated weights for policy 1, policy_version 35040 (0.0018) +[2023-09-26 01:40:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17965056. Throughput: 0: 818.9, 1: 817.8. Samples: 4489155. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:40:40,083][128642] Avg episode reward: [(0, '102.170'), (1, '103.150')] +[2023-09-26 01:40:40,096][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000035088_8982528.pth... +[2023-09-26 01:40:40,096][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000035088_8982528.pth... +[2023-09-26 01:40:40,132][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000032032_8200192.pth +[2023-09-26 01:40:40,133][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000032032_8200192.pth +[2023-09-26 01:40:45,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17997824. Throughput: 0: 817.3, 1: 817.4. Samples: 4499066. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:40:45,082][128642] Avg episode reward: [(0, '103.180'), (1, '104.220')] +[2023-09-26 01:40:48,456][129495] Updated weights for policy 0, policy_version 35200 (0.0017) +[2023-09-26 01:40:48,457][129496] Updated weights for policy 1, policy_version 35200 (0.0017) +[2023-09-26 01:40:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18030592. Throughput: 0: 817.2, 1: 819.0. Samples: 4503687. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:40:50,083][128642] Avg episode reward: [(0, '103.880'), (1, '106.580')] +[2023-09-26 01:40:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18063360. Throughput: 0: 819.5, 1: 820.2. Samples: 4513805. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:40:55,082][128642] Avg episode reward: [(0, '104.930'), (1, '105.880')] +[2023-09-26 01:41:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18096128. Throughput: 0: 819.2, 1: 818.5. Samples: 4523835. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:41:00,082][128642] Avg episode reward: [(0, '106.460'), (1, '103.380')] +[2023-09-26 01:41:00,833][129495] Updated weights for policy 0, policy_version 35360 (0.0019) +[2023-09-26 01:41:00,833][129496] Updated weights for policy 1, policy_version 35360 (0.0019) +[2023-09-26 01:41:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18128896. Throughput: 0: 819.8, 1: 819.7. Samples: 4528577. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:41:05,083][128642] Avg episode reward: [(0, '107.260'), (1, '101.620')] +[2023-09-26 01:41:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18161664. Throughput: 0: 817.7, 1: 819.2. Samples: 4538383. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:41:10,083][128642] Avg episode reward: [(0, '108.570'), (1, '102.240')] +[2023-09-26 01:41:13,359][129495] Updated weights for policy 0, policy_version 35520 (0.0017) +[2023-09-26 01:41:13,359][129496] Updated weights for policy 1, policy_version 35520 (0.0016) +[2023-09-26 01:41:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18194432. Throughput: 0: 823.1, 1: 823.6. Samples: 4548417. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:41:15,083][128642] Avg episode reward: [(0, '104.700'), (1, '99.360')] +[2023-09-26 01:41:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18227200. Throughput: 0: 822.4, 1: 820.8. Samples: 4553052. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:41:20,083][128642] Avg episode reward: [(0, '100.490'), (1, '103.340')] +[2023-09-26 01:41:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18259968. Throughput: 0: 821.8, 1: 820.6. Samples: 4563061. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:41:25,083][128642] Avg episode reward: [(0, '98.490'), (1, '99.880')] +[2023-09-26 01:41:25,751][129496] Updated weights for policy 1, policy_version 35680 (0.0017) +[2023-09-26 01:41:25,751][129495] Updated weights for policy 0, policy_version 35680 (0.0017) +[2023-09-26 01:41:30,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18292736. Throughput: 0: 821.3, 1: 824.5. Samples: 4573126. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:41:30,082][128642] Avg episode reward: [(0, '103.900'), (1, '100.760')] +[2023-09-26 01:41:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18325504. Throughput: 0: 825.2, 1: 823.8. Samples: 4577891. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:41:35,083][128642] Avg episode reward: [(0, '104.570'), (1, '102.660')] +[2023-09-26 01:41:38,208][129495] Updated weights for policy 0, policy_version 35840 (0.0015) +[2023-09-26 01:41:38,209][129496] Updated weights for policy 1, policy_version 35840 (0.0015) +[2023-09-26 01:41:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18358272. Throughput: 0: 822.9, 1: 819.2. Samples: 4587700. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:41:40,083][128642] Avg episode reward: [(0, '103.950'), (1, '104.490')] +[2023-09-26 01:41:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18391040. Throughput: 0: 819.2, 1: 819.2. Samples: 4597561. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:41:45,082][128642] Avg episode reward: [(0, '101.170'), (1, '102.440')] +[2023-09-26 01:41:50,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 18423808. Throughput: 0: 817.2, 1: 817.4. Samples: 4602135. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:41:50,083][128642] Avg episode reward: [(0, '100.590'), (1, '100.320')] +[2023-09-26 01:41:50,827][129495] Updated weights for policy 0, policy_version 36000 (0.0015) +[2023-09-26 01:41:50,827][129496] Updated weights for policy 1, policy_version 36000 (0.0016) +[2023-09-26 01:41:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18456576. Throughput: 0: 821.0, 1: 819.2. Samples: 4612190. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:41:55,082][128642] Avg episode reward: [(0, '100.700'), (1, '98.930')] +[2023-09-26 01:42:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18489344. Throughput: 0: 819.2, 1: 821.3. Samples: 4622238. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 01:42:00,083][128642] Avg episode reward: [(0, '98.190'), (1, '98.310')] +[2023-09-26 01:42:03,298][129496] Updated weights for policy 1, policy_version 36160 (0.0016) +[2023-09-26 01:42:03,298][129495] Updated weights for policy 0, policy_version 36160 (0.0016) +[2023-09-26 01:42:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18522112. Throughput: 0: 820.0, 1: 820.0. Samples: 4626848. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:42:05,083][128642] Avg episode reward: [(0, '99.560'), (1, '99.270')] +[2023-09-26 01:42:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18554880. Throughput: 0: 816.7, 1: 819.2. Samples: 4636678. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:42:10,083][128642] Avg episode reward: [(0, '104.590'), (1, '99.540')] +[2023-09-26 01:42:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18587648. Throughput: 0: 819.2, 1: 817.2. Samples: 4646762. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:42:15,083][128642] Avg episode reward: [(0, '105.460'), (1, '98.720')] +[2023-09-26 01:42:15,845][129495] Updated weights for policy 0, policy_version 36320 (0.0015) +[2023-09-26 01:42:15,846][129496] Updated weights for policy 1, policy_version 36320 (0.0016) +[2023-09-26 01:42:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18620416. Throughput: 0: 816.5, 1: 816.2. Samples: 4651364. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:42:20,083][128642] Avg episode reward: [(0, '104.080'), (1, '100.360')] +[2023-09-26 01:42:25,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18653184. Throughput: 0: 815.7, 1: 819.2. Samples: 4661268. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:42:25,083][128642] Avg episode reward: [(0, '106.330'), (1, '99.170')] +[2023-09-26 01:42:28,295][129495] Updated weights for policy 0, policy_version 36480 (0.0018) +[2023-09-26 01:42:28,295][129496] Updated weights for policy 1, policy_version 36480 (0.0017) +[2023-09-26 01:42:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18685952. Throughput: 0: 819.2, 1: 819.7. Samples: 4671312. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:42:30,082][128642] Avg episode reward: [(0, '103.840'), (1, '99.830')] +[2023-09-26 01:42:35,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18718720. Throughput: 0: 820.1, 1: 820.0. Samples: 4675940. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:42:35,083][128642] Avg episode reward: [(0, '102.470'), (1, '98.610')] +[2023-09-26 01:42:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18751488. Throughput: 0: 817.1, 1: 819.2. Samples: 4685825. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:42:40,082][128642] Avg episode reward: [(0, '99.560'), (1, '100.050')] +[2023-09-26 01:42:40,091][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000036624_9375744.pth... +[2023-09-26 01:42:40,092][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000036624_9375744.pth... 
+[2023-09-26 01:42:40,127][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000033552_8589312.pth +[2023-09-26 01:42:40,128][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000033552_8589312.pth +[2023-09-26 01:42:40,954][129495] Updated weights for policy 0, policy_version 36640 (0.0016) +[2023-09-26 01:42:40,954][129496] Updated weights for policy 1, policy_version 36640 (0.0015) +[2023-09-26 01:42:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18784256. Throughput: 0: 817.7, 1: 814.3. Samples: 4695678. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:42:45,083][128642] Avg episode reward: [(0, '101.470'), (1, '99.900')] +[2023-09-26 01:42:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18817024. Throughput: 0: 814.4, 1: 816.9. Samples: 4700254. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 01:42:50,082][128642] Avg episode reward: [(0, '103.950'), (1, '99.460')] +[2023-09-26 01:42:53,452][129496] Updated weights for policy 1, policy_version 36800 (0.0017) +[2023-09-26 01:42:53,452][129495] Updated weights for policy 0, policy_version 36800 (0.0017) +[2023-09-26 01:42:55,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18849792. Throughput: 0: 819.1, 1: 818.3. Samples: 4710359. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:42:55,083][128642] Avg episode reward: [(0, '106.560'), (1, '98.390')] +[2023-09-26 01:43:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18882560. Throughput: 0: 815.2, 1: 814.7. Samples: 4720107. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:43:00,083][128642] Avg episode reward: [(0, '100.130'), (1, '98.300')] +[2023-09-26 01:43:05,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18915328. 
Throughput: 0: 814.4, 1: 817.6. Samples: 4724804. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:43:05,083][128642] Avg episode reward: [(0, '99.270'), (1, '95.730')] +[2023-09-26 01:43:06,246][129495] Updated weights for policy 0, policy_version 36960 (0.0018) +[2023-09-26 01:43:06,247][129496] Updated weights for policy 1, policy_version 36960 (0.0019) +[2023-09-26 01:43:10,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18948096. Throughput: 0: 815.4, 1: 812.1. Samples: 4734505. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:43:10,083][128642] Avg episode reward: [(0, '97.550'), (1, '100.340')] +[2023-09-26 01:43:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18980864. Throughput: 0: 811.1, 1: 810.4. Samples: 4744280. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:43:15,083][128642] Avg episode reward: [(0, '99.020'), (1, '95.300')] +[2023-09-26 01:43:18,686][129495] Updated weights for policy 0, policy_version 37120 (0.0018) +[2023-09-26 01:43:18,687][129496] Updated weights for policy 1, policy_version 37120 (0.0017) +[2023-09-26 01:43:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19013632. Throughput: 0: 812.9, 1: 816.5. Samples: 4749261. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:43:20,083][128642] Avg episode reward: [(0, '100.850'), (1, '95.220')] +[2023-09-26 01:43:25,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19046400. Throughput: 0: 818.8, 1: 813.6. Samples: 4759284. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:43:25,082][128642] Avg episode reward: [(0, '100.010'), (1, '91.600')] +[2023-09-26 01:43:30,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19079168. Throughput: 0: 816.1, 1: 817.4. Samples: 4769187. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:43:30,083][128642] Avg episode reward: [(0, '98.830'), (1, '97.150')] +[2023-09-26 01:43:31,112][129495] Updated weights for policy 0, policy_version 37280 (0.0017) +[2023-09-26 01:43:31,112][129496] Updated weights for policy 1, policy_version 37280 (0.0016) +[2023-09-26 01:43:35,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19111936. Throughput: 0: 817.3, 1: 819.2. Samples: 4773897. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:43:35,083][128642] Avg episode reward: [(0, '95.660'), (1, '94.970')] +[2023-09-26 01:43:40,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19144704. Throughput: 0: 816.3, 1: 812.2. Samples: 4783640. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:43:40,083][128642] Avg episode reward: [(0, '96.370'), (1, '97.420')] +[2023-09-26 01:43:43,680][129495] Updated weights for policy 0, policy_version 37440 (0.0019) +[2023-09-26 01:43:43,680][129496] Updated weights for policy 1, policy_version 37440 (0.0018) +[2023-09-26 01:43:45,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19177472. Throughput: 0: 816.2, 1: 815.6. Samples: 4793537. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:43:45,083][128642] Avg episode reward: [(0, '98.050'), (1, '93.470')] +[2023-09-26 01:43:50,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19210240. Throughput: 0: 818.0, 1: 819.2. Samples: 4798477. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:43:50,083][128642] Avg episode reward: [(0, '98.270'), (1, '94.370')] +[2023-09-26 01:43:55,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19243008. Throughput: 0: 821.7, 1: 821.6. Samples: 4808455. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:43:55,083][128642] Avg episode reward: [(0, '97.120'), (1, '93.620')] +[2023-09-26 01:43:56,144][129496] Updated weights for policy 1, policy_version 37600 (0.0016) +[2023-09-26 01:43:56,144][129495] Updated weights for policy 0, policy_version 37600 (0.0016) +[2023-09-26 01:44:00,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19275776. Throughput: 0: 821.6, 1: 821.4. Samples: 4818217. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 01:44:00,083][128642] Avg episode reward: [(0, '95.950'), (1, '98.630')] +[2023-09-26 01:44:05,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19308544. Throughput: 0: 819.3, 1: 820.3. Samples: 4823045. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:44:05,083][128642] Avg episode reward: [(0, '97.720'), (1, '97.290')] +[2023-09-26 01:44:08,625][129496] Updated weights for policy 1, policy_version 37760 (0.0016) +[2023-09-26 01:44:08,625][129495] Updated weights for policy 0, policy_version 37760 (0.0015) +[2023-09-26 01:44:10,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19341312. Throughput: 0: 819.0, 1: 819.9. Samples: 4833031. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:44:10,083][128642] Avg episode reward: [(0, '97.660'), (1, '98.000')] +[2023-09-26 01:44:15,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19374080. Throughput: 0: 819.7, 1: 819.4. Samples: 4842949. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:44:15,083][128642] Avg episode reward: [(0, '101.070'), (1, '94.960')] +[2023-09-26 01:44:20,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19406848. Throughput: 0: 821.8, 1: 819.2. Samples: 4847740. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:44:20,082][128642] Avg episode reward: [(0, '101.910'), (1, '95.670')] +[2023-09-26 01:44:20,948][129495] Updated weights for policy 0, policy_version 37920 (0.0016) +[2023-09-26 01:44:20,949][129496] Updated weights for policy 1, policy_version 37920 (0.0016) +[2023-09-26 01:44:25,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19439616. Throughput: 0: 822.2, 1: 826.3. Samples: 4857820. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:44:25,083][128642] Avg episode reward: [(0, '99.250'), (1, '93.610')] +[2023-09-26 01:44:30,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19472384. Throughput: 0: 824.3, 1: 823.8. Samples: 4867699. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:44:30,083][128642] Avg episode reward: [(0, '104.200'), (1, '95.470')] +[2023-09-26 01:44:33,461][129495] Updated weights for policy 0, policy_version 38080 (0.0015) +[2023-09-26 01:44:33,461][129496] Updated weights for policy 1, policy_version 38080 (0.0016) +[2023-09-26 01:44:35,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19505152. Throughput: 0: 822.8, 1: 819.3. Samples: 4872370. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:44:35,083][128642] Avg episode reward: [(0, '105.440'), (1, '94.510')] +[2023-09-26 01:44:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19537920. Throughput: 0: 820.1, 1: 823.9. Samples: 4882433. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 01:44:40,083][128642] Avg episode reward: [(0, '106.050'), (1, '94.020')] +[2023-09-26 01:44:40,094][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000038160_9768960.pth... +[2023-09-26 01:44:40,094][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000038160_9768960.pth... 
+[2023-09-26 01:44:40,129][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000035088_8982528.pth
+[2023-09-26 01:44:40,130][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000035088_8982528.pth
+[2023-09-26 01:44:45,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19570688. Throughput: 0: 821.8, 1: 821.8. Samples: 4892179. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:44:45,082][128642] Avg episode reward: [(0, '103.750'), (1, '96.020')]
+[2023-09-26 01:44:46,006][129495] Updated weights for policy 0, policy_version 38240 (0.0016)
+[2023-09-26 01:44:46,006][129496] Updated weights for policy 1, policy_version 38240 (0.0016)
+[2023-09-26 01:44:50,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19603456. Throughput: 0: 822.5, 1: 819.3. Samples: 4896923. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:44:50,083][128642] Avg episode reward: [(0, '102.720'), (1, '93.400')]
+[2023-09-26 01:44:55,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19636224. Throughput: 0: 819.8, 1: 823.1. Samples: 4906965. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:44:55,083][128642] Avg episode reward: [(0, '104.120'), (1, '94.740')]
+[2023-09-26 01:44:58,456][129496] Updated weights for policy 1, policy_version 38400 (0.0016)
+[2023-09-26 01:44:58,456][129495] Updated weights for policy 0, policy_version 38400 (0.0017)
+[2023-09-26 01:45:00,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19668992. Throughput: 0: 821.4, 1: 821.5. Samples: 4916877. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:45:00,083][128642] Avg episode reward: [(0, '104.070'), (1, '93.000')]
+[2023-09-26 01:45:05,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19701760. Throughput: 0: 820.3, 1: 819.3. Samples: 4921521. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:45:05,083][128642] Avg episode reward: [(0, '105.570'), (1, '96.550')]
+[2023-09-26 01:45:10,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19734528. Throughput: 0: 819.7, 1: 820.0. Samples: 4931605. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:45:10,082][128642] Avg episode reward: [(0, '108.950'), (1, '96.730')]
+[2023-09-26 01:45:10,856][129495] Updated weights for policy 0, policy_version 38560 (0.0018)
+[2023-09-26 01:45:10,856][129496] Updated weights for policy 1, policy_version 38560 (0.0017)
+[2023-09-26 01:45:15,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19767296. Throughput: 0: 819.8, 1: 819.9. Samples: 4941483. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 01:45:15,083][128642] Avg episode reward: [(0, '104.620'), (1, '97.300')]
+[2023-09-26 01:45:20,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19800064. Throughput: 0: 821.1, 1: 820.5. Samples: 4946244. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:45:20,083][128642] Avg episode reward: [(0, '107.370'), (1, '95.320')]
+[2023-09-26 01:45:23,363][129496] Updated weights for policy 1, policy_version 38720 (0.0015)
+[2023-09-26 01:45:23,363][129495] Updated weights for policy 0, policy_version 38720 (0.0017)
+[2023-09-26 01:45:25,082][128642] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19832832. Throughput: 0: 819.5, 1: 819.2. Samples: 4956174. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:45:25,083][128642] Avg episode reward: [(0, '105.150'), (1, '94.760')]
+[2023-09-26 01:45:30,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19865600. Throughput: 0: 819.5, 1: 820.2. Samples: 4965964. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:45:30,083][128642] Avg episode reward: [(0, '105.120'), (1, '93.600')]
+[2023-09-26 01:45:35,082][128642] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19898368. Throughput: 0: 819.6, 1: 819.2. Samples: 4970672. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:45:35,082][128642] Avg episode reward: [(0, '103.590'), (1, '93.710')]
+[2023-09-26 01:45:36,104][129495] Updated weights for policy 0, policy_version 38880 (0.0016)
+[2023-09-26 01:45:36,105][129496] Updated weights for policy 1, policy_version 38880 (0.0017)
+[2023-09-26 01:45:40,082][128642] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19931136. Throughput: 0: 819.2, 1: 816.6. Samples: 4980575. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 01:45:40,083][128642] Avg episode reward: [(0, '106.190'), (1, '91.490')]
+[2023-09-26 01:45:45,082][128642] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19963904. Throughput: 0: 816.5, 1: 816.2. Samples: 4990350. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:45:45,083][128642] Avg episode reward: [(0, '110.230'), (1, '89.090')]
+[2023-09-26 01:45:48,788][129495] Updated weights for policy 0, policy_version 39040 (0.0016)
+[2023-09-26 01:45:48,788][129496] Updated weights for policy 1, policy_version 39040 (0.0018)
+[2023-09-26 01:45:50,082][128642] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19996672. Throughput: 0: 815.4, 1: 819.1. Samples: 4995072. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 01:45:50,083][128642] Avg episode reward: [(0, '107.900'), (1, '89.340')]
+[2023-09-26 01:45:52,498][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-26 01:45:52,499][129529] Stopping RolloutWorker_w0...
+[2023-09-26 01:45:52,499][129535] Stopping RolloutWorker_w6...
+[2023-09-26 01:45:52,499][129528] Stopping RolloutWorker_w1...
+[2023-09-26 01:45:52,499][129536] Stopping RolloutWorker_w7...
+[2023-09-26 01:45:52,499][129534] Stopping RolloutWorker_w4...
+[2023-09-26 01:45:52,499][129531] Stopping RolloutWorker_w2...
+[2023-09-26 01:45:52,499][129532] Stopping RolloutWorker_w3...
+[2023-09-26 01:45:52,499][129533] Stopping RolloutWorker_w5...
+[2023-09-26 01:45:52,499][129529] Loop rollout_proc0_evt_loop terminating...
+[2023-09-26 01:45:52,500][129535] Loop rollout_proc6_evt_loop terminating...
+[2023-09-26 01:45:52,500][129536] Loop rollout_proc7_evt_loop terminating...
+[2023-09-26 01:45:52,500][129534] Loop rollout_proc4_evt_loop terminating...
+[2023-09-26 01:45:52,500][129531] Loop rollout_proc2_evt_loop terminating...
+[2023-09-26 01:45:52,500][129532] Loop rollout_proc3_evt_loop terminating...
+[2023-09-26 01:45:52,499][128642] Component RolloutWorker_w1 stopped!
+[2023-09-26 01:45:52,500][129528] Loop rollout_proc1_evt_loop terminating...
+[2023-09-26 01:45:52,500][129533] Loop rollout_proc5_evt_loop terminating...
+[2023-09-26 01:45:52,500][128642] Component RolloutWorker_w0 stopped!
+[2023-09-26 01:45:52,501][128642] Component RolloutWorker_w6 stopped!
+[2023-09-26 01:45:52,501][129304] Stopping Batcher_0...
+[2023-09-26 01:45:52,502][128642] Component RolloutWorker_w4 stopped!
+[2023-09-26 01:45:52,502][129304] Loop batcher_evt_loop terminating...
+[2023-09-26 01:45:52,503][128642] Component RolloutWorker_w5 stopped!
+[2023-09-26 01:45:52,503][128642] Component RolloutWorker_w7 stopped!
+[2023-09-26 01:45:52,504][128642] Component RolloutWorker_w2 stopped!
+[2023-09-26 01:45:52,504][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-26 01:45:52,504][128642] Component RolloutWorker_w3 stopped!
+[2023-09-26 01:45:52,505][128642] Component Batcher_0 stopped!
+[2023-09-26 01:45:52,510][128642] Component Batcher_1 stopped!
+[2023-09-26 01:45:52,519][129382] Stopping Batcher_1...
+[2023-09-26 01:45:52,529][129382] Loop batcher_evt_loop terminating...
+[2023-09-26 01:45:52,530][129382] Removing ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000036624_9375744.pth
+[2023-09-26 01:45:52,533][129304] Removing ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000036624_9375744.pth
+[2023-09-26 01:45:52,534][129382] Saving ./train_atari/atari_crazyclimber/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-26 01:45:52,537][129304] Saving ./train_atari/atari_crazyclimber/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-26 01:45:52,548][129495] Weights refcount: 2 0
+[2023-09-26 01:45:52,549][129495] Stopping InferenceWorker_p0-w0...
+[2023-09-26 01:45:52,550][129495] Loop inference_proc0-0_evt_loop terminating...
+[2023-09-26 01:45:52,550][128642] Component InferenceWorker_p0-w0 stopped!
+[2023-09-26 01:45:52,565][129496] Weights refcount: 2 0
+[2023-09-26 01:45:52,567][129496] Stopping InferenceWorker_p1-w0...
+[2023-09-26 01:45:52,568][129496] Loop inference_proc1-0_evt_loop terminating...
+[2023-09-26 01:45:52,568][128642] Component InferenceWorker_p1-w0 stopped!
+[2023-09-26 01:45:52,569][129382] Stopping LearnerWorker_p1...
+[2023-09-26 01:45:52,569][129382] Loop learner_proc1_evt_loop terminating...
+[2023-09-26 01:45:52,571][128642] Component LearnerWorker_p1 stopped!
+[2023-09-26 01:45:52,572][129304] Stopping LearnerWorker_p0...
+[2023-09-26 01:45:52,572][129304] Loop learner_proc0_evt_loop terminating...
+[2023-09-26 01:45:52,572][128642] Component LearnerWorker_p0 stopped!
+[2023-09-26 01:45:52,572][128642] Waiting for process learner_proc0 to stop...
+[2023-09-26 01:45:53,265][128642] Waiting for process learner_proc1 to stop...
+[2023-09-26 01:45:53,292][128642] Waiting for process inference_proc0-0 to join...
+[2023-09-26 01:45:53,293][128642] Waiting for process inference_proc1-0 to join...
+[2023-09-26 01:45:53,294][128642] Waiting for process rollout_proc0 to join...
+[2023-09-26 01:45:53,295][128642] Waiting for process rollout_proc1 to join...
+[2023-09-26 01:45:53,295][128642] Waiting for process rollout_proc2 to join...
+[2023-09-26 01:45:53,296][128642] Waiting for process rollout_proc3 to join...
+[2023-09-26 01:45:53,296][128642] Waiting for process rollout_proc4 to join...
+[2023-09-26 01:45:53,297][128642] Waiting for process rollout_proc5 to join...
+[2023-09-26 01:45:53,299][128642] Waiting for process rollout_proc6 to join...
+[2023-09-26 01:45:53,299][128642] Waiting for process rollout_proc7 to join...
+[2023-09-26 01:45:53,300][128642] Batcher 0 profile tree view:
+batching: 21.2117, releasing_batches: 1.6908
+[2023-09-26 01:45:53,300][128642] Batcher 1 profile tree view:
+batching: 21.0131, releasing_batches: 1.7489
+[2023-09-26 01:45:53,301][128642] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0052
+  wait_policy_total: 617.3300
+update_model: 36.4804
+  weight_update: 0.0017
+one_step: 0.0017
+  handle_policy_step: 2208.8564
+    deserialize: 66.2656, stack: 16.3274, obs_to_device_normalize: 536.9289, forward: 1055.8052, send_messages: 92.5484
+    prepare_outputs: 296.5609
+      to_cpu: 149.4654
+[2023-09-26 01:45:53,301][128642] InferenceWorker_p1-w0 profile tree view:
+wait_policy: 0.0052
+  wait_policy_total: 616.1143
+update_model: 36.4773
+  weight_update: 0.0017
+one_step: 0.0012
+  handle_policy_step: 2209.6846
+    deserialize: 66.1619, stack: 15.8275, obs_to_device_normalize: 535.7507, forward: 1060.1229, send_messages: 92.6662
+    prepare_outputs: 296.4418
+      to_cpu: 149.3885
+[2023-09-26 01:45:53,302][128642] Learner 0 profile tree view:
+misc: 0.0154, prepare_batch: 31.9654
+train: 457.0098
+  epoch_init: 0.1009, minibatch_init: 3.0971, losses_postprocess: 62.9766, kl_divergence: 5.3876, after_optimizer: 21.4822
+  calculate_losses: 44.6143
+    losses_init: 0.1011, forward_head: 14.3005, bptt_initial: 0.4325, bptt: 0.4748, tail: 10.1781, advantages_returns: 3.0137, losses: 12.5412
+  update: 315.3368
+    clip: 163.3682
+[2023-09-26 01:45:53,302][128642] Learner 1 profile tree view:
+misc: 0.0152, prepare_batch: 32.2139
+train: 455.8639
+  epoch_init: 0.1018, minibatch_init: 3.2193, losses_postprocess: 62.3631, kl_divergence: 5.4013, after_optimizer: 21.5237
+  calculate_losses: 44.9576
+    losses_init: 0.1135, forward_head: 14.3487, bptt_initial: 0.4484, bptt: 0.4473, tail: 10.2875, advantages_returns: 3.0601, losses: 12.6452
+  update: 314.1912
+    clip: 162.7625
+[2023-09-26 01:45:53,302][128642] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3920, enqueue_policy_requests: 43.2478, env_step: 967.8267, overhead: 29.8039, complete_rollouts: 1.0896
+save_policy_outputs: 54.4072
+  split_output_tensors: 19.0666
+[2023-09-26 01:45:53,303][128642] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3945, enqueue_policy_requests: 43.0975, env_step: 955.6062, overhead: 29.0972, complete_rollouts: 1.0912
+save_policy_outputs: 53.7813
+  split_output_tensors: 18.4061
+[2023-09-26 01:45:53,303][128642] Loop Runner_EvtLoop terminating...
+[2023-09-26 01:45:53,304][128642] Runner profile tree view:
+main_loop: 3069.0947
+[2023-09-26 01:45:53,304][128642] Collected {0: 10006528, 1: 10006528}, FPS: 6520.8