diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,2300 @@
+[2023-09-26 13:22:40,797][08516] Saving configuration to ./train_atari/atari_kangaroo/config.json...
+[2023-09-26 13:22:41,114][08516] Rollout worker 0 uses device cpu
+[2023-09-26 13:22:41,115][08516] Rollout worker 1 uses device cpu
+[2023-09-26 13:22:41,115][08516] Rollout worker 2 uses device cpu
+[2023-09-26 13:22:41,116][08516] Rollout worker 3 uses device cpu
+[2023-09-26 13:22:41,116][08516] Rollout worker 4 uses device cpu
+[2023-09-26 13:22:41,117][08516] Rollout worker 5 uses device cpu
+[2023-09-26 13:22:41,117][08516] Rollout worker 6 uses device cpu
+[2023-09-26 13:22:41,118][08516] Rollout worker 7 uses device cpu
+[2023-09-26 13:22:41,118][08516] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1
+[2023-09-26 13:22:41,164][08516] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-26 13:22:41,165][08516] InferenceWorker_p0-w0: min num requests: 1
+[2023-09-26 13:22:41,168][08516] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-26 13:22:41,168][08516] InferenceWorker_p1-w0: min num requests: 1
+[2023-09-26 13:22:41,191][08516] Starting all processes...
+[2023-09-26 13:22:41,191][08516] Starting process learner_proc0 +[2023-09-26 13:22:42,760][08516] Starting process learner_proc1 +[2023-09-26 13:22:42,763][09359] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 13:22:42,763][09359] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2023-09-26 13:22:42,781][09359] Num visible devices: 1 +[2023-09-26 13:22:42,803][09359] Starting seed is not provided +[2023-09-26 13:22:42,803][09359] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 13:22:42,803][09359] Initializing actor-critic model on device cuda:0 +[2023-09-26 13:22:42,803][09359] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 13:22:42,804][09359] RunningMeanStd input shape: (1,) +[2023-09-26 13:22:42,815][09359] ConvEncoder: input_channels=4 +[2023-09-26 13:22:42,996][09359] Conv encoder output size: 512 +[2023-09-26 13:22:42,998][09359] Created Actor Critic model with architecture: +[2023-09-26 13:22:42,998][09359] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): MultiInputEncoder( + (encoders): ModuleDict( + (obs): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ReLU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ReLU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ReLU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ReLU) + ) + ) + ) + ) + 
) + (core): ModelCoreIdentity() + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=18, bias=True) + ) +) +[2023-09-26 13:22:43,576][09359] Using optimizer +[2023-09-26 13:22:43,576][09359] No checkpoints found +[2023-09-26 13:22:43,577][09359] Did not load from checkpoint, starting from scratch! +[2023-09-26 13:22:43,577][09359] Initialized policy 0 weights for model version 0 +[2023-09-26 13:22:43,578][09359] LearnerWorker_p0 finished initialization! +[2023-09-26 13:22:43,578][09359] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 13:22:44,421][08516] Starting all processes... +[2023-09-26 13:22:44,425][09597] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-26 13:22:44,425][09597] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 +[2023-09-26 13:22:44,429][08516] Starting process inference_proc0-0 +[2023-09-26 13:22:44,429][08516] Starting process inference_proc1-0 +[2023-09-26 13:22:44,429][08516] Starting process rollout_proc0 +[2023-09-26 13:22:44,443][09597] Num visible devices: 1 +[2023-09-26 13:22:44,430][08516] Starting process rollout_proc1 +[2023-09-26 13:22:44,430][08516] Starting process rollout_proc2 +[2023-09-26 13:22:44,430][08516] Starting process rollout_proc3 +[2023-09-26 13:22:44,465][09597] Starting seed is not provided +[2023-09-26 13:22:44,465][09597] Using GPUs [0] for process 1 (actually maps to GPUs [1]) +[2023-09-26 13:22:44,465][09597] Initializing actor-critic model on device cuda:0 +[2023-09-26 13:22:44,466][09597] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 13:22:44,466][09597] RunningMeanStd input shape: (1,) +[2023-09-26 13:22:44,431][08516] Starting process rollout_proc4 +[2023-09-26 13:22:44,431][08516] Starting process rollout_proc5 +[2023-09-26 
13:22:44,434][08516] Starting process rollout_proc6 +[2023-09-26 13:22:44,438][08516] Starting process rollout_proc7 +[2023-09-26 13:22:44,479][09597] ConvEncoder: input_channels=4 +[2023-09-26 13:22:44,772][09597] Conv encoder output size: 512 +[2023-09-26 13:22:44,774][09597] Created Actor Critic model with architecture: +[2023-09-26 13:22:44,775][09597] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): MultiInputEncoder( + (encoders): ModuleDict( + (obs): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ReLU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ReLU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ReLU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ReLU) + ) + ) + ) + ) + ) + (core): ModelCoreIdentity() + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=18, bias=True) + ) +) +[2023-09-26 13:22:45,380][09597] Using optimizer +[2023-09-26 13:22:45,381][09597] No checkpoints found +[2023-09-26 13:22:45,381][09597] Did not load from checkpoint, starting from scratch! +[2023-09-26 13:22:45,381][09597] Initialized policy 1 weights for model version 0 +[2023-09-26 13:22:45,382][09597] LearnerWorker_p1 finished initialization! 
+[2023-09-26 13:22:45,383][09597] Using GPUs [0] for process 1 (actually maps to GPUs [1]) +[2023-09-26 13:22:46,346][09776] Worker 7 uses CPU cores [28, 29, 30, 31] +[2023-09-26 13:22:46,346][09734] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 13:22:46,347][09734] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2023-09-26 13:22:46,358][09775] Worker 6 uses CPU cores [24, 25, 26, 27] +[2023-09-26 13:22:46,360][09772] Worker 3 uses CPU cores [12, 13, 14, 15] +[2023-09-26 13:22:46,365][09734] Num visible devices: 1 +[2023-09-26 13:22:46,376][09773] Worker 4 uses CPU cores [16, 17, 18, 19] +[2023-09-26 13:22:46,378][09771] Worker 2 uses CPU cores [8, 9, 10, 11] +[2023-09-26 13:22:46,416][09769] Worker 1 uses CPU cores [4, 5, 6, 7] +[2023-09-26 13:22:46,428][09735] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-26 13:22:46,428][09735] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 +[2023-09-26 13:22:46,449][09735] Num visible devices: 1 +[2023-09-26 13:22:46,549][09774] Worker 5 uses CPU cores [20, 21, 22, 23] +[2023-09-26 13:22:46,572][09768] Worker 0 uses CPU cores [0, 1, 2, 3] +[2023-09-26 13:22:46,953][09734] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 13:22:46,954][09734] RunningMeanStd input shape: (1,) +[2023-09-26 13:22:46,964][09734] ConvEncoder: input_channels=4 +[2023-09-26 13:22:47,024][08516] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-09-26 13:22:47,037][09735] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 13:22:47,037][09735] RunningMeanStd input shape: (1,) +[2023-09-26 13:22:47,048][09735] ConvEncoder: input_channels=4 +[2023-09-26 13:22:47,064][09734] Conv encoder output size: 512 +[2023-09-26 13:22:47,070][08516] Inference worker 0-0 is ready! 
+[2023-09-26 13:22:47,144][09735] Conv encoder output size: 512 +[2023-09-26 13:22:47,149][08516] Inference worker 1-0 is ready! +[2023-09-26 13:22:47,150][08516] All inference workers are ready! Signal rollout workers to start! +[2023-09-26 13:22:47,585][09773] Decorrelating experience for 0 frames... +[2023-09-26 13:22:47,593][09769] Decorrelating experience for 0 frames... +[2023-09-26 13:22:47,596][09776] Decorrelating experience for 0 frames... +[2023-09-26 13:22:47,596][09768] Decorrelating experience for 0 frames... +[2023-09-26 13:22:47,597][09774] Decorrelating experience for 0 frames... +[2023-09-26 13:22:47,598][09772] Decorrelating experience for 0 frames... +[2023-09-26 13:22:47,730][09775] Decorrelating experience for 0 frames... +[2023-09-26 13:22:47,736][09771] Decorrelating experience for 0 frames... +[2023-09-26 13:22:52,024][08516] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 8192. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:22:52,025][08516] Avg episode reward: [(0, '0.000'), (1, '0.000')] +[2023-09-26 13:22:57,024][08516] Fps is (10 sec: 3276.9, 60 sec: 3276.9, 300 sec: 3276.9). Total num frames: 32768. Throughput: 0: 408.5, 1: 404.7. Samples: 8132. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:22:57,025][08516] Avg episode reward: [(0, '0.097'), (1, '0.040')] +[2023-09-26 13:23:01,152][08516] Heartbeat connected on Batcher_0 +[2023-09-26 13:23:01,155][08516] Heartbeat connected on LearnerWorker_p0 +[2023-09-26 13:23:01,158][08516] Heartbeat connected on Batcher_1 +[2023-09-26 13:23:01,160][08516] Heartbeat connected on LearnerWorker_p1 +[2023-09-26 13:23:01,166][08516] Heartbeat connected on InferenceWorker_p0-w0 +[2023-09-26 13:23:01,170][08516] Heartbeat connected on InferenceWorker_p1-w0 +[2023-09-26 13:23:01,173][08516] Heartbeat connected on RolloutWorker_w0 +[2023-09-26 13:23:01,174][08516] Heartbeat connected on RolloutWorker_w1 +[2023-09-26 13:23:01,179][08516] Heartbeat connected on RolloutWorker_w2 +[2023-09-26 13:23:01,180][08516] Heartbeat connected on RolloutWorker_w3 +[2023-09-26 13:23:01,182][08516] Heartbeat connected on RolloutWorker_w4 +[2023-09-26 13:23:01,187][08516] Heartbeat connected on RolloutWorker_w5 +[2023-09-26 13:23:01,187][08516] Heartbeat connected on RolloutWorker_w6 +[2023-09-26 13:23:01,192][08516] Heartbeat connected on RolloutWorker_w7 +[2023-09-26 13:23:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 65536. Throughput: 0: 409.9, 1: 412.4. Samples: 12334. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 13:23:02,025][08516] Avg episode reward: [(0, '0.125'), (1, '0.065')] +[2023-09-26 13:23:04,285][09734] Updated weights for policy 0, policy_version 160 (0.0016) +[2023-09-26 13:23:04,286][09735] Updated weights for policy 1, policy_version 160 (0.0016) +[2023-09-26 13:23:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 4915.3, 300 sec: 4915.3). Total num frames: 98304. Throughput: 0: 551.5, 1: 553.8. Samples: 22106. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:23:07,025][08516] Avg episode reward: [(0, '0.111'), (1, '0.086')] +[2023-09-26 13:23:12,025][08516] Fps is (10 sec: 6553.5, 60 sec: 5242.8, 300 sec: 5242.8). Total num frames: 131072. Throughput: 0: 631.0, 1: 633.8. Samples: 31620. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:23:12,026][08516] Avg episode reward: [(0, '0.110'), (1, '0.098')] +[2023-09-26 13:23:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5188.3, 300 sec: 5188.3). Total num frames: 155648. Throughput: 0: 609.6, 1: 612.4. Samples: 36659. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:23:17,025][08516] Avg episode reward: [(0, '0.110'), (1, '0.120')] +[2023-09-26 13:23:17,171][09734] Updated weights for policy 0, policy_version 320 (0.0018) +[2023-09-26 13:23:17,172][09735] Updated weights for policy 1, policy_version 320 (0.0018) +[2023-09-26 13:23:22,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5383.3, 300 sec: 5383.3). Total num frames: 188416. Throughput: 0: 643.7, 1: 643.8. Samples: 45060. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:23:22,026][08516] Avg episode reward: [(0, '0.100'), (1, '0.120')] +[2023-09-26 13:23:27,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5324.8, 300 sec: 5324.8). Total num frames: 212992. Throughput: 0: 673.8, 1: 675.2. Samples: 53961. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:23:27,025][08516] Avg episode reward: [(0, '0.100'), (1, '0.110')] +[2023-09-26 13:23:27,052][09597] Saving new best policy, reward=0.110! +[2023-09-26 13:23:27,068][09359] Saving new best policy, reward=0.100! +[2023-09-26 13:23:31,767][09735] Updated weights for policy 1, policy_version 480 (0.0017) +[2023-09-26 13:23:31,767][09734] Updated weights for policy 0, policy_version 480 (0.0017) +[2023-09-26 13:23:32,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5461.4, 300 sec: 5461.4). Total num frames: 245760. Throughput: 0: 645.6, 1: 647.3. 
Samples: 58183. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 13:23:32,025][08516] Avg episode reward: [(0, '0.090'), (1, '0.120')] +[2023-09-26 13:23:32,026][09597] Saving new best policy, reward=0.120! +[2023-09-26 13:23:37,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5406.7, 300 sec: 5406.7). Total num frames: 270336. Throughput: 0: 711.9, 1: 713.5. Samples: 66189. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 13:23:37,025][08516] Avg episode reward: [(0, '0.090'), (1, '0.090')] +[2023-09-26 13:23:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5511.0, 300 sec: 5511.0). Total num frames: 303104. Throughput: 0: 741.4, 1: 743.4. Samples: 74949. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:23:42,025][08516] Avg episode reward: [(0, '0.090'), (1, '0.100')] +[2023-09-26 13:23:45,899][09734] Updated weights for policy 0, policy_version 640 (0.0017) +[2023-09-26 13:23:45,899][09735] Updated weights for policy 1, policy_version 640 (0.0013) +[2023-09-26 13:23:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.4, 300 sec: 5461.4). Total num frames: 327680. Throughput: 0: 745.5, 1: 747.4. Samples: 79512. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:23:47,025][08516] Avg episode reward: [(0, '0.070'), (1, '0.090')] +[2023-09-26 13:23:52,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 5545.4). Total num frames: 360448. Throughput: 0: 733.4, 1: 732.4. Samples: 88068. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:23:52,025][08516] Avg episode reward: [(0, '0.060'), (1, '0.070')] +[2023-09-26 13:23:57,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6007.5, 300 sec: 5617.4). Total num frames: 393216. Throughput: 0: 736.2, 1: 735.8. Samples: 97858. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:23:57,025][08516] Avg episode reward: [(0, '0.050'), (1, '0.070')] +[2023-09-26 13:23:59,147][09734] Updated weights for policy 0, policy_version 800 (0.0018) +[2023-09-26 13:23:59,147][09735] Updated weights for policy 1, policy_version 800 (0.0018) +[2023-09-26 13:24:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6007.5, 300 sec: 5679.8). Total num frames: 425984. Throughput: 0: 731.4, 1: 729.6. Samples: 102405. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:24:02,025][08516] Avg episode reward: [(0, '0.080'), (1, '0.050')] +[2023-09-26 13:24:07,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5870.9, 300 sec: 5632.0). Total num frames: 450560. Throughput: 0: 740.1, 1: 742.9. Samples: 111792. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:24:07,025][08516] Avg episode reward: [(0, '0.080'), (1, '0.090')] +[2023-09-26 13:24:12,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5871.0, 300 sec: 5686.2). Total num frames: 483328. Throughput: 0: 745.3, 1: 745.7. Samples: 121056. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 13:24:12,025][08516] Avg episode reward: [(0, '0.100'), (1, '0.090')] +[2023-09-26 13:24:12,267][09735] Updated weights for policy 1, policy_version 960 (0.0016) +[2023-09-26 13:24:12,268][09734] Updated weights for policy 0, policy_version 960 (0.0017) +[2023-09-26 13:24:17,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6007.4, 300 sec: 5734.4). Total num frames: 516096. Throughput: 0: 752.7, 1: 752.8. Samples: 125930. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:24:17,026][08516] Avg episode reward: [(0, '0.130'), (1, '0.090')] +[2023-09-26 13:24:17,027][09359] Saving new best policy, reward=0.130! +[2023-09-26 13:24:22,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6007.5, 300 sec: 5777.5). Total num frames: 548864. Throughput: 0: 767.4, 1: 767.2. Samples: 135243. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:24:22,025][08516] Avg episode reward: [(0, '0.150'), (1, '0.090')] +[2023-09-26 13:24:22,034][09359] Saving new best policy, reward=0.150! +[2023-09-26 13:24:25,291][09735] Updated weights for policy 1, policy_version 1120 (0.0016) +[2023-09-26 13:24:25,291][09734] Updated weights for policy 0, policy_version 1120 (0.0017) +[2023-09-26 13:24:27,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 5816.3). Total num frames: 581632. Throughput: 0: 777.6, 1: 777.2. Samples: 144917. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 13:24:27,025][08516] Avg episode reward: [(0, '0.140'), (1, '0.060')] +[2023-09-26 13:24:32,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 5851.4). Total num frames: 614400. Throughput: 0: 779.0, 1: 776.4. Samples: 149505. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 13:24:32,025][08516] Avg episode reward: [(0, '0.150'), (1, '0.060')] +[2023-09-26 13:24:37,024][08516] Fps is (10 sec: 6143.9, 60 sec: 6212.2, 300 sec: 5846.1). Total num frames: 643072. Throughput: 0: 787.6, 1: 789.5. Samples: 159039. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:24:37,025][08516] Avg episode reward: [(0, '0.080'), (1, '0.080')] +[2023-09-26 13:24:37,038][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000001264_323584.pth... +[2023-09-26 13:24:37,038][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000001264_323584.pth... +[2023-09-26 13:24:38,308][09734] Updated weights for policy 0, policy_version 1280 (0.0018) +[2023-09-26 13:24:38,308][09735] Updated weights for policy 1, policy_version 1280 (0.0019) +[2023-09-26 13:24:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 5841.3). Total num frames: 671744. Throughput: 0: 783.4, 1: 784.1. Samples: 168397. 
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 13:24:42,025][08516] Avg episode reward: [(0, '0.060'), (1, '0.080')] +[2023-09-26 13:24:47,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 5870.9). Total num frames: 704512. Throughput: 0: 786.0, 1: 788.2. Samples: 173243. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:24:47,025][08516] Avg episode reward: [(0, '0.040'), (1, '0.080')] +[2023-09-26 13:24:51,322][09734] Updated weights for policy 0, policy_version 1440 (0.0015) +[2023-09-26 13:24:51,322][09735] Updated weights for policy 1, policy_version 1440 (0.0016) +[2023-09-26 13:24:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5898.2). Total num frames: 737280. Throughput: 0: 787.7, 1: 785.7. Samples: 182594. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:24:52,025][08516] Avg episode reward: [(0, '0.050'), (1, '0.060')] +[2023-09-26 13:24:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5923.4). Total num frames: 770048. Throughput: 0: 791.7, 1: 791.8. Samples: 192313. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:24:57,025][08516] Avg episode reward: [(0, '0.090'), (1, '0.060')] +[2023-09-26 13:25:02,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5946.8). Total num frames: 802816. Throughput: 0: 786.9, 1: 786.9. Samples: 196750. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:25:02,025][08516] Avg episode reward: [(0, '0.120'), (1, '0.060')] +[2023-09-26 13:25:04,152][09734] Updated weights for policy 0, policy_version 1600 (0.0017) +[2023-09-26 13:25:04,152][09735] Updated weights for policy 1, policy_version 1600 (0.0017) +[2023-09-26 13:25:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 5968.5). Total num frames: 835584. Throughput: 0: 792.4, 1: 792.8. Samples: 206577. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 13:25:07,025][08516] Avg episode reward: [(0, '0.130'), (1, '0.060')] +[2023-09-26 13:25:12,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 5932.1). Total num frames: 860160. Throughput: 0: 784.3, 1: 786.0. Samples: 215582. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:25:12,025][08516] Avg episode reward: [(0, '0.120'), (1, '0.110')] +[2023-09-26 13:25:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 5952.9). Total num frames: 892928. Throughput: 0: 786.7, 1: 788.1. Samples: 220371. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:25:17,025][08516] Avg episode reward: [(0, '0.130'), (1, '0.100')] +[2023-09-26 13:25:17,362][09735] Updated weights for policy 1, policy_version 1760 (0.0015) +[2023-09-26 13:25:17,363][09734] Updated weights for policy 0, policy_version 1760 (0.0016) +[2023-09-26 13:25:22,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5972.2). Total num frames: 925696. Throughput: 0: 782.6, 1: 781.4. Samples: 229422. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 13:25:22,025][08516] Avg episode reward: [(0, '0.090'), (1, '0.090')] +[2023-09-26 13:25:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5990.4). Total num frames: 958464. Throughput: 0: 783.8, 1: 783.1. Samples: 238908. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:25:27,025][08516] Avg episode reward: [(0, '0.080'), (1, '0.090')] +[2023-09-26 13:25:30,697][09734] Updated weights for policy 0, policy_version 1920 (0.0017) +[2023-09-26 13:25:30,698][09735] Updated weights for policy 1, policy_version 1920 (0.0016) +[2023-09-26 13:25:32,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6007.5). Total num frames: 991232. Throughput: 0: 782.8, 1: 781.7. Samples: 243646. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 13:25:32,025][08516] Avg episode reward: [(0, '0.080'), (1, '0.060')] +[2023-09-26 13:25:37,025][08516] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 5975.3). Total num frames: 1015808. Throughput: 0: 776.8, 1: 777.8. Samples: 252551. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:25:37,025][08516] Avg episode reward: [(0, '0.090'), (1, '0.060')] +[2023-09-26 13:25:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 5991.9). Total num frames: 1048576. Throughput: 0: 776.8, 1: 775.0. Samples: 262145. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:25:42,025][08516] Avg episode reward: [(0, '0.080'), (1, '0.050')] +[2023-09-26 13:25:43,778][09735] Updated weights for policy 1, policy_version 2080 (0.0016) +[2023-09-26 13:25:43,778][09734] Updated weights for policy 0, policy_version 2080 (0.0017) +[2023-09-26 13:25:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6007.5). Total num frames: 1081344. Throughput: 0: 779.3, 1: 779.3. Samples: 266885. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:25:47,025][08516] Avg episode reward: [(0, '0.070'), (1, '0.060')] +[2023-09-26 13:25:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6022.2). Total num frames: 1114112. Throughput: 0: 777.0, 1: 775.6. Samples: 276445. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:25:52,025][08516] Avg episode reward: [(0, '0.050'), (1, '0.060')] +[2023-09-26 13:25:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 5993.1). Total num frames: 1138688. Throughput: 0: 776.0, 1: 775.4. Samples: 285395. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:25:57,025][08516] Avg episode reward: [(0, '0.050'), (1, '0.060')] +[2023-09-26 13:25:57,125][09734] Updated weights for policy 0, policy_version 2240 (0.0020) +[2023-09-26 13:25:57,125][09735] Updated weights for policy 1, policy_version 2240 (0.0019) +[2023-09-26 13:26:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6007.5). Total num frames: 1171456. Throughput: 0: 774.7, 1: 776.5. Samples: 290174. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 13:26:02,025][08516] Avg episode reward: [(0, '0.030'), (1, '0.080')] +[2023-09-26 13:26:07,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6021.1). Total num frames: 1204224. Throughput: 0: 773.6, 1: 773.2. Samples: 299031. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:26:07,025][08516] Avg episode reward: [(0, '0.060'), (1, '0.090')] +[2023-09-26 13:26:10,259][09734] Updated weights for policy 0, policy_version 2400 (0.0017) +[2023-09-26 13:26:10,260][09735] Updated weights for policy 1, policy_version 2400 (0.0016) +[2023-09-26 13:26:12,024][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6034.1). Total num frames: 1236992. Throughput: 0: 775.4, 1: 777.8. Samples: 308801. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 13:26:12,025][08516] Avg episode reward: [(0, '0.070'), (1, '0.090')] +[2023-09-26 13:26:17,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6046.5). Total num frames: 1269760. Throughput: 0: 775.0, 1: 773.8. Samples: 313344. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:26:17,025][08516] Avg episode reward: [(0, '0.090'), (1, '0.080')] +[2023-09-26 13:26:22,025][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6020.2). Total num frames: 1294336. Throughput: 0: 778.8, 1: 780.6. Samples: 322723. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 13:26:22,026][08516] Avg episode reward: [(0, '0.120'), (1, '0.080')] +[2023-09-26 13:26:23,493][09735] Updated weights for policy 1, policy_version 2560 (0.0019) +[2023-09-26 13:26:23,493][09734] Updated weights for policy 0, policy_version 2560 (0.0018) +[2023-09-26 13:26:27,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6032.3). Total num frames: 1327104. Throughput: 0: 773.7, 1: 773.7. Samples: 331777. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 13:26:27,025][08516] Avg episode reward: [(0, '0.100'), (1, '0.090')] +[2023-09-26 13:26:32,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6043.9). Total num frames: 1359872. Throughput: 0: 774.9, 1: 775.0. Samples: 336633. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:26:32,025][08516] Avg episode reward: [(0, '0.120'), (1, '0.100')] +[2023-09-26 13:26:36,435][09734] Updated weights for policy 0, policy_version 2720 (0.0017) +[2023-09-26 13:26:36,435][09735] Updated weights for policy 1, policy_version 2720 (0.0016) +[2023-09-26 13:26:37,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6055.0). Total num frames: 1392640. Throughput: 0: 774.3, 1: 773.9. Samples: 346116. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:26:37,025][08516] Avg episode reward: [(0, '0.150'), (1, '0.110')] +[2023-09-26 13:26:37,035][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000002720_696320.pth... +[2023-09-26 13:26:37,035][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000002720_696320.pth... +[2023-09-26 13:26:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6065.6). Total num frames: 1425408. Throughput: 0: 777.3, 1: 777.1. Samples: 355340. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:26:42,025][08516] Avg episode reward: [(0, '0.120'), (1, '0.100')]
+[2023-09-26 13:26:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6075.7). Total num frames: 1458176. Throughput: 0: 780.6, 1: 779.1. Samples: 360362. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:26:47,025][08516] Avg episode reward: [(0, '0.150'), (1, '0.080')]
+[2023-09-26 13:26:49,552][09734] Updated weights for policy 0, policy_version 2880 (0.0015)
+[2023-09-26 13:26:49,552][09735] Updated weights for policy 1, policy_version 2880 (0.0016)
+[2023-09-26 13:26:52,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6052.0). Total num frames: 1482752. Throughput: 0: 784.7, 1: 786.6. Samples: 369738. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:26:52,025][08516] Avg episode reward: [(0, '0.100'), (1, '0.070')]
+[2023-09-26 13:26:57,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6062.1). Total num frames: 1515520. Throughput: 0: 781.9, 1: 780.2. Samples: 379096. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:26:57,025][08516] Avg episode reward: [(0, '0.120'), (1, '0.060')]
+[2023-09-26 13:27:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6071.7). Total num frames: 1548288. Throughput: 0: 782.9, 1: 784.2. Samples: 383864. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:27:02,025][08516] Avg episode reward: [(0, '0.180'), (1, '0.090')]
+[2023-09-26 13:27:02,027][09359] Saving new best policy, reward=0.180!
+[2023-09-26 13:27:02,485][09734] Updated weights for policy 0, policy_version 3040 (0.0018)
+[2023-09-26 13:27:02,485][09735] Updated weights for policy 1, policy_version 3040 (0.0016)
+[2023-09-26 13:27:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6081.0). Total num frames: 1581056. Throughput: 0: 785.1, 1: 781.6. Samples: 393224. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 13:27:07,025][08516] Avg episode reward: [(0, '0.230'), (1, '0.080')]
+[2023-09-26 13:27:07,033][09359] Saving new best policy, reward=0.230!
+[2023-09-26 13:27:12,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6089.9). Total num frames: 1613824. Throughput: 0: 790.4, 1: 792.2. Samples: 402997. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:27:12,025][08516] Avg episode reward: [(0, '0.260'), (1, '0.120')]
+[2023-09-26 13:27:12,025][09359] Saving new best policy, reward=0.260!
+[2023-09-26 13:27:15,398][09734] Updated weights for policy 0, policy_version 3200 (0.0018)
+[2023-09-26 13:27:15,398][09735] Updated weights for policy 1, policy_version 3200 (0.0018)
+[2023-09-26 13:27:17,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6098.5). Total num frames: 1646592. Throughput: 0: 789.0, 1: 788.1. Samples: 407603. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:27:17,025][08516] Avg episode reward: [(0, '0.280'), (1, '0.120')]
+[2023-09-26 13:27:17,027][09359] Saving new best policy, reward=0.280!
+[2023-09-26 13:27:22,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6106.8). Total num frames: 1679360. Throughput: 0: 791.3, 1: 792.6. Samples: 417391. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:27:22,025][08516] Avg episode reward: [(0, '0.240'), (1, '0.140')]
+[2023-09-26 13:27:22,036][09597] Saving new best policy, reward=0.140!
+[2023-09-26 13:27:27,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6114.7). Total num frames: 1712128. Throughput: 0: 792.6, 1: 792.5. Samples: 426669. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:27:27,025][08516] Avg episode reward: [(0, '0.200'), (1, '0.130')]
+[2023-09-26 13:27:28,277][09735] Updated weights for policy 1, policy_version 3360 (0.0016)
+[2023-09-26 13:27:28,278][09734] Updated weights for policy 0, policy_version 3360 (0.0018)
+[2023-09-26 13:27:32,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6093.7). Total num frames: 1736704. Throughput: 0: 792.4, 1: 792.5. Samples: 431681. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:27:32,025][08516] Avg episode reward: [(0, '0.200'), (1, '0.130')]
+[2023-09-26 13:27:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6101.6). Total num frames: 1769472. Throughput: 0: 788.7, 1: 788.3. Samples: 440705. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 13:27:37,025][08516] Avg episode reward: [(0, '0.230'), (1, '0.110')]
+[2023-09-26 13:27:41,479][09734] Updated weights for policy 0, policy_version 3520 (0.0017)
+[2023-09-26 13:27:41,479][09735] Updated weights for policy 1, policy_version 3520 (0.0018)
+[2023-09-26 13:27:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6109.3). Total num frames: 1802240. Throughput: 0: 793.6, 1: 791.0. Samples: 450404. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 13:27:42,025][08516] Avg episode reward: [(0, '0.250'), (1, '0.100')]
+[2023-09-26 13:27:47,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 1835008. Throughput: 0: 787.3, 1: 786.7. Samples: 454697. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:27:47,025][08516] Avg episode reward: [(0, '0.250'), (1, '0.100')]
+[2023-09-26 13:27:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6220.4). Total num frames: 1867776. Throughput: 0: 791.3, 1: 793.1. Samples: 464522. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 13:27:52,025][08516] Avg episode reward: [(0, '0.280'), (1, '0.090')]
+[2023-09-26 13:27:54,575][09734] Updated weights for policy 0, policy_version 3680 (0.0014)
+[2023-09-26 13:27:54,576][09735] Updated weights for policy 1, policy_version 3680 (0.0018)
+[2023-09-26 13:27:57,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 1892352. Throughput: 0: 781.9, 1: 781.5. Samples: 473352. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 13:27:57,025][08516] Avg episode reward: [(0, '0.280'), (1, '0.080')]
+[2023-09-26 13:28:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 1925120. Throughput: 0: 785.4, 1: 786.0. Samples: 478316. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:28:02,025][08516] Avg episode reward: [(0, '0.260'), (1, '0.090')]
+[2023-09-26 13:28:07,024][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 1957888. Throughput: 0: 778.8, 1: 777.6. Samples: 487428. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:28:07,025][08516] Avg episode reward: [(0, '0.280'), (1, '0.080')]
+[2023-09-26 13:28:07,714][09735] Updated weights for policy 1, policy_version 3840 (0.0018)
+[2023-09-26 13:28:07,714][09734] Updated weights for policy 0, policy_version 3840 (0.0018)
+[2023-09-26 13:28:12,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 1990656. Throughput: 0: 785.0, 1: 784.9. Samples: 497313. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 13:28:12,025][08516] Avg episode reward: [(0, '0.300'), (1, '0.100')]
+[2023-09-26 13:28:12,025][09359] Saving new best policy, reward=0.300!
+[2023-09-26 13:28:17,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2023424. Throughput: 0: 779.7, 1: 779.5. Samples: 501848. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 13:28:17,025][08516] Avg episode reward: [(0, '0.370'), (1, '0.080')]
+[2023-09-26 13:28:17,027][09359] Saving new best policy, reward=0.370!
+[2023-09-26 13:28:20,528][09734] Updated weights for policy 0, policy_version 4000 (0.0016)
+[2023-09-26 13:28:20,528][09735] Updated weights for policy 1, policy_version 4000 (0.0018)
+[2023-09-26 13:28:22,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2056192. Throughput: 0: 788.3, 1: 789.2. Samples: 511692. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 13:28:22,025][08516] Avg episode reward: [(0, '0.390'), (1, '0.090')]
+[2023-09-26 13:28:22,037][09359] Saving new best policy, reward=0.390!
+[2023-09-26 13:28:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2088960. Throughput: 0: 784.1, 1: 786.7. Samples: 521091. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:28:27,025][08516] Avg episode reward: [(0, '0.410'), (1, '0.090')]
+[2023-09-26 13:28:27,027][09359] Saving new best policy, reward=0.410!
+[2023-09-26 13:28:32,024][08516] Fps is (10 sec: 6144.1, 60 sec: 6348.8, 300 sec: 6262.0). Total num frames: 2117632. Throughput: 0: 793.5, 1: 794.2. Samples: 526145. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 13:28:32,025][08516] Avg episode reward: [(0, '0.430'), (1, '0.100')]
+[2023-09-26 13:28:32,027][09359] Saving new best policy, reward=0.430!
+[2023-09-26 13:28:33,327][09735] Updated weights for policy 1, policy_version 4160 (0.0017)
+[2023-09-26 13:28:33,327][09734] Updated weights for policy 0, policy_version 4160 (0.0017)
+[2023-09-26 13:28:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2146304. Throughput: 0: 785.6, 1: 785.2. Samples: 535206. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:28:37,025][08516] Avg episode reward: [(0, '0.410'), (1, '0.070')]
+[2023-09-26 13:28:37,035][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000004192_1073152.pth...
+[2023-09-26 13:28:37,035][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000004192_1073152.pth...
+[2023-09-26 13:28:37,071][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000001264_323584.pth
+[2023-09-26 13:28:37,071][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000001264_323584.pth
+[2023-09-26 13:28:42,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2179072. Throughput: 0: 792.2, 1: 791.3. Samples: 544608. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:28:42,025][08516] Avg episode reward: [(0, '0.420'), (1, '0.050')]
+[2023-09-26 13:28:46,701][09735] Updated weights for policy 1, policy_version 4320 (0.0016)
+[2023-09-26 13:28:46,701][09734] Updated weights for policy 0, policy_version 4320 (0.0015)
+[2023-09-26 13:28:47,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2211840. Throughput: 0: 784.6, 1: 783.2. Samples: 548869. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 13:28:47,025][08516] Avg episode reward: [(0, '0.500'), (1, '0.030')]
+[2023-09-26 13:28:47,027][09359] Saving new best policy, reward=0.500!
+[2023-09-26 13:28:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2244608. Throughput: 0: 788.7, 1: 791.6. Samples: 558541. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:28:52,025][08516] Avg episode reward: [(0, '0.520'), (1, '0.020')]
+[2023-09-26 13:28:52,035][09359] Saving new best policy, reward=0.520!
+[2023-09-26 13:28:57,024][08516] Fps is (10 sec: 6144.1, 60 sec: 6348.8, 300 sec: 6262.0). Total num frames: 2273280. Throughput: 0: 784.6, 1: 785.3. Samples: 567961. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:28:57,025][08516] Avg episode reward: [(0, '0.610'), (1, '0.040')]
+[2023-09-26 13:28:57,167][09359] Saving new best policy, reward=0.610!
+[2023-09-26 13:28:59,731][09734] Updated weights for policy 0, policy_version 4480 (0.0017)
+[2023-09-26 13:28:59,731][09735] Updated weights for policy 1, policy_version 4480 (0.0017)
+[2023-09-26 13:29:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 2301952. Throughput: 0: 786.8, 1: 786.9. Samples: 572663. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:29:02,025][08516] Avg episode reward: [(0, '0.620'), (1, '0.050')]
+[2023-09-26 13:29:02,025][09359] Saving new best policy, reward=0.620!
+[2023-09-26 13:29:07,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2334720. Throughput: 0: 780.0, 1: 779.1. Samples: 581851. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 13:29:07,025][08516] Avg episode reward: [(0, '0.640'), (1, '0.100')]
+[2023-09-26 13:29:07,034][09359] Saving new best policy, reward=0.640!
+[2023-09-26 13:29:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2367488. Throughput: 0: 783.9, 1: 783.3. Samples: 591615. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:29:12,025][08516] Avg episode reward: [(0, '0.660'), (1, '0.120')]
+[2023-09-26 13:29:12,026][09359] Saving new best policy, reward=0.660!
+[2023-09-26 13:29:12,756][09734] Updated weights for policy 0, policy_version 4640 (0.0020)
+[2023-09-26 13:29:12,757][09735] Updated weights for policy 1, policy_version 4640 (0.0021)
+[2023-09-26 13:29:17,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2400256. Throughput: 0: 777.3, 1: 777.6. Samples: 596115. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:29:17,025][08516] Avg episode reward: [(0, '0.600'), (1, '0.120')]
+[2023-09-26 13:29:22,024][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2433024. Throughput: 0: 785.8, 1: 786.6. Samples: 605962. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:29:22,025][08516] Avg episode reward: [(0, '0.640'), (1, '0.150')]
+[2023-09-26 13:29:22,037][09597] Saving new best policy, reward=0.150!
+[2023-09-26 13:29:25,607][09734] Updated weights for policy 0, policy_version 4800 (0.0017)
+[2023-09-26 13:29:25,607][09735] Updated weights for policy 1, policy_version 4800 (0.0014)
+[2023-09-26 13:29:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2465792. Throughput: 0: 786.0, 1: 785.5. Samples: 615324. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:29:27,025][08516] Avg episode reward: [(0, '0.640'), (1, '0.170')]
+[2023-09-26 13:29:27,027][09597] Saving new best policy, reward=0.170!
+[2023-09-26 13:29:32,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6262.0). Total num frames: 2490368. Throughput: 0: 789.4, 1: 791.1. Samples: 619992. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:29:32,026][08516] Avg episode reward: [(0, '0.680'), (1, '0.160')]
+[2023-09-26 13:29:32,087][09359] Saving new best policy, reward=0.680!
+[2023-09-26 13:29:37,025][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2523136. Throughput: 0: 790.0, 1: 788.7. Samples: 629582. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:29:37,026][08516] Avg episode reward: [(0, '0.710'), (1, '0.120')]
+[2023-09-26 13:29:37,218][09359] Saving new best policy, reward=0.710!
+[2023-09-26 13:29:38,647][09735] Updated weights for policy 1, policy_version 4960 (0.0018)
+[2023-09-26 13:29:38,647][09734] Updated weights for policy 0, policy_version 4960 (0.0016)
+[2023-09-26 13:29:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2555904. Throughput: 0: 790.2, 1: 787.9. Samples: 638976. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:29:42,025][08516] Avg episode reward: [(0, '0.770'), (1, '0.110')]
+[2023-09-26 13:29:42,027][09359] Saving new best policy, reward=0.770!
+[2023-09-26 13:29:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2588672. Throughput: 0: 784.4, 1: 784.5. Samples: 643264. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:29:47,025][08516] Avg episode reward: [(0, '0.870'), (1, '0.120')]
+[2023-09-26 13:29:47,027][09359] Saving new best policy, reward=0.870!
+[2023-09-26 13:29:52,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 2613248. Throughput: 0: 786.4, 1: 785.6. Samples: 652591. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:29:52,025][08516] Avg episode reward: [(0, '0.960'), (1, '0.090')]
+[2023-09-26 13:29:52,151][09359] Saving new best policy, reward=0.960!
+[2023-09-26 13:29:52,156][09735] Updated weights for policy 1, policy_version 5120 (0.0016)
+[2023-09-26 13:29:52,157][09734] Updated weights for policy 0, policy_version 5120 (0.0015)
+[2023-09-26 13:29:57,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 2646016. Throughput: 0: 777.2, 1: 775.9. Samples: 661504. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:29:57,025][08516] Avg episode reward: [(0, '0.910'), (1, '0.070')]
+[2023-09-26 13:30:02,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2678784. Throughput: 0: 777.1, 1: 777.4. Samples: 666069. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 13:30:02,025][08516] Avg episode reward: [(0, '0.960'), (1, '0.050')]
+[2023-09-26 13:30:05,304][09735] Updated weights for policy 1, policy_version 5280 (0.0013)
+[2023-09-26 13:30:05,304][09734] Updated weights for policy 0, policy_version 5280 (0.0016)
+[2023-09-26 13:30:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2711552. Throughput: 0: 777.6, 1: 775.2. Samples: 675840. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:30:07,025][08516] Avg episode reward: [(0, '0.990'), (1, '0.060')]
+[2023-09-26 13:30:07,033][09359] Saving new best policy, reward=0.990!
+[2023-09-26 13:30:12,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2744320. Throughput: 0: 777.6, 1: 779.2. Samples: 685379. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:30:12,025][08516] Avg episode reward: [(0, '1.000'), (1, '0.070')]
+[2023-09-26 13:30:12,027][09359] Saving new best policy, reward=1.000!
+[2023-09-26 13:30:17,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2777088. Throughput: 0: 780.2, 1: 778.9. Samples: 690153. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:30:17,026][08516] Avg episode reward: [(0, '0.970'), (1, '0.070')]
+[2023-09-26 13:30:18,202][09735] Updated weights for policy 1, policy_version 5440 (0.0016)
+[2023-09-26 13:30:18,202][09734] Updated weights for policy 0, policy_version 5440 (0.0014)
+[2023-09-26 13:30:22,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 2801664. Throughput: 0: 777.2, 1: 776.8. Samples: 699513. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:30:22,025][08516] Avg episode reward: [(0, '1.080'), (1, '0.070')]
+[2023-09-26 13:30:22,116][09359] Saving new best policy, reward=1.080!
+[2023-09-26 13:30:27,024][08516] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 2834432. Throughput: 0: 777.0, 1: 778.7. Samples: 708979. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:30:27,025][08516] Avg episode reward: [(0, '1.150'), (1, '0.100')]
+[2023-09-26 13:30:27,148][09359] Saving new best policy, reward=1.150!
+[2023-09-26 13:30:31,160][09735] Updated weights for policy 1, policy_version 5600 (0.0017)
+[2023-09-26 13:30:31,160][09734] Updated weights for policy 0, policy_version 5600 (0.0017)
+[2023-09-26 13:30:32,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2867200. Throughput: 0: 783.8, 1: 785.2. Samples: 713868. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:30:32,025][08516] Avg episode reward: [(0, '1.180'), (1, '0.100')]
+[2023-09-26 13:30:32,027][09359] Saving new best policy, reward=1.180!
+[2023-09-26 13:30:37,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2899968. Throughput: 0: 782.4, 1: 782.5. Samples: 723012. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:30:37,025][08516] Avg episode reward: [(0, '1.190'), (1, '0.150')]
+[2023-09-26 13:30:37,038][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000005664_1449984.pth...
+[2023-09-26 13:30:37,038][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000005664_1449984.pth...
+[2023-09-26 13:30:37,073][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000002720_696320.pth
+[2023-09-26 13:30:37,076][09359] Saving new best policy, reward=1.190!
+[2023-09-26 13:30:37,078][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000002720_696320.pth
+[2023-09-26 13:30:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2932736. Throughput: 0: 790.4, 1: 792.2. Samples: 732722. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:30:42,026][08516] Avg episode reward: [(0, '1.240'), (1, '0.160')]
+[2023-09-26 13:30:42,027][09359] Saving new best policy, reward=1.240!
+[2023-09-26 13:30:44,122][09734] Updated weights for policy 0, policy_version 5760 (0.0018)
+[2023-09-26 13:30:44,122][09735] Updated weights for policy 1, policy_version 5760 (0.0015)
+[2023-09-26 13:30:47,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 2965504. Throughput: 0: 792.3, 1: 790.7. Samples: 737302. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:30:47,025][08516] Avg episode reward: [(0, '1.220'), (1, '0.170')]
+[2023-09-26 13:30:52,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6289.8). Total num frames: 2994176. Throughput: 0: 788.9, 1: 789.2. Samples: 746855. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 13:30:52,025][08516] Avg episode reward: [(0, '1.200'), (1, '0.150')]
+[2023-09-26 13:30:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3022848. Throughput: 0: 785.7, 1: 785.6. Samples: 756089. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 13:30:57,025][08516] Avg episode reward: [(0, '1.200'), (1, '0.170')]
+[2023-09-26 13:30:57,188][09734] Updated weights for policy 0, policy_version 5920 (0.0016)
+[2023-09-26 13:30:57,188][09735] Updated weights for policy 1, policy_version 5920 (0.0017)
+[2023-09-26 13:31:02,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3055616. Throughput: 0: 785.8, 1: 787.2. Samples: 760936. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:02,025][08516] Avg episode reward: [(0, '1.300'), (1, '0.120')]
+[2023-09-26 13:31:02,027][09359] Saving new best policy, reward=1.300!
+[2023-09-26 13:31:07,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3088384. Throughput: 0: 785.0, 1: 785.1. Samples: 770165. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:07,025][08516] Avg episode reward: [(0, '1.310'), (1, '0.100')]
+[2023-09-26 13:31:07,036][09359] Saving new best policy, reward=1.310!
+[2023-09-26 13:31:10,573][09734] Updated weights for policy 0, policy_version 6080 (0.0017)
+[2023-09-26 13:31:10,573][09735] Updated weights for policy 1, policy_version 6080 (0.0018)
+[2023-09-26 13:31:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3121152. Throughput: 0: 781.6, 1: 781.3. Samples: 779306. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:12,025][08516] Avg episode reward: [(0, '1.400'), (1, '0.080')]
+[2023-09-26 13:31:12,027][09359] Saving new best policy, reward=1.400!
+[2023-09-26 13:31:17,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 3153920. Throughput: 0: 782.7, 1: 780.0. Samples: 784193. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:31:17,025][08516] Avg episode reward: [(0, '1.420'), (1, '0.080')]
+[2023-09-26 13:31:17,026][09359] Saving new best policy, reward=1.420!
+[2023-09-26 13:31:22,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3178496. Throughput: 0: 782.3, 1: 784.2. Samples: 793505. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:22,025][08516] Avg episode reward: [(0, '1.490'), (1, '0.080')]
+[2023-09-26 13:31:22,035][09359] Saving new best policy, reward=1.490!
+[2023-09-26 13:31:23,639][09734] Updated weights for policy 0, policy_version 6240 (0.0018)
+[2023-09-26 13:31:23,639][09735] Updated weights for policy 1, policy_version 6240 (0.0016)
+[2023-09-26 13:31:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3211264. Throughput: 0: 779.8, 1: 777.9. Samples: 802817. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:27,025][08516] Avg episode reward: [(0, '1.490'), (1, '0.150')]
+[2023-09-26 13:31:32,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 3244032. Throughput: 0: 778.4, 1: 779.0. Samples: 807383. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 13:31:32,025][08516] Avg episode reward: [(0, '1.400'), (1, '0.150')]
+[2023-09-26 13:31:36,602][09734] Updated weights for policy 0, policy_version 6400 (0.0017)
+[2023-09-26 13:31:36,602][09735] Updated weights for policy 1, policy_version 6400 (0.0017)
+[2023-09-26 13:31:37,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 3276800. Throughput: 0: 780.9, 1: 781.0. Samples: 817138. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:37,025][08516] Avg episode reward: [(0, '1.310'), (1, '0.150')]
+[2023-09-26 13:31:42,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3309568. Throughput: 0: 783.5, 1: 783.1. Samples: 826588. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:42,025][08516] Avg episode reward: [(0, '1.340'), (1, '0.160')]
+[2023-09-26 13:31:47,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 3342336. Throughput: 0: 784.8, 1: 783.0. Samples: 831488. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:47,025][08516] Avg episode reward: [(0, '1.270'), (1, '0.180')]
+[2023-09-26 13:31:47,027][09597] Saving new best policy, reward=0.180!
+[2023-09-26 13:31:49,414][09734] Updated weights for policy 0, policy_version 6560 (0.0016)
+[2023-09-26 13:31:49,414][09735] Updated weights for policy 1, policy_version 6560 (0.0015)
+[2023-09-26 13:31:52,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6212.3, 300 sec: 6275.9). Total num frames: 3366912. Throughput: 0: 786.5, 1: 787.3. Samples: 840984. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:31:52,025][08516] Avg episode reward: [(0, '1.320'), (1, '0.150')]
+[2023-09-26 13:31:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3399680. Throughput: 0: 787.8, 1: 788.0. Samples: 850217. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:31:57,025][08516] Avg episode reward: [(0, '1.360'), (1, '0.140')]
+[2023-09-26 13:32:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 3432448. Throughput: 0: 787.4, 1: 788.3. Samples: 855100. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:32:02,025][08516] Avg episode reward: [(0, '1.400'), (1, '0.130')]
+[2023-09-26 13:32:02,491][09735] Updated weights for policy 1, policy_version 6720 (0.0013)
+[2023-09-26 13:32:02,491][09734] Updated weights for policy 0, policy_version 6720 (0.0018)
+[2023-09-26 13:32:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3465216. Throughput: 0: 787.9, 1: 786.4. Samples: 864350. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 13:32:07,025][08516] Avg episode reward: [(0, '1.420'), (1, '0.120')]
+[2023-09-26 13:32:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 3497984. Throughput: 0: 792.4, 1: 794.5. Samples: 874228. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 13:32:12,025][08516] Avg episode reward: [(0, '1.450'), (1, '0.090')]
+[2023-09-26 13:32:15,458][09734] Updated weights for policy 0, policy_version 6880 (0.0017)
+[2023-09-26 13:32:15,458][09735] Updated weights for policy 1, policy_version 6880 (0.0015)
+[2023-09-26 13:32:17,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3530752. Throughput: 0: 791.8, 1: 791.3. Samples: 878621. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:32:17,025][08516] Avg episode reward: [(0, '1.500'), (1, '0.060')]
+[2023-09-26 13:32:17,027][09359] Saving new best policy, reward=1.500!
+[2023-09-26 13:32:22,024][08516] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6262.0). Total num frames: 3559424. Throughput: 0: 787.3, 1: 788.9. Samples: 888067. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:32:22,025][08516] Avg episode reward: [(0, '1.490'), (1, '0.080')]
+[2023-09-26 13:32:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3588096. Throughput: 0: 783.6, 1: 783.9. Samples: 897128. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:32:27,025][08516] Avg episode reward: [(0, '1.570'), (1, '0.090')]
+[2023-09-26 13:32:27,027][09359] Saving new best policy, reward=1.570!
+[2023-09-26 13:32:28,607][09734] Updated weights for policy 0, policy_version 7040 (0.0019)
+[2023-09-26 13:32:28,607][09735] Updated weights for policy 1, policy_version 7040 (0.0018)
+[2023-09-26 13:32:32,024][08516] Fps is (10 sec: 6144.2, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3620864. Throughput: 0: 784.3, 1: 785.8. Samples: 902143. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:32:32,025][08516] Avg episode reward: [(0, '1.630'), (1, '0.090')]
+[2023-09-26 13:32:32,025][09359] Saving new best policy, reward=1.630!
+[2023-09-26 13:32:37,025][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3653632. Throughput: 0: 783.1, 1: 781.6. Samples: 911399. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:32:37,025][08516] Avg episode reward: [(0, '1.630'), (1, '0.100')]
+[2023-09-26 13:32:37,036][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000007136_1826816.pth...
+[2023-09-26 13:32:37,037][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000007136_1826816.pth...
+[2023-09-26 13:32:37,073][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000004192_1073152.pth
+[2023-09-26 13:32:37,074][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000004192_1073152.pth
+[2023-09-26 13:32:41,719][09734] Updated weights for policy 0, policy_version 7200 (0.0017)
+[2023-09-26 13:32:41,719][09735] Updated weights for policy 1, policy_version 7200 (0.0016)
+[2023-09-26 13:32:42,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3686400. Throughput: 0: 784.9, 1: 785.5. Samples: 920882. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:32:42,025][08516] Avg episode reward: [(0, '1.720'), (1, '0.080')]
+[2023-09-26 13:32:42,026][09359] Saving new best policy, reward=1.720!
+[2023-09-26 13:32:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3719168. Throughput: 0: 784.9, 1: 783.6. Samples: 925683. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 13:32:47,025][08516] Avg episode reward: [(0, '1.750'), (1, '0.050')]
+[2023-09-26 13:32:47,027][09359] Saving new best policy, reward=1.750!
+[2023-09-26 13:32:52,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3743744. Throughput: 0: 786.1, 1: 785.2. Samples: 935059. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 13:32:52,025][08516] Avg episode reward: [(0, '1.780'), (1, '0.050')]
+[2023-09-26 13:32:52,068][09359] Saving new best policy, reward=1.780!
+[2023-09-26 13:32:54,770][09735] Updated weights for policy 1, policy_version 7360 (0.0017)
+[2023-09-26 13:32:54,771][09734] Updated weights for policy 0, policy_version 7360 (0.0015)
+[2023-09-26 13:32:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3776512. Throughput: 0: 778.0, 1: 777.7. Samples: 944235. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 13:32:57,025][08516] Avg episode reward: [(0, '1.750'), (1, '0.060')]
+[2023-09-26 13:33:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3809280. Throughput: 0: 781.5, 1: 782.5. Samples: 949003. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:02,025][08516] Avg episode reward: [(0, '1.750'), (1, '0.060')]
+[2023-09-26 13:33:07,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3842048. Throughput: 0: 783.1, 1: 781.3. Samples: 958465. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:07,026][08516] Avg episode reward: [(0, '1.800'), (1, '0.060')]
+[2023-09-26 13:33:07,036][09359] Saving new best policy, reward=1.800!
+[2023-09-26 13:33:07,811][09734] Updated weights for policy 0, policy_version 7520 (0.0019)
+[2023-09-26 13:33:07,811][09735] Updated weights for policy 1, policy_version 7520 (0.0019)
+[2023-09-26 13:33:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3874816. Throughput: 0: 786.3, 1: 784.8. Samples: 967827. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:33:12,025][08516] Avg episode reward: [(0, '1.800'), (1, '0.070')]
+[2023-09-26 13:33:17,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 3907584. Throughput: 0: 783.6, 1: 784.2. Samples: 972694. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:17,025][08516] Avg episode reward: [(0, '1.880'), (1, '0.090')]
+[2023-09-26 13:33:17,026][09359] Saving new best policy, reward=1.880!
+[2023-09-26 13:33:21,070][09734] Updated weights for policy 0, policy_version 7680 (0.0017)
+[2023-09-26 13:33:21,070][09735] Updated weights for policy 1, policy_version 7680 (0.0016)
+[2023-09-26 13:33:22,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 3932160. Throughput: 0: 782.9, 1: 782.4. Samples: 981837. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:22,025][08516] Avg episode reward: [(0, '1.900'), (1, '0.070')]
+[2023-09-26 13:33:22,034][09359] Saving new best policy, reward=1.900!
+[2023-09-26 13:33:27,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6262.0). Total num frames: 3964928. Throughput: 0: 782.8, 1: 780.6. Samples: 991232. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:27,025][08516] Avg episode reward: [(0, '1.960'), (1, '0.070')]
+[2023-09-26 13:33:27,027][09359] Saving new best policy, reward=1.960!
+[2023-09-26 13:33:32,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 3997696. Throughput: 0: 779.5, 1: 781.7. Samples: 995940. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:32,025][08516] Avg episode reward: [(0, '2.070'), (1, '0.060')]
+[2023-09-26 13:33:32,026][09359] Saving new best policy, reward=2.070!
+[2023-09-26 13:33:34,035][09734] Updated weights for policy 0, policy_version 7840 (0.0018)
+[2023-09-26 13:33:34,035][09735] Updated weights for policy 1, policy_version 7840 (0.0017)
+[2023-09-26 13:33:37,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4030464. Throughput: 0: 783.8, 1: 783.0. Samples: 1005567. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:37,025][08516] Avg episode reward: [(0, '2.090'), (1, '0.060')]
+[2023-09-26 13:33:37,035][09359] Saving new best policy, reward=2.090!
+[2023-09-26 13:33:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4063232. Throughput: 0: 787.1, 1: 787.1. Samples: 1015073. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 13:33:42,025][08516] Avg episode reward: [(0, '2.070'), (1, '0.060')]
+[2023-09-26 13:33:46,865][09735] Updated weights for policy 1, policy_version 8000 (0.0016)
+[2023-09-26 13:33:46,865][09734] Updated weights for policy 0, policy_version 8000 (0.0017)
+[2023-09-26 13:33:47,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4096000. Throughput: 0: 788.6, 1: 787.0. Samples: 1019904. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:47,025][08516] Avg episode reward: [(0, '2.060'), (1, '0.050')]
+[2023-09-26 13:33:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6289.8). Total num frames: 4128768. Throughput: 0: 788.2, 1: 789.6. Samples: 1029468. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:52,025][08516] Avg episode reward: [(0, '2.110'), (1, '0.050')]
+[2023-09-26 13:33:52,035][09359] Saving new best policy, reward=2.110!
+[2023-09-26 13:33:57,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4153344. Throughput: 0: 789.8, 1: 791.6. Samples: 1038993. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:33:57,025][08516] Avg episode reward: [(0, '2.130'), (1, '0.070')]
+[2023-09-26 13:33:57,034][09359] Saving new best policy, reward=2.130!
+[2023-09-26 13:33:59,605][09734] Updated weights for policy 0, policy_version 8160 (0.0018)
+[2023-09-26 13:33:59,605][09735] Updated weights for policy 1, policy_version 8160 (0.0019)
+[2023-09-26 13:34:02,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 4186112. Throughput: 0: 792.7, 1: 791.4. Samples: 1043979. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:34:02,025][08516] Avg episode reward: [(0, '2.060'), (1, '0.090')]
+[2023-09-26 13:34:07,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4218880. Throughput: 0: 788.4, 1: 789.8. Samples: 1052856. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:34:07,026][08516] Avg episode reward: [(0, '2.130'), (1, '0.110')]
+[2023-09-26 13:34:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 4251648. Throughput: 0: 792.1, 1: 791.3. Samples: 1062485. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 13:34:12,025][08516] Avg episode reward: [(0, '2.170'), (1, '0.100')]
+[2023-09-26 13:34:12,026][09359] Saving new best policy, reward=2.170!
+[2023-09-26 13:34:12,907][09734] Updated weights for policy 0, policy_version 8320 (0.0018)
+[2023-09-26 13:34:12,907][09735] Updated weights for policy 1, policy_version 8320 (0.0017)
+[2023-09-26 13:34:17,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4284416. Throughput: 0: 790.9, 1: 788.5. Samples: 1067012. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:34:17,025][08516] Avg episode reward: [(0, '2.180'), (1, '0.110')]
+[2023-09-26 13:34:17,025][09359] Saving new best policy, reward=2.180!
+[2023-09-26 13:34:22,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6275.9). Total num frames: 4317184. Throughput: 0: 790.8, 1: 792.5. Samples: 1076816. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:34:22,025][08516] Avg episode reward: [(0, '2.200'), (1, '0.100')]
+[2023-09-26 13:34:22,034][09359] Saving new best policy, reward=2.200!
+[2023-09-26 13:34:25,768][09734] Updated weights for policy 0, policy_version 8480 (0.0017)
+[2023-09-26 13:34:25,768][09735] Updated weights for policy 1, policy_version 8480 (0.0018)
+[2023-09-26 13:34:27,031][08516] Fps is (10 sec: 6139.9, 60 sec: 6348.1, 300 sec: 6289.7). Total num frames: 4345856. Throughput: 0: 789.2, 1: 789.5. Samples: 1086124.
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:34:27,033][08516] Avg episode reward: [(0, '2.190'), (1, '0.080')] +[2023-09-26 13:34:32,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4374528. Throughput: 0: 789.3, 1: 789.9. Samples: 1090970. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:34:32,025][08516] Avg episode reward: [(0, '2.270'), (1, '0.040')] +[2023-09-26 13:34:32,191][09359] Saving new best policy, reward=2.270! +[2023-09-26 13:34:37,024][08516] Fps is (10 sec: 6148.1, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 4407296. Throughput: 0: 784.7, 1: 785.5. Samples: 1100125. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 13:34:37,025][08516] Avg episode reward: [(0, '2.220'), (1, '0.030')] +[2023-09-26 13:34:37,033][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000008608_2203648.pth... +[2023-09-26 13:34:37,033][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000008608_2203648.pth... +[2023-09-26 13:34:37,071][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000005664_1449984.pth +[2023-09-26 13:34:37,071][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000005664_1449984.pth +[2023-09-26 13:34:38,697][09735] Updated weights for policy 1, policy_version 8640 (0.0015) +[2023-09-26 13:34:38,697][09734] Updated weights for policy 0, policy_version 8640 (0.0016) +[2023-09-26 13:34:42,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4440064. Throughput: 0: 790.1, 1: 788.2. Samples: 1110016. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:34:42,025][08516] Avg episode reward: [(0, '2.270'), (1, '0.030')] +[2023-09-26 13:34:47,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 4472832. Throughput: 0: 784.4, 1: 785.4. Samples: 1114622. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:34:47,025][08516] Avg episode reward: [(0, '2.270'), (1, '0.030')] +[2023-09-26 13:34:51,483][09734] Updated weights for policy 0, policy_version 8800 (0.0018) +[2023-09-26 13:34:51,483][09735] Updated weights for policy 1, policy_version 8800 (0.0013) +[2023-09-26 13:34:52,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 4505600. Throughput: 0: 795.2, 1: 793.7. Samples: 1124356. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:34:52,025][08516] Avg episode reward: [(0, '2.310'), (1, '0.040')] +[2023-09-26 13:34:52,037][09359] Saving new best policy, reward=2.310! +[2023-09-26 13:34:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 4538368. Throughput: 0: 794.8, 1: 797.3. Samples: 1134127. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 13:34:57,025][08516] Avg episode reward: [(0, '2.430'), (1, '0.080')] +[2023-09-26 13:34:57,026][09359] Saving new best policy, reward=2.430! +[2023-09-26 13:35:02,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 4571136. Throughput: 0: 796.4, 1: 796.5. Samples: 1138693. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 13:35:02,025][08516] Avg episode reward: [(0, '2.430'), (1, '0.080')] +[2023-09-26 13:35:04,362][09734] Updated weights for policy 0, policy_version 8960 (0.0014) +[2023-09-26 13:35:04,363][09735] Updated weights for policy 1, policy_version 8960 (0.0017) +[2023-09-26 13:35:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 4603904. Throughput: 0: 795.2, 1: 795.0. Samples: 1148375. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 13:35:07,025][08516] Avg episode reward: [(0, '2.470'), (1, '0.090')] +[2023-09-26 13:35:07,036][09359] Saving new best policy, reward=2.470! 
+[2023-09-26 13:35:12,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4628480. Throughput: 0: 795.8, 1: 795.7. Samples: 1157733. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 13:35:12,025][08516] Avg episode reward: [(0, '2.520'), (1, '0.090')] +[2023-09-26 13:35:12,057][09359] Saving new best policy, reward=2.520! +[2023-09-26 13:35:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 4661248. Throughput: 0: 796.6, 1: 797.3. Samples: 1162696. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 13:35:17,025][08516] Avg episode reward: [(0, '2.480'), (1, '0.090')] +[2023-09-26 13:35:17,194][09734] Updated weights for policy 0, policy_version 9120 (0.0017) +[2023-09-26 13:35:17,194][09735] Updated weights for policy 1, policy_version 9120 (0.0016) +[2023-09-26 13:35:22,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 4694016. Throughput: 0: 801.1, 1: 800.5. Samples: 1172195. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:35:22,025][08516] Avg episode reward: [(0, '2.390'), (1, '0.100')] +[2023-09-26 13:35:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6349.5, 300 sec: 6303.7). Total num frames: 4726784. Throughput: 0: 796.4, 1: 796.4. Samples: 1181696. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:35:27,025][08516] Avg episode reward: [(0, '2.340'), (1, '0.090')] +[2023-09-26 13:35:30,094][09734] Updated weights for policy 0, policy_version 9280 (0.0015) +[2023-09-26 13:35:30,095][09735] Updated weights for policy 1, policy_version 9280 (0.0017) +[2023-09-26 13:35:32,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 4759552. Throughput: 0: 796.7, 1: 796.6. Samples: 1186322. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:35:32,025][08516] Avg episode reward: [(0, '2.360'), (1, '0.070')] +[2023-09-26 13:35:37,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 4792320. Throughput: 0: 796.4, 1: 796.4. Samples: 1196032. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 13:35:37,026][08516] Avg episode reward: [(0, '2.370'), (1, '0.070')] +[2023-09-26 13:35:42,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 4825088. Throughput: 0: 794.5, 1: 794.3. Samples: 1205624. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:35:42,025][08516] Avg episode reward: [(0, '2.350'), (1, '0.030')] +[2023-09-26 13:35:42,936][09734] Updated weights for policy 0, policy_version 9440 (0.0015) +[2023-09-26 13:35:42,936][09735] Updated weights for policy 1, policy_version 9440 (0.0018) +[2023-09-26 13:35:47,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6317.6). Total num frames: 4857856. Throughput: 0: 796.4, 1: 796.4. Samples: 1210369. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:35:47,025][08516] Avg episode reward: [(0, '2.410'), (1, '0.010')] +[2023-09-26 13:35:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 4890624. Throughput: 0: 794.8, 1: 795.1. Samples: 1219920. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:35:52,025][08516] Avg episode reward: [(0, '2.500'), (1, '0.010')] +[2023-09-26 13:35:55,779][09734] Updated weights for policy 0, policy_version 9600 (0.0018) +[2023-09-26 13:35:55,779][09735] Updated weights for policy 1, policy_version 9600 (0.0016) +[2023-09-26 13:35:57,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 4923392. Throughput: 0: 796.8, 1: 796.8. Samples: 1229444. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:35:57,025][08516] Avg episode reward: [(0, '2.530'), (1, '0.020')] +[2023-09-26 13:35:57,026][09359] Saving new best policy, reward=2.530! +[2023-09-26 13:36:02,024][08516] Fps is (10 sec: 5734.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 4947968. Throughput: 0: 794.7, 1: 794.2. Samples: 1234197. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 13:36:02,025][08516] Avg episode reward: [(0, '2.510'), (1, '0.020')] +[2023-09-26 13:36:07,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 4980736. Throughput: 0: 793.2, 1: 793.7. Samples: 1243606. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 13:36:07,026][08516] Avg episode reward: [(0, '2.520'), (1, '0.040')] +[2023-09-26 13:36:08,817][09735] Updated weights for policy 1, policy_version 9760 (0.0017) +[2023-09-26 13:36:08,817][09734] Updated weights for policy 0, policy_version 9760 (0.0017) +[2023-09-26 13:36:12,024][08516] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 5013504. Throughput: 0: 794.2, 1: 794.2. Samples: 1253175. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 13:36:12,025][08516] Avg episode reward: [(0, '2.500'), (1, '0.050')] +[2023-09-26 13:36:17,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 5046272. Throughput: 0: 792.8, 1: 792.8. Samples: 1257674. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:36:17,025][08516] Avg episode reward: [(0, '2.560'), (1, '0.070')] +[2023-09-26 13:36:17,026][09359] Saving new best policy, reward=2.560! +[2023-09-26 13:36:22,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5070848. Throughput: 0: 788.0, 1: 787.7. Samples: 1266939. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:36:22,025][08516] Avg episode reward: [(0, '2.510'), (1, '0.070')] +[2023-09-26 13:36:22,071][09734] Updated weights for policy 0, policy_version 9920 (0.0016) +[2023-09-26 13:36:22,071][09735] Updated weights for policy 1, policy_version 9920 (0.0017) +[2023-09-26 13:36:27,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5103616. Throughput: 0: 783.5, 1: 783.8. Samples: 1276154. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 13:36:27,025][08516] Avg episode reward: [(0, '2.510'), (1, '0.090')] +[2023-09-26 13:36:32,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5136384. Throughput: 0: 784.6, 1: 786.1. Samples: 1281052. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:36:32,025][08516] Avg episode reward: [(0, '2.600'), (1, '0.110')] +[2023-09-26 13:36:32,025][09359] Saving new best policy, reward=2.600! +[2023-09-26 13:36:35,143][09735] Updated weights for policy 1, policy_version 10080 (0.0018) +[2023-09-26 13:36:35,143][09734] Updated weights for policy 0, policy_version 10080 (0.0015) +[2023-09-26 13:36:37,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5169152. Throughput: 0: 782.3, 1: 780.5. Samples: 1290246. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:36:37,025][08516] Avg episode reward: [(0, '2.580'), (1, '0.120')] +[2023-09-26 13:36:37,036][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000010096_2584576.pth... +[2023-09-26 13:36:37,036][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000010096_2584576.pth... 
+[2023-09-26 13:36:37,073][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000007136_1826816.pth +[2023-09-26 13:36:37,074][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000007136_1826816.pth +[2023-09-26 13:36:42,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5201920. Throughput: 0: 786.6, 1: 786.5. Samples: 1300237. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:36:42,025][08516] Avg episode reward: [(0, '2.570'), (1, '0.110')] +[2023-09-26 13:36:47,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6331.4). Total num frames: 5234688. Throughput: 0: 782.4, 1: 781.6. Samples: 1304577. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:36:47,025][08516] Avg episode reward: [(0, '2.640'), (1, '0.100')] +[2023-09-26 13:36:47,025][09359] Saving new best policy, reward=2.640! +[2023-09-26 13:36:48,142][09734] Updated weights for policy 0, policy_version 10240 (0.0015) +[2023-09-26 13:36:48,143][09735] Updated weights for policy 1, policy_version 10240 (0.0017) +[2023-09-26 13:36:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 5267456. Throughput: 0: 784.3, 1: 784.1. Samples: 1314185. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 13:36:52,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.090')] +[2023-09-26 13:36:52,038][09359] Saving new best policy, reward=2.660! +[2023-09-26 13:36:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 5292032. Throughput: 0: 779.2, 1: 780.4. Samples: 1323357. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 13:36:57,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.070')] +[2023-09-26 13:36:57,153][09359] Saving new best policy, reward=2.720! 
+[2023-09-26 13:37:01,209][09735] Updated weights for policy 1, policy_version 10400 (0.0018) +[2023-09-26 13:37:01,209][09734] Updated weights for policy 0, policy_version 10400 (0.0017) +[2023-09-26 13:37:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5324800. Throughput: 0: 781.1, 1: 781.5. Samples: 1327993. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 13:37:02,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.060')] +[2023-09-26 13:37:02,027][09359] Saving new best policy, reward=2.740! +[2023-09-26 13:37:07,024][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5357568. Throughput: 0: 782.2, 1: 783.3. Samples: 1337385. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 13:37:07,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.080')] +[2023-09-26 13:37:07,037][09359] Saving new best policy, reward=2.800! +[2023-09-26 13:37:12,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 5390336. Throughput: 0: 787.7, 1: 786.7. Samples: 1347001. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 13:37:12,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.060')] +[2023-09-26 13:37:14,326][09735] Updated weights for policy 1, policy_version 10560 (0.0018) +[2023-09-26 13:37:14,327][09734] Updated weights for policy 0, policy_version 10560 (0.0018) +[2023-09-26 13:37:17,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6317.6). Total num frames: 5423104. Throughput: 0: 784.7, 1: 784.0. Samples: 1351643. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:37:17,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.060')] +[2023-09-26 13:37:22,025][08516] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5447680. Throughput: 0: 785.8, 1: 786.5. Samples: 1361003. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:37:22,026][08516] Avg episode reward: [(0, '2.660'), (1, '0.070')] +[2023-09-26 13:37:27,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5480448. Throughput: 0: 777.6, 1: 777.5. Samples: 1370216. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 13:37:27,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.080')] +[2023-09-26 13:37:27,332][09735] Updated weights for policy 1, policy_version 10720 (0.0015) +[2023-09-26 13:37:27,332][09734] Updated weights for policy 0, policy_version 10720 (0.0017) +[2023-09-26 13:37:32,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5513216. Throughput: 0: 781.4, 1: 783.2. Samples: 1374982. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 13:37:32,025][08516] Avg episode reward: [(0, '2.610'), (1, '0.080')] +[2023-09-26 13:37:37,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5545984. Throughput: 0: 781.7, 1: 779.7. Samples: 1384448. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 13:37:37,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.090')] +[2023-09-26 13:37:40,509][09735] Updated weights for policy 1, policy_version 10880 (0.0015) +[2023-09-26 13:37:40,509][09734] Updated weights for policy 0, policy_version 10880 (0.0018) +[2023-09-26 13:37:42,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5578752. Throughput: 0: 783.0, 1: 783.6. Samples: 1393853. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 13:37:42,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.100')] +[2023-09-26 13:37:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 5611520. Throughput: 0: 786.9, 1: 785.5. Samples: 1398750. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:37:47,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.070')] +[2023-09-26 13:37:52,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 5636096. Throughput: 0: 782.9, 1: 784.1. Samples: 1407899. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:37:52,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.050')] +[2023-09-26 13:37:53,640][09734] Updated weights for policy 0, policy_version 11040 (0.0017) +[2023-09-26 13:37:53,640][09735] Updated weights for policy 1, policy_version 11040 (0.0016) +[2023-09-26 13:37:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5668864. Throughput: 0: 780.6, 1: 779.8. Samples: 1417218. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:37:57,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.010')] +[2023-09-26 13:38:02,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5701632. Throughput: 0: 782.0, 1: 783.1. Samples: 1422071. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:02,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.020')] +[2023-09-26 13:38:06,557][09734] Updated weights for policy 0, policy_version 11200 (0.0017) +[2023-09-26 13:38:06,557][09735] Updated weights for policy 1, policy_version 11200 (0.0018) +[2023-09-26 13:38:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5734400. Throughput: 0: 784.3, 1: 783.5. Samples: 1431552. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:07,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.030')] +[2023-09-26 13:38:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5767168. Throughput: 0: 782.9, 1: 783.2. Samples: 1440691. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:12,025][08516] Avg episode reward: [(0, '2.650'), (1, '0.030')] +[2023-09-26 13:38:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 5791744. Throughput: 0: 781.9, 1: 783.6. Samples: 1445429. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:17,025][08516] Avg episode reward: [(0, '2.620'), (1, '0.050')] +[2023-09-26 13:38:20,054][09734] Updated weights for policy 0, policy_version 11360 (0.0016) +[2023-09-26 13:38:20,055][09735] Updated weights for policy 1, policy_version 11360 (0.0016) +[2023-09-26 13:38:22,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 5824512. Throughput: 0: 773.9, 1: 775.5. Samples: 1454173. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:22,025][08516] Avg episode reward: [(0, '2.600'), (1, '0.070')] +[2023-09-26 13:38:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5857280. Throughput: 0: 779.8, 1: 780.3. Samples: 1464057. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:27,025][08516] Avg episode reward: [(0, '2.680'), (1, '0.090')] +[2023-09-26 13:38:32,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5890048. Throughput: 0: 775.3, 1: 776.4. Samples: 1468575. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:32,025][08516] Avg episode reward: [(0, '2.620'), (1, '0.100')] +[2023-09-26 13:38:32,950][09734] Updated weights for policy 0, policy_version 11520 (0.0015) +[2023-09-26 13:38:32,950][09735] Updated weights for policy 1, policy_version 11520 (0.0014) +[2023-09-26 13:38:37,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 5922816. Throughput: 0: 782.2, 1: 781.4. Samples: 1478259. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:37,025][08516] Avg episode reward: [(0, '2.560'), (1, '0.090')] +[2023-09-26 13:38:37,035][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000011568_2961408.pth... +[2023-09-26 13:38:37,035][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000011568_2961408.pth... +[2023-09-26 13:38:37,071][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000008608_2203648.pth +[2023-09-26 13:38:37,072][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000008608_2203648.pth +[2023-09-26 13:38:42,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 5955584. Throughput: 0: 782.3, 1: 784.1. Samples: 1487703. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:42,025][08516] Avg episode reward: [(0, '2.530'), (1, '0.080')] +[2023-09-26 13:38:45,788][09735] Updated weights for policy 1, policy_version 11680 (0.0017) +[2023-09-26 13:38:45,788][09734] Updated weights for policy 0, policy_version 11680 (0.0015) +[2023-09-26 13:38:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 5980160. Throughput: 0: 784.4, 1: 785.0. Samples: 1492693. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:47,025][08516] Avg episode reward: [(0, '2.620'), (1, '0.060')] +[2023-09-26 13:38:52,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 6012928. Throughput: 0: 783.5, 1: 785.4. Samples: 1502153. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:52,025][08516] Avg episode reward: [(0, '2.600'), (1, '0.030')] +[2023-09-26 13:38:57,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 6045696. Throughput: 0: 785.5, 1: 784.7. Samples: 1511349. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:38:57,025][08516] Avg episode reward: [(0, '2.640'), (1, '0.020')] +[2023-09-26 13:38:58,979][09735] Updated weights for policy 1, policy_version 11840 (0.0015) +[2023-09-26 13:38:58,979][09734] Updated weights for policy 0, policy_version 11840 (0.0016) +[2023-09-26 13:39:02,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 6078464. Throughput: 0: 782.1, 1: 780.8. Samples: 1515759. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:39:02,026][08516] Avg episode reward: [(0, '2.710'), (1, '0.030')] +[2023-09-26 13:39:07,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 6111232. Throughput: 0: 794.4, 1: 794.3. Samples: 1525663. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:39:07,025][08516] Avg episode reward: [(0, '2.830'), (1, '0.030')] +[2023-09-26 13:39:07,033][09359] Saving new best policy, reward=2.830! +[2023-09-26 13:39:11,896][09734] Updated weights for policy 0, policy_version 12000 (0.0016) +[2023-09-26 13:39:11,896][09735] Updated weights for policy 1, policy_version 12000 (0.0017) +[2023-09-26 13:39:12,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 6144000. Throughput: 0: 787.3, 1: 786.4. Samples: 1534874. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:39:12,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.050')] +[2023-09-26 13:39:12,027][09359] Saving new best policy, reward=2.870! +[2023-09-26 13:39:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 6168576. Throughput: 0: 788.9, 1: 789.8. Samples: 1539614. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:39:17,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.050')] +[2023-09-26 13:39:22,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6289.9). Total num frames: 6201344. 
Throughput: 0: 786.5, 1: 786.8. Samples: 1549060. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:39:22,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.100')] +[2023-09-26 13:39:25,123][09735] Updated weights for policy 1, policy_version 12160 (0.0016) +[2023-09-26 13:39:25,124][09734] Updated weights for policy 0, policy_version 12160 (0.0017) +[2023-09-26 13:39:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 6234112. Throughput: 0: 785.8, 1: 784.6. Samples: 1558369. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:39:27,025][08516] Avg episode reward: [(0, '2.830'), (1, '0.090')] +[2023-09-26 13:39:32,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 6266880. Throughput: 0: 778.4, 1: 777.8. Samples: 1562725. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:39:32,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.080')] +[2023-09-26 13:39:37,024][08516] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6289.8). Total num frames: 6295552. Throughput: 0: 779.2, 1: 779.3. Samples: 1572288. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:39:37,026][08516] Avg episode reward: [(0, '2.830'), (1, '0.060')] +[2023-09-26 13:39:38,312][09734] Updated weights for policy 0, policy_version 12320 (0.0017) +[2023-09-26 13:39:38,312][09735] Updated weights for policy 1, policy_version 12320 (0.0016) +[2023-09-26 13:39:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 6324224. Throughput: 0: 779.3, 1: 780.5. Samples: 1581542. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 13:39:42,025][08516] Avg episode reward: [(0, '2.830'), (1, '0.070')] +[2023-09-26 13:39:47,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6356992. Throughput: 0: 781.4, 1: 782.7. Samples: 1586145. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 13:39:47,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.030')]
+[2023-09-26 13:39:51,354][09734] Updated weights for policy 0, policy_version 12480 (0.0017)
+[2023-09-26 13:39:51,354][09735] Updated weights for policy 1, policy_version 12480 (0.0018)
+[2023-09-26 13:39:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6389760. Throughput: 0: 776.8, 1: 776.9. Samples: 1595580. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 13:39:52,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.050')]
+[2023-09-26 13:39:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6422528. Throughput: 0: 781.3, 1: 782.1. Samples: 1605227. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 13:39:57,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.050')]
+[2023-09-26 13:40:02,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6455296. Throughput: 0: 780.4, 1: 777.8. Samples: 1609732. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:02,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.070')]
+[2023-09-26 13:40:02,027][09359] Saving new best policy, reward=2.890!
+[2023-09-26 13:40:04,379][09735] Updated weights for policy 1, policy_version 12640 (0.0018)
+[2023-09-26 13:40:04,380][09734] Updated weights for policy 0, policy_version 12640 (0.0018)
+[2023-09-26 13:40:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 6488064. Throughput: 0: 780.8, 1: 780.9. Samples: 1619338. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:07,026][08516] Avg episode reward: [(0, '2.870'), (1, '0.070')]
+[2023-09-26 13:40:12,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 6512640. Throughput: 0: 780.9, 1: 782.0. Samples: 1628700. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:12,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.050')]
+[2023-09-26 13:40:17,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6545408. Throughput: 0: 786.0, 1: 785.5. Samples: 1633443. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:17,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.040')]
+[2023-09-26 13:40:17,356][09734] Updated weights for policy 0, policy_version 12800 (0.0017)
+[2023-09-26 13:40:17,357][09735] Updated weights for policy 1, policy_version 12800 (0.0017)
+[2023-09-26 13:40:22,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6578176. Throughput: 0: 784.6, 1: 784.7. Samples: 1642908. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:22,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.030')]
+[2023-09-26 13:40:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6610944. Throughput: 0: 791.0, 1: 789.1. Samples: 1652646. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:27,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.040')]
+[2023-09-26 13:40:30,439][09734] Updated weights for policy 0, policy_version 12960 (0.0018)
+[2023-09-26 13:40:30,439][09735] Updated weights for policy 1, policy_version 12960 (0.0018)
+[2023-09-26 13:40:32,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 6643712. Throughput: 0: 787.2, 1: 783.8. Samples: 1656839. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:32,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.050')]
+[2023-09-26 13:40:37,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6348.8, 300 sec: 6275.9). Total num frames: 6676480. Throughput: 0: 788.5, 1: 788.6. Samples: 1666548. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:37,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.060')]
+[2023-09-26 13:40:37,037][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000013040_3338240.pth...
+[2023-09-26 13:40:37,037][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000013040_3338240.pth...
+[2023-09-26 13:40:37,071][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000010096_2584576.pth
+[2023-09-26 13:40:37,077][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000010096_2584576.pth
+[2023-09-26 13:40:42,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 6709248. Throughput: 0: 787.4, 1: 787.8. Samples: 1676109. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:42,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.060')]
+[2023-09-26 13:40:43,209][09734] Updated weights for policy 0, policy_version 13120 (0.0017)
+[2023-09-26 13:40:43,209][09735] Updated weights for policy 1, policy_version 13120 (0.0016)
+[2023-09-26 13:40:47,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6262.0). Total num frames: 6737920. Throughput: 0: 792.9, 1: 794.4. Samples: 1681161. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:47,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.090')]
+[2023-09-26 13:40:47,073][09359] Saving new best policy, reward=2.900!
+[2023-09-26 13:40:52,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6766592. Throughput: 0: 789.2, 1: 789.2. Samples: 1690365. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:40:52,025][08516] Avg episode reward: [(0, '2.950'), (1, '0.060')]
+[2023-09-26 13:40:52,226][09359] Saving new best policy, reward=2.950!
+[2023-09-26 13:40:56,123][09734] Updated weights for policy 0, policy_version 13280 (0.0018)
+[2023-09-26 13:40:56,123][09735] Updated weights for policy 1, policy_version 13280 (0.0016)
+[2023-09-26 13:40:57,024][08516] Fps is (10 sec: 6144.2, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 6799360. Throughput: 0: 791.4, 1: 789.9. Samples: 1699859. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:40:57,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.060')]
+[2023-09-26 13:41:02,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 6832128. Throughput: 0: 790.7, 1: 790.6. Samples: 1704602. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:41:02,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.060')]
+[2023-09-26 13:41:07,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6864896. Throughput: 0: 792.8, 1: 790.9. Samples: 1714176. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:41:07,025][08516] Avg episode reward: [(0, '2.880'), (1, '0.050')]
+[2023-09-26 13:41:09,170][09734] Updated weights for policy 0, policy_version 13440 (0.0016)
+[2023-09-26 13:41:09,170][09735] Updated weights for policy 1, policy_version 13440 (0.0016)
+[2023-09-26 13:41:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 6897664. Throughput: 0: 789.2, 1: 790.2. Samples: 1723717. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:41:12,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.030')]
+[2023-09-26 13:41:17,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 6930432. Throughput: 0: 795.6, 1: 796.3. Samples: 1728472. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 13:41:17,025][08516] Avg episode reward: [(0, '2.830'), (1, '0.030')]
+[2023-09-26 13:41:22,025][08516] Fps is (10 sec: 6143.7, 60 sec: 6348.8, 300 sec: 6289.8). Total num frames: 6959104. Throughput: 0: 792.0, 1: 792.2. Samples: 1737839. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 13:41:22,026][08516] Avg episode reward: [(0, '2.830'), (1, '0.030')]
+[2023-09-26 13:41:22,123][09734] Updated weights for policy 0, policy_version 13600 (0.0016)
+[2023-09-26 13:41:22,123][09735] Updated weights for policy 1, policy_version 13600 (0.0016)
+[2023-09-26 13:41:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6987776. Throughput: 0: 791.7, 1: 790.8. Samples: 1747323. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:41:27,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.020')]
+[2023-09-26 13:41:32,025][08516] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7020544. Throughput: 0: 787.9, 1: 789.1. Samples: 1752127. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:41:32,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.010')]
+[2023-09-26 13:41:35,246][09734] Updated weights for policy 0, policy_version 13760 (0.0016)
+[2023-09-26 13:41:35,246][09735] Updated weights for policy 1, policy_version 13760 (0.0017)
+[2023-09-26 13:41:37,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7053312. Throughput: 0: 788.8, 1: 787.1. Samples: 1761280. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:41:37,026][08516] Avg episode reward: [(0, '2.840'), (1, '0.020')]
+[2023-09-26 13:41:42,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7086080. Throughput: 0: 784.0, 1: 785.4. Samples: 1770481. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:41:42,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.040')]
+[2023-09-26 13:41:47,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 7110656. Throughput: 0: 784.1, 1: 786.0. Samples: 1775258. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 13:41:47,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.050')]
+[2023-09-26 13:41:48,645][09734] Updated weights for policy 0, policy_version 13920 (0.0017)
+[2023-09-26 13:41:48,645][09735] Updated weights for policy 1, policy_version 13920 (0.0017)
+[2023-09-26 13:41:52,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 7143424. Throughput: 0: 773.8, 1: 774.5. Samples: 1783847. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 13:41:52,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.050')]
+[2023-09-26 13:41:57,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7176192. Throughput: 0: 777.6, 1: 778.4. Samples: 1793738. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:41:57,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.050')]
+[2023-09-26 13:42:01,633][09734] Updated weights for policy 0, policy_version 14080 (0.0015)
+[2023-09-26 13:42:01,635][09735] Updated weights for policy 1, policy_version 14080 (0.0017)
+[2023-09-26 13:42:02,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7208960. Throughput: 0: 774.7, 1: 775.3. Samples: 1798223. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:42:02,025][08516] Avg episode reward: [(0, '2.760'), (1, '0.030')]
+[2023-09-26 13:42:07,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7241728. Throughput: 0: 780.2, 1: 780.3. Samples: 1808063. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:42:07,025][08516] Avg episode reward: [(0, '2.760'), (1, '0.030')]
+[2023-09-26 13:42:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7274496. Throughput: 0: 781.0, 1: 781.3. Samples: 1817624. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:42:12,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.050')]
+[2023-09-26 13:42:14,463][09734] Updated weights for policy 0, policy_version 14240 (0.0015)
+[2023-09-26 13:42:14,464][09735] Updated weights for policy 1, policy_version 14240 (0.0015)
+[2023-09-26 13:42:17,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 7299072. Throughput: 0: 781.8, 1: 782.7. Samples: 1822531. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:42:17,025][08516] Avg episode reward: [(0, '2.730'), (1, '0.070')]
+[2023-09-26 13:42:22,025][08516] Fps is (10 sec: 5734.3, 60 sec: 6212.3, 300 sec: 6275.9). Total num frames: 7331840. Throughput: 0: 780.8, 1: 782.4. Samples: 1831628. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:42:22,026][08516] Avg episode reward: [(0, '2.750'), (1, '0.070')]
+[2023-09-26 13:42:27,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 7364608. Throughput: 0: 786.1, 1: 784.4. Samples: 1841156. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:42:27,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.070')]
+[2023-09-26 13:42:27,455][09734] Updated weights for policy 0, policy_version 14400 (0.0017)
+[2023-09-26 13:42:27,456][09735] Updated weights for policy 1, policy_version 14400 (0.0015)
+[2023-09-26 13:42:32,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 7397376. Throughput: 0: 785.6, 1: 785.1. Samples: 1845939. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:42:32,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.050')]
+[2023-09-26 13:42:37,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 7430144. Throughput: 0: 796.4, 1: 795.7. Samples: 1855488. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:42:37,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.070')]
+[2023-09-26 13:42:37,037][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000014512_3715072.pth...
+[2023-09-26 13:42:37,038][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000014512_3715072.pth...
+[2023-09-26 13:42:37,068][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000011568_2961408.pth
+[2023-09-26 13:42:37,073][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000011568_2961408.pth
+[2023-09-26 13:42:40,581][09734] Updated weights for policy 0, policy_version 14560 (0.0018)
+[2023-09-26 13:42:40,581][09735] Updated weights for policy 1, policy_version 14560 (0.0017)
+[2023-09-26 13:42:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7462912. Throughput: 0: 789.4, 1: 788.3. Samples: 1864732. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:42:42,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.060')]
+[2023-09-26 13:42:47,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 7495680. Throughput: 0: 795.2, 1: 794.8. Samples: 1869772. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:42:47,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.070')]
+[2023-09-26 13:42:52,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 7528448. Throughput: 0: 791.4, 1: 791.0. Samples: 1879271. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:42:52,026][08516] Avg episode reward: [(0, '2.670'), (1, '0.080')]
+[2023-09-26 13:42:53,290][09735] Updated weights for policy 1, policy_version 14720 (0.0018)
+[2023-09-26 13:42:53,290][09734] Updated weights for policy 0, policy_version 14720 (0.0018)
+[2023-09-26 13:42:57,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7553024. Throughput: 0: 792.1, 1: 791.6. Samples: 1888887. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:42:57,025][08516] Avg episode reward: [(0, '2.650'), (1, '0.110')]
+[2023-09-26 13:43:02,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7585792. Throughput: 0: 794.2, 1: 792.3. Samples: 1893923. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:43:02,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.090')]
+[2023-09-26 13:43:06,140][09735] Updated weights for policy 1, policy_version 14880 (0.0017)
+[2023-09-26 13:43:06,141][09734] Updated weights for policy 0, policy_version 14880 (0.0016)
+[2023-09-26 13:43:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7618560. Throughput: 0: 794.1, 1: 794.5. Samples: 1903115. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:43:07,026][08516] Avg episode reward: [(0, '2.820'), (1, '0.080')]
+[2023-09-26 13:43:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 7651328. Throughput: 0: 796.2, 1: 796.4. Samples: 1912819. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:43:12,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.110')]
+[2023-09-26 13:43:17,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 7684096. Throughput: 0: 793.9, 1: 792.4. Samples: 1917324. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 13:43:17,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.110')]
+[2023-09-26 13:43:19,206][09735] Updated weights for policy 1, policy_version 15040 (0.0016)
+[2023-09-26 13:43:19,206][09734] Updated weights for policy 0, policy_version 15040 (0.0016)
+[2023-09-26 13:43:22,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 7716864. Throughput: 0: 793.1, 1: 792.8. Samples: 1926854. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 13:43:22,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.110')]
+[2023-09-26 13:43:27,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 7749632. Throughput: 0: 793.3, 1: 794.1. Samples: 1936163. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 13:43:27,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.080')]
+[2023-09-26 13:43:32,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7774208. Throughput: 0: 792.8, 1: 792.7. Samples: 1941121. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:43:32,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.080')]
+[2023-09-26 13:43:32,100][09734] Updated weights for policy 0, policy_version 15200 (0.0012)
+[2023-09-26 13:43:32,101][09735] Updated weights for policy 1, policy_version 15200 (0.0018)
+[2023-09-26 13:43:37,025][08516] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7806976. Throughput: 0: 787.8, 1: 787.8. Samples: 1950175. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:43:37,026][08516] Avg episode reward: [(0, '2.800'), (1, '0.060')]
+[2023-09-26 13:43:42,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 7839744. Throughput: 0: 787.6, 1: 788.0. Samples: 1959789. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:43:42,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.050')]
+[2023-09-26 13:43:45,262][09734] Updated weights for policy 0, policy_version 15360 (0.0015)
+[2023-09-26 13:43:45,263][09735] Updated weights for policy 1, policy_version 15360 (0.0013)
+[2023-09-26 13:43:47,024][08516] Fps is (10 sec: 6553.9, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 7872512. Throughput: 0: 782.1, 1: 781.8. Samples: 1964300. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:43:47,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.060')]
+[2023-09-26 13:43:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 7905280. Throughput: 0: 786.2, 1: 787.2. Samples: 1973919. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:43:52,025][08516] Avg episode reward: [(0, '2.690'), (1, '0.050')]
+[2023-09-26 13:43:57,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7929856. Throughput: 0: 777.5, 1: 779.2. Samples: 1982873. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:43:57,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.050')]
+[2023-09-26 13:43:58,453][09735] Updated weights for policy 1, policy_version 15520 (0.0017)
+[2023-09-26 13:43:58,453][09734] Updated weights for policy 0, policy_version 15520 (0.0017)
+[2023-09-26 13:44:02,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 7962624. Throughput: 0: 782.1, 1: 783.7. Samples: 1987785. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:44:02,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.060')]
+[2023-09-26 13:44:07,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 7995392. Throughput: 0: 778.6, 1: 781.0. Samples: 1997037. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:44:07,025][08516] Avg episode reward: [(0, '2.620'), (1, '0.050')]
+[2023-09-26 13:44:11,321][09734] Updated weights for policy 0, policy_version 15680 (0.0017)
+[2023-09-26 13:44:11,323][09735] Updated weights for policy 1, policy_version 15680 (0.0017)
+[2023-09-26 13:44:12,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8028160. Throughput: 0: 788.0, 1: 786.6. Samples: 2007019. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:44:12,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.060')]
+[2023-09-26 13:44:17,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8060928. Throughput: 0: 780.0, 1: 780.9. Samples: 2011362. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:44:17,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.060')]
+[2023-09-26 13:44:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8093696. Throughput: 0: 786.8, 1: 788.8. Samples: 2021078. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:44:22,025][08516] Avg episode reward: [(0, '2.650'), (1, '0.050')]
+[2023-09-26 13:44:24,449][09735] Updated weights for policy 1, policy_version 15840 (0.0015)
+[2023-09-26 13:44:24,450][09734] Updated weights for policy 0, policy_version 15840 (0.0015)
+[2023-09-26 13:44:27,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6289.8). Total num frames: 8122368. Throughput: 0: 783.2, 1: 784.1. Samples: 2030321. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:44:27,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.040')]
+[2023-09-26 13:44:32,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 8151040. Throughput: 0: 784.1, 1: 784.2. Samples: 2034872. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:44:32,025][08516] Avg episode reward: [(0, '2.690'), (1, '0.080')]
+[2023-09-26 13:44:37,024][08516] Fps is (10 sec: 6143.9, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 8183808. Throughput: 0: 780.6, 1: 779.4. Samples: 2044120. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:44:37,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.080')]
+[2023-09-26 13:44:37,039][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000015984_4091904.pth...
+[2023-09-26 13:44:37,039][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000015984_4091904.pth...
+[2023-09-26 13:44:37,075][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000013040_3338240.pth
+[2023-09-26 13:44:37,077][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000013040_3338240.pth
+[2023-09-26 13:44:37,667][09735] Updated weights for policy 1, policy_version 16000 (0.0018)
+[2023-09-26 13:44:37,667][09734] Updated weights for policy 0, policy_version 16000 (0.0017)
+[2023-09-26 13:44:42,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8216576. Throughput: 0: 788.6, 1: 788.2. Samples: 2053827. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:44:42,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.070')]
+[2023-09-26 13:44:47,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8249344. Throughput: 0: 784.3, 1: 782.5. Samples: 2058289. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:44:47,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.070')]
+[2023-09-26 13:44:50,720][09734] Updated weights for policy 0, policy_version 16160 (0.0015)
+[2023-09-26 13:44:50,721][09735] Updated weights for policy 1, policy_version 16160 (0.0017)
+[2023-09-26 13:44:52,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6212.2, 300 sec: 6289.8). Total num frames: 8278016. Throughput: 0: 789.4, 1: 786.5. Samples: 2067949. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:44:52,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.050')]
+[2023-09-26 13:44:57,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 8306688. Throughput: 0: 774.3, 1: 775.2. Samples: 2076744. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:44:57,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.050')]
+[2023-09-26 13:45:02,024][08516] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 8339456. Throughput: 0: 782.6, 1: 782.6. Samples: 2081796. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:45:02,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.040')]
+[2023-09-26 13:45:03,869][09735] Updated weights for policy 1, policy_version 16320 (0.0016)
+[2023-09-26 13:45:03,869][09734] Updated weights for policy 0, policy_version 16320 (0.0016)
+[2023-09-26 13:45:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8372224. Throughput: 0: 778.8, 1: 775.6. Samples: 2091026. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 13:45:07,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.060')]
+[2023-09-26 13:45:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8404992. Throughput: 0: 782.3, 1: 781.6. Samples: 2100699. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 13:45:12,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.080')]
+[2023-09-26 13:45:16,676][09734] Updated weights for policy 0, policy_version 16480 (0.0019)
+[2023-09-26 13:45:16,676][09735] Updated weights for policy 1, policy_version 16480 (0.0019)
+[2023-09-26 13:45:17,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8437760. Throughput: 0: 783.9, 1: 782.3. Samples: 2105348. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:17,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.080')]
+[2023-09-26 13:45:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8470528. Throughput: 0: 790.2, 1: 790.2. Samples: 2115242. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:22,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.120')]
+[2023-09-26 13:45:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6303.7). Total num frames: 8503296. Throughput: 0: 786.9, 1: 786.6. Samples: 2124633. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:27,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.150')]
+[2023-09-26 13:45:29,506][09734] Updated weights for policy 0, policy_version 16640 (0.0015)
+[2023-09-26 13:45:29,507][09735] Updated weights for policy 1, policy_version 16640 (0.0016)
+[2023-09-26 13:45:32,024][08516] Fps is (10 sec: 6144.1, 60 sec: 6348.8, 300 sec: 6289.8). Total num frames: 8531968. Throughput: 0: 791.6, 1: 792.6. Samples: 2129582. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:32,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.120')]
+[2023-09-26 13:45:37,025][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 8560640. Throughput: 0: 787.7, 1: 790.8. Samples: 2138979. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:37,026][08516] Avg episode reward: [(0, '2.510'), (1, '0.130')]
+[2023-09-26 13:45:42,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 8593408. Throughput: 0: 796.4, 1: 796.2. Samples: 2148411. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:42,025][08516] Avg episode reward: [(0, '2.510'), (1, '0.100')]
+[2023-09-26 13:45:42,386][09734] Updated weights for policy 0, policy_version 16800 (0.0016)
+[2023-09-26 13:45:42,386][09735] Updated weights for policy 1, policy_version 16800 (0.0016)
+[2023-09-26 13:45:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8626176. Throughput: 0: 792.3, 1: 791.9. Samples: 2153087. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:47,025][08516] Avg episode reward: [(0, '2.500'), (1, '0.070')]
+[2023-09-26 13:45:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6303.7). Total num frames: 8658944. Throughput: 0: 796.4, 1: 796.1. Samples: 2162688. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:52,025][08516] Avg episode reward: [(0, '2.550'), (1, '0.080')]
+[2023-09-26 13:45:55,484][09735] Updated weights for policy 1, policy_version 16960 (0.0015)
+[2023-09-26 13:45:55,484][09734] Updated weights for policy 0, policy_version 16960 (0.0018)
+[2023-09-26 13:45:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 8691712. Throughput: 0: 793.6, 1: 793.4. Samples: 2172114. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:45:57,025][08516] Avg episode reward: [(0, '2.510'), (1, '0.100')]
+[2023-09-26 13:46:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 8724480. Throughput: 0: 796.4, 1: 796.4. Samples: 2177024. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:02,025][08516] Avg episode reward: [(0, '2.500'), (1, '0.080')]
+[2023-09-26 13:46:07,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 8757248. Throughput: 0: 793.2, 1: 793.8. Samples: 2186660. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:07,025][08516] Avg episode reward: [(0, '2.570'), (1, '0.080')]
+[2023-09-26 13:46:08,376][09735] Updated weights for policy 1, policy_version 17120 (0.0017)
+[2023-09-26 13:46:08,376][09734] Updated weights for policy 0, policy_version 17120 (0.0018)
+[2023-09-26 13:46:12,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 8781824. Throughput: 0: 787.7, 1: 787.3. Samples: 2195506. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:12,025][08516] Avg episode reward: [(0, '2.570'), (1, '0.090')]
+[2023-09-26 13:46:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 8814592. Throughput: 0: 788.4, 1: 787.8. Samples: 2200515. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:17,025][08516] Avg episode reward: [(0, '2.610'), (1, '0.090')]
+[2023-09-26 13:46:21,384][09735] Updated weights for policy 1, policy_version 17280 (0.0017)
+[2023-09-26 13:46:21,384][09734] Updated weights for policy 0, policy_version 17280 (0.0017)
+[2023-09-26 13:46:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 8847360. Throughput: 0: 789.6, 1: 788.9. Samples: 2210011. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 13:46:22,025][08516] Avg episode reward: [(0, '2.550'), (1, '0.100')]
+[2023-09-26 13:46:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8880128. Throughput: 0: 790.2, 1: 791.1. Samples: 2219569. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 13:46:27,025][08516] Avg episode reward: [(0, '2.620'), (1, '0.120')]
+[2023-09-26 13:46:32,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6348.8, 300 sec: 6303.7). Total num frames: 8912896. Throughput: 0: 790.1, 1: 788.6. Samples: 2224129. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:32,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.110')]
+[2023-09-26 13:46:34,419][09734] Updated weights for policy 0, policy_version 17440 (0.0017)
+[2023-09-26 13:46:34,419][09735] Updated weights for policy 1, policy_version 17440 (0.0016)
+[2023-09-26 13:46:37,025][08516] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6289.8). Total num frames: 8941568. Throughput: 0: 788.2, 1: 789.7. Samples: 2233695. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:37,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.100')]
+[2023-09-26 13:46:37,040][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000017472_4472832.pth...
+[2023-09-26 13:46:37,073][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000014512_3715072.pth
+[2023-09-26 13:46:37,083][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000017472_4472832.pth...
+[2023-09-26 13:46:37,110][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000014512_3715072.pth
+[2023-09-26 13:46:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 8970240. Throughput: 0: 784.2, 1: 784.2. Samples: 2242689. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:42,025][08516] Avg episode reward: [(0, '2.610'), (1, '0.100')]
+[2023-09-26 13:46:47,024][08516] Fps is (10 sec: 6144.2, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9003008. Throughput: 0: 783.2, 1: 785.5. Samples: 2247614. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:47,025][08516] Avg episode reward: [(0, '2.580'), (1, '0.080')]
+[2023-09-26 13:46:47,536][09734] Updated weights for policy 0, policy_version 17600 (0.0017)
+[2023-09-26 13:46:47,537][09735] Updated weights for policy 1, policy_version 17600 (0.0016)
+[2023-09-26 13:46:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9035776. Throughput: 0: 781.7, 1: 779.9. Samples: 2256930. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:52,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.060')]
+[2023-09-26 13:46:57,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9068544. Throughput: 0: 785.6, 1: 786.3. Samples: 2266245. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:46:57,026][08516] Avg episode reward: [(0, '2.660'), (1, '0.070')]
+[2023-09-26 13:47:00,838][09734] Updated weights for policy 0, policy_version 17760 (0.0017)
+[2023-09-26 13:47:00,838][09735] Updated weights for policy 1, policy_version 17760 (0.0018)
+[2023-09-26 13:47:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 9093120. Throughput: 0: 781.4, 1: 783.0. Samples: 2270914. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:47:02,025][08516] Avg episode reward: [(0, '2.650'), (1, '0.100')]
+[2023-09-26 13:47:07,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 9125888. Throughput: 0: 779.8, 1: 779.7. Samples: 2280189. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:47:07,025][08516] Avg episode reward: [(0, '2.640'), (1, '0.090')]
+[2023-09-26 13:47:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9158656. Throughput: 0: 779.8, 1: 777.9. Samples: 2289668. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:47:12,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.090')]
+[2023-09-26 13:47:13,738][09735] Updated weights for policy 1, policy_version 17920 (0.0016)
+[2023-09-26 13:47:13,739][09734] Updated weights for policy 0, policy_version 17920 (0.0017)
+[2023-09-26 13:47:17,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9191424. Throughput: 0: 780.8, 1: 782.6. Samples: 2294486. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:47:17,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.080')]
+[2023-09-26 13:47:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9224192. Throughput: 0: 781.9, 1: 780.4. Samples: 2304000. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:47:22,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.080')]
+[2023-09-26 13:47:26,819][09734] Updated weights for policy 0, policy_version 18080 (0.0018)
+[2023-09-26 13:47:26,819][09735] Updated weights for policy 1, policy_version 18080 (0.0017)
+[2023-09-26 13:47:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9256960. Throughput: 0: 786.2, 1: 788.6. Samples: 2313554. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:47:27,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.060')]
+[2023-09-26 13:47:32,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 9281536. Throughput: 0: 782.5, 1: 781.5. Samples: 2317993. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:47:32,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.050')]
+[2023-09-26 13:47:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6275.9). Total num frames: 9314304. Throughput: 0: 781.8, 1: 782.6. Samples: 2327329. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:47:37,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.080')]
+[2023-09-26 13:47:39,897][09734] Updated weights for policy 0, policy_version 18240 (0.0016)
+[2023-09-26 13:47:39,897][09735] Updated weights for policy 1, policy_version 18240 (0.0017)
+[2023-09-26 13:47:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9347072. Throughput: 0: 784.4, 1: 782.8. Samples: 2336770. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 13:47:42,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.080')]
+[2023-09-26 13:47:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9379840. Throughput: 0: 784.2, 1: 783.0. Samples: 2341440. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 13:47:47,025][08516] Avg episode reward: [(0, '2.730'), (1, '0.100')]
+[2023-09-26 13:47:52,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9412608. Throughput: 0: 787.2, 1: 786.4. Samples: 2351002. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:47:52,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.120')]
+[2023-09-26 13:47:52,993][09734] Updated weights for policy 0, policy_version 18400 (0.0016)
+[2023-09-26 13:47:52,994][09735] Updated weights for policy 1, policy_version 18400 (0.0017)
+[2023-09-26 13:47:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9445376. Throughput: 0: 785.2, 1: 786.0. Samples: 2360371. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:47:57,025][08516] Avg episode reward: [(0, '2.730'), (1, '0.080')]
+[2023-09-26 13:48:02,025][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9469952. Throughput: 0: 784.0, 1: 784.8. Samples: 2365084. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:48:02,026][08516] Avg episode reward: [(0, '2.720'), (1, '0.100')]
+[2023-09-26 13:48:06,081][09735] Updated weights for policy 1, policy_version 18560 (0.0018)
+[2023-09-26 13:48:06,081][09734] Updated weights for policy 0, policy_version 18560 (0.0014)
+[2023-09-26 13:48:07,025][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9502720. Throughput: 0: 780.4, 1: 782.2. Samples: 2374320. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:48:07,026][08516] Avg episode reward: [(0, '2.720'), (1, '0.060')]
+[2023-09-26 13:48:12,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 9535488. Throughput: 0: 783.1, 1: 779.2. Samples: 2383857.
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 13:48:12,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.060')] +[2023-09-26 13:48:17,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9568256. Throughput: 0: 781.8, 1: 782.4. Samples: 2388381. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:48:17,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.060')] +[2023-09-26 13:48:19,275][09735] Updated weights for policy 1, policy_version 18720 (0.0017) +[2023-09-26 13:48:19,276][09734] Updated weights for policy 0, policy_version 18720 (0.0015) +[2023-09-26 13:48:22,024][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9601024. Throughput: 0: 782.0, 1: 783.6. Samples: 2397779. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:48:22,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.070')] +[2023-09-26 13:48:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 9625600. Throughput: 0: 777.8, 1: 779.0. Samples: 2406824. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:48:27,025][08516] Avg episode reward: [(0, '2.550'), (1, '0.090')] +[2023-09-26 13:48:32,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9658368. Throughput: 0: 781.2, 1: 781.2. Samples: 2411747. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:48:32,025][08516] Avg episode reward: [(0, '2.590'), (1, '0.120')] +[2023-09-26 13:48:32,367][09734] Updated weights for policy 0, policy_version 18880 (0.0017) +[2023-09-26 13:48:32,367][09735] Updated weights for policy 1, policy_version 18880 (0.0017) +[2023-09-26 13:48:37,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9691136. Throughput: 0: 777.0, 1: 778.0. Samples: 2420977. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:48:37,025][08516] Avg episode reward: [(0, '2.640'), (1, '0.120')] +[2023-09-26 13:48:37,036][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000018928_4845568.pth... +[2023-09-26 13:48:37,036][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000018928_4845568.pth... +[2023-09-26 13:48:37,072][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000015984_4091904.pth +[2023-09-26 13:48:37,072][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000015984_4091904.pth +[2023-09-26 13:48:42,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9723904. Throughput: 0: 781.0, 1: 781.0. Samples: 2430663. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:48:42,025][08516] Avg episode reward: [(0, '2.680'), (1, '0.080')] +[2023-09-26 13:48:45,409][09735] Updated weights for policy 1, policy_version 19040 (0.0016) +[2023-09-26 13:48:45,410][09734] Updated weights for policy 0, policy_version 19040 (0.0019) +[2023-09-26 13:48:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9756672. Throughput: 0: 779.3, 1: 778.2. Samples: 2435172. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 13:48:47,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.060')] +[2023-09-26 13:48:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 9789440. Throughput: 0: 783.0, 1: 784.2. Samples: 2444844. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 13:48:52,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.030')] +[2023-09-26 13:48:57,025][08516] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 9814016. Throughput: 0: 779.3, 1: 780.2. Samples: 2454035. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 13:48:57,026][08516] Avg episode reward: [(0, '2.840'), (1, '0.060')] +[2023-09-26 13:48:58,536][09734] Updated weights for policy 0, policy_version 19200 (0.0016) +[2023-09-26 13:48:58,536][09735] Updated weights for policy 1, policy_version 19200 (0.0017) +[2023-09-26 13:49:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 9846784. Throughput: 0: 781.9, 1: 781.8. Samples: 2458750. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:49:02,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.060')] +[2023-09-26 13:49:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9879552. Throughput: 0: 780.1, 1: 778.3. Samples: 2467904. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:49:07,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.060')] +[2023-09-26 13:49:11,459][09734] Updated weights for policy 0, policy_version 19360 (0.0015) +[2023-09-26 13:49:11,459][09735] Updated weights for policy 1, policy_version 19360 (0.0017) +[2023-09-26 13:49:12,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9912320. Throughput: 0: 788.7, 1: 789.3. Samples: 2477836. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:49:12,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.060')] +[2023-09-26 13:49:17,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 9945088. Throughput: 0: 783.4, 1: 781.9. Samples: 2482183. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:49:17,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.070')] +[2023-09-26 13:49:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 9977856. Throughput: 0: 789.6, 1: 789.3. Samples: 2492028. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:49:22,025][08516] Avg episode reward: [(0, '2.830'), (1, '0.050')] +[2023-09-26 13:49:24,435][09735] Updated weights for policy 1, policy_version 19520 (0.0019) +[2023-09-26 13:49:24,435][09734] Updated weights for policy 0, policy_version 19520 (0.0019) +[2023-09-26 13:49:27,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 10010624. Throughput: 0: 786.0, 1: 786.5. Samples: 2501427. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:49:27,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.040')] +[2023-09-26 13:49:32,025][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10035200. Throughput: 0: 791.1, 1: 791.0. Samples: 2506364. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:49:32,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.080')] +[2023-09-26 13:49:37,025][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10067968. Throughput: 0: 788.9, 1: 787.7. Samples: 2515790. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:49:37,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.090')] +[2023-09-26 13:49:37,197][09735] Updated weights for policy 1, policy_version 19680 (0.0020) +[2023-09-26 13:49:37,197][09734] Updated weights for policy 0, policy_version 19680 (0.0018) +[2023-09-26 13:49:42,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10100736. Throughput: 0: 791.8, 1: 792.4. Samples: 2525324. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:49:42,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.100')] +[2023-09-26 13:49:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 10133504. Throughput: 0: 793.8, 1: 792.4. Samples: 2530132. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:49:47,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.100')] +[2023-09-26 13:49:50,330][09735] Updated weights for policy 1, policy_version 19840 (0.0017) +[2023-09-26 13:49:50,330][09734] Updated weights for policy 0, policy_version 19840 (0.0017) +[2023-09-26 13:49:52,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 10166272. Throughput: 0: 796.3, 1: 795.1. Samples: 2539520. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:49:52,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.100')] +[2023-09-26 13:49:57,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 10199040. Throughput: 0: 786.7, 1: 786.9. Samples: 2548651. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:49:57,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.070')] +[2023-09-26 13:50:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10223616. Throughput: 0: 789.5, 1: 791.5. Samples: 2553331. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:50:02,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.070')] +[2023-09-26 13:50:03,545][09734] Updated weights for policy 0, policy_version 20000 (0.0018) +[2023-09-26 13:50:03,545][09735] Updated weights for policy 1, policy_version 20000 (0.0016) +[2023-09-26 13:50:07,024][08516] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10256384. Throughput: 0: 782.6, 1: 783.0. Samples: 2562477. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:50:07,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.060')] +[2023-09-26 13:50:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10289152. Throughput: 0: 788.0, 1: 786.7. Samples: 2572288. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:50:12,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.050')] +[2023-09-26 13:50:16,457][09735] Updated weights for policy 1, policy_version 20160 (0.0015) +[2023-09-26 13:50:16,457][09734] Updated weights for policy 0, policy_version 20160 (0.0014) +[2023-09-26 13:50:17,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10321920. Throughput: 0: 782.9, 1: 783.3. Samples: 2576845. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:50:17,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.050')] +[2023-09-26 13:50:22,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10354688. Throughput: 0: 787.9, 1: 786.2. Samples: 2586624. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:50:22,025][08516] Avg episode reward: [(0, '2.730'), (1, '0.060')] +[2023-09-26 13:50:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 10387456. Throughput: 0: 786.9, 1: 787.0. Samples: 2596148. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:50:27,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.040')] +[2023-09-26 13:50:29,301][09734] Updated weights for policy 0, policy_version 20320 (0.0016) +[2023-09-26 13:50:29,302][09735] Updated weights for policy 1, policy_version 20320 (0.0016) +[2023-09-26 13:50:32,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 10420224. Throughput: 0: 786.9, 1: 786.8. Samples: 2600946. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:50:32,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.050')] +[2023-09-26 13:50:37,025][08516] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6289.8). Total num frames: 10448896. Throughput: 0: 786.9, 1: 790.2. Samples: 2610488. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 13:50:37,026][08516] Avg episode reward: [(0, '2.580'), (1, '0.050')] +[2023-09-26 13:50:37,038][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000020416_5226496.pth... +[2023-09-26 13:50:37,039][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000020416_5226496.pth... +[2023-09-26 13:50:37,074][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000017472_4472832.pth +[2023-09-26 13:50:37,077][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000017472_4472832.pth +[2023-09-26 13:50:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10477568. Throughput: 0: 789.5, 1: 788.7. Samples: 2619670. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 13:50:42,025][08516] Avg episode reward: [(0, '2.550'), (1, '0.050')] +[2023-09-26 13:50:42,273][09735] Updated weights for policy 1, policy_version 20480 (0.0012) +[2023-09-26 13:50:42,274][09734] Updated weights for policy 0, policy_version 20480 (0.0016) +[2023-09-26 13:50:47,024][08516] Fps is (10 sec: 6144.2, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10510336. Throughput: 0: 791.6, 1: 791.2. Samples: 2624553. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 13:50:47,025][08516] Avg episode reward: [(0, '2.560'), (1, '0.050')] +[2023-09-26 13:50:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10543104. Throughput: 0: 795.8, 1: 795.1. Samples: 2634069. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 13:50:52,025][08516] Avg episode reward: [(0, '2.580'), (1, '0.050')] +[2023-09-26 13:50:55,285][09734] Updated weights for policy 0, policy_version 20640 (0.0018) +[2023-09-26 13:50:55,286][09735] Updated weights for policy 1, policy_version 20640 (0.0017) +[2023-09-26 13:50:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). 
Total num frames: 10575872. Throughput: 0: 791.3, 1: 793.2. Samples: 2643587. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:50:57,025][08516] Avg episode reward: [(0, '2.610'), (1, '0.050')] +[2023-09-26 13:51:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 10608640. Throughput: 0: 792.4, 1: 791.3. Samples: 2648108. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:51:02,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.060')] +[2023-09-26 13:51:07,025][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10633216. Throughput: 0: 789.2, 1: 791.0. Samples: 2657733. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:51:07,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.080')] +[2023-09-26 13:51:08,452][09734] Updated weights for policy 0, policy_version 20800 (0.0016) +[2023-09-26 13:51:08,453][09735] Updated weights for policy 1, policy_version 20800 (0.0017) +[2023-09-26 13:51:12,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10665984. Throughput: 0: 782.6, 1: 780.8. Samples: 2666502. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:51:12,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.100')] +[2023-09-26 13:51:17,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10698752. Throughput: 0: 781.6, 1: 782.8. Samples: 2671346. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:51:17,025][08516] Avg episode reward: [(0, '2.590'), (1, '0.110')] +[2023-09-26 13:51:21,431][09734] Updated weights for policy 0, policy_version 20960 (0.0016) +[2023-09-26 13:51:21,431][09735] Updated weights for policy 1, policy_version 20960 (0.0016) +[2023-09-26 13:51:22,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10731520. Throughput: 0: 783.2, 1: 780.1. Samples: 2680837. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:51:22,025][08516] Avg episode reward: [(0, '2.540'), (1, '0.110')] +[2023-09-26 13:51:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10764288. Throughput: 0: 788.6, 1: 789.0. Samples: 2690665. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:51:27,025][08516] Avg episode reward: [(0, '2.520'), (1, '0.090')] +[2023-09-26 13:51:32,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6289.8). Total num frames: 10797056. Throughput: 0: 785.5, 1: 783.8. Samples: 2695173. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:51:32,025][08516] Avg episode reward: [(0, '2.590'), (1, '0.070')] +[2023-09-26 13:51:34,329][09735] Updated weights for policy 1, policy_version 21120 (0.0017) +[2023-09-26 13:51:34,329][09734] Updated weights for policy 0, policy_version 21120 (0.0016) +[2023-09-26 13:51:37,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6348.8, 300 sec: 6303.7). Total num frames: 10829824. Throughput: 0: 786.7, 1: 787.2. Samples: 2704898. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:51:37,025][08516] Avg episode reward: [(0, '2.650'), (1, '0.030')] +[2023-09-26 13:51:42,024][08516] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6289.8). Total num frames: 10858496. Throughput: 0: 786.0, 1: 785.4. Samples: 2714304. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:51:42,025][08516] Avg episode reward: [(0, '2.580'), (1, '0.020')] +[2023-09-26 13:51:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10887168. Throughput: 0: 789.1, 1: 789.7. Samples: 2719155. 
Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:51:47,025][08516] Avg episode reward: [(0, '2.600'), (1, '0.000')] +[2023-09-26 13:51:47,312][09734] Updated weights for policy 0, policy_version 21280 (0.0017) +[2023-09-26 13:51:47,312][09735] Updated weights for policy 1, policy_version 21280 (0.0016) +[2023-09-26 13:51:52,024][08516] Fps is (10 sec: 6144.1, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10919936. Throughput: 0: 785.6, 1: 785.1. Samples: 2728416. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:51:52,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.000')] +[2023-09-26 13:51:57,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 10952704. Throughput: 0: 796.0, 1: 796.0. Samples: 2738145. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:51:57,025][08516] Avg episode reward: [(0, '2.640'), (1, '0.010')] +[2023-09-26 13:52:02,027][08516] Fps is (10 sec: 4914.0, 60 sec: 6007.2, 300 sec: 6248.1). Total num frames: 10969088. Throughput: 0: 767.1, 1: 766.4. Samples: 2740355. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:52:02,027][08516] Avg episode reward: [(0, '2.600'), (1, '0.010')] +[2023-09-26 13:52:04,525][09734] Updated weights for policy 0, policy_version 21440 (0.0015) +[2023-09-26 13:52:04,536][09735] Updated weights for policy 1, policy_version 21440 (0.0015) +[2023-09-26 13:52:07,026][08516] Fps is (10 sec: 2457.1, 60 sec: 5734.2, 300 sec: 6164.8). Total num frames: 10977280. Throughput: 0: 710.5, 1: 711.1. Samples: 2744813. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 13:52:07,027][08516] Avg episode reward: [(0, '2.610'), (1, '0.020')] +[2023-09-26 13:52:12,025][08516] Fps is (10 sec: 2458.0, 60 sec: 5461.3, 300 sec: 6109.3). Total num frames: 10993664. Throughput: 0: 654.8, 1: 653.1. Samples: 2749520. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:52:12,026][08516] Avg episode reward: [(0, '2.550'), (1, '0.020')] +[2023-09-26 13:52:17,024][08516] Fps is (10 sec: 4096.8, 60 sec: 5324.8, 300 sec: 6081.5). Total num frames: 11018240. Throughput: 0: 630.2, 1: 631.1. Samples: 2751934. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:52:17,025][08516] Avg episode reward: [(0, '2.550'), (1, '0.030')] +[2023-09-26 13:52:22,024][08516] Fps is (10 sec: 4915.6, 60 sec: 5188.3, 300 sec: 6053.8). Total num frames: 11042816. Throughput: 0: 602.1, 1: 602.1. Samples: 2759085. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 13:52:22,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.040')] +[2023-09-26 13:52:24,153][09735] Updated weights for policy 1, policy_version 21600 (0.0011) +[2023-09-26 13:52:24,154][09734] Updated weights for policy 0, policy_version 21600 (0.0012) +[2023-09-26 13:52:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5188.3, 300 sec: 6081.5). Total num frames: 11075584. Throughput: 0: 594.1, 1: 594.9. Samples: 2767809. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:52:27,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.040')] +[2023-09-26 13:52:32,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5051.7, 300 sec: 6053.8). Total num frames: 11100160. Throughput: 0: 589.8, 1: 591.2. Samples: 2772302. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:52:32,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.050')] +[2023-09-26 13:52:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5051.7, 300 sec: 6053.8). Total num frames: 11132928. Throughput: 0: 583.8, 1: 583.6. Samples: 2780952. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 13:52:37,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.050')] +[2023-09-26 13:52:37,032][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000021744_5566464.pth... 
+[2023-09-26 13:52:37,033][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000021744_5566464.pth... +[2023-09-26 13:52:37,063][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000018928_4845568.pth +[2023-09-26 13:52:37,069][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000018928_4845568.pth +[2023-09-26 13:52:38,362][09734] Updated weights for policy 0, policy_version 21760 (0.0013) +[2023-09-26 13:52:38,362][09735] Updated weights for policy 1, policy_version 21760 (0.0013) +[2023-09-26 13:52:42,024][08516] Fps is (10 sec: 5734.3, 60 sec: 4983.5, 300 sec: 6026.0). Total num frames: 11157504. Throughput: 0: 569.3, 1: 569.2. Samples: 2789376. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:52:42,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.050')] +[2023-09-26 13:52:47,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5051.8, 300 sec: 6026.0). Total num frames: 11190272. Throughput: 0: 590.7, 1: 591.3. Samples: 2793542. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:52:47,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.050')] +[2023-09-26 13:52:52,024][08516] Fps is (10 sec: 5734.4, 60 sec: 4915.2, 300 sec: 5998.2). Total num frames: 11214848. Throughput: 0: 636.4, 1: 637.3. Samples: 2802123. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:52:52,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.070')] +[2023-09-26 13:52:52,719][09734] Updated weights for policy 0, policy_version 21920 (0.0013) +[2023-09-26 13:52:52,719][09735] Updated weights for policy 1, policy_version 21920 (0.0015) +[2023-09-26 13:52:57,026][08516] Fps is (10 sec: 5733.6, 60 sec: 4915.1, 300 sec: 6026.0). Total num frames: 11247616. Throughput: 0: 679.8, 1: 680.9. Samples: 2810754. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:52:57,027][08516] Avg episode reward: [(0, '2.720'), (1, '0.100')] +[2023-09-26 13:53:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5051.9, 300 sec: 5998.2). Total num frames: 11272192. Throughput: 0: 700.3, 1: 701.5. Samples: 2815012. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:53:02,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.090')] +[2023-09-26 13:53:06,969][09735] Updated weights for policy 1, policy_version 22080 (0.0013) +[2023-09-26 13:53:06,970][09734] Updated weights for policy 0, policy_version 22080 (0.0015) +[2023-09-26 13:53:07,024][08516] Fps is (10 sec: 5735.2, 60 sec: 5461.5, 300 sec: 5998.2). Total num frames: 11304960. Throughput: 0: 720.2, 1: 719.5. Samples: 2823868. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 13:53:07,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.080')] +[2023-09-26 13:53:12,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5970.4). Total num frames: 11329536. Throughput: 0: 718.5, 1: 716.5. Samples: 2832384. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 13:53:12,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.070')] +[2023-09-26 13:53:17,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5970.4). Total num frames: 11362304. Throughput: 0: 719.0, 1: 717.8. Samples: 2836954. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 13:53:17,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.050')] +[2023-09-26 13:53:20,336][09734] Updated weights for policy 0, policy_version 22240 (0.0014) +[2023-09-26 13:53:20,337][09735] Updated weights for policy 1, policy_version 22240 (0.0015) +[2023-09-26 13:53:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 5870.9, 300 sec: 5998.2). Total num frames: 11395072. Throughput: 0: 730.3, 1: 730.2. Samples: 2846677. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 13:53:22,025][08516] Avg episode reward: [(0, '2.680'), (1, '0.070')]
+[2023-09-26 13:53:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 5870.9, 300 sec: 5998.2). Total num frames: 11427840. Throughput: 0: 741.6, 1: 743.5. Samples: 2856206. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:53:27,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.040')]
+[2023-09-26 13:53:32,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6007.5, 300 sec: 5998.2). Total num frames: 11460608. Throughput: 0: 750.8, 1: 749.5. Samples: 2861056. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:53:32,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.040')]
+[2023-09-26 13:53:33,112][09734] Updated weights for policy 0, policy_version 22400 (0.0017)
+[2023-09-26 13:53:33,112][09735] Updated weights for policy 1, policy_version 22400 (0.0018)
+[2023-09-26 13:53:37,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6007.5, 300 sec: 5998.2). Total num frames: 11493376. Throughput: 0: 761.0, 1: 761.4. Samples: 2870634. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:53:37,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.050')]
+[2023-09-26 13:53:42,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 5970.4). Total num frames: 11517952. Throughput: 0: 765.7, 1: 766.4. Samples: 2879694. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:53:42,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.060')]
+[2023-09-26 13:53:46,096][09735] Updated weights for policy 1, policy_version 22560 (0.0017)
+[2023-09-26 13:53:46,096][09734] Updated weights for policy 0, policy_version 22560 (0.0015)
+[2023-09-26 13:53:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6007.4, 300 sec: 5970.4). Total num frames: 11550720. Throughput: 0: 774.1, 1: 774.2. Samples: 2884688. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:53:47,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.060')]
+[2023-09-26 13:53:52,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5870.9, 300 sec: 5942.7). Total num frames: 11567104. Throughput: 0: 756.7, 1: 756.5. Samples: 2891960. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:53:52,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.080')]
+[2023-09-26 13:53:57,024][08516] Fps is (10 sec: 4505.7, 60 sec: 5802.8, 300 sec: 5928.8). Total num frames: 11595776. Throughput: 0: 735.2, 1: 737.3. Samples: 2898650. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:53:57,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.110')]
+[2023-09-26 13:54:02,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5870.9, 300 sec: 5914.9). Total num frames: 11624448. Throughput: 0: 735.6, 1: 734.1. Samples: 2903091. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:54:02,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.130')]
+[2023-09-26 13:54:02,775][09735] Updated weights for policy 1, policy_version 22720 (0.0014)
+[2023-09-26 13:54:02,775][09734] Updated weights for policy 0, policy_version 22720 (0.0011)
+[2023-09-26 13:54:07,024][08516] Fps is (10 sec: 5324.7, 60 sec: 5734.4, 300 sec: 5887.1). Total num frames: 11649024. Throughput: 0: 722.4, 1: 723.2. Samples: 2911732. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:54:07,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.150')]
+[2023-09-26 13:54:12,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5871.0, 300 sec: 5887.1). Total num frames: 11681792. Throughput: 0: 713.6, 1: 712.8. Samples: 2920394. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:54:12,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.180')]
+[2023-09-26 13:54:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5734.4, 300 sec: 5859.4). Total num frames: 11706368. Throughput: 0: 705.4, 1: 705.4. Samples: 2924544. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:54:17,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.180')]
+[2023-09-26 13:54:17,052][09735] Updated weights for policy 1, policy_version 22880 (0.0010)
+[2023-09-26 13:54:17,053][09734] Updated weights for policy 0, policy_version 22880 (0.0011)
+[2023-09-26 13:54:22,025][08516] Fps is (10 sec: 5324.3, 60 sec: 5666.1, 300 sec: 5845.5). Total num frames: 11735040. Throughput: 0: 693.3, 1: 692.9. Samples: 2933014. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:54:22,026][08516] Avg episode reward: [(0, '2.650'), (1, '0.180')]
+[2023-09-26 13:54:27,024][08516] Fps is (10 sec: 4915.3, 60 sec: 5461.4, 300 sec: 5831.6). Total num frames: 11755520. Throughput: 0: 652.1, 1: 651.5. Samples: 2938358. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:54:27,025][08516] Avg episode reward: [(0, '2.640'), (1, '0.180')]
+[2023-09-26 13:54:32,024][08516] Fps is (10 sec: 3686.7, 60 sec: 5188.3, 300 sec: 5776.1). Total num frames: 11771904. Throughput: 0: 644.5, 1: 642.8. Samples: 2942614. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:54:32,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.180')]
+[2023-09-26 13:54:37,024][08516] Fps is (10 sec: 3276.7, 60 sec: 4915.2, 300 sec: 5720.5). Total num frames: 11788288. Throughput: 0: 614.8, 1: 615.0. Samples: 2947304. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:54:37,025][08516] Avg episode reward: [(0, '2.660'), (1, '0.180')]
+[2023-09-26 13:54:37,034][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000023024_5894144.pth...
+[2023-09-26 13:54:37,034][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000023024_5894144.pth...
+[2023-09-26 13:54:37,085][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000020416_5226496.pth
+[2023-09-26 13:54:37,087][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000020416_5226496.pth
+[2023-09-26 13:54:37,524][09734] Updated weights for policy 0, policy_version 23040 (0.0012)
+[2023-09-26 13:54:37,528][09735] Updated weights for policy 1, policy_version 23040 (0.0011)
+[2023-09-26 13:54:42,024][08516] Fps is (10 sec: 4096.0, 60 sec: 4915.2, 300 sec: 5692.7). Total num frames: 11812864. Throughput: 0: 602.2, 1: 601.6. Samples: 2952821. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 13:54:42,025][08516] Avg episode reward: [(0, '2.680'), (1, '0.200')]
+[2023-09-26 13:54:42,026][09597] Saving new best policy, reward=0.200!
+[2023-09-26 13:54:47,024][08516] Fps is (10 sec: 4096.0, 60 sec: 4642.1, 300 sec: 5637.2). Total num frames: 11829248. Throughput: 0: 579.8, 1: 580.0. Samples: 2955279. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:54:47,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.200')]
+[2023-09-26 13:54:52,024][08516] Fps is (10 sec: 4096.0, 60 sec: 4778.7, 300 sec: 5609.4). Total num frames: 11853824. Throughput: 0: 556.1, 1: 555.8. Samples: 2961768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:54:52,025][08516] Avg episode reward: [(0, '2.690'), (1, '0.190')]
+[2023-09-26 13:54:57,024][08516] Fps is (10 sec: 4096.0, 60 sec: 4573.9, 300 sec: 5581.7). Total num frames: 11870208. Throughput: 0: 519.8, 1: 519.6. Samples: 2967168. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:54:57,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.220')]
+[2023-09-26 13:54:57,025][09597] Saving new best policy, reward=0.220!
+[2023-09-26 13:54:58,259][09734] Updated weights for policy 0, policy_version 23200 (0.0011)
+[2023-09-26 13:54:58,259][09735] Updated weights for policy 1, policy_version 23200 (0.0011)
+[2023-09-26 13:55:02,024][08516] Fps is (10 sec: 4096.0, 60 sec: 4505.6, 300 sec: 5553.9). Total num frames: 11894784. Throughput: 0: 515.3, 1: 517.6. Samples: 2971027. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:02,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.310')]
+[2023-09-26 13:55:02,025][09597] Saving new best policy, reward=0.310!
+[2023-09-26 13:55:07,024][08516] Fps is (10 sec: 4915.2, 60 sec: 4505.6, 300 sec: 5526.1). Total num frames: 11919360. Throughput: 0: 514.4, 1: 514.1. Samples: 2979295. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:07,025][08516] Avg episode reward: [(0, '2.730'), (1, '0.410')]
+[2023-09-26 13:55:07,185][09597] Saving new best policy, reward=0.410!
+[2023-09-26 13:55:12,024][08516] Fps is (10 sec: 5734.3, 60 sec: 4505.6, 300 sec: 5526.1). Total num frames: 11952128. Throughput: 0: 545.9, 1: 546.6. Samples: 2987523. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:12,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.460')]
+[2023-09-26 13:55:12,027][09597] Saving new best policy, reward=0.460!
+[2023-09-26 13:55:13,147][09734] Updated weights for policy 0, policy_version 23360 (0.0010)
+[2023-09-26 13:55:13,147][09735] Updated weights for policy 1, policy_version 23360 (0.0011)
+[2023-09-26 13:55:17,024][08516] Fps is (10 sec: 5734.5, 60 sec: 4505.6, 300 sec: 5498.4). Total num frames: 11976704. Throughput: 0: 544.0, 1: 545.3. Samples: 2991633. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:17,025][08516] Avg episode reward: [(0, '2.730'), (1, '0.560')]
+[2023-09-26 13:55:17,026][09597] Saving new best policy, reward=0.560!
+[2023-09-26 13:55:22,024][08516] Fps is (10 sec: 4915.2, 60 sec: 4437.4, 300 sec: 5470.6). Total num frames: 12001280. Throughput: 0: 583.7, 1: 584.8. Samples: 2999885. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:22,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.690')]
+[2023-09-26 13:55:22,112][09597] Saving new best policy, reward=0.690!
+[2023-09-26 13:55:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 4642.1, 300 sec: 5470.6). Total num frames: 12034048. Throughput: 0: 614.6, 1: 614.4. Samples: 3008128. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:27,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.790')]
+[2023-09-26 13:55:27,026][09597] Saving new best policy, reward=0.790!
+[2023-09-26 13:55:28,093][09735] Updated weights for policy 1, policy_version 23520 (0.0012)
+[2023-09-26 13:55:28,094][09734] Updated weights for policy 0, policy_version 23520 (0.0012)
+[2023-09-26 13:55:32,024][08516] Fps is (10 sec: 5734.5, 60 sec: 4778.7, 300 sec: 5456.7). Total num frames: 12058624. Throughput: 0: 632.6, 1: 633.4. Samples: 3012249. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:32,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.860')]
+[2023-09-26 13:55:32,025][09597] Saving new best policy, reward=0.860!
+[2023-09-26 13:55:37,024][08516] Fps is (10 sec: 4915.2, 60 sec: 4915.2, 300 sec: 5442.8). Total num frames: 12083200. Throughput: 0: 652.2, 1: 651.3. Samples: 3020426. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:37,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.880')]
+[2023-09-26 13:55:37,100][09597] Saving new best policy, reward=0.880!
+[2023-09-26 13:55:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5051.7, 300 sec: 5442.8). Total num frames: 12115968. Throughput: 0: 682.1, 1: 681.8. Samples: 3028544. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:42,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.970')]
+[2023-09-26 13:55:42,026][09597] Saving new best policy, reward=0.970!
+[2023-09-26 13:55:43,135][09734] Updated weights for policy 0, policy_version 23680 (0.0011)
+[2023-09-26 13:55:43,136][09735] Updated weights for policy 1, policy_version 23680 (0.0011)
+[2023-09-26 13:55:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5188.3, 300 sec: 5415.1). Total num frames: 12140544. Throughput: 0: 684.6, 1: 684.0. Samples: 3032613. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:47,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.990')]
+[2023-09-26 13:55:47,025][09597] Saving new best policy, reward=0.990!
+[2023-09-26 13:55:52,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5188.3, 300 sec: 5387.3). Total num frames: 12165120. Throughput: 0: 682.1, 1: 681.6. Samples: 3040660. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:55:52,025][08516] Avg episode reward: [(0, '2.670'), (1, '0.990')]
+[2023-09-26 13:55:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5387.3). Total num frames: 12197888. Throughput: 0: 680.6, 1: 679.8. Samples: 3048745. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:55:57,025][08516] Avg episode reward: [(0, '2.650'), (1, '0.970')]
+[2023-09-26 13:55:58,277][09735] Updated weights for policy 1, policy_version 23840 (0.0012)
+[2023-09-26 13:55:58,277][09734] Updated weights for policy 0, policy_version 23840 (0.0010)
+[2023-09-26 13:56:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5387.3). Total num frames: 12222464. Throughput: 0: 680.8, 1: 680.2. Samples: 3052876. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:56:02,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.980')]
+[2023-09-26 13:56:07,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5387.3). Total num frames: 12255232. Throughput: 0: 684.9, 1: 684.7. Samples: 3061518. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 13:56:07,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.980')]
+[2023-09-26 13:56:12,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5359.5). Total num frames: 12279808. Throughput: 0: 684.4, 1: 684.5. Samples: 3069731. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:12,025][08516] Avg episode reward: [(0, '2.680'), (1, '0.910')]
+[2023-09-26 13:56:12,974][09734] Updated weights for policy 0, policy_version 24000 (0.0014)
+[2023-09-26 13:56:12,974][09735] Updated weights for policy 1, policy_version 24000 (0.0014)
+[2023-09-26 13:56:17,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5331.7). Total num frames: 12304384. Throughput: 0: 684.0, 1: 684.7. Samples: 3073841. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:17,025][08516] Avg episode reward: [(0, '2.690'), (1, '0.920')]
+[2023-09-26 13:56:22,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5331.7). Total num frames: 12337152. Throughput: 0: 685.3, 1: 685.9. Samples: 3082127. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:22,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.880')]
+[2023-09-26 13:56:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5304.0). Total num frames: 12361728. Throughput: 0: 685.7, 1: 686.6. Samples: 3090297. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:27,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.890')]
+[2023-09-26 13:56:27,912][09735] Updated weights for policy 1, policy_version 24160 (0.0012)
+[2023-09-26 13:56:27,913][09734] Updated weights for policy 0, policy_version 24160 (0.0013)
+[2023-09-26 13:56:32,024][08516] Fps is (10 sec: 4915.3, 60 sec: 5461.3, 300 sec: 5276.2). Total num frames: 12386304. Throughput: 0: 685.8, 1: 686.1. Samples: 3094346. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:32,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.900')]
+[2023-09-26 13:56:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5290.1). Total num frames: 12419072. Throughput: 0: 686.5, 1: 687.6. Samples: 3102492. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:37,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.860')]
+[2023-09-26 13:56:37,034][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000024256_6209536.pth...
+[2023-09-26 13:56:37,034][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000024256_6209536.pth...
+[2023-09-26 13:56:37,072][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000021744_5566464.pth
+[2023-09-26 13:56:37,072][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000021744_5566464.pth
+[2023-09-26 13:56:42,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5276.2). Total num frames: 12443648. Throughput: 0: 689.0, 1: 689.4. Samples: 3110771. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:42,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.890')]
+[2023-09-26 13:56:42,895][09735] Updated weights for policy 1, policy_version 24320 (0.0011)
+[2023-09-26 13:56:42,895][09734] Updated weights for policy 0, policy_version 24320 (0.0011)
+[2023-09-26 13:56:47,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5248.4). Total num frames: 12468224. Throughput: 0: 688.6, 1: 689.6. Samples: 3114894. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:47,025][08516] Avg episode reward: [(0, '2.930'), (1, '0.900')]
+[2023-09-26 13:56:52,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5248.4). Total num frames: 12500992. Throughput: 0: 682.9, 1: 682.2. Samples: 3122948. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:52,025][08516] Avg episode reward: [(0, '2.950'), (1, '0.920')]
+[2023-09-26 13:56:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5276.2). Total num frames: 12525568. Throughput: 0: 683.8, 1: 683.3. Samples: 3131254. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:56:57,025][08516] Avg episode reward: [(0, '2.930'), (1, '0.960')]
+[2023-09-26 13:56:57,952][09734] Updated weights for policy 0, policy_version 24480 (0.0012)
+[2023-09-26 13:56:57,952][09735] Updated weights for policy 1, policy_version 24480 (0.0010)
+[2023-09-26 13:57:02,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5331.8). Total num frames: 12550144. Throughput: 0: 682.3, 1: 682.5. Samples: 3135255. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:02,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.930')]
+[2023-09-26 13:57:07,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5387.3). Total num frames: 12582912. Throughput: 0: 680.6, 1: 680.3. Samples: 3143370. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:07,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.910')]
+[2023-09-26 13:57:12,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5387.3). Total num frames: 12607488. Throughput: 0: 680.8, 1: 680.7. Samples: 3151563. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:12,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.920')]
+[2023-09-26 13:57:12,998][09735] Updated weights for policy 1, policy_version 24640 (0.0013)
+[2023-09-26 13:57:12,998][09734] Updated weights for policy 0, policy_version 24640 (0.0014)
+[2023-09-26 13:57:17,024][08516] Fps is (10 sec: 4915.3, 60 sec: 5461.3, 300 sec: 5387.3). Total num frames: 12632064. Throughput: 0: 681.5, 1: 681.5. Samples: 3155682. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:17,025][08516] Avg episode reward: [(0, '2.910'), (1, '0.930')]
+[2023-09-26 13:57:22,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5324.8, 300 sec: 5359.5). Total num frames: 12656640. Throughput: 0: 680.3, 1: 680.4. Samples: 3163722. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:22,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.930')]
+[2023-09-26 13:57:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5387.3). Total num frames: 12689408. Throughput: 0: 679.9, 1: 679.6. Samples: 3171951. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:27,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.910')]
+[2023-09-26 13:57:28,107][09734] Updated weights for policy 0, policy_version 24800 (0.0012)
+[2023-09-26 13:57:28,108][09735] Updated weights for policy 1, policy_version 24800 (0.0011)
+[2023-09-26 13:57:32,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5359.5). Total num frames: 12713984. Throughput: 0: 679.4, 1: 679.0. Samples: 3176023. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:32,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.920')]
+[2023-09-26 13:57:37,024][08516] Fps is (10 sec: 4915.1, 60 sec: 5324.8, 300 sec: 5359.5). Total num frames: 12738560. Throughput: 0: 680.6, 1: 680.7. Samples: 3184204. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:37,025][08516] Avg episode reward: [(0, '2.880'), (1, '0.940')]
+[2023-09-26 13:57:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5359.5). Total num frames: 12771328. Throughput: 0: 681.0, 1: 681.4. Samples: 3192562. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:42,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.940')]
+[2023-09-26 13:57:42,953][09735] Updated weights for policy 1, policy_version 24960 (0.0012)
+[2023-09-26 13:57:42,953][09734] Updated weights for policy 0, policy_version 24960 (0.0012)
+[2023-09-26 13:57:47,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5359.5). Total num frames: 12795904. Throughput: 0: 683.9, 1: 682.8. Samples: 3196755. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:47,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.970')]
+[2023-09-26 13:57:52,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5359.5). Total num frames: 12828672. Throughput: 0: 684.7, 1: 685.3. Samples: 3205020. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:52,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.920')]
+[2023-09-26 13:57:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5359.5). Total num frames: 12853248. Throughput: 0: 685.9, 1: 685.4. Samples: 3213269. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:57:57,025][08516] Avg episode reward: [(0, '2.910'), (1, '0.960')]
+[2023-09-26 13:57:57,808][09735] Updated weights for policy 1, policy_version 25120 (0.0012)
+[2023-09-26 13:57:57,808][09734] Updated weights for policy 0, policy_version 25120 (0.0012)
+[2023-09-26 13:58:02,024][08516] Fps is (10 sec: 4915.1, 60 sec: 5461.3, 300 sec: 5331.7). Total num frames: 12877824. Throughput: 0: 686.4, 1: 684.8. Samples: 3217388. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:58:02,025][08516] Avg episode reward: [(0, '2.930'), (1, '0.930')]
+[2023-09-26 13:58:07,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5359.5). Total num frames: 12910592. Throughput: 0: 688.5, 1: 686.6. Samples: 3225604. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:58:07,025][08516] Avg episode reward: [(0, '2.930'), (1, '0.890')]
+[2023-09-26 13:58:12,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5331.7). Total num frames: 12935168. Throughput: 0: 689.0, 1: 689.3. Samples: 3233972. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:58:12,025][08516] Avg episode reward: [(0, '2.940'), (1, '0.890')]
+[2023-09-26 13:58:12,432][09734] Updated weights for policy 0, policy_version 25280 (0.0013)
+[2023-09-26 13:58:12,432][09735] Updated weights for policy 1, policy_version 25280 (0.0012)
+[2023-09-26 13:58:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5331.7). Total num frames: 12967936. Throughput: 0: 691.7, 1: 690.8. Samples: 3238234. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:58:17,025][08516] Avg episode reward: [(0, '2.940'), (1, '0.920')]
+[2023-09-26 13:58:22,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5304.0). Total num frames: 12992512. Throughput: 0: 692.9, 1: 692.8. Samples: 3246560. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:58:22,025][08516] Avg episode reward: [(0, '2.930'), (1, '0.950')]
+[2023-09-26 13:58:27,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5276.2). Total num frames: 13017088. Throughput: 0: 692.1, 1: 692.4. Samples: 3254862. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:58:27,025][08516] Avg episode reward: [(0, '2.950'), (1, '0.960')]
+[2023-09-26 13:58:27,276][09735] Updated weights for policy 1, policy_version 25440 (0.0013)
+[2023-09-26 13:58:27,277][09734] Updated weights for policy 0, policy_version 25440 (0.0012)
+[2023-09-26 13:58:32,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5276.2). Total num frames: 13049856. Throughput: 0: 691.3, 1: 691.7. Samples: 3258989. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:58:32,025][08516] Avg episode reward: [(0, '2.960'), (1, '0.930')]
+[2023-09-26 13:58:32,026][09359] Saving new best policy, reward=2.960!
+[2023-09-26 13:58:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5276.2). Total num frames: 13074432. Throughput: 0: 692.5, 1: 692.6. Samples: 3267349. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 13:58:37,025][08516] Avg episode reward: [(0, '2.950'), (1, '0.860')]
+[2023-09-26 13:58:37,032][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000025536_6537216.pth...
+[2023-09-26 13:58:37,032][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000025536_6537216.pth...
+[2023-09-26 13:58:37,072][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000023024_5894144.pth
+[2023-09-26 13:58:37,072][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000023024_5894144.pth
+[2023-09-26 13:58:41,976][09735] Updated weights for policy 1, policy_version 25600 (0.0013)
+[2023-09-26 13:58:41,977][09734] Updated weights for policy 0, policy_version 25600 (0.0013)
+[2023-09-26 13:58:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5276.2). Total num frames: 13107200. Throughput: 0: 693.6, 1: 694.4. Samples: 3275730. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:58:42,025][08516] Avg episode reward: [(0, '2.950'), (1, '0.840')]
+[2023-09-26 13:58:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5304.0). Total num frames: 13131776. Throughput: 0: 693.4, 1: 694.7. Samples: 3279851. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:58:47,025][08516] Avg episode reward: [(0, '2.950'), (1, '0.780')]
+[2023-09-26 13:58:52,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5290.1). Total num frames: 13156352. Throughput: 0: 695.2, 1: 695.8. Samples: 3288201. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:58:52,025][08516] Avg episode reward: [(0, '2.970'), (1, '0.830')]
+[2023-09-26 13:58:52,034][09359] Saving new best policy, reward=2.970!
+[2023-09-26 13:58:56,867][09735] Updated weights for policy 1, policy_version 25760 (0.0012)
+[2023-09-26 13:58:56,868][09734] Updated weights for policy 0, policy_version 25760 (0.0012)
+[2023-09-26 13:58:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5304.0). Total num frames: 13189120. Throughput: 0: 693.8, 1: 693.3. Samples: 3296392. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 13:58:57,025][08516] Avg episode reward: [(0, '2.960'), (1, '0.850')]
+[2023-09-26 13:59:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5304.0). Total num frames: 13213696. Throughput: 0: 692.8, 1: 692.2. Samples: 3300557. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:59:02,025][08516] Avg episode reward: [(0, '2.970'), (1, '0.910')]
+[2023-09-26 13:59:07,025][08516] Fps is (10 sec: 4915.1, 60 sec: 5461.3, 300 sec: 5276.2). Total num frames: 13238272. Throughput: 0: 692.1, 1: 692.4. Samples: 3308866. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:59:07,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.930')]
+[2023-09-26 13:59:11,670][09735] Updated weights for policy 1, policy_version 25920 (0.0012)
+[2023-09-26 13:59:11,670][09734] Updated weights for policy 0, policy_version 25920 (0.0014)
+[2023-09-26 13:59:12,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5304.0). Total num frames: 13271040. Throughput: 0: 693.6, 1: 692.6. Samples: 3317237. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:59:12,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.950')]
+[2023-09-26 13:59:17,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5290.1). Total num frames: 13295616. Throughput: 0: 692.6, 1: 692.1. Samples: 3321302. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 13:59:17,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.930')]
+[2023-09-26 13:59:22,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5304.0). Total num frames: 13320192. Throughput: 0: 690.9, 1: 690.7. Samples: 3329521. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:59:22,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.960')]
+[2023-09-26 13:59:26,624][09735] Updated weights for policy 1, policy_version 26080 (0.0013)
+[2023-09-26 13:59:26,624][09734] Updated weights for policy 0, policy_version 26080 (0.0011)
+[2023-09-26 13:59:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5359.5). Total num frames: 13352960. Throughput: 0: 689.4, 1: 689.0. Samples: 3337760. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:59:27,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.960')]
+[2023-09-26 13:59:32,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5461.3, 300 sec: 5387.3). Total num frames: 13377536. Throughput: 0: 691.5, 1: 691.4. Samples: 3342081. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 13:59:32,026][08516] Avg episode reward: [(0, '2.670'), (1, '0.940')]
+[2023-09-26 13:59:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5415.1). Total num frames: 13410304. Throughput: 0: 691.0, 1: 691.1. Samples: 3350394. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:59:37,025][08516] Avg episode reward: [(0, '2.680'), (1, '0.900')]
+[2023-09-26 13:59:41,333][09734] Updated weights for policy 0, policy_version 26240 (0.0016)
+[2023-09-26 13:59:41,341][09735] Updated weights for policy 1, policy_version 26240 (0.0014)
+[2023-09-26 13:59:42,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5442.8). Total num frames: 13434880. Throughput: 0: 692.3, 1: 691.9. Samples: 3358682. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:59:42,025][08516] Avg episode reward: [(0, '2.690'), (1, '0.890')]
+[2023-09-26 13:59:47,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5442.8). Total num frames: 13459456. Throughput: 0: 691.4, 1: 691.6. Samples: 3362789. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:59:47,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.870')]
+[2023-09-26 13:59:52,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13492224. Throughput: 0: 691.3, 1: 689.6. Samples: 3371008. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 13:59:52,025][08516] Avg episode reward: [(0, '2.650'), (1, '0.830')]
+[2023-09-26 13:59:56,158][09735] Updated weights for policy 1, policy_version 26400 (0.0014)
+[2023-09-26 13:59:56,158][09734] Updated weights for policy 0, policy_version 26400 (0.0012)
+[2023-09-26 13:59:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 13516800. Throughput: 0: 688.7, 1: 688.4. Samples: 3379205. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 13:59:57,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.830')]
+[2023-09-26 14:00:02,024][08516] Fps is (10 sec: 5325.0, 60 sec: 5529.6, 300 sec: 5512.2). Total num frames: 13545472. Throughput: 0: 689.2, 1: 689.2. Samples: 3383330. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 14:00:02,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.850')]
+[2023-09-26 14:00:07,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13574144. Throughput: 0: 690.3, 1: 690.7. Samples: 3391669. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 14:00:07,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.860')]
+[2023-09-26 14:00:10,850][09734] Updated weights for policy 0, policy_version 26560 (0.0012)
+[2023-09-26 14:00:10,850][09735] Updated weights for policy 1, policy_version 26560 (0.0012)
+[2023-09-26 14:00:12,024][08516] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 13598720. Throughput: 0: 692.8, 1: 692.8. Samples: 3400114. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 14:00:12,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.850')]
+[2023-09-26 14:00:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13631488. Throughput: 0: 690.7, 1: 690.3. Samples: 3404226. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 14:00:17,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.870')]
+[2023-09-26 14:00:22,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5498.4). Total num frames: 13656064. Throughput: 0: 690.8, 1: 691.9. Samples: 3412617. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 14:00:22,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.920')]
+[2023-09-26 14:00:25,625][09734] Updated weights for policy 0, policy_version 26720 (0.0012)
+[2023-09-26 14:00:25,625][09735] Updated weights for policy 1, policy_version 26720 (0.0012)
+[2023-09-26 14:00:27,024][08516] Fps is (10 sec: 5324.9, 60 sec: 5529.6, 300 sec: 5512.2). Total num frames: 13684736. Throughput: 0: 691.7, 1: 692.8. Samples: 3420986. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 14:00:27,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.910')]
+[2023-09-26 14:00:32,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13713408. Throughput: 0: 691.1, 1: 691.4. Samples: 3425000. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:00:32,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.910')]
+[2023-09-26 14:00:37,024][08516] Fps is (10 sec: 5324.8, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 13737984. Throughput: 0: 694.8, 1: 696.7. Samples: 3433624. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:00:37,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.900')]
+[2023-09-26 14:00:37,031][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000026832_6868992.pth...
+[2023-09-26 14:00:37,031][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000026832_6868992.pth...
+[2023-09-26 14:00:37,064][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000024256_6209536.pth
+[2023-09-26 14:00:37,064][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000024256_6209536.pth
+[2023-09-26 14:00:40,309][09735] Updated weights for policy 1, policy_version 26880 (0.0015)
+[2023-09-26 14:00:40,309][09734] Updated weights for policy 0, policy_version 26880 (0.0012)
+[2023-09-26 14:00:42,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13770752. Throughput: 0: 696.7, 1: 697.3. Samples: 3441933. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:00:42,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.910')]
+[2023-09-26 14:00:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13795328. Throughput: 0: 696.4, 1: 697.0. Samples: 3446032. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:00:47,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.860')]
+[2023-09-26 14:00:52,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.4, 300 sec: 5498.4). Total num frames: 13819904. Throughput: 0: 693.8, 1: 693.1. Samples: 3454081. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:00:52,025][08516] Avg episode reward: [(0, '2.760'), (1, '0.870')]
+[2023-09-26 14:00:55,421][09734] Updated weights for policy 0, policy_version 27040 (0.0010)
+[2023-09-26 14:00:55,421][09735] Updated weights for policy 1, policy_version 27040 (0.0012)
+[2023-09-26 14:00:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13852672. Throughput: 0: 690.3, 1: 690.0. Samples: 3462224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:00:57,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.900')]
+[2023-09-26 14:01:02,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5529.6, 300 sec: 5498.4). Total num frames: 13877248. Throughput: 0: 691.3, 1: 691.3. Samples: 3466444. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:02,025][08516] Avg episode reward: [(0, '2.690'), (1, '0.920')]
+[2023-09-26 14:01:07,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 13901824. Throughput: 0: 692.5, 1: 692.8. Samples: 3474958. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:07,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.920')]
+[2023-09-26 14:01:10,186][09735] Updated weights for policy 1, policy_version 27200 (0.0012)
+[2023-09-26 14:01:10,186][09734] Updated weights for policy 0, policy_version 27200 (0.0012)
+[2023-09-26 14:01:12,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 13934592. Throughput: 0: 690.4, 1: 689.3. Samples: 3483074. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 14:01:12,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.960')]
+[2023-09-26 14:01:17,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5461.4, 300 sec: 5498.4). Total num frames: 13959168. Throughput: 0: 691.2, 1: 691.2. Samples: 3487209. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 14:01:17,025][08516] Avg episode reward: [(0, '2.720'), (1, '0.970')]
+[2023-09-26 14:01:22,024][08516] Fps is (10 sec: 4915.2, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 13983744. Throughput: 0: 685.9, 1: 685.9. Samples: 3495356. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 14:01:22,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.940')]
+[2023-09-26 14:01:25,183][09734] Updated weights for policy 0, policy_version 27360 (0.0012)
+[2023-09-26 14:01:25,183][09735] Updated weights for policy 1, policy_version 27360 (0.0013)
+[2023-09-26 14:01:27,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5529.6, 300 sec: 5526.1). Total num frames: 14016512. Throughput: 0: 684.3, 1: 685.3. Samples: 3503563. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:27,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.930')]
+[2023-09-26 14:01:32,024][08516] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5498.4). Total num frames: 14041088. Throughput: 0: 687.1, 1: 688.1. Samples: 3507919. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:32,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.930')]
+[2023-09-26 14:01:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.9, 300 sec: 5526.1). Total num frames: 14073856. Throughput: 0: 690.6, 1: 691.7. Samples: 3516283. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:37,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.920')]
+[2023-09-26 14:01:39,640][09735] Updated weights for policy 1, policy_version 27520 (0.0012)
+[2023-09-26 14:01:39,640][09734] Updated weights for policy 0, policy_version 27520 (0.0012)
+[2023-09-26 14:01:42,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5461.3, 300 sec: 5526.1). Total num frames: 14098432. Throughput: 0: 693.8, 1: 693.7. Samples: 3524662. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:42,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.900')]
+[2023-09-26 14:01:47,024][08516] Fps is (10 sec: 5734.2, 60 sec: 5597.8, 300 sec: 5526.1). Total num frames: 14131200. Throughput: 0: 695.4, 1: 695.7. Samples: 3529047. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:47,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.910')]
+[2023-09-26 14:01:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 14163968. Throughput: 0: 705.2, 1: 704.2. Samples: 3538383. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:52,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.900')]
+[2023-09-26 14:01:53,269][09735] Updated weights for policy 1, policy_version 27680 (0.0014)
+[2023-09-26 14:01:53,269][09734] Updated weights for policy 0, policy_version 27680 (0.0014)
+[2023-09-26 14:01:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 5597.8, 300 sec: 5553.9). Total num frames: 14188544. Throughput: 0: 716.4, 1: 717.3. Samples: 3547589. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:01:57,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.930')]
+[2023-09-26 14:02:02,024][08516] Fps is (10 sec: 5734.3, 60 sec: 5734.4, 300 sec: 5553.9). Total num frames: 14221312. Throughput: 0: 725.7, 1: 726.2. Samples: 3552543. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 14:02:02,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.920')]
+[2023-09-26 14:02:06,172][09734] Updated weights for policy 0, policy_version 27840 (0.0019)
+[2023-09-26 14:02:06,172][09735] Updated weights for policy 1, policy_version 27840 (0.0018)
+[2023-09-26 14:02:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 5870.9, 300 sec: 5581.7). Total num frames: 14254080. Throughput: 0: 738.6, 1: 738.3. Samples: 3561814.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:07,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.950')] +[2023-09-26 14:02:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 5870.9, 300 sec: 5609.4). Total num frames: 14286848. Throughput: 0: 758.0, 1: 756.4. Samples: 3571711. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:12,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.960')] +[2023-09-26 14:02:17,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6007.5, 300 sec: 5637.2). Total num frames: 14319616. Throughput: 0: 759.3, 1: 759.4. Samples: 3576260. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:17,025][08516] Avg episode reward: [(0, '2.880'), (1, '0.950')] +[2023-09-26 14:02:19,358][09734] Updated weights for policy 0, policy_version 28000 (0.0017) +[2023-09-26 14:02:19,358][09735] Updated weights for policy 1, policy_version 28000 (0.0017) +[2023-09-26 14:02:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 5637.2). Total num frames: 14352384. Throughput: 0: 768.5, 1: 768.1. Samples: 3585430. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:22,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.960')] +[2023-09-26 14:02:27,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6007.5, 300 sec: 5637.2). Total num frames: 14376960. Throughput: 0: 777.4, 1: 778.0. Samples: 3594657. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:27,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.970')] +[2023-09-26 14:02:32,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 5665.0). Total num frames: 14409728. Throughput: 0: 782.9, 1: 783.4. Samples: 3599528. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:32,025][08516] Avg episode reward: [(0, '2.880'), (1, '0.960')] +[2023-09-26 14:02:32,388][09735] Updated weights for policy 1, policy_version 28160 (0.0016) +[2023-09-26 14:02:32,388][09734] Updated weights for policy 0, policy_version 28160 (0.0019) +[2023-09-26 14:02:37,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 5665.0). Total num frames: 14442496. Throughput: 0: 783.6, 1: 783.6. Samples: 3608909. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:37,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.980')] +[2023-09-26 14:02:37,036][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000028208_7221248.pth... +[2023-09-26 14:02:37,036][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000028208_7221248.pth... +[2023-09-26 14:02:37,065][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000025536_6537216.pth +[2023-09-26 14:02:37,073][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000025536_6537216.pth +[2023-09-26 14:02:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5692.7). Total num frames: 14475264. Throughput: 0: 792.3, 1: 789.2. Samples: 3618758. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:42,025][08516] Avg episode reward: [(0, '2.880'), (1, '1.010')] +[2023-09-26 14:02:42,025][09597] Saving new best policy, reward=1.010! +[2023-09-26 14:02:45,367][09734] Updated weights for policy 0, policy_version 28320 (0.0016) +[2023-09-26 14:02:45,367][09735] Updated weights for policy 1, policy_version 28320 (0.0017) +[2023-09-26 14:02:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 5692.7). Total num frames: 14508032. Throughput: 0: 783.0, 1: 783.4. Samples: 3623027. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:47,025][08516] Avg episode reward: [(0, '2.900'), (1, '1.000')] +[2023-09-26 14:02:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5720.5). Total num frames: 14540800. Throughput: 0: 787.4, 1: 786.0. Samples: 3632615. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:52,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.980')] +[2023-09-26 14:02:57,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 5720.5). Total num frames: 14565376. Throughput: 0: 779.2, 1: 781.0. Samples: 3641917. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:02:57,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.960')] +[2023-09-26 14:02:58,371][09734] Updated weights for policy 0, policy_version 28480 (0.0016) +[2023-09-26 14:02:58,372][09735] Updated weights for policy 1, policy_version 28480 (0.0017) +[2023-09-26 14:03:02,024][08516] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 5720.5). Total num frames: 14598144. Throughput: 0: 785.4, 1: 783.7. Samples: 3646869. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:03:02,025][08516] Avg episode reward: [(0, '2.910'), (1, '0.960')] +[2023-09-26 14:03:07,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5748.3). Total num frames: 14630912. Throughput: 0: 783.9, 1: 783.8. Samples: 3655979. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:03:07,025][08516] Avg episode reward: [(0, '2.910'), (1, '0.980')] +[2023-09-26 14:03:11,417][09735] Updated weights for policy 1, policy_version 28640 (0.0017) +[2023-09-26 14:03:11,417][09734] Updated weights for policy 0, policy_version 28640 (0.0017) +[2023-09-26 14:03:12,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 5748.3). Total num frames: 14663680. Throughput: 0: 791.7, 1: 790.1. Samples: 3665838. 
Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:03:12,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.980')] +[2023-09-26 14:03:17,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5776.1). Total num frames: 14696448. Throughput: 0: 787.5, 1: 786.9. Samples: 3670374. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:03:17,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.980')] +[2023-09-26 14:03:22,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 5803.8). Total num frames: 14729216. Throughput: 0: 790.9, 1: 791.5. Samples: 3680118. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 14:03:22,025][08516] Avg episode reward: [(0, '2.870'), (1, '1.000')] +[2023-09-26 14:03:24,258][09734] Updated weights for policy 0, policy_version 28800 (0.0017) +[2023-09-26 14:03:24,258][09735] Updated weights for policy 1, policy_version 28800 (0.0017) +[2023-09-26 14:03:27,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 5803.8). Total num frames: 14761984. Throughput: 0: 783.6, 1: 786.7. Samples: 3689420. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 14:03:27,025][08516] Avg episode reward: [(0, '2.810'), (1, '1.010')] +[2023-09-26 14:03:32,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 5831.6). Total num frames: 14794752. Throughput: 0: 791.8, 1: 792.2. Samples: 3694307. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 14:03:32,025][08516] Avg episode reward: [(0, '2.750'), (1, '1.020')] +[2023-09-26 14:03:32,026][09597] Saving new best policy, reward=1.020! +[2023-09-26 14:03:37,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 5803.8). Total num frames: 14819328. Throughput: 0: 790.6, 1: 791.5. Samples: 3703812. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 14:03:37,025][08516] Avg episode reward: [(0, '2.740'), (1, '1.080')] +[2023-09-26 14:03:37,145][09597] Saving new best policy, reward=1.080! 
+[2023-09-26 14:03:37,148][09734] Updated weights for policy 0, policy_version 28960 (0.0016) +[2023-09-26 14:03:37,148][09735] Updated weights for policy 1, policy_version 28960 (0.0017) +[2023-09-26 14:03:42,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 5831.6). Total num frames: 14852096. Throughput: 0: 791.8, 1: 791.6. Samples: 3713170. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:03:42,025][08516] Avg episode reward: [(0, '2.720'), (1, '1.090')] +[2023-09-26 14:03:42,027][09597] Saving new best policy, reward=1.090! +[2023-09-26 14:03:47,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 5859.4). Total num frames: 14884864. Throughput: 0: 790.5, 1: 791.6. Samples: 3718062. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:03:47,025][08516] Avg episode reward: [(0, '2.730'), (1, '1.060')] +[2023-09-26 14:03:50,092][09734] Updated weights for policy 0, policy_version 29120 (0.0015) +[2023-09-26 14:03:50,093][09735] Updated weights for policy 1, policy_version 29120 (0.0015) +[2023-09-26 14:03:52,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5859.4). Total num frames: 14917632. Throughput: 0: 794.0, 1: 793.4. Samples: 3727413. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:03:52,025][08516] Avg episode reward: [(0, '2.750'), (1, '1.040')] +[2023-09-26 14:03:57,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 5887.1). Total num frames: 14950400. Throughput: 0: 791.6, 1: 792.9. Samples: 3737142. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:03:57,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.990')] +[2023-09-26 14:04:02,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 5914.9). Total num frames: 14983168. Throughput: 0: 793.2, 1: 791.7. Samples: 3741697. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:02,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.980')] +[2023-09-26 14:04:03,194][09735] Updated weights for policy 1, policy_version 29280 (0.0018) +[2023-09-26 14:04:03,194][09734] Updated weights for policy 0, policy_version 29280 (0.0019) +[2023-09-26 14:04:07,024][08516] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 5901.0). Total num frames: 15011840. Throughput: 0: 789.3, 1: 788.9. Samples: 3751136. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:07,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.970')] +[2023-09-26 14:04:12,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 5914.9). Total num frames: 15040512. Throughput: 0: 789.9, 1: 790.0. Samples: 3760517. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:12,025][08516] Avg episode reward: [(0, '2.820'), (1, '1.020')] +[2023-09-26 14:04:16,129][09734] Updated weights for policy 0, policy_version 29440 (0.0017) +[2023-09-26 14:04:16,129][09735] Updated weights for policy 1, policy_version 29440 (0.0016) +[2023-09-26 14:04:17,024][08516] Fps is (10 sec: 6144.2, 60 sec: 6280.5, 300 sec: 5942.7). Total num frames: 15073280. Throughput: 0: 790.3, 1: 790.0. Samples: 3765419. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:17,025][08516] Avg episode reward: [(0, '2.820'), (1, '1.030')] +[2023-09-26 14:04:22,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5942.7). Total num frames: 15106048. Throughput: 0: 786.6, 1: 787.2. Samples: 3774631. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 14:04:22,025][08516] Avg episode reward: [(0, '2.750'), (1, '1.040')] +[2023-09-26 14:04:27,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 5970.4). Total num frames: 15138816. Throughput: 0: 793.6, 1: 792.9. Samples: 3784559. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 14:04:27,025][08516] Avg episode reward: [(0, '2.780'), (1, '1.050')] +[2023-09-26 14:04:29,014][09734] Updated weights for policy 0, policy_version 29600 (0.0016) +[2023-09-26 14:04:29,014][09735] Updated weights for policy 1, policy_version 29600 (0.0017) +[2023-09-26 14:04:32,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 5970.4). Total num frames: 15171584. Throughput: 0: 788.0, 1: 787.8. Samples: 3788971. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 14:04:32,025][08516] Avg episode reward: [(0, '2.820'), (1, '1.030')] +[2023-09-26 14:04:37,025][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 5998.2). Total num frames: 15204352. Throughput: 0: 788.9, 1: 789.6. Samples: 3798447. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:37,025][08516] Avg episode reward: [(0, '2.780'), (1, '1.010')] +[2023-09-26 14:04:37,039][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000029696_7602176.pth... +[2023-09-26 14:04:37,039][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000029696_7602176.pth... +[2023-09-26 14:04:37,074][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000026832_6868992.pth +[2023-09-26 14:04:37,079][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000026832_6868992.pth +[2023-09-26 14:04:42,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 5998.2). Total num frames: 15228928. Throughput: 0: 782.3, 1: 783.0. Samples: 3807579. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:42,026][08516] Avg episode reward: [(0, '2.780'), (1, '1.010')] +[2023-09-26 14:04:42,233][09734] Updated weights for policy 0, policy_version 29760 (0.0017) +[2023-09-26 14:04:42,233][09735] Updated weights for policy 1, policy_version 29760 (0.0017) +[2023-09-26 14:04:47,024][08516] Fps is (10 sec: 5734.6, 60 sec: 6280.5, 300 sec: 5998.2). 
Total num frames: 15261696. Throughput: 0: 785.8, 1: 787.6. Samples: 3812500. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:47,025][08516] Avg episode reward: [(0, '2.760'), (1, '0.990')] +[2023-09-26 14:04:52,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6026.0). Total num frames: 15294464. Throughput: 0: 785.1, 1: 785.6. Samples: 3821818. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:52,025][08516] Avg episode reward: [(0, '2.730'), (1, '1.010')] +[2023-09-26 14:04:55,099][09734] Updated weights for policy 0, policy_version 29920 (0.0017) +[2023-09-26 14:04:55,099][09735] Updated weights for policy 1, policy_version 29920 (0.0019) +[2023-09-26 14:04:57,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6039.9). Total num frames: 15327232. Throughput: 0: 791.9, 1: 791.2. Samples: 3831754. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:04:57,025][08516] Avg episode reward: [(0, '2.710'), (1, '1.000')] +[2023-09-26 14:05:02,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6053.7). Total num frames: 15360000. Throughput: 0: 785.2, 1: 785.1. Samples: 3836080. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:02,025][08516] Avg episode reward: [(0, '2.730'), (1, '0.980')] +[2023-09-26 14:05:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6081.5). Total num frames: 15392768. Throughput: 0: 791.2, 1: 790.4. Samples: 3845803. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:07,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.960')] +[2023-09-26 14:05:08,092][09735] Updated weights for policy 1, policy_version 30080 (0.0017) +[2023-09-26 14:05:08,093][09734] Updated weights for policy 0, policy_version 30080 (0.0016) +[2023-09-26 14:05:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6081.5). Total num frames: 15425536. Throughput: 0: 782.5, 1: 783.2. Samples: 3855012. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:12,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.950')] +[2023-09-26 14:05:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6081.5). Total num frames: 15450112. Throughput: 0: 790.2, 1: 790.3. Samples: 3860097. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:17,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.970')] +[2023-09-26 14:05:20,887][09735] Updated weights for policy 1, policy_version 30240 (0.0014) +[2023-09-26 14:05:20,888][09734] Updated weights for policy 0, policy_version 30240 (0.0014) +[2023-09-26 14:05:22,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6095.4). Total num frames: 15482880. Throughput: 0: 791.0, 1: 791.0. Samples: 3869637. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:22,025][08516] Avg episode reward: [(0, '2.930'), (1, '1.000')] +[2023-09-26 14:05:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6109.3). Total num frames: 15515648. Throughput: 0: 792.5, 1: 791.6. Samples: 3878864. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:27,025][08516] Avg episode reward: [(0, '2.940'), (1, '1.020')] +[2023-09-26 14:05:32,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6137.1). Total num frames: 15548416. Throughput: 0: 787.3, 1: 787.4. Samples: 3883361. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:32,025][08516] Avg episode reward: [(0, '2.950'), (1, '1.030')] +[2023-09-26 14:05:34,055][09735] Updated weights for policy 1, policy_version 30400 (0.0015) +[2023-09-26 14:05:34,055][09734] Updated weights for policy 0, policy_version 30400 (0.0016) +[2023-09-26 14:05:37,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6137.1). Total num frames: 15581184. Throughput: 0: 793.8, 1: 792.7. Samples: 3893213. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:37,025][08516] Avg episode reward: [(0, '2.940'), (1, '1.030')] +[2023-09-26 14:05:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6164.8). Total num frames: 15613952. Throughput: 0: 787.7, 1: 788.4. Samples: 3902681. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:42,025][08516] Avg episode reward: [(0, '2.900'), (1, '1.020')] +[2023-09-26 14:05:46,873][09735] Updated weights for policy 1, policy_version 30560 (0.0016) +[2023-09-26 14:05:46,875][09734] Updated weights for policy 0, policy_version 30560 (0.0019) +[2023-09-26 14:05:47,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6192.6). Total num frames: 15646720. Throughput: 0: 794.3, 1: 793.6. Samples: 3907537. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:47,025][08516] Avg episode reward: [(0, '2.920'), (1, '1.010')] +[2023-09-26 14:05:52,024][08516] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6178.7). Total num frames: 15675392. Throughput: 0: 791.8, 1: 792.3. Samples: 3917088. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:52,026][08516] Avg episode reward: [(0, '2.930'), (1, '1.000')] +[2023-09-26 14:05:57,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6192.6). Total num frames: 15704064. Throughput: 0: 791.5, 1: 791.5. Samples: 3926248. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:05:57,025][08516] Avg episode reward: [(0, '2.930'), (1, '1.020')] +[2023-09-26 14:05:59,936][09734] Updated weights for policy 0, policy_version 30720 (0.0017) +[2023-09-26 14:05:59,936][09735] Updated weights for policy 1, policy_version 30720 (0.0016) +[2023-09-26 14:06:02,025][08516] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 15736832. Throughput: 0: 789.5, 1: 788.8. Samples: 3931120. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:06:02,025][08516] Avg episode reward: [(0, '2.910'), (1, '1.030')] +[2023-09-26 14:06:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 15769600. Throughput: 0: 786.8, 1: 786.6. Samples: 3940440. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:06:07,025][08516] Avg episode reward: [(0, '2.930'), (1, '1.020')] +[2023-09-26 14:06:12,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15802368. Throughput: 0: 791.3, 1: 791.8. Samples: 3950103. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:06:12,025][08516] Avg episode reward: [(0, '2.920'), (1, '1.000')] +[2023-09-26 14:06:12,869][09734] Updated weights for policy 0, policy_version 30880 (0.0016) +[2023-09-26 14:06:12,870][09735] Updated weights for policy 1, policy_version 30880 (0.0017) +[2023-09-26 14:06:17,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 15835136. Throughput: 0: 793.5, 1: 791.7. Samples: 3954693. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:06:17,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.990')] +[2023-09-26 14:06:22,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 15867904. Throughput: 0: 790.2, 1: 790.6. Samples: 3964345. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 14:06:22,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.970')] +[2023-09-26 14:06:25,952][09735] Updated weights for policy 1, policy_version 31040 (0.0017) +[2023-09-26 14:06:25,953][09734] Updated weights for policy 0, policy_version 31040 (0.0016) +[2023-09-26 14:06:27,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 15892480. Throughput: 0: 785.8, 1: 785.9. Samples: 3973408. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 14:06:27,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.980')] +[2023-09-26 14:06:32,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 15925248. Throughput: 0: 785.7, 1: 785.7. Samples: 3978249. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 14:06:32,025][08516] Avg episode reward: [(0, '2.930'), (1, '1.000')] +[2023-09-26 14:06:37,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 15958016. Throughput: 0: 782.6, 1: 781.2. Samples: 3987456. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 14:06:37,026][08516] Avg episode reward: [(0, '2.920'), (1, '1.000')] +[2023-09-26 14:06:37,039][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000031168_7979008.pth... +[2023-09-26 14:06:37,039][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000031168_7979008.pth... +[2023-09-26 14:06:37,075][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000028208_7221248.pth +[2023-09-26 14:06:37,075][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000028208_7221248.pth +[2023-09-26 14:06:39,344][09734] Updated weights for policy 0, policy_version 31200 (0.0018) +[2023-09-26 14:06:39,344][09735] Updated weights for policy 1, policy_version 31200 (0.0018) +[2023-09-26 14:06:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 15990784. Throughput: 0: 781.2, 1: 782.0. Samples: 3996590. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:06:42,025][08516] Avg episode reward: [(0, '2.930'), (1, '1.010')] +[2023-09-26 14:06:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 16015360. Throughput: 0: 780.0, 1: 781.0. Samples: 4001367. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:06:47,025][08516] Avg episode reward: [(0, '2.920'), (1, '1.000')] +[2023-09-26 14:06:52,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6212.3, 300 sec: 6303.7). Total num frames: 16048128. Throughput: 0: 778.4, 1: 778.6. Samples: 4010504. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:06:52,025][08516] Avg episode reward: [(0, '2.880'), (1, '0.980')] +[2023-09-26 14:06:52,496][09734] Updated weights for policy 0, policy_version 31360 (0.0016) +[2023-09-26 14:06:52,497][09735] Updated weights for policy 1, policy_version 31360 (0.0016) +[2023-09-26 14:06:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16080896. Throughput: 0: 779.4, 1: 777.8. Samples: 4020177. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:06:57,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.980')] +[2023-09-26 14:07:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 16113664. Throughput: 0: 775.4, 1: 777.0. Samples: 4024552. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:07:02,025][08516] Avg episode reward: [(0, '2.830'), (1, '0.980')] +[2023-09-26 14:07:05,592][09735] Updated weights for policy 1, policy_version 31520 (0.0018) +[2023-09-26 14:07:05,592][09734] Updated weights for policy 0, policy_version 31520 (0.0017) +[2023-09-26 14:07:07,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16146432. Throughput: 0: 775.2, 1: 778.1. Samples: 4034246. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:07:07,026][08516] Avg episode reward: [(0, '2.820'), (1, '0.990')] +[2023-09-26 14:07:12,024][08516] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6289.8). Total num frames: 16175104. Throughput: 0: 777.6, 1: 777.7. Samples: 4043398. 
Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:07:12,025][08516] Avg episode reward: [(0, '2.750'), (1, '1.000')] +[2023-09-26 14:07:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 16203776. Throughput: 0: 780.2, 1: 779.8. Samples: 4048446. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:07:17,025][08516] Avg episode reward: [(0, '2.780'), (1, '1.000')] +[2023-09-26 14:07:18,643][09734] Updated weights for policy 0, policy_version 31680 (0.0016) +[2023-09-26 14:07:18,644][09735] Updated weights for policy 1, policy_version 31680 (0.0015) +[2023-09-26 14:07:22,024][08516] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6303.7). Total num frames: 16236544. Throughput: 0: 776.1, 1: 777.8. Samples: 4057380. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:07:22,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.990')] +[2023-09-26 14:07:27,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16269312. Throughput: 0: 783.9, 1: 781.4. Samples: 4067027. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:07:27,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.990')] +[2023-09-26 14:07:31,679][09734] Updated weights for policy 0, policy_version 31840 (0.0016) +[2023-09-26 14:07:31,679][09735] Updated weights for policy 1, policy_version 31840 (0.0016) +[2023-09-26 14:07:32,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16302080. Throughput: 0: 779.5, 1: 779.0. Samples: 4071502. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:07:32,026][08516] Avg episode reward: [(0, '2.810'), (1, '0.940')] +[2023-09-26 14:07:37,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 16334848. Throughput: 0: 786.7, 1: 786.7. Samples: 4081310. 
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:07:37,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.920')] +[2023-09-26 14:07:42,024][08516] Fps is (10 sec: 6144.1, 60 sec: 6212.3, 300 sec: 6289.8). Total num frames: 16363520. Throughput: 0: 780.5, 1: 781.8. Samples: 4090480. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:07:42,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.940')] +[2023-09-26 14:07:44,649][09734] Updated weights for policy 0, policy_version 32000 (0.0018) +[2023-09-26 14:07:44,650][09735] Updated weights for policy 1, policy_version 32000 (0.0018) +[2023-09-26 14:07:47,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16392192. Throughput: 0: 788.0, 1: 787.8. Samples: 4095466. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 14:07:47,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.940')] +[2023-09-26 14:07:52,024][08516] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16424960. Throughput: 0: 786.9, 1: 784.3. Samples: 4104950. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 14:07:52,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.970')] +[2023-09-26 14:07:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16457728. Throughput: 0: 790.3, 1: 788.2. Samples: 4114432. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 14:07:57,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.980')] +[2023-09-26 14:07:57,537][09735] Updated weights for policy 1, policy_version 32160 (0.0017) +[2023-09-26 14:07:57,537][09734] Updated weights for policy 0, policy_version 32160 (0.0017) +[2023-09-26 14:08:02,024][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16490496. Throughput: 0: 784.3, 1: 785.0. Samples: 4119066. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 14:08:02,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.980')] +[2023-09-26 14:08:07,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16523264. Throughput: 0: 793.1, 1: 792.3. Samples: 4128724. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:07,025][08516] Avg episode reward: [(0, '2.890'), (1, '1.000')] +[2023-09-26 14:08:10,476][09734] Updated weights for policy 0, policy_version 32320 (0.0016) +[2023-09-26 14:08:10,478][09735] Updated weights for policy 1, policy_version 32320 (0.0019) +[2023-09-26 14:08:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6303.7). Total num frames: 16556032. Throughput: 0: 789.9, 1: 791.4. Samples: 4138184. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:12,025][08516] Avg episode reward: [(0, '2.880'), (1, '1.010')] +[2023-09-26 14:08:17,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16580608. Throughput: 0: 793.7, 1: 792.3. Samples: 4142871. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:17,025][08516] Avg episode reward: [(0, '2.900'), (1, '1.010')] +[2023-09-26 14:08:22,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16613376. Throughput: 0: 785.2, 1: 785.1. Samples: 4151972. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:22,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.990')] +[2023-09-26 14:08:23,590][09735] Updated weights for policy 1, policy_version 32480 (0.0016) +[2023-09-26 14:08:23,590][09734] Updated weights for policy 0, policy_version 32480 (0.0014) +[2023-09-26 14:08:27,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16646144. Throughput: 0: 790.3, 1: 790.0. Samples: 4161592. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:27,025][08516] Avg episode reward: [(0, '2.880'), (1, '0.980')] +[2023-09-26 14:08:32,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 16678912. Throughput: 0: 788.4, 1: 789.6. Samples: 4166475. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:32,025][08516] Avg episode reward: [(0, '2.880'), (1, '0.970')] +[2023-09-26 14:08:36,733][09734] Updated weights for policy 0, policy_version 32640 (0.0017) +[2023-09-26 14:08:36,733][09735] Updated weights for policy 1, policy_version 32640 (0.0016) +[2023-09-26 14:08:37,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16711680. Throughput: 0: 786.4, 1: 787.3. Samples: 4175766. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:37,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.970')] +[2023-09-26 14:08:37,038][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000032640_8355840.pth... +[2023-09-26 14:08:37,038][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000032640_8355840.pth... +[2023-09-26 14:08:37,074][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000029696_7602176.pth +[2023-09-26 14:08:37,075][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000029696_7602176.pth +[2023-09-26 14:08:42,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6348.8, 300 sec: 6303.7). Total num frames: 16744448. Throughput: 0: 785.2, 1: 786.9. Samples: 4185173. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:42,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.970')] +[2023-09-26 14:08:47,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 16777216. Throughput: 0: 788.4, 1: 788.7. Samples: 4190037. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:47,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.960')] +[2023-09-26 14:08:49,611][09734] Updated weights for policy 0, policy_version 32800 (0.0018) +[2023-09-26 14:08:49,612][09735] Updated weights for policy 1, policy_version 32800 (0.0018) +[2023-09-26 14:08:52,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16801792. Throughput: 0: 785.0, 1: 785.3. Samples: 4199388. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:52,025][08516] Avg episode reward: [(0, '2.940'), (1, '0.970')] +[2023-09-26 14:08:57,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16834560. Throughput: 0: 783.5, 1: 782.2. Samples: 4208640. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:08:57,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.970')] +[2023-09-26 14:09:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 16867328. Throughput: 0: 778.1, 1: 779.5. Samples: 4212963. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:09:02,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.970')] +[2023-09-26 14:09:02,799][09735] Updated weights for policy 1, policy_version 32960 (0.0016) +[2023-09-26 14:09:02,801][09734] Updated weights for policy 0, policy_version 32960 (0.0017) +[2023-09-26 14:09:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16900096. Throughput: 0: 787.4, 1: 788.1. Samples: 4222870. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:09:07,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.990')] +[2023-09-26 14:09:12,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 16924672. Throughput: 0: 779.4, 1: 779.6. Samples: 4231750. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:09:12,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.990')] +[2023-09-26 14:09:15,927][09735] Updated weights for policy 1, policy_version 33120 (0.0017) +[2023-09-26 14:09:15,927][09734] Updated weights for policy 0, policy_version 33120 (0.0016) +[2023-09-26 14:09:17,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 16957440. Throughput: 0: 780.1, 1: 779.1. Samples: 4236638. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:09:17,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.980')] +[2023-09-26 14:09:22,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 16990208. Throughput: 0: 784.1, 1: 782.9. Samples: 4246280. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:09:22,025][08516] Avg episode reward: [(0, '2.760'), (1, '0.970')] +[2023-09-26 14:09:27,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17022976. Throughput: 0: 785.0, 1: 783.3. Samples: 4255745. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:09:27,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.970')] +[2023-09-26 14:09:28,849][09734] Updated weights for policy 0, policy_version 33280 (0.0017) +[2023-09-26 14:09:28,849][09735] Updated weights for policy 1, policy_version 33280 (0.0015) +[2023-09-26 14:09:32,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17055744. Throughput: 0: 781.3, 1: 781.3. Samples: 4260355. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:09:32,025][08516] Avg episode reward: [(0, '2.830'), (1, '0.960')] +[2023-09-26 14:09:37,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 17088512. Throughput: 0: 784.8, 1: 782.4. Samples: 4269909. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:09:37,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.940')] +[2023-09-26 14:09:42,024][08516] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 17113088. Throughput: 0: 775.9, 1: 777.4. Samples: 4278540. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:09:42,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.930')] +[2023-09-26 14:09:42,236][09734] Updated weights for policy 0, policy_version 33440 (0.0016) +[2023-09-26 14:09:42,236][09735] Updated weights for policy 1, policy_version 33440 (0.0016) +[2023-09-26 14:09:47,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 17145856. Throughput: 0: 781.7, 1: 783.7. Samples: 4283405. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:09:47,025][08516] Avg episode reward: [(0, '2.930'), (1, '0.930')] +[2023-09-26 14:09:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 17178624. Throughput: 0: 776.0, 1: 773.9. Samples: 4292616. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:09:52,025][08516] Avg episode reward: [(0, '2.940'), (1, '0.950')] +[2023-09-26 14:09:55,203][09734] Updated weights for policy 0, policy_version 33600 (0.0014) +[2023-09-26 14:09:55,204][09735] Updated weights for policy 1, policy_version 33600 (0.0017) +[2023-09-26 14:09:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17211392. Throughput: 0: 786.4, 1: 787.5. Samples: 4302572. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:09:57,025][08516] Avg episode reward: [(0, '2.950'), (1, '0.960')] +[2023-09-26 14:10:02,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17244160. Throughput: 0: 782.3, 1: 782.4. Samples: 4307048. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:02,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.970')] +[2023-09-26 14:10:07,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17276928. Throughput: 0: 780.8, 1: 781.5. Samples: 4316583. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:07,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.940')] +[2023-09-26 14:10:08,226][09734] Updated weights for policy 0, policy_version 33760 (0.0019) +[2023-09-26 14:10:08,227][09735] Updated weights for policy 1, policy_version 33760 (0.0017) +[2023-09-26 14:10:12,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17301504. Throughput: 0: 777.0, 1: 778.1. Samples: 4325726. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:12,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.940')] +[2023-09-26 14:10:17,024][08516] Fps is (10 sec: 5734.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17334272. Throughput: 0: 778.9, 1: 778.2. Samples: 4330427. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:17,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.960')] +[2023-09-26 14:10:21,356][09735] Updated weights for policy 1, policy_version 33920 (0.0015) +[2023-09-26 14:10:21,357][09734] Updated weights for policy 0, policy_version 33920 (0.0015) +[2023-09-26 14:10:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17367040. Throughput: 0: 775.1, 1: 777.6. Samples: 4339781. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:22,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.990')] +[2023-09-26 14:10:27,025][08516] Fps is (10 sec: 6553.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17399808. Throughput: 0: 788.7, 1: 789.6. Samples: 4349568. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:27,026][08516] Avg episode reward: [(0, '2.830'), (1, '1.010')] +[2023-09-26 14:10:32,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17432576. Throughput: 0: 787.1, 1: 785.4. Samples: 4354167. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:32,025][08516] Avg episode reward: [(0, '2.880'), (1, '1.010')] +[2023-09-26 14:10:34,580][09734] Updated weights for policy 0, policy_version 34080 (0.0019) +[2023-09-26 14:10:34,580][09735] Updated weights for policy 1, policy_version 34080 (0.0018) +[2023-09-26 14:10:37,025][08516] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17457152. Throughput: 0: 784.1, 1: 785.9. Samples: 4363266. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:37,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.990')] +[2023-09-26 14:10:37,090][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000034112_8732672.pth... +[2023-09-26 14:10:37,091][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000034112_8732672.pth... +[2023-09-26 14:10:37,120][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000031168_7979008.pth +[2023-09-26 14:10:37,121][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000031168_7979008.pth +[2023-09-26 14:10:42,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17489920. Throughput: 0: 780.8, 1: 780.3. Samples: 4372822. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:10:42,025][08516] Avg episode reward: [(0, '2.880'), (1, '0.990')] +[2023-09-26 14:10:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6262.0). Total num frames: 17522688. Throughput: 0: 787.2, 1: 786.9. Samples: 4377884. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:10:47,025][08516] Avg episode reward: [(0, '2.830'), (1, '0.990')] +[2023-09-26 14:10:47,230][09734] Updated weights for policy 0, policy_version 34240 (0.0015) +[2023-09-26 14:10:47,230][09735] Updated weights for policy 1, policy_version 34240 (0.0018) +[2023-09-26 14:10:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17555456. Throughput: 0: 783.6, 1: 782.8. Samples: 4387070. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:10:52,025][08516] Avg episode reward: [(0, '2.790'), (1, '1.000')] +[2023-09-26 14:10:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17588224. Throughput: 0: 787.9, 1: 788.6. Samples: 4396667. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:10:57,025][08516] Avg episode reward: [(0, '2.770'), (1, '1.010')] +[2023-09-26 14:11:00,383][09735] Updated weights for policy 1, policy_version 34400 (0.0018) +[2023-09-26 14:11:00,383][09734] Updated weights for policy 0, policy_version 34400 (0.0014) +[2023-09-26 14:11:02,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17620992. Throughput: 0: 786.4, 1: 787.0. Samples: 4401230. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:11:02,025][08516] Avg episode reward: [(0, '2.750'), (1, '1.000')] +[2023-09-26 14:11:07,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 17653760. Throughput: 0: 789.5, 1: 788.5. Samples: 4410792. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:07,025][08516] Avg episode reward: [(0, '2.700'), (1, '0.990')] +[2023-09-26 14:11:12,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17678336. Throughput: 0: 784.6, 1: 783.6. Samples: 4420137. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:12,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.970')] +[2023-09-26 14:11:13,464][09734] Updated weights for policy 0, policy_version 34560 (0.0016) +[2023-09-26 14:11:13,464][09735] Updated weights for policy 1, policy_version 34560 (0.0017) +[2023-09-26 14:11:17,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17711104. Throughput: 0: 786.6, 1: 786.7. Samples: 4424964. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:17,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.950')] +[2023-09-26 14:11:22,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17743872. Throughput: 0: 789.8, 1: 788.7. Samples: 4434299. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:22,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.960')] +[2023-09-26 14:11:26,449][09734] Updated weights for policy 0, policy_version 34720 (0.0017) +[2023-09-26 14:11:26,449][09735] Updated weights for policy 1, policy_version 34720 (0.0017) +[2023-09-26 14:11:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 17776640. Throughput: 0: 790.6, 1: 790.4. Samples: 4443965. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:27,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.960')] +[2023-09-26 14:11:32,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17809408. Throughput: 0: 784.3, 1: 784.7. Samples: 4448491. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:32,026][08516] Avg episode reward: [(0, '2.830'), (1, '0.970')] +[2023-09-26 14:11:37,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 17842176. Throughput: 0: 793.1, 1: 793.0. Samples: 4458442. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:37,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.980')] +[2023-09-26 14:11:39,153][09734] Updated weights for policy 0, policy_version 34880 (0.0016) +[2023-09-26 14:11:39,154][09735] Updated weights for policy 1, policy_version 34880 (0.0017) +[2023-09-26 14:11:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 17874944. Throughput: 0: 792.8, 1: 792.5. Samples: 4468009. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:42,025][08516] Avg episode reward: [(0, '2.910'), (1, '0.980')] +[2023-09-26 14:11:47,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 17907712. Throughput: 0: 796.4, 1: 794.8. Samples: 4472832. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:11:47,025][08516] Avg episode reward: [(0, '2.890'), (1, '0.990')] +[2023-09-26 14:11:51,877][09734] Updated weights for policy 0, policy_version 35040 (0.0017) +[2023-09-26 14:11:51,877][09735] Updated weights for policy 1, policy_version 35040 (0.0017) +[2023-09-26 14:11:52,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 17940480. Throughput: 0: 797.0, 1: 798.1. Samples: 4482570. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:11:52,025][08516] Avg episode reward: [(0, '2.910'), (1, '0.990')] +[2023-09-26 14:11:57,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 17973248. Throughput: 0: 799.4, 1: 799.4. Samples: 4492084. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:11:57,025][08516] Avg episode reward: [(0, '2.910'), (1, '0.970')] +[2023-09-26 14:12:02,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17997824. Throughput: 0: 801.7, 1: 801.0. Samples: 4497083. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:12:02,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.970')] +[2023-09-26 14:12:04,813][09734] Updated weights for policy 0, policy_version 35200 (0.0017) +[2023-09-26 14:12:04,813][09735] Updated weights for policy 1, policy_version 35200 (0.0018) +[2023-09-26 14:12:07,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6289.8). Total num frames: 18030592. Throughput: 0: 797.5, 1: 799.1. Samples: 4506146. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:12:07,025][08516] Avg episode reward: [(0, '2.860'), (1, '0.970')] +[2023-09-26 14:12:12,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 18063360. Throughput: 0: 797.6, 1: 797.8. Samples: 4515761. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:12:12,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.970')] +[2023-09-26 14:12:17,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 18096128. Throughput: 0: 794.9, 1: 794.5. Samples: 4520012. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 14:12:17,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.980')] +[2023-09-26 14:12:17,950][09735] Updated weights for policy 1, policy_version 35360 (0.0015) +[2023-09-26 14:12:17,951][09734] Updated weights for policy 0, policy_version 35360 (0.0017) +[2023-09-26 14:12:22,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 18128896. Throughput: 0: 792.4, 1: 793.0. Samples: 4529785. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 14:12:22,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.970')] +[2023-09-26 14:12:27,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 18161664. Throughput: 0: 790.7, 1: 791.2. Samples: 4539196. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 14:12:27,025][08516] Avg episode reward: [(0, '2.920'), (1, '0.970')] +[2023-09-26 14:12:30,950][09735] Updated weights for policy 1, policy_version 35520 (0.0015) +[2023-09-26 14:12:30,950][09734] Updated weights for policy 0, policy_version 35520 (0.0017) +[2023-09-26 14:12:32,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 18186240. Throughput: 0: 791.1, 1: 793.2. Samples: 4544128. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 14:12:32,025][08516] Avg episode reward: [(0, '2.970'), (1, '0.990')] +[2023-09-26 14:12:37,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6289.8). Total num frames: 18219008. Throughput: 0: 785.2, 1: 785.6. Samples: 4553257. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 14:12:37,025][08516] Avg episode reward: [(0, '2.940'), (1, '0.990')] +[2023-09-26 14:12:37,031][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000035584_9109504.pth... +[2023-09-26 14:12:37,031][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000035584_9109504.pth... +[2023-09-26 14:12:37,059][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000032640_8355840.pth +[2023-09-26 14:12:37,065][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000032640_8355840.pth +[2023-09-26 14:12:42,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 18251776. Throughput: 0: 788.1, 1: 786.6. Samples: 4562944. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:12:42,025][08516] Avg episode reward: [(0, '2.880'), (1, '1.000')] +[2023-09-26 14:12:43,876][09735] Updated weights for policy 1, policy_version 35680 (0.0015) +[2023-09-26 14:12:43,876][09734] Updated weights for policy 0, policy_version 35680 (0.0018) +[2023-09-26 14:12:47,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). 
Total num frames: 18284544. Throughput: 0: 781.9, 1: 782.7. Samples: 4567487. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:12:47,025][08516] Avg episode reward: [(0, '2.860'), (1, '1.000')] +[2023-09-26 14:12:52,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18317312. Throughput: 0: 791.6, 1: 789.1. Samples: 4577280. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:12:52,025][08516] Avg episode reward: [(0, '2.870'), (1, '1.010')] +[2023-09-26 14:12:56,897][09734] Updated weights for policy 0, policy_version 35840 (0.0016) +[2023-09-26 14:12:56,898][09735] Updated weights for policy 1, policy_version 35840 (0.0017) +[2023-09-26 14:12:57,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18350080. Throughput: 0: 785.8, 1: 786.6. Samples: 4586522. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:12:57,025][08516] Avg episode reward: [(0, '2.740'), (1, '1.010')] +[2023-09-26 14:13:02,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 18374656. Throughput: 0: 792.8, 1: 793.1. Samples: 4591379. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:13:02,025][08516] Avg episode reward: [(0, '2.650'), (1, '1.010')] +[2023-09-26 14:13:07,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 18407424. Throughput: 0: 784.7, 1: 785.3. Samples: 4600436. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:13:07,025][08516] Avg episode reward: [(0, '2.660'), (1, '1.010')] +[2023-09-26 14:13:09,897][09735] Updated weights for policy 1, policy_version 36000 (0.0017) +[2023-09-26 14:13:09,897][09734] Updated weights for policy 0, policy_version 36000 (0.0016) +[2023-09-26 14:13:12,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18440192. Throughput: 0: 788.2, 1: 786.3. Samples: 4610049. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:13:12,026][08516] Avg episode reward: [(0, '2.690'), (1, '1.000')] +[2023-09-26 14:13:17,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18472960. Throughput: 0: 785.6, 1: 785.1. Samples: 4614807. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:13:17,025][08516] Avg episode reward: [(0, '2.680'), (1, '1.000')] +[2023-09-26 14:13:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18505728. Throughput: 0: 791.3, 1: 789.8. Samples: 4624403. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:13:22,025][08516] Avg episode reward: [(0, '2.710'), (1, '1.000')] +[2023-09-26 14:13:22,697][09735] Updated weights for policy 1, policy_version 36160 (0.0016) +[2023-09-26 14:13:22,697][09734] Updated weights for policy 0, policy_version 36160 (0.0014) +[2023-09-26 14:13:27,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18538496. Throughput: 0: 790.5, 1: 792.2. Samples: 4634167. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:13:27,026][08516] Avg episode reward: [(0, '2.700'), (1, '1.000')] +[2023-09-26 14:13:32,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 18571264. Throughput: 0: 792.4, 1: 790.6. Samples: 4638721. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:13:32,026][08516] Avg episode reward: [(0, '2.770'), (1, '1.000')] +[2023-09-26 14:13:35,839][09734] Updated weights for policy 0, policy_version 36320 (0.0018) +[2023-09-26 14:13:35,839][09735] Updated weights for policy 1, policy_version 36320 (0.0018) +[2023-09-26 14:13:37,025][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 18595840. Throughput: 0: 786.2, 1: 788.3. Samples: 4648131. 
Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:13:37,025][08516] Avg episode reward: [(0, '2.800'), (1, '1.000')] +[2023-09-26 14:13:42,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 18628608. Throughput: 0: 788.9, 1: 788.0. Samples: 4657482. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:13:42,025][08516] Avg episode reward: [(0, '2.810'), (1, '1.000')] +[2023-09-26 14:13:47,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 18661376. Throughput: 0: 790.0, 1: 790.0. Samples: 4662479. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 14:13:47,025][08516] Avg episode reward: [(0, '2.780'), (1, '1.010')] +[2023-09-26 14:13:48,624][09735] Updated weights for policy 1, policy_version 36480 (0.0017) +[2023-09-26 14:13:48,624][09734] Updated weights for policy 0, policy_version 36480 (0.0016) +[2023-09-26 14:13:52,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18694144. Throughput: 0: 793.2, 1: 792.4. Samples: 4671785. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:13:52,025][08516] Avg episode reward: [(0, '2.760'), (1, '1.000')] +[2023-09-26 14:13:57,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18726912. Throughput: 0: 791.8, 1: 793.6. Samples: 4681389. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:13:57,025][08516] Avg episode reward: [(0, '2.760'), (1, '0.970')] +[2023-09-26 14:14:01,758][09734] Updated weights for policy 0, policy_version 36640 (0.0016) +[2023-09-26 14:14:01,758][09735] Updated weights for policy 1, policy_version 36640 (0.0017) +[2023-09-26 14:14:02,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6303.7). Total num frames: 18759680. Throughput: 0: 789.9, 1: 788.4. Samples: 4685829. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:14:02,025][08516] Avg episode reward: [(0, '2.760'), (1, '0.960')] +[2023-09-26 14:14:07,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 18792448. Throughput: 0: 790.1, 1: 790.8. Samples: 4695543. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:14:07,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.950')] +[2023-09-26 14:14:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 18825216. Throughput: 0: 788.1, 1: 787.9. Samples: 4705088. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:14:12,025][08516] Avg episode reward: [(0, '2.790'), (1, '0.960')] +[2023-09-26 14:14:14,532][09734] Updated weights for policy 0, policy_version 36800 (0.0018) +[2023-09-26 14:14:14,532][09735] Updated weights for policy 1, policy_version 36800 (0.0017) +[2023-09-26 14:14:17,024][08516] Fps is (10 sec: 5734.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18849792. Throughput: 0: 792.0, 1: 793.5. Samples: 4710068. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:14:17,025][08516] Avg episode reward: [(0, '2.850'), (1, '0.960')] +[2023-09-26 14:14:22,025][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18882560. Throughput: 0: 791.4, 1: 790.6. Samples: 4719324. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:14:22,025][08516] Avg episode reward: [(0, '2.770'), (1, '0.960')] +[2023-09-26 14:14:27,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 18915328. Throughput: 0: 793.7, 1: 793.4. Samples: 4728900. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:14:27,025][08516] Avg episode reward: [(0, '2.760'), (1, '0.950')] +[2023-09-26 14:14:27,310][09735] Updated weights for policy 1, policy_version 36960 (0.0019) +[2023-09-26 14:14:27,311][09734] Updated weights for policy 0, policy_version 36960 (0.0018) +[2023-09-26 14:14:32,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6303.7). Total num frames: 18948096. Throughput: 0: 793.9, 1: 794.3. Samples: 4733950. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:14:32,025][08516] Avg episode reward: [(0, '2.680'), (1, '0.930')] +[2023-09-26 14:14:37,025][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 18980864. Throughput: 0: 794.0, 1: 794.1. Samples: 4743249. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 14:14:37,026][08516] Avg episode reward: [(0, '2.620'), (1, '0.950')] +[2023-09-26 14:14:37,037][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000037072_9490432.pth... +[2023-09-26 14:14:37,037][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000037072_9490432.pth... +[2023-09-26 14:14:37,074][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000034112_8732672.pth +[2023-09-26 14:14:37,075][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000034112_8732672.pth +[2023-09-26 14:14:40,163][09734] Updated weights for policy 0, policy_version 37120 (0.0018) +[2023-09-26 14:14:40,163][09735] Updated weights for policy 1, policy_version 37120 (0.0016) +[2023-09-26 14:14:42,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 19013632. Throughput: 0: 798.2, 1: 798.6. Samples: 4753245. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 14:14:42,025][08516] Avg episode reward: [(0, '2.590'), (1, '0.960')] +[2023-09-26 14:14:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.0, 300 sec: 6331.4). 
Total num frames: 19046400. Throughput: 0: 797.7, 1: 799.6. Samples: 4757706. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 14:14:47,025][08516] Avg episode reward: [(0, '2.630'), (1, '0.960')] +[2023-09-26 14:14:52,025][08516] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6331.4). Total num frames: 19079168. Throughput: 0: 800.0, 1: 801.7. Samples: 4767620. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 14:14:52,025][08516] Avg episode reward: [(0, '2.640'), (1, '0.980')] +[2023-09-26 14:14:53,004][09734] Updated weights for policy 0, policy_version 37280 (0.0014) +[2023-09-26 14:14:53,004][09735] Updated weights for policy 1, policy_version 37280 (0.0017) +[2023-09-26 14:14:57,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 19111936. Throughput: 0: 797.1, 1: 796.8. Samples: 4776811. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 14:14:57,025][08516] Avg episode reward: [(0, '2.710'), (1, '0.980')] +[2023-09-26 14:15:02,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 19136512. Throughput: 0: 796.9, 1: 797.6. Samples: 4781824. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:15:02,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.980')] +[2023-09-26 14:15:05,935][09735] Updated weights for policy 1, policy_version 37440 (0.0016) +[2023-09-26 14:15:05,935][09734] Updated weights for policy 0, policy_version 37440 (0.0018) +[2023-09-26 14:15:07,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19169280. Throughput: 0: 797.7, 1: 798.0. Samples: 4791133. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:15:07,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.990')] +[2023-09-26 14:15:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19202048. Throughput: 0: 796.4, 1: 795.0. Samples: 4800512. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:15:12,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.990')] +[2023-09-26 14:15:17,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6331.4). Total num frames: 19234816. Throughput: 0: 790.0, 1: 789.6. Samples: 4805030. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:15:17,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.990')] +[2023-09-26 14:15:19,094][09735] Updated weights for policy 1, policy_version 37600 (0.0016) +[2023-09-26 14:15:19,094][09734] Updated weights for policy 0, policy_version 37600 (0.0016) +[2023-09-26 14:15:22,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 19267584. Throughput: 0: 794.0, 1: 794.2. Samples: 4814716. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:15:22,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.990')] +[2023-09-26 14:15:27,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 19300352. Throughput: 0: 788.8, 1: 788.2. Samples: 4824210. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:15:27,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.990')] +[2023-09-26 14:15:31,902][09735] Updated weights for policy 1, policy_version 37760 (0.0017) +[2023-09-26 14:15:31,903][09734] Updated weights for policy 0, policy_version 37760 (0.0018) +[2023-09-26 14:15:32,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 19333120. Throughput: 0: 795.2, 1: 793.2. Samples: 4829184. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:15:32,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.990')] +[2023-09-26 14:15:37,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.6, 300 sec: 6331.4). Total num frames: 19357696. Throughput: 0: 787.6, 1: 787.6. Samples: 4838502. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:15:37,025][08516] Avg episode reward: [(0, '2.740'), (1, '0.990')] +[2023-09-26 14:15:42,024][08516] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19390464. Throughput: 0: 790.9, 1: 791.4. Samples: 4848013. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:15:42,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.990')] +[2023-09-26 14:15:44,866][09735] Updated weights for policy 1, policy_version 37920 (0.0018) +[2023-09-26 14:15:44,866][09734] Updated weights for policy 0, policy_version 37920 (0.0020) +[2023-09-26 14:15:47,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6331.4). Total num frames: 19423232. Throughput: 0: 788.5, 1: 788.2. Samples: 4852775. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 14:15:47,025][08516] Avg episode reward: [(0, '2.810'), (1, '0.980')] +[2023-09-26 14:15:52,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6331.4). Total num frames: 19456000. Throughput: 0: 788.0, 1: 788.2. Samples: 4862062. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:15:52,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.980')] +[2023-09-26 14:15:57,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19488768. Throughput: 0: 791.7, 1: 793.5. Samples: 4871846. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:15:57,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.970')] +[2023-09-26 14:15:57,859][09735] Updated weights for policy 1, policy_version 38080 (0.0018) +[2023-09-26 14:15:57,859][09734] Updated weights for policy 0, policy_version 38080 (0.0015) +[2023-09-26 14:16:02,024][08516] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 19521536. Throughput: 0: 792.6, 1: 791.0. Samples: 4876293. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:16:02,025][08516] Avg episode reward: [(0, '2.820'), (1, '0.960')] +[2023-09-26 14:16:07,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 19554304. Throughput: 0: 791.9, 1: 791.6. Samples: 4885972. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:16:07,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.980')] +[2023-09-26 14:16:10,756][09735] Updated weights for policy 1, policy_version 38240 (0.0016) +[2023-09-26 14:16:10,756][09734] Updated weights for policy 0, policy_version 38240 (0.0015) +[2023-09-26 14:16:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6359.2). Total num frames: 19587072. Throughput: 0: 791.1, 1: 791.2. Samples: 4895412. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 14:16:12,025][08516] Avg episode reward: [(0, '2.870'), (1, '0.980')] +[2023-09-26 14:16:17,024][08516] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19611648. Throughput: 0: 786.6, 1: 789.2. Samples: 4900095. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:16:17,025][08516] Avg episode reward: [(0, '2.800'), (1, '0.990')] +[2023-09-26 14:16:22,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19644416. Throughput: 0: 788.0, 1: 786.8. Samples: 4909368. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:16:22,025][08516] Avg episode reward: [(0, '2.900'), (1, '0.990')] +[2023-09-26 14:16:23,859][09735] Updated weights for policy 1, policy_version 38400 (0.0016) +[2023-09-26 14:16:23,860][09734] Updated weights for policy 0, policy_version 38400 (0.0017) +[2023-09-26 14:16:27,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19677184. Throughput: 0: 791.9, 1: 791.1. Samples: 4919245. 
Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:16:27,025][08516] Avg episode reward: [(0, '2.840'), (1, '0.990')] +[2023-09-26 14:16:32,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6331.5). Total num frames: 19709952. Throughput: 0: 788.6, 1: 788.5. Samples: 4923747. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:16:32,025][08516] Avg episode reward: [(0, '2.750'), (1, '0.990')] +[2023-09-26 14:16:36,834][09734] Updated weights for policy 0, policy_version 38560 (0.0017) +[2023-09-26 14:16:36,834][09735] Updated weights for policy 1, policy_version 38560 (0.0018) +[2023-09-26 14:16:37,024][08516] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 19742720. Throughput: 0: 792.7, 1: 793.4. Samples: 4933438. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 14:16:37,025][08516] Avg episode reward: [(0, '2.780'), (1, '0.990')] +[2023-09-26 14:16:37,033][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000038560_9871360.pth... +[2023-09-26 14:16:37,033][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000038560_9871360.pth... +[2023-09-26 14:16:37,069][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000035584_9109504.pth +[2023-09-26 14:16:37,072][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000035584_9109504.pth +[2023-09-26 14:16:42,024][08516] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 19767296. Throughput: 0: 784.7, 1: 785.2. Samples: 4942489. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 14:16:42,025][08516] Avg episode reward: [(0, '2.810'), (1, '1.000')] +[2023-09-26 14:16:47,024][08516] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 19800064. Throughput: 0: 783.1, 1: 786.4. Samples: 4946918. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 14:16:47,025][08516] Avg episode reward: [(0, '2.850'), (1, '1.000')] +[2023-09-26 14:16:50,030][09734] Updated weights for policy 0, policy_version 38720 (0.0015) +[2023-09-26 14:16:50,031][09735] Updated weights for policy 1, policy_version 38720 (0.0015) +[2023-09-26 14:16:52,025][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 19832832. Throughput: 0: 781.4, 1: 781.6. Samples: 4956307. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 14:16:52,026][08516] Avg episode reward: [(0, '2.800'), (1, '1.000')] +[2023-09-26 14:16:57,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19865600. Throughput: 0: 783.8, 1: 784.0. Samples: 4965964. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 14:16:57,025][08516] Avg episode reward: [(0, '2.840'), (1, '1.000')] +[2023-09-26 14:17:02,024][08516] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6331.4). Total num frames: 19898368. Throughput: 0: 783.6, 1: 782.4. Samples: 4970566. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 14:17:02,025][08516] Avg episode reward: [(0, '2.900'), (1, '1.000')] +[2023-09-26 14:17:02,905][09734] Updated weights for policy 0, policy_version 38880 (0.0019) +[2023-09-26 14:17:02,905][09735] Updated weights for policy 1, policy_version 38880 (0.0018) +[2023-09-26 14:17:07,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19931136. Throughput: 0: 790.4, 1: 790.6. Samples: 4980516. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:17:07,025][08516] Avg episode reward: [(0, '2.860'), (1, '1.000')] +[2023-09-26 14:17:12,024][08516] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6331.4). Total num frames: 19963904. Throughput: 0: 785.2, 1: 786.3. Samples: 4989962. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:17:12,025][08516] Avg episode reward: [(0, '2.830'), (1, '1.000')] +[2023-09-26 14:17:15,709][09734] Updated weights for policy 0, policy_version 39040 (0.0016) +[2023-09-26 14:17:15,711][09735] Updated weights for policy 1, policy_version 39040 (0.0016) +[2023-09-26 14:17:17,024][08516] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6331.4). Total num frames: 19996672. Throughput: 0: 791.9, 1: 791.6. Samples: 4995002. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 14:17:17,025][08516] Avg episode reward: [(0, '2.830'), (1, '1.000')] +[2023-09-26 14:17:19,610][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000039088_10006528.pth... +[2023-09-26 14:17:19,610][09771] Stopping RolloutWorker_w2... +[2023-09-26 14:17:19,610][09775] Stopping RolloutWorker_w6... +[2023-09-26 14:17:19,611][09771] Loop rollout_proc2_evt_loop terminating... +[2023-09-26 14:17:19,610][09769] Stopping RolloutWorker_w1... +[2023-09-26 14:17:19,611][09768] Stopping RolloutWorker_w0... +[2023-09-26 14:17:19,611][09775] Loop rollout_proc6_evt_loop terminating... +[2023-09-26 14:17:19,611][09773] Stopping RolloutWorker_w4... +[2023-09-26 14:17:19,611][09774] Stopping RolloutWorker_w5... +[2023-09-26 14:17:19,611][09772] Stopping RolloutWorker_w3... +[2023-09-26 14:17:19,611][09776] Stopping RolloutWorker_w7... +[2023-09-26 14:17:19,611][08516] Component RolloutWorker_w2 stopped! +[2023-09-26 14:17:19,611][09768] Loop rollout_proc0_evt_loop terminating... +[2023-09-26 14:17:19,611][09773] Loop rollout_proc4_evt_loop terminating... +[2023-09-26 14:17:19,611][09769] Loop rollout_proc1_evt_loop terminating... +[2023-09-26 14:17:19,611][09774] Loop rollout_proc5_evt_loop terminating... +[2023-09-26 14:17:19,611][09772] Loop rollout_proc3_evt_loop terminating... +[2023-09-26 14:17:19,611][09776] Loop rollout_proc7_evt_loop terminating... 
+[2023-09-26 14:17:19,612][08516] Component RolloutWorker_w1 stopped! +[2023-09-26 14:17:19,612][08516] Component RolloutWorker_w6 stopped! +[2023-09-26 14:17:19,613][08516] Component RolloutWorker_w0 stopped! +[2023-09-26 14:17:19,613][09597] Stopping Batcher_1... +[2023-09-26 14:17:19,613][08516] Component RolloutWorker_w4 stopped! +[2023-09-26 14:17:19,614][08516] Component RolloutWorker_w5 stopped! +[2023-09-26 14:17:19,614][08516] Component RolloutWorker_w3 stopped! +[2023-09-26 14:17:19,614][09597] Loop batcher_evt_loop terminating... +[2023-09-26 14:17:19,615][08516] Component Batcher_0 stopped! +[2023-09-26 14:17:19,615][08516] Component RolloutWorker_w7 stopped! +[2023-09-26 14:17:19,616][08516] Component Batcher_1 stopped! +[2023-09-26 14:17:19,611][09359] Stopping Batcher_0... +[2023-09-26 14:17:19,630][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000039088_10006528.pth... +[2023-09-26 14:17:19,632][09359] Loop batcher_evt_loop terminating... +[2023-09-26 14:17:19,643][09359] Removing ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000037072_9490432.pth +[2023-09-26 14:17:19,653][09359] Saving ./train_atari/atari_kangaroo/checkpoint_p0/checkpoint_000039088_10006528.pth... +[2023-09-26 14:17:19,668][09597] Removing ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000037072_9490432.pth +[2023-09-26 14:17:19,670][09734] Weights refcount: 2 0 +[2023-09-26 14:17:19,671][09734] Stopping InferenceWorker_p0-w0... +[2023-09-26 14:17:19,671][09734] Loop inference_proc0-0_evt_loop terminating... +[2023-09-26 14:17:19,672][09735] Weights refcount: 2 0 +[2023-09-26 14:17:19,671][08516] Component InferenceWorker_p0-w0 stopped! +[2023-09-26 14:17:19,673][09735] Stopping InferenceWorker_p1-w0... +[2023-09-26 14:17:19,673][09735] Loop inference_proc1-0_evt_loop terminating... +[2023-09-26 14:17:19,673][08516] Component InferenceWorker_p1-w0 stopped! 
+[2023-09-26 14:17:19,674][09597] Saving ./train_atari/atari_kangaroo/checkpoint_p1/checkpoint_000039088_10006528.pth... +[2023-09-26 14:17:19,689][09359] Stopping LearnerWorker_p0... +[2023-09-26 14:17:19,690][09359] Loop learner_proc0_evt_loop terminating... +[2023-09-26 14:17:19,691][08516] Component LearnerWorker_p0 stopped! +[2023-09-26 14:17:19,736][09597] Stopping LearnerWorker_p1... +[2023-09-26 14:17:19,737][09597] Loop learner_proc1_evt_loop terminating... +[2023-09-26 14:17:19,737][08516] Component LearnerWorker_p1 stopped! +[2023-09-26 14:17:19,738][08516] Waiting for process learner_proc0 to stop... +[2023-09-26 14:17:20,380][08516] Waiting for process learner_proc1 to stop... +[2023-09-26 14:17:20,488][08516] Waiting for process inference_proc0-0 to join... +[2023-09-26 14:17:20,489][08516] Waiting for process inference_proc1-0 to join... +[2023-09-26 14:17:20,490][08516] Waiting for process rollout_proc0 to join... +[2023-09-26 14:17:20,491][08516] Waiting for process rollout_proc1 to join... +[2023-09-26 14:17:20,491][08516] Waiting for process rollout_proc2 to join... +[2023-09-26 14:17:20,492][08516] Waiting for process rollout_proc3 to join... +[2023-09-26 14:17:20,493][08516] Waiting for process rollout_proc4 to join... +[2023-09-26 14:17:20,495][08516] Waiting for process rollout_proc5 to join... +[2023-09-26 14:17:20,495][08516] Waiting for process rollout_proc6 to join... +[2023-09-26 14:17:20,496][08516] Waiting for process rollout_proc7 to join... 
+[2023-09-26 14:17:20,496][08516] Batcher 0 profile tree view: +batching: 20.4615, releasing_batches: 1.8978 +[2023-09-26 14:17:20,497][08516] Batcher 1 profile tree view: +batching: 20.1417, releasing_batches: 1.6332 +[2023-09-26 14:17:20,497][08516] InferenceWorker_p0-w0 profile tree view: +wait_policy: 0.0051 + wait_policy_total: 670.7768 +update_model: 37.7924 + weight_update: 0.0017 +one_step: 0.0011 + handle_policy_step: 2353.0149 + deserialize: 71.1850, stack: 16.7614, obs_to_device_normalize: 568.5972, forward: 1135.5596, send_messages: 97.8301 + prepare_outputs: 313.2368 + to_cpu: 158.2166 +[2023-09-26 14:17:20,497][08516] InferenceWorker_p1-w0 profile tree view: +wait_policy: 0.0052 + wait_policy_total: 671.2263 +update_model: 38.5947 + weight_update: 0.0016 +one_step: 0.0012 + handle_policy_step: 2350.0644 + deserialize: 70.0897, stack: 16.9073, obs_to_device_normalize: 569.6508, forward: 1133.9406, send_messages: 96.0944 + prepare_outputs: 313.1435 + to_cpu: 157.8479 +[2023-09-26 14:17:20,498][08516] Learner 0 profile tree view: +misc: 0.0154, prepare_batch: 31.4948 +train: 460.0208 + epoch_init: 0.1052, minibatch_init: 3.3473, losses_postprocess: 61.1109, kl_divergence: 5.5777, after_optimizer: 22.8867 + calculate_losses: 47.0126 + losses_init: 0.1048, forward_head: 15.0390, bptt_initial: 0.4407, bptt: 0.5092, tail: 10.8018, advantages_returns: 3.1970, losses: 13.1894 + update: 315.7533 + clip: 167.5532 +[2023-09-26 14:17:20,499][08516] Learner 1 profile tree view: +misc: 0.0145, prepare_batch: 31.6760 +train: 460.0090 + epoch_init: 0.1052, minibatch_init: 3.1857, losses_postprocess: 60.8161, kl_divergence: 5.6149, after_optimizer: 22.8338 + calculate_losses: 46.0637 + losses_init: 0.1033, forward_head: 14.0815, bptt_initial: 0.4454, bptt: 0.4645, tail: 10.8201, advantages_returns: 3.2052, losses: 13.2613 + update: 317.1911 + clip: 166.9993 +[2023-09-26 14:17:20,499][08516] RolloutWorker_w0 profile tree view: +wait_for_trajectories: 0.3864, 
enqueue_policy_requests: 45.8416, env_step: 1216.3874, overhead: 30.7083, complete_rollouts: 1.0930 +save_policy_outputs: 56.0081 + split_output_tensors: 19.1399 +[2023-09-26 14:17:20,500][08516] RolloutWorker_w7 profile tree view: +wait_for_trajectories: 0.3921, enqueue_policy_requests: 45.9030, env_step: 1205.4842, overhead: 30.0213, complete_rollouts: 1.0698 +save_policy_outputs: 55.5012 + split_output_tensors: 18.9362 +[2023-09-26 14:17:20,500][08516] Loop Runner_EvtLoop terminating... +[2023-09-26 14:17:20,501][08516] Runner profile tree view: +main_loop: 3279.3101 +[2023-09-26 14:17:20,501][08516] Collected {0: 10006528, 1: 10006528}, FPS: 6102.8