diff --git "a/sf_log.txt" "b/sf_log.txt" new file mode 100644--- /dev/null +++ "b/sf_log.txt" @@ -0,0 +1,2241 @@ +[2023-09-26 07:47:17,521][91478] Saving configuration to ./train_atari/atari_frostbite/config.json... +[2023-09-26 07:47:17,839][91478] Rollout worker 0 uses device cpu +[2023-09-26 07:47:17,839][91478] Rollout worker 1 uses device cpu +[2023-09-26 07:47:17,840][91478] Rollout worker 2 uses device cpu +[2023-09-26 07:47:17,841][91478] Rollout worker 3 uses device cpu +[2023-09-26 07:47:17,841][91478] Rollout worker 4 uses device cpu +[2023-09-26 07:47:17,842][91478] Rollout worker 5 uses device cpu +[2023-09-26 07:47:17,842][91478] Rollout worker 6 uses device cpu +[2023-09-26 07:47:17,843][91478] Rollout worker 7 uses device cpu +[2023-09-26 07:47:17,843][91478] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1 +[2023-09-26 07:47:17,888][91478] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 07:47:17,888][91478] InferenceWorker_p0-w0: min num requests: 1 +[2023-09-26 07:47:17,891][91478] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-26 07:47:17,892][91478] InferenceWorker_p1-w0: min num requests: 1 +[2023-09-26 07:47:17,915][91478] Starting all processes... 
+[2023-09-26 07:47:17,915][91478] Starting process learner_proc0 +[2023-09-26 07:47:19,511][91478] Starting process learner_proc1 +[2023-09-26 07:47:19,514][91993] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 07:47:19,515][91993] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 +[2023-09-26 07:47:19,533][91993] Num visible devices: 1 +[2023-09-26 07:47:19,554][91993] Starting seed is not provided +[2023-09-26 07:47:19,554][91993] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 07:47:19,554][91993] Initializing actor-critic model on device cuda:0 +[2023-09-26 07:47:19,554][91993] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 07:47:19,555][91993] RunningMeanStd input shape: (1,) +[2023-09-26 07:47:19,566][91993] ConvEncoder: input_channels=4 +[2023-09-26 07:47:19,728][91993] Conv encoder output size: 512 +[2023-09-26 07:47:19,730][91993] Created Actor Critic model with architecture: +[2023-09-26 07:47:19,730][91993] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): MultiInputEncoder( + (encoders): ModuleDict( + (obs): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ReLU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ReLU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ReLU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ReLU) + ) + ) + ) + ) + 
) + (core): ModelCoreIdentity() + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=18, bias=True) + ) +) +[2023-09-26 07:47:20,288][91993] Using optimizer +[2023-09-26 07:47:20,289][91993] No checkpoints found +[2023-09-26 07:47:20,289][91993] Did not load from checkpoint, starting from scratch! +[2023-09-26 07:47:20,289][91993] Initialized policy 0 weights for model version 0 +[2023-09-26 07:47:20,291][91993] LearnerWorker_p0 finished initialization! +[2023-09-26 07:47:20,291][91993] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 07:47:21,108][91478] Starting all processes... +[2023-09-26 07:47:21,111][92345] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-26 07:47:21,111][92345] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1 +[2023-09-26 07:47:21,115][91478] Starting process inference_proc0-0 +[2023-09-26 07:47:21,115][91478] Starting process inference_proc1-0 +[2023-09-26 07:47:21,116][91478] Starting process rollout_proc0 +[2023-09-26 07:47:21,130][92345] Num visible devices: 1 +[2023-09-26 07:47:21,116][91478] Starting process rollout_proc1 +[2023-09-26 07:47:21,116][91478] Starting process rollout_proc2 +[2023-09-26 07:47:21,117][91478] Starting process rollout_proc3 +[2023-09-26 07:47:21,120][91478] Starting process rollout_proc4 +[2023-09-26 07:47:21,149][92345] Starting seed is not provided +[2023-09-26 07:47:21,149][92345] Using GPUs [0] for process 1 (actually maps to GPUs [1]) +[2023-09-26 07:47:21,149][92345] Initializing actor-critic model on device cuda:0 +[2023-09-26 07:47:21,149][92345] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 07:47:21,150][92345] RunningMeanStd input shape: (1,) +[2023-09-26 07:47:21,122][91478] Starting process rollout_proc5 +[2023-09-26 
07:47:21,122][91478] Starting process rollout_proc6 +[2023-09-26 07:47:21,125][91478] Starting process rollout_proc7 +[2023-09-26 07:47:21,162][92345] ConvEncoder: input_channels=4 +[2023-09-26 07:47:21,509][92345] Conv encoder output size: 512 +[2023-09-26 07:47:21,511][92345] Created Actor Critic model with architecture: +[2023-09-26 07:47:21,511][92345] ActorCriticSharedWeights( + (obs_normalizer): ObservationNormalizer( + (running_mean_std): RunningMeanStdDictInPlace( + (running_mean_std): ModuleDict( + (obs): RunningMeanStdInPlace() + ) + ) + ) + (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) + (encoder): MultiInputEncoder( + (encoders): ModuleDict( + (obs): ConvEncoder( + (enc): RecursiveScriptModule( + original_name=ConvEncoderImpl + (conv_head): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Conv2d) + (1): RecursiveScriptModule(original_name=ReLU) + (2): RecursiveScriptModule(original_name=Conv2d) + (3): RecursiveScriptModule(original_name=ReLU) + (4): RecursiveScriptModule(original_name=Conv2d) + (5): RecursiveScriptModule(original_name=ReLU) + ) + (mlp_layers): RecursiveScriptModule( + original_name=Sequential + (0): RecursiveScriptModule(original_name=Linear) + (1): RecursiveScriptModule(original_name=ReLU) + ) + ) + ) + ) + ) + (core): ModelCoreIdentity() + (decoder): MlpDecoder( + (mlp): Identity() + ) + (critic_linear): Linear(in_features=512, out_features=1, bias=True) + (action_parameterization): ActionParameterizationDefault( + (distribution_linear): Linear(in_features=512, out_features=18, bias=True) + ) +) +[2023-09-26 07:47:22,101][92345] Using optimizer +[2023-09-26 07:47:22,102][92345] No checkpoints found +[2023-09-26 07:47:22,102][92345] Did not load from checkpoint, starting from scratch! +[2023-09-26 07:47:22,102][92345] Initialized policy 1 weights for model version 0 +[2023-09-26 07:47:22,104][92345] LearnerWorker_p1 finished initialization! 
+[2023-09-26 07:47:22,104][92345] Using GPUs [0] for process 1 (actually maps to GPUs [1]) +[2023-09-26 07:47:23,038][92475] Worker 0 uses CPU cores [0, 1, 2, 3] +[2023-09-26 07:47:23,049][92474] Using GPUs [1] for process 1 (actually maps to GPUs [1]) +[2023-09-26 07:47:23,049][92474] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1 +[2023-09-26 07:47:23,067][92474] Num visible devices: 1 +[2023-09-26 07:47:23,074][92513] Worker 6 uses CPU cores [24, 25, 26, 27] +[2023-09-26 07:47:23,089][92511] Worker 1 uses CPU cores [4, 5, 6, 7] +[2023-09-26 07:47:23,128][92507] Worker 3 uses CPU cores [12, 13, 14, 15] +[2023-09-26 07:47:23,144][92509] Worker 2 uses CPU cores [8, 9, 10, 11] +[2023-09-26 07:47:23,148][92512] Worker 5 uses CPU cores [20, 21, 22, 23] +[2023-09-26 07:47:23,164][92510] Worker 4 uses CPU cores [16, 17, 18, 19] +[2023-09-26 07:47:23,190][92473] Using GPUs [0] for process 0 (actually maps to GPUs [0]) +[2023-09-26 07:47:23,191][92473] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 +[2023-09-26 07:47:23,210][92473] Num visible devices: 1 +[2023-09-26 07:47:23,264][92514] Worker 7 uses CPU cores [28, 29, 30, 31] +[2023-09-26 07:47:23,705][92474] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 07:47:23,705][92474] RunningMeanStd input shape: (1,) +[2023-09-26 07:47:23,716][92474] ConvEncoder: input_channels=4 +[2023-09-26 07:47:23,762][91478] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) +[2023-09-26 07:47:23,778][92473] RunningMeanStd input shape: (4, 84, 84) +[2023-09-26 07:47:23,778][92473] RunningMeanStd input shape: (1,) +[2023-09-26 07:47:23,789][92473] ConvEncoder: input_channels=4 +[2023-09-26 07:47:23,815][92474] Conv encoder output size: 512 +[2023-09-26 07:47:23,821][91478] Inference worker 1-0 is ready! 
+[2023-09-26 07:47:23,883][92473] Conv encoder output size: 512 +[2023-09-26 07:47:23,889][91478] Inference worker 0-0 is ready! +[2023-09-26 07:47:23,889][91478] All inference workers are ready! Signal rollout workers to start! +[2023-09-26 07:47:24,351][92511] Decorrelating experience for 0 frames... +[2023-09-26 07:47:24,354][92514] Decorrelating experience for 0 frames... +[2023-09-26 07:47:24,358][92510] Decorrelating experience for 0 frames... +[2023-09-26 07:47:24,358][92507] Decorrelating experience for 0 frames... +[2023-09-26 07:47:24,360][92512] Decorrelating experience for 0 frames... +[2023-09-26 07:47:24,361][92475] Decorrelating experience for 0 frames... +[2023-09-26 07:47:24,393][92513] Decorrelating experience for 0 frames... +[2023-09-26 07:47:24,408][92509] Decorrelating experience for 0 frames... +[2023-09-26 07:47:28,762][91478] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 8192. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:47:28,763][91478] Avg episode reward: [(0, '1.800'), (1, '1.562')] +[2023-09-26 07:47:33,762][91478] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 32768. Throughput: 0: 404.7, 1: 395.1. Samples: 7998. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 07:47:33,763][91478] Avg episode reward: [(0, '1.667'), (1, '2.178')] +[2023-09-26 07:47:37,875][91478] Heartbeat connected on Batcher_0 +[2023-09-26 07:47:37,878][91478] Heartbeat connected on LearnerWorker_p0 +[2023-09-26 07:47:37,881][91478] Heartbeat connected on Batcher_1 +[2023-09-26 07:47:37,884][91478] Heartbeat connected on LearnerWorker_p1 +[2023-09-26 07:47:37,890][91478] Heartbeat connected on InferenceWorker_p0-w0 +[2023-09-26 07:47:37,894][91478] Heartbeat connected on InferenceWorker_p1-w0 +[2023-09-26 07:47:37,897][91478] Heartbeat connected on RolloutWorker_w0 +[2023-09-26 07:47:37,898][91478] Heartbeat connected on RolloutWorker_w1 +[2023-09-26 07:47:37,903][91478] Heartbeat connected on RolloutWorker_w2 +[2023-09-26 07:47:37,903][91478] Heartbeat connected on RolloutWorker_w3 +[2023-09-26 07:47:37,909][91478] Heartbeat connected on RolloutWorker_w5 +[2023-09-26 07:47:37,912][91478] Heartbeat connected on RolloutWorker_w6 +[2023-09-26 07:47:37,913][91478] Heartbeat connected on RolloutWorker_w4 +[2023-09-26 07:47:37,914][91478] Heartbeat connected on RolloutWorker_w7 +[2023-09-26 07:47:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 65536. Throughput: 0: 409.1, 1: 409.6. Samples: 12280. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 07:47:38,763][91478] Avg episode reward: [(0, '1.725'), (1, '1.872')] +[2023-09-26 07:47:41,424][92473] Updated weights for policy 0, policy_version 160 (0.0017) +[2023-09-26 07:47:41,425][92474] Updated weights for policy 1, policy_version 160 (0.0017) +[2023-09-26 07:47:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 4505.6, 300 sec: 4505.6). Total num frames: 90112. Throughput: 0: 534.7, 1: 531.3. Samples: 21320. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 07:47:43,762][91478] Avg episode reward: [(0, '1.960'), (1, '1.810')] +[2023-09-26 07:47:48,762][91478] Fps is (10 sec: 5734.6, 60 sec: 4915.3, 300 sec: 4915.3). Total num frames: 122880. Throughput: 0: 615.6, 1: 614.6. Samples: 30753. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:47:48,762][91478] Avg episode reward: [(0, '2.250'), (1, '2.260')] +[2023-09-26 07:47:53,762][91478] Fps is (10 sec: 6553.5, 60 sec: 5188.3, 300 sec: 5188.3). Total num frames: 155648. Throughput: 0: 595.1, 1: 592.6. Samples: 35631. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 07:47:53,763][91478] Avg episode reward: [(0, '2.870'), (1, '2.630')] +[2023-09-26 07:47:54,335][92474] Updated weights for policy 1, policy_version 320 (0.0019) +[2023-09-26 07:47:54,336][92473] Updated weights for policy 0, policy_version 320 (0.0016) +[2023-09-26 07:47:58,762][91478] Fps is (10 sec: 6553.4, 60 sec: 5383.3, 300 sec: 5383.3). Total num frames: 188416. Throughput: 0: 643.7, 1: 643.7. Samples: 45056. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 07:47:58,763][91478] Avg episode reward: [(0, '2.920'), (1, '2.900')] +[2023-09-26 07:48:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 5529.6, 300 sec: 5529.6). Total num frames: 221184. Throughput: 0: 681.2, 1: 680.6. Samples: 54471. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:48:03,763][91478] Avg episode reward: [(0, '3.530'), (1, '2.920')] +[2023-09-26 07:48:03,764][91993] Saving new best policy, reward=3.530! +[2023-09-26 07:48:03,765][92345] Saving new best policy, reward=2.920! +[2023-09-26 07:48:07,542][92474] Updated weights for policy 1, policy_version 480 (0.0020) +[2023-09-26 07:48:07,542][92473] Updated weights for policy 0, policy_version 480 (0.0018) +[2023-09-26 07:48:08,762][91478] Fps is (10 sec: 5734.5, 60 sec: 5461.3, 300 sec: 5461.3). Total num frames: 245760. Throughput: 0: 658.5, 1: 657.3. Samples: 59211. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 07:48:08,763][91478] Avg episode reward: [(0, '3.820'), (1, '2.790')] +[2023-09-26 07:48:08,957][91993] Saving new best policy, reward=3.820! +[2023-09-26 07:48:13,762][91478] Fps is (10 sec: 5734.6, 60 sec: 5570.6, 300 sec: 5570.6). Total num frames: 278528. Throughput: 0: 736.2, 1: 735.0. Samples: 68251. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 07:48:13,762][91478] Avg episode reward: [(0, '4.040'), (1, '3.340')] +[2023-09-26 07:48:13,767][92345] Saving new best policy, reward=3.340! +[2023-09-26 07:48:13,767][91993] Saving new best policy, reward=4.040! +[2023-09-26 07:48:18,762][91478] Fps is (10 sec: 6553.6, 60 sec: 5659.9, 300 sec: 5659.9). Total num frames: 311296. Throughput: 0: 772.9, 1: 773.5. Samples: 77585. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:48:18,763][91478] Avg episode reward: [(0, '3.910'), (1, '3.430')] +[2023-09-26 07:48:18,764][92345] Saving new best policy, reward=3.430! +[2023-09-26 07:48:20,814][92474] Updated weights for policy 1, policy_version 640 (0.0018) +[2023-09-26 07:48:20,814][92473] Updated weights for policy 0, policy_version 640 (0.0018) +[2023-09-26 07:48:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 344064. Throughput: 0: 775.3, 1: 773.8. Samples: 81991. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 07:48:23,762][91478] Avg episode reward: [(0, '4.260'), (1, '3.570')] +[2023-09-26 07:48:23,763][92345] Saving new best policy, reward=3.570! +[2023-09-26 07:48:23,763][91993] Saving new best policy, reward=4.260! +[2023-09-26 07:48:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 5797.4). Total num frames: 376832. Throughput: 0: 784.2, 1: 784.0. Samples: 91888. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 07:48:28,763][91478] Avg episode reward: [(0, '4.510'), (1, '3.790')] +[2023-09-26 07:48:28,773][91993] Saving new best policy, reward=4.510! +[2023-09-26 07:48:28,774][92345] Saving new best policy, reward=3.790! +[2023-09-26 07:48:33,696][92473] Updated weights for policy 0, policy_version 800 (0.0018) +[2023-09-26 07:48:33,696][92474] Updated weights for policy 1, policy_version 800 (0.0017) +[2023-09-26 07:48:33,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 5851.4). Total num frames: 409600. Throughput: 0: 782.8, 1: 782.2. Samples: 101175. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:48:33,763][91478] Avg episode reward: [(0, '4.790'), (1, '4.210')] +[2023-09-26 07:48:33,764][91993] Saving new best policy, reward=4.790! +[2023-09-26 07:48:33,764][92345] Saving new best policy, reward=4.210! +[2023-09-26 07:48:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 5789.0). Total num frames: 434176. Throughput: 0: 782.0, 1: 782.2. Samples: 106021. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 07:48:38,763][91478] Avg episode reward: [(0, '5.000'), (1, '4.170')] +[2023-09-26 07:48:38,764][91993] Saving new best policy, reward=5.000! +[2023-09-26 07:48:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 5836.8). Total num frames: 466944. Throughput: 0: 774.8, 1: 773.8. Samples: 114741. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:48:43,763][91478] Avg episode reward: [(0, '5.120'), (1, '4.140')] +[2023-09-26 07:48:43,773][91993] Saving new best policy, reward=5.120! +[2023-09-26 07:48:47,115][92474] Updated weights for policy 1, policy_version 960 (0.0017) +[2023-09-26 07:48:47,115][92473] Updated weights for policy 0, policy_version 960 (0.0017) +[2023-09-26 07:48:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5879.0). Total num frames: 499712. Throughput: 0: 776.8, 1: 776.0. Samples: 124348. 
Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 07:48:48,763][91478] Avg episode reward: [(0, '5.140'), (1, '4.330')] +[2023-09-26 07:48:48,764][91993] Saving new best policy, reward=5.140! +[2023-09-26 07:48:48,764][92345] Saving new best policy, reward=4.330! +[2023-09-26 07:48:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5916.4). Total num frames: 532480. Throughput: 0: 775.1, 1: 775.5. Samples: 128989. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 07:48:53,763][91478] Avg episode reward: [(0, '5.090'), (1, '4.570')] +[2023-09-26 07:48:53,765][92345] Saving new best policy, reward=4.570! +[2023-09-26 07:48:58,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 5863.7). Total num frames: 557056. Throughput: 0: 779.0, 1: 779.0. Samples: 138362. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 07:48:58,763][91478] Avg episode reward: [(0, '5.220'), (1, '4.840')] +[2023-09-26 07:48:58,833][92345] Saving new best policy, reward=4.840! +[2023-09-26 07:48:58,899][91993] Saving new best policy, reward=5.220! +[2023-09-26 07:49:00,167][92473] Updated weights for policy 0, policy_version 1120 (0.0016) +[2023-09-26 07:49:00,168][92474] Updated weights for policy 1, policy_version 1120 (0.0017) +[2023-09-26 07:49:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 5898.2). Total num frames: 589824. Throughput: 0: 778.7, 1: 778.7. Samples: 147668. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:49:03,763][91478] Avg episode reward: [(0, '5.290'), (1, '5.120')] +[2023-09-26 07:49:03,764][91993] Saving new best policy, reward=5.290! +[2023-09-26 07:49:03,764][92345] Saving new best policy, reward=5.120! +[2023-09-26 07:49:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5929.5). Total num frames: 622592. Throughput: 0: 782.4, 1: 782.9. Samples: 152428. 
Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 07:49:08,763][91478] Avg episode reward: [(0, '5.410'), (1, '5.360')] +[2023-09-26 07:49:08,764][91993] Saving new best policy, reward=5.410! +[2023-09-26 07:49:08,764][92345] Saving new best policy, reward=5.360! +[2023-09-26 07:49:13,185][92474] Updated weights for policy 1, policy_version 1280 (0.0017) +[2023-09-26 07:49:13,185][92473] Updated weights for policy 0, policy_version 1280 (0.0018) +[2023-09-26 07:49:13,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 5957.8). Total num frames: 655360. Throughput: 0: 776.0, 1: 777.6. Samples: 161798. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 07:49:13,762][91478] Avg episode reward: [(0, '5.470'), (1, '5.570')] +[2023-09-26 07:49:13,770][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000001280_327680.pth... +[2023-09-26 07:49:13,770][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000001280_327680.pth... +[2023-09-26 07:49:13,807][92345] Saving new best policy, reward=5.570! +[2023-09-26 07:49:13,808][91993] Saving new best policy, reward=5.470! +[2023-09-26 07:49:18,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 5983.7). Total num frames: 688128. Throughput: 0: 782.4, 1: 782.6. Samples: 171596. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:49:18,763][91478] Avg episode reward: [(0, '5.410'), (1, '5.360')] +[2023-09-26 07:49:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6007.5). Total num frames: 720896. Throughput: 0: 778.2, 1: 779.7. Samples: 176128. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 07:49:23,763][91478] Avg episode reward: [(0, '5.600'), (1, '5.430')] +[2023-09-26 07:49:23,764][91993] Saving new best policy, reward=5.600! 
+[2023-09-26 07:49:26,168][92474] Updated weights for policy 1, policy_version 1440 (0.0019) +[2023-09-26 07:49:26,168][92473] Updated weights for policy 0, policy_version 1440 (0.0018) +[2023-09-26 07:49:28,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 5963.8). Total num frames: 745472. Throughput: 0: 786.9, 1: 787.5. Samples: 185588. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:49:28,762][91478] Avg episode reward: [(0, '5.780'), (1, '5.390')] +[2023-09-26 07:49:28,904][91993] Saving new best policy, reward=5.780! +[2023-09-26 07:49:33,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 5986.5). Total num frames: 778240. Throughput: 0: 780.0, 1: 780.9. Samples: 194588. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:49:33,762][91478] Avg episode reward: [(0, '5.940'), (1, '5.490')] +[2023-09-26 07:49:33,763][91993] Saving new best policy, reward=5.940! +[2023-09-26 07:49:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6007.5). Total num frames: 811008. Throughput: 0: 779.8, 1: 779.3. Samples: 199151. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 07:49:38,762][91478] Avg episode reward: [(0, '5.830'), (1, '5.420')] +[2023-09-26 07:49:39,447][92473] Updated weights for policy 0, policy_version 1600 (0.0018) +[2023-09-26 07:49:39,448][92474] Updated weights for policy 1, policy_version 1600 (0.0019) +[2023-09-26 07:49:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6027.0). Total num frames: 843776. Throughput: 0: 783.2, 1: 783.9. Samples: 208879. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:49:43,762][91478] Avg episode reward: [(0, '5.930'), (1, '5.400')] +[2023-09-26 07:49:48,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 5988.6). Total num frames: 868352. Throughput: 0: 778.8, 1: 778.8. Samples: 217760. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 07:49:48,763][91478] Avg episode reward: [(0, '5.990'), (1, '5.480')] +[2023-09-26 07:49:48,773][91993] Saving new best policy, reward=5.990! +[2023-09-26 07:49:52,894][92473] Updated weights for policy 0, policy_version 1760 (0.0017) +[2023-09-26 07:49:52,894][92474] Updated weights for policy 1, policy_version 1760 (0.0017) +[2023-09-26 07:49:53,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6007.5). Total num frames: 901120. Throughput: 0: 780.0, 1: 780.0. Samples: 222626. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 07:49:53,763][91478] Avg episode reward: [(0, '5.990'), (1, '5.540')] +[2023-09-26 07:49:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6025.1). Total num frames: 933888. Throughput: 0: 776.2, 1: 775.2. Samples: 231612. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:49:58,763][91478] Avg episode reward: [(0, '6.080'), (1, '5.610')] +[2023-09-26 07:49:58,773][91993] Saving new best policy, reward=6.080! +[2023-09-26 07:49:58,773][92345] Saving new best policy, reward=5.610! +[2023-09-26 07:50:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6041.6). Total num frames: 966656. Throughput: 0: 773.4, 1: 773.4. Samples: 241199. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:50:03,763][91478] Avg episode reward: [(0, '5.920'), (1, '5.560')] +[2023-09-26 07:50:05,969][92474] Updated weights for policy 1, policy_version 1920 (0.0015) +[2023-09-26 07:50:05,969][92473] Updated weights for policy 0, policy_version 1920 (0.0017) +[2023-09-26 07:50:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6057.1). Total num frames: 999424. Throughput: 0: 773.7, 1: 773.7. Samples: 245760. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:50:08,762][91478] Avg episode reward: [(0, '5.770'), (1, '5.600')] +[2023-09-26 07:50:13,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.2, 300 sec: 6047.6). Total num frames: 1028096. Throughput: 0: 774.4, 1: 773.3. Samples: 255232. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:50:13,763][91478] Avg episode reward: [(0, '5.730'), (1, '5.600')] +[2023-09-26 07:50:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6038.7). Total num frames: 1056768. Throughput: 0: 776.5, 1: 775.4. Samples: 264422. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:50:18,763][91478] Avg episode reward: [(0, '5.770'), (1, '5.650')] +[2023-09-26 07:50:18,764][92345] Saving new best policy, reward=5.650! +[2023-09-26 07:50:19,017][92474] Updated weights for policy 1, policy_version 2080 (0.0017) +[2023-09-26 07:50:19,017][92473] Updated weights for policy 0, policy_version 2080 (0.0019) +[2023-09-26 07:50:23,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6053.0). Total num frames: 1089536. Throughput: 0: 780.3, 1: 779.8. Samples: 269356. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 07:50:23,762][91478] Avg episode reward: [(0, '5.940'), (1, '5.800')] +[2023-09-26 07:50:23,763][92345] Saving new best policy, reward=5.800! +[2023-09-26 07:50:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6066.5). Total num frames: 1122304. Throughput: 0: 776.8, 1: 775.6. Samples: 278735. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:50:28,763][91478] Avg episode reward: [(0, '6.040'), (1, '5.770')] +[2023-09-26 07:50:32,162][92474] Updated weights for policy 1, policy_version 2240 (0.0017) +[2023-09-26 07:50:32,162][92473] Updated weights for policy 0, policy_version 2240 (0.0016) +[2023-09-26 07:50:33,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6079.3). Total num frames: 1155072. Throughput: 0: 780.5, 1: 779.8. 
Samples: 287973. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:50:33,763][91478] Avg episode reward: [(0, '6.260'), (1, '5.870')] +[2023-09-26 07:50:33,764][91993] Saving new best policy, reward=6.260! +[2023-09-26 07:50:33,764][92345] Saving new best policy, reward=5.870! +[2023-09-26 07:50:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6091.5). Total num frames: 1187840. Throughput: 0: 780.0, 1: 780.2. Samples: 292837. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 07:50:38,763][91478] Avg episode reward: [(0, '6.300'), (1, '5.840')] +[2023-09-26 07:50:38,764][91993] Saving new best policy, reward=6.300! +[2023-09-26 07:50:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6062.1). Total num frames: 1212416. Throughput: 0: 783.0, 1: 782.9. Samples: 302077. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 07:50:43,763][91478] Avg episode reward: [(0, '6.270'), (1, '6.050')] +[2023-09-26 07:50:43,883][92345] Saving new best policy, reward=6.050! +[2023-09-26 07:50:45,209][92474] Updated weights for policy 1, policy_version 2400 (0.0017) +[2023-09-26 07:50:45,209][92473] Updated weights for policy 0, policy_version 2400 (0.0019) +[2023-09-26 07:50:48,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6074.1). Total num frames: 1245184. Throughput: 0: 780.4, 1: 779.8. Samples: 311408. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:50:48,762][91478] Avg episode reward: [(0, '6.320'), (1, '5.920')] +[2023-09-26 07:50:48,763][91993] Saving new best policy, reward=6.320! +[2023-09-26 07:50:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6085.5). Total num frames: 1277952. Throughput: 0: 783.2, 1: 781.6. Samples: 316178. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:50:53,762][91478] Avg episode reward: [(0, '6.140'), (1, '6.060')] +[2023-09-26 07:50:53,763][92345] Saving new best policy, reward=6.060! 
+[2023-09-26 07:50:58,456][92474] Updated weights for policy 1, policy_version 2560 (0.0016) +[2023-09-26 07:50:58,458][92473] Updated weights for policy 0, policy_version 2560 (0.0018) +[2023-09-26 07:50:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6096.4). Total num frames: 1310720. Throughput: 0: 780.8, 1: 783.0. Samples: 325602. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 07:50:58,762][91478] Avg episode reward: [(0, '6.250'), (1, '6.160')] +[2023-09-26 07:50:58,770][92345] Saving new best policy, reward=6.160! +[2023-09-26 07:51:03,762][91478] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6088.1). Total num frames: 1339392. Throughput: 0: 779.1, 1: 779.0. Samples: 334535. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:51:03,763][91478] Avg episode reward: [(0, '6.330'), (1, '6.260')] +[2023-09-26 07:51:03,764][91993] Saving new best policy, reward=6.330! +[2023-09-26 07:51:03,780][92345] Saving new best policy, reward=6.260! +[2023-09-26 07:51:08,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6080.3). Total num frames: 1368064. Throughput: 0: 777.8, 1: 777.4. Samples: 339340. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:51:08,763][91478] Avg episode reward: [(0, '6.490'), (1, '6.250')] +[2023-09-26 07:51:08,764][91993] Saving new best policy, reward=6.490! +[2023-09-26 07:51:11,703][92473] Updated weights for policy 0, policy_version 2720 (0.0017) +[2023-09-26 07:51:11,703][92474] Updated weights for policy 1, policy_version 2720 (0.0017) +[2023-09-26 07:51:13,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6090.6). Total num frames: 1400832. Throughput: 0: 774.2, 1: 774.6. Samples: 348430. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:51:13,763][91478] Avg episode reward: [(0, '6.240'), (1, '6.230')] +[2023-09-26 07:51:13,773][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000002736_700416.pth... 
+[2023-09-26 07:51:13,774][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000002736_700416.pth...
+[2023-09-26 07:51:18,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6100.4). Total num frames: 1433600. Throughput: 0: 777.2, 1: 778.7. Samples: 357991. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 07:51:18,763][91478] Avg episode reward: [(0, '6.330'), (1, '6.130')]
+[2023-09-26 07:51:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6109.9). Total num frames: 1466368. Throughput: 0: 773.8, 1: 774.3. Samples: 362501. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:51:23,763][91478] Avg episode reward: [(0, '6.420'), (1, '6.130')]
+[2023-09-26 07:51:24,744][92473] Updated weights for policy 0, policy_version 2880 (0.0019)
+[2023-09-26 07:51:24,744][92474] Updated weights for policy 1, policy_version 2880 (0.0018)
+[2023-09-26 07:51:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6118.9). Total num frames: 1499136. Throughput: 0: 779.9, 1: 777.3. Samples: 372151. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:51:28,763][91478] Avg episode reward: [(0, '6.130'), (1, '6.170')]
+[2023-09-26 07:51:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6094.8). Total num frames: 1523712. Throughput: 0: 775.6, 1: 775.9. Samples: 381224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:51:33,763][91478] Avg episode reward: [(0, '6.140'), (1, '6.160')]
+[2023-09-26 07:51:37,859][92473] Updated weights for policy 0, policy_version 3040 (0.0017)
+[2023-09-26 07:51:37,860][92474] Updated weights for policy 1, policy_version 3040 (0.0017)
+[2023-09-26 07:51:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6103.8). Total num frames: 1556480. Throughput: 0: 776.5, 1: 777.0. Samples: 386084. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:51:38,763][91478] Avg episode reward: [(0, '6.250'), (1, '6.170')]
+[2023-09-26 07:51:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6112.5). Total num frames: 1589248. Throughput: 0: 774.6, 1: 773.7. Samples: 395276. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:51:43,763][91478] Avg episode reward: [(0, '6.330'), (1, '6.080')]
+[2023-09-26 07:51:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6120.8). Total num frames: 1622016. Throughput: 0: 780.9, 1: 780.9. Samples: 404814. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 07:51:48,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.400')]
+[2023-09-26 07:51:48,764][91993] Saving new best policy, reward=6.580!
+[2023-09-26 07:51:48,764][92345] Saving new best policy, reward=6.400!
+[2023-09-26 07:51:50,965][92474] Updated weights for policy 1, policy_version 3200 (0.0016)
+[2023-09-26 07:51:50,965][92473] Updated weights for policy 0, policy_version 3200 (0.0018)
+[2023-09-26 07:51:53,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6128.8). Total num frames: 1654784. Throughput: 0: 779.6, 1: 781.7. Samples: 409600. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:51:53,762][91478] Avg episode reward: [(0, '6.450'), (1, '6.360')]
+[2023-09-26 07:51:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6106.8). Total num frames: 1679360. Throughput: 0: 784.3, 1: 784.5. Samples: 419024. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:51:58,763][91478] Avg episode reward: [(0, '6.520'), (1, '6.390')]
+[2023-09-26 07:52:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6114.8). Total num frames: 1712128. Throughput: 0: 777.9, 1: 778.5. Samples: 428032. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:52:03,762][91478] Avg episode reward: [(0, '6.270'), (1, '6.310')]
+[2023-09-26 07:52:04,307][92474] Updated weights for policy 1, policy_version 3360 (0.0017)
+[2023-09-26 07:52:04,307][92473] Updated weights for policy 0, policy_version 3360 (0.0018)
+[2023-09-26 07:52:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6122.4). Total num frames: 1744896. Throughput: 0: 780.2, 1: 778.7. Samples: 432653. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 07:52:08,763][91478] Avg episode reward: [(0, '6.400'), (1, '6.320')]
+[2023-09-26 07:52:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.6, 300 sec: 6129.9). Total num frames: 1777664. Throughput: 0: 777.4, 1: 777.2. Samples: 442110. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 07:52:13,762][91478] Avg episode reward: [(0, '6.420'), (1, '6.310')]
+[2023-09-26 07:52:17,605][92473] Updated weights for policy 0, policy_version 3520 (0.0017)
+[2023-09-26 07:52:17,606][92474] Updated weights for policy 1, policy_version 3520 (0.0018)
+[2023-09-26 07:52:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6109.3). Total num frames: 1802240. Throughput: 0: 775.8, 1: 775.8. Samples: 451046. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 07:52:18,763][91478] Avg episode reward: [(0, '6.250'), (1, '6.390')]
+[2023-09-26 07:52:23,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6192.6). Total num frames: 1835008. Throughput: 0: 777.7, 1: 776.9. Samples: 456039. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 07:52:23,762][91478] Avg episode reward: [(0, '6.460'), (1, '6.470')]
+[2023-09-26 07:52:23,763][92345] Saving new best policy, reward=6.470!
+[2023-09-26 07:52:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 1867776. Throughput: 0: 778.0, 1: 776.5. Samples: 465227. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:52:28,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.530')]
+[2023-09-26 07:52:28,774][91993] Saving new best policy, reward=6.620!
+[2023-09-26 07:52:28,774][92345] Saving new best policy, reward=6.530!
+[2023-09-26 07:52:30,555][92473] Updated weights for policy 0, policy_version 3680 (0.0017)
+[2023-09-26 07:52:30,555][92474] Updated weights for policy 1, policy_version 3680 (0.0017)
+[2023-09-26 07:52:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 1900544. Throughput: 0: 779.5, 1: 780.8. Samples: 475029. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:52:33,762][91478] Avg episode reward: [(0, '6.460'), (1, '6.530')]
+[2023-09-26 07:52:38,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 1933312. Throughput: 0: 777.6, 1: 776.0. Samples: 479510. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:52:38,762][91478] Avg episode reward: [(0, '6.550'), (1, '6.390')]
+[2023-09-26 07:52:43,467][92473] Updated weights for policy 0, policy_version 3840 (0.0019)
+[2023-09-26 07:52:43,467][92474] Updated weights for policy 1, policy_version 3840 (0.0017)
+[2023-09-26 07:52:43,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 1966080. Throughput: 0: 780.3, 1: 779.9. Samples: 489232. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:52:43,763][91478] Avg episode reward: [(0, '6.340'), (1, '6.470')]
+[2023-09-26 07:52:48,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 1998848. Throughput: 0: 783.4, 1: 781.9. Samples: 498470. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 07:52:48,763][91478] Avg episode reward: [(0, '6.430'), (1, '6.550')]
+[2023-09-26 07:52:48,765][92345] Saving new best policy, reward=6.550!
+[2023-09-26 07:52:53,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2023424. Throughput: 0: 785.0, 1: 785.1. Samples: 503311. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 07:52:53,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.670')]
+[2023-09-26 07:52:53,764][91993] Saving new best policy, reward=6.680!
+[2023-09-26 07:52:53,764][92345] Saving new best policy, reward=6.670!
+[2023-09-26 07:52:56,777][92473] Updated weights for policy 0, policy_version 4000 (0.0016)
+[2023-09-26 07:52:56,777][92474] Updated weights for policy 1, policy_version 4000 (0.0017)
+[2023-09-26 07:52:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2056192. Throughput: 0: 776.6, 1: 779.4. Samples: 512131. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:52:58,763][91478] Avg episode reward: [(0, '6.410'), (1, '6.610')]
+[2023-09-26 07:53:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2088960. Throughput: 0: 785.6, 1: 785.5. Samples: 521748. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:53:03,763][91478] Avg episode reward: [(0, '6.490'), (1, '6.520')]
+[2023-09-26 07:53:08,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2121728. Throughput: 0: 780.1, 1: 781.4. Samples: 526307. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:53:08,763][91478] Avg episode reward: [(0, '6.400'), (1, '6.330')]
+[2023-09-26 07:53:09,982][92474] Updated weights for policy 1, policy_version 4160 (0.0017)
+[2023-09-26 07:53:09,983][92473] Updated weights for policy 0, policy_version 4160 (0.0017)
+[2023-09-26 07:53:13,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2146304. Throughput: 0: 779.7, 1: 778.9. Samples: 535365. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:53:13,763][91478] Avg episode reward: [(0, '6.540'), (1, '6.370')]
+[2023-09-26 07:53:13,776][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000004192_1073152.pth...
+[2023-09-26 07:53:13,776][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000004192_1073152.pth...
+[2023-09-26 07:53:13,808][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000001280_327680.pth
+[2023-09-26 07:53:13,811][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000001280_327680.pth
+[2023-09-26 07:53:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2179072. Throughput: 0: 774.7, 1: 774.6. Samples: 544746. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:53:18,763][91478] Avg episode reward: [(0, '6.520'), (1, '6.250')]
+[2023-09-26 07:53:23,427][92473] Updated weights for policy 0, policy_version 4320 (0.0017)
+[2023-09-26 07:53:23,427][92474] Updated weights for policy 1, policy_version 4320 (0.0017)
+[2023-09-26 07:53:23,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2211840. Throughput: 0: 770.1, 1: 771.4. Samples: 548879. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:53:23,762][91478] Avg episode reward: [(0, '6.470'), (1, '6.470')]
+[2023-09-26 07:53:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2244608. Throughput: 0: 772.4, 1: 772.2. Samples: 558738. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 07:53:28,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.320')]
+[2023-09-26 07:53:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2277376. Throughput: 0: 773.7, 1: 774.0. Samples: 568118. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 07:53:33,762][91478] Avg episode reward: [(0, '6.700'), (1, '6.350')]
+[2023-09-26 07:53:33,763][91993] Saving new best policy, reward=6.700!
+[2023-09-26 07:53:36,327][92473] Updated weights for policy 0, policy_version 4480 (0.0020)
+[2023-09-26 07:53:36,327][92474] Updated weights for policy 1, policy_version 4480 (0.0020)
+[2023-09-26 07:53:38,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2301952. Throughput: 0: 774.5, 1: 773.6. Samples: 572977. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 07:53:38,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.220')]
+[2023-09-26 07:53:43,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2334720. Throughput: 0: 776.1, 1: 776.4. Samples: 581992. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 07:53:43,763][91478] Avg episode reward: [(0, '6.460'), (1, '6.090')]
+[2023-09-26 07:53:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2367488. Throughput: 0: 778.4, 1: 778.3. Samples: 591802. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:53:48,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.310')]
+[2023-09-26 07:53:49,431][92473] Updated weights for policy 0, policy_version 4640 (0.0017)
+[2023-09-26 07:53:49,431][92474] Updated weights for policy 1, policy_version 4640 (0.0018)
+[2023-09-26 07:53:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 2400256. Throughput: 0: 775.4, 1: 774.6. Samples: 596057. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 07:53:53,763][91478] Avg episode reward: [(0, '6.430'), (1, '6.280')]
+[2023-09-26 07:53:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2424832. Throughput: 0: 773.3, 1: 774.6. Samples: 605019. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 07:53:58,763][91478] Avg episode reward: [(0, '6.420'), (1, '6.410')]
+[2023-09-26 07:54:03,079][92473] Updated weights for policy 0, policy_version 4800 (0.0017)
+[2023-09-26 07:54:03,080][92474] Updated weights for policy 1, policy_version 4800 (0.0017)
+[2023-09-26 07:54:03,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2457600. Throughput: 0: 773.7, 1: 774.1. Samples: 614396. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:54:03,763][91478] Avg episode reward: [(0, '6.490'), (1, '6.410')]
+[2023-09-26 07:54:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2490368. Throughput: 0: 779.6, 1: 778.5. Samples: 618994. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 07:54:08,762][91478] Avg episode reward: [(0, '6.400'), (1, '6.360')]
+[2023-09-26 07:54:13,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 2523136. Throughput: 0: 776.9, 1: 778.4. Samples: 628727. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:54:13,762][91478] Avg episode reward: [(0, '6.500'), (1, '6.430')]
+[2023-09-26 07:54:16,016][92473] Updated weights for policy 0, policy_version 4960 (0.0018)
+[2023-09-26 07:54:16,016][92474] Updated weights for policy 1, policy_version 4960 (0.0018)
+[2023-09-26 07:54:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2555904. Throughput: 0: 776.3, 1: 775.8. Samples: 637959. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:54:18,763][91478] Avg episode reward: [(0, '6.350'), (1, '6.370')]
+[2023-09-26 07:54:23,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2580480. Throughput: 0: 775.4, 1: 777.3. Samples: 642848. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 07:54:23,763][91478] Avg episode reward: [(0, '6.420'), (1, '6.420')]
+[2023-09-26 07:54:28,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2613248. Throughput: 0: 775.9, 1: 775.3. Samples: 651797. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 07:54:28,763][91478] Avg episode reward: [(0, '6.430'), (1, '6.500')]
+[2023-09-26 07:54:29,280][92474] Updated weights for policy 1, policy_version 5120 (0.0016)
+[2023-09-26 07:54:29,281][92473] Updated weights for policy 0, policy_version 5120 (0.0015)
+[2023-09-26 07:54:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2646016. Throughput: 0: 772.2, 1: 772.9. Samples: 661335. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 07:54:33,763][91478] Avg episode reward: [(0, '6.540'), (1, '6.300')]
+[2023-09-26 07:54:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2678784. Throughput: 0: 772.7, 1: 773.6. Samples: 665637. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 07:54:38,763][91478] Avg episode reward: [(0, '6.320'), (1, '6.470')]
+[2023-09-26 07:54:42,542][92473] Updated weights for policy 0, policy_version 5280 (0.0016)
+[2023-09-26 07:54:42,543][92474] Updated weights for policy 1, policy_version 5280 (0.0015)
+[2023-09-26 07:54:43,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2703360. Throughput: 0: 778.3, 1: 776.8. Samples: 674998. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:54:43,762][91478] Avg episode reward: [(0, '6.510'), (1, '6.550')]
+[2023-09-26 07:54:48,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2736128. Throughput: 0: 775.6, 1: 774.2. Samples: 684138. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 07:54:48,763][91478] Avg episode reward: [(0, '6.550'), (1, '6.630')]
+[2023-09-26 07:54:53,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2768896. Throughput: 0: 778.6, 1: 778.7. Samples: 689074. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:54:53,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.610')]
+[2023-09-26 07:54:55,545][92473] Updated weights for policy 0, policy_version 5440 (0.0017)
+[2023-09-26 07:54:55,545][92474] Updated weights for policy 1, policy_version 5440 (0.0015)
+[2023-09-26 07:54:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2801664. Throughput: 0: 775.4, 1: 774.2. Samples: 698461. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:54:58,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.400')]
+[2023-09-26 07:55:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 2834432. Throughput: 0: 778.6, 1: 777.4. Samples: 707982. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 07:55:03,762][91478] Avg episode reward: [(0, '6.620'), (1, '6.520')]
+[2023-09-26 07:55:08,714][92474] Updated weights for policy 1, policy_version 5600 (0.0017)
+[2023-09-26 07:55:08,714][92473] Updated weights for policy 0, policy_version 5600 (0.0017)
+[2023-09-26 07:55:08,762][91478] Fps is (10 sec: 6553.9, 60 sec: 6280.5, 300 sec: 6234.3). Total num frames: 2867200. Throughput: 0: 775.9, 1: 774.2. Samples: 712600. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 07:55:08,762][91478] Avg episode reward: [(0, '6.610'), (1, '6.440')]
+[2023-09-26 07:55:13,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2891776. Throughput: 0: 777.5, 1: 777.4. Samples: 721764. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:55:13,762][91478] Avg episode reward: [(0, '6.490'), (1, '6.400')]
+[2023-09-26 07:55:13,769][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000005648_1445888.pth...
+[2023-09-26 07:55:13,769][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000005648_1445888.pth...
+[2023-09-26 07:55:13,800][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000002736_700416.pth
+[2023-09-26 07:55:13,805][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000002736_700416.pth
+[2023-09-26 07:55:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 2924544. Throughput: 0: 775.1, 1: 775.9. Samples: 731132. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 07:55:18,763][91478] Avg episode reward: [(0, '6.520'), (1, '6.440')]
+[2023-09-26 07:55:21,922][92473] Updated weights for policy 0, policy_version 5760 (0.0017)
+[2023-09-26 07:55:21,922][92474] Updated weights for policy 1, policy_version 5760 (0.0017)
+[2023-09-26 07:55:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2957312. Throughput: 0: 779.0, 1: 778.0. Samples: 735703. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 07:55:23,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.490')]
+[2023-09-26 07:55:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 2990080. Throughput: 0: 781.4, 1: 783.0. Samples: 745398. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 07:55:28,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.430')]
+[2023-09-26 07:55:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3022848. Throughput: 0: 784.7, 1: 784.8. Samples: 754763. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:55:33,762][91478] Avg episode reward: [(0, '6.530'), (1, '6.510')]
+[2023-09-26 07:55:34,832][92474] Updated weights for policy 1, policy_version 5920 (0.0017)
+[2023-09-26 07:55:34,832][92473] Updated weights for policy 0, policy_version 5920 (0.0019)
+[2023-09-26 07:55:38,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 3051520. Throughput: 0: 785.3, 1: 785.7. Samples: 759769. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 07:55:38,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.650')]
+[2023-09-26 07:55:43,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3080192. Throughput: 0: 782.1, 1: 781.5. Samples: 768821. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-26 07:55:43,763][91478] Avg episode reward: [(0, '6.420'), (1, '6.640')]
+[2023-09-26 07:55:47,927][92473] Updated weights for policy 0, policy_version 6080 (0.0017)
+[2023-09-26 07:55:47,934][92474] Updated weights for policy 1, policy_version 6080 (0.0017)
+[2023-09-26 07:55:48,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 3112960. Throughput: 0: 779.2, 1: 782.1. Samples: 778241. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-26 07:55:48,762][91478] Avg episode reward: [(0, '6.520'), (1, '6.420')]
+[2023-09-26 07:55:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3145728. Throughput: 0: 781.0, 1: 782.4. Samples: 782951. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 07:55:53,762][91478] Avg episode reward: [(0, '6.520'), (1, '6.510')]
+[2023-09-26 07:55:58,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6234.3). Total num frames: 3178496. Throughput: 0: 785.8, 1: 787.8. Samples: 792576. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:55:58,763][91478] Avg episode reward: [(0, '6.350'), (1, '6.490')]
+[2023-09-26 07:56:01,033][92473] Updated weights for policy 0, policy_version 6240 (0.0016)
+[2023-09-26 07:56:01,034][92474] Updated weights for policy 1, policy_version 6240 (0.0017)
+[2023-09-26 07:56:03,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3211264. Throughput: 0: 785.7, 1: 784.0. Samples: 801772. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:56:03,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.470')]
+[2023-09-26 07:56:08,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3235840. Throughput: 0: 789.4, 1: 788.4. Samples: 806702. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:56:08,763][91478] Avg episode reward: [(0, '6.690'), (1, '6.270')]
+[2023-09-26 07:56:13,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3268608. Throughput: 0: 783.4, 1: 783.0. Samples: 815885. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 07:56:13,762][91478] Avg episode reward: [(0, '6.630'), (1, '6.430')]
+[2023-09-26 07:56:14,132][92473] Updated weights for policy 0, policy_version 6400 (0.0016)
+[2023-09-26 07:56:14,132][92474] Updated weights for policy 1, policy_version 6400 (0.0017)
+[2023-09-26 07:56:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3301376. Throughput: 0: 783.4, 1: 783.7. Samples: 825282. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:56:18,763][91478] Avg episode reward: [(0, '6.800'), (1, '6.490')]
+[2023-09-26 07:56:18,764][91993] Saving new best policy, reward=6.800!
+[2023-09-26 07:56:23,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3334144. Throughput: 0: 774.8, 1: 774.7. Samples: 829496. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 07:56:23,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.440')]
+[2023-09-26 07:56:27,307][92473] Updated weights for policy 0, policy_version 6560 (0.0018)
+[2023-09-26 07:56:27,308][92474] Updated weights for policy 1, policy_version 6560 (0.0016)
+[2023-09-26 07:56:28,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 3366912. Throughput: 0: 782.0, 1: 782.7. Samples: 839234. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 07:56:28,762][91478] Avg episode reward: [(0, '6.730'), (1, '6.480')]
+[2023-09-26 07:56:33,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.2, 300 sec: 6234.3). Total num frames: 3395584. Throughput: 0: 782.1, 1: 782.3. Samples: 848636. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 07:56:33,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.570')]
+[2023-09-26 07:56:38,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 3424256. Throughput: 0: 782.1, 1: 780.6. Samples: 853273. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:56:38,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.580')]
+[2023-09-26 07:56:40,329][92474] Updated weights for policy 1, policy_version 6720 (0.0018)
+[2023-09-26 07:56:40,329][92473] Updated weights for policy 0, policy_version 6720 (0.0018)
+[2023-09-26 07:56:43,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3457024. Throughput: 0: 779.5, 1: 777.4. Samples: 862635. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 07:56:43,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.400')]
+[2023-09-26 07:56:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3489792. Throughput: 0: 777.8, 1: 779.8. Samples: 871862. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 07:56:48,762][91478] Avg episode reward: [(0, '6.670'), (1, '6.560')]
+[2023-09-26 07:56:53,698][92473] Updated weights for policy 0, policy_version 6880 (0.0016)
+[2023-09-26 07:56:53,699][92474] Updated weights for policy 1, policy_version 6880 (0.0018)
+[2023-09-26 07:56:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3522560. Throughput: 0: 774.8, 1: 777.3. Samples: 876548. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 07:56:53,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.420')]
+[2023-09-26 07:56:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3547136. Throughput: 0: 777.1, 1: 777.1. Samples: 885824. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:56:58,762][91478] Avg episode reward: [(0, '6.640'), (1, '6.390')]
+[2023-09-26 07:57:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 3579904. Throughput: 0: 776.7, 1: 776.4. Samples: 895172. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 07:57:03,763][91478] Avg episode reward: [(0, '6.540'), (1, '6.510')]
+[2023-09-26 07:57:06,653][92474] Updated weights for policy 1, policy_version 7040 (0.0017)
+[2023-09-26 07:57:06,653][92473] Updated weights for policy 0, policy_version 7040 (0.0016)
+[2023-09-26 07:57:08,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 3612672. Throughput: 0: 784.0, 1: 783.2. Samples: 900018. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 07:57:08,762][91478] Avg episode reward: [(0, '6.540'), (1, '6.350')]
+[2023-09-26 07:57:13,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3645440. Throughput: 0: 778.9, 1: 779.4. Samples: 909358. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 07:57:13,762][91478] Avg episode reward: [(0, '6.430'), (1, '6.470')]
+[2023-09-26 07:57:13,772][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000007120_1822720.pth...
+[2023-09-26 07:57:13,772][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000007120_1822720.pth...
+[2023-09-26 07:57:13,801][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000004192_1073152.pth
+[2023-09-26 07:57:13,814][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000004192_1073152.pth
+[2023-09-26 07:57:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3678208. Throughput: 0: 780.1, 1: 778.3. Samples: 918761. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-26 07:57:18,763][91478] Avg episode reward: [(0, '6.610'), (1, '6.550')]
+[2023-09-26 07:57:19,838][92474] Updated weights for policy 1, policy_version 7200 (0.0017)
+[2023-09-26 07:57:19,839][92473] Updated weights for policy 0, policy_version 7200 (0.0017)
+[2023-09-26 07:57:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3710976. Throughput: 0: 780.8, 1: 782.1. Samples: 923605. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-26 07:57:23,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.570')]
+[2023-09-26 07:57:28,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.2, 300 sec: 6234.2). Total num frames: 3739648. Throughput: 0: 781.6, 1: 782.4. Samples: 933016. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 07:57:28,764][91478] Avg episode reward: [(0, '6.820'), (1, '6.410')]
+[2023-09-26 07:57:28,775][91993] Saving new best policy, reward=6.820!
+[2023-09-26 07:57:32,757][92474] Updated weights for policy 1, policy_version 7360 (0.0017)
+[2023-09-26 07:57:32,757][92473] Updated weights for policy 0, policy_version 7360 (0.0017)
+[2023-09-26 07:57:33,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 3768320. Throughput: 0: 782.8, 1: 781.4. Samples: 942249. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 07:57:33,762][91478] Avg episode reward: [(0, '6.630'), (1, '6.490')]
+[2023-09-26 07:57:38,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3801088. Throughput: 0: 786.3, 1: 784.5. Samples: 947234. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:57:38,763][91478] Avg episode reward: [(0, '6.690'), (1, '6.470')]
+[2023-09-26 07:57:43,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3833856. Throughput: 0: 785.2, 1: 785.2. Samples: 956492. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:57:43,762][91478] Avg episode reward: [(0, '6.580'), (1, '6.510')]
+[2023-09-26 07:57:45,633][92473] Updated weights for policy 0, policy_version 7520 (0.0017)
+[2023-09-26 07:57:45,633][92474] Updated weights for policy 1, policy_version 7520 (0.0017)
+[2023-09-26 07:57:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3866624. Throughput: 0: 789.5, 1: 789.7. Samples: 966235. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:57:48,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.490')]
+[2023-09-26 07:57:53,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3899392. Throughput: 0: 785.1, 1: 786.9. Samples: 970757. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 07:57:53,763][91478] Avg episode reward: [(0, '6.780'), (1, '6.410')]
+[2023-09-26 07:57:58,700][92473] Updated weights for policy 0, policy_version 7680 (0.0015)
+[2023-09-26 07:57:58,701][92474] Updated weights for policy 1, policy_version 7680 (0.0018)
+[2023-09-26 07:57:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6248.1). Total num frames: 3932160. Throughput: 0: 789.3, 1: 788.7. Samples: 980368. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:57:58,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.330')]
+[2023-09-26 07:58:03,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 3956736. Throughput: 0: 789.4, 1: 789.6. Samples: 989817. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:58:03,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.420')]
+[2023-09-26 07:58:08,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 3989504. Throughput: 0: 789.6, 1: 789.0. Samples: 994642. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 07:58:08,762][91478] Avg episode reward: [(0, '6.640'), (1, '6.590')]
+[2023-09-26 07:58:11,682][92473] Updated weights for policy 0, policy_version 7840 (0.0016)
+[2023-09-26 07:58:11,682][92474] Updated weights for policy 1, policy_version 7840 (0.0016)
+[2023-09-26 07:58:13,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4022272. Throughput: 0: 786.9, 1: 786.5. Samples: 1003820. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 07:58:13,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.500')]
+[2023-09-26 07:58:18,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4055040. Throughput: 0: 789.2, 1: 789.9. Samples: 1013306. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 07:58:18,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.340')]
+[2023-09-26 07:58:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4087808. Throughput: 0: 783.8, 1: 785.6. Samples: 1017854. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:58:23,763][91478] Avg episode reward: [(0, '6.830'), (1, '6.490')]
+[2023-09-26 07:58:23,764][91993] Saving new best policy, reward=6.830!
+[2023-09-26 07:58:25,132][92474] Updated weights for policy 1, policy_version 8000 (0.0015)
+[2023-09-26 07:58:25,132][92473] Updated weights for policy 0, policy_version 8000 (0.0017)
+[2023-09-26 07:58:28,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 4112384. Throughput: 0: 780.8, 1: 780.8. Samples: 1026767. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:58:28,762][91478] Avg episode reward: [(0, '6.710'), (1, '6.280')]
+[2023-09-26 07:58:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4145152. Throughput: 0: 777.8, 1: 778.2. Samples: 1036253. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 07:58:33,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.380')]
+[2023-09-26 07:58:38,315][92474] Updated weights for policy 1, policy_version 8160 (0.0018)
+[2023-09-26 07:58:38,316][92473] Updated weights for policy 0, policy_version 8160 (0.0019)
+[2023-09-26 07:58:38,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4177920. Throughput: 0: 777.2, 1: 776.0. Samples: 1040653. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 07:58:38,763][91478] Avg episode reward: [(0, '6.690'), (1, '6.420')]
+[2023-09-26 07:58:43,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4210688. Throughput: 0: 778.2, 1: 777.8. Samples: 1050386. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 07:58:43,763][91478] Avg episode reward: [(0, '6.790'), (1, '6.350')]
+[2023-09-26 07:58:48,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4243456. Throughput: 0: 775.3, 1: 775.0. Samples: 1059577. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:58:48,763][91478] Avg episode reward: [(0, '6.560'), (1, '6.450')]
+[2023-09-26 07:58:51,419][92473] Updated weights for policy 0, policy_version 8320 (0.0016)
+[2023-09-26 07:58:51,421][92474] Updated weights for policy 1, policy_version 8320 (0.0016)
+[2023-09-26 07:58:53,762][91478] Fps is (10 sec: 5734.7, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 4268032. Throughput: 0: 775.8, 1: 775.7. Samples: 1064461. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:58:53,762][91478] Avg episode reward: [(0, '6.610'), (1, '6.640')]
+[2023-09-26 07:58:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 4300800. Throughput: 0: 775.2, 1: 775.0. Samples: 1073579. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:58:58,763][91478] Avg episode reward: [(0, '6.520'), (1, '6.440')]
+[2023-09-26 07:59:03,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4333568. Throughput: 0: 774.5, 1: 774.4. Samples: 1083004. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 07:59:03,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.310')]
+[2023-09-26 07:59:04,564][92473] Updated weights for policy 0, policy_version 8480 (0.0019)
+[2023-09-26 07:59:04,564][92474] Updated weights for policy 1, policy_version 8480 (0.0020)
+[2023-09-26 07:59:08,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4366336. Throughput: 0: 773.9, 1: 773.8. Samples: 1087499. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 07:59:08,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.460')]
+[2023-09-26 07:59:13,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4399104. Throughput: 0: 782.3, 1: 781.4. Samples: 1097133. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 07:59:13,763][91478] Avg episode reward: [(0, '6.890'), (1, '6.430')]
+[2023-09-26 07:59:13,778][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000008592_2199552.pth...
+[2023-09-26 07:59:13,779][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000008592_2199552.pth...
+[2023-09-26 07:59:13,809][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000005648_1445888.pth
+[2023-09-26 07:59:13,819][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000005648_1445888.pth
+[2023-09-26 07:59:13,823][91993] Saving new best policy, reward=6.890!
+[2023-09-26 07:59:17,709][92474] Updated weights for policy 1, policy_version 8640 (0.0017)
+[2023-09-26 07:59:17,710][92473] Updated weights for policy 0, policy_version 8640 (0.0017)
+[2023-09-26 07:59:18,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 4423680. Throughput: 0: 777.5, 1: 776.6. Samples: 1106186. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 07:59:18,763][91478] Avg episode reward: [(0, '6.790'), (1, '6.470')]
+[2023-09-26 07:59:23,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 4456448. Throughput: 0: 781.0, 1: 780.3. Samples: 1110913. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 07:59:23,762][91478] Avg episode reward: [(0, '6.560'), (1, '6.390')]
+[2023-09-26 07:59:28,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4489216. Throughput: 0: 777.3, 1: 777.9. Samples: 1120369.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:59:28,763][91478] Avg episode reward: [(0, '6.530'), (1, '6.370')] +[2023-09-26 07:59:30,641][92474] Updated weights for policy 1, policy_version 8800 (0.0017) +[2023-09-26 07:59:30,641][92473] Updated weights for policy 0, policy_version 8800 (0.0016) +[2023-09-26 07:59:33,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4521984. Throughput: 0: 784.7, 1: 784.5. Samples: 1130194. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:59:33,763][91478] Avg episode reward: [(0, '6.550'), (1, '6.300')] +[2023-09-26 07:59:38,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4554752. Throughput: 0: 779.5, 1: 780.1. Samples: 1134642. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:59:38,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.440')] +[2023-09-26 07:59:43,652][92474] Updated weights for policy 1, policy_version 8960 (0.0016) +[2023-09-26 07:59:43,654][92473] Updated weights for policy 0, policy_version 8960 (0.0017) +[2023-09-26 07:59:43,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 4587520. Throughput: 0: 785.4, 1: 786.3. Samples: 1144307. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 07:59:43,762][91478] Avg episode reward: [(0, '6.750'), (1, '6.320')] +[2023-09-26 07:59:48,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 4612096. Throughput: 0: 785.2, 1: 785.2. Samples: 1153671. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 07:59:48,762][91478] Avg episode reward: [(0, '6.760'), (1, '6.250')] +[2023-09-26 07:59:53,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4644864. Throughput: 0: 787.8, 1: 786.8. Samples: 1158358. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 07:59:53,763][91478] Avg episode reward: [(0, '6.780'), (1, '6.210')] +[2023-09-26 07:59:56,739][92473] Updated weights for policy 0, policy_version 9120 (0.0017) +[2023-09-26 07:59:56,740][92474] Updated weights for policy 1, policy_version 9120 (0.0017) +[2023-09-26 07:59:58,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4677632. Throughput: 0: 782.4, 1: 783.1. Samples: 1167581. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 07:59:58,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.460')] +[2023-09-26 08:00:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4710400. Throughput: 0: 792.3, 1: 792.6. Samples: 1177504. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:00:03,763][91478] Avg episode reward: [(0, '6.560'), (1, '6.360')] +[2023-09-26 08:00:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4743168. Throughput: 0: 789.4, 1: 789.9. Samples: 1181983. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:00:08,762][91478] Avg episode reward: [(0, '6.560'), (1, '6.470')] +[2023-09-26 08:00:09,560][92474] Updated weights for policy 1, policy_version 9280 (0.0018) +[2023-09-26 08:00:09,560][92473] Updated weights for policy 0, policy_version 9280 (0.0018) +[2023-09-26 08:00:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4775936. Throughput: 0: 794.4, 1: 793.5. Samples: 1191827. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:00:13,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.280')] +[2023-09-26 08:00:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 4808704. Throughput: 0: 788.1, 1: 788.5. Samples: 1201143. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:00:18,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.320')] +[2023-09-26 08:00:22,515][92474] Updated weights for policy 1, policy_version 9440 (0.0017) +[2023-09-26 08:00:22,515][92473] Updated weights for policy 0, policy_version 9440 (0.0018) +[2023-09-26 08:00:23,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6262.0). Total num frames: 4837376. Throughput: 0: 795.4, 1: 794.8. Samples: 1206201. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:00:23,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.310')] +[2023-09-26 08:00:28,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4866048. Throughput: 0: 789.7, 1: 788.7. Samples: 1215336. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:00:28,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.570')] +[2023-09-26 08:00:33,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6262.0). Total num frames: 4898816. Throughput: 0: 789.0, 1: 789.5. Samples: 1224704. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:00:33,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.500')] +[2023-09-26 08:00:35,694][92474] Updated weights for policy 1, policy_version 9600 (0.0016) +[2023-09-26 08:00:35,694][92473] Updated weights for policy 0, policy_version 9600 (0.0018) +[2023-09-26 08:00:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4931584. Throughput: 0: 786.1, 1: 785.7. Samples: 1229088. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:00:38,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.450')] +[2023-09-26 08:00:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 4964352. Throughput: 0: 791.1, 1: 791.0. Samples: 1238777. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:00:43,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.390')] +[2023-09-26 08:00:48,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 4988928. Throughput: 0: 780.5, 1: 780.2. Samples: 1247736. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:00:48,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.320')] +[2023-09-26 08:00:48,862][92473] Updated weights for policy 0, policy_version 9760 (0.0017) +[2023-09-26 08:00:48,862][92474] Updated weights for policy 1, policy_version 9760 (0.0016) +[2023-09-26 08:00:53,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 5021696. Throughput: 0: 785.2, 1: 786.0. Samples: 1252688. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:00:53,762][91478] Avg episode reward: [(0, '6.660'), (1, '6.400')] +[2023-09-26 08:00:58,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5054464. Throughput: 0: 774.5, 1: 776.1. Samples: 1261601. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:00:58,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.400')] +[2023-09-26 08:01:02,251][92473] Updated weights for policy 0, policy_version 9920 (0.0019) +[2023-09-26 08:01:02,251][92474] Updated weights for policy 1, policy_version 9920 (0.0019) +[2023-09-26 08:01:03,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 5087232. Throughput: 0: 775.6, 1: 775.7. Samples: 1270948. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:01:03,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.560')] +[2023-09-26 08:01:08,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5111808. Throughput: 0: 771.2, 1: 771.2. Samples: 1275613. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:01:08,763][91478] Avg episode reward: [(0, '6.480'), (1, '6.400')] +[2023-09-26 08:01:13,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5144576. Throughput: 0: 773.4, 1: 773.5. Samples: 1284947. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:01:13,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.390')] +[2023-09-26 08:01:13,775][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000010048_2572288.pth... +[2023-09-26 08:01:13,776][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000010048_2572288.pth... +[2023-09-26 08:01:13,811][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000007120_1822720.pth +[2023-09-26 08:01:13,811][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000007120_1822720.pth +[2023-09-26 08:01:15,516][92473] Updated weights for policy 0, policy_version 10080 (0.0018) +[2023-09-26 08:01:15,516][92474] Updated weights for policy 1, policy_version 10080 (0.0018) +[2023-09-26 08:01:18,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5177344. Throughput: 0: 773.7, 1: 773.6. Samples: 1294332. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:01:18,762][91478] Avg episode reward: [(0, '6.650'), (1, '6.500')] +[2023-09-26 08:01:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 5210112. Throughput: 0: 773.9, 1: 774.6. Samples: 1298772. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:01:23,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.450')] +[2023-09-26 08:01:28,661][92473] Updated weights for policy 0, policy_version 10240 (0.0015) +[2023-09-26 08:01:28,662][92474] Updated weights for policy 1, policy_version 10240 (0.0017) +[2023-09-26 08:01:28,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6262.0). 
Total num frames: 5242880. Throughput: 0: 770.7, 1: 770.4. Samples: 1308125. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:01:28,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.580')] +[2023-09-26 08:01:33,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5267456. Throughput: 0: 773.5, 1: 773.7. Samples: 1317360. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:01:33,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.500')] +[2023-09-26 08:01:38,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5300224. Throughput: 0: 772.2, 1: 771.0. Samples: 1322132. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:01:38,763][91478] Avg episode reward: [(0, '6.560'), (1, '6.480')] +[2023-09-26 08:01:41,535][92474] Updated weights for policy 1, policy_version 10400 (0.0015) +[2023-09-26 08:01:41,535][92473] Updated weights for policy 0, policy_version 10400 (0.0018) +[2023-09-26 08:01:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 5332992. Throughput: 0: 780.4, 1: 779.6. Samples: 1331800. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:01:43,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.340')] +[2023-09-26 08:01:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5365760. Throughput: 0: 782.6, 1: 783.9. Samples: 1341440. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:01:48,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.390')] +[2023-09-26 08:01:53,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 5398528. Throughput: 0: 781.0, 1: 780.9. Samples: 1345901. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:01:53,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.240')] +[2023-09-26 08:01:54,733][92474] Updated weights for policy 1, policy_version 10560 (0.0018) +[2023-09-26 08:01:54,733][92473] Updated weights for policy 0, policy_version 10560 (0.0014) +[2023-09-26 08:01:58,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 5431296. Throughput: 0: 780.4, 1: 780.8. Samples: 1355200. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:01:58,762][91478] Avg episode reward: [(0, '6.490'), (1, '6.350')] +[2023-09-26 08:02:03,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6212.3, 300 sec: 6262.0). Total num frames: 5459968. Throughput: 0: 782.3, 1: 780.2. Samples: 1364642. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:03,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.510')] +[2023-09-26 08:02:07,656][92474] Updated weights for policy 1, policy_version 10720 (0.0017) +[2023-09-26 08:02:07,658][92473] Updated weights for policy 0, policy_version 10720 (0.0018) +[2023-09-26 08:02:08,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5488640. Throughput: 0: 786.3, 1: 785.9. Samples: 1369521. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:08,763][91478] Avg episode reward: [(0, '6.440'), (1, '6.310')] +[2023-09-26 08:02:13,762][91478] Fps is (10 sec: 6143.9, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5521408. Throughput: 0: 785.5, 1: 786.4. Samples: 1378859. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:13,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.350')] +[2023-09-26 08:02:18,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5554176. Throughput: 0: 790.2, 1: 791.3. Samples: 1388529. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:18,762][91478] Avg episode reward: [(0, '6.730'), (1, '6.540')] +[2023-09-26 08:02:20,714][92473] Updated weights for policy 0, policy_version 10880 (0.0017) +[2023-09-26 08:02:20,714][92474] Updated weights for policy 1, policy_version 10880 (0.0016) +[2023-09-26 08:02:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6262.0). Total num frames: 5586944. Throughput: 0: 786.3, 1: 786.3. Samples: 1392896. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:23,763][91478] Avg episode reward: [(0, '6.610'), (1, '6.400')] +[2023-09-26 08:02:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 5619712. Throughput: 0: 784.6, 1: 784.4. Samples: 1402404. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:28,762][91478] Avg episode reward: [(0, '6.510'), (1, '6.430')] +[2023-09-26 08:02:33,738][92474] Updated weights for policy 1, policy_version 11040 (0.0018) +[2023-09-26 08:02:33,738][92473] Updated weights for policy 0, policy_version 11040 (0.0017) +[2023-09-26 08:02:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 5652480. Throughput: 0: 782.8, 1: 781.1. Samples: 1411813. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:33,762][91478] Avg episode reward: [(0, '6.560'), (1, '6.290')] +[2023-09-26 08:02:38,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5677056. Throughput: 0: 783.8, 1: 785.3. Samples: 1416513. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:38,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.390')] +[2023-09-26 08:02:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5709824. Throughput: 0: 786.3, 1: 786.5. Samples: 1425976. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:02:43,762][91478] Avg episode reward: [(0, '6.600'), (1, '6.440')] +[2023-09-26 08:02:46,710][92474] Updated weights for policy 1, policy_version 11200 (0.0016) +[2023-09-26 08:02:46,710][92473] Updated weights for policy 0, policy_version 11200 (0.0016) +[2023-09-26 08:02:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5742592. Throughput: 0: 787.9, 1: 790.0. Samples: 1435648. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 08:02:48,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.600')] +[2023-09-26 08:02:53,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5775360. Throughput: 0: 785.6, 1: 785.4. Samples: 1440214. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 08:02:53,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.450')] +[2023-09-26 08:02:58,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 5808128. Throughput: 0: 789.3, 1: 788.4. Samples: 1449853. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 08:02:58,762][91478] Avg episode reward: [(0, '6.460'), (1, '6.570')] +[2023-09-26 08:02:59,757][92473] Updated weights for policy 0, policy_version 11360 (0.0018) +[2023-09-26 08:02:59,757][92474] Updated weights for policy 1, policy_version 11360 (0.0017) +[2023-09-26 08:03:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6348.8, 300 sec: 6275.9). Total num frames: 5840896. Throughput: 0: 783.6, 1: 783.2. Samples: 1459035. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-26 08:03:03,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.500')] +[2023-09-26 08:03:08,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5865472. Throughput: 0: 785.5, 1: 786.0. Samples: 1463616. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:08,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.380')] +[2023-09-26 08:03:13,029][92474] Updated weights for policy 1, policy_version 11520 (0.0018) +[2023-09-26 08:03:13,029][92473] Updated weights for policy 0, policy_version 11520 (0.0017) +[2023-09-26 08:03:13,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 5898240. Throughput: 0: 782.1, 1: 782.0. Samples: 1472790. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:13,762][91478] Avg episode reward: [(0, '6.570'), (1, '6.290')] +[2023-09-26 08:03:13,772][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000011520_2949120.pth... +[2023-09-26 08:03:13,772][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000011520_2949120.pth... +[2023-09-26 08:03:13,807][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000008592_2199552.pth +[2023-09-26 08:03:13,814][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000008592_2199552.pth +[2023-09-26 08:03:18,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 5931008. Throughput: 0: 782.8, 1: 783.0. Samples: 1482277. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:18,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.530')] +[2023-09-26 08:03:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 5963776. Throughput: 0: 781.4, 1: 781.6. Samples: 1486848. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:23,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.570')] +[2023-09-26 08:03:26,279][92474] Updated weights for policy 1, policy_version 11680 (0.0017) +[2023-09-26 08:03:26,279][92473] Updated weights for policy 0, policy_version 11680 (0.0015) +[2023-09-26 08:03:28,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). 
Total num frames: 5988352. Throughput: 0: 778.1, 1: 779.4. Samples: 1496063. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:28,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.390')] +[2023-09-26 08:03:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6021120. Throughput: 0: 773.7, 1: 773.7. Samples: 1505281. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:33,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.470')] +[2023-09-26 08:03:38,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 6053888. Throughput: 0: 776.6, 1: 776.9. Samples: 1510121. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:38,762][91478] Avg episode reward: [(0, '6.720'), (1, '6.520')] +[2023-09-26 08:03:39,443][92474] Updated weights for policy 1, policy_version 11840 (0.0016) +[2023-09-26 08:03:39,444][92473] Updated weights for policy 0, policy_version 11840 (0.0017) +[2023-09-26 08:03:43,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6086656. Throughput: 0: 774.1, 1: 776.1. Samples: 1519616. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 08:03:43,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.550')] +[2023-09-26 08:03:48,762][91478] Fps is (10 sec: 6553.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6119424. Throughput: 0: 778.9, 1: 777.7. Samples: 1529081. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 08:03:48,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.350')] +[2023-09-26 08:03:52,261][92474] Updated weights for policy 1, policy_version 12000 (0.0017) +[2023-09-26 08:03:52,262][92473] Updated weights for policy 0, policy_version 12000 (0.0015) +[2023-09-26 08:03:53,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 6152192. Throughput: 0: 780.8, 1: 782.2. Samples: 1533952. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:53,762][91478] Avg episode reward: [(0, '6.650'), (1, '6.390')] +[2023-09-26 08:03:58,762][91478] Fps is (10 sec: 5734.7, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6176768. Throughput: 0: 782.8, 1: 785.6. Samples: 1543369. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:03:58,762][91478] Avg episode reward: [(0, '6.500'), (1, '6.580')] +[2023-09-26 08:04:03,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6209536. Throughput: 0: 778.2, 1: 779.7. Samples: 1552384. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:04:03,763][91478] Avg episode reward: [(0, '6.770'), (1, '6.510')] +[2023-09-26 08:04:05,587][92473] Updated weights for policy 0, policy_version 12160 (0.0014) +[2023-09-26 08:04:05,588][92474] Updated weights for policy 1, policy_version 12160 (0.0018) +[2023-09-26 08:04:08,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6242304. Throughput: 0: 780.3, 1: 778.8. Samples: 1557011. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:04:08,763][91478] Avg episode reward: [(0, '6.470'), (1, '6.440')] +[2023-09-26 08:04:13,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 6275072. Throughput: 0: 784.8, 1: 783.1. Samples: 1566617. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:04:13,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.560')] +[2023-09-26 08:04:18,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6299648. Throughput: 0: 780.2, 1: 779.0. Samples: 1575443. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:04:18,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.410')] +[2023-09-26 08:04:18,906][92474] Updated weights for policy 1, policy_version 12320 (0.0018) +[2023-09-26 08:04:18,906][92473] Updated weights for policy 0, policy_version 12320 (0.0014) +[2023-09-26 08:04:23,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6332416. Throughput: 0: 779.2, 1: 778.9. Samples: 1580237. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:04:23,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.410')] +[2023-09-26 08:04:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6365184. Throughput: 0: 773.8, 1: 773.7. Samples: 1589252. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:04:28,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.530')] +[2023-09-26 08:04:32,081][92474] Updated weights for policy 1, policy_version 12480 (0.0017) +[2023-09-26 08:04:32,081][92473] Updated weights for policy 0, policy_version 12480 (0.0016) +[2023-09-26 08:04:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6397952. Throughput: 0: 775.9, 1: 776.3. Samples: 1598929. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:04:33,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.450')] +[2023-09-26 08:04:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6430720. Throughput: 0: 773.7, 1: 773.7. Samples: 1603584. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:04:38,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.470')] +[2023-09-26 08:04:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6455296. Throughput: 0: 774.2, 1: 771.1. Samples: 1612905. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:04:43,763][91478] Avg episode reward: [(0, '6.540'), (1, '6.570')] +[2023-09-26 08:04:45,270][92473] Updated weights for policy 0, policy_version 12640 (0.0016) +[2023-09-26 08:04:45,270][92474] Updated weights for policy 1, policy_version 12640 (0.0018) +[2023-09-26 08:04:48,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.1, 300 sec: 6248.1). Total num frames: 6488064. Throughput: 0: 773.8, 1: 773.7. Samples: 1622020. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:04:48,762][91478] Avg episode reward: [(0, '6.670'), (1, '6.400')] +[2023-09-26 08:04:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 6520832. Throughput: 0: 773.6, 1: 775.1. Samples: 1626704. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:04:53,763][91478] Avg episode reward: [(0, '6.790'), (1, '6.630')] +[2023-09-26 08:04:58,647][92474] Updated weights for policy 1, policy_version 12800 (0.0017) +[2023-09-26 08:04:58,647][92473] Updated weights for policy 0, policy_version 12800 (0.0017) +[2023-09-26 08:04:58,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6553600. Throughput: 0: 770.3, 1: 770.2. Samples: 1635943. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:04:58,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.510')] +[2023-09-26 08:05:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6578176. Throughput: 0: 774.0, 1: 774.5. Samples: 1645126. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:05:03,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.450')] +[2023-09-26 08:05:08,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6610944. Throughput: 0: 775.1, 1: 775.6. Samples: 1650016. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:05:08,762][91478] Avg episode reward: [(0, '6.660'), (1, '6.550')] +[2023-09-26 08:05:11,865][92473] Updated weights for policy 0, policy_version 12960 (0.0016) +[2023-09-26 08:05:11,865][92474] Updated weights for policy 1, policy_version 12960 (0.0017) +[2023-09-26 08:05:13,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6643712. Throughput: 0: 774.0, 1: 773.7. Samples: 1658900. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:05:13,762][91478] Avg episode reward: [(0, '6.540'), (1, '6.640')] +[2023-09-26 08:05:13,773][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000012976_3321856.pth... +[2023-09-26 08:05:13,774][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000012976_3321856.pth... +[2023-09-26 08:05:13,812][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000010048_2572288.pth +[2023-09-26 08:05:13,816][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000010048_2572288.pth +[2023-09-26 08:05:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6234.3). Total num frames: 6676480. Throughput: 0: 773.7, 1: 774.4. Samples: 1668591. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:05:18,763][91478] Avg episode reward: [(0, '6.860'), (1, '6.560')] +[2023-09-26 08:05:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6709248. Throughput: 0: 773.8, 1: 773.7. Samples: 1673220. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:05:23,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.610')] +[2023-09-26 08:05:24,879][92474] Updated weights for policy 1, policy_version 13120 (0.0016) +[2023-09-26 08:05:24,879][92473] Updated weights for policy 0, policy_version 13120 (0.0017) +[2023-09-26 08:05:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). 
Total num frames: 6742016. Throughput: 0: 776.7, 1: 776.7. Samples: 1682808. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:05:28,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.560')] +[2023-09-26 08:05:33,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6766592. Throughput: 0: 778.0, 1: 776.9. Samples: 1691991. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:05:33,762][91478] Avg episode reward: [(0, '6.630'), (1, '6.610')] +[2023-09-26 08:05:37,836][92474] Updated weights for policy 1, policy_version 13280 (0.0017) +[2023-09-26 08:05:37,836][92473] Updated weights for policy 0, policy_version 13280 (0.0016) +[2023-09-26 08:05:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6799360. Throughput: 0: 780.6, 1: 778.9. Samples: 1696882. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:05:38,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.570')] +[2023-09-26 08:05:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 6832128. Throughput: 0: 780.7, 1: 780.9. Samples: 1706218. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:05:43,762][91478] Avg episode reward: [(0, '6.810'), (1, '6.540')] +[2023-09-26 08:05:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6864896. Throughput: 0: 785.9, 1: 783.6. Samples: 1715756. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:05:48,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.590')] +[2023-09-26 08:05:51,050][92473] Updated weights for policy 0, policy_version 13440 (0.0014) +[2023-09-26 08:05:51,050][92474] Updated weights for policy 1, policy_version 13440 (0.0017) +[2023-09-26 08:05:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 6897664. Throughput: 0: 780.6, 1: 781.7. Samples: 1720317. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:05:53,762][91478] Avg episode reward: [(0, '6.540'), (1, '6.500')] +[2023-09-26 08:05:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 6922240. Throughput: 0: 786.6, 1: 785.3. Samples: 1729635. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 08:05:58,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.430')] +[2023-09-26 08:06:03,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6955008. Throughput: 0: 779.2, 1: 779.9. Samples: 1738753. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 08:06:03,763][91478] Avg episode reward: [(0, '6.810'), (1, '6.200')] +[2023-09-26 08:06:04,217][92473] Updated weights for policy 0, policy_version 13600 (0.0017) +[2023-09-26 08:06:04,217][92474] Updated weights for policy 1, policy_version 13600 (0.0017) +[2023-09-26 08:06:08,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 6987776. Throughput: 0: 782.3, 1: 780.9. Samples: 1743564. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:06:08,762][91478] Avg episode reward: [(0, '6.840'), (1, '6.330')] +[2023-09-26 08:06:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 7020544. Throughput: 0: 780.1, 1: 781.8. Samples: 1753092. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:06:13,763][91478] Avg episode reward: [(0, '6.840'), (1, '6.290')] +[2023-09-26 08:06:17,140][92474] Updated weights for policy 1, policy_version 13760 (0.0016) +[2023-09-26 08:06:17,140][92473] Updated weights for policy 0, policy_version 13760 (0.0017) +[2023-09-26 08:06:18,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 7053312. Throughput: 0: 785.3, 1: 785.6. Samples: 1762682. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:06:18,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.400')] +[2023-09-26 08:06:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 7086080. Throughput: 0: 783.0, 1: 783.9. Samples: 1767392. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:06:23,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.480')] +[2023-09-26 08:06:28,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 7110656. Throughput: 0: 780.7, 1: 781.1. Samples: 1776498. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:06:28,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.370')] +[2023-09-26 08:06:30,414][92473] Updated weights for policy 0, policy_version 13920 (0.0018) +[2023-09-26 08:06:30,414][92474] Updated weights for policy 1, policy_version 13920 (0.0016) +[2023-09-26 08:06:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 7143424. Throughput: 0: 777.4, 1: 780.4. Samples: 1785856. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:06:33,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.590')] +[2023-09-26 08:06:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 7176192. Throughput: 0: 775.4, 1: 774.0. Samples: 1790043. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 08:06:38,763][91478] Avg episode reward: [(0, '6.760'), (1, '6.450')] +[2023-09-26 08:06:43,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.2, 300 sec: 6234.3). Total num frames: 7204864. Throughput: 0: 777.0, 1: 777.4. Samples: 1799586. 
Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0) +[2023-09-26 08:06:43,763][91478] Avg episode reward: [(0, '6.500'), (1, '6.480')] +[2023-09-26 08:06:43,765][92474] Updated weights for policy 1, policy_version 14080 (0.0018) +[2023-09-26 08:06:43,765][92473] Updated weights for policy 0, policy_version 14080 (0.0017) +[2023-09-26 08:06:48,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7233536. Throughput: 0: 775.1, 1: 773.8. Samples: 1808455. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:06:48,763][91478] Avg episode reward: [(0, '6.450'), (1, '6.500')] +[2023-09-26 08:06:53,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7266304. Throughput: 0: 775.1, 1: 774.5. Samples: 1813297. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:06:53,762][91478] Avg episode reward: [(0, '6.480'), (1, '6.510')] +[2023-09-26 08:06:56,980][92474] Updated weights for policy 1, policy_version 14240 (0.0018) +[2023-09-26 08:06:56,980][92473] Updated weights for policy 0, policy_version 14240 (0.0018) +[2023-09-26 08:06:58,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6234.3). Total num frames: 7299072. Throughput: 0: 773.6, 1: 773.7. Samples: 1822720. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:06:58,762][91478] Avg episode reward: [(0, '6.800'), (1, '6.570')] +[2023-09-26 08:07:03,762][91478] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 7327744. Throughput: 0: 766.0, 1: 765.7. Samples: 1831610. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:07:03,763][91478] Avg episode reward: [(0, '6.470'), (1, '6.610')] +[2023-09-26 08:07:08,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7356416. Throughput: 0: 769.0, 1: 768.2. Samples: 1836562. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:07:08,762][91478] Avg episode reward: [(0, '6.570'), (1, '6.460')] +[2023-09-26 08:07:10,356][92474] Updated weights for policy 1, policy_version 14400 (0.0019) +[2023-09-26 08:07:10,356][92473] Updated weights for policy 0, policy_version 14400 (0.0019) +[2023-09-26 08:07:13,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7389184. Throughput: 0: 767.2, 1: 767.5. Samples: 1845560. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:13,762][91478] Avg episode reward: [(0, '6.650'), (1, '6.580')] +[2023-09-26 08:07:13,772][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000014432_3694592.pth... +[2023-09-26 08:07:13,772][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000014432_3694592.pth... +[2023-09-26 08:07:13,802][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000011520_2949120.pth +[2023-09-26 08:07:13,810][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000011520_2949120.pth +[2023-09-26 08:07:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7421952. Throughput: 0: 769.1, 1: 768.0. Samples: 1855027. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:18,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.490')] +[2023-09-26 08:07:23,477][92474] Updated weights for policy 1, policy_version 14560 (0.0019) +[2023-09-26 08:07:23,477][92473] Updated weights for policy 0, policy_version 14560 (0.0017) +[2023-09-26 08:07:23,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7454720. Throughput: 0: 772.0, 1: 773.4. Samples: 1859588. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:23,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.670')] +[2023-09-26 08:07:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). 
Total num frames: 7487488. Throughput: 0: 773.7, 1: 773.4. Samples: 1869203. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:28,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.780')] +[2023-09-26 08:07:28,774][92345] Saving new best policy, reward=6.780! +[2023-09-26 08:07:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7512064. Throughput: 0: 778.5, 1: 777.7. Samples: 1878487. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:33,763][91478] Avg episode reward: [(0, '6.850'), (1, '6.540')] +[2023-09-26 08:07:36,426][92473] Updated weights for policy 0, policy_version 14720 (0.0017) +[2023-09-26 08:07:36,427][92474] Updated weights for policy 1, policy_version 14720 (0.0019) +[2023-09-26 08:07:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7544832. Throughput: 0: 779.5, 1: 780.2. Samples: 1883486. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:38,763][91478] Avg episode reward: [(0, '6.970'), (1, '6.510')] +[2023-09-26 08:07:38,764][91993] Saving new best policy, reward=6.970! +[2023-09-26 08:07:43,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 7577600. Throughput: 0: 778.0, 1: 776.5. Samples: 1892673. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:43,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.850')] +[2023-09-26 08:07:43,775][92345] Saving new best policy, reward=6.850! +[2023-09-26 08:07:48,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 7610368. Throughput: 0: 786.0, 1: 786.8. Samples: 1902387. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:48,762][91478] Avg episode reward: [(0, '6.680'), (1, '6.590')] +[2023-09-26 08:07:49,566][92474] Updated weights for policy 1, policy_version 14880 (0.0014) +[2023-09-26 08:07:49,566][92473] Updated weights for policy 0, policy_version 14880 (0.0019) +[2023-09-26 08:07:53,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7643136. Throughput: 0: 778.7, 1: 780.0. Samples: 1906703. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:07:53,762][91478] Avg episode reward: [(0, '6.540'), (1, '6.570')] +[2023-09-26 08:07:58,762][91478] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6206.5). Total num frames: 7671808. Throughput: 0: 784.1, 1: 785.5. Samples: 1916194. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 08:07:58,763][91478] Avg episode reward: [(0, '6.760'), (1, '6.790')] +[2023-09-26 08:08:02,795][92474] Updated weights for policy 1, policy_version 15040 (0.0019) +[2023-09-26 08:08:02,795][92473] Updated weights for policy 0, policy_version 15040 (0.0018) +[2023-09-26 08:08:03,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 7700480. Throughput: 0: 780.0, 1: 779.7. Samples: 1925217. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 08:08:03,763][91478] Avg episode reward: [(0, '6.880'), (1, '6.610')] +[2023-09-26 08:08:08,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7733248. Throughput: 0: 780.8, 1: 779.5. Samples: 1929805. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:08:08,763][91478] Avg episode reward: [(0, '6.690'), (1, '6.470')] +[2023-09-26 08:08:13,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7766016. Throughput: 0: 779.5, 1: 776.1. Samples: 1939208. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:08:13,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.640')] +[2023-09-26 08:08:16,131][92474] Updated weights for policy 1, policy_version 15200 (0.0017) +[2023-09-26 08:08:16,131][92473] Updated weights for policy 0, policy_version 15200 (0.0018) +[2023-09-26 08:08:18,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 7798784. Throughput: 0: 778.3, 1: 779.0. Samples: 1948565. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:08:18,762][91478] Avg episode reward: [(0, '6.530'), (1, '6.790')] +[2023-09-26 08:08:23,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7823360. Throughput: 0: 778.5, 1: 778.2. Samples: 1953538. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:08:23,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.510')] +[2023-09-26 08:08:28,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 7856128. Throughput: 0: 778.8, 1: 779.0. Samples: 1962777. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:08:28,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.620')] +[2023-09-26 08:08:29,004][92473] Updated weights for policy 0, policy_version 15360 (0.0016) +[2023-09-26 08:08:29,004][92474] Updated weights for policy 1, policy_version 15360 (0.0015) +[2023-09-26 08:08:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7888896. Throughput: 0: 775.7, 1: 776.3. Samples: 1972224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:08:33,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.610')] +[2023-09-26 08:08:38,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 7921664. Throughput: 0: 777.2, 1: 776.3. Samples: 1976612. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:08:38,762][91478] Avg episode reward: [(0, '6.620'), (1, '6.700')] +[2023-09-26 08:08:42,197][92474] Updated weights for policy 1, policy_version 15520 (0.0016) +[2023-09-26 08:08:42,197][92473] Updated weights for policy 0, policy_version 15520 (0.0015) +[2023-09-26 08:08:43,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7954432. Throughput: 0: 782.0, 1: 778.2. Samples: 1986405. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:08:43,764][91478] Avg episode reward: [(0, '6.750'), (1, '6.740')] +[2023-09-26 08:08:48,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 7987200. Throughput: 0: 781.3, 1: 781.4. Samples: 1995539. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:08:48,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.900')] +[2023-09-26 08:08:48,764][92345] Saving new best policy, reward=6.900! +[2023-09-26 08:08:53,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 8011776. Throughput: 0: 783.4, 1: 784.2. Samples: 2000349. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:08:53,763][91478] Avg episode reward: [(0, '6.760'), (1, '6.830')] +[2023-09-26 08:08:55,390][92473] Updated weights for policy 0, policy_version 15680 (0.0018) +[2023-09-26 08:08:55,390][92474] Updated weights for policy 1, policy_version 15680 (0.0018) +[2023-09-26 08:08:58,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6212.2, 300 sec: 6220.4). Total num frames: 8044544. Throughput: 0: 778.1, 1: 781.8. Samples: 2009402. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 08:08:58,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.930')] +[2023-09-26 08:08:58,775][92345] Saving new best policy, reward=6.930! +[2023-09-26 08:09:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8077312. Throughput: 0: 780.6, 1: 782.6. 
Samples: 2018907. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0) +[2023-09-26 08:09:03,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.690')] +[2023-09-26 08:09:08,544][92473] Updated weights for policy 0, policy_version 15840 (0.0018) +[2023-09-26 08:09:08,545][92474] Updated weights for policy 1, policy_version 15840 (0.0018) +[2023-09-26 08:09:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8110080. Throughput: 0: 775.8, 1: 777.3. Samples: 2023428. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:09:08,763][91478] Avg episode reward: [(0, '6.840'), (1, '6.850')] +[2023-09-26 08:09:13,762][91478] Fps is (10 sec: 6143.8, 60 sec: 6212.3, 300 sec: 6234.2). Total num frames: 8138752. Throughput: 0: 780.5, 1: 780.4. Samples: 2033017. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:09:13,763][91478] Avg episode reward: [(0, '6.830'), (1, '7.180')] +[2023-09-26 08:09:13,778][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000015904_4071424.pth... +[2023-09-26 08:09:13,809][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000012976_3321856.pth +[2023-09-26 08:09:13,812][92345] Saving new best policy, reward=7.180! +[2023-09-26 08:09:13,813][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000015904_4071424.pth... +[2023-09-26 08:09:13,842][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000012976_3321856.pth +[2023-09-26 08:09:18,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 8167424. Throughput: 0: 775.2, 1: 773.9. Samples: 2041936. 
Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:09:18,762][91478] Avg episode reward: [(0, '6.730'), (1, '6.740')] +[2023-09-26 08:09:21,724][92473] Updated weights for policy 0, policy_version 16000 (0.0019) +[2023-09-26 08:09:21,724][92474] Updated weights for policy 1, policy_version 16000 (0.0018) +[2023-09-26 08:09:23,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8200192. Throughput: 0: 779.1, 1: 778.8. Samples: 2046719. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:09:23,763][91478] Avg episode reward: [(0, '6.520'), (1, '6.590')] +[2023-09-26 08:09:28,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8232960. Throughput: 0: 773.7, 1: 777.1. Samples: 2056192. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:09:28,763][91478] Avg episode reward: [(0, '6.560'), (1, '6.570')] +[2023-09-26 08:09:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8265728. Throughput: 0: 778.4, 1: 778.3. Samples: 2065589. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:09:33,763][91478] Avg episode reward: [(0, '6.440'), (1, '6.740')] +[2023-09-26 08:09:34,835][92473] Updated weights for policy 0, policy_version 16160 (0.0017) +[2023-09-26 08:09:34,836][92474] Updated weights for policy 1, policy_version 16160 (0.0019) +[2023-09-26 08:09:38,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8298496. Throughput: 0: 779.5, 1: 778.5. Samples: 2070457. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:09:38,762][91478] Avg episode reward: [(0, '6.580'), (1, '6.500')] +[2023-09-26 08:09:43,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 8323072. Throughput: 0: 780.6, 1: 780.7. Samples: 2079661. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:09:43,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.510')] +[2023-09-26 08:09:47,769][92474] Updated weights for policy 1, policy_version 16320 (0.0014) +[2023-09-26 08:09:47,770][92473] Updated weights for policy 0, policy_version 16320 (0.0015) +[2023-09-26 08:09:48,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 8355840. Throughput: 0: 780.9, 1: 779.1. Samples: 2089110. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:09:48,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.690')] +[2023-09-26 08:09:53,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 8388608. Throughput: 0: 783.7, 1: 782.5. Samples: 2093909. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:09:53,762][91478] Avg episode reward: [(0, '6.480'), (1, '6.720')] +[2023-09-26 08:09:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8421376. Throughput: 0: 780.2, 1: 781.6. Samples: 2103300. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:09:58,763][91478] Avg episode reward: [(0, '6.460'), (1, '6.670')] +[2023-09-26 08:10:00,828][92473] Updated weights for policy 0, policy_version 16480 (0.0016) +[2023-09-26 08:10:00,828][92474] Updated weights for policy 1, policy_version 16480 (0.0017) +[2023-09-26 08:10:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8454144. Throughput: 0: 785.6, 1: 786.7. Samples: 2112692. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:03,762][91478] Avg episode reward: [(0, '6.620'), (1, '6.820')] +[2023-09-26 08:10:08,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 8478720. Throughput: 0: 787.0, 1: 786.3. Samples: 2117519. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:08,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.630')] +[2023-09-26 08:10:13,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 8511488. Throughput: 0: 784.8, 1: 783.4. Samples: 2126760. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:13,763][91478] Avg episode reward: [(0, '6.610'), (1, '6.830')] +[2023-09-26 08:10:13,970][92474] Updated weights for policy 1, policy_version 16640 (0.0019) +[2023-09-26 08:10:13,970][92473] Updated weights for policy 0, policy_version 16640 (0.0019) +[2023-09-26 08:10:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8544256. Throughput: 0: 783.0, 1: 783.8. Samples: 2136097. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:18,763][91478] Avg episode reward: [(0, '6.520'), (1, '6.420')] +[2023-09-26 08:10:23,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 8577024. Throughput: 0: 779.7, 1: 780.0. Samples: 2140640. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:23,762][91478] Avg episode reward: [(0, '6.460'), (1, '6.440')] +[2023-09-26 08:10:27,053][92473] Updated weights for policy 0, policy_version 16800 (0.0016) +[2023-09-26 08:10:27,053][92474] Updated weights for policy 1, policy_version 16800 (0.0016) +[2023-09-26 08:10:28,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8609792. Throughput: 0: 785.4, 1: 786.5. Samples: 2150400. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:28,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.650')] +[2023-09-26 08:10:33,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8642560. Throughput: 0: 784.2, 1: 785.1. Samples: 2159726. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:10:33,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.710')] +[2023-09-26 08:10:38,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 8667136. Throughput: 0: 784.3, 1: 782.1. Samples: 2164397. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:10:38,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.480')] +[2023-09-26 08:10:40,213][92473] Updated weights for policy 0, policy_version 16960 (0.0016) +[2023-09-26 08:10:40,214][92474] Updated weights for policy 1, policy_version 16960 (0.0016) +[2023-09-26 08:10:43,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8699904. Throughput: 0: 782.3, 1: 780.1. Samples: 2173605. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:10:43,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.520')] +[2023-09-26 08:10:48,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 8732672. Throughput: 0: 783.0, 1: 783.2. Samples: 2183171. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:48,762][91478] Avg episode reward: [(0, '6.540'), (1, '6.460')] +[2023-09-26 08:10:53,183][92474] Updated weights for policy 1, policy_version 17120 (0.0017) +[2023-09-26 08:10:53,183][92473] Updated weights for policy 0, policy_version 17120 (0.0017) +[2023-09-26 08:10:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8765440. Throughput: 0: 779.7, 1: 780.5. Samples: 2187729. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:53,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.180')] +[2023-09-26 08:10:58,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8798208. Throughput: 0: 785.3, 1: 786.8. Samples: 2197504. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:10:58,762][91478] Avg episode reward: [(0, '6.710'), (1, '6.040')] +[2023-09-26 08:11:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8830976. Throughput: 0: 788.8, 1: 787.8. Samples: 2207045. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:11:03,763][91478] Avg episode reward: [(0, '6.760'), (1, '6.190')] +[2023-09-26 08:11:06,239][92473] Updated weights for policy 0, policy_version 17280 (0.0017) +[2023-09-26 08:11:06,239][92474] Updated weights for policy 1, policy_version 17280 (0.0016) +[2023-09-26 08:11:08,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 8855552. Throughput: 0: 788.2, 1: 790.8. Samples: 2211694. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:11:08,763][91478] Avg episode reward: [(0, '6.760'), (1, '6.290')] +[2023-09-26 08:11:13,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 8888320. Throughput: 0: 783.9, 1: 782.5. Samples: 2220890. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:11:13,762][91478] Avg episode reward: [(0, '6.590'), (1, '6.640')] +[2023-09-26 08:11:13,901][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000017376_4448256.pth... +[2023-09-26 08:11:13,927][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000014432_3694592.pth +[2023-09-26 08:11:13,930][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000017376_4448256.pth... +[2023-09-26 08:11:13,963][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000014432_3694592.pth +[2023-09-26 08:11:18,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 8921088. Throughput: 0: 784.2, 1: 784.1. Samples: 2230298. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:11:18,762][91478] Avg episode reward: [(0, '6.650'), (1, '6.540')] +[2023-09-26 08:11:19,135][92474] Updated weights for policy 1, policy_version 17440 (0.0018) +[2023-09-26 08:11:19,135][92473] Updated weights for policy 0, policy_version 17440 (0.0015) +[2023-09-26 08:11:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8953856. Throughput: 0: 784.6, 1: 786.1. Samples: 2235076. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:11:23,763][91478] Avg episode reward: [(0, '6.550'), (1, '6.320')] +[2023-09-26 08:11:28,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 8986624. Throughput: 0: 789.4, 1: 790.3. Samples: 2244689. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:11:28,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.470')] +[2023-09-26 08:11:32,096][92474] Updated weights for policy 1, policy_version 17600 (0.0016) +[2023-09-26 08:11:32,096][92473] Updated weights for policy 0, policy_version 17600 (0.0016) +[2023-09-26 08:11:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 9019392. Throughput: 0: 789.8, 1: 788.9. Samples: 2254215. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:11:33,762][91478] Avg episode reward: [(0, '6.600'), (1, '6.610')] +[2023-09-26 08:11:38,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6262.0). Total num frames: 9052160. Throughput: 0: 790.6, 1: 789.1. Samples: 2258817. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:11:38,762][91478] Avg episode reward: [(0, '6.700'), (1, '6.580')] +[2023-09-26 08:11:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 9076736. Throughput: 0: 782.5, 1: 781.0. Samples: 2267859. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 08:11:43,762][91478] Avg episode reward: [(0, '6.670'), (1, '6.480')]
+[2023-09-26 08:11:45,432][92474] Updated weights for policy 1, policy_version 17760 (0.0018)
+[2023-09-26 08:11:45,432][92473] Updated weights for policy 0, policy_version 17760 (0.0019)
+[2023-09-26 08:11:48,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9109504. Throughput: 0: 780.7, 1: 781.6. Samples: 2277348. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 08:11:48,763][91478] Avg episode reward: [(0, '6.780'), (1, '6.520')]
+[2023-09-26 08:11:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 9142272. Throughput: 0: 779.5, 1: 777.1. Samples: 2281740. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 08:11:53,762][91478] Avg episode reward: [(0, '6.590'), (1, '6.750')]
+[2023-09-26 08:11:58,558][92474] Updated weights for policy 1, policy_version 17920 (0.0014)
+[2023-09-26 08:11:58,559][92473] Updated weights for policy 0, policy_version 17920 (0.0018)
+[2023-09-26 08:11:58,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6262.0). Total num frames: 9175040. Throughput: 0: 785.6, 1: 784.2. Samples: 2291531. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 08:11:58,762][91478] Avg episode reward: [(0, '6.760'), (1, '6.470')]
+[2023-09-26 08:12:03,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9207808. Throughput: 0: 782.7, 1: 782.5. Samples: 2300732. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:12:03,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.460')]
+[2023-09-26 08:12:08,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9232384. Throughput: 0: 783.3, 1: 784.4. Samples: 2305623. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:12:08,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.720')]
+[2023-09-26 08:12:11,504][92474] Updated weights for policy 1, policy_version 18080 (0.0017)
+[2023-09-26 08:12:11,504][92473] Updated weights for policy 0, policy_version 18080 (0.0017)
+[2023-09-26 08:12:13,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9265152. Throughput: 0: 780.5, 1: 780.4. Samples: 2314929. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:12:13,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.700')]
+[2023-09-26 08:12:18,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9297920. Throughput: 0: 780.2, 1: 780.2. Samples: 2324433. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:12:18,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.660')]
+[2023-09-26 08:12:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9330688. Throughput: 0: 778.1, 1: 779.6. Samples: 2328914. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:12:23,763][91478] Avg episode reward: [(0, '6.790'), (1, '6.790')]
+[2023-09-26 08:12:24,548][92474] Updated weights for policy 1, policy_version 18240 (0.0015)
+[2023-09-26 08:12:24,548][92473] Updated weights for policy 0, policy_version 18240 (0.0015)
+[2023-09-26 08:12:28,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 9363456. Throughput: 0: 785.4, 1: 787.9. Samples: 2338656. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:12:28,762][91478] Avg episode reward: [(0, '6.690'), (1, '6.620')]
+[2023-09-26 08:12:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 9388032. Throughput: 0: 780.0, 1: 779.1. Samples: 2347510. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 08:12:33,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.630')]
+[2023-09-26 08:12:37,845][92474] Updated weights for policy 1, policy_version 18400 (0.0018)
+[2023-09-26 08:12:37,846][92473] Updated weights for policy 0, policy_version 18400 (0.0019)
+[2023-09-26 08:12:38,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 9420800. Throughput: 0: 784.3, 1: 784.0. Samples: 2352314. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 08:12:38,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.780')]
+[2023-09-26 08:12:43,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9453568. Throughput: 0: 777.4, 1: 778.8. Samples: 2361561. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 08:12:43,762][91478] Avg episode reward: [(0, '6.540'), (1, '6.750')]
+[2023-09-26 08:12:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9486336. Throughput: 0: 783.0, 1: 782.1. Samples: 2371161. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 08:12:48,762][91478] Avg episode reward: [(0, '6.850'), (1, '6.700')]
+[2023-09-26 08:12:51,079][92474] Updated weights for policy 1, policy_version 18560 (0.0019)
+[2023-09-26 08:12:51,080][92473] Updated weights for policy 0, policy_version 18560 (0.0019)
+[2023-09-26 08:12:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6262.0). Total num frames: 9519104. Throughput: 0: 778.0, 1: 778.2. Samples: 2375652. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-26 08:12:53,762][91478] Avg episode reward: [(0, '6.830'), (1, '6.800')]
+[2023-09-26 08:12:58,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 9543680. Throughput: 0: 774.8, 1: 775.5. Samples: 2384691. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 08:12:58,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.620')]
+[2023-09-26 08:13:03,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 9576448. Throughput: 0: 775.8, 1: 775.4. Samples: 2394239. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 08:13:03,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.770')]
+[2023-09-26 08:13:04,055][92474] Updated weights for policy 1, policy_version 18720 (0.0015)
+[2023-09-26 08:13:04,056][92473] Updated weights for policy 0, policy_version 18720 (0.0017)
+[2023-09-26 08:13:08,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9609216. Throughput: 0: 781.0, 1: 780.8. Samples: 2399196. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 08:13:08,762][91478] Avg episode reward: [(0, '6.530'), (1, '6.750')]
+[2023-09-26 08:13:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9641984. Throughput: 0: 777.1, 1: 775.0. Samples: 2408504. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 08:13:13,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.220')]
+[2023-09-26 08:13:13,776][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000018832_4820992.pth...
+[2023-09-26 08:13:13,776][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000018832_4820992.pth...
+[2023-09-26 08:13:13,811][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000015904_4071424.pth
+[2023-09-26 08:13:13,812][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000015904_4071424.pth
+[2023-09-26 08:13:17,045][92474] Updated weights for policy 1, policy_version 18880 (0.0019)
+[2023-09-26 08:13:17,045][92473] Updated weights for policy 0, policy_version 18880 (0.0019)
+[2023-09-26 08:13:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9674752. Throughput: 0: 784.6, 1: 784.8. Samples: 2418132. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-26 08:13:18,762][91478] Avg episode reward: [(0, '6.760'), (1, '6.640')]
+[2023-09-26 08:13:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9707520. Throughput: 0: 782.3, 1: 783.7. Samples: 2422785. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:13:23,763][91478] Avg episode reward: [(0, '6.610'), (1, '6.500')]
+[2023-09-26 08:13:28,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 9732096. Throughput: 0: 782.0, 1: 782.2. Samples: 2431950. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:13:28,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.470')]
+[2023-09-26 08:13:30,193][92473] Updated weights for policy 0, policy_version 19040 (0.0018)
+[2023-09-26 08:13:30,193][92474] Updated weights for policy 1, policy_version 19040 (0.0016)
+[2023-09-26 08:13:33,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 9764864. Throughput: 0: 778.7, 1: 779.3. Samples: 2441270. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:13:33,762][91478] Avg episode reward: [(0, '6.740'), (1, '6.790')]
+[2023-09-26 08:13:38,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9797632. Throughput: 0: 782.7, 1: 781.9. Samples: 2446060. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 08:13:38,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.740')]
+[2023-09-26 08:13:43,274][92474] Updated weights for policy 1, policy_version 19200 (0.0017)
+[2023-09-26 08:13:43,274][92473] Updated weights for policy 0, policy_version 19200 (0.0016)
+[2023-09-26 08:13:43,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9830400. Throughput: 0: 787.0, 1: 787.7. Samples: 2455552. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 08:13:43,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.690')]
+[2023-09-26 08:13:48,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9863168. Throughput: 0: 786.4, 1: 786.3. Samples: 2465008. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 08:13:48,762][91478] Avg episode reward: [(0, '6.740'), (1, '6.570')]
+[2023-09-26 08:13:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 9895936. Throughput: 0: 784.7, 1: 786.2. Samples: 2469884. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 08:13:53,763][91478] Avg episode reward: [(0, '6.780'), (1, '6.480')]
+[2023-09-26 08:13:56,342][92473] Updated weights for policy 0, policy_version 19360 (0.0017)
+[2023-09-26 08:13:56,342][92474] Updated weights for policy 1, policy_version 19360 (0.0016)
+[2023-09-26 08:13:58,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9920512. Throughput: 0: 784.5, 1: 783.3. Samples: 2479057. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 08:13:58,763][91478] Avg episode reward: [(0, '6.500'), (1, '6.280')]
+[2023-09-26 08:14:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 9953280. Throughput: 0: 781.6, 1: 781.2. Samples: 2488458. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 08:14:03,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.100')]
+[2023-09-26 08:14:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6262.0). Total num frames: 9986048. Throughput: 0: 785.3, 1: 783.9. Samples: 2493399. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 08:14:08,762][91478] Avg episode reward: [(0, '6.720'), (1, '6.550')]
+[2023-09-26 08:14:09,217][92474] Updated weights for policy 1, policy_version 19520 (0.0016)
+[2023-09-26 08:14:09,219][92473] Updated weights for policy 0, policy_version 19520 (0.0016)
+[2023-09-26 08:14:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10018816. Throughput: 0: 786.5, 1: 786.4. Samples: 2502732. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 08:14:13,763][91478] Avg episode reward: [(0, '6.940'), (1, '6.390')]
+[2023-09-26 08:14:18,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10051584. Throughput: 0: 794.8, 1: 793.9. Samples: 2512759. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 08:14:18,762][91478] Avg episode reward: [(0, '6.600'), (1, '6.620')]
+[2023-09-26 08:14:21,998][92474] Updated weights for policy 1, policy_version 19680 (0.0017)
+[2023-09-26 08:14:21,998][92473] Updated weights for policy 0, policy_version 19680 (0.0015)
+[2023-09-26 08:14:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10084352. Throughput: 0: 790.2, 1: 790.3. Samples: 2517186. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:14:23,763][91478] Avg episode reward: [(0, '6.560'), (1, '6.870')]
+[2023-09-26 08:14:28,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 10117120. Throughput: 0: 794.8, 1: 793.1. Samples: 2527007. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:14:28,763][91478] Avg episode reward: [(0, '6.690'), (1, '6.780')]
+[2023-09-26 08:14:33,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 10141696. Throughput: 0: 789.8, 1: 789.4. Samples: 2536071. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:14:33,762][91478] Avg episode reward: [(0, '6.710'), (1, '6.930')]
+[2023-09-26 08:14:35,068][92474] Updated weights for policy 1, policy_version 19840 (0.0016)
+[2023-09-26 08:14:35,068][92473] Updated weights for policy 0, policy_version 19840 (0.0017)
+[2023-09-26 08:14:38,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10174464. Throughput: 0: 791.9, 1: 790.4. Samples: 2541085. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:14:38,762][91478] Avg episode reward: [(0, '6.610'), (1, '6.800')]
+[2023-09-26 08:14:43,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10207232. Throughput: 0: 786.3, 1: 787.2. Samples: 2549864. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:14:43,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.840')]
+[2023-09-26 08:14:48,226][92474] Updated weights for policy 1, policy_version 20000 (0.0018)
+[2023-09-26 08:14:48,226][92473] Updated weights for policy 0, policy_version 20000 (0.0017)
+[2023-09-26 08:14:48,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10240000. Throughput: 0: 791.8, 1: 792.0. Samples: 2559726. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:14:48,763][91478] Avg episode reward: [(0, '6.950'), (1, '6.710')]
+[2023-09-26 08:14:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10272768. Throughput: 0: 787.2, 1: 787.2. Samples: 2564248. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:14:53,763][91478] Avg episode reward: [(0, '6.850'), (1, '6.600')]
+[2023-09-26 08:14:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 10305536. Throughput: 0: 791.3, 1: 791.9. Samples: 2573978. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:14:58,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.740')]
+[2023-09-26 08:15:01,182][92474] Updated weights for policy 1, policy_version 20160 (0.0017)
+[2023-09-26 08:15:01,182][92473] Updated weights for policy 0, policy_version 20160 (0.0018)
+[2023-09-26 08:15:03,762][91478] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6289.8). Total num frames: 10334208. Throughput: 0: 782.6, 1: 784.1. Samples: 2583262. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:03,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.710')]
+[2023-09-26 08:15:08,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10362880. Throughput: 0: 784.6, 1: 783.4. Samples: 2587749. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:08,763][91478] Avg episode reward: [(0, '6.610'), (1, '6.820')]
+[2023-09-26 08:15:13,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10395648. Throughput: 0: 777.8, 1: 778.4. Samples: 2597037. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:13,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.690')]
+[2023-09-26 08:15:13,773][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000020304_5197824.pth...
+[2023-09-26 08:15:13,774][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000020304_5197824.pth...
+[2023-09-26 08:15:13,812][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000017376_4448256.pth
+[2023-09-26 08:15:13,814][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000017376_4448256.pth
+[2023-09-26 08:15:14,371][92474] Updated weights for policy 1, policy_version 20320 (0.0016)
+[2023-09-26 08:15:14,372][92473] Updated weights for policy 0, policy_version 20320 (0.0017)
+[2023-09-26 08:15:18,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10428416. Throughput: 0: 786.2, 1: 787.4. Samples: 2606887. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 08:15:18,762][91478] Avg episode reward: [(0, '6.740'), (1, '6.870')]
+[2023-09-26 08:15:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10461184. Throughput: 0: 779.5, 1: 780.0. Samples: 2611261. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 08:15:23,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.670')]
+[2023-09-26 08:15:27,236][92473] Updated weights for policy 0, policy_version 20480 (0.0017)
+[2023-09-26 08:15:27,236][92474] Updated weights for policy 1, policy_version 20480 (0.0018)
+[2023-09-26 08:15:28,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10493952. Throughput: 0: 792.4, 1: 792.0. Samples: 2621162. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 08:15:28,763][91478] Avg episode reward: [(0, '6.770'), (1, '6.850')]
+[2023-09-26 08:15:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6303.7). Total num frames: 10526720. Throughput: 0: 788.6, 1: 787.8. Samples: 2630664. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:33,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.900')]
+[2023-09-26 08:15:38,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6289.8). Total num frames: 10555392. Throughput: 0: 792.0, 1: 792.5. Samples: 2635551. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:38,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.970')]
+[2023-09-26 08:15:40,094][92473] Updated weights for policy 0, policy_version 20640 (0.0015)
+[2023-09-26 08:15:40,094][92474] Updated weights for policy 1, policy_version 20640 (0.0017)
+[2023-09-26 08:15:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10584064. Throughput: 0: 786.1, 1: 785.6. Samples: 2644705. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:43,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.940')]
+[2023-09-26 08:15:48,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10616832. Throughput: 0: 788.1, 1: 788.1. Samples: 2654190. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:48,762][91478] Avg episode reward: [(0, '6.670'), (1, '6.810')]
+[2023-09-26 08:15:53,312][92474] Updated weights for policy 1, policy_version 20800 (0.0016)
+[2023-09-26 08:15:53,313][92473] Updated weights for policy 0, policy_version 20800 (0.0017)
+[2023-09-26 08:15:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10649600. Throughput: 0: 786.5, 1: 787.4. Samples: 2658574. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:53,763][91478] Avg episode reward: [(0, '6.460'), (1, '7.050')]
+[2023-09-26 08:15:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10682368. Throughput: 0: 791.8, 1: 792.4. Samples: 2668324. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:15:58,762][91478] Avg episode reward: [(0, '6.750'), (1, '6.760')]
+[2023-09-26 08:16:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6275.9). Total num frames: 10706944. Throughput: 0: 782.8, 1: 782.7. Samples: 2677336. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 08:16:03,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.710')]
+[2023-09-26 08:16:06,578][92473] Updated weights for policy 0, policy_version 20960 (0.0015)
+[2023-09-26 08:16:06,578][92474] Updated weights for policy 1, policy_version 20960 (0.0017)
+[2023-09-26 08:16:08,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10739712. Throughput: 0: 786.8, 1: 786.8. Samples: 2682075. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 08:16:08,762][91478] Avg episode reward: [(0, '6.690'), (1, '6.780')]
+[2023-09-26 08:16:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10772480. Throughput: 0: 776.1, 1: 777.6. Samples: 2691078. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-26 08:16:13,763][91478] Avg episode reward: [(0, '6.830'), (1, '6.660')]
+[2023-09-26 08:16:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10805248. Throughput: 0: 781.9, 1: 783.0. Samples: 2701088. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:18,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.950')]
+[2023-09-26 08:16:19,475][92473] Updated weights for policy 0, policy_version 21120 (0.0019)
+[2023-09-26 08:16:19,476][92474] Updated weights for policy 1, policy_version 21120 (0.0018)
+[2023-09-26 08:16:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10838016. Throughput: 0: 778.3, 1: 778.1. Samples: 2705590. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:23,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.770')]
+[2023-09-26 08:16:28,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10870784. Throughput: 0: 781.7, 1: 782.6. Samples: 2715098. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:28,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.910')]
+[2023-09-26 08:16:32,591][92474] Updated weights for policy 1, policy_version 21280 (0.0017)
+[2023-09-26 08:16:32,592][92473] Updated weights for policy 0, policy_version 21280 (0.0017)
+[2023-09-26 08:16:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 10895360. Throughput: 0: 779.6, 1: 778.3. Samples: 2724292. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:33,763][91478] Avg episode reward: [(0, '6.860'), (1, '6.740')]
+[2023-09-26 08:16:38,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6212.3, 300 sec: 6275.9). Total num frames: 10928128. Throughput: 0: 783.3, 1: 784.5. Samples: 2729128. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:38,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.570')]
+[2023-09-26 08:16:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 10960896. Throughput: 0: 776.0, 1: 776.4. Samples: 2738182. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:43,762][91478] Avg episode reward: [(0, '6.720'), (1, '6.910')]
+[2023-09-26 08:16:46,059][92474] Updated weights for policy 1, policy_version 21440 (0.0019)
+[2023-09-26 08:16:46,059][92473] Updated weights for policy 0, policy_version 21440 (0.0015)
+[2023-09-26 08:16:48,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 10993664. Throughput: 0: 778.5, 1: 776.2. Samples: 2747295. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:48,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.720')]
+[2023-09-26 08:16:53,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 11018240. Throughput: 0: 774.4, 1: 776.7. Samples: 2751871. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:53,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.810')]
+[2023-09-26 08:16:58,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 11051008. Throughput: 0: 782.1, 1: 780.5. Samples: 2761394. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:16:58,762][91478] Avg episode reward: [(0, '6.820'), (1, '6.800')]
+[2023-09-26 08:16:59,071][92474] Updated weights for policy 1, policy_version 21600 (0.0016)
+[2023-09-26 08:16:59,071][92473] Updated weights for policy 0, policy_version 21600 (0.0017)
+[2023-09-26 08:17:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11083776. Throughput: 0: 775.6, 1: 776.7. Samples: 2770944. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 08:17:03,762][91478] Avg episode reward: [(0, '6.640'), (1, '6.670')]
+[2023-09-26 08:17:08,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11116544. Throughput: 0: 777.7, 1: 777.2. Samples: 2775559. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 08:17:08,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.870')]
+[2023-09-26 08:17:12,033][92473] Updated weights for policy 0, policy_version 21760 (0.0018)
+[2023-09-26 08:17:12,033][92474] Updated weights for policy 1, policy_version 21760 (0.0017)
+[2023-09-26 08:17:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 11149312. Throughput: 0: 779.6, 1: 780.0. Samples: 2785280. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-26 08:17:13,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.810')]
+[2023-09-26 08:17:13,773][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000021776_5574656.pth...
+[2023-09-26 08:17:13,773][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000021776_5574656.pth...
+[2023-09-26 08:17:13,802][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000018832_4820992.pth
+[2023-09-26 08:17:13,814][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000018832_4820992.pth
+[2023-09-26 08:17:18,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 11182080. Throughput: 0: 782.4, 1: 782.6. Samples: 2794718. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 08:17:18,762][91478] Avg episode reward: [(0, '6.740'), (1, '7.080')]
+[2023-09-26 08:17:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11214848. Throughput: 0: 782.9, 1: 782.5. Samples: 2799571. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 08:17:23,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.910')]
+[2023-09-26 08:17:25,116][92473] Updated weights for policy 0, policy_version 21920 (0.0017)
+[2023-09-26 08:17:25,116][92474] Updated weights for policy 1, policy_version 21920 (0.0018)
+[2023-09-26 08:17:28,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 11239424. Throughput: 0: 783.9, 1: 782.9. Samples: 2808688. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-26 08:17:28,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.730')]
+[2023-09-26 08:17:33,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 11272192. Throughput: 0: 784.6, 1: 787.7. Samples: 2818047. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:17:33,762][91478] Avg episode reward: [(0, '6.650'), (1, '6.640')]
+[2023-09-26 08:17:38,330][92473] Updated weights for policy 0, policy_version 22080 (0.0016)
+[2023-09-26 08:17:38,331][92474] Updated weights for policy 1, policy_version 22080 (0.0017)
+[2023-09-26 08:17:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11304960. Throughput: 0: 786.4, 1: 784.8. Samples: 2822573. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:17:38,763][91478] Avg episode reward: [(0, '6.840'), (1, '6.650')]
+[2023-09-26 08:17:43,762][91478] Fps is (10 sec: 6143.8, 60 sec: 6212.2, 300 sec: 6262.0). Total num frames: 11333632. Throughput: 0: 781.2, 1: 780.5. Samples: 2831674. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:17:43,763][91478] Avg episode reward: [(0, '6.860'), (1, '6.720')]
+[2023-09-26 08:17:48,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 11362304. Throughput: 0: 773.8, 1: 773.7. Samples: 2840582. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:17:48,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.540')]
+[2023-09-26 08:17:51,835][92474] Updated weights for policy 1, policy_version 22240 (0.0017)
+[2023-09-26 08:17:51,835][92473] Updated weights for policy 0, policy_version 22240 (0.0017)
+[2023-09-26 08:17:53,762][91478] Fps is (10 sec: 6144.2, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 11395072. Throughput: 0: 775.5, 1: 775.4. Samples: 2845349. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 08:17:53,762][91478] Avg episode reward: [(0, '6.750'), (1, '7.010')]
+[2023-09-26 08:17:58,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11427840. Throughput: 0: 773.7, 1: 773.7. Samples: 2854912. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 08:17:58,762][91478] Avg episode reward: [(0, '6.640'), (1, '6.860')]
+[2023-09-26 08:18:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11460608. Throughput: 0: 771.3, 1: 772.0. Samples: 2864166. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-26 08:18:03,762][91478] Avg episode reward: [(0, '6.850'), (1, '6.810')]
+[2023-09-26 08:18:04,920][92474] Updated weights for policy 1, policy_version 22400 (0.0017)
+[2023-09-26 08:18:04,922][92473] Updated weights for policy 0, policy_version 22400 (0.0015)
+[2023-09-26 08:18:08,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 11485184. Throughput: 0: 771.1, 1: 770.1. Samples: 2868923. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:18:08,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.950')]
+[2023-09-26 08:18:13,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 11517952. Throughput: 0: 773.7, 1: 773.5. Samples: 2878311. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:18:13,763][91478] Avg episode reward: [(0, '6.780'), (1, '6.880')]
+[2023-09-26 08:18:17,782][92473] Updated weights for policy 0, policy_version 22560 (0.0016)
+[2023-09-26 08:18:17,782][92474] Updated weights for policy 1, policy_version 22560 (0.0015)
+[2023-09-26 08:18:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 11550720. Throughput: 0: 775.9, 1: 774.4. Samples: 2887812. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:18:18,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.600')]
+[2023-09-26 08:18:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 11583488. Throughput: 0: 779.1, 1: 778.6. Samples: 2892667. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 08:18:23,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.650')]
+[2023-09-26 08:18:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11616256. Throughput: 0: 780.4, 1: 782.8. Samples: 2902018. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 08:18:28,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.690')]
+[2023-09-26 08:18:30,884][92474] Updated weights for policy 1, policy_version 22720 (0.0015)
+[2023-09-26 08:18:30,884][92473] Updated weights for policy 0, policy_version 22720 (0.0015)
+[2023-09-26 08:18:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11649024. Throughput: 0: 785.2, 1: 784.7. Samples: 2911226. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-26 08:18:33,763][91478] Avg episode reward: [(0, '6.770'), (1, '6.600')]
+[2023-09-26 08:18:38,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6212.3, 300 sec: 6262.0). Total num frames: 11677696. Throughput: 0: 786.2, 1: 786.3. Samples: 2916111. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 08:18:38,763][91478] Avg episode reward: [(0, '6.880'), (1, '7.000')]
+[2023-09-26 08:18:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 11706368. Throughput: 0: 785.9, 1: 785.0. Samples: 2925601. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 08:18:43,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.980')]
+[2023-09-26 08:18:43,941][92473] Updated weights for policy 0, policy_version 22880 (0.0017)
+[2023-09-26 08:18:43,941][92474] Updated weights for policy 1, policy_version 22880 (0.0017)
+[2023-09-26 08:18:48,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11739136. Throughput: 0: 784.3, 1: 783.3. Samples: 2934708. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-26 08:18:48,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.900')]
+[2023-09-26 08:18:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11771904. Throughput: 0: 780.2, 1: 780.7. Samples: 2939165. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:18:53,763][91478] Avg episode reward: [(0, '6.350'), (1, '6.880')]
+[2023-09-26 08:18:57,210][92474] Updated weights for policy 1, policy_version 23040 (0.0017)
+[2023-09-26 08:18:57,210][92473] Updated weights for policy 0, policy_version 23040 (0.0017)
+[2023-09-26 08:18:58,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 11804672. Throughput: 0: 784.2, 1: 785.6. Samples: 2948951. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:18:58,762][91478] Avg episode reward: [(0, '6.270'), (1, '6.950')]
+[2023-09-26 08:19:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 11829248. Throughput: 0: 775.9, 1: 775.5. Samples: 2957622. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:03,763][91478] Avg episode reward: [(0, '6.340'), (1, '6.830')]
+[2023-09-26 08:19:08,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11862016. Throughput: 0: 776.0, 1: 775.0. Samples: 2962462. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:08,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.690')]
+[2023-09-26 08:19:10,497][92473] Updated weights for policy 0, policy_version 23200 (0.0017)
+[2023-09-26 08:19:10,497][92474] Updated weights for policy 1, policy_version 23200 (0.0016)
+[2023-09-26 08:19:13,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 11894784. Throughput: 0: 774.3, 1: 773.7. Samples: 2971681. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:13,762][91478] Avg episode reward: [(0, '6.740'), (1, '6.830')]
+[2023-09-26 08:19:13,773][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000023232_5947392.pth...
+[2023-09-26 08:19:13,774][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000023232_5947392.pth...
+[2023-09-26 08:19:13,810][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000020304_5197824.pth
+[2023-09-26 08:19:13,813][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000020304_5197824.pth
+[2023-09-26 08:19:18,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11927552. Throughput: 0: 779.0, 1: 779.9. Samples: 2981377. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:18,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.910')]
+[2023-09-26 08:19:23,573][92473] Updated weights for policy 0, policy_version 23360 (0.0017)
+[2023-09-26 08:19:23,575][92474] Updated weights for policy 1, policy_version 23360 (0.0015)
+[2023-09-26 08:19:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 11960320. Throughput: 0: 775.6, 1: 777.1. Samples: 2985984. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:23,763][91478] Avg episode reward: [(0, '6.790'), (1, '6.830')]
+[2023-09-26 08:19:28,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6262.0). Total num frames: 11988992. Throughput: 0: 777.9, 1: 777.6. Samples: 2995598. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:28,763][91478] Avg episode reward: [(0, '6.730'), (1, '7.050')]
+[2023-09-26 08:19:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 12017664. Throughput: 0: 774.5, 1: 775.5. Samples: 3004455. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:33,763][91478] Avg episode reward: [(0, '6.610'), (1, '6.750')]
+[2023-09-26 08:19:36,716][92473] Updated weights for policy 0, policy_version 23520 (0.0017)
+[2023-09-26 08:19:36,716][92474] Updated weights for policy 1, policy_version 23520 (0.0018)
+[2023-09-26 08:19:38,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 12050432. Throughput: 0: 780.6, 1: 780.4. Samples: 3009407. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:38,763][91478] Avg episode reward: [(0, '6.940'), (1, '7.050')]
+[2023-09-26 08:19:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12083200. Throughput: 0: 777.6, 1: 776.1. Samples: 3018865. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:43,763][91478] Avg episode reward: [(0, '6.840'), (1, '6.920')]
+[2023-09-26 08:19:48,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12115968. Throughput: 0: 786.8, 1: 787.1. Samples: 3028448. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:48,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.800')]
+[2023-09-26 08:19:49,660][92474] Updated weights for policy 1, policy_version 23680 (0.0018)
+[2023-09-26 08:19:49,661][92473] Updated weights for policy 0, policy_version 23680 (0.0017)
+[2023-09-26 08:19:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12148736. Throughput: 0: 783.8, 1: 785.7. Samples: 3033088. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:53,763][91478] Avg episode reward: [(0, '6.620'), (1, '7.070')]
+[2023-09-26 08:19:58,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6234.3). Total num frames: 12173312. Throughput: 0: 786.5, 1: 785.4. Samples: 3042418. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:19:58,763][91478] Avg episode reward: [(0, '6.850'), (1, '6.850')]
+[2023-09-26 08:20:02,762][92474] Updated weights for policy 1, policy_version 23840 (0.0017)
+[2023-09-26 08:20:02,762][92473] Updated weights for policy 0, policy_version 23840 (0.0017)
+[2023-09-26 08:20:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12206080. Throughput: 0: 781.9, 1: 780.4. Samples: 3051680. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:20:03,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.830')]
+[2023-09-26 08:20:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12238848. Throughput: 0: 785.8, 1: 784.3. Samples: 3056637.
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:20:08,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.960')] +[2023-09-26 08:20:13,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12271616. Throughput: 0: 782.0, 1: 781.7. Samples: 3065965. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:20:13,762][91478] Avg episode reward: [(0, '6.710'), (1, '7.000')] +[2023-09-26 08:20:15,699][92474] Updated weights for policy 1, policy_version 24000 (0.0018) +[2023-09-26 08:20:15,699][92473] Updated weights for policy 0, policy_version 24000 (0.0016) +[2023-09-26 08:20:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12304384. Throughput: 0: 789.7, 1: 788.5. Samples: 3075476. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:20:18,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.800')] +[2023-09-26 08:20:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12337152. Throughput: 0: 785.8, 1: 787.2. Samples: 3080193. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:20:23,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.810')] +[2023-09-26 08:20:28,744][92474] Updated weights for policy 1, policy_version 24160 (0.0018) +[2023-09-26 08:20:28,744][92473] Updated weights for policy 0, policy_version 24160 (0.0018) +[2023-09-26 08:20:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6348.8, 300 sec: 6248.1). Total num frames: 12369920. Throughput: 0: 787.0, 1: 786.5. Samples: 3089673. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:20:28,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.860')] +[2023-09-26 08:20:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6234.2). Total num frames: 12394496. Throughput: 0: 785.0, 1: 785.4. Samples: 3099115. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:20:33,763][91478] Avg episode reward: [(0, '6.950'), (1, '6.870')] +[2023-09-26 08:20:38,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12427264. Throughput: 0: 786.4, 1: 784.4. Samples: 3103774. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:20:38,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.760')] +[2023-09-26 08:20:41,791][92473] Updated weights for policy 0, policy_version 24320 (0.0017) +[2023-09-26 08:20:41,791][92474] Updated weights for policy 1, policy_version 24320 (0.0017) +[2023-09-26 08:20:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12460032. Throughput: 0: 784.9, 1: 785.2. Samples: 3113074. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:20:43,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.920')] +[2023-09-26 08:20:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12492800. Throughput: 0: 790.4, 1: 791.2. Samples: 3122852. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:20:48,763][91478] Avg episode reward: [(0, '6.620'), (1, '7.060')] +[2023-09-26 08:20:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12525568. Throughput: 0: 784.5, 1: 785.8. Samples: 3127300. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:20:53,762][91478] Avg episode reward: [(0, '6.850'), (1, '6.950')] +[2023-09-26 08:20:54,751][92474] Updated weights for policy 1, policy_version 24480 (0.0017) +[2023-09-26 08:20:54,752][92473] Updated weights for policy 0, policy_version 24480 (0.0016) +[2023-09-26 08:20:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 12558336. Throughput: 0: 788.9, 1: 788.9. Samples: 3136966. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:20:58,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.820')] +[2023-09-26 08:21:03,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12582912. Throughput: 0: 785.3, 1: 785.3. Samples: 3146150. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:21:03,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.960')] +[2023-09-26 08:21:07,768][92473] Updated weights for policy 0, policy_version 24640 (0.0020) +[2023-09-26 08:21:07,768][92474] Updated weights for policy 1, policy_version 24640 (0.0019) +[2023-09-26 08:21:08,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12615680. Throughput: 0: 788.7, 1: 787.5. Samples: 3151123. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:08,763][91478] Avg episode reward: [(0, '6.390'), (1, '6.990')] +[2023-09-26 08:21:13,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12648448. Throughput: 0: 784.2, 1: 784.6. Samples: 3160269. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:13,763][91478] Avg episode reward: [(0, '6.510'), (1, '7.100')] +[2023-09-26 08:21:13,772][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000024704_6324224.pth... +[2023-09-26 08:21:13,772][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000024704_6324224.pth... +[2023-09-26 08:21:13,806][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000021776_5574656.pth +[2023-09-26 08:21:13,808][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000021776_5574656.pth +[2023-09-26 08:21:18,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 12681216. Throughput: 0: 787.5, 1: 787.4. Samples: 3169984. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:18,762][91478] Avg episode reward: [(0, '6.700'), (1, '6.950')] +[2023-09-26 08:21:20,832][92473] Updated weights for policy 0, policy_version 24800 (0.0016) +[2023-09-26 08:21:20,832][92474] Updated weights for policy 1, policy_version 24800 (0.0017) +[2023-09-26 08:21:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12713984. Throughput: 0: 784.7, 1: 785.9. Samples: 3174449. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:23,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.800')] +[2023-09-26 08:21:28,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 12738560. Throughput: 0: 786.8, 1: 786.7. Samples: 3183882. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:28,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.880')] +[2023-09-26 08:21:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12771328. Throughput: 0: 777.3, 1: 777.3. Samples: 3192811. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:33,763][91478] Avg episode reward: [(0, '6.650'), (1, '7.040')] +[2023-09-26 08:21:34,396][92474] Updated weights for policy 1, policy_version 24960 (0.0018) +[2023-09-26 08:21:34,396][92473] Updated weights for policy 0, policy_version 24960 (0.0015) +[2023-09-26 08:21:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12804096. Throughput: 0: 778.4, 1: 776.9. Samples: 3197291. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:38,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.840')] +[2023-09-26 08:21:43,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 12836864. Throughput: 0: 775.8, 1: 776.2. Samples: 3206805. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:43,762][91478] Avg episode reward: [(0, '6.580'), (1, '6.940')] +[2023-09-26 08:21:47,420][92473] Updated weights for policy 0, policy_version 25120 (0.0014) +[2023-09-26 08:21:47,420][92474] Updated weights for policy 1, policy_version 25120 (0.0016) +[2023-09-26 08:21:48,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6212.3, 300 sec: 6262.0). Total num frames: 12865536. Throughput: 0: 778.3, 1: 777.4. Samples: 3216154. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:48,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.980')] +[2023-09-26 08:21:53,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 12894208. Throughput: 0: 773.3, 1: 773.9. Samples: 3220747. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:53,762][91478] Avg episode reward: [(0, '6.570'), (1, '7.010')] +[2023-09-26 08:21:58,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 12926976. Throughput: 0: 772.6, 1: 773.1. Samples: 3229828. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:21:58,762][91478] Avg episode reward: [(0, '6.740'), (1, '7.020')] +[2023-09-26 08:22:00,653][92473] Updated weights for policy 0, policy_version 25280 (0.0017) +[2023-09-26 08:22:00,654][92474] Updated weights for policy 1, policy_version 25280 (0.0018) +[2023-09-26 08:22:03,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 12959744. Throughput: 0: 771.5, 1: 770.4. Samples: 3239366. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:22:03,763][91478] Avg episode reward: [(0, '6.720'), (1, '7.050')] +[2023-09-26 08:22:08,762][91478] Fps is (10 sec: 6143.9, 60 sec: 6212.3, 300 sec: 6234.3). Total num frames: 12988416. Throughput: 0: 772.2, 1: 771.0. Samples: 3243895. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:22:08,763][91478] Avg episode reward: [(0, '6.720'), (1, '7.070')] +[2023-09-26 08:22:13,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13017088. Throughput: 0: 769.5, 1: 770.3. Samples: 3253170. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:22:13,762][91478] Avg episode reward: [(0, '6.540'), (1, '7.090')] +[2023-09-26 08:22:13,962][92474] Updated weights for policy 1, policy_version 25440 (0.0017) +[2023-09-26 08:22:13,962][92473] Updated weights for policy 0, policy_version 25440 (0.0016) +[2023-09-26 08:22:18,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13049856. Throughput: 0: 773.7, 1: 774.2. Samples: 3262465. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:22:18,762][91478] Avg episode reward: [(0, '6.720'), (1, '7.120')] +[2023-09-26 08:22:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 13082624. Throughput: 0: 773.6, 1: 773.9. Samples: 3266926. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:22:23,763][91478] Avg episode reward: [(0, '6.600'), (1, '7.090')] +[2023-09-26 08:22:27,281][92473] Updated weights for policy 0, policy_version 25600 (0.0017) +[2023-09-26 08:22:27,281][92474] Updated weights for policy 1, policy_version 25600 (0.0018) +[2023-09-26 08:22:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 13115392. Throughput: 0: 773.6, 1: 775.6. Samples: 3276515. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:22:28,762][91478] Avg episode reward: [(0, '6.730'), (1, '7.080')] +[2023-09-26 08:22:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13139968. Throughput: 0: 769.6, 1: 770.8. Samples: 3285469. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:22:33,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.800')] +[2023-09-26 08:22:38,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6234.3). Total num frames: 13172736. Throughput: 0: 774.6, 1: 773.8. Samples: 3290421. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:22:38,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.740')] +[2023-09-26 08:22:40,439][92474] Updated weights for policy 1, policy_version 25760 (0.0018) +[2023-09-26 08:22:40,440][92473] Updated weights for policy 0, policy_version 25760 (0.0016) +[2023-09-26 08:22:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 13205504. Throughput: 0: 774.2, 1: 773.8. Samples: 3299488. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:22:43,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.650')] +[2023-09-26 08:22:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 13238272. Throughput: 0: 776.4, 1: 777.0. Samples: 3309269. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:22:48,763][91478] Avg episode reward: [(0, '6.560'), (1, '6.730')] +[2023-09-26 08:22:53,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.2, 300 sec: 6234.2). Total num frames: 13266944. Throughput: 0: 774.2, 1: 776.3. Samples: 3313664. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:22:53,763][91478] Avg episode reward: [(0, '6.940'), (1, '6.680')] +[2023-09-26 08:22:53,780][92473] Updated weights for policy 0, policy_version 25920 (0.0017) +[2023-09-26 08:22:53,780][92474] Updated weights for policy 1, policy_version 25920 (0.0018) +[2023-09-26 08:22:58,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13295616. Throughput: 0: 771.8, 1: 771.0. Samples: 3322596. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:22:58,763][91478] Avg episode reward: [(0, '6.860'), (1, '6.610')] +[2023-09-26 08:23:03,762][91478] Fps is (10 sec: 6143.9, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 13328384. Throughput: 0: 773.4, 1: 771.8. Samples: 3332001. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:23:03,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.880')] +[2023-09-26 08:23:07,137][92474] Updated weights for policy 1, policy_version 26080 (0.0018) +[2023-09-26 08:23:07,137][92473] Updated weights for policy 0, policy_version 26080 (0.0018) +[2023-09-26 08:23:08,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 13361152. Throughput: 0: 769.2, 1: 770.3. Samples: 3336205. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:23:08,763][91478] Avg episode reward: [(0, '6.760'), (1, '7.150')] +[2023-09-26 08:23:13,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13385728. Throughput: 0: 769.9, 1: 769.1. Samples: 3345770. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:23:13,763][91478] Avg episode reward: [(0, '6.710'), (1, '7.030')] +[2023-09-26 08:23:13,776][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000026160_6696960.pth... +[2023-09-26 08:23:13,803][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000023232_5947392.pth +[2023-09-26 08:23:13,827][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000026160_6696960.pth... +[2023-09-26 08:23:13,854][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000023232_5947392.pth +[2023-09-26 08:23:18,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13418496. Throughput: 0: 771.0, 1: 771.7. Samples: 3354890. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:23:18,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.840')] +[2023-09-26 08:23:20,282][92473] Updated weights for policy 0, policy_version 26240 (0.0017) +[2023-09-26 08:23:20,283][92474] Updated weights for policy 1, policy_version 26240 (0.0016) +[2023-09-26 08:23:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13451264. Throughput: 0: 771.5, 1: 771.3. Samples: 3359847. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:23:23,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.750')] +[2023-09-26 08:23:28,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13484032. Throughput: 0: 771.2, 1: 772.6. Samples: 3368958. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:23:28,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.930')] +[2023-09-26 08:23:33,622][92473] Updated weights for policy 0, policy_version 26400 (0.0017) +[2023-09-26 08:23:33,622][92474] Updated weights for policy 1, policy_version 26400 (0.0017) +[2023-09-26 08:23:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6234.3). Total num frames: 13516800. Throughput: 0: 766.0, 1: 766.7. Samples: 3378241. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:23:33,763][91478] Avg episode reward: [(0, '6.770'), (1, '6.780')] +[2023-09-26 08:23:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13541376. Throughput: 0: 771.5, 1: 771.2. Samples: 3383086. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:23:38,763][91478] Avg episode reward: [(0, '6.870'), (1, '6.820')] +[2023-09-26 08:23:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13574144. Throughput: 0: 771.6, 1: 771.9. Samples: 3392054. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:23:43,763][91478] Avg episode reward: [(0, '6.660'), (1, '7.130')] +[2023-09-26 08:23:46,866][92474] Updated weights for policy 1, policy_version 26560 (0.0018) +[2023-09-26 08:23:46,866][92473] Updated weights for policy 0, policy_version 26560 (0.0017) +[2023-09-26 08:23:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13606912. Throughput: 0: 773.1, 1: 773.3. Samples: 3401590. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:23:48,763][91478] Avg episode reward: [(0, '6.710'), (1, '7.020')] +[2023-09-26 08:23:53,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 13639680. Throughput: 0: 775.4, 1: 774.1. Samples: 3405929. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:23:53,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.950')] +[2023-09-26 08:23:58,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 13672448. Throughput: 0: 779.1, 1: 777.7. Samples: 3415826. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:23:58,762][91478] Avg episode reward: [(0, '6.810'), (1, '7.020')] +[2023-09-26 08:23:59,804][92473] Updated weights for policy 0, policy_version 26720 (0.0017) +[2023-09-26 08:23:59,806][92474] Updated weights for policy 1, policy_version 26720 (0.0017) +[2023-09-26 08:24:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 13705216. Throughput: 0: 781.4, 1: 781.0. Samples: 3425198. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:24:03,763][91478] Avg episode reward: [(0, '6.700'), (1, '7.070')] +[2023-09-26 08:24:08,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 13737984. Throughput: 0: 781.4, 1: 780.3. Samples: 3430123. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:24:08,763][91478] Avg episode reward: [(0, '6.780'), (1, '7.010')] +[2023-09-26 08:24:12,607][92474] Updated weights for policy 1, policy_version 26880 (0.0019) +[2023-09-26 08:24:12,607][92473] Updated weights for policy 0, policy_version 26880 (0.0020) +[2023-09-26 08:24:13,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 13762560. Throughput: 0: 786.0, 1: 784.7. Samples: 3439641. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:24:13,763][91478] Avg episode reward: [(0, '6.780'), (1, '6.810')] +[2023-09-26 08:24:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 13795328. Throughput: 0: 783.8, 1: 784.9. Samples: 3448832. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:24:18,763][91478] Avg episode reward: [(0, '6.560'), (1, '6.980')] +[2023-09-26 08:24:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6234.3). Total num frames: 13828096. Throughput: 0: 782.5, 1: 780.3. Samples: 3453414. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:24:23,763][91478] Avg episode reward: [(0, '6.590'), (1, '6.940')] +[2023-09-26 08:24:26,066][92474] Updated weights for policy 1, policy_version 27040 (0.0017) +[2023-09-26 08:24:26,066][92473] Updated weights for policy 0, policy_version 27040 (0.0017) +[2023-09-26 08:24:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 13860864. Throughput: 0: 784.3, 1: 784.0. Samples: 3462628. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:24:28,763][91478] Avg episode reward: [(0, '6.630'), (1, '6.820')] +[2023-09-26 08:24:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 13885440. Throughput: 0: 778.0, 1: 777.8. Samples: 3471604. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:24:33,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.900')] +[2023-09-26 08:24:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 13918208. Throughput: 0: 783.5, 1: 783.2. Samples: 3476434. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:24:38,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.850')] +[2023-09-26 08:24:39,302][92474] Updated weights for policy 1, policy_version 27200 (0.0016) +[2023-09-26 08:24:39,302][92473] Updated weights for policy 0, policy_version 27200 (0.0017) +[2023-09-26 08:24:43,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 13950976. Throughput: 0: 775.7, 1: 777.0. Samples: 3485696. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:24:43,762][91478] Avg episode reward: [(0, '6.780'), (1, '6.760')] +[2023-09-26 08:24:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 13983744. Throughput: 0: 778.6, 1: 778.2. Samples: 3495251. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:24:48,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.870')] +[2023-09-26 08:24:52,389][92474] Updated weights for policy 1, policy_version 27360 (0.0020) +[2023-09-26 08:24:52,389][92473] Updated weights for policy 0, policy_version 27360 (0.0015) +[2023-09-26 08:24:53,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 14016512. Throughput: 0: 775.4, 1: 778.0. Samples: 3500026. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:24:53,763][91478] Avg episode reward: [(0, '6.740'), (1, '7.170')] +[2023-09-26 08:24:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14041088. Throughput: 0: 769.9, 1: 771.1. Samples: 3508988. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:24:58,763][91478] Avg episode reward: [(0, '6.660'), (1, '7.170')] +[2023-09-26 08:25:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14073856. Throughput: 0: 773.8, 1: 773.7. Samples: 3518468. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:25:03,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.680')] +[2023-09-26 08:25:05,529][92473] Updated weights for policy 0, policy_version 27520 (0.0016) +[2023-09-26 08:25:05,529][92474] Updated weights for policy 1, policy_version 27520 (0.0015) +[2023-09-26 08:25:08,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14106624. Throughput: 0: 773.8, 1: 775.3. Samples: 3523122. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:25:08,762][91478] Avg episode reward: [(0, '6.810'), (1, '6.810')] +[2023-09-26 08:25:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14139392. Throughput: 0: 778.8, 1: 780.4. Samples: 3532795. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:25:13,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.840')] +[2023-09-26 08:25:13,773][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000027616_7069696.pth... +[2023-09-26 08:25:13,773][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000027616_7069696.pth... +[2023-09-26 08:25:13,808][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000024704_6324224.pth +[2023-09-26 08:25:13,809][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000024704_6324224.pth +[2023-09-26 08:25:18,526][92474] Updated weights for policy 1, policy_version 27680 (0.0015) +[2023-09-26 08:25:18,527][92473] Updated weights for policy 0, policy_version 27680 (0.0017) +[2023-09-26 08:25:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). 
Total num frames: 14172160. Throughput: 0: 784.4, 1: 784.2. Samples: 3542189. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:25:18,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.590')] +[2023-09-26 08:25:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14204928. Throughput: 0: 784.7, 1: 785.9. Samples: 3547113. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:25:23,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.840')] +[2023-09-26 08:25:28,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14229504. Throughput: 0: 783.7, 1: 782.1. Samples: 3556157. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:25:28,762][91478] Avg episode reward: [(0, '6.650'), (1, '6.390')] +[2023-09-26 08:25:31,666][92474] Updated weights for policy 1, policy_version 27840 (0.0017) +[2023-09-26 08:25:31,667][92473] Updated weights for policy 0, policy_version 27840 (0.0016) +[2023-09-26 08:25:33,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14262272. Throughput: 0: 780.5, 1: 782.1. Samples: 3565569. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-26 08:25:33,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.700')] +[2023-09-26 08:25:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 14295040. Throughput: 0: 783.6, 1: 782.2. Samples: 3570486. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:25:38,762][91478] Avg episode reward: [(0, '6.760'), (1, '6.970')] +[2023-09-26 08:25:43,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14327808. Throughput: 0: 790.1, 1: 788.7. Samples: 3580035. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:25:43,762][91478] Avg episode reward: [(0, '6.510'), (1, '7.030')] +[2023-09-26 08:25:44,398][92473] Updated weights for policy 0, policy_version 28000 (0.0014) +[2023-09-26 08:25:44,400][92474] Updated weights for policy 1, policy_version 28000 (0.0017) +[2023-09-26 08:25:48,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14360576. Throughput: 0: 794.2, 1: 792.6. Samples: 3589876. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:25:48,763][91478] Avg episode reward: [(0, '6.360'), (1, '6.860')] +[2023-09-26 08:25:53,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14393344. Throughput: 0: 790.2, 1: 790.8. Samples: 3594263. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:25:53,763][91478] Avg episode reward: [(0, '6.500'), (1, '6.980')] +[2023-09-26 08:25:57,373][92474] Updated weights for policy 1, policy_version 28160 (0.0015) +[2023-09-26 08:25:57,373][92473] Updated weights for policy 0, policy_version 28160 (0.0018) +[2023-09-26 08:25:58,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6348.8, 300 sec: 6234.3). Total num frames: 14422016. Throughput: 0: 791.7, 1: 791.1. Samples: 3604022. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:25:58,763][91478] Avg episode reward: [(0, '6.570'), (1, '7.010')] +[2023-09-26 08:26:03,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14450688. Throughput: 0: 783.0, 1: 784.2. Samples: 3612711. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:26:03,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.910')] +[2023-09-26 08:26:08,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14483456. Throughput: 0: 783.6, 1: 782.6. Samples: 3617595. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:26:08,762][91478] Avg episode reward: [(0, '6.830'), (1, '6.870')] +[2023-09-26 08:26:10,700][92474] Updated weights for policy 1, policy_version 28320 (0.0017) +[2023-09-26 08:26:10,701][92473] Updated weights for policy 0, policy_version 28320 (0.0017) +[2023-09-26 08:26:13,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14516224. Throughput: 0: 786.4, 1: 788.1. Samples: 3627008. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:26:13,763][91478] Avg episode reward: [(0, '6.700'), (1, '7.070')] +[2023-09-26 08:26:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14548992. Throughput: 0: 787.0, 1: 786.0. Samples: 3636353. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 08:26:18,763][91478] Avg episode reward: [(0, '6.580'), (1, '7.020')] +[2023-09-26 08:26:23,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14573568. Throughput: 0: 784.4, 1: 784.5. Samples: 3641087. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 08:26:23,763][91478] Avg episode reward: [(0, '6.650'), (1, '7.000')] +[2023-09-26 08:26:23,839][92473] Updated weights for policy 0, policy_version 28480 (0.0017) +[2023-09-26 08:26:23,839][92474] Updated weights for policy 1, policy_version 28480 (0.0017) +[2023-09-26 08:26:28,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14606336. Throughput: 0: 779.5, 1: 779.8. Samples: 3650200. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 08:26:28,763][91478] Avg episode reward: [(0, '6.540'), (1, '6.660')] +[2023-09-26 08:26:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14639104. Throughput: 0: 775.8, 1: 777.5. Samples: 3659776. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-26 08:26:33,763][91478] Avg episode reward: [(0, '6.790'), (1, '6.680')] +[2023-09-26 08:26:37,055][92473] Updated weights for policy 0, policy_version 28640 (0.0017) +[2023-09-26 08:26:37,055][92474] Updated weights for policy 1, policy_version 28640 (0.0017) +[2023-09-26 08:26:38,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14671872. Throughput: 0: 775.1, 1: 774.1. Samples: 3663979. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:26:38,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.750')] +[2023-09-26 08:26:43,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6234.3). Total num frames: 14704640. Throughput: 0: 776.7, 1: 775.9. Samples: 3673890. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:26:43,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.760')] +[2023-09-26 08:26:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 14737408. Throughput: 0: 785.3, 1: 783.4. Samples: 3683301. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:26:48,763][91478] Avg episode reward: [(0, '6.520'), (1, '6.980')] +[2023-09-26 08:26:50,075][92473] Updated weights for policy 0, policy_version 28800 (0.0018) +[2023-09-26 08:26:50,075][92474] Updated weights for policy 1, policy_version 28800 (0.0015) +[2023-09-26 08:26:53,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 14761984. Throughput: 0: 780.4, 1: 781.2. Samples: 3687868. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-26 08:26:53,762][91478] Avg episode reward: [(0, '6.710'), (1, '6.730')] +[2023-09-26 08:26:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6220.4). Total num frames: 14794752. Throughput: 0: 778.7, 1: 777.1. Samples: 3697020. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:26:58,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.750')] +[2023-09-26 08:27:03,289][92473] Updated weights for policy 0, policy_version 28960 (0.0018) +[2023-09-26 08:27:03,289][92474] Updated weights for policy 1, policy_version 28960 (0.0018) +[2023-09-26 08:27:03,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6234.2). Total num frames: 14827520. Throughput: 0: 782.0, 1: 779.5. Samples: 3706621. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:27:03,763][91478] Avg episode reward: [(0, '6.630'), (1, '7.000')] +[2023-09-26 08:27:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 14860288. Throughput: 0: 775.8, 1: 777.3. Samples: 3710976. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:27:08,762][91478] Avg episode reward: [(0, '6.720'), (1, '6.890')] +[2023-09-26 08:27:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 14893056. Throughput: 0: 782.6, 1: 782.1. Samples: 3720612. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:13,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.900')] +[2023-09-26 08:27:13,775][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000029088_7446528.pth... +[2023-09-26 08:27:13,776][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000029088_7446528.pth... +[2023-09-26 08:27:13,809][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000026160_6696960.pth +[2023-09-26 08:27:13,816][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000026160_6696960.pth +[2023-09-26 08:27:16,465][92473] Updated weights for policy 0, policy_version 29120 (0.0017) +[2023-09-26 08:27:16,465][92474] Updated weights for policy 1, policy_version 29120 (0.0017) +[2023-09-26 08:27:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). 
Total num frames: 14917632. Throughput: 0: 775.3, 1: 774.0. Samples: 3729498. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:18,763][91478] Avg episode reward: [(0, '6.790'), (1, '6.910')] +[2023-09-26 08:27:23,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 14950400. Throughput: 0: 779.8, 1: 780.5. Samples: 3734194. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:23,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.900')] +[2023-09-26 08:27:28,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 14983168. Throughput: 0: 775.4, 1: 775.7. Samples: 3743691. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:28,762][91478] Avg episode reward: [(0, '6.710'), (1, '6.890')] +[2023-09-26 08:27:29,805][92474] Updated weights for policy 1, policy_version 29280 (0.0017) +[2023-09-26 08:27:29,805][92473] Updated weights for policy 0, policy_version 29280 (0.0017) +[2023-09-26 08:27:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15015936. Throughput: 0: 771.5, 1: 772.9. Samples: 3752802. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:33,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.750')] +[2023-09-26 08:27:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15040512. Throughput: 0: 774.6, 1: 773.7. Samples: 3757541. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:38,762][91478] Avg episode reward: [(0, '6.730'), (1, '6.880')] +[2023-09-26 08:27:43,009][92473] Updated weights for policy 0, policy_version 29440 (0.0017) +[2023-09-26 08:27:43,010][92474] Updated weights for policy 1, policy_version 29440 (0.0018) +[2023-09-26 08:27:43,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15073280. Throughput: 0: 772.8, 1: 773.5. Samples: 3766603. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:43,762][91478] Avg episode reward: [(0, '6.840'), (1, '6.990')] +[2023-09-26 08:27:48,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6144.0, 300 sec: 6234.2). Total num frames: 15106048. Throughput: 0: 773.1, 1: 775.2. Samples: 3776295. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:48,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.840')] +[2023-09-26 08:27:53,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15138816. Throughput: 0: 774.7, 1: 773.8. Samples: 3780657. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:53,763][91478] Avg episode reward: [(0, '6.610'), (1, '6.940')] +[2023-09-26 08:27:56,026][92474] Updated weights for policy 1, policy_version 29600 (0.0018) +[2023-09-26 08:27:56,027][92473] Updated weights for policy 0, policy_version 29600 (0.0017) +[2023-09-26 08:27:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15171584. Throughput: 0: 774.6, 1: 775.8. Samples: 3790376. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:27:58,763][91478] Avg episode reward: [(0, '6.710'), (1, '7.000')] +[2023-09-26 08:28:03,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 15196160. Throughput: 0: 775.7, 1: 775.4. Samples: 3799298. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:28:03,762][91478] Avg episode reward: [(0, '6.680'), (1, '6.870')] +[2023-09-26 08:28:08,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 15228928. Throughput: 0: 779.0, 1: 777.1. Samples: 3804221. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:28:08,762][91478] Avg episode reward: [(0, '6.810'), (1, '6.980')] +[2023-09-26 08:28:09,167][92473] Updated weights for policy 0, policy_version 29760 (0.0016) +[2023-09-26 08:28:09,168][92474] Updated weights for policy 1, policy_version 29760 (0.0017) +[2023-09-26 08:28:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 15261696. Throughput: 0: 776.5, 1: 776.3. Samples: 3813567. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 08:28:13,763][91478] Avg episode reward: [(0, '6.700'), (1, '7.140')] +[2023-09-26 08:28:18,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 15294464. Throughput: 0: 785.1, 1: 784.1. Samples: 3823415. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 08:28:18,762][91478] Avg episode reward: [(0, '6.550'), (1, '6.800')] +[2023-09-26 08:28:22,071][92474] Updated weights for policy 1, policy_version 29920 (0.0015) +[2023-09-26 08:28:22,072][92473] Updated weights for policy 0, policy_version 29920 (0.0016) +[2023-09-26 08:28:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 15327232. Throughput: 0: 780.3, 1: 780.6. Samples: 3827778. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 08:28:23,762][91478] Avg episode reward: [(0, '6.380'), (1, '6.750')] +[2023-09-26 08:28:28,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15360000. Throughput: 0: 791.5, 1: 791.5. Samples: 3837837. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-26 08:28:28,763][91478] Avg episode reward: [(0, '6.180'), (1, '6.730')] +[2023-09-26 08:28:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 15392768. Throughput: 0: 787.3, 1: 787.7. Samples: 3847169. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:28:33,762][91478] Avg episode reward: [(0, '6.230'), (1, '6.850')] +[2023-09-26 08:28:34,956][92474] Updated weights for policy 1, policy_version 30080 (0.0015) +[2023-09-26 08:28:34,956][92473] Updated weights for policy 0, policy_version 30080 (0.0014) +[2023-09-26 08:28:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15417344. Throughput: 0: 791.5, 1: 792.6. Samples: 3851941. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:28:38,763][91478] Avg episode reward: [(0, '6.520'), (1, '6.880')] +[2023-09-26 08:28:43,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15450112. Throughput: 0: 786.0, 1: 785.5. Samples: 3861093. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:28:43,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.840')] +[2023-09-26 08:28:48,097][92474] Updated weights for policy 1, policy_version 30240 (0.0020) +[2023-09-26 08:28:48,097][92473] Updated weights for policy 0, policy_version 30240 (0.0020) +[2023-09-26 08:28:48,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 15482880. Throughput: 0: 792.8, 1: 794.1. Samples: 3870710. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-26 08:28:48,762][91478] Avg episode reward: [(0, '6.710'), (1, '6.920')] +[2023-09-26 08:28:53,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15515648. Throughput: 0: 787.2, 1: 788.2. Samples: 3875113. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:28:53,763][91478] Avg episode reward: [(0, '6.710'), (1, '7.020')] +[2023-09-26 08:28:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 15548416. Throughput: 0: 793.2, 1: 793.5. Samples: 3884966. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:28:58,762][91478] Avg episode reward: [(0, '6.590'), (1, '6.910')] +[2023-09-26 08:29:01,027][92474] Updated weights for policy 1, policy_version 30400 (0.0014) +[2023-09-26 08:29:01,028][92473] Updated weights for policy 0, policy_version 30400 (0.0017) +[2023-09-26 08:29:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6248.1). Total num frames: 15581184. Throughput: 0: 787.0, 1: 788.0. Samples: 3894291. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:29:03,762][91478] Avg episode reward: [(0, '6.630'), (1, '6.930')] +[2023-09-26 08:29:08,762][91478] Fps is (10 sec: 6143.9, 60 sec: 6348.8, 300 sec: 6262.0). Total num frames: 15609856. Throughput: 0: 793.9, 1: 794.4. Samples: 3899253. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:29:08,763][91478] Avg episode reward: [(0, '6.850'), (1, '6.730')] +[2023-09-26 08:29:13,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15638528. Throughput: 0: 783.9, 1: 783.3. Samples: 3908361. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:29:13,763][91478] Avg episode reward: [(0, '6.850'), (1, '6.790')] +[2023-09-26 08:29:13,773][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000030544_7819264.pth... +[2023-09-26 08:29:13,773][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000030544_7819264.pth... +[2023-09-26 08:29:13,807][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000027616_7069696.pth +[2023-09-26 08:29:13,809][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000027616_7069696.pth +[2023-09-26 08:29:14,033][92474] Updated weights for policy 1, policy_version 30560 (0.0015) +[2023-09-26 08:29:14,033][92473] Updated weights for policy 0, policy_version 30560 (0.0017) +[2023-09-26 08:29:18,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6248.1). 
Total num frames: 15671296. Throughput: 0: 784.6, 1: 785.6. Samples: 3917825. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:18,762][91478] Avg episode reward: [(0, '6.630'), (1, '6.880')] +[2023-09-26 08:29:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15704064. Throughput: 0: 784.3, 1: 782.5. Samples: 3922447. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:23,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.830')] +[2023-09-26 08:29:27,083][92473] Updated weights for policy 0, policy_version 30720 (0.0015) +[2023-09-26 08:29:27,083][92474] Updated weights for policy 1, policy_version 30720 (0.0018) +[2023-09-26 08:29:28,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 15736832. Throughput: 0: 789.1, 1: 789.8. Samples: 3932141. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:28,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.780')] +[2023-09-26 08:29:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 15769600. Throughput: 0: 786.7, 1: 786.0. Samples: 3941481. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:33,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.750')] +[2023-09-26 08:29:38,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 15802368. Throughput: 0: 792.4, 1: 793.2. Samples: 3946464. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:38,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.940')] +[2023-09-26 08:29:39,936][92473] Updated weights for policy 0, policy_version 30880 (0.0017) +[2023-09-26 08:29:39,936][92474] Updated weights for policy 1, policy_version 30880 (0.0017) +[2023-09-26 08:29:43,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 15835136. Throughput: 0: 788.9, 1: 788.2. Samples: 3955939. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:43,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.940')] +[2023-09-26 08:29:48,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 15859712. Throughput: 0: 790.5, 1: 790.0. Samples: 3965415. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:48,763][91478] Avg episode reward: [(0, '6.840'), (1, '7.030')] +[2023-09-26 08:29:52,698][92474] Updated weights for policy 1, policy_version 31040 (0.0018) +[2023-09-26 08:29:52,699][92473] Updated weights for policy 0, policy_version 31040 (0.0016) +[2023-09-26 08:29:53,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 15892480. Throughput: 0: 790.6, 1: 789.9. Samples: 3970374. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:53,762][91478] Avg episode reward: [(0, '6.830'), (1, '6.900')] +[2023-09-26 08:29:58,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 15925248. Throughput: 0: 790.9, 1: 790.8. Samples: 3979538. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:29:58,762][91478] Avg episode reward: [(0, '6.840'), (1, '6.740')] +[2023-09-26 08:30:03,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 15958016. Throughput: 0: 796.0, 1: 795.1. Samples: 3989425. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:30:03,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.780')] +[2023-09-26 08:30:05,747][92474] Updated weights for policy 1, policy_version 31200 (0.0017) +[2023-09-26 08:30:05,747][92473] Updated weights for policy 0, policy_version 31200 (0.0019) +[2023-09-26 08:30:08,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6348.8, 300 sec: 6275.9). Total num frames: 15990784. Throughput: 0: 792.9, 1: 792.7. Samples: 3993798. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:30:08,763][91478] Avg episode reward: [(0, '6.850'), (1, '6.810')] +[2023-09-26 08:30:13,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6348.8, 300 sec: 6262.0). Total num frames: 16019456. Throughput: 0: 790.6, 1: 788.9. Samples: 4003220. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:30:13,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.880')] +[2023-09-26 08:30:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 16048128. Throughput: 0: 789.1, 1: 788.3. Samples: 4012463. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 08:30:18,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.780')] +[2023-09-26 08:30:18,938][92473] Updated weights for policy 0, policy_version 31360 (0.0014) +[2023-09-26 08:30:18,939][92474] Updated weights for policy 1, policy_version 31360 (0.0018) +[2023-09-26 08:30:23,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16080896. Throughput: 0: 785.6, 1: 787.1. Samples: 4017238. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 08:30:23,763][91478] Avg episode reward: [(0, '6.690'), (1, '7.070')] +[2023-09-26 08:30:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16113664. Throughput: 0: 781.6, 1: 782.6. Samples: 4026331. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 08:30:28,763][91478] Avg episode reward: [(0, '6.660'), (1, '6.890')] +[2023-09-26 08:30:32,258][92474] Updated weights for policy 1, policy_version 31520 (0.0017) +[2023-09-26 08:30:32,258][92473] Updated weights for policy 0, policy_version 31520 (0.0017) +[2023-09-26 08:30:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 16146432. Throughput: 0: 781.1, 1: 781.2. Samples: 4035716. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 08:30:33,762][91478] Avg episode reward: [(0, '6.570'), (1, '6.950')] +[2023-09-26 08:30:38,762][91478] Fps is (10 sec: 6144.0, 60 sec: 6212.3, 300 sec: 6262.0). Total num frames: 16175104. Throughput: 0: 780.5, 1: 778.4. Samples: 4040525. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:30:38,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.740')] +[2023-09-26 08:30:43,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 16203776. Throughput: 0: 779.3, 1: 779.2. Samples: 4049669. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:30:43,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.860')] +[2023-09-26 08:30:45,379][92474] Updated weights for policy 1, policy_version 31680 (0.0017) +[2023-09-26 08:30:45,379][92473] Updated weights for policy 0, policy_version 31680 (0.0018) +[2023-09-26 08:30:48,762][91478] Fps is (10 sec: 6144.1, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 16236544. Throughput: 0: 774.1, 1: 775.0. Samples: 4059136. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:30:48,762][91478] Avg episode reward: [(0, '6.610'), (1, '6.830')] +[2023-09-26 08:30:53,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6262.0). Total num frames: 16269312. Throughput: 0: 776.4, 1: 776.5. Samples: 4063679. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:30:53,762][91478] Avg episode reward: [(0, '6.730'), (1, '6.800')] +[2023-09-26 08:30:58,350][92474] Updated weights for policy 1, policy_version 31840 (0.0016) +[2023-09-26 08:30:58,350][92473] Updated weights for policy 0, policy_version 31840 (0.0018) +[2023-09-26 08:30:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16302080. Throughput: 0: 779.5, 1: 781.3. Samples: 4073455. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:30:58,762][91478] Avg episode reward: [(0, '6.630'), (1, '6.760')] +[2023-09-26 08:31:03,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16334848. Throughput: 0: 782.4, 1: 783.0. Samples: 4082903. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:03,763][91478] Avg episode reward: [(0, '6.960'), (1, '6.790')] +[2023-09-26 08:31:08,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16367616. Throughput: 0: 784.1, 1: 780.4. Samples: 4087642. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:08,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.980')] +[2023-09-26 08:31:11,427][92474] Updated weights for policy 1, policy_version 32000 (0.0016) +[2023-09-26 08:31:11,427][92473] Updated weights for policy 0, policy_version 32000 (0.0018) +[2023-09-26 08:31:13,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 16392192. Throughput: 0: 783.6, 1: 783.3. Samples: 4096844. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:13,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.900')] +[2023-09-26 08:31:13,956][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000032032_8200192.pth... +[2023-09-26 08:31:13,976][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000032032_8200192.pth... +[2023-09-26 08:31:13,989][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000029088_7446528.pth +[2023-09-26 08:31:14,004][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000029088_7446528.pth +[2023-09-26 08:31:18,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16424960. Throughput: 0: 784.9, 1: 784.8. Samples: 4106354. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:18,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.900')] +[2023-09-26 08:31:23,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16457728. Throughput: 0: 784.3, 1: 786.6. Samples: 4111215. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:23,763][91478] Avg episode reward: [(0, '6.800'), (1, '7.350')] +[2023-09-26 08:31:23,764][92345] Saving new best policy, reward=7.350! +[2023-09-26 08:31:24,242][92474] Updated weights for policy 1, policy_version 32160 (0.0016) +[2023-09-26 08:31:24,242][92473] Updated weights for policy 0, policy_version 32160 (0.0019) +[2023-09-26 08:31:28,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 16490496. Throughput: 0: 787.5, 1: 788.7. Samples: 4120597. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:28,762][91478] Avg episode reward: [(0, '6.690'), (1, '6.980')] +[2023-09-26 08:31:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16523264. Throughput: 0: 784.3, 1: 783.0. Samples: 4129665. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:33,763][91478] Avg episode reward: [(0, '6.590'), (1, '7.250')] +[2023-09-26 08:31:37,528][92473] Updated weights for policy 0, policy_version 32320 (0.0016) +[2023-09-26 08:31:37,528][92474] Updated weights for policy 1, policy_version 32320 (0.0016) +[2023-09-26 08:31:38,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6212.3, 300 sec: 6248.1). Total num frames: 16547840. Throughput: 0: 787.1, 1: 786.8. Samples: 4134502. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:38,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.960')] +[2023-09-26 08:31:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 16580608. Throughput: 0: 784.5, 1: 784.5. Samples: 4144060. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:43,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.870')] +[2023-09-26 08:31:48,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16613376. Throughput: 0: 782.2, 1: 783.2. Samples: 4153346. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:48,762][91478] Avg episode reward: [(0, '6.540'), (1, '6.850')] +[2023-09-26 08:31:50,534][92473] Updated weights for policy 0, policy_version 32480 (0.0017) +[2023-09-26 08:31:50,534][92474] Updated weights for policy 1, policy_version 32480 (0.0017) +[2023-09-26 08:31:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16646144. Throughput: 0: 781.8, 1: 783.5. Samples: 4158082. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:53,763][91478] Avg episode reward: [(0, '6.740'), (1, '7.590')] +[2023-09-26 08:31:53,764][92345] Saving new best policy, reward=7.590! +[2023-09-26 08:31:58,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16678912. Throughput: 0: 786.7, 1: 787.7. Samples: 4167690. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:31:58,763][91478] Avg episode reward: [(0, '6.750'), (1, '7.650')] +[2023-09-26 08:31:58,773][92345] Saving new best policy, reward=7.650! +[2023-09-26 08:32:03,442][92473] Updated weights for policy 0, policy_version 32640 (0.0018) +[2023-09-26 08:32:03,442][92474] Updated weights for policy 1, policy_version 32640 (0.0017) +[2023-09-26 08:32:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 16711680. Throughput: 0: 789.6, 1: 788.1. Samples: 4177351. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:03,762][91478] Avg episode reward: [(0, '6.820'), (1, '7.640')] +[2023-09-26 08:32:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 16744448. 
Throughput: 0: 786.0, 1: 786.5. Samples: 4181979. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:08,762][91478] Avg episode reward: [(0, '6.600'), (1, '7.550')] +[2023-09-26 08:32:13,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16769024. Throughput: 0: 788.4, 1: 787.1. Samples: 4191495. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:13,763][91478] Avg episode reward: [(0, '6.560'), (1, '7.040')] +[2023-09-26 08:32:16,510][92474] Updated weights for policy 1, policy_version 32800 (0.0017) +[2023-09-26 08:32:16,510][92473] Updated weights for policy 0, policy_version 32800 (0.0016) +[2023-09-26 08:32:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16801792. Throughput: 0: 788.2, 1: 787.9. Samples: 4200586. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:18,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.820')] +[2023-09-26 08:32:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16834560. Throughput: 0: 787.0, 1: 786.5. Samples: 4205311. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:23,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.900')] +[2023-09-26 08:32:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16867328. Throughput: 0: 785.6, 1: 784.3. Samples: 4214707. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:28,763][91478] Avg episode reward: [(0, '6.690'), (1, '6.960')] +[2023-09-26 08:32:29,780][92474] Updated weights for policy 1, policy_version 32960 (0.0018) +[2023-09-26 08:32:29,780][92473] Updated weights for policy 0, policy_version 32960 (0.0019) +[2023-09-26 08:32:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6303.7). Total num frames: 16900096. Throughput: 0: 784.9, 1: 784.0. Samples: 4223947. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:33,763][91478] Avg episode reward: [(0, '6.730'), (1, '6.860')] +[2023-09-26 08:32:38,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 16924672. Throughput: 0: 784.4, 1: 783.2. Samples: 4228625. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:38,762][91478] Avg episode reward: [(0, '6.700'), (1, '6.980')] +[2023-09-26 08:32:42,981][92473] Updated weights for policy 0, policy_version 33120 (0.0017) +[2023-09-26 08:32:42,981][92474] Updated weights for policy 1, policy_version 33120 (0.0017) +[2023-09-26 08:32:43,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 16957440. Throughput: 0: 778.7, 1: 777.4. Samples: 4237717. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:43,762][91478] Avg episode reward: [(0, '6.840'), (1, '6.960')] +[2023-09-26 08:32:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 16990208. Throughput: 0: 774.6, 1: 777.4. Samples: 4247189. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:32:48,762][91478] Avg episode reward: [(0, '6.710'), (1, '6.820')] +[2023-09-26 08:32:53,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17022976. Throughput: 0: 773.6, 1: 774.5. Samples: 4251646. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:32:53,763][91478] Avg episode reward: [(0, '6.680'), (1, '7.230')] +[2023-09-26 08:32:56,335][92473] Updated weights for policy 0, policy_version 33280 (0.0016) +[2023-09-26 08:32:56,336][92474] Updated weights for policy 1, policy_version 33280 (0.0018) +[2023-09-26 08:32:58,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 17047552. Throughput: 0: 769.8, 1: 769.2. Samples: 4260750. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:32:58,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.840')] +[2023-09-26 08:33:03,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 17080320. Throughput: 0: 770.2, 1: 772.9. Samples: 4270027. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:33:03,762][91478] Avg episode reward: [(0, '6.700'), (1, '7.040')] +[2023-09-26 08:33:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 17113088. Throughput: 0: 767.4, 1: 768.4. Samples: 4274422. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:33:08,762][91478] Avg episode reward: [(0, '6.590'), (1, '7.140')] +[2023-09-26 08:33:09,713][92473] Updated weights for policy 0, policy_version 33440 (0.0018) +[2023-09-26 08:33:09,713][92474] Updated weights for policy 1, policy_version 33440 (0.0017) +[2023-09-26 08:33:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17145856. Throughput: 0: 770.2, 1: 771.4. Samples: 4284077. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:33:13,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.850')] +[2023-09-26 08:33:13,774][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000033488_8572928.pth... +[2023-09-26 08:33:13,775][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000033488_8572928.pth... +[2023-09-26 08:33:13,812][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000030544_7819264.pth +[2023-09-26 08:33:13,812][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000030544_7819264.pth +[2023-09-26 08:33:18,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17170432. Throughput: 0: 769.3, 1: 768.6. Samples: 4293153. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:33:18,763][91478] Avg episode reward: [(0, '6.600'), (1, '6.890')] +[2023-09-26 08:33:22,867][92473] Updated weights for policy 0, policy_version 33600 (0.0017) +[2023-09-26 08:33:22,868][92474] Updated weights for policy 1, policy_version 33600 (0.0019) +[2023-09-26 08:33:23,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17203200. Throughput: 0: 768.5, 1: 768.7. Samples: 4297799. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:33:23,763][91478] Avg episode reward: [(0, '6.800'), (1, '7.170')] +[2023-09-26 08:33:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17235968. Throughput: 0: 770.9, 1: 770.7. Samples: 4307091. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:33:28,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.920')] +[2023-09-26 08:33:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6275.9). Total num frames: 17268736. Throughput: 0: 768.4, 1: 768.4. Samples: 4316344. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:33:33,763][91478] Avg episode reward: [(0, '6.590'), (1, '7.030')] +[2023-09-26 08:33:36,175][92473] Updated weights for policy 0, policy_version 33760 (0.0018) +[2023-09-26 08:33:36,175][92474] Updated weights for policy 1, policy_version 33760 (0.0015) +[2023-09-26 08:33:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17293312. Throughput: 0: 772.4, 1: 771.2. Samples: 4321109. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:33:38,763][91478] Avg episode reward: [(0, '6.590'), (1, '7.150')] +[2023-09-26 08:33:43,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17326080. Throughput: 0: 772.2, 1: 772.7. Samples: 4330271. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:33:43,763][91478] Avg episode reward: [(0, '6.820'), (1, '6.660')] +[2023-09-26 08:33:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17358848. Throughput: 0: 775.0, 1: 773.7. Samples: 4339717. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:33:48,763][91478] Avg episode reward: [(0, '6.770'), (1, '6.810')] +[2023-09-26 08:33:49,204][92473] Updated weights for policy 0, policy_version 33920 (0.0016) +[2023-09-26 08:33:49,204][92474] Updated weights for policy 1, policy_version 33920 (0.0017) +[2023-09-26 08:33:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17391616. Throughput: 0: 778.6, 1: 779.0. Samples: 4344516. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:33:53,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.910')] +[2023-09-26 08:33:58,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17424384. Throughput: 0: 777.2, 1: 777.7. Samples: 4354048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:33:58,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.960')] +[2023-09-26 08:34:02,327][92474] Updated weights for policy 1, policy_version 34080 (0.0019) +[2023-09-26 08:34:02,327][92473] Updated weights for policy 0, policy_version 34080 (0.0020) +[2023-09-26 08:34:03,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6262.0). Total num frames: 17457152. Throughput: 0: 778.7, 1: 778.4. Samples: 4363224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:34:03,762][91478] Avg episode reward: [(0, '6.870'), (1, '6.920')] +[2023-09-26 08:34:08,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 17489920. Throughput: 0: 781.6, 1: 782.7. Samples: 4368193. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:34:08,763][91478] Avg episode reward: [(0, '6.870'), (1, '6.810')] +[2023-09-26 08:34:13,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 17514496. Throughput: 0: 781.8, 1: 781.8. Samples: 4377450. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:34:13,763][91478] Avg episode reward: [(0, '6.870'), (1, '6.930')] +[2023-09-26 08:34:15,354][92474] Updated weights for policy 1, policy_version 34240 (0.0016) +[2023-09-26 08:34:15,354][92473] Updated weights for policy 0, policy_version 34240 (0.0016) +[2023-09-26 08:34:18,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17547264. Throughput: 0: 782.9, 1: 783.2. Samples: 4386816. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:34:18,763][91478] Avg episode reward: [(0, '6.640'), (1, '6.960')] +[2023-09-26 08:34:23,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17580032. Throughput: 0: 778.9, 1: 778.7. Samples: 4391202. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:34:23,763][91478] Avg episode reward: [(0, '6.740'), (1, '7.150')] +[2023-09-26 08:34:28,554][92473] Updated weights for policy 0, policy_version 34400 (0.0016) +[2023-09-26 08:34:28,554][92474] Updated weights for policy 1, policy_version 34400 (0.0016) +[2023-09-26 08:34:28,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17612800. Throughput: 0: 784.3, 1: 784.1. Samples: 4400850. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:34:28,763][91478] Avg episode reward: [(0, '6.840'), (1, '7.090')] +[2023-09-26 08:34:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 17637376. Throughput: 0: 782.2, 1: 780.5. Samples: 4410040. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:34:33,763][91478] Avg episode reward: [(0, '6.670'), (1, '7.010')] +[2023-09-26 08:34:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 17670144. Throughput: 0: 782.3, 1: 782.1. Samples: 4414914. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-26 08:34:38,763][91478] Avg episode reward: [(0, '6.540'), (1, '7.340')] +[2023-09-26 08:34:41,610][92473] Updated weights for policy 0, policy_version 34560 (0.0016) +[2023-09-26 08:34:41,610][92474] Updated weights for policy 1, policy_version 34560 (0.0016) +[2023-09-26 08:34:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17702912. Throughput: 0: 778.7, 1: 778.4. Samples: 4424117. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:34:43,763][91478] Avg episode reward: [(0, '6.650'), (1, '7.030')] +[2023-09-26 08:34:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17735680. Throughput: 0: 783.7, 1: 784.6. Samples: 4433798. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:34:48,763][91478] Avg episode reward: [(0, '6.620'), (1, '7.270')] +[2023-09-26 08:34:53,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17768448. Throughput: 0: 776.3, 1: 776.5. Samples: 4438071. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:34:53,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.980')] +[2023-09-26 08:34:54,930][92474] Updated weights for policy 1, policy_version 34720 (0.0017) +[2023-09-26 08:34:54,930][92473] Updated weights for policy 0, policy_version 34720 (0.0016) +[2023-09-26 08:34:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 17793024. Throughput: 0: 777.0, 1: 778.1. Samples: 4447427. 
Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:34:58,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.650')] +[2023-09-26 08:35:03,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 17825792. Throughput: 0: 775.6, 1: 774.2. Samples: 4456557. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-26 08:35:03,762][91478] Avg episode reward: [(0, '6.630'), (1, '6.820')] +[2023-09-26 08:35:08,132][92473] Updated weights for policy 0, policy_version 34880 (0.0017) +[2023-09-26 08:35:08,132][92474] Updated weights for policy 1, policy_version 34880 (0.0018) +[2023-09-26 08:35:08,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6144.0, 300 sec: 6234.2). Total num frames: 17858560. Throughput: 0: 780.6, 1: 778.6. Samples: 4461363. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:08,763][91478] Avg episode reward: [(0, '6.690'), (1, '6.760')] +[2023-09-26 08:35:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17891328. Throughput: 0: 776.0, 1: 777.8. Samples: 4470772. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:13,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.890')] +[2023-09-26 08:35:13,776][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000034944_8945664.pth... +[2023-09-26 08:35:13,776][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000034944_8945664.pth... +[2023-09-26 08:35:13,812][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000032032_8200192.pth +[2023-09-26 08:35:13,813][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000032032_8200192.pth +[2023-09-26 08:35:18,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17924096. Throughput: 0: 777.6, 1: 777.3. Samples: 4480011. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:18,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.950')] +[2023-09-26 08:35:21,188][92473] Updated weights for policy 0, policy_version 35040 (0.0016) +[2023-09-26 08:35:21,189][92474] Updated weights for policy 1, policy_version 35040 (0.0015) +[2023-09-26 08:35:23,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 17956864. Throughput: 0: 777.6, 1: 777.4. Samples: 4484887. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:23,763][91478] Avg episode reward: [(0, '6.670'), (1, '7.090')] +[2023-09-26 08:35:28,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 17981440. Throughput: 0: 776.4, 1: 775.5. Samples: 4493953. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:28,762][91478] Avg episode reward: [(0, '6.740'), (1, '6.800')] +[2023-09-26 08:35:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.6, 300 sec: 6234.3). Total num frames: 18014208. Throughput: 0: 774.5, 1: 774.6. Samples: 4503507. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:33,762][91478] Avg episode reward: [(0, '6.940'), (1, '7.040')] +[2023-09-26 08:35:34,468][92473] Updated weights for policy 0, policy_version 35200 (0.0016) +[2023-09-26 08:35:34,469][92474] Updated weights for policy 1, policy_version 35200 (0.0017) +[2023-09-26 08:35:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 18046976. Throughput: 0: 774.9, 1: 774.5. Samples: 4507794. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:38,762][91478] Avg episode reward: [(0, '6.510'), (1, '7.070')] +[2023-09-26 08:35:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 18079744. Throughput: 0: 779.2, 1: 778.8. Samples: 4517538. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:43,762][91478] Avg episode reward: [(0, '6.520'), (1, '6.770')] +[2023-09-26 08:35:47,382][92474] Updated weights for policy 1, policy_version 35360 (0.0015) +[2023-09-26 08:35:47,383][92473] Updated weights for policy 0, policy_version 35360 (0.0017) +[2023-09-26 08:35:48,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18112512. Throughput: 0: 782.6, 1: 782.8. Samples: 4527002. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:35:48,763][91478] Avg episode reward: [(0, '6.450'), (1, '6.760')] +[2023-09-26 08:35:53,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 18137088. Throughput: 0: 778.2, 1: 781.4. Samples: 4531544. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:35:53,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.850')] +[2023-09-26 08:35:58,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18169856. Throughput: 0: 779.9, 1: 778.5. Samples: 4540901. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:35:58,762][91478] Avg episode reward: [(0, '6.820'), (1, '7.010')] +[2023-09-26 08:36:00,505][92474] Updated weights for policy 1, policy_version 35520 (0.0016) +[2023-09-26 08:36:00,505][92473] Updated weights for policy 0, policy_version 35520 (0.0016) +[2023-09-26 08:36:03,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18202624. Throughput: 0: 783.9, 1: 785.2. Samples: 4550620. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:36:03,762][91478] Avg episode reward: [(0, '6.650'), (1, '7.160')] +[2023-09-26 08:36:08,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18235392. Throughput: 0: 780.6, 1: 780.6. Samples: 4555140. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:36:08,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.920')] +[2023-09-26 08:36:13,368][92474] Updated weights for policy 1, policy_version 35680 (0.0015) +[2023-09-26 08:36:13,369][92473] Updated weights for policy 0, policy_version 35680 (0.0014) +[2023-09-26 08:36:13,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18268160. Throughput: 0: 788.7, 1: 787.8. Samples: 4564894. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-26 08:36:13,763][91478] Avg episode reward: [(0, '6.610'), (1, '6.970')] +[2023-09-26 08:36:18,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18300928. Throughput: 0: 786.8, 1: 786.6. Samples: 4574310. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:36:18,763][91478] Avg episode reward: [(0, '6.800'), (1, '7.090')] +[2023-09-26 08:36:23,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 18325504. Throughput: 0: 792.7, 1: 792.6. Samples: 4579132. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:36:23,763][91478] Avg episode reward: [(0, '6.710'), (1, '7.090')] +[2023-09-26 08:36:26,387][92473] Updated weights for policy 0, policy_version 35840 (0.0017) +[2023-09-26 08:36:26,388][92474] Updated weights for policy 1, policy_version 35840 (0.0016) +[2023-09-26 08:36:28,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18358272. Throughput: 0: 788.0, 1: 787.2. Samples: 4588422. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:36:28,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.960')] +[2023-09-26 08:36:33,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18391040. Throughput: 0: 787.4, 1: 787.1. Samples: 4597858. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-26 08:36:33,762][91478] Avg episode reward: [(0, '6.520'), (1, '6.920')] +[2023-09-26 08:36:38,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18423808. Throughput: 0: 787.7, 1: 787.3. Samples: 4602419. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:36:38,762][91478] Avg episode reward: [(0, '6.600'), (1, '6.840')] +[2023-09-26 08:36:39,735][92474] Updated weights for policy 1, policy_version 36000 (0.0017) +[2023-09-26 08:36:39,735][92473] Updated weights for policy 0, policy_version 36000 (0.0018) +[2023-09-26 08:36:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18456576. Throughput: 0: 786.9, 1: 787.4. Samples: 4611743. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:36:43,762][91478] Avg episode reward: [(0, '6.630'), (1, '6.930')] +[2023-09-26 08:36:48,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18489344. Throughput: 0: 785.2, 1: 783.8. Samples: 4621226. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:36:48,763][91478] Avg episode reward: [(0, '6.760'), (1, '6.840')] +[2023-09-26 08:36:52,493][92474] Updated weights for policy 1, policy_version 36160 (0.0018) +[2023-09-26 08:36:52,493][92473] Updated weights for policy 0, policy_version 36160 (0.0017) +[2023-09-26 08:36:53,762][91478] Fps is (10 sec: 5734.2, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18513920. Throughput: 0: 790.6, 1: 790.2. Samples: 4626276. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:36:53,763][91478] Avg episode reward: [(0, '6.820'), (1, '7.070')] +[2023-09-26 08:36:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18546688. Throughput: 0: 782.9, 1: 784.0. Samples: 4635402. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-26 08:36:58,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.880')] +[2023-09-26 08:37:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18579456. Throughput: 0: 783.4, 1: 784.6. Samples: 4644868. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:03,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.860')] +[2023-09-26 08:37:05,445][92474] Updated weights for policy 1, policy_version 36320 (0.0020) +[2023-09-26 08:37:05,445][92473] Updated weights for policy 0, policy_version 36320 (0.0020) +[2023-09-26 08:37:08,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 18612224. Throughput: 0: 784.5, 1: 784.8. Samples: 4649750. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:08,762][91478] Avg episode reward: [(0, '6.830'), (1, '6.920')] +[2023-09-26 08:37:13,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 18644992. Throughput: 0: 785.6, 1: 786.6. Samples: 4659170. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:13,762][91478] Avg episode reward: [(0, '6.810'), (1, '7.110')] +[2023-09-26 08:37:13,770][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000036416_9322496.pth... +[2023-09-26 08:37:13,771][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000036416_9322496.pth... +[2023-09-26 08:37:13,804][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000033488_8572928.pth +[2023-09-26 08:37:13,811][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000033488_8572928.pth +[2023-09-26 08:37:18,603][92474] Updated weights for policy 1, policy_version 36480 (0.0014) +[2023-09-26 08:37:18,605][92473] Updated weights for policy 0, policy_version 36480 (0.0014) +[2023-09-26 08:37:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). 
Total num frames: 18677760. Throughput: 0: 783.9, 1: 783.9. Samples: 4668412. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:18,763][91478] Avg episode reward: [(0, '6.580'), (1, '6.950')] +[2023-09-26 08:37:23,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6280.5, 300 sec: 6220.4). Total num frames: 18702336. Throughput: 0: 787.3, 1: 786.0. Samples: 4673219. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:23,763][91478] Avg episode reward: [(0, '6.650'), (1, '7.050')] +[2023-09-26 08:37:28,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6280.6, 300 sec: 6220.4). Total num frames: 18735104. Throughput: 0: 786.1, 1: 786.2. Samples: 4682497. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:28,762][91478] Avg episode reward: [(0, '6.590'), (1, '7.180')] +[2023-09-26 08:37:31,578][92473] Updated weights for policy 0, policy_version 36640 (0.0017) +[2023-09-26 08:37:31,579][92474] Updated weights for policy 1, policy_version 36640 (0.0020) +[2023-09-26 08:37:33,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18767872. Throughput: 0: 785.0, 1: 787.1. Samples: 4691972. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:33,763][91478] Avg episode reward: [(0, '6.820'), (1, '7.130')] +[2023-09-26 08:37:38,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18800640. Throughput: 0: 779.5, 1: 779.7. Samples: 4696439. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:38,763][91478] Avg episode reward: [(0, '6.710'), (1, '6.860')] +[2023-09-26 08:37:43,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18833408. Throughput: 0: 786.8, 1: 787.5. Samples: 4706245. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:43,763][91478] Avg episode reward: [(0, '6.630'), (1, '7.090')] +[2023-09-26 08:37:44,711][92474] Updated weights for policy 1, policy_version 36800 (0.0018) +[2023-09-26 08:37:44,711][92473] Updated weights for policy 0, policy_version 36800 (0.0018) +[2023-09-26 08:37:48,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18866176. Throughput: 0: 786.9, 1: 785.2. Samples: 4715611. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:48,763][91478] Avg episode reward: [(0, '6.620'), (1, '7.290')] +[2023-09-26 08:37:53,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18890752. Throughput: 0: 784.4, 1: 781.4. Samples: 4720214. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:53,763][91478] Avg episode reward: [(0, '6.360'), (1, '7.150')] +[2023-09-26 08:37:57,925][92473] Updated weights for policy 0, policy_version 36960 (0.0015) +[2023-09-26 08:37:57,925][92474] Updated weights for policy 1, policy_version 36960 (0.0016) +[2023-09-26 08:37:58,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 18923520. Throughput: 0: 778.8, 1: 778.0. Samples: 4729226. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:37:58,762][91478] Avg episode reward: [(0, '6.360'), (1, '6.780')] +[2023-09-26 08:38:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18956288. Throughput: 0: 781.5, 1: 779.4. Samples: 4738654. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:38:03,763][91478] Avg episode reward: [(0, '6.480'), (1, '6.940')] +[2023-09-26 08:38:08,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 18989056. Throughput: 0: 776.4, 1: 778.0. Samples: 4743168. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:38:08,763][91478] Avg episode reward: [(0, '6.310'), (1, '6.860')] +[2023-09-26 08:38:11,167][92473] Updated weights for policy 0, policy_version 37120 (0.0017) +[2023-09-26 08:38:11,168][92474] Updated weights for policy 1, policy_version 37120 (0.0019) +[2023-09-26 08:38:13,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6275.9). Total num frames: 19021824. Throughput: 0: 779.9, 1: 779.8. Samples: 4752686. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:38:13,763][91478] Avg episode reward: [(0, '6.620'), (1, '6.720')] +[2023-09-26 08:38:18,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19046400. Throughput: 0: 775.7, 1: 774.6. Samples: 4761737. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:38:18,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.980')] +[2023-09-26 08:38:23,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19079168. Throughput: 0: 779.5, 1: 779.2. Samples: 4766582. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:38:23,763][91478] Avg episode reward: [(0, '6.730'), (1, '7.130')] +[2023-09-26 08:38:24,509][92474] Updated weights for policy 1, policy_version 37280 (0.0016) +[2023-09-26 08:38:24,509][92473] Updated weights for policy 0, policy_version 37280 (0.0016) +[2023-09-26 08:38:28,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19111936. Throughput: 0: 774.2, 1: 774.5. Samples: 4775936. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:38:28,762][91478] Avg episode reward: [(0, '6.600'), (1, '6.790')] +[2023-09-26 08:38:33,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 19144704. Throughput: 0: 772.6, 1: 774.0. Samples: 4785210. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-26 08:38:33,762][91478] Avg episode reward: [(0, '6.500'), (1, '6.760')] +[2023-09-26 08:38:37,704][92473] Updated weights for policy 0, policy_version 37440 (0.0017) +[2023-09-26 08:38:37,704][92474] Updated weights for policy 1, policy_version 37440 (0.0016) +[2023-09-26 08:38:38,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19169280. Throughput: 0: 773.1, 1: 774.7. Samples: 4789866. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:38:38,762][91478] Avg episode reward: [(0, '6.640'), (1, '7.040')] +[2023-09-26 08:38:43,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19202048. Throughput: 0: 773.7, 1: 773.5. Samples: 4798851. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:38:43,763][91478] Avg episode reward: [(0, '6.560'), (1, '6.960')] +[2023-09-26 08:38:48,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19234816. Throughput: 0: 776.0, 1: 776.5. Samples: 4808518. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:38:48,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.920')] +[2023-09-26 08:38:50,780][92473] Updated weights for policy 0, policy_version 37600 (0.0017) +[2023-09-26 08:38:50,781][92474] Updated weights for policy 1, policy_version 37600 (0.0018) +[2023-09-26 08:38:53,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 19267584. Throughput: 0: 775.7, 1: 774.3. Samples: 4812921. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:38:53,762][91478] Avg episode reward: [(0, '6.700'), (1, '6.930')] +[2023-09-26 08:38:58,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19300352. Throughput: 0: 775.6, 1: 774.8. Samples: 4822456. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:38:58,763][91478] Avg episode reward: [(0, '6.720'), (1, '7.130')] +[2023-09-26 08:39:03,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 19324928. Throughput: 0: 777.2, 1: 776.5. Samples: 4831656. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:39:03,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.910')] +[2023-09-26 08:39:03,899][92473] Updated weights for policy 0, policy_version 37760 (0.0015) +[2023-09-26 08:39:03,899][92474] Updated weights for policy 1, policy_version 37760 (0.0018) +[2023-09-26 08:39:08,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19357696. Throughput: 0: 776.0, 1: 776.4. Samples: 4836443. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:39:08,762][91478] Avg episode reward: [(0, '6.720'), (1, '6.710')] +[2023-09-26 08:39:13,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19390464. Throughput: 0: 773.8, 1: 773.7. Samples: 4845573. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:39:13,762][91478] Avg episode reward: [(0, '6.610'), (1, '7.030')] +[2023-09-26 08:39:13,772][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000037872_9695232.pth... +[2023-09-26 08:39:13,772][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000037872_9695232.pth... +[2023-09-26 08:39:13,820][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000034944_8945664.pth +[2023-09-26 08:39:13,820][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000034944_8945664.pth +[2023-09-26 08:39:17,282][92473] Updated weights for policy 0, policy_version 37920 (0.0018) +[2023-09-26 08:39:17,282][92474] Updated weights for policy 1, policy_version 37920 (0.0016) +[2023-09-26 08:39:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). 
Total num frames: 19423232. Throughput: 0: 772.6, 1: 774.1. Samples: 4854809. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:39:18,763][91478] Avg episode reward: [(0, '6.740'), (1, '6.960')] +[2023-09-26 08:39:23,762][91478] Fps is (10 sec: 5734.3, 60 sec: 6144.0, 300 sec: 6220.4). Total num frames: 19447808. Throughput: 0: 774.0, 1: 774.8. Samples: 4859561. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-26 08:39:23,763][91478] Avg episode reward: [(0, '6.740'), (1, '7.080')] +[2023-09-26 08:39:28,762][91478] Fps is (10 sec: 5734.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19480576. Throughput: 0: 775.8, 1: 776.7. Samples: 4868712. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 08:39:28,762][91478] Avg episode reward: [(0, '6.850'), (1, '6.580')] +[2023-09-26 08:39:30,438][92474] Updated weights for policy 1, policy_version 38080 (0.0015) +[2023-09-26 08:39:30,438][92473] Updated weights for policy 0, policy_version 38080 (0.0017) +[2023-09-26 08:39:33,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19513344. Throughput: 0: 774.2, 1: 777.2. Samples: 4878331. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 08:39:33,763][91478] Avg episode reward: [(0, '6.750'), (1, '6.530')] +[2023-09-26 08:39:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19546112. Throughput: 0: 778.3, 1: 778.2. Samples: 4882963. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-26 08:39:38,762][91478] Avg episode reward: [(0, '6.640'), (1, '6.530')] +[2023-09-26 08:39:43,422][92474] Updated weights for policy 1, policy_version 38240 (0.0016) +[2023-09-26 08:39:43,423][92473] Updated weights for policy 0, policy_version 38240 (0.0017) +[2023-09-26 08:39:43,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19578880. Throughput: 0: 778.3, 1: 780.3. Samples: 4892594. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 08:39:43,764][91478] Avg episode reward: [(0, '6.860'), (1, '6.770')]
+[2023-09-26 08:39:48,762][91478] Fps is (10 sec: 6553.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19611648. Throughput: 0: 777.6, 1: 777.8. Samples: 4901646. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-26 08:39:48,763][91478] Avg episode reward: [(0, '6.720'), (1, '6.920')]
+[2023-09-26 08:39:53,762][91478] Fps is (10 sec: 5734.6, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19636224. Throughput: 0: 778.8, 1: 778.4. Samples: 4906517. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:39:53,763][91478] Avg episode reward: [(0, '6.650'), (1, '6.730')]
+[2023-09-26 08:39:56,387][92473] Updated weights for policy 0, policy_version 38400 (0.0013)
+[2023-09-26 08:39:56,388][92474] Updated weights for policy 1, policy_version 38400 (0.0017)
+[2023-09-26 08:39:58,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6144.0, 300 sec: 6248.1). Total num frames: 19668992. Throughput: 0: 784.4, 1: 783.2. Samples: 4916113. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:39:58,763][91478] Avg episode reward: [(0, '6.540'), (1, '6.640')]
+[2023-09-26 08:40:03,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19701760. Throughput: 0: 787.6, 1: 785.2. Samples: 4925586. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:40:03,763][91478] Avg episode reward: [(0, '6.510'), (1, '6.940')]
+[2023-09-26 08:40:08,762][91478] Fps is (10 sec: 6553.8, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19734528. Throughput: 0: 787.4, 1: 787.5. Samples: 4930434. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:40:08,762][91478] Avg episode reward: [(0, '6.660'), (1, '6.880')]
+[2023-09-26 08:40:09,292][92473] Updated weights for policy 0, policy_version 38560 (0.0017)
+[2023-09-26 08:40:09,292][92474] Updated weights for policy 1, policy_version 38560 (0.0015)
+[2023-09-26 08:40:13,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19767296. Throughput: 0: 789.3, 1: 789.9. Samples: 4939777. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-26 08:40:13,763][91478] Avg episode reward: [(0, '6.570'), (1, '6.520')]
+[2023-09-26 08:40:18,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19800064. Throughput: 0: 787.7, 1: 786.3. Samples: 4949158. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 08:40:18,763][91478] Avg episode reward: [(0, '6.540'), (1, '6.360')]
+[2023-09-26 08:40:22,482][92474] Updated weights for policy 1, policy_version 38720 (0.0018)
+[2023-09-26 08:40:22,482][92473] Updated weights for policy 0, policy_version 38720 (0.0017)
+[2023-09-26 08:40:23,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6275.9). Total num frames: 19832832. Throughput: 0: 789.7, 1: 788.2. Samples: 4953967. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 08:40:23,763][91478] Avg episode reward: [(0, '6.670'), (1, '6.300')]
+[2023-09-26 08:40:28,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19857408. Throughput: 0: 786.5, 1: 785.4. Samples: 4963326. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 08:40:28,763][91478] Avg episode reward: [(0, '6.690'), (1, '6.760')]
+[2023-09-26 08:40:33,762][91478] Fps is (10 sec: 5734.4, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19890176. Throughput: 0: 787.0, 1: 788.6. Samples: 4972549. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 08:40:33,763][91478] Avg episode reward: [(0, '6.700'), (1, '6.920')]
+[2023-09-26 08:40:35,454][92473] Updated weights for policy 0, policy_version 38880 (0.0017)
+[2023-09-26 08:40:35,454][92474] Updated weights for policy 1, policy_version 38880 (0.0015)
+[2023-09-26 08:40:38,762][91478] Fps is (10 sec: 6553.6, 60 sec: 6280.5, 300 sec: 6248.1). Total num frames: 19922944. Throughput: 0: 787.6, 1: 788.2. Samples: 4977431. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-26 08:40:38,763][91478] Avg episode reward: [(0, '6.490'), (1, '7.140')]
+[2023-09-26 08:40:43,762][91478] Fps is (10 sec: 6553.5, 60 sec: 6280.6, 300 sec: 6248.1). Total num frames: 19955712. Throughput: 0: 786.7, 1: 787.0. Samples: 4986932. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:40:43,763][91478] Avg episode reward: [(0, '6.680'), (1, '6.970')]
+[2023-09-26 08:40:48,240][92474] Updated weights for policy 1, policy_version 39040 (0.0017)
+[2023-09-26 08:40:48,240][92473] Updated weights for policy 0, policy_version 39040 (0.0018)
+[2023-09-26 08:40:48,762][91478] Fps is (10 sec: 6553.7, 60 sec: 6280.6, 300 sec: 6275.9). Total num frames: 19988480. Throughput: 0: 791.9, 1: 791.1. Samples: 4996820. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-26 08:40:48,762][91478] Avg episode reward: [(0, '6.490'), (1, '6.960')]
+[2023-09-26 08:40:52,116][92513] Stopping RolloutWorker_w6...
+[2023-09-26 08:40:52,116][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-26 08:40:52,116][92512] Stopping RolloutWorker_w5...
+[2023-09-26 08:40:52,116][92507] Stopping RolloutWorker_w3...
+[2023-09-26 08:40:52,116][92509] Stopping RolloutWorker_w2...
+[2023-09-26 08:40:52,116][92511] Stopping RolloutWorker_w1...
+[2023-09-26 08:40:52,116][92510] Stopping RolloutWorker_w4...
+[2023-09-26 08:40:52,117][92514] Stopping RolloutWorker_w7...
+[2023-09-26 08:40:52,116][91478] Component RolloutWorker_w6 stopped!
+[2023-09-26 08:40:52,117][92512] Loop rollout_proc5_evt_loop terminating...
+[2023-09-26 08:40:52,117][92513] Loop rollout_proc6_evt_loop terminating...
+[2023-09-26 08:40:52,117][92507] Loop rollout_proc3_evt_loop terminating...
+[2023-09-26 08:40:52,117][92509] Loop rollout_proc2_evt_loop terminating...
+[2023-09-26 08:40:52,117][92475] Stopping RolloutWorker_w0...
+[2023-09-26 08:40:52,117][92514] Loop rollout_proc7_evt_loop terminating...
+[2023-09-26 08:40:52,117][92511] Loop rollout_proc1_evt_loop terminating...
+[2023-09-26 08:40:52,117][92510] Loop rollout_proc4_evt_loop terminating...
+[2023-09-26 08:40:52,118][91478] Component RolloutWorker_w5 stopped!
+[2023-09-26 08:40:52,118][92475] Loop rollout_proc0_evt_loop terminating...
+[2023-09-26 08:40:52,118][91478] Component RolloutWorker_w4 stopped!
+[2023-09-26 08:40:52,119][91478] Component RolloutWorker_w3 stopped!
+[2023-09-26 08:40:52,119][91478] Component RolloutWorker_w2 stopped!
+[2023-09-26 08:40:52,120][91478] Component Batcher_1 stopped!
+[2023-09-26 08:40:52,120][91993] Stopping Batcher_0...
+[2023-09-26 08:40:52,120][91478] Component RolloutWorker_w1 stopped!
+[2023-09-26 08:40:52,121][91993] Loop batcher_evt_loop terminating...
+[2023-09-26 08:40:52,121][91478] Component RolloutWorker_w7 stopped!
+[2023-09-26 08:40:52,122][91478] Component RolloutWorker_w0 stopped!
+[2023-09-26 08:40:52,122][91478] Component Batcher_0 stopped!
+[2023-09-26 08:40:52,136][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-26 08:40:52,116][92345] Stopping Batcher_1...
+[2023-09-26 08:40:52,146][92345] Loop batcher_evt_loop terminating...
+[2023-09-26 08:40:52,147][92345] Removing ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000036416_9322496.pth
+[2023-09-26 08:40:52,152][92345] Saving ./train_atari/atari_frostbite/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-26 08:40:52,165][91993] Removing ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000036416_9322496.pth
+[2023-09-26 08:40:52,169][91993] Saving ./train_atari/atari_frostbite/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-26 08:40:52,174][92473] Weights refcount: 2 0
+[2023-09-26 08:40:52,175][92473] Stopping InferenceWorker_p0-w0...
+[2023-09-26 08:40:52,176][92473] Loop inference_proc0-0_evt_loop terminating...
+[2023-09-26 08:40:52,176][91478] Component InferenceWorker_p0-w0 stopped!
+[2023-09-26 08:40:52,184][92474] Weights refcount: 2 0
+[2023-09-26 08:40:52,187][92474] Stopping InferenceWorker_p1-w0...
+[2023-09-26 08:40:52,187][92474] Loop inference_proc1-0_evt_loop terminating...
+[2023-09-26 08:40:52,187][91478] Component InferenceWorker_p1-w0 stopped!
+[2023-09-26 08:40:52,198][92345] Stopping LearnerWorker_p1...
+[2023-09-26 08:40:52,198][92345] Loop learner_proc1_evt_loop terminating...
+[2023-09-26 08:40:52,199][91478] Component LearnerWorker_p1 stopped!
+[2023-09-26 08:40:52,205][91993] Stopping LearnerWorker_p0...
+[2023-09-26 08:40:52,205][91993] Loop learner_proc0_evt_loop terminating...
+[2023-09-26 08:40:52,205][91478] Component LearnerWorker_p0 stopped!
+[2023-09-26 08:40:52,206][91478] Waiting for process learner_proc0 to stop...
+[2023-09-26 08:40:52,978][91478] Waiting for process learner_proc1 to stop...
+[2023-09-26 08:40:53,007][91478] Waiting for process inference_proc0-0 to join...
+[2023-09-26 08:40:53,007][91478] Waiting for process inference_proc1-0 to join...
+[2023-09-26 08:40:53,008][91478] Waiting for process rollout_proc0 to join...
+[2023-09-26 08:40:53,009][91478] Waiting for process rollout_proc1 to join...
+[2023-09-26 08:40:53,009][91478] Waiting for process rollout_proc2 to join...
+[2023-09-26 08:40:53,010][91478] Waiting for process rollout_proc3 to join...
+[2023-09-26 08:40:53,011][91478] Waiting for process rollout_proc4 to join...
+[2023-09-26 08:40:53,011][91478] Waiting for process rollout_proc5 to join...
+[2023-09-26 08:40:53,012][91478] Waiting for process rollout_proc6 to join...
+[2023-09-26 08:40:53,012][91478] Waiting for process rollout_proc7 to join...
+[2023-09-26 08:40:53,013][91478] Batcher 0 profile tree view:
+batching: 20.8274, releasing_batches: 1.7384
+[2023-09-26 08:40:53,014][91478] Batcher 1 profile tree view:
+batching: 20.7750, releasing_batches: 1.7423
+[2023-09-26 08:40:53,014][91478] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0052
+  wait_policy_total: 664.1598
+update_model: 37.4093
+  weight_update: 0.0018
+one_step: 0.0011
+  handle_policy_step: 2301.0509
+    deserialize: 67.6480, stack: 16.2028, obs_to_device_normalize: 557.7310, forward: 1108.6167, send_messages: 92.7013
+    prepare_outputs: 308.4975
+      to_cpu: 156.5519
+[2023-09-26 08:40:53,015][91478] InferenceWorker_p1-w0 profile tree view:
+wait_policy: 0.0051
+  wait_policy_total: 675.5771
+update_model: 37.4117
+  weight_update: 0.0017
+one_step: 0.0012
+  handle_policy_step: 2289.0075
+    deserialize: 68.8435, stack: 16.3902, obs_to_device_normalize: 557.2466, forward: 1101.6084, send_messages: 95.4714
+    prepare_outputs: 305.1509
+      to_cpu: 153.7165
+[2023-09-26 08:40:53,015][91478] Learner 0 profile tree view:
+misc: 0.0168, prepare_batch: 32.3151
+train: 457.6851
+  epoch_init: 0.1070, minibatch_init: 3.1684, losses_postprocess: 62.9331, kl_divergence: 5.4435, after_optimizer: 21.9874
+  calculate_losses: 44.8992
+    losses_init: 0.0966, forward_head: 14.2799, bptt_initial: 0.4291, bptt: 0.4542, tail: 10.2721, advantages_returns: 3.0540, losses: 12.7400
+  update: 315.0401
+    clip: 164.6867
+[2023-09-26 08:40:53,016][91478] Learner 1 profile tree view:
+misc: 0.0161, prepare_batch: 32.5272
+train: 456.0765
+  epoch_init: 0.1004, minibatch_init: 3.0887, losses_postprocess: 62.7012, kl_divergence: 5.4166, after_optimizer: 22.2459
+  calculate_losses: 43.9644
+    losses_init: 0.1022, forward_head: 13.3938, bptt_initial: 0.4418, bptt: 0.4710, tail: 10.3155, advantages_returns: 3.0700, losses: 12.5869
+  update: 314.4898
+    clip: 163.4965
+[2023-09-26 08:40:53,017][91478] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.4031, enqueue_policy_requests: 42.5075, env_step: 1194.7815, overhead: 29.3365, complete_rollouts: 1.0681
+save_policy_outputs: 54.2119
+  split_output_tensors: 18.5711
+[2023-09-26 08:40:53,017][91478] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.4047, enqueue_policy_requests: 42.0556, env_step: 1242.2116, overhead: 29.2284, complete_rollouts: 1.0608
+save_policy_outputs: 53.4701
+  split_output_tensors: 18.2535
+[2023-09-26 08:40:53,018][91478] Loop Runner_EvtLoop terminating...
+[2023-09-26 08:40:53,018][91478] Runner profile tree view:
+main_loop: 3215.1037
+[2023-09-26 08:40:53,018][91478] Collected {0: 10006528, 1: 10006528}, FPS: 6224.7