diff --git "a/sf_log.txt" "b/sf_log.txt"
new file mode 100644
--- /dev/null
+++ "b/sf_log.txt"
@@ -0,0 +1,2248 @@
+[2023-09-27 06:21:01,942][06167] Saving configuration to ./train_atari/atari_stargunner/config.json...
+[2023-09-27 06:21:02,259][06167] Rollout worker 0 uses device cpu
+[2023-09-27 06:21:02,260][06167] Rollout worker 1 uses device cpu
+[2023-09-27 06:21:02,261][06167] Rollout worker 2 uses device cpu
+[2023-09-27 06:21:02,261][06167] Rollout worker 3 uses device cpu
+[2023-09-27 06:21:02,262][06167] Rollout worker 4 uses device cpu
+[2023-09-27 06:21:02,262][06167] Rollout worker 5 uses device cpu
+[2023-09-27 06:21:02,263][06167] Rollout worker 6 uses device cpu
+[2023-09-27 06:21:02,263][06167] Rollout worker 7 uses device cpu
+[2023-09-27 06:21:02,264][06167] In synchronous mode, we only accumulate one batch. Setting num_batches_to_accumulate to 1
+[2023-09-27 06:21:02,311][06167] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 06:21:02,311][06167] InferenceWorker_p0-w0: min num requests: 1
+[2023-09-27 06:21:02,314][06167] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-27 06:21:02,315][06167] InferenceWorker_p1-w0: min num requests: 1
+[2023-09-27 06:21:02,337][06167] Starting all processes...
+[2023-09-27 06:21:02,338][06167] Starting process learner_proc0
+[2023-09-27 06:21:03,924][06167] Starting process learner_proc1
+[2023-09-27 06:21:03,928][06938] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 06:21:03,928][06938] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2023-09-27 06:21:03,946][06938] Num visible devices: 1
+[2023-09-27 06:21:03,963][06938] Starting seed is not provided
+[2023-09-27 06:21:03,963][06938] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 06:21:03,963][06938] Initializing actor-critic model on device cuda:0
+[2023-09-27 06:21:03,964][06938] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-27 06:21:03,964][06938] RunningMeanStd input shape: (1,)
+[2023-09-27 06:21:03,975][06938] ConvEncoder: input_channels=4
+[2023-09-27 06:21:04,132][06938] Conv encoder output size: 512
+[2023-09-27 06:21:04,134][06938] Created Actor Critic model with architecture:
+[2023-09-27 06:21:04,135][06938] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): MultiInputEncoder(
+    (encoders): ModuleDict(
+      (obs): ConvEncoder(
+        (enc): RecursiveScriptModule(
+          original_name=ConvEncoderImpl
+          (conv_head): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Conv2d)
+            (1): RecursiveScriptModule(original_name=ReLU)
+            (2): RecursiveScriptModule(original_name=Conv2d)
+            (3): RecursiveScriptModule(original_name=ReLU)
+            (4): RecursiveScriptModule(original_name=Conv2d)
+            (5): RecursiveScriptModule(original_name=ReLU)
+          )
+          (mlp_layers): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Linear)
+            (1): RecursiveScriptModule(original_name=ReLU)
+          )
+        )
+      )
+    )
+  )
+  (core): ModelCoreIdentity()
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=18, bias=True)
+  )
+)
+[2023-09-27 06:21:04,725][06938] Using optimizer
+[2023-09-27 06:21:04,726][06938] No checkpoints found
+[2023-09-27 06:21:04,726][06938] Did not load from checkpoint, starting from scratch!
+[2023-09-27 06:21:04,726][06938] Initialized policy 0 weights for model version 0
+[2023-09-27 06:21:04,727][06938] LearnerWorker_p0 finished initialization!
+[2023-09-27 06:21:04,728][06938] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 06:21:05,627][06167] Starting all processes...
+[2023-09-27 06:21:05,632][07019] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-27 06:21:05,632][07019] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for learning process 1
+[2023-09-27 06:21:05,633][06167] Starting process inference_proc0-0
+[2023-09-27 06:21:05,633][06167] Starting process inference_proc1-0
+[2023-09-27 06:21:05,633][06167] Starting process rollout_proc0
+[2023-09-27 06:21:05,634][06167] Starting process rollout_proc1
+[2023-09-27 06:21:05,650][07019] Num visible devices: 1
+[2023-09-27 06:21:05,634][06167] Starting process rollout_proc2
+[2023-09-27 06:21:05,634][06167] Starting process rollout_proc3
+[2023-09-27 06:21:05,635][06167] Starting process rollout_proc4
+[2023-09-27 06:21:05,638][06167] Starting process rollout_proc5
+[2023-09-27 06:21:05,639][06167] Starting process rollout_proc6
+[2023-09-27 06:21:05,678][07019] Starting seed is not provided
+[2023-09-27 06:21:05,678][07019] Using GPUs [0] for process 1 (actually maps to GPUs [1])
+[2023-09-27 06:21:05,678][07019] Initializing actor-critic model on device cuda:0
+[2023-09-27 06:21:05,679][07019] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-27 06:21:05,643][06167] Starting process rollout_proc7
+[2023-09-27 06:21:05,679][07019] RunningMeanStd input shape: (1,)
+[2023-09-27 06:21:05,699][07019] ConvEncoder: input_channels=4
+[2023-09-27 06:21:06,002][07019] Conv encoder output size: 512
+[2023-09-27 06:21:06,004][07019] Created Actor Critic model with architecture:
+[2023-09-27 06:21:06,004][07019] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): MultiInputEncoder(
+    (encoders): ModuleDict(
+      (obs): ConvEncoder(
+        (enc): RecursiveScriptModule(
+          original_name=ConvEncoderImpl
+          (conv_head): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Conv2d)
+            (1): RecursiveScriptModule(original_name=ReLU)
+            (2): RecursiveScriptModule(original_name=Conv2d)
+            (3): RecursiveScriptModule(original_name=ReLU)
+            (4): RecursiveScriptModule(original_name=Conv2d)
+            (5): RecursiveScriptModule(original_name=ReLU)
+          )
+          (mlp_layers): RecursiveScriptModule(
+            original_name=Sequential
+            (0): RecursiveScriptModule(original_name=Linear)
+            (1): RecursiveScriptModule(original_name=ReLU)
+          )
+        )
+      )
+    )
+  )
+  (core): ModelCoreIdentity()
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=18, bias=True)
+  )
+)
+[2023-09-27 06:21:06,600][07019] Using optimizer
+[2023-09-27 06:21:06,601][07019] No checkpoints found
+[2023-09-27 06:21:06,601][07019] Did not load from checkpoint, starting from scratch!
+[2023-09-27 06:21:06,601][07019] Initialized policy 1 weights for model version 0
+[2023-09-27 06:21:06,602][07019] LearnerWorker_p1 finished initialization!
+[2023-09-27 06:21:06,603][07019] Using GPUs [0] for process 1 (actually maps to GPUs [1])
+[2023-09-27 06:21:07,553][07225] Worker 5 uses CPU cores [20, 21, 22, 23]
+[2023-09-27 06:21:07,553][07177] Worker 0 uses CPU cores [0, 1, 2, 3]
+[2023-09-27 06:21:07,554][07176] Using GPUs [1] for process 1 (actually maps to GPUs [1])
+[2023-09-27 06:21:07,554][07176] Set environment var CUDA_VISIBLE_DEVICES to '1' (GPU indices [1]) for inference process 1
+[2023-09-27 06:21:07,560][07219] Worker 1 uses CPU cores [4, 5, 6, 7]
+[2023-09-27 06:21:07,572][07176] Num visible devices: 1
+[2023-09-27 06:21:07,585][07224] Worker 6 uses CPU cores [24, 25, 26, 27]
+[2023-09-27 06:21:07,601][07223] Worker 4 uses CPU cores [16, 17, 18, 19]
+[2023-09-27 06:21:07,631][07175] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2023-09-27 06:21:07,632][07175] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2023-09-27 06:21:07,653][07175] Num visible devices: 1
+[2023-09-27 06:21:07,658][07220] Worker 2 uses CPU cores [8, 9, 10, 11]
+[2023-09-27 06:21:07,724][07226] Worker 7 uses CPU cores [28, 29, 30, 31]
+[2023-09-27 06:21:07,725][07221] Worker 3 uses CPU cores [12, 13, 14, 15]
+[2023-09-27 06:21:08,184][07176] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-27 06:21:08,184][07176] RunningMeanStd input shape: (1,)
+[2023-09-27 06:21:08,195][07176] ConvEncoder: input_channels=4
+[2023-09-27 06:21:08,229][06167] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan, 1: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2023-09-27 06:21:08,247][07175] RunningMeanStd input shape: (4, 84, 84)
+[2023-09-27 06:21:08,247][07175] RunningMeanStd input shape: (1,)
+[2023-09-27 06:21:08,258][07175] ConvEncoder: input_channels=4
+[2023-09-27 06:21:08,294][07176] Conv encoder output size: 512
+[2023-09-27 06:21:08,300][06167] Inference worker 1-0 is ready!
+[2023-09-27 06:21:08,356][07175] Conv encoder output size: 512
+[2023-09-27 06:21:08,362][06167] Inference worker 0-0 is ready!
+[2023-09-27 06:21:08,362][06167] All inference workers are ready! Signal rollout workers to start!
+[2023-09-27 06:21:08,797][07219] Decorrelating experience for 0 frames...
+[2023-09-27 06:21:08,797][07223] Decorrelating experience for 0 frames...
+[2023-09-27 06:21:08,800][07177] Decorrelating experience for 0 frames...
+[2023-09-27 06:21:08,800][07224] Decorrelating experience for 0 frames...
+[2023-09-27 06:21:08,801][07226] Decorrelating experience for 0 frames...
+[2023-09-27 06:21:08,804][07225] Decorrelating experience for 0 frames...
+[2023-09-27 06:21:08,875][07220] Decorrelating experience for 0 frames...
+[2023-09-27 06:21:08,888][07221] Decorrelating experience for 0 frames...
+[2023-09-27 06:21:13,229][06167] Fps is (10 sec: 1638.4, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 8192. Throughput: 0: 204.8, 1: 204.8. Samples: 2048. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:21:13,230][06167] Avg episode reward: [(0, '0.333'), (1, '0.889')]
+[2023-09-27 06:21:18,229][06167] Fps is (10 sec: 3276.8, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 32768. Throughput: 0: 409.6, 1: 409.6. Samples: 8192. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:21:18,230][06167] Avg episode reward: [(0, '0.778'), (1, '1.158')]
+[2023-09-27 06:21:22,302][06167] Heartbeat connected on LearnerWorker_p0
+[2023-09-27 06:21:22,305][06167] Heartbeat connected on Batcher_1
+[2023-09-27 06:21:22,307][06167] Heartbeat connected on Batcher_0
+[2023-09-27 06:21:22,318][06167] Heartbeat connected on RolloutWorker_w0
+[2023-09-27 06:21:22,321][06167] Heartbeat connected on RolloutWorker_w1
+[2023-09-27 06:21:22,323][06167] Heartbeat connected on RolloutWorker_w2
+[2023-09-27 06:21:22,326][06167] Heartbeat connected on RolloutWorker_w3
+[2023-09-27 06:21:22,329][06167] Heartbeat connected on RolloutWorker_w4
+[2023-09-27 06:21:22,332][06167] Heartbeat connected on RolloutWorker_w5
+[2023-09-27 06:21:22,334][06167] Heartbeat connected on RolloutWorker_w6
+[2023-09-27 06:21:22,337][06167] Heartbeat connected on RolloutWorker_w7
+[2023-09-27 06:21:22,348][06167] Heartbeat connected on InferenceWorker_p0-w0
+[2023-09-27 06:21:22,366][06167] Heartbeat connected on InferenceWorker_p1-w0
+[2023-09-27 06:21:22,449][06167] Heartbeat connected on LearnerWorker_p1
+[2023-09-27 06:21:23,229][06167] Fps is (10 sec: 5734.4, 60 sec: 4369.1, 300 sec: 4369.1). Total num frames: 65536. Throughput: 0: 433.3, 1: 430.3. Samples: 12953. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:21:23,230][06167] Avg episode reward: [(0, '0.932'), (1, '0.868')]
+[2023-09-27 06:21:25,022][07176] Updated weights for policy 1, policy_version 160 (0.0018)
+[2023-09-27 06:21:25,023][07175] Updated weights for policy 0, policy_version 160 (0.0018)
+[2023-09-27 06:21:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 4915.2, 300 sec: 4915.2). Total num frames: 98304. Throughput: 0: 567.3, 1: 564.7. Samples: 22640. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:21:28,230][06167] Avg episode reward: [(0, '0.932'), (1, '0.833')]
+[2023-09-27 06:21:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 5242.9, 300 sec: 5242.9). Total num frames: 131072. Throughput: 0: 648.6, 1: 646.6. Samples: 32381. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:21:33,230][06167] Avg episode reward: [(0, '0.987'), (1, '0.853')]
+[2023-09-27 06:21:37,711][07175] Updated weights for policy 0, policy_version 320 (0.0014)
+[2023-09-27 06:21:37,711][07176] Updated weights for policy 1, policy_version 320 (0.0018)
+[2023-09-27 06:21:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 5461.3, 300 sec: 5461.3). Total num frames: 163840. Throughput: 0: 618.2, 1: 615.7. Samples: 37017. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 06:21:38,230][06167] Avg episode reward: [(0, '1.041'), (1, '0.955')]
+[2023-09-27 06:21:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 5617.4, 300 sec: 5617.4). Total num frames: 196608. Throughput: 0: 673.0, 1: 672.9. Samples: 47108. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:21:43,230][06167] Avg episode reward: [(0, '1.060'), (1, '0.910')]
+[2023-09-27 06:21:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 5734.4, 300 sec: 5734.4). Total num frames: 229376. Throughput: 0: 716.8, 1: 716.1. Samples: 57317. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 06:21:48,230][06167] Avg episode reward: [(0, '1.100'), (1, '1.100')]
+[2023-09-27 06:21:48,232][07019] Saving new best policy, reward=1.100!
+[2023-09-27 06:21:48,231][06938] Saving new best policy, reward=1.100!
+[2023-09-27 06:21:50,075][07175] Updated weights for policy 0, policy_version 480 (0.0017)
+[2023-09-27 06:21:50,076][07176] Updated weights for policy 1, policy_version 480 (0.0019)
+[2023-09-27 06:21:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 5825.4, 300 sec: 5825.4). Total num frames: 262144. Throughput: 0: 689.0, 1: 687.6. Samples: 61950. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:21:53,230][06167] Avg episode reward: [(0, '1.050'), (1, '1.150')]
+[2023-09-27 06:21:53,230][07019] Saving new best policy, reward=1.150!
+[2023-09-27 06:21:58,229][06167] Fps is (10 sec: 6553.7, 60 sec: 5898.3, 300 sec: 5898.3). Total num frames: 294912. Throughput: 0: 776.0, 1: 774.6. Samples: 71821. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:21:58,229][06167] Avg episode reward: [(0, '1.070'), (1, '1.140')]
+[2023-09-27 06:22:02,447][07175] Updated weights for policy 0, policy_version 640 (0.0016)
+[2023-09-27 06:22:02,451][07176] Updated weights for policy 1, policy_version 640 (0.0015)
+[2023-09-27 06:22:03,229][06167] Fps is (10 sec: 6553.4, 60 sec: 5957.8, 300 sec: 5957.8). Total num frames: 327680. Throughput: 0: 819.2, 1: 819.2. Samples: 81920. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 06:22:03,230][06167] Avg episode reward: [(0, '1.240'), (1, '1.210')]
+[2023-09-27 06:22:03,231][06938] Saving new best policy, reward=1.240!
+[2023-09-27 06:22:03,232][07019] Saving new best policy, reward=1.210!
+[2023-09-27 06:22:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6007.5, 300 sec: 6007.5). Total num frames: 360448. Throughput: 0: 820.6, 1: 820.4. Samples: 86799. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:22:08,230][06167] Avg episode reward: [(0, '1.300'), (1, '1.250')]
+[2023-09-27 06:22:08,230][06938] Saving new best policy, reward=1.300!
+[2023-09-27 06:22:08,230][07019] Saving new best policy, reward=1.250!
+[2023-09-27 06:22:13,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6049.5). Total num frames: 393216. Throughput: 0: 820.6, 1: 820.3. Samples: 96478. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:22:13,230][06167] Avg episode reward: [(0, '1.320'), (1, '1.370')]
+[2023-09-27 06:22:13,237][06938] Saving new best policy, reward=1.320!
+[2023-09-27 06:22:13,238][07019] Saving new best policy, reward=1.370!
+[2023-09-27 06:22:15,062][07176] Updated weights for policy 1, policy_version 800 (0.0016)
+[2023-09-27 06:22:15,063][07175] Updated weights for policy 0, policy_version 800 (0.0018)
+[2023-09-27 06:22:18,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6085.5). Total num frames: 425984. Throughput: 0: 823.0, 1: 823.7. Samples: 106481. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:22:18,230][06167] Avg episode reward: [(0, '1.270'), (1, '1.600')]
+[2023-09-27 06:22:18,231][07019] Saving new best policy, reward=1.600!
+[2023-09-27 06:22:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6116.7). Total num frames: 458752. Throughput: 0: 821.7, 1: 822.2. Samples: 110995. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:22:23,230][06167] Avg episode reward: [(0, '1.180'), (1, '1.640')]
+[2023-09-27 06:22:23,231][07019] Saving new best policy, reward=1.640!
+[2023-09-27 06:22:27,668][07175] Updated weights for policy 0, policy_version 960 (0.0016)
+[2023-09-27 06:22:27,668][07176] Updated weights for policy 1, policy_version 960 (0.0018)
+[2023-09-27 06:22:28,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6144.0). Total num frames: 491520. Throughput: 0: 819.1, 1: 819.2. Samples: 120832. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 06:22:28,229][06167] Avg episode reward: [(0, '1.290'), (1, '1.580')]
+[2023-09-27 06:22:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6168.1). Total num frames: 524288. Throughput: 0: 815.9, 1: 815.1. Samples: 130711. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:22:33,230][06167] Avg episode reward: [(0, '1.300'), (1, '1.690')]
+[2023-09-27 06:22:33,231][07019] Saving new best policy, reward=1.690!
+[2023-09-27 06:22:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6189.5). Total num frames: 557056. Throughput: 0: 816.3, 1: 816.2. Samples: 135413. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:22:38,230][06167] Avg episode reward: [(0, '1.480'), (1, '1.540')]
+[2023-09-27 06:22:38,231][06938] Saving new best policy, reward=1.480!
+[2023-09-27 06:22:40,133][07176] Updated weights for policy 1, policy_version 1120 (0.0017)
+[2023-09-27 06:22:40,133][07175] Updated weights for policy 0, policy_version 1120 (0.0018)
+[2023-09-27 06:22:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6208.7). Total num frames: 589824. Throughput: 0: 817.0, 1: 818.4. Samples: 145414. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:22:43,230][06167] Avg episode reward: [(0, '1.520'), (1, '1.560')]
+[2023-09-27 06:22:43,238][06938] Saving new best policy, reward=1.520!
+[2023-09-27 06:22:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6225.9). Total num frames: 622592. Throughput: 0: 816.4, 1: 814.6. Samples: 155312. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 06:22:48,230][06167] Avg episode reward: [(0, '1.450'), (1, '1.520')]
+[2023-09-27 06:22:52,662][07176] Updated weights for policy 1, policy_version 1280 (0.0017)
+[2023-09-27 06:22:52,663][07175] Updated weights for policy 0, policy_version 1280 (0.0017)
+[2023-09-27 06:22:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6241.5). Total num frames: 655360. Throughput: 0: 813.4, 1: 813.0. Samples: 159987. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:22:53,229][06167] Avg episode reward: [(0, '1.470'), (1, '1.390')]
+[2023-09-27 06:22:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6255.7). Total num frames: 688128. Throughput: 0: 816.0, 1: 817.4. Samples: 169984. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-27 06:22:58,230][06167] Avg episode reward: [(0, '1.320'), (1, '1.420')]
+[2023-09-27 06:22:58,240][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000001344_344064.pth...
+[2023-09-27 06:22:58,240][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000001344_344064.pth...
+[2023-09-27 06:23:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6268.7). Total num frames: 720896. Throughput: 0: 817.7, 1: 816.3. Samples: 180012. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:23:03,230][06167] Avg episode reward: [(0, '1.310'), (1, '1.440')]
+[2023-09-27 06:23:05,195][07175] Updated weights for policy 0, policy_version 1440 (0.0015)
+[2023-09-27 06:23:05,195][07176] Updated weights for policy 1, policy_version 1440 (0.0017)
+[2023-09-27 06:23:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6280.5). Total num frames: 753664. Throughput: 0: 817.9, 1: 817.5. Samples: 184589. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 06:23:08,230][06167] Avg episode reward: [(0, '1.180'), (1, '1.510')]
+[2023-09-27 06:23:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6291.5). Total num frames: 786432. Throughput: 0: 819.2, 1: 819.2. Samples: 194561. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:23:13,230][06167] Avg episode reward: [(0, '1.250'), (1, '1.440')]
+[2023-09-27 06:23:17,703][07175] Updated weights for policy 0, policy_version 1600 (0.0017)
+[2023-09-27 06:23:17,703][07176] Updated weights for policy 1, policy_version 1600 (0.0018)
+[2023-09-27 06:23:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6301.5). Total num frames: 819200. Throughput: 0: 820.4, 1: 821.0. Samples: 204575. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:23:18,230][06167] Avg episode reward: [(0, '1.240'), (1, '1.330')]
+[2023-09-27 06:23:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6310.9). Total num frames: 851968. Throughput: 0: 820.3, 1: 820.3. Samples: 209242. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:23:23,230][06167] Avg episode reward: [(0, '1.320'), (1, '1.390')]
+[2023-09-27 06:23:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6319.5). Total num frames: 884736. Throughput: 0: 819.2, 1: 819.2. Samples: 219141. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:23:28,230][06167] Avg episode reward: [(0, '1.360'), (1, '1.500')]
+[2023-09-27 06:23:30,067][07176] Updated weights for policy 1, policy_version 1760 (0.0018)
+[2023-09-27 06:23:30,068][07175] Updated weights for policy 0, policy_version 1760 (0.0017)
+[2023-09-27 06:23:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6327.6). Total num frames: 917504. Throughput: 0: 822.0, 1: 822.8. Samples: 229331. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:23:33,229][06167] Avg episode reward: [(0, '1.410'), (1, '1.630')]
+[2023-09-27 06:23:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6335.1). Total num frames: 950272. Throughput: 0: 822.1, 1: 822.3. Samples: 233986. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-27 06:23:38,230][06167] Avg episode reward: [(0, '1.440'), (1, '1.770')]
+[2023-09-27 06:23:38,231][07019] Saving new best policy, reward=1.770!
+[2023-09-27 06:23:42,498][07175] Updated weights for policy 0, policy_version 1920 (0.0017)
+[2023-09-27 06:23:42,498][07176] Updated weights for policy 1, policy_version 1920 (0.0017)
+[2023-09-27 06:23:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6342.2). Total num frames: 983040. Throughput: 0: 821.4, 1: 820.0. Samples: 243848. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-27 06:23:43,230][06167] Avg episode reward: [(0, '1.390'), (1, '1.800')]
+[2023-09-27 06:23:43,241][07019] Saving new best policy, reward=1.800!
+[2023-09-27 06:23:48,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6348.8). Total num frames: 1015808. Throughput: 0: 820.7, 1: 822.4. Samples: 253952. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:23:48,229][06167] Avg episode reward: [(0, '1.280'), (1, '1.870')]
+[2023-09-27 06:23:48,230][07019] Saving new best policy, reward=1.870!
+[2023-09-27 06:23:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6355.0). Total num frames: 1048576. Throughput: 0: 823.3, 1: 824.2. Samples: 258727. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 06:23:53,230][06167] Avg episode reward: [(0, '1.400'), (1, '1.940')]
+[2023-09-27 06:23:53,231][07019] Saving new best policy, reward=1.940!
+[2023-09-27 06:23:54,962][07175] Updated weights for policy 0, policy_version 2080 (0.0015)
+[2023-09-27 06:23:54,962][07176] Updated weights for policy 1, policy_version 2080 (0.0019)
+[2023-09-27 06:23:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6360.9). Total num frames: 1081344. Throughput: 0: 823.0, 1: 821.4. Samples: 268559. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:23:58,229][06167] Avg episode reward: [(0, '1.350'), (1, '1.760')]
+[2023-09-27 06:24:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6366.4). Total num frames: 1114112. Throughput: 0: 821.3, 1: 822.1. Samples: 278528. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:24:03,230][06167] Avg episode reward: [(0, '1.410'), (1, '1.720')]
+[2023-09-27 06:24:07,473][07175] Updated weights for policy 0, policy_version 2240 (0.0018)
+[2023-09-27 06:24:07,473][07176] Updated weights for policy 1, policy_version 2240 (0.0020)
+[2023-09-27 06:24:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6371.6). Total num frames: 1146880. Throughput: 0: 822.5, 1: 822.6. Samples: 283274. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:24:08,229][06167] Avg episode reward: [(0, '1.610'), (1, '1.520')]
+[2023-09-27 06:24:08,230][06938] Saving new best policy, reward=1.610!
+[2023-09-27 06:24:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6376.5). Total num frames: 1179648. Throughput: 0: 823.1, 1: 821.9. Samples: 293164. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-27 06:24:13,230][06167] Avg episode reward: [(0, '1.390'), (1, '1.600')]
+[2023-09-27 06:24:18,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6381.1). Total num frames: 1212416. Throughput: 0: 819.2, 1: 820.2. Samples: 303104. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:24:18,230][06167] Avg episode reward: [(0, '1.370'), (1, '1.710')]
+[2023-09-27 06:24:20,018][07176] Updated weights for policy 1, policy_version 2400 (0.0017)
+[2023-09-27 06:24:20,019][07175] Updated weights for policy 0, policy_version 2400 (0.0016)
+[2023-09-27 06:24:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6385.6). Total num frames: 1245184. Throughput: 0: 820.6, 1: 820.4. Samples: 307831. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-27 06:24:23,230][06167] Avg episode reward: [(0, '1.250'), (1, '1.780')]
+[2023-09-27 06:24:28,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6389.8). Total num frames: 1277952. Throughput: 0: 820.2, 1: 820.2. Samples: 317667. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:24:28,229][06167] Avg episode reward: [(0, '1.390'), (1, '1.700')]
+[2023-09-27 06:24:32,431][07175] Updated weights for policy 0, policy_version 2560 (0.0016)
+[2023-09-27 06:24:32,432][07176] Updated weights for policy 1, policy_version 2560 (0.0015)
+[2023-09-27 06:24:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6393.8). Total num frames: 1310720. Throughput: 0: 819.2, 1: 819.2. Samples: 327680. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:24:33,229][06167] Avg episode reward: [(0, '1.400'), (1, '1.740')]
+[2023-09-27 06:24:38,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6397.6). Total num frames: 1343488. Throughput: 0: 822.0, 1: 821.2. Samples: 332671. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:24:38,230][06167] Avg episode reward: [(0, '1.470'), (1, '1.590')]
+[2023-09-27 06:24:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6401.2). Total num frames: 1376256. Throughput: 0: 823.4, 1: 823.6. Samples: 342674. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 06:24:43,230][06167] Avg episode reward: [(0, '1.450'), (1, '1.670')]
+[2023-09-27 06:24:44,867][07175] Updated weights for policy 0, policy_version 2720 (0.0016)
+[2023-09-27 06:24:44,867][07176] Updated weights for policy 1, policy_version 2720 (0.0016)
+[2023-09-27 06:24:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6404.7). Total num frames: 1409024. Throughput: 0: 819.3, 1: 819.2. Samples: 352260. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:24:48,230][06167] Avg episode reward: [(0, '1.500'), (1, '1.650')]
+[2023-09-27 06:24:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6408.0). Total num frames: 1441792. Throughput: 0: 823.0, 1: 822.8. Samples: 357335. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 06:24:53,230][06167] Avg episode reward: [(0, '1.500'), (1, '1.800')]
+[2023-09-27 06:24:57,335][07175] Updated weights for policy 0, policy_version 2880 (0.0017)
+[2023-09-27 06:24:57,335][07176] Updated weights for policy 1, policy_version 2880 (0.0016)
+[2023-09-27 06:24:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6411.1). Total num frames: 1474560. Throughput: 0: 820.7, 1: 820.6. Samples: 367020. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 06:24:58,230][06167] Avg episode reward: [(0, '1.530'), (1, '1.680')]
+[2023-09-27 06:24:58,238][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000002880_737280.pth...
+[2023-09-27 06:24:58,239][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000002880_737280.pth...
+[2023-09-27 06:25:03,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6414.2). Total num frames: 1507328. Throughput: 0: 819.5, 1: 819.2. Samples: 376849. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 06:25:03,230][06167] Avg episode reward: [(0, '1.470'), (1, '1.650')]
+[2023-09-27 06:25:08,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6417.1). Total num frames: 1540096. Throughput: 0: 821.8, 1: 822.4. Samples: 381820. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:25:08,229][06167] Avg episode reward: [(0, '1.570'), (1, '1.810')]
+[2023-09-27 06:25:09,932][07175] Updated weights for policy 0, policy_version 3040 (0.0017)
+[2023-09-27 06:25:09,932][07176] Updated weights for policy 1, policy_version 3040 (0.0017)
+[2023-09-27 06:25:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6419.9). Total num frames: 1572864. Throughput: 0: 820.5, 1: 820.2. Samples: 391497. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:25:13,230][06167] Avg episode reward: [(0, '1.590'), (1, '1.780')]
+[2023-09-27 06:25:18,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6422.5). Total num frames: 1605632. Throughput: 0: 819.2, 1: 819.2. Samples: 401408. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:25:18,230][06167] Avg episode reward: [(0, '1.620'), (1, '1.910')]
+[2023-09-27 06:25:18,231][06938] Saving new best policy, reward=1.620!
+[2023-09-27 06:25:22,589][07176] Updated weights for policy 1, policy_version 3200 (0.0017)
+[2023-09-27 06:25:22,589][07175] Updated weights for policy 0, policy_version 3200 (0.0019)
+[2023-09-27 06:25:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6425.1). Total num frames: 1638400. Throughput: 0: 815.3, 1: 814.9. Samples: 406030. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:25:23,230][06167] Avg episode reward: [(0, '1.680'), (1, '1.960')]
+[2023-09-27 06:25:23,232][07019] Saving new best policy, reward=1.960!
+[2023-09-27 06:25:23,232][06938] Saving new best policy, reward=1.680!
+[2023-09-27 06:25:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6427.6). Total num frames: 1671168. Throughput: 0: 812.3, 1: 812.6. Samples: 415797. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:25:28,230][06167] Avg episode reward: [(0, '1.780'), (1, '2.030')]
+[2023-09-27 06:25:28,239][06938] Saving new best policy, reward=1.780!
+[2023-09-27 06:25:28,239][07019] Saving new best policy, reward=2.030!
+[2023-09-27 06:25:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6429.9). Total num frames: 1703936. Throughput: 0: 819.1, 1: 819.2. Samples: 425984. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:25:33,230][06167] Avg episode reward: [(0, '1.710'), (1, '2.030')]
+[2023-09-27 06:25:35,047][07175] Updated weights for policy 0, policy_version 3360 (0.0017)
+[2023-09-27 06:25:35,048][07176] Updated weights for policy 1, policy_version 3360 (0.0015)
+[2023-09-27 06:25:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6432.2). Total num frames: 1736704. Throughput: 0: 813.7, 1: 814.3. Samples: 430596. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 06:25:38,230][06167] Avg episode reward: [(0, '1.770'), (1, '2.090')]
+[2023-09-27 06:25:38,231][07019] Saving new best policy, reward=2.090!
+[2023-09-27 06:25:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6434.4). Total num frames: 1769472. Throughput: 0: 813.9, 1: 815.2. Samples: 440326. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:25:43,229][06167] Avg episode reward: [(0, '1.830'), (1, '2.110')]
+[2023-09-27 06:25:43,239][06938] Saving new best policy, reward=1.830!
+[2023-09-27 06:25:43,239][07019] Saving new best policy, reward=2.110! +[2023-09-27 06:25:47,465][07175] Updated weights for policy 0, policy_version 3520 (0.0018) +[2023-09-27 06:25:47,465][07176] Updated weights for policy 1, policy_version 3520 (0.0017) +[2023-09-27 06:25:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6436.6). Total num frames: 1802240. Throughput: 0: 818.9, 1: 819.2. Samples: 450560. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 06:25:48,230][06167] Avg episode reward: [(0, '1.800'), (1, '1.980')] +[2023-09-27 06:25:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6438.6). Total num frames: 1835008. Throughput: 0: 817.4, 1: 816.7. Samples: 455355. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:25:53,229][06167] Avg episode reward: [(0, '1.870'), (1, '2.270')] +[2023-09-27 06:25:53,230][06938] Saving new best policy, reward=1.870! +[2023-09-27 06:25:53,230][07019] Saving new best policy, reward=2.270! +[2023-09-27 06:25:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6440.6). Total num frames: 1867776. Throughput: 0: 815.9, 1: 816.5. Samples: 464953. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 06:25:58,230][06167] Avg episode reward: [(0, '1.830'), (1, '2.610')] +[2023-09-27 06:25:58,240][07019] Saving new best policy, reward=2.610! +[2023-09-27 06:26:00,027][07175] Updated weights for policy 0, policy_version 3680 (0.0017) +[2023-09-27 06:26:00,028][07176] Updated weights for policy 1, policy_version 3680 (0.0016) +[2023-09-27 06:26:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6442.5). Total num frames: 1900544. Throughput: 0: 819.2, 1: 818.2. Samples: 475092. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:26:03,230][06167] Avg episode reward: [(0, '1.870'), (1, '2.470')] +[2023-09-27 06:26:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 1933312. 
Throughput: 0: 819.7, 1: 820.4. Samples: 479834. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:26:08,230][06167] Avg episode reward: [(0, '1.710'), (1, '2.390')] +[2023-09-27 06:26:12,514][07175] Updated weights for policy 0, policy_version 3840 (0.0016) +[2023-09-27 06:26:12,515][07176] Updated weights for policy 1, policy_version 3840 (0.0016) +[2023-09-27 06:26:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 1966080. Throughput: 0: 819.3, 1: 819.2. Samples: 489530. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:26:13,230][06167] Avg episode reward: [(0, '1.870'), (1, '1.770')] +[2023-09-27 06:26:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 1998848. Throughput: 0: 819.2, 1: 818.5. Samples: 499681. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:26:18,230][06167] Avg episode reward: [(0, '2.020'), (1, '1.720')] +[2023-09-27 06:26:18,231][06938] Saving new best policy, reward=2.020! +[2023-09-27 06:26:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2031616. Throughput: 0: 819.2, 1: 818.7. Samples: 504300. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 06:26:23,230][06167] Avg episode reward: [(0, '2.300'), (1, '1.920')] +[2023-09-27 06:26:23,231][06938] Saving new best policy, reward=2.300! +[2023-09-27 06:26:25,007][07176] Updated weights for policy 1, policy_version 4000 (0.0016) +[2023-09-27 06:26:25,008][07175] Updated weights for policy 0, policy_version 4000 (0.0016) +[2023-09-27 06:26:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2064384. Throughput: 0: 822.1, 1: 820.7. Samples: 514253. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:26:28,230][06167] Avg episode reward: [(0, '2.350'), (1, '2.210')] +[2023-09-27 06:26:28,239][06938] Saving new best policy, reward=2.350! 
+[2023-09-27 06:26:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2097152. Throughput: 0: 819.2, 1: 819.2. Samples: 524290. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:26:33,230][06167] Avg episode reward: [(0, '2.110'), (1, '2.480')] +[2023-09-27 06:26:37,486][07175] Updated weights for policy 0, policy_version 4160 (0.0019) +[2023-09-27 06:26:37,487][07176] Updated weights for policy 1, policy_version 4160 (0.0019) +[2023-09-27 06:26:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2129920. Throughput: 0: 818.2, 1: 818.2. Samples: 528992. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:26:38,230][06167] Avg episode reward: [(0, '1.960'), (1, '2.620')] +[2023-09-27 06:26:38,231][07019] Saving new best policy, reward=2.620! +[2023-09-27 06:26:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2162688. Throughput: 0: 822.3, 1: 821.6. Samples: 538928. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:26:43,230][06167] Avg episode reward: [(0, '2.000'), (1, '2.640')] +[2023-09-27 06:26:43,240][07019] Saving new best policy, reward=2.640! +[2023-09-27 06:26:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2195456. Throughput: 0: 819.3, 1: 820.2. Samples: 548868. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 06:26:48,230][06167] Avg episode reward: [(0, '2.070'), (1, '2.610')] +[2023-09-27 06:26:49,906][07175] Updated weights for policy 0, policy_version 4320 (0.0018) +[2023-09-27 06:26:49,906][07176] Updated weights for policy 1, policy_version 4320 (0.0018) +[2023-09-27 06:26:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2228224. Throughput: 0: 822.0, 1: 821.8. Samples: 553805. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:26:53,230][06167] Avg episode reward: [(0, '2.080'), (1, '2.620')] +[2023-09-27 06:26:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2260992. Throughput: 0: 823.5, 1: 823.0. Samples: 563623. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:26:58,230][06167] Avg episode reward: [(0, '2.220'), (1, '2.710')] +[2023-09-27 06:26:58,239][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000004416_1130496.pth... +[2023-09-27 06:26:58,239][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000004416_1130496.pth... +[2023-09-27 06:26:58,276][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000001344_344064.pth +[2023-09-27 06:26:58,278][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000001344_344064.pth +[2023-09-27 06:26:58,283][07019] Saving new best policy, reward=2.710! +[2023-09-27 06:27:02,259][07176] Updated weights for policy 1, policy_version 4480 (0.0017) +[2023-09-27 06:27:02,259][07175] Updated weights for policy 0, policy_version 4480 (0.0017) +[2023-09-27 06:27:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2293760. Throughput: 0: 821.3, 1: 820.7. Samples: 573572. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:27:03,230][06167] Avg episode reward: [(0, '2.340'), (1, '2.670')] +[2023-09-27 06:27:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2326528. Throughput: 0: 821.9, 1: 823.0. Samples: 578319. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:27:08,230][06167] Avg episode reward: [(0, '2.300'), (1, '2.510')] +[2023-09-27 06:27:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2359296. Throughput: 0: 821.4, 1: 821.5. Samples: 588183. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:27:13,230][06167] Avg episode reward: [(0, '2.410'), (1, '2.190')] +[2023-09-27 06:27:13,240][06938] Saving new best policy, reward=2.410! +[2023-09-27 06:27:14,824][07176] Updated weights for policy 1, policy_version 4640 (0.0018) +[2023-09-27 06:27:14,824][07175] Updated weights for policy 0, policy_version 4640 (0.0018) +[2023-09-27 06:27:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2392064. Throughput: 0: 821.3, 1: 820.0. Samples: 598148. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:27:18,230][06167] Avg episode reward: [(0, '2.370'), (1, '2.250')] +[2023-09-27 06:27:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2424832. Throughput: 0: 826.2, 1: 826.4. Samples: 603358. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 06:27:23,230][06167] Avg episode reward: [(0, '2.370'), (1, '2.260')] +[2023-09-27 06:27:27,329][07175] Updated weights for policy 0, policy_version 4800 (0.0017) +[2023-09-27 06:27:27,330][07176] Updated weights for policy 1, policy_version 4800 (0.0017) +[2023-09-27 06:27:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2457600. Throughput: 0: 821.0, 1: 821.5. Samples: 612842. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 06:27:28,230][06167] Avg episode reward: [(0, '2.410'), (1, '2.390')] +[2023-09-27 06:27:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2490368. Throughput: 0: 820.7, 1: 819.4. Samples: 622669. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:27:33,229][06167] Avg episode reward: [(0, '2.470'), (1, '2.390')] +[2023-09-27 06:27:33,230][06938] Saving new best policy, reward=2.470! +[2023-09-27 06:27:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2523136. Throughput: 0: 823.0, 1: 822.6. 
Samples: 627856. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:27:38,230][06167] Avg episode reward: [(0, '2.440'), (1, '2.420')] +[2023-09-27 06:27:39,644][07175] Updated weights for policy 0, policy_version 4960 (0.0019) +[2023-09-27 06:27:39,644][07176] Updated weights for policy 1, policy_version 4960 (0.0019) +[2023-09-27 06:27:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2555904. Throughput: 0: 824.6, 1: 824.8. Samples: 637848. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:27:43,230][06167] Avg episode reward: [(0, '2.430'), (1, '2.370')] +[2023-09-27 06:27:48,229][06167] Fps is (10 sec: 7372.8, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 2596864. Throughput: 0: 825.6, 1: 825.4. Samples: 647868. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:27:48,230][06167] Avg episode reward: [(0, '2.510'), (1, '2.290')] +[2023-09-27 06:27:48,231][06938] Saving new best policy, reward=2.510! +[2023-09-27 06:27:51,960][07175] Updated weights for policy 0, policy_version 5120 (0.0018) +[2023-09-27 06:27:51,960][07176] Updated weights for policy 1, policy_version 5120 (0.0014) +[2023-09-27 06:27:53,229][06167] Fps is (10 sec: 6963.3, 60 sec: 6621.9, 300 sec: 6567.5). Total num frames: 2625536. Throughput: 0: 830.8, 1: 829.6. Samples: 653034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:27:53,230][06167] Avg episode reward: [(0, '2.530'), (1, '2.350')] +[2023-09-27 06:27:53,231][06938] Saving new best policy, reward=2.530! +[2023-09-27 06:27:58,229][06167] Fps is (10 sec: 5734.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 2654208. Throughput: 0: 826.8, 1: 826.5. Samples: 662579. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:27:58,230][06167] Avg episode reward: [(0, '2.680'), (1, '2.300')] +[2023-09-27 06:27:58,249][06938] Saving new best policy, reward=2.680! 
+[2023-09-27 06:28:03,229][06167] Fps is (10 sec: 6963.2, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 2695168. Throughput: 0: 827.3, 1: 827.4. Samples: 672607. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 06:28:03,230][06167] Avg episode reward: [(0, '2.820'), (1, '2.210')] +[2023-09-27 06:28:03,230][06938] Saving new best policy, reward=2.820! +[2023-09-27 06:28:04,466][07175] Updated weights for policy 0, policy_version 5280 (0.0019) +[2023-09-27 06:28:04,466][07176] Updated weights for policy 1, policy_version 5280 (0.0017) +[2023-09-27 06:28:08,229][06167] Fps is (10 sec: 7372.8, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 2727936. Throughput: 0: 824.7, 1: 824.7. Samples: 677579. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 06:28:08,230][06167] Avg episode reward: [(0, '2.790'), (1, '2.340')] +[2023-09-27 06:28:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 2760704. Throughput: 0: 828.5, 1: 828.2. Samples: 687393. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:28:13,230][06167] Avg episode reward: [(0, '2.920'), (1, '2.410')] +[2023-09-27 06:28:13,238][06938] Saving new best policy, reward=2.920! +[2023-09-27 06:28:16,890][07176] Updated weights for policy 1, policy_version 5440 (0.0017) +[2023-09-27 06:28:16,890][07175] Updated weights for policy 0, policy_version 5440 (0.0016) +[2023-09-27 06:28:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 2793472. Throughput: 0: 828.3, 1: 828.3. Samples: 697217. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 06:28:18,230][06167] Avg episode reward: [(0, '2.850'), (1, '2.390')] +[2023-09-27 06:28:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 2826240. Throughput: 0: 827.9, 1: 828.3. Samples: 702388. 
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 06:28:23,230][06167] Avg episode reward: [(0, '2.970'), (1, '2.740')] +[2023-09-27 06:28:23,231][06938] Saving new best policy, reward=2.970! +[2023-09-27 06:28:23,231][07019] Saving new best policy, reward=2.740! +[2023-09-27 06:28:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 2859008. Throughput: 0: 828.2, 1: 827.8. Samples: 712369. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:28:28,230][06167] Avg episode reward: [(0, '3.010'), (1, '2.870')] +[2023-09-27 06:28:28,240][06938] Saving new best policy, reward=3.010! +[2023-09-27 06:28:28,240][07019] Saving new best policy, reward=2.870! +[2023-09-27 06:28:29,229][07175] Updated weights for policy 0, policy_version 5600 (0.0016) +[2023-09-27 06:28:29,230][07176] Updated weights for policy 1, policy_version 5600 (0.0016) +[2023-09-27 06:28:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 2891776. Throughput: 0: 823.1, 1: 822.7. Samples: 721931. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:28:33,230][06167] Avg episode reward: [(0, '2.760'), (1, '2.690')] +[2023-09-27 06:28:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 2924544. Throughput: 0: 821.5, 1: 822.8. Samples: 727029. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:28:38,229][06167] Avg episode reward: [(0, '2.730'), (1, '2.550')] +[2023-09-27 06:28:41,748][07176] Updated weights for policy 1, policy_version 5760 (0.0015) +[2023-09-27 06:28:41,748][07175] Updated weights for policy 0, policy_version 5760 (0.0017) +[2023-09-27 06:28:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 2957312. Throughput: 0: 825.9, 1: 826.0. Samples: 736916. 
Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 06:28:43,229][06167] Avg episode reward: [(0, '2.700'), (1, '2.320')] +[2023-09-27 06:28:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 2990080. Throughput: 0: 822.4, 1: 822.4. Samples: 746622. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 06:28:48,229][06167] Avg episode reward: [(0, '2.760'), (1, '2.330')] +[2023-09-27 06:28:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6621.9, 300 sec: 6581.4). Total num frames: 3022848. Throughput: 0: 821.8, 1: 823.4. Samples: 751616. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:28:53,229][06167] Avg episode reward: [(0, '2.940'), (1, '2.460')] +[2023-09-27 06:28:54,241][07176] Updated weights for policy 1, policy_version 5920 (0.0017) +[2023-09-27 06:28:54,241][07175] Updated weights for policy 0, policy_version 5920 (0.0018) +[2023-09-27 06:28:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 3055616. Throughput: 0: 824.0, 1: 824.0. Samples: 761553. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:28:58,230][06167] Avg episode reward: [(0, '3.080'), (1, '2.720')] +[2023-09-27 06:28:58,242][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000005968_1527808.pth... +[2023-09-27 06:28:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000005968_1527808.pth... +[2023-09-27 06:28:58,278][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000002880_737280.pth +[2023-09-27 06:28:58,278][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000002880_737280.pth +[2023-09-27 06:28:58,281][06938] Saving new best policy, reward=3.080! +[2023-09-27 06:29:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3088384. Throughput: 0: 822.1, 1: 822.2. Samples: 771213. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:29:03,229][06167] Avg episode reward: [(0, '3.080'), (1, '2.710')] +[2023-09-27 06:29:06,741][07175] Updated weights for policy 0, policy_version 6080 (0.0018) +[2023-09-27 06:29:06,742][07176] Updated weights for policy 1, policy_version 6080 (0.0017) +[2023-09-27 06:29:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3121152. Throughput: 0: 819.4, 1: 820.7. Samples: 776192. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:29:08,230][06167] Avg episode reward: [(0, '2.780'), (1, '2.580')] +[2023-09-27 06:29:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3153920. Throughput: 0: 818.3, 1: 818.7. Samples: 786036. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 06:29:13,229][06167] Avg episode reward: [(0, '2.900'), (1, '2.600')] +[2023-09-27 06:29:18,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3186688. Throughput: 0: 820.4, 1: 821.1. Samples: 795799. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 06:29:18,229][06167] Avg episode reward: [(0, '2.940'), (1, '2.570')] +[2023-09-27 06:29:19,237][07175] Updated weights for policy 0, policy_version 6240 (0.0018) +[2023-09-27 06:29:19,237][07176] Updated weights for policy 1, policy_version 6240 (0.0017) +[2023-09-27 06:29:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3219456. Throughput: 0: 819.2, 1: 819.4. Samples: 800768. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 06:29:23,230][06167] Avg episode reward: [(0, '2.890'), (1, '2.570')] +[2023-09-27 06:29:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3252224. Throughput: 0: 820.7, 1: 821.2. Samples: 810804. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:29:28,230][06167] Avg episode reward: [(0, '2.900'), (1, '2.780')] +[2023-09-27 06:29:31,674][07175] Updated weights for policy 0, policy_version 6400 (0.0016) +[2023-09-27 06:29:31,675][07176] Updated weights for policy 1, policy_version 6400 (0.0017) +[2023-09-27 06:29:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3284992. Throughput: 0: 820.8, 1: 820.7. Samples: 820488. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:29:33,230][06167] Avg episode reward: [(0, '2.990'), (1, '2.690')] +[2023-09-27 06:29:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3317760. Throughput: 0: 819.2, 1: 819.2. Samples: 825344. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:29:38,230][06167] Avg episode reward: [(0, '2.950'), (1, '2.640')] +[2023-09-27 06:29:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3350528. Throughput: 0: 819.7, 1: 819.6. Samples: 835321. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:29:43,230][06167] Avg episode reward: [(0, '3.090'), (1, '2.650')] +[2023-09-27 06:29:43,241][06938] Saving new best policy, reward=3.090! +[2023-09-27 06:29:44,241][07175] Updated weights for policy 0, policy_version 6560 (0.0015) +[2023-09-27 06:29:44,241][07176] Updated weights for policy 1, policy_version 6560 (0.0016) +[2023-09-27 06:29:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3383296. Throughput: 0: 820.6, 1: 820.4. Samples: 845055. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:29:48,230][06167] Avg episode reward: [(0, '3.360'), (1, '2.660')] +[2023-09-27 06:29:48,231][06938] Saving new best policy, reward=3.360! +[2023-09-27 06:29:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3416064. Throughput: 0: 819.2, 1: 819.2. 
Samples: 849920. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:29:53,230][06167] Avg episode reward: [(0, '3.510'), (1, '2.800')] +[2023-09-27 06:29:53,231][06938] Saving new best policy, reward=3.510! +[2023-09-27 06:29:56,616][07175] Updated weights for policy 0, policy_version 6720 (0.0017) +[2023-09-27 06:29:56,616][07176] Updated weights for policy 1, policy_version 6720 (0.0017) +[2023-09-27 06:29:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3448832. Throughput: 0: 822.6, 1: 822.1. Samples: 860048. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0) +[2023-09-27 06:29:58,230][06167] Avg episode reward: [(0, '3.320'), (1, '2.770')] +[2023-09-27 06:30:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3481600. Throughput: 0: 822.6, 1: 821.6. Samples: 869786. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:30:03,230][06167] Avg episode reward: [(0, '3.200'), (1, '2.690')] +[2023-09-27 06:30:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3514368. Throughput: 0: 819.3, 1: 819.2. Samples: 874500. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:30:08,229][06167] Avg episode reward: [(0, '3.280'), (1, '2.580')] +[2023-09-27 06:30:09,081][07176] Updated weights for policy 1, policy_version 6880 (0.0017) +[2023-09-27 06:30:09,081][07175] Updated weights for policy 0, policy_version 6880 (0.0017) +[2023-09-27 06:30:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3547136. Throughput: 0: 821.0, 1: 820.9. Samples: 884687. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:30:13,229][06167] Avg episode reward: [(0, '3.380'), (1, '2.560')] +[2023-09-27 06:30:18,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3579904. Throughput: 0: 822.2, 1: 821.6. Samples: 894459. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:30:18,230][06167] Avg episode reward: [(0, '3.470'), (1, '2.660')] +[2023-09-27 06:30:21,550][07176] Updated weights for policy 1, policy_version 7040 (0.0017) +[2023-09-27 06:30:21,551][07175] Updated weights for policy 0, policy_version 7040 (0.0017) +[2023-09-27 06:30:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3612672. Throughput: 0: 819.8, 1: 819.3. Samples: 899101. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:30:23,230][06167] Avg episode reward: [(0, '3.500'), (1, '2.840')] +[2023-09-27 06:30:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3645440. Throughput: 0: 821.3, 1: 822.2. Samples: 909280. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 06:30:28,230][06167] Avg episode reward: [(0, '3.550'), (1, '2.830')] +[2023-09-27 06:30:28,241][06938] Saving new best policy, reward=3.550! +[2023-09-27 06:30:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3678208. Throughput: 0: 821.2, 1: 821.2. Samples: 918962. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 06:30:33,230][06167] Avg episode reward: [(0, '3.710'), (1, '2.910')] +[2023-09-27 06:30:33,232][06938] Saving new best policy, reward=3.710! +[2023-09-27 06:30:33,232][07019] Saving new best policy, reward=2.910! +[2023-09-27 06:30:34,109][07175] Updated weights for policy 0, policy_version 7200 (0.0013) +[2023-09-27 06:30:34,110][07176] Updated weights for policy 1, policy_version 7200 (0.0016) +[2023-09-27 06:30:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3710976. Throughput: 0: 819.3, 1: 819.2. Samples: 923652. 
Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0) +[2023-09-27 06:30:38,230][06167] Avg episode reward: [(0, '3.390'), (1, '2.760')] +[2023-09-27 06:30:43,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3743744. Throughput: 0: 817.3, 1: 817.7. Samples: 933625. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:30:43,229][06167] Avg episode reward: [(0, '3.560'), (1, '2.690')] +[2023-09-27 06:30:46,643][07176] Updated weights for policy 1, policy_version 7360 (0.0018) +[2023-09-27 06:30:46,643][07175] Updated weights for policy 0, policy_version 7360 (0.0016) +[2023-09-27 06:30:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3776512. Throughput: 0: 818.6, 1: 819.0. Samples: 943478. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:30:48,230][06167] Avg episode reward: [(0, '3.600'), (1, '2.820')] +[2023-09-27 06:30:53,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3809280. Throughput: 0: 819.2, 1: 819.2. Samples: 948229. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:30:53,230][06167] Avg episode reward: [(0, '3.440'), (1, '2.790')] +[2023-09-27 06:30:58,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3842048. Throughput: 0: 818.8, 1: 818.4. Samples: 958364. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:30:58,229][06167] Avg episode reward: [(0, '3.510'), (1, '2.940')] +[2023-09-27 06:30:58,238][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000007504_1921024.pth... +[2023-09-27 06:30:58,238][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000007504_1921024.pth... 
+[2023-09-27 06:30:58,272][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000004416_1130496.pth +[2023-09-27 06:30:58,276][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000004416_1130496.pth +[2023-09-27 06:30:58,281][07019] Saving new best policy, reward=2.940! +[2023-09-27 06:30:59,071][07175] Updated weights for policy 0, policy_version 7520 (0.0017) +[2023-09-27 06:30:59,071][07176] Updated weights for policy 1, policy_version 7520 (0.0018) +[2023-09-27 06:31:03,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3874816. Throughput: 0: 817.5, 1: 818.1. Samples: 968063. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:31:03,229][06167] Avg episode reward: [(0, '3.390'), (1, '2.990')] +[2023-09-27 06:31:03,230][07019] Saving new best policy, reward=2.990! +[2023-09-27 06:31:08,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3907584. Throughput: 0: 818.7, 1: 819.1. Samples: 972801. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:31:08,230][06167] Avg episode reward: [(0, '3.110'), (1, '2.950')] +[2023-09-27 06:31:11,651][07175] Updated weights for policy 0, policy_version 7680 (0.0018) +[2023-09-27 06:31:11,651][07176] Updated weights for policy 1, policy_version 7680 (0.0020) +[2023-09-27 06:31:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3940352. Throughput: 0: 817.9, 1: 817.4. Samples: 982864. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:31:13,229][06167] Avg episode reward: [(0, '3.090'), (1, '3.080')] +[2023-09-27 06:31:13,236][07019] Saving new best policy, reward=3.080! +[2023-09-27 06:31:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 3973120. Throughput: 0: 818.6, 1: 817.8. Samples: 992597. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 06:31:18,230][06167] Avg episode reward: [(0, '2.980'), (1, '3.200')]
+[2023-09-27 06:31:18,230][07019] Saving new best policy, reward=3.200!
+[2023-09-27 06:31:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4005888. Throughput: 0: 819.1, 1: 819.2. Samples: 997376. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 06:31:23,229][06167] Avg episode reward: [(0, '3.000'), (1, '3.200')]
+[2023-09-27 06:31:24,248][07175] Updated weights for policy 0, policy_version 7840 (0.0018)
+[2023-09-27 06:31:24,248][07176] Updated weights for policy 1, policy_version 7840 (0.0018)
+[2023-09-27 06:31:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4038656. Throughput: 0: 818.7, 1: 818.9. Samples: 1007316. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 06:31:28,230][06167] Avg episode reward: [(0, '3.040'), (1, '3.300')]
+[2023-09-27 06:31:28,239][07019] Saving new best policy, reward=3.300!
+[2023-09-27 06:31:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4071424. Throughput: 0: 816.8, 1: 817.8. Samples: 1017035. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 06:31:33,230][06167] Avg episode reward: [(0, '2.850'), (1, '3.440')]
+[2023-09-27 06:31:33,231][07019] Saving new best policy, reward=3.440!
+[2023-09-27 06:31:36,741][07175] Updated weights for policy 0, policy_version 8000 (0.0016)
+[2023-09-27 06:31:36,741][07176] Updated weights for policy 1, policy_version 8000 (0.0017)
+[2023-09-27 06:31:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4104192. Throughput: 0: 819.1, 1: 819.2. Samples: 1021952. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:31:38,230][06167] Avg episode reward: [(0, '2.950'), (1, '3.460')]
+[2023-09-27 06:31:38,231][07019] Saving new best policy, reward=3.460!
+[2023-09-27 06:31:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4136960. Throughput: 0: 816.2, 1: 816.2. Samples: 1031822. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:31:43,230][06167] Avg episode reward: [(0, '2.960'), (1, '3.020')]
+[2023-09-27 06:31:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4169728. Throughput: 0: 818.2, 1: 817.3. Samples: 1041660. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:31:48,230][06167] Avg episode reward: [(0, '2.990'), (1, '3.120')]
+[2023-09-27 06:31:49,163][07175] Updated weights for policy 0, policy_version 8160 (0.0016)
+[2023-09-27 06:31:49,163][07176] Updated weights for policy 1, policy_version 8160 (0.0017)
+[2023-09-27 06:31:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4202496. Throughput: 0: 819.2, 1: 819.2. Samples: 1046529. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:31:53,230][06167] Avg episode reward: [(0, '3.100'), (1, '3.220')]
+[2023-09-27 06:31:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4235264. Throughput: 0: 819.1, 1: 820.4. Samples: 1056641. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:31:58,229][06167] Avg episode reward: [(0, '3.050'), (1, '3.220')]
+[2023-09-27 06:32:01,638][07176] Updated weights for policy 1, policy_version 8320 (0.0015)
+[2023-09-27 06:32:01,640][07175] Updated weights for policy 0, policy_version 8320 (0.0017)
+[2023-09-27 06:32:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4268032. Throughput: 0: 819.1, 1: 819.9. Samples: 1066351. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:32:03,230][06167] Avg episode reward: [(0, '3.300'), (1, '3.200')]
+[2023-09-27 06:32:08,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4300800. Throughput: 0: 819.2, 1: 819.2. Samples: 1071105. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:32:08,230][06167] Avg episode reward: [(0, '3.450'), (1, '3.140')]
+[2023-09-27 06:32:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4333568. Throughput: 0: 820.9, 1: 820.9. Samples: 1081200. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:32:13,230][06167] Avg episode reward: [(0, '3.860'), (1, '2.960')]
+[2023-09-27 06:32:13,239][06938] Saving new best policy, reward=3.860!
+[2023-09-27 06:32:14,112][07175] Updated weights for policy 0, policy_version 8480 (0.0016)
+[2023-09-27 06:32:14,112][07176] Updated weights for policy 1, policy_version 8480 (0.0017)
+[2023-09-27 06:32:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4366336. Throughput: 0: 822.0, 1: 821.4. Samples: 1090989. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:32:18,230][06167] Avg episode reward: [(0, '4.020'), (1, '2.980')]
+[2023-09-27 06:32:18,231][06938] Saving new best policy, reward=4.020!
+[2023-09-27 06:32:23,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4399104. Throughput: 0: 819.3, 1: 819.2. Samples: 1095686. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:32:23,229][06167] Avg episode reward: [(0, '3.810'), (1, '3.110')]
+[2023-09-27 06:32:26,606][07176] Updated weights for policy 1, policy_version 8640 (0.0018)
+[2023-09-27 06:32:26,606][07175] Updated weights for policy 0, policy_version 8640 (0.0017)
+[2023-09-27 06:32:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4431872. Throughput: 0: 822.4, 1: 824.0. Samples: 1105911. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:32:28,230][06167] Avg episode reward: [(0, '3.790'), (1, '3.030')]
+[2023-09-27 06:32:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4464640. Throughput: 0: 819.6, 1: 821.0. Samples: 1115488. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:32:33,230][06167] Avg episode reward: [(0, '3.880'), (1, '2.980')]
+[2023-09-27 06:32:38,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4497408. Throughput: 0: 819.3, 1: 819.2. Samples: 1120260. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:32:38,229][06167] Avg episode reward: [(0, '3.990'), (1, '3.090')]
+[2023-09-27 06:32:39,186][07175] Updated weights for policy 0, policy_version 8800 (0.0019)
+[2023-09-27 06:32:39,187][07176] Updated weights for policy 1, policy_version 8800 (0.0017)
+[2023-09-27 06:32:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 4530176. Throughput: 0: 816.5, 1: 816.3. Samples: 1130117. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:32:43,230][06167] Avg episode reward: [(0, '3.940'), (1, '2.990')]
+[2023-09-27 06:32:48,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6567.5). Total num frames: 4562944. Throughput: 0: 816.4, 1: 816.3. Samples: 1139823. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:32:48,230][06167] Avg episode reward: [(0, '3.880'), (1, '3.150')]
+[2023-09-27 06:32:51,698][07176] Updated weights for policy 1, policy_version 8960 (0.0020)
+[2023-09-27 06:32:51,699][07175] Updated weights for policy 0, policy_version 8960 (0.0019)
+[2023-09-27 06:32:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 4595712. Throughput: 0: 819.2, 1: 819.2. Samples: 1144832. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 06:32:53,230][06167] Avg episode reward: [(0, '3.590'), (1, '3.300')]
+[2023-09-27 06:32:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 4628480. Throughput: 0: 817.8, 1: 816.7. Samples: 1154753. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:32:58,230][06167] Avg episode reward: [(0, '3.200'), (1, '3.210')]
+[2023-09-27 06:32:58,240][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000009040_2314240.pth...
+[2023-09-27 06:32:58,240][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000009040_2314240.pth...
+[2023-09-27 06:32:58,274][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000005968_1527808.pth
+[2023-09-27 06:32:58,278][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000005968_1527808.pth
+[2023-09-27 06:33:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 4661248. Throughput: 0: 812.5, 1: 813.4. Samples: 1164157. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:33:03,230][06167] Avg episode reward: [(0, '3.290'), (1, '3.390')]
+[2023-09-27 06:33:04,460][07175] Updated weights for policy 0, policy_version 9120 (0.0016)
+[2023-09-27 06:33:04,460][07176] Updated weights for policy 1, policy_version 9120 (0.0017)
+[2023-09-27 06:33:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 4694016. Throughput: 0: 816.9, 1: 815.3. Samples: 1169135. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:33:08,230][06167] Avg episode reward: [(0, '3.380'), (1, '3.380')]
+[2023-09-27 06:33:13,229][06167] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 4718592. Throughput: 0: 810.3, 1: 808.6. Samples: 1178763. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:33:13,229][06167] Avg episode reward: [(0, '3.180'), (1, '3.180')]
+[2023-09-27 06:33:17,263][07176] Updated weights for policy 1, policy_version 9280 (0.0018)
+[2023-09-27 06:33:17,264][07175] Updated weights for policy 0, policy_version 9280 (0.0017)
+[2023-09-27 06:33:18,229][06167] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 4751360. Throughput: 0: 805.5, 1: 805.0. Samples: 1187964. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:33:18,229][06167] Avg episode reward: [(0, '3.040'), (1, '3.160')]
+[2023-09-27 06:33:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 4784128. Throughput: 0: 807.8, 1: 806.4. Samples: 1192897. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:33:23,230][06167] Avg episode reward: [(0, '2.940'), (1, '3.220')]
+[2023-09-27 06:33:28,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 4816896. Throughput: 0: 805.8, 1: 804.9. Samples: 1202599. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:33:28,230][06167] Avg episode reward: [(0, '2.870'), (1, '3.260')]
+[2023-09-27 06:33:29,825][07176] Updated weights for policy 1, policy_version 9440 (0.0016)
+[2023-09-27 06:33:29,826][07175] Updated weights for policy 0, policy_version 9440 (0.0017)
+[2023-09-27 06:33:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 4849664. Throughput: 0: 807.5, 1: 807.7. Samples: 1212505. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 06:33:33,230][06167] Avg episode reward: [(0, '3.160'), (1, '3.150')]
+[2023-09-27 06:33:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 4882432. Throughput: 0: 807.8, 1: 806.5. Samples: 1217474. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 06:33:38,230][06167] Avg episode reward: [(0, '3.340'), (1, '3.240')]
+[2023-09-27 06:33:42,200][07176] Updated weights for policy 1, policy_version 9600 (0.0018)
+[2023-09-27 06:33:42,200][07175] Updated weights for policy 0, policy_version 9600 (0.0016)
+[2023-09-27 06:33:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 4915200. Throughput: 0: 807.5, 1: 808.8. Samples: 1227487. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 06:33:43,229][06167] Avg episode reward: [(0, '3.300'), (1, '3.130')]
+[2023-09-27 06:33:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 4947968. Throughput: 0: 814.0, 1: 813.0. Samples: 1237372. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:33:48,230][06167] Avg episode reward: [(0, '3.230'), (1, '3.030')]
+[2023-09-27 06:33:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 4980736. Throughput: 0: 815.6, 1: 816.2. Samples: 1242565. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:33:53,230][06167] Avg episode reward: [(0, '3.050'), (1, '3.070')]
+[2023-09-27 06:33:54,659][07176] Updated weights for policy 1, policy_version 9760 (0.0017)
+[2023-09-27 06:33:54,660][07175] Updated weights for policy 0, policy_version 9760 (0.0018)
+[2023-09-27 06:33:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5013504. Throughput: 0: 815.2, 1: 815.3. Samples: 1252134. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:33:58,230][06167] Avg episode reward: [(0, '2.950'), (1, '3.080')]
+[2023-09-27 06:34:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5046272. Throughput: 0: 821.7, 1: 822.0. Samples: 1261933. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:34:03,229][06167] Avg episode reward: [(0, '2.990'), (1, '3.040')]
+[2023-09-27 06:34:07,151][07175] Updated weights for policy 0, policy_version 9920 (0.0017)
+[2023-09-27 06:34:07,152][07176] Updated weights for policy 1, policy_version 9920 (0.0012)
+[2023-09-27 06:34:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 5079040. Throughput: 0: 823.7, 1: 824.4. Samples: 1267061. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:34:08,230][06167] Avg episode reward: [(0, '2.970'), (1, '3.120')]
+[2023-09-27 06:34:13,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5111808. Throughput: 0: 826.2, 1: 825.8. Samples: 1276941. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:13,230][06167] Avg episode reward: [(0, '3.200'), (1, '3.390')]
+[2023-09-27 06:34:18,229][06167] Fps is (10 sec: 6963.2, 60 sec: 6621.8, 300 sec: 6539.7). Total num frames: 5148672. Throughput: 0: 825.7, 1: 825.5. Samples: 1286810. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:18,230][06167] Avg episode reward: [(0, '3.120'), (1, '3.200')]
+[2023-09-27 06:34:19,468][07176] Updated weights for policy 1, policy_version 10080 (0.0016)
+[2023-09-27 06:34:19,469][07175] Updated weights for policy 0, policy_version 10080 (0.0018)
+[2023-09-27 06:34:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5177344. Throughput: 0: 826.6, 1: 827.8. Samples: 1291923. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:23,230][06167] Avg episode reward: [(0, '3.140'), (1, '3.160')]
+[2023-09-27 06:34:28,229][06167] Fps is (10 sec: 6144.0, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5210112. Throughput: 0: 821.2, 1: 821.2. Samples: 1301392. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:28,230][06167] Avg episode reward: [(0, '3.370'), (1, '3.080')]
+[2023-09-27 06:34:32,017][07175] Updated weights for policy 0, policy_version 10240 (0.0017)
+[2023-09-27 06:34:32,018][07176] Updated weights for policy 1, policy_version 10240 (0.0018)
+[2023-09-27 06:34:33,229][06167] Fps is (10 sec: 6963.2, 60 sec: 6621.9, 300 sec: 6539.7). Total num frames: 5246976. Throughput: 0: 822.0, 1: 822.0. Samples: 1311353. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:33,230][06167] Avg episode reward: [(0, '3.380'), (1, '3.000')]
+[2023-09-27 06:34:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5275648. Throughput: 0: 821.5, 1: 821.6. Samples: 1316506. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:38,230][06167] Avg episode reward: [(0, '3.450'), (1, '3.090')]
+[2023-09-27 06:34:43,229][06167] Fps is (10 sec: 6144.1, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5308416. Throughput: 0: 821.2, 1: 821.2. Samples: 1326044. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:43,229][06167] Avg episode reward: [(0, '3.490'), (1, '3.250')]
+[2023-09-27 06:34:44,565][07176] Updated weights for policy 1, policy_version 10400 (0.0019)
+[2023-09-27 06:34:44,565][07175] Updated weights for policy 0, policy_version 10400 (0.0015)
+[2023-09-27 06:34:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5341184. Throughput: 0: 822.4, 1: 822.2. Samples: 1335944. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:48,230][06167] Avg episode reward: [(0, '3.370'), (1, '3.060')]
+[2023-09-27 06:34:53,229][06167] Fps is (10 sec: 6963.1, 60 sec: 6621.9, 300 sec: 6539.7). Total num frames: 5378048. Throughput: 0: 823.6, 1: 823.0. Samples: 1341158. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:53,230][06167] Avg episode reward: [(0, '3.370'), (1, '3.140')]
+[2023-09-27 06:34:57,120][07175] Updated weights for policy 0, policy_version 10560 (0.0017)
+[2023-09-27 06:34:57,120][07176] Updated weights for policy 1, policy_version 10560 (0.0016)
+[2023-09-27 06:34:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5406720. Throughput: 0: 818.8, 1: 818.7. Samples: 1350626. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:34:58,230][06167] Avg episode reward: [(0, '3.590'), (1, '3.010')]
+[2023-09-27 06:34:58,329][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000010576_2707456.pth...
+[2023-09-27 06:34:58,352][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000010576_2707456.pth...
+[2023-09-27 06:34:58,356][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000007504_1921024.pth
+[2023-09-27 06:34:58,381][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000007504_1921024.pth
+[2023-09-27 06:35:03,229][06167] Fps is (10 sec: 6143.9, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5439488. Throughput: 0: 816.8, 1: 816.8. Samples: 1360322. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:35:03,230][06167] Avg episode reward: [(0, '3.730'), (1, '3.000')]
+[2023-09-27 06:35:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5472256. Throughput: 0: 816.9, 1: 816.6. Samples: 1365429. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:35:08,230][06167] Avg episode reward: [(0, '3.640'), (1, '2.770')]
+[2023-09-27 06:35:09,557][07176] Updated weights for policy 1, policy_version 10720 (0.0016)
+[2023-09-27 06:35:09,558][07175] Updated weights for policy 0, policy_version 10720 (0.0016)
+[2023-09-27 06:35:13,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5505024. Throughput: 0: 821.0, 1: 820.6. Samples: 1375264. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:35:13,230][06167] Avg episode reward: [(0, '3.820'), (1, '2.950')]
+[2023-09-27 06:35:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6485.3, 300 sec: 6525.8). Total num frames: 5537792. Throughput: 0: 818.4, 1: 818.9. Samples: 1385034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:35:18,229][06167] Avg episode reward: [(0, '3.920'), (1, '3.190')]
+[2023-09-27 06:35:22,101][07175] Updated weights for policy 0, policy_version 10880 (0.0015)
+[2023-09-27 06:35:22,103][07176] Updated weights for policy 1, policy_version 10880 (0.0020)
+[2023-09-27 06:35:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5570560. Throughput: 0: 817.5, 1: 816.0. Samples: 1390011. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-27 06:35:23,231][06167] Avg episode reward: [(0, '3.990'), (1, '3.280')]
+[2023-09-27 06:35:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5603328. Throughput: 0: 819.2, 1: 819.1. Samples: 1399770. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0)
+[2023-09-27 06:35:28,230][06167] Avg episode reward: [(0, '3.890'), (1, '3.450')]
+[2023-09-27 06:35:33,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6485.3, 300 sec: 6525.8). Total num frames: 5636096. Throughput: 0: 814.0, 1: 813.7. Samples: 1409193. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-27 06:35:33,229][06167] Avg episode reward: [(0, '3.510'), (1, '3.360')]
+[2023-09-27 06:35:34,732][07175] Updated weights for policy 0, policy_version 11040 (0.0019)
+[2023-09-27 06:35:34,732][07176] Updated weights for policy 1, policy_version 11040 (0.0017)
+[2023-09-27 06:35:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5668864. Throughput: 0: 812.2, 1: 812.1. Samples: 1414250. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-27 06:35:38,230][06167] Avg episode reward: [(0, '3.320'), (1, '3.380')]
+[2023-09-27 06:35:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5701632. Throughput: 0: 815.3, 1: 814.8. Samples: 1423982. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 06:35:43,230][06167] Avg episode reward: [(0, '2.970'), (1, '3.350')]
+[2023-09-27 06:35:47,258][07175] Updated weights for policy 0, policy_version 11200 (0.0015)
+[2023-09-27 06:35:47,258][07176] Updated weights for policy 1, policy_version 11200 (0.0016)
+[2023-09-27 06:35:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5734400. Throughput: 0: 815.8, 1: 815.7. Samples: 1433736. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 06:35:48,230][06167] Avg episode reward: [(0, '3.230'), (1, '3.410')]
+[2023-09-27 06:35:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6485.3, 300 sec: 6525.8). Total num frames: 5767168. Throughput: 0: 817.8, 1: 816.6. Samples: 1438974. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0)
+[2023-09-27 06:35:53,230][06167] Avg episode reward: [(0, '3.460'), (1, '3.310')]
+[2023-09-27 06:35:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5799936. Throughput: 0: 817.1, 1: 817.9. Samples: 1448840. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 06:35:58,230][06167] Avg episode reward: [(0, '3.200'), (1, '3.160')]
+[2023-09-27 06:35:59,586][07176] Updated weights for policy 1, policy_version 11360 (0.0017)
+[2023-09-27 06:35:59,587][07175] Updated weights for policy 0, policy_version 11360 (0.0017)
+[2023-09-27 06:36:03,229][06167] Fps is (10 sec: 6963.3, 60 sec: 6621.9, 300 sec: 6539.7). Total num frames: 5836800. Throughput: 0: 819.9, 1: 819.3. Samples: 1458795. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 06:36:03,230][06167] Avg episode reward: [(0, '3.150'), (1, '3.100')]
+[2023-09-27 06:36:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5865472. Throughput: 0: 820.6, 1: 820.2. Samples: 1463851. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:36:08,230][06167] Avg episode reward: [(0, '3.390'), (1, '3.210')]
+[2023-09-27 06:36:12,192][07175] Updated weights for policy 0, policy_version 11520 (0.0017)
+[2023-09-27 06:36:12,192][07176] Updated weights for policy 1, policy_version 11520 (0.0017)
+[2023-09-27 06:36:13,229][06167] Fps is (10 sec: 6143.9, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5898240. Throughput: 0: 818.0, 1: 818.5. Samples: 1473410. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:36:13,230][06167] Avg episode reward: [(0, '3.500'), (1, '3.210')]
+[2023-09-27 06:36:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5931008. Throughput: 0: 820.2, 1: 820.2. Samples: 1483014. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:36:18,230][06167] Avg episode reward: [(0, '3.610'), (1, '3.240')]
+[2023-09-27 06:36:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5963776. Throughput: 0: 821.6, 1: 821.2. Samples: 1488179. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:36:23,229][06167] Avg episode reward: [(0, '3.740'), (1, '3.270')]
+[2023-09-27 06:36:24,627][07175] Updated weights for policy 0, policy_version 11680 (0.0018)
+[2023-09-27 06:36:24,627][07176] Updated weights for policy 1, policy_version 11680 (0.0016)
+[2023-09-27 06:36:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 5996544. Throughput: 0: 821.8, 1: 823.6. Samples: 1498022. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:36:28,230][06167] Avg episode reward: [(0, '3.780'), (1, '3.060')]
+[2023-09-27 06:36:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6029312. Throughput: 0: 821.5, 1: 822.0. Samples: 1507696. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:36:33,229][06167] Avg episode reward: [(0, '3.790'), (1, '3.090')]
+[2023-09-27 06:36:37,115][07176] Updated weights for policy 1, policy_version 11840 (0.0015)
+[2023-09-27 06:36:37,117][07175] Updated weights for policy 0, policy_version 11840 (0.0020)
+[2023-09-27 06:36:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6062080. Throughput: 0: 820.1, 1: 820.2. Samples: 1512787. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:36:38,230][06167] Avg episode reward: [(0, '3.790'), (1, '2.990')]
+[2023-09-27 06:36:43,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6094848. Throughput: 0: 819.3, 1: 819.2. Samples: 1522572. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:36:43,230][06167] Avg episode reward: [(0, '3.970'), (1, '2.890')]
+[2023-09-27 06:36:48,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6127616. Throughput: 0: 818.8, 1: 818.5. Samples: 1532474. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-27 06:36:48,229][06167] Avg episode reward: [(0, '3.930'), (1, '3.210')]
+[2023-09-27 06:36:49,522][07176] Updated weights for policy 1, policy_version 12000 (0.0016)
+[2023-09-27 06:36:49,523][07175] Updated weights for policy 0, policy_version 12000 (0.0016)
+[2023-09-27 06:36:53,234][06167] Fps is (10 sec: 6959.7, 60 sec: 6621.3, 300 sec: 6539.6). Total num frames: 6164480. Throughput: 0: 819.2, 1: 820.5. Samples: 1537645. Policy #0 lag: (min: 1.0, avg: 1.0, max: 1.0)
+[2023-09-27 06:36:53,235][06167] Avg episode reward: [(0, '3.870'), (1, '3.200')]
+[2023-09-27 06:36:58,229][06167] Fps is (10 sec: 6963.0, 60 sec: 6621.9, 300 sec: 6539.7). Total num frames: 6197248. Throughput: 0: 822.4, 1: 822.3. Samples: 1547423. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:36:58,230][06167] Avg episode reward: [(0, '3.780'), (1, '3.050')]
+[2023-09-27 06:36:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000012112_3100672.pth...
+[2023-09-27 06:36:58,270][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000012112_3100672.pth...
+[2023-09-27 06:36:58,272][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000009040_2314240.pth
+[2023-09-27 06:36:58,306][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000009040_2314240.pth
+[2023-09-27 06:37:01,987][07176] Updated weights for policy 1, policy_version 12160 (0.0014)
+[2023-09-27 06:37:01,987][07175] Updated weights for policy 0, policy_version 12160 (0.0018)
+[2023-09-27 06:37:03,237][06167] Fps is (10 sec: 6961.5, 60 sec: 6621.0, 300 sec: 6553.4). Total num frames: 6234112. Throughput: 0: 823.9, 1: 823.8. Samples: 1557174. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:03,238][06167] Avg episode reward: [(0, '3.680'), (1, '3.200')]
+[2023-09-27 06:37:08,229][06167] Fps is (10 sec: 6144.0, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6258688. Throughput: 0: 822.5, 1: 823.3. Samples: 1562240. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:08,230][06167] Avg episode reward: [(0, '3.460'), (1, '3.220')]
+[2023-09-27 06:37:13,229][06167] Fps is (10 sec: 5738.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6291456. Throughput: 0: 821.5, 1: 820.3. Samples: 1571903. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:13,230][06167] Avg episode reward: [(0, '3.390'), (1, '3.300')]
+[2023-09-27 06:37:14,579][07175] Updated weights for policy 0, policy_version 12320 (0.0017)
+[2023-09-27 06:37:14,579][07176] Updated weights for policy 1, policy_version 12320 (0.0018)
+[2023-09-27 06:37:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6324224. Throughput: 0: 821.7, 1: 821.4. Samples: 1581635. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:18,230][06167] Avg episode reward: [(0, '3.430'), (1, '3.610')]
+[2023-09-27 06:37:18,277][07019] Saving new best policy, reward=3.610!
+[2023-09-27 06:37:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6356992. Throughput: 0: 822.4, 1: 822.3. Samples: 1586802. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:23,230][06167] Avg episode reward: [(0, '3.360'), (1, '3.650')]
+[2023-09-27 06:37:23,247][07019] Saving new best policy, reward=3.650!
+[2023-09-27 06:37:26,983][07175] Updated weights for policy 0, policy_version 12480 (0.0018)
+[2023-09-27 06:37:26,983][07176] Updated weights for policy 1, policy_version 12480 (0.0017)
+[2023-09-27 06:37:28,229][06167] Fps is (10 sec: 7372.8, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 6397952. Throughput: 0: 823.0, 1: 822.3. Samples: 1596610. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:37:28,230][06167] Avg episode reward: [(0, '3.590'), (1, '3.800')]
+[2023-09-27 06:37:28,243][07019] Saving new best policy, reward=3.800!
+[2023-09-27 06:37:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6422528. Throughput: 0: 820.4, 1: 820.7. Samples: 1606321. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:37:33,230][06167] Avg episode reward: [(0, '3.430'), (1, '3.710')]
+[2023-09-27 06:37:38,229][06167] Fps is (10 sec: 5734.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6455296. Throughput: 0: 819.3, 1: 819.2. Samples: 1611370. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:38,230][06167] Avg episode reward: [(0, '3.410'), (1, '3.540')]
+[2023-09-27 06:37:39,504][07175] Updated weights for policy 0, policy_version 12640 (0.0016)
+[2023-09-27 06:37:39,505][07176] Updated weights for policy 1, policy_version 12640 (0.0014)
+[2023-09-27 06:37:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 6488064. Throughput: 0: 819.6, 1: 819.3. Samples: 1621172. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:43,230][06167] Avg episode reward: [(0, '3.350'), (1, '3.540')]
+[2023-09-27 06:37:48,229][06167] Fps is (10 sec: 7372.9, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 6529024. Throughput: 0: 820.2, 1: 820.0. Samples: 1630969. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:48,229][06167] Avg episode reward: [(0, '3.340'), (1, '3.260')]
+[2023-09-27 06:37:51,957][07175] Updated weights for policy 0, policy_version 12800 (0.0017)
+[2023-09-27 06:37:51,957][07176] Updated weights for policy 1, policy_version 12800 (0.0016)
+[2023-09-27 06:37:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6485.9, 300 sec: 6525.8). Total num frames: 6553600. Throughput: 0: 820.1, 1: 819.8. Samples: 1636037. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:53,230][06167] Avg episode reward: [(0, '3.480'), (1, '3.110')]
+[2023-09-27 06:37:58,229][06167] Fps is (10 sec: 5734.3, 60 sec: 6485.3, 300 sec: 6525.8). Total num frames: 6586368. Throughput: 0: 818.6, 1: 818.6. Samples: 1645574. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:37:58,230][06167] Avg episode reward: [(0, '3.430'), (1, '3.360')]
+[2023-09-27 06:38:03,229][06167] Fps is (10 sec: 7372.8, 60 sec: 6554.4, 300 sec: 6553.6). Total num frames: 6627328. Throughput: 0: 821.2, 1: 821.0. Samples: 1655533. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:38:03,230][06167] Avg episode reward: [(0, '3.340'), (1, '3.470')]
+[2023-09-27 06:38:04,432][07176] Updated weights for policy 1, policy_version 12960 (0.0018)
+[2023-09-27 06:38:04,432][07175] Updated weights for policy 0, policy_version 12960 (0.0017)
+[2023-09-27 06:38:08,229][06167] Fps is (10 sec: 7372.8, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 6660096. Throughput: 0: 821.1, 1: 821.5. Samples: 1660720. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 06:38:08,230][06167] Avg episode reward: [(0, '3.590'), (1, '3.790')]
+[2023-09-27 06:38:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 6692864. Throughput: 0: 819.5, 1: 820.6. Samples: 1670416. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 06:38:13,230][06167] Avg episode reward: [(0, '3.490'), (1, '4.090')]
+[2023-09-27 06:38:13,239][07019] Saving new best policy, reward=4.090!
+[2023-09-27 06:38:16,862][07175] Updated weights for policy 0, policy_version 13120 (0.0017)
+[2023-09-27 06:38:16,863][07176] Updated weights for policy 1, policy_version 13120 (0.0018)
+[2023-09-27 06:38:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 6725632. Throughput: 0: 822.0, 1: 822.0. Samples: 1680305. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 06:38:18,230][06167] Avg episode reward: [(0, '3.620'), (1, '4.320')]
+[2023-09-27 06:38:18,231][07019] Saving new best policy, reward=4.320!
+[2023-09-27 06:38:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 6758400. Throughput: 0: 822.9, 1: 823.5. Samples: 1685461. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 06:38:23,229][06167] Avg episode reward: [(0, '3.720'), (1, '4.230')]
+[2023-09-27 06:38:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 6791168. Throughput: 0: 823.7, 1: 823.3. Samples: 1695290. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:38:28,230][06167] Avg episode reward: [(0, '3.480'), (1, '4.190')]
+[2023-09-27 06:38:29,276][07175] Updated weights for policy 0, policy_version 13280 (0.0017)
+[2023-09-27 06:38:29,277][07176] Updated weights for policy 1, policy_version 13280 (0.0018)
+[2023-09-27 06:38:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 6823936. Throughput: 0: 823.5, 1: 823.9. Samples: 1705101. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:38:33,230][06167] Avg episode reward: [(0, '3.660'), (1, '3.840')]
+[2023-09-27 06:38:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 6856704. Throughput: 0: 822.0, 1: 823.4. Samples: 1710080. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:38:38,230][06167] Avg episode reward: [(0, '3.800'), (1, '3.930')]
+[2023-09-27 06:38:41,825][07175] Updated weights for policy 0, policy_version 13440 (0.0014)
+[2023-09-27 06:38:41,826][07176] Updated weights for policy 1, policy_version 13440 (0.0015)
+[2023-09-27 06:38:43,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 6889472. Throughput: 0: 824.7, 1: 824.2. Samples: 1719776. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:38:43,229][06167] Avg episode reward: [(0, '3.850'), (1, '3.870')]
+[2023-09-27 06:38:48,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 6922240. Throughput: 0: 824.2, 1: 823.3. Samples: 1729671. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:38:48,230][06167] Avg episode reward: [(0, '3.790'), (1, '3.970')]
+[2023-09-27 06:38:53,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 6955008. Throughput: 0: 820.9, 1: 821.1. Samples: 1734613. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:38:53,229][06167] Avg episode reward: [(0, '3.700'), (1, '3.900')]
+[2023-09-27 06:38:54,245][07176] Updated weights for policy 1, policy_version 13600 (0.0016)
+[2023-09-27 06:38:54,245][07175] Updated weights for policy 0, policy_version 13600 (0.0016)
+[2023-09-27 06:38:58,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 6987776. Throughput: 0: 824.4, 1: 823.5. Samples: 1744572. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:38:58,229][06167] Avg episode reward: [(0, '3.320'), (1, '3.790')]
+[2023-09-27 06:38:58,238][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000013648_3493888.pth...
+[2023-09-27 06:38:58,239][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000013648_3493888.pth...
+[2023-09-27 06:38:58,268][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000010576_2707456.pth
+[2023-09-27 06:38:58,273][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000010576_2707456.pth
+[2023-09-27 06:39:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 7020544. Throughput: 0: 821.3, 1: 821.4. Samples: 1754224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:39:03,230][06167] Avg episode reward: [(0, '3.210'), (1, '3.550')]
+[2023-09-27 06:39:06,711][07176] Updated weights for policy 1, policy_version 13760 (0.0014)
+[2023-09-27 06:39:06,713][07175] Updated weights for policy 0, policy_version 13760 (0.0015)
+[2023-09-27 06:39:08,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 7053312. Throughput: 0: 819.2, 1: 820.2. Samples: 1759232. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:39:08,230][06167] Avg episode reward: [(0, '3.290'), (1, '3.780')]
+[2023-09-27 06:39:13,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6567.5). Total num frames: 7086080. Throughput: 0: 821.3, 1: 822.1. Samples: 1769244. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:39:13,229][06167] Avg episode reward: [(0, '3.460'), (1, '3.730')]
+[2023-09-27 06:39:18,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 7118848. Throughput: 0: 818.5, 1: 819.0. Samples: 1778789. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:39:18,230][06167] Avg episode reward: [(0, '3.570'), (1, '3.500')]
+[2023-09-27 06:39:19,318][07175] Updated weights for policy 0, policy_version 13920 (0.0016)
+[2023-09-27 06:39:19,318][07176] Updated weights for policy 1, policy_version 13920 (0.0018)
+[2023-09-27 06:39:23,229][06167] Fps is (10 sec: 6143.9, 60 sec: 6485.3, 300 sec: 6567.5). Total num frames: 7147520. Throughput: 0: 818.5, 1: 818.9. Samples: 1783764. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:39:23,230][06167] Avg episode reward: [(0, '3.700'), (1, '3.650')]
+[2023-09-27 06:39:28,229][06167] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6539.7). Total num frames: 7176192. Throughput: 0: 813.0, 1: 813.4. Samples: 1792963. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:39:28,230][06167] Avg episode reward: [(0, '3.660'), (1, '3.550')]
+[2023-09-27 06:39:32,076][07176] Updated weights for policy 1, policy_version 14080 (0.0015)
+[2023-09-27 06:39:32,077][07175] Updated weights for policy 0, policy_version 14080 (0.0016)
+[2023-09-27 06:39:33,229][06167] Fps is (10 sec: 6144.0, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 7208960. Throughput: 0: 811.3, 1: 812.6. Samples: 1802745. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:39:33,230][06167] Avg episode reward: [(0, '3.730'), (1, '3.360')]
+[2023-09-27 06:39:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 7241728. Throughput: 0: 812.8, 1: 813.3. Samples: 1807785.
Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 06:39:38,230][06167] Avg episode reward: [(0, '3.720'), (1, '3.550')] +[2023-09-27 06:39:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6553.6). Total num frames: 7274496. Throughput: 0: 808.2, 1: 808.2. Samples: 1817312. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 06:39:43,230][06167] Avg episode reward: [(0, '3.660'), (1, '3.550')] +[2023-09-27 06:39:44,750][07175] Updated weights for policy 0, policy_version 14240 (0.0019) +[2023-09-27 06:39:44,750][07176] Updated weights for policy 1, policy_version 14240 (0.0017) +[2023-09-27 06:39:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6539.7). Total num frames: 7307264. Throughput: 0: 809.4, 1: 809.4. Samples: 1827067. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 06:39:48,230][06167] Avg episode reward: [(0, '4.390'), (1, '3.790')] +[2023-09-27 06:39:48,418][06938] Saving new best policy, reward=4.390! +[2023-09-27 06:39:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 7340032. Throughput: 0: 812.6, 1: 810.7. Samples: 1832284. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 06:39:53,230][06167] Avg episode reward: [(0, '4.160'), (1, '3.880')] +[2023-09-27 06:39:57,061][07175] Updated weights for policy 0, policy_version 14400 (0.0016) +[2023-09-27 06:39:57,061][07176] Updated weights for policy 1, policy_version 14400 (0.0018) +[2023-09-27 06:39:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6553.6). Total num frames: 7372800. Throughput: 0: 810.5, 1: 809.9. Samples: 1842160. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:39:58,230][06167] Avg episode reward: [(0, '4.410'), (1, '3.850')] +[2023-09-27 06:39:58,299][06938] Saving new best policy, reward=4.410! +[2023-09-27 06:40:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 7405568. 
Throughput: 0: 814.6, 1: 813.6. Samples: 1852057. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:03,229][06167] Avg episode reward: [(0, '4.330'), (1, '3.920')] +[2023-09-27 06:40:08,229][06167] Fps is (10 sec: 7373.0, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 7446528. Throughput: 0: 816.3, 1: 814.6. Samples: 1857154. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:08,229][06167] Avg episode reward: [(0, '4.420'), (1, '3.910')] +[2023-09-27 06:40:08,230][06938] Saving new best policy, reward=4.420! +[2023-09-27 06:40:09,436][07176] Updated weights for policy 1, policy_version 14560 (0.0013) +[2023-09-27 06:40:09,436][07175] Updated weights for policy 0, policy_version 14560 (0.0017) +[2023-09-27 06:40:13,229][06167] Fps is (10 sec: 7372.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 7479296. Throughput: 0: 823.6, 1: 822.0. Samples: 1867017. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:13,230][06167] Avg episode reward: [(0, '4.150'), (1, '3.780')] +[2023-09-27 06:40:18,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 7512064. Throughput: 0: 822.7, 1: 822.2. Samples: 1876764. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:18,230][06167] Avg episode reward: [(0, '3.860'), (1, '4.230')] +[2023-09-27 06:40:21,997][07175] Updated weights for policy 0, policy_version 14720 (0.0016) +[2023-09-27 06:40:21,997][07176] Updated weights for policy 1, policy_version 14720 (0.0016) +[2023-09-27 06:40:23,229][06167] Fps is (10 sec: 6144.0, 60 sec: 6553.6, 300 sec: 6567.5). Total num frames: 7540736. Throughput: 0: 821.5, 1: 820.6. Samples: 1881679. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:23,230][06167] Avg episode reward: [(0, '3.920'), (1, '4.270')] +[2023-09-27 06:40:28,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 7577600. Throughput: 0: 824.6, 1: 824.5. 
Samples: 1891521. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:28,230][06167] Avg episode reward: [(0, '3.970'), (1, '3.990')] +[2023-09-27 06:40:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6621.9, 300 sec: 6567.5). Total num frames: 7606272. Throughput: 0: 825.6, 1: 825.2. Samples: 1901353. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:33,230][06167] Avg episode reward: [(0, '3.620'), (1, '3.880')] +[2023-09-27 06:40:34,470][07176] Updated weights for policy 1, policy_version 14880 (0.0017) +[2023-09-27 06:40:34,471][07175] Updated weights for policy 0, policy_version 14880 (0.0017) +[2023-09-27 06:40:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 7643136. Throughput: 0: 823.3, 1: 823.4. Samples: 1906382. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:38,230][06167] Avg episode reward: [(0, '3.940'), (1, '3.550')] +[2023-09-27 06:40:43,229][06167] Fps is (10 sec: 6963.3, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 7675904. Throughput: 0: 823.4, 1: 823.4. Samples: 1916265. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:43,229][06167] Avg episode reward: [(0, '4.130'), (1, '3.520')] +[2023-09-27 06:40:46,842][07175] Updated weights for policy 0, policy_version 15040 (0.0017) +[2023-09-27 06:40:46,843][07176] Updated weights for policy 1, policy_version 15040 (0.0018) +[2023-09-27 06:40:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 7708672. Throughput: 0: 822.8, 1: 823.3. Samples: 1926131. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:48,230][06167] Avg episode reward: [(0, '3.980'), (1, '3.430')] +[2023-09-27 06:40:53,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 7741440. Throughput: 0: 822.8, 1: 824.0. Samples: 1931261. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:53,230][06167] Avg episode reward: [(0, '4.250'), (1, '3.480')] +[2023-09-27 06:40:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6567.5). Total num frames: 7774208. Throughput: 0: 821.0, 1: 822.2. Samples: 1940963. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:40:58,230][06167] Avg episode reward: [(0, '4.070'), (1, '3.600')] +[2023-09-27 06:40:58,240][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000015184_3887104.pth... +[2023-09-27 06:40:58,241][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000015184_3887104.pth... +[2023-09-27 06:40:58,277][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000012112_3100672.pth +[2023-09-27 06:40:58,279][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000012112_3100672.pth +[2023-09-27 06:40:59,327][07176] Updated weights for policy 1, policy_version 15200 (0.0015) +[2023-09-27 06:40:59,328][07175] Updated weights for policy 0, policy_version 15200 (0.0016) +[2023-09-27 06:41:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 7806976. Throughput: 0: 819.6, 1: 819.7. Samples: 1950531. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:41:03,230][06167] Avg episode reward: [(0, '3.840'), (1, '3.370')] +[2023-09-27 06:41:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 7839744. Throughput: 0: 823.3, 1: 823.0. Samples: 1955761. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:41:08,229][06167] Avg episode reward: [(0, '3.700'), (1, '3.310')] +[2023-09-27 06:41:11,915][07175] Updated weights for policy 0, policy_version 15360 (0.0017) +[2023-09-27 06:41:11,915][07176] Updated weights for policy 1, policy_version 15360 (0.0017) +[2023-09-27 06:41:13,229][06167] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6567.5). 
Total num frames: 7868416. Throughput: 0: 820.7, 1: 822.0. Samples: 1965443. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:41:13,230][06167] Avg episode reward: [(0, '3.630'), (1, '3.490')] +[2023-09-27 06:41:18,229][06167] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 7897088. Throughput: 0: 815.2, 1: 815.4. Samples: 1974729. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 06:41:18,230][06167] Avg episode reward: [(0, '3.600'), (1, '3.390')] +[2023-09-27 06:41:23,229][06167] Fps is (10 sec: 6144.0, 60 sec: 6485.3, 300 sec: 6553.6). Total num frames: 7929856. Throughput: 0: 816.4, 1: 816.8. Samples: 1979875. Policy #0 lag: (min: 3.0, avg: 3.0, max: 3.0) +[2023-09-27 06:41:23,231][06167] Avg episode reward: [(0, '3.860'), (1, '3.610')] +[2023-09-27 06:41:24,591][07176] Updated weights for policy 1, policy_version 15520 (0.0015) +[2023-09-27 06:41:24,592][07175] Updated weights for policy 0, policy_version 15520 (0.0017) +[2023-09-27 06:41:28,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 7962624. Throughput: 0: 815.3, 1: 815.2. Samples: 1989635. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:41:28,229][06167] Avg episode reward: [(0, '3.680'), (1, '3.700')] +[2023-09-27 06:41:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6485.3, 300 sec: 6553.6). Total num frames: 7995392. Throughput: 0: 813.6, 1: 812.8. Samples: 1999319. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:41:33,230][06167] Avg episode reward: [(0, '3.970'), (1, '3.540')] +[2023-09-27 06:41:37,329][07176] Updated weights for policy 1, policy_version 15680 (0.0016) +[2023-09-27 06:41:37,330][07175] Updated weights for policy 0, policy_version 15680 (0.0018) +[2023-09-27 06:41:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 8028160. Throughput: 0: 810.5, 1: 807.8. Samples: 2004085. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:41:38,230][06167] Avg episode reward: [(0, '4.020'), (1, '3.770')] +[2023-09-27 06:41:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6553.6). Total num frames: 8060928. Throughput: 0: 808.0, 1: 808.0. Samples: 2013684. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:41:43,230][06167] Avg episode reward: [(0, '3.900'), (1, '3.610')] +[2023-09-27 06:41:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6539.8). Total num frames: 8093696. Throughput: 0: 810.3, 1: 810.8. Samples: 2023481. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:41:48,230][06167] Avg episode reward: [(0, '3.950'), (1, '3.520')] +[2023-09-27 06:41:49,785][07175] Updated weights for policy 0, policy_version 15840 (0.0016) +[2023-09-27 06:41:49,785][07176] Updated weights for policy 1, policy_version 15840 (0.0018) +[2023-09-27 06:41:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6539.7). Total num frames: 8126464. Throughput: 0: 809.5, 1: 809.3. Samples: 2028606. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:41:53,230][06167] Avg episode reward: [(0, '3.910'), (1, '3.520')] +[2023-09-27 06:41:58,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6526.0). Total num frames: 8159232. Throughput: 0: 811.2, 1: 809.6. Samples: 2038382. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:41:58,229][06167] Avg episode reward: [(0, '3.960'), (1, '3.470')] +[2023-09-27 06:42:02,214][07175] Updated weights for policy 0, policy_version 16000 (0.0016) +[2023-09-27 06:42:02,215][07176] Updated weights for policy 1, policy_version 16000 (0.0017) +[2023-09-27 06:42:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 8192000. Throughput: 0: 816.6, 1: 816.7. Samples: 2048231. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:42:03,229][06167] Avg episode reward: [(0, '3.790'), (1, '3.590')] +[2023-09-27 06:42:08,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6553.6). Total num frames: 8224768. Throughput: 0: 816.3, 1: 816.7. Samples: 2053360. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:42:08,230][06167] Avg episode reward: [(0, '3.930'), (1, '3.740')] +[2023-09-27 06:42:13,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6485.3, 300 sec: 6553.6). Total num frames: 8257536. Throughput: 0: 817.5, 1: 818.3. Samples: 2063245. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:42:13,230][06167] Avg episode reward: [(0, '4.080'), (1, '3.720')] +[2023-09-27 06:42:14,570][07175] Updated weights for policy 0, policy_version 16160 (0.0016) +[2023-09-27 06:42:14,570][07176] Updated weights for policy 1, policy_version 16160 (0.0017) +[2023-09-27 06:42:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8290304. Throughput: 0: 817.5, 1: 818.3. Samples: 2072930. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:42:18,230][06167] Avg episode reward: [(0, '4.000'), (1, '3.600')] +[2023-09-27 06:42:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 8323072. Throughput: 0: 822.0, 1: 822.9. Samples: 2078104. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:42:23,230][06167] Avg episode reward: [(0, '4.000'), (1, '3.510')] +[2023-09-27 06:42:27,016][07175] Updated weights for policy 0, policy_version 16320 (0.0015) +[2023-09-27 06:42:27,016][07176] Updated weights for policy 1, policy_version 16320 (0.0017) +[2023-09-27 06:42:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8355840. Throughput: 0: 825.5, 1: 825.9. Samples: 2087994. 
Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:42:28,230][06167] Avg episode reward: [(0, '3.690'), (1, '3.720')] +[2023-09-27 06:42:33,229][06167] Fps is (10 sec: 7372.9, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 8396800. Throughput: 0: 826.6, 1: 826.4. Samples: 2097866. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:42:33,230][06167] Avg episode reward: [(0, '3.910'), (1, '3.680')] +[2023-09-27 06:42:38,229][06167] Fps is (10 sec: 7372.9, 60 sec: 6690.1, 300 sec: 6581.4). Total num frames: 8429568. Throughput: 0: 827.0, 1: 827.2. Samples: 2103045. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0) +[2023-09-27 06:42:38,230][06167] Avg episode reward: [(0, '3.910'), (1, '3.780')] +[2023-09-27 06:42:39,461][07175] Updated weights for policy 0, policy_version 16480 (0.0019) +[2023-09-27 06:42:39,461][07176] Updated weights for policy 1, policy_version 16480 (0.0019) +[2023-09-27 06:42:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 8462336. Throughput: 0: 826.2, 1: 826.4. Samples: 2112751. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 06:42:43,230][06167] Avg episode reward: [(0, '3.920'), (1, '3.830')] +[2023-09-27 06:42:48,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 8495104. Throughput: 0: 826.2, 1: 826.0. Samples: 2122579. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 06:42:48,229][06167] Avg episode reward: [(0, '3.990'), (1, '3.790')] +[2023-09-27 06:42:51,885][07176] Updated weights for policy 1, policy_version 16640 (0.0016) +[2023-09-27 06:42:51,885][07175] Updated weights for policy 0, policy_version 16640 (0.0016) +[2023-09-27 06:42:53,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6690.2, 300 sec: 6581.4). Total num frames: 8527872. Throughput: 0: 826.1, 1: 825.3. Samples: 2127670. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:42:53,229][06167] Avg episode reward: [(0, '3.920'), (1, '3.840')] +[2023-09-27 06:42:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 8560640. Throughput: 0: 825.4, 1: 825.6. Samples: 2137537. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:42:58,230][06167] Avg episode reward: [(0, '3.610'), (1, '3.960')] +[2023-09-27 06:42:58,240][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000016720_4280320.pth... +[2023-09-27 06:42:58,241][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000016720_4280320.pth... +[2023-09-27 06:42:58,276][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000013648_3493888.pth +[2023-09-27 06:42:58,277][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000013648_3493888.pth +[2023-09-27 06:43:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 8593408. Throughput: 0: 826.2, 1: 825.8. Samples: 2147267. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:43:03,230][06167] Avg episode reward: [(0, '3.580'), (1, '3.960')] +[2023-09-27 06:43:04,351][07175] Updated weights for policy 0, policy_version 16800 (0.0018) +[2023-09-27 06:43:04,351][07176] Updated weights for policy 1, policy_version 16800 (0.0019) +[2023-09-27 06:43:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.2, 300 sec: 6553.6). Total num frames: 8626176. Throughput: 0: 825.1, 1: 825.6. Samples: 2152387. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 06:43:08,230][06167] Avg episode reward: [(0, '3.760'), (1, '3.700')] +[2023-09-27 06:43:13,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6690.2, 300 sec: 6553.6). Total num frames: 8658944. Throughput: 0: 825.4, 1: 826.2. Samples: 2162316. 
Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0) +[2023-09-27 06:43:13,229][06167] Avg episode reward: [(0, '3.780'), (1, '3.710')] +[2023-09-27 06:43:16,695][07175] Updated weights for policy 0, policy_version 16960 (0.0015) +[2023-09-27 06:43:16,695][07176] Updated weights for policy 1, policy_version 16960 (0.0017) +[2023-09-27 06:43:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.2, 300 sec: 6553.6). Total num frames: 8691712. Throughput: 0: 825.5, 1: 825.4. Samples: 2172158. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:43:18,230][06167] Avg episode reward: [(0, '4.040'), (1, '3.990')] +[2023-09-27 06:43:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 8724480. Throughput: 0: 821.2, 1: 822.8. Samples: 2177028. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:43:23,230][06167] Avg episode reward: [(0, '4.140'), (1, '4.060')] +[2023-09-27 06:43:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.2, 300 sec: 6553.6). Total num frames: 8757248. Throughput: 0: 827.0, 1: 827.5. Samples: 2187207. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:43:28,229][06167] Avg episode reward: [(0, '3.930'), (1, '3.950')] +[2023-09-27 06:43:29,088][07175] Updated weights for policy 0, policy_version 17120 (0.0017) +[2023-09-27 06:43:29,088][07176] Updated weights for policy 1, policy_version 17120 (0.0016) +[2023-09-27 06:43:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8790016. Throughput: 0: 827.1, 1: 827.3. Samples: 2197024. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:43:33,229][06167] Avg episode reward: [(0, '3.450'), (1, '4.110')] +[2023-09-27 06:43:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8822784. Throughput: 0: 823.1, 1: 823.5. Samples: 2201767. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:43:38,230][06167] Avg episode reward: [(0, '3.400'), (1, '4.160')] +[2023-09-27 06:43:41,623][07176] Updated weights for policy 1, policy_version 17280 (0.0016) +[2023-09-27 06:43:41,624][07175] Updated weights for policy 0, policy_version 17280 (0.0017) +[2023-09-27 06:43:43,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8855552. Throughput: 0: 825.0, 1: 824.0. Samples: 2211745. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:43:43,230][06167] Avg episode reward: [(0, '3.540'), (1, '4.050')] +[2023-09-27 06:43:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8888320. Throughput: 0: 823.9, 1: 823.7. Samples: 2221407. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:43:48,230][06167] Avg episode reward: [(0, '3.650'), (1, '4.140')] +[2023-09-27 06:43:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8921088. Throughput: 0: 819.3, 1: 820.6. Samples: 2226180. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:43:53,229][06167] Avg episode reward: [(0, '3.670'), (1, '4.150')] +[2023-09-27 06:43:54,218][07175] Updated weights for policy 0, policy_version 17440 (0.0016) +[2023-09-27 06:43:54,218][07176] Updated weights for policy 1, policy_version 17440 (0.0017) +[2023-09-27 06:43:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8953856. Throughput: 0: 819.0, 1: 819.0. Samples: 2236029. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:43:58,230][06167] Avg episode reward: [(0, '3.620'), (1, '4.050')] +[2023-09-27 06:44:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 8986624. Throughput: 0: 817.7, 1: 816.9. Samples: 2245714. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:44:03,230][06167] Avg episode reward: [(0, '3.670'), (1, '4.380')] +[2023-09-27 06:44:03,231][07019] Saving new best policy, reward=4.380! +[2023-09-27 06:44:06,800][07175] Updated weights for policy 0, policy_version 17600 (0.0017) +[2023-09-27 06:44:06,800][07176] Updated weights for policy 1, policy_version 17600 (0.0017) +[2023-09-27 06:44:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9019392. Throughput: 0: 819.1, 1: 819.0. Samples: 2250742. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:44:08,230][06167] Avg episode reward: [(0, '3.860'), (1, '4.260')] +[2023-09-27 06:44:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9052160. Throughput: 0: 814.4, 1: 813.8. Samples: 2260476. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:44:13,230][06167] Avg episode reward: [(0, '3.850'), (1, '4.560')] +[2023-09-27 06:44:13,242][07019] Saving new best policy, reward=4.560! +[2023-09-27 06:44:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6567.5). Total num frames: 9084928. Throughput: 0: 813.4, 1: 813.2. Samples: 2270220. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:44:18,230][06167] Avg episode reward: [(0, '4.210'), (1, '4.420')] +[2023-09-27 06:44:19,284][07176] Updated weights for policy 1, policy_version 17760 (0.0017) +[2023-09-27 06:44:19,284][07175] Updated weights for policy 0, policy_version 17760 (0.0018) +[2023-09-27 06:44:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9117696. Throughput: 0: 816.6, 1: 818.1. Samples: 2275328. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 06:44:23,230][06167] Avg episode reward: [(0, '4.260'), (1, '4.260')] +[2023-09-27 06:44:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9150464. 
Throughput: 0: 815.8, 1: 815.4. Samples: 2285151. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 06:44:28,230][06167] Avg episode reward: [(0, '4.230'), (1, '3.850')] +[2023-09-27 06:44:31,757][07175] Updated weights for policy 0, policy_version 17920 (0.0016) +[2023-09-27 06:44:31,757][07176] Updated weights for policy 1, policy_version 17920 (0.0016) +[2023-09-27 06:44:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9183232. Throughput: 0: 816.2, 1: 816.9. Samples: 2294900. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:44:33,230][06167] Avg episode reward: [(0, '4.550'), (1, '3.990')] +[2023-09-27 06:44:33,231][06938] Saving new best policy, reward=4.550! +[2023-09-27 06:44:38,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9216000. Throughput: 0: 819.1, 1: 819.1. Samples: 2299898. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:44:38,229][06167] Avg episode reward: [(0, '4.670'), (1, '3.960')] +[2023-09-27 06:44:38,230][06938] Saving new best policy, reward=4.670! +[2023-09-27 06:44:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9248768. Throughput: 0: 818.6, 1: 817.8. Samples: 2309667. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:44:43,229][06167] Avg episode reward: [(0, '4.520'), (1, '4.140')] +[2023-09-27 06:44:44,274][07176] Updated weights for policy 1, policy_version 18080 (0.0017) +[2023-09-27 06:44:44,274][07175] Updated weights for policy 0, policy_version 18080 (0.0019) +[2023-09-27 06:44:48,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9281536. Throughput: 0: 819.8, 1: 820.3. Samples: 2319521. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:44:48,230][06167] Avg episode reward: [(0, '4.520'), (1, '4.230')] +[2023-09-27 06:44:53,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). 
Total num frames: 9314304. Throughput: 0: 819.2, 1: 818.6. Samples: 2324441. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:44:53,230][06167] Avg episode reward: [(0, '4.460'), (1, '4.140')]
+[2023-09-27 06:44:56,690][07175] Updated weights for policy 0, policy_version 18240 (0.0016)
+[2023-09-27 06:44:56,690][07176] Updated weights for policy 1, policy_version 18240 (0.0018)
+[2023-09-27 06:44:58,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9347072. Throughput: 0: 822.0, 1: 822.1. Samples: 2334460. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:44:58,230][06167] Avg episode reward: [(0, '4.570'), (1, '4.100')]
+[2023-09-27 06:44:58,238][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000018256_4673536.pth...
+[2023-09-27 06:44:58,238][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000018256_4673536.pth...
+[2023-09-27 06:44:58,289][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000015184_3887104.pth
+[2023-09-27 06:44:58,290][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000015184_3887104.pth
+[2023-09-27 06:45:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9379840. Throughput: 0: 822.2, 1: 821.9. Samples: 2344205. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 06:45:03,230][06167] Avg episode reward: [(0, '4.550'), (1, '4.010')]
+[2023-09-27 06:45:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9412608. Throughput: 0: 819.2, 1: 819.2. Samples: 2349056. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 06:45:08,230][06167] Avg episode reward: [(0, '4.880'), (1, '4.090')]
+[2023-09-27 06:45:08,231][06938] Saving new best policy, reward=4.880!
+[2023-09-27 06:45:09,172][07176] Updated weights for policy 1, policy_version 18400 (0.0015)
+[2023-09-27 06:45:09,173][07175] Updated weights for policy 0, policy_version 18400 (0.0017)
+[2023-09-27 06:45:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9445376. Throughput: 0: 822.1, 1: 823.0. Samples: 2359182. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 06:45:13,230][06167] Avg episode reward: [(0, '5.000'), (1, '4.260')]
+[2023-09-27 06:45:13,239][06938] Saving new best policy, reward=5.000!
+[2023-09-27 06:45:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6567.5). Total num frames: 9478144. Throughput: 0: 823.9, 1: 823.4. Samples: 2369027. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:45:18,229][06167] Avg episode reward: [(0, '4.800'), (1, '4.420')]
+[2023-09-27 06:45:21,559][07175] Updated weights for policy 0, policy_version 18560 (0.0017)
+[2023-09-27 06:45:21,559][07176] Updated weights for policy 1, policy_version 18560 (0.0018)
+[2023-09-27 06:45:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9510912. Throughput: 0: 820.0, 1: 819.4. Samples: 2373673. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:45:23,230][06167] Avg episode reward: [(0, '4.160'), (1, '4.180')]
+[2023-09-27 06:45:28,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6567.5). Total num frames: 9543680. Throughput: 0: 823.8, 1: 825.2. Samples: 2383872. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:45:28,230][06167] Avg episode reward: [(0, '4.110'), (1, '4.140')]
+[2023-09-27 06:45:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9576448. Throughput: 0: 823.9, 1: 824.2. Samples: 2393685. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:45:33,230][06167] Avg episode reward: [(0, '4.170'), (1, '3.770')]
+[2023-09-27 06:45:33,982][07175] Updated weights for policy 0, policy_version 18720 (0.0017)
+[2023-09-27 06:45:33,982][07176] Updated weights for policy 1, policy_version 18720 (0.0019)
+[2023-09-27 06:45:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9609216. Throughput: 0: 821.0, 1: 820.6. Samples: 2398317. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:45:38,230][06167] Avg episode reward: [(0, '4.120'), (1, '3.640')]
+[2023-09-27 06:45:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9641984. Throughput: 0: 821.2, 1: 823.0. Samples: 2408448. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:45:43,230][06167] Avg episode reward: [(0, '4.270'), (1, '3.510')]
+[2023-09-27 06:45:46,522][07176] Updated weights for policy 1, policy_version 18880 (0.0019)
+[2023-09-27 06:45:46,522][07175] Updated weights for policy 0, policy_version 18880 (0.0019)
+[2023-09-27 06:45:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9674752. Throughput: 0: 823.1, 1: 823.2. Samples: 2418290. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:45:48,230][06167] Avg episode reward: [(0, '4.550'), (1, '3.650')]
+[2023-09-27 06:45:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9707520. Throughput: 0: 821.6, 1: 820.2. Samples: 2422938. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:45:53,230][06167] Avg episode reward: [(0, '4.140'), (1, '3.930')]
+[2023-09-27 06:45:58,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9740288. Throughput: 0: 819.8, 1: 821.1. Samples: 2433024. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:45:58,229][06167] Avg episode reward: [(0, '4.060'), (1, '4.140')]
+[2023-09-27 06:45:58,973][07175] Updated weights for policy 0, policy_version 19040 (0.0016)
+[2023-09-27 06:45:58,974][07176] Updated weights for policy 1, policy_version 19040 (0.0016)
+[2023-09-27 06:46:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 9773056. Throughput: 0: 822.0, 1: 822.0. Samples: 2443009. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:03,230][06167] Avg episode reward: [(0, '4.110'), (1, '4.030')]
+[2023-09-27 06:46:08,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6567.5). Total num frames: 9805824. Throughput: 0: 823.4, 1: 822.7. Samples: 2447745. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:08,230][06167] Avg episode reward: [(0, '4.280'), (1, '3.870')]
+[2023-09-27 06:46:11,335][07175] Updated weights for policy 0, policy_version 19200 (0.0014)
+[2023-09-27 06:46:11,336][07176] Updated weights for policy 1, policy_version 19200 (0.0017)
+[2023-09-27 06:46:13,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9838592. Throughput: 0: 819.6, 1: 819.2. Samples: 2457622. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:13,229][06167] Avg episode reward: [(0, '4.280'), (1, '3.760')]
+[2023-09-27 06:46:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9871360. Throughput: 0: 821.9, 1: 822.9. Samples: 2467699. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:18,230][06167] Avg episode reward: [(0, '4.360'), (1, '3.920')]
+[2023-09-27 06:46:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9904128. Throughput: 0: 821.9, 1: 821.6. Samples: 2472277. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:46:23,230][06167] Avg episode reward: [(0, '4.410'), (1, '4.410')]
+[2023-09-27 06:46:23,902][07176] Updated weights for policy 1, policy_version 19360 (0.0017)
+[2023-09-27 06:46:23,903][07175] Updated weights for policy 0, policy_version 19360 (0.0014)
+[2023-09-27 06:46:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9936896. Throughput: 0: 819.2, 1: 819.2. Samples: 2482177. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:46:28,230][06167] Avg episode reward: [(0, '4.220'), (1, '4.250')]
+[2023-09-27 06:46:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 9969664. Throughput: 0: 820.2, 1: 819.4. Samples: 2492071. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:46:33,230][06167] Avg episode reward: [(0, '4.270'), (1, '4.600')]
+[2023-09-27 06:46:33,231][07019] Saving new best policy, reward=4.600!
+[2023-09-27 06:46:36,458][07175] Updated weights for policy 0, policy_version 19520 (0.0016)
+[2023-09-27 06:46:36,458][07176] Updated weights for policy 1, policy_version 19520 (0.0017)
+[2023-09-27 06:46:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10002432. Throughput: 0: 819.8, 1: 819.8. Samples: 2496718. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:38,230][06167] Avg episode reward: [(0, '4.100'), (1, '4.610')]
+[2023-09-27 06:46:38,231][07019] Saving new best policy, reward=4.610!
+[2023-09-27 06:46:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10035200. Throughput: 0: 819.3, 1: 819.2. Samples: 2506757. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:43,230][06167] Avg episode reward: [(0, '4.000'), (1, '4.260')]
+[2023-09-27 06:46:48,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10067968. Throughput: 0: 821.3, 1: 822.8. Samples: 2516992. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:48,230][06167] Avg episode reward: [(0, '4.380'), (1, '4.120')]
+[2023-09-27 06:46:48,739][07175] Updated weights for policy 0, policy_version 19680 (0.0015)
+[2023-09-27 06:46:48,739][07176] Updated weights for policy 1, policy_version 19680 (0.0017)
+[2023-09-27 06:46:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10100736. Throughput: 0: 822.8, 1: 823.0. Samples: 2521807. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:53,229][06167] Avg episode reward: [(0, '4.530'), (1, '4.150')]
+[2023-09-27 06:46:58,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10133504. Throughput: 0: 821.8, 1: 821.2. Samples: 2531556. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:46:58,230][06167] Avg episode reward: [(0, '4.480'), (1, '4.150')]
+[2023-09-27 06:46:58,240][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000019792_5066752.pth...
+[2023-09-27 06:46:58,240][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000019792_5066752.pth...
+[2023-09-27 06:46:58,275][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000016720_4280320.pth
+[2023-09-27 06:46:58,285][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000016720_4280320.pth
+[2023-09-27 06:47:01,422][07175] Updated weights for policy 0, policy_version 19840 (0.0016)
+[2023-09-27 06:47:01,422][07176] Updated weights for policy 1, policy_version 19840 (0.0017)
+[2023-09-27 06:47:03,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10166272. Throughput: 0: 819.3, 1: 818.2. Samples: 2541386. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:03,230][06167] Avg episode reward: [(0, '4.490'), (1, '4.170')]
+[2023-09-27 06:47:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10199040. Throughput: 0: 819.3, 1: 819.5. Samples: 2546023. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:08,230][06167] Avg episode reward: [(0, '4.480'), (1, '4.070')]
+[2023-09-27 06:47:13,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10231808. Throughput: 0: 819.3, 1: 819.2. Samples: 2555908. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:13,229][06167] Avg episode reward: [(0, '4.540'), (1, '3.760')]
+[2023-09-27 06:47:13,957][07176] Updated weights for policy 1, policy_version 20000 (0.0018)
+[2023-09-27 06:47:13,957][07175] Updated weights for policy 0, policy_version 20000 (0.0017)
+[2023-09-27 06:47:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10264576. Throughput: 0: 819.6, 1: 820.7. Samples: 2565886. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:18,230][06167] Avg episode reward: [(0, '4.370'), (1, '3.830')]
+[2023-09-27 06:47:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 10297344. Throughput: 0: 821.1, 1: 820.9. Samples: 2570610. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:23,230][06167] Avg episode reward: [(0, '4.810'), (1, '4.190')]
+[2023-09-27 06:47:26,498][07175] Updated weights for policy 0, policy_version 20160 (0.0016)
+[2023-09-27 06:47:26,499][07176] Updated weights for policy 1, policy_version 20160 (0.0020)
+[2023-09-27 06:47:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10330112. Throughput: 0: 819.1, 1: 819.2. Samples: 2580480. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:28,231][06167] Avg episode reward: [(0, '4.560'), (1, '4.050')]
+[2023-09-27 06:47:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10362880. Throughput: 0: 816.8, 1: 815.0. Samples: 2590422. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:33,229][06167] Avg episode reward: [(0, '4.730'), (1, '4.280')]
+[2023-09-27 06:47:38,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10395648. Throughput: 0: 812.5, 1: 812.5. Samples: 2594935. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:38,230][06167] Avg episode reward: [(0, '4.720'), (1, '4.560')]
+[2023-09-27 06:47:38,980][07175] Updated weights for policy 0, policy_version 20320 (0.0017)
+[2023-09-27 06:47:38,980][07176] Updated weights for policy 1, policy_version 20320 (0.0019)
+[2023-09-27 06:47:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10428416. Throughput: 0: 816.2, 1: 817.2. Samples: 2605056. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:43,229][06167] Avg episode reward: [(0, '4.750'), (1, '4.600')]
+[2023-09-27 06:47:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10461184. Throughput: 0: 817.4, 1: 817.2. Samples: 2614939. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:48,230][06167] Avg episode reward: [(0, '4.760'), (1, '4.220')]
+[2023-09-27 06:47:51,440][07176] Updated weights for policy 1, policy_version 20480 (0.0016)
+[2023-09-27 06:47:51,440][07175] Updated weights for policy 0, policy_version 20480 (0.0016)
+[2023-09-27 06:47:53,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10493952. Throughput: 0: 818.0, 1: 818.0. Samples: 2619644. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:53,230][06167] Avg episode reward: [(0, '4.980'), (1, '4.160')]
+[2023-09-27 06:47:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10526720. Throughput: 0: 819.1, 1: 819.2. Samples: 2629633. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:47:58,230][06167] Avg episode reward: [(0, '4.790'), (1, '3.630')]
+[2023-09-27 06:48:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10559488. Throughput: 0: 820.2, 1: 820.3. Samples: 2639711. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:03,229][06167] Avg episode reward: [(0, '4.840'), (1, '3.280')]
+[2023-09-27 06:48:03,875][07176] Updated weights for policy 1, policy_version 20640 (0.0018)
+[2023-09-27 06:48:03,876][07175] Updated weights for policy 0, policy_version 20640 (0.0015)
+[2023-09-27 06:48:08,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10592256. Throughput: 0: 819.5, 1: 820.0. Samples: 2644387. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:08,229][06167] Avg episode reward: [(0, '5.170'), (1, '3.710')]
+[2023-09-27 06:48:08,230][06938] Saving new best policy, reward=5.170!
+[2023-09-27 06:48:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10625024. Throughput: 0: 819.3, 1: 819.2. Samples: 2654214. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:13,229][06167] Avg episode reward: [(0, '5.360'), (1, '3.720')]
+[2023-09-27 06:48:13,238][06938] Saving new best policy, reward=5.360!
+[2023-09-27 06:48:16,413][07176] Updated weights for policy 1, policy_version 20800 (0.0017)
+[2023-09-27 06:48:16,413][07175] Updated weights for policy 0, policy_version 20800 (0.0017)
+[2023-09-27 06:48:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10657792. Throughput: 0: 819.9, 1: 820.5. Samples: 2664240. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:18,229][06167] Avg episode reward: [(0, '5.250'), (1, '3.780')]
+[2023-09-27 06:48:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10690560. Throughput: 0: 822.2, 1: 821.8. Samples: 2668915. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:23,230][06167] Avg episode reward: [(0, '5.560'), (1, '4.160')]
+[2023-09-27 06:48:23,230][06938] Saving new best policy, reward=5.560!
+[2023-09-27 06:48:28,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10723328. Throughput: 0: 819.2, 1: 819.2. Samples: 2678785. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:28,230][06167] Avg episode reward: [(0, '5.810'), (1, '4.400')]
+[2023-09-27 06:48:28,241][06938] Saving new best policy, reward=5.810!
+[2023-09-27 06:48:28,869][07175] Updated weights for policy 0, policy_version 20960 (0.0016)
+[2023-09-27 06:48:28,870][07176] Updated weights for policy 1, policy_version 20960 (0.0017)
+[2023-09-27 06:48:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10756096. Throughput: 0: 819.0, 1: 818.6. Samples: 2688630. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:33,230][06167] Avg episode reward: [(0, '5.470'), (1, '4.240')]
+[2023-09-27 06:48:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10788864. Throughput: 0: 818.1, 1: 818.2. Samples: 2693281. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:38,230][06167] Avg episode reward: [(0, '5.300'), (1, '4.140')]
+[2023-09-27 06:48:41,379][07176] Updated weights for policy 1, policy_version 21120 (0.0015)
+[2023-09-27 06:48:41,379][07175] Updated weights for policy 0, policy_version 21120 (0.0015)
+[2023-09-27 06:48:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10821632. Throughput: 0: 819.3, 1: 819.2. Samples: 2703364. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:43,229][06167] Avg episode reward: [(0, '5.130'), (1, '3.960')]
+[2023-09-27 06:48:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10854400. Throughput: 0: 820.3, 1: 820.8. Samples: 2713560. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:48:48,229][06167] Avg episode reward: [(0, '5.220'), (1, '4.010')]
+[2023-09-27 06:48:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10887168. Throughput: 0: 820.2, 1: 819.7. Samples: 2718185. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:48:53,229][06167] Avg episode reward: [(0, '5.310'), (1, '4.120')]
+[2023-09-27 06:48:53,844][07176] Updated weights for policy 1, policy_version 21280 (0.0019)
+[2023-09-27 06:48:53,844][07175] Updated weights for policy 0, policy_version 21280 (0.0014)
+[2023-09-27 06:48:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10919936. Throughput: 0: 820.0, 1: 819.3. Samples: 2727983. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:48:58,230][06167] Avg episode reward: [(0, '5.480'), (1, '3.840')]
+[2023-09-27 06:48:58,242][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000021328_5459968.pth...
+[2023-09-27 06:48:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000021328_5459968.pth...
+[2023-09-27 06:48:58,275][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000018256_4673536.pth
+[2023-09-27 06:48:58,276][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000018256_4673536.pth
+[2023-09-27 06:49:03,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10952704. Throughput: 0: 820.9, 1: 822.1. Samples: 2738176. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:49:03,230][06167] Avg episode reward: [(0, '5.110'), (1, '4.130')]
+[2023-09-27 06:49:06,336][07176] Updated weights for policy 1, policy_version 21440 (0.0018)
+[2023-09-27 06:49:06,336][07175] Updated weights for policy 0, policy_version 21440 (0.0018)
+[2023-09-27 06:49:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 10985472. Throughput: 0: 819.7, 1: 819.8. Samples: 2742694. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 06:49:08,230][06167] Avg episode reward: [(0, '5.060'), (1, '4.170')]
+[2023-09-27 06:49:13,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11018240. Throughput: 0: 819.6, 1: 819.2. Samples: 2752532. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 06:49:13,229][06167] Avg episode reward: [(0, '4.920'), (1, '4.580')]
+[2023-09-27 06:49:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11051008. Throughput: 0: 821.6, 1: 822.2. Samples: 2762602. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0)
+[2023-09-27 06:49:18,229][06167] Avg episode reward: [(0, '4.650'), (1, '4.790')]
+[2023-09-27 06:49:18,230][07019] Saving new best policy, reward=4.790!
+[2023-09-27 06:49:18,861][07176] Updated weights for policy 1, policy_version 21600 (0.0017)
+[2023-09-27 06:49:18,861][07175] Updated weights for policy 0, policy_version 21600 (0.0017)
+[2023-09-27 06:49:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11083776. Throughput: 0: 822.0, 1: 822.0. Samples: 2767262. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:49:23,229][06167] Avg episode reward: [(0, '4.520'), (1, '4.760')]
+[2023-09-27 06:49:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11116544. Throughput: 0: 819.2, 1: 819.2. Samples: 2777093. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:49:28,230][06167] Avg episode reward: [(0, '4.870'), (1, '4.650')]
+[2023-09-27 06:49:31,452][07175] Updated weights for policy 0, policy_version 21760 (0.0016)
+[2023-09-27 06:49:31,453][07176] Updated weights for policy 1, policy_version 21760 (0.0016)
+[2023-09-27 06:49:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11149312. Throughput: 0: 816.5, 1: 816.4. Samples: 2787040. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:49:33,229][06167] Avg episode reward: [(0, '5.250'), (1, '4.310')]
+[2023-09-27 06:49:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11182080. Throughput: 0: 816.4, 1: 816.5. Samples: 2791667. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:49:38,229][06167] Avg episode reward: [(0, '5.280'), (1, '4.230')]
+[2023-09-27 06:49:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11214848. Throughput: 0: 818.3, 1: 819.1. Samples: 2801664. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:49:43,229][06167] Avg episode reward: [(0, '5.290'), (1, '3.850')]
+[2023-09-27 06:49:44,019][07175] Updated weights for policy 0, policy_version 21920 (0.0018)
+[2023-09-27 06:49:44,019][07176] Updated weights for policy 1, policy_version 21920 (0.0018)
+[2023-09-27 06:49:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11247616. Throughput: 0: 814.3, 1: 812.2. Samples: 2811366. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:49:48,229][06167] Avg episode reward: [(0, '5.640'), (1, '3.760')]
+[2023-09-27 06:49:53,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11280384. Throughput: 0: 814.5, 1: 815.4. Samples: 2816040. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 06:49:53,230][06167] Avg episode reward: [(0, '5.410'), (1, '4.030')]
+[2023-09-27 06:49:56,594][07175] Updated weights for policy 0, policy_version 22080 (0.0016)
+[2023-09-27 06:49:56,594][07176] Updated weights for policy 1, policy_version 22080 (0.0017)
+[2023-09-27 06:49:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11313152. Throughput: 0: 818.8, 1: 817.6. Samples: 2826172. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 06:49:58,230][06167] Avg episode reward: [(0, '5.100'), (1, '4.300')]
+[2023-09-27 06:50:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11345920. Throughput: 0: 814.4, 1: 814.6. Samples: 2835907. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 06:50:03,229][06167] Avg episode reward: [(0, '5.230'), (1, '4.390')]
+[2023-09-27 06:50:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11378688. Throughput: 0: 814.0, 1: 815.3. Samples: 2840582. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:50:08,230][06167] Avg episode reward: [(0, '5.210'), (1, '4.760')]
+[2023-09-27 06:50:09,047][07176] Updated weights for policy 1, policy_version 22240 (0.0017)
+[2023-09-27 06:50:09,047][07175] Updated weights for policy 0, policy_version 22240 (0.0019)
+[2023-09-27 06:50:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11411456. Throughput: 0: 819.1, 1: 819.2. Samples: 2850816. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:50:13,230][06167] Avg episode reward: [(0, '5.400'), (1, '4.970')]
+[2023-09-27 06:50:13,240][07019] Saving new best policy, reward=4.970!
+[2023-09-27 06:50:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11444224. Throughput: 0: 817.7, 1: 818.8. Samples: 2860684. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:50:18,230][06167] Avg episode reward: [(0, '5.640'), (1, '4.870')]
+[2023-09-27 06:50:21,504][07176] Updated weights for policy 1, policy_version 22400 (0.0018)
+[2023-09-27 06:50:21,504][07175] Updated weights for policy 0, policy_version 22400 (0.0014)
+[2023-09-27 06:50:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11476992. Throughput: 0: 817.6, 1: 817.6. Samples: 2865248. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 06:50:23,230][06167] Avg episode reward: [(0, '5.780'), (1, '4.470')]
+[2023-09-27 06:50:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11509760. Throughput: 0: 819.3, 1: 819.2. Samples: 2875396. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 06:50:28,230][06167] Avg episode reward: [(0, '5.410'), (1, '4.470')]
+[2023-09-27 06:50:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11542528. Throughput: 0: 821.4, 1: 822.5. Samples: 2885343. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0)
+[2023-09-27 06:50:33,229][06167] Avg episode reward: [(0, '4.810'), (1, '4.460')]
+[2023-09-27 06:50:33,995][07176] Updated weights for policy 1, policy_version 22560 (0.0015)
+[2023-09-27 06:50:33,996][07175] Updated weights for policy 0, policy_version 22560 (0.0016)
+[2023-09-27 06:50:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11575296. Throughput: 0: 820.5, 1: 819.9. Samples: 2889858. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:50:38,230][06167] Avg episode reward: [(0, '5.150'), (1, '4.470')]
+[2023-09-27 06:50:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11608064. Throughput: 0: 819.2, 1: 820.3. Samples: 2899951. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:50:43,229][06167] Avg episode reward: [(0, '4.980'), (1, '4.320')]
+[2023-09-27 06:50:46,550][07175] Updated weights for policy 0, policy_version 22720 (0.0018)
+[2023-09-27 06:50:46,552][07176] Updated weights for policy 1, policy_version 22720 (0.0019)
+[2023-09-27 06:50:48,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11640832. Throughput: 0: 821.0, 1: 820.5. Samples: 2909777. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:50:48,230][06167] Avg episode reward: [(0, '5.180'), (1, '4.690')]
+[2023-09-27 06:50:53,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11673600. Throughput: 0: 820.7, 1: 819.5. Samples: 2914391. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:50:53,230][06167] Avg episode reward: [(0, '5.480'), (1, '4.640')]
+[2023-09-27 06:50:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11706368. Throughput: 0: 819.2, 1: 819.2. Samples: 2924544. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:50:58,230][06167] Avg episode reward: [(0, '5.740'), (1, '4.420')]
+[2023-09-27 06:50:58,242][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000022864_5853184.pth...
+[2023-09-27 06:50:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000022864_5853184.pth...
+[2023-09-27 06:50:58,285][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000019792_5066752.pth
+[2023-09-27 06:50:58,288][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000019792_5066752.pth
+[2023-09-27 06:50:59,051][07175] Updated weights for policy 0, policy_version 22880 (0.0016)
+[2023-09-27 06:50:59,052][07176] Updated weights for policy 1, policy_version 22880 (0.0014)
+[2023-09-27 06:51:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11739136. Throughput: 0: 819.0, 1: 817.7. Samples: 2934336. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:51:03,230][06167] Avg episode reward: [(0, '5.820'), (1, '4.340')]
+[2023-09-27 06:51:03,231][06938] Saving new best policy, reward=5.820!
+[2023-09-27 06:51:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11771904. Throughput: 0: 819.1, 1: 819.3. Samples: 2938975. Policy #0 lag: (min: 14.0, avg: 14.0, max: 14.0)
+[2023-09-27 06:51:08,230][06167] Avg episode reward: [(0, '5.770'), (1, '4.370')]
+[2023-09-27 06:51:11,463][07175] Updated weights for policy 0, policy_version 23040 (0.0016)
+[2023-09-27 06:51:11,463][07176] Updated weights for policy 1, policy_version 23040 (0.0017)
+[2023-09-27 06:51:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11804672. Throughput: 0: 819.1, 1: 819.2. Samples: 2949120. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:51:13,230][06167] Avg episode reward: [(0, '5.730'), (1, '4.540')]
+[2023-09-27 06:51:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11837440. Throughput: 0: 819.0, 1: 817.6. Samples: 2958994. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:51:18,229][06167] Avg episode reward: [(0, '5.360'), (1, '4.570')]
+[2023-09-27 06:51:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11870208. Throughput: 0: 819.3, 1: 819.4. Samples: 2963601. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:51:23,230][06167] Avg episode reward: [(0, '5.420'), (1, '4.560')]
+[2023-09-27 06:51:24,004][07176] Updated weights for policy 1, policy_version 23200 (0.0017)
+[2023-09-27 06:51:24,005][07175] Updated weights for policy 0, policy_version 23200 (0.0017)
+[2023-09-27 06:51:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11902976. Throughput: 0: 819.2, 1: 819.6. Samples: 2973696. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:51:28,230][06167] Avg episode reward: [(0, '5.050'), (1, '4.630')]
+[2023-09-27 06:51:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11935744. Throughput: 0: 814.6, 1: 815.0. Samples: 2983108. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:51:33,230][06167] Avg episode reward: [(0, '4.900'), (1, '4.470')]
+[2023-09-27 06:51:36,627][07176] Updated weights for policy 1, policy_version 23360 (0.0017)
+[2023-09-27 06:51:36,628][07175] Updated weights for policy 0, policy_version 23360 (0.0018)
+[2023-09-27 06:51:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 11968512. Throughput: 0: 817.6, 1: 818.9. Samples: 2988034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:51:38,230][06167] Avg episode reward: [(0, '4.680'), (1, '4.130')]
+[2023-09-27 06:51:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12001280. Throughput: 0: 819.2, 1: 819.1. Samples: 2998267. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:51:43,230][06167] Avg episode reward: [(0, '4.650'), (1, '4.080')]
+[2023-09-27 06:51:48,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12034048. Throughput: 0: 819.8, 1: 819.9. Samples: 3008124. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:51:48,230][06167] Avg episode reward: [(0, '4.440'), (1, '4.260')]
+[2023-09-27 06:51:48,962][07176] Updated weights for policy 1, policy_version 23520 (0.0017)
+[2023-09-27 06:51:48,963][07175] Updated weights for policy 0, policy_version 23520 (0.0017)
+[2023-09-27 06:51:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12066816. Throughput: 0: 820.2, 1: 820.2. Samples: 3012794. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:51:53,230][06167] Avg episode reward: [(0, '4.460'), (1, '4.420')]
+[2023-09-27 06:51:58,229][06167] Fps is (10 sec: 6553.9, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12099584. Throughput: 0: 819.2, 1: 819.2. Samples: 3022848. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 06:51:58,229][06167] Avg episode reward: [(0, '4.440'), (1, '4.450')]
+[2023-09-27 06:52:01,498][07176] Updated weights for policy 1, policy_version 23680 (0.0018)
+[2023-09-27 06:52:01,498][07175] Updated weights for policy 0, policy_version 23680 (0.0018)
+[2023-09-27 06:52:03,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12132352. Throughput: 0: 818.9, 1: 820.1. Samples: 3032750. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:52:03,229][06167] Avg episode reward: [(0, '4.490'), (1, '4.580')]
+[2023-09-27 06:52:08,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12165120. Throughput: 0: 819.5, 1: 819.7. Samples: 3037365. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:52:08,230][06167] Avg episode reward: [(0, '4.760'), (1, '4.700')]
+[2023-09-27 06:52:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12197888. Throughput: 0: 819.2, 1: 819.2. Samples: 3047424. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:52:13,230][06167] Avg episode reward: [(0, '5.210'), (1, '4.530')]
+[2023-09-27 06:52:13,928][07175] Updated weights for policy 0, policy_version 23840 (0.0018)
+[2023-09-27 06:52:13,928][07176] Updated weights for policy 1, policy_version 23840 (0.0016)
+[2023-09-27 06:52:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12230656. Throughput: 0: 826.2, 1: 826.2. Samples: 3057466. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:52:18,229][06167] Avg episode reward: [(0, '4.930'), (1, '4.270')]
+[2023-09-27 06:52:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12263424. Throughput: 0: 824.3, 1: 822.9. Samples: 3062156. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:52:23,229][06167] Avg episode reward: [(0, '4.870'), (1, '4.370')]
+[2023-09-27 06:52:26,379][07175] Updated weights for policy 0, policy_version 24000 (0.0020)
+[2023-09-27 06:52:26,379][07176] Updated weights for policy 1, policy_version 24000 (0.0019)
+[2023-09-27 06:52:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12296192. Throughput: 0: 819.3, 1: 819.3. Samples: 3072004. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:52:28,229][06167] Avg episode reward: [(0, '4.750'), (1, '4.550')]
+[2023-09-27 06:52:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12328960. Throughput: 0: 821.8, 1: 822.5. Samples: 3082114. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:52:33,229][06167] Avg episode reward: [(0, '4.590'), (1, '5.050')]
+[2023-09-27 06:52:33,230][07019] Saving new best policy, reward=5.050!
+[2023-09-27 06:52:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12361728. Throughput: 0: 821.4, 1: 821.2. Samples: 3086712. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:52:38,230][06167] Avg episode reward: [(0, '4.600'), (1, '5.080')]
+[2023-09-27 06:52:38,231][07019] Saving new best policy, reward=5.080!
+[2023-09-27 06:52:38,984][07175] Updated weights for policy 0, policy_version 24160 (0.0018)
+[2023-09-27 06:52:38,984][07176] Updated weights for policy 1, policy_version 24160 (0.0016)
+[2023-09-27 06:52:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12394496. Throughput: 0: 819.2, 1: 819.2. Samples: 3096576. Policy #0 lag: (min: 8.0, avg: 8.0, max: 8.0)
+[2023-09-27 06:52:43,230][06167] Avg episode reward: [(0, '4.610'), (1, '4.840')]
+[2023-09-27 06:52:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12427264. Throughput: 0: 816.6, 1: 816.0. Samples: 3106221. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:52:48,230][06167] Avg episode reward: [(0, '4.910'), (1, '4.780')]
+[2023-09-27 06:52:51,558][07175] Updated weights for policy 0, policy_version 24320 (0.0017)
+[2023-09-27 06:52:51,558][07176] Updated weights for policy 1, policy_version 24320 (0.0018)
+[2023-09-27 06:52:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12460032. Throughput: 0: 817.8, 1: 817.8. Samples: 3110967. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:52:53,230][06167] Avg episode reward: [(0, '4.850'), (1, '4.420')]
+[2023-09-27 06:52:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12492800. Throughput: 0: 819.2, 1: 818.9. Samples: 3121138. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:52:58,230][06167] Avg episode reward: [(0, '4.870'), (1, '4.530')]
+[2023-09-27 06:52:58,239][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000024400_6246400.pth...
+[2023-09-27 06:52:58,239][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000024400_6246400.pth... +[2023-09-27 06:52:58,272][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000021328_5459968.pth +[2023-09-27 06:52:58,273][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000021328_5459968.pth +[2023-09-27 06:53:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12525568. Throughput: 0: 816.2, 1: 816.4. Samples: 3130934. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:53:03,229][06167] Avg episode reward: [(0, '4.670'), (1, '4.750')] +[2023-09-27 06:53:04,014][07175] Updated weights for policy 0, policy_version 24480 (0.0018) +[2023-09-27 06:53:04,014][07176] Updated weights for policy 1, policy_version 24480 (0.0017) +[2023-09-27 06:53:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12558336. Throughput: 0: 816.7, 1: 816.7. Samples: 3135658. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:08,230][06167] Avg episode reward: [(0, '5.390'), (1, '5.250')] +[2023-09-27 06:53:08,231][07019] Saving new best policy, reward=5.250! +[2023-09-27 06:53:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12591104. Throughput: 0: 819.1, 1: 819.2. Samples: 3145728. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:13,230][06167] Avg episode reward: [(0, '5.410'), (1, '5.410')] +[2023-09-27 06:53:13,242][07019] Saving new best policy, reward=5.410! +[2023-09-27 06:53:16,456][07175] Updated weights for policy 0, policy_version 24640 (0.0018) +[2023-09-27 06:53:16,456][07176] Updated weights for policy 1, policy_version 24640 (0.0017) +[2023-09-27 06:53:18,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12623872. Throughput: 0: 818.4, 1: 815.7. Samples: 3155647. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:18,229][06167] Avg episode reward: [(0, '5.270'), (1, '5.130')] +[2023-09-27 06:53:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12656640. Throughput: 0: 815.6, 1: 815.8. Samples: 3160127. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:23,230][06167] Avg episode reward: [(0, '5.160'), (1, '5.150')] +[2023-09-27 06:53:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12689408. Throughput: 0: 819.2, 1: 819.2. Samples: 3170304. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:28,229][06167] Avg episode reward: [(0, '5.240'), (1, '5.400')] +[2023-09-27 06:53:29,033][07175] Updated weights for policy 0, policy_version 24800 (0.0014) +[2023-09-27 06:53:29,033][07176] Updated weights for policy 1, policy_version 24800 (0.0017) +[2023-09-27 06:53:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12722176. Throughput: 0: 820.6, 1: 820.8. Samples: 3180084. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:33,230][06167] Avg episode reward: [(0, '5.010'), (1, '5.260')] +[2023-09-27 06:53:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12754944. Throughput: 0: 820.3, 1: 820.0. Samples: 3184779. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:38,230][06167] Avg episode reward: [(0, '4.150'), (1, '5.220')] +[2023-09-27 06:53:41,614][07175] Updated weights for policy 0, policy_version 24960 (0.0016) +[2023-09-27 06:53:41,616][07176] Updated weights for policy 1, policy_version 24960 (0.0017) +[2023-09-27 06:53:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12787712. Throughput: 0: 818.9, 1: 818.3. Samples: 3194811. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:43,230][06167] Avg episode reward: [(0, '4.410'), (1, '5.460')] +[2023-09-27 06:53:43,239][07019] Saving new best policy, reward=5.460! +[2023-09-27 06:53:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12820480. Throughput: 0: 815.2, 1: 815.4. Samples: 3204312. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:48,229][06167] Avg episode reward: [(0, '4.480'), (1, '5.450')] +[2023-09-27 06:53:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12853248. Throughput: 0: 816.6, 1: 818.0. Samples: 3209216. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:53,230][06167] Avg episode reward: [(0, '4.330'), (1, '5.090')] +[2023-09-27 06:53:54,202][07175] Updated weights for policy 0, policy_version 25120 (0.0017) +[2023-09-27 06:53:54,204][07176] Updated weights for policy 1, policy_version 25120 (0.0017) +[2023-09-27 06:53:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12886016. Throughput: 0: 817.2, 1: 815.9. Samples: 3219215. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:53:58,230][06167] Avg episode reward: [(0, '4.560'), (1, '4.680')] +[2023-09-27 06:54:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12918784. Throughput: 0: 814.1, 1: 815.7. Samples: 3228988. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:03,230][06167] Avg episode reward: [(0, '4.830'), (1, '4.990')] +[2023-09-27 06:54:06,630][07175] Updated weights for policy 0, policy_version 25280 (0.0018) +[2023-09-27 06:54:06,630][07176] Updated weights for policy 1, policy_version 25280 (0.0019) +[2023-09-27 06:54:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12951552. Throughput: 0: 817.9, 1: 819.1. Samples: 3233793. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:08,230][06167] Avg episode reward: [(0, '4.930'), (1, '4.860')] +[2023-09-27 06:54:13,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 12984320. Throughput: 0: 819.2, 1: 818.2. Samples: 3243987. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:13,229][06167] Avg episode reward: [(0, '5.240'), (1, '5.120')] +[2023-09-27 06:54:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13017088. Throughput: 0: 820.1, 1: 819.6. Samples: 3253874. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:18,230][06167] Avg episode reward: [(0, '5.350'), (1, '5.040')] +[2023-09-27 06:54:19,000][07175] Updated weights for policy 0, policy_version 25440 (0.0017) +[2023-09-27 06:54:19,000][07176] Updated weights for policy 1, policy_version 25440 (0.0016) +[2023-09-27 06:54:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13049856. Throughput: 0: 819.7, 1: 819.5. Samples: 3258544. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:23,229][06167] Avg episode reward: [(0, '5.330'), (1, '5.280')] +[2023-09-27 06:54:28,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13082624. Throughput: 0: 819.5, 1: 820.4. Samples: 3268608. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:28,229][06167] Avg episode reward: [(0, '5.490'), (1, '4.980')] +[2023-09-27 06:54:31,413][07175] Updated weights for policy 0, policy_version 25600 (0.0018) +[2023-09-27 06:54:31,413][07176] Updated weights for policy 1, policy_version 25600 (0.0019) +[2023-09-27 06:54:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13115392. Throughput: 0: 826.2, 1: 825.9. Samples: 3278658. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:33,230][06167] Avg episode reward: [(0, '5.790'), (1, '4.760')] +[2023-09-27 06:54:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13148160. Throughput: 0: 824.7, 1: 823.4. Samples: 3283382. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:38,230][06167] Avg episode reward: [(0, '6.120'), (1, '4.390')] +[2023-09-27 06:54:38,231][06938] Saving new best policy, reward=6.120! +[2023-09-27 06:54:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13180928. Throughput: 0: 821.2, 1: 822.5. Samples: 3293184. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:43,230][06167] Avg episode reward: [(0, '5.940'), (1, '4.700')] +[2023-09-27 06:54:43,984][07175] Updated weights for policy 0, policy_version 25760 (0.0017) +[2023-09-27 06:54:43,984][07176] Updated weights for policy 1, policy_version 25760 (0.0016) +[2023-09-27 06:54:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13213696. Throughput: 0: 819.1, 1: 819.3. Samples: 3302713. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:48,230][06167] Avg episode reward: [(0, '5.880'), (1, '4.470')] +[2023-09-27 06:54:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13246464. Throughput: 0: 819.2, 1: 817.6. Samples: 3307449. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:53,229][06167] Avg episode reward: [(0, '5.930'), (1, '4.570')] +[2023-09-27 06:54:56,876][07175] Updated weights for policy 0, policy_version 25920 (0.0019) +[2023-09-27 06:54:56,876][07176] Updated weights for policy 1, policy_version 25920 (0.0020) +[2023-09-27 06:54:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13279232. Throughput: 0: 814.2, 1: 812.8. Samples: 3317203. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:54:58,230][06167] Avg episode reward: [(0, '5.880'), (1, '4.720')] +[2023-09-27 06:54:58,242][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000025936_6639616.pth... +[2023-09-27 06:54:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000025936_6639616.pth... +[2023-09-27 06:54:58,278][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000022864_5853184.pth +[2023-09-27 06:54:58,288][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000022864_5853184.pth +[2023-09-27 06:55:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13312000. Throughput: 0: 808.3, 1: 809.3. Samples: 3326664. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:55:03,229][06167] Avg episode reward: [(0, '5.310'), (1, '4.810')] +[2023-09-27 06:55:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13344768. Throughput: 0: 813.8, 1: 814.4. Samples: 3331816. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:55:08,230][06167] Avg episode reward: [(0, '5.670'), (1, '4.980')] +[2023-09-27 06:55:09,418][07175] Updated weights for policy 0, policy_version 26080 (0.0018) +[2023-09-27 06:55:09,418][07176] Updated weights for policy 1, policy_version 26080 (0.0017) +[2023-09-27 06:55:13,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13377536. Throughput: 0: 812.4, 1: 811.2. Samples: 3341674. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:55:13,230][06167] Avg episode reward: [(0, '5.870'), (1, '4.940')] +[2023-09-27 06:55:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13410304. Throughput: 0: 810.0, 1: 809.8. Samples: 3351548. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:55:18,230][06167] Avg episode reward: [(0, '6.040'), (1, '5.350')] +[2023-09-27 06:55:21,754][07175] Updated weights for policy 0, policy_version 26240 (0.0017) +[2023-09-27 06:55:21,754][07176] Updated weights for policy 1, policy_version 26240 (0.0018) +[2023-09-27 06:55:23,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13443072. Throughput: 0: 813.7, 1: 814.9. Samples: 3356670. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 06:55:23,229][06167] Avg episode reward: [(0, '5.970'), (1, '5.210')] +[2023-09-27 06:55:28,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13475840. Throughput: 0: 816.6, 1: 815.1. Samples: 3366613. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:55:28,230][06167] Avg episode reward: [(0, '6.120'), (1, '5.270')] +[2023-09-27 06:55:33,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13508608. Throughput: 0: 817.2, 1: 817.2. Samples: 3376261. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:55:33,230][06167] Avg episode reward: [(0, '6.180'), (1, '5.180')] +[2023-09-27 06:55:33,231][06938] Saving new best policy, reward=6.180! +[2023-09-27 06:55:34,301][07175] Updated weights for policy 0, policy_version 26400 (0.0018) +[2023-09-27 06:55:34,301][07176] Updated weights for policy 1, policy_version 26400 (0.0018) +[2023-09-27 06:55:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13541376. Throughput: 0: 819.2, 1: 820.7. Samples: 3381243. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:55:38,230][06167] Avg episode reward: [(0, '6.200'), (1, '5.250')] +[2023-09-27 06:55:38,231][06938] Saving new best policy, reward=6.200! +[2023-09-27 06:55:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13574144. 
Throughput: 0: 818.2, 1: 819.8. Samples: 3390913. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:55:43,230][06167] Avg episode reward: [(0, '5.770'), (1, '5.450')] +[2023-09-27 06:55:46,849][07176] Updated weights for policy 1, policy_version 26560 (0.0019) +[2023-09-27 06:55:46,849][07175] Updated weights for policy 0, policy_version 26560 (0.0018) +[2023-09-27 06:55:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13606912. Throughput: 0: 822.3, 1: 821.8. Samples: 3400651. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:55:48,230][06167] Avg episode reward: [(0, '5.670'), (1, '5.700')] +[2023-09-27 06:55:48,230][07019] Saving new best policy, reward=5.700! +[2023-09-27 06:55:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13639680. Throughput: 0: 821.9, 1: 821.5. Samples: 3405768. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:55:53,230][06167] Avg episode reward: [(0, '5.470'), (1, '5.810')] +[2023-09-27 06:55:53,231][07019] Saving new best policy, reward=5.810! +[2023-09-27 06:55:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13672448. Throughput: 0: 820.4, 1: 820.2. Samples: 3415498. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:55:58,231][06167] Avg episode reward: [(0, '5.730'), (1, '5.820')] +[2023-09-27 06:55:58,243][07019] Saving new best policy, reward=5.820! +[2023-09-27 06:55:59,343][07175] Updated weights for policy 0, policy_version 26720 (0.0018) +[2023-09-27 06:55:59,343][07176] Updated weights for policy 1, policy_version 26720 (0.0019) +[2023-09-27 06:56:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13705216. Throughput: 0: 818.8, 1: 818.7. Samples: 3425235. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:56:03,229][06167] Avg episode reward: [(0, '6.180'), (1, '5.660')] +[2023-09-27 06:56:08,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13737984. Throughput: 0: 819.0, 1: 817.6. Samples: 3430318. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:56:08,229][06167] Avg episode reward: [(0, '6.170'), (1, '5.490')] +[2023-09-27 06:56:11,873][07176] Updated weights for policy 1, policy_version 26880 (0.0016) +[2023-09-27 06:56:11,873][07175] Updated weights for policy 0, policy_version 26880 (0.0015) +[2023-09-27 06:56:13,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13770752. Throughput: 0: 816.0, 1: 815.6. Samples: 3440032. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:56:13,230][06167] Avg episode reward: [(0, '6.320'), (1, '5.350')] +[2023-09-27 06:56:13,237][06938] Saving new best policy, reward=6.320! +[2023-09-27 06:56:18,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13803520. Throughput: 0: 815.1, 1: 814.8. Samples: 3449606. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:56:18,230][06167] Avg episode reward: [(0, '6.140'), (1, '5.320')] +[2023-09-27 06:56:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13836288. Throughput: 0: 817.9, 1: 816.7. Samples: 3454800. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:56:23,230][06167] Avg episode reward: [(0, '5.840'), (1, '5.430')] +[2023-09-27 06:56:24,373][07175] Updated weights for policy 0, policy_version 27040 (0.0015) +[2023-09-27 06:56:24,374][07176] Updated weights for policy 1, policy_version 27040 (0.0017) +[2023-09-27 06:56:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13869056. Throughput: 0: 819.3, 1: 818.8. Samples: 3464630. 
Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:56:28,230][06167] Avg episode reward: [(0, '5.370'), (1, '5.730')] +[2023-09-27 06:56:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13901824. Throughput: 0: 818.5, 1: 818.6. Samples: 3474320. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:56:33,230][06167] Avg episode reward: [(0, '5.410'), (1, '5.860')] +[2023-09-27 06:56:33,231][07019] Saving new best policy, reward=5.860! +[2023-09-27 06:56:36,898][07175] Updated weights for policy 0, policy_version 27200 (0.0017) +[2023-09-27 06:56:36,898][07176] Updated weights for policy 1, policy_version 27200 (0.0018) +[2023-09-27 06:56:38,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13934592. Throughput: 0: 818.4, 1: 818.1. Samples: 3479408. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0) +[2023-09-27 06:56:38,229][06167] Avg episode reward: [(0, '5.440'), (1, '5.800')] +[2023-09-27 06:56:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 13967360. Throughput: 0: 817.5, 1: 818.2. Samples: 3489105. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:56:43,231][06167] Avg episode reward: [(0, '5.570'), (1, '5.540')] +[2023-09-27 06:56:48,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14000128. Throughput: 0: 819.1, 1: 819.4. Samples: 3498966. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:56:48,230][06167] Avg episode reward: [(0, '5.990'), (1, '5.480')] +[2023-09-27 06:56:49,312][07175] Updated weights for policy 0, policy_version 27360 (0.0017) +[2023-09-27 06:56:49,313][07176] Updated weights for policy 1, policy_version 27360 (0.0017) +[2023-09-27 06:56:53,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14032896. Throughput: 0: 819.4, 1: 820.4. Samples: 3504110. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:56:53,229][06167] Avg episode reward: [(0, '6.120'), (1, '5.290')] +[2023-09-27 06:56:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14065664. Throughput: 0: 821.5, 1: 822.0. Samples: 3513990. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 06:56:58,231][06167] Avg episode reward: [(0, '5.960'), (1, '5.280')] +[2023-09-27 06:56:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000027472_7032832.pth... +[2023-09-27 06:56:58,242][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000027472_7032832.pth... +[2023-09-27 06:56:58,273][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000024400_6246400.pth +[2023-09-27 06:56:58,275][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000024400_6246400.pth +[2023-09-27 06:57:01,785][07175] Updated weights for policy 0, policy_version 27520 (0.0014) +[2023-09-27 06:57:01,785][07176] Updated weights for policy 1, policy_version 27520 (0.0017) +[2023-09-27 06:57:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14098432. Throughput: 0: 822.8, 1: 822.8. Samples: 3523657. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 06:57:03,230][06167] Avg episode reward: [(0, '5.540'), (1, '5.570')] +[2023-09-27 06:57:08,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14131200. Throughput: 0: 820.5, 1: 821.4. Samples: 3528688. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 06:57:08,229][06167] Avg episode reward: [(0, '5.930'), (1, '5.740')] +[2023-09-27 06:57:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14163968. Throughput: 0: 820.9, 1: 820.1. Samples: 3538476. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 06:57:13,230][06167] Avg episode reward: [(0, '5.680'), (1, '6.030')] +[2023-09-27 06:57:13,243][07019] Saving new best policy, reward=6.030! +[2023-09-27 06:57:14,312][07175] Updated weights for policy 0, policy_version 27680 (0.0014) +[2023-09-27 06:57:14,312][07176] Updated weights for policy 1, policy_version 27680 (0.0016) +[2023-09-27 06:57:18,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14196736. Throughput: 0: 818.0, 1: 818.1. Samples: 3547946. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:57:18,230][06167] Avg episode reward: [(0, '5.330'), (1, '6.340')] +[2023-09-27 06:57:18,231][07019] Saving new best policy, reward=6.340! +[2023-09-27 06:57:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14229504. Throughput: 0: 818.0, 1: 818.8. Samples: 3553066. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:57:23,230][06167] Avg episode reward: [(0, '5.530'), (1, '6.360')] +[2023-09-27 06:57:23,232][07019] Saving new best policy, reward=6.360! +[2023-09-27 06:57:26,897][07175] Updated weights for policy 0, policy_version 27840 (0.0017) +[2023-09-27 06:57:26,897][07176] Updated weights for policy 1, policy_version 27840 (0.0017) +[2023-09-27 06:57:28,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14262272. Throughput: 0: 820.2, 1: 819.1. Samples: 3562873. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:57:28,230][06167] Avg episode reward: [(0, '5.760'), (1, '6.000')] +[2023-09-27 06:57:33,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14295040. Throughput: 0: 819.8, 1: 819.7. Samples: 3572743. 
Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) +[2023-09-27 06:57:33,229][06167] Avg episode reward: [(0, '5.970'), (1, '6.180')] +[2023-09-27 06:57:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14327808. Throughput: 0: 819.2, 1: 819.6. Samples: 3577856. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 06:57:38,230][06167] Avg episode reward: [(0, '5.930'), (1, '6.040')] +[2023-09-27 06:57:39,306][07176] Updated weights for policy 1, policy_version 28000 (0.0018) +[2023-09-27 06:57:39,306][07175] Updated weights for policy 0, policy_version 28000 (0.0017) +[2023-09-27 06:57:43,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14360576. Throughput: 0: 818.4, 1: 818.7. Samples: 3587662. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 06:57:43,230][06167] Avg episode reward: [(0, '5.970'), (1, '5.830')] +[2023-09-27 06:57:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14393344. Throughput: 0: 817.6, 1: 817.4. Samples: 3597230. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 06:57:48,229][06167] Avg episode reward: [(0, '5.980'), (1, '5.660')] +[2023-09-27 06:57:51,833][07176] Updated weights for policy 1, policy_version 28160 (0.0016) +[2023-09-27 06:57:51,834][07175] Updated weights for policy 0, policy_version 28160 (0.0018) +[2023-09-27 06:57:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14426112. Throughput: 0: 819.2, 1: 817.8. Samples: 3602355. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 06:57:53,230][06167] Avg episode reward: [(0, '6.070'), (1, '6.040')] +[2023-09-27 06:57:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14458880. Throughput: 0: 818.1, 1: 818.2. Samples: 3612106. 
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 06:57:58,230][06167] Avg episode reward: [(0, '6.030'), (1, '6.400')]
+[2023-09-27 06:57:58,238][07019] Saving new best policy, reward=6.400!
+[2023-09-27 06:58:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 14491648. Throughput: 0: 819.6, 1: 819.9. Samples: 3621724. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 06:58:03,230][06167] Avg episode reward: [(0, '6.440'), (1, '6.630')]
+[2023-09-27 06:58:03,231][06938] Saving new best policy, reward=6.440!
+[2023-09-27 06:58:03,231][07019] Saving new best policy, reward=6.630!
+[2023-09-27 06:58:04,446][07175] Updated weights for policy 0, policy_version 28320 (0.0017)
+[2023-09-27 06:58:04,446][07176] Updated weights for policy 1, policy_version 28320 (0.0016)
+[2023-09-27 06:58:08,229][06167] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14516224. Throughput: 0: 817.0, 1: 818.6. Samples: 3626667. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 06:58:08,229][06167] Avg episode reward: [(0, '6.210'), (1, '6.650')]
+[2023-09-27 06:58:08,372][07019] Saving new best policy, reward=6.650!
+[2023-09-27 06:58:13,229][06167] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14548992. Throughput: 0: 809.8, 1: 809.9. Samples: 3635760. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 06:58:13,230][06167] Avg episode reward: [(0, '5.810'), (1, '6.820')]
+[2023-09-27 06:58:13,245][07019] Saving new best policy, reward=6.820!
+[2023-09-27 06:58:17,264][07175] Updated weights for policy 0, policy_version 28480 (0.0017)
+[2023-09-27 06:58:17,264][07176] Updated weights for policy 1, policy_version 28480 (0.0019)
+[2023-09-27 06:58:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14581760. Throughput: 0: 809.1, 1: 809.2. Samples: 3645567. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:18,229][06167] Avg episode reward: [(0, '5.970'), (1, '6.570')]
+[2023-09-27 06:58:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14614528. Throughput: 0: 808.5, 1: 807.8. Samples: 3650589. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:23,230][06167] Avg episode reward: [(0, '6.390'), (1, '6.420')]
+[2023-09-27 06:58:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14647296. Throughput: 0: 806.0, 1: 805.6. Samples: 3660183. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:28,230][06167] Avg episode reward: [(0, '6.710'), (1, '6.190')]
+[2023-09-27 06:58:28,240][06938] Saving new best policy, reward=6.710!
+[2023-09-27 06:58:29,891][07175] Updated weights for policy 0, policy_version 28640 (0.0017)
+[2023-09-27 06:58:29,891][07176] Updated weights for policy 1, policy_version 28640 (0.0018)
+[2023-09-27 06:58:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.0, 300 sec: 6525.8). Total num frames: 14680064. Throughput: 0: 807.9, 1: 809.6. Samples: 3670016. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:33,230][06167] Avg episode reward: [(0, '6.200'), (1, '6.370')]
+[2023-09-27 06:58:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14712832. Throughput: 0: 802.0, 1: 802.2. Samples: 3674547. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:38,230][06167] Avg episode reward: [(0, '6.530'), (1, '6.470')]
+[2023-09-27 06:58:42,677][07175] Updated weights for policy 0, policy_version 28800 (0.0016)
+[2023-09-27 06:58:42,677][07176] Updated weights for policy 1, policy_version 28800 (0.0015)
+[2023-09-27 06:58:43,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14745600. Throughput: 0: 801.8, 1: 803.7. Samples: 3684352. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:43,229][06167] Avg episode reward: [(0, '6.580'), (1, '6.390')]
+[2023-09-27 06:58:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14778368. Throughput: 0: 806.4, 1: 806.6. Samples: 3694308. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:48,229][06167] Avg episode reward: [(0, '6.030'), (1, '5.920')]
+[2023-09-27 06:58:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14811136. Throughput: 0: 804.5, 1: 802.4. Samples: 3698976. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:53,229][06167] Avg episode reward: [(0, '6.120'), (1, '6.290')]
+[2023-09-27 06:58:55,324][07175] Updated weights for policy 0, policy_version 28960 (0.0016)
+[2023-09-27 06:58:55,324][07176] Updated weights for policy 1, policy_version 28960 (0.0018)
+[2023-09-27 06:58:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14843904. Throughput: 0: 812.2, 1: 813.2. Samples: 3708901. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:58:58,230][06167] Avg episode reward: [(0, '6.130'), (1, '6.390')]
+[2023-09-27 06:58:58,237][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000028992_7421952.pth...
+[2023-09-27 06:58:58,238][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000028992_7421952.pth...
+[2023-09-27 06:58:58,273][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000025936_6639616.pth
+[2023-09-27 06:58:58,274][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000025936_6639616.pth
+[2023-09-27 06:59:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 14876672. Throughput: 0: 812.8, 1: 812.5. Samples: 3718703. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:59:03,229][06167] Avg episode reward: [(0, '6.010'), (1, '6.350')]
+[2023-09-27 06:59:07,988][07175] Updated weights for policy 0, policy_version 29120 (0.0018)
+[2023-09-27 06:59:07,988][07176] Updated weights for policy 1, policy_version 29120 (0.0015)
+[2023-09-27 06:59:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14909440. Throughput: 0: 807.2, 1: 807.8. Samples: 3723265. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:59:08,230][06167] Avg episode reward: [(0, '6.460'), (1, '6.200')]
+[2023-09-27 06:59:13,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14942208. Throughput: 0: 810.0, 1: 811.1. Samples: 3733135. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:59:13,230][06167] Avg episode reward: [(0, '6.760'), (1, '6.410')]
+[2023-09-27 06:59:13,243][06938] Saving new best policy, reward=6.760!
+[2023-09-27 06:59:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 14974976. Throughput: 0: 810.0, 1: 807.0. Samples: 3742778. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:59:18,229][06167] Avg episode reward: [(0, '6.530'), (1, '7.000')]
+[2023-09-27 06:59:18,230][07019] Saving new best policy, reward=7.000!
+[2023-09-27 06:59:20,591][07175] Updated weights for policy 0, policy_version 29280 (0.0017)
+[2023-09-27 06:59:20,591][07176] Updated weights for policy 1, policy_version 29280 (0.0017)
+[2023-09-27 06:59:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15007744. Throughput: 0: 813.6, 1: 814.1. Samples: 3747793. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:59:23,230][06167] Avg episode reward: [(0, '6.680'), (1, '6.770')]
+[2023-09-27 06:59:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15040512. Throughput: 0: 814.2, 1: 812.4. Samples: 3757552. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:59:28,229][06167] Avg episode reward: [(0, '6.580'), (1, '6.690')]
+[2023-09-27 06:59:33,107][07175] Updated weights for policy 0, policy_version 29440 (0.0016)
+[2023-09-27 06:59:33,107][07176] Updated weights for policy 1, policy_version 29440 (0.0018)
+[2023-09-27 06:59:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15073280. Throughput: 0: 810.0, 1: 809.5. Samples: 3767183. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:59:33,230][06167] Avg episode reward: [(0, '6.620'), (1, '6.580')]
+[2023-09-27 06:59:38,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15106048. Throughput: 0: 815.3, 1: 815.4. Samples: 3772355. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:59:38,229][06167] Avg episode reward: [(0, '6.480'), (1, '6.770')]
+[2023-09-27 06:59:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15138816. Throughput: 0: 816.2, 1: 815.2. Samples: 3782311. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:59:43,229][06167] Avg episode reward: [(0, '6.880'), (1, '6.050')]
+[2023-09-27 06:59:43,240][06938] Saving new best policy, reward=6.880!
+[2023-09-27 06:59:45,414][07175] Updated weights for policy 0, policy_version 29600 (0.0014)
+[2023-09-27 06:59:45,414][07176] Updated weights for policy 1, policy_version 29600 (0.0016)
+[2023-09-27 06:59:48,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15171584. Throughput: 0: 817.2, 1: 817.6. Samples: 3792266. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:59:48,230][06167] Avg episode reward: [(0, '6.970'), (1, '5.880')]
+[2023-09-27 06:59:48,231][06938] Saving new best policy, reward=6.970!
+[2023-09-27 06:59:53,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15204352. Throughput: 0: 819.8, 1: 819.3. Samples: 3797025. Policy #0 lag: (min: 10.0, avg: 10.0, max: 10.0)
+[2023-09-27 06:59:53,230][06167] Avg episode reward: [(0, '6.770'), (1, '6.060')]
+[2023-09-27 06:59:57,915][07176] Updated weights for policy 1, policy_version 29760 (0.0015)
+[2023-09-27 06:59:57,916][07175] Updated weights for policy 0, policy_version 29760 (0.0016)
+[2023-09-27 06:59:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15237120. Throughput: 0: 823.2, 1: 821.5. Samples: 3807146. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 06:59:58,230][06167] Avg episode reward: [(0, '6.870'), (1, '6.640')]
+[2023-09-27 07:00:03,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15269888. Throughput: 0: 819.2, 1: 820.6. Samples: 3816570. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:00:03,229][06167] Avg episode reward: [(0, '6.770'), (1, '6.200')]
+[2023-09-27 07:00:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15302656. Throughput: 0: 819.2, 1: 820.2. Samples: 3821568. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:00:08,229][06167] Avg episode reward: [(0, '6.550'), (1, '6.100')]
+[2023-09-27 07:00:10,570][07175] Updated weights for policy 0, policy_version 29920 (0.0017)
+[2023-09-27 07:00:10,570][07176] Updated weights for policy 1, policy_version 29920 (0.0016)
+[2023-09-27 07:00:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15335424. Throughput: 0: 819.2, 1: 819.5. Samples: 3831290. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:00:13,230][06167] Avg episode reward: [(0, '6.880'), (1, '6.530')]
+[2023-09-27 07:00:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15368192. Throughput: 0: 822.0, 1: 822.7. Samples: 3841195. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:00:18,229][06167] Avg episode reward: [(0, '6.860'), (1, '7.030')]
+[2023-09-27 07:00:18,230][07019] Saving new best policy, reward=7.030!
+[2023-09-27 07:00:22,990][07175] Updated weights for policy 0, policy_version 30080 (0.0016)
+[2023-09-27 07:00:22,990][07176] Updated weights for policy 1, policy_version 30080 (0.0016)
+[2023-09-27 07:00:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15400960. Throughput: 0: 819.3, 1: 820.5. Samples: 3846144. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:00:23,230][06167] Avg episode reward: [(0, '6.880'), (1, '6.570')]
+[2023-09-27 07:00:28,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15433728. Throughput: 0: 817.7, 1: 817.7. Samples: 3855904. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:00:28,229][06167] Avg episode reward: [(0, '6.750'), (1, '6.430')]
+[2023-09-27 07:00:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15466496. Throughput: 0: 813.6, 1: 813.5. Samples: 3865487. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:00:33,229][06167] Avg episode reward: [(0, '6.980'), (1, '6.650')]
+[2023-09-27 07:00:33,230][06938] Saving new best policy, reward=6.980!
+[2023-09-27 07:00:35,595][07175] Updated weights for policy 0, policy_version 30240 (0.0016)
+[2023-09-27 07:00:35,595][07176] Updated weights for policy 1, policy_version 30240 (0.0017)
+[2023-09-27 07:00:38,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15499264. Throughput: 0: 818.5, 1: 817.6. Samples: 3870651. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:00:38,230][06167] Avg episode reward: [(0, '6.830'), (1, '6.720')]
+[2023-09-27 07:00:43,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15532032. Throughput: 0: 813.9, 1: 814.7. Samples: 3880436. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:00:43,230][06167] Avg episode reward: [(0, '6.750'), (1, '6.290')]
+[2023-09-27 07:00:48,065][07175] Updated weights for policy 0, policy_version 30400 (0.0017)
+[2023-09-27 07:00:48,065][07176] Updated weights for policy 1, policy_version 30400 (0.0016)
+[2023-09-27 07:00:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15564800. Throughput: 0: 818.4, 1: 818.5. Samples: 3890234. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:00:48,230][06167] Avg episode reward: [(0, '6.770'), (1, '6.390')]
+[2023-09-27 07:00:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15597568. Throughput: 0: 819.2, 1: 818.9. Samples: 3895281. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:00:53,230][06167] Avg episode reward: [(0, '6.620'), (1, '6.460')]
+[2023-09-27 07:00:58,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15630336. Throughput: 0: 818.1, 1: 817.4. Samples: 3904888. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:00:58,229][06167] Avg episode reward: [(0, '7.120'), (1, '6.520')]
+[2023-09-27 07:00:58,236][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000030528_7815168.pth...
+[2023-09-27 07:00:58,236][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000030528_7815168.pth...
+[2023-09-27 07:00:58,267][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000027472_7032832.pth
+[2023-09-27 07:00:58,271][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000027472_7032832.pth
+[2023-09-27 07:00:58,271][06938] Saving new best policy, reward=7.120!
+[2023-09-27 07:01:00,661][07175] Updated weights for policy 0, policy_version 30560 (0.0016)
+[2023-09-27 07:01:00,662][07176] Updated weights for policy 1, policy_version 30560 (0.0016)
+[2023-09-27 07:01:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15663104. Throughput: 0: 815.9, 1: 815.3. Samples: 3914599. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:01:03,230][06167] Avg episode reward: [(0, '6.730'), (1, '6.190')]
+[2023-09-27 07:01:08,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 15695872. Throughput: 0: 816.0, 1: 814.4. Samples: 3919510. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:01:08,230][06167] Avg episode reward: [(0, '7.060'), (1, '6.400')]
+[2023-09-27 07:01:13,229][06167] Fps is (10 sec: 5734.3, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15720448. Throughput: 0: 814.8, 1: 814.7. Samples: 3929235. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:01:13,230][06167] Avg episode reward: [(0, '7.420'), (1, '6.550')]
+[2023-09-27 07:01:13,275][06938] Saving new best policy, reward=7.420!
+[2023-09-27 07:01:13,438][07175] Updated weights for policy 0, policy_version 30720 (0.0017)
+[2023-09-27 07:01:13,438][07176] Updated weights for policy 1, policy_version 30720 (0.0017)
+[2023-09-27 07:01:18,229][06167] Fps is (10 sec: 5734.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15753216. Throughput: 0: 811.0, 1: 810.9. Samples: 3938471. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:01:18,229][06167] Avg episode reward: [(0, '7.230'), (1, '6.730')]
+[2023-09-27 07:01:23,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15785984. Throughput: 0: 811.7, 1: 811.3. Samples: 3943681. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:01:23,229][06167] Avg episode reward: [(0, '7.290'), (1, '7.040')]
+[2023-09-27 07:01:23,413][07019] Saving new best policy, reward=7.040!
+[2023-09-27 07:01:25,921][07175] Updated weights for policy 0, policy_version 30880 (0.0016)
+[2023-09-27 07:01:25,922][07176] Updated weights for policy 1, policy_version 30880 (0.0015)
+[2023-09-27 07:01:28,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 15818752. Throughput: 0: 811.9, 1: 811.4. Samples: 3953484. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:01:28,230][06167] Avg episode reward: [(0, '7.290'), (1, '6.960')]
+[2023-09-27 07:01:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15851520. Throughput: 0: 811.2, 1: 811.2. Samples: 3963239. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:01:33,229][06167] Avg episode reward: [(0, '6.560'), (1, '6.980')]
+[2023-09-27 07:01:38,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15884288. Throughput: 0: 811.8, 1: 809.9. Samples: 3968259. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:01:38,229][06167] Avg episode reward: [(0, '6.320'), (1, '6.900')]
+[2023-09-27 07:01:38,482][07176] Updated weights for policy 1, policy_version 31040 (0.0019)
+[2023-09-27 07:01:38,483][07175] Updated weights for policy 0, policy_version 31040 (0.0017)
+[2023-09-27 07:01:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15917056. Throughput: 0: 812.0, 1: 812.8. Samples: 3978002. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:01:43,230][06167] Avg episode reward: [(0, '6.380'), (1, '6.300')]
+[2023-09-27 07:01:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15949824. Throughput: 0: 809.2, 1: 810.3. Samples: 3987475. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:01:48,229][06167] Avg episode reward: [(0, '5.860'), (1, '5.960')]
+[2023-09-27 07:01:51,101][07175] Updated weights for policy 0, policy_version 31200 (0.0018)
+[2023-09-27 07:01:51,102][07176] Updated weights for policy 1, policy_version 31200 (0.0017)
+[2023-09-27 07:01:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 15982592. Throughput: 0: 811.6, 1: 811.2. Samples: 3992537. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:01:53,230][06167] Avg episode reward: [(0, '6.040'), (1, '6.100')]
+[2023-09-27 07:01:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6417.0, 300 sec: 6498.1). Total num frames: 16015360. Throughput: 0: 812.3, 1: 812.4. Samples: 4002347. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 07:01:58,230][06167] Avg episode reward: [(0, '6.020'), (1, '6.540')]
+[2023-09-27 07:02:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16048128. Throughput: 0: 816.7, 1: 818.0. Samples: 4012033. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 07:02:03,230][06167] Avg episode reward: [(0, '6.040'), (1, '6.870')]
+[2023-09-27 07:02:03,632][07175] Updated weights for policy 0, policy_version 31360 (0.0018)
+[2023-09-27 07:02:03,632][07176] Updated weights for policy 1, policy_version 31360 (0.0017)
+[2023-09-27 07:02:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6417.1, 300 sec: 6498.1). Total num frames: 16080896. Throughput: 0: 814.7, 1: 814.6. Samples: 4016999. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 07:02:08,230][06167] Avg episode reward: [(0, '6.050'), (1, '7.310')]
+[2023-09-27 07:02:08,231][07019] Saving new best policy, reward=7.310!
+[2023-09-27 07:02:13,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16113664. Throughput: 0: 813.2, 1: 813.3. Samples: 4026677. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 07:02:13,229][06167] Avg episode reward: [(0, '6.340'), (1, '7.910')]
+[2023-09-27 07:02:13,239][07019] Saving new best policy, reward=7.910!
+[2023-09-27 07:02:16,170][07175] Updated weights for policy 0, policy_version 31520 (0.0016)
+[2023-09-27 07:02:16,170][07176] Updated weights for policy 1, policy_version 31520 (0.0017)
+[2023-09-27 07:02:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16146432. Throughput: 0: 814.6, 1: 815.9. Samples: 4036612. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0)
+[2023-09-27 07:02:18,230][06167] Avg episode reward: [(0, '6.190'), (1, '7.810')]
+[2023-09-27 07:02:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16179200. Throughput: 0: 814.4, 1: 815.6. Samples: 4041610. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:02:23,230][06167] Avg episode reward: [(0, '6.280'), (1, '7.490')]
+[2023-09-27 07:02:28,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16211968. Throughput: 0: 813.8, 1: 813.4. Samples: 4051226. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:02:28,230][06167] Avg episode reward: [(0, '5.980'), (1, '7.740')]
+[2023-09-27 07:02:28,644][07176] Updated weights for policy 1, policy_version 31680 (0.0015)
+[2023-09-27 07:02:28,645][07175] Updated weights for policy 0, policy_version 31680 (0.0017)
+[2023-09-27 07:02:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16244736. Throughput: 0: 819.0, 1: 819.1. Samples: 4061188. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:02:33,229][06167] Avg episode reward: [(0, '6.180'), (1, '8.050')]
+[2023-09-27 07:02:33,230][07019] Saving new best policy, reward=8.050!
+[2023-09-27 07:02:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16277504. Throughput: 0: 816.9, 1: 817.2. Samples: 4066071. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:02:38,230][06167] Avg episode reward: [(0, '6.010'), (1, '7.630')]
+[2023-09-27 07:02:41,177][07176] Updated weights for policy 1, policy_version 31840 (0.0018)
+[2023-09-27 07:02:41,178][07175] Updated weights for policy 0, policy_version 31840 (0.0018)
+[2023-09-27 07:02:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16310272. Throughput: 0: 816.4, 1: 816.6. Samples: 4075830. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 07:02:43,230][06167] Avg episode reward: [(0, '5.710'), (1, '7.370')]
+[2023-09-27 07:02:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16343040. Throughput: 0: 819.3, 1: 819.2. Samples: 4085764. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 07:02:48,229][06167] Avg episode reward: [(0, '6.030'), (1, '7.440')]
+[2023-09-27 07:02:53,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16375808. Throughput: 0: 820.1, 1: 820.5. Samples: 4090827. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 07:02:53,230][06167] Avg episode reward: [(0, '6.050'), (1, '7.460')]
+[2023-09-27 07:02:53,558][07176] Updated weights for policy 1, policy_version 32000 (0.0016)
+[2023-09-27 07:02:53,559][07175] Updated weights for policy 0, policy_version 32000 (0.0018)
+[2023-09-27 07:02:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6498.1). Total num frames: 16408576. Throughput: 0: 822.2, 1: 822.3. Samples: 4100681. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0)
+[2023-09-27 07:02:58,230][06167] Avg episode reward: [(0, '5.780'), (1, '6.630')]
+[2023-09-27 07:02:58,242][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000032048_8204288.pth...
+[2023-09-27 07:02:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000032048_8204288.pth...
+[2023-09-27 07:02:58,291][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000028992_7421952.pth
+[2023-09-27 07:02:58,293][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000028992_7421952.pth
+[2023-09-27 07:03:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16441344. Throughput: 0: 821.7, 1: 820.3. Samples: 4110501. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 07:03:03,230][06167] Avg episode reward: [(0, '5.750'), (1, '6.120')]
+[2023-09-27 07:03:05,965][07175] Updated weights for policy 0, policy_version 32160 (0.0016)
+[2023-09-27 07:03:05,965][07176] Updated weights for policy 1, policy_version 32160 (0.0018)
+[2023-09-27 07:03:08,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16474112. Throughput: 0: 823.1, 1: 822.7. Samples: 4115672. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 07:03:08,229][06167] Avg episode reward: [(0, '6.410'), (1, '5.800')]
+[2023-09-27 07:03:13,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16506880. Throughput: 0: 824.4, 1: 824.6. Samples: 4125429. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 07:03:13,230][06167] Avg episode reward: [(0, '6.490'), (1, '5.920')]
+[2023-09-27 07:03:18,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16539648. Throughput: 0: 823.0, 1: 821.7. Samples: 4135201. Policy #0 lag: (min: 4.0, avg: 4.0, max: 4.0)
+[2023-09-27 07:03:18,230][06167] Avg episode reward: [(0, '6.870'), (1, '5.520')]
+[2023-09-27 07:03:18,397][07175] Updated weights for policy 0, policy_version 32320 (0.0017)
+[2023-09-27 07:03:18,397][07176] Updated weights for policy 1, policy_version 32320 (0.0017)
+[2023-09-27 07:03:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16572416. Throughput: 0: 825.3, 1: 825.3. Samples: 4140350. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:03:23,230][06167] Avg episode reward: [(0, '7.260'), (1, '5.840')]
+[2023-09-27 07:03:28,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16605184. Throughput: 0: 823.8, 1: 823.5. Samples: 4149961. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:03:28,230][06167] Avg episode reward: [(0, '7.450'), (1, '6.330')]
+[2023-09-27 07:03:28,238][06938] Saving new best policy, reward=7.450!
+[2023-09-27 07:03:31,038][07175] Updated weights for policy 0, policy_version 32480 (0.0015)
+[2023-09-27 07:03:31,039][07176] Updated weights for policy 1, policy_version 32480 (0.0017)
+[2023-09-27 07:03:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16637952. Throughput: 0: 821.3, 1: 820.0. Samples: 4159623. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:03:33,229][06167] Avg episode reward: [(0, '7.950'), (1, '6.430')]
+[2023-09-27 07:03:33,230][06938] Saving new best policy, reward=7.950!
+[2023-09-27 07:03:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16670720. Throughput: 0: 819.9, 1: 819.9. Samples: 4164618. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:03:38,230][06167] Avg episode reward: [(0, '7.180'), (1, '6.340')]
+[2023-09-27 07:03:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16703488. Throughput: 0: 812.1, 1: 813.4. Samples: 4173829. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:03:43,229][06167] Avg episode reward: [(0, '7.210'), (1, '6.480')]
+[2023-09-27 07:03:43,833][07176] Updated weights for policy 1, policy_version 32640 (0.0014)
+[2023-09-27 07:03:43,833][07175] Updated weights for policy 0, policy_version 32640 (0.0018)
+[2023-09-27 07:03:48,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16736256. Throughput: 0: 814.2, 1: 813.6. Samples: 4183754. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:03:48,230][06167] Avg episode reward: [(0, '7.040'), (1, '7.070')]
+[2023-09-27 07:03:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16769024. Throughput: 0: 806.0, 1: 806.3. Samples: 4188224. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:03:53,229][06167] Avg episode reward: [(0, '7.120'), (1, '7.000')]
+[2023-09-27 07:03:56,475][07176] Updated weights for policy 1, policy_version 32800 (0.0018)
+[2023-09-27 07:03:56,475][07175] Updated weights for policy 0, policy_version 32800 (0.0018)
+[2023-09-27 07:03:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16801792. Throughput: 0: 810.1, 1: 811.5. Samples: 4198400. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:03:58,230][06167] Avg episode reward: [(0, '6.650'), (1, '7.570')]
+[2023-09-27 07:04:03,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16834560. Throughput: 0: 813.3, 1: 813.2. Samples: 4208396. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:04:03,230][06167] Avg episode reward: [(0, '6.810'), (1, '7.590')]
+[2023-09-27 07:04:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16867328. Throughput: 0: 807.6, 1: 808.0. Samples: 4213051. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 07:04:08,229][06167] Avg episode reward: [(0, '6.880'), (1, '7.780')]
+[2023-09-27 07:04:08,861][07175] Updated weights for policy 0, policy_version 32960 (0.0016)
+[2023-09-27 07:04:08,861][07176] Updated weights for policy 1, policy_version 32960 (0.0017)
+[2023-09-27 07:04:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16900096. Throughput: 0: 810.7, 1: 812.2. Samples: 4222995. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 07:04:13,230][06167] Avg episode reward: [(0, '6.260'), (1, '7.870')]
+[2023-09-27 07:04:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16932864. Throughput: 0: 817.0, 1: 818.1. Samples: 4233205. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 07:04:18,229][06167] Avg episode reward: [(0, '5.960'), (1, '7.560')]
+[2023-09-27 07:04:21,245][07175] Updated weights for policy 0, policy_version 33120 (0.0018)
+[2023-09-27 07:04:21,245][07176] Updated weights for policy 1, policy_version 33120 (0.0018)
+[2023-09-27 07:04:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16965632. Throughput: 0: 814.9, 1: 814.9. Samples: 4237960. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 07:04:23,230][06167] Avg episode reward: [(0, '5.880'), (1, '7.660')]
+[2023-09-27 07:04:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 16998400. Throughput: 0: 820.7, 1: 819.5. Samples: 4247638. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:04:28,230][06167] Avg episode reward: [(0, '5.620'), (1, '7.420')]
+[2023-09-27 07:04:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17031168. Throughput: 0: 821.6, 1: 823.7. Samples: 4257792. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:04:33,230][06167] Avg episode reward: [(0, '6.100'), (1, '6.810')]
+[2023-09-27 07:04:33,772][07175] Updated weights for policy 0, policy_version 33280 (0.0017)
+[2023-09-27 07:04:33,772][07176] Updated weights for policy 1, policy_version 33280 (0.0016)
+[2023-09-27 07:04:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17063936. Throughput: 0: 825.7, 1: 825.6. Samples: 4262532. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:04:38,230][06167] Avg episode reward: [(0, '6.880'), (1, '7.180')]
+[2023-09-27 07:04:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17096704. Throughput: 0: 822.5, 1: 821.1. Samples: 4272361. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:04:43,230][06167] Avg episode reward: [(0, '7.400'), (1, '7.750')]
+[2023-09-27 07:04:46,325][07175] Updated weights for policy 0, policy_version 33440 (0.0021)
+[2023-09-27 07:04:46,325][07176] Updated weights for policy 1, policy_version 33440 (0.0020)
+[2023-09-27 07:04:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17129472. Throughput: 0: 819.6, 1: 821.8. Samples: 4282260. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:04:48,229][06167] Avg episode reward: [(0, '7.410'), (1, '7.990')]
+[2023-09-27 07:04:53,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17162240. Throughput: 0: 818.8, 1: 818.6. Samples: 4286734. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:04:53,229][06167] Avg episode reward: [(0, '7.660'), (1, '7.850')]
+[2023-09-27 07:04:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17195008. Throughput: 0: 818.9, 1: 819.1. Samples: 4296704. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:04:58,230][06167] Avg episode reward: [(0, '7.370'), (1, '7.710')]
+[2023-09-27 07:04:58,241][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000033584_8597504.pth...
+[2023-09-27 07:04:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000033584_8597504.pth...
+[2023-09-27 07:04:58,277][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000030528_7815168.pth
+[2023-09-27 07:04:58,277][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000030528_7815168.pth
+[2023-09-27 07:04:58,969][07176] Updated weights for policy 1, policy_version 33600 (0.0019)
+[2023-09-27 07:04:58,969][07175] Updated weights for policy 0, policy_version 33600 (0.0018)
+[2023-09-27 07:05:03,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17227776. Throughput: 0: 815.3, 1: 815.0. Samples: 4306570. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:05:03,230][06167] Avg episode reward: [(0, '6.990'), (1, '7.910')]
+[2023-09-27 07:05:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17260544. Throughput: 0: 814.3, 1: 814.4. Samples: 4311253. Policy #0 lag: (min: 7.0, avg: 7.0, max: 7.0)
+[2023-09-27 07:05:08,230][06167] Avg episode reward: [(0, '6.980'), (1, '8.110')]
+[2023-09-27 07:05:08,230][07019] Saving new best policy, reward=8.110!
+[2023-09-27 07:05:11,410][07175] Updated weights for policy 0, policy_version 33760 (0.0016)
+[2023-09-27 07:05:11,411][07176] Updated weights for policy 1, policy_version 33760 (0.0015)
+[2023-09-27 07:05:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17293312. Throughput: 0: 817.6, 1: 818.9. Samples: 4321284. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:05:13,230][06167] Avg episode reward: [(0, '7.040'), (1, '7.560')]
+[2023-09-27 07:05:18,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17326080. Throughput: 0: 816.5, 1: 815.6. Samples: 4331237. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:05:18,230][06167] Avg episode reward: [(0, '7.550'), (1, '7.220')]
+[2023-09-27 07:05:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17358848. Throughput: 0: 814.8, 1: 814.6. Samples: 4335858. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:05:23,230][06167] Avg episode reward: [(0, '7.470'), (1, '8.000')]
+[2023-09-27 07:05:23,880][07175] Updated weights for policy 0, policy_version 33920 (0.0017)
+[2023-09-27 07:05:23,880][07176] Updated weights for policy 1, policy_version 33920 (0.0018)
+[2023-09-27 07:05:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17391616. Throughput: 0: 816.0, 1: 817.4. Samples: 4345862. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:05:28,230][06167] Avg episode reward: [(0, '7.740'), (1, '7.940')]
+[2023-09-27 07:05:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17424384. Throughput: 0: 816.2, 1: 814.3. Samples: 4355631. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:05:33,230][06167] Avg episode reward: [(0, '7.440'), (1, '7.890')]
+[2023-09-27 07:05:36,657][07175] Updated weights for policy 0, policy_version 34080 (0.0017)
+[2023-09-27 07:05:36,658][07176] Updated weights for policy 1, policy_version 34080 (0.0014)
+[2023-09-27 07:05:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17457152. Throughput: 0: 815.6, 1: 816.9. Samples: 4360196. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 07:05:38,230][06167] Avg episode reward: [(0, '7.680'), (1, '7.010')]
+[2023-09-27 07:05:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17489920. Throughput: 0: 814.8, 1: 814.0. Samples: 4370001. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 07:05:43,230][06167] Avg episode reward: [(0, '7.760'), (1, '7.590')]
+[2023-09-27 07:05:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17522688. Throughput: 0: 811.8, 1: 810.9. Samples: 4379588. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 07:05:48,230][06167] Avg episode reward: [(0, '7.800'), (1, '7.620')]
+[2023-09-27 07:05:49,405][07175] Updated weights for policy 0, policy_version 34240 (0.0017)
+[2023-09-27 07:05:49,405][07176] Updated weights for policy 1, policy_version 34240 (0.0019)
+[2023-09-27 07:05:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17555456. Throughput: 0: 814.9, 1: 814.3. Samples: 4384567. Policy #0 lag: (min: 2.0, avg: 2.0, max: 2.0)
+[2023-09-27 07:05:53,230][06167] Avg episode reward: [(0, '7.620'), (1, '7.570')]
+[2023-09-27 07:05:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17588224. Throughput: 0: 810.7, 1: 809.0. Samples: 4394172. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 07:05:58,230][06167] Avg episode reward: [(0, '7.640'), (1, '7.680')]
+[2023-09-27 07:06:01,960][07175] Updated weights for policy 0, policy_version 34400 (0.0015)
+[2023-09-27 07:06:01,961][07176] Updated weights for policy 1, policy_version 34400 (0.0017)
+[2023-09-27 07:06:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6525.8). Total num frames: 17620992. Throughput: 0: 808.1, 1: 807.7. Samples: 4403951.
Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 07:06:03,230][06167] Avg episode reward: [(0, '7.950'), (1, '7.820')] +[2023-09-27 07:06:08,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17653760. Throughput: 0: 813.5, 1: 813.2. Samples: 4409058. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 07:06:08,229][06167] Avg episode reward: [(0, '7.630'), (1, '7.990')] +[2023-09-27 07:06:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17686528. Throughput: 0: 812.3, 1: 810.9. Samples: 4418909. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 07:06:13,229][06167] Avg episode reward: [(0, '7.310'), (1, '7.730')] +[2023-09-27 07:06:14,450][07175] Updated weights for policy 0, policy_version 34560 (0.0017) +[2023-09-27 07:06:14,450][07176] Updated weights for policy 1, policy_version 34560 (0.0016) +[2023-09-27 07:06:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17719296. Throughput: 0: 809.8, 1: 809.5. Samples: 4428502. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 07:06:18,230][06167] Avg episode reward: [(0, '7.330'), (1, '7.550')] +[2023-09-27 07:06:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17752064. Throughput: 0: 816.5, 1: 814.6. Samples: 4433595. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 07:06:23,230][06167] Avg episode reward: [(0, '7.200'), (1, '7.420')] +[2023-09-27 07:06:26,873][07175] Updated weights for policy 0, policy_version 34720 (0.0016) +[2023-09-27 07:06:26,873][07176] Updated weights for policy 1, policy_version 34720 (0.0017) +[2023-09-27 07:06:28,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17784832. Throughput: 0: 817.4, 1: 816.4. Samples: 4443518. 
Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 07:06:28,230][06167] Avg episode reward: [(0, '7.110'), (1, '7.310')] +[2023-09-27 07:06:33,229][06167] Fps is (10 sec: 5734.4, 60 sec: 6417.1, 300 sec: 6525.8). Total num frames: 17809408. Throughput: 0: 815.7, 1: 815.9. Samples: 4453011. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 07:06:33,230][06167] Avg episode reward: [(0, '7.390'), (1, '6.620')] +[2023-09-27 07:06:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17850368. Throughput: 0: 818.1, 1: 818.4. Samples: 4458211. Policy #0 lag: (min: 12.0, avg: 12.0, max: 12.0) +[2023-09-27 07:06:38,230][06167] Avg episode reward: [(0, '7.590'), (1, '6.870')] +[2023-09-27 07:06:39,516][07175] Updated weights for policy 0, policy_version 34880 (0.0017) +[2023-09-27 07:06:39,516][07176] Updated weights for policy 1, policy_version 34880 (0.0014) +[2023-09-27 07:06:43,234][06167] Fps is (10 sec: 7368.8, 60 sec: 6553.0, 300 sec: 6553.5). Total num frames: 17883136. Throughput: 0: 818.9, 1: 818.8. Samples: 4467875. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 07:06:43,237][06167] Avg episode reward: [(0, '7.440'), (1, '7.280')] +[2023-09-27 07:06:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17915904. Throughput: 0: 818.8, 1: 818.5. Samples: 4477628. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 07:06:48,229][06167] Avg episode reward: [(0, '7.120'), (1, '7.740')] +[2023-09-27 07:06:51,950][07176] Updated weights for policy 1, policy_version 35040 (0.0018) +[2023-09-27 07:06:51,951][07175] Updated weights for policy 0, policy_version 35040 (0.0019) +[2023-09-27 07:06:53,229][06167] Fps is (10 sec: 6557.1, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17948672. Throughput: 0: 819.0, 1: 819.8. Samples: 4482806. 
Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 07:06:53,230][06167] Avg episode reward: [(0, '7.390'), (1, '7.870')] +[2023-09-27 07:06:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 17981440. Throughput: 0: 819.0, 1: 819.0. Samples: 4492622. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 07:06:58,230][06167] Avg episode reward: [(0, '7.260'), (1, '7.940')] +[2023-09-27 07:06:58,240][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000035120_8990720.pth... +[2023-09-27 07:06:58,240][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000035120_8990720.pth... +[2023-09-27 07:06:58,275][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000032048_8204288.pth +[2023-09-27 07:06:58,276][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000032048_8204288.pth +[2023-09-27 07:07:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18014208. Throughput: 0: 821.5, 1: 821.8. Samples: 4502453. Policy #0 lag: (min: 13.0, avg: 13.0, max: 13.0) +[2023-09-27 07:07:03,229][06167] Avg episode reward: [(0, '6.840'), (1, '7.920')] +[2023-09-27 07:07:04,352][07176] Updated weights for policy 1, policy_version 35200 (0.0016) +[2023-09-27 07:07:04,353][07175] Updated weights for policy 0, policy_version 35200 (0.0016) +[2023-09-27 07:07:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18046976. Throughput: 0: 821.8, 1: 822.2. Samples: 4507573. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 07:07:08,230][06167] Avg episode reward: [(0, '7.340'), (1, '8.130')] +[2023-09-27 07:07:08,231][07019] Saving new best policy, reward=8.130! +[2023-09-27 07:07:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18079744. Throughput: 0: 819.6, 1: 819.9. Samples: 4517295. 
Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 07:07:13,229][06167] Avg episode reward: [(0, '7.710'), (1, '8.480')] +[2023-09-27 07:07:13,239][07019] Saving new best policy, reward=8.480! +[2023-09-27 07:07:16,779][07176] Updated weights for policy 1, policy_version 35360 (0.0017) +[2023-09-27 07:07:16,780][07175] Updated weights for policy 0, policy_version 35360 (0.0015) +[2023-09-27 07:07:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18112512. Throughput: 0: 824.4, 1: 824.2. Samples: 4527197. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 07:07:18,229][06167] Avg episode reward: [(0, '8.140'), (1, '8.240')] +[2023-09-27 07:07:18,230][06938] Saving new best policy, reward=8.140! +[2023-09-27 07:07:23,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18145280. Throughput: 0: 821.5, 1: 823.2. Samples: 4532224. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 07:07:23,230][06167] Avg episode reward: [(0, '7.890'), (1, '7.860')] +[2023-09-27 07:07:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18178048. Throughput: 0: 827.0, 1: 827.1. Samples: 4542301. Policy #0 lag: (min: 9.0, avg: 9.0, max: 9.0) +[2023-09-27 07:07:28,230][06167] Avg episode reward: [(0, '8.350'), (1, '8.160')] +[2023-09-27 07:07:28,240][06938] Saving new best policy, reward=8.350! +[2023-09-27 07:07:29,120][07175] Updated weights for policy 0, policy_version 35520 (0.0017) +[2023-09-27 07:07:29,121][07176] Updated weights for policy 1, policy_version 35520 (0.0015) +[2023-09-27 07:07:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6690.1, 300 sec: 6553.6). Total num frames: 18210816. Throughput: 0: 827.1, 1: 826.9. Samples: 4552059. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:07:33,230][06167] Avg episode reward: [(0, '8.510'), (1, '7.600')] +[2023-09-27 07:07:33,231][06938] Saving new best policy, reward=8.510! 
+[2023-09-27 07:07:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18243584. Throughput: 0: 821.9, 1: 822.7. Samples: 4556810. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:07:38,230][06167] Avg episode reward: [(0, '8.680'), (1, '8.010')] +[2023-09-27 07:07:38,231][06938] Saving new best policy, reward=8.680! +[2023-09-27 07:07:41,522][07175] Updated weights for policy 0, policy_version 35680 (0.0015) +[2023-09-27 07:07:41,523][07176] Updated weights for policy 1, policy_version 35680 (0.0017) +[2023-09-27 07:07:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6554.2, 300 sec: 6553.6). Total num frames: 18276352. Throughput: 0: 826.1, 1: 827.5. Samples: 4567034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:07:43,230][06167] Avg episode reward: [(0, '8.240'), (1, '8.320')] +[2023-09-27 07:07:48,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18309120. Throughput: 0: 827.3, 1: 827.0. Samples: 4576899. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:07:48,229][06167] Avg episode reward: [(0, '8.640'), (1, '8.140')] +[2023-09-27 07:07:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18341888. Throughput: 0: 822.2, 1: 822.2. Samples: 4581568. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:07:53,229][06167] Avg episode reward: [(0, '8.310'), (1, '8.460')] +[2023-09-27 07:07:53,952][07175] Updated weights for policy 0, policy_version 35840 (0.0012) +[2023-09-27 07:07:53,953][07176] Updated weights for policy 1, policy_version 35840 (0.0017) +[2023-09-27 07:07:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18374656. Throughput: 0: 825.0, 1: 826.6. Samples: 4591616. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:07:58,230][06167] Avg episode reward: [(0, '8.490'), (1, '8.620')] +[2023-09-27 07:07:58,240][07019] Saving new best policy, reward=8.620! +[2023-09-27 07:08:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18407424. Throughput: 0: 826.0, 1: 825.6. Samples: 4601517. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:08:03,230][06167] Avg episode reward: [(0, '8.090'), (1, '8.590')] +[2023-09-27 07:08:06,450][07175] Updated weights for policy 0, policy_version 36000 (0.0017) +[2023-09-27 07:08:06,450][07176] Updated weights for policy 1, policy_version 36000 (0.0017) +[2023-09-27 07:08:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18440192. Throughput: 0: 823.0, 1: 821.6. Samples: 4606233. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:08:08,230][06167] Avg episode reward: [(0, '8.200'), (1, '8.190')] +[2023-09-27 07:08:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18472960. Throughput: 0: 820.1, 1: 821.9. Samples: 4616192. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:08:13,230][06167] Avg episode reward: [(0, '8.670'), (1, '7.870')] +[2023-09-27 07:08:18,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18505728. Throughput: 0: 823.3, 1: 822.9. Samples: 4626140. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 07:08:18,229][06167] Avg episode reward: [(0, '8.720'), (1, '7.690')] +[2023-09-27 07:08:18,230][06938] Saving new best policy, reward=8.720! +[2023-09-27 07:08:18,995][07175] Updated weights for policy 0, policy_version 36160 (0.0015) +[2023-09-27 07:08:18,995][07176] Updated weights for policy 1, policy_version 36160 (0.0017) +[2023-09-27 07:08:23,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18538496. 
Throughput: 0: 821.6, 1: 820.4. Samples: 4630699. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 07:08:23,230][06167] Avg episode reward: [(0, '8.450'), (1, '7.500')] +[2023-09-27 07:08:28,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18571264. Throughput: 0: 819.2, 1: 818.5. Samples: 4640730. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 07:08:28,230][06167] Avg episode reward: [(0, '7.950'), (1, '7.370')] +[2023-09-27 07:08:31,528][07175] Updated weights for policy 0, policy_version 36320 (0.0014) +[2023-09-27 07:08:31,529][07176] Updated weights for policy 1, policy_version 36320 (0.0018) +[2023-09-27 07:08:33,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18604032. Throughput: 0: 818.3, 1: 818.8. Samples: 4650569. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 07:08:33,230][06167] Avg episode reward: [(0, '7.750'), (1, '7.210')] +[2023-09-27 07:08:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18636800. Throughput: 0: 818.7, 1: 818.9. Samples: 4655263. Policy #0 lag: (min: 11.0, avg: 11.0, max: 11.0) +[2023-09-27 07:08:38,230][06167] Avg episode reward: [(0, '7.430'), (1, '7.380')] +[2023-09-27 07:08:43,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18669568. Throughput: 0: 819.2, 1: 819.2. Samples: 4665344. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 07:08:43,230][06167] Avg episode reward: [(0, '7.010'), (1, '7.640')] +[2023-09-27 07:08:44,002][07175] Updated weights for policy 0, policy_version 36480 (0.0017) +[2023-09-27 07:08:44,002][07176] Updated weights for policy 1, policy_version 36480 (0.0018) +[2023-09-27 07:08:48,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18702336. Throughput: 0: 818.0, 1: 818.0. Samples: 4675135. 
Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 07:08:48,229][06167] Avg episode reward: [(0, '7.580'), (1, '7.030')] +[2023-09-27 07:08:53,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18735104. Throughput: 0: 817.7, 1: 817.6. Samples: 4679822. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 07:08:53,230][06167] Avg episode reward: [(0, '7.690'), (1, '7.230')] +[2023-09-27 07:08:56,452][07175] Updated weights for policy 0, policy_version 36640 (0.0018) +[2023-09-27 07:08:56,452][07176] Updated weights for policy 1, policy_version 36640 (0.0015) +[2023-09-27 07:08:58,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18767872. Throughput: 0: 819.2, 1: 819.2. Samples: 4689920. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 07:08:58,230][06167] Avg episode reward: [(0, '8.040'), (1, '7.330')] +[2023-09-27 07:08:58,242][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000036656_9383936.pth... +[2023-09-27 07:08:58,242][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000036656_9383936.pth... +[2023-09-27 07:08:58,272][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000033584_8597504.pth +[2023-09-27 07:08:58,276][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000033584_8597504.pth +[2023-09-27 07:09:03,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18800640. Throughput: 0: 820.2, 1: 821.0. Samples: 4699995. Policy #0 lag: (min: 6.0, avg: 6.0, max: 6.0) +[2023-09-27 07:09:03,230][06167] Avg episode reward: [(0, '7.870'), (1, '7.960')] +[2023-09-27 07:09:08,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18833408. Throughput: 0: 821.3, 1: 821.2. Samples: 4704613. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:08,230][06167] Avg episode reward: [(0, '7.380'), (1, '7.760')] +[2023-09-27 07:09:08,911][07175] Updated weights for policy 0, policy_version 36800 (0.0018) +[2023-09-27 07:09:08,912][07176] Updated weights for policy 1, policy_version 36800 (0.0017) +[2023-09-27 07:09:13,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18866176. Throughput: 0: 819.2, 1: 820.0. Samples: 4714497. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:13,230][06167] Avg episode reward: [(0, '7.840'), (1, '7.500')] +[2023-09-27 07:09:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18898944. Throughput: 0: 821.8, 1: 821.2. Samples: 4724502. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:18,230][06167] Avg episode reward: [(0, '6.970'), (1, '8.150')] +[2023-09-27 07:09:21,517][07176] Updated weights for policy 1, policy_version 36960 (0.0017) +[2023-09-27 07:09:21,517][07175] Updated weights for policy 0, policy_version 36960 (0.0017) +[2023-09-27 07:09:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18931712. Throughput: 0: 818.4, 1: 818.6. Samples: 4728924. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:23,229][06167] Avg episode reward: [(0, '7.250'), (1, '8.770')] +[2023-09-27 07:09:23,230][07019] Saving new best policy, reward=8.770! +[2023-09-27 07:09:28,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18964480. Throughput: 0: 819.2, 1: 819.2. Samples: 4739072. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:28,230][06167] Avg episode reward: [(0, '7.530'), (1, '8.560')] +[2023-09-27 07:09:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 18997248. Throughput: 0: 823.2, 1: 824.0. Samples: 4749259. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:33,230][06167] Avg episode reward: [(0, '7.960'), (1, '8.710')] +[2023-09-27 07:09:33,878][07175] Updated weights for policy 0, policy_version 37120 (0.0017) +[2023-09-27 07:09:33,878][07176] Updated weights for policy 1, policy_version 37120 (0.0018) +[2023-09-27 07:09:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19030016. Throughput: 0: 822.1, 1: 821.9. Samples: 4753805. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:38,229][06167] Avg episode reward: [(0, '8.200'), (1, '8.540')] +[2023-09-27 07:09:43,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19062784. Throughput: 0: 819.4, 1: 819.2. Samples: 4763660. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:43,229][06167] Avg episode reward: [(0, '9.030'), (1, '8.850')] +[2023-09-27 07:09:43,238][06938] Saving new best policy, reward=9.030! +[2023-09-27 07:09:43,239][07019] Saving new best policy, reward=8.850! +[2023-09-27 07:09:46,302][07175] Updated weights for policy 0, policy_version 37280 (0.0019) +[2023-09-27 07:09:46,302][07176] Updated weights for policy 1, policy_version 37280 (0.0019) +[2023-09-27 07:09:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19095552. Throughput: 0: 820.3, 1: 821.6. Samples: 4773881. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:48,229][06167] Avg episode reward: [(0, '8.820'), (1, '9.480')] +[2023-09-27 07:09:48,230][07019] Saving new best policy, reward=9.480! +[2023-09-27 07:09:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19128320. Throughput: 0: 822.0, 1: 822.2. Samples: 4778604. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:53,230][06167] Avg episode reward: [(0, '9.230'), (1, '8.990')] +[2023-09-27 07:09:53,230][06938] Saving new best policy, reward=9.230! +[2023-09-27 07:09:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19161088. Throughput: 0: 822.4, 1: 821.0. Samples: 4788452. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:09:58,230][06167] Avg episode reward: [(0, '8.540'), (1, '8.040')] +[2023-09-27 07:09:58,783][07176] Updated weights for policy 1, policy_version 37440 (0.0018) +[2023-09-27 07:09:58,784][07175] Updated weights for policy 0, policy_version 37440 (0.0015) +[2023-09-27 07:10:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19193856. Throughput: 0: 821.0, 1: 822.5. Samples: 4798458. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:03,230][06167] Avg episode reward: [(0, '8.160'), (1, '7.950')] +[2023-09-27 07:10:08,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19226624. Throughput: 0: 823.6, 1: 823.3. Samples: 4803034. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:08,230][06167] Avg episode reward: [(0, '8.240'), (1, '7.890')] +[2023-09-27 07:10:11,278][07175] Updated weights for policy 0, policy_version 37600 (0.0017) +[2023-09-27 07:10:11,278][07176] Updated weights for policy 1, policy_version 37600 (0.0017) +[2023-09-27 07:10:13,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19259392. Throughput: 0: 821.0, 1: 819.5. Samples: 4812894. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:13,230][06167] Avg episode reward: [(0, '7.890'), (1, '8.090')] +[2023-09-27 07:10:18,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19292160. Throughput: 0: 819.2, 1: 820.4. Samples: 4823041. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:18,229][06167] Avg episode reward: [(0, '7.870'), (1, '7.500')] +[2023-09-27 07:10:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19324928. Throughput: 0: 824.7, 1: 824.4. Samples: 4828018. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:23,229][06167] Avg episode reward: [(0, '7.750'), (1, '7.550')] +[2023-09-27 07:10:23,582][07175] Updated weights for policy 0, policy_version 37760 (0.0016) +[2023-09-27 07:10:23,582][07176] Updated weights for policy 1, policy_version 37760 (0.0018) +[2023-09-27 07:10:28,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19357696. Throughput: 0: 826.0, 1: 824.9. Samples: 4837951. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:28,230][06167] Avg episode reward: [(0, '8.210'), (1, '8.210')] +[2023-09-27 07:10:33,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19390464. Throughput: 0: 819.2, 1: 819.4. Samples: 4847616. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:33,230][06167] Avg episode reward: [(0, '8.420'), (1, '8.460')] +[2023-09-27 07:10:36,347][07175] Updated weights for policy 0, policy_version 37920 (0.0017) +[2023-09-27 07:10:36,347][07176] Updated weights for policy 1, policy_version 37920 (0.0017) +[2023-09-27 07:10:38,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19423232. Throughput: 0: 818.7, 1: 819.1. Samples: 4852306. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:38,230][06167] Avg episode reward: [(0, '8.430'), (1, '8.510')] +[2023-09-27 07:10:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19456000. Throughput: 0: 816.9, 1: 817.5. Samples: 4861998. 
Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0) +[2023-09-27 07:10:43,230][06167] Avg episode reward: [(0, '8.230'), (1, '8.580')] +[2023-09-27 07:10:48,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19488768. Throughput: 0: 819.2, 1: 818.9. Samples: 4872172. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 07:10:48,229][06167] Avg episode reward: [(0, '7.750'), (1, '8.560')] +[2023-09-27 07:10:48,776][07175] Updated weights for policy 0, policy_version 38080 (0.0016) +[2023-09-27 07:10:48,777][07176] Updated weights for policy 1, policy_version 38080 (0.0017) +[2023-09-27 07:10:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19521536. Throughput: 0: 820.7, 1: 820.9. Samples: 4876904. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 07:10:53,230][06167] Avg episode reward: [(0, '8.100'), (1, '8.780')] +[2023-09-27 07:10:58,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19554304. Throughput: 0: 818.0, 1: 819.0. Samples: 4886556. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0) +[2023-09-27 07:10:58,230][06167] Avg episode reward: [(0, '7.550'), (1, '9.140')] +[2023-09-27 07:10:58,241][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000038192_9777152.pth... +[2023-09-27 07:10:58,241][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000038192_9777152.pth... +[2023-09-27 07:10:58,276][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000035120_8990720.pth +[2023-09-27 07:10:58,277][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000035120_8990720.pth +[2023-09-27 07:11:01,330][07175] Updated weights for policy 0, policy_version 38240 (0.0017) +[2023-09-27 07:11:01,330][07176] Updated weights for policy 1, policy_version 38240 (0.0018) +[2023-09-27 07:11:03,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). 
Total num frames: 19587072. Throughput: 0: 819.1, 1: 817.9. Samples: 4896704. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 07:11:03,229][06167] Avg episode reward: [(0, '8.210'), (1, '8.610')]
+[2023-09-27 07:11:08,229][06167] Fps is (10 sec: 6553.8, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19619840. Throughput: 0: 813.3, 1: 814.1. Samples: 4901254. Policy #0 lag: (min: 5.0, avg: 5.0, max: 5.0)
+[2023-09-27 07:11:08,229][06167] Avg episode reward: [(0, '7.550'), (1, '8.760')]
+[2023-09-27 07:11:13,229][06167] Fps is (10 sec: 6553.4, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19652608. Throughput: 0: 812.2, 1: 813.4. Samples: 4911102. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:11:13,230][06167] Avg episode reward: [(0, '7.900'), (1, '8.840')]
+[2023-09-27 07:11:13,991][07175] Updated weights for policy 0, policy_version 38400 (0.0017)
+[2023-09-27 07:11:13,991][07176] Updated weights for policy 1, policy_version 38400 (0.0017)
+[2023-09-27 07:11:18,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19685376. Throughput: 0: 816.8, 1: 815.6. Samples: 4921074. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:11:18,230][06167] Avg episode reward: [(0, '7.740'), (1, '8.740')]
+[2023-09-27 07:11:23,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19718144. Throughput: 0: 816.3, 1: 815.7. Samples: 4925748. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:11:23,230][06167] Avg episode reward: [(0, '7.550'), (1, '8.150')]
+[2023-09-27 07:11:26,435][07176] Updated weights for policy 1, policy_version 38560 (0.0018)
+[2023-09-27 07:11:26,435][07175] Updated weights for policy 0, policy_version 38560 (0.0017)
+[2023-09-27 07:11:28,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6581.4). Total num frames: 19750912. Throughput: 0: 818.3, 1: 819.1. Samples: 4935680. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:11:28,229][06167] Avg episode reward: [(0, '7.230'), (1, '7.890')]
+[2023-09-27 07:11:33,229][06167] Fps is (10 sec: 6553.7, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19783680. Throughput: 0: 818.0, 1: 817.0. Samples: 4945749. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
+[2023-09-27 07:11:33,229][06167] Avg episode reward: [(0, '7.590'), (1, '8.520')]
+[2023-09-27 07:11:38,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.7). Total num frames: 19816448. Throughput: 0: 817.2, 1: 817.0. Samples: 4950441. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:11:38,229][06167] Avg episode reward: [(0, '8.190'), (1, '7.880')]
+[2023-09-27 07:11:38,804][07175] Updated weights for policy 0, policy_version 38720 (0.0015)
+[2023-09-27 07:11:38,804][07176] Updated weights for policy 1, policy_version 38720 (0.0018)
+[2023-09-27 07:11:43,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19849216. Throughput: 0: 820.7, 1: 819.6. Samples: 4960373. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:11:43,230][06167] Avg episode reward: [(0, '8.280'), (1, '8.550')]
+[2023-09-27 07:11:48,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19881984. Throughput: 0: 819.3, 1: 820.5. Samples: 4970496. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:11:48,229][06167] Avg episode reward: [(0, '7.900'), (1, '9.020')]
+[2023-09-27 07:11:51,113][07176] Updated weights for policy 1, policy_version 38880 (0.0017)
+[2023-09-27 07:11:51,114][07175] Updated weights for policy 0, policy_version 38880 (0.0018)
+[2023-09-27 07:11:53,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19914752. Throughput: 0: 825.2, 1: 825.0. Samples: 4975513. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:11:53,229][06167] Avg episode reward: [(0, '8.330'), (1, '8.670')]
+[2023-09-27 07:11:58,229][06167] Fps is (10 sec: 6553.6, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19947520. Throughput: 0: 825.9, 1: 824.3. Samples: 4985363. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:11:58,229][06167] Avg episode reward: [(0, '8.500'), (1, '8.700')]
+[2023-09-27 07:12:03,229][06167] Fps is (10 sec: 6553.5, 60 sec: 6553.6, 300 sec: 6553.6). Total num frames: 19980288. Throughput: 0: 823.0, 1: 823.0. Samples: 4995143. Policy #0 lag: (min: 15.0, avg: 15.0, max: 15.0)
+[2023-09-27 07:12:03,230][06167] Avg episode reward: [(0, '8.450'), (1, '9.050')]
+[2023-09-27 07:12:03,542][07175] Updated weights for policy 0, policy_version 39040 (0.0018)
+[2023-09-27 07:12:03,542][07176] Updated weights for policy 1, policy_version 39040 (0.0017)
+[2023-09-27 07:12:07,264][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-27 07:12:07,265][07221] Stopping RolloutWorker_w3...
+[2023-09-27 07:12:07,265][06167] Component Batcher_0 stopped!
+[2023-09-27 07:12:07,265][07223] Stopping RolloutWorker_w4...
+[2023-09-27 07:12:07,265][07224] Stopping RolloutWorker_w6...
+[2023-09-27 07:12:07,265][07225] Stopping RolloutWorker_w5...
+[2023-09-27 07:12:07,265][07226] Stopping RolloutWorker_w7...
+[2023-09-27 07:12:07,265][07220] Stopping RolloutWorker_w2...
+[2023-09-27 07:12:07,265][07219] Stopping RolloutWorker_w1...
+[2023-09-27 07:12:07,265][07221] Loop rollout_proc3_evt_loop terminating...
+[2023-09-27 07:12:07,265][07177] Stopping RolloutWorker_w0...
+[2023-09-27 07:12:07,265][06167] Component RolloutWorker_w3 stopped!
+[2023-09-27 07:12:07,265][07019] Stopping Batcher_1...
+[2023-09-27 07:12:07,265][07223] Loop rollout_proc4_evt_loop terminating...
+[2023-09-27 07:12:07,266][07226] Loop rollout_proc7_evt_loop terminating...
+[2023-09-27 07:12:07,266][06167] Component RolloutWorker_w6 stopped!
+[2023-09-27 07:12:07,266][07224] Loop rollout_proc6_evt_loop terminating...
+[2023-09-27 07:12:07,266][07220] Loop rollout_proc2_evt_loop terminating...
+[2023-09-27 07:12:07,266][07219] Loop rollout_proc1_evt_loop terminating...
+[2023-09-27 07:12:07,266][07225] Loop rollout_proc5_evt_loop terminating...
+[2023-09-27 07:12:07,266][07177] Loop rollout_proc0_evt_loop terminating...
+[2023-09-27 07:12:07,266][06167] Component RolloutWorker_w4 stopped!
+[2023-09-27 07:12:07,266][07019] Loop batcher_evt_loop terminating...
+[2023-09-27 07:12:07,266][06167] Component RolloutWorker_w5 stopped!
+[2023-09-27 07:12:07,267][06167] Component RolloutWorker_w2 stopped!
+[2023-09-27 07:12:07,267][06167] Component RolloutWorker_w1 stopped!
+[2023-09-27 07:12:07,267][06167] Component RolloutWorker_w7 stopped!
+[2023-09-27 07:12:07,268][06167] Component RolloutWorker_w0 stopped!
+[2023-09-27 07:12:07,268][06167] Component Batcher_1 stopped!
+[2023-09-27 07:12:07,265][06938] Stopping Batcher_0...
+[2023-09-27 07:12:07,280][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-27 07:12:07,285][06938] Loop batcher_evt_loop terminating...
+[2023-09-27 07:12:07,295][06938] Removing ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000036656_9383936.pth
+[2023-09-27 07:12:07,299][06938] Saving ./train_atari/atari_stargunner/checkpoint_p0/checkpoint_000039088_10006528.pth...
+[2023-09-27 07:12:07,309][07019] Removing ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000036656_9383936.pth
+[2023-09-27 07:12:07,313][07019] Saving ./train_atari/atari_stargunner/checkpoint_p1/checkpoint_000039088_10006528.pth...
+[2023-09-27 07:12:07,320][07176] Weights refcount: 2 0
+[2023-09-27 07:12:07,322][07176] Stopping InferenceWorker_p1-w0...
+[2023-09-27 07:12:07,323][07176] Loop inference_proc1-0_evt_loop terminating...
+[2023-09-27 07:12:07,322][06167] Component InferenceWorker_p1-w0 stopped!
+[2023-09-27 07:12:07,335][06938] Stopping LearnerWorker_p0...
+[2023-09-27 07:12:07,335][07175] Weights refcount: 2 0
+[2023-09-27 07:12:07,335][06938] Loop learner_proc0_evt_loop terminating...
+[2023-09-27 07:12:07,335][06167] Component LearnerWorker_p0 stopped!
+[2023-09-27 07:12:07,336][07175] Stopping InferenceWorker_p0-w0...
+[2023-09-27 07:12:07,336][07175] Loop inference_proc0-0_evt_loop terminating...
+[2023-09-27 07:12:07,337][06167] Component InferenceWorker_p0-w0 stopped!
+[2023-09-27 07:12:07,349][07019] Stopping LearnerWorker_p1...
+[2023-09-27 07:12:07,349][07019] Loop learner_proc1_evt_loop terminating...
+[2023-09-27 07:12:07,349][06167] Component LearnerWorker_p1 stopped!
+[2023-09-27 07:12:07,350][06167] Waiting for process learner_proc0 to stop...
+[2023-09-27 07:12:08,055][06167] Waiting for process learner_proc1 to stop...
+[2023-09-27 07:12:08,081][06167] Waiting for process inference_proc0-0 to join...
+[2023-09-27 07:12:08,082][06167] Waiting for process inference_proc1-0 to join...
+[2023-09-27 07:12:08,083][06167] Waiting for process rollout_proc0 to join...
+[2023-09-27 07:12:08,083][06167] Waiting for process rollout_proc1 to join...
+[2023-09-27 07:12:08,084][06167] Waiting for process rollout_proc2 to join...
+[2023-09-27 07:12:08,085][06167] Waiting for process rollout_proc3 to join...
+[2023-09-27 07:12:08,085][06167] Waiting for process rollout_proc4 to join...
+[2023-09-27 07:12:08,086][06167] Waiting for process rollout_proc5 to join...
+[2023-09-27 07:12:08,086][06167] Waiting for process rollout_proc6 to join...
+[2023-09-27 07:12:08,087][06167] Waiting for process rollout_proc7 to join...
+[2023-09-27 07:12:08,087][06167] Batcher 0 profile tree view:
+batching: 21.2088, releasing_batches: 1.7160
+[2023-09-27 07:12:08,088][06167] Batcher 1 profile tree view:
+batching: 20.8340, releasing_batches: 1.6852
+[2023-09-27 07:12:08,088][06167] InferenceWorker_p0-w0 profile tree view:
+wait_policy: 0.0052
+ wait_policy_total: 624.3000
+update_model: 35.8637
+ weight_update: 0.0017
+one_step: 0.0012
+ handle_policy_step: 2201.2694
+ deserialize: 66.7673, stack: 15.7438, obs_to_device_normalize: 533.5212, forward: 1057.3362, send_messages: 93.7867
+ prepare_outputs: 294.7649
+ to_cpu: 148.5147
+[2023-09-27 07:12:08,089][06167] InferenceWorker_p1-w0 profile tree view:
+wait_policy: 0.0052
+ wait_policy_total: 613.0826
+update_model: 36.2024
+ weight_update: 0.0017
+one_step: 0.0012
+ handle_policy_step: 2208.9449
+ deserialize: 65.4805, stack: 15.8442, obs_to_device_normalize: 535.0860, forward: 1059.2935, send_messages: 94.5971
+ prepare_outputs: 295.3032
+ to_cpu: 148.5835
+[2023-09-27 07:12:08,089][06167] Learner 0 profile tree view:
+misc: 0.0159, prepare_batch: 31.9822
+train: 456.9696
+ epoch_init: 0.1045, minibatch_init: 3.1309, losses_postprocess: 62.9033, kl_divergence: 5.4445, after_optimizer: 21.2503
+ calculate_losses: 45.0564
+ losses_init: 0.1057, forward_head: 14.3361, bptt_initial: 0.4443, bptt: 0.4917, tail: 10.2647, advantages_returns: 3.0774, losses: 12.7818
+ update: 315.0207
+ clip: 164.0107
+[2023-09-27 07:12:08,090][06167] Learner 1 profile tree view:
+misc: 0.0158, prepare_batch: 32.2115
+train: 456.8976
+ epoch_init: 0.1051, minibatch_init: 3.1500, losses_postprocess: 62.4595, kl_divergence: 5.4232, after_optimizer: 21.3549
+ calculate_losses: 45.0254
+ losses_init: 0.1048, forward_head: 14.2894, bptt_initial: 0.4327, bptt: 0.4810, tail: 10.3317, advantages_returns: 3.0686, losses: 12.7160
+ update: 315.2893
+ clip: 162.7764
+[2023-09-27 07:12:08,090][06167] RolloutWorker_w0 profile tree view:
+wait_for_trajectories: 0.3981, enqueue_policy_requests: 42.8809, env_step: 953.2485, overhead: 29.6997, complete_rollouts: 1.1059
+save_policy_outputs: 54.8238
+ split_output_tensors: 19.0842
+[2023-09-27 07:12:08,090][06167] RolloutWorker_w7 profile tree view:
+wait_for_trajectories: 0.3916, enqueue_policy_requests: 43.2467, env_step: 939.0686, overhead: 29.1694, complete_rollouts: 1.0916
+save_policy_outputs: 53.6829
+ split_output_tensors: 18.4910
+[2023-09-27 07:12:08,091][06167] Loop Runner_EvtLoop terminating...
+[2023-09-27 07:12:08,091][06167] Runner profile tree view:
+main_loop: 3065.7538
+[2023-09-27 07:12:08,091][06167] Collected {0: 10006528, 1: 10006528}, FPS: 6527.9