| [2025-06-11 18:38:50,934][196496] Saving configuration to /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/config.json... |
| [2025-06-11 18:38:50,935][196496] Rollout worker 0 uses device cpu |
| [2025-06-11 18:38:50,935][196496] Rollout worker 1 uses device cpu |
| [2025-06-11 18:38:50,935][196496] Rollout worker 2 uses device cpu |
| [2025-06-11 18:38:50,935][196496] Rollout worker 3 uses device cpu |
| [2025-06-11 18:38:50,935][196496] Rollout worker 4 uses device cpu |
| [2025-06-11 18:38:50,935][196496] Rollout worker 5 uses device cpu |
| [2025-06-11 18:38:50,935][196496] Rollout worker 6 uses device cpu |
| [2025-06-11 18:38:50,935][196496] Rollout worker 7 uses device cpu |
| [2025-06-11 18:38:51,017][196496] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 18:38:51,017][196496] InferenceWorker_p0-w0: min num requests: 2 |
| [2025-06-11 18:38:51,041][196496] Starting all processes... |
| [2025-06-11 18:38:51,041][196496] Starting process learner_proc0 |
| [2025-06-11 18:38:52,017][196496] Starting all processes... |
| [2025-06-11 18:38:52,019][196677] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 18:38:52,019][196677] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
| [2025-06-11 18:38:52,021][196496] Starting process inference_proc0-0 |
| [2025-06-11 18:38:52,022][196496] Starting process rollout_proc0 |
| [2025-06-11 18:38:52,022][196496] Starting process rollout_proc1 |
| [2025-06-11 18:38:52,025][196496] Starting process rollout_proc2 |
| [2025-06-11 18:38:52,032][196677] Num visible devices: 1 |
| [2025-06-11 18:38:52,037][196677] Setting fixed seed 3333 |
| [2025-06-11 18:38:52,040][196677] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 18:38:52,040][196677] Initializing actor-critic model on device cuda:0 |
| [2025-06-11 18:38:52,040][196677] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 18:38:52,041][196677] RunningMeanStd input shape: (1,) |
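The learner pins every RNG source before building the model ("Setting fixed seed 3333" above). A minimal sketch of what fixed seeding typically entails, using the standard Python/NumPy/PyTorch calls; Sample Factory's actual seeding helper may differ:

```python
import random

import numpy as np
import torch

def set_fixed_seed(seed: int = 3333) -> None:
    """Illustrative sketch of fixed seeding; not Sample Factory's own code."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)               # seeds the CPU RNG
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)  # seeds all visible GPU RNGs

set_fixed_seed(3333)  # the seed value logged by the learner
```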
| [2025-06-11 18:38:52,039][196496] Starting process rollout_proc3 |
| [2025-06-11 18:38:52,053][196496] Starting process rollout_proc4 |
| [2025-06-11 18:38:52,053][196496] Starting process rollout_proc5 |
| [2025-06-11 18:38:52,054][196496] Starting process rollout_proc6 |
| [2025-06-11 18:38:52,057][196496] Starting process rollout_proc7 |
| [2025-06-11 18:38:52,113][196677] ConvEncoder: input_channels=3 |
| [2025-06-11 18:38:52,180][196677] Conv encoder output size: 512 |
| [2025-06-11 18:38:52,180][196677] Policy head output size: 512 |
| [2025-06-11 18:38:52,188][196677] Created Actor Critic model with architecture: |
| [2025-06-11 18:38:52,189][196677] ActorCriticSharedWeights( |
|   (obs_normalizer): ObservationNormalizer( |
|     (running_mean_std): RunningMeanStdDictInPlace( |
|       (running_mean_std): ModuleDict( |
|         (obs): RunningMeanStdInPlace() |
|       ) |
|     ) |
|   ) |
|   (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|   (encoder): VizdoomEncoder( |
|     (basic_encoder): ConvEncoder( |
|       (enc): RecursiveScriptModule( |
|         original_name=ConvEncoderImpl |
|         (conv_head): RecursiveScriptModule( |
|           original_name=Sequential |
|           (0): RecursiveScriptModule(original_name=Conv2d) |
|           (1): RecursiveScriptModule(original_name=ELU) |
|           (2): RecursiveScriptModule(original_name=Conv2d) |
|           (3): RecursiveScriptModule(original_name=ELU) |
|           (4): RecursiveScriptModule(original_name=Conv2d) |
|           (5): RecursiveScriptModule(original_name=ELU) |
|         ) |
|         (mlp_layers): RecursiveScriptModule( |
|           original_name=Sequential |
|           (0): RecursiveScriptModule(original_name=Linear) |
|           (1): RecursiveScriptModule(original_name=ELU) |
|         ) |
|       ) |
|     ) |
|   ) |
|   (core): ModelCoreRNN( |
|     (core): GRU(512, 512) |
|   ) |
|   (decoder): MlpDecoder( |
|     (mlp): Identity() |
|   ) |
|   (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|   (action_parameterization): ActionParameterizationDefault( |
|     (distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|   ) |
| ) |
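The printed module tree maps onto plain PyTorch almost line for line. A minimal sketch follows; the conv kernel sizes and strides are assumptions (the log does not print them) matching Sample Factory's default convnet_simple layout, chosen so a (3, 72, 128) input reproduces the logged encoder output size of 512. The class name ActorCriticSketch is illustrative, not the library's:

```python
import torch
import torch.nn as nn

class ActorCriticSketch(nn.Module):
    """Illustrative reconstruction of the logged architecture (not Sample Factory's code).

    Kernel sizes/strides are assumed; the ELU activations, 512-d MLP, GRU(512, 512)
    core, 1-d value head, and 5-way action head are read from the module tree above.
    """

    def __init__(self, num_actions: int = 5):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        # For a (3, 72, 128) observation the conv head yields (128, 3, 6) -> 2304 features.
        self.mlp_layers = nn.Sequential(nn.Linear(2304, 512), nn.ELU())
        self.core = nn.GRU(512, 512)                            # (core): GRU(512, 512)
        self.critic_linear = nn.Linear(512, 1)                  # value head
        self.distribution_linear = nn.Linear(512, num_actions)  # action logits

    def forward(self, obs: torch.Tensor, rnn_state: torch.Tensor):
        x = self.conv_head(obs).flatten(1)
        x = self.mlp_layers(x)
        core_out, new_state = self.core(x.unsqueeze(0), rnn_state)
        core_out = core_out.squeeze(0)
        return self.distribution_linear(core_out), self.critic_linear(core_out), new_state

model = ActorCriticSketch()
logits, value, state = model(torch.zeros(1, 3, 72, 128), torch.zeros(1, 1, 512))
```

With the assumed strides, the conv output flattens to 128 * 3 * 6 = 2304 features, which the single Linear+ELU maps to the 512-d vector reported as both the conv encoder and policy head output size.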
| [2025-06-11 18:38:52,405][196677] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2025-06-11 18:38:53,154][196677] No checkpoints found |
| [2025-06-11 18:38:53,154][196677] Did not load from checkpoint, starting from scratch! |
| [2025-06-11 18:38:53,155][196677] Initialized policy 0 weights for model version 0 |
| [2025-06-11 18:38:53,158][196677] LearnerWorker_p0 finished initialization! |
| [2025-06-11 18:38:53,158][196677] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 18:38:53,393][196727] Worker 4 uses CPU cores [8, 9] |
| [2025-06-11 18:38:53,415][196724] Worker 0 uses CPU cores [0, 1] |
| [2025-06-11 18:38:53,440][196726] Worker 3 uses CPU cores [6, 7] |
| [2025-06-11 18:38:53,467][196721] Worker 1 uses CPU cores [2, 3] |
| [2025-06-11 18:38:53,477][196728] Worker 6 uses CPU cores [12, 13] |
| [2025-06-11 18:38:53,504][196725] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 18:38:53,504][196725] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
| [2025-06-11 18:38:53,516][196725] Num visible devices: 1 |
| [2025-06-11 18:38:53,585][196733] Worker 7 uses CPU cores [14, 15] |
| [2025-06-11 18:38:53,631][196725] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 18:38:53,632][196725] RunningMeanStd input shape: (1,) |
| [2025-06-11 18:38:53,638][196496] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
| [2025-06-11 18:38:53,638][196731] Worker 5 uses CPU cores [10, 11] |
| [2025-06-11 18:38:53,638][196723] Worker 2 uses CPU cores [4, 5] |
| [2025-06-11 18:38:53,680][196725] ConvEncoder: input_channels=3 |
| [2025-06-11 18:38:53,735][196725] Conv encoder output size: 512 |
| [2025-06-11 18:38:53,736][196725] Policy head output size: 512 |
| [2025-06-11 18:38:53,763][196496] Inference worker 0-0 is ready! |
| [2025-06-11 18:38:53,763][196496] All inference workers are ready! Signal rollout workers to start! |
| [2025-06-11 18:38:53,789][196723] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 18:38:53,790][196731] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 18:38:53,790][196721] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 18:38:53,790][196724] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 18:38:53,790][196728] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 18:38:53,790][196733] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 18:38:53,790][196727] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 18:38:53,790][196726] Doom resolution: 160x120, resize resolution: (128, 72) |
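Every rollout worker renders Doom at its native 160x120 and resizes frames to 128x72 (width x height), which is exactly the (3, 72, 128) RunningMeanStd input shape reported above. A rough sketch of the equivalent per-frame transform, assuming an OpenCV-style resize (the wrapper's actual resize code and interpolation mode may differ):

```python
import cv2
import numpy as np

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Resize a native 120x160x3 Doom frame to the logged (3, 72, 128) layout.

    Assumes an OpenCV-style resize; Sample Factory's actual wrapper may differ.
    """
    resized = cv2.resize(frame, (128, 72), interpolation=cv2.INTER_AREA)  # -> (72, 128, 3)
    return np.transpose(resized, (2, 0, 1))  # channels-first: (3, 72, 128)

obs = preprocess_frame(np.zeros((120, 160, 3), dtype=np.uint8))
assert obs.shape == (3, 72, 128)
```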
| [2025-06-11 18:38:53,959][196733] Decorrelating experience for 0 frames... |
| [2025-06-11 18:38:53,960][196724] Decorrelating experience for 0 frames... |
| [2025-06-11 18:38:53,960][196721] Decorrelating experience for 0 frames... |
| [2025-06-11 18:38:54,027][196723] Decorrelating experience for 0 frames... |
| [2025-06-11 18:38:54,027][196728] Decorrelating experience for 0 frames... |
| [2025-06-11 18:38:54,112][196733] Decorrelating experience for 32 frames... |
| [2025-06-11 18:38:54,152][196721] Decorrelating experience for 32 frames... |
| [2025-06-11 18:38:54,155][196731] Decorrelating experience for 0 frames... |
| [2025-06-11 18:38:54,250][196728] Decorrelating experience for 32 frames... |
| [2025-06-11 18:38:54,250][196723] Decorrelating experience for 32 frames... |
| [2025-06-11 18:38:54,313][196733] Decorrelating experience for 64 frames... |
| [2025-06-11 18:38:54,322][196731] Decorrelating experience for 32 frames... |
| [2025-06-11 18:38:54,328][196727] Decorrelating experience for 0 frames... |
| [2025-06-11 18:38:54,427][196724] Decorrelating experience for 32 frames... |
| [2025-06-11 18:38:54,476][196721] Decorrelating experience for 64 frames... |
| [2025-06-11 18:38:54,508][196728] Decorrelating experience for 64 frames... |
| [2025-06-11 18:38:54,510][196723] Decorrelating experience for 64 frames... |
| [2025-06-11 18:38:54,525][196733] Decorrelating experience for 96 frames... |
| [2025-06-11 18:38:54,565][196726] Decorrelating experience for 0 frames... |
| [2025-06-11 18:38:54,592][196727] Decorrelating experience for 32 frames... |
| [2025-06-11 18:38:54,689][196721] Decorrelating experience for 96 frames... |
| [2025-06-11 18:38:54,721][196728] Decorrelating experience for 96 frames... |
| [2025-06-11 18:38:54,721][196723] Decorrelating experience for 96 frames... |
| [2025-06-11 18:38:54,731][196726] Decorrelating experience for 32 frames... |
| [2025-06-11 18:38:54,735][196731] Decorrelating experience for 64 frames... |
| [2025-06-11 18:38:54,796][196724] Decorrelating experience for 64 frames... |
| [2025-06-11 18:38:54,804][196727] Decorrelating experience for 64 frames... |
| [2025-06-11 18:38:54,911][196731] Decorrelating experience for 96 frames... |
| [2025-06-11 18:38:54,974][196724] Decorrelating experience for 96 frames... |
| [2025-06-11 18:38:54,984][196727] Decorrelating experience for 96 frames... |
| [2025-06-11 18:38:55,099][196726] Decorrelating experience for 64 frames... |
| [2025-06-11 18:38:55,300][196726] Decorrelating experience for 96 frames... |
| [2025-06-11 18:38:55,325][196677] Signal inference workers to stop experience collection... |
| [2025-06-11 18:38:55,329][196725] InferenceWorker_p0-w0: stopping experience collection |
| [2025-06-11 18:38:56,167][196677] Signal inference workers to resume experience collection... |
| [2025-06-11 18:38:56,168][196725] InferenceWorker_p0-w0: resuming experience collection |
| [2025-06-11 18:38:57,236][196496] Fps is (10 sec: 9108.3, 60 sec: 9108.3, 300 sec: 9108.3). Total num frames: 32768. Throughput: 0: 1618.9. Samples: 5824. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2025-06-11 18:38:57,236][196496] Avg episode reward: [(0, '3.884')] |
| [2025-06-11 18:38:57,399][196725] Updated weights for policy 0, policy_version 10 (0.0097) |
| [2025-06-11 18:38:58,740][196725] Updated weights for policy 0, policy_version 20 (0.0007) |
| [2025-06-11 18:39:00,062][196725] Updated weights for policy 0, policy_version 30 (0.0006) |
| [2025-06-11 18:39:01,409][196725] Updated weights for policy 0, policy_version 40 (0.0006) |
| [2025-06-11 18:39:02,236][196496] Fps is (10 sec: 21915.1, 60 sec: 21915.1, 300 sec: 21915.1). Total num frames: 188416. Throughput: 0: 3386.1. Samples: 29112. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
| [2025-06-11 18:39:02,236][196496] Avg episode reward: [(0, '4.335')] |
| [2025-06-11 18:39:02,236][196677] Saving new best policy, reward=4.335! |
| [2025-06-11 18:39:02,801][196725] Updated weights for policy 0, policy_version 50 (0.0006) |
| [2025-06-11 18:39:04,171][196725] Updated weights for policy 0, policy_version 60 (0.0006) |
| [2025-06-11 18:39:05,523][196725] Updated weights for policy 0, policy_version 70 (0.0006) |
| [2025-06-11 18:39:07,005][196725] Updated weights for policy 0, policy_version 80 (0.0008) |
| [2025-06-11 18:39:07,236][196496] Fps is (10 sec: 29900.9, 60 sec: 24399.7, 300 sec: 24399.7). Total num frames: 331776. Throughput: 0: 5459.8. Samples: 74240. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 18:39:07,236][196496] Avg episode reward: [(0, '4.753')] |
| [2025-06-11 18:39:07,238][196677] Saving new best policy, reward=4.753! |
| [2025-06-11 18:39:08,366][196725] Updated weights for policy 0, policy_version 90 (0.0006) |
| [2025-06-11 18:39:09,756][196725] Updated weights for policy 0, policy_version 100 (0.0008) |
| [2025-06-11 18:39:10,965][196496] Heartbeat connected on Batcher_0 |
| [2025-06-11 18:39:11,019][196496] Heartbeat connected on LearnerWorker_p0 |
| [2025-06-11 18:39:11,021][196496] Heartbeat connected on RolloutWorker_w0 |
| [2025-06-11 18:39:11,021][196496] Heartbeat connected on InferenceWorker_p0-w0 |
| [2025-06-11 18:39:11,023][196496] Heartbeat connected on RolloutWorker_w1 |
| [2025-06-11 18:39:11,026][196496] Heartbeat connected on RolloutWorker_w2 |
| [2025-06-11 18:39:11,027][196496] Heartbeat connected on RolloutWorker_w3 |
| [2025-06-11 18:39:11,033][196496] Heartbeat connected on RolloutWorker_w4 |
| [2025-06-11 18:39:11,036][196496] Heartbeat connected on RolloutWorker_w5 |
| [2025-06-11 18:39:11,038][196496] Heartbeat connected on RolloutWorker_w6 |
| [2025-06-11 18:39:11,041][196496] Heartbeat connected on RolloutWorker_w7 |
| [2025-06-11 18:39:11,157][196725] Updated weights for policy 0, policy_version 110 (0.0006) |
| [2025-06-11 18:39:12,236][196496] Fps is (10 sec: 29081.5, 60 sec: 25768.5, 300 sec: 25768.5). Total num frames: 479232. Throughput: 0: 6334.7. Samples: 117810. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 18:39:12,236][196496] Avg episode reward: [(0, '4.721')] |
| [2025-06-11 18:39:12,547][196725] Updated weights for policy 0, policy_version 120 (0.0007) |
| [2025-06-11 18:39:13,920][196725] Updated weights for policy 0, policy_version 130 (0.0007) |
| [2025-06-11 18:39:15,286][196725] Updated weights for policy 0, policy_version 140 (0.0006) |
| [2025-06-11 18:39:16,655][196725] Updated weights for policy 0, policy_version 150 (0.0006) |
| [2025-06-11 18:39:17,236][196496] Fps is (10 sec: 29900.6, 60 sec: 26730.8, 300 sec: 26730.8). Total num frames: 630784. Throughput: 0: 5940.3. Samples: 140178. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 18:39:17,236][196496] Avg episode reward: [(0, '4.775')] |
| [2025-06-11 18:39:17,238][196677] Saving new best policy, reward=4.775! |
| [2025-06-11 18:39:18,033][196725] Updated weights for policy 0, policy_version 160 (0.0006) |
| [2025-06-11 18:39:19,403][196725] Updated weights for policy 0, policy_version 170 (0.0006) |
| [2025-06-11 18:39:20,760][196725] Updated weights for policy 0, policy_version 180 (0.0006) |
| [2025-06-11 18:39:22,118][196725] Updated weights for policy 0, policy_version 190 (0.0006) |
| [2025-06-11 18:39:22,236][196496] Fps is (10 sec: 29900.7, 60 sec: 27213.5, 300 sec: 27213.5). Total num frames: 778240. Throughput: 0: 6471.8. Samples: 185078. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 18:39:22,236][196496] Avg episode reward: [(0, '5.121')] |
| [2025-06-11 18:39:22,256][196677] Saving new best policy, reward=5.121! |
| [2025-06-11 18:39:23,480][196725] Updated weights for policy 0, policy_version 200 (0.0006) |
| [2025-06-11 18:39:24,852][196725] Updated weights for policy 0, policy_version 210 (0.0006) |
| [2025-06-11 18:39:26,236][196725] Updated weights for policy 0, policy_version 220 (0.0006) |
| [2025-06-11 18:39:27,236][196496] Fps is (10 sec: 29900.9, 60 sec: 27674.3, 300 sec: 27674.3). Total num frames: 929792. Throughput: 0: 6844.8. Samples: 229968. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 18:39:27,236][196496] Avg episode reward: [(0, '5.324')] |
| [2025-06-11 18:39:27,238][196677] Saving new best policy, reward=5.324! |
| [2025-06-11 18:39:27,612][196725] Updated weights for policy 0, policy_version 230 (0.0006) |
| [2025-06-11 18:39:28,991][196725] Updated weights for policy 0, policy_version 240 (0.0006) |
| [2025-06-11 18:39:30,359][196725] Updated weights for policy 0, policy_version 250 (0.0006) |
| [2025-06-11 18:39:31,749][196725] Updated weights for policy 0, policy_version 260 (0.0006) |
| [2025-06-11 18:39:32,236][196496] Fps is (10 sec: 29900.9, 60 sec: 27909.7, 300 sec: 27909.7). Total num frames: 1077248. Throughput: 0: 6537.2. Samples: 252320. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 18:39:32,236][196496] Avg episode reward: [(0, '6.289')] |
| [2025-06-11 18:39:32,236][196677] Saving new best policy, reward=6.289! |
| [2025-06-11 18:39:33,146][196725] Updated weights for policy 0, policy_version 270 (0.0006) |
| [2025-06-11 18:39:34,514][196725] Updated weights for policy 0, policy_version 280 (0.0006) |
| [2025-06-11 18:39:35,856][196725] Updated weights for policy 0, policy_version 290 (0.0007) |
| [2025-06-11 18:39:37,223][196725] Updated weights for policy 0, policy_version 300 (0.0006) |
| [2025-06-11 18:39:37,237][196496] Fps is (10 sec: 29898.7, 60 sec: 28184.6, 300 sec: 28184.6). Total num frames: 1228800. Throughput: 0: 6811.7. Samples: 296978. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 18:39:37,237][196496] Avg episode reward: [(0, '8.575')] |
| [2025-06-11 18:39:37,240][196677] Saving new best policy, reward=8.575! |
| [2025-06-11 18:39:38,595][196725] Updated weights for policy 0, policy_version 310 (0.0007) |
| [2025-06-11 18:39:39,967][196725] Updated weights for policy 0, policy_version 320 (0.0007) |
| [2025-06-11 18:39:41,342][196725] Updated weights for policy 0, policy_version 330 (0.0006) |
| [2025-06-11 18:39:42,236][196496] Fps is (10 sec: 29900.9, 60 sec: 28319.4, 300 sec: 28319.4). Total num frames: 1376256. Throughput: 0: 7463.8. Samples: 341694. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
| [2025-06-11 18:39:42,236][196496] Avg episode reward: [(0, '10.651')] |
| [2025-06-11 18:39:42,236][196677] Saving new best policy, reward=10.651! |
| [2025-06-11 18:39:42,752][196725] Updated weights for policy 0, policy_version 340 (0.0006) |
| [2025-06-11 18:39:44,136][196725] Updated weights for policy 0, policy_version 350 (0.0006) |
| [2025-06-11 18:39:45,518][196725] Updated weights for policy 0, policy_version 360 (0.0006) |
| [2025-06-11 18:39:46,869][196725] Updated weights for policy 0, policy_version 370 (0.0007) |
| [2025-06-11 18:39:47,236][196496] Fps is (10 sec: 29493.3, 60 sec: 28428.7, 300 sec: 28428.7). Total num frames: 1523712. Throughput: 0: 7440.8. Samples: 363950. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 18:39:47,236][196496] Avg episode reward: [(0, '10.591')] |
| [2025-06-11 18:39:48,244][196725] Updated weights for policy 0, policy_version 380 (0.0006) |
| [2025-06-11 18:39:49,630][196725] Updated weights for policy 0, policy_version 390 (0.0006) |
| [2025-06-11 18:39:51,015][196725] Updated weights for policy 0, policy_version 400 (0.0007) |
| [2025-06-11 18:39:52,236][196496] Fps is (10 sec: 29491.1, 60 sec: 28519.4, 300 sec: 28519.4). Total num frames: 1671168. Throughput: 0: 7426.8. Samples: 408446. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0) |
| [2025-06-11 18:39:52,236][196496] Avg episode reward: [(0, '12.264')] |
| [2025-06-11 18:39:52,263][196677] Saving new best policy, reward=12.264! |
| [2025-06-11 18:39:52,409][196725] Updated weights for policy 0, policy_version 410 (0.0007) |
| [2025-06-11 18:39:53,774][196725] Updated weights for policy 0, policy_version 420 (0.0007) |
| [2025-06-11 18:39:55,159][196725] Updated weights for policy 0, policy_version 430 (0.0007) |
| [2025-06-11 18:39:56,538][196725] Updated weights for policy 0, policy_version 440 (0.0007) |
| [2025-06-11 18:39:57,236][196496] Fps is (10 sec: 29900.9, 60 sec: 29832.5, 300 sec: 28660.2). Total num frames: 1822720. Throughput: 0: 7448.3. Samples: 452982. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 18:39:57,236][196496] Avg episode reward: [(0, '12.874')] |
| [2025-06-11 18:39:57,238][196677] Saving new best policy, reward=12.874! |
| [2025-06-11 18:39:57,923][196725] Updated weights for policy 0, policy_version 450 (0.0006) |
| [2025-06-11 18:39:59,340][196725] Updated weights for policy 0, policy_version 460 (0.0007) |
| [2025-06-11 18:40:00,725][196725] Updated weights for policy 0, policy_version 470 (0.0006) |
| [2025-06-11 18:40:02,120][196725] Updated weights for policy 0, policy_version 480 (0.0006) |
| [2025-06-11 18:40:02,236][196496] Fps is (10 sec: 29491.2, 60 sec: 29627.7, 300 sec: 28661.1). Total num frames: 1966080. Throughput: 0: 7434.4. Samples: 474726. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 18:40:02,236][196496] Avg episode reward: [(0, '14.989')] |
| [2025-06-11 18:40:02,263][196677] Saving new best policy, reward=14.989! |
| [2025-06-11 18:40:03,515][196677] Stopping Batcher_0... |
| [2025-06-11 18:40:03,515][196496] Component Batcher_0 stopped! |
| [2025-06-11 18:40:03,515][196725] Updated weights for policy 0, policy_version 490 (0.0007) |
| [2025-06-11 18:40:03,515][196677] Loop batcher_evt_loop terminating... |
| [2025-06-11 18:40:03,515][196677] Saving /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000490_2007040.pth... |
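The checkpoint filename encodes both the policy version and the env-step count, and the two are consistent with 4096 environment frames consumed per policy version: 490 * 4096 = 2,007,040 (the resumed session's checkpoint_000000978_4005888 later fits the same pattern, 978 * 4096 = 4,005,888). A quick check; the 4096 frames-per-version figure is inferred from these filenames, not a setting printed in the log:

```python
# Inferred from the checkpoint filenames; the per-version frame count (4096)
# is an assumption consistent with this log, not a printed configuration value.
frames_per_version = 4096
assert 490 * frames_per_version == 2_007_040  # checkpoint_000000490_2007040
assert 978 * frames_per_version == 4_005_888  # checkpoint_000000978_4005888
```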
| [2025-06-11 18:40:03,541][196723] Stopping RolloutWorker_w2... |
| [2025-06-11 18:40:03,542][196496] Component RolloutWorker_w2 stopped! |
| [2025-06-11 18:40:03,542][196723] Loop rollout_proc2_evt_loop terminating... |
| [2025-06-11 18:40:03,542][196496] Component RolloutWorker_w6 stopped! |
| [2025-06-11 18:40:03,542][196728] Stopping RolloutWorker_w6... |
| [2025-06-11 18:40:03,543][196728] Loop rollout_proc6_evt_loop terminating... |
| [2025-06-11 18:40:03,546][196496] Component RolloutWorker_w4 stopped! |
| [2025-06-11 18:40:03,546][196727] Stopping RolloutWorker_w4... |
| [2025-06-11 18:40:03,547][196727] Loop rollout_proc4_evt_loop terminating... |
| [2025-06-11 18:40:03,547][196496] Component RolloutWorker_w0 stopped! |
| [2025-06-11 18:40:03,547][196724] Stopping RolloutWorker_w0... |
| [2025-06-11 18:40:03,548][196724] Loop rollout_proc0_evt_loop terminating... |
| [2025-06-11 18:40:03,548][196496] Component RolloutWorker_w7 stopped! |
| [2025-06-11 18:40:03,548][196733] Stopping RolloutWorker_w7... |
| [2025-06-11 18:40:03,548][196496] Component RolloutWorker_w5 stopped! |
| [2025-06-11 18:40:03,548][196733] Loop rollout_proc7_evt_loop terminating... |
| [2025-06-11 18:40:03,548][196731] Stopping RolloutWorker_w5... |
| [2025-06-11 18:40:03,549][196731] Loop rollout_proc5_evt_loop terminating... |
| [2025-06-11 18:40:03,551][196496] Component RolloutWorker_w3 stopped! |
| [2025-06-11 18:40:03,551][196726] Stopping RolloutWorker_w3... |
| [2025-06-11 18:40:03,552][196726] Loop rollout_proc3_evt_loop terminating... |
| [2025-06-11 18:40:03,552][196496] Component RolloutWorker_w1 stopped! |
| [2025-06-11 18:40:03,552][196721] Stopping RolloutWorker_w1... |
| [2025-06-11 18:40:03,552][196721] Loop rollout_proc1_evt_loop terminating... |
| [2025-06-11 18:40:03,553][196725] Weights refcount: 2 0 |
| [2025-06-11 18:40:03,554][196725] Stopping InferenceWorker_p0-w0... |
| [2025-06-11 18:40:03,555][196496] Component InferenceWorker_p0-w0 stopped! |
| [2025-06-11 18:40:03,555][196725] Loop inference_proc0-0_evt_loop terminating... |
| [2025-06-11 18:40:03,562][196677] Saving new best policy, reward=16.065! |
| [2025-06-11 18:40:03,613][196677] Saving /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000490_2007040.pth... |
| [2025-06-11 18:40:03,674][196677] Stopping LearnerWorker_p0... |
| [2025-06-11 18:40:03,674][196677] Loop learner_proc0_evt_loop terminating... |
| [2025-06-11 18:40:03,674][196496] Component LearnerWorker_p0 stopped! |
| [2025-06-11 18:40:03,675][196496] Waiting for process learner_proc0 to stop... |
| [2025-06-11 18:40:04,416][196496] Waiting for process inference_proc0-0 to join... |
| [2025-06-11 18:40:04,416][196496] Waiting for process rollout_proc0 to join... |
| [2025-06-11 18:40:04,417][196496] Waiting for process rollout_proc1 to join... |
| [2025-06-11 18:40:04,417][196496] Waiting for process rollout_proc2 to join... |
| [2025-06-11 18:40:04,417][196496] Waiting for process rollout_proc3 to join... |
| [2025-06-11 18:40:04,417][196496] Waiting for process rollout_proc4 to join... |
| [2025-06-11 18:40:04,417][196496] Waiting for process rollout_proc5 to join... |
| [2025-06-11 18:40:04,418][196496] Waiting for process rollout_proc6 to join... |
| [2025-06-11 18:40:04,418][196496] Waiting for process rollout_proc7 to join... |
| [2025-06-11 18:40:04,418][196496] Batcher 0 profile tree view: |
| batching: 3.7070, releasing_batches: 0.0102 |
| [2025-06-11 18:40:04,418][196496] InferenceWorker_p0-w0 profile tree view: |
| wait_policy: 0.0000 |
|   wait_policy_total: 1.3650 |
| update_model: 1.0523 |
|   weight_update: 0.0007 |
| one_step: 0.0024 |
|   handle_policy_step: 63.8188 |
|     deserialize: 2.1940, stack: 0.3450, obs_to_device_normalize: 14.4329, forward: 34.6100, send_messages: 3.1500 |
|     prepare_outputs: 6.9549 |
|       to_cpu: 4.3562 |
| [2025-06-11 18:40:04,418][196496] Learner 0 profile tree view: |
| misc: 0.0022, prepare_batch: 4.7394 |
| train: 10.4747 |
|   epoch_init: 0.0015, minibatch_init: 0.0017, losses_postprocess: 0.0679, kl_divergence: 0.0776, after_optimizer: 4.2096 |
|   calculate_losses: 4.0789 |
|     losses_init: 0.0008, forward_head: 0.3356, bptt_initial: 2.9485, tail: 0.1706, advantages_returns: 0.0429, losses: 0.2591 |
|     bptt: 0.2752 |
|       bptt_forward_core: 0.2633 |
|   update: 1.9318 |
|     clip: 0.2056 |
| [2025-06-11 18:40:04,418][196496] RolloutWorker_w0 profile tree view: |
| wait_for_trajectories: 0.0366, enqueue_policy_requests: 2.1933, env_step: 29.8171, overhead: 2.4029, complete_rollouts: 0.1670 |
| save_policy_outputs: 2.1456 |
|   split_output_tensors: 1.0670 |
| [2025-06-11 18:40:04,418][196496] RolloutWorker_w7 profile tree view: |
| wait_for_trajectories: 0.0345, enqueue_policy_requests: 2.1740, env_step: 30.5282, overhead: 2.3833, complete_rollouts: 0.1411 |
| save_policy_outputs: 2.1498 |
|   split_output_tensors: 1.0813 |
| [2025-06-11 18:40:04,419][196496] Loop Runner_EvtLoop terminating... |
| [2025-06-11 18:40:04,419][196496] Runner profile tree view: |
| main_loop: 73.3782 |
| [2025-06-11 18:40:04,419][196496] Collected {0: 2007040}, FPS: 27352.0 |
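The final throughput figure is simply the total collected frames divided by the runner's main-loop time from the profile tree above:

```python
total_frames = 2_007_040  # from 'Collected {0: 2007040}'
main_loop_s = 73.3782     # from 'main_loop: 73.3782' in the Runner profile tree
print(f"{total_frames / main_loop_s:.1f}")  # ~27352.0, the reported FPS
```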
| [2025-06-11 18:40:04,424][196496] Loading existing experiment configuration from /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/config.json |
| [2025-06-11 18:40:04,424][196496] Overriding arg 'num_workers' with value 1 passed from command line |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'no_render'=True that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'save_video'=True that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'video_name'=None that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'hf_repository'='PranayPalem/vizdoom_laptop_optimized' that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'policy_index'=0 that is not in the saved config file! |
| [2025-06-11 18:40:04,424][196496] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
| [2025-06-11 18:40:04,425][196496] Adding new argument 'train_script'=None that is not in the saved config file! |
| [2025-06-11 18:40:04,425][196496] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
| [2025-06-11 18:40:04,425][196496] Using frameskip 1 and render_action_repeat=4 for evaluation |
| [2025-06-11 18:40:04,441][196496] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 18:40:04,442][196496] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 18:40:04,443][196496] RunningMeanStd input shape: (1,) |
| [2025-06-11 18:40:04,452][196496] ConvEncoder: input_channels=3 |
| [2025-06-11 18:40:04,502][196496] Conv encoder output size: 512 |
| [2025-06-11 18:40:04,503][196496] Policy head output size: 512 |
| [2025-06-11 18:40:04,677][196496] Loading state from checkpoint /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000490_2007040.pth... |
| [2025-06-11 18:40:05,082][196496] Num frames 100... |
| [2025-06-11 18:40:05,141][196496] Num frames 200... |
| [2025-06-11 18:40:05,206][196496] Num frames 300... |
| [2025-06-11 18:40:05,271][196496] Num frames 400... |
| [2025-06-11 18:40:05,333][196496] Num frames 500... |
| [2025-06-11 18:40:05,395][196496] Num frames 600... |
| [2025-06-11 18:40:05,455][196496] Num frames 700... |
| [2025-06-11 18:40:05,516][196496] Num frames 800... |
| [2025-06-11 18:40:05,577][196496] Num frames 900... |
| [2025-06-11 18:40:05,637][196496] Num frames 1000... |
| [2025-06-11 18:40:05,699][196496] Num frames 1100... |
| [2025-06-11 18:40:05,760][196496] Num frames 1200... |
| [2025-06-11 18:40:05,822][196496] Num frames 1300... |
| [2025-06-11 18:40:05,883][196496] Num frames 1400... |
| [2025-06-11 18:40:05,946][196496] Num frames 1500... |
| [2025-06-11 18:40:06,009][196496] Num frames 1600... |
| [2025-06-11 18:40:06,071][196496] Num frames 1700... |
| [2025-06-11 18:40:06,131][196496] Num frames 1800... |
| [2025-06-11 18:40:06,192][196496] Num frames 1900... |
| [2025-06-11 18:40:06,257][196496] Avg episode rewards: #0: 54.199, true rewards: #0: 19.200 |
| [2025-06-11 18:40:06,257][196496] Avg episode reward: 54.199, avg true_objective: 19.200 |
| [2025-06-11 18:40:06,311][196496] Num frames 2000... |
| [2025-06-11 18:40:06,372][196496] Num frames 2100... |
| [2025-06-11 18:40:06,434][196496] Num frames 2200... |
| [2025-06-11 18:40:06,494][196496] Num frames 2300... |
| [2025-06-11 18:40:06,555][196496] Num frames 2400... |
| [2025-06-11 18:40:06,622][196496] Num frames 2500... |
| [2025-06-11 18:40:06,684][196496] Num frames 2600... |
| [2025-06-11 18:40:06,744][196496] Num frames 2700... |
| [2025-06-11 18:40:06,806][196496] Num frames 2800... |
| [2025-06-11 18:40:06,869][196496] Num frames 2900... |
| [2025-06-11 18:40:06,931][196496] Num frames 3000... |
| [2025-06-11 18:40:06,997][196496] Num frames 3100... |
| [2025-06-11 18:40:07,062][196496] Num frames 3200... |
| [2025-06-11 18:40:07,113][196496] Avg episode rewards: #0: 42.000, true rewards: #0: 16.000 |
| [2025-06-11 18:40:07,113][196496] Avg episode reward: 42.000, avg true_objective: 16.000 |
| [2025-06-11 18:40:07,175][196496] Num frames 3300... |
| [2025-06-11 18:40:07,234][196496] Num frames 3400... |
| [2025-06-11 18:40:07,309][196496] Num frames 3500... |
| [2025-06-11 18:40:07,368][196496] Num frames 3600... |
| [2025-06-11 18:40:07,453][196496] Avg episode rewards: #0: 29.826, true rewards: #0: 12.160 |
| [2025-06-11 18:40:07,453][196496] Avg episode reward: 29.826, avg true_objective: 12.160 |
| [2025-06-11 18:40:07,489][196496] Num frames 3700... |
| [2025-06-11 18:40:07,548][196496] Num frames 3800... |
| [2025-06-11 18:40:07,610][196496] Num frames 3900... |
| [2025-06-11 18:40:07,673][196496] Num frames 4000... |
| [2025-06-11 18:40:07,734][196496] Num frames 4100... |
| [2025-06-11 18:40:07,797][196496] Num frames 4200... |
| [2025-06-11 18:40:07,860][196496] Num frames 4300... |
| [2025-06-11 18:40:07,924][196496] Num frames 4400... |
| [2025-06-11 18:40:07,990][196496] Num frames 4500... |
| [2025-06-11 18:40:08,076][196496] Avg episode rewards: #0: 28.367, true rewards: #0: 11.367 |
| [2025-06-11 18:40:08,077][196496] Avg episode reward: 28.367, avg true_objective: 11.367 |
| [2025-06-11 18:40:08,111][196496] Num frames 4600... |
| [2025-06-11 18:40:08,171][196496] Num frames 4700... |
| [2025-06-11 18:40:08,230][196496] Num frames 4800... |
| [2025-06-11 18:40:08,291][196496] Num frames 4900... |
| [2025-06-11 18:40:08,352][196496] Num frames 5000... |
| [2025-06-11 18:40:08,412][196496] Num frames 5100... |
| [2025-06-11 18:40:08,482][196496] Avg episode rewards: #0: 25.454, true rewards: #0: 10.254 |
| [2025-06-11 18:40:08,483][196496] Avg episode reward: 25.454, avg true_objective: 10.254 |
| [2025-06-11 18:40:08,532][196496] Num frames 5200... |
| [2025-06-11 18:40:08,592][196496] Num frames 5300... |
| [2025-06-11 18:40:08,655][196496] Num frames 5400... |
| [2025-06-11 18:40:08,715][196496] Num frames 5500... |
| [2025-06-11 18:40:08,806][196496] Avg episode rewards: #0: 22.252, true rewards: #0: 9.252 |
| [2025-06-11 18:40:08,806][196496] Avg episode reward: 22.252, avg true_objective: 9.252 |
| [2025-06-11 18:40:08,841][196496] Num frames 5600... |
| [2025-06-11 18:40:08,900][196496] Num frames 5700... |
| [2025-06-11 18:40:08,965][196496] Num frames 5800... |
| [2025-06-11 18:40:09,028][196496] Num frames 5900... |
| [2025-06-11 18:40:09,090][196496] Num frames 6000... |
| [2025-06-11 18:40:09,150][196496] Num frames 6100... |
| [2025-06-11 18:40:09,212][196496] Num frames 6200... |
| [2025-06-11 18:40:09,274][196496] Num frames 6300... |
| [2025-06-11 18:40:09,335][196496] Num frames 6400... |
| [2025-06-11 18:40:09,396][196496] Num frames 6500... |
| [2025-06-11 18:40:09,457][196496] Num frames 6600... |
| [2025-06-11 18:40:09,519][196496] Num frames 6700... |
| [2025-06-11 18:40:09,632][196496] Avg episode rewards: #0: 22.981, true rewards: #0: 9.696 |
| [2025-06-11 18:40:09,633][196496] Avg episode reward: 22.981, avg true_objective: 9.696 |
| [2025-06-11 18:40:09,641][196496] Num frames 6800... |
| [2025-06-11 18:40:09,704][196496] Num frames 6900... |
| [2025-06-11 18:40:09,764][196496] Num frames 7000... |
| [2025-06-11 18:40:09,825][196496] Num frames 7100... |
| [2025-06-11 18:40:09,889][196496] Num frames 7200... |
| [2025-06-11 18:40:09,954][196496] Num frames 7300... |
| [2025-06-11 18:40:10,017][196496] Avg episode rewards: #0: 21.020, true rewards: #0: 9.145 |
| [2025-06-11 18:40:10,017][196496] Avg episode reward: 21.020, avg true_objective: 9.145 |
| [2025-06-11 18:40:10,074][196496] Num frames 7400... |
| [2025-06-11 18:40:10,137][196496] Num frames 7500... |
| [2025-06-11 18:40:10,199][196496] Num frames 7600... |
| [2025-06-11 18:40:10,259][196496] Num frames 7700... |
| [2025-06-11 18:40:10,321][196496] Num frames 7800... |
| [2025-06-11 18:40:10,384][196496] Num frames 7900... |
| [2025-06-11 18:40:10,448][196496] Num frames 8000... |
| [2025-06-11 18:40:10,510][196496] Num frames 8100... |
| [2025-06-11 18:40:10,574][196496] Num frames 8200... |
| [2025-06-11 18:40:10,635][196496] Num frames 8300... |
| [2025-06-11 18:40:10,697][196496] Num frames 8400... |
| [2025-06-11 18:40:10,761][196496] Num frames 8500... |
| [2025-06-11 18:40:10,823][196496] Num frames 8600... |
| [2025-06-11 18:40:10,885][196496] Num frames 8700... |
| [2025-06-11 18:40:10,948][196496] Num frames 8800... |
| [2025-06-11 18:40:11,015][196496] Num frames 8900... |
| [2025-06-11 18:40:11,081][196496] Num frames 9000... |
| [2025-06-11 18:40:11,143][196496] Num frames 9100... |
| [2025-06-11 18:40:11,210][196496] Num frames 9200... |
| [2025-06-11 18:40:11,274][196496] Num frames 9300... |
| [2025-06-11 18:40:11,341][196496] Num frames 9400... |
| [2025-06-11 18:40:11,408][196496] Avg episode rewards: #0: 24.795, true rewards: #0: 10.462 |
| [2025-06-11 18:40:11,408][196496] Avg episode reward: 24.795, avg true_objective: 10.462 |
| [2025-06-11 18:40:11,463][196496] Num frames 9500... |
| [2025-06-11 18:40:11,523][196496] Num frames 9600... |
| [2025-06-11 18:40:11,585][196496] Num frames 9700... |
| [2025-06-11 18:40:11,646][196496] Num frames 9800... |
| [2025-06-11 18:40:11,709][196496] Num frames 9900... |
| [2025-06-11 18:40:11,773][196496] Num frames 10000... |
| [2025-06-11 18:40:11,842][196496] Num frames 10100... |
| [2025-06-11 18:40:11,906][196496] Num frames 10200... |
| [2025-06-11 18:40:11,970][196496] Num frames 10300... |
| [2025-06-11 18:40:12,031][196496] Avg episode rewards: #0: 24.112, true rewards: #0: 10.312 |
| [2025-06-11 18:40:12,031][196496] Avg episode reward: 24.112, avg true_objective: 10.312 |
| [2025-06-11 18:40:22,551][196496] Replay video saved to /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/replay.mp4! |
| [2025-06-11 18:41:01,389][196496] The model has been pushed to https://huggingface.co/PranayPalem/vizdoom_laptop_optimized |
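The push uploads this run's artifacts (checkpoint, config.json, replay.mp4) under the repo named in the log. A hedged sketch of retrieving them with huggingface_hub; the exact file layout inside the repo is an assumption based on what this run saved:

```python
from huggingface_hub import snapshot_download

# Repo id taken verbatim from the log line above.
local_dir = snapshot_download(repo_id="PranayPalem/vizdoom_laptop_optimized")
print(local_dir)  # local copy of the pushed checkpoint/config/replay files
```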
| [2025-06-11 19:38:53,674][258077] Saving configuration to /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/config.json... |
| [2025-06-11 19:38:53,674][258077] Rollout worker 0 uses device cpu |
| [2025-06-11 19:38:53,675][258077] Rollout worker 1 uses device cpu |
| [2025-06-11 19:38:53,675][258077] Rollout worker 2 uses device cpu |
| [2025-06-11 19:38:53,675][258077] Rollout worker 3 uses device cpu |
| [2025-06-11 19:38:53,675][258077] Rollout worker 4 uses device cpu |
| [2025-06-11 19:38:53,675][258077] Rollout worker 5 uses device cpu |
| [2025-06-11 19:38:53,675][258077] Rollout worker 6 uses device cpu |
| [2025-06-11 19:38:53,675][258077] Rollout worker 7 uses device cpu |
| [2025-06-11 19:38:53,767][258077] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 19:38:53,767][258077] InferenceWorker_p0-w0: min num requests: 2 |
| [2025-06-11 19:38:53,789][258077] Starting all processes... |
| [2025-06-11 19:38:53,789][258077] Starting process learner_proc0 |
| [2025-06-11 19:38:54,917][258077] Starting all processes... |
| [2025-06-11 19:38:54,921][258224] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 19:38:54,921][258224] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
| [2025-06-11 19:38:54,922][258077] Starting process inference_proc0-0 |
| [2025-06-11 19:38:54,922][258077] Starting process rollout_proc0 |
| [2025-06-11 19:38:54,922][258077] Starting process rollout_proc1 |
| [2025-06-11 19:38:54,922][258077] Starting process rollout_proc2 |
| [2025-06-11 19:38:54,935][258224] Num visible devices: 1 |
| [2025-06-11 19:38:54,940][258224] Setting fixed seed 3333 |
| [2025-06-11 19:38:54,941][258224] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 19:38:54,942][258224] Initializing actor-critic model on device cuda:0 |
| [2025-06-11 19:38:54,942][258224] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 19:38:54,942][258224] RunningMeanStd input shape: (1,) |
| [2025-06-11 19:38:54,922][258077] Starting process rollout_proc3 |
| [2025-06-11 19:38:54,923][258077] Starting process rollout_proc4 |
| [2025-06-11 19:38:54,925][258077] Starting process rollout_proc5 |
| [2025-06-11 19:38:54,925][258077] Starting process rollout_proc6 |
| [2025-06-11 19:38:54,925][258077] Starting process rollout_proc7 |
| [2025-06-11 19:38:55,016][258224] ConvEncoder: input_channels=3 |
| [2025-06-11 19:38:55,116][258224] Conv encoder output size: 512 |
| [2025-06-11 19:38:55,117][258224] Policy head output size: 512 |
| [2025-06-11 19:38:55,132][258224] Created Actor Critic model with architecture: |
| [2025-06-11 19:38:55,132][258224] ActorCriticSharedWeights( |
|   (obs_normalizer): ObservationNormalizer( |
|     (running_mean_std): RunningMeanStdDictInPlace( |
|       (running_mean_std): ModuleDict( |
|         (obs): RunningMeanStdInPlace() |
|       ) |
|     ) |
|   ) |
|   (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
|   (encoder): VizdoomEncoder( |
|     (basic_encoder): ConvEncoder( |
|       (enc): RecursiveScriptModule( |
|         original_name=ConvEncoderImpl |
|         (conv_head): RecursiveScriptModule( |
|           original_name=Sequential |
|           (0): RecursiveScriptModule(original_name=Conv2d) |
|           (1): RecursiveScriptModule(original_name=ELU) |
|           (2): RecursiveScriptModule(original_name=Conv2d) |
|           (3): RecursiveScriptModule(original_name=ELU) |
|           (4): RecursiveScriptModule(original_name=Conv2d) |
|           (5): RecursiveScriptModule(original_name=ELU) |
|         ) |
|         (mlp_layers): RecursiveScriptModule( |
|           original_name=Sequential |
|           (0): RecursiveScriptModule(original_name=Linear) |
|           (1): RecursiveScriptModule(original_name=ELU) |
|         ) |
|       ) |
|     ) |
|   ) |
|   (core): ModelCoreRNN( |
|     (core): GRU(512, 512) |
|   ) |
|   (decoder): MlpDecoder( |
|     (mlp): Identity() |
|   ) |
|   (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
|   (action_parameterization): ActionParameterizationDefault( |
|     (distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
|   ) |
| ) |
| [2025-06-11 19:38:55,350][258224] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2025-06-11 19:38:56,154][258224] Loading state from checkpoint /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000490_2007040.pth... |
| [2025-06-11 19:38:56,189][258224] Loading model from checkpoint |
| [2025-06-11 19:38:56,190][258224] Loaded experiment state at self.train_step=490, self.env_steps=2007040 |
| [2025-06-11 19:38:56,190][258224] Initialized policy 0 weights for model version 490 |
| [2025-06-11 19:38:56,193][258224] LearnerWorker_p0 finished initialization! |
| [2025-06-11 19:38:56,193][258224] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
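Unlike the first session, the learner now finds checkpoint_000000490_2007040.pth and resumes at train_step=490, env_steps=2007040 instead of starting from scratch. A minimal sketch of inspecting that checkpoint with plain torch.load; the 'train_step' and 'env_steps' key names are inferred from the log message above and may not match Sample Factory's exact checkpoint schema:

```python
import torch

ckpt = ("/home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/"
        "DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/"
        "checkpoint_000000490_2007040.pth")

# Key names are assumptions inferred from "Loaded experiment state at
# self.train_step=490, self.env_steps=2007040"; the real schema may differ.
state = torch.load(ckpt, map_location="cpu")
print(state.get("train_step"), state.get("env_steps"))  # expected: 490 2007040
```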
| [2025-06-11 19:38:56,387][258317] Worker 4 uses CPU cores [8, 9] |
| [2025-06-11 19:38:56,392][258314] Worker 0 uses CPU cores [0, 1] |
| [2025-06-11 19:38:56,418][258319] Worker 6 uses CPU cores [12, 13] |
| [2025-06-11 19:38:56,451][258296] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 19:38:56,451][258296] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
| [2025-06-11 19:38:56,464][258322] Worker 7 uses CPU cores [14, 15] |
| [2025-06-11 19:38:56,464][258296] Num visible devices: 1 |
| [2025-06-11 19:38:56,476][258313] Worker 1 uses CPU cores [2, 3] |
| [2025-06-11 19:38:56,549][258321] Worker 5 uses CPU cores [10, 11] |
| [2025-06-11 19:38:56,597][258296] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 19:38:56,598][258296] RunningMeanStd input shape: (1,) |
| [2025-06-11 19:38:56,606][258316] Worker 3 uses CPU cores [6, 7] |
| [2025-06-11 19:38:56,662][258315] Worker 2 uses CPU cores [4, 5] |
| [2025-06-11 19:38:56,666][258296] ConvEncoder: input_channels=3 |
| [2025-06-11 19:38:56,715][258296] Conv encoder output size: 512 |
| [2025-06-11 19:38:56,715][258296] Policy head output size: 512 |
| [2025-06-11 19:38:56,742][258077] Inference worker 0-0 is ready! |
| [2025-06-11 19:38:56,743][258077] All inference workers are ready! Signal rollout workers to start! |
| [2025-06-11 19:38:56,776][258322] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:38:56,776][258317] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:38:56,776][258314] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:38:56,782][258316] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:38:56,788][258313] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:38:56,788][258319] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:38:56,788][258315] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:38:56,788][258321] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:38:56,956][258317] Decorrelating experience for 0 frames... |
| [2025-06-11 19:38:57,024][258322] Decorrelating experience for 0 frames... |
| [2025-06-11 19:38:57,030][258316] Decorrelating experience for 0 frames... |
| [2025-06-11 19:38:57,030][258321] Decorrelating experience for 0 frames... |
| [2025-06-11 19:38:57,036][258315] Decorrelating experience for 0 frames... |
| [2025-06-11 19:38:57,036][258313] Decorrelating experience for 0 frames... |
| [2025-06-11 19:38:57,037][258319] Decorrelating experience for 0 frames... |
| [2025-06-11 19:38:57,113][258317] Decorrelating experience for 32 frames... |
| [2025-06-11 19:38:57,113][258314] Decorrelating experience for 0 frames... |
| [2025-06-11 19:38:57,200][258313] Decorrelating experience for 32 frames... |
| [2025-06-11 19:38:57,210][258315] Decorrelating experience for 32 frames... |
| [2025-06-11 19:38:57,211][258319] Decorrelating experience for 32 frames... |
| [2025-06-11 19:38:57,220][258316] Decorrelating experience for 32 frames... |
| [2025-06-11 19:38:57,287][258322] Decorrelating experience for 32 frames... |
| [2025-06-11 19:38:57,291][258314] Decorrelating experience for 32 frames... |
| [2025-06-11 19:38:57,333][258317] Decorrelating experience for 64 frames... |
| [2025-06-11 19:38:57,423][258319] Decorrelating experience for 64 frames... |
| [2025-06-11 19:38:57,438][258316] Decorrelating experience for 64 frames... |
| [2025-06-11 19:38:57,438][258321] Decorrelating experience for 32 frames... |
| [2025-06-11 19:38:57,500][258314] Decorrelating experience for 64 frames... |
| [2025-06-11 19:38:57,597][258319] Decorrelating experience for 96 frames... |
| [2025-06-11 19:38:57,604][258313] Decorrelating experience for 64 frames... |
| [2025-06-11 19:38:57,646][258321] Decorrelating experience for 64 frames... |
| [2025-06-11 19:38:57,763][258317] Decorrelating experience for 96 frames... |
| [2025-06-11 19:38:57,763][258316] Decorrelating experience for 96 frames... |
| [2025-06-11 19:38:57,817][258313] Decorrelating experience for 96 frames... |
| [2025-06-11 19:38:57,937][258321] Decorrelating experience for 96 frames... |
| [2025-06-11 19:38:57,945][258322] Decorrelating experience for 64 frames... |
| [2025-06-11 19:38:58,131][258322] Decorrelating experience for 96 frames... |
| [2025-06-11 19:38:58,141][258314] Decorrelating experience for 96 frames... |
| [2025-06-11 19:38:58,353][258224] Signal inference workers to stop experience collection... |
| [2025-06-11 19:38:58,358][258296] InferenceWorker_p0-w0: stopping experience collection |
| [2025-06-11 19:38:58,370][258315] Decorrelating experience for 64 frames... |
| [2025-06-11 19:38:58,538][258315] Decorrelating experience for 96 frames... |
| [2025-06-11 19:38:59,178][258224] Signal inference workers to resume experience collection... |
| [2025-06-11 19:38:59,179][258296] InferenceWorker_p0-w0: resuming experience collection |
| [2025-06-11 19:38:59,739][258077] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 2023424. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2025-06-11 19:38:59,739][258077] Avg episode reward: [(0, '5.474')] |
| [2025-06-11 19:39:00,587][258296] Updated weights for policy 0, policy_version 500 (0.0099) |
| [2025-06-11 19:39:02,085][258296] Updated weights for policy 0, policy_version 510 (0.0009) |
| [2025-06-11 19:39:03,566][258296] Updated weights for policy 0, policy_version 520 (0.0007) |
| [2025-06-11 19:39:04,739][258077] Fps is (10 sec: 27033.3, 60 sec: 27033.3, 300 sec: 27033.3). Total num frames: 2158592. Throughput: 0: 5101.1. Samples: 25506. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
| [2025-06-11 19:39:04,739][258077] Avg episode reward: [(0, '18.715')] |
| [2025-06-11 19:39:04,761][258224] Saving new best policy, reward=18.715! |
| [2025-06-11 19:39:05,085][258296] Updated weights for policy 0, policy_version 530 (0.0007) |
| [2025-06-11 19:39:06,592][258296] Updated weights for policy 0, policy_version 540 (0.0007) |
| [2025-06-11 19:39:08,108][258296] Updated weights for policy 0, policy_version 550 (0.0007) |
| [2025-06-11 19:39:09,633][258296] Updated weights for policy 0, policy_version 560 (0.0007) |
| [2025-06-11 19:39:09,739][258077] Fps is (10 sec: 27033.4, 60 sec: 27033.4, 300 sec: 27033.4). Total num frames: 2293760. Throughput: 0: 6609.4. Samples: 66094. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
| [2025-06-11 19:39:09,739][258077] Avg episode reward: [(0, '20.614')] |
| [2025-06-11 19:39:09,740][258224] Saving new best policy, reward=20.614! |
| [2025-06-11 19:39:11,278][258296] Updated weights for policy 0, policy_version 570 (0.0007) |
| [2025-06-11 19:39:12,804][258296] Updated weights for policy 0, policy_version 580 (0.0007) |
| [2025-06-11 19:39:13,704][258077] Heartbeat connected on Batcher_0 |
| [2025-06-11 19:39:13,764][258077] Heartbeat connected on LearnerWorker_p0 |
| [2025-06-11 19:39:13,769][258077] Heartbeat connected on InferenceWorker_p0-w0 |
| [2025-06-11 19:39:13,770][258077] Heartbeat connected on RolloutWorker_w0 |
| [2025-06-11 19:39:13,773][258077] Heartbeat connected on RolloutWorker_w1 |
| [2025-06-11 19:39:13,776][258077] Heartbeat connected on RolloutWorker_w2 |
| [2025-06-11 19:39:13,779][258077] Heartbeat connected on RolloutWorker_w3 |
| [2025-06-11 19:39:13,781][258077] Heartbeat connected on RolloutWorker_w4 |
| [2025-06-11 19:39:13,784][258077] Heartbeat connected on RolloutWorker_w5 |
| [2025-06-11 19:39:13,787][258077] Heartbeat connected on RolloutWorker_w6 |
| [2025-06-11 19:39:13,789][258077] Heartbeat connected on RolloutWorker_w7 |
| [2025-06-11 19:39:14,346][258296] Updated weights for policy 0, policy_version 590 (0.0007) |
| [2025-06-11 19:39:14,738][258077] Fps is (10 sec: 26624.3, 60 sec: 26760.6, 300 sec: 26760.6). Total num frames: 2424832. Throughput: 0: 5698.0. Samples: 85470. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
| [2025-06-11 19:39:14,739][258077] Avg episode reward: [(0, '19.710')] |
| [2025-06-11 19:39:15,890][258296] Updated weights for policy 0, policy_version 600 (0.0007) |
| [2025-06-11 19:39:17,420][258296] Updated weights for policy 0, policy_version 610 (0.0007) |
| [2025-06-11 19:39:18,958][258296] Updated weights for policy 0, policy_version 620 (0.0007) |
| [2025-06-11 19:39:19,738][258077] Fps is (10 sec: 26624.4, 60 sec: 26828.9, 300 sec: 26828.9). Total num frames: 2560000. Throughput: 0: 6276.0. Samples: 125520. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:39:19,739][258077] Avg episode reward: [(0, '21.066')] |
| [2025-06-11 19:39:19,741][258224] Saving new best policy, reward=21.066! |
| [2025-06-11 19:39:20,514][258296] Updated weights for policy 0, policy_version 630 (0.0007) |
| [2025-06-11 19:39:22,075][258296] Updated weights for policy 0, policy_version 640 (0.0007) |
| [2025-06-11 19:39:23,618][258296] Updated weights for policy 0, policy_version 650 (0.0007) |
| [2025-06-11 19:39:24,739][258077] Fps is (10 sec: 26623.9, 60 sec: 26705.9, 300 sec: 26705.9). Total num frames: 2691072. Throughput: 0: 6608.4. Samples: 165210. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:39:24,739][258077] Avg episode reward: [(0, '19.379')] |
| [2025-06-11 19:39:25,157][258296] Updated weights for policy 0, policy_version 660 (0.0007) |
| [2025-06-11 19:39:26,685][258296] Updated weights for policy 0, policy_version 670 (0.0007) |
| [2025-06-11 19:39:28,256][258296] Updated weights for policy 0, policy_version 680 (0.0007) |
| [2025-06-11 19:39:29,738][258077] Fps is (10 sec: 26214.3, 60 sec: 26624.0, 300 sec: 26624.0). Total num frames: 2822144. Throughput: 0: 6173.9. Samples: 185216. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:39:29,739][258077] Avg episode reward: [(0, '20.191')] |
| [2025-06-11 19:39:29,790][258296] Updated weights for policy 0, policy_version 690 (0.0007) |
| [2025-06-11 19:39:31,335][258296] Updated weights for policy 0, policy_version 700 (0.0008) |
| [2025-06-11 19:39:32,880][258296] Updated weights for policy 0, policy_version 710 (0.0007) |
| [2025-06-11 19:39:34,423][258296] Updated weights for policy 0, policy_version 720 (0.0007) |
| [2025-06-11 19:39:34,738][258077] Fps is (10 sec: 26624.2, 60 sec: 26682.6, 300 sec: 26682.6). Total num frames: 2957312. Throughput: 0: 6423.7. Samples: 224828. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0) |
| [2025-06-11 19:39:34,739][258077] Avg episode reward: [(0, '22.561')] |
| [2025-06-11 19:39:34,739][258224] Saving new best policy, reward=22.561! |
| [2025-06-11 19:39:35,992][258296] Updated weights for policy 0, policy_version 730 (0.0007) |
| [2025-06-11 19:39:37,558][258296] Updated weights for policy 0, policy_version 740 (0.0007) |
| [2025-06-11 19:39:39,111][258296] Updated weights for policy 0, policy_version 750 (0.0007) |
| [2025-06-11 19:39:39,739][258077] Fps is (10 sec: 26623.8, 60 sec: 26624.0, 300 sec: 26624.0). Total num frames: 3088384. Throughput: 0: 6609.0. Samples: 264360. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:39:39,739][258077] Avg episode reward: [(0, '22.660')] |
| [2025-06-11 19:39:39,740][258224] Saving new best policy, reward=22.660! |
| [2025-06-11 19:39:40,672][258296] Updated weights for policy 0, policy_version 760 (0.0007) |
| [2025-06-11 19:39:42,225][258296] Updated weights for policy 0, policy_version 770 (0.0007) |
| [2025-06-11 19:39:43,792][258296] Updated weights for policy 0, policy_version 780 (0.0007) |
| [2025-06-11 19:39:44,738][258077] Fps is (10 sec: 26214.4, 60 sec: 26578.5, 300 sec: 26578.5). Total num frames: 3219456. Throughput: 0: 6311.3. Samples: 284008. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:39:44,739][258077] Avg episode reward: [(0, '22.866')] |
| [2025-06-11 19:39:44,739][258224] Saving new best policy, reward=22.866! |
| [2025-06-11 19:39:45,363][258296] Updated weights for policy 0, policy_version 790 (0.0007) |
| [2025-06-11 19:39:46,936][258296] Updated weights for policy 0, policy_version 800 (0.0007) |
| [2025-06-11 19:39:48,504][258296] Updated weights for policy 0, policy_version 810 (0.0007) |
| [2025-06-11 19:39:49,739][258077] Fps is (10 sec: 25805.0, 60 sec: 26460.2, 300 sec: 26460.2). Total num frames: 3346432. Throughput: 0: 6617.6. Samples: 323298. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:39:49,739][258077] Avg episode reward: [(0, '22.551')] |
| [2025-06-11 19:39:50,056][258296] Updated weights for policy 0, policy_version 820 (0.0007) |
| [2025-06-11 19:39:51,605][258296] Updated weights for policy 0, policy_version 830 (0.0007) |
| [2025-06-11 19:39:53,176][258296] Updated weights for policy 0, policy_version 840 (0.0007) |
| [2025-06-11 19:39:54,738][258077] Fps is (10 sec: 25804.7, 60 sec: 26437.8, 300 sec: 26437.8). Total num frames: 3477504. Throughput: 0: 6589.5. Samples: 362620. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:39:54,739][258077] Avg episode reward: [(0, '19.912')] |
| [2025-06-11 19:39:54,771][258296] Updated weights for policy 0, policy_version 850 (0.0007) |
| [2025-06-11 19:39:56,330][258296] Updated weights for policy 0, policy_version 860 (0.0007) |
| [2025-06-11 19:39:57,898][258296] Updated weights for policy 0, policy_version 870 (0.0007) |
| [2025-06-11 19:39:59,466][258296] Updated weights for policy 0, policy_version 880 (0.0007) |
| [2025-06-11 19:39:59,739][258077] Fps is (10 sec: 26214.4, 60 sec: 26419.2, 300 sec: 26419.2). Total num frames: 3608576. Throughput: 0: 6593.6. Samples: 382184. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:39:59,739][258077] Avg episode reward: [(0, '23.099')] |
| [2025-06-11 19:39:59,741][258224] Saving new best policy, reward=23.099! |
| [2025-06-11 19:40:01,065][258296] Updated weights for policy 0, policy_version 890 (0.0007) |
| [2025-06-11 19:40:02,631][258296] Updated weights for policy 0, policy_version 900 (0.0007) |
| [2025-06-11 19:40:04,197][258296] Updated weights for policy 0, policy_version 910 (0.0008) |
| [2025-06-11 19:40:04,739][258077] Fps is (10 sec: 26214.4, 60 sec: 26351.0, 300 sec: 26403.5). Total num frames: 3739648. Throughput: 0: 6568.7. Samples: 421110. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0) |
| [2025-06-11 19:40:04,739][258077] Avg episode reward: [(0, '19.705')] |
| [2025-06-11 19:40:05,764][258296] Updated weights for policy 0, policy_version 920 (0.0007) |
| [2025-06-11 19:40:07,364][258296] Updated weights for policy 0, policy_version 930 (0.0007) |
| [2025-06-11 19:40:08,967][258296] Updated weights for policy 0, policy_version 940 (0.0007) |
| [2025-06-11 19:40:09,739][258077] Fps is (10 sec: 25804.6, 60 sec: 26214.4, 300 sec: 26331.4). Total num frames: 3866624. Throughput: 0: 6551.9. Samples: 460048. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0) |
| [2025-06-11 19:40:09,739][258077] Avg episode reward: [(0, '22.629')] |
| [2025-06-11 19:40:10,565][258296] Updated weights for policy 0, policy_version 950 (0.0007) |
| [2025-06-11 19:40:12,139][258296] Updated weights for policy 0, policy_version 960 (0.0007) |
| [2025-06-11 19:40:13,717][258296] Updated weights for policy 0, policy_version 970 (0.0007) |
| [2025-06-11 19:40:14,739][258077] Fps is (10 sec: 25804.8, 60 sec: 26214.4, 300 sec: 26323.6). Total num frames: 3997696. Throughput: 0: 6537.8. Samples: 479416. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:40:14,739][258077] Avg episode reward: [(0, '24.448')] |
| [2025-06-11 19:40:14,739][258224] Saving new best policy, reward=24.448! |
| [2025-06-11 19:40:14,985][258077] Component Batcher_0 stopped! |
| [2025-06-11 19:40:14,985][258224] Stopping Batcher_0... |
| [2025-06-11 19:40:14,986][258224] Saving /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000978_4005888.pth... |
| [2025-06-11 19:40:14,986][258224] Loop batcher_evt_loop terminating... |
| [2025-06-11 19:40:15,021][258314] Stopping RolloutWorker_w0... |
| [2025-06-11 19:40:15,021][258317] Stopping RolloutWorker_w4... |
| [2025-06-11 19:40:15,021][258077] Component RolloutWorker_w0 stopped! |
| [2025-06-11 19:40:15,021][258321] Stopping RolloutWorker_w5... |
| [2025-06-11 19:40:15,021][258316] Stopping RolloutWorker_w3... |
| [2025-06-11 19:40:15,021][258322] Stopping RolloutWorker_w7... |
| [2025-06-11 19:40:15,021][258077] Component RolloutWorker_w4 stopped! |
| [2025-06-11 19:40:15,021][258314] Loop rollout_proc0_evt_loop terminating... |
| [2025-06-11 19:40:15,021][258317] Loop rollout_proc4_evt_loop terminating... |
| [2025-06-11 19:40:15,022][258316] Loop rollout_proc3_evt_loop terminating... |
| [2025-06-11 19:40:15,022][258077] Component RolloutWorker_w3 stopped! |
| [2025-06-11 19:40:15,022][258321] Loop rollout_proc5_evt_loop terminating... |
| [2025-06-11 19:40:15,022][258322] Loop rollout_proc7_evt_loop terminating... |
| [2025-06-11 19:40:15,022][258077] Component RolloutWorker_w5 stopped! |
| [2025-06-11 19:40:15,022][258077] Component RolloutWorker_w7 stopped! |
| [2025-06-11 19:40:15,022][258296] Weights refcount: 2 0 |
| [2025-06-11 19:40:15,023][258077] Component RolloutWorker_w6 stopped! |
| [2025-06-11 19:40:15,023][258319] Stopping RolloutWorker_w6... |
| [2025-06-11 19:40:15,023][258319] Loop rollout_proc6_evt_loop terminating... |
| [2025-06-11 19:40:15,023][258296] Stopping InferenceWorker_p0-w0... |
| [2025-06-11 19:40:15,024][258077] Component InferenceWorker_p0-w0 stopped! |
| [2025-06-11 19:40:15,024][258296] Loop inference_proc0-0_evt_loop terminating... |
| [2025-06-11 19:40:15,026][258077] Component RolloutWorker_w1 stopped! |
| [2025-06-11 19:40:15,026][258313] Stopping RolloutWorker_w1... |
| [2025-06-11 19:40:15,027][258313] Loop rollout_proc1_evt_loop terminating... |
| [2025-06-11 19:40:15,027][258077] Component RolloutWorker_w2 stopped! |
| [2025-06-11 19:40:15,027][258315] Stopping RolloutWorker_w2... |
| [2025-06-11 19:40:15,028][258315] Loop rollout_proc2_evt_loop terminating... |
| [2025-06-11 19:40:15,039][258224] Saving /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000978_4005888.pth... |
| [2025-06-11 19:40:15,115][258224] Stopping LearnerWorker_p0... |
| [2025-06-11 19:40:15,115][258224] Loop learner_proc0_evt_loop terminating... |
| [2025-06-11 19:40:15,115][258077] Component LearnerWorker_p0 stopped! |
| [2025-06-11 19:40:15,116][258077] Waiting for process learner_proc0 to stop... |
| [2025-06-11 19:40:15,954][258077] Waiting for process inference_proc0-0 to join... |
| [2025-06-11 19:40:15,956][258077] Waiting for process rollout_proc0 to join... |
| [2025-06-11 19:40:15,956][258077] Waiting for process rollout_proc1 to join... |
| [2025-06-11 19:40:15,956][258077] Waiting for process rollout_proc2 to join... |
| [2025-06-11 19:40:15,957][258077] Waiting for process rollout_proc3 to join... |
| [2025-06-11 19:40:15,957][258077] Waiting for process rollout_proc4 to join... |
| [2025-06-11 19:40:15,957][258077] Waiting for process rollout_proc5 to join... |
| [2025-06-11 19:40:15,957][258077] Waiting for process rollout_proc6 to join... |
| [2025-06-11 19:40:15,957][258077] Waiting for process rollout_proc7 to join... |
| [2025-06-11 19:40:15,957][258077] Batcher 0 profile tree view: |
| batching: 3.2483, releasing_batches: 0.0113 |
| [2025-06-11 19:40:15,957][258077] InferenceWorker_p0-w0 profile tree view: |
| wait_policy: 0.0000 |
| wait_policy_total: 1.4908 |
| update_model: 1.2357 |
| weight_update: 0.0008 |
| one_step: 0.0025 |
| handle_policy_step: 71.6643 |
| deserialize: 2.7464, stack: 0.4336, obs_to_device_normalize: 17.1179, forward: 38.1951, send_messages: 3.2479 |
| prepare_outputs: 7.3213 |
| to_cpu: 4.4621 |
| [2025-06-11 19:40:15,957][258077] Learner 0 profile tree view: |
| misc: 0.0024, prepare_batch: 4.8330 |
| train: 11.1145 |
| epoch_init: 0.0016, minibatch_init: 0.0018, losses_postprocess: 0.0715, kl_divergence: 0.0804, after_optimizer: 0.2072 |
| calculate_losses: 4.2003 |
| losses_init: 0.0008, forward_head: 0.3476, bptt_initial: 3.0037, tail: 0.1751, advantages_returns: 0.0468, losses: 0.2745 |
| bptt: 0.2993 |
| bptt_forward_core: 0.2866 |
| update: 6.4357 |
| clip: 0.2142 |
| [2025-06-11 19:40:15,958][258077] RolloutWorker_w0 profile tree view: |
| wait_for_trajectories: 0.0366, enqueue_policy_requests: 2.3944, env_step: 33.6083, overhead: 2.9035, complete_rollouts: 0.1489 |
| save_policy_outputs: 2.3112 |
| split_output_tensors: 1.1337 |
| [2025-06-11 19:40:15,958][258077] RolloutWorker_w7 profile tree view: |
| wait_for_trajectories: 0.0380, enqueue_policy_requests: 2.4850, env_step: 33.2989, overhead: 2.8466, complete_rollouts: 0.1510 |
| save_policy_outputs: 2.3348 |
| split_output_tensors: 1.1538 |
| [2025-06-11 19:40:15,958][258077] Loop Runner_EvtLoop terminating... |
| [2025-06-11 19:40:15,958][258077] Runner profile tree view: |
| main_loop: 82.1689 |
| [2025-06-11 19:40:15,958][258077] Collected {0: 4005888}, FPS: 24326.1 |
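The final throughput line is consistent with frames collected this session divided by the main-loop wall time from the profile above. This assumes the run resumed from a 2,007,040-frame checkpoint, inferred from the checkpoint_000000490_2007040.pth file that a later run deletes; the log does not state the starting frame count directly.

```python
# Session FPS sketch; the 2,007,040 starting point is an inference.
frames_collected = 4005888 - 2007040   # frames gathered this session
main_loop_seconds = 82.1689            # "Runner profile tree view" above
print(frames_collected / main_loop_seconds)  # ~24326, reported as 24326.1
```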
| [2025-06-11 19:40:15,963][258077] Loading existing experiment configuration from /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/config.json |
| [2025-06-11 19:40:15,963][258077] Overriding arg 'num_workers' with value 1 passed from command line |
| [2025-06-11 19:40:15,963][258077] Adding new argument 'no_render'=True that is not in the saved config file! |
| [2025-06-11 19:40:15,963][258077] Adding new argument 'save_video'=True that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'video_name'=None that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'hf_repository'='PranayPalem/vizdoom_laptop_optimized' that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'policy_index'=0 that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'train_script'=None that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
| [2025-06-11 19:40:15,964][258077] Using frameskip 1 and render_action_repeat=4 for evaluation |
| [2025-06-11 19:40:15,981][258077] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:40:15,983][258077] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 19:40:15,983][258077] RunningMeanStd input shape: (1,) |
| [2025-06-11 19:40:15,992][258077] ConvEncoder: input_channels=3 |
| [2025-06-11 19:40:16,043][258077] Conv encoder output size: 512 |
| [2025-06-11 19:40:16,043][258077] Policy head output size: 512 |
| [2025-06-11 19:40:16,228][258077] Loading state from checkpoint /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000978_4005888.pth... |
| [2025-06-11 19:40:16,648][258077] Num frames 100... |
| [2025-06-11 19:40:16,711][258077] Num frames 200... |
| [2025-06-11 19:40:16,775][258077] Num frames 300... |
| [2025-06-11 19:40:16,842][258077] Num frames 400... |
| [2025-06-11 19:40:16,909][258077] Num frames 500... |
| [2025-06-11 19:40:16,973][258077] Num frames 600... |
| [2025-06-11 19:40:17,048][258077] Num frames 700... |
| [2025-06-11 19:40:17,111][258077] Num frames 800... |
| [2025-06-11 19:40:17,180][258077] Num frames 900... |
| [2025-06-11 19:40:17,246][258077] Num frames 1000... |
| [2025-06-11 19:40:17,312][258077] Num frames 1100... |
| [2025-06-11 19:40:17,381][258077] Num frames 1200... |
| [2025-06-11 19:40:17,450][258077] Num frames 1300... |
| [2025-06-11 19:40:17,518][258077] Num frames 1400... |
| [2025-06-11 19:40:17,587][258077] Num frames 1500... |
| [2025-06-11 19:40:17,688][258077] Avg episode rewards: #0: 34.680, true rewards: #0: 15.680 |
| [2025-06-11 19:40:17,689][258077] Avg episode reward: 34.680, avg true_objective: 15.680 |
| [2025-06-11 19:40:17,715][258077] Num frames 1600... |
| [2025-06-11 19:40:17,779][258077] Num frames 1700... |
| [2025-06-11 19:40:17,840][258077] Num frames 1800... |
| [2025-06-11 19:40:17,901][258077] Num frames 1900... |
| [2025-06-11 19:40:17,963][258077] Num frames 2000... |
| [2025-06-11 19:40:18,027][258077] Num frames 2100... |
| [2025-06-11 19:40:18,091][258077] Num frames 2200... |
| [2025-06-11 19:40:18,153][258077] Num frames 2300... |
| [2025-06-11 19:40:18,215][258077] Num frames 2400... |
| [2025-06-11 19:40:18,287][258077] Num frames 2500... |
| [2025-06-11 19:40:18,398][258077] Avg episode rewards: #0: 27.960, true rewards: #0: 12.960 |
| [2025-06-11 19:40:18,398][258077] Avg episode reward: 27.960, avg true_objective: 12.960 |
| [2025-06-11 19:40:18,404][258077] Num frames 2600... |
| [2025-06-11 19:40:18,465][258077] Num frames 2700... |
| [2025-06-11 19:40:18,528][258077] Num frames 2800... |
| [2025-06-11 19:40:18,591][258077] Num frames 2900... |
| [2025-06-11 19:40:18,652][258077] Num frames 3000... |
| [2025-06-11 19:40:18,715][258077] Num frames 3100... |
| [2025-06-11 19:40:18,779][258077] Num frames 3200... |
| [2025-06-11 19:40:18,831][258077] Avg episode rewards: #0: 22.000, true rewards: #0: 10.667 |
| [2025-06-11 19:40:18,831][258077] Avg episode reward: 22.000, avg true_objective: 10.667 |
| [2025-06-11 19:40:18,894][258077] Num frames 3300... |
| [2025-06-11 19:40:18,956][258077] Num frames 3400... |
| [2025-06-11 19:40:19,021][258077] Num frames 3500... |
| [2025-06-11 19:40:19,083][258077] Num frames 3600... |
| [2025-06-11 19:40:19,147][258077] Num frames 3700... |
| [2025-06-11 19:40:19,215][258077] Num frames 3800... |
| [2025-06-11 19:40:19,316][258077] Avg episode rewards: #0: 20.180, true rewards: #0: 9.680 |
| [2025-06-11 19:40:19,316][258077] Avg episode reward: 20.180, avg true_objective: 9.680 |
| [2025-06-11 19:40:19,334][258077] Num frames 3900... |
| [2025-06-11 19:40:19,390][258077] Num frames 4000... |
| [2025-06-11 19:40:19,455][258077] Num frames 4100... |
| [2025-06-11 19:40:19,526][258077] Num frames 4200... |
| [2025-06-11 19:40:19,589][258077] Num frames 4300... |
| [2025-06-11 19:40:19,650][258077] Num frames 4400... |
| [2025-06-11 19:40:19,734][258077] Avg episode rewards: #0: 18.696, true rewards: #0: 8.896 |
| [2025-06-11 19:40:19,735][258077] Avg episode reward: 18.696, avg true_objective: 8.896 |
| [2025-06-11 19:40:19,770][258077] Num frames 4500... |
| [2025-06-11 19:40:19,830][258077] Num frames 4600... |
| [2025-06-11 19:40:19,897][258077] Num frames 4700... |
| [2025-06-11 19:40:19,960][258077] Num frames 4800... |
| [2025-06-11 19:40:20,023][258077] Num frames 4900... |
| [2025-06-11 19:40:20,085][258077] Num frames 5000... |
| [2025-06-11 19:40:20,149][258077] Num frames 5100... |
| [2025-06-11 19:40:20,210][258077] Num frames 5200... |
| [2025-06-11 19:40:20,273][258077] Num frames 5300... |
| [2025-06-11 19:40:20,334][258077] Num frames 5400... |
| [2025-06-11 19:40:20,395][258077] Num frames 5500... |
| [2025-06-11 19:40:20,469][258077] Num frames 5600... |
| [2025-06-11 19:40:20,532][258077] Num frames 5700... |
| [2025-06-11 19:40:20,595][258077] Num frames 5800... |
| [2025-06-11 19:40:20,657][258077] Num frames 5900... |
| [2025-06-11 19:40:20,718][258077] Num frames 6000... |
| [2025-06-11 19:40:20,780][258077] Num frames 6100... |
| [2025-06-11 19:40:20,844][258077] Num frames 6200... |
| [2025-06-11 19:40:20,907][258077] Num frames 6300... |
| [2025-06-11 19:40:20,968][258077] Num frames 6400... |
| [2025-06-11 19:40:21,029][258077] Num frames 6500... |
| [2025-06-11 19:40:21,112][258077] Avg episode rewards: #0: 26.413, true rewards: #0: 10.913 |
| [2025-06-11 19:40:21,112][258077] Avg episode reward: 26.413, avg true_objective: 10.913 |
| [2025-06-11 19:40:21,147][258077] Num frames 6600... |
| [2025-06-11 19:40:21,209][258077] Num frames 6700... |
| [2025-06-11 19:40:21,268][258077] Num frames 6800... |
| [2025-06-11 19:40:21,329][258077] Num frames 6900... |
| [2025-06-11 19:40:21,391][258077] Num frames 7000... |
| [2025-06-11 19:40:21,453][258077] Num frames 7100... |
| [2025-06-11 19:40:21,515][258077] Num frames 7200... |
| [2025-06-11 19:40:21,583][258077] Num frames 7300... |
| [2025-06-11 19:40:21,646][258077] Num frames 7400... |
| [2025-06-11 19:40:21,715][258077] Num frames 7500... |
| [2025-06-11 19:40:21,780][258077] Num frames 7600... |
| [2025-06-11 19:40:21,843][258077] Num frames 7700... |
| [2025-06-11 19:40:21,905][258077] Num frames 7800... |
| [2025-06-11 19:40:21,968][258077] Num frames 7900... |
| [2025-06-11 19:40:22,033][258077] Num frames 8000... |
| [2025-06-11 19:40:22,094][258077] Num frames 8100... |
| [2025-06-11 19:40:22,159][258077] Num frames 8200... |
| [2025-06-11 19:40:22,262][258077] Avg episode rewards: #0: 28.823, true rewards: #0: 11.823 |
| [2025-06-11 19:40:22,263][258077] Avg episode reward: 28.823, avg true_objective: 11.823 |
| [2025-06-11 19:40:22,279][258077] Num frames 8300... |
| [2025-06-11 19:40:22,339][258077] Num frames 8400... |
| [2025-06-11 19:40:22,402][258077] Num frames 8500... |
| [2025-06-11 19:40:22,463][258077] Num frames 8600... |
| [2025-06-11 19:40:22,524][258077] Num frames 8700... |
| [2025-06-11 19:40:22,585][258077] Num frames 8800... |
| [2025-06-11 19:40:22,646][258077] Num frames 8900... |
| [2025-06-11 19:40:22,709][258077] Num frames 9000... |
| [2025-06-11 19:40:22,770][258077] Num frames 9100... |
| [2025-06-11 19:40:22,832][258077] Num frames 9200... |
| [2025-06-11 19:40:22,936][258077] Avg episode rewards: #0: 28.102, true rewards: #0: 11.602 |
| [2025-06-11 19:40:22,937][258077] Avg episode reward: 28.102, avg true_objective: 11.602 |
| [2025-06-11 19:40:22,948][258077] Num frames 9300... |
| [2025-06-11 19:40:23,011][258077] Num frames 9400... |
| [2025-06-11 19:40:23,074][258077] Num frames 9500... |
| [2025-06-11 19:40:23,136][258077] Num frames 9600... |
| [2025-06-11 19:40:23,199][258077] Num frames 9700... |
| [2025-06-11 19:40:23,262][258077] Num frames 9800... |
| [2025-06-11 19:40:23,326][258077] Num frames 9900... |
| [2025-06-11 19:40:23,390][258077] Num frames 10000... |
| [2025-06-11 19:40:23,454][258077] Num frames 10100... |
| [2025-06-11 19:40:23,516][258077] Num frames 10200... |
| [2025-06-11 19:40:23,578][258077] Num frames 10300... |
| [2025-06-11 19:40:23,642][258077] Num frames 10400... |
| [2025-06-11 19:40:23,706][258077] Num frames 10500... |
| [2025-06-11 19:40:23,771][258077] Num frames 10600... |
| [2025-06-11 19:40:23,843][258077] Num frames 10700... |
| [2025-06-11 19:40:23,904][258077] Num frames 10800... |
| [2025-06-11 19:40:23,968][258077] Num frames 10900... |
| [2025-06-11 19:40:24,033][258077] Num frames 11000... |
| [2025-06-11 19:40:24,095][258077] Num frames 11100... |
| [2025-06-11 19:40:24,158][258077] Num frames 11200... |
| [2025-06-11 19:40:24,219][258077] Num frames 11300... |
| [2025-06-11 19:40:24,325][258077] Avg episode rewards: #0: 31.869, true rewards: #0: 12.647 |
| [2025-06-11 19:40:24,325][258077] Avg episode reward: 31.869, avg true_objective: 12.647 |
| [2025-06-11 19:40:24,336][258077] Num frames 11400... |
| [2025-06-11 19:40:24,400][258077] Num frames 11500... |
| [2025-06-11 19:40:24,462][258077] Num frames 11600... |
| [2025-06-11 19:40:24,522][258077] Num frames 11700... |
| [2025-06-11 19:40:24,583][258077] Num frames 11800... |
| [2025-06-11 19:40:24,645][258077] Num frames 11900... |
| [2025-06-11 19:40:24,709][258077] Num frames 12000... |
| [2025-06-11 19:40:24,771][258077] Num frames 12100... |
| [2025-06-11 19:40:24,834][258077] Num frames 12200... |
| [2025-06-11 19:40:24,939][258077] Avg episode rewards: #0: 30.584, true rewards: #0: 12.284 |
| [2025-06-11 19:40:24,940][258077] Avg episode reward: 30.584, avg true_objective: 12.284 |
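The evaluation prints running averages after each of the ten episodes, so the per-episode true rewards can be recovered from consecutive averages. They closely track episode length (roughly frames/100), which is consistent with a survival-style objective for this DoomHealth experiment, while the plain episode reward also includes shaping. A small sketch using the ten averages printed above:

```python
# Recover per-episode true rewards from the cumulative averages above.
cum_avg = [15.680, 12.960, 10.667, 9.680, 8.896,
           10.913, 11.823, 11.602, 12.647, 12.284]
prev = [0.0] + cum_avg[:-1]
per_episode = [n * a - (n - 1) * p
               for n, (a, p) in enumerate(zip(cum_avg, prev), start=1)]
print([round(r, 2) for r in per_episode])
# ~[15.68, 10.24, 6.08, 6.72, 5.76, 21.0, 17.28, 10.06, 21.01, 9.02]
```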
| [2025-06-11 19:40:37,835][258077] Replay video saved to /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/replay.mp4! |
| [2025-06-11 19:40:56,339][258077] The model has been pushed to https://huggingface.co/PranayPalem/vizdoom_laptop_optimized |
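Since the push target is an ordinary Hugging Face model repo, the uploaded artifacts (checkpoint, config.json, replay.mp4) can be pulled back with the standard hub client. This is a generic sketch, not a sample-factory-specific command, and the local_dir is an arbitrary choice:

```python
from huggingface_hub import snapshot_download

# Fetch the pushed experiment directory into a local folder.
path = snapshot_download(
    repo_id="PranayPalem/vizdoom_laptop_optimized",
    local_dir="train_dir/vizdoom_laptop_optimized",
)
print(path)
```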
| [2025-06-11 19:45:03,842][265682] Saving configuration to /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/config.json... |
| [2025-06-11 19:45:03,842][265682] Rollout worker 0 uses device cpu |
| [2025-06-11 19:45:03,842][265682] Rollout worker 1 uses device cpu |
| [2025-06-11 19:45:03,842][265682] Rollout worker 2 uses device cpu |
| [2025-06-11 19:45:03,842][265682] Rollout worker 3 uses device cpu |
| [2025-06-11 19:45:03,843][265682] Rollout worker 4 uses device cpu |
| [2025-06-11 19:45:03,843][265682] Rollout worker 5 uses device cpu |
| [2025-06-11 19:45:03,843][265682] Rollout worker 6 uses device cpu |
| [2025-06-11 19:45:03,843][265682] Rollout worker 7 uses device cpu |
| [2025-06-11 19:45:03,940][265682] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 19:45:03,940][265682] InferenceWorker_p0-w0: min num requests: 2 |
| [2025-06-11 19:45:03,964][265682] Starting all processes... |
| [2025-06-11 19:45:03,964][265682] Starting process learner_proc0 |
| [2025-06-11 19:45:04,930][265682] Starting all processes... |
| [2025-06-11 19:45:04,933][265832] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 19:45:04,934][265832] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0 |
| [2025-06-11 19:45:04,935][265682] Starting process inference_proc0-0 |
| [2025-06-11 19:45:04,935][265682] Starting process rollout_proc0 |
| [2025-06-11 19:45:04,946][265832] Num visible devices: 1 |
| [2025-06-11 19:45:04,952][265832] Setting fixed seed 3333 |
| [2025-06-11 19:45:04,939][265682] Starting process rollout_proc1 |
| [2025-06-11 19:45:04,953][265832] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 19:45:04,953][265832] Initializing actor-critic model on device cuda:0 |
| [2025-06-11 19:45:04,953][265832] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 19:45:04,954][265832] RunningMeanStd input shape: (1,) |
| [2025-06-11 19:45:04,947][265682] Starting process rollout_proc2 |
| [2025-06-11 19:45:04,947][265682] Starting process rollout_proc3 |
| [2025-06-11 19:45:04,947][265682] Starting process rollout_proc4 |
| [2025-06-11 19:45:04,952][265682] Starting process rollout_proc5 |
| [2025-06-11 19:45:04,961][265682] Starting process rollout_proc6 |
| [2025-06-11 19:45:04,967][265682] Starting process rollout_proc7 |
| [2025-06-11 19:45:05,022][265832] ConvEncoder: input_channels=3 |
| [2025-06-11 19:45:05,092][265832] Conv encoder output size: 512 |
| [2025-06-11 19:45:05,092][265832] Policy head output size: 512 |
| [2025-06-11 19:45:05,100][265832] Created Actor Critic model with architecture: |
| [2025-06-11 19:45:05,100][265832] ActorCriticSharedWeights( |
| (obs_normalizer): ObservationNormalizer( |
| (running_mean_std): RunningMeanStdDictInPlace( |
| (running_mean_std): ModuleDict( |
| (obs): RunningMeanStdInPlace() |
| ) |
| ) |
| ) |
| (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace) |
| (encoder): VizdoomEncoder( |
| (basic_encoder): ConvEncoder( |
| (enc): RecursiveScriptModule( |
| original_name=ConvEncoderImpl |
| (conv_head): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Conv2d) |
| (1): RecursiveScriptModule(original_name=ELU) |
| (2): RecursiveScriptModule(original_name=Conv2d) |
| (3): RecursiveScriptModule(original_name=ELU) |
| (4): RecursiveScriptModule(original_name=Conv2d) |
| (5): RecursiveScriptModule(original_name=ELU) |
| ) |
| (mlp_layers): RecursiveScriptModule( |
| original_name=Sequential |
| (0): RecursiveScriptModule(original_name=Linear) |
| (1): RecursiveScriptModule(original_name=ELU) |
| ) |
| ) |
| ) |
| ) |
| (core): ModelCoreRNN( |
| (core): GRU(512, 512) |
| ) |
| (decoder): MlpDecoder( |
| (mlp): Identity() |
| ) |
| (critic_linear): Linear(in_features=512, out_features=1, bias=True) |
| (action_parameterization): ActionParameterizationDefault( |
| (distribution_linear): Linear(in_features=512, out_features=5, bias=True) |
| ) |
| ) |
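The printed head amounts to a 512-d recurrent core feeding two linear layers: one value estimate and five action logits. A shape-equivalent skeleton in plain PyTorch (illustrative only, not the sample-factory classes):

```python
import torch
from torch import nn

core_out = torch.zeros(1, 512)            # stand-in for the GRU output
critic_linear = nn.Linear(512, 1)         # value head, as printed above
distribution_linear = nn.Linear(512, 5)   # 5 discrete-action logits
print(critic_linear(core_out).shape, distribution_linear(core_out).shape)
# torch.Size([1, 1]) torch.Size([1, 5])
```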
| [2025-06-11 19:45:05,282][265832] Using optimizer <class 'torch.optim.adam.Adam'> |
| [2025-06-11 19:45:05,891][265832] Loading state from checkpoint /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000978_4005888.pth... |
| [2025-06-11 19:45:05,916][265832] Loading model from checkpoint |
| [2025-06-11 19:45:05,917][265832] Loaded experiment state at self.train_step=978, self.env_steps=4005888 |
| [2025-06-11 19:45:05,917][265832] Initialized policy 0 weights for model version 978 |
| [2025-06-11 19:45:05,919][265832] LearnerWorker_p0 finished initialization! |
| [2025-06-11 19:45:05,919][265832] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
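Note that the resumed counters match the checkpoint filename: checkpoint_000000978_4005888.pth encodes train_step=978 and env_steps=4005888, and 4005888 / 978 = 4096 exactly, i.e. each training step here consumed a 4096-frame batch (inferred from the numbers, not stated in the log). A small parsing sketch:

```python
import re

# Decode "<train_step>_<env_steps>" from the checkpoint filename.
name = "checkpoint_000000978_4005888.pth"
step, env_steps = map(int, re.match(r"checkpoint_(\d+)_(\d+)\.pth", name).groups())
print(step, env_steps, env_steps // step)  # 978 4005888 4096
```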
| [2025-06-11 19:45:06,324][265879] Worker 1 uses CPU cores [2, 3] |
| [2025-06-11 19:45:06,328][265878] Worker 0 uses CPU cores [0, 1] |
| [2025-06-11 19:45:06,397][265888] Worker 6 uses CPU cores [12, 13] |
| [2025-06-11 19:45:06,411][265884] Worker 5 uses CPU cores [10, 11] |
| [2025-06-11 19:45:06,428][265881] Worker 3 uses CPU cores [6, 7] |
| [2025-06-11 19:45:06,437][265880] Worker 2 uses CPU cores [4, 5] |
| [2025-06-11 19:45:06,499][265889] Worker 7 uses CPU cores [14, 15] |
| [2025-06-11 19:45:06,563][265861] Using GPUs [0] for process 0 (actually maps to GPUs [0]) |
| [2025-06-11 19:45:06,563][265861] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0 |
| [2025-06-11 19:45:06,576][265861] Num visible devices: 1 |
| [2025-06-11 19:45:06,611][265682] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0) |
| [2025-06-11 19:45:06,612][265882] Worker 4 uses CPU cores [8, 9] |
| [2025-06-11 19:45:06,711][265861] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 19:45:06,711][265861] RunningMeanStd input shape: (1,) |
| [2025-06-11 19:45:06,770][265861] ConvEncoder: input_channels=3 |
| [2025-06-11 19:45:06,818][265861] Conv encoder output size: 512 |
| [2025-06-11 19:45:06,818][265861] Policy head output size: 512 |
| [2025-06-11 19:45:06,844][265682] Inference worker 0-0 is ready! |
| [2025-06-11 19:45:06,844][265682] All inference workers are ready! Signal rollout workers to start! |
| [2025-06-11 19:45:06,877][265880] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:45:06,879][265888] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:45:06,889][265878] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:45:06,890][265889] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:45:06,890][265881] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:45:06,890][265879] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:45:06,890][265884] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:45:06,896][265882] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:45:07,048][265888] Decorrelating experience for 0 frames... |
| [2025-06-11 19:45:07,048][265880] Decorrelating experience for 0 frames... |
| [2025-06-11 19:45:07,117][265882] Decorrelating experience for 0 frames... |
| [2025-06-11 19:45:07,117][265878] Decorrelating experience for 0 frames... |
| [2025-06-11 19:45:07,117][265889] Decorrelating experience for 0 frames... |
| [2025-06-11 19:45:07,203][265888] Decorrelating experience for 32 frames... |
| [2025-06-11 19:45:07,223][265881] Decorrelating experience for 0 frames... |
| [2025-06-11 19:45:07,232][265879] Decorrelating experience for 0 frames... |
| [2025-06-11 19:45:07,281][265878] Decorrelating experience for 32 frames... |
| [2025-06-11 19:45:07,282][265882] Decorrelating experience for 32 frames... |
| [2025-06-11 19:45:07,381][265879] Decorrelating experience for 32 frames... |
| [2025-06-11 19:45:07,434][265888] Decorrelating experience for 64 frames... |
| [2025-06-11 19:45:07,477][265878] Decorrelating experience for 64 frames... |
| [2025-06-11 19:45:07,488][265880] Decorrelating experience for 32 frames... |
| [2025-06-11 19:45:07,595][265882] Decorrelating experience for 64 frames... |
| [2025-06-11 19:45:07,628][265888] Decorrelating experience for 96 frames... |
| [2025-06-11 19:45:07,637][265879] Decorrelating experience for 64 frames... |
| [2025-06-11 19:45:07,669][265878] Decorrelating experience for 96 frames... |
| [2025-06-11 19:45:07,686][265889] Decorrelating experience for 32 frames... |
| [2025-06-11 19:45:07,706][265880] Decorrelating experience for 64 frames... |
| [2025-06-11 19:45:07,825][265882] Decorrelating experience for 96 frames... |
| [2025-06-11 19:45:07,867][265879] Decorrelating experience for 96 frames... |
| [2025-06-11 19:45:07,879][265881] Decorrelating experience for 32 frames... |
| [2025-06-11 19:45:07,904][265880] Decorrelating experience for 96 frames... |
| [2025-06-11 19:45:08,046][265884] Decorrelating experience for 0 frames... |
| [2025-06-11 19:45:08,051][265889] Decorrelating experience for 64 frames... |
| [2025-06-11 19:45:08,100][265881] Decorrelating experience for 64 frames... |
| [2025-06-11 19:45:08,231][265884] Decorrelating experience for 32 frames... |
| [2025-06-11 19:45:08,316][265881] Decorrelating experience for 96 frames... |
| [2025-06-11 19:45:08,327][265889] Decorrelating experience for 96 frames... |
| [2025-06-11 19:45:08,507][265832] Signal inference workers to stop experience collection... |
| [2025-06-11 19:45:08,510][265861] InferenceWorker_p0-w0: stopping experience collection |
| [2025-06-11 19:45:08,522][265884] Decorrelating experience for 64 frames... |
| [2025-06-11 19:45:08,687][265884] Decorrelating experience for 96 frames... |
| [2025-06-11 19:45:09,225][265832] Signal inference workers to resume experience collection... |
| [2025-06-11 19:45:09,226][265861] InferenceWorker_p0-w0: resuming experience collection |
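The "Decorrelating experience for N frames..." lines show each worker taking a short random-action warm-up, reported in 32-frame blocks, before collection proper begins, so the eight workers' episodes start out of phase. A minimal illustration of the idea, not sample-factory's actual implementation:

```python
def decorrelate(env, total_frames: int = 96, block: int = 32) -> None:
    # Illustrative warm-up: random actions with 32-frame progress
    # reports, mirroring the log lines above.
    env.reset()
    for frame in range(total_frames + 1):
        if frame % block == 0:
            print(f"Decorrelating experience for {frame} frames...")
        if frame < total_frames:
            env.step(env.action_space.sample())
```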
| [2025-06-11 19:45:10,069][265682] Fps is (10 sec: 7105.5, 60 sec: 7105.5, 300 sec: 7105.5). Total num frames: 4030464. Throughput: 0: 1728.4. Samples: 5978. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0) |
| [2025-06-11 19:45:10,070][265682] Avg episode reward: [(0, '10.514')] |
| [2025-06-11 19:45:10,597][265861] Updated weights for policy 0, policy_version 988 (0.0096) |
| [2025-06-11 19:45:12,094][265861] Updated weights for policy 0, policy_version 998 (0.0007) |
| [2025-06-11 19:45:13,552][265861] Updated weights for policy 0, policy_version 1008 (0.0007) |
| [2025-06-11 19:45:15,043][265861] Updated weights for policy 0, policy_version 1018 (0.0007) |
| [2025-06-11 19:45:15,069][265682] Fps is (10 sec: 19369.3, 60 sec: 19369.3, 300 sec: 19369.3). Total num frames: 4169728. Throughput: 0: 3143.0. Samples: 26586. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:45:15,070][265682] Avg episode reward: [(0, '27.602')] |
| [2025-06-11 19:45:15,070][265832] Saving new best policy, reward=27.602! |
| [2025-06-11 19:45:16,572][265861] Updated weights for policy 0, policy_version 1028 (0.0007) |
| [2025-06-11 19:45:18,066][265861] Updated weights for policy 0, policy_version 1038 (0.0007) |
| [2025-06-11 19:45:19,567][265861] Updated weights for policy 0, policy_version 1048 (0.0008) |
| [2025-06-11 19:45:20,069][265682] Fps is (10 sec: 27443.2, 60 sec: 22216.6, 300 sec: 22216.6). Total num frames: 4304896. Throughput: 0: 5025.9. Samples: 67642. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:45:20,070][265682] Avg episode reward: [(0, '24.169')] |
| [2025-06-11 19:45:21,076][265861] Updated weights for policy 0, policy_version 1058 (0.0007) |
| [2025-06-11 19:45:22,645][265861] Updated weights for policy 0, policy_version 1068 (0.0007) |
| [2025-06-11 19:45:23,873][265682] Heartbeat connected on Batcher_0 |
| [2025-06-11 19:45:23,936][265682] Heartbeat connected on LearnerWorker_p0 |
| [2025-06-11 19:45:23,943][265682] Heartbeat connected on InferenceWorker_p0-w0 |
| [2025-06-11 19:45:23,944][265682] Heartbeat connected on RolloutWorker_w0 |
| [2025-06-11 19:45:23,946][265682] Heartbeat connected on RolloutWorker_w1 |
| [2025-06-11 19:45:23,948][265682] Heartbeat connected on RolloutWorker_w2 |
| [2025-06-11 19:45:23,952][265682] Heartbeat connected on RolloutWorker_w3 |
| [2025-06-11 19:45:23,956][265682] Heartbeat connected on RolloutWorker_w4 |
| [2025-06-11 19:45:23,958][265682] Heartbeat connected on RolloutWorker_w5 |
| [2025-06-11 19:45:23,962][265682] Heartbeat connected on RolloutWorker_w6 |
| [2025-06-11 19:45:23,964][265682] Heartbeat connected on RolloutWorker_w7 |
| [2025-06-11 19:45:24,175][265861] Updated weights for policy 0, policy_version 1078 (0.0007) |
| [2025-06-11 19:45:25,069][265682] Fps is (10 sec: 26624.2, 60 sec: 23299.6, 300 sec: 23299.6). Total num frames: 4435968. Throughput: 0: 5842.9. Samples: 107852. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:45:25,070][265682] Avg episode reward: [(0, '26.346')] |
| [2025-06-11 19:45:25,707][265861] Updated weights for policy 0, policy_version 1088 (0.0007) |
| [2025-06-11 19:45:27,226][265861] Updated weights for policy 0, policy_version 1098 (0.0007) |
| [2025-06-11 19:45:28,754][265861] Updated weights for policy 0, policy_version 1108 (0.0008) |
| [2025-06-11 19:45:30,069][265682] Fps is (10 sec: 26623.9, 60 sec: 24095.4, 300 sec: 24095.4). Total num frames: 4571136. Throughput: 0: 5450.1. Samples: 127852. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:45:30,070][265682] Avg episode reward: [(0, '24.510')] |
| [2025-06-11 19:45:30,289][265861] Updated weights for policy 0, policy_version 1118 (0.0007) |
| [2025-06-11 19:45:31,820][265861] Updated weights for policy 0, policy_version 1128 (0.0007) |
| [2025-06-11 19:45:33,346][265861] Updated weights for policy 0, policy_version 1138 (0.0007) |
| [2025-06-11 19:45:34,871][265861] Updated weights for policy 0, policy_version 1148 (0.0007) |
| [2025-06-11 19:45:35,069][265682] Fps is (10 sec: 27033.5, 60 sec: 24611.7, 300 sec: 24611.7). Total num frames: 4706304. Throughput: 0: 5902.9. Samples: 167990. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:45:35,070][265682] Avg episode reward: [(0, '23.769')] |
| [2025-06-11 19:45:36,426][265861] Updated weights for policy 0, policy_version 1158 (0.0008) |
| [2025-06-11 19:45:37,960][265861] Updated weights for policy 0, policy_version 1168 (0.0007) |
| [2025-06-11 19:45:39,504][265861] Updated weights for policy 0, policy_version 1178 (0.0008) |
| [2025-06-11 19:45:40,069][265682] Fps is (10 sec: 26624.2, 60 sec: 24851.2, 300 sec: 24851.2). Total num frames: 4837376. Throughput: 0: 6210.7. Samples: 207802. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 19:45:40,070][265682] Avg episode reward: [(0, '21.887')] |
| [2025-06-11 19:45:41,046][265861] Updated weights for policy 0, policy_version 1188 (0.0007) |
| [2025-06-11 19:45:42,576][265861] Updated weights for policy 0, policy_version 1198 (0.0007) |
| [2025-06-11 19:45:44,102][265861] Updated weights for policy 0, policy_version 1208 (0.0007) |
| [2025-06-11 19:45:45,069][265682] Fps is (10 sec: 26624.1, 60 sec: 25134.9, 300 sec: 25134.9). Total num frames: 4972544. Throughput: 0: 5925.3. Samples: 227880. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 19:45:45,070][265682] Avg episode reward: [(0, '25.230')] |
| [2025-06-11 19:45:45,668][265861] Updated weights for policy 0, policy_version 1218 (0.0007) |
| [2025-06-11 19:45:47,199][265861] Updated weights for policy 0, policy_version 1228 (0.0008) |
| [2025-06-11 19:45:48,767][265861] Updated weights for policy 0, policy_version 1238 (0.0007) |
| [2025-06-11 19:45:50,069][265682] Fps is (10 sec: 26624.1, 60 sec: 25259.1, 300 sec: 25259.1). Total num frames: 5103616. Throughput: 0: 6155.5. Samples: 267512. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 19:45:50,070][265682] Avg episode reward: [(0, '23.497')] |
| [2025-06-11 19:45:50,322][265861] Updated weights for policy 0, policy_version 1248 (0.0008) |
| [2025-06-11 19:45:51,870][265861] Updated weights for policy 0, policy_version 1258 (0.0007) |
| [2025-06-11 19:45:53,421][265861] Updated weights for policy 0, policy_version 1268 (0.0007) |
| [2025-06-11 19:45:54,997][265861] Updated weights for policy 0, policy_version 1278 (0.0007) |
| [2025-06-11 19:45:55,069][265682] Fps is (10 sec: 26214.2, 60 sec: 25357.6, 300 sec: 25357.6). Total num frames: 5234688. Throughput: 0: 6687.5. Samples: 306914. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:45:55,070][265682] Avg episode reward: [(0, '26.803')] |
| [2025-06-11 19:45:56,545][265861] Updated weights for policy 0, policy_version 1288 (0.0007) |
| [2025-06-11 19:45:58,103][265861] Updated weights for policy 0, policy_version 1298 (0.0007) |
| [2025-06-11 19:45:59,665][265861] Updated weights for policy 0, policy_version 1308 (0.0008) |
| [2025-06-11 19:46:00,069][265682] Fps is (10 sec: 26214.3, 60 sec: 25437.8, 300 sec: 25437.8). Total num frames: 5365760. Throughput: 0: 6669.1. Samples: 326694. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:46:00,070][265682] Avg episode reward: [(0, '25.598')] |
| [2025-06-11 19:46:01,227][265861] Updated weights for policy 0, policy_version 1318 (0.0007) |
| [2025-06-11 19:46:02,784][265861] Updated weights for policy 0, policy_version 1328 (0.0008) |
| [2025-06-11 19:46:04,359][265861] Updated weights for policy 0, policy_version 1338 (0.0008) |
| [2025-06-11 19:46:05,069][265682] Fps is (10 sec: 26214.6, 60 sec: 25504.2, 300 sec: 25504.2). Total num frames: 5496832. Throughput: 0: 6631.9. Samples: 366078. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:46:05,070][265682] Avg episode reward: [(0, '27.674')] |
| [2025-06-11 19:46:05,072][265832] Saving new best policy, reward=27.674! |
| [2025-06-11 19:46:05,951][265861] Updated weights for policy 0, policy_version 1348 (0.0007) |
| [2025-06-11 19:46:07,505][265861] Updated weights for policy 0, policy_version 1358 (0.0008) |
| [2025-06-11 19:46:09,076][265861] Updated weights for policy 0, policy_version 1368 (0.0007) |
| [2025-06-11 19:46:10,069][265682] Fps is (10 sec: 26214.1, 60 sec: 26624.0, 300 sec: 25560.1). Total num frames: 5627904. Throughput: 0: 6606.7. Samples: 405156. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:46:10,070][265682] Avg episode reward: [(0, '23.448')] |
| [2025-06-11 19:46:10,635][265861] Updated weights for policy 0, policy_version 1378 (0.0007) |
| [2025-06-11 19:46:12,197][265861] Updated weights for policy 0, policy_version 1388 (0.0008) |
| [2025-06-11 19:46:13,739][265861] Updated weights for policy 0, policy_version 1398 (0.0008) |
| [2025-06-11 19:46:15,069][265682] Fps is (10 sec: 26214.2, 60 sec: 26487.4, 300 sec: 25607.9). Total num frames: 5758976. Throughput: 0: 6599.2. Samples: 424814. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:46:15,070][265682] Avg episode reward: [(0, '24.711')] |
| [2025-06-11 19:46:15,309][265861] Updated weights for policy 0, policy_version 1408 (0.0007) |
| [2025-06-11 19:46:16,887][265861] Updated weights for policy 0, policy_version 1418 (0.0007) |
| [2025-06-11 19:46:18,446][265861] Updated weights for policy 0, policy_version 1428 (0.0008) |
| [2025-06-11 19:46:20,009][265861] Updated weights for policy 0, policy_version 1438 (0.0008) |
| [2025-06-11 19:46:20,069][265682] Fps is (10 sec: 26214.5, 60 sec: 26419.2, 300 sec: 25649.2). Total num frames: 5890048. Throughput: 0: 6581.0. Samples: 464136. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 19:46:20,070][265682] Avg episode reward: [(0, '26.477')] |
| [2025-06-11 19:46:21,577][265861] Updated weights for policy 0, policy_version 1448 (0.0007) |
| [2025-06-11 19:46:23,126][265861] Updated weights for policy 0, policy_version 1458 (0.0007) |
| [2025-06-11 19:46:24,691][265861] Updated weights for policy 0, policy_version 1468 (0.0008) |
| [2025-06-11 19:46:25,069][265682] Fps is (10 sec: 26214.6, 60 sec: 26419.2, 300 sec: 25685.3). Total num frames: 6021120. Throughput: 0: 6569.6. Samples: 503434. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:46:25,070][265682] Avg episode reward: [(0, '27.070')] |
| [2025-06-11 19:46:26,257][265861] Updated weights for policy 0, policy_version 1478 (0.0007) |
| [2025-06-11 19:46:27,818][265861] Updated weights for policy 0, policy_version 1488 (0.0008) |
| [2025-06-11 19:46:29,373][265861] Updated weights for policy 0, policy_version 1498 (0.0007) |
| [2025-06-11 19:46:30,069][265682] Fps is (10 sec: 26214.5, 60 sec: 26351.0, 300 sec: 25716.9). Total num frames: 6152192. Throughput: 0: 6561.1. Samples: 523128. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 19:46:30,070][265682] Avg episode reward: [(0, '25.782')] |
| [2025-06-11 19:46:30,930][265861] Updated weights for policy 0, policy_version 1508 (0.0007) |
| [2025-06-11 19:46:32,481][265861] Updated weights for policy 0, policy_version 1518 (0.0007) |
| [2025-06-11 19:46:34,033][265861] Updated weights for policy 0, policy_version 1528 (0.0008) |
| [2025-06-11 19:46:35,069][265682] Fps is (10 sec: 26214.5, 60 sec: 26282.7, 300 sec: 25745.1). Total num frames: 6283264. Throughput: 0: 6559.8. Samples: 562704. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:46:35,070][265682] Avg episode reward: [(0, '27.230')] |
| [2025-06-11 19:46:35,596][265861] Updated weights for policy 0, policy_version 1538 (0.0007) |
| [2025-06-11 19:46:37,158][265861] Updated weights for policy 0, policy_version 1548 (0.0007) |
| [2025-06-11 19:46:38,729][265861] Updated weights for policy 0, policy_version 1558 (0.0007) |
| [2025-06-11 19:46:40,069][265682] Fps is (10 sec: 26214.4, 60 sec: 26282.7, 300 sec: 25770.2). Total num frames: 6414336. Throughput: 0: 6553.6. Samples: 601828. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 19:46:40,070][265682] Avg episode reward: [(0, '27.025')] |
| [2025-06-11 19:46:40,309][265861] Updated weights for policy 0, policy_version 1568 (0.0007) |
| [2025-06-11 19:46:41,872][265861] Updated weights for policy 0, policy_version 1578 (0.0007) |
| [2025-06-11 19:46:43,420][265861] Updated weights for policy 0, policy_version 1588 (0.0007) |
| [2025-06-11 19:46:44,984][265861] Updated weights for policy 0, policy_version 1598 (0.0007) |
| [2025-06-11 19:46:45,069][265682] Fps is (10 sec: 26214.3, 60 sec: 26214.4, 300 sec: 25792.7). Total num frames: 6545408. Throughput: 0: 6553.2. Samples: 621588. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 19:46:45,070][265682] Avg episode reward: [(0, '23.233')] |
| [2025-06-11 19:46:46,556][265861] Updated weights for policy 0, policy_version 1608 (0.0008) |
| [2025-06-11 19:46:48,121][265861] Updated weights for policy 0, policy_version 1618 (0.0007) |
| [2025-06-11 19:46:49,676][265861] Updated weights for policy 0, policy_version 1628 (0.0007) |
| [2025-06-11 19:46:50,069][265682] Fps is (10 sec: 26214.6, 60 sec: 26214.4, 300 sec: 25813.1). Total num frames: 6676480. Throughput: 0: 6551.7. Samples: 660906. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:46:50,070][265682] Avg episode reward: [(0, '26.102')] |
| [2025-06-11 19:46:51,262][265861] Updated weights for policy 0, policy_version 1638 (0.0007) |
| [2025-06-11 19:46:52,821][265861] Updated weights for policy 0, policy_version 1648 (0.0008) |
| [2025-06-11 19:46:54,390][265861] Updated weights for policy 0, policy_version 1658 (0.0008) |
| [2025-06-11 19:46:55,069][265682] Fps is (10 sec: 26214.4, 60 sec: 26214.4, 300 sec: 25831.6). Total num frames: 6807552. Throughput: 0: 6553.6. Samples: 700068. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:46:55,070][265682] Avg episode reward: [(0, '27.351')] |
| [2025-06-11 19:46:55,977][265861] Updated weights for policy 0, policy_version 1668 (0.0008) |
| [2025-06-11 19:46:57,533][265861] Updated weights for policy 0, policy_version 1678 (0.0008) |
| [2025-06-11 19:46:59,111][265861] Updated weights for policy 0, policy_version 1688 (0.0007) |
| [2025-06-11 19:47:00,069][265682] Fps is (10 sec: 26214.2, 60 sec: 26214.4, 300 sec: 25848.5). Total num frames: 6938624. Throughput: 0: 6550.1. Samples: 719570. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:47:00,070][265682] Avg episode reward: [(0, '26.827')] |
| [2025-06-11 19:47:00,072][265832] Saving /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000001694_6938624.pth... |
| [2025-06-11 19:47:00,118][265832] Removing /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000490_2007040.pth |
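Each periodic save is paired with removal of the oldest file, i.e. a rolling retention window over checkpoint_p0/ (here checkpoint_000000490_2007040.pth gives way to checkpoint_000001694_6938624.pth). A minimal sketch of that policy; the keep count of 2 is an assumption, and the zero-padded train_step makes lexicographic order chronological:

```python
from pathlib import Path

def prune_checkpoints(ckpt_dir: str, keep: int = 2) -> None:
    # Keep only the `keep` newest checkpoints (assumed count).
    ckpts = sorted(Path(ckpt_dir).glob("checkpoint_*.pth"))
    for old in ckpts[:-keep]:
        old.unlink()
```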
| [2025-06-11 19:47:00,708][265861] Updated weights for policy 0, policy_version 1698 (0.0007) |
| [2025-06-11 19:47:02,275][265861] Updated weights for policy 0, policy_version 1708 (0.0008) |
| [2025-06-11 19:47:03,841][265861] Updated weights for policy 0, policy_version 1718 (0.0007) |
| [2025-06-11 19:47:05,069][265682] Fps is (10 sec: 25804.8, 60 sec: 26146.1, 300 sec: 25829.3). Total num frames: 7065600. Throughput: 0: 6543.8. Samples: 758606. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:47:05,070][265682] Avg episode reward: [(0, '28.484')] |
| [2025-06-11 19:47:05,094][265832] Saving new best policy, reward=28.484! |
| [2025-06-11 19:47:05,430][265861] Updated weights for policy 0, policy_version 1728 (0.0007) |
| [2025-06-11 19:47:07,001][265861] Updated weights for policy 0, policy_version 1738 (0.0008) |
| [2025-06-11 19:47:08,570][265861] Updated weights for policy 0, policy_version 1748 (0.0007) |
| [2025-06-11 19:47:10,069][265682] Fps is (10 sec: 25804.6, 60 sec: 26146.1, 300 sec: 25844.9). Total num frames: 7196672. Throughput: 0: 6541.1. Samples: 797784. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:47:10,070][265682] Avg episode reward: [(0, '29.671')] |
| [2025-06-11 19:47:10,072][265832] Saving new best policy, reward=29.671! |
| [2025-06-11 19:47:10,160][265861] Updated weights for policy 0, policy_version 1758 (0.0008) |
| [2025-06-11 19:47:11,719][265861] Updated weights for policy 0, policy_version 1768 (0.0007) |
| [2025-06-11 19:47:13,293][265861] Updated weights for policy 0, policy_version 1778 (0.0007) |
| [2025-06-11 19:47:14,841][265861] Updated weights for policy 0, policy_version 1788 (0.0007) |
| [2025-06-11 19:47:15,069][265682] Fps is (10 sec: 26214.5, 60 sec: 26146.2, 300 sec: 25859.3). Total num frames: 7327744. Throughput: 0: 6533.4. Samples: 817130. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:47:15,070][265682] Avg episode reward: [(0, '26.296')] |
| [2025-06-11 19:47:16,479][265861] Updated weights for policy 0, policy_version 1798 (0.0007) |
| [2025-06-11 19:47:18,076][265861] Updated weights for policy 0, policy_version 1808 (0.0008) |
| [2025-06-11 19:47:19,682][265861] Updated weights for policy 0, policy_version 1818 (0.0007) |
| [2025-06-11 19:47:20,069][265682] Fps is (10 sec: 25805.1, 60 sec: 26077.9, 300 sec: 25841.9). Total num frames: 7454720. Throughput: 0: 6514.2. Samples: 855844. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) |
| [2025-06-11 19:47:20,070][265682] Avg episode reward: [(0, '30.384')] |
| [2025-06-11 19:47:20,072][265832] Saving new best policy, reward=30.384! |
| [2025-06-11 19:47:21,272][265861] Updated weights for policy 0, policy_version 1828 (0.0008) |
| [2025-06-11 19:47:22,850][265861] Updated weights for policy 0, policy_version 1838 (0.0008) |
| [2025-06-11 19:47:24,430][265861] Updated weights for policy 0, policy_version 1848 (0.0007) |
| [2025-06-11 19:47:25,069][265682] Fps is (10 sec: 25804.8, 60 sec: 26077.9, 300 sec: 25855.4). Total num frames: 7585792. Throughput: 0: 6504.0. Samples: 894506. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:47:25,070][265682] Avg episode reward: [(0, '25.502')] |
| [2025-06-11 19:47:26,021][265861] Updated weights for policy 0, policy_version 1858 (0.0008) |
| [2025-06-11 19:47:27,589][265861] Updated weights for policy 0, policy_version 1868 (0.0007) |
| [2025-06-11 19:47:29,158][265861] Updated weights for policy 0, policy_version 1878 (0.0007) |
| [2025-06-11 19:47:30,069][265682] Fps is (10 sec: 25804.7, 60 sec: 26009.6, 300 sec: 25839.3). Total num frames: 7712768. Throughput: 0: 6498.2. Samples: 914006. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:47:30,070][265682] Avg episode reward: [(0, '30.228')] |
| [2025-06-11 19:47:30,726][265861] Updated weights for policy 0, policy_version 1888 (0.0008) |
| [2025-06-11 19:47:32,309][265861] Updated weights for policy 0, policy_version 1898 (0.0008) |
| [2025-06-11 19:47:33,880][265861] Updated weights for policy 0, policy_version 1908 (0.0007) |
| [2025-06-11 19:47:35,069][265682] Fps is (10 sec: 25804.8, 60 sec: 26009.6, 300 sec: 25852.0). Total num frames: 7843840. Throughput: 0: 6493.8. Samples: 953128. Policy #0 lag: (min: 0.0, avg: 0.9, max: 2.0) |
| [2025-06-11 19:47:35,070][265682] Avg episode reward: [(0, '27.078')] |
| [2025-06-11 19:47:35,449][265861] Updated weights for policy 0, policy_version 1918 (0.0008) |
| [2025-06-11 19:47:37,016][265861] Updated weights for policy 0, policy_version 1928 (0.0007) |
| [2025-06-11 19:47:38,613][265861] Updated weights for policy 0, policy_version 1938 (0.0008) |
| [2025-06-11 19:47:40,069][265682] Fps is (10 sec: 26214.4, 60 sec: 26009.6, 300 sec: 25863.8). Total num frames: 7974912. Throughput: 0: 6486.5. Samples: 991960. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:47:40,070][265682] Avg episode reward: [(0, '28.174')] |
| [2025-06-11 19:47:40,199][265861] Updated weights for policy 0, policy_version 1948 (0.0007) |
| [2025-06-11 19:47:41,772][265861] Updated weights for policy 0, policy_version 1958 (0.0007) |
| [2025-06-11 19:47:43,349][265861] Updated weights for policy 0, policy_version 1968 (0.0008) |
| [2025-06-11 19:47:44,928][265861] Updated weights for policy 0, policy_version 1978 (0.0007) |
| [2025-06-11 19:47:45,069][265682] Fps is (10 sec: 25804.8, 60 sec: 25941.3, 300 sec: 25849.0). Total num frames: 8101888. Throughput: 0: 6485.2. Samples: 1011404. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:47:45,070][265682] Avg episode reward: [(0, '28.096')] |
| [2025-06-11 19:47:46,501][265861] Updated weights for policy 0, policy_version 1988 (0.0007) |
| [2025-06-11 19:47:48,098][265861] Updated weights for policy 0, policy_version 1998 (0.0007) |
| [2025-06-11 19:47:49,668][265861] Updated weights for policy 0, policy_version 2008 (0.0008) |
| [2025-06-11 19:47:50,069][265682] Fps is (10 sec: 25804.7, 60 sec: 25941.3, 300 sec: 25860.2). Total num frames: 8232960. Throughput: 0: 6481.7. Samples: 1050282. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:47:50,070][265682] Avg episode reward: [(0, '27.863')] |
| [2025-06-11 19:47:51,239][265861] Updated weights for policy 0, policy_version 2018 (0.0007) |
| [2025-06-11 19:47:52,838][265861] Updated weights for policy 0, policy_version 2028 (0.0007) |
| [2025-06-11 19:47:54,411][265861] Updated weights for policy 0, policy_version 2038 (0.0007) |
| [2025-06-11 19:47:55,069][265682] Fps is (10 sec: 26214.4, 60 sec: 25941.3, 300 sec: 25870.7). Total num frames: 8364032. Throughput: 0: 6476.6. Samples: 1089232. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:47:55,070][265682] Avg episode reward: [(0, '30.910')] |
| [2025-06-11 19:47:55,070][265832] Saving new best policy, reward=30.910! |
| [2025-06-11 19:47:56,011][265861] Updated weights for policy 0, policy_version 2048 (0.0008) |
| [2025-06-11 19:47:57,590][265861] Updated weights for policy 0, policy_version 2058 (0.0007) |
| [2025-06-11 19:47:59,177][265861] Updated weights for policy 0, policy_version 2068 (0.0008) |
| [2025-06-11 19:48:00,069][265682] Fps is (10 sec: 25804.8, 60 sec: 25873.1, 300 sec: 25857.0). Total num frames: 8491008. Throughput: 0: 6472.0. Samples: 1108370. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:48:00,070][265682] Avg episode reward: [(0, '28.957')] |
| [2025-06-11 19:48:00,755][265861] Updated weights for policy 0, policy_version 2078 (0.0008) |
| [2025-06-11 19:48:02,332][265861] Updated weights for policy 0, policy_version 2088 (0.0009) |
| [2025-06-11 19:48:03,897][265861] Updated weights for policy 0, policy_version 2098 (0.0008) |
| [2025-06-11 19:48:05,069][265682] Fps is (10 sec: 25804.7, 60 sec: 25941.3, 300 sec: 25867.0). Total num frames: 8622080. Throughput: 0: 6478.2. Samples: 1147364. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:48:05,070][265682] Avg episode reward: [(0, '29.970')] |
| [2025-06-11 19:48:05,483][265861] Updated weights for policy 0, policy_version 2108 (0.0007) |
| [2025-06-11 19:48:07,078][265861] Updated weights for policy 0, policy_version 2118 (0.0007) |
| [2025-06-11 19:48:08,642][265861] Updated weights for policy 0, policy_version 2128 (0.0007) |
| [2025-06-11 19:48:10,069][265682] Fps is (10 sec: 26214.5, 60 sec: 25941.4, 300 sec: 25876.5). Total num frames: 8753152. Throughput: 0: 6482.8. Samples: 1186232. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:48:10,070][265682] Avg episode reward: [(0, '27.283')] |
| [2025-06-11 19:48:10,219][265861] Updated weights for policy 0, policy_version 2138 (0.0007) |
| [2025-06-11 19:48:11,812][265861] Updated weights for policy 0, policy_version 2148 (0.0007) |
| [2025-06-11 19:48:13,403][265861] Updated weights for policy 0, policy_version 2158 (0.0008) |
| [2025-06-11 19:48:14,974][265861] Updated weights for policy 0, policy_version 2168 (0.0007) |
| [2025-06-11 19:48:15,069][265682] Fps is (10 sec: 25805.0, 60 sec: 25873.1, 300 sec: 25863.7). Total num frames: 8880128. Throughput: 0: 6478.6. Samples: 1205544. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:48:15,069][265682] Avg episode reward: [(0, '27.811')] |
| [2025-06-11 19:48:16,555][265861] Updated weights for policy 0, policy_version 2178 (0.0007) |
| [2025-06-11 19:48:18,147][265861] Updated weights for policy 0, policy_version 2188 (0.0008) |
| [2025-06-11 19:48:19,711][265861] Updated weights for policy 0, policy_version 2198 (0.0007) |
| [2025-06-11 19:48:20,069][265682] Fps is (10 sec: 25804.5, 60 sec: 25941.3, 300 sec: 25872.8). Total num frames: 9011200. Throughput: 0: 6475.3. Samples: 1244516. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:48:20,070][265682] Avg episode reward: [(0, '22.397')] |
| [2025-06-11 19:48:21,292][265861] Updated weights for policy 0, policy_version 2208 (0.0007) |
| [2025-06-11 19:48:22,869][265861] Updated weights for policy 0, policy_version 2218 (0.0008) |
| [2025-06-11 19:48:24,425][265861] Updated weights for policy 0, policy_version 2228 (0.0007) |
| [2025-06-11 19:48:25,069][265682] Fps is (10 sec: 25804.8, 60 sec: 25873.1, 300 sec: 25860.7). Total num frames: 9138176. Throughput: 0: 6479.9. Samples: 1283556. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0) |
| [2025-06-11 19:48:25,070][265682] Avg episode reward: [(0, '28.124')] |
| [2025-06-11 19:48:26,024][265861] Updated weights for policy 0, policy_version 2238 (0.0007) |
| [2025-06-11 19:48:27,592][265861] Updated weights for policy 0, policy_version 2248 (0.0008) |
| [2025-06-11 19:48:29,143][265861] Updated weights for policy 0, policy_version 2258 (0.0008) |
| [2025-06-11 19:48:30,069][265682] Fps is (10 sec: 25805.1, 60 sec: 25941.4, 300 sec: 25869.4). Total num frames: 9269248. Throughput: 0: 6481.3. Samples: 1303060. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:48:30,069][265682] Avg episode reward: [(0, '25.319')] |
| [2025-06-11 19:48:30,704][265861] Updated weights for policy 0, policy_version 2268 (0.0007) |
| [2025-06-11 19:48:32,280][265861] Updated weights for policy 0, policy_version 2278 (0.0009) |
| [2025-06-11 19:48:33,873][265861] Updated weights for policy 0, policy_version 2288 (0.0008) |
| [2025-06-11 19:48:35,069][265682] Fps is (10 sec: 26214.2, 60 sec: 25941.3, 300 sec: 25877.7). Total num frames: 9400320. Throughput: 0: 6485.5. Samples: 1342130. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:48:35,070][265682] Avg episode reward: [(0, '26.885')] |
| [2025-06-11 19:48:35,453][265861] Updated weights for policy 0, policy_version 2298 (0.0007) |
| [2025-06-11 19:48:37,020][265861] Updated weights for policy 0, policy_version 2308 (0.0008) |
| [2025-06-11 19:48:38,601][265861] Updated weights for policy 0, policy_version 2318 (0.0007) |
| [2025-06-11 19:48:40,069][265682] Fps is (10 sec: 26214.2, 60 sec: 25941.3, 300 sec: 25885.6). Total num frames: 9531392. Throughput: 0: 6483.0. Samples: 1380968. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0) |
| [2025-06-11 19:48:40,070][265682] Avg episode reward: [(0, '26.630')] |
| [2025-06-11 19:48:40,191][265861] Updated weights for policy 0, policy_version 2328 (0.0007) |
| [2025-06-11 19:48:41,757][265861] Updated weights for policy 0, policy_version 2338 (0.0007) |
| [2025-06-11 19:48:43,349][265861] Updated weights for policy 0, policy_version 2348 (0.0008) |
| [2025-06-11 19:48:44,927][265861] Updated weights for policy 0, policy_version 2358 (0.0008) |
| [2025-06-11 19:48:45,069][265682] Fps is (10 sec: 25804.9, 60 sec: 25941.3, 300 sec: 25874.4). Total num frames: 9658368. Throughput: 0: 6488.6. Samples: 1400358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0) |
| [2025-06-11 19:48:45,070][265682] Avg episode reward: [(0, '30.036')] |
| [2025-06-11 19:48:46,514][265861] Updated weights for policy 0, policy_version 2368 (0.0007) |
| [2025-06-11 19:48:48,094][265861] Updated weights for policy 0, policy_version 2378 (0.0007) |
| [2025-06-11 19:48:49,673][265861] Updated weights for policy 0, policy_version 2388 (0.0008) |
| [2025-06-11 19:48:50,069][265682] Fps is (10 sec: 25804.9, 60 sec: 25941.4, 300 sec: 25882.0). Total num frames: 9789440. Throughput: 0: 6487.0. Samples: 1439278. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:48:50,070][265682] Avg episode reward: [(0, '28.403')] |
| [2025-06-11 19:48:51,254][265861] Updated weights for policy 0, policy_version 2398 (0.0007) |
| [2025-06-11 19:48:52,810][265861] Updated weights for policy 0, policy_version 2408 (0.0007) |
| [2025-06-11 19:48:54,382][265861] Updated weights for policy 0, policy_version 2418 (0.0007) |
| [2025-06-11 19:48:55,069][265682] Fps is (10 sec: 26214.3, 60 sec: 25941.3, 300 sec: 25889.2). Total num frames: 9920512. Throughput: 0: 6488.8. Samples: 1478228. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0) |
| [2025-06-11 19:48:55,070][265682] Avg episode reward: [(0, '24.516')] |
| [2025-06-11 19:48:55,977][265861] Updated weights for policy 0, policy_version 2428 (0.0007) |
| [2025-06-11 19:48:57,568][265861] Updated weights for policy 0, policy_version 2438 (0.0008) |
| [2025-06-11 19:48:58,370][265832] Stopping Batcher_0... |
| [2025-06-11 19:48:58,370][265682] Component Batcher_0 stopped! |
| [2025-06-11 19:48:58,370][265832] Saving /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000002443_10006528.pth... |
| [2025-06-11 19:48:58,370][265832] Loop batcher_evt_loop terminating... |
| [2025-06-11 19:48:58,404][265861] Weights refcount: 2 0 |
| [2025-06-11 19:48:58,406][265861] Stopping InferenceWorker_p0-w0... |
| [2025-06-11 19:48:58,406][265682] Component InferenceWorker_p0-w0 stopped! |
| [2025-06-11 19:48:58,406][265861] Loop inference_proc0-0_evt_loop terminating... |
| [2025-06-11 19:48:58,416][265888] Stopping RolloutWorker_w6... |
| [2025-06-11 19:48:58,416][265682] Component RolloutWorker_w6 stopped! |
| [2025-06-11 19:48:58,416][265888] Loop rollout_proc6_evt_loop terminating... |
| [2025-06-11 19:48:58,417][265682] Component RolloutWorker_w5 stopped! |
| [2025-06-11 19:48:58,417][265884] Stopping RolloutWorker_w5... |
| [2025-06-11 19:48:58,417][265884] Loop rollout_proc5_evt_loop terminating... |
| [2025-06-11 19:48:58,417][265682] Component RolloutWorker_w3 stopped! |
| [2025-06-11 19:48:58,417][265881] Stopping RolloutWorker_w3... |
| [2025-06-11 19:48:58,418][265881] Loop rollout_proc3_evt_loop terminating... |
| [2025-06-11 19:48:58,418][265682] Component RolloutWorker_w4 stopped! |
| [2025-06-11 19:48:58,418][265882] Stopping RolloutWorker_w4... |
| [2025-06-11 19:48:58,418][265832] Removing /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000000978_4005888.pth |
| [2025-06-11 19:48:58,418][265878] Stopping RolloutWorker_w0... |
| [2025-06-11 19:48:58,418][265682] Component RolloutWorker_w0 stopped! |
| [2025-06-11 19:48:58,418][265882] Loop rollout_proc4_evt_loop terminating... |
| [2025-06-11 19:48:58,418][265889] Stopping RolloutWorker_w7... |
| [2025-06-11 19:48:58,418][265878] Loop rollout_proc0_evt_loop terminating... |
| [2025-06-11 19:48:58,418][265682] Component RolloutWorker_w7 stopped! |
| [2025-06-11 19:48:58,418][265889] Loop rollout_proc7_evt_loop terminating... |
| [2025-06-11 19:48:58,419][265682] Component RolloutWorker_w2 stopped! |
| [2025-06-11 19:48:58,419][265880] Stopping RolloutWorker_w2... |
| [2025-06-11 19:48:58,420][265880] Loop rollout_proc2_evt_loop terminating... |
| [2025-06-11 19:48:58,421][265682] Component RolloutWorker_w1 stopped! |
| [2025-06-11 19:48:58,421][265879] Stopping RolloutWorker_w1... |
| [2025-06-11 19:48:58,421][265879] Loop rollout_proc1_evt_loop terminating... |
| [2025-06-11 19:48:58,427][265832] Saving /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000002443_10006528.pth... |
| [2025-06-11 19:48:58,505][265832] Stopping LearnerWorker_p0... |
| [2025-06-11 19:48:58,506][265682] Component LearnerWorker_p0 stopped! |
| [2025-06-11 19:48:58,506][265832] Loop learner_proc0_evt_loop terminating... |
| [2025-06-11 19:48:58,506][265682] Waiting for process learner_proc0 to stop... |
| [2025-06-11 19:48:59,375][265682] Waiting for process inference_proc0-0 to join... |
| [2025-06-11 19:48:59,375][265682] Waiting for process rollout_proc0 to join... |
| [2025-06-11 19:48:59,375][265682] Waiting for process rollout_proc1 to join... |
| [2025-06-11 19:48:59,376][265682] Waiting for process rollout_proc2 to join... |
| [2025-06-11 19:48:59,376][265682] Waiting for process rollout_proc3 to join... |
| [2025-06-11 19:48:59,376][265682] Waiting for process rollout_proc4 to join... |
| [2025-06-11 19:48:59,376][265682] Waiting for process rollout_proc5 to join... |
| [2025-06-11 19:48:59,376][265682] Waiting for process rollout_proc6 to join... |
| [2025-06-11 19:48:59,377][265682] Waiting for process rollout_proc7 to join... |
| [2025-06-11 19:48:59,377][265682] Batcher 0 profile tree view: |
| batching: 11.5728, releasing_batches: 0.0313 |
| [2025-06-11 19:48:59,377][265682] InferenceWorker_p0-w0 profile tree view: |
| wait_policy: 0.0000 |
| wait_policy_total: 2.6956 |
| update_model: 3.4849 |
| weight_update: 0.0007 |
| one_step: 0.0018 |
| handle_policy_step: 215.5104 |
| deserialize: 7.7441, stack: 1.2528, obs_to_device_normalize: 51.2289, forward: 115.1320, send_messages: 9.8646 |
| prepare_outputs: 22.3457 |
| to_cpu: 13.7875 |
| [2025-06-11 19:48:59,377][265682] Learner 0 profile tree view: |
| misc: 0.0056, prepare_batch: 13.2024 |
| train: 32.2071 |
| epoch_init: 0.0046, minibatch_init: 0.0051, losses_postprocess: 0.2250, kl_divergence: 0.2571, after_optimizer: 0.5344 |
| calculate_losses: 12.2347 |
| losses_init: 0.0028, forward_head: 0.6924, bptt_initial: 8.9282, tail: 0.5230, advantages_returns: 0.1473, losses: 0.8602 |
| bptt: 0.9214 |
| bptt_forward_core: 0.8794 |
| update: 18.5939 |
| clip: 0.6011 |
| [2025-06-11 19:48:59,377][265682] RolloutWorker_w0 profile tree view: |
| wait_for_trajectories: 0.1166, enqueue_policy_requests: 7.3792, env_step: 102.5662, overhead: 8.7417, complete_rollouts: 0.3125 |
| save_policy_outputs: 7.0649 |
| split_output_tensors: 3.5034 |
| [2025-06-11 19:48:59,377][265682] RolloutWorker_w7 profile tree view: |
| wait_for_trajectories: 0.1122, enqueue_policy_requests: 7.4264, env_step: 102.1977, overhead: 8.8298, complete_rollouts: 0.2999 |
| save_policy_outputs: 7.0993 |
| split_output_tensors: 3.4856 |
| [2025-06-11 19:48:59,377][265682] Loop Runner_EvtLoop terminating... |
| [2025-06-11 19:48:59,377][265682] Runner profile tree view: |
| main_loop: 235.4135 |
| [2025-06-11 19:48:59,377][265682] Collected {0: 10006528}, FPS: 25489.8 |
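
(Note: the final FPS figure above is frames gathered *during this session* divided by the runner's `main_loop` wall time, not total frames over total training time. The arithmetic is consistent with a resume from the 4,005,888-frame checkpoint pruned earlier in this log. A minimal sanity check using only numbers that appear above:)

```python
# Sanity check (illustrative): relate the reported FPS to the numbers in the log.
# Assumption: the session resumed from the pruned 4,005,888-frame checkpoint.
total_frames = 10_006_528      # from "Collected {0: 10006528}"
resumed_from = 4_005_888       # from the pruned checkpoint filename
main_loop_seconds = 235.4135   # from "Runner profile tree view: main_loop"

fps = (total_frames - resumed_from) / main_loop_seconds
print(f"{fps:.1f}")  # -> 25489.8, matching the reported FPS
```
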
| [2025-06-11 19:48:59,383][265682] Loading existing experiment configuration from /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/config.json |
| [2025-06-11 19:48:59,383][265682] Overriding arg 'num_workers' with value 1 passed from command line |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'no_render'=True that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'save_video'=True that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'video_name'=None that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'max_num_frames'=100000 that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'push_to_hub'=True that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'hf_repository'='PranayPalem/vizdoom_laptop_optimized' that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'policy_index'=0 that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'train_script'=None that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
| [2025-06-11 19:48:59,383][265682] Using frameskip 1 and render_action_repeat=4 for evaluation |
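
(For evaluation the config above switches to frameskip 1 with `render_action_repeat=4`: each policy action is held for 4 rendered frames, so the video shows every frame while the agent still acts at its training rate. A minimal sketch of that pattern as a Gymnasium-style wrapper; this is illustrative only, not Sample Factory's internal implementation:)

```python
import gymnasium as gym

class ActionRepeatWrapper(gym.Wrapper):
    """Repeat each policy action for `repeat` env steps (illustrative sketch,
    not Sample Factory's internal render_action_repeat mechanism)."""

    def __init__(self, env, repeat=4):
        super().__init__(env)
        self.repeat = repeat

    def step(self, action):
        total_reward = 0.0
        for _ in range(self.repeat):
            # Every intermediate frame is rendered; rewards are accumulated.
            obs, reward, terminated, truncated, info = self.env.step(action)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info
```
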
| [2025-06-11 19:48:59,401][265682] Doom resolution: 160x120, resize resolution: (128, 72) |
| [2025-06-11 19:48:59,402][265682] RunningMeanStd input shape: (3, 72, 128) |
| [2025-06-11 19:48:59,403][265682] RunningMeanStd input shape: (1,) |
| [2025-06-11 19:48:59,413][265682] ConvEncoder: input_channels=3 |
| [2025-06-11 19:48:59,463][265682] Conv encoder output size: 512 |
| [2025-06-11 19:48:59,464][265682] Policy head output size: 512 |
| [2025-06-11 19:48:59,648][265682] Loading state from checkpoint /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/checkpoint_p0/checkpoint_000002443_10006528.pth... |
| [2025-06-11 19:49:00,066][265682] Num frames 100... |
| [2025-06-11 19:49:00,130][265682] Num frames 200... |
| [2025-06-11 19:49:00,190][265682] Num frames 300... |
| [2025-06-11 19:49:00,253][265682] Num frames 400... |
| [2025-06-11 19:49:00,316][265682] Num frames 500... |
| [2025-06-11 19:49:00,386][265682] Avg episode rewards: #0: 8.230, true rewards: #0: 5.230 |
| [2025-06-11 19:49:00,386][265682] Avg episode reward: 8.230, avg true_objective: 5.230 |
| [2025-06-11 19:49:00,438][265682] Num frames 600... |
| [2025-06-11 19:49:00,499][265682] Num frames 700... |
| [2025-06-11 19:49:00,560][265682] Num frames 800... |
| [2025-06-11 19:49:00,622][265682] Num frames 900... |
| [2025-06-11 19:49:00,683][265682] Num frames 1000... |
| [2025-06-11 19:49:00,743][265682] Num frames 1100... |
| [2025-06-11 19:49:00,810][265682] Num frames 1200... |
| [2025-06-11 19:49:00,870][265682] Num frames 1300... |
| [2025-06-11 19:49:00,939][265682] Num frames 1400... |
| [2025-06-11 19:49:01,003][265682] Num frames 1500... |
| [2025-06-11 19:49:01,066][265682] Avg episode rewards: #0: 14.575, true rewards: #0: 7.575 |
| [2025-06-11 19:49:01,066][265682] Avg episode reward: 14.575, avg true_objective: 7.575 |
| [2025-06-11 19:49:01,124][265682] Num frames 1600... |
| [2025-06-11 19:49:01,184][265682] Num frames 1700... |
| [2025-06-11 19:49:01,241][265682] Num frames 1800... |
| [2025-06-11 19:49:01,307][265682] Num frames 1900... |
| [2025-06-11 19:49:01,373][265682] Num frames 2000... |
| [2025-06-11 19:49:01,438][265682] Num frames 2100... |
| [2025-06-11 19:49:01,502][265682] Num frames 2200... |
| [2025-06-11 19:49:01,563][265682] Num frames 2300... |
| [2025-06-11 19:49:01,628][265682] Num frames 2400... |
| [2025-06-11 19:49:01,695][265682] Num frames 2500... |
| [2025-06-11 19:49:01,757][265682] Num frames 2600... |
| [2025-06-11 19:49:01,820][265682] Num frames 2700... |
| [2025-06-11 19:49:01,892][265682] Avg episode rewards: #0: 19.437, true rewards: #0: 9.103 |
| [2025-06-11 19:49:01,893][265682] Avg episode reward: 19.437, avg true_objective: 9.103 |
| [2025-06-11 19:49:01,942][265682] Num frames 2800... |
| [2025-06-11 19:49:02,005][265682] Num frames 2900... |
| [2025-06-11 19:49:02,069][265682] Num frames 3000... |
| [2025-06-11 19:49:02,143][265682] Num frames 3100... |
| [2025-06-11 19:49:02,209][265682] Num frames 3200... |
| [2025-06-11 19:49:02,271][265682] Num frames 3300... |
| [2025-06-11 19:49:02,334][265682] Num frames 3400... |
| [2025-06-11 19:49:02,400][265682] Num frames 3500... |
| [2025-06-11 19:49:02,463][265682] Num frames 3600... |
| [2025-06-11 19:49:02,526][265682] Num frames 3700... |
| [2025-06-11 19:49:02,590][265682] Num frames 3800... |
| [2025-06-11 19:49:02,655][265682] Num frames 3900... |
| [2025-06-11 19:49:02,718][265682] Num frames 4000... |
| [2025-06-11 19:49:02,784][265682] Num frames 4100... |
| [2025-06-11 19:49:02,847][265682] Num frames 4200... |
| [2025-06-11 19:49:02,911][265682] Num frames 4300... |
| [2025-06-11 19:49:02,975][265682] Num frames 4400... |
| [2025-06-11 19:49:03,046][265682] Num frames 4500... |
| [2025-06-11 19:49:03,158][265682] Avg episode rewards: #0: 27.467, true rewards: #0: 11.467 |
| [2025-06-11 19:49:03,158][265682] Avg episode reward: 27.467, avg true_objective: 11.467 |
| [2025-06-11 19:49:03,167][265682] Num frames 4600... |
| [2025-06-11 19:49:03,230][265682] Num frames 4700... |
| [2025-06-11 19:49:03,289][265682] Num frames 4800... |
| [2025-06-11 19:49:03,348][265682] Num frames 4900... |
| [2025-06-11 19:49:03,451][265682] Avg episode rewards: #0: 23.362, true rewards: #0: 9.962 |
| [2025-06-11 19:49:03,451][265682] Avg episode reward: 23.362, avg true_objective: 9.962 |
| [2025-06-11 19:49:03,463][265682] Num frames 5000... |
| [2025-06-11 19:49:03,527][265682] Num frames 5100... |
| [2025-06-11 19:49:03,597][265682] Num frames 5200... |
| [2025-06-11 19:49:03,659][265682] Num frames 5300... |
| [2025-06-11 19:49:03,721][265682] Num frames 5400... |
| [2025-06-11 19:49:03,782][265682] Num frames 5500... |
| [2025-06-11 19:49:03,844][265682] Num frames 5600... |
| [2025-06-11 19:49:03,905][265682] Num frames 5700... |
| [2025-06-11 19:49:03,970][265682] Num frames 5800... |
| [2025-06-11 19:49:04,034][265682] Num frames 5900... |
| [2025-06-11 19:49:04,100][265682] Num frames 6000... |
| [2025-06-11 19:49:04,164][265682] Num frames 6100... |
| [2025-06-11 19:49:04,227][265682] Num frames 6200... |
| [2025-06-11 19:49:04,323][265682] Avg episode rewards: #0: 24.602, true rewards: #0: 10.435 |
| [2025-06-11 19:49:04,324][265682] Avg episode reward: 24.602, avg true_objective: 10.435 |
| [2025-06-11 19:49:04,350][265682] Num frames 6300... |
| [2025-06-11 19:49:04,413][265682] Num frames 6400... |
| [2025-06-11 19:49:04,473][265682] Num frames 6500... |
| [2025-06-11 19:49:04,534][265682] Num frames 6600... |
| [2025-06-11 19:49:04,594][265682] Num frames 6700... |
| [2025-06-11 19:49:04,656][265682] Num frames 6800... |
| [2025-06-11 19:49:04,718][265682] Num frames 6900... |
| [2025-06-11 19:49:04,782][265682] Num frames 7000... |
| [2025-06-11 19:49:04,857][265682] Num frames 7100... |
| [2025-06-11 19:49:04,923][265682] Num frames 7200... |
| [2025-06-11 19:49:04,989][265682] Num frames 7300... |
| [2025-06-11 19:49:05,053][265682] Num frames 7400... |
| [2025-06-11 19:49:05,125][265682] Num frames 7500... |
| [2025-06-11 19:49:05,193][265682] Num frames 7600... |
| [2025-06-11 19:49:05,295][265682] Avg episode rewards: #0: 25.813, true rewards: #0: 10.956 |
| [2025-06-11 19:49:05,295][265682] Avg episode reward: 25.813, avg true_objective: 10.956 |
| [2025-06-11 19:49:05,314][265682] Num frames 7700... |
| [2025-06-11 19:49:05,380][265682] Num frames 7800... |
| [2025-06-11 19:49:05,444][265682] Num frames 7900... |
| [2025-06-11 19:49:05,507][265682] Num frames 8000... |
| [2025-06-11 19:49:05,575][265682] Num frames 8100... |
| [2025-06-11 19:49:05,639][265682] Num frames 8200... |
| [2025-06-11 19:49:05,743][265682] Avg episode rewards: #0: 23.721, true rewards: #0: 10.346 |
| [2025-06-11 19:49:05,743][265682] Avg episode reward: 23.721, avg true_objective: 10.346 |
| [2025-06-11 19:49:05,761][265682] Num frames 8300... |
| [2025-06-11 19:49:05,826][265682] Num frames 8400... |
| [2025-06-11 19:49:05,887][265682] Num frames 8500... |
| [2025-06-11 19:49:05,950][265682] Num frames 8600... |
| [2025-06-11 19:49:06,014][265682] Num frames 8700... |
| [2025-06-11 19:49:06,076][265682] Num frames 8800... |
| [2025-06-11 19:49:06,138][265682] Num frames 8900... |
| [2025-06-11 19:49:06,200][265682] Num frames 9000... |
| [2025-06-11 19:49:06,263][265682] Avg episode rewards: #0: 22.794, true rewards: #0: 10.017 |
| [2025-06-11 19:49:06,263][265682] Avg episode reward: 22.794, avg true_objective: 10.017 |
| [2025-06-11 19:49:06,316][265682] Num frames 9100... |
| [2025-06-11 19:49:06,383][265682] Num frames 9200... |
| [2025-06-11 19:49:06,444][265682] Num frames 9300... |
| [2025-06-11 19:49:06,503][265682] Num frames 9400... |
| [2025-06-11 19:49:06,564][265682] Num frames 9500... |
| [2025-06-11 19:49:06,624][265682] Num frames 9600... |
| [2025-06-11 19:49:06,685][265682] Num frames 9700... |
| [2025-06-11 19:49:06,746][265682] Num frames 9800... |
| [2025-06-11 19:49:06,806][265682] Num frames 9900... |
| [2025-06-11 19:49:06,867][265682] Num frames 10000... |
| [2025-06-11 19:49:06,927][265682] Num frames 10100... |
| [2025-06-11 19:49:06,982][265682] Avg episode rewards: #0: 23.103, true rewards: #0: 10.103 |
| [2025-06-11 19:49:06,983][265682] Avg episode reward: 23.103, avg true_objective: 10.103 |
| [2025-06-11 19:49:17,448][265682] Replay video saved to /home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir/vizdoom_laptop_optimized/replay.mp4! |
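
(The evaluation pass above — 10 episodes, replay video, and the Hub push to `PranayPalem/vizdoom_laptop_optimized` — corresponds to Sample Factory's `enjoy` entry point. A hedged reproduction sketch, assuming the helper names from the `sf_examples.vizdoom` scripts and using only flags that appear in this log; the env name is recorded in `config.json` and not shown in this excerpt, so the value below is a placeholder assumption:)

```python
# Reproduction sketch (assumptions: helper names follow sf_examples.vizdoom;
# ENV_NAME is a placeholder for the env stored in the experiment's config.json).
from sample_factory.enjoy import enjoy
from sf_examples.vizdoom.train_vizdoom import parse_vizdoom_cfg, register_vizdoom_components

ENV_NAME = "doom_health_gathering_supreme"  # assumption, not taken from this log

register_vizdoom_components()
cfg = parse_vizdoom_cfg(
    argv=[
        f"--env={ENV_NAME}",
        "--experiment=vizdoom_laptop_optimized",
        "--train_dir=/home/pranaypalem/Documents/Reinforcement_Learning/RL_Testing_Pranay/DoomHealth/train_dir",
        "--num_workers=1",        # override seen in the log
        "--no_render",
        "--save_video",
        "--max_num_frames=100000",
        "--max_num_episodes=10",
        "--push_to_hub",
        "--hf_repository=PranayPalem/vizdoom_laptop_optimized",
    ],
    evaluation=True,
)
enjoy(cfg)  # loads the latest checkpoint, writes replay.mp4, pushes to the Hub
```
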