so7en committed (verified)
Commit df61bc8 · 1 Parent(s): cc650b4

Upload folder using huggingface_hub
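The commit message above is the default one written by huggingface_hub's upload_folder, so this commit was most likely produced by something like the following sketch (the local folder path is an assumption taken from the training log below):

    from huggingface_hub import HfApi

    api = HfApi()
    # Uploads the whole experiment folder as a single commit; large binaries
    # (checkpoints, replay video, tfevents) are stored via Git LFS automatically.
    api.upload_folder(
        folder_path="/content/train_dir/default_experiment",  # assumed, see sf_log.txt below
        repo_id="so7en/Doom_unit8_2",
        repo_type="model",
    )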
.summary/0/events.out.tfevents.1741685575.58c6eccd5343 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5967de1eb6b4fb3e8de52dc222281017b0ab3ceb916b2d10efe9b18a7b61ce5
+ size 192624
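The three added lines are a Git LFS pointer, not the file itself: the repo stores only the spec version, the SHA-256 of the blob, and its size in bytes. To get the actual binary you download through the Hub rather than reading the pointer; a minimal sketch:

    from huggingface_hub import hf_hub_download

    # Resolves the LFS pointer and returns a local path to the real 192624-byte file.
    path = hf_hub_download(
        repo_id="so7en/Doom_unit8_2",
        filename=".summary/0/events.out.tfevents.1741685575.58c6eccd5343",
    )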
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
  type: doom_health_gathering_supreme
  metrics:
  - type: mean_reward
- value: 9.45 +/- 4.65
+ value: 9.69 +/- 4.55
  name: mean_reward
  verified: false
  ---
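The only README change is the reported score: the extra million training steps moved mean_reward from 9.45 +/- 4.65 to 9.69 +/- 4.55. Judging by the evaluation lines in sf_log.txt below, this is presumably the mean and standard deviation of the per-episode true reward over the evaluation episodes; a sketch of how such a figure is computed (values illustrative, not the actual episode rewards):

    import numpy as np

    # Per-episode true rewards collected during evaluation (illustrative values).
    episode_rewards = np.array([21.0, 13.1, 14.2, 13.4, 11.9, 11.0, 11.1, 12.3, 12.3, 11.6])
    print(f"{episode_rewards.mean():.2f} +/- {episode_rewards.std():.2f}")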
checkpoint_p0/best_000001077_4411392_reward_29.362.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eccac242858ffd706cf79d62dc95fab382b2904915cd50e39cfbbe443de7f473
+ size 34929243
checkpoint_p0/checkpoint_000001180_4833280.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19e0bc8ab4b1549eb622781caf4e788f07526a4725c7c4d6b25f3bd65d484589
+ size 34929669
checkpoint_p0/checkpoint_000001222_5005312.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:880715179abc52649cb84d05e676e960b7305ab19d34d482d43e0ecf7ab5c74d
+ size 34929669
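Three checkpoints are added: the best policy by reward (29.362, per the filename and the "Saving new best policy" log lines below) and the two most recent periodic saves (keep_checkpoints=2 in the config). Each .pth is a regular PyTorch save; a minimal sketch of inspecting one locally:

    import torch
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(
        repo_id="so7en/Doom_unit8_2",
        filename="checkpoint_p0/checkpoint_000001222_5005312.pth",
    )
    # Sample Factory checkpoints bundle model weights plus training state
    # (train_step, env_steps, optimizer state, ...); weights_only=False is
    # needed on newer torch versions to unpickle the full dict.
    state = torch.load(path, map_location="cpu", weights_only=False)
    print(state.keys())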
config.json CHANGED
@@ -65,7 +65,7 @@
  "summaries_use_frameskip": true,
  "heartbeat_interval": 20,
  "heartbeat_reporting_interval": 600,
- "train_for_env_steps": 4000000,
+ "train_for_env_steps": 5000000,
  "train_for_seconds": 10000000000,
  "save_every_sec": 120,
  "keep_checkpoints": 2,
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2213a985462be2654f1fcb5739a9191306ba7dac2e989064872c98087b78a2ba
- size 18776142
+ oid sha256:cb2b60f48e3b9f6c51076117c170c9009f4ac5e93eeccff797ccea01543a7df9
+ size 18477134
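replay.mp4 is regenerated rather than edited in place, so both its LFS oid and size change. Per the log below, it is produced by the evaluation ("enjoy") run, which also pushes the folder to the Hub; a sketch of that step under the same entry-point assumption as above, with the flags matching the "Adding new argument ..." lines in sf_log.txt:

    import subprocess

    # Evaluates the latest checkpoint for 10 episodes, writes replay.mp4,
    # and pushes the experiment folder to so7en/Doom_unit8_2.
    subprocess.run([
        "python", "-m", "sf_examples.vizdoom.enjoy_vizdoom",
        "--env=doom_health_gathering_supreme",
        "--num_workers=1",
        "--max_num_episodes=10",
        "--no_render", "--save_video",
        "--push_to_hub", "--hf_repository=so7en/Doom_unit8_2",
    ], check=True)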
sf_log.txt CHANGED
@@ -1069,3 +1069,834 @@ main_loop: 1085.5647
  [2025-03-11 09:30:26,421][01034] Avg episode rewards: #0: 21.954, true rewards: #0: 9.454
  [2025-03-11 09:30:26,422][01034] Avg episode reward: 21.954, avg true_objective: 9.454
  [2025-03-11 09:31:19,097][01034] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
1072
+ [2025-03-11 09:31:25,046][01034] The model has been pushed to https://huggingface.co/so7en/Doom_unit8_2
1073
+ [2025-03-11 09:32:55,246][01034] Environment doom_basic already registered, overwriting...
1074
+ [2025-03-11 09:32:55,247][01034] Environment doom_two_colors_easy already registered, overwriting...
1075
+ [2025-03-11 09:32:55,253][01034] Environment doom_two_colors_hard already registered, overwriting...
1076
+ [2025-03-11 09:32:55,254][01034] Environment doom_dm already registered, overwriting...
1077
+ [2025-03-11 09:32:55,255][01034] Environment doom_dwango5 already registered, overwriting...
1078
+ [2025-03-11 09:32:55,256][01034] Environment doom_my_way_home_flat_actions already registered, overwriting...
1079
+ [2025-03-11 09:32:55,260][01034] Environment doom_defend_the_center_flat_actions already registered, overwriting...
1080
+ [2025-03-11 09:32:55,261][01034] Environment doom_my_way_home already registered, overwriting...
1081
+ [2025-03-11 09:32:55,264][01034] Environment doom_deadly_corridor already registered, overwriting...
1082
+ [2025-03-11 09:32:55,268][01034] Environment doom_defend_the_center already registered, overwriting...
1083
+ [2025-03-11 09:32:55,271][01034] Environment doom_defend_the_line already registered, overwriting...
1084
+ [2025-03-11 09:32:55,272][01034] Environment doom_health_gathering already registered, overwriting...
1085
+ [2025-03-11 09:32:55,274][01034] Environment doom_health_gathering_supreme already registered, overwriting...
1086
+ [2025-03-11 09:32:55,278][01034] Environment doom_battle already registered, overwriting...
1087
+ [2025-03-11 09:32:55,279][01034] Environment doom_battle2 already registered, overwriting...
1088
+ [2025-03-11 09:32:55,283][01034] Environment doom_duel_bots already registered, overwriting...
1089
+ [2025-03-11 09:32:55,284][01034] Environment doom_deathmatch_bots already registered, overwriting...
1090
+ [2025-03-11 09:32:55,289][01034] Environment doom_duel already registered, overwriting...
1091
+ [2025-03-11 09:32:55,290][01034] Environment doom_deathmatch_full already registered, overwriting...
1092
+ [2025-03-11 09:32:55,290][01034] Environment doom_benchmark already registered, overwriting...
1093
+ [2025-03-11 09:32:55,296][01034] register_encoder_factory: <function make_vizdoom_encoder at 0x79692d1f59e0>
1094
+ [2025-03-11 09:32:55,330][01034] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
1095
+ [2025-03-11 09:32:55,338][01034] Overriding arg 'train_for_env_steps' with value 5000000 passed from command line
1096
+ [2025-03-11 09:32:55,350][01034] Experiment dir /content/train_dir/default_experiment already exists!
1097
+ [2025-03-11 09:32:55,353][01034] Resuming existing experiment from /content/train_dir/default_experiment...
1098
+ [2025-03-11 09:32:55,357][01034] Weights and Biases integration disabled
1099
+ [2025-03-11 09:32:55,361][01034] Environment var CUDA_VISIBLE_DEVICES is 0
1100
+
1101
+ [2025-03-11 09:32:59,335][01034] Starting experiment with the following configuration:
1102
+ help=False
1103
+ algo=APPO
1104
+ env=doom_health_gathering_supreme
1105
+ experiment=default_experiment
1106
+ train_dir=/content/train_dir
1107
+ restart_behavior=resume
1108
+ device=gpu
1109
+ seed=None
1110
+ num_policies=1
1111
+ async_rl=True
1112
+ serial_mode=False
1113
+ batched_sampling=False
1114
+ num_batches_to_accumulate=2
1115
+ worker_num_splits=2
1116
+ policy_workers_per_policy=1
1117
+ max_policy_lag=1000
1118
+ num_workers=8
1119
+ num_envs_per_worker=4
1120
+ batch_size=1024
1121
+ num_batches_per_epoch=1
1122
+ num_epochs=1
1123
+ rollout=32
1124
+ recurrence=32
1125
+ shuffle_minibatches=False
1126
+ gamma=0.99
1127
+ reward_scale=1.0
1128
+ reward_clip=1000.0
1129
+ value_bootstrap=False
1130
+ normalize_returns=True
1131
+ exploration_loss_coeff=0.001
1132
+ value_loss_coeff=0.5
1133
+ kl_loss_coeff=0.0
1134
+ exploration_loss=symmetric_kl
1135
+ gae_lambda=0.95
1136
+ ppo_clip_ratio=0.1
1137
+ ppo_clip_value=0.2
1138
+ with_vtrace=False
1139
+ vtrace_rho=1.0
1140
+ vtrace_c=1.0
1141
+ optimizer=adam
1142
+ adam_eps=1e-06
1143
+ adam_beta1=0.9
1144
+ adam_beta2=0.999
1145
+ max_grad_norm=4.0
1146
+ learning_rate=0.0001
1147
+ lr_schedule=constant
1148
+ lr_schedule_kl_threshold=0.008
1149
+ lr_adaptive_min=1e-06
1150
+ lr_adaptive_max=0.01
1151
+ obs_subtract_mean=0.0
1152
+ obs_scale=255.0
1153
+ normalize_input=True
1154
+ normalize_input_keys=None
1155
+ decorrelate_experience_max_seconds=0
1156
+ decorrelate_envs_on_one_worker=True
1157
+ actor_worker_gpus=[]
1158
+ set_workers_cpu_affinity=True
1159
+ force_envs_single_thread=False
1160
+ default_niceness=0
1161
+ log_to_file=True
1162
+ experiment_summaries_interval=10
1163
+ flush_summaries_interval=30
1164
+ stats_avg=100
1165
+ summaries_use_frameskip=True
1166
+ heartbeat_interval=20
1167
+ heartbeat_reporting_interval=600
1168
+ train_for_env_steps=5000000
1169
+ train_for_seconds=10000000000
1170
+ save_every_sec=120
1171
+ keep_checkpoints=2
1172
+ load_checkpoint_kind=latest
1173
+ save_milestones_sec=-1
1174
+ save_best_every_sec=5
1175
+ save_best_metric=reward
1176
+ save_best_after=100000
1177
+ benchmark=False
1178
+ encoder_mlp_layers=[512, 512]
1179
+ encoder_conv_architecture=convnet_simple
1180
+ encoder_conv_mlp_layers=[512]
1181
+ use_rnn=True
1182
+ rnn_size=512
1183
+ rnn_type=gru
1184
+ rnn_num_layers=1
1185
+ decoder_mlp_layers=[]
1186
+ nonlinearity=elu
1187
+ policy_initialization=orthogonal
1188
+ policy_init_gain=1.0
1189
+ actor_critic_share_weights=True
1190
+ adaptive_stddev=True
1191
+ continuous_tanh_scale=0.0
1192
+ initial_stddev=1.0
1193
+ use_env_info_cache=False
1194
+ env_gpu_actions=False
1195
+ env_gpu_observations=True
1196
+ env_frameskip=4
1197
+ env_framestack=1
1198
+ pixel_format=CHW
1199
+ use_record_episode_statistics=False
1200
+ with_wandb=False
1201
+ wandb_user=None
1202
+ wandb_project=sample_factory
1203
+ wandb_group=None
1204
+ wandb_job_type=SF
1205
+ wandb_tags=[]
1206
+ with_pbt=False
1207
+ pbt_mix_policies_in_one_env=True
1208
+ pbt_period_env_steps=5000000
1209
+ pbt_start_mutation=20000000
1210
+ pbt_replace_fraction=0.3
1211
+ pbt_mutation_rate=0.15
1212
+ pbt_replace_reward_gap=0.1
1213
+ pbt_replace_reward_gap_absolute=1e-06
1214
+ pbt_optimize_gamma=False
1215
+ pbt_target_objective=true_objective
1216
+ pbt_perturb_min=1.1
1217
+ pbt_perturb_max=1.5
1218
+ num_agents=-1
1219
+ num_humans=0
1220
+ num_bots=-1
1221
+ start_bot_difficulty=None
1222
+ timelimit=None
1223
+ res_w=128
1224
+ res_h=72
1225
+ wide_aspect_ratio=False
1226
+ eval_env_frameskip=1
1227
+ fps=35
1228
+ command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
1229
+ cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
1230
+ git_hash=unknown
1231
+ git_repo_name=not a git repository
1232
+ [2025-03-11 09:32:59,337][01034] Saving configuration to /content/train_dir/default_experiment/config.json...
1233
+ [2025-03-11 09:32:59,339][01034] Rollout worker 0 uses device cpu
1234
+ [2025-03-11 09:32:59,339][01034] Rollout worker 1 uses device cpu
1235
+ [2025-03-11 09:32:59,342][01034] Rollout worker 2 uses device cpu
1236
+ [2025-03-11 09:32:59,345][01034] Rollout worker 3 uses device cpu
1237
+ [2025-03-11 09:32:59,345][01034] Rollout worker 4 uses device cpu
1238
+ [2025-03-11 09:32:59,346][01034] Rollout worker 5 uses device cpu
1239
+ [2025-03-11 09:32:59,347][01034] Rollout worker 6 uses device cpu
1240
+ [2025-03-11 09:32:59,348][01034] Rollout worker 7 uses device cpu
1241
+ [2025-03-11 09:32:59,427][01034] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1242
+ [2025-03-11 09:32:59,427][01034] InferenceWorker_p0-w0: min num requests: 2
1243
+ [2025-03-11 09:32:59,462][01034] Starting all processes...
1244
+ [2025-03-11 09:32:59,463][01034] Starting process learner_proc0
1245
+ [2025-03-11 09:32:59,513][01034] Starting all processes...
1246
+ [2025-03-11 09:32:59,519][01034] Starting process inference_proc0-0
1247
+ [2025-03-11 09:32:59,524][01034] Starting process rollout_proc0
1248
+ [2025-03-11 09:32:59,524][01034] Starting process rollout_proc1
1249
+ [2025-03-11 09:32:59,524][01034] Starting process rollout_proc2
1250
+ [2025-03-11 09:32:59,524][01034] Starting process rollout_proc3
1251
+ [2025-03-11 09:32:59,524][01034] Starting process rollout_proc4
1252
+ [2025-03-11 09:32:59,524][01034] Starting process rollout_proc5
1253
+ [2025-03-11 09:32:59,524][01034] Starting process rollout_proc6
1254
+ [2025-03-11 09:32:59,524][01034] Starting process rollout_proc7
1255
+ [2025-03-11 09:33:14,431][11994] Worker 5 uses CPU cores [1]
1256
+ [2025-03-11 09:33:14,697][11975] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1257
+ [2025-03-11 09:33:14,698][11975] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
1258
+ [2025-03-11 09:33:14,762][11975] Num visible devices: 1
1259
+ [2025-03-11 09:33:14,804][11988] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1260
+ [2025-03-11 09:33:14,805][11988] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
1261
+ [2025-03-11 09:33:14,808][11975] Starting seed is not provided
1262
+ [2025-03-11 09:33:14,809][11975] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1263
+ [2025-03-11 09:33:14,810][11975] Initializing actor-critic model on device cuda:0
1264
+ [2025-03-11 09:33:14,811][11975] RunningMeanStd input shape: (3, 72, 128)
1265
+ [2025-03-11 09:33:14,813][11975] RunningMeanStd input shape: (1,)
1266
+ [2025-03-11 09:33:14,839][11993] Worker 4 uses CPU cores [0]
1267
+ [2025-03-11 09:33:14,860][11975] ConvEncoder: input_channels=3
1268
+ [2025-03-11 09:33:14,904][11988] Num visible devices: 1
1269
+ [2025-03-11 09:33:14,965][11992] Worker 3 uses CPU cores [1]
1270
+ [2025-03-11 09:33:14,966][11995] Worker 7 uses CPU cores [1]
1271
+ [2025-03-11 09:33:15,098][11991] Worker 2 uses CPU cores [0]
1272
+ [2025-03-11 09:33:15,104][11989] Worker 0 uses CPU cores [0]
1273
+ [2025-03-11 09:33:15,127][11990] Worker 1 uses CPU cores [1]
1274
+ [2025-03-11 09:33:15,175][11996] Worker 6 uses CPU cores [0]
1275
+ [2025-03-11 09:33:15,204][11975] Conv encoder output size: 512
1276
+ [2025-03-11 09:33:15,204][11975] Policy head output size: 512
1277
+ [2025-03-11 09:33:15,221][11975] Created Actor Critic model with architecture:
1278
+ [2025-03-11 09:33:15,221][11975] ActorCriticSharedWeights(
1279
+ (obs_normalizer): ObservationNormalizer(
1280
+ (running_mean_std): RunningMeanStdDictInPlace(
1281
+ (running_mean_std): ModuleDict(
1282
+ (obs): RunningMeanStdInPlace()
1283
+ )
1284
+ )
1285
+ )
1286
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
1287
+ (encoder): VizdoomEncoder(
1288
+ (basic_encoder): ConvEncoder(
1289
+ (enc): RecursiveScriptModule(
1290
+ original_name=ConvEncoderImpl
1291
+ (conv_head): RecursiveScriptModule(
1292
+ original_name=Sequential
1293
+ (0): RecursiveScriptModule(original_name=Conv2d)
1294
+ (1): RecursiveScriptModule(original_name=ELU)
1295
+ (2): RecursiveScriptModule(original_name=Conv2d)
1296
+ (3): RecursiveScriptModule(original_name=ELU)
1297
+ (4): RecursiveScriptModule(original_name=Conv2d)
1298
+ (5): RecursiveScriptModule(original_name=ELU)
1299
+ )
1300
+ (mlp_layers): RecursiveScriptModule(
1301
+ original_name=Sequential
1302
+ (0): RecursiveScriptModule(original_name=Linear)
1303
+ (1): RecursiveScriptModule(original_name=ELU)
1304
+ )
1305
+ )
1306
+ )
1307
+ )
1308
+ (core): ModelCoreRNN(
1309
+ (core): GRU(512, 512)
1310
+ )
1311
+ (decoder): MlpDecoder(
1312
+ (mlp): Identity()
1313
+ )
1314
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
1315
+ (action_parameterization): ActionParameterizationDefault(
1316
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
1317
+ )
1318
+ )
1319
+ [2025-03-11 09:33:15,464][11975] Using optimizer <class 'torch.optim.adam.Adam'>
1320
+ [2025-03-11 09:33:16,398][11975] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
1321
+ [2025-03-11 09:33:16,441][11975] Loading model from checkpoint
1322
+ [2025-03-11 09:33:16,444][11975] Loaded experiment state at self.train_step=978, self.env_steps=4005888
1323
+ [2025-03-11 09:33:16,444][11975] Initialized policy 0 weights for model version 978
1324
+ [2025-03-11 09:33:16,447][11975] Using GPUs [0] for process 0 (actually maps to GPUs [0])
1325
+ [2025-03-11 09:33:16,450][11975] LearnerWorker_p0 finished initialization!
1326
+ [2025-03-11 09:33:16,707][11988] RunningMeanStd input shape: (3, 72, 128)
1327
+ [2025-03-11 09:33:16,708][11988] RunningMeanStd input shape: (1,)
1328
+ [2025-03-11 09:33:16,720][11988] ConvEncoder: input_channels=3
1329
+ [2025-03-11 09:33:16,820][11988] Conv encoder output size: 512
1330
+ [2025-03-11 09:33:16,820][11988] Policy head output size: 512
1331
+ [2025-03-11 09:33:16,855][01034] Inference worker 0-0 is ready!
1332
+ [2025-03-11 09:33:16,858][01034] All inference workers are ready! Signal rollout workers to start!
1333
+ [2025-03-11 09:33:17,136][11993] Doom resolution: 160x120, resize resolution: (128, 72)
1334
+ [2025-03-11 09:33:17,152][11995] Doom resolution: 160x120, resize resolution: (128, 72)
1335
+ [2025-03-11 09:33:17,153][11990] Doom resolution: 160x120, resize resolution: (128, 72)
1336
+ [2025-03-11 09:33:17,161][11991] Doom resolution: 160x120, resize resolution: (128, 72)
1337
+ [2025-03-11 09:33:17,170][11989] Doom resolution: 160x120, resize resolution: (128, 72)
1338
+ [2025-03-11 09:33:17,168][11996] Doom resolution: 160x120, resize resolution: (128, 72)
1339
+ [2025-03-11 09:33:17,180][11994] Doom resolution: 160x120, resize resolution: (128, 72)
1340
+ [2025-03-11 09:33:17,244][11992] Doom resolution: 160x120, resize resolution: (128, 72)
1341
+ [2025-03-11 09:33:18,192][11993] Decorrelating experience for 0 frames...
1342
+ [2025-03-11 09:33:18,200][11991] Decorrelating experience for 0 frames...
1343
+ [2025-03-11 09:33:18,492][11990] Decorrelating experience for 0 frames...
1344
+ [2025-03-11 09:33:18,491][11995] Decorrelating experience for 0 frames...
1345
+ [2025-03-11 09:33:18,523][11994] Decorrelating experience for 0 frames...
1346
+ [2025-03-11 09:33:19,418][01034] Heartbeat connected on Batcher_0
1347
+ [2025-03-11 09:33:19,424][01034] Heartbeat connected on LearnerWorker_p0
1348
+ [2025-03-11 09:33:19,466][01034] Heartbeat connected on InferenceWorker_p0-w0
1349
+ [2025-03-11 09:33:19,525][11993] Decorrelating experience for 32 frames...
1350
+ [2025-03-11 09:33:19,563][11991] Decorrelating experience for 32 frames...
1351
+ [2025-03-11 09:33:19,607][11989] Decorrelating experience for 0 frames...
1352
+ [2025-03-11 09:33:20,362][01034] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 4005888. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1353
+ [2025-03-11 09:33:20,559][11995] Decorrelating experience for 32 frames...
1354
+ [2025-03-11 09:33:20,571][11990] Decorrelating experience for 32 frames...
1355
+ [2025-03-11 09:33:20,589][11992] Decorrelating experience for 0 frames...
1356
+ [2025-03-11 09:33:20,643][11994] Decorrelating experience for 32 frames...
1357
+ [2025-03-11 09:33:22,090][11996] Decorrelating experience for 0 frames...
1358
+ [2025-03-11 09:33:22,135][11989] Decorrelating experience for 32 frames...
1359
+ [2025-03-11 09:33:22,684][11993] Decorrelating experience for 64 frames...
1360
+ [2025-03-11 09:33:22,728][11991] Decorrelating experience for 64 frames...
1361
+ [2025-03-11 09:33:22,903][11990] Decorrelating experience for 64 frames...
1362
+ [2025-03-11 09:33:22,951][11994] Decorrelating experience for 64 frames...
1363
+ [2025-03-11 09:33:23,783][11995] Decorrelating experience for 64 frames...
1364
+ [2025-03-11 09:33:24,073][11996] Decorrelating experience for 32 frames...
1365
+ [2025-03-11 09:33:24,580][11990] Decorrelating experience for 96 frames...
1366
+ [2025-03-11 09:33:24,617][11989] Decorrelating experience for 64 frames...
1367
+ [2025-03-11 09:33:24,669][11993] Decorrelating experience for 96 frames...
1368
+ [2025-03-11 09:33:24,706][11991] Decorrelating experience for 96 frames...
1369
+ [2025-03-11 09:33:24,731][01034] Heartbeat connected on RolloutWorker_w1
1370
+ [2025-03-11 09:33:24,934][01034] Heartbeat connected on RolloutWorker_w4
1371
+ [2025-03-11 09:33:24,959][01034] Heartbeat connected on RolloutWorker_w2
1372
+ [2025-03-11 09:33:25,178][11992] Decorrelating experience for 32 frames...
1373
+ [2025-03-11 09:33:25,363][01034] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1374
+ [2025-03-11 09:33:26,132][11995] Decorrelating experience for 96 frames...
1375
+ [2025-03-11 09:33:26,385][01034] Heartbeat connected on RolloutWorker_w7
1376
+ [2025-03-11 09:33:27,002][11996] Decorrelating experience for 64 frames...
1377
+ [2025-03-11 09:33:27,316][11992] Decorrelating experience for 64 frames...
1378
+ [2025-03-11 09:33:28,643][11994] Decorrelating experience for 96 frames...
1379
+ [2025-03-11 09:33:29,174][01034] Heartbeat connected on RolloutWorker_w5
1380
+ [2025-03-11 09:33:29,331][11975] Signal inference workers to stop experience collection...
1381
+ [2025-03-11 09:33:29,348][11988] InferenceWorker_p0-w0: stopping experience collection
1382
+ [2025-03-11 09:33:29,617][11992] Decorrelating experience for 96 frames...
1383
+ [2025-03-11 09:33:29,725][01034] Heartbeat connected on RolloutWorker_w3
1384
+ [2025-03-11 09:33:29,861][11996] Decorrelating experience for 96 frames...
1385
+ [2025-03-11 09:33:29,982][01034] Heartbeat connected on RolloutWorker_w6
1386
+ [2025-03-11 09:33:30,098][11989] Decorrelating experience for 96 frames...
1387
+ [2025-03-11 09:33:30,179][01034] Heartbeat connected on RolloutWorker_w0
1388
+ [2025-03-11 09:33:30,362][01034] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 4005888. Throughput: 0: 197.8. Samples: 1978. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
1389
+ [2025-03-11 09:33:30,363][01034] Avg episode reward: [(0, '3.446')]
1390
+ [2025-03-11 09:33:30,528][11975] Signal inference workers to resume experience collection...
1391
+ [2025-03-11 09:33:30,529][11988] InferenceWorker_p0-w0: resuming experience collection
1392
+ [2025-03-11 09:33:35,367][01034] Fps is (10 sec: 2456.7, 60 sec: 1637.8, 300 sec: 1637.8). Total num frames: 4030464. Throughput: 0: 465.6. Samples: 6986. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1393
+ [2025-03-11 09:33:35,370][01034] Avg episode reward: [(0, '9.019')]
1394
+ [2025-03-11 09:33:40,068][11988] Updated weights for policy 0, policy_version 988 (0.0013)
1395
+ [2025-03-11 09:33:40,362][01034] Fps is (10 sec: 4095.9, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 4046848. Throughput: 0: 447.7. Samples: 8954. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1396
+ [2025-03-11 09:33:40,367][01034] Avg episode reward: [(0, '12.408')]
1397
+ [2025-03-11 09:33:45,362][01034] Fps is (10 sec: 4098.2, 60 sec: 2621.4, 300 sec: 2621.4). Total num frames: 4071424. Throughput: 0: 620.9. Samples: 15522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1398
+ [2025-03-11 09:33:45,366][01034] Avg episode reward: [(0, '15.496')]
1399
+ [2025-03-11 09:33:48,882][11988] Updated weights for policy 0, policy_version 998 (0.0021)
1400
+ [2025-03-11 09:33:50,362][01034] Fps is (10 sec: 4505.7, 60 sec: 2867.2, 300 sec: 2867.2). Total num frames: 4091904. Throughput: 0: 733.2. Samples: 21996. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1401
+ [2025-03-11 09:33:50,363][01034] Avg episode reward: [(0, '16.786')]
1402
+ [2025-03-11 09:33:55,362][01034] Fps is (10 sec: 3276.8, 60 sec: 2808.7, 300 sec: 2808.7). Total num frames: 4104192. Throughput: 0: 688.9. Samples: 24112. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1403
+ [2025-03-11 09:33:55,363][01034] Avg episode reward: [(0, '18.035')]
1404
+ [2025-03-11 09:33:59,613][11988] Updated weights for policy 0, policy_version 1008 (0.0014)
1405
+ [2025-03-11 09:34:00,362][01034] Fps is (10 sec: 3686.4, 60 sec: 3072.0, 300 sec: 3072.0). Total num frames: 4128768. Throughput: 0: 756.0. Samples: 30240. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1406
+ [2025-03-11 09:34:00,363][01034] Avg episode reward: [(0, '21.810')]
1407
+ [2025-03-11 09:34:05,362][01034] Fps is (10 sec: 4915.2, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 4153344. Throughput: 0: 832.1. Samples: 37446. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1408
+ [2025-03-11 09:34:05,365][01034] Avg episode reward: [(0, '22.686')]
1409
+ [2025-03-11 09:34:10,125][11988] Updated weights for policy 0, policy_version 1018 (0.0017)
1410
+ [2025-03-11 09:34:10,362][01034] Fps is (10 sec: 4096.0, 60 sec: 3276.8, 300 sec: 3276.8). Total num frames: 4169728. Throughput: 0: 882.4. Samples: 39708. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1411
+ [2025-03-11 09:34:10,363][01034] Avg episode reward: [(0, '23.657')]
1412
+ [2025-03-11 09:34:15,362][01034] Fps is (10 sec: 4096.0, 60 sec: 3425.7, 300 sec: 3425.7). Total num frames: 4194304. Throughput: 0: 982.0. Samples: 46168. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1413
+ [2025-03-11 09:34:15,363][01034] Avg episode reward: [(0, '25.134')]
1414
+ [2025-03-11 09:34:18,601][11988] Updated weights for policy 0, policy_version 1028 (0.0017)
1415
+ [2025-03-11 09:34:20,362][01034] Fps is (10 sec: 4505.5, 60 sec: 3481.6, 300 sec: 3481.6). Total num frames: 4214784. Throughput: 0: 1029.5. Samples: 53308. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1416
+ [2025-03-11 09:34:20,363][01034] Avg episode reward: [(0, '25.168')]
1417
+ [2025-03-11 09:34:25,362][01034] Fps is (10 sec: 3686.4, 60 sec: 3754.8, 300 sec: 3465.8). Total num frames: 4231168. Throughput: 0: 1034.4. Samples: 55500. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1418
+ [2025-03-11 09:34:25,363][01034] Avg episode reward: [(0, '25.489')]
1419
+ [2025-03-11 09:34:25,371][11975] Saving new best policy, reward=25.489!
1420
+ [2025-03-11 09:34:28,907][11988] Updated weights for policy 0, policy_version 1038 (0.0029)
1421
+ [2025-03-11 09:34:30,362][01034] Fps is (10 sec: 4096.1, 60 sec: 4164.3, 300 sec: 3569.4). Total num frames: 4255744. Throughput: 0: 1029.4. Samples: 61844. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1422
+ [2025-03-11 09:34:30,368][01034] Avg episode reward: [(0, '25.401')]
1423
+ [2025-03-11 09:34:35,362][01034] Fps is (10 sec: 3686.4, 60 sec: 3959.8, 300 sec: 3495.3). Total num frames: 4268032. Throughput: 0: 992.1. Samples: 66642. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1424
+ [2025-03-11 09:34:35,366][01034] Avg episode reward: [(0, '25.305')]
1425
+ [2025-03-11 09:34:40,362][01034] Fps is (10 sec: 2867.2, 60 sec: 3959.5, 300 sec: 3481.6). Total num frames: 4284416. Throughput: 0: 992.8. Samples: 68786. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1426
+ [2025-03-11 09:34:40,365][01034] Avg episode reward: [(0, '25.804')]
1427
+ [2025-03-11 09:34:40,369][11975] Saving new best policy, reward=25.804!
1428
+ [2025-03-11 09:34:41,484][11988] Updated weights for policy 0, policy_version 1048 (0.0018)
1429
+ [2025-03-11 09:34:45,362][01034] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3565.9). Total num frames: 4308992. Throughput: 0: 1001.5. Samples: 75306. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1430
+ [2025-03-11 09:34:45,365][01034] Avg episode reward: [(0, '26.943')]
1431
+ [2025-03-11 09:34:45,371][11975] Saving new best policy, reward=26.943!
1432
+ [2025-03-11 09:34:50,362][01034] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3595.4). Total num frames: 4329472. Throughput: 0: 993.6. Samples: 82158. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1433
+ [2025-03-11 09:34:50,367][01034] Avg episode reward: [(0, '26.996')]
1434
+ [2025-03-11 09:34:50,385][11975] Saving new best policy, reward=26.996!
1435
+ [2025-03-11 09:34:50,393][11988] Updated weights for policy 0, policy_version 1058 (0.0023)
1436
+ [2025-03-11 09:34:55,362][01034] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3578.6). Total num frames: 4345856. Throughput: 0: 991.2. Samples: 84314. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1437
+ [2025-03-11 09:34:55,364][01034] Avg episode reward: [(0, '27.098')]
1438
+ [2025-03-11 09:34:55,444][11975] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001062_4349952.pth...
1439
+ [2025-03-11 09:34:55,589][11975] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000970_3973120.pth
1440
+ [2025-03-11 09:34:55,603][11975] Saving new best policy, reward=27.098!
1441
+ [2025-03-11 09:35:00,362][01034] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3645.4). Total num frames: 4370432. Throughput: 0: 986.2. Samples: 90548. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1442
+ [2025-03-11 09:35:00,367][01034] Avg episode reward: [(0, '27.904')]
1443
+ [2025-03-11 09:35:00,372][11975] Saving new best policy, reward=27.904!
1444
+ [2025-03-11 09:35:00,612][11988] Updated weights for policy 0, policy_version 1068 (0.0048)
1445
+ [2025-03-11 09:35:05,362][01034] Fps is (10 sec: 4505.6, 60 sec: 3959.5, 300 sec: 3666.9). Total num frames: 4390912. Throughput: 0: 980.5. Samples: 97430. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1446
+ [2025-03-11 09:35:05,366][01034] Avg episode reward: [(0, '29.205')]
1447
+ [2025-03-11 09:35:05,372][11975] Saving new best policy, reward=29.205!
1448
+ [2025-03-11 09:35:10,362][01034] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3686.4). Total num frames: 4411392. Throughput: 0: 978.3. Samples: 99522. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1449
+ [2025-03-11 09:35:10,367][01034] Avg episode reward: [(0, '29.362')]
1450
+ [2025-03-11 09:35:10,370][11975] Saving new best policy, reward=29.362!
1451
+ [2025-03-11 09:35:11,248][11988] Updated weights for policy 0, policy_version 1078 (0.0022)
1452
+ [2025-03-11 09:35:15,362][01034] Fps is (10 sec: 4096.0, 60 sec: 3959.5, 300 sec: 3704.2). Total num frames: 4431872. Throughput: 0: 985.6. Samples: 106194. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1453
+ [2025-03-11 09:35:15,366][01034] Avg episode reward: [(0, '27.649')]
1454
+ [2025-03-11 09:35:20,292][11988] Updated weights for policy 0, policy_version 1088 (0.0020)
1455
+ [2025-03-11 09:35:20,364][01034] Fps is (10 sec: 4504.4, 60 sec: 4027.6, 300 sec: 3754.6). Total num frames: 4456448. Throughput: 0: 1024.1. Samples: 112730. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1456
+ [2025-03-11 09:35:20,367][01034] Avg episode reward: [(0, '26.766')]
1457
+ [2025-03-11 09:35:25,362][01034] Fps is (10 sec: 4096.0, 60 sec: 4027.7, 300 sec: 3735.6). Total num frames: 4472832. Throughput: 0: 1024.6. Samples: 114892. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1458
+ [2025-03-11 09:35:25,366][01034] Avg episode reward: [(0, '26.297')]
1459
+ [2025-03-11 09:35:30,362][01034] Fps is (10 sec: 3687.3, 60 sec: 3959.5, 300 sec: 3749.4). Total num frames: 4493312. Throughput: 0: 1024.0. Samples: 121386. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1460
+ [2025-03-11 09:35:30,363][01034] Avg episode reward: [(0, '24.421')]
1461
+ [2025-03-11 09:35:30,760][11988] Updated weights for policy 0, policy_version 1098 (0.0034)
1462
+ [2025-03-11 09:35:35,368][01034] Fps is (10 sec: 4093.3, 60 sec: 4095.6, 300 sec: 3762.1). Total num frames: 4513792. Throughput: 0: 1015.5. Samples: 127864. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1463
+ [2025-03-11 09:35:35,370][01034] Avg episode reward: [(0, '24.652')]
1464
+ [2025-03-11 09:35:40,362][01034] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 3774.2). Total num frames: 4534272. Throughput: 0: 1016.6. Samples: 130060. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1465
+ [2025-03-11 09:35:40,365][01034] Avg episode reward: [(0, '23.710')]
1466
+ [2025-03-11 09:35:41,360][11988] Updated weights for policy 0, policy_version 1108 (0.0024)
1467
+ [2025-03-11 09:35:45,362][01034] Fps is (10 sec: 4098.7, 60 sec: 4096.0, 300 sec: 3785.3). Total num frames: 4554752. Throughput: 0: 1028.6. Samples: 136834. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1468
+ [2025-03-11 09:35:45,366][01034] Avg episode reward: [(0, '24.926')]
1469
+ [2025-03-11 09:35:50,362][01034] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3795.6). Total num frames: 4575232. Throughput: 0: 1021.3. Samples: 143388. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1470
+ [2025-03-11 09:35:50,366][01034] Avg episode reward: [(0, '24.932')]
1471
+ [2025-03-11 09:35:50,596][11988] Updated weights for policy 0, policy_version 1118 (0.0018)
1472
+ [2025-03-11 09:35:55,362][01034] Fps is (10 sec: 4096.0, 60 sec: 4164.3, 300 sec: 3805.3). Total num frames: 4595712. Throughput: 0: 1021.7. Samples: 145500. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1473
+ [2025-03-11 09:35:55,368][01034] Avg episode reward: [(0, '24.212')]
1474
+ [2025-03-11 09:36:00,362][01034] Fps is (10 sec: 4096.0, 60 sec: 4096.0, 300 sec: 3814.4). Total num frames: 4616192. Throughput: 0: 1022.8. Samples: 152220. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1475
+ [2025-03-11 09:36:00,367][01034] Avg episode reward: [(0, '24.105')]
1476
+ [2025-03-11 09:36:00,587][11988] Updated weights for policy 0, policy_version 1128 (0.0021)
1477
+ [2025-03-11 09:36:05,364][01034] Fps is (10 sec: 4095.2, 60 sec: 4095.9, 300 sec: 3822.9). Total num frames: 4636672. Throughput: 0: 1020.2. Samples: 158640. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1478
+ [2025-03-11 09:36:05,365][01034] Avg episode reward: [(0, '24.591')]
1479
+ [2025-03-11 09:36:10,362][01034] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3806.9). Total num frames: 4653056. Throughput: 0: 1018.2. Samples: 160712. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1480
+ [2025-03-11 09:36:10,364][01034] Avg episode reward: [(0, '25.626')]
1481
+ [2025-03-11 09:36:11,436][11988] Updated weights for policy 0, policy_version 1138 (0.0016)
1482
+ [2025-03-11 09:36:15,362][01034] Fps is (10 sec: 4096.8, 60 sec: 4096.0, 300 sec: 3838.5). Total num frames: 4677632. Throughput: 0: 1020.0. Samples: 167286. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1483
+ [2025-03-11 09:36:15,365][01034] Avg episode reward: [(0, '26.171')]
1484
+ [2025-03-11 09:36:20,364][01034] Fps is (10 sec: 4504.6, 60 sec: 4027.8, 300 sec: 3845.6). Total num frames: 4698112. Throughput: 0: 1011.0. Samples: 173354. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1485
+ [2025-03-11 09:36:20,365][01034] Avg episode reward: [(0, '27.801')]
1486
+ [2025-03-11 09:36:21,961][11988] Updated weights for policy 0, policy_version 1148 (0.0024)
1487
+ [2025-03-11 09:36:25,362][01034] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3830.3). Total num frames: 4714496. Throughput: 0: 1007.4. Samples: 175392. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1488
+ [2025-03-11 09:36:25,365][01034] Avg episode reward: [(0, '29.018')]
1489
+ [2025-03-11 09:36:30,362][01034] Fps is (10 sec: 4097.0, 60 sec: 4096.0, 300 sec: 3858.9). Total num frames: 4739072. Throughput: 0: 1007.9. Samples: 182188. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1490
+ [2025-03-11 09:36:30,363][01034] Avg episode reward: [(0, '27.383')]
1491
+ [2025-03-11 09:36:31,344][11988] Updated weights for policy 0, policy_version 1158 (0.0017)
1492
+ [2025-03-11 09:36:35,362][01034] Fps is (10 sec: 4095.8, 60 sec: 4028.1, 300 sec: 3843.9). Total num frames: 4755456. Throughput: 0: 997.2. Samples: 188262. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
1493
+ [2025-03-11 09:36:35,364][01034] Avg episode reward: [(0, '26.201')]
1494
+ [2025-03-11 09:36:40,362][01034] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3850.2). Total num frames: 4775936. Throughput: 0: 995.5. Samples: 190296. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
1495
+ [2025-03-11 09:36:40,363][01034] Avg episode reward: [(0, '25.372')]
1496
+ [2025-03-11 09:36:42,078][11988] Updated weights for policy 0, policy_version 1168 (0.0029)
1497
+ [2025-03-11 09:36:45,362][01034] Fps is (10 sec: 4096.2, 60 sec: 4027.7, 300 sec: 3856.2). Total num frames: 4796416. Throughput: 0: 1000.5. Samples: 197244. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
1498
+ [2025-03-11 09:36:45,363][01034] Avg episode reward: [(0, '24.422')]
1499
+ [2025-03-11 09:36:50,363][01034] Fps is (10 sec: 4095.7, 60 sec: 4027.7, 300 sec: 3861.9). Total num frames: 4816896. Throughput: 0: 992.2. Samples: 203286. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
1500
+ [2025-03-11 09:36:50,364][01034] Avg episode reward: [(0, '24.667')]
1501
+ [2025-03-11 09:36:52,769][11988] Updated weights for policy 0, policy_version 1178 (0.0017)
1502
+ [2025-03-11 09:36:55,362][01034] Fps is (10 sec: 3686.4, 60 sec: 3959.5, 300 sec: 3848.3). Total num frames: 4833280. Throughput: 0: 992.8. Samples: 205390. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
1503
+ [2025-03-11 09:36:55,365][01034] Avg episode reward: [(0, '25.056')]
1504
+ [2025-03-11 09:36:55,375][11975] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001180_4833280.pth...
1505
+ [2025-03-11 09:36:55,572][11975] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth
1506
+ [2025-03-11 09:37:00,362][01034] Fps is (10 sec: 4096.4, 60 sec: 4027.7, 300 sec: 3872.6). Total num frames: 4857856. Throughput: 0: 996.9. Samples: 212146. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1507
+ [2025-03-11 09:37:00,367][01034] Avg episode reward: [(0, '26.580')]
1508
+ [2025-03-11 09:37:01,909][11988] Updated weights for policy 0, policy_version 1188 (0.0013)
1509
+ [2025-03-11 09:37:05,362][01034] Fps is (10 sec: 4096.0, 60 sec: 3959.6, 300 sec: 3859.3). Total num frames: 4874240. Throughput: 0: 993.3. Samples: 218050. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1510
+ [2025-03-11 09:37:05,369][01034] Avg episode reward: [(0, '26.225')]
1511
+ [2025-03-11 09:37:10,362][01034] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3864.5). Total num frames: 4894720. Throughput: 0: 997.3. Samples: 220272. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1512
+ [2025-03-11 09:37:10,365][01034] Avg episode reward: [(0, '25.605')]
1513
+ [2025-03-11 09:37:12,703][11988] Updated weights for policy 0, policy_version 1198 (0.0013)
1514
+ [2025-03-11 09:37:15,362][01034] Fps is (10 sec: 4505.6, 60 sec: 4027.7, 300 sec: 3886.8). Total num frames: 4919296. Throughput: 0: 1000.5. Samples: 227212. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1515
+ [2025-03-11 09:37:15,365][01034] Avg episode reward: [(0, '26.486')]
1516
+ [2025-03-11 09:37:20,364][01034] Fps is (10 sec: 4095.0, 60 sec: 3959.5, 300 sec: 3874.1). Total num frames: 4935680. Throughput: 0: 996.2. Samples: 233092. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1517
+ [2025-03-11 09:37:20,365][01034] Avg episode reward: [(0, '25.940')]
1518
+ [2025-03-11 09:37:23,538][11988] Updated weights for policy 0, policy_version 1208 (0.0034)
1519
+ [2025-03-11 09:37:25,362][01034] Fps is (10 sec: 3686.4, 60 sec: 4027.7, 300 sec: 3878.7). Total num frames: 4956160. Throughput: 0: 1000.9. Samples: 235336. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
1520
+ [2025-03-11 09:37:25,365][01034] Avg episode reward: [(0, '26.098')]
1521
+ [2025-03-11 09:37:30,362][01034] Fps is (10 sec: 4097.0, 60 sec: 3959.5, 300 sec: 3883.0). Total num frames: 4976640. Throughput: 0: 1004.2. Samples: 242432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
1522
+ [2025-03-11 09:37:30,363][01034] Avg episode reward: [(0, '26.919')]
1523
+ [2025-03-11 09:37:32,059][11988] Updated weights for policy 0, policy_version 1218 (0.0021)
1524
+ [2025-03-11 09:37:35,362][01034] Fps is (10 sec: 4096.0, 60 sec: 4027.8, 300 sec: 3887.2). Total num frames: 4997120. Throughput: 0: 1004.7. Samples: 248496. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
1525
+ [2025-03-11 09:37:35,365][01034] Avg episode reward: [(0, '27.347')]
1526
+ [2025-03-11 09:37:37,397][11975] Stopping Batcher_0...
1527
+ [2025-03-11 09:37:37,397][11975] Loop batcher_evt_loop terminating...
1528
+ [2025-03-11 09:37:37,397][01034] Component Batcher_0 stopped!
1529
+ [2025-03-11 09:37:37,399][11975] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
1530
+ [2025-03-11 09:37:37,458][11988] Weights refcount: 2 0
1531
+ [2025-03-11 09:37:37,463][01034] Component InferenceWorker_p0-w0 stopped!
1532
+ [2025-03-11 09:37:37,465][11988] Stopping InferenceWorker_p0-w0...
1533
+ [2025-03-11 09:37:37,467][11988] Loop inference_proc0-0_evt_loop terminating...
1534
+ [2025-03-11 09:37:37,521][11975] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001062_4349952.pth
1535
+ [2025-03-11 09:37:37,530][11975] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
1536
+ [2025-03-11 09:37:37,711][11975] Stopping LearnerWorker_p0...
1537
+ [2025-03-11 09:37:37,712][01034] Component LearnerWorker_p0 stopped!
1538
+ [2025-03-11 09:37:37,715][11975] Loop learner_proc0_evt_loop terminating...
1539
+ [2025-03-11 09:37:37,767][01034] Component RolloutWorker_w6 stopped!
1540
+ [2025-03-11 09:37:37,768][11996] Stopping RolloutWorker_w6...
1541
+ [2025-03-11 09:37:37,770][01034] Component RolloutWorker_w4 stopped!
1542
+ [2025-03-11 09:37:37,772][11993] Stopping RolloutWorker_w4...
1543
+ [2025-03-11 09:37:37,768][11996] Loop rollout_proc6_evt_loop terminating...
1544
+ [2025-03-11 09:37:37,772][11993] Loop rollout_proc4_evt_loop terminating...
1545
+ [2025-03-11 09:37:37,781][01034] Component RolloutWorker_w2 stopped!
1546
+ [2025-03-11 09:37:37,782][11991] Stopping RolloutWorker_w2...
1547
+ [2025-03-11 09:37:37,788][11991] Loop rollout_proc2_evt_loop terminating...
1548
+ [2025-03-11 09:37:37,808][01034] Component RolloutWorker_w0 stopped!
1549
+ [2025-03-11 09:37:37,808][11989] Stopping RolloutWorker_w0...
1550
+ [2025-03-11 09:37:37,811][11989] Loop rollout_proc0_evt_loop terminating...
1551
+ [2025-03-11 09:37:37,955][11994] Stopping RolloutWorker_w5...
1552
+ [2025-03-11 09:37:37,955][11994] Loop rollout_proc5_evt_loop terminating...
1553
+ [2025-03-11 09:37:37,963][11995] Stopping RolloutWorker_w7...
1554
+ [2025-03-11 09:37:37,963][11995] Loop rollout_proc7_evt_loop terminating...
1555
+ [2025-03-11 09:37:37,963][01034] Component RolloutWorker_w5 stopped!
1556
+ [2025-03-11 09:37:37,966][01034] Component RolloutWorker_w7 stopped!
1557
+ [2025-03-11 09:37:37,982][11992] Stopping RolloutWorker_w3...
1558
+ [2025-03-11 09:37:37,982][01034] Component RolloutWorker_w3 stopped!
1559
+ [2025-03-11 09:37:37,993][11992] Loop rollout_proc3_evt_loop terminating...
1560
+ [2025-03-11 09:37:38,023][11990] Stopping RolloutWorker_w1...
1561
+ [2025-03-11 09:37:38,026][11990] Loop rollout_proc1_evt_loop terminating...
1562
+ [2025-03-11 09:37:38,023][01034] Component RolloutWorker_w1 stopped!
1563
+ [2025-03-11 09:37:38,027][01034] Waiting for process learner_proc0 to stop...
1564
+ [2025-03-11 09:37:39,434][01034] Waiting for process inference_proc0-0 to join...
1565
+ [2025-03-11 09:37:39,435][01034] Waiting for process rollout_proc0 to join...
1566
+ [2025-03-11 09:37:41,580][01034] Waiting for process rollout_proc1 to join...
1567
+ [2025-03-11 09:37:41,604][01034] Waiting for process rollout_proc2 to join...
1568
+ [2025-03-11 09:37:41,608][01034] Waiting for process rollout_proc3 to join...
1569
+ [2025-03-11 09:37:41,610][01034] Waiting for process rollout_proc4 to join...
1570
+ [2025-03-11 09:37:41,612][01034] Waiting for process rollout_proc5 to join...
1571
+ [2025-03-11 09:37:41,613][01034] Waiting for process rollout_proc6 to join...
1572
+ [2025-03-11 09:37:41,614][01034] Waiting for process rollout_proc7 to join...
1573
+ [2025-03-11 09:37:41,616][01034] Batcher 0 profile tree view:
1574
+ batching: 6.6567, releasing_batches: 0.0064
1575
+ [2025-03-11 09:37:41,617][01034] InferenceWorker_p0-w0 profile tree view:
1576
+ wait_policy: 0.0000
1577
+ wait_policy_total: 106.0202
1578
+ update_model: 2.0983
1579
+ weight_update: 0.0023
1580
+ one_step: 0.0027
1581
+ handle_policy_step: 142.7545
1582
+ deserialize: 3.5692, stack: 0.7917, obs_to_device_normalize: 30.2719, forward: 73.4046, send_messages: 6.8078
1583
+ prepare_outputs: 21.8253
1584
+ to_cpu: 13.4822
1585
+ [2025-03-11 09:37:41,618][01034] Learner 0 profile tree view:
1586
+ misc: 0.0010, prepare_batch: 4.1704
1587
+ train: 20.0778
1588
+ epoch_init: 0.0011, minibatch_init: 0.0014, losses_postprocess: 0.1680, kl_divergence: 0.1749, after_optimizer: 0.7446
1589
+ calculate_losses: 6.6746
1590
+ losses_init: 0.0008, forward_head: 0.6318, bptt_initial: 4.0933, tail: 0.3026, advantages_returns: 0.0758, losses: 0.9827
1591
+ bptt: 0.5234
1592
+ bptt_forward_core: 0.5085
1593
+ update: 12.1652
1594
+ clip: 0.2320
1595
+ [2025-03-11 09:37:41,619][01034] RolloutWorker_w0 profile tree view:
1596
+ wait_for_trajectories: 0.0540, enqueue_policy_requests: 23.6359, env_step: 197.9996, overhead: 2.9385, complete_rollouts: 1.6932
1597
+ save_policy_outputs: 4.2113
1598
+ split_output_tensors: 1.5586
1599
+ [2025-03-11 09:37:41,621][01034] RolloutWorker_w7 profile tree view:
1600
+ wait_for_trajectories: 0.0829, enqueue_policy_requests: 25.4075, env_step: 197.3198, overhead: 3.0436, complete_rollouts: 1.7271
1601
+ save_policy_outputs: 4.7781
1602
+ split_output_tensors: 1.9377
1603
+ [2025-03-11 09:37:41,623][01034] Loop Runner_EvtLoop terminating...
1604
+ [2025-03-11 09:37:41,624][01034] Runner profile tree view:
1605
+ main_loop: 282.1627
1606
+ [2025-03-11 09:37:41,625][01034] Collected {0: 5005312}, FPS: 3542.0
1607
+ [2025-03-11 09:38:17,617][01034] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
1608
+ [2025-03-11 09:38:17,619][01034] Overriding arg 'num_workers' with value 1 passed from command line
1609
+ [2025-03-11 09:38:17,620][01034] Adding new argument 'no_render'=True that is not in the saved config file!
1610
+ [2025-03-11 09:38:17,621][01034] Adding new argument 'save_video'=True that is not in the saved config file!
1611
+ [2025-03-11 09:38:17,622][01034] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
1612
+ [2025-03-11 09:38:17,623][01034] Adding new argument 'video_name'=None that is not in the saved config file!
1613
+ [2025-03-11 09:38:17,624][01034] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
1614
+ [2025-03-11 09:38:17,625][01034] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
1615
+ [2025-03-11 09:38:17,626][01034] Adding new argument 'push_to_hub'=False that is not in the saved config file!
1616
+ [2025-03-11 09:38:17,626][01034] Adding new argument 'hf_repository'=None that is not in the saved config file!
1617
+ [2025-03-11 09:38:17,627][01034] Adding new argument 'policy_index'=0 that is not in the saved config file!
1618
+ [2025-03-11 09:38:17,628][01034] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
1619
+ [2025-03-11 09:38:17,629][01034] Adding new argument 'train_script'=None that is not in the saved config file!
1620
+ [2025-03-11 09:38:17,629][01034] Adding new argument 'enjoy_script'=None that is not in the saved config file!
1621
+ [2025-03-11 09:38:17,630][01034] Using frameskip 1 and render_action_repeat=4 for evaluation
1622
+ [2025-03-11 09:38:17,669][01034] RunningMeanStd input shape: (3, 72, 128)
1623
+ [2025-03-11 09:38:17,671][01034] RunningMeanStd input shape: (1,)
1624
+ [2025-03-11 09:38:17,690][01034] ConvEncoder: input_channels=3
1625
+ [2025-03-11 09:38:17,728][01034] Conv encoder output size: 512
1626
+ [2025-03-11 09:38:17,729][01034] Policy head output size: 512
1627
+ [2025-03-11 09:38:17,748][01034] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
1628
+ [2025-03-11 09:38:18,162][01034] Num frames 100...
1629
+ [2025-03-11 09:38:18,290][01034] Num frames 200...
1630
+ [2025-03-11 09:38:18,425][01034] Num frames 300...
1631
+ [2025-03-11 09:38:18,570][01034] Num frames 400...
1632
+ [2025-03-11 09:38:18,705][01034] Num frames 500...
1633
+ [2025-03-11 09:38:18,842][01034] Num frames 600...
1634
+ [2025-03-11 09:38:18,973][01034] Num frames 700...
1635
+ [2025-03-11 09:38:19,104][01034] Num frames 800...
1636
+ [2025-03-11 09:38:19,233][01034] Num frames 900...
1637
+ [2025-03-11 09:38:19,362][01034] Num frames 1000...
1638
+ [2025-03-11 09:38:19,500][01034] Num frames 1100...
1639
+ [2025-03-11 09:38:19,636][01034] Num frames 1200...
1640
+ [2025-03-11 09:38:19,780][01034] Num frames 1300...
1641
+ [2025-03-11 09:38:19,912][01034] Num frames 1400...
1642
+ [2025-03-11 09:38:20,045][01034] Num frames 1500...
1643
+ [2025-03-11 09:38:20,175][01034] Num frames 1600...
1644
+ [2025-03-11 09:38:20,309][01034] Num frames 1700...
1645
+ [2025-03-11 09:38:20,453][01034] Num frames 1800...
1646
+ [2025-03-11 09:38:20,582][01034] Num frames 1900...
1647
+ [2025-03-11 09:38:20,716][01034] Num frames 2000...
1648
+ [2025-03-11 09:38:20,861][01034] Num frames 2100...
1649
+ [2025-03-11 09:38:20,913][01034] Avg episode rewards: #0: 63.999, true rewards: #0: 21.000
1650
+ [2025-03-11 09:38:20,914][01034] Avg episode reward: 63.999, avg true_objective: 21.000
1651
+ [2025-03-11 09:38:21,044][01034] Num frames 2200...
1652
+ [2025-03-11 09:38:21,174][01034] Num frames 2300...
1653
+ [2025-03-11 09:38:21,305][01034] Num frames 2400...
1654
+ [2025-03-11 09:38:21,490][01034] Num frames 2500...
1655
+ [2025-03-11 09:38:21,663][01034] Num frames 2600...
1656
+ [2025-03-11 09:38:21,767][01034] Avg episode rewards: #0: 37.129, true rewards: #0: 13.130
1657
+ [2025-03-11 09:38:21,768][01034] Avg episode reward: 37.129, avg true_objective: 13.130
1658
+ [2025-03-11 09:38:21,907][01034] Num frames 2700...
1659
+ [2025-03-11 09:38:22,076][01034] Num frames 2800...
1660
+ [2025-03-11 09:38:22,247][01034] Num frames 2900...
1661
+ [2025-03-11 09:38:22,421][01034] Num frames 3000...
1662
+ [2025-03-11 09:38:22,594][01034] Num frames 3100...
1663
+ [2025-03-11 09:38:22,768][01034] Num frames 3200...
1664
+ [2025-03-11 09:38:22,958][01034] Num frames 3300...
1665
+ [2025-03-11 09:38:23,139][01034] Num frames 3400...
1666
+ [2025-03-11 09:38:23,325][01034] Num frames 3500...
1667
+ [2025-03-11 09:38:23,524][01034] Num frames 3600...
1668
+ [2025-03-11 09:38:23,682][01034] Num frames 3700...
1669
+ [2025-03-11 09:38:23,813][01034] Num frames 3800...
1670
+ [2025-03-11 09:38:23,957][01034] Num frames 3900...
1671
+ [2025-03-11 09:38:24,088][01034] Num frames 4000...
1672
+ [2025-03-11 09:38:24,216][01034] Num frames 4100...
1673
+ [2025-03-11 09:38:24,348][01034] Num frames 4200...
1674
+ [2025-03-11 09:38:24,482][01034] Avg episode rewards: #0: 38.843, true rewards: #0: 14.177
1675
+ [2025-03-11 09:38:24,483][01034] Avg episode reward: 38.843, avg true_objective: 14.177
1676
+ [2025-03-11 09:38:24,544][01034] Num frames 4300...
1677
+ [2025-03-11 09:38:24,672][01034] Num frames 4400...
1678
+ [2025-03-11 09:38:24,811][01034] Num frames 4500...
1679
+ [2025-03-11 09:38:24,950][01034] Num frames 4600...
1680
+ [2025-03-11 09:38:25,088][01034] Num frames 4700...
1681
+ [2025-03-11 09:38:25,217][01034] Num frames 4800...
1682
+ [2025-03-11 09:38:25,347][01034] Num frames 4900...
1683
+ [2025-03-11 09:38:25,484][01034] Num frames 5000...
1684
+ [2025-03-11 09:38:25,615][01034] Num frames 5100...
1685
+ [2025-03-11 09:38:25,745][01034] Num frames 5200...
1686
+ [2025-03-11 09:38:25,903][01034] Num frames 5300...
1687
+ [2025-03-11 09:38:26,040][01034] Avg episode rewards: #0: 35.372, true rewards: #0: 13.372
1688
+ [2025-03-11 09:38:26,041][01034] Avg episode reward: 35.372, avg true_objective: 13.372
1689
+ [2025-03-11 09:38:26,109][01034] Num frames 5400...
1690
+ [2025-03-11 09:38:26,241][01034] Num frames 5500...
1691
+ [2025-03-11 09:38:26,372][01034] Num frames 5600...
1692
+ [2025-03-11 09:38:26,509][01034] Num frames 5700...
1693
+ [2025-03-11 09:38:26,635][01034] Num frames 5800...
1694
+ [2025-03-11 09:38:26,768][01034] Num frames 5900...
1695
+ [2025-03-11 09:38:26,871][01034] Avg episode rewards: #0: 30.472, true rewards: #0: 11.872
1696
+ [2025-03-11 09:38:26,871][01034] Avg episode reward: 30.472, avg true_objective: 11.872
1697
+ [2025-03-11 09:38:26,955][01034] Num frames 6000...
1698
+ [2025-03-11 09:38:27,093][01034] Num frames 6100...
1699
+ [2025-03-11 09:38:27,224][01034] Num frames 6200...
+ [2025-03-11 09:38:27,352][01034] Num frames 6300...
+ [2025-03-11 09:38:27,486][01034] Num frames 6400...
+ [2025-03-11 09:38:27,616][01034] Num frames 6500...
+ [2025-03-11 09:38:27,745][01034] Num frames 6600...
+ [2025-03-11 09:38:27,811][01034] Avg episode rewards: #0: 27.513, true rewards: #0: 11.013
+ [2025-03-11 09:38:27,812][01034] Avg episode reward: 27.513, avg true_objective: 11.013
+ [2025-03-11 09:38:27,935][01034] Num frames 6700...
+ [2025-03-11 09:38:28,075][01034] Num frames 6800...
+ [2025-03-11 09:38:28,202][01034] Num frames 6900...
+ [2025-03-11 09:38:28,333][01034] Num frames 7000...
+ [2025-03-11 09:38:28,467][01034] Num frames 7100...
+ [2025-03-11 09:38:28,596][01034] Num frames 7200...
+ [2025-03-11 09:38:28,724][01034] Num frames 7300...
+ [2025-03-11 09:38:28,857][01034] Num frames 7400...
+ [2025-03-11 09:38:28,987][01034] Num frames 7500...
+ [2025-03-11 09:38:29,126][01034] Num frames 7600...
+ [2025-03-11 09:38:29,257][01034] Num frames 7700...
+ [2025-03-11 09:38:29,429][01034] Avg episode rewards: #0: 27.417, true rewards: #0: 11.131
+ [2025-03-11 09:38:29,430][01034] Avg episode reward: 27.417, avg true_objective: 11.131
+ [2025-03-11 09:38:29,442][01034] Num frames 7800...
+ [2025-03-11 09:38:29,570][01034] Num frames 7900...
+ [2025-03-11 09:38:29,702][01034] Num frames 8000...
+ [2025-03-11 09:38:29,834][01034] Num frames 8100...
+ [2025-03-11 09:38:29,963][01034] Num frames 8200...
+ [2025-03-11 09:38:30,102][01034] Num frames 8300...
+ [2025-03-11 09:38:30,230][01034] Num frames 8400...
+ [2025-03-11 09:38:30,357][01034] Num frames 8500...
+ [2025-03-11 09:38:30,493][01034] Num frames 8600...
+ [2025-03-11 09:38:30,623][01034] Num frames 8700...
+ [2025-03-11 09:38:30,754][01034] Num frames 8800...
+ [2025-03-11 09:38:30,884][01034] Num frames 8900...
+ [2025-03-11 09:38:31,014][01034] Num frames 9000...
+ [2025-03-11 09:38:31,155][01034] Num frames 9100...
+ [2025-03-11 09:38:31,291][01034] Num frames 9200...
+ [2025-03-11 09:38:31,429][01034] Num frames 9300...
+ [2025-03-11 09:38:31,561][01034] Num frames 9400...
+ [2025-03-11 09:38:31,693][01034] Num frames 9500...
+ [2025-03-11 09:38:31,828][01034] Num frames 9600...
+ [2025-03-11 09:38:31,966][01034] Num frames 9700...
+ [2025-03-11 09:38:32,102][01034] Num frames 9800...
+ [2025-03-11 09:38:32,247][01034] Avg episode rewards: #0: 30.826, true rewards: #0: 12.326
+ [2025-03-11 09:38:32,248][01034] Avg episode reward: 30.826, avg true_objective: 12.326
+ [2025-03-11 09:38:32,299][01034] Num frames 9900...
+ [2025-03-11 09:38:32,437][01034] Num frames 10000...
+ [2025-03-11 09:38:32,567][01034] Num frames 10100...
+ [2025-03-11 09:38:32,725][01034] Num frames 10200...
+ [2025-03-11 09:38:32,855][01034] Num frames 10300...
+ [2025-03-11 09:38:32,987][01034] Num frames 10400...
+ [2025-03-11 09:38:33,119][01034] Num frames 10500...
+ [2025-03-11 09:38:33,264][01034] Num frames 10600...
+ [2025-03-11 09:38:33,392][01034] Num frames 10700...
+ [2025-03-11 09:38:33,535][01034] Num frames 10800...
+ [2025-03-11 09:38:33,683][01034] Num frames 10900...
+ [2025-03-11 09:38:33,864][01034] Num frames 11000...
+ [2025-03-11 09:38:33,988][01034] Avg episode rewards: #0: 30.595, true rewards: #0: 12.262
+ [2025-03-11 09:38:33,989][01034] Avg episode reward: 30.595, avg true_objective: 12.262
+ [2025-03-11 09:38:34,105][01034] Num frames 11100...
+ [2025-03-11 09:38:34,280][01034] Num frames 11200...
+ [2025-03-11 09:38:34,451][01034] Num frames 11300...
+ [2025-03-11 09:38:34,616][01034] Num frames 11400...
+ [2025-03-11 09:38:34,783][01034] Num frames 11500...
+ [2025-03-11 09:38:34,961][01034] Num frames 11600...
+ [2025-03-11 09:38:35,040][01034] Avg episode rewards: #0: 28.912, true rewards: #0: 11.612
+ [2025-03-11 09:38:35,041][01034] Avg episode reward: 28.912, avg true_objective: 11.612
+ [2025-03-11 09:39:42,700][01034] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
+ [2025-03-11 09:40:02,896][01034] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
+ [2025-03-11 09:40:02,897][01034] Overriding arg 'num_workers' with value 1 passed from command line
+ [2025-03-11 09:40:02,898][01034] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2025-03-11 09:40:02,901][01034] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2025-03-11 09:40:02,902][01034] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2025-03-11 09:40:02,903][01034] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2025-03-11 09:40:02,904][01034] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
+ [2025-03-11 09:40:02,904][01034] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2025-03-11 09:40:02,905][01034] Adding new argument 'push_to_hub'=True that is not in the saved config file!
+ [2025-03-11 09:40:02,909][01034] Adding new argument 'hf_repository'='so7en/Doom_unit8_2' that is not in the saved config file!
+ [2025-03-11 09:40:02,910][01034] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2025-03-11 09:40:02,910][01034] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2025-03-11 09:40:02,911][01034] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2025-03-11 09:40:02,912][01034] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2025-03-11 09:40:02,913][01034] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2025-03-11 09:40:02,954][01034] RunningMeanStd input shape: (3, 72, 128)
+ [2025-03-11 09:40:02,956][01034] RunningMeanStd input shape: (1,)
+ [2025-03-11 09:40:02,970][01034] ConvEncoder: input_channels=3
+ [2025-03-11 09:40:03,025][01034] Conv encoder output size: 512
+ [2025-03-11 09:40:03,026][01034] Policy head output size: 512
+ [2025-03-11 09:40:03,052][01034] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000001222_5005312.pth...
+ [2025-03-11 09:40:03,620][01034] Num frames 100...
+ [2025-03-11 09:40:03,755][01034] Num frames 200...
+ [2025-03-11 09:40:03,887][01034] Num frames 300...
+ [2025-03-11 09:40:04,015][01034] Num frames 400...
+ [2025-03-11 09:40:04,142][01034] Num frames 500...
+ [2025-03-11 09:40:04,277][01034] Num frames 600...
+ [2025-03-11 09:40:04,415][01034] Num frames 700...
+ [2025-03-11 09:40:04,543][01034] Num frames 800...
+ [2025-03-11 09:40:04,673][01034] Num frames 900...
+ [2025-03-11 09:40:04,802][01034] Num frames 1000...
+ [2025-03-11 09:40:04,926][01034] Num frames 1100...
+ [2025-03-11 09:40:05,057][01034] Num frames 1200...
+ [2025-03-11 09:40:05,189][01034] Num frames 1300...
+ [2025-03-11 09:40:05,372][01034] Avg episode rewards: #0: 38.950, true rewards: #0: 13.950
+ [2025-03-11 09:40:05,374][01034] Avg episode reward: 38.950, avg true_objective: 13.950
+ [2025-03-11 09:40:05,383][01034] Num frames 1400...
+ [2025-03-11 09:40:05,516][01034] Num frames 1500...
+ [2025-03-11 09:40:05,648][01034] Num frames 1600...
+ [2025-03-11 09:40:05,776][01034] Num frames 1700...
+ [2025-03-11 09:40:05,846][01034] Avg episode rewards: #0: 22.055, true rewards: #0: 8.555
+ [2025-03-11 09:40:05,846][01034] Avg episode reward: 22.055, avg true_objective: 8.555
+ [2025-03-11 09:40:05,957][01034] Num frames 1800...
+ [2025-03-11 09:40:06,092][01034] Num frames 1900...
+ [2025-03-11 09:40:06,217][01034] Num frames 2000...
+ [2025-03-11 09:40:06,356][01034] Num frames 2100...
+ [2025-03-11 09:40:06,491][01034] Num frames 2200...
+ [2025-03-11 09:40:06,624][01034] Num frames 2300...
+ [2025-03-11 09:40:06,756][01034] Num frames 2400...
+ [2025-03-11 09:40:06,883][01034] Num frames 2500...
+ [2025-03-11 09:40:07,013][01034] Num frames 2600...
+ [2025-03-11 09:40:07,142][01034] Num frames 2700...
+ [2025-03-11 09:40:07,284][01034] Num frames 2800...
+ [2025-03-11 09:40:07,420][01034] Num frames 2900...
+ [2025-03-11 09:40:07,550][01034] Num frames 3000...
+ [2025-03-11 09:40:07,679][01034] Num frames 3100...
+ [2025-03-11 09:40:07,846][01034] Avg episode rewards: #0: 27.277, true rewards: #0: 10.610
+ [2025-03-11 09:40:07,847][01034] Avg episode reward: 27.277, avg true_objective: 10.610
+ [2025-03-11 09:40:07,870][01034] Num frames 3200...
+ [2025-03-11 09:40:07,993][01034] Num frames 3300...
+ [2025-03-11 09:40:08,120][01034] Num frames 3400...
+ [2025-03-11 09:40:08,248][01034] Num frames 3500...
+ [2025-03-11 09:40:08,383][01034] Num frames 3600...
+ [2025-03-11 09:40:08,523][01034] Num frames 3700...
+ [2025-03-11 09:40:08,650][01034] Num frames 3800...
+ [2025-03-11 09:40:08,778][01034] Num frames 3900...
+ [2025-03-11 09:40:08,907][01034] Num frames 4000...
+ [2025-03-11 09:40:09,035][01034] Num frames 4100...
+ [2025-03-11 09:40:09,197][01034] Avg episode rewards: #0: 26.195, true rewards: #0: 10.445
+ [2025-03-11 09:40:09,198][01034] Avg episode reward: 26.195, avg true_objective: 10.445
+ [2025-03-11 09:40:09,234][01034] Num frames 4200...
+ [2025-03-11 09:40:09,378][01034] Num frames 4300...
+ [2025-03-11 09:40:09,516][01034] Num frames 4400...
+ [2025-03-11 09:40:09,641][01034] Num frames 4500...
+ [2025-03-11 09:40:09,768][01034] Num frames 4600...
+ [2025-03-11 09:40:09,897][01034] Num frames 4700...
+ [2025-03-11 09:40:10,022][01034] Num frames 4800...
+ [2025-03-11 09:40:10,150][01034] Num frames 4900...
+ [2025-03-11 09:40:10,282][01034] Num frames 5000...
+ [2025-03-11 09:40:10,430][01034] Num frames 5100...
+ [2025-03-11 09:40:10,559][01034] Num frames 5200...
+ [2025-03-11 09:40:10,684][01034] Num frames 5300...
+ [2025-03-11 09:40:10,813][01034] Num frames 5400...
+ [2025-03-11 09:40:10,941][01034] Num frames 5500...
+ [2025-03-11 09:40:11,070][01034] Num frames 5600...
+ [2025-03-11 09:40:11,195][01034] Num frames 5700...
+ [2025-03-11 09:40:11,323][01034] Num frames 5800...
+ [2025-03-11 09:40:11,483][01034] Num frames 5900...
+ [2025-03-11 09:40:11,647][01034] Avg episode rewards: #0: 30.364, true rewards: #0: 11.964
+ [2025-03-11 09:40:11,648][01034] Avg episode reward: 30.364, avg true_objective: 11.964
+ [2025-03-11 09:40:11,674][01034] Num frames 6000...
+ [2025-03-11 09:40:11,810][01034] Num frames 6100...
+ [2025-03-11 09:40:11,938][01034] Num frames 6200...
+ [2025-03-11 09:40:12,066][01034] Num frames 6300...
+ [2025-03-11 09:40:12,246][01034] Avg episode rewards: #0: 26.497, true rewards: #0: 10.663
+ [2025-03-11 09:40:12,247][01034] Avg episode reward: 26.497, avg true_objective: 10.663
+ [2025-03-11 09:40:12,251][01034] Num frames 6400...
+ [2025-03-11 09:40:12,378][01034] Num frames 6500...
+ [2025-03-11 09:40:12,519][01034] Num frames 6600...
+ [2025-03-11 09:40:12,649][01034] Num frames 6700...
+ [2025-03-11 09:40:12,778][01034] Num frames 6800...
+ [2025-03-11 09:40:12,906][01034] Num frames 6900...
+ [2025-03-11 09:40:13,015][01034] Avg episode rewards: #0: 23.774, true rewards: #0: 9.917
+ [2025-03-11 09:40:13,016][01034] Avg episode reward: 23.774, avg true_objective: 9.917
+ [2025-03-11 09:40:13,092][01034] Num frames 7000...
+ [2025-03-11 09:40:13,222][01034] Num frames 7100...
+ [2025-03-11 09:40:13,368][01034] Num frames 7200...
+ [2025-03-11 09:40:13,562][01034] Num frames 7300...
+ [2025-03-11 09:40:13,733][01034] Num frames 7400...
+ [2025-03-11 09:40:13,904][01034] Num frames 7500...
+ [2025-03-11 09:40:14,074][01034] Num frames 7600...
+ [2025-03-11 09:40:14,240][01034] Num frames 7700...
+ [2025-03-11 09:40:14,410][01034] Num frames 7800...
+ [2025-03-11 09:40:14,589][01034] Num frames 7900...
+ [2025-03-11 09:40:14,705][01034] Avg episode rewards: #0: 23.042, true rewards: #0: 9.917
+ [2025-03-11 09:40:14,707][01034] Avg episode reward: 23.042, avg true_objective: 9.917
+ [2025-03-11 09:40:14,825][01034] Num frames 8000...
+ [2025-03-11 09:40:14,999][01034] Num frames 8100...
+ [2025-03-11 09:40:15,180][01034] Num frames 8200...
+ [2025-03-11 09:40:15,362][01034] Num frames 8300...
+ [2025-03-11 09:40:15,554][01034] Num frames 8400...
+ [2025-03-11 09:40:15,698][01034] Num frames 8500...
+ [2025-03-11 09:40:15,824][01034] Num frames 8600...
+ [2025-03-11 09:40:15,956][01034] Num frames 8700...
+ [2025-03-11 09:40:16,090][01034] Num frames 8800...
+ [2025-03-11 09:40:16,225][01034] Avg episode rewards: #0: 22.847, true rewards: #0: 9.847
+ [2025-03-11 09:40:16,225][01034] Avg episode reward: 22.847, avg true_objective: 9.847
+ [2025-03-11 09:40:16,279][01034] Num frames 8900...
+ [2025-03-11 09:40:16,415][01034] Num frames 9000...
+ [2025-03-11 09:40:16,552][01034] Num frames 9100...
+ [2025-03-11 09:40:16,683][01034] Num frames 9200...
+ [2025-03-11 09:40:16,812][01034] Num frames 9300...
+ [2025-03-11 09:40:16,943][01034] Num frames 9400...
+ [2025-03-11 09:40:17,072][01034] Num frames 9500...
+ [2025-03-11 09:40:17,203][01034] Num frames 9600...
+ [2025-03-11 09:40:17,387][01034] Avg episode rewards: #0: 22.094, true rewards: #0: 9.694
+ [2025-03-11 09:40:17,388][01034] Avg episode reward: 22.094, avg true_objective: 9.694
+ [2025-03-11 09:41:11,174][01034] Replay video saved to /content/train_dir/default_experiment/replay.mp4!