Upload folder using huggingface_hub
- .gitattributes +1 -0
- .summary/0/events.out.tfevents.1760762395.7f51b3d89358 +3 -0
- .summary/0/events.out.tfevents.1760807750.11d7b92c6867 +0 -0
- .summary/0/events.out.tfevents.1760812661.9740909df8d5 +3 -0
- .summary/0/events.out.tfevents.1760831727.6d115e46a7aa +0 -0
- .summary/0/events.out.tfevents.1760832021.6d115e46a7aa +3 -0
- .summary/0/events.out.tfevents.1760834099.c4fccf2fafc1 +3 -0
- README.md +56 -0
- checkpoint_p0/best_000001047_4288512_reward_15.226.pth +3 -0
- checkpoint_p0/checkpoint_000000973_3985408.pth +3 -0
- checkpoint_p0/checkpoint_000001030_4218880.pth +3 -0
- config.json +149 -0
- log.txt +56 -0
- replay.mp4 +3 -0
- sf_log.txt +860 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+replay.mp4 filter=lfs diff=lfs merge=lfs -text
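The diff above adds `replay.mp4` to the set of Git LFS-tracked patterns alongside the existing `*.zip`, `*.zst`, and `*tfevents*` globs. As a rough sketch of how such patterns select files, Python's `fnmatch` approximates gitattributes glob matching closely enough for these simple patterns (the pattern list is taken from the diff; the helper itself is illustrative, and real gitattributes matching has additional rules this sketch ignores):

```python
from fnmatch import fnmatch

# Patterns from the .gitattributes diff above; each routes matching
# paths through the Git LFS filter instead of storing them in-repo.
LFS_PATTERNS = ["*.zip", "*.zst", "*tfevents*", "replay.mp4"]

def uses_lfs(path: str) -> bool:
    """Return True if the path's basename matches any LFS-tracked pattern."""
    # Patterns without a slash match the basename in any directory.
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pat) for pat in LFS_PATTERNS)
```

This is why every `tfevents` summary file and the replay video in this commit appear below only as small LFS pointer files.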
.summary/0/events.out.tfevents.1760762395.7f51b3d89358 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a612b9161c98815d98ec295cb5242ad6e1748adc1508a9d2d24f8c46085dbc4
+size 357576
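The three `+` lines above are a Git LFS pointer file: the actual payload lives in LFS storage, addressed by its SHA-256 digest, while the repository stores only `version`, `oid`, and `size` fields. A minimal sketch of parsing such a pointer (field names are from the LFS pointer format; the helper function is illustrative):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer committed above for the first tfevents file.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1a612b9161c98815d98ec295cb5242ad6e1748adc1508a9d2d24f8c46085dbc4
size 357576
"""
info = parse_lfs_pointer(pointer)
```

The `size` field is the byte count of the real file, which is how the Hub can display file sizes without fetching the LFS objects.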
.summary/0/events.out.tfevents.1760807750.11d7b92c6867 ADDED
File without changes
.summary/0/events.out.tfevents.1760812661.9740909df8d5 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96689dd679a964ebf1d2d3ae5cea47629e0cec47deafa7e67e4c234db1b0ccc1
+size 4168
.summary/0/events.out.tfevents.1760831727.6d115e46a7aa ADDED
File without changes
.summary/0/events.out.tfevents.1760832021.6d115e46a7aa ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29ed3e1415d9127daa093a8b606013236a0d6d4ecae71711352eacd74a34623f
+size 4132
.summary/0/events.out.tfevents.1760834099.c4fccf2fafc1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7cf930fc0dc32cf6cebeebc32fad78f0538ce0975f088543a4758d9349136075
+size 1308950
README.md ADDED
@@ -0,0 +1,56 @@
+---
+library_name: sample-factory
+tags:
+- deep-reinforcement-learning
+- reinforcement-learning
+- sample-factory
+model-index:
+- name: APPO
+  results:
+  - task:
+      type: reinforcement-learning
+      name: reinforcement-learning
+    dataset:
+      name: doom_deadly_corridor
+      type: doom_deadly_corridor
+    metrics:
+    - type: mean_reward
+      value: 8.92 +/- 7.43
+      name: mean_reward
+      verified: false
+---
+
+An **APPO** model trained on the **doom_deadly_corridor** environment.
+
+This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
+Documentation on how to use Sample-Factory can be found at https://www.samplefactory.dev/
+
+
+## Downloading the model
+
+After installing Sample-Factory, download the model with:
+```
+python -m sample_factory.huggingface.load_from_hub -r elliemci/deadly_corridor_experiment
+```
+
+
+## Using the model
+
+To run the model after download, use the `enjoy` script corresponding to this environment:
+```
+python -m <path.to.enjoy.module> --algo=APPO --env=doom_deadly_corridor --train_dir=./train_dir --experiment=deadly_corridor_experiment
+```
+
+
+You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
+See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details.
+
+## Training with this model
+
+To continue training with this model, use the `train` script corresponding to this environment:
+```
+python -m <path.to.train.module> --algo=APPO --env=doom_deadly_corridor --train_dir=./train_dir --experiment=deadly_corridor_experiment --restart_behavior=resume --train_for_env_steps=10000000000
+```
+
+Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the step count at which it concluded.
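The README's download step can also be driven programmatically. A small sketch that builds the same CLI invocation the README shows, suitable for `subprocess.run` (the repo id comes from the README; actually running the command requires Sample-Factory to be installed and network access):

```python
import sys

# Repo id from the model card above.
REPO_ID = "elliemci/deadly_corridor_experiment"

def load_from_hub_cmd(repo_id: str) -> list[str]:
    """Build the download command the README describes, as an argv list."""
    return [
        sys.executable, "-m",
        "sample_factory.huggingface.load_from_hub",
        "-r", repo_id,
    ]

# e.g. subprocess.run(load_from_hub_cmd(REPO_ID), check=True)
```

Using an argv list rather than a shell string avoids quoting issues if the experiment name ever contains spaces.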
checkpoint_p0/best_000001047_4288512_reward_15.226.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85e01ad2e691bb821d8f1613b919759d5fb2bbd4eabbc66be14ec12a1c25d584
+size 34966380
checkpoint_p0/checkpoint_000000973_3985408.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50c8e46872f33efcf183be9bf2694f977e98b3e1115dd74e53805ab47eefab52
+size 34966818
checkpoint_p0/checkpoint_000001030_4218880.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d260347365353c92ee867c53b0b215d2521e42b4b81cc5140c41981d766fc76c
+size 34966818
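The checkpoint filenames above encode training progress: `best_000001047_4288512_reward_15.226.pth` reads as policy version 1047, 4,288,512 environment steps, mean reward 15.226, while regular checkpoints omit the reward. A hedged sketch of decoding them (the naming convention is inferred from these filenames, not taken from Sample-Factory documentation):

```python
import re

# Matches both "checkpoint_<version>_<env_steps>.pth" and
# "best_<version>_<env_steps>_reward_<reward>.pth".
CKPT_RE = re.compile(
    r"(?:checkpoint|best)_(?P<version>\d+)_(?P<env_steps>\d+)"
    r"(?:_reward_(?P<reward>[\d.]+))?\.pth"
)

def parse_checkpoint_name(name: str) -> dict:
    """Decode policy version, env steps, and (for 'best') reward from a filename."""
    m = CKPT_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognized checkpoint name: {name}")
    d = m.groupdict()
    return {
        "version": int(d["version"]),
        "env_steps": int(d["env_steps"]),
        "reward": float(d["reward"]) if d["reward"] else None,
    }
```

Note the best-checkpoint reward (15.226) is higher than the README's evaluation mean of 8.92 +/- 7.43, which is expected: "best" tracks the peak of a noisy running metric.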
config.json ADDED
@@ -0,0 +1,149 @@
+{
+    "help": false,
+    "algo": "APPO",
+    "env": "doom_deadly_corridor",
+    "experiment": "deadly_corridor_experiment",
+    "train_dir": "train_dir",
+    "restart_behavior": "resume",
+    "device": "gpu",
+    "seed": null,
+    "num_policies": 1,
+    "async_rl": true,
+    "serial_mode": false,
+    "batched_sampling": false,
+    "num_batches_to_accumulate": 2,
+    "worker_num_splits": 2,
+    "policy_workers_per_policy": 1,
+    "max_policy_lag": 1000,
+    "num_workers": 1,
+    "num_envs_per_worker": 4,
+    "batch_size": 1024,
+    "num_batches_per_epoch": 1,
+    "num_epochs": 1,
+    "rollout": 32,
+    "recurrence": 32,
+    "shuffle_minibatches": false,
+    "gamma": 0.99,
+    "reward_scale": 1.0,
+    "reward_clip": 1000.0,
+    "value_bootstrap": false,
+    "normalize_returns": true,
+    "exploration_loss_coeff": 0.001,
+    "value_loss_coeff": 0.5,
+    "kl_loss_coeff": 0.0,
+    "exploration_loss": "symmetric_kl",
+    "gae_lambda": 0.95,
+    "ppo_clip_ratio": 0.1,
+    "ppo_clip_value": 0.2,
+    "with_vtrace": false,
+    "vtrace_rho": 1.0,
+    "vtrace_c": 1.0,
+    "optimizer": "adam",
+    "adam_eps": 1e-06,
+    "adam_beta1": 0.9,
+    "adam_beta2": 0.999,
+    "max_grad_norm": 4.0,
+    "learning_rate": 0.0001,
+    "lr_schedule": "constant",
+    "lr_schedule_kl_threshold": 0.008,
+    "lr_adaptive_min": 1e-06,
+    "lr_adaptive_max": 0.01,
+    "obs_subtract_mean": 0.0,
+    "obs_scale": 255.0,
+    "normalize_input": true,
+    "normalize_input_keys": null,
+    "decorrelate_experience_max_seconds": 0,
+    "decorrelate_envs_on_one_worker": true,
+    "actor_worker_gpus": [],
+    "set_workers_cpu_affinity": true,
+    "force_envs_single_thread": false,
+    "default_niceness": 0,
+    "log_to_file": true,
+    "experiment_summaries_interval": 60,
+    "flush_summaries_interval": 10,
+    "stats_avg": 10,
+    "summaries_use_frameskip": true,
+    "heartbeat_interval": 20,
+    "heartbeat_reporting_interval": 600,
+    "train_for_env_steps": 5000000,
+    "train_for_seconds": 10000000000,
+    "save_every_sec": 120,
+    "keep_checkpoints": 2,
+    "load_checkpoint_kind": "latest",
+    "save_milestones_sec": -1,
+    "save_best_every_sec": 5,
+    "save_best_metric": "reward",
+    "save_best_after": 100000,
+    "benchmark": false,
+    "encoder_mlp_layers": [
+        512,
+        512
+    ],
+    "encoder_conv_architecture": "convnet_simple",
+    "encoder_conv_mlp_layers": [
+        512
+    ],
+    "use_rnn": true,
+    "rnn_size": 512,
+    "rnn_type": "gru",
+    "rnn_num_layers": 1,
+    "decoder_mlp_layers": [],
+    "nonlinearity": "elu",
+    "policy_initialization": "orthogonal",
+    "policy_init_gain": 1.0,
+    "actor_critic_share_weights": true,
+    "adaptive_stddev": true,
+    "continuous_tanh_scale": 0.0,
+    "initial_stddev": 1.0,
+    "use_env_info_cache": false,
+    "env_gpu_actions": false,
+    "env_gpu_observations": true,
+    "env_frameskip": 4,
+    "env_framestack": 1,
+    "pixel_format": "CHW",
+    "use_record_episode_statistics": false,
+    "with_wandb": true,
+    "wandb_user": null,
+    "wandb_project": "sample_factory",
+    "wandb_group": null,
+    "wandb_job_type": "SF",
+    "wandb_tags": [],
+    "with_pbt": false,
+    "pbt_mix_policies_in_one_env": true,
+    "pbt_period_env_steps": 5000000,
+    "pbt_start_mutation": 20000000,
+    "pbt_replace_fraction": 0.3,
+    "pbt_mutation_rate": 0.15,
+    "pbt_replace_reward_gap": 0.1,
+    "pbt_replace_reward_gap_absolute": 1e-06,
+    "pbt_optimize_gamma": false,
+    "pbt_target_objective": "true_objective",
+    "pbt_perturb_min": 1.1,
+    "pbt_perturb_max": 1.5,
+    "num_agents": -1,
+    "num_humans": 0,
+    "num_bots": -1,
+    "start_bot_difficulty": null,
+    "timelimit": null,
+    "res_w": 128,
+    "res_h": 72,
+    "wide_aspect_ratio": false,
+    "eval_env_frameskip": 1,
+    "fps": 35,
+    "command_line": "--env=doom_deadly_corridor --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=5000000 --experiment=deadly_corridor_experiment --train_dir=train_dir --with_wandb=True --experiment_summaries_interval=60 --flush_summaries_interval=10 --stats_avg=10",
+    "cli_args": {
+        "env": "doom_deadly_corridor",
+        "experiment": "deadly_corridor_experiment",
+        "train_dir": "train_dir",
+        "num_workers": 8,
+        "num_envs_per_worker": 4,
+        "experiment_summaries_interval": 60,
+        "flush_summaries_interval": 10,
+        "stats_avg": 10,
+        "train_for_env_steps": 5000000,
+        "with_wandb": true
+    },
+    "git_hash": "unknown",
+    "git_repo_name": "not a git repository",
+    "wandb_unique_id": "deadly_corridor_experiment_20251018_043945_196881"
+}
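A few quantities can be derived from the configuration above, e.g. how many environments run in parallel and how much experience one rollout iteration collects. This is a sketch under one reasonable reading of Sample-Factory's fields (note `cli_args` records `num_workers: 8`, while the top-level value shows 1, presumably rewritten on a later resume):

```python
# Values copied from config.json above (num_workers from cli_args).
cfg = {
    "num_workers": 8,
    "num_envs_per_worker": 4,
    "rollout": 32,
    "env_frameskip": 4,
    "batch_size": 1024,
    "train_for_env_steps": 5_000_000,
}

# 8 workers x 4 envs each = 32 parallel environments.
parallel_envs = cfg["num_workers"] * cfg["num_envs_per_worker"]

# Each iteration gathers one 32-step rollout per environment.
samples_per_rollout = parallel_envs * cfg["rollout"]

# With frameskip 4, each sample corresponds to 4 emulated frames.
env_frames_per_rollout = samples_per_rollout * cfg["env_frameskip"]
```

Under this reading, one rollout iteration yields exactly `batch_size` (1024) transitions, which matches the `num_batches_per_epoch: 1` setting.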
log.txt ADDED
@@ -0,0 +1,56 @@
+Gym has been unmaintained since 2022 and does not support NumPy 2.0 amongst other critical functionality.
+Please upgrade to Gymnasium, the maintained drop-in replacement of Gym, or contact the authors of your software and request that they upgrade.
+See the migration guide at https://gymnasium.farama.org/introduction/migration_guide/ for additional information.
+[2025-10-19 00:31:54,997][05777] register_encoder_factory: <function make_vizdoom_encoder at 0x785a3660e3e0>
+[2025-10-19 00:31:55,569][05777] Loading existing experiment configuration from train_dir/deadly_corridor_experiment/config.json
+[2025-10-19 00:31:55,570][05777] Overriding arg 'train_for_env_steps' with value 5000000 passed from command line
+[2025-10-19 00:31:55,578][05777] Experiment dir train_dir/deadly_corridor_experiment already exists!
+[2025-10-19 00:31:55,578][05777] Resuming existing experiment from train_dir/deadly_corridor_experiment...
+[2025-10-19 00:31:55,578][05777] Weights and Biases integration enabled. Project: sample_factory, user: None, group: None, unique_id: deadly_corridor_experiment_20251018_043945_196881
+[2025-10-19 00:31:58,148][05777] Initializing WandB...
+wandb: WARNING `start_method` is deprecated and will be removed in a future version of wandb. This setting is currently non-functional and safely ignored.
+[2025-10-19 00:31:58,172][05777] Exception thrown when attempting to run <function init_wandb.<locals>.init_wandb_func at 0x785a36525d00>, attempt 0 out of 3
+[2025-10-19 00:31:59,182][05777] Exception thrown when attempting to run <function init_wandb.<locals>.init_wandb_func at 0x785a36525d00>, attempt 1 out of 3
+[2025-10-19 00:32:01,192][05777] Exception thrown when attempting to run <function init_wandb.<locals>.init_wandb_func at 0x785a36525d00>, attempt 2 out of 3
+[2025-10-19 00:32:05,202][05777] Could not initialize WandB! api_key not configured (no-tty). call wandb.login(key=[your_api_key])
+Traceback (most recent call last):
+  File "<frozen runpy>", line 198, in _run_module_as_main
+  File "<frozen runpy>", line 88, in _run_code
+  File "/usr/local/lib/python3.12/dist-packages/sf_examples/vizdoom/train_vizdoom.py", line 48, in <module>
+    sys.exit(main())
+             ^^^^^^
+  File "/usr/local/lib/python3.12/dist-packages/sf_examples/vizdoom/train_vizdoom.py", line 43, in main
+    status = run_rl(cfg)
+             ^^^^^^^^^^^
+  File "/usr/local/lib/python3.12/dist-packages/sample_factory/train.py", line 32, in run_rl
+    cfg, runner = make_runner(cfg)
+                  ^^^^^^^^^^^^^^^^
+  File "/usr/local/lib/python3.12/dist-packages/sample_factory/train.py", line 23, in make_runner
+    runner = runner_cls(cfg)
+             ^^^^^^^^^^^^^^^
+  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/runners/runner_parallel.py", line 17, in __init__
+    super().__init__(cfg)
+  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/runners/runner.py", line 135, in __init__
+    init_wandb(self.cfg)  # should be done before writers are initialized
+    ^^^^^^^^^^^^^^^^^^^^
+  File "/usr/local/lib/python3.12/dist-packages/sample_factory/utils/wandb_utils.py", line 51, in init_wandb
+    init_wandb_func()
+  File "/usr/local/lib/python3.12/dist-packages/sample_factory/utils/utils.py", line 181, in newfn
+    return func(*args, **kwargs)
+           ^^^^^^^^^^^^^^^^^^^^^
+  File "/usr/local/lib/python3.12/dist-packages/sample_factory/utils/wandb_utils.py", line 36, in init_wandb_func
+    wandb.init(
+  File "/usr/local/lib/python3.12/dist-packages/wandb/sdk/wandb_init.py", line 1601, in init
+    wandb._sentry.reraise(e)
+  File "/usr/local/lib/python3.12/dist-packages/wandb/analytics/sentry.py", line 162, in reraise
+    raise exc.with_traceback(sys.exc_info()[2])
+  File "/usr/local/lib/python3.12/dist-packages/wandb/sdk/wandb_init.py", line 1523, in init
+    wi.maybe_login(init_settings)
+  File "/usr/local/lib/python3.12/dist-packages/wandb/sdk/wandb_init.py", line 190, in maybe_login
+    wandb_login._login(
+  File "/usr/local/lib/python3.12/dist-packages/wandb/sdk/wandb_login.py", line 320, in _login
+    key, key_status = wlogin.prompt_api_key(referrer=referrer)
+                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+  File "/usr/local/lib/python3.12/dist-packages/wandb/sdk/wandb_login.py", line 245, in prompt_api_key
+    raise UsageError("api_key not configured (no-tty). call " + directive)
+wandb.errors.errors.UsageError: api_key not configured (no-tty). call wandb.login(key=[your_api_key])
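The traceback above shows `wandb.init` failing in a non-interactive (no-tty) session because no API key was configured. A hedged sketch of the two usual ways around it (`WANDB_API_KEY` is wandb's documented environment variable; the helper function and placeholder value are illustrative):

```python
import os

def wandb_login_possible(env: dict) -> bool:
    """A non-interactive run can authenticate only if an API key is already set."""
    return bool(env.get("WANDB_API_KEY"))

# Option 1: export the key into the environment before launching training
# (placeholder shown, not a real key).
launch_env = dict(os.environ)
launch_env["WANDB_API_KEY"] = "<your-api-key>"

# Option 2: sidestep the failure entirely by passing --with_wandb=False
# on the training command recorded in config.json.
```

Either option would have let this run proceed; as the logs show, training itself succeeded once launched without the interactive login prompt.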
replay.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78d0dbdbd7d3639767d9304c62c8bb6b7fb30671e0ba60882ab0761fb1e42793
+size 1833187
sf_log.txt ADDED
@@ -0,0 +1,860 @@
+[2025-10-18 04:40:01,685][02528] Saving configuration to train_dir/deadly_corridor_experiment/config.json...
+[2025-10-18 04:40:01,697][02528] Rollout worker 0 uses device cpu
+[2025-10-18 04:40:01,699][02528] Rollout worker 1 uses device cpu
+[2025-10-18 04:40:01,701][02528] Rollout worker 2 uses device cpu
+[2025-10-18 04:40:01,703][02528] Rollout worker 3 uses device cpu
+[2025-10-18 04:40:01,705][02528] Rollout worker 4 uses device cpu
+[2025-10-18 04:40:01,707][02528] Rollout worker 5 uses device cpu
+[2025-10-18 04:40:01,709][02528] Rollout worker 6 uses device cpu
+[2025-10-18 04:40:01,710][02528] Rollout worker 7 uses device cpu
+[2025-10-18 04:40:01,854][02528] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2025-10-18 04:40:01,856][02528] InferenceWorker_p0-w0: min num requests: 2
+[2025-10-18 04:40:01,890][02528] Starting all processes...
+[2025-10-18 04:40:01,892][02528] Starting process learner_proc0
+[2025-10-18 04:40:01,971][02528] Starting all processes...
+[2025-10-18 04:40:01,982][02528] Starting process inference_proc0-0
+[2025-10-18 04:40:01,983][02528] Starting process rollout_proc0
+[2025-10-18 04:40:01,983][02528] Starting process rollout_proc1
+[2025-10-18 04:40:01,986][02528] Starting process rollout_proc2
+[2025-10-18 04:40:01,986][02528] Starting process rollout_proc3
+[2025-10-18 04:40:01,987][02528] Starting process rollout_proc4
+[2025-10-18 04:40:01,987][02528] Starting process rollout_proc5
+[2025-10-18 04:40:01,987][02528] Starting process rollout_proc6
+[2025-10-18 04:40:01,987][02528] Starting process rollout_proc7
+[2025-10-18 04:40:17,938][04351] Worker 0 uses CPU cores [0]
+[2025-10-18 04:40:18,215][04349] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2025-10-18 04:40:18,227][04349] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+[2025-10-18 04:40:18,312][04349] Num visible devices: 1
+[2025-10-18 04:40:18,325][04336] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2025-10-18 04:40:18,342][04336] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+[2025-10-18 04:40:18,389][04336] Num visible devices: 1
+[2025-10-18 04:40:18,393][04336] Starting seed is not provided
+[2025-10-18 04:40:18,396][04336] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2025-10-18 04:40:18,397][04336] Initializing actor-critic model on device cuda:0
+[2025-10-18 04:40:18,398][04336] RunningMeanStd input shape: (3, 72, 128)
+[2025-10-18 04:40:18,404][04336] RunningMeanStd input shape: (1,)
+[2025-10-18 04:40:18,421][04350] Worker 1 uses CPU cores [1]
+[2025-10-18 04:40:18,451][04336] ConvEncoder: input_channels=3
+[2025-10-18 04:40:18,453][04355] Worker 5 uses CPU cores [1]
+[2025-10-18 04:40:18,471][04352] Worker 2 uses CPU cores [0]
+[2025-10-18 04:40:18,623][04354] Worker 3 uses CPU cores [1]
+[2025-10-18 04:40:18,645][04357] Worker 7 uses CPU cores [1]
+[2025-10-18 04:40:18,647][04356] Worker 6 uses CPU cores [0]
+[2025-10-18 04:40:18,685][04353] Worker 4 uses CPU cores [0]
+[2025-10-18 04:40:18,772][04336] Conv encoder output size: 512
+[2025-10-18 04:40:18,773][04336] Policy head output size: 512
+[2025-10-18 04:40:18,822][04336] Created Actor Critic model with architecture:
+[2025-10-18 04:40:18,822][04336] ActorCriticSharedWeights(
+  (obs_normalizer): ObservationNormalizer(
+    (running_mean_std): RunningMeanStdDictInPlace(
+      (running_mean_std): ModuleDict(
+        (obs): RunningMeanStdInPlace()
+      )
+    )
+  )
+  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+  (encoder): VizdoomEncoder(
+    (basic_encoder): ConvEncoder(
+      (enc): RecursiveScriptModule(
+        original_name=ConvEncoderImpl
+        (conv_head): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Conv2d)
+          (1): RecursiveScriptModule(original_name=ELU)
+          (2): RecursiveScriptModule(original_name=Conv2d)
+          (3): RecursiveScriptModule(original_name=ELU)
+          (4): RecursiveScriptModule(original_name=Conv2d)
+          (5): RecursiveScriptModule(original_name=ELU)
+        )
+        (mlp_layers): RecursiveScriptModule(
+          original_name=Sequential
+          (0): RecursiveScriptModule(original_name=Linear)
+          (1): RecursiveScriptModule(original_name=ELU)
+        )
+      )
+    )
+  )
+  (core): ModelCoreRNN(
+    (core): GRU(512, 512)
+  )
+  (decoder): MlpDecoder(
+    (mlp): Identity()
+  )
+  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+  (action_parameterization): ActionParameterizationDefault(
+    (distribution_linear): Linear(in_features=512, out_features=11, bias=True)
+  )
+)
+[2025-10-18 04:40:19,109][04336] Using optimizer <class 'torch.optim.adam.Adam'>
+[2025-10-18 04:40:21,851][02528] Heartbeat connected on Batcher_0
+[2025-10-18 04:40:21,856][02528] Heartbeat connected on InferenceWorker_p0-w0
+[2025-10-18 04:40:21,867][02528] Heartbeat connected on RolloutWorker_w0
+[2025-10-18 04:40:21,873][02528] Heartbeat connected on RolloutWorker_w1
+[2025-10-18 04:40:21,881][02528] Heartbeat connected on RolloutWorker_w2
+[2025-10-18 04:40:21,890][02528] Heartbeat connected on RolloutWorker_w4
+[2025-10-18 04:40:21,895][02528] Heartbeat connected on RolloutWorker_w6
+[2025-10-18 04:40:21,900][02528] Heartbeat connected on RolloutWorker_w3
+[2025-10-18 04:40:21,905][02528] Heartbeat connected on RolloutWorker_w5
+[2025-10-18 04:40:21,909][02528] Heartbeat connected on RolloutWorker_w7
+[2025-10-18 04:40:24,652][04336] No checkpoints found
+[2025-10-18 04:40:24,652][04336] Did not load from checkpoint, starting from scratch!
+[2025-10-18 04:40:24,653][04336] Initialized policy 0 weights for model version 0
+[2025-10-18 04:40:24,658][04336] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+[2025-10-18 04:40:24,658][04336] LearnerWorker_p0 finished initialization!
+[2025-10-18 04:40:24,665][02528] Heartbeat connected on LearnerWorker_p0
+[2025-10-18 04:40:24,846][04349] RunningMeanStd input shape: (3, 72, 128)
+[2025-10-18 04:40:24,848][04349] RunningMeanStd input shape: (1,)
+[2025-10-18 04:40:24,862][04349] ConvEncoder: input_channels=3
+[2025-10-18 04:40:25,013][04349] Conv encoder output size: 512
+[2025-10-18 04:40:25,014][04349] Policy head output size: 512
+[2025-10-18 04:40:25,069][02528] Inference worker 0-0 is ready!
+[2025-10-18 04:40:25,072][02528] All inference workers are ready! Signal rollout workers to start!
+[2025-10-18 04:40:25,361][04355] Doom resolution: 160x120, resize resolution: (128, 72)
+[2025-10-18 04:40:25,374][04357] Doom resolution: 160x120, resize resolution: (128, 72)
+[2025-10-18 04:40:25,404][04350] Doom resolution: 160x120, resize resolution: (128, 72)
+[2025-10-18 04:40:25,430][04353] Doom resolution: 160x120, resize resolution: (128, 72)
+[2025-10-18 04:40:25,461][04356] Doom resolution: 160x120, resize resolution: (128, 72)
+[2025-10-18 04:40:25,459][04352] Doom resolution: 160x120, resize resolution: (128, 72)
+[2025-10-18 04:40:25,465][04354] Doom resolution: 160x120, resize resolution: (128, 72)
+[2025-10-18 04:40:25,468][04351] Doom resolution: 160x120, resize resolution: (128, 72)
+[2025-10-18 04:40:25,679][02528] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2025-10-18 04:40:26,453][04356] Decorrelating experience for 0 frames...
+[2025-10-18 04:40:26,834][04356] Decorrelating experience for 32 frames...
+[2025-10-18 04:40:27,044][04355] Decorrelating experience for 0 frames...
+[2025-10-18 04:40:27,049][04357] Decorrelating experience for 0 frames...
+[2025-10-18 04:40:27,053][04350] Decorrelating experience for 0 frames...
+[2025-10-18 04:40:27,959][04356] Decorrelating experience for 64 frames...
+[2025-10-18 04:40:28,198][04352] Decorrelating experience for 0 frames...
+[2025-10-18 04:40:28,202][04357] Decorrelating experience for 32 frames...
+[2025-10-18 04:40:28,204][04353] Decorrelating experience for 0 frames...
+[2025-10-18 04:40:28,205][04355] Decorrelating experience for 32 frames...
+[2025-10-18 04:40:28,210][04350] Decorrelating experience for 32 frames...
+[2025-10-18 04:40:28,959][04357] Decorrelating experience for 64 frames...
+[2025-10-18 04:40:29,717][04352] Decorrelating experience for 32 frames...
+[2025-10-18 04:40:29,721][04353] Decorrelating experience for 32 frames...
+[2025-10-18 04:40:29,737][04351] Decorrelating experience for 0 frames...
+[2025-10-18 04:40:29,769][04356] Decorrelating experience for 96 frames...
+[2025-10-18 04:40:29,769][04355] Decorrelating experience for 64 frames...
+[2025-10-18 04:40:29,868][04357] Decorrelating experience for 96 frames...
+[2025-10-18 04:40:30,679][02528] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+[2025-10-18 04:40:30,857][04351] Decorrelating experience for 32 frames...
+[2025-10-18 04:40:30,899][04350] Decorrelating experience for 64 frames...
+[2025-10-18 04:40:31,083][04355] Decorrelating experience for 96 frames...
+[2025-10-18 04:40:31,423][04352] Decorrelating experience for 64 frames...
+[2025-10-18 04:40:32,420][04354] Decorrelating experience for 0 frames...
+[2025-10-18 04:40:32,560][04353] Decorrelating experience for 64 frames...
+[2025-10-18 04:40:32,867][04350] Decorrelating experience for 96 frames...
|
| 147 |
+
[2025-10-18 04:40:34,658][04352] Decorrelating experience for 96 frames...
|
| 148 |
+
[2025-10-18 04:40:34,946][04353] Decorrelating experience for 96 frames...
|
| 149 |
+
[2025-10-18 04:40:34,960][04351] Decorrelating experience for 64 frames...
|
| 150 |
+
[2025-10-18 04:40:35,679][02528] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 77.4. Samples: 774. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
|
| 151 |
+
[2025-10-18 04:40:35,699][02528] Avg episode reward: [(0, '-0.483')]
|
| 152 |
+
[2025-10-18 04:40:38,352][04354] Decorrelating experience for 32 frames...
|
| 153 |
+
[2025-10-18 04:40:40,679][02528] Fps is (10 sec: 819.2, 60 sec: 546.1, 300 sec: 546.1). Total num frames: 8192. Throughput: 0: 173.7. Samples: 2606. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
|
| 154 |
+
[2025-10-18 04:40:40,681][02528] Avg episode reward: [(0, '-0.469')]
|
| 155 |
+
[2025-10-18 04:40:41,513][04351] Decorrelating experience for 96 frames...
|
| 156 |
+
[2025-10-18 04:40:42,326][04354] Decorrelating experience for 64 frames...
|
| 157 |
+
[2025-10-18 04:40:44,510][04354] Decorrelating experience for 96 frames...
|
| 158 |
+
[2025-10-18 04:40:45,679][02528] Fps is (10 sec: 2048.0, 60 sec: 1024.0, 300 sec: 1024.0). Total num frames: 20480. Throughput: 0: 209.1. Samples: 4182. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 159 |
+
[2025-10-18 04:40:45,686][02528] Avg episode reward: [(0, '0.054')]
|
| 160 |
+
[2025-10-18 04:40:50,509][04349] Updated weights for policy 0, policy_version 10 (0.0034)
|
| 161 |
+
[2025-10-18 04:40:50,679][02528] Fps is (10 sec: 3276.8, 60 sec: 1638.4, 300 sec: 1638.4). Total num frames: 40960. Throughput: 0: 380.6. Samples: 9516. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 162 |
+
[2025-10-18 04:40:50,684][02528] Avg episode reward: [(0, '0.065')]
|
| 163 |
+
[2025-10-18 04:40:55,679][02528] Fps is (10 sec: 3276.9, 60 sec: 1774.9, 300 sec: 1774.9). Total num frames: 53248. Throughput: 0: 466.9. Samples: 14006. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 164 |
+
[2025-10-18 04:40:55,684][02528] Avg episode reward: [(0, '0.751')]
|
| 165 |
+
[2025-10-18 04:41:00,679][02528] Fps is (10 sec: 2457.6, 60 sec: 1872.5, 300 sec: 1872.5). Total num frames: 65536. Throughput: 0: 449.1. Samples: 15720. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
|
| 166 |
+
[2025-10-18 04:41:00,684][02528] Avg episode reward: [(0, '0.668')]
|
| 167 |
+
[2025-10-18 04:41:04,689][04349] Updated weights for policy 0, policy_version 20 (0.0023)
|
| 168 |
+
[2025-10-18 04:41:05,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2048.0, 300 sec: 2048.0). Total num frames: 81920. Throughput: 0: 515.0. Samples: 20598. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 169 |
+
[2025-10-18 04:41:05,683][02528] Avg episode reward: [(0, '0.633')]
|
| 170 |
+
[2025-10-18 04:41:10,679][02528] Fps is (10 sec: 3276.8, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 98304. Throughput: 0: 567.9. Samples: 25556. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 171 |
+
[2025-10-18 04:41:10,685][02528] Avg episode reward: [(0, '1.366')]
|
| 172 |
+
[2025-10-18 04:41:15,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2211.8, 300 sec: 2211.8). Total num frames: 110592. Throughput: 0: 604.7. Samples: 27210. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 173 |
+
[2025-10-18 04:41:15,686][02528] Avg episode reward: [(0, '2.481')]
|
| 174 |
+
[2025-10-18 04:41:15,693][04336] Saving new best policy, reward=2.481!
|
| 175 |
+
[2025-10-18 04:41:19,026][04349] Updated weights for policy 0, policy_version 30 (0.0020)
|
| 176 |
+
[2025-10-18 04:41:20,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2308.7, 300 sec: 2308.7). Total num frames: 126976. Throughput: 0: 676.2. Samples: 31204. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 177 |
+
[2025-10-18 04:41:20,687][02528] Avg episode reward: [(0, '2.203')]
|
| 178 |
+
[2025-10-18 04:41:25,679][02528] Fps is (10 sec: 3276.9, 60 sec: 2389.3, 300 sec: 2389.3). Total num frames: 143360. Throughput: 0: 751.9. Samples: 36442. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 179 |
+
[2025-10-18 04:41:25,681][02528] Avg episode reward: [(0, '2.994')]
|
| 180 |
+
[2025-10-18 04:41:25,688][04336] Saving new best policy, reward=2.994!
|
| 181 |
+
[2025-10-18 04:41:30,681][02528] Fps is (10 sec: 2866.6, 60 sec: 2594.0, 300 sec: 2394.5). Total num frames: 155648. Throughput: 0: 763.1. Samples: 38522. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 182 |
+
[2025-10-18 04:41:30,685][02528] Avg episode reward: [(0, '1.620')]
|
| 183 |
+
[2025-10-18 04:41:33,916][04349] Updated weights for policy 0, policy_version 40 (0.0018)
|
| 184 |
+
[2025-10-18 04:41:35,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2798.9, 300 sec: 2399.1). Total num frames: 167936. Throughput: 0: 718.3. Samples: 41838. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
|
| 185 |
+
[2025-10-18 04:41:35,682][02528] Avg episode reward: [(0, '3.186')]
|
| 186 |
+
[2025-10-18 04:41:35,685][04336] Saving new best policy, reward=3.186!
|
| 187 |
+
[2025-10-18 04:41:40,679][02528] Fps is (10 sec: 2458.1, 60 sec: 2867.2, 300 sec: 2403.0). Total num frames: 180224. Throughput: 0: 710.9. Samples: 45996. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
|
| 188 |
+
[2025-10-18 04:41:40,682][02528] Avg episode reward: [(0, '1.682')]
|
| 189 |
+
[2025-10-18 04:41:45,681][02528] Fps is (10 sec: 2866.6, 60 sec: 2935.4, 300 sec: 2457.5). Total num frames: 196608. Throughput: 0: 726.9. Samples: 48432. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
|
| 190 |
+
[2025-10-18 04:41:45,685][02528] Avg episode reward: [(0, '6.078')]
|
| 191 |
+
[2025-10-18 04:41:45,694][04336] Saving new best policy, reward=6.078!
|
| 192 |
+
[2025-10-18 04:41:49,968][04349] Updated weights for policy 0, policy_version 50 (0.0027)
|
| 193 |
+
[2025-10-18 04:41:50,679][02528] Fps is (10 sec: 2457.7, 60 sec: 2730.7, 300 sec: 2409.4). Total num frames: 204800. Throughput: 0: 679.4. Samples: 51172. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 194 |
+
[2025-10-18 04:41:50,687][02528] Avg episode reward: [(0, '2.947')]
|
| 195 |
+
[2025-10-18 04:41:55,679][02528] Fps is (10 sec: 2458.1, 60 sec: 2798.9, 300 sec: 2457.6). Total num frames: 221184. Throughput: 0: 670.8. Samples: 55740. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 196 |
+
[2025-10-18 04:41:55,684][02528] Avg episode reward: [(0, '3.728')]
|
| 197 |
+
[2025-10-18 04:41:55,694][04336] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000054_221184.pth...
|
| 198 |
+
[2025-10-18 04:42:00,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2457.6). Total num frames: 233472. Throughput: 0: 675.6. Samples: 57612. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 199 |
+
[2025-10-18 04:42:00,685][02528] Avg episode reward: [(0, '2.880')]
|
| 200 |
+
[2025-10-18 04:42:04,740][04349] Updated weights for policy 0, policy_version 60 (0.0025)
|
| 201 |
+
[2025-10-18 04:42:05,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2730.7, 300 sec: 2457.6). Total num frames: 245760. Throughput: 0: 678.6. Samples: 61742. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 202 |
+
[2025-10-18 04:42:05,684][02528] Avg episode reward: [(0, '2.537')]
|
| 203 |
+
[2025-10-18 04:42:10,679][02528] Fps is (10 sec: 2457.5, 60 sec: 2662.4, 300 sec: 2457.6). Total num frames: 258048. Throughput: 0: 644.1. Samples: 65428. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 204 |
+
[2025-10-18 04:42:10,684][02528] Avg episode reward: [(0, '3.734')]
|
| 205 |
+
[2025-10-18 04:42:15,679][02528] Fps is (10 sec: 3276.8, 60 sec: 2798.9, 300 sec: 2532.1). Total num frames: 278528. Throughput: 0: 656.3. Samples: 68052. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 206 |
+
[2025-10-18 04:42:15,687][02528] Avg episode reward: [(0, '2.063')]
|
| 207 |
+
[2025-10-18 04:42:17,887][04349] Updated weights for policy 0, policy_version 70 (0.0015)
|
| 208 |
+
[2025-10-18 04:42:20,679][02528] Fps is (10 sec: 3276.9, 60 sec: 2730.7, 300 sec: 2528.8). Total num frames: 290816. Throughput: 0: 692.8. Samples: 73014. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 209 |
+
[2025-10-18 04:42:20,687][02528] Avg episode reward: [(0, '3.895')]
|
| 210 |
+
[2025-10-18 04:42:25,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2662.4, 300 sec: 2525.9). Total num frames: 303104. Throughput: 0: 672.5. Samples: 76260. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 211 |
+
[2025-10-18 04:42:25,686][02528] Avg episode reward: [(0, '5.384')]
|
| 212 |
+
[2025-10-18 04:42:30,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2730.8, 300 sec: 2555.9). Total num frames: 319488. Throughput: 0: 673.7. Samples: 78748. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 213 |
+
[2025-10-18 04:42:30,683][02528] Avg episode reward: [(0, '4.836')]
|
| 214 |
+
[2025-10-18 04:42:32,211][04349] Updated weights for policy 0, policy_version 80 (0.0025)
|
| 215 |
+
[2025-10-18 04:42:35,680][02528] Fps is (10 sec: 3276.4, 60 sec: 2798.9, 300 sec: 2583.6). Total num frames: 335872. Throughput: 0: 730.4. Samples: 84040. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 216 |
+
[2025-10-18 04:42:35,716][02528] Avg episode reward: [(0, '2.635')]
|
| 217 |
+
[2025-10-18 04:42:40,680][02528] Fps is (10 sec: 2866.8, 60 sec: 2798.9, 300 sec: 2578.9). Total num frames: 348160. Throughput: 0: 710.0. Samples: 87690. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 218 |
+
[2025-10-18 04:42:40,685][02528] Avg episode reward: [(0, '3.053')]
|
| 219 |
+
[2025-10-18 04:42:45,681][02528] Fps is (10 sec: 2457.5, 60 sec: 2730.7, 300 sec: 2574.6). Total num frames: 360448. Throughput: 0: 705.9. Samples: 89380. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 220 |
+
[2025-10-18 04:42:45,689][02528] Avg episode reward: [(0, '4.166')]
|
| 221 |
+
[2025-10-18 04:42:46,796][04349] Updated weights for policy 0, policy_version 90 (0.0026)
|
| 222 |
+
[2025-10-18 04:42:50,679][02528] Fps is (10 sec: 3277.3, 60 sec: 2935.5, 300 sec: 2627.1). Total num frames: 380928. Throughput: 0: 729.4. Samples: 94566. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 223 |
+
[2025-10-18 04:42:50,685][02528] Avg episode reward: [(0, '3.264')]
|
| 224 |
+
[2025-10-18 04:42:55,679][02528] Fps is (10 sec: 3277.4, 60 sec: 2867.2, 300 sec: 2621.4). Total num frames: 393216. Throughput: 0: 749.9. Samples: 99174. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 225 |
+
[2025-10-18 04:42:55,683][02528] Avg episode reward: [(0, '4.159')]
|
| 226 |
+
[2025-10-18 04:43:00,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2867.2, 300 sec: 2616.2). Total num frames: 405504. Throughput: 0: 727.7. Samples: 100798. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 227 |
+
[2025-10-18 04:43:00,681][02528] Avg episode reward: [(0, '4.177')]
|
| 228 |
+
[2025-10-18 04:43:01,411][04349] Updated weights for policy 0, policy_version 100 (0.0022)
|
| 229 |
+
[2025-10-18 04:43:05,679][02528] Fps is (10 sec: 2867.1, 60 sec: 2935.5, 300 sec: 2636.8). Total num frames: 421888. Throughput: 0: 717.7. Samples: 105310. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 230 |
+
[2025-10-18 04:43:05,685][02528] Avg episode reward: [(0, '3.483')]
|
| 231 |
+
[2025-10-18 04:43:10,680][02528] Fps is (10 sec: 3276.6, 60 sec: 3003.7, 300 sec: 2656.2). Total num frames: 438272. Throughput: 0: 759.3. Samples: 110430. Policy #0 lag: (min: 0.0, avg: 0.8, max: 2.0)
|
| 232 |
+
[2025-10-18 04:43:10,683][02528] Avg episode reward: [(0, '3.799')]
|
| 233 |
+
[2025-10-18 04:43:15,096][04349] Updated weights for policy 0, policy_version 110 (0.0020)
|
| 234 |
+
[2025-10-18 04:43:15,683][02528] Fps is (10 sec: 2866.1, 60 sec: 2867.0, 300 sec: 2650.3). Total num frames: 450560. Throughput: 0: 743.0. Samples: 112184. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 235 |
+
[2025-10-18 04:43:15,690][02528] Avg episode reward: [(0, '4.204')]
|
| 236 |
+
[2025-10-18 04:43:20,680][02528] Fps is (10 sec: 2457.4, 60 sec: 2867.1, 300 sec: 2644.8). Total num frames: 462848. Throughput: 0: 710.3. Samples: 116004. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 237 |
+
[2025-10-18 04:43:20,689][02528] Avg episode reward: [(0, '3.735')]
|
| 238 |
+
[2025-10-18 04:43:25,679][02528] Fps is (10 sec: 3278.2, 60 sec: 3003.7, 300 sec: 2685.2). Total num frames: 483328. Throughput: 0: 744.8. Samples: 121204. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 239 |
+
[2025-10-18 04:43:25,682][02528] Avg episode reward: [(0, '4.039')]
|
| 240 |
+
[2025-10-18 04:43:27,637][04349] Updated weights for policy 0, policy_version 120 (0.0022)
|
| 241 |
+
[2025-10-18 04:43:30,679][02528] Fps is (10 sec: 3277.1, 60 sec: 2935.4, 300 sec: 2679.0). Total num frames: 495616. Throughput: 0: 762.4. Samples: 123686. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 242 |
+
[2025-10-18 04:43:30,690][02528] Avg episode reward: [(0, '4.385')]
|
| 243 |
+
[2025-10-18 04:43:35,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2867.3, 300 sec: 2673.2). Total num frames: 507904. Throughput: 0: 720.8. Samples: 127000. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 244 |
+
[2025-10-18 04:43:35,695][02528] Avg episode reward: [(0, '3.702')]
|
| 245 |
+
[2025-10-18 04:43:40,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2935.5, 300 sec: 2688.7). Total num frames: 524288. Throughput: 0: 726.7. Samples: 131876. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 246 |
+
[2025-10-18 04:43:40,683][02528] Avg episode reward: [(0, '3.777')]
|
| 247 |
+
[2025-10-18 04:43:42,091][04349] Updated weights for policy 0, policy_version 130 (0.0029)
|
| 248 |
+
[2025-10-18 04:43:45,680][02528] Fps is (10 sec: 3276.7, 60 sec: 3003.8, 300 sec: 2703.4). Total num frames: 540672. Throughput: 0: 749.4. Samples: 134522. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
|
| 249 |
+
[2025-10-18 04:43:45,690][02528] Avg episode reward: [(0, '3.578')]
|
| 250 |
+
[2025-10-18 04:43:50,681][02528] Fps is (10 sec: 2866.7, 60 sec: 2867.1, 300 sec: 2697.3). Total num frames: 552960. Throughput: 0: 738.2. Samples: 138530. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
|
| 251 |
+
[2025-10-18 04:43:50,705][02528] Avg episode reward: [(0, '2.809')]
|
| 252 |
+
[2025-10-18 04:43:55,679][02528] Fps is (10 sec: 2867.3, 60 sec: 2935.5, 300 sec: 2711.2). Total num frames: 569344. Throughput: 0: 716.8. Samples: 142684. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 253 |
+
[2025-10-18 04:43:55,683][02528] Avg episode reward: [(0, '5.054')]
|
| 254 |
+
[2025-10-18 04:43:55,697][04336] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000139_569344.pth...
|
| 255 |
+
[2025-10-18 04:43:56,644][04349] Updated weights for policy 0, policy_version 140 (0.0018)
|
| 256 |
+
[2025-10-18 04:44:00,679][02528] Fps is (10 sec: 3277.4, 60 sec: 3003.7, 300 sec: 2724.3). Total num frames: 585728. Throughput: 0: 734.9. Samples: 145250. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 257 |
+
[2025-10-18 04:44:00,682][02528] Avg episode reward: [(0, '4.423')]
|
| 258 |
+
[2025-10-18 04:44:05,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2935.5, 300 sec: 2718.3). Total num frames: 598016. Throughput: 0: 756.1. Samples: 150026. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 259 |
+
[2025-10-18 04:44:05,688][02528] Avg episode reward: [(0, '4.099')]
|
| 260 |
+
[2025-10-18 04:44:10,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2867.2, 300 sec: 2712.5). Total num frames: 610304. Throughput: 0: 711.6. Samples: 153224. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 261 |
+
[2025-10-18 04:44:10,683][02528] Avg episode reward: [(0, '3.542')]
|
| 262 |
+
[2025-10-18 04:44:11,075][04349] Updated weights for policy 0, policy_version 150 (0.0021)
|
| 263 |
+
[2025-10-18 04:44:15,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2935.7, 300 sec: 2724.7). Total num frames: 626688. Throughput: 0: 716.1. Samples: 155912. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 264 |
+
[2025-10-18 04:44:15,685][02528] Avg episode reward: [(0, '3.985')]
|
| 265 |
+
[2025-10-18 04:44:20,681][02528] Fps is (10 sec: 3276.1, 60 sec: 3003.7, 300 sec: 2736.5). Total num frames: 643072. Throughput: 0: 760.1. Samples: 161204. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 266 |
+
[2025-10-18 04:44:20,689][02528] Avg episode reward: [(0, '3.892')]
|
| 267 |
+
[2025-10-18 04:44:24,449][04349] Updated weights for policy 0, policy_version 160 (0.0023)
|
| 268 |
+
[2025-10-18 04:44:25,679][02528] Fps is (10 sec: 2867.3, 60 sec: 2867.2, 300 sec: 2730.7). Total num frames: 655360. Throughput: 0: 730.9. Samples: 164764. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 269 |
+
[2025-10-18 04:44:25,681][02528] Avg episode reward: [(0, '4.474')]
|
| 270 |
+
[2025-10-18 04:44:30,679][02528] Fps is (10 sec: 2867.8, 60 sec: 2935.5, 300 sec: 2741.8). Total num frames: 671744. Throughput: 0: 713.8. Samples: 166644. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 271 |
+
[2025-10-18 04:44:30,692][02528] Avg episode reward: [(0, '4.383')]
|
| 272 |
+
[2025-10-18 04:44:35,679][02528] Fps is (10 sec: 3276.8, 60 sec: 3003.7, 300 sec: 2752.5). Total num frames: 688128. Throughput: 0: 742.1. Samples: 171922. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 273 |
+
[2025-10-18 04:44:35,687][02528] Avg episode reward: [(0, '5.633')]
|
| 274 |
+
[2025-10-18 04:44:37,105][04349] Updated weights for policy 0, policy_version 170 (0.0028)
|
| 275 |
+
[2025-10-18 04:44:40,680][02528] Fps is (10 sec: 2866.9, 60 sec: 2935.4, 300 sec: 2746.7). Total num frames: 700416. Throughput: 0: 744.0. Samples: 176164. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 276 |
+
[2025-10-18 04:44:40,686][02528] Avg episode reward: [(0, '4.093')]
|
| 277 |
+
[2025-10-18 04:44:45,679][02528] Fps is (10 sec: 2457.5, 60 sec: 2867.2, 300 sec: 2741.2). Total num frames: 712704. Throughput: 0: 722.5. Samples: 177764. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
|
| 278 |
+
[2025-10-18 04:44:45,684][02528] Avg episode reward: [(0, '3.497')]
|
| 279 |
+
[2025-10-18 04:44:50,679][02528] Fps is (10 sec: 3277.2, 60 sec: 3003.8, 300 sec: 2766.7). Total num frames: 733184. Throughput: 0: 724.6. Samples: 182632. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 280 |
+
[2025-10-18 04:44:50,685][02528] Avg episode reward: [(0, '5.862')]
|
| 281 |
+
[2025-10-18 04:44:51,796][04349] Updated weights for policy 0, policy_version 180 (0.0025)
|
| 282 |
+
[2025-10-18 04:44:55,679][02528] Fps is (10 sec: 3277.0, 60 sec: 2935.5, 300 sec: 2761.0). Total num frames: 745472. Throughput: 0: 766.6. Samples: 187720. Policy #0 lag: (min: 0.0, avg: 0.7, max: 1.0)
|
| 283 |
+
[2025-10-18 04:44:55,683][02528] Avg episode reward: [(0, '6.190')]
|
| 284 |
+
[2025-10-18 04:44:55,708][04336] Saving new best policy, reward=6.190!
|
| 285 |
+
[2025-10-18 04:45:00,680][02528] Fps is (10 sec: 2457.2, 60 sec: 2867.1, 300 sec: 2755.5). Total num frames: 757760. Throughput: 0: 741.4. Samples: 189274. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 286 |
+
[2025-10-18 04:45:00,688][02528] Avg episode reward: [(0, '4.443')]
|
| 287 |
+
[2025-10-18 04:45:05,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2935.5, 300 sec: 2764.8). Total num frames: 774144. Throughput: 0: 714.6. Samples: 193358. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 288 |
+
[2025-10-18 04:45:05,684][02528] Avg episode reward: [(0, '4.686')]
|
| 289 |
+
[2025-10-18 04:45:06,107][04349] Updated weights for policy 0, policy_version 190 (0.0017)
|
| 290 |
+
[2025-10-18 04:45:10,679][02528] Fps is (10 sec: 3277.3, 60 sec: 3003.7, 300 sec: 2773.8). Total num frames: 790528. Throughput: 0: 749.0. Samples: 198470. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 291 |
+
[2025-10-18 04:45:10,686][02528] Avg episode reward: [(0, '5.912')]
|
| 292 |
+
[2025-10-18 04:45:15,680][02528] Fps is (10 sec: 2866.8, 60 sec: 2935.4, 300 sec: 2768.3). Total num frames: 802816. Throughput: 0: 755.6. Samples: 200646. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 293 |
+
[2025-10-18 04:45:15,691][02528] Avg episode reward: [(0, '4.293')]
|
| 294 |
+
[2025-10-18 04:45:20,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2867.3, 300 sec: 2763.1). Total num frames: 815104. Throughput: 0: 703.4. Samples: 203574. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 295 |
+
[2025-10-18 04:45:20,684][02528] Avg episode reward: [(0, '5.096')]
|
| 296 |
+
[2025-10-18 04:45:21,239][04349] Updated weights for policy 0, policy_version 200 (0.0028)
|
| 297 |
+
[2025-10-18 04:45:25,679][02528] Fps is (10 sec: 2867.6, 60 sec: 2935.5, 300 sec: 2818.6). Total num frames: 831488. Throughput: 0: 726.3. Samples: 208848. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 298 |
+
[2025-10-18 04:45:25,683][02528] Avg episode reward: [(0, '4.874')]
|
| 299 |
+
[2025-10-18 04:45:30,680][02528] Fps is (10 sec: 3276.3, 60 sec: 2935.4, 300 sec: 2874.1). Total num frames: 847872. Throughput: 0: 749.2. Samples: 211480. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 300 |
+
[2025-10-18 04:45:30,686][02528] Avg episode reward: [(0, '4.638')]
|
| 301 |
+
[2025-10-18 04:45:35,617][04349] Updated weights for policy 0, policy_version 210 (0.0025)
|
| 302 |
+
[2025-10-18 04:45:35,681][02528] Fps is (10 sec: 2866.7, 60 sec: 2867.1, 300 sec: 2888.0). Total num frames: 860160. Throughput: 0: 712.7. Samples: 214706. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
|
| 303 |
+
[2025-10-18 04:45:35,686][02528] Avg episode reward: [(0, '4.617')]
|
| 304 |
+
[2025-10-18 04:45:40,679][02528] Fps is (10 sec: 2457.9, 60 sec: 2867.2, 300 sec: 2888.0). Total num frames: 872448. Throughput: 0: 699.2. Samples: 219186. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 305 |
+
[2025-10-18 04:45:40,683][02528] Avg episode reward: [(0, '5.419')]
|
| 306 |
+
[2025-10-18 04:45:45,679][02528] Fps is (10 sec: 3277.3, 60 sec: 3003.8, 300 sec: 2888.0). Total num frames: 892928. Throughput: 0: 721.9. Samples: 221758. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
|
| 307 |
+
[2025-10-18 04:45:45,687][02528] Avg episode reward: [(0, '5.710')]
|
| 308 |
+
[2025-10-18 04:45:48,814][04349] Updated weights for policy 0, policy_version 220 (0.0024)
|
| 309 |
+
[2025-10-18 04:45:50,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2874.1). Total num frames: 901120. Throughput: 0: 722.4. Samples: 225868. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
|
| 310 |
+
[2025-10-18 04:45:50,685][02528] Avg episode reward: [(0, '3.929')]
|
| 311 |
+
[2025-10-18 04:45:55,683][02528] Fps is (10 sec: 2456.5, 60 sec: 2867.0, 300 sec: 2888.0). Total num frames: 917504. Throughput: 0: 691.8. Samples: 229602. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
|
| 312 |
+
[2025-10-18 04:45:55,686][02528] Avg episode reward: [(0, '4.380')]
|
| 313 |
+
[2025-10-18 04:45:55,702][04336] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000224_917504.pth...
|
| 314 |
+
[2025-10-18 04:45:55,920][04336] Removing train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000054_221184.pth
|
| 315 |
+
[2025-10-18 04:46:00,679][02528] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 2888.0). Total num frames: 933888. Throughput: 0: 696.2. Samples: 231974. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 316 |
+
[2025-10-18 04:46:00,691][02528] Avg episode reward: [(0, '5.847')]
|
| 317 |
+
[2025-10-18 04:46:03,045][04349] Updated weights for policy 0, policy_version 230 (0.0016)
|
| 318 |
+
[2025-10-18 04:46:05,682][02528] Fps is (10 sec: 2867.5, 60 sec: 2867.0, 300 sec: 2874.1). Total num frames: 946176. Throughput: 0: 736.3. Samples: 236710. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
|
| 319 |
+
[2025-10-18 04:46:05,689][02528] Avg episode reward: [(0, '5.913')]
|
| 320 |
+
[2025-10-18 04:46:10,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2798.9, 300 sec: 2874.1). Total num frames: 958464. Throughput: 0: 689.3. Samples: 239868. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
|
| 321 |
+
[2025-10-18 04:46:10,682][02528] Avg episode reward: [(0, '6.065')]
|
| 322 |
+
[2025-10-18 04:46:15,679][02528] Fps is (10 sec: 2868.2, 60 sec: 2867.3, 300 sec: 2874.1). Total num frames: 974848. Throughput: 0: 682.8. Samples: 242206. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 323 |
+
[2025-10-18 04:46:15,683][02528] Avg episode reward: [(0, '5.410')]
|
| 324 |
+
[2025-10-18 04:46:17,725][04349] Updated weights for policy 0, policy_version 240 (0.0014)
|
| 325 |
+
[2025-10-18 04:46:20,679][02528] Fps is (10 sec: 3276.7, 60 sec: 2935.4, 300 sec: 2874.1). Total num frames: 991232. Throughput: 0: 725.0. Samples: 247332. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 326 |
+
[2025-10-18 04:46:20,684][02528] Avg episode reward: [(0, '4.203')]
|
| 327 |
+
[2025-10-18 04:46:25,680][02528] Fps is (10 sec: 2867.0, 60 sec: 2867.2, 300 sec: 2874.2). Total num frames: 1003520. Throughput: 0: 711.2. Samples: 251192. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 328 |
+
[2025-10-18 04:46:25,683][02528] Avg episode reward: [(0, '7.632')]
|
| 329 |
+
[2025-10-18 04:46:25,689][04336] Saving new best policy, reward=7.632!
|
| 330 |
+
[2025-10-18 04:46:30,680][02528] Fps is (10 sec: 2048.0, 60 sec: 2730.7, 300 sec: 2860.3). Total num frames: 1011712. Throughput: 0: 688.5. Samples: 252742. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 331 |
+
[2025-10-18 04:46:30,687][02528] Avg episode reward: [(0, '4.357')]
|
| 332 |
+
[2025-10-18 04:46:33,058][04349] Updated weights for policy 0, policy_version 250 (0.0020)
|
| 333 |
+
[2025-10-18 04:46:35,680][02528] Fps is (10 sec: 2867.1, 60 sec: 2867.2, 300 sec: 2888.0). Total num frames: 1032192. Throughput: 0: 701.9. Samples: 257456. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 334 |
+
[2025-10-18 04:46:35,686][02528] Avg episode reward: [(0, '4.018')]
|
| 335 |
+
[2025-10-18 04:46:40,679][02528] Fps is (10 sec: 3277.0, 60 sec: 2867.2, 300 sec: 2874.2). Total num frames: 1044480. Throughput: 0: 719.5. Samples: 261978. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 336 |
+
[2025-10-18 04:46:40,682][02528] Avg episode reward: [(0, '5.881')]
|
| 337 |
+
[2025-10-18 04:46:45,679][02528] Fps is (10 sec: 2457.8, 60 sec: 2730.7, 300 sec: 2888.0). Total num frames: 1056768. Throughput: 0: 703.2. Samples: 263616. Policy #0 lag: (min: 0.0, avg: 0.6, max: 1.0)
|
| 338 |
+
[2025-10-18 04:46:45,684][02528] Avg episode reward: [(0, '5.979')]
|
| 339 |
+
[2025-10-18 04:46:47,723][04349] Updated weights for policy 0, policy_version 260 (0.0020)
|
| 340 |
+
[2025-10-18 04:46:50,680][02528] Fps is (10 sec: 2866.9, 60 sec: 2867.2, 300 sec: 2888.0). Total num frames: 1073152. Throughput: 0: 698.2. Samples: 268128. Policy #0 lag: (min: 0.0, avg: 0.7, max: 2.0)
|
| 341 |
+
[2025-10-18 04:46:50,684][02528] Avg episode reward: [(0, '6.243')]
|
| 342 |
+
[2025-10-18 04:46:55,679][02528] Fps is (10 sec: 3686.2, 60 sec: 2935.7, 300 sec: 2915.8). Total num frames: 1093632. Throughput: 0: 747.6. Samples: 273510. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
|
| 343 |
+
[2025-10-18 04:46:55,685][02528] Avg episode reward: [(0, '6.197')]
|
| 344 |
+
[2025-10-18 04:47:00,679][02528] Fps is (10 sec: 2867.5, 60 sec: 2798.9, 300 sec: 2901.9). Total num frames: 1101824. Throughput: 0: 736.8. Samples: 275360. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 345 |
+
[2025-10-18 04:47:00,686][02528] Avg episode reward: [(0, '5.088')]
|
| 346 |
+
[2025-10-18 04:47:01,025][04349] Updated weights for policy 0, policy_version 270 (0.0020)
|
| 347 |
+
[2025-10-18 04:47:05,679][02528] Fps is (10 sec: 2048.1, 60 sec: 2799.1, 300 sec: 2901.9). Total num frames: 1114112. Throughput: 0: 695.7. Samples: 278638. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
|
| 348 |
+
[2025-10-18 04:47:05,684][02528] Avg episode reward: [(0, '4.799')]
|
| 349 |
+
[2025-10-18 04:47:10,679][02528] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 2901.9). Total num frames: 1134592. Throughput: 0: 724.2. Samples: 283782. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
|
| 350 |
+
[2025-10-18 04:47:10,685][02528] Avg episode reward: [(0, '8.581')]
|
| 351 |
+
[2025-10-18 04:47:10,689][04336] Saving new best policy, reward=8.581!
|
| 352 |
+
[2025-10-18 04:47:14,797][04349] Updated weights for policy 0, policy_version 280 (0.0022)
[2025-10-18 04:47:15,679][02528] Fps is (10 sec: 3276.8, 60 sec: 2867.2, 300 sec: 2901.9). Total num frames: 1146880. Throughput: 0: 747.0. Samples: 286358. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-10-18 04:47:15,689][02528] Avg episode reward: [(0, '5.566')]
[2025-10-18 04:47:20,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2798.9, 300 sec: 2901.9). Total num frames: 1159168. Throughput: 0: 715.1. Samples: 289634. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-10-18 04:47:20,684][02528] Avg episode reward: [(0, '3.583')]
[2025-10-18 04:47:25,679][02528] Fps is (10 sec: 2867.1, 60 sec: 2867.2, 300 sec: 2901.9). Total num frames: 1175552. Throughput: 0: 721.2. Samples: 294432. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-10-18 04:47:25,682][02528] Avg episode reward: [(0, '3.364')]
[2025-10-18 04:47:28,775][04349] Updated weights for policy 0, policy_version 290 (0.0025)
[2025-10-18 04:47:30,679][02528] Fps is (10 sec: 3276.7, 60 sec: 3003.7, 300 sec: 2901.9). Total num frames: 1191936. Throughput: 0: 742.3. Samples: 297022. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-10-18 04:47:30,700][02528] Avg episode reward: [(0, '7.713')]
[2025-10-18 04:47:35,679][02528] Fps is (10 sec: 2867.3, 60 sec: 2867.3, 300 sec: 2901.9). Total num frames: 1204224. Throughput: 0: 727.3. Samples: 300854. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-10-18 04:47:35,683][02528] Avg episode reward: [(0, '6.734')]
[2025-10-18 04:47:40,679][02528] Fps is (10 sec: 2457.7, 60 sec: 2867.2, 300 sec: 2901.9). Total num frames: 1216512. Throughput: 0: 690.6. Samples: 304588. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-10-18 04:47:40,684][02528] Avg episode reward: [(0, '5.367')]
[2025-10-18 04:47:43,726][04349] Updated weights for policy 0, policy_version 300 (0.0021)
[2025-10-18 04:47:45,679][02528] Fps is (10 sec: 2867.1, 60 sec: 2935.5, 300 sec: 2888.0). Total num frames: 1232896. Throughput: 0: 708.4. Samples: 307236. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-10-18 04:47:45,687][02528] Avg episode reward: [(0, '6.527')]
[2025-10-18 04:47:50,679][02528] Fps is (10 sec: 3276.8, 60 sec: 2935.5, 300 sec: 2901.9). Total num frames: 1249280. Throughput: 0: 747.9. Samples: 312294. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-10-18 04:47:50,682][02528] Avg episode reward: [(0, '8.883')]
[2025-10-18 04:47:50,689][04336] Saving new best policy, reward=8.883!
[2025-10-18 04:47:55,681][02528] Fps is (10 sec: 2457.1, 60 sec: 2730.6, 300 sec: 2888.0). Total num frames: 1257472. Throughput: 0: 703.1. Samples: 315424. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-10-18 04:47:55,684][02528] Avg episode reward: [(0, '4.580')]
[2025-10-18 04:47:55,701][04336] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000307_1257472.pth...
[2025-10-18 04:47:55,957][04336] Removing train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000139_569344.pth
[2025-10-18 04:47:58,420][04349] Updated weights for policy 0, policy_version 310 (0.0032)
[2025-10-18 04:48:00,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2867.2, 300 sec: 2888.0). Total num frames: 1273856. Throughput: 0: 697.5. Samples: 317746. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-10-18 04:48:00,684][02528] Avg episode reward: [(0, '5.529')]
[2025-10-18 04:48:05,680][02528] Fps is (10 sec: 3277.0, 60 sec: 2935.4, 300 sec: 2888.0). Total num frames: 1290240. Throughput: 0: 735.8. Samples: 322748. Policy #0 lag: (min: 0.0, avg: 0.6, max: 2.0)
[2025-10-18 04:48:05,691][02528] Avg episode reward: [(0, '6.455')]
[2025-10-18 04:48:10,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2798.9, 300 sec: 2888.1). Total num frames: 1302528. Throughput: 0: 712.5. Samples: 326494. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-10-18 04:48:10,683][02528] Avg episode reward: [(0, '5.723')]
[2025-10-18 04:48:13,124][04349] Updated weights for policy 0, policy_version 320 (0.0020)
[2025-10-18 04:48:15,679][02528] Fps is (10 sec: 2867.6, 60 sec: 2867.2, 300 sec: 2901.9). Total num frames: 1318912. Throughput: 0: 691.6. Samples: 328144. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-10-18 04:48:15,686][02528] Avg episode reward: [(0, '4.236')]
[2025-10-18 04:48:20,679][02528] Fps is (10 sec: 3276.9, 60 sec: 2935.5, 300 sec: 2888.0). Total num frames: 1335296. Throughput: 0: 719.3. Samples: 333222. Policy #0 lag: (min: 0.0, avg: 0.4, max: 1.0)
[2025-10-18 04:48:20,684][02528] Avg episode reward: [(0, '5.477')]
[2025-10-18 04:48:25,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2888.0). Total num frames: 1347584. Throughput: 0: 735.0. Samples: 337662. Policy #0 lag: (min: 0.0, avg: 0.5, max: 2.0)
[2025-10-18 04:48:25,681][02528] Avg episode reward: [(0, '9.114')]
[2025-10-18 04:48:25,685][04336] Saving new best policy, reward=9.114!
[2025-10-18 04:48:26,644][04349] Updated weights for policy 0, policy_version 330 (0.0023)
[2025-10-18 04:48:30,679][02528] Fps is (10 sec: 2457.6, 60 sec: 2799.0, 300 sec: 2888.0). Total num frames: 1359872. Throughput: 0: 710.6. Samples: 339214. Policy #0 lag: (min: 0.0, avg: 0.5, max: 1.0)
[2025-10-18 04:48:30,682][02528] Avg episode reward: [(0, '6.420')]
[2025-10-18 04:48:35,679][02528] Fps is (10 sec: 2867.2, 60 sec: 2867.2, 300 sec: 2888.0). Total num frames: 1376256. Throughput: 0: 694.0. Samples: 343526. Policy #0 lag: (min: 0.0, avg: 0.4, max: 2.0)
[2025-10-18 04:48:35,685][02528] Avg episode reward: [(0, '6.379')]
[2025-10-18 04:48:36,139][02528] Keyboard interrupt detected in the event loop EvtLoop [Runner_EvtLoop, process=main process 2528], exiting...
[2025-10-18 04:48:36,148][04336] Stopping Batcher_0...
[2025-10-18 04:48:36,151][04336] Loop batcher_evt_loop terminating...
[2025-10-18 04:48:36,155][04336] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000336_1376256.pth...
[2025-10-18 04:48:36,150][02528] Runner profile tree view:
main_loop: 514.2599
[2025-10-18 04:48:36,168][02528] Collected {0: 1376256}, FPS: 2676.2
[2025-10-18 04:48:36,244][04349] Weights refcount: 2 0
[2025-10-18 04:48:36,249][04349] Stopping InferenceWorker_p0-w0...
[2025-10-18 04:48:36,252][04349] Loop inference_proc0-0_evt_loop terminating...
[2025-10-18 04:48:36,379][04336] Removing train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000224_917504.pth
[2025-10-18 04:48:36,355][04351] EvtLoop [rollout_proc0_evt_loop, process=rollout_proc0] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance0'), args=(0, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
    ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gymnasium/core.py", line 555, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-10-18 04:48:36,417][04336] Stopping LearnerWorker_p0...
[2025-10-18 04:48:36,419][04336] Loop learner_proc0_evt_loop terminating...
[2025-10-18 04:48:36,413][04351] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc0_evt_loop
[2025-10-18 04:48:36,478][04356] EvtLoop [rollout_proc6_evt_loop, process=rollout_proc6] unhandled exception in slot='advance_rollouts' connected to emitter=Emitter(object_id='InferenceWorker_p0-w0', signal_name='advance6'), args=(0, 0)
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/signal_slot/signal_slot.py", line 355, in _process_signal
    slot_callable(*args)
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
    complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
    new_obs, rewards, terminated, truncated, infos = e.step(actions)
    ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/utils/make_env.py", line 129, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/utils/make_env.py", line 115, in step
    obs, rew, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gymnasium/core.py", line 555, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gymnasium/core.py", line 522, in step
    observation, reward, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/envs/env_wrappers.py", line 86, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/gymnasium/core.py", line 461, in step
    return self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sf_examples/vizdoom/doom/wrappers/multiplayer_stats.py", line 54, in step
    obs, reward, terminated, truncated, info = self.env.step(action)
    ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/sf_examples/vizdoom/doom/doom_gym.py", line 452, in step
    reward = self.game.make_action(actions_flattened, self.skip_frames)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
vizdoom.vizdoom.SignalException: Signal SIGINT received. ViZDoom instance has been closed.
[2025-10-18 04:48:36,537][04356] Unhandled exception Signal SIGINT received. ViZDoom instance has been closed. in evt loop rollout_proc6_evt_loop
[2025-10-18 18:37:47,967][01690] Saving configuration to train_dir/deadly_corridor_experiment/config.json...
[2025-10-18 18:37:48,722][01690] Rollout worker 0 uses device cpu
[2025-10-18 18:37:48,850][01690] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-18 18:37:48,852][01690] InferenceWorker_p0-w0: min num requests: 1
[2025-10-18 18:37:48,862][01690] Starting all processes...
[2025-10-18 18:37:48,866][01690] Starting process learner_proc0
[2025-10-18 18:37:48,972][01690] Starting all processes...
[2025-10-18 18:37:48,980][01690] Starting process inference_proc0-0
[2025-10-18 18:37:48,982][01690] Starting process rollout_proc0
[2025-10-18 18:37:54,595][02697] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-18 18:37:54,597][02697] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-10-18 18:37:54,618][02697] Num visible devices: 1
[2025-10-18 18:37:54,625][02697] Starting seed is not provided
[2025-10-18 18:37:54,626][02697] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-18 18:37:54,626][02697] Initializing actor-critic model on device cuda:0
[2025-10-18 18:37:54,627][02697] RunningMeanStd input shape: (3, 72, 128)
[2025-10-18 18:37:54,630][02697] RunningMeanStd input shape: (1,)
[2025-10-18 18:37:54,662][02697] ConvEncoder: input_channels=3
[2025-10-18 18:37:55,082][02704] Worker 0 uses CPU cores [0, 1]
[2025-10-18 18:37:55,097][02703] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-18 18:37:55,098][02703] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-10-18 18:37:55,115][02697] Conv encoder output size: 512
[2025-10-18 18:37:55,115][02703] Num visible devices: 1
[2025-10-18 18:37:55,116][02697] Policy head output size: 512
[2025-10-18 18:37:55,159][02697] Created Actor Critic model with architecture:
[2025-10-18 18:37:55,159][02697] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=11, bias=True)
  )
)
[2025-10-18 18:37:55,418][02697] Using optimizer <class 'torch.optim.adam.Adam'>
[2025-10-18 18:37:59,948][02697] Loading state from checkpoint train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000336_1376256.pth...
[2025-10-18 18:38:04,136][02697] Could not load from checkpoint, attempt 0
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint
    checkpoint_dict = torch.load(latest_checkpoint, map_location=device)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/serialization.py", line 1529, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
[2025-10-18 18:38:04,140][02697] Loading state from checkpoint train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000336_1376256.pth...
[2025-10-18 18:38:04,146][02697] Could not load from checkpoint, attempt 1
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint
    checkpoint_dict = torch.load(latest_checkpoint, map_location=device)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/serialization.py", line 1529, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
[2025-10-18 18:38:04,147][02697] Loading state from checkpoint train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000336_1376256.pth...
[2025-10-18 18:38:04,154][02697] Could not load from checkpoint, attempt 2
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint
    checkpoint_dict = torch.load(latest_checkpoint, map_location=device)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/serialization.py", line 1529, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
[2025-10-18 18:38:04,155][02697] Did not load from checkpoint, starting from scratch!
[2025-10-18 18:38:04,155][02697] Initialized policy 0 weights for model version 0
[2025-10-18 18:38:04,160][02697] LearnerWorker_p0 finished initialization!
[2025-10-18 18:38:04,165][02697] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-18 18:38:04,335][02703] RunningMeanStd input shape: (3, 72, 128)
[2025-10-18 18:38:04,337][02703] RunningMeanStd input shape: (1,)
[2025-10-18 18:38:04,354][02703] ConvEncoder: input_channels=3
[2025-10-18 18:38:04,494][02703] Conv encoder output size: 512
[2025-10-18 18:38:04,495][02703] Policy head output size: 512
[2025-10-18 18:38:04,541][01690] Inference worker 0-0 is ready!
[2025-10-18 18:38:04,545][01690] All inference workers are ready! Signal rollout workers to start!
[2025-10-18 18:38:04,596][02704] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-10-18 18:38:05,722][02704] Decorrelating experience for 0 frames...
[2025-10-18 18:38:06,084][02704] Decorrelating experience for 32 frames...
[2025-10-18 18:38:06,485][02704] Decorrelating experience for 64 frames...
[2025-10-18 18:38:06,758][02704] Decorrelating experience for 96 frames...
[2025-10-18 18:38:07,287][01690] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-10-18 18:38:08,843][01690] Heartbeat connected on Batcher_0
[2025-10-18 18:38:08,850][01690] Heartbeat connected on LearnerWorker_p0
[2025-10-18 18:38:08,861][01690] Heartbeat connected on InferenceWorker_p0-w0
[2025-10-18 18:38:08,868][01690] Heartbeat connected on RolloutWorker_w0
[2025-10-18 18:38:12,287][01690] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 256.8. Samples: 1284. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-10-18 18:38:12,294][01690] Avg episode reward: [(0, '0.993')]
[2025-10-18 18:38:12,967][02697] Signal inference workers to stop experience collection...
[2025-10-18 18:38:12,975][02703] InferenceWorker_p0-w0: stopping experience collection
[2025-10-18 18:38:13,187][02697] Signal inference workers to resume experience collection...
[2025-10-18 18:38:13,190][02703] InferenceWorker_p0-w0: resuming experience collection
[2025-10-18 18:38:17,238][02697] Stopping Batcher_0...
[2025-10-18 18:38:17,244][02697] Loop batcher_evt_loop terminating...
[2025-10-18 18:38:17,239][01690] Component Batcher_0 stopped!
[2025-10-18 18:38:17,249][02697] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000004_16384.pth...
[2025-10-18 18:38:17,286][02703] Weights refcount: 2 0
[2025-10-18 18:38:17,294][02703] Stopping InferenceWorker_p0-w0...
[2025-10-18 18:38:17,295][01690] Component InferenceWorker_p0-w0 stopped!
[2025-10-18 18:38:17,301][02703] Loop inference_proc0-0_evt_loop terminating...
[2025-10-18 18:38:17,603][02697] Removing train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000004_16384.pth
[2025-10-18 18:38:17,615][02704] Stopping RolloutWorker_w0...
[2025-10-18 18:38:17,618][02697] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000004_16384.pth...
[2025-10-18 18:38:17,617][02704] Loop rollout_proc0_evt_loop terminating...
[2025-10-18 18:38:17,616][01690] Component RolloutWorker_w0 stopped!
[2025-10-18 18:38:17,981][02697] Removing train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000004_16384.pth
[2025-10-18 18:38:17,998][02697] Stopping LearnerWorker_p0...
[2025-10-18 18:38:18,001][02697] Loop learner_proc0_evt_loop terminating...
[2025-10-18 18:38:18,000][01690] Component LearnerWorker_p0 stopped!
[2025-10-18 18:38:18,012][01690] Waiting for process learner_proc0 to stop...
[2025-10-18 18:38:19,658][01690] Waiting for process inference_proc0-0 to join...
[2025-10-18 18:38:19,665][01690] Waiting for process rollout_proc0 to join...
[2025-10-18 18:38:19,671][01690] Batcher 0 profile tree view:
batching: 0.1554, releasing_batches: 0.0031
[2025-10-18 18:38:19,676][01690] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 2.9008
update_model: 0.1376
  weight_update: 0.0198
one_step: 0.0033
  handle_policy_step: 9.1288
    deserialize: 0.1130, stack: 0.0367, obs_to_device_normalize: 1.8738, forward: 5.9609, send_messages: 0.2287
    prepare_outputs: 0.6964
      to_cpu: 0.4763
[2025-10-18 18:38:19,679][01690] Learner 0 profile tree view:
misc: 0.0000, prepare_batch: 1.2749
train: 2.5333
  epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0019, kl_divergence: 0.0198, after_optimizer: 0.1426
  calculate_losses: 0.7245
    losses_init: 0.0000, forward_head: 0.4491, bptt_initial: 0.1020, tail: 0.0716, advantages_returns: 0.0014, losses: 0.0882
    bptt: 0.0113
      bptt_forward_core: 0.0110
  update: 1.6398
    clip: 0.0674
[2025-10-18 18:38:19,682][01690] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.0058, enqueue_policy_requests: 0.3530, env_step: 5.5468, overhead: 0.2136, complete_rollouts: 0.0110
save_policy_outputs: 0.2790
  split_output_tensors: 0.1091
[2025-10-18 18:38:19,693][01690] Loop Runner_EvtLoop terminating...
[2025-10-18 18:38:19,695][01690] Runner profile tree view:
main_loop: 30.8336
[2025-10-18 18:38:19,700][01690] Collected {0: 16384}, FPS: 531.4
[2025-10-19 00:00:26,666][14521] Saving configuration to train_dir/deadly_corridor_experiment/config.json...
[2025-10-19 00:00:27,422][14521] Rollout worker 0 uses device cpu
[2025-10-19 00:00:27,565][14521] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-19 00:00:27,566][14521] InferenceWorker_p0-w0: min num requests: 1
[2025-10-19 00:00:27,575][14521] Starting all processes...
[2025-10-19 00:00:27,578][14521] Starting process learner_proc0
[2025-10-19 00:00:29,749][14521] Starting all processes...
[2025-10-19 00:00:29,754][14521] Starting process inference_proc0-0
[2025-10-19 00:00:29,754][14521] Starting process rollout_proc0
[2025-10-19 00:00:29,767][14634] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-19 00:00:29,770][14634] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2025-10-19 00:00:29,794][14634] Num visible devices: 1
[2025-10-19 00:00:29,802][14634] Starting seed is not provided
[2025-10-19 00:00:29,802][14634] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-19 00:00:29,803][14634] Initializing actor-critic model on device cuda:0
[2025-10-19 00:00:29,805][14634] RunningMeanStd input shape: (3, 72, 128)
[2025-10-19 00:00:29,808][14634] RunningMeanStd input shape: (1,)
[2025-10-19 00:00:29,843][14634] ConvEncoder: input_channels=3
[2025-10-19 00:00:30,232][14634] Conv encoder output size: 512
[2025-10-19 00:00:30,232][14634] Policy head output size: 512
[2025-10-19 00:00:30,311][14634] Created Actor Critic model with architecture:
[2025-10-19 00:00:30,312][14634] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=11, bias=True)
  )
)
[2025-10-19 00:00:30,816][14634] Using optimizer <class 'torch.optim.adam.Adam'>
[2025-10-19 00:00:34,481][14652] Worker 0 uses CPU cores [0, 1]
[2025-10-19 00:00:34,723][14651] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-19 00:00:34,730][14651] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2025-10-19 00:00:34,759][14651] Num visible devices: 1
[2025-10-19 00:00:38,691][14634] Loading state from checkpoint train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000336_1376256.pth...
[2025-10-19 00:00:40,739][14634] Could not load from checkpoint, attempt 0
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint
    checkpoint_dict = torch.load(latest_checkpoint, map_location=device)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/serialization.py", line 1529, in load
    raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function.

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
[2025-10-19 00:00:40,742][14634] Loading state from checkpoint train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000336_1376256.pth...
[2025-10-19 00:00:40,781][14634] Could not load from checkpoint, attempt 1
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint
    checkpoint_dict = torch.load(latest_checkpoint, map_location=device)
|
| 759 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 760 |
+
File "/usr/local/lib/python3.12/dist-packages/torch/serialization.py", line 1529, in load
|
| 761 |
+
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
|
| 762 |
+
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, [1mdo those steps only if you trust the source of the checkpoint[0m.
|
| 763 |
+
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
|
| 764 |
+
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
|
| 765 |
+
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function.
|
| 766 |
+
|
| 767 |
+
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
|
| 768 |
+
[2025-10-19 00:00:40,782][14634] Loading state from checkpoint train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000336_1376256.pth...
|
| 769 |
+
[2025-10-19 00:00:40,785][14634] Could not load from checkpoint, attempt 2
|
| 770 |
+
Traceback (most recent call last):
|
| 771 |
+
File "/usr/local/lib/python3.12/dist-packages/sample_factory/algo/learning/learner.py", line 281, in load_checkpoint
|
| 772 |
+
checkpoint_dict = torch.load(latest_checkpoint, map_location=device)
|
| 773 |
+
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
| 774 |
+
File "/usr/local/lib/python3.12/dist-packages/torch/serialization.py", line 1529, in load
|
| 775 |
+
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
|
| 776 |
+
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, [1mdo those steps only if you trust the source of the checkpoint[0m.
|
| 777 |
+
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
|
| 778 |
+
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
|
| 779 |
+
WeightsUnpickler error: Unsupported global: GLOBAL numpy.core.multiarray.scalar was not an allowed global by default. Please use `torch.serialization.add_safe_globals([numpy.core.multiarray.scalar])` or the `torch.serialization.safe_globals([numpy.core.multiarray.scalar])` context manager to allowlist this global if you trust this class/function.
|
| 780 |
+
|
| 781 |
+
Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
|
| 782 |
+
[2025-10-19 00:00:40,786][14634] Did not load from checkpoint, starting from scratch!
|
[2025-10-19 00:00:40,786][14634] Initialized policy 0 weights for model version 0
[2025-10-19 00:00:40,790][14634] LearnerWorker_p0 finished initialization!
[2025-10-19 00:00:40,792][14634] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2025-10-19 00:00:40,905][14651] RunningMeanStd input shape: (3, 72, 128)
[2025-10-19 00:00:40,906][14651] RunningMeanStd input shape: (1,)
[2025-10-19 00:00:40,916][14651] ConvEncoder: input_channels=3
[2025-10-19 00:00:41,006][14651] Conv encoder output size: 512
[2025-10-19 00:00:41,007][14651] Policy head output size: 512
[2025-10-19 00:00:41,039][14521] Inference worker 0-0 is ready!
[2025-10-19 00:00:41,040][14521] All inference workers are ready! Signal rollout workers to start!
[2025-10-19 00:00:41,069][14521] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-10-19 00:00:41,077][14652] Doom resolution: 160x120, resize resolution: (128, 72)
[2025-10-19 00:00:41,830][14652] Decorrelating experience for 0 frames...
[2025-10-19 00:00:42,060][14652] Decorrelating experience for 32 frames...
[2025-10-19 00:00:42,363][14652] Decorrelating experience for 64 frames...
[2025-10-19 00:00:42,656][14652] Decorrelating experience for 96 frames...
[2025-10-19 00:00:46,070][14521] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 1.2. Samples: 6. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2025-10-19 00:00:46,074][14521] Avg episode reward: [(0, '-0.077')]
[2025-10-19 00:00:47,558][14521] Heartbeat connected on Batcher_0
[2025-10-19 00:00:47,574][14521] Heartbeat connected on RolloutWorker_w0
[2025-10-19 00:00:47,576][14521] Heartbeat connected on InferenceWorker_p0-w0
[2025-10-19 00:00:49,568][14634] Signal inference workers to stop experience collection...
[2025-10-19 00:00:49,583][14651] InferenceWorker_p0-w0: stopping experience collection
[2025-10-19 00:00:49,772][14634] Signal inference workers to resume experience collection...
[2025-10-19 00:00:49,775][14651] InferenceWorker_p0-w0: resuming experience collection
[2025-10-19 00:00:50,283][14521] Heartbeat connected on LearnerWorker_p0
[2025-10-19 00:00:51,070][14521] Fps is (10 sec: 819.2, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 8192. Throughput: 0: 209.4. Samples: 2094. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2025-10-19 00:00:51,072][14521] Avg episode reward: [(0, '-0.175')]
[2025-10-19 00:00:55,025][14634] Stopping Batcher_0...
[2025-10-19 00:00:55,028][14634] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000004_16384.pth...
[2025-10-19 00:00:55,025][14521] Component Batcher_0 stopped!
[2025-10-19 00:00:55,028][14634] Loop batcher_evt_loop terminating...
[2025-10-19 00:00:55,058][14651] Weights refcount: 2 0
[2025-10-19 00:00:55,060][14651] Stopping InferenceWorker_p0-w0...
[2025-10-19 00:00:55,060][14521] Component InferenceWorker_p0-w0 stopped!
[2025-10-19 00:00:55,064][14651] Loop inference_proc0-0_evt_loop terminating...
[2025-10-19 00:00:55,217][14634] Removing train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000004_16384.pth
[2025-10-19 00:00:55,236][14634] Saving train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000004_16384.pth...
[2025-10-19 00:00:55,291][14652] Stopping RolloutWorker_w0...
[2025-10-19 00:00:55,292][14652] Loop rollout_proc0_evt_loop terminating...
[2025-10-19 00:00:55,293][14521] Component RolloutWorker_w0 stopped!
[2025-10-19 00:00:55,408][14634] Removing train_dir/deadly_corridor_experiment/checkpoint_p0/checkpoint_000000004_16384.pth
[2025-10-19 00:00:55,432][14634] Stopping LearnerWorker_p0...
[2025-10-19 00:00:55,433][14521] Component LearnerWorker_p0 stopped!
[2025-10-19 00:00:55,434][14634] Loop learner_proc0_evt_loop terminating...
[2025-10-19 00:00:55,435][14521] Waiting for process learner_proc0 to stop...
[2025-10-19 00:00:56,764][14521] Waiting for process inference_proc0-0 to join...
[2025-10-19 00:00:56,766][14521] Waiting for process rollout_proc0 to join...
[2025-10-19 00:00:56,768][14521] Batcher 0 profile tree view:
batching: 0.0902, releasing_batches: 0.0034
[2025-10-19 00:00:56,770][14521] InferenceWorker_p0-w0 profile tree view:
wait_policy: 0.0000
  wait_policy_total: 2.6831
update_model: 0.2012
  weight_update: 0.0345
one_step: 0.0035
  handle_policy_step: 10.5613
    deserialize: 0.1239, stack: 0.0335, obs_to_device_normalize: 1.8416, forward: 7.2182, send_messages: 0.3024
    prepare_outputs: 0.8086
      to_cpu: 0.5498
[2025-10-19 00:00:56,772][14521] Learner 0 profile tree view:
misc: 0.0000, prepare_batch: 1.6133
train: 2.9548
  epoch_init: 0.0000, minibatch_init: 0.0000, losses_postprocess: 0.0028, kl_divergence: 0.0235, after_optimizer: 0.1886
  calculate_losses: 0.7724
    losses_init: 0.0000, forward_head: 0.4685, bptt_initial: 0.1183, tail: 0.0694, advantages_returns: 0.0015, losses: 0.0988
    bptt: 0.0152
      bptt_forward_core: 0.0148
  update: 1.9656
    clip: 0.1359
[2025-10-19 00:00:56,773][14521] RolloutWorker_w0 profile tree view:
wait_for_trajectories: 0.0049, enqueue_policy_requests: 0.4166, env_step: 6.8453, overhead: 0.2312, complete_rollouts: 0.0163
save_policy_outputs: 0.2837
  split_output_tensors: 0.1146
[2025-10-19 00:00:56,775][14521] Loop Runner_EvtLoop terminating...
[2025-10-19 00:00:56,776][14521] Runner profile tree view:
main_loop: 29.2021
[2025-10-19 00:00:56,777][14521] Collected {0: 16384}, FPS: 561.1