| Sequence (int64, 1–25.2k) | Time (int64, 1–858M) | File (string, 830 values) | RangeOffset (int64, 0–2.21M) | RangeLength (int64, 0–168k) | Text (string, lengths 1–4.7M, nullable ⌀) | Language (string, 20 values) | Type (string, 9 values) |
|---|---|---|---|---|---|---|---|
| 102 | 204,582 | TERMINAL | 0 | 0 | [?25l[27;6Ha[27;7H[?25h | null | terminal_output |
| 103 | 205,115 | TERMINAL | 0 | 0 | [?25l[27;6H[X[0mSaved working directory and index state WIP on new-arch-sampling: 8cbd77e loaded npy file in int8 to emulate dataloader\r\n[?25h | null | terminal_output |
| 104 | 205,228 | TERMINAL | 0 | 0 | ]0;mahajanm@node17: /usr/stud/mahajanm/Projects/jafar[?2004h(jafar) ]0;mahajanm@node17: ~/Projects/jafar[01;32mmahajanm@node17[00m:[01;34m~/Projects/jafar[00m$ git pull\r\n[?2004l\rt | null | terminal_output |
| 105 | 205,365 | TERMINAL | 0 | 0 | [?25la[2ms[22m[2mh[22m[27;5H[?25h[?25l[27;3Hs[27;5H[?25h[?25l[27;4Hh[27;5H[?25h | null | terminal_output |
| 106 | 205,591 | TERMINAL | 0 | 0 | [?25l[27;5H [27;6H[?25h | null | terminal_output |
| 107 | 206,317 | TERMINAL | 0 | 0 | [?25l[27;6Hp[27;7H[?25h | null | terminal_output |
| 108 | 206,433 | TERMINAL | 0 | 0 | [?25l[27;7Ho[27;8H[?25h[?25l[27;8H[X[0mUpdating 8cbd77e..b2196a7\r\n[?25hFast-forward\r\n | null | terminal_output |
| 109 | 206,503 | TERMINAL | 0 | 0 | p | null | terminal_output |
| 110 | 211,800 | TERMINAL | 0 | 0 | models/dynamics.py \| 4 [32m+[m[31m---[m\r\n sample.py \| 6 [32m++++[m[31m--[m\r\n train_dynamics.py \| 8 [32m++++++[m[31m--[m\r\n 3 files changed, 11 insertions(+), 7 deletions(-)\r\n]0;mahajanm@node17: /usr/stud/mahajanm/Projects/jafar[?2004h(jafar) ]0;mahajanm@node17: ~/Projects/jafar[01;32mmahajanm@node17[00m:[01;34m~/Projects/jafar[00m$ git stash pop | null | terminal_output |
| 111 | 212,957 | TERMINAL | 0 | 0 | \r\n[?2004l\rAuto-merging train_dynamics.py\r\n | null | terminal_output |
| 112 | 213,882 | TERMINAL | 0 | 0 | On branch new-arch-sampling\r\nYour branch is up to date with 'origin/new-arch-sampling'.\r\n\r\nChanges not staged for commit:\r\n (use "git add <file>..." to update what will be committed)\r\n (use "git restore <file>..." to discard changes in working directory)\r\n\t[31mmodified: genie.py[m\r\n\t[31mmodified: models/lam.py[m\r\n\t[31mmodified: train_dynamics.py[m\r\n\r\nUntracked files:\r\n (use "git add <file>..." to include in what will be committed)\r\n\t[31mlogs/[m\r\n\t[31mpsnr.py[m\r\n\t[31mrequirements-franz.txt[m\r\n\t[31mscripts/[m\r\n\t[31mslurm/[m\r\n\t[31mutils/dataloader_new.py[m\r\n\r\nno changes added to commit (use "git add" and/or "git commit -a")\r\nDropped refs/stash@{0} (44c025681b2f0443448ee555e71d4099a52aa654)\r\n]0;mahajanm@node17: /usr/stud/mahajanm/Projects/jafar[?2004h(jafar) ]0;mahajanm@node17: ~/Projects/jafar[01;32mmahajanm@node17[00m:[01;34m~/Projects/jafar[00m$ | null | terminal_output |
| 113 | 219,234 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 0 | 0 | null | shellscript | tab |
| 114 | 224,486 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 944 | 0 | null | shellscript | selection_mouse |
| 115 | 225,085 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 706 | 0 | null | shellscript | selection_mouse |
| 116 | 225,092 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 705 | 0 | null | shellscript | selection_command |
| 117 | 225,574 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 656 | 0 | null | shellscript | selection_mouse |
| 118 | 226,927 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 657 | 0 | null | shellscript | selection_command |
| 119 | 237,474 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 644 | 0 | o | shellscript | content |
| 120 | 237,477 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 645 | 0 | null | shellscript | selection_keyboard |
| 121 | 237,565 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 645 | 0 | a | shellscript | content |
| 122 | 237,565 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 646 | 0 | null | shellscript | selection_keyboard |
| 123 | 237,639 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 646 | 0 | i | shellscript | content |
| 124 | 237,640 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 647 | 0 | null | shellscript | selection_keyboard |
| 125 | 238,137 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 647 | 0 | - | shellscript | content |
| 126 | 238,138 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 648 | 0 | null | shellscript | selection_keyboard |
| 127 | 238,314 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 648 | 0 | s | shellscript | content |
| 128 | 238,315 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 649 | 0 | null | shellscript | selection_keyboard |
| 129 | 238,529 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 649 | 0 | m | shellscript | content |
| 130 | 238,530 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 650 | 0 | null | shellscript | selection_keyboard |
| 131 | 238,912 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 649 | 1 | null | shellscript | content |
| 132 | 239,030 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 649 | 0 | a | shellscript | content |
| 133 | 239,031 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 650 | 0 | null | shellscript | selection_keyboard |
| 134 | 239,103 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 650 | 0 | m | shellscript | content |
| 135 | 239,104 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 651 | 0 | null | shellscript | selection_keyboard |
| 136 | 239,279 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 651 | 0 | p | shellscript | content |
| 137 | 239,279 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 652 | 0 | null | shellscript | selection_keyboard |
| 138 | 239,362 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 652 | 0 | l | shellscript | content |
| 139 | 239,363 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 653 | 0 | null | shellscript | selection_keyboard |
| 140 | 239,485 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 653 | 0 | e | shellscript | content |
| 141 | 239,485 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 654 | 0 | null | shellscript | selection_keyboard |
| 142 | 239,856 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 654 | 0 | - | shellscript | content |
| 143 | 239,857 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 655 | 0 | null | shellscript | selection_keyboard |
| 144 | 240,632 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 654 | 0 | null | shellscript | selection_command |
| 145 | 244,055 | sample.py | 0 | 0 | from dataclasses import dataclass\nfrom typing import Optional\nimport time\nimport os\n\nimport dm_pix as pix\nimport einops\nimport jax\nimport jax.numpy as jnp\nimport flax.linen as nn\nimport numpy as np\nfrom flax.training.train_state import TrainState\nimport grain\nimport orbax.checkpoint as ocp\nimport optax\nfrom PIL import Image, ImageDraw\nimport tyro\n\nfrom genie import Genie\nfrom utils.dataloader import get_dataloader\n\n\n@dataclass\nclass Args:\n # Experiment\n seed: int = 0\n seq_len: int = 16\n image_channels: int = 3\n image_height: int = 90\n image_width: int = 160\n data_dir: str = "data/coinrun_episodes"\n checkpoint: str = ""\n checkpoint_step: Optional[int] = None\n # Sampling\n batch_size: int = 1\n maskgit_steps: int = 25\n temperature: float = 1.0\n sample_argmax: bool = True\n start_frame: int = 0\n # Tokenizer checkpoint\n tokenizer_dim: int = 512\n latent_patch_dim: int = 32\n num_patch_latents: int = 1024\n patch_size: int = 4\n tokenizer_num_blocks: int = 8\n tokenizer_num_heads: int = 8\n # LAM checkpoint\n lam_dim: int = 512\n latent_action_dim: int = 32\n num_latent_actions: int = 6\n lam_patch_size: int = 16\n lam_num_blocks: int = 8\n lam_num_heads: int = 8\n lam_co_train: bool = True\n # Dynamics checkpoint\n dyna_dim: int = 512\n dyna_num_blocks: int = 12\n dyna_num_heads: int = 8\n param_dtype: jnp.dtype = jnp.float32\n dtype: jnp.dtype = jnp.bfloat16\n use_flash_attention: bool = True\n\n\nargs = tyro.cli(Args)\nrng = jax.random.PRNGKey(args.seed)\n\n# --- Load Genie checkpoint ---\ngenie = Genie(\n # Tokenizer\n in_dim=args.image_channels,\n tokenizer_dim=args.tokenizer_dim,\n latent_patch_dim=args.latent_patch_dim,\n num_patch_latents=args.num_patch_latents,\n patch_size=args.patch_size,\n tokenizer_num_blocks=args.tokenizer_num_blocks,\n tokenizer_num_heads=args.tokenizer_num_heads,\n # LAM\n lam_dim=args.lam_dim,\n latent_action_dim=args.latent_action_dim,\n num_latent_actions=args.num_latent_actions,\n lam_patch_size=args.lam_patch_size,\n lam_num_blocks=args.lam_num_blocks,\n lam_num_heads=args.lam_num_heads,\n lam_co_train=args.lam_co_train,\n # Dynamics\n dyna_dim=args.dyna_dim,\n dyna_num_blocks=args.dyna_num_blocks,\n dyna_num_heads=args.dyna_num_heads,\n use_maskgit=False,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n)\nrng, _rng = jax.random.split(rng)\nimage_shape = (args.image_height, args.image_width, args.image_channels)\ndummy_inputs = dict(\n videos=jnp.zeros((args.batch_size, args.seq_len, *image_shape), dtype=jnp.float32),\n mask_rng=_rng,\n)\nrng, _rng = jax.random.split(rng)\nparams = genie.init(_rng, dummy_inputs)\n\ndummy_train_state = TrainState.create(\n apply_fn=genie.apply,\n params=params,\n tx=optax.adamw(\n optax.warmup_cosine_decay_schedule(\n 0, 0, 1, 2 # dummy values\n )\n ), \n)\nhandler_registry = ocp.handlers.DefaultCheckpointHandlerRegistry()\nhandler_registry.add('model_state', ocp.args.StandardRestore, ocp.handlers.StandardCheckpointHandler)\ncheckpoint_manager = ocp.CheckpointManager(\n args.checkpoint,\n options=ocp.CheckpointManagerOptions(step_format_fixed_length=6),\n handler_registry=handler_registry\n)\nabstract_train_state = jax.tree_util.tree_map(\n ocp.utils.to_shape_dtype_struct, dummy_train_state\n)\n\nrestored = checkpoint_manager.restore(\n args.checkpoint_step or checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.StandardRestore(abstract_train_state),\n ),\n)\nrestored_train_state = restored["model_state"]\nparams = restored_train_state.params\n\n\ndef _sampling_wrapper(module, batch):\n # return module.sample_maskgit(batch, args.seq_len, args.maskgit_steps, args.temperature, args.sample_argmax)\n return module.sample_causal(batch, args.seq_len, args.temperature, args.sample_argmax)\n\n# --- Define autoregressive sampling loop ---\ndef _autoreg_sample(rng, video_batch, action_batch):\n vid = video_batch[:, : args.start_frame + 1]\n # sampling_fn = jax.jit(nn.apply(_sampling_wrapper, genie)) \n sampling_fn = nn.apply(_sampling_wrapper, genie)\n rng, _rng = jax.random.split(rng)\n batch = dict(videos=vid, latent_actions=action_batch, rng=_rng)\n generated_vid = sampling_fn(\n params,\n batch\n )\n return generated_vid\n\ndef _get_dataloader_iterator():\n array_record_files = [\n os.path.join(args.data_dir, x)\n for x in os.listdir(args.data_dir)\n if x.endswith(".array_record")\n ]\n grain_dataloader = get_dataloader(\n array_record_files,\n args.seq_len,\n # NOTE: We deliberately pass the global batch size\n # The dataloader shards the dataset across all processes\n args.batch_size,\n *image_shape,\n num_workers=0,\n prefetch_buffer_size=1,\n seed=args.seed,\n )\n initial_state = grain_dataloader._create_initial_state()\n grain_iterator = grain.DataLoaderIterator(grain_dataloader, initial_state)\n return grain_iterator\n\n# --- Get video + latent actions ---\n# grain_iterator = _get_dataloader_iterator()\n# video_batch = next(grain_iterator)\n# video_batch = np.load("overfit_dir/single_sample_corner.npy")\nvideo_batch = np.load("overfit_dir/oai_sample_seed69_1.npy") # *255.\n\n\nvideo_batch = video_batch.astype(args.dtype) / 255.0\n# Get latent actions for all videos in the batch\nbatch = dict(videos=video_batch[:,:args.seq_len])\naction_batch = genie.apply(params, batch, False, method=Genie.vq_encode)\naction_batch = action_batch.reshape(video_batch.shape[0], args.seq_len - 1, 1)\n\n# --- Sample + evaluate video ---\nprint("autoreg sampling...")\nvid = _autoreg_sample(rng, video_batch, action_batch)\nprint("autoreg sampling done. calculating ssim and saving video")\ngt = video_batch[:, : vid.shape[1]].clip(0, 1).reshape(-1, *video_batch.shape[2:])\nrecon = vid.clip(0, 1).reshape(-1, *vid.shape[2:])\nssim = pix.ssim(gt[:, args.start_frame + 1 :], recon[:, args.start_frame + 1 :]).mean()\nprint(f"SSIM: {ssim}")\n\n# --- Construct video ---\ntrue_videos = (video_batch * 255).astype(np.uint8)\npred_videos = (vid * 255).astype(np.uint8)\nvideo_comparison = np.zeros((2, *vid.shape), dtype=np.uint8)\nvideo_comparison[0] = true_videos[:, :args.seq_len]\nvideo_comparison[1] = pred_videos\nframes = einops.rearrange(video_comparison, "n b t h w c -> t (b h) (n w) c")\n\n# --- Save video --- \nimgs = [Image.fromarray(img) for img in frames]\n# Write actions on each frame, on each row (i.e., for each video in the batch, on the GT row)\nfor t, img in enumerate(imgs[1:]):\n d = ImageDraw.Draw(img)\n for row in range(action_batch.shape[0]):\n action = action_batch[row, t, 0]\n y_offset = row * video_batch.shape[2] + 2\n d.text((2, y_offset), f"{action}", fill=255)\nimgs[0].save(\n f"generation_{time.time()}.gif",\n save_all=True,\n append_images=imgs[1:],\n duration=250,\n loop=0,\n)\n | python | tab |
| 146 | 244,815 | train_dynamics.py | 0 | 0 | null | python | tab |
| 147 | 244,989 | train_dynamics.py | 11,692 | 113 | # for i in range(videos.shape[0]):\n # video_i = videos[i:i+1] # shape (1, T, H, W, C)\n # np.save(f"overfit_dir/oai_sample_seed69_{i}.npy", video_i)\n # jax.debug.breakpoint()\n videos = np.load("overfit_dir/oai_sample_seed69_1.npy") # *255.\n # videos = videos.astype(np.uint8)\n | python | content |
| 148 | 255,552 | sample.py | 0 | 0 | null | python | tab |
| 149 | 258,549 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 0 | 0 | null | shellscript | tab |
| 150 | 262,147 | TERMINAL | 0 | 0 | [?25ls[2mh[22m[27;45H[?25h | null | terminal_output |
| 151 | 262,201 | TERMINAL | 0 | 0 | [?25l[27;44Hh[27;46H[?25h | null | terminal_output |
| 152 | 262,281 | TERMINAL | 0 | 0 | [?25l[27;45H [27;46H[?25h | null | terminal_output |
| 153 | 263,206 | slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch | 0 | 0 | null | shellscript | tab |
| 154 | 268,286 | TERMINAL | 0 | 0 | [7mslurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch[27m | null | terminal_output |
| 155 | 268,655 | TERMINAL | 0 | 0 | \r[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Cslurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch\r\n[?2004l\r#!/usr/bin/env bash\r\n\r\n#SBATCH --nodes=1\r\n#SBATCH --ntasks-per-node=1\r\n#SBATCH --time=48:00:00\r\n#SBATCH --partition=accelerated\r\n#SBATCH --cpus-per-task=5\r\n#SBATCH --gres=gpu:1\r\n#SBATCH --output=/storage/slurm/mahajanm/yoloruns/%x_%j.log\r\n#SBATCH --error=/storage/slurm/mahajanm/yoloruns/%x_%j.log\r\n#SBATCH --job-name=train_dynamics_overfit_sample_causal_actionspace-1\r\n\r\n# Log the sbatch script\r\ncat $0\r\n\r\nmodule unload mpi/openmpi/5.0\r\nmodule unload devel/cuda/12.4\r\n# source .venv/bin/activate\r\n\r\narray_records_dir=.\r\n\r\njob_name=$SLURM_JOB_NAME\r\nslurm_job_id=$SLURM_JOB_ID\r\n\r\nCHECKPOINT_DIR=/storage/user/mahajanm/Projects/world-modeling/checkpoints/causal/overfit-oai-sample-actionspace-1/$job_name/$slurm_job_id\r\nmkdir -p $CHECKPOINT_DIR\r\n\r\n# tokenizer_ckpt_dir=/hkfs/work/workspace/scratch/tum_ind3695-jafa_ws_shared/checkpoints/big-runs/tokenizer-lr-scaling/train_tokenizer_lr_sweep_1e-4\r\ntokenizer_ckpt_dir=/storage/user/mahajanm/Projects/world-modeling/checkpoints/tokenizer_ckpt\r\n\r\nenv \| grep SLURM\r\n\r\nsrun python train_dynamics.py \\r\n --save_ckpt \\r\n --num_steps=2000 \\r\n --warmup_steps=0 \\r\n --wsd_decay_steps=0 \\r\n --ckpt_dir $CHECKPOINT_DIR \\r\n --batch_size=1 \\r\n --init_lr=1e-4 \\r\n --max_lr=1e-4 \\r\n --log_image_interval=1000 \\r\n --num_latent_actions=1 \\r\n --log \\r\n --log_checkpoint_interval=1000 \\r\n --name=dynamics-causal-overfit-actionspace-1-$slurm_job_id \\r\n --tags dynamics causal overfit \\r\n --entity instant-uv \\r\n --project jafar \\r\n --tokenizer_checkpoint=$tokenizer_ckpt_dir \\r\n --data_dir $array_records_dir \\r\n --dyna_dim=128 \\r\n --dyna_num_blocks=2 \\r\n --dyna_num_heads=4\r\n slurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch: 16: module: not found\r\nslurm/jobs/mihir/horeka/overfit_sample/causal/dynamics_overfit_sample.sbatch: 17: module: not found\r\nSLURM_STEP_NODELIST=node17\r\nSLURM_JOB_USER=mahajanm\r\nSLURM_JOB_GPUS=0\r\nSLURM_JOBID=1393544\r\nSLURM_PTY_PORT=39621\r\nSLURM_JOB_QOS=stud\r\nSLURM_JOB_NUM_NODES=1\r\nSLURM_SRUN_COMM_PORT=42571\r\nSLURM_TASKS_PER_NODE=1\r\nSLURM_NTASKS_PER_NODE=1\r\nSLURM_TOPOLOGY_ADDR_PATTERN=node\r\nSLURM_PRIO_PROCESS=0\r\nSLURM_JOB_START_TIME=1753197754\r\nSLURM_JOB_CPUS_PER_NODE=5\r\nSLURM_JOB_NAME=interactive\r\nSLURM_JOB_GID=20909\r\nSLURM_CPUS_ON_NODE=5\r\nSLURM_PROCID=0\r\nSLURM_JOB_ACCOUNT=stud\r\nSLURM_SCRIPT_CONTEXT=prolog_task\r\nSLURM_CONF=/var/spool/slurmd/conf-cache/slurm.conf\r\nSLURM_STEP_LAUNCHER_PORT=42571\r\nSLURM_SUBMIT_HOST=atcremers51\r\nSLURM_MPI_TYPE=none\r\nSLURM_GPUS_ON_NODE=1\r\nSLURM_NODELIST=node17\r\nSLURM_NNODES=1\r\nSLURM_JOB_ID=1393544\r\nSLURMD_NODENAME=node17\r\nSLURM_OOM_KILL_STEP=0\r\nSLURM_JOB_NODELIST=node17\r\nSLURM_GTIDS=0\r\nSLURM_STEPID=4294967290\r\nSLURM_CPUS_PER_TASK=5\r\nSLURM_JOB_END_TIME=1753233754\r\nSLURM_STEP_NUM_NODES=1\r\nSLURM_TRES_PER_TASK=cpu=5\r\nSLURM_PTY_WIN_ROW=27\r\nSLURM_JOB_UID=7389\r\nSLURM_CLUSTER_NAME=inf9\r\nSLURM_STEP_TASKS_PER_NODE=1\r\nSLURM_LOCALID=0\r\nSLURM_JOB_PARTITION=NORMAL\r\nSLURM_LAUNCH_NODE_IPADDR=131.159.18.70\r\nSLURMD_DEBUG=2\r\nSLURM_TASK_PID=3978593\r\nSLURM_NTASKS=1\r\nSLURM_TOPOLOGY_ADDR=node17\r\nSLURM_NPROCS=1\r\nSLURM_STEP_NUM_TASKS=1\r\nSLURM_SRUN_COMM_HOST=131.159.18.70\r\nSLURM_SUBMIT_DIR=/usr/stud/mahajanm/Projects/jafar\r\nSLURM_PTY_WIN_COL=184\r\nSLURM_STEP_ID=4294967290\r\nSLURM_NODEID=0\r\n | null | terminal_output |
| 156 | 276,070 | TERMINAL | 0 | 0 | /usr/stud/mahajanm/Projects/jafar/.venv/lib/python3.10/site-packages/tyro/_parsers.py:347: UserWarning: The field `param-dtype` is annotated with type `<class 'numpy.dtype'>`, but the default value `<class 'jax.numpy.float32'>` has type `<class 'jax._src.numpy.scalar_types._ScalarMeta'>`. We'll try to handle this gracefully, but it may cause unexpected behavior.\r\n warnings.warn(message)\r\n/usr/stud/mahajanm/Projects/jafar/.venv/lib/python3.10/site-packages/tyro/_parsers.py:347: UserWarning: The field `dtype` is annotated with type `<class 'numpy.dtype'>`, but the default value `<class 'jax.numpy.bfloat16'>` has type `<class 'jax._src.numpy.scalar_types._ScalarMeta'>`. We'll try to handle this gracefully, but it may cause unexpected behavior.\r\n warnings.warn(message)\r\n | null | terminal_output |
| 157 | 282,779 | TERMINAL | 0 | 0 | 2025-07-22 17:26:27.972720: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n | null | terminal_output |
| 158 | 292,809 | TERMINAL | 0 | 0 | 2025-07-22 17:26:37.998625: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n | null | terminal_output |
| 159 | 311,595 | TERMINAL | 0 | 0 | 2025-07-22 17:26:56.758885: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n | null | terminal_output |
| 160 | 315,353 | TERMINAL | 0 | 0 | wandb: Currently logged in as: mihir-mahajan2002 (instant-uv) to https://api.wandb.ai. Use `wandb login --relogin` to force relogin\r\n | null | terminal_output |
| 161 | 315,878 | TERMINAL | 0 | 0 | wandb: Tracking run with wandb version 0.19.11\r\nwandb: Run data is saved locally in /usr/stud/mahajanm/Projects/jafar/wandb/run-20250722_172700-h06yy7dw\r\nwandb: Run `wandb offline` to turn off syncing.\r\nwandb: Syncing run dynamics-causal-overfit-actionspace-1-1393544\r\nwandb: ⭐️ View project at https://wandb.ai/instant-uv/jafar\r\nwandb: 🚀 View run at https://wandb.ai/instant-uv/jafar/runs/h06yy7dw\r\n | null | terminal_output |
| 162 | 318,215 | TERMINAL | 0 | 0 | WARNING:absl:Missing metrics for step 146000\r\nERROR:absl:File /storage/user/mahajanm/Projects/world-modeling/checkpoints/tokenizer_ckpt/146000/metrics/metrics not found.\r\n | null | terminal_output |
| 163 | 322,778 | TERMINAL | 0 | 0 | Running on 1 devices.\r\nCounting all components: ['tokenizer', 'lam', 'dynamics']\r\nParameter counts:\r\n{'tokenizer': 37989616, 'lam': 19349312, 'dynamics': 583168, 'total': 57922096}\r\n | null | terminal_output |
| 164 | 322,836 | TERMINAL | 0 | 0 | Traceback (most recent call last):\r\n File "/usr/stud/mahajanm/Projects/jafar/train_dynamics.py", line 336, in <module>\r\n videos = np.load("overfit_dir/oai_sample_seed69_1.npy") # *255.\r\n File "/usr/stud/mahajanm/Projects/jafar/.venv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 427, in load\r\n fid = stack.enter_context(open(os_fspath(file), "rb"))\r\nFileNotFoundError: [Errno 2] No such file or directory: 'overfit_dir/oai_sample_seed69_1.npy'\r\n | null | terminal_output |
| 165 | 323,937 | TERMINAL | 0 | 0 | [1;34mwandb[0m: \r\n[1;34mwandb[0m: 🚀 View run [33mdynamics-causal-overfit-actionspace-1-1393544[0m at: [34mhttps://wandb.ai/instant-uv/jafar/runs/h06yy7dw[0m\r\n[1;34mwandb[0m: Find logs at: [1;35mwandb/run-20250722_172700-h06yy7dw/logs[0m\r\n | null | terminal_output |
| 166 | 326,935 | TERMINAL | 0 | 0 | srun: error: node17: task 0: Exited with exit code 1\r\n]0;mahajanm@node17: /usr/stud/mahajanm/Projects/jafar[?2004h(jafar) ]0;mahajanm@node17: ~/Projects/jafar[01;32mmahajanm@node17[00m:[01;34m~/Projects/jafar[00m$ | null | terminal_output |
| 167 | 329,048 | TERMINAL | 0 | 0 | bash | null | terminal_focus |
| 168 | 333,063 | TERMINAL | 0 | 0 | sh | null | terminal_focus |
| 169 | 333,759 | TERMINAL | 0 | 0 | m | null | terminal_output |
| 170 | 333,863 | TERMINAL | 0 | 0 | [?25l[27;44Hv[27;46H[?25h | null | terminal_output |
| 171 | 333,995 | TERMINAL | 0 | 0 | [?25l[27;45H [27;47H[?25h | null | terminal_output |
| 172 | 334,045 | TERMINAL | 0 | 0 | [?25l[27;46Ho[27;47H[?25h | null | terminal_output |
| 173 | 334,176 | TERMINAL | 0 | 0 | verfit_dir/ | null | terminal_output |
| 174 | 335,421 | TERMINAL | 0 | 0 | [?25l[27;57H [27;58H[?25h | null | terminal_output |
| 175 | 335,642 | TERMINAL | 0 | 0 | [?25l[27;58Ho[27;59H[?25h | null | terminal_output |
| 176 | 335,789 | TERMINAL | 0 | 0 | [?25l[27;59Hv[27;60H[?25h | null | terminal_output |
| 177 | 335,898 | TERMINAL | 0 | 0 | erfit_dir/ | null | terminal_output |
| 178 | 336,856 | TERMINAL | 0 | 0 | [?25l[27;69H_[27;70H[?25h | null | terminal_output |
| 179 | 337,174 | TERMINAL | 0 | 0 | [?25l[27;70Hn[27;71H[?25h | null | terminal_output |
| 180 | 337,889 | TERMINAL | 0 | 0 | [?25l[27;70Hb[27;71H[?25h | null | terminal_output |
| 181 | 338,033 | TERMINAL | 0 | 0 | [?25l[27;71Ha[27;73H[?25h | null | terminal_output |
| 182 | 338,106 | TERMINAL | 0 | 0 | [?25l[27;72Hk[27;73H[?25h | null | terminal_output |
| 183 | 338,357 | TERMINAL | 0 | 0 | \r\n[?2004l\r]0;mahajanm@node17: /usr/stud/mahajanm/Projects/jafar[?2004h(jafar) ]0;mahajanm@node17: ~/Projects/jafar[01;32mmahajanm@node17[00m:[01;34m~/Projects/jafar[00m$ | null | terminal_output |
| 184 | 357,804 | TERMINAL | 0 | 0 | bash | null | terminal_focus |
| 185 | 358,282 | TERMINAL | 0 | 0 | sh | null | terminal_focus |
| 186 | 359,171 | TERMINAL | 0 | 0 | [?25lu[2mn[22m[27;45H[?25h | null | terminal_output |
| 187 | 359,258 | TERMINAL | 0 | 0 | [?25l[27;44Hn[27;45H[?25h | null | terminal_output |
| 188 | 359,854 | TERMINAL | 0 | 0 | [?25l[27;45Hz[27;47H[?25h | null | terminal_output |
| 189 | 359,935 | TERMINAL | 0 | 0 | [?25l[27;46Hi[27;47H[?25h | null | terminal_output |
| 190 | 360,138 | TERMINAL | 0 | 0 | [?25l[27;47Hp[27;49H[?25h[?25l[27;48H [27;49H[?25h | null | terminal_output |
| 191 | 360,290 | TERMINAL | 0 | 0 | [?25l[27;49Ho[27;50H[?25h | null | terminal_output |
| 192 | 360,388 | TERMINAL | 0 | 0 | verfit_dir | null | terminal_output |
| 193 | 362,505 | TERMINAL | 0 | 0 | [?25l[27;60H.[27;61H[?25h | null | terminal_output |
| 194 | 362,716 | TERMINAL | 0 | 0 | zip | null | terminal_output |
| 195 | 363,798 | TERMINAL | 0 | 0 | \r\n[?2004l\rArchive: overfit_dir.zip\r\n End-of-central-directory signature not found. Either this file is not\r\n a zipfile, or it constitutes one disk of a multi-part archive. In the\r\n latter case the central directory and zipfile comment will be found on\r\n the last disk(s) of this archive.\r\nunzip: cannot find zipfile directory in one of overfit_dir.zip or\r\n overfit_dir.zip.zip, and cannot find overfit_dir.zip.ZIP, period.\r\n]0;mahajanm@node17: /usr/stud/mahajanm/Projects/jafar[?2004h(jafar) ]0;mahajanm@node17: ~/Projects/jafar[01;32mmahajanm@node17[00m:[01;34m~/Projects/jafar[00m$ | null | terminal_output |
| 196 | 365,818 | TERMINAL | 0 | 0 | unzip overfit_dir.zip | null | terminal_output |
| 197 | 370,998 | TERMINAL | 0 | 0 | \r\n[?2004l\rArchive: overfit_dir.zip\r\n creating: overfit_dir/\r\n inflating: overfit_dir/single_sample_axe.npy \r\n inflating: overfit_dir/single_batch_12_elems.npy | null | terminal_output |
| 198 | 371,173 | TERMINAL | 0 | 0 | \r\n inflating: overfit_dir/single_batch_3_elems.npy | null | terminal_output |
| 199 | 371,259 | TERMINAL | 0 | 0 | \r\n inflating: overfit_dir/oai_sample_seed69_0.npy \r\n inflating: overfit_dir/oai_sample_seed69_3.npy \r\n inflating: overfit_dir/corner_8repl.npy | null | terminal_output |
| 200 | 371,430 | TERMINAL | 0 | 0 | \r\n inflating: overfit_dir/oai_sample_seed69_1.npy \r\n inflating: overfit_dir/single_batch_6_elems.npy | null | terminal_output |
| 201 | 371,607 | TERMINAL | 0 | 0 | \r\n inflating: overfit_dir/oai_sample_seed69_5.npy \r\n inflating: overfit_dir/oai_sample_seed69_9.npy \r\n inflating: overfit_dir/oai_sample_seed69_8.npy \r\n inflating: overfit_dir/oai_sample_seed69_11.npy \r\n inflating: overfit_dir/sample_oai_dataset.npy \r\n inflating: overfit_dir/sample_oai_dataset_seed42.npy \r\n inflating: overfit_dir/oai_sample_seed69_2.npy \r\n inflating: overfit_dir/single_sample_corner.npy | null | terminal_output |