mihirma committed
Commit ee9a429 · verified · 1 Parent(s): ae43bac

Add files using upload-large-folder tool

Files changed (20)
  1. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-0bc12902-38f2-43df-b4fa-d7785d950ad91765373904011-2025_12_10-14.38.30.612/source.csv +0 -0
  2. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-1abe6561-37fa-44c4-a02e-35deedf040521754322372590-2025_08_04-17.46.19.699/source.csv +0 -0
  3. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-2370b2c3-1c20-4f0f-b073-b86899e124f91765365699565-2025_12_10-12.21.46.720/source.csv +0 -0
  4. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-370c054a-afa3-4c83-b05e-ce3666df91621765381903056-2025_12_10-16.51.54.209/source.csv +0 -0
  5. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-60f75f53-189b-4910-bc91-de9064bb67731759002168004-2025_09_27-21.42.49.485/source.csv +0 -0
  6. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-623c548f-e16f-46a4-9ee1-6577a82e63e51754054052755-2025_08_01-15.14.20.520/source.csv +35 -0
  7. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-7544fc83-3b69-4322-bf8e-cad9ebe1fe211755254545814-2025_08_15-12.42.32.95/source.csv +0 -0
  8. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-75f0ddaf-ea67-4942-9305-34b8d6d8dcdc1755942721627-2025_08_23-11.52.07.167/source.csv +0 -0
  9. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-88e23d98-00ad-4d5b-8d4d-1f239e211eb71763045757922-2025_11_13-15.56.09.849/source.csv +7 -0
  10. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-8e7b7877-c553-4d5c-a7c5-433adcd8112b1754287948136-2025_08_04-08.12.35.154/source.csv +286 -0
  11. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-91d7eee6-8ee7-4f50-ba4c-546c6e0f99451755438444066-2025_08_17-15.47.30.74/source.csv +24 -0
  12. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-93adc08f-77de-486a-a0da-6bd1df62203b1753869084135-2025_07_30-11.51.32.679/source.csv +0 -0
  13. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-987c23ac-4e87-407a-ad46-c530dbbf6d4c1764767462916-2025_12_03-14.11.11.447/source.csv +0 -0
  14. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-a464863e-68b5-46e0-8fc2-ff25b4163aec1757954853521-2025_09_15-18.47.40.354/source.csv +0 -0
  15. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-bc0084f0-c97c-4490-943b-c621798113041767623320849-2026_01_05-15.28.50.314/source.csv +152 -0
  16. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c0f990f7-b244-48ad-9e10-09436f02488e1763904501871-2025_11_23-14.28.30.128/source.csv +0 -0
  17. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c3efaede-1402-4fa2-9af2-1daa60c1fac61764499916668-2025_11_30-11.53.19.937/source.csv +0 -0
  18. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c7a3c1e0-f967-4348-8f91-07dcc637207c1761212988162-2025_10_23-11.49.58.248/source.csv +0 -0
  19. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-d47ebbeb-4370-4f85-8a19-4edee45b89e31767785934640-2026_01_07-12.39.04.804/source.csv +0 -0
  20. 1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-e152ee4a-939b-4464-8ec7-3739b32281d61763456606729-2025_11_18-10.03.33.832/source.csv +53 -0
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-0bc12902-38f2-43df-b4fa-d7785d950ad91765373904011-2025_12_10-14.38.30.612/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-1abe6561-37fa-44c4-a02e-35deedf040521754322372590-2025_08_04-17.46.19.699/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-2370b2c3-1c20-4f0f-b073-b86899e124f91765365699565-2025_12_10-12.21.46.720/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-370c054a-afa3-4c83-b05e-ce3666df91621765381903056-2025_12_10-16.51.54.209/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-60f75f53-189b-4910-bc91-de9064bb67731759002168004-2025_09_27-21.42.49.485/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-623c548f-e16f-46a4-9ee1-6577a82e63e51754054052755-2025_08_01-15.14.20.520/source.csv ADDED
@@ -0,0 +1,35 @@
+ Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
+ 2,354,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"3:14:20 PM [info] Activating crowd-code\n3:14:20 PM [info] Recording started\n3:14:20 PM [info] Initializing git provider using file system watchers...\n",Log,tab
+ 3,1073,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"3:14:20 PM [info] Git repository found\n3:14:20 PM [info] Git provider initialized successfully\n3:14:20 PM [info] Initial git state: [object Object]\n",Log,content
+ 4,703122,"experiments/sample.sh",0,0,"source .venv/bin/activate\n\ndata_dir=""$PWD/data_arrayrecord/dummy""\nckpt_dir=""$PWD/checkpoints/causal_dynamics_openai_grain_tok_restore""\n\nexport PYTHONUNBUFFERED=1\nsrun ipython --pdb sample.py -- \\n --dyna_type ""causal"" \\n --batch_size 1 \\n --seq_len 2 \\n --start_frame 1 \\n --checkpoint $ckpt_dir \\n --data_dir $data_dir",shellscript,tab
+ 5,703816,"experiments/sample.sh",283,0,"",shellscript,selection_mouse
+ 6,703816,"experiments/sample.sh",282,0,"",shellscript,selection_command
+ 7,704348,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
+ 8,704352,"experiments/sample.sh",336,0,"",shellscript,selection_command
+ 9,705452,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 10,710934,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
+ 11,711936,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 12,720157,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
+ 13,721337,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 14,722480,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
+ 15,723451,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 16,724245,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
+ 17,725680,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
+ 18,725686,"experiments/sample.sh",336,0,"",shellscript,selection_command
+ 19,860776,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 20,861676,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
+ 21,863663,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 22,864281,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
+ 23,864282,"experiments/sample.sh",336,0,"",shellscript,selection_command
+ 24,865295,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 25,865896,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
+ 26,867495,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 27,1826628,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
+ 28,1826635,"experiments/sample.sh",336,0,"",shellscript,selection_command
+ 29,1827817,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 30,1828589,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
+ 31,1830307,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
+ 32,1831392,"experiments/sample.sh",337,0,"",shellscript,selection_mouse
+ 33,1831396,"experiments/sample.sh",336,0,"",shellscript,selection_command
+ 34,1832055,"experiments/sample.sh",26,0,"",shellscript,selection_mouse
+ 35,1832824,"experiments/sample.sh",135,0,"",shellscript,selection_mouse
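The rendered CSV above is a crowd-code event log: each row is a timestamped editor event with a character offset, a replaced-range length, and (optionally) inserted text. A minimal sketch of folding such a log into final file contents is below. The column semantics (offsets are character indices, `Text` cells store newlines as escaped `\n`, a `tab` event with `Text` carries a full file snapshot, edits past end-of-file are space-padded) are inferred from this commit's data and the `_apply_change` helper embedded in a later file; `apply_change` and `replay` are illustrative names, not part of the dataset tooling.

```python
# A minimal sketch of replaying a crowd-code source.csv event log (assumed schema:
# Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type from the header row).
from typing import Dict, Iterable

def apply_change(content: str, offset: int, length: int, text: str) -> str:
    """Replace `length` characters of `content` at `offset` with `text`."""
    # Text cells store newlines/carriage returns as escaped sequences.
    text = text.replace("\\n", "\n").replace("\\r", "\r")
    if offset > len(content):
        # Pad with spaces when an edit lands past the current end of file.
        content += " " * (offset - len(content))
    return content[:offset] + text + content[offset + length:]

def replay(rows: Iterable[dict]) -> Dict[str, str]:
    """Fold tab/content events into per-file final states.

    Selection, terminal, and log events do not change file contents,
    so they are skipped here.
    """
    files: Dict[str, str] = {}
    for row in rows:
        if row["Type"] == "tab" and row["Text"]:
            # A tab event with Text carries a full snapshot of the opened file.
            files[row["File"]] = row["Text"].replace("\\n", "\n").replace("\\r", "\r")
        elif row["Type"] == "content":
            files[row["File"]] = apply_change(
                files.get(row["File"], ""),
                int(row["RangeOffset"]),
                int(row["RangeLength"]),
                row["Text"],
            )
    return files

# Usage with a hypothetical path: replay(csv.DictReader(open("source.csv")))
```

Rows can come straight from `csv.DictReader`; the splice logic mirrors the `_apply_change` function shipped in this commit's serialization_utils.py.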
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-7544fc83-3b69-4322-bf8e-cad9ebe1fe211755254545814-2025_08_15-12.42.32.95/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-75f0ddaf-ea67-4942-9305-34b8d6d8dcdc1755942721627-2025_08_23-11.52.07.167/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-88e23d98-00ad-4d5b-8d4d-1f239e211eb71763045757922-2025_11_13-15.56.09.849/source.csv ADDED
@@ -0,0 +1,7 @@
+ Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
+ 1,3,"crowd-pilot/crowd-pilot/serialization_utils.py",0,0,"#!/usr/bin/env python3\n""""""\nCommon utilities for dataset serialization scripts.\n""""""\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom pathlib import Path\nfrom typing import List, Optional, Tuple, Dict\n\nimport difflib\nimport re\nimport pandas as pd\nfrom datasets import Dataset, load_dataset\n\n\n_ANSI_CSI_RE = re.compile(r""\x1b\[[0-9;?]*[ -/]*[@-~]"")\n_ANSI_OSC_TERMINATED_RE = re.compile(r""\x1b\][\s\S]*?(?:\x07|\x1b\\)"")\n_ANSI_OSC_LINE_FALLBACK_RE = re.compile(r""\x1b\][^\n]*$"")\n_BRACKETED_PASTE_ENABLE = ""\x1b[?2004h""\n_BRACKETED_PASTE_DISABLE = ""\x1b[?2004l""\n_OSC_633 = ""\x1b]633;""\n_OSC_0 = ""\x1b]0;""\n\n\n@dataclass\nclass SerializeConfig:\n output_dir: str\n shard_size: int\n target_chars: int\n overlap_chars: int\n min_session_chars: int\n max_docs: Optional[int]\n long_pause_threshold_ms: int\n csv_root: Optional[str]\n val_ratio: float\n arrayrecord_group_size: Optional[int] = None\n\n\ndef _clean_text(text: str) -> str:\n # Normalize line endings and strip trailing spaces; preserve tabs/newlines.\n return text.replace(""\r\n"", ""\n"").replace(""\r"", ""\n"").rstrip()\n\n\ndef _fenced_block(path: str, language: Optional[str], content: str) -> str:\n lang = (language or """").lower()\n return f""```{lang}\n{content}\n```\n""\n\n\ndef _apply_change(content: str, offset: int, length: int, new_text: str) -> str:\n # Mirrors crowd_code_player.replay_file.apply_change\n base = str(content)\n text = str(new_text) if pd.notna(new_text) else """"\n text = text.replace(""\\n"", ""\n"").replace(""\\r"", ""\r"")\n if offset > len(base):\n base = base + ("" "" * (offset - len(base)))\n return base[:offset] + text + base[offset + length:]\n\n\ndef _apply_backspaces(text: str) -> str:\n out: List[str] = []\n for ch in text:\n if ch == ""\b"": # \x08\n if out:\n out.pop()\n else:\n out.append(ch)\n return """".join(out)\n\n\ndef 
_normalize_terminal_output(raw: str) -> str:\n """"""\n Normalize PTY/terminal output for training:\n - Apply backspaces (\x08)\n - Strip OSC (window title/shell integration) first, keeping BEL/ST terminators intact\n - Resolve carriage returns (\r) by keeping the last rewrite per line\n - Strip CSI (coloring etc.)\n - Finally drop any remaining BEL (\x07)\n """"""\n if not raw:\n return raw\n s = _apply_backspaces(raw)\n # Remove OSC sequences that are properly terminated (BEL or ST)\n s = _ANSI_OSC_TERMINATED_RE.sub("""", s)\n # Fallback: drop any unterminated OSC up to end-of-line only\n s = ""\n"".join(_ANSI_OSC_LINE_FALLBACK_RE.sub("""", line) for line in s.split(""\n""))\n # Resolve carriage returns per line:\n # - If there are multiple rewrites, keep the last non-empty chunk\n # - If it's CRLF (ending with '\r' before '\n'), keep the content before '\r'\n resolved_lines: List[str] = []\n for seg in s.split(""\n""):\n parts = seg.split(""\r"")\n chosen = """"\n # pick last non-empty part if available; else last part\n for p in reversed(parts):\n if p != """":\n chosen = p\n break\n if chosen == """" and parts:\n chosen = parts[-1]\n resolved_lines.append(chosen)\n s = ""\n"".join(resolved_lines)\n # Strip ANSI escape sequences\n s = _ANSI_CSI_RE.sub("""", s)\n # Remove any remaining BEL beeps\n s = s.replace(""\x07"", """")\n return s\n\n\ndef _line_numbered_output(content: str, start_line: Optional[int] = None, end_line: Optional[int] = None) -> str:\n # TODO (f.srambical): check whether this corresponds **exactly** to the output of cat -n {file_path} | sed -n '{vstart},{vend}p'\n lines = content.splitlines()\n total = len(lines)\n if total == 0:\n return """"\n s = 1 if start_line is None else max(1, min(start_line, total))\n e = total if end_line is None else max(1, min(end_line, total))\n if e < s:\n # FIXME (f.srambical): If this does not happen, remove the condition\n raise ValueError(""This should never happen!"")\n e = s\n buf: List[str] = []\n for 
idx in range(s, e + 1):\n buf.append(f""{idx:6}\t{lines[idx - 1]}"")\n return ""\n"".join(buf)\n\n\ndef _compute_viewport(total_lines: int, center_line: int, radius: int) -> Tuple[int, int]:\n if total_lines <= 0:\n return (1, 0)\n start = max(1, center_line - radius)\n end = min(total_lines, center_line + radius)\n if end < start:\n # FIXME (f.srambical): If this does not happen, remove the condition\n raise ValueError(""This should never happen!"")\n return (start, end)\n\n\ndef _escape_single_quotes_for_sed(text: str) -> str:\n # Close quote, add an escaped single quote, reopen quote: '""'""'\n return text.replace(""'"", ""'\""'\""'"")\n\n\ndef _compute_changed_block_lines(before: str, after: str) -> Tuple[int, int, List[str]]:\n """"""\n Return 1-based start and end line numbers in 'before' that should be replaced,\n and the replacement lines from 'after'.\n For pure deletions, the replacement list may be empty.\n """"""\n before_lines = before.splitlines()\n after_lines = after.splitlines()\n sm = difflib.SequenceMatcher(a=before_lines, b=after_lines, autojunk=False)\n opcodes = [op for op in sm.get_opcodes() if op[0] != ""equal""]\n if not opcodes:\n # FIXME (f.srambical): clean this up\n raise ValueError(""No diff opcodes found for content change"")\n # No visible change; choose a safe single-line replace at end of file\n start_line = max(1, len(before_lines))\n end_line = start_line\n repl = after_lines[start_line - 1:start_line] if after_lines else [""""]\n return (start_line, end_line, repl)\n\n first = opcodes[0]\n last = opcodes[-1]\n # i1/i2 refer to 'before' indices, j1/j2 to 'after'\n start_line = (first[1] + 1) if (first[1] + 1) > 0 else 1\n end_line = last[2] # no increment since we go from 'exclusive' to 'inclusive' indexing\n replacement_lines = after_lines[first[3]:last[4]]\n return (start_line, end_line, replacement_lines)\n\n\ndef _session_to_transcript(\n df: pd.DataFrame,\n long_pause_threshold_ms: int,\n) -> str:\n\n file_states: Dict[str, 
str] = {}\n terminal_state: str = """"\n per_file_event_counts: Dict[str, int] = {}\n per_file_cursor_positions: Dict[str, Tuple[int, int]] = {} # (offset, length) for each file\n last_time_ms: Optional[int] = None\n\n parts: List[str] = []\n\n for i in range(len(df)):\n row = df.iloc[i]\n file_path: str = row[""File""]\n event_time: int = row[""Time""]\n language: Optional[str] = row[""Language""]\n\n # Long pause detection\n if last_time_ms is not None:\n delta = event_time - last_time_ms\n if delta > long_pause_threshold_ms:\n # TODO (f.srambical): think about whether we want to emit this as an observation or not\n parts.append(f""<obs long_pause ms=\""{delta}\"" />"")\n last_time_ms = event_time\n\n event_type = row[""Type""]\n\n match event_type:\n case ""tab"":\n # File switch event\n parts.append(f""<act focus file=\""{file_path}\"" />"")\n \n # If Text is present, this is the first time opening the file\n # and the entire file content is captured\n text = row[""Text""]\n if pd.notna(text):\n file_content = str(text).replace(""\\n"", ""\n"").replace(""\\r"", ""\r"")\n file_states[file_path] = file_content\n parts.append(f""// observation: file={file_path}"")\n parts.append(_fenced_block(file_path, language, _clean_text(file_content)))\n\n case ""terminal_command"":\n # Terminal command execution\n command = row[""Text""]\n command_str = str(command).replace(""\\n"", ""\n"").replace(""\\r"", ""\r"")\n parts.append(f""<act terminal_command />"")\n parts.append(_fenced_block(file_path, ""bash"", _clean_text(command_str)))\n\n case ""terminal_output"":\n # Terminal output capture\n output = row[""Text""]\n output_str = str(output).replace(""\\n"", ""\n"").replace(""\\r"", ""\r"")\n parts.append(f""<obs terminal_output />"")\n parts.append(_fenced_block(file_path, None, _clean_text(output_str)))\n\n case ""terminal_focus"":\n # Terminal focus event\n parts.append(f""<act focus target=\""terminal\"" />"")\n\n case ""git_branch_checkout"":\n # Git branch checkout 
event\n branch_info = row[""Text""]\n branch_str = str(branch_info).replace(""\\n"", ""\n"").replace(""\\r"", ""\r"")\n parts.append(f""<act git_branch_checkout />"")\n parts.append(f""// git: {_clean_text(branch_str)}"")\n\n case ""selection_command"" | ""selection_mouse"" | ""selection_keyboard"":\n # Handle cursor movement\n offset = row[""RangeOffset""]\n length = row[""RangeLength""]\n old_cursor = per_file_cursor_positions.get(file_path, (0, 0))\n new_cursor = (offset, length)\n per_file_cursor_positions[file_path] = new_cursor\n \n # Emit cursor movement observation if position changed\n if old_cursor != new_cursor:\n parts.append(f""<act cursor file=\""{file_path}\"" offset=\""{offset}\"" len=\""{length}\"" />"")\n\n case ""content"":\n # Handle file edit events\n offset = row[""RangeOffset""]\n length = row[""RangeLength""]\n new_text = row[""Text""]\n new_text_str = str(new_text) if pd.notna(new_text) else """"\n\n operation = ""noop""\n if length == 0 and new_text_str:\n operation = ""insert""\n elif length > 0 and not new_text_str:\n operation = ""delete""\n elif length > 0 and new_text_str:\n operation = ""replace""\n\n parts.append(f""<act {operation} file=\""{file_path}\"" offset=\""{offset}\"" len=\""{length}\"" />"")\n\n if new_text_str and (operation == ""insert"" or operation == ""replace""):\n parts.append(_fenced_block(file_path, language, _clean_text(new_text_str)))\n\n before = file_states.get(file_path, """")\n after = _apply_change(before, offset, length, new_text)\n file_states[file_path] = after\n per_file_event_counts[file_path] = per_file_event_counts.get(file_path, 0) + 1\n\n # Update cursor position after edit (cursor moves to end of inserted/replaced text)\n per_file_cursor_positions[file_path] = (offset + len(new_text_str), 0)\n\n case _:\n raise ValueError(f""Unknown event type: {event_type}"")\n\n return ""\n"".join(parts).strip()\n\n\ndef session_to_bash_formatted_transcript(\n df: pd.DataFrame,\n viewport_radius: int = 10,\n 
normalize_terminal_output: bool = True,\n) -> str:\n r""""""\n Serialize a session to a bash-like transcript comprised of:\n - Commands (bash fenced blocks): cat -n, sed -i 'S,Ec\...' && cat -n | sed -n 'VSTART,VENDp'\n - Outputs (<stdout>...</stdout>) that reflect the file state after each action\n Tracks per-file state and a per-file viewport. Viewport only shifts when selection moves out of bounds\n or when first initialized.\n """"""\n file_states: Dict[str, str] = {}\n per_file_viewport: Dict[str, Optional[Tuple[int, int]]] = {}\n\n parts: List[str] = []\n terminal_output_buffer: List[str] = []\n pending_edits_before: Dict[str, Optional[str]] = {}\n\n def _flush_terminal_output_buffer() -> None:\n if not terminal_output_buffer:\n return\n aggregated = """".join(terminal_output_buffer)\n out = aggregated\n if normalize_terminal_output:\n out = _normalize_terminal_output(out)\n cleaned = _clean_text(out)\n if cleaned.strip():\n parts.append(f""<stdout>\n{cleaned}\n</stdout>"")\n terminal_output_buffer.clear()\n\n def _flush_pending_edit_for_file(target_file: str) -> None:\n before_snapshot = pending_edits_before.get(target_file)\n if before_snapshot is None:\n return\n after_state = file_states.get(target_file, """")\n try:\n start_line, end_line, repl_lines = _compute_changed_block_lines(before_snapshot, after_state)\n except ValueError:\n pending_edits_before[target_file] = None\n return\n before_total_lines = len(before_snapshot.splitlines())\n if end_line < start_line:\n escaped_lines = [_escape_single_quotes_for_sed(line) for line in repl_lines]\n sed_payload = ""\n"".join(escaped_lines)\n if start_line <= max(1, before_total_lines):\n sed_cmd = f""sed -i '{start_line}i\\\n{sed_payload}' {target_file}""\n else:\n sed_cmd = f""sed -i '$a\\\n{sed_payload}' {target_file}""\n elif not repl_lines:\n sed_cmd = f""sed -i '{start_line},{end_line}d' {target_file}""\n else:\n escaped_lines = [_escape_single_quotes_for_sed(line) for line in repl_lines]\n sed_payload = 
""\n"".join(escaped_lines)\n sed_cmd = f""sed -i '{start_line},{end_line}c\\\n{sed_payload}' {target_file}""\n total_lines = len(after_state.splitlines())\n center = (start_line + end_line) // 2\n vp = _compute_viewport(total_lines, center, viewport_radius)\n per_file_viewport[target_file] = vp\n vstart, vend = vp\n chained_cmd = f""{sed_cmd} && cat -n {target_file} | sed -n '{vstart},{vend}p'""\n parts.append(_fenced_block(target_file, ""bash"", _clean_text(chained_cmd)))\n viewport_output = _line_numbered_output(after_state, vstart, vend)\n parts.append(f""<stdout>\n{viewport_output}\n</stdout>"")\n pending_edits_before[target_file] = None\n\n def _flush_all_pending_edits() -> None:\n for fname in list(pending_edits_before.keys()):\n _flush_pending_edit_for_file(fname)\n\n for i in range(len(df)):\n row = df.iloc[i]\n file_path: str = row[""File""]\n event_type = row[""Type""]\n\n if i % 100 == 0:\n breakpoint()\n \n match event_type:\n case ""tab"":\n _flush_all_pending_edits()\n _flush_terminal_output_buffer()\n text = row[""Text""]\n if pd.notna(text):\n content = str(text).replace(""\\n"", ""\n"").replace(""\\r"", ""\r"")\n file_states[file_path] = content\n # First open with full file capture\n cmd = f""cat -n {file_path}""\n parts.append(_fenced_block(file_path, ""bash"", _clean_text(cmd)))\n output = _line_numbered_output(content)\n parts.append(f""<stdout>\n{output}\n</stdout>"")\n else:\n # File switch without content snapshot: show current viewport only\n content = file_states.get(file_path, """")\n total_lines = len(content.splitlines())\n vp = per_file_viewport.get(file_path)\n if not vp or vp[1] == 0:\n vp = _compute_viewport(total_lines, 1, viewport_radius)\n per_file_viewport[file_path] = vp\n if vp:\n vstart, vend = vp\n cmd = f""cat -n {file_path} | sed -n '{vstart},{vend}p'""\n parts.append(_fenced_block(file_path, ""bash"", _clean_text(cmd)))\n viewport_output = _line_numbered_output(content, vstart, vend)\n 
parts.append(f""<stdout>\n{viewport_output}\n</stdout>"")\n\n case ""content"":\n _flush_terminal_output_buffer()\n offset = int(row[""RangeOffset""])\n length = int(row[""RangeLength""])\n new_text = row[""Text""]\n before = file_states.get(file_path, """")\n after = _apply_change(before, offset, length, new_text)\n if pending_edits_before.get(file_path) is None:\n pending_edits_before[file_path] = before\n file_states[file_path] = after\n\n case ""selection_command"" | ""selection_mouse"" | ""selection_keyboard"":\n # During an edit burst (pending edits), suppress flush and viewport emissions\n if pending_edits_before.get(file_path) is None:\n _flush_terminal_output_buffer()\n else:\n # Skip emitting viewport while edits are pending to avoid per-keystroke sed/cat spam\n break\n offset = int(row[""RangeOffset""])\n content = file_states.get(file_path, """")\n total_lines = len(content.splitlines())\n target_line = content[:offset].count(""\n"") + 1\n vp = per_file_viewport.get(file_path)\n should_emit = False\n if not vp or vp[1] == 0:\n vp = _compute_viewport(total_lines, target_line, viewport_radius)\n per_file_viewport[file_path] = vp\n should_emit = True\n else:\n vstart, vend = vp\n if target_line < vstart or target_line > vend:\n vp = _compute_viewport(total_lines, target_line, viewport_radius)\n per_file_viewport[file_path] = vp\n should_emit = True\n if should_emit and vp:\n vstart, vend = vp\n cmd = f""cat -n {file_path} | sed -n '{vstart},{vend}p'""\n parts.append(_fenced_block(file_path, ""bash"", _clean_text(cmd)))\n viewport_output = _line_numbered_output(content, vstart, vend)\n parts.append(f""<stdout>\n{viewport_output}\n</stdout>"")\n\n case ""terminal_command"":\n _flush_all_pending_edits()\n _flush_terminal_output_buffer()\n command = row[""Text""]\n command_str = str(command).replace(""\\n"", ""\n"").replace(""\\r"", ""\r"")\n parts.append(_fenced_block(file_path, ""bash"", _clean_text(command_str)))\n\n case ""terminal_output"":\n output = 
row[""Text""]\n raw_output = str(output).replace(""\\n"", ""\n"").replace(""\\r"", ""\r"")\n terminal_output_buffer.append(raw_output)\n\n case ""terminal_focus"" | ""git_branch_checkout"":\n _flush_all_pending_edits()\n _flush_terminal_output_buffer()\n # FIXME (f.srambical): handle these events \n pass\n\n case _:\n _flush_all_pending_edits()\n _flush_terminal_output_buffer()\n raise ValueError(f""Unknown event type: {event_type}"")\n\n _flush_all_pending_edits()\n _flush_terminal_output_buffer()\n return ""\n"".join(parts).strip()\n\ndef load_hf_csv(hf_path: str, split: str) -> Dataset:\n loaded = load_dataset(hf_path, split=split)\n\n assert isinstance(loaded, Dataset), ""Expected a Dataset from load_dataset""\n return loaded\n\n\ndef _discover_local_sessions(root: Path) -> List[Path]:\n # Recursively find all CSV files\n paths: List[Path] = []\n for p in root.rglob(""*.csv""):\n if p.is_file():\n paths.append(p)\n paths.sort()\n return paths\n\n\ndef _chunk_text(text: str, target_chars: int, overlap_chars: int) -> List[str]:\n """"""Split a long text into overlapping chunks near target length.""""""\n if target_chars <= 0:\n return [text]\n n = len(text)\n if n <= target_chars:\n return [text]\n\n chunks: List[str] = []\n start = 0\n # Ensure sane overlap\n overlap = max(0, min(overlap_chars, target_chars // 2))\n while start < n:\n end_target = min(start + target_chars, n)\n if end_target < n:\n end = end_target\n else:\n end = n\n chunk = text[start:end].strip()\n chunks.append(chunk)\n if end == n:\n break\n # advance with overlap\n start = max(0, end - overlap)\n if start >= n:\n break\n return chunks\n\n\n",python,tab
+ 2,318,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"3:56:09 PM [info] Activating crowd-code\n3:56:09 PM [info] Recording started\n3:56:09 PM [info] Initializing git provider using file system watchers...\n",Log,tab
+ 3,550,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"3:56:10 PM [info] Git repository found\n3:56:10 PM [info] Git provider initialized successfully\n3:56:10 PM [info] Initial git state: [object Object]\n",Log,content
+ 4,230600,"TERMINAL",0,0,"",,terminal_focus
+ 5,230602,"crowd-pilot/crowd-pilot/serialization_utils.py",0,0,"",python,tab
+ 6,266711,"TERMINAL",0,0,"source /home/franz.srambical/crowd-pilot/.venv/bin/activate",,terminal_command
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-8e7b7877-c553-4d5c-a7c5-433adcd8112b1754287948136-2025_08_04-08.12.35.154/source.csv ADDED
@@ -0,0 +1,286 @@
+ Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
+ 1,3,"utils/nn.py",0,0,"import math\nfrom typing import Tuple, Callable, List\n\nfrom flax import nnx\nimport jax\nimport jax.numpy as jnp\nimport einops\n\n\nclass PositionalEncoding(nnx.Module):\n """"""https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/JAX/tutorial6/Transformers_and_MHAttention.html""""""\n\n def __init__(self, d_model: int, max_len: int = 5000):\n self.d_model = d_model\n self.max_len = max_len\n\n pe = jnp.zeros((self.max_len, self.d_model))\n position = jnp.arange(0, self.max_len, dtype=jnp.float32)[:, None]\n div_term = jnp.exp(\n jnp.arange(0, self.d_model, 2) * (-math.log(10000.0) / self.d_model)\n )\n pe = pe.at[:, 0::2].set(jnp.sin(position * div_term))\n pe = pe.at[:, 1::2].set(jnp.cos(position * div_term))\n self.pe = nnx.Variable(pe)\n\n def __call__(self, x: jax.Array) -> jax.Array:\n x = x + self.pe[: x.shape[2]]\n return x\n\n\nclass STBlock(nnx.Module):\n def __init__(\n self,\n dim: int,\n ffn_dim: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n ):\n self.dim = dim\n self.ffn_dim = ffn_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.spatial_pos_enc = PositionalEncoding(self.dim)\n self.spatial_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.spatial_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.dim,\n qkv_features=self.dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=False\n ),\n rngs=rngs,\n decode=False,\n )\n\n self.temporal_pos_enc = PositionalEncoding(self.dim)\n self.temporal_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n 
rngs=rngs,\n )\n self.temporal_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.dim,\n qkv_features=self.dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=False,\n )\n\n self.ffn_norm = nnx.LayerNorm(\n num_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense1 = nnx.Linear(\n in_features=self.dim,\n out_features=self.ffn_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense2 = nnx.Linear(\n in_features=self.ffn_dim,\n out_features=self.dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n @nnx.remat\n def __call__(self, x_BTNM: jax.Array) -> jax.Array:\n # --- Spatial attention ---\n z_BTNM = self.spatial_pos_enc(x_BTNM)\n z_BTNM = self.spatial_norm(z_BTNM)\n z_BTNM = self.spatial_attention(z_BTNM)\n x_BTNM = x_BTNM + z_BTNM\n\n # --- Temporal attention ---\n x_BNTM = x_BTNM.swapaxes(1, 2)\n z_BNTM = self.temporal_pos_enc(x_BNTM)\n z_BNTM = self.temporal_norm(z_BNTM)\n z_BNTM = self.temporal_attention(z_BNTM)\n x_BNTM = x_BNTM + z_BNTM\n x_BTNM = x_BNTM.swapaxes(1, 2)\n\n # --- Feedforward ---\n z_BTNM = self.ffn_norm(x_BTNM)\n z_BTND = self.ffn_dense1(z_BTNM)\n z_BTND = jax.nn.gelu(z_BTND)\n z_BTNM = self.ffn_dense2(z_BTND)\n x_BTNM = x_BTNM + z_BTNM\n\n return x_BTNM\n\n\nclass STTransformer(nnx.Module):\n """"""\n Dimension keys:\n B: batch size\n T: number of frames\n N: number of patches per frame\n I: number of input features\n M: model dimension\n D: FFN dimension\n O: number of output features\n """"""\n def __init__(\n self,\n input_dim: int,\n model_dim: int,\n ffn_dim: int,\n out_dim: int,\n num_blocks: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n rngs: nnx.Rngs,\n ):\n self.input_dim = 
input_dim\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.out_dim = out_dim\n self.num_blocks = num_blocks\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.input_norm1 = nnx.LayerNorm(\n num_features=self.input_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_dense = nnx.Linear(\n in_features=self.input_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_norm2 = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n self.blocks = []\n for _ in range(self.num_blocks):\n self.blocks.append(\n STBlock(\n dim=self.model_dim,\n ffn_dim=self.ffn_dim,\n num_heads=self.num_heads,\n dropout=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n rngs=rngs,\n )\n )\n\n self.output_dense = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.out_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n def __call__(self, x_BTNI: jax.Array) -> jax.Array:\n x_BTNI = self.input_norm1(x_BTNI)\n x_BTNM = self.input_dense(x_BTNI)\n x_BTNM = self.input_norm2(x_BTNM)\n\n for block in self.blocks:\n x_BTNM = block(x_BTNM)\n\n x_BTNO = self.output_dense(x_BTNM)\n return x_BTNO\n\nclass TransformerBlock(nnx.Module):\n def __init__(\n self,\n model_dim: int,\n ffn_dim: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n decode: bool,\n rngs: nnx.Rngs,\n ):\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n self.decode = decode\n\n self.temporal_pos_enc = 
PositionalEncoding(self.model_dim)\n self.spatial_pos_enc = PositionalEncoding(self.model_dim)\n self.temporal_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.spatial_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_norm = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.temporal_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.model_dim,\n qkv_features=self.model_dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=self.decode,\n )\n self.spatial_attention = nnx.MultiHeadAttention(\n num_heads=self.num_heads,\n in_features=self.model_dim,\n qkv_features=self.model_dim,\n dropout_rate=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n attention_fn=_create_flash_attention_fn(\n self.use_flash_attention, is_causal=True\n ),\n rngs=rngs,\n decode=self.decode,\n )\n self.ffn_dense1 = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.ffn_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.ffn_dense2 = nnx.Linear(\n in_features=self.ffn_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n @nnx.remat\n def __call__(self, x_BTNM: jax.Array, pos_index: Tuple[jax.Array, jax.Array] | None = None) -> jax.Array:\n # --- Spatial attention ---\n B, T, N, M = x_BTNM.shape\n z_FNM = einops.rearrange(x_BTNM, ""b t n m -> (b t) n m"")\n z_FNM = self.spatial_norm(z_FNM)\n if self.decode:\n assert pos_index is not None\n z_FM = z_FNM[:, pos_index[1]]\n z_F1M = jnp.reshape(z_FM, (B * T, 1, M))\n z_F1M = self.spatial_attention(z_F1M)\n z_FM = jnp.reshape(z_F1M, (B * 
T, M))\n z_FNM = z_FNM.at[:, pos_index[1], :].set(z_FM)\n else:\n z_FNM = self.spatial_attention(z_FNM)\n z_BTNM = einops.rearrange(z_FNM, ""(b t) n m -> b t n m"", t=T)\n x_BTNM = x_BTNM + z_BTNM\n # --- Temporal attention ---\n z_PTM = einops.rearrange(x_BTNM, ""b t n m -> (b n) t m"")\n z_PTM = self.temporal_norm(z_PTM)\n if self.decode:\n assert pos_index is not None\n z_PM = z_PTM[:, pos_index[0]]\n z_P1M = jnp.reshape(z_PM, (B * N, 1, M))\n z_P1M = self.temporal_attention(z_P1M)\n z_PM = jnp.reshape(z_P1M, (B * N, M))\n z_PTM = z_PTM.at[:, pos_index[0], :].set(z_PM)\n else:\n z_PTM = self.temporal_attention(z_PTM)\n z_BTNM = einops.rearrange(z_PTM, ""(b n) t m -> b t n m"", n=N)\n x_BTNM = x_BTNM + z_BTNM\n # --- Feedforward ---\n z_BTNM = self.ffn_norm(x_BTNM)\n z_BTND = self.ffn_dense1(z_BTNM)\n z_BTND = jax.nn.gelu(z_BTND)\n z_BTNM = self.ffn_dense2(z_BTND)\n x_BTNM = x_BTNM + z_BTNM\n\n return x_BTNM\n\nclass Transformer(nnx.Module):\n """"""\n Dimension keys:\n B: batch size\n T: number of frames\n N: number of patches per frame\n I: number of input features\n M: model dimension\n D: FFN dimension\n O: number of output features\n F: number of frames in batch\n P: number of patch positions in batch\n """"""\n def __init__(\n self,\n input_dim: int,\n model_dim: int,\n ffn_dim: int,\n out_dim: int,\n num_blocks: int,\n num_heads: int,\n dropout: float,\n param_dtype: jnp.dtype,\n dtype: jnp.dtype,\n use_flash_attention: bool,\n decode: bool,\n rngs: nnx.Rngs,\n ):\n self.input_dim = input_dim\n self.model_dim = model_dim\n self.ffn_dim = ffn_dim\n self.out_dim = out_dim\n self.num_blocks = num_blocks\n self.num_heads = num_heads\n self.dropout = dropout\n self.param_dtype = param_dtype\n self.dtype = dtype\n self.use_flash_attention = use_flash_attention\n\n self.pos_enc = PositionalEncoding(self.model_dim)\n self.input_norm1 = nnx.LayerNorm(\n num_features=self.input_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n 
self.input_dense = nnx.Linear(\n in_features=self.input_dim,\n out_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n self.input_norm2 = nnx.LayerNorm(\n num_features=self.model_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n self.blocks: List[TransformerBlock] = []\n for _ in range(self.num_blocks):\n self.blocks.append(\n TransformerBlock(\n model_dim=self.model_dim,\n ffn_dim=self.ffn_dim,\n num_heads=self.num_heads,\n dropout=self.dropout,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n use_flash_attention=self.use_flash_attention,\n decode=decode,\n rngs=rngs,\n )\n )\n self.output_dense = nnx.Linear(\n in_features=self.model_dim,\n out_features=self.out_dim,\n param_dtype=self.param_dtype,\n dtype=self.dtype,\n rngs=rngs,\n )\n\n def __call__(self, x_BTNI: jax.Array, pos_index: Tuple[jax.Array, jax.Array] | None = None) -> jax.Array:\n x_BTNI = self.input_norm1(x_BTNI)\n x_BTNM = self.input_dense(x_BTNI)\n x_BTNM = self.input_norm2(x_BTNM)\n x_BTNM = self.pos_enc(x_BTNM)\n\n for block in self.blocks:\n x_BTNM = block(x_BTNM, pos_index)\n\n x_BTNV = self.output_dense(x_BTNM)\n return x_BTNV\n\ndef normalize(x: jax.Array) -> jax.Array:\n return x / (jnp.linalg.norm(x, ord=2, axis=-1, keepdims=True) + 1e-8)\n\n\nclass VectorQuantizer(nnx.Module):\n """"""\n Dimension keys:\n D: B * T * N\n K: number of latents\n L: latent dimension\n """"""\n def __init__(\n self, latent_dim: int, num_latents: int, dropout: float, rngs: nnx.Rngs\n ):\n self.latent_dim = latent_dim\n self.num_latents = num_latents\n self.dropout = dropout\n\n self.codebook = nnx.Param(\n normalize(\n nnx.initializers.lecun_uniform()(\n rngs.params(), (self.num_latents, self.latent_dim)\n )\n )\n )\n self.drop = nnx.Dropout(self.dropout, rngs=rngs)\n\n def __call__(\n self, x_DL: jax.Array, training: bool\n ) -> Tuple[jax.Array, jax.Array, jax.Array, jax.Array]:\n # --- Compute distances ---\n x_DL = normalize(x_DL)\n 
normalized_codebook_KL = normalize(self.codebook.value)\n distance_DK = -jnp.matmul(x_DL, normalized_codebook_KL.T)\n if training:\n distance_DK = self.drop(distance_DK)\n\n # --- Get indices and embeddings ---\n indices_D = jnp.argmin(distance_DK, axis=-1)\n z_DL = self.codebook[indices_D]\n\n # --- Straight through estimator ---\n z_q_DL = x_DL + jax.lax.stop_gradient(z_DL - x_DL)\n return z_q_DL, z_DL, x_DL, indices_D\n\n def get_codes(self, indices_E: jax.Array) -> jax.Array:\n return self.codebook[indices_E]\n\n\ndef _create_flash_attention_fn(use_flash_attention: bool, is_causal: bool) -> Callable:\n """"""\n Create an attention function that uses flash attention if enabled.\n\n flax.nnx.MultiHeadAttention provides tensors with shape (batch..., length, num_heads, head_dim),\n but jax.nn.dot_product_attention expects (batch, length, num_heads, head_dim). We reshape to\n ensure compatibility. cuDNN's flash attention additionally requires a sequence length that\n is a multiple of 4. We pad the sequence length to the nearest multiple of 4 and mask\n accordingly. Note that cuDNN requires the mask to be broadcast before calling the attention\n function due to strict shape checking.\n """"""\n\n def attention_fn(query_BTHD, key_BSHD, value_BSHD, bias=None, mask_B111=None, **kwargs):\n implementation = ""cudnn"" if use_flash_attention else None\n\n def _merge_batch_dims(x):\n return einops.rearrange(x, ""... l h k -> (...) 
l h k"")\n\n def _pad(x, pad_size):\n return jnp.pad(x, ((0, 0), (0, pad_size), (0, 0), (0, 0)))\n\n original_shape = query_BTHD.shape\n T = query_BTHD.shape[-3]\n S = key_BSHD.shape[-3]\n\n # Pad to nearest multiple of 4\n Q = ((T + 3) // 4) * 4\n pad_size_Q = Q - T\n K = ((S + 3) // 4) * 4\n pad_size_K = K - S\n\n query_BQHD = _pad(_merge_batch_dims(query_BTHD), pad_size_Q)\n key_BKHD = _pad(_merge_batch_dims(key_BSHD), pad_size_K)\n value_BKHD = _pad(_merge_batch_dims(value_BSHD), pad_size_K)\n B = query_BQHD.shape[0]\n\n attention_mask = jnp.ones((Q, K), dtype=jnp.bool_)\n attention_mask = attention_mask.at[Q:, :].set(False)\n attention_mask = attention_mask.at[:, K:].set(False)\n\n # Handle causal mask for cached decoder self-attention (from nnx.MultiHeadAttention)\n if mask_B111 is not None:\n # FIXME (f.srambical): Why do we need this?\n mask_B111 = _merge_batch_dims(mask_B111)\n # We need to broadcast T and S dimensions to target_seq_len since cudnn attention strictly checks the mask shape\n # https://github.com/jax-ml/jax/issues/28974\n # https://github.com/jax-ml/jax/blob/08c7677393672ccb85c10f1ed0bd506905c3c994/jax/_src/cudnn/fused_attention_stablehlo.py#L1830\n # https://github.com/jax-ml/jax/blob/08c7677393672ccb85c10f1ed0bd506905c3c994/jax/_src/cudnn/fused_attention_stablehlo.py#L337\n mask_B1TS = einops.repeat(mask_B111, ""... 1 1 -> ... t s"", t=Q, s=K)\n mask_B1TS = mask_B111.astype(jnp.bool)\n else:\n mask_11TS = attention_mask[jnp.newaxis, jnp.newaxis, :, :]\n mask_B1TS = jnp.broadcast_to(mask_11TS, (B, 1, Q, K))\n\n bias_4d = _merge_batch_dims(bias) if bias is not None else None\n\n # NOTE: jax.nn.dot_product_attention does not support dropout\n output_4d = jax.nn.dot_product_attention(\n query=query_BQHD,\n key=key_BKHD,\n value=value_BKHD,\n bias=bias_4d,\n mask=mask_B1TS,\n implementation=implementation,\n is_causal=is_causal,\n )\n return output_4d[..., :T, :, :].reshape(original_shape)\n\n return attention_fn\n",python,tab
+ 2,156,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"8:12:35 AM [info] Activating crowd-code\n8:12:35 AM [info] Recording started\n8:12:35 AM [info] Initializing git provider using file system watchers...\n",Log,tab
+ 3,270,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"8:12:35 AM [info] Git repository found\n8:12:35 AM [info] Git provider initialized successfully\n8:12:35 AM [info] Initial git state: [object Object]\n",Log,content
+ 4,27027,"TERMINAL",0,0,"",,terminal_focus
+ 5,27028,"utils/nn.py",0,0,"",python,tab
+ 6,27772,"TERMINAL",0,0,"source /home/franz.srambical/jafar/.venv/bin/activate",,terminal_command
+ 7,30804,"TERMINAL",0,0,"salloc --gpus=1 --ntasks-per-node=1 --cpus-per-task=1 --mem=100G",,terminal_command
+ 8,30871,"TERMINAL",0,0,"]633;Csalloc: Granted job allocation 14895\r\n",,terminal_output
+ 9,30978,"TERMINAL",0,0,"salloc: Waiting for resource configuration\r\n",,terminal_output
+ 10,31985,"TERMINAL",0,0,"salloc: Nodes hai005 are ready for job\r\n",,terminal_output
+ 11,32407,"TERMINAL",0,0,"Running inside SLURM, Job ID 14895.\r\n",,terminal_output
+ 12,32483,"TERMINAL",0,0,"]0;franz.srambical@hai-login2:~/jafar[?2004h[franz.srambical@hai005.haicore.berlin:~/jafar] $ ",,terminal_output
+ 13,33111,"TERMINAL",0,0,"n",,terminal_output
+ 14,33239,"TERMINAL",0,0,"s",,terminal_output
+ 15,33338,"TERMINAL",0,0,"y",,terminal_output
+ 16,33448,"TERMINAL",0,0,"s",,terminal_output
+ 17,33607,"TERMINAL",0,0,"",,terminal_output
+ 18,33984,"TERMINAL",0,0,"\r\nnsys nsys-ui \r\n[franz.srambical@hai005.haicore.berlin:~/jafar] $ nsys",,terminal_output
+ 19,34150,"TERMINAL",0,0,"\r\nnsys nsys-ui \r\n[franz.srambical@hai005.haicore.berlin:~/jafar] $ nsys",,terminal_output
+ 20,35164,"TERMINAL",0,0," ",,terminal_output
+ 21,37061,"TERMINAL",0,0,"p",,terminal_output
+ 22,37138,"TERMINAL",0,0,"r",,terminal_output
+ 23,37229,"TERMINAL",0,0,"o",,terminal_output
+ 24,37328,"TERMINAL",0,0,"f",,terminal_output
+ 25,37429,"TERMINAL",0,0,"i",,terminal_output
+ 26,37529,"TERMINAL",0,0,"l",,terminal_output
+ 27,37652,"TERMINAL",0,0,"e",,terminal_output
+ 28,37716,"TERMINAL",0,0," ",,terminal_output
+ 29,37984,"TERMINAL",0,0,"-",,terminal_output
+ 30,38182,"TERMINAL",0,0,"o",,terminal_output
+ 31,38288,"TERMINAL",0,0," ",,terminal_output
+ 32,40833,"TERMINAL",0,0,"m",,terminal_output
+ 33,41751,"TERMINAL",0,0,"",,terminal_output
+ 34,41984,"TERMINAL",0,0,"f",,terminal_output
+ 35,42303,"TERMINAL",0,0,"",,terminal_output
+ 36,42423,"TERMINAL",0,0,"t",,terminal_output
+ 37,42500,"TERMINAL",0,0,"e",,terminal_output
+ 38,42559,"TERMINAL",0,0,"s",,terminal_output
+ 39,42623,"TERMINAL",0,0,"t",,terminal_output
+ 40,42845,"TERMINAL",0,0,"_",,terminal_output
+ 41,43219,"TERMINAL",0,0,"p",,terminal_output
+ 42,43371,"TERMINAL",0,0,"r",,terminal_output
+ 43,43445,"TERMINAL",0,0,"o",,terminal_output
+ 44,43592,"TERMINAL",0,0,"f",,terminal_output
+ 45,43670,"TERMINAL",0,0,"i",,terminal_output
+ 46,43725,"TERMINAL",0,0,"l",,terminal_output
+ 47,43811,"TERMINAL",0,0,"e",,terminal_output
+ 48,44081,"TERMINAL",0,0," ",,terminal_output
+ 49,45881,"TERMINAL",0,0,"-",,terminal_output
+ 50,46075,"TERMINAL",0,0,"-f",,terminal_output
+ 51,46200,"TERMINAL",0,0,"o",,terminal_output
+ 52,46313,"TERMINAL",0,0,"r",,terminal_output
+ 53,46568,"TERMINAL",0,0,"c",,terminal_output
+ 54,46633,"TERMINAL",0,0,"e",,terminal_output
+ 55,46935,"TERMINAL",0,0,"_",,terminal_output
+ 56,47620,"TERMINAL",0,0,"",,terminal_output
+ 57,48781,"TERMINAL",0,0,"-",,terminal_output
+ 58,48978,"TERMINAL",0,0,"o",,terminal_output
+ 59,49100,"TERMINAL",0,0,"v",,terminal_output
+ 60,49293,"TERMINAL",0,0,"e",,terminal_output
+ 61,49368,"TERMINAL",0,0,"r",,terminal_output
+ 62,49569,"TERMINAL",0,0,"w",,terminal_output
+ 63,49643,"TERMINAL",0,0,"r",,terminal_output
+ 64,49754,"TERMINAL",0,0,"i",,terminal_output
+ 65,49862,"TERMINAL",0,0,"t",,terminal_output
+ 66,49947,"TERMINAL",0,0,"e",,terminal_output
+ 67,51426,"TERMINAL",0,0," ",,terminal_output
+ 68,51499,"TERMINAL",0,0,"t",,terminal_output
+ 69,52077,"TERMINAL",0,0,"r",,terminal_output
+ 70,52423,"TERMINAL",0,0,"y",,terminal_output
+ 71,52541,"TERMINAL",0,0,"e",,terminal_output
+ 72,52617,"TERMINAL",0,0," ",,terminal_output
+ 73,53355,"TERMINAL",0,0,"",,terminal_output
+ 74,53510,"TERMINAL",0,0,"",,terminal_output
+ 75,53668,"TERMINAL",0,0,"",,terminal_output
+ 76,54103,"TERMINAL",0,0,"u",,terminal_output
+ 77,54185,"TERMINAL",0,0,"e",,terminal_output
+ 78,54328,"TERMINAL",0,0," ",,terminal_output
+ 79,54478,"TERMINAL",0,0,"-",,terminal_output
+ 80,54649,"TERMINAL",0,0,"-",,terminal_output
+ 81,56255,"TERMINAL",0,0,"t",,terminal_output
+ 82,56397,"TERMINAL",0,0,"r",,terminal_output
+ 83,56544,"TERMINAL",0,0,"a",,terminal_output
+ 84,56661,"TERMINAL",0,0,"c",,terminal_output
+ 85,56781,"TERMINAL",0,0,"e",,terminal_output
+ 86,56938,"TERMINAL",0,0,"=",,terminal_output
+ 87,57153,"TERMINAL",0,0,"c",,terminal_output
+ 88,57312,"TERMINAL",0,0,"u",,terminal_output
+ 89,57395,"TERMINAL",0,0,"d",,terminal_output
+ 90,57446,"TERMINAL",0,0,"a",,terminal_output
+ 91,57835,"TERMINAL",0,0,",",,terminal_output
+ 92,59494,"TERMINAL",0,0,"n",,terminal_output
+ 93,59551,"TERMINAL",0,0,"v",,terminal_output
+ 94,59742,"TERMINAL",0,0,"t",,terminal_output
+ 95,59932,"TERMINAL",0,0,"x",,terminal_output
+ 96,61804,"TERMINAL",0,0," ",,terminal_output
+ 97,63065,"TERMINAL",0,0,"b",,terminal_output
+ 98,63194,"TERMINAL",0,0,"as",,terminal_output
+ 99,63280,"TERMINAL",0,0,"h",,terminal_output
+ 100,63429,"TERMINAL",0,0," e",,terminal_output
+ 101,63600,"TERMINAL",0,0,"x",,terminal_output
+ 102,63677,"TERMINAL",0,0,"p",,terminal_output
+ 103,63794,"TERMINAL",0,0,"e",,terminal_output
+ 104,63864,"TERMINAL",0,0,"ri",,terminal_output
+ 105,63916,"TERMINAL",0,0,"m",,terminal_output
+ 106,64042,"TERMINAL",0,0,"ents/",,terminal_output
+ 107,65215,"TERMINAL",0,0,"t",,terminal_output
+ 108,65607,"TERMINAL",0,0,"r",,terminal_output
+ 109,65719,"TERMINAL",0,0,"ai",,terminal_output
+ 110,65771,"TERMINAL",0,0,"n",,terminal_output
+ 111,65935,"TERMINAL",0,0,"",,terminal_output
+ 112,66603,"TERMINAL",0,0,"",,terminal_output
+ 113,67080,"TERMINAL",0,0,"",,terminal_output
+ 114,67235,"TERMINAL",0,0,"",,terminal_output
+ 115,67378,"TERMINAL",0,0,"",,terminal_output
+ 116,67520,"TERMINAL",0,0,"",,terminal_output
+ 117,67649,"TERMINAL",0,0,"",,terminal_output
+ 118,67836,"TERMINAL",0,0,"d",,terminal_output
+ 119,67899,"TERMINAL",0,0,"y",,terminal_output
+ 120,68013,"TERMINAL",0,0,"namics_grain_",,terminal_output
+ 121,68464,"TERMINAL",0,0,"t",,terminal_output
+ 122,68565,"TERMINAL",0,0,"ok",,terminal_output
+ 123,68674,"TERMINAL",0,0,"_",,terminal_output
+ 124,69397,"TERMINAL",0,0,"r",,terminal_output
+ 125,69607,"TERMINAL",0,0,"estore.sh ",,terminal_output
+ 126,70404,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
+ 127,71800,"TERMINAL",0,0,"Collecting data...\r\n",,terminal_output
+ 128,77965,"TERMINAL",0,0,"WARNING:2025-08-04 08:13:53,008:jax._src.distributed:127: JAX detected proxy variable(s) in the environment as distributed setup: QUADD_INJECTION_PROXY. On some systems, this may cause a hang of distributed.initialize and you may need to unset these ENV variable(s)\r\nWARNING:jax._src.distributed:JAX detected proxy variable(s) in the environment as distributed setup: QUADD_INJECTION_PROXY. On some systems, this may cause a hang of distributed.initialize and you may need to unset these ENV variable(s)\r\n",,terminal_output
+ 129,78646,"TERMINAL",0,0,"Running on 1 devices.\r\n",,terminal_output
+ 130,86522,"TERMINAL",0,0,"Counting all components: ['dynamics', 'lam', 'tokenizer']\r\nParameter counts:\r\n{'dynamics': 26555392, 'lam': 35115232, 'tokenizer': 33750256, 'total': 95420880}\r\n",,terminal_output
+ 131,88929,"TERMINAL",0,0,"WARNING:absl:Metadata file does not exist: /home/franz.srambical/jafar/checkpoints/causal_dynamics_openai_grain_tok_restore/000290/_CHECKPOINT_METADATA\r\n",,terminal_output
+ 132,89797,"TERMINAL",0,0,"/fast/home/franz.srambical/jafar/.venv/lib/python3.10/site-packages/orbax/checkpoint/_src/serialization/type_handlers.py:1256: UserWarning: Sharding info not provided when restoring. Populating sharding info from sharding file. Please note restoration time will be slightly increased due to reading from file. Note also that this option is unsafe when restoring on a different topology than the checkpoint was saved with.\r\n warnings.warn(\r\n",,terminal_output
+ 133,96833,"TERMINAL",0,0,"Starting training from step 0...\r\n",,terminal_output
+ 134,98035,"TERMINAL",0,0,"2025-08-04 08:14:13.078544: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n",,terminal_output
+ 135,98192,"TERMINAL",0,0,"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\r\nE0000 00:00:1754288053.237108 3090181 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n",,terminal_output
+ 136,98265,"TERMINAL",0,0,"E0000 00:00:1754288053.280784 3090181 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n",,terminal_output
+ 137,98709,"TERMINAL",0,0,"W0000 00:00:1754288053.643365 3090181 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1754288053.643382 3090181 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1754288053.643385 3090181 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\nW0000 00:00:1754288053.643387 3090181 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\r\n",,terminal_output
+ 138,104271,"TERMINAL",0,0,"2025-08-04 08:14:19.312081: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:213] cuptiSubscribe: error 39: CUPTI_ERROR_MULTIPLE_SUBSCRIBERS_NOT_SUPPORTED\r\n2025-08-04 08:14:19.312098: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:242] cuptiGetResultString: ignored due to a previous error.\r\nE0804 08:14:19.312101 3090181 cupti_tracer.cc:1204] function cupti_interface_->Subscribe( &subscriber_, (CUpti_CallbackFunc)ApiCallback, this)failed with error \r\n",,terminal_output
+ 139,131807,"TERMINAL",0,0,"2025-08-04 08:14:46.853849: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n2025-08-04 08:14:46.854633: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n2025-08-04 08:14:46.855920: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n2025-08-04 08:14:46.855946: W external/xla/xla/service/gpu/autotuning/dot_search_space.cc:200] All configs were filtered out because none of them sufficiently match the hints. Maybe the hints set does not contain a good representative set of valid configs?Working around this by using the full hints set instead.\r\n",,terminal_output
+ 140,156766,"TERMINAL",0,0,"Step 0, loss: 16.796998977661133\r\n",,terminal_output
+ 141,195251,"TERMINAL",0,0,"Step 1, loss: 1.9303642511367798\r\n",,terminal_output
+ 142,196253,"TERMINAL",0,0,"Step 2, loss: 2.342648506164551\r\n",,terminal_output
+ 143,197253,"TERMINAL",0,0,"Step 3, loss: 2.199798107147217\r\n",,terminal_output
+ 144,198255,"TERMINAL",0,0,"Step 4, loss: 1.6089359521865845\r\nSaved checkpoint at step 5\r\n",,terminal_output
+ 145,199252,"TERMINAL",0,0,"2025-08-04 08:15:53.706165: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:157] cuptiFinalize: ignored due to a previous error.\r\n2025-08-04 08:15:53.706187: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:242] cuptiGetResultString: ignored due to a previous error.\r\nE0804 08:15:53.706190 3090181 cupti_tracer.cc:1317] function cupti_interface_->Finalize()failed with error \r\n2025-08-04 08:15:53.707018: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:150] cuptiGetTimestamp: ignored due to a previous error.\r\n",,terminal_output
+ 146,210246,"TERMINAL",0,0,"2025-08-04 08:16:04.208428: E external/xla/xla/backends/profiler/gpu/cupti_error_manager.cc:150] cuptiGetTimestamp: ignored due to a previous error.\r\n",,terminal_output
+ 147,249252,"TERMINAL",0,0,"/home/franz.srambical/.local/share/uv/python/cpython-3.10.18-linux-x86_64-gnu/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 10 leaked shared_memory objects to clean up at shutdown\r\n warnings.warn('resource_tracker: There appear to be %d '\r\n",,terminal_output
+ 148,253252,"TERMINAL",0,0,"Generating '/var/tmp/nsys-report-4202.qdstrm'\r\n",,terminal_output
+ 149,255249,"TERMINAL",0,0,"\r[1/1] [0% ] test_profile.nsys-rep\r[1/1] [0% ] test_profile.nsys-rep\r[1/1] [10% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [7% ] test_profile.nsys-rep\r[1/1] [6% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [10% ] test_profile.nsys-rep\r[1/1] [9% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [7% ] test_profile.nsys-rep\r[1/1] [6% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [10% ] test_profile.nsys-rep\r[1/1] [11% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [14% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [12% ] test_profile.nsys-rep\r[1/1] [11% ] test_profile.nsys-rep\r[1/1] [10% ] test_profile.nsys-rep\r[1/1] [9% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [7% ] test_profile.nsys-rep\r[1/1] [6% ] test_profile.nsys-rep\r[1/1] [7% ] test_profile.nsys-rep\r[1/1] [8% ] test_profile.nsys-rep\r[1/1] [9% ] test_profile.nsys-rep\r[1/1] [11% ] test_profile.nsys-rep\r[1/1] [13% ] test_profile.nsys-rep\r[1/1] [=15% ] test_profile.nsys-rep\r[1/1] [=17% ] test_profile.nsys-rep\r[1/1] [==19% ] test_profile.nsys-rep\r[1/1] [==21% ] test_profile.nsys-rep\r[1/1] [===22% ] test_profile.nsys-rep\r[1/1] [===23% ] test_profile.nsys-rep\r[1/1] [====25% ] test_profile.nsys-rep\r[1/1] [====27% ] test_profile.nsys-rep\r[1/1] [=====30% ] test_profile.nsys-rep\r[1/1] [=====32% ] test_profile.nsys-rep\r[1/1] [======34% ] test_profile.nsys-rep\r[1/1] [=======36% ] test_profile.nsys-rep\r[1/1] [=======37% ] test_profile.nsys-rep\r[1/1] [=======38% ] test_profile.nsys-rep\r[1/1] [=======39% ] test_profile.nsys-rep\r[1/1] [========40% ] test_profile.nsys-rep\r[1/1] 
[========41% ] test_profile.nsys-rep\r[1/1] [========42% ] test_profile.nsys-rep\r[1/1] [=========43% ] test_profile.nsys-rep\r[1/1] [=========44% ] test_profile.nsys-rep\r[1/1] [=========45% ] test_profile.nsys-rep\r[1/1] [==========47% ] test_profile.nsys-rep\r[1/1] [==========49% ] test_profile.nsys-rep\r[1/1] [===========51% ] test_profile.nsys-rep\r[1/1] [===========53% ] test_profile.nsys-rep\r[1/1] [============55% ] test_profile.nsys-rep\r[1/1] [============57% ] test_profile.nsys-rep\r[1/1] [=============59% ] test_profile.nsys-rep\r[1/1] [==============61% ] test_profile.nsys-rep\r[1/1] [==============63% ] test_profile.nsys-rep\r[1/1] [===============65% ] test_profile.nsys-rep\r[1/1] [================68% ] test_profile.nsys-rep\r[1/1] [================70% ] test_profile.nsys-rep\r[1/1] [=================72% ] test_profile.nsys-rep\r[1/1] [=================74% ] test_profile.nsys-rep\r[1/1] [==================76% ] test_profile.nsys-rep\r[1/1] [==================77% ] test_profile.nsys-rep\r[1/1] [===================82% ] test_profile.nsys-rep\r[1/1] [========================100%] test_profile.nsys-rep",,terminal_output
+ 150,260255,"TERMINAL",0,0,"\r[1/1] [========================100%] test_profile.nsys-rep\r\nGenerated:\r\n /fast/home/franz.srambical/jafar/test_profile.nsys-rep\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[franz.srambical@hai005.haicore.berlin:~/jafar] $ ",,terminal_output
+ 151,341900,"TERMINAL",0,0,"\r[franz.srambical@hai005.haicore.berlin:~/jafar] $ ",,terminal_output
+ 152,955976,"TERMINAL",0,0,"n",,terminal_output
+ 153,956091,"TERMINAL",0,0,"c",,terminal_output
+ 154,956287,"TERMINAL",0,0,"u",,terminal_output
+ 155,961697,"TERMINAL",0,0," ",,terminal_output
+ 156,961792,"TERMINAL",0,0,"-",,terminal_output
+ 157,961973,"TERMINAL",0,0,"o",,terminal_output
+ 158,962062,"TERMINAL",0,0," ",,terminal_output
+ 159,963985,"TERMINAL",0,0,"t",,terminal_output
+ 160,964066,"TERMINAL",0,0,"e",,terminal_output
+ 161,964152,"TERMINAL",0,0,"s",,terminal_output
+ 162,964204,"TERMINAL",0,0,"t",,terminal_output
+ 163,964485,"TERMINAL",0,0,"_",,terminal_output
+ 164,964669,"TERMINAL",0,0,"p",,terminal_output
+ 165,964751,"TERMINAL",0,0,"r",,terminal_output
+ 166,964865,"TERMINAL",0,0,"o",,terminal_output
+ 167,964979,"TERMINAL",0,0,"f",,terminal_output
+ 168,965104,"TERMINAL",0,0,"i",,terminal_output
+ 169,965164,"TERMINAL",0,0,"l",,terminal_output
+ 170,965256,"TERMINAL",0,0,"e",,terminal_output
+ 171,967201,"TERMINAL",0,0," ",,terminal_output
+ 172,967345,"TERMINAL",0,0,"-",,terminal_output
+ 173,967491,"TERMINAL",0,0,"-",,terminal_output
+ 174,967546,"TERMINAL",0,0,"s",,terminal_output
+ 175,967605,"TERMINAL",0,0,"e",,terminal_output
+ 176,967696,"TERMINAL",0,0,"t",,terminal_output
+ 177,967801,"TERMINAL",0,0," ",,terminal_output
+ 178,968014,"TERMINAL",0,0,"f",,terminal_output
+ 179,968095,"TERMINAL",0,0,"u",,terminal_output
+ 180,968192,"TERMINAL",0,0,"l",,terminal_output
+ 181,968385,"TERMINAL",0,0,"l",,terminal_output
+ 182,968479,"TERMINAL",0,0," ",,terminal_output
+ 183,970729,"TERMINAL",0,0,"bas",,terminal_output
+ 184,970834,"TERMINAL",0,0,"h",,terminal_output
+ 185,970969,"TERMINAL",0,0," e",,terminal_output
+ 186,971152,"TERMINAL",0,0,"x",,terminal_output
+ 187,971246,"TERMINAL",0,0,"p",,terminal_output
+ 188,971509,"TERMINAL",0,0,"erim",,terminal_output
+ 189,971611,"TERMINAL",0,0,"ents/",,terminal_output
+ 190,972533,"TERMINAL",0,0,"t",,terminal_output
+ 191,972669,"TERMINAL",0,0,"r",,terminal_output
+ 192,972766,"TERMINAL",0,0,"ai",,terminal_output
+ 193,972843,"TERMINAL",0,0,"n",,terminal_output
+ 194,973130,"TERMINAL",0,0,"",,terminal_output
+ 195,973284,"TERMINAL",0,0,"",,terminal_output
+ 196,973415,"TERMINAL",0,0,"",,terminal_output
+ 197,973572,"TERMINAL",0,0,"",,terminal_output
+ 198,973721,"TERMINAL",0,0,"",,terminal_output
+ 199,973900,"TERMINAL",0,0,"d",,terminal_output
+ 200,973998,"TERMINAL",0,0,"y",,terminal_output
+ 201,974129,"TERMINAL",0,0,"namics_grain_",,terminal_output
+ 202,974856,"TERMINAL",0,0,"t",,terminal_output
+ 203,975063,"TERMINAL",0,0,"ok_",,terminal_output
+ 204,976053,"TERMINAL",0,0,"r",,terminal_output
+ 205,976210,"TERMINAL",0,0,"estore.sh ",,terminal_output
+ 206,976605,"TERMINAL",0,0,"\r\n[?2004l\rbash: ncu: command not found\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[franz.srambical@hai005.haicore.berlin:~/jafar] $ ",,terminal_output
+ 207,978326,"TERMINAL",0,0,"n",,terminal_output
+ 208,978392,"TERMINAL",0,0,"c",,terminal_output
+ 209,978620,"TERMINAL",0,0,"",,terminal_output
+ 210,978862,"TERMINAL",0,0,"\r\nncclras ncdu ncurses6-config ncursesw6-config \r\n[franz.srambical@hai005.haicore.berlin:~/jafar] $ nc",,terminal_output
+ 211,979116,"TERMINAL",0,0,"u",,terminal_output
+ 212,981520,"TERMINAL",0,0,"",,terminal_output
+ 213,982234,"TERMINAL",0,0,"u",,terminal_output
+ 214,982369,"TERMINAL",0,0,"rses",,terminal_output
+ 215,982539,"TERMINAL",0,0,"",,terminal_output
+ 216,983359,"TERMINAL",0,0,"",,terminal_output
+ 217,992859,"TERMINAL",0,0,"m",,terminal_output
+ 218,992923,"TERMINAL",0,0,"o",,terminal_output
+ 219,993010,"TERMINAL",0,0,"d",,terminal_output
+ 220,993130,"TERMINAL",0,0,"u",,terminal_output
+ 221,993284,"TERMINAL",0,0,"l",,terminal_output
+ 222,993401,"TERMINAL",0,0,"e",,terminal_output
+ 223,993573,"TERMINAL",0,0,"",,terminal_output
+ 224,993929,"TERMINAL",0,0,"\r\nmodule modulemd-validator \r\n[franz.srambical@hai005.haicore.berlin:~/jafar] $ module",,terminal_output
+ 225,995094,"TERMINAL",0,0," ",,terminal_output
+ 226,995401,"TERMINAL",0,0,"",,terminal_output
+ 227,995876,"TERMINAL",0,0,"l",,terminal_output
+ 228,996199,"TERMINAL",0,0,"",,terminal_output
+ 229,996597,"TERMINAL",0,0,"l",,terminal_output
+ 230,996669,"TERMINAL",0,0,"i",,terminal_output
+ 231,996885,"TERMINAL",0,0,"t",,terminal_output
+ 232,997246,"TERMINAL",0,0,"s",,terminal_output
+ 233,997333,"TERMINAL",0,0,"t",,terminal_output
+ 234,997504,"TERMINAL",0,0," ",,terminal_output
+ 235,997813,"TERMINAL",0,0,"\r\n[?2004l\rNo modules loaded\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[franz.srambical@hai005.haicore.berlin:~/jafar] $ ",,terminal_output
+ 236,1000434,"TERMINAL",0,0,"module list ",,terminal_output
+ 237,1001046,"TERMINAL",0,0,"",,terminal_output
+ 238,1001216,"TERMINAL",0,0,"",,terminal_output
+ 239,1001368,"TERMINAL",0,0,"",,terminal_output
+ 240,1001521,"TERMINAL",0,0,"",,terminal_output
+ 241,1001685,"TERMINAL",0,0,"",,terminal_output
+ 242,1001849,"TERMINAL",0,0,"s",,terminal_output
+ 243,1001964,"TERMINAL",0,0,"p",,terminal_output
+ 244,1002024,"TERMINAL",0,0,"i",,terminal_output
+ 245,1002120,"TERMINAL",0,0,"d",,terminal_output
+ 246,1002330,"TERMINAL",0,0,"e",,terminal_output
+ 247,1002404,"TERMINAL",0,0,"r",,terminal_output
+ 248,1002594,"TERMINAL",0,0," ",,terminal_output
+ 249,1004453,"TERMINAL",0,0,"n",,terminal_output
+ 250,1004545,"TERMINAL",0,0,"s",,terminal_output
+ 251,1004621,"TERMINAL",0,0,"i",,terminal_output
+ 252,1004704,"TERMINAL",0,0,"g",,terminal_output
+ 253,1004804,"TERMINAL",0,0,"h",,terminal_output
+ 254,1004899,"TERMINAL",0,0,"t",,terminal_output
+ 255,1004994,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
+ 256,1006202,"TERMINAL",0,0,"\r\n----------------------------------------------------------------------------------------------------------------\r\n insight: insight/0.20.5 (E)\r\n----------------------------------------------------------------------------------------------------------------\r\n This extension is provided by the following modules. To access the extension you must load one of the following \r\nmodules. Note that any module names in parentheses show the module location in the software hierarchy.\r\n\r\n\r\n R-bundle-CRAN/2024.11-foss-2024a\r\n\r\n\r\nNames marked by a trailing (E) are extensions provided by another module.\r\n\r\n\r\n\r\n \r\n\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[franz.srambical@hai005.haicore.berlin:~/jafar] $ ",,terminal_output
+ 257,1021117,"TERMINAL",0,0,"mo",,terminal_output
+ 258,1021320,"TERMINAL",0,0,"d",,terminal_output
+ 259,1021376,"TERMINAL",0,0,"u",,terminal_output
+ 260,1021517,"TERMINAL",0,0,"l",,terminal_output
+ 261,1021569,"TERMINAL",0,0,"e",,terminal_output
+ 262,1021691,"TERMINAL",0,0," ",,terminal_output
+ 263,1021831,"TERMINAL",0,0,"l",,terminal_output
+ 264,1021979,"TERMINAL",0,0,"o",,terminal_output
+ 265,1022033,"TERMINAL",0,0,"a",,terminal_output
+ 266,1022113,"TERMINAL",0,0,"d",,terminal_output
+ 267,1022183,"TERMINAL",0,0," ",,terminal_output
+ 268,1022297,"TERMINAL",0,0,"n",,terminal_output
+ 269,1022373,"TERMINAL",0,0,"s",,terminal_output
+ 270,1022465,"TERMINAL",0,0,"i",,terminal_output
+ 271,1022552,"TERMINAL",0,0,"g",,terminal_output
+ 272,1022649,"TERMINAL",0,0,"h",,terminal_output
+ 273,1022777,"TERMINAL",0,0,"t",,terminal_output
+ 274,1023018,"TERMINAL",0,0,"_",,terminal_output
+ 275,1023253,"TERMINAL",0,0,"oc",,terminal_output
+ 276,1023589,"TERMINAL",0,0,"",,terminal_output
+ 277,1023704,"TERMINAL",0,0,"",,terminal_output
+ 278,1023778,"TERMINAL",0,0,"c",,terminal_output
+ 279,1023900,"TERMINAL",0,0,"o",,terminal_output
+ 280,1024023,"TERMINAL",0,0,"m",,terminal_output
+ 281,1024155,"TERMINAL",0,0,"p",,terminal_output
+ 282,1024259,"TERMINAL",0,0,"u",,terminal_output
+ 283,1024395,"TERMINAL",0,0,"te",,terminal_output
+ 284,1024488,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
+ 285,1025856,"TERMINAL",0,0,"Lmod has detected the following error:  The following module(s) are unknown: ""nsight_compute""\r\n\r\nPlease check the spelling or version number. Also try ""module spider ...""\r\nIt is also possible your cache file is out-of-date; it may help to try:\r\n $ module --ignore_cache load ""nsight_compute""\r\n\r\nAlso make sure that all modulefiles written in TCL start with the string #%Module\r\n\r\n\r\n\r\n]0;franz.srambical@hai-login2:~/jafar[?2004h[franz.srambical@hai005.haicore.berlin:~/jafar] $ ",,terminal_output
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-91d7eee6-8ee7-4f50-ba4c-546c6e0f99451755438444066-2025_08_17-15.47.30.74/source.csv ADDED
@@ -0,0 +1,24 @@
+ Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
+ 1,3,"train_lam.py",0,0,"from dataclasses import dataclass, field\nimport os\nfrom typing import cast\n\nimport einops\nfrom jax.sharding import Mesh, PartitionSpec, NamedSharding\nfrom jax.experimental.mesh_utils import create_device_mesh\nimport optax\nimport orbax.checkpoint as ocp\nimport numpy as np\nimport dm_pix as pix\nimport jax\nimport jax.numpy as jnp\nimport tyro\nimport wandb\nimport grain\nimport flax.nnx as nnx\n\nfrom models.lam import LatentActionModel\nfrom utils.dataloader import get_dataloader\nfrom utils.lr_utils import get_lr_schedule\nfrom utils.parameter_utils import count_parameters_by_component\n\n\n@dataclass\nclass Args:\n # Experiment\n num_steps: int = 200_000\n seed: int = 0\n seq_len: int = 16\n image_channels: int = 3\n image_height: int = 90\n image_width: int = 160\n data_dir: str = """"\n save_ckpt: bool = False\n restore_ckpt: bool = False\n # Optimization\n batch_size: int = 36\n vq_beta: float = 0.25\n init_lr: float = 0.0\n max_lr: float = 3e-5\n decay_end: float = 0.0\n wsd_decay_steps: int = (\n 10000 # NOTE: wsd_decay_steps will only be used when using a wsd-schedule\n )\n warmup_steps: int = 5000\n lr_schedule: str = ""wsd"" # supported options: wsd, cos\n vq_reset_thresh: int = 50\n # LAM\n model_dim: int = 512\n ffn_dim: int = 2048\n latent_dim: int = 32\n num_latents: int = 6\n patch_size: int = 16\n num_blocks: int = 4\n num_heads: int = 8\n dropout: float = 0.0\n codebook_dropout: float = 0.0\n param_dtype = jnp.float32\n dtype = jnp.bfloat16\n # Logging\n log: bool = False\n entity: str = """"\n project: str = """"\n name: str = ""train_lam""\n tags: list[str] = field(default_factory=lambda: [""lam""])\n log_interval: int = 5\n log_image_interval: int = 250\n ckpt_dir: str = """"\n log_checkpoint_interval: int = 10000\n log_checkpoint_keep_period: int = 20000\n wandb_id: str = """"\n use_flash_attention: bool = True\n\n\nargs = tyro.cli(Args)\n\n\ndef lam_loss_fn(\n model: LatentActionModel, inputs: dict\n) -> 
tuple[jax.Array, tuple[jax.Array, jax.Array, dict]]:\n # --- Compute loss ---\n gt = jnp.asarray(inputs[""videos""], dtype=jnp.float32) / 255.0\n inputs[""videos""] = gt.astype(args.dtype)\n model.train()\n outputs = model(inputs, training=True)\n outputs[""recon""] = outputs[""recon""].astype(jnp.float32)\n gt_future_frames = gt[:, 1:]\n mse = jnp.square(gt_future_frames - outputs[""recon""]).mean()\n q_loss = jnp.square(jax.lax.stop_gradient(outputs[""emb""]) - outputs[""z""]).mean()\n commitment_loss = jnp.square(\n outputs[""emb""] - jax.lax.stop_gradient(outputs[""z""])\n ).mean()\n loss = mse + q_loss + args.vq_beta * commitment_loss\n\n # --- Compute validation metrics ---\n gt = gt_future_frames.clip(0, 1).reshape(-1, *gt_future_frames.shape[2:])\n recon = outputs[""recon""].clip(0, 1).reshape(-1, *outputs[""recon""].shape[2:])\n psnr = jnp.asarray(pix.psnr(gt, recon)).mean()\n ssim = jnp.asarray(pix.ssim(gt, recon)).mean()\n count_fn = jax.vmap(lambda i: (outputs[""indices""] == i).sum())\n index_counts = count_fn(jnp.arange(args.num_latents))\n metrics = dict(\n loss=loss,\n mse=mse,\n q_loss=q_loss,\n commitment_loss=commitment_loss,\n psnr=psnr,\n ssim=ssim,\n codebook_usage=(index_counts != 0).mean(),\n )\n return loss, (outputs[""recon""], index_counts, metrics)\n\n\n@nnx.jit\ndef train_step(\n lam: LatentActionModel,\n optimizer: nnx.Optimizer,\n inputs: dict,\n action_last_active: jax.Array,\n rng: jax.Array,\n) -> tuple[jax.Array, jax.Array, jax.Array, dict]:\n def loss_fn(\n model: LatentActionModel,\n ) -> tuple[jax.Array, tuple[jax.Array, jax.Array, dict]]:\n return lam_loss_fn(model, inputs)\n\n # --- Update model ---\n (loss, (recon, idx_counts, metrics)), grads = nnx.value_and_grad(\n loss_fn, has_aux=True\n )(lam)\n optimizer.update(grads)\n\n # --- Reset inactive latent actions ---\n codebook = lam.vq.codebook\n num_codes = len(codebook)\n active_codes = idx_counts != 0.0\n action_last_active = jnp.where(active_codes, 0, action_last_active 
+ 1)\n p_code = active_codes / active_codes.sum()\n reset_idxs = jax.random.choice(rng, num_codes, shape=(num_codes,), p=p_code)\n do_reset = action_last_active >= args.vq_reset_thresh\n new_codebook = jnp.where(\n jnp.expand_dims(do_reset, -1), codebook[reset_idxs], codebook.value\n )\n lam.vq.codebook.value = new_codebook\n action_last_active = jnp.where(do_reset, 0, action_last_active)\n return loss, recon, action_last_active, metrics\n\n\nif __name__ == ""__main__"":\n jax.distributed.initialize()\n num_devices = jax.device_count()\n if num_devices == 0:\n raise ValueError(""No JAX devices found."")\n print(f""Running on {num_devices} devices."")\n\n if args.batch_size % num_devices != 0:\n raise ValueError(\n f""Global batch size {args.batch_size} must be divisible by ""\n f""number of devices {num_devices}.""\n )\n\n per_device_batch_size_for_init = args.batch_size // num_devices\n\n rng = jax.random.key(args.seed)\n\n # --- Initialize model ---\n rng, _rng = jax.random.split(rng)\n rngs = nnx.Rngs(_rng)\n lam = LatentActionModel(\n in_dim=args.image_channels,\n model_dim=args.model_dim,\n ffn_dim=args.ffn_dim,\n latent_dim=args.latent_dim,\n num_latents=args.num_latents,\n patch_size=args.patch_size,\n num_blocks=args.num_blocks,\n num_heads=args.num_heads,\n dropout=args.dropout,\n codebook_dropout=args.codebook_dropout,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n rngs=rngs,\n )\n\n # Count parameters\n _, params, _ = nnx.split(lam, nnx.Param, ...)\n param_counts = count_parameters_by_component(params)\n\n if args.log and jax.process_index() == 0:\n wandb_init_kwargs = {\n ""entity"": args.entity,\n ""project"": args.project,\n ""name"": args.name,\n ""tags"": args.tags,\n ""group"": ""debug"",\n ""config"": args,\n }\n\n if args.wandb_id:\n wandb_init_kwargs.update(\n {\n ""id"": args.wandb_id,\n ""resume"": ""allow"",\n }\n )\n wandb.init(**wandb_init_kwargs)\n\n 
wandb.config.update({""model_param_count"": param_counts})\n\n print(""Parameter counts:"")\n print(param_counts)\n\n # --- Initialize optimizer ---\n lr_schedule = get_lr_schedule(\n args.lr_schedule,\n args.init_lr,\n args.max_lr,\n args.decay_end,\n args.num_steps,\n args.warmup_steps,\n args.wsd_decay_steps,\n )\n tx = optax.adamw(\n learning_rate=lr_schedule,\n b1=0.9,\n b2=0.9,\n weight_decay=1e-4,\n mu_dtype=args.param_dtype, # moments in full precision\n )\n optimizer = nnx.Optimizer(lam, tx)\n\n # FIXME: switch to create_hybrid_device_mesh for runs spanning multiple nodes\n device_mesh_arr = create_device_mesh((num_devices,))\n mesh = Mesh(devices=device_mesh_arr, axis_names=(""data"",))\n\n replicated_sharding = NamedSharding(mesh, PartitionSpec())\n videos_sharding = NamedSharding(mesh, PartitionSpec(""data"", None, None, None, None))\n\n model_state = nnx.state(optimizer.model)\n model_sharded_state = jax.lax.with_sharding_constraint(\n model_state, replicated_sharding\n )\n nnx.update(optimizer.model, model_sharded_state)\n optimizer_state = nnx.state(optimizer, nnx.optimizer.OptState)\n optimizer_sharded_state = jax.lax.with_sharding_constraint(\n optimizer_state, replicated_sharding\n )\n nnx.update(optimizer, optimizer_sharded_state)\n\n # --- Initialize checkpoint manager ---\n step = 0\n handler_registry = ocp.handlers.DefaultCheckpointHandlerRegistry()\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeSave, ocp.handlers.PyTreeCheckpointHandler\n )\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeRestore, ocp.handlers.PyTreeCheckpointHandler\n )\n handler_registry.add(\n ""dataloader_state"",\n grain.checkpoint.CheckpointSave,\n cast(ocp.handlers.CheckpointHandler, grain.checkpoint.CheckpointHandler),\n )\n handler_registry.add(\n ""dataloader_state"",\n grain.checkpoint.CheckpointRestore,\n cast(ocp.handlers.CheckpointHandler, grain.checkpoint.CheckpointHandler),\n )\n\n checkpoint_options = ocp.CheckpointManagerOptions(\n 
save_interval_steps=args.log_checkpoint_interval,\n max_to_keep=3,\n keep_period=args.log_checkpoint_keep_period,\n step_format_fixed_length=6,\n cleanup_tmp_directories=True,\n )\n\n checkpoint_manager = ocp.CheckpointManager(\n args.ckpt_dir,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n\n # --- Create DataLoaderIterator from dataloader ---\n image_shape = (args.image_height, args.image_width, args.image_channels)\n array_record_files = [\n os.path.join(args.data_dir, x)\n for x in os.listdir(args.data_dir)\n if x.endswith("".array_record"")\n ]\n grain_dataloader = get_dataloader(\n array_record_files,\n args.seq_len,\n # NOTE: We deliberately pass the global batch size\n # The dataloader shards the dataset across all processes\n args.batch_size,\n *image_shape,\n num_workers=8,\n prefetch_buffer_size=1,\n seed=args.seed,\n )\n initial_state = grain_dataloader._create_initial_state()\n grain_iterator = grain.DataLoaderIterator(grain_dataloader, initial_state)\n\n # --- Restore checkpoint ---\n if args.restore_ckpt:\n abstract_optimizer = nnx.eval_shape(lambda: optimizer)\n abstract_optimizer_state = nnx.state(abstract_optimizer)\n restored = checkpoint_manager.restore(\n checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(abstract_optimizer_state), # type: ignore\n dataloader_state=grain.checkpoint.CheckpointRestore(grain_iterator), # type: ignore\n ),\n )\n restored_optimizer_state = restored[""model_state""]\n nnx.update(optimizer, restored_optimizer_state)\n grain_iterator = restored[""dataloader_state""]\n step = checkpoint_manager.latest_step() or 0\n print(f""Restored dataloader and model state from step {step}"")\n\n # --- TRAIN LOOP ---\n dataloader = (\n jax.make_array_from_process_local_data(videos_sharding, elem)\n for elem in grain_iterator\n )\n print(f""Starting training from step {step}..."")\n action_last_active = jnp.zeros(args.num_latents, dtype=jnp.int32)\n while step < 
args.num_steps:\n for videos in dataloader:\n # --- Train step ---\n rng, _rng = jax.random.split(rng)\n\n inputs = dict(videos=videos, rng=_rng)\n rng, _rng = jax.random.split(rng)\n loss, recon, action_last_active, metrics = train_step(\n lam, optimizer, inputs, action_last_active, _rng\n )\n metrics[""lr""] = lr_schedule(step)\n print(f""Step {step}, loss: {loss}"")\n step += 1\n\n # --- Logging ---\n if args.log:\n if step % args.log_interval == 0 and jax.process_index() == 0:\n wandb.log(\n {\n ""loss"": loss,\n ""step"": step,\n **metrics,\n }\n )\n if step % args.log_image_interval == 0:\n gt_seq = inputs[""videos""][0, 1:].astype(jnp.float32) / 255.0\n recon_seq = recon[0].clip(0, 1)\n comparison_seq = jnp.concatenate((gt_seq, recon_seq), axis=1)\n comparison_seq = einops.rearrange(\n comparison_seq * 255, ""t h w c -> h (t w) c""\n )\n # NOTE: Process-dependent control flow deliberately happens\n # after indexing operation since it must not contain code\n # sections that lead to cross-accelerator communication.\n if jax.process_index() == 0:\n log_images = dict(\n image=wandb.Image(np.asarray(gt_seq[0])),\n recon=wandb.Image(np.asarray(recon_seq[0])),\n true_vs_recon=wandb.Image(\n np.asarray(comparison_seq.astype(np.uint8))\n ),\n )\n wandb.log(log_images)\n # --- Checkpointing ---\n if args.save_ckpt and step % args.log_checkpoint_interval == 0:\n optimizer_state = nnx.state(optimizer)\n checkpoint_manager.save(\n step,\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeSave(optimizer_state), # type: ignore\n dataloader_state=grain.checkpoint.CheckpointSave( # type: ignore\n grain_iterator # type: ignore\n ),\n ),\n )\n print(f""Saved checkpoint at step {step}"")\n if step >= args.num_steps:\n break\n\n checkpoint_manager.close()\n",python,tab
+ 2,131,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"3:47:30 PM [info] Activating crowd-code\n3:47:30 PM [info] Recording started\n3:47:30 PM [info] Initializing git provider using file system watchers...\n",Log,tab
+ 3,176,"extension-output-pdoom-org.crowd-code-#1-crowd-code",150,0,"3:47:30 PM [info] Git repository found\n3:47:30 PM [info] Git provider initialized successfully\n3:47:30 PM [info] Initial git state: [object Object]\n",Log,content
+ 4,457,"TERMINAL",0,0,"",,terminal_focus
+ 5,627,"train_lam.py",0,0,"",python,tab
+ 6,174755,"train_lam.py",6883,63," mu_dtype=args.dtype,\n",python,content
+ 7,175106,"train_lam.py",0,0,"Switched from branch 'momentum-in-fp32' to 'main'",python,git_branch_checkout
+ 8,186959,"slurm/dev/mihir/horeka/yolo-runs/train_dynamics_new_arch.sbatch",0,0,"#!/usr/bin/env bash\n\n#SBATCH --nodes=2\n#SBATCH --ntasks-per-node=4\n#SBATCH --time=48:00:00\n#SBATCH --partition=accelerated\n#SBATCH --cpus-per-task=5\n#SBATCH --gres=gpu:4\n#SBATCH --output=/hkfs/work/workspace/scratch/tum_ind3695-jafa_ws_shared/logs/logs_mihir/%x_%j.log\n#SBATCH --error=/hkfs/work/workspace/scratch/tum_ind3695-jafa_ws_shared/logs/logs_mihir/%x_%j.log\n#SBATCH --job-name=train_dyn_new_arch-bugfixed-temporal-shift\n\n# Log the sbatch script\ncat $0\n\nmodule unload mpi/openmpi/5.0\nmodule unload devel/cuda/12.4\nsource .venv/bin/activate\n\narray_records_dir=/hkfs/work/workspace/scratch/tum_ind3695-jafa_ws_shared/data_new/open_ai_minecraft_arrayrecords_chunked\n\njob_name=$SLURM_JOB_NAME\nslurm_job_id=$SLURM_JOB_ID\n\nCHECKPOINT_DIR=$ws_dir/checkpoints/$job_name/$slurm_job_id\nmkdir -p $CHECKPOINT_DIR\n\ntokenizer_ckpt_dir=/hkfs/work/workspace/scratch/tum_ind3695-jafa_ws_shared/checkpoints/big-runs/tokenizer-lr-scaling/train_tokenizer_lr_sweep_1e-4\n\nenv | grep SLURM\n\nsrun python train_dynamics.py \\n --save_ckpt \\n --num_steps=50000 \\n --warmup_steps=2500 \\n --wsd_decay_steps=5000 \\n --ckpt_dir $CHECKPOINT_DIR \\n --batch_size=96 \\n --max_lr=1e-4 \\n --log_image_interval=1000 \\n --log \\n --log_checkpoint_interval=1000 \\n --name=dynamics-new-arch-bugfix-temporal-shift-$slurm_job_id \\n --tags dynamics new-arch bug-fix \\n --entity instant-uv \\n --project jafar \\n --tokenizer_checkpoint=$tokenizer_ckpt_dir \\n --data_dir $array_records_dir \\n ",shellscript,tab
+ 9,188008,"train_lam.py",0,0,"from dataclasses import dataclass, field\nimport os\nfrom typing import cast\n\nimport einops\nfrom jax.sharding import Mesh, PartitionSpec, NamedSharding\nfrom jax.experimental.mesh_utils import create_device_mesh\nimport optax\nimport orbax.checkpoint as ocp\nimport numpy as np\nimport dm_pix as pix\nimport jax\nimport jax.numpy as jnp\nimport tyro\nimport wandb\nimport grain\nimport flax.nnx as nnx\n\nfrom models.lam import LatentActionModel\nfrom utils.dataloader import get_dataloader\nfrom utils.lr_utils import get_lr_schedule\nfrom utils.parameter_utils import count_parameters_by_component\n\n\n@dataclass\nclass Args:\n # Experiment\n num_steps: int = 200_000\n seed: int = 0\n seq_len: int = 16\n image_channels: int = 3\n image_height: int = 90\n image_width: int = 160\n data_dir: str = """"\n save_ckpt: bool = False\n restore_ckpt: bool = False\n # Optimization\n batch_size: int = 36\n vq_beta: float = 0.25\n init_lr: float = 0.0\n max_lr: float = 3e-5\n decay_end: float = 0.0\n wsd_decay_steps: int = (\n 10000 # NOTE: wsd_decay_steps will only be used when using a wsd-schedule\n )\n warmup_steps: int = 5000\n lr_schedule: str = ""wsd"" # supported options: wsd, cos\n vq_reset_thresh: int = 50\n # LAM\n model_dim: int = 512\n ffn_dim: int = 2048\n latent_dim: int = 32\n num_latents: int = 6\n patch_size: int = 16\n num_blocks: int = 4\n num_heads: int = 8\n dropout: float = 0.0\n codebook_dropout: float = 0.0\n param_dtype = jnp.float32\n dtype = jnp.bfloat16\n # Logging\n log: bool = False\n entity: str = """"\n project: str = """"\n name: str = ""train_lam""\n tags: list[str] = field(default_factory=lambda: [""lam""])\n log_interval: int = 5\n log_image_interval: int = 250\n ckpt_dir: str = """"\n log_checkpoint_interval: int = 10000\n log_checkpoint_keep_period: int = 20000\n wandb_id: str = """"\n use_flash_attention: bool = True\n\n\nargs = tyro.cli(Args)\n\n\ndef lam_loss_fn(\n model: LatentActionModel, inputs: dict\n) -> 
tuple[jax.Array, tuple[jax.Array, jax.Array, dict]]:\n # --- Compute loss ---\n gt = jnp.asarray(inputs[""videos""], dtype=jnp.float32) / 255.0\n inputs[""videos""] = gt.astype(args.dtype)\n model.train()\n outputs = model(inputs, training=True)\n outputs[""recon""] = outputs[""recon""].astype(jnp.float32)\n gt_future_frames = gt[:, 1:]\n mse = jnp.square(gt_future_frames - outputs[""recon""]).mean()\n q_loss = jnp.square(jax.lax.stop_gradient(outputs[""emb""]) - outputs[""z""]).mean()\n commitment_loss = jnp.square(\n outputs[""emb""] - jax.lax.stop_gradient(outputs[""z""])\n ).mean()\n loss = mse + q_loss + args.vq_beta * commitment_loss\n\n # --- Compute validation metrics ---\n gt = gt_future_frames.clip(0, 1).reshape(-1, *gt_future_frames.shape[2:])\n recon = outputs[""recon""].clip(0, 1).reshape(-1, *outputs[""recon""].shape[2:])\n psnr = jnp.asarray(pix.psnr(gt, recon)).mean()\n ssim = jnp.asarray(pix.ssim(gt, recon)).mean()\n count_fn = jax.vmap(lambda i: (outputs[""indices""] == i).sum())\n index_counts = count_fn(jnp.arange(args.num_latents))\n metrics = dict(\n loss=loss,\n mse=mse,\n q_loss=q_loss,\n commitment_loss=commitment_loss,\n psnr=psnr,\n ssim=ssim,\n codebook_usage=(index_counts != 0).mean(),\n )\n return loss, (outputs[""recon""], index_counts, metrics)\n\n\n@nnx.jit\ndef train_step(\n lam: LatentActionModel,\n optimizer: nnx.Optimizer,\n inputs: dict,\n action_last_active: jax.Array,\n rng: jax.Array,\n) -> tuple[jax.Array, jax.Array, jax.Array, dict]:\n def loss_fn(\n model: LatentActionModel,\n ) -> tuple[jax.Array, tuple[jax.Array, jax.Array, dict]]:\n return lam_loss_fn(model, inputs)\n\n # --- Update model ---\n (loss, (recon, idx_counts, metrics)), grads = nnx.value_and_grad(\n loss_fn, has_aux=True\n )(lam)\n optimizer.update(grads)\n\n # --- Reset inactive latent actions ---\n codebook = lam.vq.codebook\n num_codes = len(codebook)\n active_codes = idx_counts != 0.0\n action_last_active = jnp.where(active_codes, 0, action_last_active 
+ 1)\n p_code = active_codes / active_codes.sum()\n reset_idxs = jax.random.choice(rng, num_codes, shape=(num_codes,), p=p_code)\n do_reset = action_last_active >= args.vq_reset_thresh\n new_codebook = jnp.where(\n jnp.expand_dims(do_reset, -1), codebook[reset_idxs], codebook.value\n )\n lam.vq.codebook.value = new_codebook\n action_last_active = jnp.where(do_reset, 0, action_last_active)\n return loss, recon, action_last_active, metrics\n\n\nif __name__ == ""__main__"":\n jax.distributed.initialize()\n num_devices = jax.device_count()\n if num_devices == 0:\n raise ValueError(""No JAX devices found."")\n print(f""Running on {num_devices} devices."")\n\n if args.batch_size % num_devices != 0:\n raise ValueError(\n f""Global batch size {args.batch_size} must be divisible by ""\n f""number of devices {num_devices}.""\n )\n\n per_device_batch_size_for_init = args.batch_size // num_devices\n\n rng = jax.random.key(args.seed)\n\n # --- Initialize model ---\n rng, _rng = jax.random.split(rng)\n rngs = nnx.Rngs(_rng)\n lam = LatentActionModel(\n in_dim=args.image_channels,\n model_dim=args.model_dim,\n ffn_dim=args.ffn_dim,\n latent_dim=args.latent_dim,\n num_latents=args.num_latents,\n patch_size=args.patch_size,\n num_blocks=args.num_blocks,\n num_heads=args.num_heads,\n dropout=args.dropout,\n codebook_dropout=args.codebook_dropout,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n rngs=rngs,\n )\n\n # Count parameters\n _, params, _ = nnx.split(lam, nnx.Param, ...)\n param_counts = count_parameters_by_component(params)\n\n if args.log and jax.process_index() == 0:\n wandb_init_kwargs = {\n ""entity"": args.entity,\n ""project"": args.project,\n ""name"": args.name,\n ""tags"": args.tags,\n ""group"": ""debug"",\n ""config"": args,\n }\n\n if args.wandb_id:\n wandb_init_kwargs.update(\n {\n ""id"": args.wandb_id,\n ""resume"": ""allow"",\n }\n )\n wandb.init(**wandb_init_kwargs)\n\n 
wandb.config.update({""model_param_count"": param_counts})\n\n print(""Parameter counts:"")\n print(param_counts)\n\n # --- Initialize optimizer ---\n lr_schedule = get_lr_schedule(\n args.lr_schedule,\n args.init_lr,\n args.max_lr,\n args.decay_end,\n args.num_steps,\n args.warmup_steps,\n args.wsd_decay_steps,\n )\n tx = optax.adamw(\n learning_rate=lr_schedule,\n b1=0.9,\n b2=0.9,\n weight_decay=1e-4,\n mu_dtype=args.dtype,\n )\n optimizer = nnx.Optimizer(lam, tx)\n\n # FIXME: switch to create_hybrid_device_mesh for runs spanning multiple nodes\n device_mesh_arr = create_device_mesh((num_devices,))\n mesh = Mesh(devices=device_mesh_arr, axis_names=(""data"",))\n\n replicated_sharding = NamedSharding(mesh, PartitionSpec())\n videos_sharding = NamedSharding(mesh, PartitionSpec(""data"", None, None, None, None))\n\n model_state = nnx.state(optimizer.model)\n model_sharded_state = jax.lax.with_sharding_constraint(\n model_state, replicated_sharding\n )\n nnx.update(optimizer.model, model_sharded_state)\n optimizer_state = nnx.state(optimizer, nnx.optimizer.OptState)\n optimizer_sharded_state = jax.lax.with_sharding_constraint(\n optimizer_state, replicated_sharding\n )\n nnx.update(optimizer, optimizer_sharded_state)\n\n # --- Initialize checkpoint manager ---\n step = 0\n handler_registry = ocp.handlers.DefaultCheckpointHandlerRegistry()\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeSave, ocp.handlers.PyTreeCheckpointHandler\n )\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeRestore, ocp.handlers.PyTreeCheckpointHandler\n )\n handler_registry.add(\n ""dataloader_state"",\n grain.checkpoint.CheckpointSave,\n cast(ocp.handlers.CheckpointHandler, grain.checkpoint.CheckpointHandler),\n )\n handler_registry.add(\n ""dataloader_state"",\n grain.checkpoint.CheckpointRestore,\n cast(ocp.handlers.CheckpointHandler, grain.checkpoint.CheckpointHandler),\n )\n\n checkpoint_options = ocp.CheckpointManagerOptions(\n 
save_interval_steps=args.log_checkpoint_interval,\n max_to_keep=3,\n keep_period=args.log_checkpoint_keep_period,\n step_format_fixed_length=6,\n cleanup_tmp_directories=True,\n )\n\n checkpoint_manager = ocp.CheckpointManager(\n args.ckpt_dir,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n\n # --- Create DataLoaderIterator from dataloader ---\n image_shape = (args.image_height, args.image_width, args.image_channels)\n array_record_files = [\n os.path.join(args.data_dir, x)\n for x in os.listdir(args.data_dir)\n if x.endswith("".array_record"")\n ]\n grain_dataloader = get_dataloader(\n array_record_files,\n args.seq_len,\n # NOTE: We deliberately pass the global batch size\n # The dataloader shards the dataset across all processes\n args.batch_size,\n *image_shape,\n num_workers=8,\n prefetch_buffer_size=1,\n seed=args.seed,\n )\n initial_state = grain_dataloader._create_initial_state()\n grain_iterator = grain.DataLoaderIterator(grain_dataloader, initial_state)\n\n # --- Restore checkpoint ---\n if args.restore_ckpt:\n abstract_optimizer = nnx.eval_shape(lambda: optimizer)\n abstract_optimizer_state = nnx.state(abstract_optimizer)\n restored = checkpoint_manager.restore(\n checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(abstract_optimizer_state), # type: ignore\n dataloader_state=grain.checkpoint.CheckpointRestore(grain_iterator), # type: ignore\n ),\n )\n restored_optimizer_state = restored[""model_state""]\n nnx.update(optimizer, restored_optimizer_state)\n grain_iterator = restored[""dataloader_state""]\n step = checkpoint_manager.latest_step() or 0\n print(f""Restored dataloader and model state from step {step}"")\n\n # --- TRAIN LOOP ---\n dataloader = (\n jax.make_array_from_process_local_data(videos_sharding, elem)\n for elem in grain_iterator\n )\n print(f""Starting training from step {step}..."")\n action_last_active = jnp.zeros(args.num_latents, dtype=jnp.int32)\n while step < 
args.num_steps:\n for videos in dataloader:\n # --- Train step ---\n rng, _rng = jax.random.split(rng)\n\n inputs = dict(videos=videos, rng=_rng)\n rng, _rng = jax.random.split(rng)\n loss, recon, action_last_active, metrics = train_step(\n lam, optimizer, inputs, action_last_active, _rng\n )\n metrics[""lr""] = lr_schedule(step)\n print(f""Step {step}, loss: {loss}"")\n step += 1\n\n # --- Logging ---\n if args.log:\n if step % args.log_interval == 0 and jax.process_index() == 0:\n wandb.log(\n {\n ""loss"": loss,\n ""step"": step,\n **metrics,\n }\n )\n if step % args.log_image_interval == 0:\n gt_seq = inputs[""videos""][0, 1:].astype(jnp.float32) / 255.0\n recon_seq = recon[0].clip(0, 1)\n comparison_seq = jnp.concatenate((gt_seq, recon_seq), axis=1)\n comparison_seq = einops.rearrange(\n comparison_seq * 255, ""t h w c -> h (t w) c""\n )\n # NOTE: Process-dependent control flow deliberately happens\n # after indexing operation since it must not contain code\n # sections that lead to cross-accelerator communication.\n if jax.process_index() == 0:\n log_images = dict(\n image=wandb.Image(np.asarray(gt_seq[0])),\n recon=wandb.Image(np.asarray(recon_seq[0])),\n true_vs_recon=wandb.Image(\n np.asarray(comparison_seq.astype(np.uint8))\n ),\n )\n wandb.log(log_images)\n # --- Checkpointing ---\n if args.save_ckpt and step % args.log_checkpoint_interval == 0:\n optimizer_state = nnx.state(optimizer)\n checkpoint_manager.save(\n step,\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeSave(optimizer_state), # type: ignore\n dataloader_state=grain.checkpoint.CheckpointSave( # type: ignore\n grain_iterator # type: ignore\n ),\n ),\n )\n print(f""Saved checkpoint at step {step}"")\n if step >= args.num_steps:\n break\n\n checkpoint_manager.close()\n",python,tab
+ 10,188073,"train_lam.py",583,0,"jax.config.update(""jax_transfer_guard"", ""disallow"")\n",python,content
12
+ 11,193582,"train_dynamics.py",0,0,"from dataclasses import dataclass, field\nimport os\nfrom typing import cast\n\nimport einops\nfrom jax.sharding import Mesh, PartitionSpec, NamedSharding\nfrom jax.experimental.mesh_utils import create_device_mesh\nimport optax\nimport orbax.checkpoint as ocp\nimport numpy as np\nimport dm_pix as pix\nimport jax\nimport jax.numpy as jnp\nimport tyro\nimport wandb\nimport grain\nimport flax.nnx as nnx\n\nfrom genie import Genie, restore_genie_components\nfrom utils.dataloader import get_dataloader\nfrom utils.lr_utils import get_lr_schedule\nfrom utils.parameter_utils import count_parameters_by_component\n\n\n@dataclass\nclass Args:\n # Experiment\n num_steps: int = 200_000\n seed: int = 0\n seq_len: int = 16\n image_channels: int = 3\n image_height: int = 90\n image_width: int = 160\n data_dir: str = """"\n save_ckpt: bool = False\n restore_ckpt: bool = False\n # Optimization\n batch_size: int = 36\n init_lr: float = 0.0\n max_lr: float = 3e-5\n decay_end: float = 0.0\n wsd_decay_steps: int = (\n 10000 # NOTE: wsd_decay_steps will only be used when using a wsd-schedule\n )\n warmup_steps: int = 5000\n lr_schedule: str = ""wsd"" # supported options: wsd, cos\n # Tokenizer\n tokenizer_dim: int = 512\n tokenizer_ffn_dim: int = 2048\n latent_patch_dim: int = 32\n num_patch_latents: int = 1024\n patch_size: int = 4\n tokenizer_num_blocks: int = 4\n tokenizer_num_heads: int = 8\n tokenizer_checkpoint: str = """"\n # LAM\n lam_dim: int = 512\n lam_ffn_dim: int = 2048\n latent_action_dim: int = 32\n num_latent_actions: int = 6\n lam_patch_size: int = 16\n lam_num_blocks: int = 4\n lam_num_heads: int = 8\n lam_checkpoint: str = """"\n # Dynamics\n dyna_type: str = ""maskgit"" # supported options: maskgit, causal\n dyna_dim: int = 512\n dyna_ffn_dim: int = 2048\n dyna_num_blocks: int = 6\n dyna_num_heads: int = 8\n dropout: float = 0.0\n mask_limit: float = 0.5\n param_dtype = jnp.float32\n dtype = jnp.bfloat16\n use_flash_attention: 
bool = True\n # Logging\n log: bool = False\n entity: str = """"\n project: str = """"\n name: str = ""train_dynamics""\n tags: list[str] = field(default_factory=lambda: [""dynamics""])\n log_interval: int = 5\n log_image_interval: int = 250\n ckpt_dir: str = """"\n log_checkpoint_interval: int = 25000\n log_checkpoint_keep_period: int = 20000\n log_gradients: bool = False\n wandb_id: str = """"\n\n\nargs = tyro.cli(Args)\n\n\ndef dynamics_loss_fn(\n model: Genie, inputs: dict\n) -> tuple[jax.Array, tuple[jax.Array, dict]]:\n """"""Compute masked dynamics loss""""""\n # gt = jnp.asarray(inputs[""videos""], dtype=jnp.float32) / 255.0\n # inputs[""videos""] = gt.astype(args.dtype)\n model.train()\n outputs = model(inputs, training=True)\n mask = outputs[""mask""]\n outputs[""token_logits""] = outputs[""token_logits""].astype(jnp.float32)\n ce_loss = optax.softmax_cross_entropy_with_integer_labels(\n outputs[""token_logits""], outputs[""video_tokens""]\n )\n ce_loss = (mask * ce_loss).sum() / mask.sum()\n acc = outputs[""token_logits""].argmax(-1) == outputs[""video_tokens""]\n acc = (mask * acc).sum() / mask.sum()\n select_probs = jax.nn.softmax(outputs[""token_logits""])\n # gt = gt.clip(0, 1).reshape(-1, *gt.shape[2:])\n # recon = outputs[""recon""].clip(0, 1).reshape(-1, *outputs[""recon""].shape[2:])\n # psnr = jnp.asarray(pix.psnr(gt, recon)).mean()\n # ssim = jnp.asarray(pix.ssim(gt, recon)).mean()\n # _, index_counts_lam = jnp.unique_counts(\n # jnp.ravel(outputs[""lam_indices""]), size=args.num_latent_actions, fill_value=0\n # )\n _, index_counts_tokenizer = jnp.unique_counts(\n jnp.ravel(outputs[""video_tokens""]), size=args.num_patch_latents, fill_value=0\n )\n # codebook_usage_lam = (index_counts_lam != 0).mean()\n codebook_usage_tokenizer = (index_counts_tokenizer != 0).mean()\n metrics = dict(\n cross_entropy_loss=ce_loss,\n masked_token_accuracy=acc,\n select_logit=outputs[""token_logits""].max(-1).mean(),\n select_p=select_probs.max(-1).mean(),\n 
entropy=jax.scipy.special.entr(select_probs).sum(-1).mean(),\n # psnr=psnr,\n # ssim=ssim,\n # codebook_usage_lam=codebook_usage_lam,\n codebook_usage_tokenizer=codebook_usage_tokenizer,\n )\n return ce_loss, (None, metrics)\n\n\n@nnx.jit\ndef train_step(\n model: Genie, optimizer: nnx.Optimizer, inputs: dict\n) -> tuple[jax.Array, jax.Array, dict]:\n """"""Update state and compute metrics""""""\n\n def loss_fn(model: Genie) -> tuple[jax.Array, tuple[jax.Array, dict]]:\n return dynamics_loss_fn(model, inputs)\n\n (loss, (recon, metrics)), grads = nnx.value_and_grad(loss_fn, has_aux=True)(model)\n optimizer.update(grads)\n if args.log_gradients:\n metrics[""gradients_std/""] = jax.tree.map(\n lambda x: x.std(), grads[""params""][""dynamics""]\n )\n return loss, recon, metrics\n\n\nif __name__ == ""__main__"":\n jax.distributed.initialize()\n num_devices = jax.device_count()\n if num_devices == 0:\n raise ValueError(""No JAX devices found."")\n print(f""Running on {num_devices} devices."")\n\n if args.batch_size % num_devices != 0:\n raise ValueError(\n f""Global batch size {args.batch_size} must be divisible by ""\n f""number of devices {num_devices}.""\n )\n\n per_device_batch_size_for_init = args.batch_size // num_devices\n\n rng = jax.random.key(args.seed)\n\n # --- Initialize model ---\n rng, _rng = jax.random.split(rng)\n rngs = nnx.Rngs(_rng)\n genie = Genie(\n # Tokenizer\n in_dim=args.image_channels,\n tokenizer_dim=args.tokenizer_dim,\n tokenizer_ffn_dim=args.tokenizer_ffn_dim,\n latent_patch_dim=args.latent_patch_dim,\n num_patch_latents=args.num_patch_latents,\n patch_size=args.patch_size,\n tokenizer_num_blocks=args.tokenizer_num_blocks,\n tokenizer_num_heads=args.tokenizer_num_heads,\n # LAM\n lam_dim=args.lam_dim,\n lam_ffn_dim=args.lam_ffn_dim,\n latent_action_dim=args.latent_action_dim,\n num_latent_actions=args.num_latent_actions,\n lam_patch_size=args.lam_patch_size,\n lam_num_blocks=args.lam_num_blocks,\n lam_num_heads=args.lam_num_heads,\n 
lam_co_train=not args.lam_checkpoint,\n # Dynamics\n dyna_type=args.dyna_type,\n dyna_dim=args.dyna_dim,\n dyna_ffn_dim=args.dyna_ffn_dim,\n dyna_num_blocks=args.dyna_num_blocks,\n dyna_num_heads=args.dyna_num_heads,\n dropout=args.dropout,\n mask_limit=args.mask_limit,\n param_dtype=args.param_dtype,\n dtype=args.dtype,\n use_flash_attention=args.use_flash_attention,\n decode=False,\n rngs=rngs,\n )\n\n _, params, _ = nnx.split(genie, nnx.Param, ...)\n param_counts = count_parameters_by_component(params)\n\n if args.log and jax.process_index() == 0:\n wandb_init_kwargs = {\n ""entity"": args.entity,\n ""project"": args.project,\n ""name"": args.name,\n ""tags"": args.tags,\n ""group"": ""debug"",\n ""config"": args,\n }\n\n if args.wandb_id:\n wandb_init_kwargs.update(\n {\n ""id"": args.wandb_id,\n ""resume"": ""allow"",\n }\n )\n wandb.init(**wandb_init_kwargs)\n\n wandb.config.update({""model_param_count"": param_counts})\n\n print(""Parameter counts:"")\n print(param_counts)\n\n # --- Initialize optimizer ---\n lr_schedule = get_lr_schedule(\n args.lr_schedule,\n args.init_lr,\n args.max_lr,\n args.decay_end,\n args.num_steps,\n args.warmup_steps,\n args.wsd_decay_steps,\n )\n tx = optax.adamw(\n learning_rate=lr_schedule,\n b1=0.9,\n b2=0.9,\n weight_decay=1e-4,\n mu_dtype=args.dtype,\n )\n optimizer = nnx.Optimizer(genie, tx)\n del genie\n\n # FIXME: switch to create_hybrid_device_mesh for runs spanning multiple nodes\n device_mesh_arr = create_device_mesh((num_devices,))\n mesh = Mesh(devices=device_mesh_arr, axis_names=(""data"",))\n\n replicated_sharding = NamedSharding(mesh, PartitionSpec())\n videos_sharding = NamedSharding(mesh, PartitionSpec(""data"", None, None, None, None))\n\n model_state = nnx.state(optimizer.model)\n model_sharded_state = jax.lax.with_sharding_constraint(\n model_state, replicated_sharding\n )\n nnx.update(optimizer.model, model_sharded_state)\n optimizer_state = nnx.state(optimizer, nnx.optimizer.OptState)\n 
optimizer_sharded_state = jax.lax.with_sharding_constraint(\n optimizer_state, replicated_sharding\n )\n nnx.update(optimizer, optimizer_sharded_state)\n\n # --- Initialize checkpoint manager ---\n step = 0\n handler_registry = ocp.handlers.DefaultCheckpointHandlerRegistry()\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeSave, ocp.handlers.PyTreeCheckpointHandler\n )\n handler_registry.add(\n ""model_state"", ocp.args.PyTreeRestore, ocp.handlers.PyTreeCheckpointHandler\n )\n handler_registry.add(\n ""dataloader_state"",\n grain.checkpoint.CheckpointSave,\n cast(ocp.handlers.CheckpointHandler, grain.checkpoint.CheckpointHandler),\n )\n handler_registry.add(\n ""dataloader_state"",\n grain.checkpoint.CheckpointRestore,\n cast(ocp.handlers.CheckpointHandler, grain.checkpoint.CheckpointHandler),\n )\n\n checkpoint_options = ocp.CheckpointManagerOptions(\n save_interval_steps=args.log_checkpoint_interval,\n max_to_keep=3,\n keep_period=args.log_checkpoint_keep_period,\n step_format_fixed_length=6,\n cleanup_tmp_directories=True,\n )\n\n checkpoint_manager = ocp.CheckpointManager(\n args.ckpt_dir,\n options=checkpoint_options,\n handler_registry=handler_registry,\n )\n\n # --- Create DataLoaderIterator from dataloader ---\n image_shape = (args.image_height, args.image_width, args.image_channels)\n array_record_files = [\n os.path.join(args.data_dir, x)\n for x in os.listdir(args.data_dir)\n if x.endswith("".array_record"")\n ]\n grain_dataloader = get_dataloader(\n array_record_files,\n args.seq_len,\n # NOTE: We deliberately pass the global batch size\n # The dataloader shards the dataset across all processes\n args.batch_size,\n *image_shape,\n num_workers=8,\n prefetch_buffer_size=1,\n seed=args.seed,\n )\n initial_state = grain_dataloader._create_initial_state()\n grain_iterator = grain.DataLoaderIterator(grain_dataloader, initial_state)\n\n # --- Restore checkpoint ---\n if args.restore_ckpt:\n abstract_optimizer = nnx.eval_shape(lambda: optimizer)\n 
abstract_optimizer_state = nnx.state(abstract_optimizer)\n restored = checkpoint_manager.restore(\n checkpoint_manager.latest_step(),\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeRestore(abstract_optimizer_state), # type: ignore\n dataloader_state=grain.checkpoint.CheckpointRestore(grain_iterator), # type: ignore\n ),\n )\n restored_optimizer_state = restored[""model_state""]\n nnx.update(optimizer, restored_optimizer_state)\n grain_iterator = restored[""dataloader_state""]\n step = checkpoint_manager.latest_step() or 0\n print(f""Restored dataloader and model state from step {step}"")\n else:\n # Restore from pre-trained tokenizer (and LAM)\n optimizer = restore_genie_components(optimizer, replicated_sharding, rng, args)\n # NOTE: We have to remove the (unused) tokenizer vq dropout due flax.nnx lazily initializing modules.\n # Specifically, the first dynamics model checkpoint will contain the vq dropout module,\n # but the first full restore will fail due to nnx not initializing the module when\n # dropout is set to 0.0.\n del optimizer.model.tokenizer.vq.drop\n\n # --- TRAIN LOOP ---\n # dataloader = (\n # jax.make_array_from_process_local_data(videos_sharding, elem)\n # for elem in grain_iterator\n # )\n # jax.config.update(""jax_transfer_guard"", ""disallow"")\n print(f""Starting training from step {step}..."")\n should_break = False\n while step < args.num_steps and not should_break:\n for _ in range(50):\n # --- Train step ---\n rng, _rng_mask = jax.random.split(rng, 2)\n inputs = dict(mask_rng=_rng_mask)\n loss, recon, metrics = train_step(optimizer.model, optimizer, inputs)\n metrics[""lr""] = lr_schedule(step)\n print(f""Step {step}, loss: {loss}"")\n step += 1\n\n # --- Logging ---\n if args.log:\n if step % args.log_interval == 0 and jax.process_index() == 0:\n wandb.log(\n {\n ""loss"": loss,\n ""step"": step,\n **metrics,\n }\n )\n if step % args.log_image_interval == 0:\n pass\n # gt_seq = inputs[""videos""][0].astype(jnp.float32) / 255.0\n 
# recon_seq = recon[0].clip(0, 1)\n # comparison_seq = jnp.concatenate((gt_seq, recon_seq), axis=1)\n # comparison_seq = einops.rearrange(\n # comparison_seq * 255, ""t h w c -> h (t w) c""\n # )\n # if jax.process_index() == 0:\n # log_images = dict(\n # image=wandb.Image(np.asarray(gt_seq[args.seq_len - 1])),\n # recon=wandb.Image(np.asarray(recon_seq[args.seq_len - 1])),\n # true_vs_recon=wandb.Image(\n # np.asarray(comparison_seq.astype(np.uint8))\n # ),\n # )\n # wandb.log(log_images)\n # --- Checkpointing ---\n if args.save_ckpt and step % args.log_checkpoint_interval == 0:\n optimizer_state = nnx.state(optimizer)\n checkpoint_manager.save(\n step,\n args=ocp.args.Composite(\n model_state=ocp.args.PyTreeSave(optimizer_state), # type: ignore\n dataloader_state=grain.checkpoint.CheckpointSave( # type: ignore\n grain_iterator # type: ignore\n ),\n ),\n )\n print(f""Saved checkpoint at step {step}"")\n if step >= 50:\n should_break = True\n break\n\n checkpoint_manager.close()\n",python,tab
13
+ 12,217408,"train_dynamics.py",8607,0,"",python,selection_command
14
+ 13,218226,"train_dynamics.py",9308,0,"",python,selection_command
15
+ 14,219567,"train_dynamics.py",14859,0,"",python,selection_command
16
+ 15,221511,"train_dynamics.py",1129,0,"",python,selection_command
17
+ 16,221808,"train_dynamics.py",2276,0,"",python,selection_command
18
+ 17,222449,"train_dynamics.py",2338,0,"",python,selection_command
19
+ 18,222887,"train_dynamics.py",12432,0,"",python,selection_command
20
+ 19,224613,"train_dynamics.py",12433,0,"",python,selection_command
21
+ 20,224628,"train_dynamics.py",12433,1,"",python,content
22
+ 21,224759,"train_dynamics.py",14764,0,"",python,selection_command
23
+ 22,225788,"train_dynamics.py",14765,0,"",python,selection_command
24
+ 23,225792,"train_dynamics.py",14765,1,"",python,content
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-93adc08f-77de-486a-a0da-6bd1df62203b1753869084135-2025_07_30-11.51.32.679/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-987c23ac-4e87-407a-ad46-c530dbbf6d4c1764767462916-2025_12_03-14.11.11.447/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-a464863e-68b5-46e0-8fc2-ff25b4163aec1757954853521-2025_09_15-18.47.40.354/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-bc0084f0-c97c-4490-943b-c621798113041767623320849-2026_01_05-15.28.50.314/source.csv ADDED
@@ -0,0 +1,152 @@
1
+ Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
2
+ 2,497,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"3:28:50 PM [info] Activating crowd-code\n3:28:50 PM [info] Recording started\n3:28:50 PM [info] Initializing git provider using file system watchers...\n3:28:50 PM [info] No workspace folder found\n",Log,tab
3
+ 3,2216,"Untitled-1",0,0,"",plaintext,tab
4
+ 4,4694,"Untitled-1",0,0,"d",plaintext,content
5
+ 5,4698,"Untitled-1",1,0,"",plaintext,selection_keyboard
6
+ 6,4828,"Untitled-1",1,0,"e",plaintext,content
7
+ 7,4830,"Untitled-1",2,0,"",plaintext,selection_keyboard
8
+ 8,4934,"Untitled-1",2,0,"f",plaintext,content
9
+ 9,4937,"Untitled-1",3,0,"",plaintext,selection_keyboard
10
+ 10,5019,"Untitled-1",3,0," ",plaintext,content
11
+ 11,5022,"Untitled-1",4,0,"",plaintext,selection_keyboard
12
+ 12,5038,"Untitled-1",4,0,"h",plaintext,content
13
+ 13,5040,"Untitled-1",5,0,"",plaintext,selection_keyboard
14
+ 14,5125,"Untitled-1",5,0,"e",plaintext,content
15
+ 15,5127,"Untitled-1",6,0,"",plaintext,selection_keyboard
16
+ 16,5290,"Untitled-1",6,0,"l",plaintext,content
17
+ 17,5292,"Untitled-1",7,0,"",plaintext,selection_keyboard
18
+ 18,5442,"Untitled-1",7,0,"l",plaintext,content
19
+ 19,5444,"Untitled-1",8,0,"",plaintext,selection_keyboard
20
+ 20,5559,"Untitled-1",8,0,"o",plaintext,content
21
+ 21,5561,"Untitled-1",9,0,"",plaintext,selection_keyboard
22
+ 22,7068,"Untitled-1",9,0,"_",plaintext,content
23
+ 23,7071,"Untitled-1",10,0,"",plaintext,selection_keyboard
24
+ 24,10710,"Untitled-1",0,10,"def hello_world\n",plaintext,content
25
+ 25,11243,"Untitled-1",15,1,"",plaintext,content
26
+ 26,11684,"Untitled-1",15,0,"()",plaintext,content
27
+ 27,11686,"Untitled-1",16,0,"",plaintext,selection_keyboard
28
+ 28,22387,"Untitled-1",17,0,"",plaintext,selection_command
29
+ 29,28157,"Untitled-1",17,0,":",plaintext,content
30
+ 30,28159,"Untitled-1",18,0,"",plaintext,selection_keyboard
31
+ 31,29850,"Untitled-1",18,0,"\n",plaintext,content
32
+ 32,32053,"Untitled-1",19,0," ",plaintext,content
33
+ 33,32055,"Untitled-1",20,0,"",plaintext,selection_keyboard
34
+ 34,32112,"Untitled-1",19,1,"",plaintext,content
35
+ 35,33793,"Untitled-1",18,1,"",plaintext,content
36
+ 36,33976,"Untitled-1",18,0,"\n",plaintext,content
37
+ 37,41115,"Untitled-1",18,1,"",plaintext,content
38
+ 38,41431,"Untitled-1",18,0,"\n",plaintext,content
39
+ 39,41912,"Untitled-1",0,19,"def hello_world():\n",plaintext,content
40
+ 40,42885,"Untitled-1",18,1,"",plaintext,content
41
+ 41,43756,"Untitled-1",18,0,"\n",plaintext,content
42
+ 42,44563,"Untitled-1",19,0," ",plaintext,content
43
+ 43,47090,"Untitled-1",23,0,"p",plaintext,content
44
+ 44,47092,"Untitled-1",24,0,"",plaintext,selection_keyboard
45
+ 45,50396,"Untitled-1",19,5," print(""Hello, World!"")\n",plaintext,content
46
+ 46,54237,"Untitled-1",19,0,"",plaintext,selection_command
47
+ 47,55567,"Untitled-1",23,0,"",plaintext,selection_command
48
+ 48,55739,"Untitled-1",28,0,"",plaintext,selection_command
49
+ 49,55984,"Untitled-1",30,0,"",plaintext,selection_command
50
+ 50,56626,"Untitled-1",30,13,"",plaintext,content
51
+ 51,62279,"Untitled-1",19,14," print(""Hello, World!"")\n",plaintext,content
52
+ 52,63932,"Untitled-1",29,0,"",plaintext,selection_command
53
+ 53,64876,"Untitled-1",10,0,"",plaintext,selection_command
54
+ 54,65436,"Untitled-1",0,18,"def hello_world():",plaintext,selection_command
55
+ 55,65502,"Untitled-1",0,45,"def hello_world():\n print(""Hello, World!"")",plaintext,selection_command
56
+ 56,65765,"Untitled-1",0,46,"def hello_world():\n print(""Hello, World!"")\n",plaintext,selection_command
57
+ 57,66029,"Untitled-1",0,46,"",plaintext,content
58
+ 58,66630,"Untitled-1",0,0,"d",plaintext,content
59
+ 59,66632,"Untitled-1",1,0,"",plaintext,selection_keyboard
60
+ 60,66682,"Untitled-1",1,0,"e",plaintext,content
61
+ 61,66683,"Untitled-1",2,0,"",plaintext,selection_keyboard
62
+ 62,66872,"Untitled-1",2,0,"f",plaintext,content
63
+ 63,66874,"Untitled-1",3,0,"",plaintext,selection_keyboard
64
+ 64,66941,"Untitled-1",3,0," ",plaintext,content
65
+ 65,66943,"Untitled-1",4,0,"",plaintext,selection_keyboard
66
+ 66,67027,"Untitled-1",4,0,"f",plaintext,content
67
+ 67,67029,"Untitled-1",5,0,"",plaintext,selection_keyboard
68
+ 68,67036,"Untitled-1",5,0,"i",plaintext,content
69
+ 69,67037,"Untitled-1",6,0,"",plaintext,selection_keyboard
70
+ 70,67127,"Untitled-1",6,0,"b",plaintext,content
71
+ 71,67128,"Untitled-1",7,0,"",plaintext,selection_keyboard
72
+ 72,67297,"Untitled-1",7,0,"o",plaintext,content
73
+ 73,67299,"Untitled-1",8,0,"",plaintext,selection_keyboard
74
+ 74,68556,"Untitled-1",8,0,"n",plaintext,content
75
+ 75,68558,"Untitled-1",9,0,"",plaintext,selection_keyboard
76
+ 76,68699,"Untitled-1",9,0,"n",plaintext,content
77
+ 77,68701,"Untitled-1",10,0,"",plaintext,selection_keyboard
78
+ 78,68913,"Untitled-1",10,0,"a",plaintext,content
79
+ 79,68914,"Untitled-1",11,0,"",plaintext,selection_keyboard
80
+ 80,69135,"Untitled-1",11,0,"c",plaintext,content
81
+ 81,69137,"Untitled-1",12,0,"",plaintext,selection_keyboard
82
+ 82,70597,"Untitled-1",0,12,"def fibonnaci\n",plaintext,content
83
+ 83,71132,"Untitled-1",13,1,"",plaintext,content
84
+ 84,71638,"Untitled-1",13,0,"()",plaintext,content
85
+ 85,71640,"Untitled-1",14,0,"",plaintext,selection_keyboard
86
+ 86,155245,"Untitled-1",0,15,"def fibonnaci(n):\n",plaintext,content
87
+ 87,156484,"Untitled-1",15,0,"",plaintext,selection_command
88
+ 88,156581,"Untitled-1",16,0,"",plaintext,selection_command
89
+ 89,156764,"Untitled-1",17,0,"",plaintext,selection_command
90
+ 90,156996,"Untitled-1",17,0,"\n",plaintext,content
91
+ 91,158165,"Untitled-1",18,1," if n <= 1:\n",plaintext,content
92
+ 92,160225,"Untitled-1",33,0,"",plaintext,selection_command
93
+ 93,161697,"Untitled-1",33,0," return n\n",plaintext,content
94
+ 94,162835,"Untitled-1",50,0," return fibonnaci(n-1) + fibonnaci(n-2)\n",plaintext,content
95
+ 95,165097,"Untitled-1",92,1,"",plaintext,content
96
+ 96,165553,"Untitled-1",50,42,"",plaintext,content
97
+ 97,166147,"Untitled-1",33,17," return n\n",plaintext,content
98
+ 98,167346,"Untitled-1",50,0," else:\n",plaintext,content
99
+ 99,168289,"Untitled-1",60,0," return fibonnaci(n-1) + fibonnaci(n-2)\n",plaintext,content
100
+ 100,169550,"Untitled-1",60,0,"",plaintext,selection_command
101
+ 101,169694,"Untitled-1",50,0,"",plaintext,selection_command
102
+ 102,171090,"Untitled-1",54,0,"s",plaintext,content
103
+ 103,171092,"Untitled-1",55,0,"",plaintext,selection_keyboard
104
+ 104,171936,"Untitled-1",54,1,"",plaintext,content
105
+ 105,176069,"Untitled-1",58,0,"j",plaintext,content
106
+ 106,176071,"Untitled-1",59,0,"",plaintext,selection_keyboard
107
+ 107,176157,"Untitled-1",59,0,"f",plaintext,content
108
+ 108,176159,"Untitled-1",60,0,"",plaintext,selection_keyboard
109
+ 109,176181,"Untitled-1",60,0,"d",plaintext,content
110
+ 110,176182,"Untitled-1",61,0,"",plaintext,selection_keyboard
111
+ 111,176287,"Untitled-1",61,0,"k",plaintext,content
112
+ 112,176289,"Untitled-1",62,0,"",plaintext,selection_keyboard
113
+ 113,176390,"Untitled-1",62,0,"d",plaintext,content
114
+ 114,176391,"Untitled-1",63,0,"",plaintext,selection_keyboard
115
+ 115,176456,"Untitled-1",63,0,"k",plaintext,content
116
+ 116,176458,"Untitled-1",64,0,"",plaintext,selection_keyboard
117
+ 117,176563,"Untitled-1",64,0,"d",plaintext,content
118
+ 118,176565,"Untitled-1",65,0,"",plaintext,selection_keyboard
119
+ 119,177638,"Untitled-1",50,17," else:\n",plaintext,content
120
+ 120,181280,"Untitled-1",61,0,"",plaintext,selection_command
121
+ 121,181427,"Untitled-1",62,0,"",plaintext,selection_command
122
+ 122,181646,"Untitled-1",63,0,"",plaintext,selection_command
123
+ 123,182877,"Untitled-1",64,0,"",plaintext,selection_command
124
+ 124,183046,"Untitled-1",65,0,"",plaintext,selection_command
125
+ 125,183851,"Untitled-1",106,0,"\n else:",plaintext,content
126
+ 126,183852,"Untitled-1",50,10,"",plaintext,content
127
+ 127,184213,"Untitled-1",96,0,"\n return n",plaintext,content
128
+ 128,184214,"Untitled-1",33,17,"",plaintext,content
129
+ 129,185053,"Untitled-1",79,17,"",plaintext,content
130
+ 130,185054,"Untitled-1",33,0," return n\n",plaintext,content
131
+ 131,185188,"Untitled-1",96,10,"",plaintext,content
132
+ 132,185189,"Untitled-1",50,0," else:\n",plaintext,content
133
+ 133,185866,"Untitled-1",55,0,"",plaintext,selection_command
134
+ 134,185942,"Untitled-1",38,0,"",plaintext,selection_command
135
+ 135,186092,"Untitled-1",23,0,"",plaintext,selection_command
136
+ 136,190296,"Untitled-1",25,1,"",plaintext,content
137
+ 137,190559,"Untitled-1",25,0,"f",plaintext,content
138
+ 138,190560,"Untitled-1",26,0,"",plaintext,selection_keyboard
139
+ 139,190960,"Untitled-1",26,0,"f",plaintext,content
140
+ 140,190962,"Untitled-1",27,0,"",plaintext,selection_keyboard
141
+ 141,191033,"Untitled-1",27,0,"j",plaintext,content
142
+ 142,191034,"Untitled-1",28,0,"",plaintext,selection_keyboard
143
+ 143,191115,"Untitled-1",28,0,"s",plaintext,content
144
+ 144,191117,"Untitled-1",29,0,"",plaintext,selection_keyboard
145
+ 145,191143,"Untitled-1",29,0,"d",plaintext,content
146
+ 146,191144,"Untitled-1",30,0,"",plaintext,selection_keyboard
147
+ 147,191246,"Untitled-1",30,0,"f",plaintext,content
148
+ 148,191248,"Untitled-1",31,0,"",plaintext,selection_keyboard
149
+ 149,193530,"Untitled-1",18,20," if n <= 1:\n",plaintext,content
150
+ 150,196399,"Untitled-1",0,18,"def fibonnaci\n",plaintext,content
151
+ 151,198343,"Untitled-1",0,14,"def fibonnaci(n):\n",plaintext,content
152
+ 152,200616,"Untitled-1",30,0,"",plaintext,selection_command
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c0f990f7-b244-48ad-9e10-09436f02488e1763904501871-2025_11_23-14.28.30.128/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c3efaede-1402-4fa2-9af2-1daa60c1fac61764499916668-2025_11_30-11.53.19.937/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-c7a3c1e0-f967-4348-8f91-07dcc637207c1761212988162-2025_10_23-11.49.58.248/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-d47ebbeb-4370-4f85-8a19-4edee45b89e31767785934640-2026_01_07-12.39.04.804/source.csv ADDED
The diff for this file is too large to render. See raw diff
 
1f15334ab7e6820c9fda17c961659882ef9853cc80f7356b9a9b22f286fd7389/crowd-code-e152ee4a-939b-4464-8ec7-3739b32281d61763456606729-2025_11_18-10.03.33.832/source.csv ADDED
@@ -0,0 +1,53 @@
1
+ Sequence,Time,File,RangeOffset,RangeLength,Text,Language,Type
2
+ 1,3,"slurm/dev/franz/berlin/crowd-pilot/nemo/test_sft.py",0,0,"import nemo_run as run\nfrom nemo.collections import llm\n\nNAME = ""qwen3_30b_a3b""\n\nrecipe = llm.qwen3_30b_a3b.finetune_recipe(\n name=NAME,\n dir=f""test_output"",\n num_nodes=1,\n num_gpus_per_node=8,\n peft_scheme='lora', # 'lora', 'none'\n packed_sequence=True,\n)\n\ndef local_executor_torchrun(nodes: int, devices: int) -> run.LocalExecutor:\n # Env vars for jobs are configured here\n env_vars = {\n ""TORCH_NCCL_AVOID_RECORD_STREAMS"": ""1"",\n ""NCCL_NVLS_ENABLE"": ""0"",\n ""NVTE_DP_AMAX_REDUCE_INTERVAL"": ""0"",\n ""NVTE_ASYNC_AMAX_REDUCTION"": ""1"",\n #""LD_PRELOAD"": ""/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/nvidia/cuda_runtime/lib/libcudart.so.12"",\n }\n\n executor = run.LocalExecutor(ntasks_per_node=devices, launcher=""torchrun"", env_vars=env_vars)\n\n return executor\n\ndef run_finetuning():\n recipe.resume.restore_config.path = ""/fast/project/HFMI_SynergyUnit/tab_model/data/checkpoints/nemo_converted_weights_qwen3-coder-30b-a3b-instruct/""\n recipe.data.delete_raw = False\n executor = local_executor_torchrun(nodes=recipe.trainer.num_nodes, devices=recipe.trainer.devices)\n\n run.run(recipe, executor=executor, name=NAME)\n\n# This condition is necessary for the script to be compatible with Python's multiprocessing module.\nif __name__ == ""__main__"":\n run_finetuning()",python,tab
3
+ 2,81,"TERMINAL",0,0,"",,terminal_focus
4
+ 3,82,"TERMINAL",0,0,"",,terminal_focus
5
+ 4,306,"extension-output-pdoom-org.crowd-code-#1-crowd-code",0,0,"10:03:33 AM [info] Activating crowd-code\n10:03:33 AM [info] Recording started\n10:03:33 AM [info] Initializing git provider using file system watchers...\n",Log,tab
6
+ 5,462,"extension-output-pdoom-org.crowd-code-#1-crowd-code",153,0,"10:03:34 AM [info] Git repository found\n10:03:34 AM [info] Git provider initialized successfully\n10:03:34 AM [info] Initial git state: [object Object]\n",Log,content
7
+ 6,544,"TERMINAL",0,0,"bash",,terminal_focus
8
+ 7,545,"slurm/dev/franz/berlin/crowd-pilot/nemo/test_sft.py",0,0,"",python,tab
9
+ 8,545,"TERMINAL",0,0,"bash",,terminal_focus
10
+ 9,623,"TERMINAL",0,0,"source /home/franz.srambical/crowd-pilot/.nemo/bin/activate",,terminal_command
11
+ 10,623,"TERMINAL",0,0,"source /home/franz.srambical/crowd-pilot/.nemo/bin/activate",,terminal_command
12
+ 11,624,"TERMINAL",0,0,"]633;C]0;franz.srambical@hai-login2:~/crowd-pilot",,terminal_output
13
+ 12,624,"TERMINAL",0,0,"]633;C]0;franz.srambical@hai-login2:~/crowd-pilot",,terminal_output
14
+ 13,207295,"TERMINAL",0,0,"squeue",,terminal_command
15
+ 14,207312,"TERMINAL",0,0,"]633;C JOBID USER PARTITION NODES CPUS ST SUBMIT_TIME START_TIME TIME TIME_LIMIT NODELIST(REASON)\r\n 34028 xiao.liu interacti 1 128 R 2025-11-17T23:53:51 2025-11-17T23:53:51 10:13:10 23:59:00 hai005\r\n 34027 franz.sram interacti 1 20 R 2025-11-17T22:06:31 2025-11-17T22:06:31 12:00:30 1-00:00:00 hai001\r\n 34026 xiao.liu interacti 1 128 R 2025-11-17T20:01:59 2025-11-17T20:01:59 14:05:02 23:59:00 hai006\r\n 34037 kalyan.nad standard 1 64 R 2025-11-18T08:55:56 2025-11-18T08:55:56 1:11:05 8:00:00 hai002\r\n 34032 xiao.liu standard 1 128 R 2025-11-18T03:50:45 2025-11-18T05:06:30 5:00:31 23:59:00 hai007\r\n 33959 xiao.liu standard 1 128 R 2025-11-17T10:39:54 2025-11-17T10:40:37 23:26:24 23:59:00 hai004\r\n]0;franz.srambical@hai-login2:~/crowd-pilot",,terminal_output
16
+ 15,216561,"TERMINAL",0,0,"scancel 34027",,terminal_command
17
+ 16,216570,"TERMINAL",0,0,"]633;C]0;franz.srambical@hai-login2:~/crowd-pilot",,terminal_output
18
+ 17,230003,"TERMINAL",0,0,"salloc --gpus=8 --ntasks-per-node=1 --cpus-per-task=10 --mem=100G",,terminal_command
19
+ 18,230054,"TERMINAL",0,0,"]633;Csalloc: Pending job allocation 34038\r\nsalloc: job 34038 queued and waiting for resources\r\n",,terminal_output
20
+ 19,234099,"TERMINAL",0,0,"^Csalloc: Job aborted due to signal\r\nsalloc: Job allocation 34038 has been revoked.\r\n]0;franz.srambical@hai-login2:~/crowd-pilot",,terminal_output
21
+ 20,236703,"TERMINAL",0,0,"salloc --gpus=8 --ntasks-per-node=1 --cpus-per-task=10 --mem=100G --qos=low",,terminal_command
22
+ 21,236754,"TERMINAL",0,0,"]633;Csalloc: Granted job allocation 34039\r\n",,terminal_output
23
+ 22,236853,"TERMINAL",0,0,"salloc: Nodes hai003 are ready for job\r\n",,terminal_output
24
+ 23,237197,"TERMINAL",0,0,"Running inside SLURM, Job ID 34039.\r\n",,terminal_output
25
+ 24,237282,"TERMINAL",0,0,"]0;franz.srambical@hai-login2:~/crowd-pilot[?2004h[franz.srambical@hai003.haicore.berlin:~/crowd-pilot] $ ",,terminal_output
26
+ 25,240701,"TERMINAL",0,0,"\r(reverse-i-search)`': ",,terminal_output
27
+ 26,240882,"TERMINAL",0,0,"s': python /home/franz.srambical/slurm/dev/franz/berlin/crowd-pilot/nemo/test_sft.py",,terminal_output
28
+ 27,241023,"TERMINAL",0,0,"\ro': readelf -d $(python -c ""import functorch; import inspect, os; print(os.path.join(os.path.dirname(inspect.getfile(functorch)), '_C*.so'))"") | grep RUNPATH\ru': source /home/franz.srambical/crowd-pilot/.nemo/bin/activate\r",,terminal_output
29
+ 28,241225,"TERMINAL",0,0,"[1@r': sour",,terminal_output
30
+ 29,241960,"TERMINAL",0,0,"\r[30@[franz.srambical@hai003.haicore.berlin:~/crowd-pilot] $ sour\r\n[?2004l\r]0;franz.srambical@hai-login2:~/crowd-pilot[?2004h(.nemo) [franz.srambical@hai003.haicore.berlin:~/crowd-pilot] $ ",,terminal_output
31
+ 30,242463,"TERMINAL",0,0,"\r(reverse-i-search)`': ",,terminal_output
32
+ 31,242681,"TERMINAL",0,0,"p': source /home/franz.srambical/crowd-pilot/.nemo/bin/activate",,terminal_output
33
+ 32,242762,"TERMINAL",0,0,"\ry': python /home/franz.srambical/slurm/dev/franz/berlin/crowd-pilot/nemo/test_sft.py",,terminal_output
34
+ 33,242874,"TERMINAL",0,0,"\rt': python /home/franz.srambical/slurm/dev/franz/berlin/crowd-pilot/nemo/test_sft.py\r",,terminal_output
35
+ 34,243012,"TERMINAL",0,0,"[1@h': pyth",,terminal_output
36
+ 35,243077,"TERMINAL",0,0,"[1@o': pytho",,terminal_output
37
+ 36,243213,"TERMINAL",0,0,"[1@n': python",,terminal_output
38
+ 37,243662,"TERMINAL",0,0,"\r[36@.nemo) [franz.srambical@hai003.haicore.berlin:~/crowd-pilot] $ python",,terminal_output
+ 38,243943,"TERMINAL",0,0,"\r\n[?2004l\r",,terminal_output
+ 39,252109,"TERMINAL",0,0,"/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\n import pynvml # type: ignore[import]\r\n",,terminal_output
+ 40,276475,"TERMINAL",0,0,"OneLogger: Setting error_handling_strategy to DISABLE_QUIETLY_AND_REPORT_METRIC_ERROR for rank (rank=0) with OneLogger disabled. To override: explicitly set error_handling_strategy parameter.\r\n",,terminal_output
+ 41,276578,"TERMINAL",0,0,"No exporters were provided. This means that no telemetry data will be collected.\r\n",,terminal_output
+ 42,284004,"TERMINAL",0,0,"───────────────────────────────────────────────────────────── Entering Experiment qwen3_30b_a3b with id: qwen3_30b_a3b_1763456897 ──────────────────────────────────────────────────────────────\r\n",,terminal_output
+ 43,284254,"TERMINAL",0,0,"[10:08:17] INFO  Log directory is: /home/franz.srambical/.nemo_run/experiments/qwen3_30b_a3b/qwen3_30b_a3b_1763456897/qwen3_30b_a3b ]8;id=991907;file:///fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torchx/schedulers/local_scheduler.py\local_scheduler.py]8;;\:]8;id=464480;file:///fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torchx/schedulers/local_scheduler.py#777\777]8;;\\r\n[10:08:17] Launching job qwen3_30b_a3b for experiment qwen3_30b_a3b ]8;id=915722;file:///fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/nemo_run/run/experiment.py\experiment.py]8;;\:]8;id=538525;file:///fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/nemo_run/run/experiment.py#795\795]8;;\\r\n  INFO  Launched app: local_persistent://nemo_run/qwen3_30b_a3b-k2spc72nk5x9lc ]8;id=246651;file:///fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/nemo_run/run/torchx_backend/launcher.py\launcher.py]8;;\:]8;id=968411;file:///fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/nemo_run/run/torchx_backend/launcher.py#116\116]8;;\\r\n─────────────────────────────────────────────────────────────────── Waiting for Experiment qwen3_30b_a3b_1763456897 to finish ───────────────────────────────────────────────────────────────────\r\n\r\nExperiment Status for qwen3_30b_a3b_1763456897\r\n\r\nTask 0: qwen3_30b_a3b\r\n- Status: RUNNING\r\n- Executor: LocalExecutor\r\n- Job id: qwen3_30b_a3b-k2spc72nk5x9lc\r\n- Local Directory: /home/franz.srambical/.nemo_run/experiments/qwen3_30b_a3b/qwen3_30b_a3b_1763456897/qwen3_30b_a3b\r\n\r\n[10:08:18] INFO  Waiting for job qwen3_30b_a3b-k2spc72nk5x9lc to finish [log=True]... ]8;id=911442;file:///fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/nemo_run/run/torchx_backend/launcher.py\launcher.py]8;;\:]8;id=91408;file:///fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/nemo_run/run/torchx_backend/launcher.py#136\136]8;;\\r\n",,terminal_output
+ 44,285105,"TERMINAL",0,0,"n3_30b_a3b/0 /fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\nn3_30b_a3b/0 import pynvml # type: ignore[import]\r\n",,terminal_output
+ 45,286704,"TERMINAL",0,0,"n3_30b_a3b/0 I1118 10:08:20.315000 503305 torch/distributed/run.py:657] Using nproc_per_node=8.\r\nn3_30b_a3b/0 W1118 10:08:20.316000 503305 torch/distributed/run.py:774] \r\nn3_30b_a3b/0 W1118 10:08:20.316000 503305 torch/distributed/run.py:774] *****************************************\r\nn3_30b_a3b/0 W1118 10:08:20.316000 503305 torch/distributed/run.py:774] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\nn3_30b_a3b/0 W1118 10:08:20.316000 503305 torch/distributed/run.py:774] *****************************************\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] Starting elastic_operator with launch configs:\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] entrypoint : nemo_run.core.runners.fdl_runner\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] min_nodes : 1\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] max_nodes : 1\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] nproc_per_node : 8\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] run_id : 2707\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] rdzv_backend : c10d\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] rdzv_endpoint : localhost:0\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] rdzv_configs : {'timeout': 900}\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] max_restarts : 0\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] monitor_interval : 0.1\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] log_dir : /home/franz.srambical/.nemo_run/experiments/qwen3_30b_a3b/qwen3_30b_a3b_1763456897/qwen3_30b_a3b/nemo_run/qwen3_30b_a3b-k2spc72nk5x9lc/torchelastic/qwen3_30b_a3b\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] metrics_cfg : {}\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] event_log_handler : null\r\nn3_30b_a3b/0 I1118 10:08:20.319000 503305 torch/distributed/launcher/api.py:199] \r\nn3_30b_a3b/0 I1118 10:08:20.327000 503305 torch/distributed/elastic/agent/server/api.py:869] [default] starting workers for entrypoint: python\r\nn3_30b_a3b/0 I1118 10:08:20.328000 503305 torch/distributed/elastic/agent/server/api.py:677] [default] Rendezvous'ing worker group\r\n",,terminal_output
+ 46,286999,"TERMINAL",0,0,"n3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] [default] Rendezvous complete for workers. Result:\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] restart_count=0\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] master_addr=hai003.haicore.berlin\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] master_port=45071\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] group_rank=0\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] group_world_size=1\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] local_ranks=[0, 1, 2, 3, 4, 5, 6, 7]\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] role_ranks=[0, 1, 2, 3, 4, 5, 6, 7]\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] global_ranks=[0, 1, 2, 3, 4, 5, 6, 7]\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] role_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8]\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] global_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8]\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] event_log_handler=null\r\nn3_30b_a3b/0 I1118 10:08:20.587000 503305 torch/distributed/elastic/agent/server/api.py:523] \r\nn3_30b_a3b/0 I1118 10:08:20.588000 503305 torch/distributed/elastic/agent/server/api.py:685] [default] Starting worker group\r\nn3_30b_a3b/0 I1118 10:08:20.589000 503305 torch/distributed/elastic/agent/server/local_elastic_agent.py:296] use_agent_store: True\r\nn3_30b_a3b/0 I1118 10:08:20.589000 503305 torch/distributed/elastic/agent/server/local_elastic_agent.py:192] Environment variable 'TORCHELASTIC_ENABLE_FILE_TIMER' not found. Do not start FileTimerServer.\r\nn3_30b_a3b/0 I1118 10:08:20.590000 503305 torch/distributed/elastic/agent/server/local_elastic_agent.py:236] Environment variable 'TORCHELASTIC_HEALTH_CHECK_PORT' not found. Do not start health check.\r\n",,terminal_output
+ 47,290775,"TERMINAL",0,0,"n3_30b_a3b/0 [default6]:/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\nn3_30b_a3b/0 [default6]: import pynvml # type: ignore[import]\r\n",,terminal_output
+ 48,296737,"TERMINAL",0,0,"n3_30b_a3b/0 [default1]:/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\nn3_30b_a3b/0 [default1]: import pynvml # type: ignore[import]\r\n",,terminal_output
+ 49,297159,"TERMINAL",0,0,"n3_30b_a3b/0 [default3]:/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\nn3_30b_a3b/0 [default3]: import pynvml # type: ignore[import]\r\nn3_30b_a3b/0 [default0]:/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\nn3_30b_a3b/0 [default0]: import pynvml # type: ignore[import]\r\n",,terminal_output
+ 50,297577,"TERMINAL",0,0,"n3_30b_a3b/0 [default4]:/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\nn3_30b_a3b/0 [default4]: import pynvml # type: ignore[import]\r\n",,terminal_output
+ 51,297891,"TERMINAL",0,0,"n3_30b_a3b/0 [default5]:/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\nn3_30b_a3b/0 [default5]: import pynvml # type: ignore[import]\r\n",,terminal_output
+ 52,298212,"TERMINAL",0,0,"n3_30b_a3b/0 [default2]:/fast/home/franz.srambical/crowd-pilot/.nemo/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\r\nn3_30b_a3b/0 [default2]: import pynvml # type: ignore[import]\r\n",,terminal_output