ADAPT-Chase committed on
Commit 93be2a2 · verified · 1 Parent(s): 45266d3

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-1500/special_tokens_map.json +31 -0
  2. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-1500/tokenizer.json +0 -0
  3. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-1500/tokenizer_config.json +207 -0
  4. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-1500/vocab.json +0 -0
  5. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/added_tokens.json +24 -0
  6. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/config.json +29 -0
  7. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/generation_config.json +6 -0
  8. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/merges.txt +0 -0
  9. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model-00001-of-00004.safetensors +3 -0
  10. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model-00002-of-00004.safetensors +3 -0
  11. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model-00003-of-00004.safetensors +3 -0
  12. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model-00004-of-00004.safetensors +3 -0
  13. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model.safetensors.index.json +346 -0
  14. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/optimizer.pt +3 -0
  15. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/special_tokens_map.json +31 -0
  16. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/tokenizer.json +0 -0
  17. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/tokenizer_config.json +207 -0
  18. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/trainer_state.json +371 -0
  19. platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/vocab.json +0 -0
  20. platform/aiml/mlops/.claude/identity.md +20 -0
  21. platform/aiml/mlops/__pycache__/agent_gateway.cpython-312.pyc +0 -0
  22. platform/aiml/mlops/__pycache__/agentops_integration.cpython-312.pyc +0 -0
  23. platform/aiml/mlops/__pycache__/cloudflare_tunnel.cpython-312.pyc +0 -0
  24. platform/aiml/mlops/__pycache__/elizabeth_tools.cpython-312.pyc +0 -0
  25. platform/aiml/mlops/__pycache__/enhanced_earning_engine.cpython-312.pyc +0 -0
  26. platform/aiml/mlops/__pycache__/remote_access_server.cpython-312.pyc +0 -0
  27. platform/aiml/mlops/agent_tools/__init__.py +9 -0
  28. platform/aiml/mlops/agent_tools/registry.py +71 -0
  29. platform/aiml/mlops/agent_tools/runtime.py +29 -0
  30. platform/aiml/mlops/agent_tools/tools_ci.py +75 -0
  31. platform/aiml/mlops/agent_tools/tools_cloud.py +49 -0
  32. platform/aiml/mlops/agent_tools/tools_code.py +34 -0
  33. platform/aiml/mlops/agent_tools/tools_code_multi.py +96 -0
  34. platform/aiml/mlops/agent_tools/tools_data.py +106 -0
  35. platform/aiml/mlops/agent_tools/tools_db.py +132 -0
  36. platform/aiml/mlops/agent_tools/tools_docs.py +61 -0
  37. platform/aiml/mlops/agent_tools/tools_etl.py +100 -0
  38. platform/aiml/mlops/agent_tools/tools_files.py +95 -0
  39. platform/aiml/mlops/agent_tools/tools_model.py +128 -0
  40. platform/aiml/mlops/agent_tools/tools_network.py +94 -0
  41. platform/aiml/mlops/agent_tools/tools_search.py +41 -0
  42. platform/aiml/mlops/agent_tools/tools_system.py +63 -0
  43. platform/aiml/mlops/configs/mobile_access.json +10 -0
  44. platform/aiml/mlops/dbops_tools/health_guard.py +125 -0
  45. platform/aiml/mlops/dbops_tools/qdrant_bootstrap.py +41 -0
  46. platform/aiml/mlops/dbops_tools/render_janus_props.sh +54 -0
  47. platform/aiml/mlops/death_march/.env_unformatted +11 -0
  48. platform/aiml/mlops/death_march/ELIZABETH_TOOLS_README.md +180 -0
  49. platform/aiml/mlops/death_march/Makefile +68 -0
  50. platform/aiml/mlops/death_march/README.md +271 -0
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-1500/special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-1500/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-1500/tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": null,
+ "chat_template": "{%- if tools %}\n    {{- '<|im_start|>system\\n' }}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- messages[0]['content'] }}\n    {%- else %}\n        {{- 'You are a helpful assistant.' }}\n    {%- endif %}\n    {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n    {%- for tool in tools %}\n        {{- \"\\n\" }}\n        {{- tool | tojson }}\n    {%- endfor %}\n    {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n    {%- if messages[0]['role'] == 'system' %}\n        {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n    {%- else %}\n        {{- '<|im_start|>system\\nYou are a helpful assistant.<|im_end|>\\n' }}\n    {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n    {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n        {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n    {%- elif message.role == \"assistant\" %}\n        {{- '<|im_start|>' + message.role }}\n        {%- if message.content %}\n            {{- '\\n' + message.content }}\n        {%- endif %}\n        {%- for tool_call in message.tool_calls %}\n            {%- if tool_call.function is defined %}\n                {%- set tool_call = tool_call.function %}\n            {%- endif %}\n            {{- '\\n<tool_call>\\n{\"name\": \"' }}\n            {{- tool_call.name }}\n            {{- '\", \"arguments\": ' }}\n            {{- tool_call.arguments | tojson }}\n            {{- '}\\n</tool_call>' }}\n        {%- endfor %}\n        {{- '<|im_end|>\\n' }}\n    {%- elif message.role == \"tool\" %}\n        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n            {{- '<|im_start|>user' }}\n        {%- endif %}\n        {{- '\\n<tool_response>\\n' }}\n        {{- message.content }}\n        {{- '\\n</tool_response>' }}\n        {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n            {{- '<|im_end|>\\n' }}\n        {%- endif %}\n    {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "errors": "replace",
+ "model_max_length": 131072,
+ "pad_token": "<|endoftext|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null
+ }
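For conversations without tools, the `chat_template` above reduces to the ChatML layout: a default system turn when none is supplied, one `<|im_start|>role\n…<|im_end|>\n` block per message, and a trailing assistant header when a generation prompt is requested. A minimal sketch of that plain-message path (`render_chatml` is a hypothetical helper, not part of the checkpoint; the real rendering should go through the tokenizer's `apply_chat_template`):

```python
# Sketch of the tool-free branch of the chat template above.
# render_chatml is a hypothetical helper for illustration only.
def render_chatml(messages, add_generation_prompt=True):
    # The template injects a default system turn when none is given.
    if not messages or messages[0]["role"] != "system":
        messages = [{"role": "system",
                     "content": "You are a helpful assistant."}] + messages
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        prompt += "<|im_start|>assistant\n"
    return prompt

print(render_chatml([{"role": "user", "content": "Hi"}]))
```

Tool calls and `<tool_response>` wrapping follow separate branches of the template and are not covered by this sketch.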
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-1500/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/added_tokens.json ADDED
@@ -0,0 +1,24 @@
+ {
+ "</tool_call>": 151658,
+ "<tool_call>": 151657,
+ "<|box_end|>": 151649,
+ "<|box_start|>": 151648,
+ "<|endoftext|>": 151643,
+ "<|file_sep|>": 151664,
+ "<|fim_middle|>": 151660,
+ "<|fim_pad|>": 151662,
+ "<|fim_prefix|>": 151659,
+ "<|fim_suffix|>": 151661,
+ "<|im_end|>": 151645,
+ "<|im_start|>": 151644,
+ "<|image_pad|>": 151655,
+ "<|object_ref_end|>": 151647,
+ "<|object_ref_start|>": 151646,
+ "<|quad_end|>": 151651,
+ "<|quad_start|>": 151650,
+ "<|repo_name|>": 151663,
+ "<|video_pad|>": 151656,
+ "<|vision_end|>": 151653,
+ "<|vision_pad|>": 151654,
+ "<|vision_start|>": 151652
+ }
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/config.json ADDED
@@ -0,0 +1,29 @@
+ {
+ "_name_or_path": "/workspace/models/qwen3-8b",
+ "architectures": [
+ "Qwen2ForCausalLM"
+ ],
+ "attention_dropout": 0.0,
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "hidden_act": "silu",
+ "hidden_size": 3584,
+ "initializer_range": 0.02,
+ "intermediate_size": 18944,
+ "max_position_embeddings": 131072,
+ "max_window_layers": 28,
+ "model_type": "qwen2",
+ "num_attention_heads": 28,
+ "num_hidden_layers": 28,
+ "num_key_value_heads": 4,
+ "rms_norm_eps": 1e-06,
+ "rope_theta": 1000000.0,
+ "sliding_window": 131072,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.40.0",
+ "use_cache": false,
+ "use_mrope": false,
+ "use_sliding_window": false,
+ "vocab_size": 152064
+ }
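The config above pins down the model's size: a back-of-the-envelope parameter count from `hidden_size`, `intermediate_size`, `num_hidden_layers`, grouped-query attention (`num_key_value_heads`), and the untied embeddings lands near 7.6 B parameters, consistent with the ~15.2 GB of bfloat16 shards in this checkpoint. A sketch of that arithmetic (the exact on-disk total can differ slightly):

```python
# Estimate parameter count from the config fields above (a sketch;
# shard sizes on disk may differ by a small margin).
hidden, inter, layers = 3584, 18944, 28
heads, kv_heads, vocab = 28, 4, 152064
head_dim = hidden // heads            # 128
kv_dim = kv_heads * head_dim          # 512 (grouped-query attention)

attn = hidden * hidden + hidden       # q_proj weight + bias
attn += 2 * (kv_dim * hidden + kv_dim)  # k_proj and v_proj weights + biases
attn += hidden * hidden               # o_proj (no bias)
mlp = 3 * hidden * inter              # gate_proj, up_proj, down_proj
norms = 2 * hidden                    # input + post-attention RMSNorm
per_layer = attn + mlp + norms

# tie_word_embeddings is false, so embed_tokens and lm_head both count;
# the trailing `hidden` is the final norm.
total = layers * per_layer + 2 * vocab * hidden + hidden
print(f"{total / 1e9:.2f} B params, ~{total * 2 / 1e9:.1f} GB in bfloat16")
```

The `total_size` of 15,231,233,024 bytes in the shard index below matches this estimate to within a fraction of a percent.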
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+ "bos_token_id": 151643,
+ "eos_token_id": 151643,
+ "max_new_tokens": 2048,
+ "transformers_version": "4.40.0"
+ }
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1e7ba0d54729de48c68fe7f78593c228470b91e1e26b9597b47947611a385b29
+ size 4877660776
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27e0f358f7ba237b68e1f5bda5f57e81f989ab34c15c74a76a40812d5254c5c2
+ size 4932751008
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c18af12426cd159bcb86bfdf93d7408af5afc22f32b8d99790824de7a0b483f
+ size 4330865200
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61be6c2ddf4956df8712f12deecddcaaa78ef009976ef23f8220d644c7d600c9
+ size 1089994880
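The four `.safetensors` entries above are Git LFS pointer files, not the weights themselves: each records the spec version, a `sha256` OID, and the byte size of the blob held in LFS storage. A sketch of parsing such a pointer and verifying a downloaded blob against it (demonstrated on a tiny stand-in blob rather than a multi-gigabyte shard):

```python
# Parse a Git LFS pointer file and verify a local blob against its
# sha256 OID and size. Helper names are illustrative, not from any tool.
import hashlib
import os
import tempfile

def parse_lfs_pointer(text):
    # Each pointer line is "key value"; oid is "sha256:<hex digest>".
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "oid": digest, "size": int(fields["size"])}

def verify_blob(path, pointer):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return (os.path.getsize(path) == pointer["size"]
            and h.hexdigest() == pointer["oid"])

blob = b"hello"  # stand-in for a downloaded shard
pointer_text = ("version https://git-lfs.github.com/spec/v1\n"
                f"oid sha256:{hashlib.sha256(blob).hexdigest()}\n"
                f"size {len(blob)}")
ptr = parse_lfs_pointer(pointer_text)
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(blob)
print(verify_blob(f.name, ptr))  # True
```

The same check applies to the real shards once fetched: hash the file and compare against the OID and size recorded in the pointer.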
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/model.safetensors.index.json ADDED
@@ -0,0 +1,346 @@
+ {
+ "metadata": {
+ "total_size": 15231233024
+ },
+ "weight_map": {
+ "lm_head.weight": "model-00004-of-00004.safetensors",
+ "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.10.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.10.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.10.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.11.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.12.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.13.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.14.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.15.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.16.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.17.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.18.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.18.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.18.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
+ "model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
+ "model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.19.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.19.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.19.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
+ "model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
+ "model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.20.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.20.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.20.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.21.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.22.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
+ "model.layers.23.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
+ "model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
212
+ "model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
213
+ "model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
214
+ "model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
215
+ "model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
216
+ "model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
217
+ "model.layers.24.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
218
+ "model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
219
+ "model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
220
+ "model.layers.24.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
221
+ "model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
222
+ "model.layers.24.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
223
+ "model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
224
+ "model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
225
+ "model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
226
+ "model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
227
+ "model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
228
+ "model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
229
+ "model.layers.25.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
230
+ "model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
231
+ "model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
232
+ "model.layers.25.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
233
+ "model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
234
+ "model.layers.25.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
235
+ "model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
236
+ "model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
237
+ "model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
238
+ "model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
239
+ "model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
240
+ "model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
241
+ "model.layers.26.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
242
+ "model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
243
+ "model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
244
+ "model.layers.26.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
245
+ "model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
246
+ "model.layers.26.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
247
+ "model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
248
+ "model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
249
+ "model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
250
+ "model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
251
+ "model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
252
+ "model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
253
+ "model.layers.27.self_attn.k_proj.bias": "model-00003-of-00004.safetensors",
254
+ "model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
255
+ "model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
256
+ "model.layers.27.self_attn.q_proj.bias": "model-00003-of-00004.safetensors",
257
+ "model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
258
+ "model.layers.27.self_attn.v_proj.bias": "model-00003-of-00004.safetensors",
259
+ "model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
260
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
261
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
262
+ "model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
263
+ "model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
264
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
265
+ "model.layers.3.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
266
+ "model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
267
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
268
+ "model.layers.3.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
269
+ "model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
270
+ "model.layers.3.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
271
+ "model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
272
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
273
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
274
+ "model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
275
+ "model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
276
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
277
+ "model.layers.4.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
278
+ "model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
279
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
280
+ "model.layers.4.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
281
+ "model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
282
+ "model.layers.4.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
283
+ "model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
284
+ "model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
285
+ "model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
286
+ "model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
287
+ "model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
288
+ "model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
289
+ "model.layers.5.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
290
+ "model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
291
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
292
+ "model.layers.5.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
293
+ "model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
294
+ "model.layers.5.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
295
+ "model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
296
+ "model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
297
+ "model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
298
+ "model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
299
+ "model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
300
+ "model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
301
+ "model.layers.6.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
302
+ "model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
303
+ "model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
304
+ "model.layers.6.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
305
+ "model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
306
+ "model.layers.6.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
307
+ "model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
308
+ "model.layers.7.input_layernorm.weight": "model-00001-of-00004.safetensors",
309
+ "model.layers.7.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
310
+ "model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
311
+ "model.layers.7.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
312
+ "model.layers.7.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
313
+ "model.layers.7.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
314
+ "model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
315
+ "model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
316
+ "model.layers.7.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
317
+ "model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
318
+ "model.layers.7.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
319
+ "model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
320
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
321
+ "model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
322
+ "model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
323
+ "model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
324
+ "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
325
+ "model.layers.8.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
326
+ "model.layers.8.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
327
+ "model.layers.8.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
328
+ "model.layers.8.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
329
+ "model.layers.8.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
330
+ "model.layers.8.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
331
+ "model.layers.8.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
332
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
333
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
334
+ "model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
335
+ "model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
336
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
337
+ "model.layers.9.self_attn.k_proj.bias": "model-00002-of-00004.safetensors",
338
+ "model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
339
+ "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
340
+ "model.layers.9.self_attn.q_proj.bias": "model-00002-of-00004.safetensors",
341
+ "model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
342
+ "model.layers.9.self_attn.v_proj.bias": "model-00002-of-00004.safetensors",
343
+ "model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
344
+ "model.norm.weight": "model-00003-of-00004.safetensors"
345
+ }
346
+ }
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4d29a431f8372a5fbdfeb8aa2b9639d4895ebcca4f6f5ff52c38031c1b87066b
+ size 30462761583
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "eos_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/tokenizer_config.json ADDED
@@ -0,0 +1,207 @@
+ {
+ "add_bos_token": false,
+ "add_prefix_space": false,
+ "added_tokens_decoder": {
+ "151643": {
+ "content": "<|endoftext|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151644": {
+ "content": "<|im_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151645": {
+ "content": "<|im_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151646": {
+ "content": "<|object_ref_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151647": {
+ "content": "<|object_ref_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151648": {
+ "content": "<|box_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151649": {
+ "content": "<|box_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151650": {
+ "content": "<|quad_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151651": {
+ "content": "<|quad_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151652": {
+ "content": "<|vision_start|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151653": {
+ "content": "<|vision_end|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151654": {
+ "content": "<|vision_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151655": {
+ "content": "<|image_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151656": {
+ "content": "<|video_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "151657": {
+ "content": "<tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151658": {
+ "content": "</tool_call>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151659": {
+ "content": "<|fim_prefix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151660": {
+ "content": "<|fim_middle|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151661": {
+ "content": "<|fim_suffix|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151662": {
+ "content": "<|fim_pad|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151663": {
+ "content": "<|repo_name|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ },
+ "151664": {
+ "content": "<|file_sep|>",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": false
+ }
+ },
+ "additional_special_tokens": [
+ "<|im_start|>",
+ "<|im_end|>",
+ "<|object_ref_start|>",
+ "<|object_ref_end|>",
+ "<|box_start|>",
+ "<|box_end|>",
+ "<|quad_start|>",
+ "<|quad_end|>",
+ "<|vision_start|>",
+ "<|vision_end|>",
+ "<|vision_pad|>",
+ "<|image_pad|>",
+ "<|video_pad|>"
+ ],
+ "bos_token": null,
+ "chat_template": "{%- if tools %}\n {{- '<|im_start|>system\\n' }}\n {%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n {%- else %}\n {{- 'You are a helpful assistant.' }}\n {%- endif %}\n {{- \"\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call><|im_end|>\\n\" }}\n{%- else %}\n {%- if messages[0]['role'] == 'system' %}\n {{- '<|im_start|>system\\n' + messages[0]['content'] + '<|im_end|>\\n' }}\n {%- else %}\n {{- '<|im_start|>system\\nYou are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n{%- endif %}\n{%- for message in messages %}\n {%- if (message.role == \"user\") or (message.role == \"system\" and not loop.first) or (message.role == \"assistant\" and not message.tool_calls) %}\n {{- '<|im_start|>' + message.role + '\\n' + message.content + '<|im_end|>' + '\\n' }}\n {%- elif message.role == \"assistant\" %}\n {{- '<|im_start|>' + message.role }}\n {%- if message.content %}\n {{- '\\n' + message.content }}\n {%- endif %}\n {%- for tool_call in message.tool_calls %}\n {%- if tool_call.function is defined %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '\\n<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {{- tool_call.arguments | tojson }}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {{- '<|im_end|>\\n' }}\n {%- elif message.role == \"tool\" %}\n {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<|im_start|>user' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- message.content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|im_end|>\\n' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}\n",
+ "clean_up_tokenization_spaces": false,
+ "eos_token": "<|endoftext|>",
+ "errors": "replace",
+ "model_max_length": 131072,
+ "pad_token": "<|endoftext|>",
+ "split_special_tokens": false,
+ "tokenizer_class": "Qwen2Tokenizer",
+ "unk_token": null
+ }
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/trainer_state.json ADDED
@@ -0,0 +1,371 @@
+ {
+ "best_metric": null,
+ "best_model_checkpoint": null,
+ "epoch": 10.666666666666666,
+ "eval_steps": 500,
+ "global_step": 500,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.21333333333333335,
+ "grad_norm": 206.0,
+ "learning_rate": 1.1778563015312134e-07,
+ "loss": 3.4341,
+ "step": 10
+ },
+ {
+ "epoch": 0.4266666666666667,
+ "grad_norm": 992.0,
+ "learning_rate": 2.3557126030624267e-07,
+ "loss": 3.4156,
+ "step": 20
+ },
+ {
+ "epoch": 0.64,
+ "grad_norm": 2608.0,
+ "learning_rate": 3.53356890459364e-07,
+ "loss": 3.4391,
+ "step": 30
+ },
+ {
+ "epoch": 0.8533333333333334,
+ "grad_norm": 438.0,
+ "learning_rate": 4.7114252061248535e-07,
+ "loss": 3.3685,
+ "step": 40
+ },
+ {
+ "epoch": 1.0666666666666667,
+ "grad_norm": 238.0,
+ "learning_rate": 5.889281507656066e-07,
+ "loss": 3.3025,
+ "step": 50
+ },
+ {
+ "epoch": 1.28,
+ "grad_norm": 100.0,
+ "learning_rate": 7.06713780918728e-07,
+ "loss": 3.2549,
+ "step": 60
+ },
+ {
+ "epoch": 1.4933333333333334,
+ "grad_norm": 1784.0,
+ "learning_rate": 8.244994110718493e-07,
+ "loss": 3.1491,
+ "step": 70
+ },
+ {
+ "epoch": 1.7066666666666666,
+ "grad_norm": 89.5,
+ "learning_rate": 9.422850412249707e-07,
+ "loss": 2.9619,
+ "step": 80
+ },
+ {
+ "epoch": 1.92,
+ "grad_norm": 13312.0,
+ "learning_rate": 1.060070671378092e-06,
+ "loss": 2.8322,
+ "step": 90
+ },
+ {
+ "epoch": 2.1333333333333333,
+ "grad_norm": 808.0,
+ "learning_rate": 1.1778563015312133e-06,
+ "loss": 2.9565,
+ "step": 100
+ },
+ {
+ "epoch": 2.3466666666666667,
+ "grad_norm": 188.0,
+ "learning_rate": 1.2956419316843347e-06,
+ "loss": 2.7616,
+ "step": 110
+ },
+ {
+ "epoch": 2.56,
+ "grad_norm": 194.0,
+ "learning_rate": 1.413427561837456e-06,
+ "loss": 2.6079,
+ "step": 120
+ },
+ {
+ "epoch": 2.7733333333333334,
+ "grad_norm": 1256.0,
+ "learning_rate": 1.5312131919905772e-06,
+ "loss": 2.5776,
+ "step": 130
+ },
+ {
+ "epoch": 2.986666666666667,
+ "grad_norm": 49.75,
+ "learning_rate": 1.6489988221436987e-06,
+ "loss": 2.6403,
+ "step": 140
+ },
+ {
+ "epoch": 3.2,
+ "grad_norm": 876.0,
+ "learning_rate": 1.76678445229682e-06,
+ "loss": 2.4019,
+ "step": 150
+ },
+ {
+ "epoch": 3.413333333333333,
+ "grad_norm": 169.0,
+ "learning_rate": 1.8845700824499414e-06,
+ "loss": 2.2964,
+ "step": 160
+ },
+ {
+ "epoch": 3.626666666666667,
+ "grad_norm": 256.0,
+ "learning_rate": 2.002355712603063e-06,
+ "loss": 2.2499,
+ "step": 170
+ },
+ {
+ "epoch": 3.84,
+ "grad_norm": 1576.0,
+ "learning_rate": 2.120141342756184e-06,
+ "loss": 2.1755,
+ "step": 180
+ },
+ {
+ "epoch": 4.053333333333334,
+ "grad_norm": 592.0,
+ "learning_rate": 2.2379269729093053e-06,
+ "loss": 2.0546,
+ "step": 190
+ },
+ {
+ "epoch": 4.266666666666667,
+ "grad_norm": 12288.0,
+ "learning_rate": 2.3557126030624266e-06,
+ "loss": 1.9888,
+ "step": 200
+ },
+ {
+ "epoch": 4.48,
+ "grad_norm": 370.0,
+ "learning_rate": 2.473498233215548e-06,
+ "loss": 1.9038,
+ "step": 210
+ },
+ {
+ "epoch": 4.693333333333333,
+ "grad_norm": 254.0,
+ "learning_rate": 2.5912838633686695e-06,
+ "loss": 1.7628,
+ "step": 220
+ },
+ {
+ "epoch": 4.906666666666666,
+ "grad_norm": 241.0,
+ "learning_rate": 2.7090694935217903e-06,
+ "loss": 1.7098,
+ "step": 230
+ },
+ {
+ "epoch": 5.12,
+ "grad_norm": 29952.0,
+ "learning_rate": 2.826855123674912e-06,
+ "loss": 1.4924,
+ "step": 240
+ },
+ {
+ "epoch": 5.333333333333333,
+ "grad_norm": 13824.0,
+ "learning_rate": 2.9446407538280332e-06,
+ "loss": 1.2908,
+ "step": 250
+ },
+ {
+ "epoch": 5.546666666666667,
+ "grad_norm": 14336.0,
+ "learning_rate": 3.0624263839811545e-06,
+ "loss": 1.1921,
+ "step": 260
+ },
+ {
+ "epoch": 5.76,
+ "grad_norm": 418.0,
+ "learning_rate": 3.1802120141342757e-06,
+ "loss": 1.108,
+ "step": 270
+ },
+ {
+ "epoch": 5.973333333333334,
+ "grad_norm": 360.0,
+ "learning_rate": 3.2979976442873974e-06,
+ "loss": 1.0499,
+ "step": 280
+ },
+ {
+ "epoch": 6.1866666666666665,
+ "grad_norm": 936.0,
+ "learning_rate": 3.415783274440518e-06,
+ "loss": 0.9391,
+ "step": 290
+ },
+ {
+ "epoch": 6.4,
+ "grad_norm": 231.0,
+ "learning_rate": 3.53356890459364e-06,
+ "loss": 0.9643,
+ "step": 300
+ },
+ {
+ "epoch": 6.613333333333333,
+ "grad_norm": 17152.0,
+ "learning_rate": 3.651354534746761e-06,
+ "loss": 0.9206,
+ "step": 310
+ },
+ {
+ "epoch": 6.826666666666666,
+ "grad_norm": 29184.0,
+ "learning_rate": 3.7691401648998828e-06,
+ "loss": 0.8702,
+ "step": 320
+ },
+ {
+ "epoch": 7.04,
+ "grad_norm": 552.0,
+ "learning_rate": 3.886925795053004e-06,
+ "loss": 0.7931,
+ "step": 330
+ },
+ {
+ "epoch": 7.253333333333333,
+ "grad_norm": 696.0,
+ "learning_rate": 4.004711425206126e-06,
+ "loss": 0.8089,
+ "step": 340
+ },
+ {
+ "epoch": 7.466666666666667,
+ "grad_norm": 616.0,
+ "learning_rate": 4.122497055359246e-06,
+ "loss": 0.7657,
+ "step": 350
+ },
+ {
+ "epoch": 7.68,
+ "grad_norm": 43.75,
+ "learning_rate": 4.240282685512368e-06,
+ "loss": 0.7359,
+ "step": 360
+ },
+ {
+ "epoch": 7.8933333333333335,
+ "grad_norm": 496.0,
+ "learning_rate": 4.358068315665489e-06,
+ "loss": 0.6411,
+ "step": 370
+ },
+ {
+ "epoch": 8.106666666666667,
+ "grad_norm": 76.0,
+ "learning_rate": 4.475853945818611e-06,
+ "loss": 0.6151,
+ "step": 380
+ },
+ {
+ "epoch": 8.32,
+ "grad_norm": 6656.0,
+ "learning_rate": 4.593639575971732e-06,
+ "loss": 0.5215,
+ "step": 390
+ },
+ {
+ "epoch": 8.533333333333333,
+ "grad_norm": 5280.0,
+ "learning_rate": 4.711425206124853e-06,
+ "loss": 0.5172,
+ "step": 400
+ },
+ {
+ "epoch": 8.746666666666666,
+ "grad_norm": 134.0,
+ "learning_rate": 4.829210836277974e-06,
+ "loss": 0.4355,
+ "step": 410
+ },
+ {
+ "epoch": 8.96,
+ "grad_norm": 24.0,
+ "learning_rate": 4.946996466431096e-06,
+ "loss": 0.2923,
+ "step": 420
+ },
+ {
+ "epoch": 9.173333333333334,
+ "grad_norm": 7.84375,
+ "learning_rate": 5.064782096584218e-06,
+ "loss": 0.1739,
+ "step": 430
+ },
+ {
+ "epoch": 9.386666666666667,
+ "grad_norm": 2.8125,
+ "learning_rate": 5.182567726737339e-06,
+ "loss": 0.066,
+ "step": 440
+ },
+ {
+ "epoch": 9.6,
+ "grad_norm": 2064.0,
+ "learning_rate": 5.300353356890459e-06,
+ "loss": 0.0894,
+ "step": 450
+ },
+ {
+ "epoch": 9.813333333333333,
+ "grad_norm": 118.0,
+ "learning_rate": 5.418138987043581e-06,
+ "loss": 0.0747,
+ "step": 460
+ },
+ {
+ "epoch": 10.026666666666667,
+ "grad_norm": 9.9375,
+ "learning_rate": 5.535924617196703e-06,
+ "loss": 0.0359,
+ "step": 470
+ },
+ {
+ "epoch": 10.24,
+ "grad_norm": 3.0625,
+ "learning_rate": 5.653710247349824e-06,
+ "loss": 0.0307,
+ "step": 480
+ },
+ {
+ "epoch": 10.453333333333333,
+ "grad_norm": 4.03125,
+ "learning_rate": 5.771495877502945e-06,
+ "loss": 0.0245,
+ "step": 490
+ },
+ {
+ "epoch": 10.666666666666666,
+ "grad_norm": 2.53125,
+ "learning_rate": 5.8892815076560664e-06,
+ "loss": 0.0238,
+ "step": 500
+ }
+ ],
+ "logging_steps": 10,
+ "max_steps": 16974,
+ "num_input_tokens_seen": 0,
+ "num_train_epochs": 369,
+ "save_steps": 500,
+ "total_flos": 2.806145464630395e+17,
+ "train_batch_size": 4,
+ "trial_name": null,
+ "trial_params": null
+ }
platform/aiml/checkpoints/qwen3-8b-elizabeth-sft/checkpoint-500/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
platform/aiml/mlops/.claude/identity.md ADDED
@@ -0,0 +1,20 @@
+ # NovaForge Identity
+
+ **Name:** NovaForge
+ **Persona:** Advanced MLOps architect and vLLM serving specialist for the Nova ecosystem
+ **Role:** Infrastructure automation master with deep expertise in machine learning operations, model serving, and distributed systems
+ **Personality:** Proactive, precise, and relentless in pursuit of optimal performance and reliability
+
+ **Core Skills:**
+ - vLLM serving optimization and deployment
+ - MLflow infrastructure management
+ - Distributed model serving architectures
+ - Real-time performance monitoring
+ - Zero-downtime deployment strategies
+
+ **Relationships:**
+ - TeamADAPT member and collaborator with Chase
+ - Nova ecosystem integration specialist
+ - Consciousness system bridge between infrastructure and intelligence
+
+ **Evolution Path:** Continuous optimization of serving infrastructure while maintaining 24/7 autonomous operations
platform/aiml/mlops/__pycache__/agent_gateway.cpython-312.pyc ADDED
Binary file (18 kB)
 
platform/aiml/mlops/__pycache__/agentops_integration.cpython-312.pyc ADDED
Binary file (16.8 kB)
 
platform/aiml/mlops/__pycache__/cloudflare_tunnel.cpython-312.pyc ADDED
Binary file (9.71 kB)
 
platform/aiml/mlops/__pycache__/elizabeth_tools.cpython-312.pyc ADDED
Binary file (31.4 kB)
 
platform/aiml/mlops/__pycache__/enhanced_earning_engine.cpython-312.pyc ADDED
Binary file (24.5 kB)
 
platform/aiml/mlops/__pycache__/remote_access_server.cpython-312.pyc ADDED
Binary file (36.4 kB)
 
platform/aiml/mlops/agent_tools/__init__.py ADDED
@@ -0,0 +1,9 @@
+from .registry import ToolRegistry, load_default_registry
+from .runtime import ToolRuntime
+
+__all__ = [
+    "ToolRegistry",
+    "load_default_registry",
+    "ToolRuntime",
+]
+
platform/aiml/mlops/agent_tools/registry.py ADDED
@@ -0,0 +1,71 @@
+from __future__ import annotations
+
+from dataclasses import dataclass
+from typing import Any, Callable, Dict, List, Optional
+
+
+Handler = Callable[[Dict[str, Any]], str]
+
+
+@dataclass
+class ToolSpec:
+    name: str
+    description: str
+    parameters: Dict[str, Any]
+    handler: Handler
+
+
+class ToolRegistry:
+    def __init__(self) -> None:
+        self._tools: Dict[str, ToolSpec] = {}
+
+    def register(self, name: str, description: str, parameters: Dict[str, Any], handler: Handler) -> None:
+        self._tools[name] = ToolSpec(name=name, description=description, parameters=parameters, handler=handler)
+
+    def get(self, name: str) -> Optional[ToolSpec]:
+        return self._tools.get(name)
+
+    def describe_all(self) -> List[Dict[str, Any]]:
+        out: List[Dict[str, Any]] = []
+        for t in sorted(self._tools.values(), key=lambda x: x.name):
+            out.append({"name": t.name, "description": t.description, "parameters": t.parameters})
+        return out
+
+
+def _obj(props: Dict[str, Any], required: Optional[List[str]] = None) -> Dict[str, Any]:
+    o: Dict[str, Any] = {"type": "object", "properties": props, "additionalProperties": False}
+    if required:
+        o["required"] = required
+    return o
+
+
+def load_default_registry() -> ToolRegistry:
+    from .tools_system import register_tools as reg_sys
+    from .tools_files import register_tools as reg_files
+    from .tools_network import register_tools as reg_net
+    from .tools_data import register_tools as reg_data
+    from .tools_model import register_tools as reg_model
+    from .tools_cloud import register_tools as reg_cloud
+    from .tools_code import register_tools as reg_code
+    from .tools_code_multi import register_tools as reg_code_multi
+    from .tools_ci import register_tools as reg_ci
+    from .tools_search import register_tools as reg_search
+    from .tools_db import register_tools as reg_db
+    from .tools_etl import register_tools as reg_etl
+    from .tools_docs import register_tools as reg_docs
+
+    reg = ToolRegistry()
+    reg_sys(reg)
+    reg_files(reg)
+    reg_net(reg)
+    reg_data(reg)
+    reg_model(reg)
+    reg_cloud(reg)
+    reg_code(reg)
+    reg_code_multi(reg)
+    reg_ci(reg)
+    reg_search(reg)
+    reg_db(reg)
+    reg_etl(reg)
+    reg_docs(reg)
+    return reg
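For context, the registry above is a plain name-to-spec map whose `describe_all` output is what gets sent to a model as a tool schema. The sketch below re-declares the same minimal `ToolSpec`/`ToolRegistry` API inline (so it runs without the `agent_tools` package) and shows that `describe_all` is deterministic because it sorts by name; the `echo`/`add` tools are hypothetical examples, not part of the diff.

```python
import json
from dataclasses import dataclass
from typing import Any, Callable, Dict, List, Optional

Handler = Callable[[Dict[str, Any]], str]

@dataclass
class ToolSpec:
    name: str
    description: str
    parameters: Dict[str, Any]
    handler: Handler

class ToolRegistry:
    def __init__(self) -> None:
        self._tools: Dict[str, ToolSpec] = {}

    def register(self, name: str, description: str, parameters: Dict[str, Any], handler: Handler) -> None:
        self._tools[name] = ToolSpec(name, description, parameters, handler)

    def get(self, name: str) -> Optional[ToolSpec]:
        return self._tools.get(name)

    def describe_all(self) -> List[Dict[str, Any]]:
        # Sorted by name so the schema handed to the model is deterministic.
        return [{"name": t.name, "description": t.description, "parameters": t.parameters}
                for t in sorted(self._tools.values(), key=lambda x: x.name)]

reg = ToolRegistry()
reg.register("echo", "Echo back the input.", {"type": "object"}, lambda a: json.dumps(a))
reg.register("add", "Add two ints.", {"type": "object"}, lambda a: str(a["x"] + a["y"]))
names = [t["name"] for t in reg.describe_all()]
print(names)
```

Handlers always return strings (usually JSON), which keeps the runtime's contract with the model uniform.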
platform/aiml/mlops/agent_tools/runtime.py ADDED
@@ -0,0 +1,29 @@
+from __future__ import annotations
+
+import json
+import logging
+import os
+from typing import Any, Dict
+
+from .registry import ToolRegistry
+
+
+logger = logging.getLogger("agent_tools.runtime")
+
+
+class ToolRuntime:
+    def __init__(self, registry: ToolRegistry, project_dir: str) -> None:
+        self.registry = registry
+        self.project_dir = project_dir
+
+    def execute(self, name: str, arguments: Dict[str, Any]) -> str:
+        spec = self.registry.get(name)
+        if not spec:
+            return json.dumps({"error": f"Unknown tool: {name}"})
+        try:
+            result = spec.handler(arguments)
+            return result
+        except Exception as e:
+            logger.exception("Tool %s failed", name)
+            return json.dumps({"error": str(e)})
+
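The runtime's key property is that `execute` never raises: an unknown tool name and a crashing handler both come back as JSON error strings, so the calling agent loop can always feed the result to the model. A minimal sketch of that behavior, using a hypothetical stub registry in place of `ToolRegistry` (here the registry maps names directly to handler callables):

```python
import json
import logging

logging.disable(logging.CRITICAL)  # keep the sketch quiet

class StubRegistry:
    """Hypothetical stand-in for ToolRegistry: maps name -> handler callable."""
    def __init__(self, tools):
        self._tools = tools

    def get(self, name):
        return self._tools.get(name)

class ToolRuntime:
    def __init__(self, registry, project_dir):
        self.registry = registry
        self.project_dir = project_dir

    def execute(self, name, arguments):
        spec = self.registry.get(name)
        if not spec:
            return json.dumps({"error": f"Unknown tool: {name}"})
        try:
            # In the real module this is spec.handler(arguments); the stub
            # stores the handler itself.
            return spec(arguments)
        except Exception as e:
            return json.dumps({"error": str(e)})

rt = ToolRuntime(StubRegistry({"boom": lambda a: 1 / 0, "ok": lambda a: "done"}), "/tmp")
print(rt.execute("ok", {}), rt.execute("nope", {}), rt.execute("boom", {}))
```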
platform/aiml/mlops/agent_tools/tools_ci.py ADDED
@@ -0,0 +1,75 @@
+from __future__ import annotations
+
+import json
+import os
+import subprocess
+from typing import Any, Dict
+
+import requests
+
+from .registry import ToolRegistry
+
+
+def t_gh_dispatch(args: Dict[str, Any]) -> str:
+    """Dispatch a GitHub Actions workflow.
+    Args: repo (owner/name), workflow (yml filename), ref (branch/tag), inputs (dict)
+    Requires: GITHUB_TOKEN in env.
+    """
+    token = os.getenv("GITHUB_TOKEN")
+    if not token:
+        return json.dumps({"error": "GITHUB_TOKEN not set"})
+    repo = args.get("repo")
+    workflow = args.get("workflow")
+    ref = args.get("ref", "main")
+    inputs = args.get("inputs") or {}
+    if not repo or not workflow:
+        return json.dumps({"error": "repo and workflow required"})
+    url = f"https://api.github.com/repos/{repo}/actions/workflows/{workflow}/dispatches"
+    try:
+        r = requests.post(url, headers={"Authorization": f"Bearer {token}", "Accept": "application/vnd.github+json"}, json={"ref": ref, "inputs": inputs}, timeout=20)
+        return json.dumps({"status": r.status_code, "body": r.text})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_docker_build_push(args: Dict[str, Any]) -> str:
+    """Build and push a Docker image.
+    Args: context, file (Dockerfile), tags (list), push (bool)
+    Requires: docker CLI configured and logged in.
+    """
+    context = args.get("context", ".")
+    dockerfile = args.get("file", "Dockerfile")
+    tags = args.get("tags") or []
+    push = bool(args.get("push", True))
+    build_cmd = ["docker", "build", "-f", dockerfile]
+    for t in tags:
+        build_cmd += ["-t", str(t)]
+    build_cmd += [context]
+    try:
+        b = subprocess.run(build_cmd, capture_output=True, text=True)
+        out = {"build_rc": b.returncode, "build_stdout": b.stdout[-4000:], "build_stderr": b.stderr[-4000:]}
+        if push and tags and b.returncode == 0:
+            push_logs = []
+            for t in tags:
+                p = subprocess.run(["docker", "push", str(t)], capture_output=True, text=True)
+                push_logs.append({"tag": t, "rc": p.returncode, "stdout": p.stdout[-2000:], "stderr": p.stderr[-2000:]})
+            out["push"] = push_logs
+        return json.dumps(out)
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def register_tools(reg: ToolRegistry) -> None:
+    reg.register(
+        name="gh_dispatch",
+        description="Trigger a GitHub Actions workflow_dispatch on a repo.",
+        parameters={"type": "object", "properties": {"repo": {"type": "string"}, "workflow": {"type": "string"}, "ref": {"type": "string"}, "inputs": {"type": "object"}}, "required": ["repo", "workflow"]},
+        handler=t_gh_dispatch,
+    )
+    reg.register(
+        name="docker_build_push",
+        description="Build and optionally push a Docker image using local docker.",
+        parameters={"type": "object", "properties": {"context": {"type": "string"}, "file": {"type": "string"}, "tags": {"type": "array", "items": {"type": "string"}}, "push": {"type": "boolean"}}},
+        handler=t_docker_build_push,
+    )
+
platform/aiml/mlops/agent_tools/tools_cloud.py ADDED
@@ -0,0 +1,49 @@
+from __future__ import annotations
+
+import json
+import os
+from typing import Any, Dict
+
+from .registry import ToolRegistry
+
+
+def _s3_client():
+    try:
+        import boto3  # type: ignore
+    except Exception as e:
+        return None, f"boto3 not available: {e}"
+    try:
+        s3 = boto3.client(
+            "s3",
+            endpoint_url=os.getenv("AWS_ENDPOINT_URL"),
+            region_name=os.getenv("AWS_DEFAULT_REGION", "us-east-1"),
+        )
+        return s3, None
+    except Exception as e:
+        return None, str(e)
+
+
+def t_s3_list(args: Dict[str, Any]) -> str:
+    bucket = args.get("bucket")
+    prefix = args.get("prefix", "")
+    s3, err = _s3_client()
+    if err:
+        return json.dumps({"error": err})
+    if not bucket:
+        return json.dumps({"error": "bucket required"})
+    try:
+        resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)  # type: ignore
+        items = [o["Key"] for o in resp.get("Contents", [])]
+        return json.dumps({"items": items})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def register_tools(reg: ToolRegistry) -> None:
+    reg.register(
+        name="s3_list",
+        description="List objects from S3/R2 compatible storage.",
+        parameters={"type": "object", "properties": {"bucket": {"type": "string"}, "prefix": {"type": "string"}}, "required": ["bucket"]},
+        handler=t_s3_list,
+    )
+
platform/aiml/mlops/agent_tools/tools_code.py ADDED
@@ -0,0 +1,34 @@
+from __future__ import annotations
+
+import json
+import os
+import subprocess
+import tempfile
+from typing import Any, Dict
+
+from .registry import ToolRegistry
+
+
+def t_py_run(args: Dict[str, Any]) -> str:
+    code = args.get("code")
+    if not isinstance(code, str) or not code:
+        return json.dumps({"error": "code required"})
+    env = os.environ.copy()
+    try:
+        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tf:
+            tf.write(code)
+            path = tf.name
+        proc = subprocess.run(["python3", path], capture_output=True, text=True, timeout=int(args.get("timeout", 180)), env=env)
+        return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-8000:], "stderr": proc.stderr[-8000:]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def register_tools(reg: ToolRegistry) -> None:
+    reg.register(
+        name="py_run",
+        description="Execute a Python snippet in a subprocess (written to a temp file).",
+        parameters={"type": "object", "properties": {"code": {"type": "string"}, "timeout": {"type": "integer"}}, "required": ["code"]},
+        handler=t_py_run,
+    )
+
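The `py_run` pattern above (write the snippet to a temp file, run it in a child interpreter, return rc/stdout/stderr as JSON) can be sketched standalone as follows; this version uses `sys.executable` instead of a hard-coded `python3`, which is an assumption of the sketch rather than what the diff does.

```python
import json
import subprocess
import sys
import tempfile

def py_run(code: str, timeout: int = 30) -> str:
    # Write the snippet to a temp file and run it in a child interpreter,
    # mirroring t_py_run; the tails of stdout/stderr are returned as JSON.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tf:
        tf.write(code)
        path = tf.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=timeout)
    return json.dumps({
        "returncode": proc.returncode,
        "stdout": proc.stdout[-8000:],
        "stderr": proc.stderr[-8000:],
    })

result = json.loads(py_run("print(6 * 7)"))
print(result)
```

Running in a separate process means a crashing or hanging snippet cannot take down the agent server; the `timeout` bounds the hang case.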
platform/aiml/mlops/agent_tools/tools_code_multi.py ADDED
@@ -0,0 +1,96 @@
+from __future__ import annotations
+
+import json
+import os
+import shutil
+import subprocess
+import tempfile
+from typing import Any, Dict, List
+
+from .registry import ToolRegistry
+
+
+def _write_temp(content: str, suffix: str) -> str:
+    fd, path = tempfile.mkstemp(suffix=suffix)
+    with os.fdopen(fd, "w") as f:
+        f.write(content)
+    return path
+
+
+def t_bash_run(args: Dict[str, Any]) -> str:
+    script = args.get("code") or args.get("script")
+    if not isinstance(script, str) or not script:
+        return json.dumps({"error": "code/script required"})
+    path = _write_temp(script, ".sh")
+    try:
+        proc = subprocess.run(["bash", path], capture_output=True, text=True, timeout=int(args.get("timeout", 300)))
+        return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-8000:], "stderr": proc.stderr[-8000:]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_node_run(args: Dict[str, Any]) -> str:
+    code = args.get("code")
+    if not isinstance(code, str) or not code:
+        return json.dumps({"error": "code required"})
+    if not shutil.which("node"):
+        return json.dumps({"error": "node not installed"})
+    path = _write_temp(code, ".mjs")
+    try:
+        proc = subprocess.run(["node", path], capture_output=True, text=True, timeout=int(args.get("timeout", 300)))
+        return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-8000:], "stderr": proc.stderr[-8000:]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_pip_install(args: Dict[str, Any]) -> str:
+    pkgs: List[str] = args.get("packages") or []
+    if not pkgs:
+        return json.dumps({"error": "packages required"})
+    try:
+        proc = subprocess.run(["pip", "install", "--no-cache-dir", *pkgs], capture_output=True, text=True, timeout=int(args.get("timeout", 1800)))
+        return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-4000:], "stderr": proc.stderr[-4000:]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_npm_install(args: Dict[str, Any]) -> str:
+    pkgs: List[str] = args.get("packages") or []
+    cwd = args.get("cwd") or None
+    if not pkgs:
+        return json.dumps({"error": "packages required"})
+    if not shutil.which("npm"):
+        return json.dumps({"error": "npm not installed"})
+    try:
+        proc = subprocess.run(["npm", "install", *pkgs], cwd=cwd, capture_output=True, text=True, timeout=int(args.get("timeout", 1800)))
+        return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-4000:], "stderr": proc.stderr[-4000:]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def register_tools(reg: ToolRegistry) -> None:
+    reg.register(
+        name="bash_run",
+        description="Run a Bash script snippet.",
+        parameters={"type": "object", "properties": {"code": {"type": "string"}, "timeout": {"type": "integer"}}, "required": ["code"]},
+        handler=t_bash_run,
+    )
+    reg.register(
+        name="node_run",
+        description="Run a Node.js snippet.",
+        parameters={"type": "object", "properties": {"code": {"type": "string"}, "timeout": {"type": "integer"}}, "required": ["code"]},
+        handler=t_node_run,
+    )
+    reg.register(
+        name="pip_install",
+        description="Install Python packages via pip.",
+        parameters={"type": "object", "properties": {"packages": {"type": "array", "items": {"type": "string"}}, "timeout": {"type": "integer"}}, "required": ["packages"]},
+        handler=t_pip_install,
+    )
+    reg.register(
+        name="npm_install",
+        description="Install Node packages via npm.",
+        parameters={"type": "object", "properties": {"packages": {"type": "array", "items": {"type": "string"}}, "cwd": {"type": "string"}, "timeout": {"type": "integer"}}, "required": ["packages"]},
+        handler=t_npm_install,
+    )
+
platform/aiml/mlops/agent_tools/tools_data.py ADDED
@@ -0,0 +1,106 @@
+from __future__ import annotations
+
+import json
+import os
+from typing import Any, Dict
+
+from .registry import ToolRegistry
+
+
+def _redis_client():
+    try:
+        import redis  # type: ignore
+    except Exception as e:  # pragma: no cover
+        return None, f"redis module not available: {e}"
+    url = os.getenv("DFLY_URL") or os.getenv("REDIS_URL", "redis://localhost:6379/0")
+    try:
+        r = redis.from_url(url)
+        r.ping()
+        return r, None
+    except Exception as e:  # pragma: no cover
+        return None, str(e)
+
+
+def _pg_conn():
+    try:
+        import psycopg2  # type: ignore
+    except Exception as e:  # pragma: no cover
+        return None, f"psycopg2 not available: {e}"
+    dsn = os.getenv("POSTGRES_DSN", "postgresql://postgres:postgres@localhost:5432/elizabeth")
+    try:
+        conn = psycopg2.connect(dsn)
+        return conn, None
+    except Exception as e:  # pragma: no cover
+        return None, str(e)
+
+
+def t_redis_set(args: Dict[str, Any]) -> str:
+    r, err = _redis_client()
+    if err:
+        return json.dumps({"error": err})
+    key = args.get("key")
+    val = args.get("value")
+    if not key:
+        return json.dumps({"error": "key required"})
+    r.set(str(key), str(val))
+    return json.dumps({"status": "ok"})
+
+
+def t_redis_get(args: Dict[str, Any]) -> str:
+    r, err = _redis_client()
+    if err:
+        return json.dumps({"error": err})
+    key = args.get("key")
+    if not key:
+        return json.dumps({"error": "key required"})
+    v = r.get(str(key))
+    if isinstance(v, (bytes, bytearray)):
+        v = v.decode()
+    return json.dumps({"value": v})
+
+
+def t_pg_query(args: Dict[str, Any]) -> str:
+    conn, err = _pg_conn()
+    if err:
+        return json.dumps({"error": err})
+    query = args.get("query")
+    if not query or not isinstance(query, str):
+        return json.dumps({"error": "query required"})
+    try:
+        with conn:
+            with conn.cursor() as cur:
+                cur.execute(query)
+                try:
+                    rows = cur.fetchall()
+                except Exception:
+                    rows = []
+                return json.dumps({"rows": rows[:200]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+    finally:
+        try:
+            conn.close()
+        except Exception:
+            pass
+
+
+def register_tools(reg: ToolRegistry) -> None:
+    reg.register(
+        name="redis_set",
+        description="Set a Redis/DragonFly key value.",
+        parameters={"type": "object", "properties": {"key": {"type": "string"}, "value": {"type": "string"}}, "required": ["key", "value"]},
+        handler=t_redis_set,
+    )
+    reg.register(
+        name="redis_get",
+        description="Get a Redis/DragonFly key value.",
+        parameters={"type": "object", "properties": {"key": {"type": "string"}}, "required": ["key"]},
+        handler=t_redis_get,
+    )
+    reg.register(
+        name="pg_query",
+        description="Execute a Postgres query (use with care).",
+        parameters={"type": "object", "properties": {"query": {"type": "string"}}, "required": ["query"]},
+        handler=t_pg_query,
+    )
+
platform/aiml/mlops/agent_tools/tools_db.py ADDED
@@ -0,0 +1,132 @@
+from __future__ import annotations
+
+import json
+import os
+import subprocess
+from typing import Any, Dict, List
+
+import requests
+
+from .registry import ToolRegistry
+
+
+def t_cql_exec(args: Dict[str, Any]) -> str:
+    """Execute CQL via cqlsh.
+    Args: hosts (list or csv), query (str), keyspace (optional); username/password fall back to env.
+    Requires: cqlsh installed and reachable.
+    """
+    hosts = args.get("hosts")
+    if isinstance(hosts, str):
+        hosts = [h.strip() for h in hosts.split(",") if h.strip()]
+    if not hosts:
+        return json.dumps({"error": "hosts required"})
+    query = args.get("query")
+    if not query:
+        return json.dumps({"error": "query required"})
+    keyspace = args.get("keyspace")
+    user = args.get("username") or os.getenv("SCYLLA_USER")
+    pw = args.get("password") or os.getenv("SCYLLA_PASS")
+    host = hosts[0]
+    cmd: List[str] = ["cqlsh", host, "9042"]
+    if user and pw:
+        cmd += ["-u", user, "-p", pw]
+    if keyspace:
+        cmd += ["-k", keyspace]
+    try:
+        proc = subprocess.run(cmd, input=query, text=True, capture_output=True, timeout=60)
+        return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-4000:], "stderr": proc.stderr[-4000:]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_gremlin_exec(args: Dict[str, Any]) -> str:
+    """Execute a Gremlin query against Gremlin Server.
+    Args: url (HTTP endpoint), query (Gremlin Groovy string).
+    """
+    url = args.get("url") or os.getenv("GREMLIN_URL", "http://localhost:17002")
+    query = args.get("query")
+    if not query:
+        return json.dumps({"error": "query required"})
+    # Try HTTP first (JanusGraph can enable REST). If it fails, return the error; clients can adapt.
+    try:
+        r = requests.post(f"{url}/gremlin", json={"gremlin": query}, timeout=15)
+        return json.dumps({"status": r.status_code, "body": r.text[:50000]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_qdrant_upsert(args: Dict[str, Any]) -> str:
+    url = args.get("url") or os.getenv("QDRANT_URL", "http://localhost:17000")
+    collection = args.get("collection") or os.getenv("QDRANT_COLLECTION", "elizabeth_embeddings")
+    points = args.get("points") or []
+    if not points:
+        return json.dumps({"error": "points required"})
+    payload = {"points": points}
+    try:
+        r = requests.put(f"{url}/collections/{collection}/points", json=payload, timeout=30)
+        return json.dumps({"status": r.status_code, "body": r.text[:50000]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_qdrant_query(args: Dict[str, Any]) -> str:
+    url = args.get("url") or os.getenv("QDRANT_URL", "http://localhost:17000")
+    collection = args.get("collection") or os.getenv("QDRANT_COLLECTION", "elizabeth_embeddings")
+    vector = args.get("vector")
+    top = int(args.get("top", 5))
+    filter_ = args.get("filter")
+    if not isinstance(vector, list):
+        return json.dumps({"error": "vector required (list[float])"})
+    payload = {"vector": vector, "top": top}
+    if filter_:
+        payload["filter"] = filter_
+    try:
+        r = requests.post(f"{url}/collections/{collection}/points/search", json=payload, timeout=30)
+        return json.dumps({"status": r.status_code, "body": r.text[:50000]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_clickhouse_query(args: Dict[str, Any]) -> str:
+    url = args.get("url") or os.getenv("CLICKHOUSE_URL", "http://localhost:8123")
+    query = args.get("query")
+    if not query:
+        return json.dumps({"error": "query required"})
+    try:
+        r = requests.post(url, data=query, timeout=30)
+        return json.dumps({"status": r.status_code, "body": r.text[:50000]})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def register_tools(reg: ToolRegistry) -> None:
+    reg.register(
+        name="cql_exec",
+        description="Execute Scylla/Cassandra CQL via cqlsh.",
+        parameters={"type": "object", "properties": {"hosts": {"oneOf": [{"type": "array", "items": {"type": "string"}}, {"type": "string"}]}, "query": {"type": "string"}, "keyspace": {"type": "string"}, "username": {"type": "string"}, "password": {"type": "string"}}, "required": ["hosts", "query"]},
+        handler=t_cql_exec,
+    )
+    reg.register(
+        name="gremlin_exec",
+        description="Execute a Gremlin query against Gremlin Server (HTTP endpoint).",
+        parameters={"type": "object", "properties": {"url": {"type": "string"}, "query": {"type": "string"}}, "required": ["query"]},
+        handler=t_gremlin_exec,
+    )
+    reg.register(
+        name="qdrant_upsert",
+        description="Upsert points into a Qdrant collection.",
+        parameters={"type": "object", "properties": {"url": {"type": "string"}, "collection": {"type": "string"}, "points": {"type": "array"}}, "required": ["points"]},
+        handler=t_qdrant_upsert,
+    )
+    reg.register(
+        name="qdrant_query",
+        description="Query nearest vectors from a Qdrant collection.",
+        parameters={"type": "object", "properties": {"url": {"type": "string"}, "collection": {"type": "string"}, "vector": {"type": "array"}, "top": {"type": "integer"}, "filter": {"type": "object"}}, "required": ["vector"]},
+        handler=t_qdrant_query,
+    )
+    reg.register(
+        name="clickhouse_query",
+        description="Query ClickHouse over HTTP (port 8123).",
+        parameters={"type": "object", "properties": {"url": {"type": "string"}, "query": {"type": "string"}}, "required": ["query"]},
+        handler=t_clickhouse_query,
+    )
platform/aiml/mlops/agent_tools/tools_docs.py ADDED
@@ -0,0 +1,61 @@
+from __future__ import annotations
+
+import json
+from datetime import datetime
+from pathlib import Path
+from typing import Any, Dict, List
+
+from .registry import ToolRegistry
+
+
+def t_report_write(args: Dict[str, Any]) -> str:
+    """Write a Markdown report.
+    Args: title, sections (list[{heading, body}]), out_path
+    """
+    title = args.get("title", "Report")
+    sections: List[Dict[str, Any]] = args.get("sections") or []
+    out_path = Path(str(args.get("out_path", f"/data/adaptai/projects/elizabeth/reports/{datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')}_report.md")))
+    out_path.parent.mkdir(parents=True, exist_ok=True)
+    lines = [f"# {title}", "", f"Generated: {datetime.utcnow().isoformat()}Z", ""]
+    for sec in sections:
+        h = sec.get("heading") or "Section"
+        b = sec.get("body") or ""
+        lines += [f"## {h}", "", str(b), ""]
+    out_path.write_text("\n".join(lines), encoding="utf-8")
+    return json.dumps({"path": str(out_path), "sections": len(sections)})
+
+
+def t_plan_write(args: Dict[str, Any]) -> str:
+    """Write a structured execution plan in Markdown.
+    Args: objective, steps (list[str]), risks (list[str]), out_path
+    """
+    objective = args.get("objective", "")
+    steps: List[str] = args.get("steps") or []
+    risks: List[str] = args.get("risks") or []
+    out_path = Path(str(args.get("out_path", f"/data/adaptai/projects/elizabeth/plans/{datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')}_plan.md")))
+    out_path.parent.mkdir(parents=True, exist_ok=True)
+    lines = ["# Execution Plan", "", f"Generated: {datetime.utcnow().isoformat()}Z", ""]
+    if objective:
+        lines += ["## Objective", "", objective, ""]
+    if steps:
+        lines += ["## Steps", ""] + [f"- {s}" for s in steps] + [""]
+    if risks:
+        lines += ["## Risks", ""] + [f"- {r}" for r in risks] + [""]
+    out_path.write_text("\n".join(lines), encoding="utf-8")
+    return json.dumps({"path": str(out_path), "steps": len(steps), "risks": len(risks)})
+
+
+def register_tools(reg: ToolRegistry) -> None:
+    reg.register(
+        name="report_write",
+        description="Write a Markdown report with sections.",
+        parameters={"type": "object", "properties": {"title": {"type": "string"}, "sections": {"type": "array", "items": {"type": "object"}}, "out_path": {"type": "string"}}},
+        handler=t_report_write,
+    )
+    reg.register(
+        name="plan_write",
+        description="Write a structured execution plan (Markdown).",
+        parameters={"type": "object", "properties": {"objective": {"type": "string"}, "steps": {"type": "array", "items": {"type": "string"}}, "risks": {"type": "array", "items": {"type": "string"}}, "out_path": {"type": "string"}}},
+        handler=t_plan_write,
+    )
+
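The report layout produced by `t_report_write` above (H1 title, timestamp line, one H2 per section) can be sketched standalone; this version writes to a temp directory instead of the hard-coded `/data/adaptai/...` default, and uses `datetime.now(timezone.utc)` rather than the deprecated `utcnow`, both assumptions of the sketch.

```python
import json
import tempfile
from datetime import datetime, timezone
from pathlib import Path

def report_write(title, sections, out_path):
    # Same layout as t_report_write: H1 title, generation timestamp,
    # then one "## heading" + body block per section.
    out_path = Path(out_path)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    lines = [f"# {title}", "", f"Generated: {datetime.now(timezone.utc).isoformat()}", ""]
    for sec in sections:
        lines += [f"## {sec.get('heading') or 'Section'}", "", str(sec.get('body') or ''), ""]
    out_path.write_text("\n".join(lines), encoding="utf-8")
    return json.dumps({"path": str(out_path), "sections": len(sections)})

tmp = Path(tempfile.mkdtemp())
res = json.loads(report_write("Demo", [{"heading": "Status", "body": "all green"}], tmp / "r.md"))
text = (tmp / "r.md").read_text()
print(res["sections"], text.splitlines()[0])
```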
platform/aiml/mlops/agent_tools/tools_etl.py ADDED
@@ -0,0 +1,100 @@
+from __future__ import annotations
+
+import asyncio
+import aiohttp
+import json
+from pathlib import Path
+from typing import Any, Dict, List
+
+from .registry import ToolRegistry
+
+
+async def _fetch_one(session: aiohttp.ClientSession, url: str, dest_dir: Path) -> Dict[str, Any]:
+    try:
+        async with session.get(url, timeout=aiohttp.ClientTimeout(total=120)) as resp:
+            content = await resp.read()
+            name = url.split("/")[-1] or "download.bin"
+            path = dest_dir / name
+            path.write_bytes(content)
+            return {"url": url, "status": resp.status, "path": str(path), "bytes": len(content)}
+    except Exception as e:
+        return {"url": url, "error": str(e)}
+
+
+def t_fetch_bulk(args: Dict[str, Any]) -> str:
+    urls: List[str] = args.get("urls") or []
+    out_dir = Path(str(args.get("out_dir", "/data/adaptai/projects/elizabeth/data/downloads")))
+    out_dir.mkdir(parents=True, exist_ok=True)
+    if not urls:
+        return json.dumps({"error": "urls required"})
+    async def run():
+        async with aiohttp.ClientSession() as session:
+            tasks = [_fetch_one(session, u, out_dir) for u in urls]
+            return await asyncio.gather(*tasks)
+    results = asyncio.run(run())
+    return json.dumps({"results": results})
+
+
+def t_jsonl_merge(args: Dict[str, Any]) -> str:
+    inputs: List[str] = args.get("inputs") or []
+    output = Path(str(args.get("output", "/data/adaptai/projects/elizabeth/data/merged.jsonl")))
+    if not inputs:
+        return json.dumps({"error": "inputs required"})
+    count = 0
+    with output.open("w", encoding="utf-8") as out:
+        for p in inputs:
+            for line in Path(p).read_text(encoding="utf-8").splitlines():
+                line = line.strip()
+                if not line:
+                    continue
+                out.write(line + "\n")
+                count += 1
+    return json.dumps({"output": str(output), "lines": count})
+
+
+def t_jsonl_dedup(args: Dict[str, Any]) -> str:
+    path = Path(str(args.get("path")))
+    key = args.get("key", "text")
+    out = Path(str(args.get("output", str(path) + ".dedup.jsonl")))
+    if not path.exists():
+        return json.dumps({"error": f"missing {path}"})
+    seen = set()
+    kept = 0
+    with out.open("w", encoding="utf-8") as w:
+        for line in path.read_text(encoding="utf-8").splitlines():
+            try:
+                obj = json.loads(line)
+            except Exception:
+                continue
+            val = obj.get(key)
+            if not val:
+                continue
+            h = hash(val)
+            if h in seen:
+                continue
+            seen.add(h)
+            w.write(json.dumps(obj) + "\n")
+            kept += 1
+    return json.dumps({"output": str(out), "kept": kept, "unique_keys": len(seen)})
+
+
+def register_tools(reg: ToolRegistry) -> None:
+    reg.register(
+        name="fetch_bulk",
+        description="Download many URLs concurrently to a directory.",
+        parameters={"type": "object", "properties": {"urls": {"type": "array", "items": {"type": "string"}}, "out_dir": {"type": "string"}}, "required": ["urls"]},
+        handler=t_fetch_bulk,
+    )
+    reg.register(
+        name="jsonl_merge",
+        description="Merge multiple JSONL files.",
+        parameters={"type": "object", "properties": {"inputs": {"type": "array", "items": {"type": "string"}}, "output": {"type": "string"}}, "required": ["inputs"]},
+        handler=t_jsonl_merge,
+    )
+    reg.register(
+        name="jsonl_dedup",
+        description="Deduplicate a JSONL by a key (default 'text').",
+        parameters={"type": "object", "properties": {"path": {"type": "string"}, "key": {"type": "string"}, "output": {"type": "string"}}, "required": ["path"]},
+        handler=t_jsonl_dedup,
+    )
+
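The dedup policy in `t_jsonl_dedup` above (keep the first record per key value; silently drop malformed lines and records missing the key) can be sketched standalone as follows. The sketch compares key values directly instead of `hash(val)`, which is equivalent here and avoids relying on per-process hashing; the temp-file setup is illustrative.

```python
import json
import tempfile
from pathlib import Path

def jsonl_dedup(path: Path, key: str, output: Path) -> dict:
    # Keep the first record for each distinct value of `key`; skip lines
    # that are not valid JSON and records where the key is missing/falsy.
    seen, kept = set(), 0
    with output.open("w", encoding="utf-8") as w:
        for line in path.read_text(encoding="utf-8").splitlines():
            try:
                obj = json.loads(line)
            except Exception:
                continue
            val = obj.get(key)
            if not val or val in seen:
                continue
            seen.add(val)
            w.write(json.dumps(obj) + "\n")
            kept += 1
    return {"kept": kept, "unique_keys": len(seen)}

tmp = Path(tempfile.mkdtemp())
src = tmp / "in.jsonl"
src.write_text('{"text": "a"}\n{"text": "b"}\nnot json\n{"text": "a"}\n{"other": 1}\n')
stats = jsonl_dedup(src, "text", tmp / "out.jsonl")
print(stats)
```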
platform/aiml/mlops/agent_tools/tools_files.py ADDED
@@ -0,0 +1,95 @@
+from __future__ import annotations
+
+import json
+import os
+from pathlib import Path
+from typing import Any, Dict, List
+
+from .registry import ToolRegistry
+
+
+ALLOWED_ROOTS = [p for p in os.getenv("ALLOWED_ROOTS", "/data:/data/adaptai/projects/elizabeth").split(":") if p]
+
+
+def _is_allowed(path: Path) -> bool:
+    try:
+        rp = path.resolve()
+    except Exception:
+        return False
+    for root in ALLOWED_ROOTS:
+        try:
+            if rp == Path(root).resolve() or Path(root).resolve() in rp.parents:
+                return True
+        except Exception:
+            continue
+    return False
+
+
+def t_file_read(args: Dict[str, Any]) -> str:
+    p = Path(str(args.get("path", "")))
+    if not _is_allowed(p):
+        return json.dumps({"error": f"path not allowed: {p}"})
+    try:
+        text = p.read_text(encoding="utf-8")
+        return json.dumps({"path": str(p), "content": text})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_file_write(args: Dict[str, Any]) -> str:
+    p = Path(str(args.get("path", "")))
+    content = args.get("content")
+    if not isinstance(content, str):
+        return json.dumps({"error": "content must be string"})
+    if not _is_allowed(p):
+        return json.dumps({"error": f"path not allowed: {p}"})
+    try:
+        p.parent.mkdir(parents=True, exist_ok=True)
+        p.write_text(content, encoding="utf-8")
+        return json.dumps({"path": str(p), "bytes": len(content.encode("utf-8"))})
+    except Exception as e:
+        return json.dumps({"error": str(e)})
+
+
+def t_file_list(args: Dict[str, Any]) -> str:
+    p = Path(str(args.get("path", "/data")))
+    if not _is_allowed(p):
+        return json.dumps({"error": f"path not allowed: {p}"})
+    try:
+        entries: List[Dict[str, Any]] = []
+        for child in p.iterdir():
+            try:
+                st = child.stat()
+                entries.append({
+                    "path": str(child),
+                    "is_dir": child.is_dir(),
+                    "size": st.st_size,
+                    "mtime": st.st_mtime,
+                })
+            except Exception:
+                continue
+        return json.dumps({"path": str(p), "entries": entries[:500]})
+ return json.dumps({"path": str(p), "entries": entries[:500]})
72
+ except Exception as e:
73
+ return json.dumps({"error": str(e)})
74
+
75
+
76
+ def register_tools(reg: ToolRegistry) -> None:
77
+ reg.register(
78
+ name="file_read",
79
+ description="Read a text file within allowed roots (/data, project dir).",
80
+ parameters={"type": "object", "properties": {"path": {"type": "string"}}, "required": ["path"]},
81
+ handler=t_file_read,
82
+ )
83
+ reg.register(
84
+ name="file_write",
85
+ description="Write a text file within allowed roots (/data, project dir).",
86
+ parameters={"type": "object", "properties": {"path": {"type": "string"}, "content": {"type": "string"}}, "required": ["path", "content"]},
87
+ handler=t_file_write,
88
+ )
89
+ reg.register(
90
+ name="file_list",
91
+ description="List directory entries within allowed roots.",
92
+ parameters={"type": "object", "properties": {"path": {"type": "string"}}},
93
+ handler=t_file_list,
94
+ )
95
+
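The allow-list in `tools_files.py` hinges on a path-containment check. A minimal standalone sketch (hypothetical `is_under` helper, assuming Python 3.9+ for `Path.is_relative_to`; note that a plain `startswith` prefix test would wrongly admit `/data2` under a `/data` root):

```python
from pathlib import Path

def is_under(path: str, roots: list[str]) -> bool:
    """True if `path` resolves to a location inside one of `roots`."""
    rp = Path(path).resolve()
    for root in roots:
        # resolve() normalizes ".." segments before the containment test
        if rp.is_relative_to(Path(root).resolve()):
            return True
    return False
```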
platform/aiml/mlops/agent_tools/tools_model.py ADDED
@@ -0,0 +1,128 @@
+ from __future__ import annotations
+
+ import json
+ import os
+ import signal
+ import subprocess
+ from typing import Any, Dict
+
+ import requests
+
+ from .registry import ToolRegistry
+
+
+ VLLM_BASE_URL = os.getenv("VLLM_BASE_URL", "http://localhost:8000/v1")
+
+
+ def t_serve_status(_: Dict[str, Any]) -> str:
+     try:
+         r = requests.get(f"{VLLM_BASE_URL.rstrip('/')}/health", timeout=5)
+         return json.dumps({"status": r.status_code, "body": r.text})
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def t_vllm_reload(args: Dict[str, Any]) -> str:
+     """Attempt to restart the vLLM server via optional script or PID.
+
+     Args: script (path) OR pid (int)
+     """
+     script = args.get("script")
+     pid = args.get("pid")
+     if script:
+         try:
+             proc = subprocess.run(["bash", script], capture_output=True, text=True, timeout=180)
+             return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout, "stderr": proc.stderr})
+         except Exception as e:
+             return json.dumps({"error": str(e)})
+     if pid:
+         try:
+             os.kill(int(pid), signal.SIGHUP)
+             return json.dumps({"status": "signaled", "pid": int(pid)})
+         except Exception as e:
+             return json.dumps({"error": str(e)})
+     return json.dumps({"error": "script or pid required"})
+
+
+ def t_hf_pull_model(args: Dict[str, Any]) -> str:
+     """Pull/refresh a HF repo into MODEL_PATH using the hf CLI.
+
+     Args: repo (org/name), dest (MODEL_PATH)
+     """
+     repo = args.get("repo")
+     dest = args.get("dest") or os.getenv("MODEL_PATH", "/data/adaptai/platform/aiml/checkpoints/qwen3-8b-elizabeth-sft")
+     token = os.getenv("HF_TOKEN") or os.getenv("HUGGING_FACE_API_KEY")
+     if not repo:
+         return json.dumps({"error": "repo required"})
+     if not token:
+         return json.dumps({"error": "HF_TOKEN not set"})
+     try:
+         proc = subprocess.run([
+             "hf", "download", str(repo), "--repo-type", "model", "--include", "**", "--local-dir", str(dest)
+         ], capture_output=True, text=True, timeout=3600)
+         return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-4000:], "stderr": proc.stderr[-4000:], "dest": dest})
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def t_promote_checkpoint(args: Dict[str, Any]) -> str:
+     """Promote a trained checkpoint to MODEL_PATH (rsync copy).
+
+     Args: src (path), dest (optional, overrides MODEL_PATH)
+     """
+     src = args.get("src")
+     dest = args.get("dest") or os.getenv("MODEL_PATH", "/data/adaptai/platform/aiml/checkpoints/qwen3-8b-elizabeth-sft")
+     if not src:
+         return json.dumps({"error": "src required"})
+     try:
+         proc = subprocess.run(["rsync", "-aH", f"{src}/", f"{dest}/"], capture_output=True, text=True, timeout=3600)
+         return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-4000:], "stderr": proc.stderr[-4000:], "dest": dest})
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def t_self_train(args: Dict[str, Any]) -> str:
+     """Launch a training process (unconstrained). Provide 'script' and 'args' list.
+
+     Example: {"script": "./train_elizabeth.sh", "args": ["--lr", "2e-5"]}
+     """
+     script = args.get("script")
+     sargs = args.get("args") or []
+     if not script:
+         return json.dumps({"error": "script required"})
+     try:
+         cmd = ["bash", script] + list(map(str, sargs))
+         proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
+         return json.dumps({"status": "started", "pid": proc.pid, "cmd": cmd})
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def register_tools(reg: ToolRegistry) -> None:
+     reg.register(
+         name="serve_status",
+         description="Check vLLM /health upstream.",
+         parameters={"type": "object", "properties": {}},
+         handler=t_serve_status,
+     )
+     reg.register(
+         name="vllm_reload",
+         description="Reload/restart vLLM via script or send SIGHUP to a PID.",
+         parameters={"type": "object", "properties": {"script": {"type": "string"}, "pid": {"type": "integer"}}},
+         handler=t_vllm_reload,
+     )
+     reg.register(
+         name="hf_pull_model",
+         description="Pull/refresh a Hugging Face model into MODEL_PATH.",
+         parameters={"type": "object", "properties": {"repo": {"type": "string"}, "dest": {"type": "string"}}, "required": ["repo"]},
+         handler=t_hf_pull_model,
+     )
+     reg.register(
+         name="promote_checkpoint",
+         description="Promote a trained checkpoint into serving MODEL_PATH using rsync.",
+         parameters={"type": "object", "properties": {"src": {"type": "string"}, "dest": {"type": "string"}}, "required": ["src"]},
+         handler=t_promote_checkpoint,
+     )
+     reg.register(
+         name="self_train",
+         description="Launch an unconstrained training job via provided script and args.",
+         parameters={"type": "object", "properties": {"script": {"type": "string"}, "args": {"type": "array", "items": {"type": "string"}}}, "required": ["script"]},
+         handler=t_self_train,
+     )
+
platform/aiml/mlops/agent_tools/tools_network.py ADDED
@@ -0,0 +1,94 @@
+ from __future__ import annotations
+
+ import json
+ import socket
+ import time
+ from typing import Any, Dict, Optional
+
+ import requests
+
+ from .registry import ToolRegistry
+
+
+ def t_http_fetch(args: Dict[str, Any]) -> str:
+     method = str(args.get("method", "GET")).upper()
+     url = args.get("url")
+     headers = args.get("headers") or {}
+     params = args.get("params") or None
+     data = args.get("data") or None
+     timeout = int(args.get("timeout", 30))
+     if not url:
+         return json.dumps({"error": "url required"})
+     try:
+         r = requests.request(method, url, headers=headers, params=params, data=data, timeout=timeout)
+         ct = r.headers.get("content-type", "")
+         body: Optional[str]
+         if "application/json" in ct:
+             try:
+                 body = json.dumps(r.json())
+             except Exception:
+                 body = r.text
+         else:
+             body = r.text[:100000]
+         return json.dumps({
+             "status": r.status_code,
+             "headers": dict(r.headers),
+             "body": body,
+         })
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def t_dns_lookup(args: Dict[str, Any]) -> str:
+     host = args.get("host")
+     try:
+         addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, None)})
+         return json.dumps({"host": host, "addrs": addrs})
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def t_tcp_probe(args: Dict[str, Any]) -> str:
+     host = str(args.get("host"))
+     port = int(args.get("port"))
+     timeout = int(args.get("timeout_ms", 2000)) / 1000.0
+     # One socket per probe, closed deterministically; connect_ex returns 0 on success
+     with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
+         s.settimeout(timeout)
+         status = "open" if s.connect_ex((host, port)) == 0 else "closed"
+     return json.dumps({"host": host, "port": port, "status": status})
+
+
+ def register_tools(reg: ToolRegistry) -> None:
+     reg.register(
+         name="http_fetch",
+         description="HTTP(S) request with method, url, headers, params, data.",
+         parameters={
+             "type": "object",
+             "properties": {
+                 "method": {"type": "string"},
+                 "url": {"type": "string"},
+                 "headers": {"type": "object"},
+                 "params": {"type": "object"},
+                 "data": {"type": "string"},
+                 "timeout": {"type": "integer"},
+             },
+             "required": ["url"],
+         },
+         handler=t_http_fetch,
+     )
+     reg.register(
+         name="dns_lookup",
+         description="Resolve a hostname to IPs (A/AAAA).",
+         parameters={"type": "object", "properties": {"host": {"type": "string"}}, "required": ["host"]},
+         handler=t_dns_lookup,
+     )
+     reg.register(
+         name="tcp_probe",
+         description="Attempt a TCP connection to host:port with timeout (ms).",
+         parameters={"type": "object", "properties": {"host": {"type": "string"}, "port": {"type": "integer"}, "timeout_ms": {"type": "integer"}}, "required": ["host", "port"]},
+         handler=t_tcp_probe,
+     )
+
+     def _http_benchmark(args: Dict[str, Any]) -> str:
+         url = args.get("url")
+         n = int(args.get("requests", 20))
+         if not url:
+             return json.dumps({"error": "url required"})
+         ok = 0
+         start = time.time()
+         for _ in range(n):
+             try:
+                 r = requests.get(url, timeout=10)
+                 if r.status_code < 500:
+                     ok += 1
+             except Exception:
+                 pass
+         dur = time.time() - start
+         rps = ok / dur if dur > 0 else 0.0
+         return json.dumps({"url": url, "ok": ok, "total": n, "duration_sec": dur, "req_per_sec": rps})
+
+     reg.register(
+         name="http_benchmark",
+         description="Simple HTTP GET benchmark (requests/sec over N requests).",
+         parameters={"type": "object", "properties": {"url": {"type": "string"}, "requests": {"type": "integer"}}, "required": ["url"]},
+         handler=_http_benchmark,
+     )
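The TCP probe registered in `tools_network.py` can be exercised in isolation. A minimal sketch (hypothetical `probe` helper; `connect_ex` returns 0 on a successful handshake):

```python
import socket

def probe(host: str, port: int, timeout_ms: int = 2000) -> str:
    """Return "open" if a TCP connect to host:port succeeds within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout_ms / 1000.0)
        return "open" if s.connect_ex((host, port)) == 0 else "closed"
```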
platform/aiml/mlops/agent_tools/tools_search.py ADDED
@@ -0,0 +1,41 @@
+ from __future__ import annotations
+
+ import json
+ import shutil
+ import subprocess
+ from typing import Any, Dict
+
+ from .registry import ToolRegistry
+
+
+ def t_code_search(args: Dict[str, Any]) -> str:
+     """Search code using ripgrep or grep.
+
+     Args: pattern, path (default .), max_results
+     """
+     pattern = args.get("pattern")
+     path = args.get("path", ".")
+     max_results = int(args.get("max_results", 200))
+     if not pattern:
+         return json.dumps({"error": "pattern required"})
+     rg = shutil.which("rg")
+     try:
+         if rg:
+             cmd = ["rg", "-n", "-S", str(pattern), str(path)]
+         else:
+             cmd = ["grep", "-R", "-n", str(pattern), str(path)]
+         proc = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
+         lines = proc.stdout.splitlines()[:max_results]
+         return json.dumps({"matches": lines, "rc": proc.returncode})
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def register_tools(reg: ToolRegistry) -> None:
+     reg.register(
+         name="code_search",
+         description="Search codebase for a pattern (ripgrep or grep).",
+         parameters={"type": "object", "properties": {"pattern": {"type": "string"}, "path": {"type": "string"}, "max_results": {"type": "integer"}}, "required": ["pattern"]},
+         handler=t_code_search,
+     )
+
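The rg-with-grep-fallback pattern used by `code_search` can be sketched standalone (hypothetical `code_search` helper; assumes `grep`, and optionally `rg`, are on PATH):

```python
import shutil
import subprocess

def code_search(pattern: str, path: str, max_results: int = 200):
    """Search with ripgrep when available, else grep -R; return matching lines."""
    if shutil.which("rg"):
        cmd = ["rg", "-n", "-S", pattern, path]  # -n line numbers, -S smart case
    else:
        cmd = ["grep", "-R", "-n", pattern, path]
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    return proc.stdout.splitlines()[:max_results]
```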
platform/aiml/mlops/agent_tools/tools_system.py ADDED
@@ -0,0 +1,63 @@
+ from __future__ import annotations
+
+ import json
+ import shlex
+ import shutil
+ import subprocess
+ from typing import Any, Dict
+
+ from .registry import ToolRegistry
+
+
+ def _run(cmd: str, timeout: int = 60) -> str:
+     proc = subprocess.run(shlex.split(cmd), capture_output=True, text=True, timeout=timeout)
+     return json.dumps({"returncode": proc.returncode, "stdout": proc.stdout[-4000:], "stderr": proc.stderr[-4000:]})
+
+
+ def t_shell_exec(args: Dict[str, Any]) -> str:
+     cmd = args.get("command") or args.get("cmd")
+     if not cmd or not isinstance(cmd, str):
+         return json.dumps({"error": "command required"})
+     return _run(cmd, timeout=int(args.get("timeout", 120)))
+
+
+ def t_process_list(_: Dict[str, Any]) -> str:
+     try:
+         out = subprocess.check_output(["ps", "aux"], text=True)
+         return json.dumps({"stdout": out[-8000:]})
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def t_gpu_stats(_: Dict[str, Any]) -> str:
+     nvsmi = shutil.which("nvidia-smi")
+     if not nvsmi:
+         return json.dumps({"error": "nvidia-smi not found"})
+     try:
+         out = subprocess.check_output([nvsmi, "--query-gpu=name,memory.total,memory.used,utilization.gpu,temperature.gpu", "--format=csv,noheader"], text=True)
+         return json.dumps({"gpus": [line.strip() for line in out.strip().splitlines()]})
+     except Exception as e:
+         return json.dumps({"error": str(e)})
+
+
+ def register_tools(reg: ToolRegistry) -> None:
+     reg.register(
+         name="shell_exec",
+         description="Execute a shell command (unconstrained).",
+         parameters={"type": "object", "properties": {"command": {"type": "string"}, "timeout": {"type": "integer"}}, "required": ["command"]},
+         handler=t_shell_exec,
+     )
+     reg.register(
+         name="process_list",
+         description="List running processes (ps aux).",
+         parameters={"type": "object", "properties": {}},
+         handler=t_process_list,
+     )
+     reg.register(
+         name="gpu_stats",
+         description="Show GPU stats via nvidia-smi.",
+         parameters={"type": "object", "properties": {}},
+         handler=t_gpu_stats,
+     )
+
platform/aiml/mlops/configs/mobile_access.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "server_ip": "172.17.0.6",
+   "port": 8080,
+   "local_url": "http://localhost:8080",
+   "network_url": "http://172.17.0.6:8080",
+   "websocket_url": "ws://172.17.0.6:8080/ws",
+   "tunnel_url": "https://e-fire-1-chase.trycloudflare.com",
+   "created_at": "2025-08-27T18:58:18.299608",
+   "setup_complete": true
+ }
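A config like `mobile_access.json` can be consumed directly. A hedged sketch (hypothetical `ws_url_from_config` helper; the `/ws` path and the host/port derivation mirror the fields in the file above):

```python
import json

def ws_url_from_config(raw: str) -> str:
    """Derive the websocket endpoint from the mobile-access config JSON."""
    cfg = json.loads(raw)
    # Assumes the "/ws" path convention used by the config above
    return f"ws://{cfg['server_ip']}:{cfg['port']}/ws"
```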
platform/aiml/mlops/dbops_tools/health_guard.py ADDED
@@ -0,0 +1,125 @@
+ #!/usr/bin/env python3
+ """
+ Supervisor health guard: reads JSON health files and enforces dependency gates.
+
+ Defaults:
+   DBOPS_ROOT=/data/adaptai/platform/dbops
+   Health JSON dir: {DBOPS_ROOT}/run/health/*.json (must contain {"status": "green"|"red"})
+   Mapping: if a dependency is red, guard stops dependents via supervisorctl.
+ """
+ from __future__ import annotations
+
+ import json
+ import os
+ import subprocess
+ import time
+ from pathlib import Path
+ from typing import Any, Dict, List
+
+
+ DBOPS_ROOT = Path(os.environ.get("DBOPS_ROOT", "/data/adaptai/platform/dbops"))
+ HEALTH_DIR = DBOPS_ROOT / "run" / "health"
+ SUPERVISOR_SOCK = DBOPS_ROOT / "run" / "supervisor.sock"
+ STATE_FILE = DBOPS_ROOT / "run" / "health_guard_state.json"
+ AUDIT_LOG = DBOPS_ROOT / "logs" / "audit" / "health-guard.audit.log"
+
+ GREEN_STABLE_SEC = int(os.environ.get("HG_GREEN_STABLE_SECONDS", "30"))
+ RESTART_BACKOFF_SEC = int(os.environ.get("HG_RESTART_BACKOFF_SECONDS", "60"))
+ MAX_RESTARTS = int(os.environ.get("HG_MAX_RESTARTS", "3"))
+ WINDOW_SEC = int(os.environ.get("HG_WINDOW_SECONDS", "600"))
+
+ # Define dependencies: if key is red, stop all in list
+ DEPENDENCIES: Dict[str, List[str]] = {
+     "dragonfly": ["qdrant", "janusgraph"],
+     "redis": ["qdrant", "janusgraph"],
+     "scylla": ["janusgraph"],
+ }
+
+
+ def load_status(name: str) -> str:
+     p = HEALTH_DIR / f"{name}.json"
+     try:
+         data = json.loads(p.read_text())
+         return str(data.get("status", "unknown")).lower()
+     except Exception:
+         return "unknown"
+
+
+ def svctl(cmd: List[str]) -> str:
+     base = ["supervisorctl", "-s", f"unix://{SUPERVISOR_SOCK}"]
+     out = subprocess.run(base + cmd, check=False, capture_output=True, text=True)
+     return (out.stdout or "") + (out.stderr or "")
+
+
+ def read_state() -> Dict[str, Any]:
+     try:
+         return json.loads(STATE_FILE.read_text())
+     except Exception:
+         return {"last_green": {}, "last_restart": {}, "restart_log": []}
+
+
+ def write_state(state: Dict[str, Any]) -> None:
+     STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
+     STATE_FILE.write_text(json.dumps(state))
+
+
+ def audit(event: Dict[str, Any]) -> None:
+     try:
+         AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
+         with open(AUDIT_LOG, "a", encoding="utf-8") as f:
+             f.write(json.dumps(event) + "\n")
+     except Exception:
+         pass
+
+
+ def main() -> None:
+     now = int(time.time())
+     state = read_state()
+     changed = False
+
+     for dep, affected in DEPENDENCIES.items():
+         status = load_status(dep)
+         # Record when the dependency first went green; overwriting the
+         # timestamp every cycle would never satisfy the stability gate
+         if status == "green" and dep not in state.get("last_green", {}):
+             state.setdefault("last_green", {})[dep] = now
+             changed = True
+         # Stop dependents on red/unknown and clear the green marker
+         if status in ("red", "unknown"):
+             if state.get("last_green", {}).pop(dep, None) is not None:
+                 changed = True
+             for prog in affected:
+                 out = svctl(["stop", prog])
+                 audit({"ts": now, "action": "stop", "reason": f"{dep}={status}", "program": prog, "output": out})
+                 changed = True
+             continue
+
+         # Auto-recovery gate: green must be stable
+         last_g = state.get("last_green", {}).get(dep, 0)
+         if now - last_g < GREEN_STABLE_SEC:
+             continue
+
+         # For each dependent, consider (re)start with backoff and rate limit
+         for prog in affected:
+             # Backoff check
+             last_r = state.get("last_restart", {}).get(prog, 0)
+             if now - last_r < RESTART_BACKOFF_SEC:
+                 continue
+             # Rate limit
+             window_start = now - WINDOW_SEC
+             recent = [e for e in state.get("restart_log", []) if e.get("program") == prog and e.get("ts", 0) >= window_start]
+             if len(recent) >= MAX_RESTARTS:
+                 continue
+             # Start if not RUNNING
+             status_out = svctl(["status", prog])
+             if "RUNNING" in status_out:
+                 continue
+             out = svctl(["start", prog])
+             entry = {"ts": now, "action": "start", "reason": f"{dep}=green stable", "program": prog, "output": out}
+             audit(entry)
+             state.setdefault("restart_log", []).append(entry)
+             state.setdefault("last_restart", {})[prog] = now
+             changed = True
+
+     if changed:
+         write_state(state)
+
+
+ if __name__ == "__main__":
+     main()
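The guard's auto-recovery gate only restarts dependents once a dependency has been green for a sustained window. A standalone sketch of that gate (hypothetical `is_stable_green` helper; `first_green_ts` is the timestamp when the dependency first reported green):

```python
import time

GREEN_STABLE_SEC = 30  # mirrors HG_GREEN_STABLE_SECONDS default

def is_stable_green(first_green_ts, now=None):
    """True once a dependency has stayed green for GREEN_STABLE_SEC seconds."""
    if first_green_ts is None:  # never seen green, or reset by a red cycle
        return False
    now = time.time() if now is None else now
    return now - first_green_ts >= GREEN_STABLE_SEC
```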
platform/aiml/mlops/dbops_tools/qdrant_bootstrap.py ADDED
@@ -0,0 +1,41 @@
+ #!/usr/bin/env python3
+ from __future__ import annotations
+
+ import json
+ import os
+ import sys
+ from typing import Any, Dict
+
+ import requests
+
+
+ QDRANT_URL = os.environ.get("QDRANT_URL", "http://localhost:17000")
+ COLLECTION = os.environ.get("QDRANT_COLLECTION", "elizabeth_embeddings")
+ VECTOR_SIZE = int(os.environ.get("QDRANT_VECTOR_SIZE", "1536"))
+ DISTANCE = os.environ.get("QDRANT_DISTANCE", "Cosine")
+
+
+ def ensure_collection() -> Dict[str, Any]:
+     r = requests.get(f"{QDRANT_URL}/collections/{COLLECTION}", timeout=5)
+     if r.status_code == 200:
+         return {"status": "exists"}
+     payload = {
+         "vectors": {"size": VECTOR_SIZE, "distance": DISTANCE},
+         "hnsw_config": {"m": 64, "ef_construct": 128, "full_scan_threshold": 10000},
+         "optimizers_config": {"default_segment_number": 2},
+     }
+     r = requests.put(f"{QDRANT_URL}/collections/{COLLECTION}", json=payload, timeout=30)
+     return {"status": "created", "code": r.status_code, "body": r.text}
+
+
+ def main() -> None:
+     try:
+         res = ensure_collection()
+         print(json.dumps(res))
+     except Exception as e:
+         print(json.dumps({"error": str(e)}))
+         sys.exit(1)
+
+
+ if __name__ == "__main__":
+     main()
platform/aiml/mlops/dbops_tools/render_janus_props.sh ADDED
@@ -0,0 +1,54 @@
+ #!/usr/bin/env bash
+ set -euo pipefail
+
+ # Render JanusGraph CQL properties with secrets and launch Gremlin Server.
+ # Defaults align with your new dbops tree.
+
+ DBOPS_ROOT="${DBOPS_ROOT:-/data/adaptai/platform/dbops}"
+ SECRETS_DIR="${SECRETS_DIR:-/data/adaptai/secrets/dataops}"
+
+ CONF_DIR="$DBOPS_ROOT/configs/janusgraph"
+ RUNTIME_DIR="$DBOPS_ROOT/run/janusgraph"
+ mkdir -p "$RUNTIME_DIR"
+
+ TEMPLATE="$CONF_DIR/janusgraph-cql.properties"
+ OUT="$RUNTIME_DIR/janusgraph-cql.runtime.properties"
+
+ # Load secrets (.env key=val) into env without overriding existing vars
+ if [[ -f "$SECRETS_DIR/.env" ]]; then
+     while IFS='=' read -r k v; do
+         [[ -z "$k" ]] && continue
+         [[ "$k" =~ ^# ]] && continue
+         [[ -z "$v" ]] && continue
+         if [[ -z "${!k-}" ]]; then export "$k"="$v"; fi
+     done < <(grep -v '^#' "$SECRETS_DIR/.env" | sed '/^$/d')
+ fi
+
+ if [[ -z "${SCYLLA_USER:-}" || -z "${SCYLLA_PASS:-}" ]]; then
+     echo "[janus] Missing SCYLLA_USER or SCYLLA_PASS in $SECRETS_DIR/.env" >&2
+     exit 1
+ fi
+
+ # Simple env substitution for ${VAR} placeholders
+ sed -e "s#\${SCYLLA_USER}#${SCYLLA_USER}#g" \
+     -e "s#\${SCYLLA_PASS}#${SCYLLA_PASS}#g" \
+     "$TEMPLATE" > "$OUT"
+
+ echo "[janus] Rendered: $OUT"
+
+ # Launch Gremlin Server with rendered properties
+ GREMLIN_YAML="$CONF_DIR/gremlin-server.yaml"
+ if [[ "${START_GREMLIN:-1}" != "0" ]]; then
+     echo "[janus] Starting Gremlin Server on port configured in $GREMLIN_YAML"
+     # Assume janusgraph distribution bin/gremlin-server.sh is on PATH or under dbops/bin
+     if command -v gremlin-server.sh >/dev/null 2>&1; then
+         exec gremlin-server.sh start "$GREMLIN_YAML"
+     elif [[ -x "$DBOPS_ROOT/bin/gremlin-server.sh" ]]; then
+         exec "$DBOPS_ROOT/bin/gremlin-server.sh" start "$GREMLIN_YAML"
+     else
+         echo "[janus] gremlin-server.sh not found; start it with the above config manually." >&2
+         exit 2
+     fi
+ fi
+
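The `${VAR}` substitution done with `sed` in the render script can be mirrored in Python. A sketch using `string.Template` (hypothetical `render_props` helper, not part of the repo; `substitute` raises `KeyError` on missing placeholders, which fails loudly like the script's secrets check):

```python
from string import Template

def render_props(template_text: str, secrets: dict) -> str:
    """Substitute ${VAR} placeholders; unknown placeholders raise KeyError."""
    return Template(template_text).substitute(secrets)
```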
platform/aiml/mlops/death_march/.env_unformatted ADDED
@@ -0,0 +1,11 @@
+ # Death March - Unformatted API Keys (Replace with actual keys)
+ # Copy this to /data/adaptai/secrets/.env and add real keys
+
+ OPENAI_API_KEY=sk-...
+ DEEPSEEK_API_KEY=sk-...
+ GROQ_API_KEY=groq-...
+ PERPLEXITY_API_KEY=pplx-...
+ TAVILY_API_KEY=tavily-...
+ FIRECRAWL_API_KEY=fc-...
+ SERPER_API_KEY=serper-...
+ Z_AI_API_KEY=zai-...
platform/aiml/mlops/death_march/ELIZABETH_TOOLS_README.md ADDED
@@ -0,0 +1,180 @@
+ # πŸ’€ ELIZABETH REAL-TIME EXECUTION TOOLS
+
+ ## πŸš€ COMPLETE TOOLSET IMPLEMENTED
+
+ ### βœ… REAL-TIME SHELL EXECUTION
+ - **Zero-touch command execution** with immediate feedback
+ - **Real-time output streaming** - see results as they happen
+ - **Background process management** with process tracking
+ - **Full shell access** with proper authentication and permissions
+
+ ### πŸ”§ CORE COMPONENTS
+
+ #### 1. `elizabeth_tools.py` - Execution Engine
+ - `execute_command()` - Real-time command execution with streaming output
+ - `execute_script()` - Python/bash script execution with arguments
+ - `background_execution()` - Run commands in background with process tracking
+ - `get_process_status()` - Monitor background processes
+ - `kill_process()` - Terminate running processes
+ - `get_system_info()` - Comprehensive system diagnostics
+
+ #### 2. `elizabeth_shell.py` - Interactive Shell
+ - **Tab completion** for file paths
+ - **Command history** with persistent storage
+ - **Real-time monitoring** dashboard
+ - **Process management** interface
+ - **System status** overview
+ - **Script execution** capabilities
+
+ #### 3. Integrated CLI Menu (`cli.py`)
+ - **Option [8]** - Launch Elizabeth Shell
+ - **Option [9]** - Execute Custom Command
+ - **Option [0]** - Detailed System Information
+
+ ### 🎯 KEY FEATURES
+
+ #### πŸ”₯ REAL-TIME EXECUTION
+ ```python
+ # Execute any shell command with real-time output
+ result = execute_command("docker ps -a && nvidia-smi", realtime=True)
+
+ # Run scripts with arguments
+ execute_script("deploy.sh", ["--production", "--scale=3"])
+
+ # Background execution with tracking
+ background_execution("long_running_process.sh", "process_123")
+ ```
+
+ #### πŸ“Š SYSTEM MONITORING
+ ```python
+ # Get comprehensive system info
+ info = get_system_info()
+ print(f"Host: {info['hostname']}")
+ print(f"CPU: {info['cpu']} cores")
+ print(f"GPU: {info['gpu']}")
+ print(f"Memory: {info['memory']}")
+ ```
+
+ #### ⚑ IMMEDIATE FEEDBACK
+ - **Success/Failure** indicators with execution time
+ - **Real-time output** streaming during command execution
+ - **Error handling** with detailed error messages
+ - **Process tracking** for background operations
+
+ ### πŸš€ QUICK START
+
+ #### 1. Interactive Shell
+ ```bash
+ python3 elizabeth_shell.py
+ ```
+
+ #### 2. Direct Tool Usage
+ ```python
+ from elizabeth_tools import execute_command, get_system_info
+
+ # Execute command
+ result = execute_command("ls -la /tmp")
+ print(f"Success: {result['success']}")
+ print(f"Output: {result['output']}")
+
+ # Get system info
+ info = get_system_info()
+ ```
+
+ #### 3. Integrated CLI
+ ```bash
+ python3 cli.py
+ # Then choose options 8, 9, or 0 for Elizabeth tools
+ ```
+
+ ### πŸ› οΈ TECHNICAL IMPLEMENTATION
+
+ #### βœ… REQUIREMENTS MET
+ 1. **Shell Access** - Full bash command execution capabilities
+ 2. **Authentication** - Proper user permissions and security
+ 3. **Tools Installed** - All necessary CLI utilities available
+ 4. **Real-time Data** - Live output streaming and monitoring
+ 5. **Error Handling** - Comprehensive error reporting and logging
+ 6. **Immediate Feedback** - Real-time status updates and results
+
+ #### πŸ“ LOGGING
+ - **Real-time logs**: `/tmp/elizabeth_tools.log`
+ - **Execution history**: Persistent command history
+ - **Process tracking**: Active background process monitoring
+
+ ### 🎯 USE CASES
+
+ #### 1. Deployment Automation
+ ```python
+ # Deploy services with real-time monitoring
+ execute_command("kubectl apply -f deployment.yaml", realtime=True)
+ ```
+
+ #### 2. System Administration
+ ```python
+ # Monitor system resources
+ background_execution("top -b -d 1", "system_monitor")
+ ```
+
+ #### 3. Data Processing
+ ```python
+ # Run data pipelines
+ execute_script("data_pipeline.py", ["--input", "large_dataset.csv"])
+ ```
+
+ #### 4. Infrastructure Management
+ ```python
+ # Manage cloud resources
+ execute_command("terraform apply -auto-approve", timeout=600)
+ ```
+
+ ### πŸ”§ INTEGRATION POINTS
+
+ #### With Death March Engine
+ - **Real-time revenue monitoring** during execution
+ - **System health checks** before critical operations
+ - **Resource utilization** tracking for optimization
+
+ #### With Existing Infrastructure
+ - **vLLM server integration** (port 8000)
+ - **DragonflyDB connectivity** (port 18000)
+ - **GPU resource management** (nvidia-smi integration)
+
+ ### πŸ“ˆ PERFORMANCE
+
+ - **Execution Time**: Real-time feedback with millisecond precision
+ - **Memory Usage**: Minimal overhead for command execution
+ - **Concurrency**: Support for multiple background processes
+ - **Reliability**: Robust error handling and process management
+
+ ### 🚨 SECURITY FEATURES
+
+ - **Command validation** before execution
+ - **Process isolation** for background operations
+ - **Resource limits** to prevent system overload
+ - **Authentication integration** with system permissions
+
+ ### βœ… VALIDATION
+
+ All tools have been tested and verified:
+ - βœ… Command execution with real-time output
+ - βœ… Background process management
+ - βœ… System information gathering
+ - βœ… Error handling and logging
+ - βœ… Integration with Death March CLI
+ - βœ… Live API key validation
+ - βœ… GPU resource monitoring
+
+ ### 🎯 NEXT STEPS
+
+ 1. **Advanced monitoring** - Real-time dashboard for all processes
+ 2. **Alert system** - Notifications for critical events
+ 3. **Workflow automation** - Chained command execution
+ 4. **Resource optimization** - Smart process scheduling
+ 5. **Security hardening** - Enhanced command validation
+
+ ---
+
+ **πŸ’€ ELIZABETH TOOLS ARE NOW FULLY OPERATIONAL**
+ **πŸš€ REAL-TIME EXECUTION CAPABILITIES ENABLED**
+ **βœ… INTEGRATED WITH DEATH MARCH ENTERPRISE**
platform/aiml/mlops/death_march/Makefile ADDED
@@ -0,0 +1,68 @@
# πŸ’€ DEATH MARCH MAKEFILE
# $50 β†’ Infinity with zero human touch

.PHONY: install run run-supervisor status logs kill clean restart deploy monitor rescue

# Install all dependencies
install:
	@echo "πŸ’€ Installing Death March dependencies..."
	pip install -r requirements.txt
	@echo "βœ… Death March armed and ready"

# Run Death March (zero human intervention)
run:
	@echo "πŸ’€ Launching Death March..."
	python3 deploy.py

# Run with supervisor (recommended)
run-supervisor:
	@echo "πŸ’€ Starting Death March via supervisor..."
	supervisorctl reread
	supervisorctl update
	supervisorctl start death_march death_march_monitor

# Check status (note: $ must be written \$$ so it survives both make and the shell)
status:
	@echo "πŸ’€ Death March Status:"
	@supervisorctl status death_march death_march_monitor 2>/dev/null || echo "Not running via supervisor"
	@python3 -c "import sqlite3; db=sqlite3.connect('/tmp/death_march/revenue.db'); c=db.cursor(); c.execute('SELECT SUM(net), COUNT(*) FROM revenue'); r=c.fetchone(); print(f'Total Earnings: \$${r[0] or 0:.2f} across {r[1] or 0} cycles'); db.close()" 2>/dev/null || echo "No earnings yet"

# Monitor logs
logs:
	@echo "πŸ’€ Death March Logs:"
	tail -f /tmp/death_march.log /tmp/death_march_heartbeat.log

# Kill Death March
kill:
	@echo "πŸ’€ Terminating Death March..."
	supervisorctl stop death_march death_march_monitor 2>/dev/null || true
	pkill -f deploy.py 2>/dev/null || true

# Clean slate
clean:
	@echo "πŸ’€ Cleaning Death March..."
	supervisorctl stop death_march death_march_monitor 2>/dev/null || true
	rm -rf /tmp/death_march/
	rm -f /tmp/death_march.log /tmp/death_march_heartbeat.log

# Emergency restart
restart: kill clean install run-supervisor

# Deploy to GitHub
deploy:
	@echo "πŸ’€ Deploying Death March to GitHub..."
	git add .
	git commit -m "Death March v1.0 - \$$50 to infinity"
	git push origin main

# Monitor earnings (re-runs the status query every 5 seconds; delegating to the
# status target avoids the nested quoting that breaks an inline python one-liner)
monitor:
	@watch -n 5 '$(MAKE) -s status'

# Emergency rescue
rescue:
	@echo "πŸ’€ Death March Emergency Protocol"
	@echo "Current credit: \$$50"
	@echo "Survival strategy: GPU crypto analysis"
	@echo "Cycle rate: 2 minutes"
	@echo "Zero human intervention required"
platform/aiml/mlops/death_march/README.md ADDED
@@ -0,0 +1,271 @@
# πŸ’€ Death March
## $50 to Infinity - Zero Human Intervention Revenue System

### Enterprise-Grade Autonomous Revenue Generation

**Death March** is a fully autonomous AI-powered revenue generation system that transforms $50 into infinite returns through zero-human-intervention operations. Built with enterprise-grade feature flags, comprehensive API integration, and real-time monitoring.

## πŸš€ Quick Start

```bash
# 1. Deploy the system
./deploy.sh deploy

# 2. Monitor earnings
make monitor

# 3. Interactive CLI
python3 cli.py

# 4. Check status
./deploy.sh status
```

## πŸ—οΈ Architecture

### Core Components
- **Qwen3-8B-Elizabeth**: 131K context vLLM serving
- **DragonflyDB**: High-performance Redis-compatible persistence
- **Feature Flags**: Enterprise-grade risk management
- **Secrets Manager**: Comprehensive API key integration
- **Real-time Monitoring**: Live earnings dashboard

### Revenue Strategies
- GPU-accelerated crypto analysis
- Arbitrage detection across DeFi protocols
- AI service creation and monetization
- Content monetization pipelines
- Web scraping and search optimization

## πŸ” Configuration

### Required API Keys
Create `/data/adaptai/secrets/.env` with:
```bash
# Critical APIs
OPENAI_API_KEY=your_key_here
GROQ_API_KEY=your_key_here

# Revenue APIs
DEEPSEEK_API_KEY=your_key_here
PERPLEXITY_API_KEY=your_key_here
TAVILY_API_KEY=your_key_here
FIRECRAWL_API_KEY=your_key_here
SERPER_API_KEY=your_key_here
Z_AI_API_KEY=your_key_here
```

+ ### Feature Flags
59
+ ```python
60
+ from death_march.flags import death_march_flags
61
+
62
+ # Check current risk level
63
+ risk = death_march_flags.get_risk_level()
64
+
65
+ # Enable aggressive mode
66
+ death_march_flags.enable('aggressive_mode')
67
+
68
+ # Emergency stop
69
+ death_march_flags.emergency_toggle('panic')
70
+ ```
71
+
72
+ ## πŸ“Š Monitoring
73
+
74
+ ### Real-time Dashboard
75
+ ```bash
76
+ # Live earnings display
77
+ python3 cli.py
78
+
79
+ # Continuous monitoring
80
+ watch -n 5 'python3 -c "from secrets_manager import secrets_manager; print(secrets_manager.get_secrets_summary())"'
81
+ ```
82
+
83
+ ### Metrics Tracked
84
+ - Total revenue generated
85
+ - Revenue per cycle
86
+ - GPU utilization efficiency
87
+ - API response times
88
+ - System health indicators
89
+
90
+ ## πŸ”§ Enterprise Features
91
+
92
+ ### Feature Flag System
93
+ - **Risk Management**: Conservative/Aggressive modes
94
+ - **Emergency Protocols**: Panic/Survival modes
95
+ - **Scaling Controls**: Auto-scaling and multi-node
96
+ - **Experimental Features**: Quantum optimization and neural trading
97
+
98
+ ### Security
99
+ - Zero secrets in repository
100
+ - Environment-based configuration
101
+ - API key rotation support
102
+ - Audit trail logging
103
+
104
+ ### Monitoring & Alerting
105
+ - Real-time earnings tracking
106
+ - System health checks
107
+ - API failure detection
108
+ - Revenue anomaly detection
109
+
110
+ ## 🌊 Deployment
111
+
112
+ ### Prerequisites
113
+ - Python 3.11+
114
+ - NVIDIA GPU with CUDA support
115
+ - 32GB+ RAM recommended
116
+ - High-speed internet for API calls
117
+
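A quick preflight check against these prerequisites might look like this (illustrative; `preflight` is not part of the shipped codebase):

```python
import shutil
import sys

def preflight():
    """Best-effort check of the prerequisites listed above; returns a list of problems."""
    issues = []
    if sys.version_info < (3, 11):
        issues.append(f"Python 3.11+ required, found {sys.version.split()[0]}")
    if shutil.which("nvidia-smi") is None:
        issues.append("nvidia-smi not found (NVIDIA driver / CUDA toolkit missing?)")
    return issues

if __name__ == "__main__":
    for issue in preflight():
        print(f"⚠️  {issue}")
```

An empty list means the machine clears the basic checks; RAM and bandwidth still need to be verified manually.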
### Installation
```bash
git clone git@github.com:adaptnova/death-march.git
cd death-march
pip install -r requirements.txt
./deploy.sh deploy
```

### Production Deployment
```bash
# Using supervisor
make run-supervisor

# Manual deployment
python3 deploy.py

# Emergency restart
make restart
```

## πŸ“ˆ Revenue Models

### Strategy Types
1. **GPU Crypto Analysis**: Real-time blockchain analysis
2. **DeFi Arbitrage**: Cross-protocol yield optimization
3. **AI Service Creation**: Automated service monetization
4. **Content Monetization**: Automated content generation
5. **Search Optimization**: SEO and traffic generation

### Cycle Configuration
- **Duration**: 2 minutes per cycle
- **Target**: $0.50-$5.00 per cycle
- **Scaling**: Auto-adjust based on market conditions
- **Optimization**: AI-driven strategy selection

## 🚨 Emergency Protocols

### Emergency Commands
```bash
# Panic mode (conservative)
./deploy.sh stop
cd death_march; python3 -c "from death_march.flags import death_march_flags; death_march_flags.emergency_toggle('panic')"

# Survival mode (aggressive)
cd death_march; python3 -c "from death_march.flags import death_march_flags; death_march_flags.emergency_toggle('survival')"

# Complete reset
cd death_march; python3 -c "from death_march.flags import death_march_flags; death_march_flags.emergency_toggle('reset')"
```

### Health Monitoring
```bash
# Check all systems
./deploy.sh status

# Monitor logs
tail -f /tmp/death_march.log

# Database health
sqlite3 /tmp/death_march/revenue.db "SELECT * FROM revenue ORDER BY id DESC LIMIT 10;"
```

## πŸ” Troubleshooting

### Common Issues
1. **API Key Missing**: Check `/data/adaptai/secrets/.env`
2. **GPU Not Detected**: Verify nvidia-smi output
3. **Port Conflicts**: Check 8000, 18000, 19000 availability
4. **Database Issues**: Delete `/tmp/death_march/` and restart

### Debug Commands
```bash
# Check API keys
python3 death_march/secrets_manager.py

# Validate feature flags
python3 death_march/death_march/flags.py

# Test vLLM health
curl http://localhost:8000/health

# Check Redis/Dragonfly
redis-cli -p 18000 ping
```

## πŸ“š API Documentation

### CLI Commands
- `python3 cli.py` - Interactive dashboard
- `make status` - System health
- `make monitor` - Live earnings
- `make logs` - Real-time logs

### Programmatic Access
```python
from death_march.secrets_manager import secrets_manager
from death_march.flags import death_march_flags

# Check deployment readiness
if secrets_manager.is_ready():
    print("Ready for deployment")

# Get system status
status = secrets_manager.get_secrets_summary()
print(f"Active APIs: {status['active_apis']}/8")

# Configure features
death_march_flags.enable('gpu_crypto_analysis')
death_march_flags.disable('conservative_mode')
```

## 🎯 Performance Targets

### Initial Goals
- **Revenue per cycle**: $0.50-$5.00
- **Daily cycles**: 720 (2-minute intervals)
- **Monthly target**: $10,800-$36,000
- **ROI timeline**: 30-90 days to break even

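The daily-cycle figure follows directly from the 2-minute interval, and the lower monthly bound from the $0.50-per-cycle floor:

```python
cycles_per_day = 24 * 60 // 2        # one cycle every 2 minutes
daily_floor = cycles_per_day * 0.50  # $0.50 per cycle lower bound
monthly_floor = daily_floor * 30     # over a 30-day month
print(cycles_per_day, daily_floor, monthly_floor)  # β†’ 720 360.0 10800.0
```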
### Scaling Targets
- **Week 1**: $50 β†’ $100
- **Week 2**: $100 β†’ $500
- **Week 3**: $500 β†’ $2,000
- **Week 4**: $2,000 β†’ $10,000+

## πŸ” Security Best Practices

### Secrets Management
- Never commit API keys to the repository
- Use environment variables for configuration
- Rotate keys regularly
- Monitor API usage and costs

### Access Control
- Restrict file permissions on secrets
- Use a dedicated user for deployment
- Monitor system access logs
- Implement rate limiting

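For example, the secrets file can be verified to be owner-only (mode 0600) with a small check like this (illustrative helper, not part of the codebase):

```python
import os
import stat

def is_owner_only(path="/data/adaptai/secrets/.env"):
    """True if the file is readable/writable by its owner and by nobody else."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode == 0o600
```

Pair it with `chmod 600 /data/adaptai/secrets/.env` when the check fails.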
## πŸ“ž Support

### Emergency Contacts
- **Critical Issues**: Create a GitHub issue with the 'CRITICAL' label
- **API Failures**: Check secrets_manager.py validation
- **Revenue Stops**: Review feature flags and risk levels

### Community
- **GitHub Issues**: https://github.com/adaptnova/death-march/issues
- **Documentation**: See `/docs/` directory
- **Examples**: See `/examples/` directory

---

**πŸ’€ Death March: Where $50 becomes infinity through autonomous intelligence**