umer07 committed · Commit ca27310 · verified · Parent(s): df6b353

Upload VM_RECOVERY_GUIDE_20260409.md with huggingface_hub

Files changed (1): VM_RECOVERY_GUIDE_20260409.md (+244 −0, added)
# Fathom VM Recovery Guide

This guide matches the live VM state observed on April 9, 2026.

## What is already backed up

HF dataset repo: `umer07/vllm-deployement-backup` (private)

Existing artifact:

- `vllm_deployement_backup_20260409_063205.tar.gz`

That archive contains:

- `/opt/fathom`
- `/root/serve.py`
- `/root/vllm.log`
- `/root/serve.log`
- `/etc/os-release`

This is not enough by itself for full disaster recovery because the live platform also depends on Docker volumes for Neo4j and MinIO. Those stateful volumes must be restored as well.

## Live stack this guide targets

- Host OS: Ubuntu 24.04.3 LTS
- ROCm driver reported by `rocm-smi`: `6.14.14`
- Host-side model process: `/root/serve.py`
- Model API port: `8001`
- Docker app root: `/opt/fathom`
- Backend port: `7860`
- Dashboard port: `3000`
- Neo4j ports: `7474`, `7687`
- MinIO ports: `9000`, `9001`

Observed host Python packages:

- `torch==2.5.1+rocm6.2`
- `transformers==4.44.2`
- `peft==0.18.1`
- `flask==3.1.3`

## Required backup set for full recovery

You need all of the following in the private HF dataset:

- `vllm_deployement_backup_20260409_063205.tar.gz`
- `fathom_minio_data_20260409.tar.gz`
- `fathom_neo4j_data_20260409.tar.gz`
- `fathom_neo4j_logs_20260409.tar.gz`
- This guide

Intentionally not included:

- `fathom_hf_cache`

Reasons:

- It is very large and fully reproducible from Hugging Face once `HF_TOKEN` is present.
- Excluding it keeps the recovery bundle much smaller while still allowing a complete rebuild.

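Before starting a restore, it helps to confirm every required archive is actually present locally. A minimal sketch, assuming the archives were downloaded to `/root` (that path is an assumption, adjust as needed):

```python
from pathlib import Path

# Required backup set listed above; the guide document itself is
# not needed for the restore to succeed.
REQUIRED = [
    "vllm_deployement_backup_20260409_063205.tar.gz",
    "fathom_minio_data_20260409.tar.gz",
    "fathom_neo4j_data_20260409.tar.gz",
    "fathom_neo4j_logs_20260409.tar.gz",
]


def missing_backups(directory, required=REQUIRED):
    """Return the required archives that are not present in `directory`."""
    d = Path(directory)
    return [name for name in required if not (d / name).is_file()]


if __name__ == "__main__":
    missing = missing_backups("/root")  # assumed download location
    if missing:
        print("Missing:", *missing, sep="\n  ")
    else:
        print("All backup archives present.")
```

Running this before section 5 avoids discovering a missing volume archive halfway through the restore.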
## 1. Provision a replacement VM

Use an AMD GPU VM that supports ROCm and matches the current host as closely as possible.

Minimum expectations:

- Ubuntu 24.04
- Docker Engine
- Docker Compose plugin
- Python 3.10+
- ROCm-compatible PyTorch install

## 2. Install base packages

```bash
apt update
apt install -y docker.io docker-compose-v2 python3 python3-pip curl git
systemctl enable docker
systemctl start docker
```

If Docker Compose is not available as `docker compose`, install the Compose plugin separately.

## 3. Install host-side model dependencies

Install ROCm PyTorch first, then the Python packages required by `/root/serve.py`.

```bash
python3 -m pip install --upgrade pip
python3 -m pip install \
  torch torchvision torchaudio \
  --index-url https://download.pytorch.org/whl/rocm6.2

python3 -m pip install \
  transformers==4.44.2 \
  peft==0.18.1 \
  flask==3.1.3 \
  accelerate \
  sentencepiece \
  safetensors \
  huggingface_hub
```

If your ROCm image already ships with PyTorch, verify that `python3 -c "import torch; print(torch.__version__)"` succeeds before replacing it.

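To confirm that a ROCm build is actually active (and not a CPU-only wheel), check `torch.version.hip`, which is populated on ROCm builds and `None` otherwise:

```shell
python3 - <<'PY'
try:
    import torch
    # torch.version.hip is a string like "6.2..." on ROCm builds, None otherwise.
    print("torch", torch.__version__, "| HIP:", getattr(torch.version, "hip", None))
except ImportError:
    print("torch not installed")
PY
```

On the host this guide targets, the first line should report `2.5.1+rocm6.2` with a non-`None` HIP version.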
## 4. Download the backup files from Hugging Face

Use the private token and download the required files from `umer07/vllm-deployement-backup`.

```bash
python3 - <<'PY'
from huggingface_hub import hf_hub_download

repo_id = "umer07/vllm-deployement-backup"
files = [
    "vllm_deployement_backup_20260409_063205.tar.gz",
    "fathom_minio_data_20260409.tar.gz",
    "fathom_neo4j_data_20260409.tar.gz",
    "fathom_neo4j_logs_20260409.tar.gz",
    "VM_RECOVERY_GUIDE_20260409.md",
]

for name in files:
    path = hf_hub_download(
        repo_id=repo_id,
        filename=name,
        repo_type="dataset",
        local_dir=".",
    )
    print(path)
PY
```

Make sure the Hugging Face token has access to the private dataset.

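`hf_hub_download` picks up the token from the `HF_TOKEN` environment variable when none is passed explicitly, so exporting it before running the script above is enough. The value below is a placeholder; substitute your real token (or run `huggingface-cli login` once instead):

```shell
# Placeholder token; replace with the real value for the private dataset.
export HF_TOKEN="hf_your_token_here"
python3 -c 'import os; print("HF_TOKEN is set" if os.environ.get("HF_TOKEN") else "HF_TOKEN missing")'
```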
## 5. Restore the application files

The application backup was created with absolute-style paths. Extract it from `/`.

```bash
cd /
tar -xzf /root/vllm_deployement_backup_20260409_063205.tar.gz
```

After extraction, verify:

- `/opt/fathom/.env`
- `/opt/fathom/docker-compose.yml`
- `/root/serve.py`

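The three checks can be scripted; this prints one status line per expected path:

```shell
for f in /opt/fathom/.env /opt/fathom/docker-compose.yml /root/serve.py; do
  if [ -e "$f" ]; then echo "ok: $f"; else echo "MISSING: $f"; fi
done
```

Any `MISSING` line means the extraction did not land where expected; re-check that you ran `tar` from `/`.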
## 6. Restore Docker volumes

Create the required Docker volumes:

```bash
docker volume create fathom_neo4j_data
docker volume create fathom_neo4j_logs
docker volume create fathom_minio_data
```

Restore the saved contents into each volume:

```bash
docker run --rm -v fathom_minio_data:/restore -v /root:/backup alpine \
  sh -c "cd /restore && tar -xzf /backup/fathom_minio_data_20260409.tar.gz"

docker run --rm -v fathom_neo4j_data:/restore -v /root:/backup alpine \
  sh -c "cd /restore && tar -xzf /backup/fathom_neo4j_data_20260409.tar.gz"

docker run --rm -v fathom_neo4j_logs:/restore -v /root:/backup alpine \
  sh -c "cd /restore && tar -xzf /backup/fathom_neo4j_logs_20260409.tar.gz"
```

## 7. Start the host-side model server

The current deployment uses `/root/serve.py` directly on the host and listens on port `8001`.

```bash
nohup python3 /root/serve.py > /root/serve.log 2>&1 &
sleep 10
curl http://127.0.0.1:8001/health
```

Expected result:

```json
{"status":"ok","model":"umer07/fathom-mixtral"}
```

The first cold start may take several minutes while model files download.

193
+
194
+ ```bash
195
+ cd /opt/fathom
196
+ docker compose up -d --build
197
+ docker compose ps
198
+ ```
199
+
200
+ Expected services:
201
+
202
+ - `fathom-backend`
203
+ - `fathom-dashboard`
204
+ - `fathom-minio`
205
+ - `fathom-neo4j`
206
+
207
+ ## 9. Verify the stack
208
+
209
+ ```bash
210
+ curl http://127.0.0.1:7860/health
211
+ curl http://127.0.0.1:8001/health
212
+ curl http://127.0.0.1:3000
213
+ ```
214
+
215
+ Optional checks:
216
+
217
+ ```bash
218
+ docker compose -f /opt/fathom/docker-compose.yml ps
219
+ docker logs fathom-backend --tail 50
220
+ docker logs fathom-dashboard --tail 50
221
+ docker logs fathom-neo4j --tail 50
222
+ docker logs fathom-minio --tail 50
223
+ ```
224
+
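The same verification can be done without `curl` by probing each TCP port from Python. A minimal sketch using only the stdlib; the labels are informal, and `7474` (Neo4j HTTP) or `9001` (MinIO console) from the stack list above can be added the same way:

```python
import socket

# Ports from the "Live stack this guide targets" section.
PORTS = {
    "model API": 8001,
    "backend": 7860,
    "dashboard": 3000,
    "MinIO": 9000,
    "Neo4j Bolt": 7687,
}


def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for name, port in PORTS.items():
        state = "open" if port_open("127.0.0.1", port) else "CLOSED"
        print(f"{name:12} {port:5} {state}")
```

A `CLOSED` line only proves nothing is listening; it does not distinguish a crashed container from one still starting, so follow up with `docker logs` as above.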
## 10. Important notes

- `/opt/fathom/.env` contains live secrets. Keep the dataset private.
- The compose file points the backend at `http://127.0.0.1:8001`.
- The current live host is using `/root/serve.py`, not a systemd `fathom-vllm.service`.
- The Hugging Face cache is excluded on purpose. The replacement VM will repopulate it automatically on first model startup.
- If you want fully offline recovery later, create and store an additional `fathom_hf_cache` volume backup.

## Quick recovery checklist

```bash
# 1. Install Docker + Python
# 2. Install ROCm PyTorch + serve.py dependencies
# 3. Download all backup files from HF
# 4. Extract vllm_deployement_backup_20260409_063205.tar.gz at /
# 5. Recreate and restore Docker volumes
# 6. Start python3 /root/serve.py
# 7. Start docker compose in /opt/fathom
# 8. Verify ports 8001, 7860, 3000, 9000, 7687
```