# OCR-VQA Images — Download, Reassemble, and Place into LibMoE

This guide shows how to download split archives from Hugging Face, merge them into a single ZIP, extract them, and place the images into the LibMoE data tree:

```
libmoe/
└── data/
    ├── image_onevision/
    ├── coco/
    │   └── train2017/
    ├── gqa/
    │   └── images/
    ├── ocr_vqa/
    │   └── images/
    ├── textvqa/
    │   └── train_images/
    └── vg/
        ├── VG_100K/
        └── VG_100K_2/
```

> Tested on Linux (bash). Requires ~100 GB of free space for the downloads plus extraction room.

---

## 1) Prerequisites

* Python + the `huggingface_hub` CLI (or use `curl`/`wget`)
* (Optional but faster) `hf_transfer` for high-speed downloads

Install the CLIs:

```bash
pip install -U huggingface_hub hf_transfer
# Log in once (stores your token):
huggingface-cli login   # or set the HF_TOKEN env var
```

---

## 2) Download split parts from Hugging Face

Dataset: `DavidNguyen/ocr_vqa`, path: `ocr_vqa/images_part_*.zip.part`

### Using `curl` directly (works anywhere)

```bash
DEST=/cm/archive/namnv78_A100_PDM/data
mkdir -p "$DEST/ocr_vqa_parts"
cd "$DEST/ocr_vqa_parts"

# a{a..x} expands to aa ab ... ax (note: {aa..ax} would NOT expand in bash)
for p in a{a..x}; do
  # Direct "resolve" URLs on HF (replace if the repo path changes)
  curl -L -o "images_part_${p}.zip.part" \
    "https://huggingface.co/datasets/DavidNguyen/ocr_vqa/resolve/main/ocr_vqa/images_part_${p}.zip.part?download=true"
done

ls -lh
```
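Before concatenating, it is worth confirming that every expected part actually landed. This is a minimal sketch, assuming the parts are named `images_part_aa.zip.part` through `images_part_ax.zip.part` as in the download loop (adjust the suffix range if parts were added):

```shell
# List any expected part that is missing from the current directory.
missing=""
for p in a{a..x}; do
  if [ ! -f "images_part_${p}.zip.part" ]; then
    missing="$missing images_part_${p}.zip.part"
  fi
done
if [ -n "$missing" ]; then
  echo "Missing parts:$missing"
else
  echo "All 24 parts present"
fi
```

Run it from the download directory; re-fetch anything it reports before moving on.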

---

## 3) (Optional) Quick integrity check on parts

```bash
# Show sizes to spot any truncated files
ls -lh images_part_*.zip.part

# If you have reference checksums, verify:
# md5sum -c md5sum_parts.txt
```

All parts should be ~2.1 GB except the last (`ax`), which may be smaller.

---

## 4) Concatenate parts → a single ZIP

If you downloaded directly into `/cm/archive/namnv78_A100_PDM/data/`, this one-liner works:

```bash
cat /cm/archive/namnv78_A100_PDM/data/images_part_*.zip.part \
  > /cm/archive/namnv78_A100_PDM/data/images.zip
```

If you used the `ocr_vqa_parts` subfolder above:

```bash
cd /cm/archive/namnv78_A100_PDM/data/ocr_vqa_parts
cat images_part_*.zip.part > ../images.zip
```

Verify the merged file:

```bash
cd /cm/archive/namnv78_A100_PDM/data
ls -lh images.zip   # expect ~48–50 GB total
# Optional checksum if provided:
# md5sum images.zip
```
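If you want confidence that plain `cat` is the right reassembly step, here is a self-contained sanity check using `split(1)` on a throwaway file; it touches nothing in your data directory:

```shell
# Demonstrate that concatenating split pieces in glob order
# reproduces the original file byte-for-byte.
tmp=$(mktemp -d)
head -c 1000000 /dev/urandom > "$tmp/original.bin"
split -b 300000 "$tmp/original.bin" "$tmp/part_"   # part_aa, part_ab, ...
cat "$tmp"/part_* > "$tmp/rejoined.bin"
cmp -s "$tmp/original.bin" "$tmp/rejoined.bin" && echo "byte-identical"
rm -r "$tmp"
```

This works because `split` names pieces with lexicographically ordered suffixes, so the shell glob feeds them to `cat` in the correct order.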

---

## 5) Unzip

```bash
cd /cm/archive/namnv78_A100_PDM/data
mkdir -p ./images_unzip
unzip -q images.zip -d ./images_unzip
# Or faster with 7z:
# 7z x images.zip -o./images_unzip
```
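Before committing to a full extraction, you can test the merged archive's integrity. As a sketch, `python3 -m zipfile -t` relies only on the Python standard library, so it works even on machines without `unzip` or `7z` (the path below assumes the merge location used above):

```shell
# Test archive integrity without extracting (stdlib-only).
ZIP=/cm/archive/namnv78_A100_PDM/data/images.zip
if [ -f "$ZIP" ]; then
  python3 -m zipfile -t "$ZIP" && echo "archive OK"
else
  echo "images.zip not found - run the concatenation step first"
fi
```

A failure here almost always means a part was missing or truncated during the `cat` step.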

After extraction you should see either the image files directly or an `images/` folder inside `images_unzip/` (it depends on how the ZIP was packed).

---

## 6) Place into the LibMoE data tree

Target layout:

```
libmoe/data/ocr_vqa/images/
```

Create the tree (and optionally the other dataset folders to keep the structure consistent):

```bash
# Adjust LIBMOE_ROOT to your actual path
LIBMOE_ROOT=~/projects/libmoe

mkdir -p "$LIBMOE_ROOT/data/image_onevision"
mkdir -p "$LIBMOE_ROOT/data/coco/train2017"
mkdir -p "$LIBMOE_ROOT/data/gqa/images"
mkdir -p "$LIBMOE_ROOT/data/ocr_vqa/images"
mkdir -p "$LIBMOE_ROOT/data/textvqa/train_images"
mkdir -p "$LIBMOE_ROOT/data/vg/VG_100K"
mkdir -p "$LIBMOE_ROOT/data/vg/VG_100K_2"
```

Now move (or symlink) the OCR-VQA images:

```bash
# Case A: the ZIP extracted directly to many image files:
mv /cm/archive/namnv78_A100_PDM/data/images_unzip/* \
  "$LIBMOE_ROOT/data/ocr_vqa/images/"

# Case B: the ZIP contains an inner 'images/' folder:
# mv /cm/archive/namnv78_A100_PDM/data/images_unzip/images/* \
#   "$LIBMOE_ROOT/data/ocr_vqa/images/"
```
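The Case A/B decision can also be scripted. A minimal sketch, assuming the extraction directory used above (the final `mv` is left commented out so you can review the detected path first):

```shell
# Auto-detect whether extraction produced a flat directory (Case A)
# or an inner images/ folder (Case B).
SRC=/cm/archive/namnv78_A100_PDM/data/images_unzip
if [ -d "$SRC/images" ]; then
  SRC="$SRC/images"   # Case B
fi
echo "source directory: $SRC"
# mv "$SRC"/* "$LIBMOE_ROOT/data/ocr_vqa/images/"
```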

If you prefer not to duplicate storage, create a symlink instead of using `mv`:

```bash
# Remove the target if it exists, then point a symlink at the data:
rm -rf "$LIBMOE_ROOT/data/ocr_vqa/images"
ln -s /cm/archive/namnv78_A100_PDM/data/images_unzip \
  "$LIBMOE_ROOT/data/ocr_vqa/images"
```

Check:

```bash
tree -L 2 "$LIBMOE_ROOT/data" | sed -n '1,200p'
```
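As an extra sanity check beyond `tree`, count the files that actually land under the target (`find -L` follows symlinks, so this also covers the symlink variant); a very small count suggests the move or link went wrong:

```shell
# Count files under the OCR-VQA image directory; -L follows symlinks.
IMG_DIR="${LIBMOE_ROOT:-.}/data/ocr_vqa/images"
if [ -d "$IMG_DIR" ]; then
  find -L "$IMG_DIR" -type f | wc -l
else
  echo "$IMG_DIR does not exist yet"
fi
```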

You should now see:

```
libmoe/
└── data/
    ├── image_onevision/
    ├── coco/
    │   └── train2017/
    ├── gqa/
    │   └── images/
    ├── ocr_vqa/
    │   └── images/          <-- OCR-VQA images here
    ├── textvqa/
    │   └── train_images/
    └── vg/
        ├── VG_100K/
        └── VG_100K_2/
```

---

## 7) Troubleshooting

* **`unzip: End-of-central-directory signature not found`**
  One or more `.zip.part` files are missing or corrupted. Re-download the affected part(s) and re-run the `cat` step.

* **Slow downloads / frequent timeouts**
  Enable accelerated transfer: `export HF_HUB_ENABLE_HF_TRANSFER=1` (requires `hf_transfer` to be installed).
  You can also run several shells in parallel, each fetching a different range of parts.

* **Disk space**
  You need free space for all parts (~48–50 GB), the merged ZIP (another ~48–50 GB), **and** the unzipped images (size varies).
  Delete the parts after verifying extraction to reclaim space.

* **Permission denied**
  Use a writable directory, or prefix commands with `sudo` if appropriate. Check that `$LIBMOE_ROOT` is correct.

---

## 8) Clean up (optional)

```bash
rm -f /cm/archive/namnv78_A100_PDM/data/images.zip
rm -rf /cm/archive/namnv78_A100_PDM/data/ocr_vqa_parts
```

---

## 9) Notes

* If new parts (e.g., `ay`, `az`, `ba`, …) are added later, extend the suffix range in the download loop (e.g., `a{a..z} ba` covers through `ba`) before re-running it.
* For reproducibility, consider publishing an `md5sum` file for all parts and for the final ZIP.

Happy hacking!