DavidNguyen committed (verified) · Commit e78d06d · Parent: fbb0b0b

Update README.md

Files changed (1)
  1. README.md +29 -157
README.md CHANGED
@@ -23,44 +23,42 @@ libmoe/
 
 ---
 
-## 1) Prerequisites
-
-* Python + `huggingface_hub` CLI (or use `curl`/`wget`)
-* (Optional but faster) `hf_transfer` for high-speed downloads
-
-Install CLIs:
-
-```bash
-pip install -U huggingface_hub hf_transfer
-# Login once (stores token):
-huggingface-cli login  # or set HF_TOKEN env var
-```
-
----
-
-## 2) Download split parts from Hugging Face
-
-Dataset: `DavidNguyen/ocr_vqa`, path: `ocr_vqa/images_part_*.zip.part`
-
-### Using `curl` directly (works anywhere)
-
-```bash
-DEST=/cm/archive/namnv78_A100_PDM/data
-mkdir -p "$DEST/ocr_vqa_parts"
-cd "$DEST/ocr_vqa_parts"
-
-for p in {aa..ax}; do
-  # Direct "resolve" URLs on HF (replace if repo path changes)
-  curl -L -o "images_part_${p}.zip.part" \
-    "https://huggingface.co/datasets/DavidNguyen/ocr_vqa/resolve/main/ocr_vqa/images_part_${p}.zip.part?download=true"
-done
-
-ls -lh
 ```
 
 ---
 
-## 3) (Optional) Quick integrity check on parts
 
 ```bash
 # Show sizes to spot any truncated files
@@ -74,148 +72,22 @@ All parts should be ~2.1 GB except the last (`ax`) which may be smaller.
 
 ---
 
-## 4) Concatenate parts → a single ZIP
 
 If you already downloaded to `/cm/archive/namnv78_A100_PDM/data/`, the one-liner below works:
 
 ```bash
-cat /cm/archive/namnv78_A100_PDM/data/images_part_*.zip.part \
-  > /cm/archive/namnv78_A100_PDM/data/images.zip
-```
-
-If you used the `ocr_vqa_parts` subfolder above:
-
-```bash
-cd /cm/archive/namnv78_A100_PDM/data/ocr_vqa_parts
-cat images_part_*.zip.part > ../images.zip
-```
-
-Verify the merged file:
-
-```bash
-cd /cm/archive/namnv78_A100_PDM/data
-ls -lh images.zip  # expect ~48–50 GB total
-# Optional checksum if provided:
-# md5sum images.zip
-```
-
----
-
-## 5) Unzip
-
-```bash
-cd /cm/archive/namnv78_A100_PDM/data
-mkdir -p ./images_unzip
-unzip -q images.zip -d ./images_unzip
-# Or faster with 7z:
-# 7z x images.zip -o./images_unzip
 ```
 
-After extraction you should see the image files or an `images/` folder inside `images_unzip/` (depends on how the ZIP was packed).
-
 ---
 
-## 6) Place into the LibMoE data tree
-
-Target layout:
-
-```
-libmoe/data/ocr_vqa/images/
-```
-
-Create the tree (and optionally the other dataset folders to keep structure consistent):
-
-```bash
-# Adjust LIBMOE_ROOT to your actual path
-LIBMOE_ROOT=~/projects/libmoe
-
-mkdir -p "$LIBMOE_ROOT/data/image_onevision"
-mkdir -p "$LIBMOE_ROOT/data/coco/train2017"
-mkdir -p "$LIBMOE_ROOT/data/gqa/images"
-mkdir -p "$LIBMOE_ROOT/data/ocr_vqa/images"
-mkdir -p "$LIBMOE_ROOT/data/textvqa/train_images"
-mkdir -p "$LIBMOE_ROOT/data/vg/VG_100K"
-mkdir -p "$LIBMOE_ROOT/data/vg/VG_100K_2"
-```
-
-Now move (or symlink) the OCR-VQA images:
-
-```bash
-# Case A: the ZIP extracted directly to many image files:
-mv /cm/archive/namnv78_A100_PDM/data/images_unzip/* \
-  "$LIBMOE_ROOT/data/ocr_vqa/images/"
-
-# Case B: the ZIP contains an inner 'images/' folder:
-# mv /cm/archive/namnv78_A100_PDM/data/images_unzip/images/* \
-#   "$LIBMOE_ROOT/data/ocr_vqa/images/"
-```
-
-(If you prefer not to duplicate storage, create a symlink instead of `mv`):
 
 ```bash
-# Remove the target if it exists and point a symlink:
-rm -rf "$LIBMOE_ROOT/data/ocr_vqa/images"
-ln -s /cm/archive/namnv78_A100_PDM/data/images_unzip \
-  "$LIBMOE_ROOT/data/ocr_vqa/images"
-```
-
-Check:
-
-```bash
-tree -L 2 "$LIBMOE_ROOT/data" | sed -n '1,200p'
-```
-
-You should now see:
-
-```
-libmoe/
-└── data/
-    ├── image_onevision/
-    ├── coco/
-    │   └── train2017/
-    ├── gqa/
-    │   └── images/
-    ├── ocr_vqa/
-    │   └── images/   <-- OCR-VQA images here
-    ├── textvqa/
-    │   └── train_images/
-    └── vg/
-        ├── VG_100K/
-        └── VG_100K_2/
-```
-
----
-
-## 7) Troubleshooting
-
-* **`unzip: End-of-central-directory signature not found`**
-  One or more `.zip.part` files are missing/corrupted. Re-download the missing part(s) and re-run `cat`.
-
-* **Slow downloads / frequent timeouts**
-  Enable accelerated transfer: `export HF_HUB_ENABLE_HF_TRANSFER=1` (requires `hf_transfer` installed).
-  You can also run multiple parallel shells to fetch parts.
-
-* **Disk space**
-  You need free space for all parts (~48–50 GB) **and** the unzipped images (size varies).
-  Delete the parts after verifying extraction to reclaim space.
-
-* **Permission denied**
-  Use a writable directory or prefix commands with `sudo` if appropriate. Check that `$LIBMOE_ROOT` is correct.
-
----
-
-## 8) Clean up (optional)
-
-```bash
-rm -f /cm/archive/namnv78_A100_PDM/data/images.zip
-rm -rf /cm/archive/namnv78_A100_PDM/data/ocr_vqa_parts
-```
-
----
-
-## 9) Notes
-
-* If new parts (e.g., `ay`, `az`, `ba`, …) are added later, append them to the `PARTS=(...)` list before running the loop.
-* For reproducibility, consider publishing an `md5sum` file for all parts and the final ZIP.
-
-Happy hacking!
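
The reproducibility note above (publishing an `md5sum` file for all parts and the final ZIP) could be sketched as below. This is a minimal sketch, not part of the dataset tooling: `make_manifest`/`verify_manifest` are hypothetical helper names, and GNU coreutils `md5sum` is assumed.

```shell
# Hypothetical helpers for an md5 manifest over the parts and merged ZIP.
# Assumes GNU coreutils md5sum; helper names are illustrative only.
make_manifest() {
  # Record checksums for every part and the merged ZIP in checksums.md5
  (cd "$1" && md5sum images_part_*.zip.part images.zip > checksums.md5)
}

verify_manifest() {
  # Re-check all recorded files; non-zero exit on any mismatch
  (cd "$1" && md5sum -c --quiet checksums.md5)
}
```

Shipping `checksums.md5` alongside the parts lets downstream users detect truncated downloads before spending time on `cat` and `unzip`.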
 
 
 ---
 
+## 1) Download split parts from Hugging Face
 
+Dataset: `DavidNguyen/ocr_vqa`, path: `ocr_vqa/images_part_*.zip.part`
 
+### Using `curl` directly (works anywhere)
 
+```bash
+#!/usr/bin/env bash
+set -euo pipefail
+
+DEST=./data/ocr_vqa/ocr_vqa_parts
+mkdir -p "$DEST"
+cd "$DEST"
+
+# List of parts
+PARTS=$(echo a{a..x})  # aa ab ac ... ax
+
+# Download function
+download_one() {
+  p="$1"
+  file="images_part_${p}.zip.part"
+  url="https://huggingface.co/datasets/DavidNguyen/ocr_vqa/resolve/main/ocr_vqa/${file}?download=true"
+  echo "[*] downloading $file"
+  curl -L --fail --retry 5 --retry-delay 3 -o "$file" "$url"
+}
+
+export -f download_one
+
+# Fetch up to 8 parts in parallel
+echo "$PARTS" | xargs -n1 -P8 bash -c 'download_one "$@"' _
 ```
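
Before concatenating, it is worth confirming that all 24 expected parts actually landed and are non-empty. The sketch below assumes the download directory used above; `check_parts` is a hypothetical helper, not part of the repo.

```shell
# Sketch: verify every expected part (aa..ax) exists and is non-empty.
# check_parts is a hypothetical helper; pass the download directory.
check_parts() {
  dir="$1"
  missing=0
  for p in $(echo a{a..x}); do
    f="$dir/images_part_${p}.zip.part"
    if [ ! -s "$f" ]; then
      echo "missing or empty: $f"
      missing=$((missing + 1))
    fi
  done
  echo "missing parts: $missing"
  [ "$missing" -eq 0 ]
}
```

For example, `check_parts ./data/ocr_vqa/ocr_vqa_parts` should report zero missing parts before you run the `cat` step.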
 
 ---
 
+## 2) (Optional) Quick integrity check on parts
 
 ```bash
 # Show sizes to spot any truncated files
 
 ---
 
+## 3) Concatenate parts → a single ZIP
 
 If you already downloaded to `/cm/archive/namnv78_A100_PDM/data/`, the one-liner below works:
 
 ```bash
+mkdir -p /cm/archive/namnv78_A100_PDM/data/ocr_vqa
+cat /cm/archive/namnv78_A100_PDM/data/ocr_vqa/images_part_*.zip.part \
+  > /cm/archive/namnv78_A100_PDM/data/ocr_vqa/images.zip
 ```
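
A cheap sanity check after `cat` is that the merged ZIP's size equals the sum of the part sizes. The sketch below is illustrative: `check_merge` is a hypothetical helper, and GNU `wc`/`awk` are assumed.

```shell
# Sketch: compare the summed part sizes against the merged images.zip size.
# check_merge is a hypothetical helper; pass the directory holding both.
check_merge() {
  dir="$1"
  # wc -c prints a final "total" line when given multiple files
  parts_bytes=$(wc -c "$dir"/images_part_*.zip.part | awk 'END {print $1}')
  zip_bytes=$(wc -c < "$dir/images.zip")
  echo "parts: $parts_bytes bytes, merged: $zip_bytes bytes"
  [ "$parts_bytes" -eq "$zip_bytes" ]
}
```

For example, `check_merge /cm/archive/namnv78_A100_PDM/data/ocr_vqa` should print matching byte counts; a mismatch means a part is truncated or missing.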
 
 ---
 
+## 4) Unzip
 
 ```bash
+cd /cm/archive/namnv78_A100_PDM/data/ocr_vqa
+unzip -q images.zip
 ```
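
After unzipping, a quick file count helps confirm the extraction looks complete before deleting the parts. `count_images` below is a hypothetical helper, and the extension list is an assumption about the archive's contents.

```shell
# Sketch: count extracted image files under a directory (recursively).
# count_images is a hypothetical helper; the extension list is an assumption.
count_images() {
  find "$1" -type f \( -name '*.jpg' -o -name '*.png' -o -name '*.gif' \) | wc -l
}
```

For example, `count_images /cm/archive/namnv78_A100_PDM/data/ocr_vqa` should report a number in the hundreds of thousands for the full OCR-VQA image set.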