Update README.md
README.md CHANGED

@@ -89,8 +89,8 @@ Run the following command to train the model quickly:

 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --master_port=23333 --nproc_per_node=8 tools/train_rec.py --c configs/rec/unirec/focalsvtr_ardecoder_unirec.yml
 ```

-To download the full dataset, you need to merge the split files named `data.mdb.part_*` (located in `HWDB2Train`, `ch_pdf_lmdb`, and `en_pdf_lmdb`) into a single `data.mdb` file. Execute the commands below step by step:
+Downloading the full dataset requires 3.5 TB of available storage space. Then, you need to merge the split files named `data.mdb.part_*` (located in `HWDB2Train`, `ch_pdf_lmdb`, and `en_pdf_lmdb`) into a single `data.mdb` file. Execute the commands below step by step:
 ```shell
 # downloading full data
 huggingface-cli download topdu/UniRec40M --repo-type dataset --local-dir ./UniRec40M/
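The hunk above shows the download command but cuts off before the merge itself. A minimal sketch of that step, demonstrated on synthetic part files in a temporary directory (for the real dataset you would `cd` into `HWDB2Train`, `ch_pdf_lmdb`, or `en_pdf_lmdb` under `./UniRec40M/` instead); it assumes the `data.mdb.part_*` files are plain byte-level splits that concatenate in lexical glob order:

```shell
# Demo of merging data.mdb.part_* into a single data.mdb,
# using two synthetic parts instead of the real multi-GB files.
mkdir -p /tmp/unirec_demo && cd /tmp/unirec_demo
printf 'AAA' > data.mdb.part_0
printf 'BBB' > data.mdb.part_1

# The shell expands the glob in lexical order, so the parts
# are concatenated back in their original sequence.
cat data.mdb.part_* > data.mdb

wc -c data.mdb   # 6 bytes: both 3-byte parts merged
```

After verifying the merged `data.mdb`, the `data.mdb.part_*` files can be deleted to reclaim space; with splits of this size it is worth keeping them until the merged LMDB opens successfully.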