update DATA_DIR to DOLMA_DATA_DIR (#53)

Files changed (1): README.md (+4 -4)
@@ -91,15 +91,15 @@ The fastest way to download Dolma is to clone this repository and use the files
 We recommend using wget in parallel mode to download the files. For example:
 
 ```bash
-DATA_DIR="<path_to_your_data_directory>"
+DOLMA_DATA_DIR="<path_to_your_data_directory>"
 PARALLEL_DOWNLOADS="<number_of_parallel_downloads>"
 DOLMA_VERSION="<version_of_dolma_to_download>"
 
 git clone https://huggingface.co/datasets/allenai/dolma
-mkdir -p "${DATA_DIR}"
+mkdir -p "${DOLMA_DATA_DIR}"
 
 
-cat "dolma/urls/${DOLMA_VERSION}.txt" | xargs -n 1 -P "${PARALLEL_DOWNLOADS}" wget -q -P "$DATA_DIR"
+cat "dolma/urls/${DOLMA_VERSION}.txt" | xargs -n 1 -P "${PARALLEL_DOWNLOADS}" wget -q -P "$DOLMA_DATA_DIR"
 ```
 
 Then, to load this data using HuggingFace's `datasets` library, you can use the following code:
@@ -108,7 +108,7 @@ Then, to load this data using HuggingFace's `datasets` library, you can use the
 import os
 from datasets import load_dataset
 
-os.environ["DATA_DIR"] = "<path_to_your_data_directory>"
+os.environ["DOLMA_DATA_DIR"] = "<path_to_your_data_directory>"
 dataset = load_dataset("allenai/dolma", split="train")
 ```
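
The `cat … | xargs -n 1 -P …` line in the first hunk fans each URL in the manifest out to its own `wget` process, running up to `$PARALLEL_DOWNLOADS` of them at once. A minimal offline sketch of the same fan-out, using a made-up two-line URL list in `/tmp` and `echo` standing in for `wget -q -P "$DOLMA_DATA_DIR"`:

```shell
#!/bin/sh
# Stand-in for dolma/urls/<version>.txt (hypothetical URLs, for illustration only).
printf 'https://example.org/a.json.gz\nhttps://example.org/b.json.gz\n' > /tmp/urls.txt

PARALLEL_DOWNLOADS=2

# Same pattern as the README: -n 1 passes one URL per invocation,
# -P caps how many invocations run concurrently.
cat /tmp/urls.txt | xargs -n 1 -P "$PARALLEL_DOWNLOADS" echo fetching
```

With `-P` greater than 1, output order is not guaranteed, which is harmless for downloads since each `wget` writes its own file.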
114