---
license: odc-by
pretty_name: Zyda-2
task_categories:
- text-generation
language:
- en
size_categories:
- n>1T
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*/*
- config_name: sample-100BT
  data_files:
  - split: train
    path: sample/100BT/*/*
- config_name: dclm_crossdeduped
  data_files:
  - split: train
    path: data/dclm_crossdeduped/*/*
- config_name: zyda_crossdeduped-filtered
  data_files:
  - split: train
    path: data/zyda_crossdeduped-filtered/*/*
- config_name: dolma-cc_crossdeduped-filtered
  data_files:
  - split: train
    path: data/dolma-cc_crossdeduped-filtered/*
- config_name: fwe3
  data_files:
  - split: train
    path: data/fwe3/*/*
---

# Zyda-2

<!-- Provide a quick summary of the dataset. -->

Zyda-2 is a 5-trillion-token language modeling dataset built by collecting high-quality open datasets, cross-deduplicating them against one another, and applying model-based quality filtering. Zyda-2 comprises diverse sources of web data, highly educational content, math, code, and scientific papers.

To construct Zyda-2, we took the best open-source datasets available: [Zyda](https://huggingface.co/datasets/Zyphra/Zyda), [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb), [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0), and [Dolma](https://huggingface.co/datasets/allenai/dolma). Models trained on Zyda-2 significantly outperform identical models trained on the Pile, RefinedWeb, FineWeb, FineWeb-Edu, and DCLM. Due to our post-processing deduplication, filtering, and weighting pipeline, Zyda-2 outperforms all its constituent datasets in resulting model quality.

An early version of Zyda-2 was used as the primary dataset for phase 1 pretraining of our Zamba2 [series](https://huggingface.co/Zyphra/Zamba2-7B) [of](https://huggingface.co/Zyphra/Zamba2-2.7B) [models](https://huggingface.co/Zyphra/Zamba2-1.2B), which perform extremely strongly on a per-token basis and are often state-of-the-art for their size, testifying to the strength of Zyda-2 as a pretraining dataset.

According to our evaluations, Zyda-2 is the most performant per-token open dataset available. Zyda-2 excels at educational and natural language reasoning content. For code performance, we recommend mixing it with a pure code dataset such as [Starcoder](https://huggingface.co/bigcode/starcoder).

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65455aca468722e935103b17/-nxHBcU38QJ-MNdKXPiYS.png" width="600" alt="Zyda-2 evaluation scores">
</center>

For more information, please see our [technical blog](https://www.zyphra.com/post/building-zyda-2).

## How to download

We preserved the schemas of the original component datasets, which means every component has its own schema. For that reason, attempting to download the whole dataset with `datasets.load_dataset()` will fail at the split-generation stage. Attempting to stream the default config will fail as well.

To download the whole dataset, we recommend either cloning the repository or, if you must use `datasets.load_dataset()`, downloading the individual components separately.

Only the `nemo_id` and `text` columns are common to all components. Select those columns in every component first, and only then interleave the datasets with the optimal weights (see the example at the bottom of this section).

Example command to clone the repository using huggingface-cli: `huggingface-cli download Zyphra/Zyda-2 --repo-type dataset`

Commands to download individual components:
- DCLM: `ds_dclm = datasets.load_dataset("Zyphra/Zyda-2", name="dclm_crossdeduped", split="train")`
- Zyda: `ds_zyda = datasets.load_dataset("Zyphra/Zyda-2", name="zyda_crossdeduped-filtered", split="train")`
- Dolma-CC: `ds_dolma = datasets.load_dataset("Zyphra/Zyda-2", name="dolma-cc_crossdeduped-filtered", split="train")`
- Fineweb-Edu: `ds_fwe = datasets.load_dataset("Zyphra/Zyda-2", name="fwe3", split="train")`

In this repository we provide the raw results of cross-deduplication and filtering. To achieve the best possible performance, you will need to apply appropriate weights during training. We found the following optimal weights by number of tokens (that is, the weight of each component in the resulting dataset): DCLM - 4.0, FWE3 - 4.0, Zyda - 0.16, Dolma-CC - 0.24.

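Because `interleave_datasets()` samples documents while the weights above are token-wise, the document-wise probabilities can be re-derived from the token and document counts in the breakdown table later in this card. A minimal sketch in plain Python (counts rounded from the table; small differences are rounding error):

```python
# Components in order: DCLM, Zyda, Dolma-CC, FWE3
token_weights = [4.0, 0.16, 0.24, 4.0]     # optimal token-wise weights
tokens_b = [3348.9, 163.6, 238.4, 1319.2]  # gpt-neox tokens, billions
docs_m = [2590.5, 247.7, 445.6, 1279.1]    # documents, millions

# interleave_datasets() samples documents, not tokens, so divide each
# token-wise weight by the component's average tokens per document
doc_weights = [w / (t / d) for w, t, d in zip(token_weights, tokens_b, docs_m)]
total = sum(doc_weights)
probs = [round(w / total, 4) for w in doc_weights]
print(probs)  # [0.4038, 0.0316, 0.0585, 0.5061]
```

These are the precomputed probabilities used in the interleaving example below.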
Below is an example of how to obtain a proper dataset object. It selects only the `nemo_id` and `text` columns, and then interleaves the datasets with probabilities computed from the weights above. Be careful with weight normalization: `interleave_datasets()` samples documents, while our weights are token-wise, so we provide precomputed document-wise probabilities in the example below. To stream the dataset, add `streaming=True` to the `load_dataset()` commands.

```
common_columns = ["nemo_id", "text"]
ds_dclm = ds_dclm.select_columns(common_columns)
ds_zyda = ds_zyda.select_columns(common_columns)
ds_dolma = ds_dolma.select_columns(common_columns)
ds_fwe = ds_fwe.select_columns(common_columns)
# Document-wise probabilities derived from the token-wise weights above
norm_weights = [0.4038, 0.0316, 0.0585, 0.5061]
ds = datasets.interleave_datasets([ds_dclm, ds_zyda, ds_dolma, ds_fwe], probabilities=norm_weights, stopping_strategy="all_exhausted")
```

### (Smaller) sample version

Along with the configs above, you can also download a smaller version of the dataset with the following config:
- `sample-100BT`: a subset of around 100B gpt-neox tokens (252GB, 91.2M documents), randomly sampled from the whole dataset.

This sample only has the common columns `nemo_id` and `text`. In addition, it was sampled according to the optimal weights, so you can start using it directly.

`ds_sample = datasets.load_dataset("Zyphra/Zyda-2", name="sample-100BT", split="train")`

## Breakdown by component

| Component | Download size (parquet, GB) | Documents (millions) | gpt-neox tokens (billions) |
| --- | --- | --- | --- |
| dclm_crossdeduped | 8,469.4 | 2,590.5 | 3,348.9 |
| zyda_crossdeduped-filtered | 452.4 | 247.7 | 163.6 |
| dolma-cc_crossdeduped-filtered | 668.2 | 445.6 | 238.4 |
| fwe3 | 3,490.5 | 1,279.1 | 1,319.2 |
| Total | 13,080.5 | 4,562.8 | 5,070.2 |

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** Zyphra
- **Language(s) (NLP):** Primarily English
- **License:** ODC-BY

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each component has its own schema. Please consult the respective sources for exact information.

However, in all components the document text is in the `text` column, and the unique document id is in the `nemo_id` column.

Our Zyda-1 and Dolma-CC versions also have two additional columns with the predictions of NVIDIA's quality classifier (https://huggingface.co/nvidia/quality-classifier-deberta): `quality_prob` and `quality_pred`.

### Source Data

Zyda-2 comprises four high-quality open-source datasets:

- Zyda-1: https://huggingface.co/datasets/Zyphra/Zyda
- Dolma-CC v1.7: https://huggingface.co/datasets/allenai/dolma
- DCLM-baseline: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- FineWeb-Edu-score2: https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2

<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/65c05e75c084467acab2f84a/GQenkNxzyM65M4eR2YZcV.png" width="600" alt="Zyda-2 dataset composition">
</center>

#### Personal and Sensitive Information

As a language modeling dataset, it likely contains PII that was not filtered out of the component datasets and that may have been missed by our own filters.

## Bias, Risks, and Limitations

As a dataset composed of open web scrapes, it likely contains biased and toxic content.

## Licensing Information

We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this dataset, you are also bound by any license agreements and terms of use of the original data sources.

## Citation

If you use our dataset to train a model, please cite us:

```
@misc{zyphra_nvidia_2024,
    author = {Yury Tokpanov and Paolo Glorioso and Ayush Dattagupta and Vibhu Jawa and Ryan Wolf and Vikranth Jeyakumar and Arham Mehta and Quentin Anthony and Beren Millidge},
    title = {Building {Zyda-2}, a 5 {Trillion} {Token} {High-Quality} {Dataset}, with {NVIDIA} {NeMo} {Curator}},
    url = {https://www.zyphra.com/post/building-zyda-2},
    publisher = {Zyphra},
    year = {2024},
    month = {October},
    day = {15}
}
```