---
license: apache-2.0
task_categories:
  - other
pretty_name: Olmix Swarm Datasets
size_categories:
  - n<1K
tags:
  - data-mixing
  - language-models
  - pretraining
  - mixture-optimization
---

# Olmix Swarm Datasets

This repository contains proxy run swarm datasets for data mixing using Olmix. Each swarm consists of multiple proxy training runs with different domain mixture ratios and their corresponding evaluation metrics across various downstream tasks. The swarm datasets here are based on 30M parameter proxy models trained on 3B tokens.

For more details, see our paper: *Olmix: A Framework for Data Mixing Throughout LM Development*

## Swarms

The dataset contains 36 swarms. Each swarm consists of `ratios.csv`, `metrics.csv`, and `meta.json`.

```
dclm_swarm/                          # Main DCLM swarm (1 swarm)
├── study/
│   ├── rq2/                         # Swarm size experiments (3 swarms)
│   │   ├── dclm_swarm_18_topics/
│   │   ├── dclm_swarm_12_topics/
│   │   └── dclm_swarm_6_topics/
│   ├── rq3/                         # Swarm distribution experiments (5 swarms)
│   │   ├── dclm_dense_swarm/
│   │   ├── source_level_sparse_swarm/
│   │   ├── source_level_dense_swarm/
│   │   ├── dclm_swarm_strong_prior/
│   │   └── dclm_swarm_weak_prior/
│   └── rq6/                         # Constrained swarm for enforcing repetition (1 swarm)
│       └── dclm_constrained_swarm/
└── mixture_reuse/
    ├── real_world/
    │   ├── full_reuse/              # Full mixture reuse (12 swarms)
    │   │   ├── update1_add_stack_edu_seed{0,1,2}/
    │   │   ├── update2_add_more_sources_seed{0,1,2}/
    │   │   ├── update3_revise_pdfs_seed{0,1,2}/
    │   │   └── update5_partition_pdfs_seed{0,1,2}/
    │   ├── partial_reuse/           # Partial mixture reuse (6 swarms)
    │   │   ├── update1_add_stack_edu_recompute_software_dev_seed{0,1,2}/
    │   │   └── update5_partition_pdfs_recompute_mix_seed{0,1,2}/
    │   ├── swarm_reuse/             # Swarm reuse baseline (1 swarm)
    │   └── full_recomputation/      # Full recomputation baseline (1 swarm)
    └── theory_validation/           # Theoretical validation (6 swarms)
        ├── dclm_stackedu/
        ├── dclm_pdfs/
        ├── weak_mix_reuse_gap_dclm_stackedu/
        ├── intermediate_mix_reuse_gap_dclm_stackedu/
        ├── weak_mix_reuse_gap_dclm_pdfs/
        └── intermediate_mix_reuse_gap_dclm_pdfs/
```
- `dclm_swarm/` (1 swarm): 128 proxy runs over 24 DCLM topics (sparse); the primary swarm used throughout the paper.
- `study/` (9 swarms):
  - RQ2 — Swarm Size (3 swarms, Section 3.2): How many proxy runs are needed for m domains? Swarms over subsets of DCLM topics: 18 topics (129 runs total), 12 topics (130 runs total), and 6 topics (125 runs total).
  - RQ3 — Swarm Distribution (5 swarms, Section 3.3): Comparing swarm configurations over 24 DCLM topics or 7 sources, varying density (sparse vs. dense) and Dirichlet prior strength (weak vs. strong).
  - RQ6 — Constrained Optimization (1 swarm, Section 3.6): 128 proxy runs satisfying repetition constraints for R=6T requested tokens and a repetition factor of k=4.
- `mixture_reuse/real_world/` (20 swarms, Section 5.1, Figure 10):
  - Full reuse (12 swarms): The entire previous mixture is reused as a single virtual domain. 3 seeds each for 4 updates:
    - Update 1 (add Stack-Edu): swarms over DCLM (1 virtual domain) and Stack-Edu
    - Update 2 (add more sources): swarms over DCLM+Stack-Edu (1 virtual domain) and new sources
    - Update 3 (revise PDFs): swarms over the revised PDF source and the unchanged domains (1 virtual domain)
    - Update 5 (partition PDFs): swarms over the partitioned PDF topics and the unchanged domains (1 virtual domain)
  - Partial reuse (6 swarms): Only the unaffected portion of the mixture is reused; the affected domains are recomputed. 3 seeds each for 2 updates:
    - Update 1 (add Stack-Edu, recompute software development): swarms over DCLM \ software development (1 virtual domain), software development, and Stack-Edu
    - Update 5 (partition PDFs, recompute mix): swarms over PDF topics and sources
  - Swarm reuse (1 swarm): Proxy runs accumulated from all prior swarms compatible with the final 64-domain set.
  - Full recomputation (1 swarm): 512 proxy runs over the final 64 domains after all 5 updates, subsampled at various budgets to measure full recomputation's performance as a function of run count.
- `mixture_reuse/theory_validation/` (6 swarms, Section 5.2): Experiments validating the mixture reuse theory.
  - DCLM+Stack-Edu and DCLM+PDFs: Swarms for the full recomputation baseline in two settings.
  - Weak/intermediate mix reuse gap (DCLM+Stack-Edu): Swarms over DCLM (reusing a weak or intermediate mix) and Stack-Edu.
  - Weak/intermediate mix reuse gap (DCLM+PDFs): Swarms over DCLM (reusing a weak or intermediate mix) and PDFs.

## Data Format

### `ratios.csv`

- `run`, `name`, `index`: run identifier, full run name, and sequential index.
- One column per domain: the mixture weight (proportion of training tokens) assigned to that domain. Domain names follow the format `source:topic` (e.g., `dclm:politics`, `s2pdf:science_tech`, `stack-edu:Python`), where the prefix is the data source and the suffix is the topic within that source. Single-part names (e.g., `wikipedia`, `arxiv`) are single-domain sources.
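To illustrate the layout, here is a minimal stdlib-only sketch that parses a `ratios.csv`-style file and checks that each run's domain weights sum to 1. The runs, domains, and weights below are synthetic examples following the format described above, not values from a real swarm.

```python
import csv
import io

# Synthetic snippet mimicking the ratios.csv layout; values are illustrative only.
ratios_csv = """run,name,index,dclm:politics,stack-edu:Python,wikipedia
0,swarm-run-0,0,0.5,0.3,0.2
1,swarm-run-1,1,0.25,0.25,0.5
"""

ID_COLS = {"run", "name", "index"}

def read_mixtures(text):
    """Parse a ratios.csv-style file into {run: {domain: weight}}."""
    mixtures = {}
    for row in csv.DictReader(io.StringIO(text)):
        weights = {k: float(v) for k, v in row.items() if k not in ID_COLS}
        mixtures[row["run"]] = weights
    return mixtures

mixtures = read_mixtures(ratios_csv)
for run, weights in mixtures.items():
    # Mixture weights are proportions of training tokens, so each row sums to 1.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, run
print(mixtures["0"]["dclm:politics"])  # → 0.5
```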

### `metrics.csv`

- `run`, `name`, `index`: same identifiers as in `ratios.csv`.
- One column per evaluation task: the model's bits-per-byte (BPB) score on that task; lower BPB is better. Tasks span a wide range of benchmarks, including MMLU, ARC, coding (HumanEval, MBPP), math (Minerva), and reading comprehension.
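Since lower BPB is better, a simple exploratory analysis is to rank runs by their mean BPB across tasks. The task columns and scores below are invented for illustration; a real swarm has many more of both.

```python
import csv
import io
import statistics

# Illustrative metrics.csv-style snippet; task columns and BPB values are made up.
metrics_csv = """run,name,index,mmlu,arc_challenge,humaneval
0,swarm-run-0,0,0.92,0.88,1.10
1,swarm-run-1,1,0.85,0.90,1.05
"""

ID_COLS = {"run", "name", "index"}

rows = list(csv.DictReader(io.StringIO(metrics_csv)))
# Lower BPB is better, so the "best" run minimizes the mean across tasks.
mean_bpb = {
    r["run"]: statistics.mean(float(v) for k, v in r.items() if k not in ID_COLS)
    for r in rows
}
best = min(mean_bpb, key=mean_bpb.get)
print(best, round(mean_bpb[best], 4))
```

Unweighted averaging across tasks is just one possible aggregation; Olmix's actual fitting procedure is described in the paper.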

### `meta.json`

Each `meta.json` includes `swarm_id`, `description`, `category`, `notes`, `files`, and two fields used when configuring a fit in the Olmix repository:

- `relative_sizes`: the natural distribution of the domains, used in the KL penalty.
- `token_counts`: the final token count for each domain, used when constructing the repetition constraint.
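As an illustrative sketch (not Olmix's actual implementation) of what these two fields feed into: `relative_sizes` can anchor a KL term measuring how far a candidate mix strays from the natural distribution, and `token_counts` bound how much each domain may contribute once the repetition factor is fixed. All domains and numbers below are invented.

```python
import json
import math

# Hypothetical meta.json fragment: field names follow the description above,
# but the domains and numbers are invented for illustration.
meta = json.loads("""
{
  "relative_sizes": {"dclm": 0.8, "stack-edu": 0.2},
  "token_counts":  {"dclm": 4.0e12, "stack-edu": 0.5e12}
}
""")

def kl_to_natural(mix, natural):
    """KL(mix || natural): divergence of a candidate mix from the natural distribution."""
    return sum(p * math.log(p / natural[d]) for d, p in mix.items() if p > 0)

def satisfies_repetition(mix, token_counts, requested_tokens, k):
    """Each domain may contribute at most k times its available tokens."""
    return all(p * requested_tokens <= k * token_counts[d] for d, p in mix.items())

mix = {"dclm": 0.7, "stack-edu": 0.3}
print(round(kl_to_natural(mix, meta["relative_sizes"]), 4))
print(satisfies_repetition(mix, meta["token_counts"], requested_tokens=6e12, k=4))
```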

## Usage

The primary use of these datasets is as input to the Olmix GitHub repository for constructing a proposed mix. Download `ratios.csv`, `metrics.csv`, and `meta.json` from the desired swarm folder and point to them in your Olmix `.yaml` config file (see the example config in that repository).
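The exact config schema is defined by the Olmix repository; purely as a hypothetical sketch (the key names below are invented for illustration, not taken from Olmix), pointing a fit at a downloaded swarm might look like:

```yaml
# HYPOTHETICAL config fragment -- key names are illustrative only;
# consult the example config in the Olmix repository for the real schema.
swarm:
  ratios: path/to/dclm_swarm/ratios.csv
  metrics: path/to/dclm_swarm/metrics.csv
  meta: path/to/dclm_swarm/meta.json
```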

The dataset is also accessible via the Hugging Face `datasets` library (`load_dataset("allenai/olmix")`) for exploration purposes.

## Resources

## Citation

If you use these datasets in your research, please cite:

```bibtex
@article{chen2026olmix,
  title={Olmix: A Framework for Data Mixing Throughout LM Development},
  author={Chen, Mayee F and Murray, Tyler and Heineman, David and Jordan, Matt and Hajishirzi, Hannaneh and R{\'e}, Christopher and Soldaini, Luca and Lo, Kyle},
  journal={arXiv preprint arXiv:2602.12237},
  year={2026}
}
```

## License

This dataset is released under the Apache 2.0 License.