Tasks: Other · Size: n<1K
Mayee Chen committed on
Commit d53a119 · 1 Parent(s): aa1e709

instructions for using with github

Files changed (1)
  1. README.md +2 -26
README.md CHANGED
@@ -29,35 +29,11 @@ The dataset contains **32 swarms** organized into four main categories:
 
 ## Usage
 
-### Primary Use Case: Direct CSV Loading with Olmix
+### Primary Use Case: Proposing a Mix using OlmixBase
 
-The primary way to use these datasets is to load the CSV files directly and pass them to Olmix functions. This is the recommended workflow for data mixture optimization.
+The primary way to use these datasets is to download the CSVs (`ratios.csv` and `metrics.csv` together) and use them with the [Olmix GitHub repository](https://github.com/allenai/olmix) ([example config](https://github.com/allenai/olmix/blob/main/configs/fits/dclm_baseline.yaml)). This is the recommended workflow for data mixture optimization.
-
-### Loading Metadata
-
-Each dataset includes a `meta.json` file with descriptions and details:
-
-```python
-import json
-
-with open("dclm_swarm/meta.json") as f:
-    meta = json.load(f)
-
-print(f"Description: {meta['description']}")
-print(f"Notes: {meta['notes']}")
-```
-
-### Browsing Available Datasets
-
-```bash
-# List all available swarms
-find . -name "meta.json" -exec dirname {} \;
-
-# View a specific swarm's metadata
-cat dclm_swarm/meta.json
-```
 
 ### Alternative: HuggingFace Datasets Library
 
 You can also load all swarms at once using the HuggingFace datasets library (useful for exploration):
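As context for the revised usage instructions, the commit points at `ratios.csv` and `metrics.csv` as a pair: one row of mixing ratios per run, matched with that run's evaluation metrics. A minimal runnable sketch of joining the two, using toy inline data — the column names (`run`, `web`, `code`, `loss`) are illustrative assumptions here, not the dataset's actual schema, which is defined by the swarm files and the Olmix repo:

```python
import io

import pandas as pd

# Toy stand-ins for one swarm's paired CSVs; column names are assumptions.
ratios_csv = io.StringIO("run,web,code\nrun0,0.7,0.3\nrun1,0.5,0.5\n")
metrics_csv = io.StringIO("run,loss\nrun0,2.31\nrun1,2.27\n")

ratios = pd.read_csv(ratios_csv)    # candidate mixture weights per run
metrics = pd.read_csv(metrics_csv)  # evaluation results per run

# Join on the shared run identifier so each mixture is paired with its metrics.
swarm = pd.merge(ratios, metrics, on="run")
print(swarm)
```

For real use, replace the `StringIO` objects with paths to a downloaded swarm's `ratios.csv` and `metrics.csv`, or follow the linked Olmix fit config instead.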