Upload folder using huggingface_hub

- README.md +43 -0
- train/filtered.parquet +3 -0
- val/filtered.parquet +3 -0

README.md ADDED
@@ -0,0 +1,43 @@
# mmlu (filtered)

Filtered version of [cais/mmlu](https://huggingface.co/datasets/cais/mmlu), produced as part of the `assistant_core_midtraining` data mixture.

## Source Dataset

- **Original HuggingFace path**: `cais/mmlu`
- **Disjoint train/val splits**: yes
- **train split**: 1 subset from the `train` split
- **val split**: 57 subsets from the `test` split

## Filtering Methodology

Each example in the original dataset was reviewed by an LLM (GPT-OSS-120B), which assessed quality across several dimensions, including factual accuracy, formatting consistency, and instructional clarity. Examples that failed review were collected into a blocklist keyed by source name and original row index.

The filtered dataset was produced by:

1. Loading the original dataset from HuggingFace
2. Stamping each row with its original index (`__source_orig_idx__`) for traceability
3. Removing all rows whose index appears in the blocklist
4. Validating that row counts match expectations exactly
5. Spot-checking a random sample of surviving rows against the original to verify data integrity
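The stamp-and-filter steps above can be sketched roughly as follows, using toy rows in place of the real data; the function name `filter_with_blocklist` and the `(source, index)` key shape are illustrative assumptions, not the actual pipeline code.

```python
def filter_with_blocklist(rows, blocklist, source_name):
    """Stamp rows with their original index, then drop blocklisted ones."""
    # Steps 1-2: stamp each row with its original position for traceability.
    stamped = [{**row, "__source_orig_idx__": i} for i, row in enumerate(rows)]
    # Step 3: remove rows whose (source, index) key appears in the blocklist.
    kept = [
        r for r in stamped
        if (source_name, r["__source_orig_idx__"]) not in blocklist
    ]
    # Step 4: validate the surviving row count exactly.
    n_blocked = sum(1 for src, _ in blocklist if src == source_name)
    assert len(kept) == len(rows) - n_blocked
    return kept

rows = [{"question": f"q{i}"} for i in range(5)]
blocklist = {("cais/mmlu", 1), ("cais/mmlu", 3)}
filtered = filter_with_blocklist(rows, blocklist, "cais/mmlu")
# Surviving rows keep their original indices (0, 2, 4) for lineage.
```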

## Filtering Impact

| Split | Original Rows | Removed | Filtered Rows | % Removed |
|-------|---------------|---------|---------------|-----------|
| train | 99,842 | 6,928 | 92,914 | 6.94% |
| val | 14,042 | 0 | 14,042 | 0.00% |
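The counts in the table can be sanity-checked directly; this is a reader-side arithmetic check, not part of the pipeline.

```python
# Verify that filtered = original - removed for each split.
splits = {
    "train": {"original": 99_842, "removed": 6_928, "filtered": 92_914},
    "val": {"original": 14_042, "removed": 0, "filtered": 14_042},
}
for name, s in splits.items():
    assert s["original"] - s["removed"] == s["filtered"]
    print(f"{name}: {100 * s['removed'] / s['original']:.2f}% removed")
```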

## Schema

All original columns are preserved. One column is added:

- `__source_orig_idx__`: the row's index in the original (unfiltered) dataset. This provides complete lineage back to the source for debugging and future analysis.

**Note**: The original `train` split had a nested structure (`subsplit_name: "train"`). This has been flattened into top-level columns. When updating `data_source_configurations.py` to point to this repo, remove the `subsplit_name` field from the `train` split config.
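A minimal sketch of how the lineage column supports spot-checks, with toy lists standing in for the original dataset and the filtered parquet; `trace_back` is an illustrative helper, not shipped code.

```python
# Toy stand-ins: 6 original rows, of which 3 survive filtering.
original = [{"question": f"q{i}", "answer": i % 4} for i in range(6)]
filtered = [{**original[i], "__source_orig_idx__": i} for i in (0, 2, 5)]

def trace_back(filtered_row, original_rows):
    """Recover and spot-check the source row behind a filtered row."""
    src = original_rows[filtered_row["__source_orig_idx__"]]
    # Every preserved column must match the source exactly.
    assert all(filtered_row[k] == src[k] for k in src)
    return src

src = trace_back(filtered[1], original)
# filtered[1] traces back to original index 2
```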

## Provenance

- **Blocklist**: `data/instruct_mix/train/review_results/blocklist_proposed_20260218_232356.jsonl`
- **Mixture**: `assistant_core_midtraining`
- **Generated**: 2026-02-20T12:05:03.203204+00:00

train/filtered.parquet ADDED
@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:8a7a34d58e67260ec21ae2534d7e0fd53b0608a820a727d2353c7a6819adf483
size 44530264

val/filtered.parquet ADDED
@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:7eef176252c3719e1512064f4c6407956cf7b67bcb4980d5ef5a8007bd4da17f
size 3447252