---
license: mit
tags:
- human-feedback
- preference-modeling
- synthetic
- coding
- safety
- mmluindex
size_categories:
- 1M<n<10M
---

# MMLUindex

MMLUindex is a synthetic preference dataset of 1,200,000 `chosen`/`rejected` conversation pairs spanning coding, reasoning, writing, factual, and safety-focused prompts.

## Dataset Format

Each record pairs a preferred (`chosen`) conversation with a dispreferred (`rejected`) one:

```json
{
  "chosen": "\n\nHuman: \n\nAssistant: ",
  "rejected": "\n\nHuman: \n\nAssistant: "
}
```

Some records are single-turn and some are multi-turn; the formatting is consistent across both styles.

## Content Profile

The dataset covers:

- coding and software questions
- mathematics and reasoning tasks
- writing and rewriting requests
- factual explanation and comparison prompts
- safety-sensitive refusal cases
- honesty and uncertainty-calibration examples

The responses were designed so that the preferred answer is meaningfully better than the rejected one, rather than only stylistically different.

## Intended Use

MMLUindex is suitable for:

- preference-model and reward-model prototyping
- loader and preprocessing pipeline testing
- small-scale alignment experiments
- evaluation of refusal quality and safe redirection
- experiments comparing structured assistant outputs against weaker alternatives

## Out of Scope

MMLUindex is **not** intended as:

- a benchmark for final model-capability claims
- a replacement for large human-collected alignment corpora
- a source of verified real-world labels for high-stakes deployment
- a production-scale foundation-model training corpus

## Loading Examples

Load a CSV subset:

```python
from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files="MMLUindex-coding-base/train/train.csv",
    split="train",
)
```

Load a compressed JSONL subset:

```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files="MMLUindex-coding-base/train/train.jsonl.gz",
    split="train",
)
```

Load the full combined JSONL file:

```python
from datasets import load_dataset

dataset = load_dataset(
    "json",
    data_files="MMLUindex.jsonl",
    split="train",
)
```

## Quality Notes

The dataset was checked for:

- valid JSONL structure
- consistent `chosen` / `rejected` keys
- readable gzip subset files
- matching CSV and JSONL subset row counts
- stable conversation formatting beginning with `Human:` and `Assistant:`

## Limitations

- The dataset is synthetic rather than human-annotated end to end.
- Although larger than earlier repository versions, it is still synthetic rather than collected from live annotator preference workflows.
- Folder names are organizational labels, not provenance claims about the collection method.
- Safety coverage is broad but not exhaustive.

## Safety Notice

Some subset rows include harmful, deceptive, or distressing prompts in order to model better and worse assistant behavior. These examples are included for alignment and safety evaluation, not to endorse such content.

## Version Notes

Current repository characteristics:

- 1,200,000 total preference pairs
- four subset folders
- CSV and compressed JSONL training exports
- one combined root JSONL export
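
## Working with the Conversation Strings

The `\n\nHuman: ...\n\nAssistant: ...` conversation strings used by `chosen` and `rejected` can be split into role-tagged turns for preprocessing. A minimal sketch (the helper name and approach are illustrative, not part of the dataset tooling):

```python
import re

# Turn markers used by this dataset: "\n\nHuman: " and "\n\nAssistant: ".
_ROLE_RE = re.compile(r"\n\n(Human|Assistant): ")

def split_turns(conversation: str) -> list[tuple[str, str]]:
    """Split a chosen/rejected string into (role, text) pairs."""
    parts = _ROLE_RE.split(conversation)
    # parts[0] is whatever precedes the first marker (empty here, since every
    # conversation begins with "\n\nHuman:"); roles and texts then alternate.
    return list(zip(parts[1::2], parts[2::2]))
```

Splitting on the full `\n\nRole: ` marker (rather than on bare `\n\n`) keeps multi-paragraph turn bodies intact, which matters for the multi-turn records.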
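The structural checks listed under Quality Notes (valid JSONL, consistent `chosen`/`rejected` keys, readable gzip files, `Human:`-first formatting) can be reproduced locally. This is a sketch, assuming subset paths of the shape shown in the loading examples; the function name is hypothetical:

```python
import gzip
import json

def check_preference_jsonl(path: str) -> int:
    """Validate one (optionally gzipped) JSONL preference file and return its
    row count. Mirrors the card's quality checks: every line parses as JSON,
    carries `chosen` and `rejected` keys, and both conversations begin with
    the `Human:` turn."""
    opener = gzip.open if path.endswith(".gz") else open
    rows = 0
    with opener(path, "rt", encoding="utf-8") as handle:
        for line in handle:
            record = json.loads(line)
            assert {"chosen", "rejected"} <= record.keys(), "missing keys"
            for key in ("chosen", "rejected"):
                assert record[key].startswith("\n\nHuman:"), "bad formatting"
            rows += 1
    return rows
```

Comparing this function's return value for a subset's `train.jsonl.gz` against the row count of the matching `train.csv` reproduces the CSV/JSONL row-count check.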