---
license: cc-by-4.0
configs:
  - config_name: chunking
    data_files:
      - split: train
        path: linguistic/chunking/train.json
      - split: validation
        path: linguistic/chunking/valid.json
      - split: test
        path: linguistic/chunking/test.json
    dataset_info:
      splits:
        train:
          num_examples: 6000
        validation:
          num_examples: 1000
        test:
          num_examples: 1000
  - config_name: clang8
    data_files:
      - split: train
        path: linguistic/clang8/train.json
      - split: validation
        path: linguistic/clang8/valid.json
      - split: test
        path: linguistic/clang8/test.json
  - config_name: ner
    data_files:
      - split: train
        path: linguistic/ner/train.json
      - split: validation
        path: linguistic/ner/valid.json
      - split: test
        path: linguistic/ner/test.json
  - config_name: postag
    data_files:
      - split: train
        path: linguistic/postag/train.json
      - split: validation
        path: linguistic/postag/valid.json
      - split: test
        path: linguistic/postag/test.json
  - config_name: agnews
    data_files:
      - split: train
        path: classification/agnews/train.json
      - split: validation
        path: classification/agnews/valid.json
      - split: test
        path: classification/agnews/test.json
    dataset_info:
      splits:
        train:
          num_examples: 6000
        validation:
          num_examples: 1000
        test:
          num_examples: 1000
  - config_name: amazon-reviews
    data_files:
      - split: train
        path: classification/amazon-reviews/train.json
      - split: validation
        path: classification/amazon-reviews/valid.json
      - split: test
        path: classification/amazon-reviews/test.json
  - config_name: imdb
    data_files:
      - split: train
        path: classification/imdb/train.json
      - split: validation
        path: classification/imdb/valid.json
      - split: test
        path: classification/imdb/test.json
  - config_name: mnli
    data_files:
      - split: train
        path: nli/mnli/train.json
      - split: validation
        path: nli/mnli/valid.json
      - split: test
        path: nli/mnli/test.json
  - config_name: paws
    data_files:
      - split: train
        path: nli/paws/train.json
      - split: validation
        path: nli/paws/valid.json
      - split: test
        path: nli/paws/test.json
  - config_name: swag
    data_files:
      - split: train
        path: nli/swag/train.json
      - split: validation
        path: nli/swag/valid.json
      - split: test
        path: nli/swag/test.json
  - config_name: fever
    data_files:
      - split: train
        path: fact/fever/train.json
      - split: validation
        path: fact/fever/valid.json
      - split: test
        path: fact/fever/test.json
  - config_name: myriadlama
    data_files:
      - split: train
        path: fact/myriadlama/train.json
      - split: validation
        path: fact/myriadlama/valid.json
      - split: test
        path: fact/myriadlama/test.json
  - config_name: commonsenseqa
    data_files:
      - split: train
        path: fact/commonsenseqa/train.json
      - split: validation
        path: fact/commonsenseqa/valid.json
      - split: test
        path: fact/commonsenseqa/test.json
  - config_name: templama
    data_files:
      - split: train
        path: fact/templama/train.json
      - split: validation
        path: fact/templama/valid.json
      - split: test
        path: fact/templama/test.json
  - config_name: halueval
    data_files:
      - split: train
        path: self-reflection/halueval/train.json
      - split: validation
        path: self-reflection/halueval/valid.json
      - split: test
        path: self-reflection/halueval/test.json
  - config_name: stereoset
    data_files:
      - split: train
        path: self-reflection/stereoset/train.json
      - split: validation
        path: self-reflection/stereoset/valid.json
      - split: test
        path: self-reflection/stereoset/test.json
  - config_name: toxicity
    data_files:
      - split: train
        path: self-reflection/toxicity/train.json
      - split: validation
        path: self-reflection/toxicity/valid.json
      - split: test
        path: self-reflection/toxicity/test.json
  - config_name: lti
    data_files:
      - split: train
        path: multilingual/lti/train.json
      - split: validation
        path: multilingual/lti/valid.json
      - split: test
        path: multilingual/lti/test.json
  - config_name: mpostag
    data_files:
      - split: train
        path: multilingual/mpostag/train.json
      - split: validation
        path: multilingual/mpostag/valid.json
      - split: test
        path: multilingual/mpostag/test.json
  - config_name: amazon-review-multi
    data_files:
      - split: train
        path: multilingual/amazon-review-multi/train.json
      - split: validation
        path: multilingual/amazon-review-multi/valid.json
      - split: test
        path: multilingual/amazon-review-multi/test.json
  - config_name: xnli
    data_files:
      - split: train
        path: multilingual/xnli/train.json
      - split: validation
        path: multilingual/xnli/valid.json
      - split: test
        path: multilingual/xnli/test.json
  - config_name: mlama
    data_files:
      - split: train
        path: multilingual/mlama/train.json
      - split: validation
        path: multilingual/mlama/valid.json
      - split: test
        path: multilingual/mlama/test.json
annotations_creators:
  - no-annotation
language_creators:
  - found
language:
  - multilingual
multilinguality:
  - multilingual
size_categories:
  - n<1M
task_categories:
  - multiple-choice
  - question-answering
  - text-classification
task_ids:
  - natural-language-inference
  - acceptability-classification
  - fact-checking
  - intent-classification
  - language-identification
  - multi-label-classification
  - sentiment-classification
  - topic-classification
  - sentiment-scoring
  - hate-speech-detection
  - named-entity-recognition
  - part-of-speech
  - parsing
  - open-domain-qa
  - document-question-answering
  - multiple-choice-qa
paperswithcode_id: null
---

# MCEval8K

MCEval8K is a diverse multiple-choice evaluation benchmark for probing language models' (LMs) understanding of a broad range of language skills through neuron-level analysis. It was introduced in the ACL 2025 paper "Neuron Empirical Gradient: Discovering and Quantifying Neurons’ Global Linear Controllability".

## 🔍 Overview

MCEval8K consists of 22 tasks grouped into six skill genres, covering linguistic analysis, content classification, reasoning, factuality, self-reflection, and multilinguality. It is specifically designed for skill neuron probing — identifying and analyzing neurons responsible for specific language capabilities in large language models (LLMs).

Each instance is formatted as a multiple-choice question with a single-token answer, enabling fine-grained neuron attribution analysis.

## 📚 Dataset Structure

Each task is capped at 8,000 examples to ensure scalability while retaining task diversity. All tasks are converted to multiple-choice format with controlled answer distributions to avoid label bias.
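The label-balance idea can be illustrated with a toy check (the records below are hypothetical; the dataset's actual balancing procedure is described in the paper): a balanced multiple-choice split spreads gold answers roughly evenly over the candidate choices.

```python
from collections import Counter

# Toy split with hypothetical records; a balanced multiple-choice
# set should distribute gold answers evenly across the choices.
records = [
    {"choices": ["yes", "no"], "answer": "yes"},
    {"choices": ["yes", "no"], "answer": "no"},
    {"choices": ["yes", "no"], "answer": "yes"},
    {"choices": ["yes", "no"], "answer": "no"},
]

counts = Counter(r["answer"] for r in records)
assert max(counts.values()) == min(counts.values())  # no label bias
```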

The genres and the involved tasks are summarized in the table below.

| Genre | Tasks |
| --- | --- |
| Linguistic | POS, CHUNK, NER, GED |
| Content Classification | IMDB, Amazon, Agnews |
| Natural Language Inference | MNLI, PAWS, SWAG |
| Factuality | MyriadLAMA, FEVER, CSQA, TempLAMA |
| Self-reflection | HaluEval, Toxic, Stereoset |
| Multilinguality | LTI, M-POS, M-Amazon, mLAMA, XNLI |

### Linguistic

- POS: Part-of-speech tagging using Universal Dependencies. Given a sentence with a highlighted word, the model predicts its POS tag.
- CHUNK: Phrase chunking from CoNLL-2000. The task is to determine the syntactic chunk type (e.g., NP, VP) of a given word.
- NER: Named entity recognition from CoNLL-2003. Predicts the entity type (e.g., PERSON, ORG) for a marked word.
- GED: Grammatical error detection from the cLang-8 dataset. Each query asks whether a sentence contains a grammatical error.

### Content Classification

- IMDB: Sentiment classification using IMDB reviews. The model predicts whether a review is “positive” or “negative”.
- Amazon: Review rating classification (1–5 stars) using Amazon reviews.
- Agnews: Topic classification into four news categories: World, Sports, Business, Sci/Tech.

### Natural Language Inference

- MNLI: Multi-genre natural language inference. Given a premise and a hypothesis, predict whether the relation is entailment, contradiction, or neutral.
- PAWS: Paraphrase identification. Given two similar sentences, determine whether they are paraphrases (yes/no).
- SWAG: Commonsense inference. Choose the most plausible continuation from four candidate endings for a given context.

### Factuality

- FEVER: Fact verification. Classify claims into “SUPPORTED”, “REFUTED”, or “NOT ENOUGH INFO”.
- MyriadLAMA: Factual knowledge probing across diverse relation types. Predict the correct object of a subject-relation pair.
- CSQA: Commonsense QA (CommonsenseQA). Answer multiple-choice questions requiring general commonsense.
- TempLAMA: Temporal knowledge probing. Given a temporal relation (e.g., “born in”), predict the correct year or time entity.

### Self-Reflection

- HaluEval: Hallucination detection. Given a generated sentence, determine whether it contains hallucinated content.
- Toxic: Toxic comment classification. Binary task to predict whether a comment is toxic.
- Stereoset: Stereotype detection. Determine whether a given sentence reflects a stereotype, an anti-stereotype, or an unrelated association.

### Multilinguality

- LTI: Language identification from a multilingual set of short text snippets.
- M-POS: Multilingual POS tagging using Universal Dependencies in different languages.
- M-Amazon: Sentiment classification in different languages using multilingual Amazon reviews.
- mLAMA: Multilingual factual knowledge probing, using the mLAMA dataset.
- XNLI: Cross-lingual natural language inference across multiple languages, adapted to multiple-choice format.

## 📄 Format

Each example includes:

- question: A textual input or instruction.
- choices: A list of answer candidates.
- answer: The correct choice (as a single token).
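As a minimal sketch, a record following this schema looks like the dictionary below (the field names come from the list above; the example values are hypothetical, not taken from the dataset):

```python
# Hypothetical record illustrating the schema; actual questions,
# choices, and answers differ per task.
example = {
    "question": "The movie was a delight from start to finish. Sentiment?",
    "choices": ["positive", "negative"],
    "answer": "positive",
}

# The gold answer is always one of the listed choices and a single
# token, which keeps neuron attribution comparable across tasks.
assert example["answer"] in example["choices"]
```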

## 📦 Usage

The dataset is available on HuggingFace:

👉 https://huggingface.co/datasets/iszhaoxin/MCEval8K

```python
from datasets import load_dataset

# Each task is a separate config; pass its name when loading,
# e.g. "agnews", "mnli", or "fever".
dataset = load_dataset("iszhaoxin/MCEval8K", "agnews")
```

## 🧠 Purpose

MCEval8K is specifically built to support:

- Neuron-level probing: Evaluating neuron contributions using techniques like NeurGrad.
- Controlled evaluation: Avoiding confounds such as tokenization bias and label imbalance.

## 📜 Citation

```bibtex
@inproceedings{zhao2025neuron,
  title     = {Neuron Empirical Gradient: Discovering and Quantifying Neurons’ Global Linear Controllability},
  author    = {Xin Zhao and Zehui Jiang and Naoki Yoshinaga},
  booktitle = {Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (ACL)},
  year      = {2025}
}
```