---
language:
  - en
license: cc-by-4.0
size_categories:
  - 10K<n<100K
task_categories:
  - text-classification
task_ids:
  - multi-class-classification
tags:
  - education
  - blooms-taxonomy
  - question-generation
  - cognitive-level
  - benchmark
pretty_name: CogBench
dataset_info:
  features:
    - name: question_id
      dtype: string
    - name: question_text
      dtype: string
    - name: question_type
      dtype: string
    - name: subject
      dtype: string
    - name: source
      dtype: string
    - name: bloom_level
      dtype: int64
    - name: bloom_name
      dtype: string
    - name: label_source
      dtype: string
    - name: confidence
      dtype: float64
  splits:
    - name: train
      num_examples: 21827
    - name: validation
      num_examples: 2636
    - name: test
      num_examples: 2636
    - name: human_annotated
      num_examples: 739
---

# CogBench: A Benchmark for Evaluating Cognitive-Level Control in LLM Question Generation

## Dataset Description

CogBench is a benchmark dataset for evaluating whether large language models can generate educational questions at specific cognitive levels according to Bloom's Taxonomy (Anderson & Krathwohl, 2001).

### Dataset Summary

- 27,099 questions across 16 academic subjects
- 6 cognitive levels (Remember, Understand, Apply, Analyze, Evaluate, Create)
- 739 human-labeled questions from peer-reviewed sources
- 26,360 silver-labeled questions assigned by the CCS classifier (82% exact accuracy)
- 4 data sources: Yahya (2012), Mohammed & Omar (2020), OpenStax QA, OpenStax MCQ

### Supported Tasks

- Bloom's Taxonomy Classification: Predict the cognitive level (1-6) of a given question
- Cognitive-Level-Controlled Question Generation: Evaluate whether an LLM can generate questions at a specified Bloom's level

### Languages

English

## Dataset Structure

### Data Fields

| Field | Type | Description |
|---|---|---|
| `question_id` | string | Unique identifier |
| `question_text` | string | The question text |
| `question_type` | string | `open_ended` or `mcq` |
| `subject` | string | Academic subject (e.g., biology, physics) |
| `source` | string | Data source identifier |
| `bloom_level` | int | Bloom's Taxonomy level (1-6) |
| `bloom_name` | string | Level name (Remember, Understand, Apply, Analyze, Evaluate, Create) |
| `label_source` | string | How the label was assigned (`original_author`, `taxonomy_mapping`, or `ccs_phase1_silver`) |
| `confidence` | float | CCS model confidence score (for silver labels) |
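
To make the schema concrete, here is a sketch of the level mapping and one record. The field names and the six level names come from the card; the record's values are invented for illustration:

```python
# Bloom's Taxonomy levels used by the `bloom_level` / `bloom_name` fields.
BLOOM_LEVELS = {
    1: "Remember",
    2: "Understand",
    3: "Apply",
    4: "Analyze",
    5: "Evaluate",
    6: "Create",
}

# A hypothetical record following the schema above (values are illustrative).
example = {
    "question_id": "openstax_qa_000123",  # invented id
    "question_text": "Explain how natural selection drives adaptation.",
    "question_type": "open_ended",
    "subject": "biology",
    "source": "openstax_qa",
    "bloom_level": 2,
    "bloom_name": "Understand",
    "label_source": "ccs_phase1_silver",
    "confidence": 0.91,
}

# The name field is redundant with the level id, which makes sanity checks easy.
assert example["bloom_name"] == BLOOM_LEVELS[example["bloom_level"]]
```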

### Data Splits

| Split | N | Description |
|---|---|---|
| train | 21,827 | Silver-labeled + human-labeled (for training classifiers) |
| validation | 2,636 | Silver-labeled (for tuning) |
| test | 2,636 | Silver-labeled (for evaluation) |
| human_annotated | 739 | Human-labeled from peer-reviewed sources (gold standard) |
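
The splits can be loaded with the Hugging Face `datasets` library (shown in a comment, since it requires network access); the runnable part below simply records the split sizes from the table and checks that they cover the full corpus:

```python
# Loading a split from the Hub (requires `pip install datasets` and network access):
#   from datasets import load_dataset
#   gold = load_dataset("mouryat9/CogBench", split="human_annotated")

# Split sizes exactly as reported in the dataset card.
SPLIT_SIZES = {
    "train": 21_827,
    "validation": 2_636,
    "test": 2_636,
    "human_annotated": 739,  # gold-standard subset
}

# train + validation + test together cover the full 27,099-question corpus.
total = SPLIT_SIZES["train"] + SPLIT_SIZES["validation"] + SPLIT_SIZES["test"]
print(total)  # 27099
```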

### Subject Distribution

| Subject | Count | Subject | Count |
|---|---|---|---|
| Physics | 5,294 | Economics | 1,257 |
| Mathematics | 4,665 | Computer Science | 1,001 |
| Biology | 3,421 | Political Science | 824 |
| Business | 2,653 | Astronomy | 781 |
| Chemistry | 2,326 | History | 749 |
| Psychology | 1,665 | Sociology | 736 |
| Nursing | 610 | Philosophy | 228 |
| Anthropology | 150 | General | 739 |

## Dataset Creation

### Source Data

1. Yahya (2012): 600 questions manually classified by education researchers. Published in PLOS ONE.
2. Mohammed & Omar (2020): 141 questions with expert Bloom's classifications. Published in PLOS ONE.
3. OpenStax QA: ~9,900 question-answer pairs from open-source textbooks via HuggingFace.
4. OpenStax MCQ: ~16,400 multiple-choice questions scraped from OpenStax interactive content.

### Annotations

- Human labels (739 questions): Original Bloom's classifications from peer-reviewed publications
- Silver labels (26,360 questions): Assigned by CCS (Cognitive Classification Score), a fine-tuned BERT-base-uncased model achieving 82.0% exact accuracy and 89.6% adjacent accuracy on 6-level Bloom's classification
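
Because silver labels carry a `confidence` score, a common pattern is to keep gold labels unconditionally and threshold the silver ones. A minimal sketch; the helper name and the 0.9 default threshold are illustrative choices, not part of the dataset:

```python
def filter_silver(records, min_confidence=0.9):
    """Keep gold-labeled records and silver records above a confidence threshold.

    `records` are dicts with the `label_source` and `confidence` fields from
    the dataset schema; the 0.9 default is an illustrative choice.
    """
    kept = []
    for r in records:
        if r["label_source"] != "ccs_phase1_silver":
            kept.append(r)  # human/gold labels pass through unconditionally
        elif r["confidence"] >= min_confidence:
            kept.append(r)  # high-confidence silver labels
    return kept

# Toy records exercising both label sources.
records = [
    {"label_source": "original_author", "confidence": 1.0},
    {"label_source": "ccs_phase1_silver", "confidence": 0.95},
    {"label_source": "ccs_phase1_silver", "confidence": 0.55},
]
print(len(filter_silver(records)))  # 2
```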

### CCS Model Validation

| Method | Exact Accuracy | Adjacent (±1) | Macro F1 |
|---|---|---|---|
| Random | 16.8% | 44.5% | 0.168 |
| Verb Heuristic | 60.6% | 74.4% | 0.613 |
| TF-IDF + SVM | 77.7% | 85.3% | 0.776 |
| LLM Panel (4-vote) | 74.2% | 84.3% | 0.744 |
| CCS (BERT) | 82.0% | 89.6% | 0.819 |
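
For reference, the two accuracy metrics in the table are straightforward to implement; a minimal sketch with invented labels on the 1-6 Bloom scale:

```python
def exact_accuracy(y_true, y_pred):
    """Fraction of predictions matching the gold Bloom's level exactly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def adjacent_accuracy(y_true, y_pred):
    """Fraction of predictions within one level of the gold label (±1)."""
    return sum(abs(t - p) <= 1 for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative gold labels and predictions (not from the dataset).
y_true = [1, 2, 3, 4, 5, 6]
y_pred = [1, 3, 3, 6, 5, 5]

print(exact_accuracy(y_true, y_pred))     # 0.5  (3 of 6 exact matches)
print(adjacent_accuracy(y_true, y_pred))  # 5/6 ≈ 0.833 (only 4 vs 6 is off by 2)
```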

## Considerations for Using the Data

### Ethical Considerations

- This dataset is intended for research in educational AI and benchmark evaluation
- Silver labels have ~82% accuracy; use with appropriate uncertainty quantification
- Questions are sourced from educational materials and do not contain sensitive content

### Limitations

- Silver labels may contain systematic biases from the CCS model
- Subject distribution is uneven (STEM-heavy due to OpenStax sources)
- Bloom's Taxonomy classification inherently involves subjectivity

## Citation

If you use CogBench in your research, please cite:

```bibtex
@misc{kunuku2026cogbench,
  title={CogBench: A Benchmark for Evaluating Cognitive-Level Control in LLM Question Generation},
  author={Kunuku, Mourya Teja},
  year={2026},
  url={https://huggingface.co/datasets/mouryat9/CogBench}
}
```

## License

CC-BY-4.0