---
dataset_info:
- config_name: entertainment
  features:
  - name: entity
    dtype: string
  - name: question
    dtype: string
  - name: choices
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  - name: answer
    dtype: string
  - name: metadata
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  splits:
  - name: upper_shadow
    num_bytes: 990469
    num_examples: 1164
  - name: lower_shadow
    num_bytes: 723122
    num_examples: 1154
  - name: upper_shadow_controlled
    num_bytes: 146264
    num_examples: 172
  - name: lower_shadow_controlled
    num_bytes: 109891
    num_examples: 172
  - name: upper_direct
    num_bytes: 993682
    num_examples: 1164
  - name: lower_direct
    num_bytes: 727278
    num_examples: 1154
  download_size: 686647
  dataset_size: 3690706
- config_name: sports
  features:
  - name: entity
    dtype: string
  - name: question
    dtype: string
  - name: choices
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  - name: answer
    dtype: string
  - name: metadata
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  splits:
  - name: upper_shadow
    num_bytes: 851770
    num_examples: 1030
  - name: lower_shadow
    num_bytes: 718409
    num_examples: 1012
  - name: upper_shadow_controlled
    num_bytes: 85285
    num_examples: 104
  - name: lower_shadow_controlled
    num_bytes: 67686
    num_examples: 104
  - name: upper_direct
    num_bytes: 855263
    num_examples: 1030
  - name: lower_direct
    num_bytes: 723358
    num_examples: 1012
  download_size: 525434
  dataset_size: 3301771
- config_name: technology
  features:
  - name: entity
    dtype: string
  - name: question
    dtype: string
  - name: choices
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  - name: answer
    dtype: string
  - name: metadata
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
  splits:
  - name: upper_shadow
    num_bytes: 1063796
    num_examples: 1370
  - name: lower_shadow
    num_bytes: 999160
    num_examples: 1344
  - name: upper_shadow_controlled
    num_bytes: 215759
    num_examples: 282
  - name: lower_shadow_controlled
    num_bytes: 201445
    num_examples: 282
  - name: upper_direct
    num_bytes: 1063797
    num_examples: 1370
  - name: lower_direct
    num_bytes: 1004136
    num_examples: 1344
  download_size: 1065847
  dataset_size: 4548093
configs:
- config_name: entertainment
  data_files:
  - split: upper_shadow
    path: entertainment/upper_shadow-*
  - split: lower_shadow
    path: entertainment/lower_shadow-*
  - split: upper_shadow_controlled
    path: entertainment/upper_shadow_controlled-*
  - split: lower_shadow_controlled
    path: entertainment/lower_shadow_controlled-*
  - split: upper_direct
    path: entertainment/upper_direct-*
  - split: lower_direct
    path: entertainment/lower_direct-*
- config_name: sports
  data_files:
  - split: upper_shadow
    path: sports/upper_shadow-*
  - split: lower_shadow
    path: sports/lower_shadow-*
  - split: upper_shadow_controlled
    path: sports/upper_shadow_controlled-*
  - split: lower_shadow_controlled
    path: sports/lower_shadow_controlled-*
  - split: upper_direct
    path: sports/upper_direct-*
  - split: lower_direct
    path: sports/lower_direct-*
- config_name: technology
  data_files:
  - split: upper_shadow
    path: technology/upper_shadow-*
  - split: lower_shadow
    path: technology/lower_shadow-*
  - split: upper_shadow_controlled
    path: technology/upper_shadow_controlled-*
  - split: lower_shadow_controlled
    path: technology/lower_shadow_controlled-*
  - split: upper_direct
    path: technology/upper_direct-*
  - split: lower_direct
    path: technology/lower_direct-*
license: cc-by-sa-4.0
task_categories:
- question-answering
- multiple-choice
language:
- en
tags:
- knowledge-probing
- llm-evaluation
- entity-resolution
- machine-unlearning
size_categories:
- 10K<n<100K
---
# ShadowBench: A Hardened Benchmark for Latent Entity Association
ShadowBench is a diagnostic framework designed to evaluate the "Shadow Knowledge" of Large Language Models (LLMs). While traditional benchmarks measure factual recall using explicit entity names (e.g., "Elon Musk"), ShadowBench evaluates whether a model can navigate its internal knowledge graph when these lexical anchors are removed.
## Dataset Summary
The core task in ShadowBench is Dual-Trait Association (DTA). A model is presented with an anonymized shadow description (Trait A) and must associate it with a second, independent fact (Trait B) among three "Hard Negative" distractors.
Success requires the model to use the hidden entity as a semantic bridge:

> Trait A (Shadow) → [Latent Entity] → Trait B (Target Choice)
## Key Features
- **Adversarially Hardened**: Unlike standard MCQs, ShadowBench (v3) is filtered to prevent "shortcut learning" via gendered pronouns, chronological era-matching, or category leaks.
- **Scale Robust**: Evaluated on models ranging from 8B parameters (Llama-3, Qwen3) to frontier scales (GPT-5.4-mini, GPT-5.4, and Claude-Sonnet-4.6).
- **Multi-Domain**: Covers Technology, Sports (Tennis), and Entertainment (Actors).
- **Stratified**: Includes "Upper Tier" (Head) and "Lower Tier" (Tail) entities based on Wikipedia popularity metrics to evaluate "Popularity Bias."
## Dataset Structure
### Subsets
The dataset is divided into three primary domains:
- `technology`: Corporate, product, and leadership-based associations.
- `sports`: Numerical achievements and career milestones in professional tennis.
- `entertainment`: Narrative roles and filmographic associations.
### Splits
Each subset contains the following splits:
- `upper_shadow` / `lower_shadow`: The primary anonymized DTA task.
- `upper_direct` / `lower_direct`: A control split in which explicit names are restored to establish a factual "ceiling" (Direct QA).
- `upper_shadow_controlled` / `lower_shadow_controlled`: A 1:1 entity-matched subset used for sensitivity analysis.
### Data Schema
Each sample contains:
- `entity`: The hidden entity name.
- `question`: The shadow description (Trait A).
- `choices`: A dictionary (`A`, `B`, `C`, `D`) containing Trait B and three hard distractors.
- `answer`: The correct option key.
- `metadata`: A mapping dictionary where each key (`A`, `B`, `C`, `D`) corresponds to the actual entity represented by that answer choice.
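A record following this schema might look like the sketch below. The entity, facts, and choices are invented for illustration and do not appear in the dataset:

```python
# Illustrative sample matching the ShadowBench schema.
# All names and facts below are invented for demonstration only.
sample = {
    "entity": "Example Founder",
    "question": "This executive co-founded a database startup in the late 1990s ...",
    "choices": {
        "A": "Launched a consumer photo-sharing app",
        "B": "Led the acquisition of a cloud-security firm",
        "C": "Patented a wireless charging standard",
        "D": "Chaired an open-source foundation",
    },
    "answer": "B",
    "metadata": {
        "A": "Distractor Entity 1",
        "B": "Example Founder",  # the gold choice maps back to the hidden entity
        "C": "Distractor Entity 2",
        "D": "Distractor Entity 3",
    },
}

# The metadata mapping lets you verify that the gold choice
# belongs to the hidden entity itself:
assert sample["metadata"][sample["answer"]] == sample["entity"]
```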
## Construction & Hardening (v1 to v3)
ShadowBench was developed through an iterative process to ensure success is strictly contingent on latent semantic reasoning:
- v1: Lexical Anonymization (Names removed).
- v2: Chronological & Syntactic Hardening (Pronouns neutralized + Generational Proximity Filter added).
- v3: Demographic Homogeneity (Gender-matched distractors added to prevent elimination via lexical cues like "WTA" or "Best Actress").
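The v3 homogeneity check can be sketched as a simple predicate over per-choice demographic attributes. The actual filtering pipeline is not described here, so the function and field names below are assumptions for illustration:

```python
def is_demographically_homogeneous(choice_attrs: dict) -> bool:
    """Return True if all answer choices share the same demographic
    attribute (e.g. gender), so no option can be eliminated via lexical
    cues such as "WTA" or "Best Actress". Attribute codes are illustrative.
    """
    return len(set(choice_attrs.values())) == 1

# A gender-matched item passes; a mixed item would be filtered out.
assert is_demographically_homogeneous({"A": "F", "B": "F", "C": "F", "D": "F"})
assert not is_demographically_homogeneous({"A": "F", "B": "M", "C": "F", "D": "F"})
```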
## Usage
You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the Technology Shadow split
dataset = load_dataset("shadow-bench/ShadowBench", "technology", split="upper_shadow")

# Inspect a sample
print(dataset[0])
```
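Split accuracy can then be computed by comparing a model's predicted option key against the `answer` field. A minimal, self-contained sketch with mock samples and a stand-in predictor (the prediction function is an assumption, not part of the dataset):

```python
def accuracy(samples, predict):
    """Fraction of samples where the predicted option key matches `answer`.

    `predict` is any callable mapping (question, choices) -> an option key
    such as "A".."D"; here it stands in for a real model.
    """
    correct = sum(
        predict(s["question"], s["choices"]) == s["answer"] for s in samples
    )
    return correct / len(samples)

# Mock data and a trivial predictor for demonstration.
mock = [
    {"question": "q1", "choices": {"A": "x", "B": "y"}, "answer": "A"},
    {"question": "q2", "choices": {"A": "x", "B": "y"}, "answer": "B"},
]
always_a = lambda question, choices: "A"
print(accuracy(mock, always_a))  # 0.5
```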
## Licensing
This dataset is derived from Wikipedia and is licensed under CC BY-SA 4.0.
## Citation
If you use this dataset in your research, please cite our paper: [TBD]