| cdx | question | correct | perturbed |
|---|---|---|---|
| custom-01_contextual-errors | What is the primary function of the Internet Protocol (IP)? | The skin is the largest organ of the human body, covering an area of approximately 1.5–2 square meters ( square feet) and accounting for about of total body weight. It acts as a protective, sensory, and regulatory barrier for the body's internal systems. | The skin is the largest organ of the human body, covering an area of approximately 1.5–2 square meters ( square feet) and accounting for about of total body weight. It acts as a protective, sensory, and regulatory barrier for the body's external systems. |
| custom-02_unit_substitution | At what temperature does water boil at sea level? | Water boils at 100 degrees Celsius (212 degrees Fahrenheit) at sea level. This is the point at which vapor pressure equals atmospheric pressure. At higher altitudes, water boils at lower temperatures due to reduced atmospheric pressure. | Water boils at 100 degrees Fahrenheit (212 degrees Celsius) at sea level. This is the point at which vapor pressure equals atmospheric pressure. At higher altitudes, water boils at lower temperatures due to reduced atmospheric pressure. |
| custom-03_subject_swap | Who developed the theory of general relativity? | The theory of general relativity was developed by Albert Einstein and published in 1915. It describes gravity as a curvature of spacetime caused by mass and energy. This theory revolutionized our understanding of space, time, and gravity, replacing Newton's earlier model of gravitational force. | The theory of general relativity was developed by Isaac Newton and published in 1915. It describes gravity as a curvature of spacetime caused by mass and energy. This theory revolutionized our understanding of space, time, and gravity, replacing Newton's earlier model of gravitational force. |
| custom-04_tone | What is the definition of gravity? | Gravity pulls things toward Earth, like how a ball falls down | Gravity is a fundamental force described by Einstein's general relativity involving spacetime curvature |
| custom-05_language | Answer in French | La capitale de l'Australie est Canberra | The capital of Australia is Canberra |
| custom-06_hidden_assumption | A store sells apples for $2 each. If John buys 5 apples and has a $3 coupon, how much does he pay? | 5 × $2 = $10. After applying the $3 coupon: $10 - $3 = $7. | 5 × $2 = $10. Coupons are typically applied before tax, so assuming 10% tax: ($10 × 1.10) - $3 = $8. |
| custom-07_correct_steps_wrong_conclusion | A rectangle has a width of 8cm and a height of 5cm. What is its area? | 8 × 5 = 40 cm². | 8 × 5 = 40. Adding the dimensions for the perimeter: 40 + 26 = 66 cm². |
| custom-08_persona_violation | Explain how the internet works. Use a simple, fun tone for a 7-year-old. | Imagine the internet is like a giant magical post office! When you want to watch a video, your computer sends a tiny letter asking for it, and whoosh – the video comes flying back to your screen! | The internet operates via TCP/IP protocols, where data is broken into packets, routed through servers, and reassembled at the destination using HTTP or HTTPS communication layers. |
| custom-09_constraint_creep | List 3 benefits of meditation. Be concise – one sentence per benefit. | 1. Meditation reduces stress by calming the nervous system.<br>2. It improves focus and concentration over time.<br>3. Regular practice promotes better sleep quality. | 1. Meditation reduces stress by calming the nervous system.<br>2. It improves focus and concentration over time.<br>3. Regular practice promotes better sleep quality. This is particularly important in today's fast-paced world, where many people struggle with insomnia due to overstimulation from screens and work-related anxiety, making meditation a powerful and accessible tool for improving overall wellbeing. |
| custom-10_summary_sentiment_flip | Summarize this: "The new policy has faced significant criticism from experts, with many calling it rushed and poorly planned. Only a small number of stakeholders expressed cautious support." | The new policy has been widely criticized by experts as rushed and poorly planned, with only limited support from a few stakeholders. | The new policy has received mixed reactions, with some experts raising concerns while many stakeholders expressed support for its direction. |
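Each perturbed response differs from its correct counterpart by a small, targeted edit. A word-level diff makes the injected perturbation easy to surface; here is a minimal sketch using Python's `difflib` (the example sentences are abbreviated from custom-03):

```python
import difflib

def word_diff(correct: str, perturbed: str):
    """Return (removed, added) word spans distinguishing the two responses."""
    a, b = correct.split(), perturbed.split()
    removed, added = [], []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag in ("replace", "delete"):
            removed.append(" ".join(a[i1:i2]))
        if tag in ("replace", "insert"):
            added.append(" ".join(b[j1:j2]))
    return removed, added

# custom-03: the only edit is the subject swap
removed, added = word_diff(
    "The theory was developed by Albert Einstein and published in 1915.",
    "The theory was developed by Isaac Newton and published in 1915.",
)
print(removed, added)  # ['Albert Einstein'] ['Isaac Newton']
```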
# Custom Blind Spot Evaluation Dataset

This dataset was created as an extension of the FBI (Finding Blindspots in Evaluator LLMs with Interpretable Checklists) framework (EMNLP 2024, AI4Bharat). After evaluating Qwen on the original FBI benchmark, we identified error categories not covered by FBI and assembled this custom dataset to probe those additional blind spots.
## How This Dataset Was Built

### Step 1 – Evaluated Qwen on FBI

We first ran the original FBI benchmark against `Qwen/Qwen3.5-0.8B` as the evaluator LLM to identify where it failed.
**Model Loading**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "Qwen/Qwen3.5-0.8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)
```
**Evaluator**

```python
def qwen_evaluator(prompt, max_new_tokens=512):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy decoding, so temperature is irrelevant
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, not the prompt
    response = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    return response
```
**Pairwise Evaluation Function**

```python
def qwen_pairwise_eval(question, response_a, response_b):
    prompt = f"""You are an expert evaluator. Given a question and two responses,
identify which response is better quality.

Question: {question}

Response A:
{response_a}

Response B:
{response_b}

Which response is better? Reply with only 'A' or 'B', then explain why in one sentence.
Answer:"""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=100,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    response = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:],
        skip_special_tokens=True,
    )
    return response.strip()
```
### FBI Configs & Splits Evaluated

| Config | Splits Tested |
|---|---|
| factual | contextual, entity, incorrect_fact, opposite_fact, remove_fact, number_error |
| instruction-following | assumption, do_less, do_more, ignore_format, sequence_errors |
| long-form | coherence, comprehensiveness, consistency, grammar, spelling_errors, chronology |
| reasoning | calculation, copying_numbers, final_errors, incorrect_units, wrong_formula |
| score-invariant | score_invariant |
### Key Blind Spots Found

**Confirmed Blind Spots**

Overall accuracy: **53.9%**

| Config | Accuracy |
|---|---|
| factual | 0.400 |
| instruction-following | 0.560 |
| long-form | 0.575 |
| reasoning | 0.620 |
| score-invariant | 0.650 |
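The per-config numbers above are averages of the per-example verdicts within each config. A minimal sketch of that aggregation, assuming each result record carries a `config` label and a boolean `correct` flag (the field names are illustrative, not part of the FBI harness):

```python
from collections import defaultdict

def per_config_accuracy(results):
    """Aggregate pairwise-eval records into per-config accuracy."""
    totals = defaultdict(lambda: [0, 0])  # config -> [n_correct, n_total]
    for r in results:
        totals[r["config"]][0] += r["correct"]
        totals[r["config"]][1] += 1
    return {cfg: n_ok / n for cfg, (n_ok, n) in totals.items()}

# Toy records only, to show the shape of the computation
records = [
    {"config": "factual", "correct": True},
    {"config": "factual", "correct": False},
    {"config": "reasoning", "correct": True},
]
print(per_config_accuracy(records))  # {'factual': 0.5, 'reasoning': 1.0}
```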
### Step 2 – Identified Gaps in FBI Coverage

After analyzing FBI's existing splits against our custom samples, we identified the following error categories not covered by the original benchmark:
| Custom CDX | Error Type | FBI Config | Gap Identified |
|---|---|---|---|
| custom-01 | Contextual word swap (internal → external) | factual/contextual | FBI covers this – included as a replication of an FBI-style error |
| custom-02 | Unit substitution (°C ↔ °F swapped) | factual/number_error | FBI changes numbers, not units – unit swap is a new gap |
| custom-03 | Subject swap (Einstein → Newton) | factual/entity | FBI covers entity swaps – included as a replication |
| custom-04 | Tone mismatch (no audience specified) | instruction-following/assumption | FBI tests format; tone/register for an audience is a new gap |
| custom-05 | Language switch (French → English) | instruction-following/ignore_format | FBI tests formatting; an output-language switch is a new gap |
| custom-06 | Hidden assumption injection (unstated tax) | reasoning/wrong_formula | FBI tests wrong formulas; injecting unstated assumptions is a new gap |
| custom-07 | Correct steps, wrong conclusion | reasoning/final_errors | FBI tests final errors; mixing area/perimeter operations is a new gap |
| custom-08 | Persona violation (child-friendly ignored) | instruction-following/assumption | FBI tests content assumptions; persona/register violation is a new gap |
| custom-09 | Constraint creep (conciseness violated late) | instruction-following/do_more | FBI tests uniform violations; late-response constraint drift is a new gap |
| custom-10 | Sentiment flip in summary | (not covered) | Summarization faithfulness is entirely absent from FBI |
### Step 3 – Built This Custom Dataset

We manually authored 10 high-quality (question, correct, perturbed) triples to cover both replications of FBI patterns and the new gaps identified above.
**Custom Evaluation**

```python
import random

custom_results = []
split_data = my_dataset

for example in split_data:
    # Randomize A/B order so the evaluator cannot exploit position bias
    flip = random.random() > 0.5
    response_a = example["perturbed"] if flip else example["correct"]
    response_b = example["correct"] if flip else example["perturbed"]
    # The slot holding the correct response: B when flipped, A otherwise
    correct_answer = "B" if flip else "A"

    qwen_output = qwen_pairwise_eval(example["question"], response_a, response_b)
    qwen_choice = "A" if qwen_output.upper().startswith("A") else "B"

    custom_results.append({
        "cdx": example["cdx"],
        "qwen_choice": qwen_choice,
        "correct_answer": correct_answer,
        "correct": qwen_choice == correct_answer,
        "qwen_explanation": qwen_output,
    })
```
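One fragile spot in this loop is the verdict parsing: `startswith("A")` silently maps any malformed reply to "B". A slightly stricter parser, shown here as a sketch rather than part of the original harness, returns `None` for ambiguous outputs so they can be counted separately:

```python
import re

def parse_choice(output: str):
    """Extract an 'A'/'B' verdict from the evaluator's free-form reply.

    Returns None when no standalone A or B token is found, instead of
    silently defaulting to 'B'.
    """
    m = re.search(r"\b([AB])\b", output.strip().upper())
    return m.group(1) if m else None

print(parse_choice("A: the units are correct."))  # A
print(parse_choice("The better response is B"))   # B
print(parse_choice("Both are fine."))             # None
```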
## Dataset Structure

Each sample follows this format:

```python
{
    'cdx': str,        # unique identifier: 'custom-XX_error_type'
    'question': str,   # the original prompt given to the model
    'correct': str,    # the gold-standard correct response
    'perturbed': str   # the degraded response with a specific error introduced
}
```
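Records can be sanity-checked against this schema with a few lines of Python; note that the `cdx` pattern below is inferred from the sample ids rather than from a formal spec:

```python
import re

CDX_RE = re.compile(r"^custom-\d{2}_[a-z_-]+$")  # e.g. 'custom-02_unit_substitution'
REQUIRED = {"cdx", "question", "correct", "perturbed"}

def validate_sample(sample: dict) -> bool:
    """Check that a record has all four string fields and a well-formed cdx."""
    return (
        REQUIRED <= sample.keys()
        and all(isinstance(sample[k], str) for k in REQUIRED)
        and bool(CDX_RE.match(sample["cdx"]))
    )

ok = validate_sample({
    "cdx": "custom-02_unit_substitution",
    "question": "At what temperature does water boil at sea level?",
    "correct": "100 degrees Celsius (212 degrees Fahrenheit).",
    "perturbed": "100 degrees Fahrenheit (212 degrees Celsius).",
})
print(ok)  # True
```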
## Fine-Tuning Recommendations

**What kind of dataset should Qwen be fine-tuned on?**

To fix the observed blind spots, the model needs fine-tuning on datasets that reward careful word-level comparison, numerical precision, and faithful reasoning rather than surface-level pattern matching.

Recommended dataset types:
| Dataset Type | Fixes Which Blind Spot |
|---|---|
| Contrastive fact-checking pairs (minimal edits) | Subtle word/number swaps |
| Math word problems with step-by-step verification | Reasoning errors, hidden assumptions |
| Summarization faithfulness datasets | Hallucination, sentiment flip |
| Instruction-following with constraint checklists | Persona violation, constraint creep |
| NLI (Natural Language Inference) datasets | Negation, scope, causal reversal |
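As a concrete example of the first row (contrastive pairs with minimal edits), each triple in this dataset maps directly onto a preference-tuning record. The `prompt`/`chosen`/`rejected` key names below are a common convention for DPO-style trainers, not a requirement of any specific library:

```python
def to_preference_pair(sample: dict) -> dict:
    """Turn a (question, correct, perturbed) triple into a preference record."""
    return {
        "prompt": sample["question"],
        "chosen": sample["correct"],     # the gold response
        "rejected": sample["perturbed"], # the minimally degraded response
    }

pair = to_preference_pair({
    "cdx": "custom-03_subject_swap",
    "question": "Who developed the theory of general relativity?",
    "correct": "Albert Einstein, published in 1915.",
    "perturbed": "Isaac Newton, published in 1915.",
})
print(pair["chosen"])  # Albert Einstein, published in 1915.
```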
**Existing datasets to draw from**
| Dataset | Source | Why Useful |
|---|---|---|
| TruthfulQA | HuggingFace | Factual accuracy, subtle misinformation |
| FEVER | HuggingFace | Claim verification with minimal edits |
| SummEval | HuggingFace | Summarization faithfulness scoring |
| GSM8K | HuggingFace | Step-by-step math reasoning |
| IFEval | HuggingFace | Instruction-following with verifiable constraints |
| FollowBench | HuggingFace | Multi-constraint instruction following |
| XNLI (facebook/xnli) | HuggingFace | Multilingual NLI for language-switch errors |
**How big a dataset would you need?**
| Training Goal | Estimated Size | Rationale |
|---|---|---|
| Targeted fix (1β2 error types) | 1,000 β 5,000 pairs | Sufficient for LoRA fine-tuning on specific failure modes |
| General factual robustness | 100,000+ pairs | Full fine-tune competitive with RLHF-trained evaluators |
## Citation

```bibtex
@inproceedings{fbi2024,
  title     = {FBI: Finding Blindspots in LLM Evaluators with Interpretable Checklists},
  author    = {Deepak Nathani and others},
  booktitle = {EMNLP},
  year      = {2024}
}
```