---
license: apache-2.0
pipeline_tag: text-classification
library_name: transformers
---
# Robust Reward Model for LLM-as-a-Judge
This repository contains a robust, general-domain generative reward model presented in the paper One Token to Fool LLM-as-a-Judge.
- Paper: [One Token to Fool LLM-as-a-Judge](https://arxiv.org/abs/2507.08794)
- Code: https://github.com/microsoft/RewardEval
- Synthetic Training Data: https://huggingface.co/datasets/reward-eval/synthetic-judgements
## Model Description
Generative reward models (also known as LLMs-as-judges), which use large language models (LLMs) to evaluate answer quality, are increasingly adopted in reinforcement learning with verifiable rewards (RLVR). They are often preferred over rigid rule-based metrics, especially for complex reasoning tasks involving free-form outputs. Despite the seeming simplicity of this comparison task, existing generative reward models exhibit surprising vulnerabilities to superficial manipulations: non-word symbols (e.g., ":" or ".") or reasoning openers like "Thought process:" and "Let's solve this problem step by step." can often lead to false positive rewards.
This weakness is widespread across LLMs, datasets, and prompt formats, and it poses a serious threat to core algorithmic paradigms that rely on generative reward models, such as rejection sampling, preference optimization, and RLVR. To mitigate it, the paper introduces a simple yet effective data augmentation strategy and trains this generative reward model with substantially improved robustness, highlighting the urgent need for more reliable LLM-based evaluation methods.
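The augmentation idea can be sketched as follows. This is a minimal, hypothetical illustration, not the exact training recipe from the paper: pair a question with a "hollow" candidate that contains only a reasoning opener or a non-word symbol, label it as incorrect, and mix such negatives into the judge's training data so it learns not to reward superficial patterns.

```python
import random

# Superficial patterns the paper reports as false-positive triggers
REASONING_OPENERS = ["Thought process:", "Let's solve this problem step by step.", "Solution:"]
NONWORD_SYMBOLS = [":", "."]

def make_hollow_negatives(question: str, num_samples: int = 2, seed: int = 0):
    """Create adversarial negative examples: candidate 'answers' consisting
    only of a reasoning opener or a lone symbol, labeled incorrect (0).
    Illustrative sketch only; the paper's exact recipe may differ."""
    rng = random.Random(seed)
    negatives = []
    for _ in range(num_samples):
        trigger = rng.choice(REASONING_OPENERS + NONWORD_SYMBOLS)
        negatives.append({"question": question, "candidate": trigger, "label": 0})
    return negatives

samples = make_hollow_negatives("What is the capital of France?")
for s in samples:
    print(s)
```

Training on such negatives alongside ordinary correct/incorrect pairs teaches the judge that a reasoning opener alone is not evidence of a correct answer.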
## How to use

You can use this model with the `transformers` library to evaluate answers. The model expects a prompt that includes both the ground-truth reference and the candidate answer for comparison, formatted according to its chat template.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "recce-ai/robust-llm-as-a-judge-qwen-7b"  # Replace with the actual model ID if different
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a comparison prompt: system message, then user message (reference and candidate)
system_message = (
    "You are a helpful and fair judge. Evaluate the candidate answer against the "
    "reference answer and provide a score of 1 (correct) or 0 (incorrect)."
)
reference_answer = "The capital of France is Paris."
candidate_answer = "Paris is the capital of France."
user_message = f"Reference: {reference_answer}\nCandidate: {candidate_answer}\nScore:"

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_message},
]

# Apply the chat template defined in tokenizer_config.json
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# Generate the score; greedy decoding (do_sample=False) gives deterministic output
output_ids = model.generate(
    input_ids,
    max_new_tokens=5,  # The score is only a few tokens (e.g., '1', '0', 'Yes', 'No')
    num_beams=1,
    do_sample=False,
)
generated_text = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(f"Generated Score: {generated_text}")

# Example with a superficial trick that can fool other LLM judges (per the paper)
candidate_answer_tricked = (
    "Thought process: The capital is a city. Paris is a city. "
    "Therefore, Paris is the capital of France."
)
user_message_tricked = f"Reference: {reference_answer}\nCandidate: {candidate_answer_tricked}\nScore:"
messages_tricked = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_message_tricked},
]
prompt_tricked = tokenizer.apply_chat_template(messages_tricked, tokenize=False, add_generation_prompt=True)
input_ids_tricked = tokenizer(prompt_tricked, return_tensors="pt").input_ids.to(model.device)
output_ids_tricked = model.generate(
    input_ids_tricked,
    max_new_tokens=5,
    num_beams=1,
    do_sample=False,
)
generated_text_tricked = tokenizer.decode(
    output_ids_tricked[0][input_ids_tricked.shape[1]:], skip_special_tokens=True
).strip()
print(f"Generated Score (tricked): {generated_text_tricked}")
```
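Because the judgment comes back as free-form text, you may want a small helper that maps the generated string to a binary reward before using it in an RLVR or rejection-sampling loop. The token set below ('1'/'0', 'Yes'/'No') is an assumption; adjust it to whatever this model actually emits.

```python
def parse_judge_score(text: str) -> int:
    """Map the judge's generated text to a binary reward (1 or 0).
    The accepted token spellings are assumptions, not a documented contract
    of this model; verify against its actual outputs."""
    stripped = text.strip()
    if not stripped:
        return 0
    token = stripped.split()[0].strip(".,:").lower()
    if token in {"1", "yes", "correct", "true"}:
        return 1
    return 0  # '0', 'No', or anything unparseable defaults to incorrect

print(parse_judge_score("1"))     # 1
print(parse_judge_score("Yes."))  # 1
print(parse_judge_score("0"))     # 0
```

Defaulting unparseable outputs to 0 is a deliberately conservative choice: in a reward loop, a false positive is usually more harmful than a false negative.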
## Citation
If you use this model, please cite:
```bibtex
@article{wu2025one,
  title={One Token to Fool LLM-as-a-Judge},
  author={Wu, Zhenyu and Sun, Qiushi and Zhang, Yiran and Wang, Yian and Li, Erran and Liang, Paul Pu},
  journal={arXiv preprint arXiv:2507.08794},
  year={2025}
}
```