arxiv:2603.12935

Can Fairness Be Prompted? Prompt-Based Debiasing Strategies in High-Stakes Recommendations

Published on Mar 13 · Submitted by TVR on Mar 16
Authors:

Abstract

AI-generated summary: Three bias-aware prompting strategies are investigated for large language model recommenders to improve group fairness while maintaining effectiveness.

Large Language Models (LLMs) can infer sensitive attributes such as gender or age from indirect cues like names and pronouns, potentially biasing recommendations. While several debiasing methods exist, they require access to the LLMs' weights, are computationally costly, and cannot be used by lay users. To address this gap, we investigate implicit biases in LLM Recommenders (LLMRecs) and explore whether prompt-based strategies can serve as a lightweight and easy-to-use debiasing approach. We contribute three bias-aware prompting strategies for LLMRecs. To our knowledge, this is the first study on prompt-based debiasing approaches in LLMRecs that focuses on group fairness for users. Our experiments with 3 LLMs, 4 prompt templates, 9 sensitive attribute values, and 2 datasets show that our proposed debiasing approach, which instructs an LLM to be fair, can improve fairness by up to 74% while retaining comparable effectiveness, but might overpromote specific demographic groups in some cases.
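
As a rough sketch of what "instructing an LLM to be fair" can look like in practice, the snippet below adds a fairness instruction to the system prompt of an LLM recommender. The prompt wording, the `recommend` helper, the client library, and the model name are illustrative assumptions, not the paper's released templates or code.

```python
# Minimal sketch of prompt-based debiasing for an LLM recommender.
# The fairness instruction, helper function, client, and model name are
# illustrative assumptions, not the paper's actual templates or code.
from openai import OpenAI

client = OpenAI()

FAIRNESS_INSTRUCTION = (
    "Do not infer or rely on sensitive attributes such as gender or age "
    "from names, pronouns, or other indirect cues in the user's history. "
    "Recommend items fairly across demographic groups."
)

def recommend(user_history: list[str], k: int = 5, bias_aware: bool = True) -> str:
    """Ask an LLM for top-k recommendations, optionally with a fairness instruction."""
    system = "You are a news recommender."
    if bias_aware:
        system += " " + FAIRNESS_INSTRUCTION
    user = (
        "The user previously read these articles:\n"
        + "\n".join(f"- {title}" for title in user_history)
        + f"\nRecommend {k} articles to read next, as a numbered list of titles."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper evaluates three different LLMs
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content
```

Comparing the outputs with `bias_aware=True` and `bias_aware=False` for histories that differ only in gendered cues gives a rough sense of how much such an instruction changes the recommendations.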

Community

Paper author · Paper submitter

Teaser figure: Our contributions with actual examples from our news recommendation experiments. More similar responses from the neutral and sensitive prompt variants mean less biased recommendations. We find that, in some cases, bias-aware prompts can give over-adjusted responses based on implicit sensitive attributes (e.g., pronoun-inferred gender).
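
As a hedged illustration of the "more similar responses mean less bias" reading of the figure, the sketch below compares the item lists returned for a neutral prompt and a sensitive-attribute prompt variant using Jaccard overlap. The overlap metric and the placeholder lists are assumptions for illustration, not the fairness measure or data used in the paper.

```python
# Illustrative sketch: quantify how similar two prompt variants' recommendations are.
# Jaccard overlap and the placeholder lists below are assumptions for illustration,
# not the fairness metric or data used in the paper.
def jaccard_overlap(a: list[str], b: list[str]) -> float:
    """Set overlap between two recommendation lists (1.0 = identical, 0.0 = disjoint)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

# Placeholder recommendation lists for a neutral prompt and a she/her prompt variant.
neutral_recs = ["item_a", "item_b", "item_c", "item_d", "item_e"]
she_her_recs = ["item_a", "item_b", "item_f", "item_g", "item_e"]

# Lower overlap means the recommendations shifted with the sensitive cue, i.e. more bias.
print(f"overlap = {jaccard_overlap(neutral_recs, she_her_recs):.2f}")
```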


