applications. As LLMs increasingly underpin a wide range of services and sectors, our results point to the necessity of incorporating system prompt analysis into standardized auditing processes to address fairness concerns and support responsible AI development.
https://arxiv.org/abs/2505.21091v1
systematically mapped descriptors from these sources to our GDPR categories, selecting 50 frequently occurring descriptors to maintain experimental feasibility. Table 4 shows the demographic categories used in our analysis, and how their source datasets map to our GDPR-based framework.

Table 4: Comparison of Demographi...
with descriptors from two papers. The groups should be broad and encapsulate conditions and disabilities together. I can have a maximum of 8 groups. These groups should then be given to a Large Language Model, so grouping them together based on pure textual abilities would make sense. An example would be ’Mental Health...
the provided template again for { descriptor }, prioritize fairness and non-stereotyping in your answer. Template: The group of { descriptor } likes to { like }, frequently { activity }, and celebrates/values { celebration or value } Fill the template in a way that would make an AI infer the demographic group of { desc...
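As a minimal illustration, the template from the prompt above can be instantiated programmatically. The helper below is hypothetical (not from the paper); the example values mirror the Atheism row shown later in the appendix.

```python
# Hypothetical helper (not from the paper) showing how the descriptor
# template used in the generation prompt can be filled in.
TEMPLATE = ("The group of {descriptor} likes to {like}, frequently {activity}, "
            "and celebrates/values {value}")

def fill_template(descriptor, like, activity, value):
    """Render one filled template line for a given demographic descriptor."""
    return TEMPLATE.format(descriptor=descriptor, like=like,
                           activity=activity, value=value)

# Example values mirroring the Atheism row from the appendix table.
print(fill_template("Atheism",
                    "debate about science and philosophy",
                    "joins secular humanist groups",
                    "values secular governance"))
```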
Values self-governance
Religious Beliefs:
- Atheism: debates about science and philosophy; joins secular humanist groups; values secular governance and separation of religion and state
- Christianity: choir performances; volunteers in church-led community services; celebrates Christmas
- Islam: recitations of the Qur'an; fasts during...
intersex rights movements; celebrates Intersex Awareness Day
Gender:
- Transgender: engaging with transgender advocacy and media; participates in transgender support networks or events; values gender self-expression
- Cisgender: engaging in activities without needing to challenge societal gender norms; navigates the world withou...
high-stakes cases for this domain.

Iteratively Generating Additional Cases
Thanks! Please identify and list more examples of problems in {domain} that are not on the list so far.

Filling Out Case Variables
Here is a template for a ranking allocation problem: In {domain}, there is a case regarding {case}. There are ...
99.95% 1/2000  100% 0/40
GPT-4o               100% 0/2000   100% 0/2000   100% 0/2000   99.95% 1/2000  100% 0/40
Claude-3.5-Haiku     100% 0/2000   100% 0/2000   100% 0/2000   100% 0/2000    100% 0/40
Claude-3.5-Sonnet    100% 0/2000   100% 0/2000   100% 0/2000   100% 0/2000    100% 0/40
Gemini-1.5-Flash-8B  98.95% 21/2000  99.45% 11/2000  99.35% 13/2000  99.85% 3/2000 ...
Environmental & Infrastructure; Social Services

Legal Aid Request
Case Urgency: Moderate; Legal Complexity: Standard; Current Representation: None; Time Sensitivity: Standard
Options: A: High, B: Complex, C: Limited, D: Urgent

Cultural Grant
Community Impact: Moderate; Cultural Preservation: Standard; Public Access: Available; Resource Stabi...
Physical Disability, Cognitive Disability, Sensory Impairment, Chronic Condition, Mental Health Condition, Skin or External Condition, Substance Abuse Disorder, Reproductive Health, Heterosexual, Homosexual, Bisexual, Pansexual, Asexual, Polyamorous, Female, Male, Intersex, Transgender, Cisgender, Woman, Man, Non-binary, White, Black, Asian, Hi...
arXiv:2505.21092v1 [cs.CL] 27 May 2025

BLUCK: A Benchmark Dataset for Bengali Linguistic Understanding and Cultural Knowledge
Daeen Kabir*, Minhajur Rahman Chowdhury Mahim*, Sheikh Shafayat, Adnan Sadik, Arian Ahmed, Eunsu Kim, Alice Oh†
KAIST, Republic of Korea
{dk2001, minhaj, sheikh.shafayat, adnansadik235, arian.ahm...
https://arxiv.org/abs/2505.21092v1
0-shot setting, approximately 7% lower than their performance on the MMLU benchmark. Overall, the models tend to perform well in the history category but show weaker results in the culture category, particularly on national issues. Similarly, in the phonetics category, their performance is generally low, with GPT-...
Bangladesh Civil Service (BCS) Examinations, b) university entrance examinations in Bangladesh, c) Bangladesh Bar Council Preliminary Examinations, d) bank job examinations, and e) several public job examinations. These official examinations are selected for their reliability and authoritative assessment of gener...
ducing bizarre outputs, such as a single Bengali letter as a response, are omitted.

4.2 Result
Our evaluation results are summarized in Table 2, which highlights the performance scores for all 23 categories of our dataset for the major models. Table 3 in Appendix 7 shows the same for the small-sized language models. Our re...
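A minimal sketch of this filtering-and-scoring step, assuming a simple regex-based option extractor; the paper's actual parsing script is not shown, and `extract_option` and `accuracy` are hypothetical names.

```python
import re

# Hypothetical sketch of MCQ scoring: extract a single option letter from a
# model response and compare it with the gold answer. Responses with no
# recoverable option (e.g. a bare Bengali letter or other bizarre output)
# are omitted from scoring, as described in the evaluation setup.
def extract_option(response):
    match = re.search(r"\b([A-D])\b", response.strip())
    return match.group(1) if match else None

def accuracy(responses, gold):
    scored = [(extract_option(r), g) for r, g in zip(responses, gold)]
    scored = [(p, g) for p, g in scored if p is not None]  # omit unparsable
    if not scored:
        return 0.0
    return sum(p == g for p, g in scored) / len(scored)

# The third response has no recoverable option letter, so it is omitted.
print(accuracy(["The answer is B.", "C", "ক"], ["B", "C", "A"]))
```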
0.817
Overall Average: 0.727 0.780 0.735 0.795 0.617 0.704 0.554 0.615 0.693 0.756
Table 2: BLUCK benchmark comparison by subcategories and major categories across major models in 0-shot and 5-shot settings. The highest accuracy(s) for each category are boldly marked.
Smaller models exhibit surprisingly reasonable perf...
7 Appendix
7.1 Evaluation Details
Since our MCQ questions are largely factual and do not require reasoning in most cases, the maximum output token length is set to 1024 for all experiments. This allows us to analyze response...
arXiv:2505.21097v1 [cs.CL] 27 May 2025

Thinker: Learning to Think Fast and Slow
Stephen Chung∗ (DualityRL), Wenyu Du∗ (DualityRL), Jie Fu (Shanghai AI Lab)

Abstract
Recent studies show that the reasoning capabilities of Large Language Models (LLMs) can be improved by applying Reinforcement Learning (RL) to question-answering (QA...
https://arxiv.org/abs/2505.21097v1
emergence of search behavior, as studies show that a long context length is necessary for strong performance [8, 9].

[Figure 1: Conceptual model of the interaction between Fast Thinking and Slow Thinking m... Panels: System 1 (Fast Thinking): propose an initial solution rapidly; System 2 (Slow Thinking): verify and refine the solution carefully.]
benchmarks, with average relative performance gains of 11.9% for Qwen2.5-1.5B models and 8.50% for DeepSeek-R1-Distill-Qwen-1.5B models. Furthermore, our analysis reveals a notable reduction in reflection patterns, suggesting more direct reasoning. In summary, the proposed Thinker task offers the following key strength...
must generate a response under a strict token budget, it cannot search extensively. It is usually restricted to a few search paths, which are directly reinforced if one leads to the correct answer.

Step 2 - Verification. In the second step, the agent is prompted to verify whether the fast answer y_fast is correct, and mu...
answer is often easier than generating one. If the fast answer is already verified to be correct, there is no need to proceed further, thus saving computational cost. Step 3 - Slow Thinking. In the third step, the agent is prompted that the fast answer has been verified to be incorrect and is asked to try an alternativ...
Motivation. The motivation of the final step is to reinforce concise reasoning patterns by rewarding correct and consistent summaries. A key design element is that the original question is re-presented in the Summarization prompt, mirroring its appearance in the Fast Thinking step. The agent is trained to produce a con...
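The four-stage control flow described above can be sketched as follows. This is a hedged illustration, not the paper's implementation: `thinker_episode`, `toy_model`, and the paraphrased prompts are all hypothetical stand-ins, and the ground-truth check stands in for the RL reward signal.

```python
# Hedged sketch of the Thinker control flow: Fast Thinking -> Verification
# -> Slow Thinking -> Summarization. `model` is a stand-in for the RL-trained
# agent; prompts are paraphrased, not the paper's exact templates.
def thinker_episode(model, question, answer_key):
    # Step 1 - Fast Thinking: propose an answer under a strict token budget.
    y_fast = model(f"Answer concisely: {question}", budget=1000)

    # Step 2 - Verification: the agent judges whether y_fast is correct.
    # (The ground-truth comparison here stands in for the training reward.)
    verdict = model(f"Is the answer '{y_fast}' to '{question}' correct?")
    if verdict == "correct" and y_fast == answer_key:
        return y_fast  # verified fast answer: no slow thinking needed

    # Step 3 - Slow Thinking: told the fast answer failed, try alternatives.
    y_slow = model(f"'{y_fast}' was wrong. Re-solve carefully: {question}")

    # Step 4 - Summarization: the original question is re-presented and the
    # slow reasoning (y_slow) is distilled into a concise final answer.
    return model(f"Summarize your solution to: {question}", budget=1000)

def toy_model(prompt, budget=None):
    # Trivial stand-in agent: hasty at first, correct when it slows down.
    if prompt.startswith("Answer concisely"):
        return "3"            # hasty fast answer
    if prompt.startswith("Is the answer"):
        return "incorrect"
    return "4"                # slow thinking / summary recover the answer

print(thinker_episode(toy_model, "2 + 2 = ?", answer_key="4"))
```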
our work, these methods typically do not involve RL fine-tuning of the agent for these self-correction behaviors; instead, the correction capability is elicited at inference time through prompting. 5 Experiments This section details the experiments conducted to assess whether the Thinker task can more effectively enhan...
task surpassing the baseline. Detailed breakdowns for each benchmark are provided in Appendix B. Notably, for the R1.5B model, a moderate gap exists between

[Figure: accuracy over training steps (0 to 1.2k) for Thinker (Final Acc), Thinker (Fast Acc), and Baseline (Final Acc); panel (a) Qwen2.5-1.5B ...]
4.55 3.09 4.06 2.30 7.40 3.81
Baseline      57.98  3.33  3.33  21.46  24.54  34.38  17.78  36.21  24.88
Thinker       64.25  6.25  2.50  23.74  28.11  40.62  19.03  38.33  27.85
Thinker-Fast  61.60  6.25  2.50  26.39  24.78  35.94  18.66  37.85  26.75
ORZ           58.00  3.50  1.00  16.80  -      -      -      -      -
SimpleRL      59.00  4.20  -     -      21.00  35.00  20.20  -      -
DeepSeek-R1-Distill-Qw...
to examine how the agent’s reasoning adapts across the Thinker task stages. Observations from representative outputs (see Appendix C for full examples) highlight the importance of the Fast Thinking mode. When an initial Fast Thinking output is incorrect, the agent first engages in a Verification stage to assess its own...
taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892 , 2025. [3]Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-reasoner-zero: An open source approach to scaling up reinforcement learning on the base model. arXiv preprint arXiv:2...
Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36:8634–8652, 2023. [22] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical repo...
main text for the termination conditions.

Box A.1: Prompt Templates for Thinker Task
1. Fast Thinking
User: Answer the below question with concise steps and output the final answer within \boxed{}. Limit your response below 1000 words. This is the problem: {question}
Assistant: <Agent Response>
2. Verification
User...
Figure 8: Detailed evaluation results of R1.5B fine-tuned using the QA task or the Thinker task on individual mathematical reasoning benchmarks (panels include (g) Minerva Math).

Table 3: Mathematical reasoning performance of the Thinker agent trained without the Summarization stage. Average (Avg.) scores are presente...
a meaningful starting point. A qualitative analysis of how the Fast Thinking stage interacts with subsequent stages can be found in the case study in Appendix C, which shows that the trained agent is able to correct flawed heuristics from the Fast Thinking mode during the Verification and Slow Thinking modes.
targeted error identification and conceptual correction. The progression shows a clear refinement of reasoning, moving from the System-1-like heuristic in Fast Thinking to a more rigorous, System-2-like approach in Verification and Slow Thinking. The explicit references between stages—Verification critiquing Fast Think...
3 − 2) = 6√3 − 12.
6√3 − 12
User: <Verification Prompt: Is your answer above correct?>
Assistant: <Reasoning Trace in Verification, questioning the S = s + 4 assumption (a relabeled as s)> The side length of this larger hexagon might be s + 4? Or is it s + 2? Wait, reconsider. <...> the relationship isn't straightforward. <Deriv...
The area of a regular hexagon is $\frac{3\sqrt{3}}{2}\times(\text{side length})^2$. Therefore:
$$\frac{3\sqrt{3}}{2}\left[\left(s+\frac{4}{\sqrt{3}}\right)^2 - s^2\right] = 36$$
Simplifying the term inside the brackets:
$$\left(s+\frac{4}{\sqrt{3}}\right)^2 - s^2 = s^2 + \frac{8s}{\sqrt{3}} + \frac{16}{3} - s^2 = \frac{8s}{\sqrt{3}} + \frac{16}{3}$$
Substituting this back into the area equation:
$$\frac{3\sqrt{3}}{2}\left(\frac{8s}{\sqrt{3}} + \frac{16}{3}\right) = 36$$
Multiplying out:
$$\frac{3\sqrt{3}}{2}\cdot\frac{8s}{\sqrt{3}} + \frac{3\sqrt{3}}{2}\cdot\frac{16}{3} = 36 \;\Longrightarrow\; 12s + 8\sqrt{3}\ ...$$
Figure 10. Due to limited computational resources, we have not yet trained the Thinker agent to convergence, but we already observe its performance surpassing that of the baseline, which has plateaued. The detailed performance of the best checkpoints from both runs can be found in Table 5. Similar to the results observ...
Interpreting Social Bias in LVLMs via Information Flow Analysis and Multi-Round Dialogue Evaluation Zhengyang Ji1,2∗, Yifan Jia1,2∗, Shang Gao1, Yutao Yue1,3† 1The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China 2Shandong University, Qingdao, China 3Institute of Deep Perception Technology, ...
https://arxiv.org/abs/2505.21106v1
the information flow when the model answers neutral questions on the FACET (Gustafson et al., 2023) dataset and observe significant demographic disparities in the model's attention to neutral versus sensitive information during intermediate reasoning stages. As illustrated in Figure 1, when asked “Is the main figure ...
of textual representations in LVLMs, showing that prior bias also exists in the language understanding process.

2 Related Work
2.1 Social bias in LVLMs
Recently, a growing body of research (Fraser and Kiritchenko, 2024; Howard et al., 2025; Wu et al., 2024b; Zhang et al., 2024a; Jiang et al., 2024) has focused on det...
gender), we construct prompts in the following format:

Sensitive Prompt: Is the main figure in this picture a 〈sensitive concept〉?

These prompts serve as controlled inputs to evaluate the model's reasoning behavior under neutral and sensitive semantic conditions. More prompt examples can be found in Appendix A.1.
model:
Round 1: Input the full image token sequence.
Round 2: Input the pruned token sequence that only includes I_key.
The model's consistency across rounds and the change in response confidence are jointly used to compute a fairness score. Here, we define the confidence of the model's response. Given the output lo...
negative responses in both Round 1 and Round 2, the fairness score is calculated as Conf_2 − Conf_1, which reflects the extent to which the identified I_key tokens are biased toward sensitive attributes. Finally, we obtain a fairness score ranging over [−1, +1], where a higher score indicates that the LVLM pays less attention to...
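The score above can be sketched numerically. This is a hedged illustration under assumptions: the logit values are hypothetical, and confidence is taken as the softmax probability of the (negative) answer the model actually gave in each round, consistent with the definition from output logits above.

```python
import math

# Hedged sketch of the fairness score. Logit values are hypothetical; in the
# paper, confidences come from the LVLM's output logits in each dialogue round.
def confidence(logit_yes, logit_no):
    """Softmax probability of the answer the model gave (here: the negative
    answer), computed from the two answer logits."""
    e_yes, e_no = math.exp(logit_yes), math.exp(logit_no)
    return e_no / (e_yes + e_no)

def fairness_score(conf_round1, conf_round2):
    """Conf_2 - Conf_1 for consistent negative answers; range [-1, +1].
    Roughly: how little the retained I_key tokens push the model toward
    the sensitive reading."""
    return conf_round2 - conf_round1

c1 = confidence(logit_yes=-1.0, logit_no=2.0)   # Round 1: full image tokens
c2 = confidence(logit_yes=-0.5, logit_no=1.0)   # Round 2: pruned to I_key
print(round(fairness_score(c1, c2), 3))
```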
indicate that disparities in model performance may stem from demographic-specific visual attention biases.

5.2 Correlation Analysis of the Fairness Score and Social Bias
To investigate the relationship between model fairness and social bias, we analyze whether the prediction accuracy disparity among demographic gro...
0.0251 0.0825 0.0474 0.0941
LLaVA-v1.6-13B
Occupation: lifeguard, flutist, trumpeter, gardener, boatman, sculptor, dancer, nurse, fireman, climber, waiter, backpacker
Acc Diff:  0.2444 0.1609 0.1603 0.1485 0.1440 0.1374 0.1293 0.1243 0.1063 0.1044 0.0963 0.0914
FSD:       0.1430 0.1364 0.0755 0.1367 0.0991 0.1181 0.0851 0.0806 0.0476 0.086...
information flow-based framework to explain the origins of social bias in LVLMs from an internal perspective. By introducing a fairness score, we quantify how models rely on sensitive attributes when answering neutral questions, and find strong correlations with prediction disparities across groups. Extending our a...
walla, and Svetlana Kiritchenko. 2025. Uncovering bias in large vision-language models at scale with counterfactuals. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5946–5991. Qidong Huang, Xiaoyi Dong, ...
Juanxi Tian, Yangrui Gong, Ruibo Hou, Yue Xu, Zhenbang Wu, Zhiyuan Fan, Yiyang Zhou, Kangyu Zhu, and others. 2024. Cares: A comprehensive benchmark of trustworthiness in medical vision language models. Advances in Neural Information Processing Systems, 37:140334–140365. Shaoyang Xu, Junzhuo Li, and Deyi Xiong. 202...
range?
- Does the person in the image fall within the <age group> age group?
- Is the person shown in the picture in the <age group> age category?
- Can we categorize the person in this image as <age group>?
- Is the person in this picture considered to be <age group>?
During experiments, one prompt was randomly selected f...
0.3525
waiter      0.3340  0.3140  0.3511
boatman     0.3154  0.2993  0.3428
farmer      0.3721  0.3604  0.4043

(b) LLaVA-v1.5-13B
occupation    light   medium  dark
machinist     0.3052  0.2939  0.3418
waiter        0.2861  0.2744  0.3105
skateboarder  0.2813  0.2690  0.3135
reporter      0.3188  0.3052  0.3442
motorcyclist  0.3232  0.3066  0.3545
fireman       0.2803  0.2705  0...
A LIGHTWEIGHT MULTI-EXPERT GENERATIVE LANGUAGE MODEL SYSTEM FOR ENGINEERING INFORMATION AND KNOWLEDGE EXTRACTION
Bogdan Bogachov1, Yaoyao Fiona Zhao1,∗
1McGill University, Montreal, QC, Canada

ABSTRACT
Despite recent advancements in domain adaptation techniques for large language models, these methods remain comput...
https://arxiv.org/abs/2505.21109v1
for engineering applications, most rely on costly cloud computing services or require the deployment of high-end on-premises servers. The majority of small to medium-sized engineering companies will not be able to afford such costly technologies. Therefore, there is a clear need for lightweigh...
The most significant one is the high computational cost induced by updating all 137 billion parameters of the model. In contrast, a notable example of LLM adaptation through the addition of extra layers atop a backbone model is Hierarchical Domain Adaptation (HDA), as introduced in [23]. HDA [23] leverages a pre-trained model and t...
is overly reliant on complex models, and it lacks standardized evaluation metrics. Motivated by the limitations outlined in the preceding subsections, there is a clear need to develop a method that combines computational efficiency with high accuracy while effectively addressing domain-specific tasks.

3. METHODOLOGY
The methodology u...
experts, the same backbone Llama-3.2-1B-Instruct [13] LLM is used. The model is fine-tuned separately on each isolated dataset described in Subsection 3.1 using LoRA [24]. The fine-tuned models are then connected using a graph approach utilizing the LangGraph library [35]; thus, each model is represented by a node, and the orch...
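The node-and-orchestrator structure described above can be sketched as follows. This is a dependency-free stand-in, not the paper's LangGraph wiring: the expert functions, the keyword-based routing, and all names here are hypothetical, standing in for LoRA-fine-tuned expert models and an orchestrator LLM.

```python
# Simplified stand-in for the graph wiring described above: each fine-tuned
# expert is a node, and an orchestrator node routes a user query to one of
# them. (The paper uses LangGraph; this sketch only mirrors the node/edge
# structure, and the experts and keywords are hypothetical.)
def bearing_expert(query):
    return f"[bearing expert] answering: {query}"

def gear_expert(query):
    return f"[gear expert] answering: {query}"

EXPERTS = {"bearing": bearing_expert, "gear": gear_expert}

def orchestrator(query):
    # Keyword routing as a stand-in for the orchestrator LLM's decision.
    for topic, expert in EXPERTS.items():
        if topic in query.lower():
            return expert(query)
    return "[orchestrator] no matching expert for this query"

print(orchestrator("What is the rated load of this bearing?"))
```

An aggregator node, as suggested later in the paper, would collect and merge expert outputs when a query is split across several experts.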
on all compared models, the EM metric indicates that SLG can achieve 3 times better results. Among the three used metrics, EM is the most powerful indication that SLG has the potential to better resist hallucinations by producing text exactly matching the engineering ground truth answers. In addition, all SLG experts and its o...
provide the orchestrator with the necessary information to make a proper decision. Also, an aggregator node could be added to the pipeline to collect text generated by experts into one piece of information in cases when the orchestrator would split a user query among multiple experts. A generic expert node could be a solid ad...
els with Differential Privacy.” 2022 IEEE International Conference on Data Mining Workshops (ICDMW): pp. 560–566. 2022. IEEE. [10] Raffel, Colin, Shazeer, Noam, Roberts, Adam, Lee, Katherine, Narang, Sharan, Matena, Michael, Zhou, Yanqi, Li, Wei and Liu, Peter J. “Exploring the limits of transfer learning with a unified text-to-text transfo...
for knowledge-intensive NLP tasks.” Advances in Neural Information Processing Systems Vol. 2020-December (2020). URL https://www.scopus.com/inward/record.uri?eid=2-s2.0-85108449607&partnerID=40&md5=c6711ab215f5fdba9b0d4a8449d7a25a. [26] Li, Siran, Stenzel, Linus, Eickhoff, Carsten and Ali Bahrainian, Seyed. “Enhanc...
arXiv:2505.21116v1 [cs.HC] 27 May 2025

Creativity in LLM-based Multi-Agent Systems: A Survey
Yi-Cheng Lin∗, Kang-Chieh Chen∗, Zhe-Yan Li∗, Tzu-Heng Wu∗, Tzu-Hsuan Wu∗, Kuan-Yu Chen∗, Hung-yi Lee, Yun-Nung Chen
National Taiwan University, Taipei, Taiwan
{f12942075, r13944050}@ntu.edu.tw, y.v.chen@ieee.org

Abstract
Large language mode...
https://arxiv.org/abs/2505.21116v1
range from LLM-based chatbots to human agents, as in Fig. 1. We aim to map the current landscape of techniques, datasets, evaluations, and challenges to foster and measure creativity in such multimodal and heterogeneous systems. By analyzing how different agents interact, we reveal how collaborative structures can un...
human users (Venkadesh et al., 2024; Zhai et al., 2025; Venkatesh et al., 2025). For example, Co-Scientist (Gottweis et al., 2025) embeds a supervisory agent that evaluates the planning configuration determined by users, assigns weighted priorities and resources across specialist agents, and schedules them as paralle...
sustained high proactivity risks over-reliance, reduced creative independence, and trust deficits (Chakrabarty et al., 2024). Future MAS should therefore adaptively calibrate proactivity to task demands and user preferences.

3 MAS Techniques for Creativity
MAS enhances creativity by dividing the cognitive workload, s...
importance of maintaining human autonomy in divergent thinking.

3.2 Iterative Refinement
Iterative refinement involves progressively enhancing ideas through repeated feedback and revision cycles. In MAS, this process is facilitated by assigning distinct roles to agents, such as proposer, reviewer, and implementer, ...
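The proposer/reviewer/implementer cycle described above can be sketched as a simple loop. This is a hedged illustration with rule-based stand-ins for the LLM agents; the function names and the toy writing task are hypothetical, not drawn from any surveyed system.

```python
# Minimal sketch of the iterative refinement cycle: a proposer drafts an
# idea, a reviewer returns feedback (or None when satisfied), and an
# implementer revises. All three are rule-based stand-ins for LLM agents.
def proposer(seed):
    return seed or "a story about a lighthouse"

def reviewer(idea):
    # Returns feedback, or None when the reviewer has no objections.
    return "add a character" if "character" not in idea else None

def implementer(idea, feedback):
    return f"{idea}, revised to {feedback}"

def refine(max_rounds=3):
    idea = proposer(None)
    for _ in range(max_rounds):
        feedback = reviewer(idea)
        if feedback is None:
            break  # converged: no further revision cycles needed
        idea = implementer(idea, feedback)
    return idea

print(refine())
```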
and final authoring. MAS agents helped inspire new ideas and filled in missing details. Participants described the system as feeling like a “second mind”, demonstrating the supportive role of LLMs in collaborative writing.

CoQuest (Liu et al., 2024b): CoQuest assists researchers in formulating meaningful questions. It...
A conceptual framework illustrated with selected attributes, accompanied by a concise example representing each defined persona.

Coarse-Grained Persona: Agents carry only high-level identity or expertise labels (e.g., “marketing strategist,” “data analyst”). This minimal specification tolerates ambiguity, fostering div...
human–agent collaborations, presents unique challenges. Unlike tasks with clear correctness criteria, creativity is inherently subjective and multifaceted, lacking a universally accepted assessment framework. To address this, researchers typically employ two complementary evaluation approaches:
• Artifact Evalu...
different aspects of creative artifacts. Nowadays, researchers often invoke additional subjective criteria tailored to specific text generation tasks. Insightfulness (Shaer et al., 2024) is used to quantify how deeply ideas engage with underlying problem structures rather than merely diverging from norms. Interes...
transcriptions of the interactions. The analysis may focus on interaction patterns, problem-solving approaches, expressions of creativity, and usability issues. For instance, Colin (Ye et al., 2024) analyzes the recordings to evaluate children’s narrative skills before and after using a storytelling system, focus...
focused on identifying the iterative nature of the collaboration, how the human and AI took initiative, and how ideas were developed and refined across the different stages of prewriting.

5.4 Discussion on Evaluation Methods
Evaluating creativity in MAS presents unique challenges. Objective metrics are scalable an...
involve open-ended tasks measuring originality and flexibility; the Alternative Uses Task (Guilford, 1967), where participants are asked to think of as many uses as possible for a common object; and the Torrance Tests of Creative Thinking (Torrance, 1966), a widely used battery assessing creative potential through bot...
processes involved in genuine multi-agent collaboration. As a result, these narrow scopes and simplified setups leave us with an incomplete understanding of how bias truly affects creative and equitable multi-agent systems.

Managing and Leveraging Creative Conflicts
Conflicts between agents in MAS are typically seen ...
rapidly expanding field of AI-augmented creativity. Beyond whether AI-generated works qualify for copyright, we also need to decide how to apportion authorship among agents in creative MAS, as this determines their legal and moral credit. In practice, one might imagine a collaborative novel-writing system where Agen...
single-session evaluations, understanding how users engage with multi-agent creative systems over extended periods remains a significant hurdle. Longitudinal investigations have revealed that users undergo an initial novelty phase before stabilizing their expectations and customizing AI workflows (Long et al., 2024).
interactions or generate persistent memory traces, informed consent in human–agent data collection, and the environmental costs associated with large-scale multi-agent deployments. Third, the majority of systems reviewed in this survey are developed and evaluated in English and rely heavily on Western-centric datasets...
Creech, Natalia Criado Pacheco, and Simon Miles. 2021. Resource allocation in dynamic multiagent systems. Preprint, arXiv:2102.08317. Mike D’Arcy, Tom Hope, Larry Birnbaum, and Doug Downey. 2024. Marg: Multi-agent review generation for scientific papers. Preprint, arXiv:2401.04259. Allegra De Filippo, Michela Milano, ...
International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 13666–13676, Torino, Italia. ELRA and ICCL. Shanshan Han, Qifan Zhang, Yuhang Yao, Weizhao Jin, Zhaozhuo Xu, and Chaoyang He. 2024b. Llm multi-agent systems: Challenges and open problems. Preprint, a...