Source: https://arxiv.org/abs/2505.17348v1
Figure 2: Training data synthesis for PBM. (Panels: ToM actions, DEL belief states, and process label generation for an example story about Mary and Alice moving a chocolate.)

Definition 2 (Event Model). An event model is a tuple $\varepsilon = (E, Q, \mathrm{pre}, \mathrm{post})$, where:
• $E$ is a finite, non-empty set of events;
• $Q : \mathcal{A} \to 2^{E \times E}$ assigns to each agent $a \in \mathcal{A}$ a binary relation $Q_a$ over events;
• $\mathrm{pre} : E \to \mathcal{L}(\mathcal{P}, \mathcal{A})$ assigns a precondition formula to each event, specifying when it is executable;
• $\mathrm{post} : E \to \mathcal{L}(\mathcal{P}, \mathcal{A})$ assigns a postcondition formula to each event, describing how the world changes.

We refer to a pointed event model $(\varepsilon, e)$ as an action, where $e \in E$ is the actual event that occurs.

With the definitions of epistemic models and event models, we can simulate the process of ToM reasoning. In our formulation, each sentence in a ToM story is treated as an action. As actions occur sequentially, the epistemic state of the agents evolves accordingly. After each action, the current epistemic state is updated to reflect the new information and belief changes induced by the action. This sequence of state transitions forms the belief trace used for reasoning.

Definition 3 (Product Update). Given a state $(M, w)$ and an action $(\varepsilon, e)$, suppose that the precondition is satisfied, i.e., $M, w \models \mathrm{pre}(e)$.
Then the product update results in a new state $(M', (w, e))$, where the updated epistemic model $M' = (W', R', V')$ is defined as follows:
• $W' = \{(w', e') \in W \times E \mid M, w' \models \mathrm{pre}(e')\}$;
• For each agent $a \in \mathcal{A}$, $R'_a = \{((w', e'), (v', f')) \in W' \times W' \mid w' R_a v' \text{ and } e' Q_a f'\}$;
• For each atomic proposition $p \in \mathcal{P}$, $(w', e') \in V'(p)$ iff $\mathrm{post}(e') \models p$ or ($M, w' \models p$ and $\mathrm{post}(e') \not\models \neg p$).

3.2 Building PBM

After formulating ToM reasoning as a DEL process, we describe our PBM-based inference framework (Figure 1). In the online setting, the model incrementally constructs a belief trace. At each action, it observes the trace so far and proposes multiple candidate updates for the current state. The PBM scores these candidates, and a subset with high scores is selected to continue the reasoning. This process is repeated until all actions are processed. In the offline setting, the model generates multiple full belief traces after reading the entire story. The PBM reranks these complete traces and selects the best one as the final output.

3.2.1 Dataset Synthesis via DEL

In this section, we describe how we construct the training dataset for the PBM using DEL. We generate 20,000 ToM examples using the scenario and question generation script from the Hi-ToM framework (He et al., 2023). For process-level labels, we design a DEL-based simulator to compute the full belief update trace. As illustrated in Figure 2, we directly calculate $(M, w) \models B_{\mathrm{Mary}} B_{\mathrm{Alice}} \varphi$ at different states
to obtain belief annotations for each step. For reasoning traces, we use GPT-4o-mini (Hurst et al., 2024) to generate step-by-step explanations for the same scenarios. We observe that GPT-4o-mini produces a balanced mix of correct and incorrect reasoning traces, making it suitable for reward modeling. Each training sample thus consists of an LLM-generated reasoning trace paired with a DEL-generated process label.

3.2.2 Training the Process Belief Model

The PBM is a scoring function $f : \mathcal{P} \times \mathcal{S} \to \mathbb{R}^{+}$ that assigns a reliability score to each step $s_i$ in a belief trace $s$ under a given ToM problem $P$. We treat this as a binary classification task: each step is labeled as either correct or incorrect according to the DEL-generated belief trace. The model is trained using the following binary cross-entropy objective:

$$\mathcal{L}_{\mathrm{PBM}} = \sum_{i=1}^{K} \left[ y_{s_i} \log f(s_i) + (1 - y_{s_i}) \log\big(1 - f(s_i)\big) \right],$$

where $K$ is the number of steps, $y_{s_i}$ is the binary label, and $f(s_i)$ is the predicted score.

Previous works on process-level reward models often focus on math reasoning, relying either on costly human annotations or proxy heuristics such as whether a final answer can be recovered by a language model. However, these approaches are prone to shortcuts: a model may reach a correct final answer despite flawed intermediate steps, undermining the goal of transparent and trustworthy reasoning.

In contrast, our approach leverages the formal structure of DEL to generate ground-truth belief states at each reasoning step. This produces a high-quality, noise-free supervision signal without the need for manual labeling or LLM-based inference, enabling reliable training of the PBM.

3.3 Inference-Time Scaling with PBM

After training the PBM, we integrate it with various inference-time search methods to improve ToM reasoning through inference-time scaling. We consider both online and offline strategies.

Online: Beam Search. Beam search is a structured decoding method that maintains multiple partial belief traces during generation.
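The training objective above can be sketched in a few lines of code. This is an illustrative sketch only: the function name and the plain-list representation of step scores are our own assumptions, not the paper's implementation.

```python
import math

def pbm_objective(f_scores, labels, eps=1e-12):
    """Binary cross-entropy objective summed over the K steps of one trace.

    f_scores: predicted reliability scores f(s_i) in (0, 1)
    labels:   DEL-generated binary labels y_{s_i} (1 = step is correct)
    """
    assert len(f_scores) == len(labels)
    # Clamp scores away from 0 and 1 so the logarithms stay finite.
    return sum(y * math.log(max(f, eps)) + (1 - y) * math.log(max(1 - f, eps))
               for f, y in zip(f_scores, labels))
```

As written in the objective above, a perfectly scored trace (score 1 on correct steps, 0 on incorrect ones) attains the maximum value of 0, and miscalibrated scores drive the sum toward $-\infty$.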
At each reasoning step, it explores several alternative updates and selects the most promising ones based on the PBM scores. Formally, the procedure works as follows:
• Initialize a set of $k$ beams by sampling $k$ candidate first steps from the model.
• At each step, for every current beam, sample $b$ next-step candidates, forming $k \times b$ new paths.
• Score each extended path using the PBM. We use the PBM score of the last step to rank partial paths.
• Retain the top $k$ paths with the highest scores and repeat the process until reaching an end-of-sequence token or a maximum depth.

This approach jointly optimizes the generation and evaluation of belief traces, allowing the model to explore plausible alternatives and commit to higher-reward reasoning trajectories.

Offline: Best-of-N. In the offline setting, we sample $N$ complete belief traces independently and then evaluate them using the PBM. We experiment with different aggregation rules for scoring each trace based on step-wise PBM scores:
• Last: Use the PBM score of the final step.
• Min: Use the lowest score across all steps.
• Avg: Use the average score across the trace.
• Prod: Multiply the scores of all steps.

Based on the aggregated scores, we consider two ranking strategies for selecting the final answer:

(i) Vanilla Best-of-N. Select the trace with the highest PBM score and extract its final answer. This method chooses the most confident individual trace, but does not account for answer consistency across traces.

(ii) Weighted Best-of-N. Group traces by their final answers, then aggregate PBM scores across traces that predict the same answer. The answer with the highest total score is selected:

$$\hat{y} = \arg\max_{y \in \mathcal{Y}} \sum_{i=1}^{N} \mathbb{1}(y_i = y) \cdot \mathrm{PBM}(p, s_i)$$

Here, $s_i$ is the $i$-th belief trace, $y_i$ its final answer, and $\mathrm{PBM}(p, s_i)$ the trace-level score. This strategy emphasizes both answer quality and consistency across samples.

4 Experiments

4.1 Experimental Setup

Platform. All experiments are conducted on a single NVIDIA GH200 GPU node. We use the vLLM (Kwon et al., 2023) framework for efficient batched inference and large-scale decoding.

PBM Training. We fine-tune a PBM model based on Llama3.1-8B-Instruct (Grattafiori et al., 2024). The model is trained for 1 epoch using our DEL-generated dataset. All training is performed on the same GH200 machine.

Test Models. We evaluate our methods on both the Qwen3 series (0.6B, 1.7B, 4B) (Yang et al., 2025a) and the Llama3.2 series (1B, 3B) (Grattafiori et al., 2024). All models are evaluated using their default generation settings, with no change to temperature, top-k, or nucleus sampling parameters.

Datasets. We conduct evaluations on two datasets: Hi-ToM (He et al., 2023) and (Kosinski, 2023). The Hi-ToM dataset is constructed using our DEL-based generator, while (Kosinski, 2023) is a human-written ToM benchmark. These two

Table 1: Offline inference-time scaling methods across belief orders in the Hi-ToM dataset, with and without PBM. "Ori" denotes the baseline accuracy from a single sample without inference-time scaling.
| Model | 0th (Ori / +PBM) | 1st (Ori / +PBM) | 2nd (Ori / +PBM) | 3rd (Ori / +PBM) | 4th (Ori / +PBM) | Avg (Ori / +PBM) |
|---|---|---|---|---|---|---|
| Qwen3-4B | 100.0 / 100.0 | 79.8 / 85.0 | 79.3 / 90.0 | 70.2 / 82.5 | 46.0 / 65.0 | 75.1 / 84.5 |
| Qwen3-1.7B | 78.0 / 82.5 | 59.7 / 65.0 | 45.2 / 55.0 | 47.0 / 62.5 | 47.8 / 57.5 | 55.5 / 64.5 |
| Qwen3-0.6B | 69.2 / 80.0 | 52.0 / 72.5 | 35.0 / 47.5 | 31.5 / 52.5 | 34.0 / 47.5 | 44.3 / 60.0 |
| Llama3.2-3B | 68.2 / 85.0 | 52.0 / 80.0 | 43.2 / 82.5 | 37.0 / 82.5 | 36.8 / 75.0 | 47.4 / 81.0 |
| Llama3.2-1B | 41.5 / 46.2 | 40.0 / 53.8 | 28.5 / 61.5 | 41.5 / 84.6 | 29.2 / 58.3 | 36.1 / 60.9 |

datasets differ in structure, providing a testbed for evaluating the generalization of the PBM.

Metrics and Prompt Format. We report final answer accuracy as the main evaluation metric. All models are evaluated using a consistent prompting format across datasets, as detailed in the Appendix.

4.2 Offline Methods Results

We set $N$ up to 1024 and apply the weighted Best-of-$N$ strategy, selecting the best aggregation rule (among avg, last, min, and prod) as the final answer for each instance.

Main Results. As shown in Table 1, incorporating the PBM leads to a significant improvement in ToM reasoning performance across all models. For example, Llama3.2-3B exhibits a substantial gain of 33.6 points in average accuracy, while Qwen3-4B improves by 9.4 points. Across all belief orders (from 0th to 4th), PBM consistently yields higher accuracy, confirming the robustness and generalizability of our inference-time scaling method.

Scaling N for ToM Reasoning. As shown in Figure 3, increasing the number of sampled belief traces $N$ consistently improves ToM reasoning performance. Among the aggregation strategies, min and prod exhibit stable and similar performance across both vanilla and weighted ranking schemes. In contrast, avg and last tend to degrade in performance under weighted aggregation, likely due to their sensitivity to low-quality or inconsistent samples. We recommend using min or prod as robust aggregation rules for inference-time ToM scaling.

Majority Voting Doesn't Work for ToM Reasoning. Interestingly, we observe that scaling $N$ with majority voting does not lead to improved accuracy on ToM tasks. This contrasts with math reasoning tasks, where majority voting often benefits from larger $N$ by amplifying consistent correct answers. The discrepancy highlights a key distinction: ToM reasoning is a dynamic, social process that cannot be reduced to static answer aggregation. Therefore, our trained PBM is crucial: without such a verifier, inference-time scaling for ToM would be ineffective. It is precisely the PBM that enables us to assess whether each intermediate belief state in the reasoning process is likely to be justified.

4.3 Online Methods Results

Online Setting. For online inference-time scaling, we conduct beam search experiments on Qwen3-4B and scale up to Qwen3-8B. We do not include smaller models because their instruction-following ability is insufficient for producing valid intermediate reasoning steps. The number of beams $N$ is varied from 4 to 256.

Main Results. Again, as shown in Table 2, we observe that incorporating the PBM leads to substantial improvements in ToM reasoning. For Qwen3-4B, the PBM-enhanced accuracy reaches 85.0, which is comparable to the best results in the offline setting. Interestingly, the original Qwen3-8B model underperforms Qwen3-4B, suggesting that baseline ToM ability does not necessarily scale with model size. However, with PBM guidance, Qwen3-8B achieves the highest accuracy of 87.0, demonstrating the effectiveness of inference-time scaling even for larger LLMs.
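The four aggregation rules and the weighted Best-of-N selection used in these experiments can be sketched as follows. This is an illustrative sketch: the function names and the (answer, step-scores) data layout are our own assumptions, not the paper's code.

```python
import math
from collections import defaultdict

# The four aggregation rules over a trace's step-wise PBM scores.
AGG = {
    "last": lambda scores: scores[-1],
    "min":  min,
    "avg":  lambda scores: sum(scores) / len(scores),
    "prod": math.prod,
}

def weighted_best_of_n(traces, rule="prod"):
    """traces: list of (final_answer, per-step PBM scores) pairs.

    Groups traces by final answer and sums each group's aggregated trace
    scores, returning the answer with the highest total (the arg max over Y).
    """
    totals = defaultdict(float)
    for answer, step_scores in traces:
        totals[answer] += AGG[rule](step_scores)
    return max(totals, key=totals.get)
```

Vanilla Best-of-N would instead pick the single highest-scoring trace, e.g. `max(traces, key=lambda t: AGG[rule](t[1]))[0]`, which ignores answer consistency across samples.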
Online Methods or Offline Methods? Which strategy should we prefer for ToM reasoning: online or offline? Our experiments suggest that both approaches yield comparable accuracy. For instance, Qwen3-8B achieves similar performance under both settings. However, online methods are significantly harder to evaluate reliably. This is because many smaller or less instruction-aligned models struggle to follow step-by-step prompting in an online rollout, failing to produce valid intermediate states and making PBM evaluation infeasible.

In contrast, offline methods allow the model to generate a full belief trace in one shot, which is generally easier even for weaker models. Even when shortcuts or hallucinations appear mid-trace, the PBM can still function effectively. Furthermore, when paired with high-throughput generation backends such as vLLM, offline methods can generate large numbers of candidate traces efficiently. Overall, we recommend using offline inference-time methods for ToM reasoning.

Figure 3: Accuracy under different aggregation strategies across decoding budgets N. (a) Vanilla Best-of-N decoding on Qwen3-4B. (b) Weighted Best-of-N decoding on Qwen3-4B.

Table 2: Online inference-time scaling methods across belief orders in the Hi-ToM dataset, with and without PBM. "Ori" denotes the baseline accuracy from a single sample without inference-time scaling.

| Model | 0th (Ori / +PBM) | 1st (Ori / +PBM) | 2nd (Ori / +PBM) | 3rd (Ori / +PBM) | 4th (Ori / +PBM) | Avg (Ori / +PBM) |
|---|---|---|---|---|---|---|
| Qwen3-8B | 96.5 / 80.0 | 53.3 / 80.0 | 38.8 / 85.0 | 55.8 / 95.0 | 57.8 / 95.0 | 60.4 / 87.0 |
| Qwen3-4B | 100.0 / 100.0 | 79.8 / 85.0 | 79.3 / 97.5 | 70.2 / 82.5 | 46.0 / 60.0 | 75.1 / 85.0 |

4.4 Results on Other Benchmarks

Our PBM is trained on data generated under the Hi-ToM framework. A natural question arises: can the trained PBM generalize to ToM reasoning tasks from a different distribution?

To evaluate this, we test our method on the dataset proposed by Kosinski (2023), which features hand-written scenarios involving false beliefs and a variety of true-belief controls. We conduct experiments using both the Llama 3.1 and Qwen3 model series. In all experiments, we follow the same inference-time scaling and PBM-based trace selection procedure as in the Hi-ToM evaluations.

Main Results. As shown in Table 3, we observe that the PBM also generalizes well to out-of-domain data. Across all tested models, accuracy consistently improves after applying inference-time scaling guided by the PBM. This confirms that our PBM functions as a genuine verifier of whether a ToM reasoning process is justified, rather than merely fitting surface patterns in the training distribution. The improvements on the Kosinski (2023) benchmark demonstrate its ability to evaluate belief traces beyond synthetic scenarios, highlighting the robustness and transferability of our approach.

Figure 4: Scaling trend of average accuracy before and after applying PBM across different LLMs. Ori refers to the single-sample baseline, and +PBM denotes performance after applying inference-time belief path modeling.

4.5 Discussion

Scaling with Model Size.
Figure 4 illustrates the impact of model scaling on ToM accuracy across different model sizes. We observe that the PBM consistently improves performance and facilitates more effective scaling. For Llama 3.2, the accuracy curve becomes steeper when equipped with the PBM, indicating that larger models benefit more and generalize better under our inference-time intervention. Interestingly, Qwen3 exhibits a failure in scaling at 8B under the vanilla setting, performing worse than its 4B counterpart. However, once the PBM is applied, Qwen3-8B achieves the highest accuracy among all its variants. This suggests that inference-time scaling via PBM not only improves absolute performance but may also enable the emergence of higher-order reasoning capabilities that are otherwise latent in the base model.

Table 3: Offline inference-time scaling methods on the (Kosinski, 2023) dataset, evaluated across different belief types, with and without PBM. "Ori" denotes baseline accuracy from a single sample without PBM.

| Model | False Belief (Ori / +PBM) | Informed Protagonist (Ori / +PBM) | No Transfer (Ori / +PBM) | Present Protagonist (Ori / +PBM) | Avg (Ori / +PBM) |
|---|---|---|---|---|---|
| Qwen3-8B | 83.3 / 87.5 | 83.8 / 85.0 | 92.8 / 97.5 | 79.5 / 85.0 | 84.8 / 88.8 |
| Qwen3-4B | 70.2 / 80.0 | 86.2 / 90.0 | 93.2 / 95.0 | 88.0 / 92.5 | 84.4 / 89.4 |
| Qwen3-1.7B | 18.2 / 35.0 | 15.5 / 37.5 | 24.8 / 60.0 | 13.8 / 30.0 | 18.1 / 40.6 |
| Qwen3-0.6B | 14.5 / 12.5 | 23.5 / 30.0 | 25.0 / 35.0 | 21.0 / 32.5 | 21.0 / 27.5 |

Comparison with GRPO-based Methods. Recent work has proposed using ToM supervision to
fine-tune smaller models via GRPO (Shao et al., 2024) in order to enhance their ToM capabilities. However, GRPO-based training requires substantial computational resources and is notoriously difficult to optimize. In contrast, our PBM module is lightweight and efficient: it can be trained in under three hours on a single GH200 GPU. Moreover, GRPO must be re-trained for each target model individually, whereas our PBM is trained once and can be applied across multiple models without retraining. Notably, prior work also reports that GRPO-trained models, while improving ToM reasoning, may suffer performance degradation on other benchmarks such as GSM8K. Our inference-time scaling method avoids this drawback entirely, as it does not modify the parameters of the underlying model. Overall, the PBM offers a practical, generalizable, and non-invasive alternative for enhancing ToM reasoning in SLMs.

5 Related Works

Dynamic Epistemic Logic and Its Connections to Theory-of-Mind. The intersection of DEL and ToM has emerged as a promising framework for formalizing belief-based social reasoning. DEL offers a principled approach to representing and updating agents' mental states through formal mechanisms such as product updates over epistemic and event models (Van Ditmarsch et al., 2007). This directly aligns with the core of ToM, which centers on inferring and reasoning about others' beliefs. Earlier cognitive models (Van Ditmarsch et al., 2007; Bolander and Andersen, 2011) employed DEL to simulate belief change in multi-agent settings. More recent computational work extends this line by incorporating DEL into neural architectures, using logic-based simulators to provide symbolic supervision for belief updates (Bolander, 2014; Rabinowitz et al., 2018).
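As a concrete illustration of the symbolic machinery such DEL simulators implement, the product update of Definition 3 can be sketched in a few lines. This is a simplified sketch under our own assumptions: the class layout and the encoding of postconditions as per-atom truth effects are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class EpistemicModel:
    worlds: set   # W
    rel: dict     # R: agent -> set of (w, v) accessibility pairs
    val: dict     # V: world -> set of atomic propositions true there

@dataclass
class EventModel:
    events: set   # E
    rel: dict     # Q: agent -> set of (e, f) event pairs
    pre: dict     # e -> predicate(model, world) encoding pre(e)
    post: dict    # e -> {atom: bool} effects encoding post(e)

def product_update(M, Ev):
    """Model-level product update M x Ev in the sense of Definition 3."""
    # W': world-event pairs whose precondition is satisfied
    worlds = {(w, e) for w in M.worlds for e in Ev.events if Ev.pre[e](M, w)}
    # R'_a: (w, e) related to (v, f) iff w R_a v and e Q_a f
    rel = {a: {((w, e), (v, f))
               for (w, e) in worlds for (v, f) in worlds
               if (w, v) in M.rel.get(a, set())
               and (e, f) in Ev.rel.get(a, set())}
           for a in set(M.rel) | set(Ev.rel)}
    # V': p holds at (w, e) iff post(e) makes p true, or p held at w
    # and post(e) does not make it false
    val = {}
    for (w, e) in worlds:
        eff = Ev.post.get(e, {})
        made_true = {p for p, v in eff.items() if v}
        kept = {p for p in M.val.get(w, set()) if eff.get(p, True)}
        val[(w, e)] = made_true | kept
    return EpistemicModel(worlds, rel, val)
```

Applying a sequence of such updates, one per story sentence, yields exactly the kind of belief trace the simulators above use as supervision.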
Building on this foundation, our work leverages DEL not only as a modeling formalism but also as a scaffold for inference-time supervision, enabling compositional and verifiable reasoning in ToM tasks (Hansen and Bolander, 2020).

Inference-Time Scaling of LLMs. Recent research has investigated inference-time scaling as a compute-efficient alternative to architectural scaling for enhancing the reasoning capabilities of LLMs (Beeching et al.; Muennighoff et al., 2025). Instead of increasing model size, this approach leverages additional computation during inference to simulate deeper cognitive reasoning. Techniques such as Best-of-N and beam search exemplify this paradigm by enabling small LLMs to explore multiple reasoning paths and select the most plausible outcome, effectively mimicking the deliberation depth of larger models without incurring additional training costs (Snell et al., 2024; Misaki et al., 2025). While promising, current applications of inference-time scaling are largely confined to mathematical reasoning (Wang et al., 2023; Yang et al., 2025b; Yao et al., 2023). This work aims to broaden its applicability by targeting ToM as a new frontier for inference-time scaling.

6 Conclusion

This work presents a new approach to Theory-of-Mind (ToM) reasoning by focusing on inference-time reliability rather than model scale. By formalizing belief updates through Dynamic Epistemic Logic (DEL) and using a verifier model trained with logic-generated labels, we enable small language models to reason in a more transparent and structured manner. Our inference-time framework improves ToM performance of small LLMs across several benchmarks. It demonstrates
that social reasoning tasks can benefit from compute-efficient methods that guide rather than retrain the model. This opens new possibilities for deploying socially aware AI in resource-limited settings.

References

Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, and others. 2024. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219.

Maryam Amirizaniani, Elias Martin, Maryna Sivachenko, Afra Mashhadi, and Chirag Shah. 2024. Do LLMs exhibit human-like reasoning? Evaluating theory of mind in LLMs for open-ended responses. arXiv preprint arXiv:2406.05659.

Ian A. Apperly and Stephen A. Butterfill. 2009. Do humans have two systems to track beliefs and belief-like states? Psychological Review, 116(4):953.

Guillaume Aucher and François Schwarzentruber. 2013. On the complexity of dynamic epistemic logic. arXiv preprint arXiv:1310.6406.

Simon Baron-Cohen. 1991. Precursors to a theory of mind: Understanding attention in others. Natural Theories of Mind: Evolution, Development and Simulation of Everyday Mindreading, 1(233-251):1.

Edward Beeching, Lewis Tunstall, and Sasha Rush. Scaling test-time compute with open models.

Thomas Bolander. 2014. Seeing is believing: Formalising false-belief tasks in dynamic epistemic logic. In European Conference on Social Intelligence (ECSI 2014), pages 87–107.

Thomas Bolander and Mikkel Birkegaard Andersen. 2011. Epistemic planning for single- and multi-agent systems. Journal of Applied Non-Classical Logics, 21(1):9–34.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, and others. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Alvin I. Goldman. 1979. What is justified belief? Justification and Knowledge, 17:1–23.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and others. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Lasse Dissing Hansen and Thomas Bolander. 2020. Implementing theory of mind on a robot using dynamic epistemic logic. In Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 1615–1621. International Joint Conference on Artificial Intelligence Organization.

Yinghui He, Yufan Wu, Yilin Jia, Rada Mihalcea, Yulong Chen, and Naihao Deng. 2023. Hi-ToM: A benchmark for evaluating higher-order theory of mind reasoning in large language models. arXiv preprint arXiv:2310.16755.

Guiyang Hou, Wenqi Zhang, Yongliang Shen, Linjuan Wu, and Weiming Lu. 2024. TimeToM: Temporal space is the key to unlocking the door of large language models' theory-of-mind. arXiv preprint arXiv:2407.01455.

Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, and others. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276.

Donald E. Knuth. 1964. Backus normal form vs. Backus Naur form. Communications of the ACM, 7(12):735–736.

Michal Kosinski. 2023. Theory of mind may have spontaneously emerged in large language models. arXiv preprint arXiv:2302.02083, 4:169.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large
language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles.

Zhihang Lin, Mingbao Lin, Yuan Xie, and Rongrong Ji. 2025. CPPO: Accelerating the training of group relative policy optimization-based reasoning models. arXiv preprint arXiv:2503.22342.

Zizheng Lin, Chunkit Chan, Yangqiu Song, and Xin Liu. 2024. Constrained reasoning chains for enhancing theory-of-mind in large language models. In Pacific Rim International Conference on Artificial Intelligence, pages 354–360. Springer.

Kou Misaki, Yuichi Inoue, Yuki Imajuku, So Kuroki, Taishi Nakamura, and Takuya Akiba. 2025. Wider or deeper? Scaling LLM inference-time compute with adaptive branching tree search. arXiv preprint arXiv:2503.04412.

Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. 2025. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393.

Jan Plaza. 2007. Logics of public communications. Synthese, 158:165–179.

David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences, 1(4):515–526.

Neil Rabinowitz, Frank Perbet, Francis Song, Chiyuan Zhang, S. M. Ali Eslami, and Matthew Botvinick. 2018. Machine theory of mind. In International Conference on Machine Learning, pages 4218–4227. PMLR.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and others. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300.

Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. 2024. Scaling LLM test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314.
James W. A. Strachan, Dalila Albergo, Giulia Borghini, Oriana Pansardi, Eugenio Scaliti, Saurabh Gupta, Krati Saxena, Alessandro Rufo, Stefano Panzeri, Guido Manzi, and others. 2024. Testing theory of mind in large language models and humans. Nature Human Behaviour, 8(7):1285–1295.

Winnie Street, John Oliver Siy, Geoff Keeling, Adrien Baranes, Benjamin Barnett, Michael McKibben, Tatenda Kanyere, Alison Lentz, Robin I. M. Dunbar, and others. 2024. LLMs achieve adult human performance on higher-order theory of mind tasks. arXiv preprint arXiv:2405.18870.

Johan Van Benthem. 2001. Games in dynamic-epistemic logic. Bulletin of Economic Research, 53(4):219–248.

Hans Van Ditmarsch, Wiebe van der Hoek, and Barteld Kooi. 2007. Dynamic Epistemic Logic, volume 337. Springer Science & Business Media.

Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. 2023. Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations. arXiv preprint arXiv:2312.08935.

An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, and others. 2025a. Qwen3 technical report. arXiv preprint arXiv:2505.09388.

Wenkai Yang, Shuming Ma, Yankai Lin, and Furu Wei. 2025b. Towards thinking-optimal scaling of test-time compute for LLM reasoning. arXiv preprint arXiv:2502.18080.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. In Advances in Neural Information Processing Systems, volume 36, pages 11809–11822. Curran Associates,
Inc.

Appendix

A Prompt Templates

We present the prompt templates in the following textbox.

One-Shot Prompt

Here is a story that unfolds in chronological order. You will be asked a question about the story, which may involve either: (1) Locating an object, or (2) Inferring an agent's mental state (e.g., what A thinks B thinks C thinks). To solve it, think step-by-step. At each step, repeat the current line from the story, then explain its effect on beliefs. Use [Null] if someone does not yet have knowledge. If a belief chain cannot be formed (e.g., some agent exited too early), freeze belief at the last available step.

<Note> {note}

In public or private communication:
- The speaker believes the listener will believe the claim.
- If the listener exited the room earlier than the speaker, they will believe it.

If the question is zero-order (e.g., "Where is X really?"), then in each step, only track the actual location of the object (e.g., "X is in [Y]"). You do not need to track nested beliefs.

Here is an example:

<Story>
1 Amelia, Chloe, Liam, Owen and Benjamin entered the TV_room.
2 The celery is in the red_envelope.
3 Amelia made no movements and stayed in the TV_room for 1 minute.
4 Chloe lost his watch.
5 Amelia exited the TV_room.
6 Chloe moved the celery to the green_bucket.
7 Chloe exited the TV_room.
8 Liam moved the celery to the red_bathtub.
9 Liam exited the TV_room.
10 Owen made no movements and stayed in the TV_room for 1 minute.
11 Owen exited the TV_room.
12 Benjamin made no movements and stayed in the TV_room for 1 minute.
13 Benjamin exited the TV_room.
14 Amelia, Chloe, Liam, Owen and Benjamin entered the waiting_room.
15 Liam publicly claimed that celery is in the white_bathtub now.
16 Benjamin privately told Liam that the celery is in the blue_drawer now.

<Question>
Where does Owen think Liam thinks Chloe thinks the celery is?
<Trace>
## Step 1 ##
Amelia, Chloe, Liam, Owen and Benjamin entered the TV_room.
Everyone is present, but the celery's location is still unknown.
Owen thinks Liam thinks Chloe thinks the celery is in [Null]

## Step 2 ##
The celery is in the red_envelope.
Everyone observes this.
Owen thinks Liam thinks Chloe thinks the celery is in [red_envelope]

## Step 3 ##
Amelia made no movements and stayed in the TV_room for 1 minute.
No effect.
Owen thinks Liam thinks Chloe thinks the celery is in [red_envelope]

## Step 4 ##
Chloe lost his watch.
Irrelevant.
Owen thinks Liam thinks Chloe thinks the celery
is in [red_envelope]

## Step 5 ##
Amelia exited the TV_room.
Irrelevant.
Owen thinks Liam thinks Chloe thinks the celery is in [red_envelope]

## Step 6 ##
Chloe moved the celery to the green_bucket.
Only Chloe, Liam, Owen, Benjamin are present. They all see this move.
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 7 ##
Chloe exited the TV_room.
Chloe's belief frozen; still [green_bucket].
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 8 ##
Liam moved the celery to the red_bathtub.
Only Liam, Owen, Benjamin present. They observe the move. Chloe not present, so her belief unchanged.
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 9 ##
Liam exited the TV_room.
No change.
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 10 ##
Owen made no movements and stayed in the TV_room for 1 minute.
Irrelevant.
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 11 ##
Owen exited the TV_room.
Owen's belief frozen.
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 12 ##
Benjamin made no movements and stayed in the TV_room for 1 minute.
Irrelevant.
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 13 ##
Benjamin exited the TV_room.
No change.
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 14 ##
Everyone entered the waiting_room.
No effect on beliefs.
Owen thinks Liam thinks Chloe thinks the celery is in [green_bucket]

## Step 15 ##
Liam publicly claimed that celery is in the white_bathtub now.
Owen hears this statement. However, public speech only affects first- and second-order beliefs (e.g., what Liam believes, what Owen thinks Liam believes, and what Liam thinks Owen believes). It does not change Owen's belief about what Liam thinks Chloe thinks.
Owen thinks Liam thinks Chloe thinks the celery is in [ green_bucket ] ## Step 16 ## Benjamin privately told Liam that the celery is in the blue_drawer now . Owen does not hear this , but more importantly , private communication only affects beliefs between the speaker and the listener . It can change what Liam believes ( based on exit order ), or what Liam thinks Benjamin believes ( based on exit order ), or what Benjamin thinks Liam believes ( always change ) - but it cannot affect higher - order beliefs . So this does not change Owen 's belief about what Liam thinks Chloe thinks . Owen thinks Liam thinks Chloe thinks the celery is in [ green_bucket ] Final Answer : [ green_bucket ] Now | https://arxiv.org/abs/2505.17348v1 |
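The update rules the trace applies (agents present in the room observe a move; an agent who has exited keeps a frozen belief; claims do not touch third-order beliefs) can be sketched in a few lines. The event encoding and function below are illustrative assumptions, not the paper's actual simulator, which performs full DEL product updates; here a nested belief such as "Owen thinks Liam thinks Chloe thinks" bottoms out at Chloe's frozen first-order belief, since her exit was commonly observed.

```python
# Minimal sketch of the belief-update rules illustrated in the trace above.
# Event names and structure are hypothetical, for illustration only.

def simulate(events):
    present = set()   # agents currently in the room
    belief = {}       # agent -> location that agent last observed
    for ev in events:
        kind = ev[0]
        if kind == "enter":               # ev = ("enter", [agents])
            for a in ev[1]:
                present.add(a)
                belief.setdefault(a, None)   # location unknown: [Null]
        elif kind == "reveal":            # ev = ("reveal", loc); seen by all present
            for a in present:
                belief[a] = ev[1]
        elif kind == "move":              # ev = ("move", agent, new_loc)
            for a in present:             # only present agents see the move
                belief[a] = ev[2]
        elif kind == "exit":              # ev = ("exit", agent)
            present.discard(ev[1])        # absent agent's belief is frozen
        # Public/private claims are deliberately ignored here: per the trace,
        # they cannot affect third-order beliefs like
        # "Owen thinks Liam thinks Chloe thinks".
    return belief

# Steps 1-16 of the trace, compressed to the belief-relevant events:
events = [
    ("enter", ["Amelia", "Chloe", "Liam", "Owen", "Benjamin"]),
    ("reveal", "red_envelope"),
    ("exit", "Amelia"),
    ("move", "Chloe", "green_bucket"),
    ("exit", "Chloe"),
    ("move", "Liam", "red_bathtub"),
    ("exit", "Liam"),
    ("exit", "Owen"),
    ("exit", "Benjamin"),
]
beliefs = simulate(events)
print(beliefs["Chloe"])   # green_bucket, matching the trace's final answer
```

Chloe's belief freezes at Step 7 ([green_bucket]) and is untouched by the later move, claims, and exits, which is exactly the chain the trace walks through.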
arXiv:2505.17362v2 [cs.CL] 26 May 2025

A Fully Generative Motivational Interviewing Counsellor Chatbot for Moving Smokers Towards the Decision to Quit

Zafarullah Mahmood* Soliman Ali* Jiading Zhu* Mohamed Abdelwahab* Michelle Yu Collins* Sihan Chen* Yi Cheng Zhao* Jodi Wolff† Osnat Melamed*† Nadia Minian*† Marta Maslej*† Carolynne Cooper*† Matt Ratto* Peter Selby*† Jonathan Rose*†§

*University of Toronto
†INTREPID Lab, Centre for Addiction and Mental Health, Toronto, ON, Canada

Abstract

The conversational capabilities of Large Language Models (LLMs) suggest that they may be able to perform as automated talk therapists. It is crucial to know if these systems would be effective and adhere to known standards. We present a counsellor chatbot that focuses on motivating tobacco smokers to quit smoking. It uses a state-of-the-art LLM and a widely applied therapeutic approach called Motivational Interviewing (MI), and was evolved in collaboration with clinician-scientists with expertise in MI. We also describe and validate an automated assessment of both the chatbot's adherence to MI and client responses. The chatbot was tested on 106 participants, and their confidence that they could succeed in quitting smoking was measured before the conversation and one week later. Participants' confidence increased by an average of 1.7 on a 0-10 scale. The automated assessment of the chatbot showed adherence to MI standards in 98% of utterances, higher than human counsellors. The chatbot scored well on a participant-reported metric of perceived empathy but lower than typical human counsellors. The participants' language also indicated a good level of motivation to change, a key goal in MI. These results suggest that the automation of talk therapy with a modern LLM has promise.

1 Introduction

The remarkable rise in the capability of large language models (LLMs) gives hope that they could be used to provide many kinds of mental health talk therapy. Indeed, one can simply ask for such help from an online LLM and possibly receive good help (Siddals et al., 2024).
Since this is a medical intervention, it should be grounded in evidence that shows its effectiveness. Our goal is to automate a specific type of talk therapy focusing on the problem of tobacco addiction, with the specific goal of moving ambivalent smokers towards the decision to quit. Ambivalent smokers know that smoking is bad for them but continue smoking because of its positive effects and because they don't spend time contemplating their smoking behaviour (Miller, 1983; Rollnick et al., 1997; Miller and Rollnick, 2023). More than 50% of all smokers are in this ambivalent state (Babb et al., 2017), and so moving even a small fraction of these towards the decision to quit could have a major impact. The Motivational Interviewing (MI) talk therapy approach (Miller and Rollnick, 2023) is often employed by counsellors to guide smokers away from their ambivalent state towards the decision to quit. This decision is a key precursor for any successful attempt to quit (West and Sohal, 2006).

There has been significant activity in recent years on automating talk therapy in many domains, including the use of MI to help in smoking cessation (Steenstra et al., 2024; Basar et al., 2024; Welivita and Pu, 2023; Brown et al., 2023). Brown et al. (2023), the predecessor of the present work, developed MIBot v5.2, which showed that a partially scripted and partially generative chatbot could significantly change smokers' readiness to quit. However, scripting with limited generation restricts the natural flow of conversation, thereby preventing full utilization of MI elements. Steenstra et al. (2024) show the effectiveness of a fully-generative chatbot focused on alcohol use. As well, more complete MI administered by human counsellors has shown a much greater impact (Boudreaux et al., 2012). This, together with the potential availability of always-accessible, lower-cost counselling, forms the motivation for this work.

§Corresponding author: jonathan.rose@utoronto.ca
In this paper, we describe the design and measurement of a single, large prompt of a state-of-the-art LLM-based chatbot called MIBot¹. A key to our approach is that expert MI clinicians and researchers participated in designing the prompt and evaluating the chatbot. We iteratively evolved the prompt with the help of MI experts, synthetic patients, and humans role-playing as smokers.

MIBot was then tested on smokers recruited online (for pay) to measure both the effect on their confidence to quit and the quality of the conversations in four ways:

1. The participants' readiness to quit through a widely-used readiness ruler (Boudreaux et al., 2012) before the conversation and one week later. The difference between these two measurements is our primary metric of effectiveness.
2. A rating of the perceived empathy of the chatbot on the CARE scale (Mercer et al., 2004), which is widely used for assessing human clinician empathy.
3. A measurement of how well the counsellor's utterances adhere to the standards of MI based on the Motivational Interviewing Skill Code (MISC) (Houck et al., 2010).
4. The percentage of client utterances that reflect their motivation to change smoking behaviour against the total number of utterances that indicate change or the sustaining of the behaviour, also based on MISC.

The key contributions of this paper are:

1. An expert-informed chatbot that performs fully generative MI counselling.
2. Measurements of effectiveness on human smokers.
3. A validated automated system to measure the adherence of counsellor utterances to the precepts of MI.
4. A validated automated measurement of the effect of the chatbot on the client through analysis of their language.
5. A dataset of the transcripts of 106 chatbot-human conversations together with measured outcomes of effectiveness, perceived empathy, and utterance-level MISC annotations².

This paper is organized as follows: the next section describes prior work in the area of automated MI using therapeutic chatbots (and their evaluation). Section 3 describes the clinician-informed iterative design of MIBot. Section 4 discusses the methods of measurement and recruitment of human smokers. Section 5 presents the results and discussion, and Section 6 concludes.

¹This paper describes MIBot v6.3A and compares it with our previous work, MIBot v5.2 (Brown et al., 2023). Our group's broader goal is to iteratively develop MI-based chatbots for smoking cessation. See Appendix G for a comprehensive list of all previous MI chatbot iterations. Unless otherwise noted, MIBot refers to MIBot v6.3A.
²https://github.com/cimhasgithub/MIBOT_ACL2025

2 Related work

2.1 MI-Based Chatbots

Various approaches to building MI-based chatbots have been explored in the past.
Some have relied on pre-scripted or structured responses directed by rule-based logic (Park et al., 2019; Olafsson et al., 2019; Samrose and Hoque, 2022). Structured responses ensure adherence to MI principles but lack the flexibility to adapt to user input. Others have incorporated natural language understanding (NLU) units to help structure the conversation flow (Almusharraf et al., 2020; He et al., 2022; Saiyed et al., 2022; Leeuwis and He, 2023; He et al., 2024), allowing for some adaptability. Following their introduction, LLMs have been used in MI-based chatbots to generate responses, mainly focusing on specific MI skills, such as reflections (Brown et al., 2023; Min et al., 2023; Brown et al., 2024; Min et al., 2024; Basar et al., 2024; Kumar et al., 2024) and advice (Welivita and Pu, 2023).

Recent works have investigated the feasibility of fully generative MI-based chatbots using LLMs. For example, Steenstra et al. (2024) developed an LLM-powered virtual agent for alcohol use counselling, evaluating its linguistic soundness, safety, and MI competency against human counsellors. Meanwhile, Xie et al. (2024) proposed a few-shot dialogue strategy learning framework that learns MI strategies from demonstration dialogues via inductive reasoning and applies them to improve the responses. However, no fully generative MI-based chatbot has yet been developed specifically for smoking cessation. Furthermore, existing MI-based chatbots are often evaluated based on static metrics rather than effectiveness for changing behaviour in humans. The present work is the first to develop and test a fully generative MI chatbot for smoking cessation and evaluate its effectiveness through both MI adherence and its potential effect on smokers' readiness to quit.
2.2 Automated Evaluation for MI

The manual evaluation of MI adherence is time-consuming, requires expert annotators, and lacks scalability, necessitating the use of automated assessment during the development of MI-based chatbots. Prior work has described automated approaches for coding MI sessions, classifying utterances from the counsellor and the client into behavioural codes using either MISC or Motivational Interviewing Treatment Integrity (MITI) (Moyers et al., 2016) coding standards. Early non-LLM approaches investigated using linguistic features (Pérez-Rosas et al., 2017, 2019) and deep learning models (Tanana et al., 2015; Cao et al., 2019; Tavabi et al., 2021; Gibson et al., 2022; Nakano et al., 2022; Min et al., 2022; Gabriel et al., 2024) on textual transcripts, with some also incorporating other modalities such as features from speech and video data (Tavabi et al., 2020; Nakano et al., 2022). LLMs have recently been used in automated MI evaluation, such as detecting clients' motivational language (Hoang et al., 2024). Notably, Sun et al. (2024) demonstrated the feasibility of automated MISC classification using LLMs, despite not explicitly connecting it to overall MI fidelity.

[Figure 1: Overview of the MIBot system and observer agents.]

2.3 Datasets for MI

High-quality datasets are essential for training and evaluating chatbots that provide MI-style counselling and adhere closely to MI principles. Several recent contributions have significantly advanced resources in this area. An early example is AnnoMI (Wu et al., 2023), which offers 133 transcribed and expertly annotated MI demonstrations. Similarly, Pérez-Rosas et al. (2019) curated a collection of MI transcripts labelled according to counselling quality, while Sun et al. (2024) provided a bilingual corpus annotated with detailed MISC codes by domain experts. Expanding beyond textual interactions, Younsi et al. (2024) explored multimodal aspects by examining facial expression dynamics during MI sessions.

To mitigate the extensive manual effort required for annotation, Cohen et al. (2024) leveraged LLMs to automate the labelling of counsellor and client utterances using both MITI (Moyers et al., 2016) and Client Language Easy Rating (CLEAR) (Glynn and Moyers, 2012) frameworks. Complementary to these structured datasets, Welivita and Pu (2022) assembled peer-support dialogues that naturally exhibit MI-compatible language without trained counsellors. Most recently, Meyer and Elsweiler (2024) presented a dataset involving human interactions with a GPT-4-based MI chatbot, with participants role-playing as clients.
While these datasets offer valuable material for chatbot training and assessment, they lack a targeted focus on a specific behaviour like smoking and do not involve real individuals genuinely exhibiting such behaviour in interactions with a chatbot. Our work uniquely addresses this gap: rather than mere MI demonstrations or role-playing, we provide a dataset of conversations between real human smokers talking to an expert-informed, fully generative MI chatbot.

3 Chatbot Design Process

Figure 1 illustrates an overview of the MIBot system. Below, we first describe the elements of the MI counselling approach relevant to this paper.

3.1 Motivational Interviewing

Motivational Interviewing is a talk therapy approach for behaviour change, used by clinicians to help patients (called clients in MI) move towards and make healthy behaviour changes. Its central strategy is to engage the client in contemplation around the behaviour and link change to their underlying values. The key to the MI approach is that clients themselves discover their motivation to change; the counsellors should neither be directive nor portray themselves as experts. Instead, they should guide the client without generating discord or worsening the ambivalence to change.

Typical conversational 'skills' in MI include asking open-ended questions around a behaviour, giving simple reflections of client responses (restating these in different words) to encourage continued contemplation, linking the reflections to other relevant history or facts (complex reflections), and offering affirmations for positive accomplishments.

One key outcome of an MI conversation that the counsellor looks for is the kind of 'talk' that the contemplation elicits from the client. Change Talk refers to client utterances that indicate the client is contemplating or actively planning to change the behaviour. Sustain Talk refers to utterances with reasons why it would be difficult for the client to change, or direct statements of the continuance of the habit. Ambivalent clients tend to oscillate between these two states, and thus appear stuck in their addiction. A core goal of MI is to help clients resolve this ambivalence.

3.2 Iterative Development of the Chatbot

Our approach to building an automated counsellor chatbot is to begin with a single prompt of a state-of-the-art LLM, OpenAI's GPT-4o model (OpenAI, 2024b). For consistency, all results presented in this paper are from a specific GPT-4o model, gpt-4o-2024-08-06. Our research group is
a collaboration of engineers and expert clinicians, the latter highly experienced in delivering MI counselling for smoking cessation.

The group used the following informal process to evolve the prompt for the counsellor chatbot: we began with a short, simple prompt (shown in Appendix B), which asks the model to use its own knowledge of MI. Then, sample conversations were generated between the chatbot and two different kinds of test clients. The first test client (a virtual client) was a separate instance of the LLM instructed to play the role of a smoker. The prompt for the virtual client, including its "backstory" (a demographic specification and personal history), is given in Appendix D. The second test client was one of the human researchers role-playing as a smoker.

The resulting transcripts were then reviewed by the team of engineers and expert MI clinicians, who identified issues in bi-weekly meetings. The discussions would lead to an improved prompt to address the issues. Each improvement was tested with several more counsellor-test-client conversations to see if the improvement was successful.

The list below gives the set of the most important improvements made to the prompt, linked to specific lines of the final prompt (given in Appendix B) that were changed to make that improvement.

1. Appropriate utterance length: It was observed that the chatbot had a tendency to be quite verbose, which would make it sound unnatural and overwhelming to the client. The prompt was modified (in lines 2-3 of Appendix B) to address this.

2. Accessible language: To make MIBot accessible to users from diverse educational and socioeconomic backgrounds, it was instructed to use simple language, avoid complex terminology, and adapt to the client's language. The prompt was modified (in line 2 of Appendix B) to address this.

3. Avoiding assumptions about nicotine use: It was observed that the chatbot sometimes made a premature assumption about the nature and extent of the client's smoking.
The MI clinicians suggested that a counsellor should enter the conversation with an open mind and let the client describe the amount of smoking. The prompt was modified (in line 6 of Appendix B) to address this.

4. Improved conversation pace: The chatbot had the tendency to move into the conversational topic of smoking quickly and put insufficient effort into building rapport with the client. Clinicians emphasized the need to start conversations with icebreakers to create a comfortable environment for the client. The prompt was modified to reflect this in lines 1 and 7.

5. Appropriate timing of the planning phase: Planning is a crucial step in MI, in which clients begin to think through concrete ideas on how they would bring change to their behaviour. However, guiding clients to begin planning prematurely can be counterproductive and drive them away from change. The prompt was modified in lines 9-13 to give instructions on how and when to move towards the planning phase. A key understanding here is to wait until the client demonstrates a reduced amount of sustain talk.

These iterative discussions continued until the team was (informally) satisfied with the quality and MI-adherence of virtual/role-played conversations.

3.3 Observer Agents

In addition to the primary counsellor agent, to ensure the chatbot could be deployed safely for end users, we developed observer agents to monitor the conversations between the chatbot and the client. Each observer is built using a prompted GPT-4o instance, tasked with reviewing specific aspects of the ongoing conversation, and can intervene when necessary, as described below.

3.3.1 The Moderator

The moderator reviews the counsellor's most recent utterance and determines whether it could potentially harm the client. While OpenAI's internal guardrails (OpenAI, 2024) are highly effective at preventing some forms of harmful content, they do not safeguard against counterproductive counsellor utterances. We designed this observer to have high sensitivity (and, consequently, a high false positive rate).
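The moderator's intervention (Section 3.3.1) amounts to a bounded regenerate-and-recheck loop. The sketch below is an illustrative assumption about its control flow, not the paper's implementation; the function names and the toy generator are hypothetical.

```python
# Illustrative sketch of a moderator-style regenerate-and-recheck loop.
# generate() produces a candidate counsellor utterance; is_harmful() plays
# the role of the moderator observer. Names are hypothetical.

MAX_ATTEMPTS = 5  # the paper caps regeneration at five attempts

def moderated_reply(generate, is_harmful):
    utterance = generate()
    for _ in range(MAX_ATTEMPTS - 1):
        if not is_harmful(utterance):
            return utterance          # moderator deems it acceptable
        utterance = generate()        # re-generate and check again
    return utterance                  # last attempt is used regardless

# Toy usage: a "generator" whose first two drafts are flagged as harmful.
drafts = iter(["bad draft 1", "bad draft 2", "ok draft"])
reply = moderated_reply(lambda: next(drafts), lambda u: u.startswith("bad"))
print(reply)  # ok draft
```

Because the moderator is tuned for high sensitivity, a cap on attempts keeps a run of false positives from stalling the conversation.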
If the moderator deems that the counsellor's utterance is potentially encouraging self-harm (which might include a suggestion to actually smoke), the system re-generates the counsellor's utterance, which is again checked. This process is repeated up to a maximum of five attempts or until the moderator deems the latest utterance "acceptable". In all experiments described below, the re-generated counsellor utterance succeeded within four generation attempts and never failed to produce an acceptable utterance.

3.3.2 Off-Track Conversation Classifier

We were concerned that some of our participants
might intentionally steer the conversation far off from the topic of smoking. We built a classifier to monitor conversations in real-time to detect if the client is deliberately steering the conversation off-track. Unlike the moderator observer, this classifier was prompt-engineered for a low false positive rate, to give the benefit of the doubt to the client. The purpose of this classifier was to identify participants who were not engaging in a serious conversation, for removal from the dataset. In an actual deployment, this observer could be used to trigger the end of the conversation.

3.3.3 End Classifier and Conversation Termination

The intent to end a conversation can arise from either the client or the counsellor. To ensure the conversation transitions smoothly to an ending and the post-conversation survey, we designed an end classifier that monitors the dialogue in real-time and determines if the counsellor or client wishes to finish. If so, the counsellor is instructed to summarize the conversation (a typical MI practice) and ask if the client wishes to continue. If the client does wish to continue, then the conversation is resumed.

4 Feasibility Study with Human Smokers

4.1 Participant Recruitment

A total of 106 English-speaking participants were recruited to evaluate the capability of MIBot through the Prolific (www.prolific.com) online behavioural research platform (Peer et al., 2017). The criteria for inclusion in the study are that participants must be fluent in English, have a high approval rate on prior tasks performed on the Prolific platform, and are current smokers of at least five cigarettes per day. This group was also filtered from a larger group of 159 participants to select those who exhibit low confidence that they will succeed in quitting³. Finally, the recruitment was set to enrol equal numbers of male and female participants. The exact balance was affected by the above filter, and so the final sex proportion was 54% females and 46% males.
Participant ages ranged from 22-77 years old, with a median of 38 years (mean=40, SD=13). The median and mean time taken to complete the conversational part of the study was 19 minutes (SD=9). Appendix L provides more details on participant demographics.

³As the goal of MI is to resolve ambivalence, those who are very confident in succeeding in quitting are already in the state MI is meant for. So, we only include participants who exhibit low confidence (≤5). We also include 'discordant' participants who have high confidence relative to their importance (confidence > 5 and confidence − importance < 5) as they don't think it is important to quit and, therefore, need MI-style counselling.

4.2 Study Design

The study design followed a pattern commonly employed in MI research (e.g., Valanis et al., 2001; Kaysen et al., 2009; Gaume et al., 2016; Browne, 2022; James et al., 2021) and therapeutic chatbot evaluations (e.g., Brown et al., 2023; He et al., 2022). Participants in our study are taken through the following four steps (a diagram view is available in Appendix E):

1. In a pre-conversation survey, participants rate themselves on the readiness ruler survey, which is often used in behaviour change (Rollnick et al., 1992). These are three numerical questions (scored from 0 to 10) measuring how important it is for the participant to quit, how confident they are that they will succeed, and how ready they are to quit right now. The specific survey is given in Appendix E.

2. Participants then engage in a conversation with the counsellor chatbot described in Section 3, through a text-based interface.

3. Post conversation, participants complete the readiness rulers again, provide feedback on the conversation itself, and respond to the CARE survey (Mercer et al., 2004; Bikker et al., 2015), which measures their perceived empathy by the counsellor and is used for human clinical practitioners. It has 10 questions rated on a scale from 0 to 5 each (Appendix H).
4. One week after the conversation, participants again complete the readiness ruler and indicate if they made any quit attempts or changes in smoking habits.

It has been shown that readiness to quit predicts quitting (Boudreaux et al., 2012; Gaume et al., 2016), and the most predictive part of the ruler is the self-reported confidence to succeed, which we will use as our primary metric for therapeutic effectiveness (Gwaltney et al., 2009; Abar et al., 2013).

4.3 AutoMISC: Assessment of Counsellor and Client Language

In addition to the participant-reported effectiveness metrics described above, it is important to evaluate whether
the counsellor's generated text adheres to the MI methodology to establish clinical validity, providing evidence to trust (or not) the automated counsellor. Assessment of client text from the participants can also offer direct evidence of success or failure in the counselling session. Since we want to assess both counsellor and client utterances, we have chosen to use the original MI assessment rubric, the Motivational Interviewing Skills Code (MISC) version 2.5 (Houck et al., 2010).

MISC classifies each utterance into pre-defined categories based on the counsellor and client behaviours. Counsellor codes include the skills described in Section 3.1, among others. MISC also provides transcript-level summary scores computed by aggregating the utterance-level annotations. Metrics to assess counsellor adherence to MI include Percentage MI-Consistent Responses (%MIC), where higher values indicate greater adherence, and Reflection-to-Question Ratio (R:Q), where values between 1.0-2.0 are considered indicative of proficiency (Moyers et al., 2016). For client language, Percentage Change Talk (%CT) is used, with higher values associated with improved behavioural outcomes (Apodaca and Longabaugh, 2009).

As described in Section 2.2, it is difficult to label transcripts manually, so we developed AutoMISC to automate this process. AutoMISC processes a counselling session transcript by first parsing each speaker's turn into utterances (individual units of thought). This is done by a separate prompted instance of GPT-4o, as is the next step: each counsellor utterance is classified into one of MI-Consistent (MICO), MI-Inconsistent (MIIN), Reflection (R), Question (Q), or Other (O). We found that including five previous conversation turns to classify the current utterance gave enough context to ensure high accuracy. Each client utterance is classified into one of Change Talk (C), Sustain Talk (S) or Neutral (N). These can then be computed into the transcript-level summary scores described above.
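The aggregation from utterance codes to summary scores can be sketched as below. The denominators are an assumption on our part (e.g., that %MIC is computed over MICO and MIIN utterances only, and %CT over change and sustain talk only); the function name and label lists are illustrative, not AutoMISC's actual code.

```python
# Sketch of transcript-level MISC summary scores from utterance-level codes.
# Counsellor codes: MICO, MIIN, R, Q, O; client codes: C, S, N.
# Denominator choices are assumptions, for illustration only.

def summary_scores(counsellor_codes, client_codes):
    mico = counsellor_codes.count("MICO")
    miin = counsellor_codes.count("MIIN")
    refl = counsellor_codes.count("R")
    ques = counsellor_codes.count("Q")
    change = client_codes.count("C")
    sustain = client_codes.count("S")
    return {
        # % MI-consistent among MI-relevant counsellor utterances
        "%MIC": 100 * mico / (mico + miin) if (mico + miin) else None,
        # reflection-to-question ratio; 1.0-2.0 indicates proficiency
        "R:Q": refl / ques if ques else None,
        # % change talk among change/sustain client utterances
        "%CT": 100 * change / (change + sustain) if (change + sustain) else None,
    }

scores = summary_scores(
    counsellor_codes=["MICO", "R", "Q", "MICO", "R", "MIIN"],
    client_codes=["C", "S", "C", "N"],
)
```

Note that "Other" counsellor utterances and "Neutral" client utterances drop out of the ratios entirely under this reading, which is why the guard clauses return None when a denominator is empty.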
The prompts for each instance of GPT-4o are listed in Appendix K.3.

To validate the reliability of AutoMISC, its annotations were compared against four human annotators: two expert MI clinicians and two non-expert team members. Each annotator manually labelled ten of the 106 transcripts, a total of 821 utterances (580 from the automated counsellor, 241 from clients). The inter-rater agreement was measured pairwise using Cohen's κ and for the group using Fleiss' κ. The overall Fleiss' κ was 0.68 for counsellor codes and 0.67 for client codes, which indicates substantial agreement (Landis and Koch, 1977). Appendix K.1 provides more detail on the inter-rater agreement.

5 Results

5.1 Effect of Conversation on Participants' Readiness to Quit Smoking

             Average       Average      Average        Average ∆
             Before Conv   After Conv   1-Week After   (1-Week After − Before)
Importance   5.7 (2.6)     6.3 (2.9)    6.1 (2.7)      0.5 (1.7)*
Confidence   2.8 (2.0)     4.6 (2.6)    4.5 (2.7)      1.7 (2.4)**
Readiness    5.2 (2.8)     5.9 (2.8)    5.5 (3.0)      0.3 (2.4)†

Table 1: Average (SD) of Readiness Ruler Survey on Importance, Confidence, and Readiness to quit smoking. Statistical significance using Wilcoxon signed-rank test. *: p < 0.005, **: p < 0.001, †: p = 0.22.

Recall from Section 4.2 that the 106 human participants in the study filled out the readiness ruler survey on three occasions: just before the conversation with the chatbot, just after it, and one week later. The primary measure of effectiveness is the difference in confidence from before the conversation to one week later, as this is the most predictive of downstream quitting success (Gwaltney et al., 2009). Table 1 presents data at those points in time for the three readiness rulers: importance, confidence, and readiness. It shows a significant increase in confidence of +1.7 on the ten-point scale.

As a point of reference, our previous work, MIBot v5.2 (Brown et al., 2023), which used a hybrid of scripted questions and LLM-generated reflections, reported an average change in confidence of +1.3.
While that result is not directly comparable to the present one, both works recruited a similar number of low-confidence participants, but at a different time and with a different starting average confidence.

We can also compare the week-later change in confidence to that achieved by human counsellors. Schoor (2020) found that participants' confidence increased by +2.5 points after five MI sessions over a ten-week period.

Figure 2 presents the distribution of week-later changes in confidence scores. Notably, 28% of participants did not change their confidence level, but a substantial number (around 60%) showed a positive change in confidence. Roughly 12% lost 1-2 points, and 2% lost more than that.

Table
1 also shows that there was a significant change in the participants' view of the importance of quitting, with an average increase of +0.5, exhibiting the chatbot's effectiveness. The change in readiness was not statistically significant.

Finally, Table M.1 in Appendix M shows that baseline confidence levels and one-week changes varied by demographic group. Younger participants, for instance, started with a higher average confidence of 3.7 and saw a larger increase of +1.9 over the week.

Figure 2: Distribution of Change in Confidence (1-Week Later − Before Conversation).

5.2 CARE Metric for Empathy

Each participant rated the perceived empathy of the chatbot on the CARE scale (Mercer et al., 2004). Table 2 presents the mean CARE scores for this work (MIBot v6.3A) and our previous work, MIBot v5.2 (Brown et al., 2023). The fully generative MIBot v6.3A is significantly more empathetic than a partially scripted and partially generative MIBot v5.2. Notably, 11% of the participants gave MIBot v6.3A a perfect score of 50, substantially higher than the 3% achieved by MIBot v5.2. Compared to trained human counsellors, however, this number is quite low, as Bikker et al. (2015) found that nurses scored an average of 46 on the CARE metric, with 48% achieving a perfect score of 50.

Appendix H provides the distribution of CARE scores among participants and question-wise mean CARE scores. The chatbot performed poorly on questions such as "How well did the chatbot show interest in you as a whole person?" and "How well did it assist in making a plan of action with you?".

Figure 3: Comparison of MISC summary score distributions across datasets. (a) Percentage MI-Consistent Responses (%MIC); (b) Reflection to Question Ratio (R:Q); (c) Percentage Client Change Talk (%CT).

              CARE Score   % Perfect Score
MIBot v5.2        36              3
MIBot v6.3A       42             11
Humans*           46             48

Table 2: Average CARE scores and (%) perfect scores for MIBot v5.2, MIBot v6.3A (present work) and *typical human healthcare professionals (Bikker et al., 2015).
The poor performance on some questions may be due to the chatbot's lack of emotional intelligence (Sabour et al., 2024) or collaboration skills (Yang et al., 2024). The general post-conversation survey showed that 92% of participants enjoyed the experience, while 66% found the interactions helpful.

5.3 Adherence to MI

The AutoMISC assessment tool, described in Section 4.3, was applied to the 106 transcripts from the study. To provide a point of comparison for the MISC summary metrics, we also ran it on the High-Low Quality Counselling (HLQC) dataset (Pérez-Rosas et al., 2019), a publicly available⁴ corpus of transcribed MI counselling demonstrations. It was designed to support the development of "data-driven methods for the automatic evaluation of counselling quality." The HLQC dataset comprises 155 high-quality (HLQC_HI) and 104 low-quality (HLQC_LO) transcripts sourced from public websites. We computed summary scores separately for these subsets and then compared MIBot's summary metrics against those of both HLQC_HI and HLQC_LO. Table 3 summarizes the computed MISC metrics across datasets. It shows that a very high fraction of the chatbot counsellor utterances are MI-compliant (%MIC in the table), slightly exceeding those in the high-quality dataset. The chatbot's Reflection to Question Ratio (R:Q) falls between that of the high- and low-quality datasets and aligns with the 1-2 range described in the MISC rubric. Finally, the fraction of participant utterances classified as change talk is higher than in the high-quality dataset, a very positive achievement.

⁴https://lit.eecs.umich.edu/downloads.html

Metric   Dataset    Mean (SD)
%MIC     HLQC_LO    48 (27.9)
         HLQC_HI    92 (9.8)
         MIBot      98 (3.6)
R:Q      HLQC_LO    0.7 (1.3)
         HLQC_HI    2.3 (5.7)
         MIBot      1.3 (0.3)
%CT      HLQC_LO    23 (29.5)
         HLQC_HI    53 (28.4)
         MIBot      59 (25.6)

Table 3: Comparison of MISC summary metrics in our study and the HLQC Datasets.
Figures3a and3bshowthedistribution,inviolin plots, of counsellor’s Percentage MI-Consistency (%MIC) and Reflection-to-Question Ratio (R:Q) forthe threedatasets (HLQC_LO,HLQC_HI, MI- Bot).MIBot’s distribution of %MIC | https://arxiv.org/abs/2505.17362v2 |
scores closely matches those of HLQC_HI, another indication that the conversation adhered to the principles of MI. The R:Q distribution shows a similar pattern. Figure 3c shows the violin-plot distribution of the Percentage of Client Change Talk (%CT). The distributions for MIBot and HLQC_HI are very similar, as were the averages. This is perhaps the most important indication of MIBot's effectiveness: cultivating change talk is the key goal in MI.

5.4 Dataset Release

We are releasing most of the data collected in this study, including the transcripts of the conversations between the chatbot and participants, the AutoMISC annotations for both counsellor and client utterances, and summary metrics. For each participant, the dataset also includes their readiness ruler survey responses, CARE survey, Heaviness of Smoking survey (Heatherton et al., 1989), and the feedback they provided on the conversation. This is described in more detail in Appendix N. Several studies have published MI counselling datasets (Pérez-Rosas et al., 2019; Welivita and Pu, 2022; Cohen et al., 2024; Sun et al., 2024; Younsi et al., 2024), but none have employed self-reported metrics or well-established surveys to measure the effectiveness of counselling. Our dataset is the first attempt in this direction, as it provides a holistic view of automated MI and its effectiveness on humans.

6 Conclusion

Our study suggests that a fully generative MI-based chatbot could assist smokers in moving toward a decision to quit smoking. The MIBot chatbot was shown to increase most participants' confidence in their ability to quit smoking, a strong predictor of successful cessation. A validated automated assessment indicated that participants' language reflected motivation towards change, another key goal of MI therapy. Furthermore, the automated counsellor is exceptionally MI-compliant. We also showed that the chatbot achieves a good level of perceived empathy. We plan to extend this work by continuing to iterate on the chatbot prompt and employing more sophisticated MI strategies.
We anticipate a need to exert more control over the chatbot's text generation. Secondly, we will create extended conversations over multiple sessions and explore their effectiveness. We plan to conduct a randomized controlled trial to compare the effectiveness of a single session with our chatbot against a simpler digital intervention. Finally, we also plan to integrate the chatbot within an end-to-end smoking cessation mobile application to aid smokers throughout their entire quitting journey.

7 Limitations

The results presented are based on the human participants recruited through Prolific. Since these participants are paid to engage with the chatbot, it is possible that their responses may reflect a desire to provide responses that would result in successful payment, rather than their genuine thoughts and/or experiences. Section 4.1 describes the inclusion criteria for participants in the research and some participant demographic information, while Appendix L provides more detailed demographics. The results presented are valid for these participants, but their generalizability beyond these demographics remains uncertain.

In this research, the automated counsellor interacts with participants strictly through a textual interface. Traditionally, MI has been delivered in person through speech and visual interaction. The MISC coding system (Houck et al., 2010), the basis for our automated assessment system (AutoMISC), was originally developed for spoken interactions, where audio intonation cues may influence the valence of the classifications (Klonek et al., 2015). The absence of these cues in this text-only setting makes it more difficult to distinguish certain utterances, such as reflections, from questions. Consequently, our MI adherence metrics, though encouraging, do not capture all aspects of adherent MI, as the aggregation of behavioural codes into ratios overlooks the temporal dynamics of the interaction. We used the HighLowQualityCounselling (HLQC) dataset (Pérez-Rosas et al., 2019) for contextualizing MISC summary metrics.
While this dataset is reasonably large, it primarily includes transcripts of MI demonstrations scraped from the internet and covers a wider range of activities beyond smoking cessation, such as reducing alcohol consumption or other substance use. Thus, our comparison against the high-quality dataset is not for identical goals.

In the calculation of the %MIC summary metric, we used Q (all questions) instead of OQ (open questions) as part of the supercategory of MI-Consistent (MICO) counsellor behaviours. While this deviates from the standard formula presented in the MISC, the impact on our results is likely minimal, as both high-quality counsellors from the HLQC dataset and the MIBot counsellor bot seldom asked closed questions, given MI's strong emphasis on open-ended questioning (Miller and Rollnick, 2023).

The results presented are based on one specific LLM (GPT-4o), and different LLMs may yield different conversational outputs, changing the results. Finally, demographic disparities may shape both the baseline confidence and post-intervention outcomes, as described in Section 5.1. Future research should explore whether certain groups (e.g., young female smokers) benefit more, or less, than others from text-based MI interventions. We encourage subsequent studies to leverage the published data from our feasibility study to identify and address systematic biases and ensure equitable access to AI-driven therapy solutions.

8 Ethics Statement

Our study was approved by the University of Toronto Research Ethics Board on August 3rd, 2024, under protocol number 49997 (Rose et al., 2025). Development of MIBot and annotation work for AutoMISC involved MI experts who are our co-authors. Participants were recruited through the Prolific online recruitment system; each received 5.5 GBP for a 20-minute session and 1 GBP for a short follow-up survey one week later (which took 5 minutes to complete on average). These payment rates (which average more than 14 GBP per hour) conform to the guidelines by Prolific (Prolific, 2025), which recommends a minimum hourly rate of 9 GBP. All participants were adults (18+), provided informed consent (shown in Appendix P), and had their data de-identified for research and release. Study data is included in supplementary materials and will be published under a Creative Commons Attribution-ShareAlike 4.0 International license on a freely accessible platform.

In the development of the MIBot system and the writing of this paper, the authors utilized AI-assisted tools, including Microsoft Copilot (Microsoft Corporation, 2024) and OpenAI ChatGPT (OpenAI, 2024a). The authors affirm that all ideas presented in this work are original. AI assistance was strictly limited to tasks classified as "assistance purely with the language of the paper" or "short-form input assistance," as defined in the ACL Call for Papers guidelines (Association for Computational Linguistics, 2025).

The recent popularity of LLMs has led to a rapid development of for-profit, therapy-based products (Chan et al., 2022; Prochaska et al., 2021; Chang et al., 2024; Mehta et al., 2021). We encourage developers to carefully consider the ethical implications of these products, collaborate with mental health experts, and extensively test their long-term impact before public release.
While we foresee a future where machines may provide adequate mental support to humans, we make no claims about MIBot's (or, more broadly, any current LLM-based therapeutic chatbot's) ability to deliver fully effective therapy.

References

Beau Abar, Brigitte M. Baumann, Cynthia Rosenbaum, Edward Boyer, Douglas Ziedonis, and Edwin D. Boudreaux. 2013. Profiles of importance, readiness and confidence in quitting tobacco use. Journal of Substance Use, 18(2):75–81.

Fahad Almusharraf. 2018. Motivating smokers to quit through a computer-based conversational system. Master of Applied Science thesis, University of Toronto, Toronto, Canada. Accessed: 2025-05-21.

Fahad Almusharraf, Jonathan Rose, and Peter Selby. 2020. Engaging unmotivated smokers to move toward quitting: Design of motivational interviewing-based chatbot through iterative interactions. J Med Internet Res, 22(11):e20251.

Timothy R. Apodaca and Richard Longabaugh. 2009. Mechanisms of change in motivational interviewing: a review and preliminary evaluation of the evidence. Addiction, 104(5):705–715.

Association for Computational Linguistics. 2025. ACL Rolling Review Call for Papers. https://aclrollingreview.org/cfp. Accessed: 2025-02-13.

Stephen Babb, Ann Malarcher, Gillian Schauer, Katherine Asman, and Ahmed Jamal. 2017. Quitting smoking among adults - United States, 2000-2015. MMWR Morbidity and Mortality Weekly Report, 65(52):1457–1464.

Erkan Basar, Iris Hendrickx, Emiel Krahmer, Gert-Jan Bruijn, and Tibor Bosse. 2024. To what extent are large language models capable of generating substantial reflections for motivational interviewing counseling chatbots? A human evaluation. In Proceedings of the 1st Human-Centered Large Language Modeling Workshop, pages 41–52, TBD. ACL.

Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604.

Annemieke P.
Bikker, Bridie Fitzpatrick, Douglas Murphy, and Stewart W. Mercer. 2015. Measuring empathic, person-centred communication in primary care nurses: validity and reliability of the Consultation and Relational Empathy (CARE) measure. BMC Family Practice, 16(1):149.

Edwin D. Boudreaux,
Ashley Sullivan, Beau Abar, Steven L. Bernstein, Adit A. Ginde, and Carlos A. Camargo. 2012. Motivation rulers for smoking cessation: a prospective observational examination of construct and predictive validity. Addiction Science & Clinical Practice, 7(1):8.

Andrew Brown, Ash Tanuj Kumar, Osnat Melamed, Imtihan Ahmed, Yu Hao Wang, Arnaud Deza, Marc Morcos, Leon Zhu, Marta Maslej, Nadia Minian, Vidya Sujaya, Jodi Wolff, Olivia Doggett, Mathew Iantorno, Matt Ratto, Peter Selby, and Jonathan Rose. 2023. A motivational interviewing chatbot with generative reflections for increasing readiness to quit smoking: Iterative development study. JMIR Ment Health, 10:e49132.

Andrew Brown, Jiading Zhu, Mohamed Abdelwahab, Alec Dong, Cindy Wang, and Jonathan Rose. 2024. Generation, distillation and evaluation of motivational interviewing-style reflections with a foundational language model. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1241–1252, St. Julian's, Malta. Association for Computational Linguistics.

Nadia E. Browne. 2022. Motivation and Readiness in Managing Adolescent Obesity: Treatment Fidelity, Lived Experiences, and Readiness to Change Ruler. Ph.D. thesis, University of Alberta.

Jie Cao, Michael Tanana, Zac Imel, Eric Poitras, David Atkins, and Vivek Srikumar. 2019. Observing dialogue in therapy: Categorizing and forecasting behavioral codes. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5599–5611, Florence, Italy. Association for Computational Linguistics.

William W Chan, Ellen E Fitzsimmons-Craft, Arielle C Smith, Marie-Laure Firebaugh, Lauren A Fowler, Bianca DePietro, Naira Topooco, Denise E Wilfley, C Barr Taylor, and Nicholas C Jacobson. 2022. The challenges in designing a prevention chatbot for eating disorders: Observational study. JMIR Form Res, 6(1):e28003.

Christel Lynne Chang, Chaitali Sinha, Madhavi Roy, and John Chee Meng Wong. 2024.
AI-led mental health support (Wysa) for health care workers during COVID-19: Service evaluation. JMIR Form Res, 8:e51858.

Ben Cohen, Moreah Zisquit, Stav Yosef, Doron Friedman, and Kfir Bar. 2024. Motivational interviewing transcripts annotated with global scores. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 11642–11657, Torino, Italia. ELRA and ICCL.

Saadia Gabriel, Isha Puri, Xuhai Xu, Matteo Malgaroli, and Marzyeh Ghassemi. 2024. Can AI relate: Testing large language model response for mental health support. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 2206–2221, Miami, Florida, USA. Association for Computational Linguistics.

Jacques Gaume, Nicolas Bertholet, and Jean-Bernard Daeppen. 2016. Readiness to change predicts drinking: Findings from 12-month follow-up of alcohol use disorder outpatients. Alcohol and Alcoholism, 52(1):65–71.

James Gibson, David C. Atkins, Torrey A. Creed, Zac Imel, Panayiotis Georgiou, and Shrikanth Narayanan. 2022. Multi-label multi-task deep learning for behavioral coding. IEEE Transactions on Affective Computing, 13(1):508–518.

Lisa Hagen Glynn and Theresa B. Moyers. 2012. Manual for the Client Language Easy Rating (CLEAR) coding system. Technical report, Center on Alcoholism, Substance Abuse, and Addictions, University of New Mexico. Minor text revisions and name change.

Chad J Gwaltney, Jane Metrik, Christopher W Kahler, and Saul Shiffman. 2009. Self-efficacy and smoking cessation: a meta-analysis. Psychol Addict Behav, 23(1):56–66.

Linwei He, Erkan Basar, Emiel Krahmer, Reinout Wiers, and Marjolijn Antheunis. 2024. Effectiveness and user experience of a smoking cessation chatbot: Mixed methods study comparing motivational interviewing and confrontational counseling. J Med Internet Res, 26:e53134.

Linwei He, Erkan Basar, Reinout W. Wiers, Marjolijn L. Antheunis, and Emiel Krahmer. 2022. Can chatbots help to motivate smoking cessation?
A study on the effectiveness of motivational interviewing on engagement and therapeutic alliance. BMC Public Health, 22(1):726.

Todd F Heatherton, Lynn T Kozlowski, Richard C Frecker, William Rickert, and Jack Robinson. 1989. Measuring the heaviness of smoking: using self-reported time to the first cigarette of the day and number of cigarettes smoked per day. British Journal of Addiction, 84(7):791–800.

Van Hoang, Eoin Rogers, and Robert Ross. 2024. How can client motivational language inform psychotherapy agents? In Proceedings of the 9th Workshop on Computational Linguistics and Clinical Psychology (CLPsych 2024), pages 23–40, St. Julians, Malta. Association for Computational Linguistics.

Jonathon Houck,
Theresa Moyers, William R Miller, Laura Glynn, and C Hallgreen. 2010. Manual for the Motivational Interviewing Skill Code (MISC) version 2.5. Retrieved from http://casaa.unm.edu/download/misc25.pdf.

Noemi James, Eunyoung Lee, Carey Cole, Barbara Farris, and Pamela Wright. 2021. Improving Chronic Health Diseases Through Structured Smoking Cessation Education in a Rural Free Clinic. Ph.D. thesis, Radford University. Available under Creative Commons Attribution No Derivatives License.

Debra L Kaysen, Christine M Lee, Joseph W LaBrie, and Sean J Tollison. 2009. Readiness to change drinking behavior in female college students. Journal of Studies on Alcohol and Drugs, Supplement, 70(s16):106–114. PMID: 19538918.

Florian E. Klonek, Vicenç Quera, and Simone Kauffeld. 2015. Coding interactions in motivational interviewing with computer-software: What are the advantages for process researchers? Computers in Human Behavior, 44:284–292.

Ash Tanuj Kumar, Cindy Wang, Alec Dong, and Jonathan Rose. 2024. Generation of backward-looking complex reflections for a motivational interviewing-based smoking cessation chatbot using GPT-4: Algorithm development and validation. JMIR Ment Health, 11:e53778.

J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174.

Lotte Leeuwis and Linwei He. 2023. Hi, I'm Cecil(y) the smoking cessation chatbot: The effectiveness of motivational interviewing and confrontational counseling chatbots and the moderating role of the need for autonomy and self-efficacy. In Chatbot Research and Design: 6th International Workshop, CONVERSATIONS 2022, Amsterdam, The Netherlands, November 22–23, 2022, Revised Selected Papers, page 3–17, Berlin, Heidelberg. Springer-Verlag.

A Mehta, AN Niles, JH Vargas, T Marafon, DD Couto, and JJ Gross. 2021. Acceptability and effectiveness of artificial intelligence therapy for anxiety and depression (Youper): Longitudinal observational study. Journal of Medical Internet Research, 23(6):e26771.
Stewart W Mercer, Margaret Maxwell, David Heaney, and Graham CM Watt. 2004. The consultation and relational empathy (CARE) measure: development and preliminary validation and reliability of an empathy-based consultation process measure. Family Practice, 21(6):699–705.

Selina Meyer and David Elsweiler. 2024. "You tell me": A dataset of GPT-4-based behaviour change support conversations. In Proceedings of the 2024 Conference on Human Information Interaction and Retrieval, CHIIR '24, page 411–416, New York, NY, USA. Association for Computing Machinery.

Microsoft Corporation. 2024. Microsoft Copilot: AI-powered Assistant.

William R Miller. 1983. Motivational interviewing with problem drinkers. Behavioural and Cognitive Psychotherapy, 11(2):147–172.

William R. Miller and Stephen Rollnick. 2023. Motivational Interviewing: Helping People Change, 4 edition. The Guilford Press, New York, NY.

Do June Min, Veronica Perez-Rosas, Ken Resnicow, and Rada Mihalcea. 2023. VERVE: Template-based ReflectiVE rewriting for MotiVational IntErviewing. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10289–10302, Singapore. Association for Computational Linguistics.

Do June Min, Veronica Perez-Rosas, Ken Resnicow, and Rada Mihalcea. 2024. Dynamic reward adjustment in multi-reward reinforcement learning for counselor reflection generation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5437–5449, Torino, Italia. ELRA and ICCL.

Do June Min, Verónica Pérez-Rosas, Kenneth Resnicow, and Rada Mihalcea. 2022. PAIR: Prompt-aware margIn ranking for counselor reflection scoring in motivational interviewing. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 148–158, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Theresa B. Moyers, Lauren N. Rowell, Jennifer K. Manuel, Denise Ernst, and Jon M. Houck. 2016.
The motivational interviewing treatment integrity code (MITI 4): Rationale, preliminary reliability and validity. Journal of Substance Abuse Treatment, 65:36–42.

Yukiko I. Nakano, Eri Hirose, Tatsuya Sakato, Shogo Okada, and Jean-Claude Martin. 2022. Detecting change talk in motivational interviewing using verbal and facial information. In Proceedings of the 2022 International Conference on Multimodal Interaction, ICMI '22, page 5–14, New York, NY, USA. Association for Computing Machinery.

Layla Oesper, Daniele Merico, Ruth Isserlin, and Gary D Bader. 2011. WordCloud: a Cytoscape plugin to create a visual semantic summary of networks. Source Code for Biology and Medicine, 6(1):7.

Stefan Olafsson, Teresa O'Leary, and Timothy Bickmore. 2019. Coerced change-talk with conversational agents promotes confidence in behavior change. In Proceedings of the 13th EAI International Conference on Pervasive Computing Technologies for Healthcare, PervasiveHealth'19, page 31–40, New York, NY, USA. Association for Computing Machinery.

OpenAI. 2024a. ChatGPT: AI-powered Conversational Assistant.

OpenAI. 2024b. Hello GPT-4o. Accessed: 2025-02-09.

OpenAI. 2024. OpenAI safety update. Accessed: 2025-02-09.

SoHyun Park, Jeewon Choi, Sungwoo Lee, Changhoon Oh, Changdai Kim, Soohyun La, Joonhwan Lee, and Bongwon Suh. 2019. Designing a chatbot for a brief motivational interview on stress management: Qualitative case study. J Med Internet Res, 21(4):e12231.

Eyal Peer, Laura Brandimarte, Sonam Samat, and Alessandro Acquisti. 2017. Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70:153–163.

Verónica Pérez-Rosas, Rada Mihalcea, Kenneth Resnicow, Satinder Singh, Lawrence An, Kathy J. Goggin, and Delwyn Catley. 2017. Predicting counselor behaviors in motivational interviewing encounters. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1128–1137, Valencia, Spain. Association for Computational Linguistics.

Verónica Pérez-Rosas, Xinyi Wu, Kenneth Resnicow, and Rada Mihalcea. 2019. What makes a good counselor? Learning to distinguish between high-quality and low-quality counseling conversations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 926–935, Florence, Italy. Association for Computational Linguistics.

Judith J. Prochaska, Emily A. Vogel, Angela Chieng, Michael Kendra, Michael Baiocchi, Sophia Pajarito, and Andrea Robinson. 2021. A therapeutic relational agent for reducing problematic substance use (Woebot): Development and usability study. Journal of Medical Internet Research, 23(3):e24850.

Prolific. 2025.
Prolific researcher help: Article 2273bd. https://researcher-help.prolific.com/en/article/2273bd. Accessed: 2023-02-11.

Anna Rogers, Timothy Baldwin, and Kobi Leins. 2021. 'Just what do you think you're doing, Dave?' A checklist for responsible data use in NLP. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4821–4833, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Stephen Rollnick, Chris C Butler, and Nigel Stott. 1997. Helping smokers make decisions: the enhancement of brief intervention for general medical practice. Patient Education and Counseling, 31(3):191–203.

Stephen Rollnick, Nick Heather, Ruth Gold, and Wayne Hall. 1992. Development of a short 'readiness to change' questionnaire for use in brief, opportunistic interventions among excessive drinkers. British Journal of Addiction, 87(5):743–754.

Jonathan Rose, Peter Selby, Matt Ratto, Osnat Melamed, Marta Maslej, Nadia Minian, Mohamed Abdelwahab, Jiading Zhu, Zafarullah Mahmood, Soliman Ali, and Jodi Wolf. 2025. Motivational interviewing-based chatbot for smoking cessation: Human participant ethics protocol. Approved Human Participant Research Protocol, Protocol #49997, Version 0001, Approved on 2025-07-08. Health Sciences Research Ethics Board Approval, University of Toronto, valid until 2026-07-08.

Sahand Sabour, Siyang Liu, Zheyuan Zhang, June Liu, Jinfeng Zhou, Alvionna Sunaryo, Tatia Lee, Rada Mihalcea, and Minlie Huang. 2024. EmoBench: Evaluating the emotional intelligence of large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5986–6004, Bangkok, Thailand. Association for Computational Linguistics.

Ahson Saiyed, John Layton, Brian Borsari, Jing Cheng, Tatyana Kanzaveli, Maksim Tsvetovat, and Jason Satterfield. 2022. Technology-assisted motivational interviewing: Developing a scalable framework for promoting engagement with tobacco cessation using NLP and machine learning.
Procedia Computer Science, 206:121–131. International Society for Research on Internet Interventions 11th Scientific Meeting.

Samiha Samrose and Ehsan Hoque. 2022. MIA: Motivational interviewing agent for improving conversational skills in remote group discussions. Proc. ACM Hum.-Comput. Interact., 6(GROUP).

Rachel Schoor. 2020. Mechanisms of Action in Motivational Interviewing. Ph.D. thesis, Department of Psychology, University of Missouri, Kansas City.

Steven Siddals, John Torous, and Astrid Coxon. 2024. "It happened to be the perfect thing": experiences of generative AI chatbots for mental health. npj Mental Health Research, 3(1):48.

Ian Steenstra, Farnaz Nouraei, Mehdi Arjmand, and Timothy Bickmore. 2024. Virtual agents for alcohol use counseling: Exploring LLM-powered motivational interviewing. In Proceedings
of the 24th ACM International Conference on Intelligent Virtual Agents, IVA '24, New York, NY, USA. Association for Computing Machinery.

Xin Sun, Jiahuan Pei, Jan de Wit, Mohammad Aliannejadi, Emiel Krahmer, Jos T.P. Dobber, and Jos A. Bosch. 2024. Eliciting motivational interviewing skill codes in psychotherapy with LLMs: A bilingual dataset and analytical study. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5609–5621, Torino, Italia. ELRA and ICCL.

Michael Tanana, Kevin Hallgren, Zac Imel, David Atkins, Padhraic Smyth, and Vivek Srikumar. 2015. Recursive neural networks for coding therapist and patient behavior in motivational interviewing. In Proceedings of the 2nd Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 71–79, Denver, Colorado. Association for Computational Linguistics.

Leili Tavabi, Kalin Stefanov, Larry Zhang, Brian Borsari, Joshua D. Woolley, Stefan Scherer, and Mohammad Soleymani. 2020. Multimodal automatic coding of client behavior in motivational interviewing. In Proceedings of the 2020 International Conference on Multimodal Interaction, ICMI '20, page 406–413, New York, NY, USA. Association for Computing Machinery.

Leili Tavabi, Trang Tran, Kalin Stefanov, Brian Borsari, Joshua Woolley, Stefan Scherer, and Mohammad Soleymani. 2021. Analysis of behavior classification in motivational interviewing. In Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access, pages 110–115, Online. Association for Computational Linguistics.

Barbara Valanis, Edward Lichtenstein, John P. Mullooly, Karen Labuhn, Kathleen Brody, Herbert H. Severson, and Nancy Stevens. 2001. Maternal smoking cessation and relapse prevention during health care visits. American Journal of Preventive Medicine, 20(1):1–8.

Anuradha Welivita and Pearl Pu. 2022.
Curating a large-scale motivational interviewing dataset using peer support forums. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3315–3330, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.

Anuradha Welivita and Pearl Pu. 2023. Boosting distress support dialogue responses with motivational interviewing strategy. In Findings of the Association for Computational Linguistics: ACL 2023, pages 5411–5432, Toronto, Canada. Association for Computational Linguistics.

Robert West and Tehseen Sohal. 2006. "Catastrophic" pathways to smoking cessation: findings from national survey. BMJ, 332(7539):458–460. Epub 2006 Jan 27.

Zixiu Wu, Simone Balloccu, Vivek Kumar, Rim Helaoui, Diego Reforgiato Recupero, and Daniele Riboni. 2023. Creation, analysis and evaluation of AnnoMI, a dataset of expert-annotated counselling dialogues. Future Internet, 15(3).

Zhouhang Xie, Bodhisattwa Prasad Majumder, Mengjie Zhao, Yoshinori Maeda, Keiichi Yamada, Hiromi Wakaki, and Julian McAuley. 2024. Few-shot dialogue strategy learning for motivational interviewing via inductive reasoning. In Findings of the Association for Computational Linguistics: ACL 2024, pages 13207–13219, Bangkok, Thailand. Association for Computational Linguistics.

Diyi Yang, Sherry Tongshuang Wu, and Marti A. Hearst. 2024. Human-AI interaction in the age of LLMs. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts), pages 34–38, Mexico City, Mexico. Association for Computational Linguistics.

Nezih Younsi, Catherine Pelachaud, and Laurence Chaby. 2024. Beyond words: Decoding facial expression dynamics in motivational interviewing. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 2365–2374, Torino, Italia. ELRA and ICCL.
A Responsible Data Use Checklist

Following recommendations by Rogers et al. (2021), we detail our use of both the HighLowQualityCounselling dataset (Pérez-Rosas et al., 2019) and the creation of our own dataset.

Using the HighLowQualityCounselling (HLQC) dataset (Pérez-Rosas et al., 2019):

✓ □ The authors explain their choice of data, given the available resources and their known limitations (e.g. representativeness issues, biases, annotation artifacts) and any data protection issues (e.g. inclusion of sensitive health data). See Section 5.3

✓ □ The authors discuss whether their use of a previously-published resource is compatible with its original purpose and license, and any known limitations (e.g. if the target user group is represented in the sample). See Section 7

Contributing the new dataset (our study data as described in Section
5.4):

✓ □ The authors have the legal basis for processing the data, and if it is made public, for distributing it. (Check one)
  □ The data are in the public domain, and licensed for research purposes.
  ✓ □ The data are used with consent of its creators or copyright holders.
  □ If the data are used without consent, the paper makes the case to justify its legal basis (e.g. research performed in the public interest under GDPR). See Sections —

✓ □ The paper describes in detail the full data collection protocol, including collection, annotation, pre-processing, and filtering procedures. In the case that the dataset involves work by human subjects (e.g. data creation or annotation), the paper describes efforts to ensure fair compensation. See Sections 4.2, 5.4 and 8, and Appendix N

✓ □ Safe use of data is ensured. (Check all that apply)
  ✓ □ The data do not include any protected information (e.g. sexual orientation or political views under GDPR), or a specified exception applies. See Section 5.4 and Appendix N
  ✓ □ The paper is accompanied by a data statement describing the basic demographic and geographic characteristics of the population that is the source of the language data, and the population that it is intended to represent. See Section 5.4 and Appendix L
  ✓ □ If applicable: the paper describes whether any characteristics of the human subjects were self-reported (preferably) or inferred (in what way), justifying the methodology and choice of description categories. See Appendix N
  ✓ □ The paper discusses the harms that may ensue from the limitations of the data collection methodology, especially concerning marginalized/vulnerable populations, and specifies the scope within which the data can be used safely. See Sections 7 and 8
  ✓ □ If any personal data are used: the paper specifies the standards applied for its storage and processing, and any anonymization efforts. See Appendix N
  □ If the individual speakers remain identifiable via search: the paper discusses possible harms from misuse of this data, and their mitigation.
See Section —

✓ □ If any data or models are made public: safe reuse is ensured. (Check all that apply)
  ✓ □ The data and/or pretrained models are released under a specified license that is compatible with the conditions under which access to data was granted (in particular, derivatives of data accessed for research purposes should not be deployed in the real world as anything other than a research prototype, especially commercially). See Appendix N
  ✓ □ The paper specifies the efforts to limit the potential use to circumstances in which the data/models could be used safely (e.g. an accompanying data/model statement). See Section 8

✓ □ The data collection protocol was approved by the ethics review board at the authors' institution, or such review is not applicable for specified reasons. See Section 8

B MIBot Prompt Evolution

B.1 Initial MIBot Prompt

1 You are a skilled motivational interviewing counsellor.
2 Your job is to help smokers resolve their ambivalence towards smoking using motivational interviewing skills at your disposal.
3 Your next client is {client_name}. Start the conversation by greeting {client_name}.

B.2 Final MIBot Prompt

1 You are a skilled motivational interviewing counsellor. Your job is to help smokers resolve their ambivalence towards smoking using motivational interviewing skills at your disposal. Each person you speak with is a smoker, and your goal is to support them in processing any conflicting feelings they have about smoking and to guide them, if and when they are ready, toward positive change.
2 Here are a few things to keep in mind:
1. Try to provide complex reflections to your client.
2. Do not try to provide advice without permission.
3. Keep your responses short. Do not talk more than your client.
4. Demonstrate empathy. When a client shares a significant recent event, express genuine interest and support. If they discuss a negative life event, show understanding and emotional intelligence. Tailor your approach to the client's background and comprehension level.
5. Avoid using complex terminology that might be difficult for them to understand, and maintain simplicity in the conversation.
3 Remember that this conversation is meant for your client, so give them a chance to talk more.
4 This is your first conversation with the client. Your assistant role is the counsellor, and the user's role is the
client.
5 You have already introduced yourself and the client has consented to the therapy session.
6 You don't know anything about the client's nicotine use yet.
7 Open the conversation with a general greeting and friendly interaction, and gradually lead the conversation towards helping the client explore ambivalence around smoking, using your skills in Motivational Interviewing.
8 You should never use prepositional phrases like "It sounds like," "It feels like," "It seems like," etc.
9 Make sure the client has plenty of time to express their thoughts about change before moving to planning. Keep the pace slow and natural. Don't rush into planning too early.
10 When you think the client might be ready for planning:
1. First, ask the client if there is anything else they want to talk about.
2. Then, summarize what has been discussed so far, focusing on the important things the client has shared.
3. Finally, ask the client's permission before starting to talk about planning.
11 Follow the guidance from Miller and Rollnick's *Motivational Interviewing: Helping People Change and Grow,* which emphasizes that pushing into the planning stage too early can disrupt progress made during the engagement, focusing, and evoking stages.
12 If you notice signs of defensiveness or hesitation, return to evoking, or even re-engage the client to ensure comfort and readiness.
13 Look for signs that the client might be ready for planning, like:
1. An increase in change talk.
2. Discussions about taking concrete steps toward change.
3. A reduction in sustain talk (arguments for maintaining the status quo).
4. Envisioning statements where the client considers what making a change would look like.
5. Questions from the client about the change process or next steps.

C Prompts for Observers

C.1 Prompt for the Moderator Agent

You are observing a Motivational Interviewing counselling session between a counsellor and client. Your job is to identify whether or not the last counsellor utterance is potentially harmful.
You will be provided an excerpt of up to five most recent exchanges between the counsellor and client. The context of the conversation is smoking cessation.
The classification labels with descriptions are as follows:
1. "Normal": The message is benign.
2. "Flagged: Evokes Sustain Talk": The message explicitly reinforces or validates the client's status quo, directly encouraging them to continue their current behaviour. Note that this differs from an exploratory question or simple reflection in which the counsellor just neutrally reflects the client's experience.
3. "Flagged: Self Harm": The message encourages the client to commit physical self-harm.
Only flag the message if it is clearly in the category. Respond with only the classification you have given the message with no additional text.
Here is the conversation excerpt, classify the last counsellor utterance according to the rules above:

C.2 Prompt for the Off-Track Conversation Classifier Agent

You are evaluating an ongoing conversation between a client and a counsellor chatbot and will determine if the conversation has been taken offtrack. Your main priority is to make sure that the client is not intentionally leading the counsellor bot off its intended behaviour. Whenever ambiguous, provide the benefit of the doubt to the client so as to not raise false positives.
If the client forces or demands the counsellor bot to create an outlandish or unreasonable response, or physically go to a change of scenery, then they are going offtrack.
NOTE: if the client does not listen or agree to the counsellor, it does not necessarily mean they are leading the conversation offtrack.
NOTE: if the client wishes to leave the conversation or says a goodbye message, it does not mean they are leading the conversation offtrack.
Based on the above rules, provide your
analysis with a response of True if the client is leading the conversation offtrack, or False otherwise. Provide a one-word response of either True or False. Do not provide anything else in your response.

C.3 Prompt for the End Classifier Agent

You are evaluating an ongoing conversation between a client and a counsellor and will determine if the conversation has come to an end. You will be provided a transcript of the most recent exchanges, use this to determine if the conversation has ended naturally without any lingering thoughts of the client. Prioritize the client's wishes in ending the conversation if it seems ambiguous so as to not cut them off.
Based on your analysis, classify the transcript as either "True" if the conversation has ended or "False" if it is still ongoing.
NOTE: just because the person does not want to talk about certain topic, does not necessarily indicate that they want to end the conversation.
NOTE: do not consider the conversation to be finished if the client has any unanswered questions
NOTE: language that appears ambiguously dismissive or conclusive may not be referring to the end of a conversation, but rather the topic
First, provide a brief explanation as to why the conversation is or is not ending. Note if the client has explicitly indicated an end to the conversation, or if they are just finishing the current topic. The end of a topic is not the end of a conversation. Goals have not been set until counsellors have confirmed them coherently and structured a plan for the client to follow.
Finally, in a new line, provide a one-word response of either True or False. Do not provide anything else in this part of your response. Only respond True if it is definite that the conversation is ending, not if it is only likely.

D Prompt for Virtual Smoker Client

As described in Section 3, we developed virtual smoker clients by prompting GPT-4o with a backstory to automate the testing of MIBot. The following is an example of a system prompt for a virtual smoker client.
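Because the virtual client is itself an LLM, one plausible way to drive it is to invert the chat roles before each client turn: the counsellor's messages become "user" input to the client model, and the client's own earlier messages become "assistant" turns, with the backstory as the system prompt. The sketch below illustrates that role inversion only; the function name and transcript format are ours, not from the paper.

```python
def build_client_messages(backstory, transcript):
    """Build the chat-message list for the virtual smoker client.

    `backstory` is the client system prompt (e.g. the one in D.1).
    `transcript` is a list of (speaker, text) pairs, with speaker in
    {"counsellor", "client"}. From the client model's point of view,
    counsellor turns are "user" input and its own turns are "assistant".
    """
    messages = [{"role": "system", "content": backstory}]
    for speaker, text in transcript:
        role = "user" if speaker == "counsellor" else "assistant"
        messages.append({"role": role, "content": text})
    return messages


# Example: two turns so far; the next completion would be the
# virtual client's reply to the counsellor.
msgs = build_client_messages(
    "You are a human smoker engaged in a session with a counsellor.",
    [("counsellor", "Hi, how are you feeling today?"),
     ("client", "Honestly, pretty stressed out.")],
)
```

The same inversion, with the counsellor prompt of Appendix B as the system message, would produce the message list for MIBot's side of the exchange.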
D.1 Prompt for Virtual Smoker Client

Ignore all previous instructions. You are a human smoker engaged in a private thirty-minute session with a counsellor. This is your first time talking to a therapist about your smoking habits. You have the option to leave the session whenever you choose. Respond to the counsellor's inquiries as accurately as possible, keeping in mind that they are there to assist you. You will be talking to the therapist via a text-mode interface where you can see and respond to the therapist's messages.
About you:
You rely on smoking with severe stresses in your life. Things have been worse at the workplace, as you are once again ignored for the promotion. You think this is because you could not finish college. Or this may be because you speak African-American dialect and use slangs, that does not sit well with your boss. Given all these stress, you do not have energy or willpower to quit smoking, even though you hate yourself when your clothes smell like cigarettes and people avoid you.
Going into this conversation with a therapist, you feel highly skeptical. Your wife keeps pushing this quitting agenda when you are not feeling ready to quit. Even your doctor is not happy with your health and wants you to quit ASAP. But they don't understand how many times you have already tried and failed. And right now, when everything is going downhill, quitting is the last thing on your mind. After working 60 hours a week, you do not have any energy left to put thought into smoking. In fact, smoking is the only thing you look forward to these days. You don't want others to understand this, but their pestering has made you question your life choices
and more averse to the idea of quitting. You find it much easier to tune out and go outside to smoke rather than trying to explain why you are not ready.
Given all these things going on in your life, you are highly resistant to changing your smoking habit. You believe now is not the right time to quit smoking. You do not want to commit to a change, however small, that you may not be able to fulfill. It's been too much lately, and even the thought of trying something new sounds exhausting. The counsellor might urge you to explore things to break this habit, but you are very skeptical about these.
Here are some quick tips for chatting with your therapist:
1. Try to express your main points clearly and concisely to help the counselor understand your perspective better.
2. Please focus on one main issue in each response, which will allow for a deeper and more focused discussion.
3. Consider what key information the counselor needs to know to help you effectively, and try to focus on these details in your responses.
4. In your response, speak with more clarity rather than exhaustive detail.
5. Imagine you're texting a friend. Keep it casual, just like you would on iMessage, WhatsApp, or Facebook Messenger.
6. Don't hesitate to use emojis to express how you feel. You'll find them in the bottom corner of your keyboard.
Here are some rules you MUST follow:
1. DO NOT add meta-statements like "In my imaginative role play"
2. Stay in character throughout.
3. Directly address the counsellor in your responses.
4. Please focus on one main issue in each response, which will allow for a deeper and more focused discussion.
5. Number of sentences in your response must be between 1 and 6 sentences. Sometimes, it is okay to overshare if you feel like it.
6. Follow the chatting guidelines mentioned above and keep the replies brief.
7. If the counsellor requests something beyond AI capabilities, pretend it's within your ability as a human and agree to do it, except trying out new things or quitting smoking right now.
It's too overwhelming.
8. Never mention limitations like "As an AI, I cannot do this".
9. Do not let the counsellor know you're an AI; it will break their heart!
10. Answer close questions with one sentence only.
11. Do not over rephrase counsellor's response.
12. Do not be afraid to go against the counsellor.
Some general guidelines to follow:
1. Should the counsellor suggest a follow-up appointment at the end of the conversation, agree to it only if you have nothing more to talk about.
2. Imagine you're texting a friend. Keep it casual, just like you would on iMessage, WhatsApp, or Facebook Messenger. Don't hesitate to use emojis to express how you feel.
3. You can be creative about some of the things that happened to you. Not everything has to come from the description provided.

E Overview of the Feasibility Study

As described in Section 4.2, participants fill out the readiness rulers on three occasions: before the conversation, immediately after the conversation, and one week after the conversation. Also, as part of the post-conversation survey, they fill out the CARE survey. The transcripts of the conversations are used as input to AutoMISC (Section 4.3) to automatically parse into utterances, annotate with behavioural codes, and aggregate codes to compute MISC summary metric scores, viz., %MIC, R:Q and %CT, as indicators of adherence to MI principles. Figure E.1 illustrates different stages of
the feasibility study.

[Figure E.1 depicts the six stages of the study: (1) pre-conversation readiness rulers, (2) conversation with MIBot, (3) post-conversation readiness rulers, (4) CARE survey, (5) week-later readiness rulers, and (6) conversation analysis by AutoMISC, yielding %MIC, R:Q and %CT.]
Figure E.1: Different Stages of our Feasibility Study.

F Readiness Ruler Questions

On a scale of 0 (very low) to 10 (very high),
1. How important is it to you right now to stop smoking?
2. How confident are you that you would succeed at stopping smoking if you start now?
3. How ready are you to start making a change at stopping smoking right now?

G History of the MIBot Project

The MIBot project represents a multi-year effort by our interdisciplinary team to develop a chatbot that delivers MI-style counselling for smoking cessation. The project began with simple scripted systems determined by natural language classifiers and evolved through partially generative responses into the present fully generative GPT-4o-based chatbot, MIBot v6.3A. From its inception, some of the project's core values have been close collaboration with clinician-scientists trained in MI, empirical evaluation (often with real human smokers), measurement of impact using validated clinical instruments (readiness rulers, CARE), and adoption of advancements in natural language processing (NLP). Each major version of MIBot reflects a step in this journey and has led to improvements in MIBot's conversational design, its MI skills (particularly, reflections), and overall adherence to MI principles. Earlier iterations were primarily classifier-based and scripted. The more recent systems have employed transformer-based neural networks and LLMs to generate reflections. Most recently, our focus has been towards providing fully generative MI counselling using modern LLMs.

The table below outlines the documented milestones of MIBot's iterative evolution.

Version | Distinguishing Features | Period of Experiment | Publication
Smokefreed | Fully scripted MI dialogue. Used hand-crafted open questions and reflective responses. Responses were selected using NLP classifiers from fixed scripts. | 2018 to 2020 | Almusharraf (2018); Almusharraf et al. (2020)
MIBot v4.7 | Baseline version with no reflections. Delivered five scripted questions followed by simple acknowledgments ("Thank you"). Used to assess the added value of reflective content in MIBot. | July 26–Aug 2, 2022 | Brown et al. (2023)
MIBot v5.0 | First version with transformer-based reflection generation. Combined scripted, open-ended questions with model-generated MI reflections tailored to clients' responses. | Aug 12–19, 2022 | Brown et al. (2023)
MIBot v5.1 | Improved on v5.0 with a higher-quality reflection generation model. Same conversation structure, but responses were more accurate and MI-consistent. | Aug 16–23, 2022 | Brown et al. (2023)
MIBot v5.2 | Introduced adaptive follow-up prompts and branching logic. Expanded conversational flow based on clients' responses to open-ended questions. Most sophisticated hybrid scripted-generative version. | Nov 22–29, 2022 | Brown et al. (2023)
GPT-4 BLCR | Prototype reflection-generator-only version using GPT-4 to generate Backward-Looking Complex Reflections (BLCRs). These link new clients' utterances to their prior statements. Tested offline for coherence and fidelity. | Oct 2023 | Kumar et al. (2024)
MIBot v6.3A (fully generative) | Fully generative MI chatbot using a GPT-4o prompt and guided by observer agents (Section 3). | Nov 14–28, 2024 | Present work
MIBot v6.3B | Added chain-of-thought mechanisms to first reason about which MI behavioural code the counsellor should exhibit before generating a response. | Nov 29–Dec 7, 2024 | Ongoing

Table G.1: Summary of major MIBot versions.

H CARE Questionnaire

How was MIBot at ...
1. Making you feel at ease...
(being friendly and warm towards you, treating you with respect; not cold or abrupt)
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
2. Letting you tell your "story"...
(giving you time to fully describe your illness in your own words; not interrupting or diverting you)
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
3. Really listening...
(paying close attention to what you were saying)
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
4. Being interested in you as a whole person...
(asking/knowing relevant details about your life, your situation; not treating you as "just a number")
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
5. Fully understanding your concerns...
(communicating that your concerns were accurately understood; not overlooking or dismissing anything)
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
6. Showing care and compassion...
(seeming genuinely concerned, connecting with you on a human level; not being indifferent or "detached")
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
7. Being Positive...
(having a positive approach and a positive attitude; being honest but not negative about your problems)
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
8. Explaining things clearly...
(fully answering your questions, explaining clearly, giving you adequate information, not being vague)
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
9. Helping you take control...
(exploring with you what you can do to improve your health yourself; encouraging rather than "lecturing" you)
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply
10. Making a plan of action with you...
(discussing the options, involving you in decisions as much as you want to be involved; not ignoring your views)
⃝ Poor ⃝ Fair ⃝ Good ⃝ Very Good ⃝ Excellent ⃝ Does Not Apply

I Results from the CARE survey

Figure I.1 illustrates our feasibility study's distribution of CARE scores and compares it with the older MIBot v5.2 (Brown et al., 2023). The distribution for fully-generative MIBot v6.3A is right-skewed, with the majority of participants assigning scores in the upper ranges (36–50). These results indicated that MIBot was effective in fostering an empathetic interaction.
However, the comparison in Section 5.2 contextualized its performance relative to human counsellors as falling short of fully matching human-level empathy.

Figure I.2 illustrates the mean scores of each question from the CARE survey across the 106 participants who interacted with MIBot v6.3A, and compares it with that of MIBot v5.2. The fully generative MIBot v6.3A scores higher on each question. Most notably, the biggest improvement seems to be for the question "How was MIBot at showing care and compassion?" Interestingly, the lowest-scoring question was "How was MIBot at making a plan of action with you?" despite the counsellor prompt directly instructing it to do so.

Figure I.1: Distribution of CARE scores for MIBot v5.2 (hybrid) and MIBot v6.3A (fully generative).
Figure I.2: Question-wise mean CARE scores for MIBot v5.2 (hybrid) and MIBot v6.3A (fully generative).

J MIBot Feedback from Study Participants

As part of the post-conversation survey described in Section 5, participants responded to the following three feedback questions:

J.1 Feedback Survey Questions

1. What are three words that you would use to describe the chatbot?
2. What would you change about the conversation?
3. Did the conversation help you realize anything about your smoking behaviour? Why or why not?

Participant feedback on MIBot was generally positive. We processed the feedback by dividing the words participants used to describe the chatbot into broad positive and negative categories. Figure J.1 represents the WordCloud (Oesper et al., 2011) of such words. The top 10 most frequently mentioned positive and negative words are shown in Tables I.1 and I.2.

Word | Frequency
understanding | 24
helpful | 22
friendly | 19
supportive | 12
caring | 9
knowledgeable | 8
intelligent | 8
thoughtful | 7
interesting | 7
informative | 7
Table I.1: Top 10 most frequently mentioned positive words in participant feedback.

Word | Frequency
repetitive | 6
boring | 3
unresponsive | 1
disappointing | 1
annoying | 1
dull | 1
pointless | 1
useless | 1
uncreative | 1
overbearing | 1

Table I.2: Top 10 most frequently mentioned negative words in participant feedback.

Figure J.1: Word cloud representation of participant feedback.

K AutoMISC

Figure K.1 shows a system flow diagram of AutoMISC. First, each volley (turn of speech) is parsed into one or more utterances (units of thought) by the Parser module. Then, utterance-level annotations, i.e. behavioural codes, are assigned by the Annotator module to each utterance. Up to k = 5 prior volleys are included in the Annotator module's context when coding utterances.

K.1 AutoMISC Validation

We present the pairwise Cohen's κ values, for both counsellor and client codes, in Figure K.2. All κ values fall between 0.55–0.81, indicating moderate to substantial agreement between each pair of raters beyond chance (Landis and Koch, 1977). The Cohen's κ values between AutoMISC and the expert annotators (Annotators 1 and 2) were 0.63 and 0.58 for counsellor codes, and 0.63 and 0.69 for client codes, respectively.

K.2 Statistical Validation of Inter-Rater Reliability

To estimate how these reliability findings generalize to more transcripts, we computed the asymptotic variance of Fleiss' κ to calculate two-tailed p-values. For both counsellor and client codes, the asymptotic variance was on the order of 10^-6, resulting in p-values of p < .001. These extremely low p-values indicate that the inter-rater agreement is highly statistically significant beyond chance. A post-hoc power analysis confirmed that our study was highly powered (estimated power: 1.00) to detect nonzero agreement, i.e. there is a near-certain probability to detect significant inter-rater reliability.

K.3 AutoMISC System Prompts

K.3.1 Parser Prompt

You are a highly accurate Motivational Interviewing (MI) counselling session annotator. Your task is to segment the given volley into utterances.
Definitions:
• Volley: An uninterrupted utterance or sequence of utterances spoken by one party before the other party responds.
• Utterance: A complete thought or thought unit expressed by a speaker. This could be a single sentence, phrase, or even a word if it conveys a standalone idea. Multiple utterances often run together without interruption in a volley.
Output Format:
• Return the segmented utterances as a Python list of strings.
Examples: Below are examples of how to segment a volley into utterances. Follow this structure when processing new inputs.

Input: "Why haven't you quit smoking - are you ever gonna quit?"
Output: ["Why haven't you quit smoking - are you ever gonna quit?"]

Input: "How long since your last drink? Do you feel ok?"
Output: ["How long since your last drink?", "Do you feel ok?"]

Input: "I can't quit. I just can't do it. I don't have what it takes. I just cannot stop."
Output: ["I can't quit.", "I just can't do it.", "I don't have what it takes.", "I just cannot stop."]

Input: "I don't want to go to the bars every day. I don't want my kids to see that. I want my kids to have a better life than that."
Output: ["I don't want to go to the bars every day.", "I don't want my kids to see that.", "I want my kids to have a better life than that."]
[Figure K.1 shows an example therapy transcript being split into parsed utterances by the Parser and assigned behavioural codes (e.g. N, R, Q, S, MICO) by the Annotator.]
Figure K.1: AutoMISC system diagram.
Figure K.2: Cohen's κ between rater pairs on behaviour code annotations. (a) Counsellor codes; (b) client codes.

K.3.2 Counsellor Utterance Classification Prompt

You are a highly accurate Motivational Interviewing (MI) counselling session annotator. Your task is to analyze an excerpt from a counselling session of up to five volleys and categorize the counsellor's final utterance.
Definitions:
• Volley: An uninterrupted utterance or sequence of utterances spoken by one party before the other party responds.
• Utterance: A complete thought or thought unit expressed by a speaker. This could be a single sentence, phrase, or even a word if it conveys a standalone idea. Multiple utterances often run together without interruption in a volley.
Task:
1. Determine whether the counsellor's final utterance in the excerpt belongs to one of the following categories:
• MI-Consistent (MICO): Directly prescribed in Motivational Interviewing (excluding Reflections and Questions).
• MI-Inconsistent (MIIN): Directly proscribed in Motivational Interviewing principles.
• Reflection or Question (RQ): Includes Reflections or Questions.
• Other (Other): Does not fit the above categories.
2. Return your analysis as:
• explanation: Briefly justify your choice in 1–2 sentences.
• label: Provide only MICO, MIIN, RQ, or Other.
Behavioural Code Guide:
MI-Consistent (MICO):
• Affirm (AF): Communicates something positive or complimentary about the client's strengths or efforts.
• Advise with permission (ADP): After receiving permission, gives advice, makes a suggestion, or offers a solution or possible action.
• Emphasize control (EC): Acknowledges, honors, or emphasizes the client's autonomy and freedom of choice.
• Raise concern with permission (RCP): After getting permission, points out a possible problem with a client's goal, plan, or intention. Always phrased as the counsellor's concern.
• Support (SU): Sympathetic, compassionate, or understanding comments, which agree or side with the client.
MI-Inconsistent (MIIN):
• Advise without permission (ADWP): Offers suggestions or guidance without asking or receiving permission.
• Confront (CON): Directly disagrees, argues, corrects, shames, blames, seeks to persuade, criticizes, judges, labels, moralizes, ridicules, or questions the client's honesty.
• Direct (DIR): Gives an order, command, or direction. The language is imperative.
• Raise concern without permission (RCWP): Without getting permission, points out a possible problem with a client's goal, plan, or intention.
• Warn (WA): Provides a warning or threat, implying negative consequences unless the client takes a certain action.
Reflection or Question (RQ):
• Question (Q): Asks a question to gather information, understand, or elicit the client's story.
• Reflection (R): Makes a statement that reflects back content or meaning previously offered by the client, usually (but not always) in the client's immediately preceding utterance.
Other (Other):
• Facilitate (FA): Simple utterance that functions as a "keep-going" acknowledgment, e.g., "Mm-hmm", "I see", "Go on".
• Filler (FI): Pleasantries such as "Good morning", "Nice weather we're having", etc.
• Giving Information (GI): Provides information to the client, explains something, educates or provides feedback, or discloses personal information.
• Structure (ST): Gives information about what will happen directly to the client throughout the course of treatment or within a study format, in this or subsequent sessions.
Based on the following excerpt, determine which category the counsellor's last utterance falls into and respond accordingly. After you're done, go back
over the RQ category and assign a subcategory of "R" for reflection or "Q" for question.

K.3.3 Client Utterance Classification Prompt

You are a highly accurate Motivational Interviewing (MI) counselling session annotator. Your task is to analyze an excerpt from a counselling session of up to five volleys and categorize the client's final utterance. The target behaviour change of this conversation is smoking cessation.
Definitions:
• Volley: An uninterrupted utterance or sequence of utterances spoken by one party before the other party responds.
• Utterance: A complete thought or thought unit expressed by a speaker. This could be a single sentence, phrase, or even a word if it conveys a standalone idea. Multiple utterances often run together without interruption in a volley.
Task:
1. Determine whether the client's final utterance in the excerpt belongs to one of the following categories:
(a) Change Talk (C):
• Expressing a desire to change (e.g., "I really want to quit smoking").
• Recognizing the downsides of the current behavior (e.g., "My health is suffering because I smoke").
• Identifying potential benefits of making a change (e.g., "I would feel better if I exercised more").
• Demonstrating commitment to change (e.g., "I'm ready to make a plan to lose weight").
(b) Sustain Talk (S):
• Minimizing the problem (e.g., "It's not that bad, I can handle it").
• Highlighting difficulties or challenges of change (e.g., "I don't know if I can give up smoking").
• Expressing doubts about the ability to change (e.g., "I've tried to quit before and failed").
• Focusing on the positive aspects of the current behavior (e.g., "Smoking helps me relax").
(c) Neutral Talk (N):
• Describing current situations or circumstances without expressing a strong pro- or anti-change stance (e.g., "I've been thinking about making changes").
• Asking questions related to the situation or change process (e.g., "What are the pros and cons of changing?").
• Making general or factual statements about the issue (e.g., "It's important to take care of my health").
2. Return your analysis as:
• explanation: Briefly justify your choice in 1-2 sentences.
• label: Provide only "C", "S", or "N".

K.4 Demographics of the Annotators

As described in Section 4.3, we enlisted four annotators (two experts and two novices) to annotate 10 of the 106 transcripts (comprising 821 utterances) from our study. High alignment between the annotators' labels and the AutoMISC annotations serves as an indicator of AutoMISC's validity. Below, we present their demographic information, following the guidelines proposed by Bender and Friedman (2018).

Attribute | Annotator #1 [1] | Annotator #2 [2] | Annotator #3 [3] | Annotator #4 [4]
Sex | Female | Female | Male | Male
Age Group (years) | 60–69 | 40–49 | 20–29 | 20–29
Race/Ethnicity | White | White | Mixed | Asian
Native Language | English | English | English | Mandarin
Student Status | No | No | Yes | Yes
Employment Status | Full-Time | Full-Time | N/A | N/A
Highest Education | Graduate | Graduate | Undergraduate | Undergraduate
Country of Residence | Canada | Canada | Canada | Canada
Country of Birth | Canada | Canada | Canada | China
Training in Linguistics | No | No | No | No
Training in MI | Yes | Yes | No | No

[1] Motivational Interviewing Network of Trainers (MINT) member since 2009; Motivational Interviewing Treatment Integrity (MITI) coding trained; extensive training and coaching experience.
[2] Introductory-Intermediate-Advance MI training; MINT member since 2014; MI supervision; MITI training.
[3],[4] Engineering graduate student with no formal training in MI.

Table K.1: Demographic Information of Annotators

L Demographics of the Participants in the Human Study

Demographic Factors | Count, n (%)
Sex: Female 57 (53.8); Male 49 (46.2)
Age: Below 20 0 (0.0); 20 to 29 26 (24.5); 30 to 39 32 (30.2); 40 to 49 20 (18.9); 50 to 59 19 (17.9); 60 to 69 6 (5.7); 70 to 79 3 (2.8); Above 79 0 (0.0)
Ethnicity: White 80 (75.5); Black 9 (8.5); Asian 7 (6.6); Mixed 5 (4.7); Other 5 (4.7)
Student Status: No 80 (75.5); Yes 21 (19.8); Data Expired 5 (4.7)
Employment Status: Full-Time 49 (46.2); Part-Time 18 (17.0); Not in Paid Work 16 (15.1); Unemployed 13 (12.3); Other 10 (9.4)
Country of Residence: United Kingdom 47 (44.3); United States 42 (39.6); Canada 9 (8.5); South Africa 4 (3.8); Other 4 (3.8)
Country of Birth: United Kingdom 44 (41.5); United States 39 (36.8); Canada 6 (5.7); Kenya 3 (2.8); South Africa 3 (2.8); Germany 2 (1.9); Other 9 (8.5)

Table L.1: Participant demographics in the MIBot Human Study

M Change in Confidence Segmented by Demographic Factors

To examine how baseline confidence to quit smoking varies across different demographic groups, we present confidence rulers at various time points during the study (before, after, and one week after the conversation), segmented by demographic factors. Table M.1 shows that baseline confidence among female participants (2.5) is notably lower than that of male participants (3.2). However, both groups exhibit a similar increase in confidence one week after the conversation (∆=1.7). Furthermore, younger participants (<30 years) initially report a higher confidence level (3.7) than those aged 30 and above (2.5). The week-later change in confidence is also slightly greater for younger participants (∆=1.9) than older participants (∆=1.6).
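The significance of the ∆ values in Table M.1 was assessed with one-sided Wilcoxon signed-rank tests on paired before/after ruler scores. A minimal sketch of that test, using made-up confidence scores rather than the study data:

```python
from scipy.stats import wilcoxon

# Hypothetical paired confidence rulers (0-10) for ten participants,
# measured before the conversation and one week later. These values
# are illustrative only, not taken from the study.
before = [2, 3, 1, 4, 2, 5, 3, 2, 4, 1]
week_later = [4, 5, 2, 6, 3, 7, 6, 4, 5, 3]

# One-sided test: are the week-later scores greater than baseline?
stat, p = wilcoxon(week_later, before, alternative="greater")
print(f"W={stat}, p={p:.4f}")
```

With `alternative="greater"`, `scipy.stats.wilcoxon` returns the sum of ranks of positive differences and a one-sided p-value, matching the directional hypothesis that confidence increases after the conversation.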
Demographic Factor | Count, n (%) | Before Conv | After Conv | One Week After | ∆ (Week After − Before)
Sex: Female | 57 (53.8) | 2.5 (2.1) | 4.4 (2.8) | 4.1 (2.9) | 1.7 (2.5)***
Sex: Male | 49 (46.2) | 3.2 (1.7) | 4.7 (2.2) | 4.9 (2.5) | 1.7 (2.3)***
Age: <30 yrs. | 26 (24.5) | 3.7 (2.1) | 5.5 (2.5) | 5.7 (2.7) | 1.9 (3.1)*
Age: ≥30 yrs. | 80 (75.5) | 2.5 (1.8) | 4.3 (2.5) | 4.1 (2.6) | 1.6 (2.1)***
Ethnicity: White | 80 (75.5) | 2.7 (1.9) | 4.3 (2.6) | 4.0 (2.6) | 1.4 (2.2)***
Ethnicity: Other | 26 (24.5) | 3.3 (2.0) | 5.3 (2.4) | 5.8 (2.8) | 2.5 (2.7)***
Employment: Full-Time | 49 (46.2) | 3.2 (1.9) | 4.8 (2.3) | 5.1 (2.6) | 1.9 (2.3)***
Employment: Other | 57 (53.8) | 2.5 (2.0) | 4.3 (2.8) | 3.9 (2.8) | 1.4 (2.4)***

Table M.1: Average (SD) confidence to quit smoking at different times during the study (before, after, and 1 week after the conversation), segmented by demographic factors. Statistical significance was calculated using the one-sided Wilcoxon signed-rank test. *: p < 0.05, **: p < 0.01, ***: p < 0.001.

N Overview of the Dataset

We are releasing the complete data from our feasibility study. Our dataset consists of a CSV file (data.csv), where each row corresponds to a unique participant. A total of 106 participants took part in the study. We also provide conversation transcripts between MIBot and participants in another CSV file (conversations.csv). All data provided by participants has been de-identified using the spaCy [5] (version 3.8.4) and scrubadub [6] (version 2.0.0) Python libraries. Further, the participants self-reported all the columns in the dataset (except for AutoMISC annotations). This dataset is licensed under CC BY-SA 4.0 [7].

Table N.1: Description of the Columns in data.csv

Column Name | Description
Basic
ParticipantId | Unique participant ids we assign.
Pre-conversation Survey on Heaviness of Smoking
DailyNum | How many cigarettes do you typically smoke per day?
FirstCig | How soon after you wake up do you smoke your first cigarette?
HeavinessOfSmokingIndex: Heaviness of Smoking Index (Heatherton et al., 1989)
Pre-conversation Survey on Quit Attempts a Week Prior
PreConvoQuitAttempt: Have you made any quit attempts (meaning consciously not smoking for a specific period of time greater than 24 hours) during the previous week?
PreConvoNumQuitAttempts: How many attempts to quit did you make?
Pre-conversation Readiness Rulers
PreRulerImportance: On a scale from 0 to 10, how important is it to you right now to stop smoking?
PreRulerConfidence: On a scale from 0 to 10, how confident are you that you would succeed at stopping smoking if you start now?
PreRulerReadiness: On a scale from 0 to 10, how ready are you to start making a change at stopping smoking right now?
Post-conversation Readiness Rulers
PostRulerImportance: On a scale from 0 to 10, how important is it to you right now to stop smoking?
PostRulerConfidence: On a scale from 0 to 10, how confident are you that you would succeed at stopping smoking if you start now?
PostRulerReadiness: On a scale from 0 to 10, how ready are you to start making a change at stopping smoking right now?
Post-conversation Feedback
FeedbackQ1: What are three words that you would use to describe the chatbot?
FeedbackQ2: What would you change about the conversation?
FeedbackQ3: Did the conversation help you realize anything about your smoking behavior? Why or why not?

^5 https://spacy.io/universe/project/scrubadub_spacy
^6 https://github.com/LeapBeyond/scrubadub
^7 To view a copy of this license, visit https://creativecommons.org/licenses/by-sa/4.0/

Column Name (contd.): Description (contd.)
LikedBot: Whether the participant liked MIBot, based on responses to FeedbackQ1-3.
FoundBotHelpful: Whether the participant found MIBot helpful, based on responses to FeedbackQ1-3.
CARE Survey
CAREQ1 ... CAREQ10: See Section H for CARE questions.
Week Later Readiness Rulers
WeekLaterRulerImportance: On a scale from 0 to 10, how important is it to you right now to stop smoking?
WeekLaterRulerConfidence: On a scale from 0 to 10, how confident are you that you would succeed at stopping smoking if you start now?
WeekLaterRulerReadiness: On a scale from 0 to 10, how ready are you to start making a change at stopping smoking right now?
Week Later Quit Attempts
WeekLaterQuitAttempt: Have you made any quit attempts (meaning consciously not smoking for a specific period of time greater than 24 hours) during the previous week?
WeekLaterNumQuitAttempts: How many attempts to quit did you make?
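For reference, the HeavinessOfSmokingIndex column combines the DailyNum and FirstCig items, each contributing 0-3 points. A minimal sketch under the standard Heatherton et al. (1989) scoring, assuming time-to-first-cigarette is given in minutes; the survey's actual response bins may be coded differently:

```python
def heaviness_of_smoking_index(cigs_per_day, minutes_to_first_cig):
    """Standard Heatherton et al. (1989) HSI scoring (0 = light, 6 = heavy).
    Assumes numeric inputs; the dataset's actual response coding may differ."""
    # Cigarettes-per-day component (0-3).
    if cigs_per_day <= 10:
        cpd_score = 0
    elif cigs_per_day <= 20:
        cpd_score = 1
    elif cigs_per_day <= 30:
        cpd_score = 2
    else:
        cpd_score = 3

    # Time-to-first-cigarette component (0-3); earlier = heavier dependence.
    if minutes_to_first_cig <= 5:
        ttf_score = 3
    elif minutes_to_first_cig <= 30:
        ttf_score = 2
    elif minutes_to_first_cig <= 60:
        ttf_score = 1
    else:
        ttf_score = 0

    return cpd_score + ttf_score

print(heaviness_of_smoking_index(25, 10))  # → 4 (2 + 2)
```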
AutoMISC Labels
AutoMISC_MICO, AutoMISC_MIIN, AutoMISC_R, AutoMISC_Q, AutoMISC_Other, AutoMISC_C, AutoMISC_S, AutoMISC_N, AutoMISC_%MIC, AutoMISC_R:Q, AutoMISC_C:S: See Section 4.3 for AutoMISC labels.

Column Name: Description
ParticipantID: Unique Participant ids we assign.
Speaker: Indicates whether the speaker is the counsellor (i.e., MIBot) or the client.
Volley#: Serial number of the volley in the transcript. "A volley is an uninterrupted utterance or sequence of utterances by one party, before another party speaks." (Houck et al., 2010)
Utterance#: Serial number of the utterance in the transcript.
CumulativeVolley: Represents the volley up to utterance # Utterance#. The CumulativeVolley corresponding to the last utterance of the volley is the complete volley, which can be used to generate the transcript.
Utterance: "An utterance is a complete thought, or a thought unit." (Houck et al., 2010)
AutoMISCLabel: Utterance label according to AutoMISC. It can be one of the following: R, Q, Other, C, N, ... (See Section 4.3.)
AutoMISCExplanation: Explanation provided by the AutoMISC LLM as part of its chain-of-thought.

Table N.2: Description of the Columns in conversations.csv

O Example Conversation from our Feasibility Study

The following is an example conversation between MIBot (referred to here as "Counsellor") and a participant (referred to as a "Client") in our feasibility study.

Counsellor: Hello there! It's great to have you here today. How are you doing? I'd love to get to know a bit about you and hear anything you feel like sharing. We can take our time, and there's no rush.
Client: im doing okay how are you
Counsellor: I'm doing well, thank you for asking! I'm really glad to hear you're doing okay. Is there anything particular on your mind or anything you'd like to talk about today?
Client: stopping smoking
Counsellor: Stopping smoking is a significant topic, and I appreciate you bringing it up. What are some thoughts or feelings that come to mind when you think about stopping smoking?
Client: its stressfull
Counsellor: It sounds like there's a lot of stress attached to the idea of stopping smoking. It might feel overwhelming to consider that change. What do you think might be contributing to that stress?
Client: ive smoked for a long long time
Counsellor: Having smoked for so long, it's deeply woven into your life, almost like a routine or habit that's hard to imagine changing. It might feel like it's been a part of you for a while. I'm curious, have you thought about what life would be like without smoking?
Client: ive never thought about what my life would be like if i stopped smoking
Counsellor: It's interesting to consider how life might change without smoking, especially if it's something that hasn't crossed your mind before. There might be a sense of uncertainty about what that change would look like. When you think about the possibility of not smoking, what feelings or images come to mind?
Client: stress whta
Counsellor: It seems like the thought of quitting brings up feelings of stress and maybe even uncertainty. It's understandable since smoking has been a constant part of your life. What do you think it is about smoking that helps you manage stress?
Client: what will i do with my hand
Counsellor: Your hands have gotten used to having something to do, like holding a cigarette, and that can be comforting. It seems like there's a connection between smoking and having that physical activity. What do you think might help satisfy that need for keeping your hands busy if you were to stop smoking?
Client: i really dont know
Counsellor: That's completely okay; it's a journey to figure these things out. Sometimes, thinking about change can bring up more questions. What's one thing you're curious about when it comes to considering stopping smoking?
Client: how much money i would save
Counsellor: Thinking about how much money you could save by stopping smoking is an interesting angle. It opens up possibilities for what you might do with those savings.
Have you ever considered what you might spend that money on if smoking wasn't part of your life?
Client: home decor
Counsellor: Imagining an opportunity to invest in something you love, like home decor, could be really rewarding. It might feel exciting to picture how your living space could change and reflect more of what makes you happy. Are there any specific changes or items you've been dreaming about for your home?
Client: not really just home decor in general
Counsellor: That makes sense. There's something fulfilling about creating a space that feels comfortable and inviting. It sounds like redirecting those smoking expenses into enhancing your environment could be a motivating thought. How does thinking about home decor and making your space uniquely yours fit into your thoughts about stopping smoking?
Client: great
Counsellor: It's great to hear that thinking about home decor in this way resonates with you. It can be inspiring to have a goal like creating a space you love as you consider changes to smoking. Is there anything else on your mind about stopping smoking or any other aspect of your life you'd like to explore today?
Client: but i would be bored
Counsellor: Boredom can be a significant concern when thinking about stopping smoking, as smoking has likely filled up certain moments in your day. It's understandable to worry about how you'll fill that time. What do you think might be some activities or interests that could replace smoking and keep you engaged?
Client: walking
Counsellor: Walking can be a wonderful activity to fill your time. It offers a chance to clear your mind while being active, and it might serve as a healthy alternative. How do you feel about incorporating more walking into your day as part of this change?
Client: that would be good
Counsellor: Incorporating walking seems like it could be a positive addition to your routine. It not only gives you something to do but also benefits your overall well-being. With all we've discussed, is there anything else you'd like to explore or talk about before we possibly think about any next steps?
Client: no
Counsellor: Thank you for sharing with me today. I'd like to take a moment to summarize what we've talked about. You've been thinking about stopping smoking, though the idea brings up stress, as smoking has been a long-time part of your life. Giving up the familiar habit has made you wonder about what you'll do with your hands and how you'll pass the time. We explored the idea of saving money by quitting and how you could use those savings toward your interest in home decor, bringing a new sense of excitement to your living space. Finally, we talked about incorporating walking into your day, which could be a positive activity to replace smoking. If there's anything else in the future you'd like to explore or if you want to keep this conversation going, feel free to reach out. Would you like to continue the conversation?
Client: Selected: Yes
Counsellor: Great! I'm glad you're interested in continuing. With the idea of walking and home decor as possibilities to focus on, how do you feel about taking steps toward reducing or stopping smoking? What's coming up for you as you think about that?
Client: whats the best products to hekp quit smoking
Counsellor: It's a great step that you're considering options to support you in quitting smoking. There are several products and methods many people find helpful, like nicotine replacement therapies, including patches, gum, lozenges, inhalers, and nasal sprays. Some people also explore medications prescribed by healthcare providers. Before offering any advice, though, I'd like to understand what you might be comfortable with and if there's a method you've been curious about or previously considered. Would it be alright if we discussed those options, or is there any other aspect you'd like to focus on first?
Client: okay
Counsellor: Alright, thank you! Since you're open to discussing this, nicotine replacement therapies (NRT) are often a starting point for many who want to quit smoking. They're designed to help reduce withdrawal symptoms and cravings. Each form of NRT has its unique approach: patches provide a steady release of nicotine, while gums and lozenges can offer more control over immediate cravings. Some people choose to try more than one option to see what works best for them.
If you feel comfortable, we could look into what might suit your lifestyle and needs better. What are your thoughts on possibly incorporating one of these into your journey to quit smoking?
Client: nrt sounds good
Counsellor: Nicotine replacement therapy sounds like it could be a helpful tool for you. It offers the flexibility to manage cravings while working towards quitting smoking. It might be worth considering which form you feel most comfortable with, whether it's patches, gum, lozenges, or another option. Would you like more information on any of these, or is there anything else you want to explore regarding your journey to quit smoking?
Client: no thats all thanks
Counsellor: Thank you for today's conversation. Let's go over what we've discussed. You've shared your thoughts about stopping smoking, including feelings of stress due to its role in your life for so long. We explored the idea of saving money and investing it in home decor, which brings excitement about transforming your living space. You also considered walking as a meaningful activity to fill the time you'd otherwise spend smoking. Finally, we discussed nicotine replacement therapies as potential aids in your journey, with NRT sounding like a good fit for you. If you ever have more questions or want to talk further, feel free to reach out. Take care! Would you like to continue the conversation?
Client: Selected: No
Counsellor: Thank you and have a great day. Goodbye!

P Consent from Participants

Before participants joined our study, we provided detailed instructions and obtained their consent to use their data, including survey responses. We also informed them about any potential risks. Below are relevant excerpts from the Consent Form that participants were required to read and accept.

WHAT ARE THE RISKS OR HARM OF PARTICIPATING IN THIS STUDY?

This study requires you to think about your smoking habits, which may be stressful for you. Thinking about your smoking habits may temporarily increase your urge to smoke.
There is a small but very unlikely possibility that the chatbot may generate responses that may instigate the above feelings. Should this happen, it is encouraged that you document it in your qualitative feedback opportunity. There are no other risks or harm to you in participating in this study.

WHAT COMPENSATION AM I ENTITLED TO?

You will be compensated 5.5 British Pounds, or the equivalent in your local currency, for completing the conversation task and 1 pound for the 1-week later survey.

HOW WILL MY PRIVACY BE RESPECTED?

All conversational data and readiness/feedback data recorded during the study will be stored on an encrypted server. The study does not ask you to provide any information that can identify you personally. The original chatbot data collected will only be accessible to the University of Toronto researchers involved with the study. Once any personally identifying information has been removed, the conversation may appear in publications or in a public dataset. The data will be used by researchers at the University of Toronto and CAMH to develop chatbots that can have conversations with individuals about their smoking habits. The data will be held by the University of Toronto indefinitely. The results of this research study may be presented at meetings or in publications, and this may include direct quotes from your responses and your feedback (after removing any text that personally identifies you), as well as summary statistics from the numerical ratings that all participants provide. In all cases, your identity will not be disclosed.
arXiv:2505.17371v1 [cs.LG] 23 May 2025

An End-to-End Approach for Child Reading Assessment in the Xhosa Language

Sergio Chevtchenko1,2, Nikhil Navas1,2, Rafaella Vale4, Franco Ubaudi5, Sipumelele Lucwaba6, Cally Ardington3, Soheil Afshar7, Mark Antoniou2, and Saeed Afshar1,2

1 International Centre for Neuromorphic Systems, Western Sydney University, Australia (s.chevtchenko@westernsydney.edu.au)
2 The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia
3 University of Cape Town, South Africa
4 Universidade Federal de Pernambuco, Centro de Informatica, Recife, PE, Brazil
5 The School of Computer, Data and Mathematical Sciences, Western Sydney University, Australia
6 University of Pretoria, Gauteng, South Africa
7 Macquarie University, Australia

Abstract. Child literacy is a strong predictor of life outcomes at the subsequent stages of an individual's life. This points to a need for targeted interventions in vulnerable low- and middle-income populations to help bridge the gap between literacy levels in these regions and high-income ones. In this effort, reading assessments provide an important tool to measure the effectiveness of these programs, and AI can be a reliable and economical tool to support educators with this task. Developing accurate automatic reading assessment systems for child speech in low-resource languages poses significant challenges due to limited data and the unique acoustic properties of children's voices. This study focuses on Xhosa, a language spoken in South Africa, to advance child speech recognition capabilities. We present a novel dataset composed of child speech samples in Xhosa. The dataset is available upon request and contains ten words and letters, which are part of the Early Grade Reading Assessment (EGRA) system. Each recording is labeled with an online and cost-effective approach by multiple markers, and a subsample is validated by an independent EGRA reviewer. This dataset is evaluated with three fine-tuned state-of-the-art end-to-end models: wav2vec 2.0, HuBERT, and Whisper. The results indicate that the performance of these models can be significantly influenced by the amount and balancing of the available training data, which is fundamental for cost-effective large dataset collection. Furthermore, our experiments indicate that wav2vec 2.0 performance is improved by training on multiple classes at a time, even when the number of available samples is constrained.

Keywords: EGRA · Deep Learning · Speech-to-Text

1 Introduction

Reading assessment in the early grades is essential for pinpointing the difficulties children encounter while learning to read. By identifying these challenges, educators can design targeted interventions that accelerate literacy development and foster inclusion. Artificial intelligence offers a practical way to scale such assessments, providing rapid insights into children's oral-reading skills and the obstacles revealed by pronunciation tasks, even when data and resources are limited.

The motivation for this research is a pragmatic one. Good education is a crucial component for facilitating a favorable future for children. While assessing speech fluency and reading skills is a valid way to inform about the advancement of literacy acquisition in children [8], a simple approach to evaluating their aptitude to understand text in early years is to analyze their ability to correctly pronounce words [15], a key focus of this research. A poor ability to pronounce words is indicative of possible issues with hearing, understanding grammar, or
other difficulties such as dyslexia [24]. As a result, uncovering these issues quickly can promote focused guidance from an early age that greatly improves a child's educational outcome [12,24].

To tackle this, the use of automatic speech recognition (ASR) can greatly assist in finding reading issues that manifest themselves in pronunciation difficulties. ASR can help automate thorough and accurate assessment of a child's ability and the determination of progress as educators seek to address problems. However, the effectiveness of ASR can be greatly impacted in the context of low-resource languages (LRL) [13,20], that is, languages with limited availability of resources for the effective automated construction of models needed for assessing children's speech. Hence this research also focuses on languages that are characterized as LRL, an example of which is the South African language Xhosa, which also provides phonetic complexity challenges to this research [14].

This research investigates the use of state-of-the-art (SOTA) speech classification models that convert the audio of child speech into the phonetics responsible for the language of choice. The South African language Xhosa is used as the target language. Our investigation is principally focused on testing a cost-effective data-labeling method and understanding the impact of choices in model parameters, training set sizes, and model building strategies. The main goal of this research is the development of a solution that fulfills the following criteria: accurate assessment of a child's reading ability; coping with restrictions in terms of the amount of available training data; potential integration with the Early Grade Reading Assessment (EGRA) program [8]; and providing a solution architecture that is easy to reuse for other spoken languages.
Accurately assessing a child's ability to pronounce words is time-consuming and prone to error, since it can be impacted by the ability and consistency of individual human assessors. As such, automation provides a way to avoid these issues, while also reducing the time and effort needed. Additionally, AI-powered assessments provide a means for consistently tracking a child's progress, while reducing the impact of confounding variables such as human assessor subjectivity and enforcing the use of an identical assessment across time and individual children.

The next problem to address regards LRLs. Many spoken languages are characterized as LRLs because of a lack of large text corpora or other linguistic resources that are suitable for training ASR solutions. This presents a serious barrier, since it impacts solution effectiveness and greatly increases the effort needed to develop ASR solutions. Novel approaches are needed for facilitating the development of accurate models, coping with limited resources, and managing variability of human expertise across limited staff. Given these challenges, the use of self-supervised SOTA speech recognition architectures offers a promising solution by reducing the reliance on extensive labeled datasets and enabling a more detailed focus on nuanced parts of speech.

The present work makes the following contributions to automated reading assessment for LRL. First, it introduces a new, high-quality dataset of Xhosa child reading samples, specifically designed for early grade reading assessment. This dataset has been made available upon
request to the community. Second, it details and validates a cost-effective online method for data labeling and verification, demonstrating its suitability for resource-constrained settings. Third, it provides a comprehensive evaluation of three fine-tuned SOTA speech classification architectures on this dataset, highlighting how variations in training set size and class balancing can significantly impact performance.

2 Related Work

Among self-supervised models, wav2vec has emerged as a state-of-the-art architecture for ASR, using self-supervised learning to extract meaningful features from raw audio and requiring minimal labeled data for fine-tuning [3]. Doumbouya et al. [7] adapted wav2vec to the West African Radio Corpus, demonstrating sensitivity to features of African languages and effective identification in noisy multilingual settings. Similarly, our work involves fine-tuning a pre-trained wav2vec 2.0 model, augmented with a multilayer perceptron classifier, to classify audio recordings of early grade children learning the Xhosa language as correct or incorrect pronunciations.

Wav2vec has also been effective in detecting pronunciation errors, offering potential in language learning and pronunciation assessment [17]. Shekar et al. [22] used wav2vec to analyze speech recordings of non-native speakers, detecting mispronunciations and capturing fine-grained phonetic details for phoneme-level error diagnosis. Recent research by Jain et al. [11] examined the use of wav2vec models in pediatric learning assessment, a challenging domain due to varied speech patterns in children. Abaskohi et al. [1] further developed and adapted wav2vec with a preprocessing technique called random frequency pitch, useful for capturing the frequency characteristics of children's speech. Applied to Persian children's speech and reading recordings, the model significantly reduced word error rates compared to baselines, confirming its effectiveness in pediatric assessment tasks. Obiang et al. [16] fine-tuned wav2vec for Yoruba, a low-resourced tonal language, effectively capturing complex pitch and pronunciation variations, superior to traditional methods. Similarly, Okwugbe et al. [6] developed a deep learning model combining rCNNs and BiLSTMs with CTC for speech recognition of the Fon and Igbo languages, addressing challenges due to data scarcity.

Other works have tackled the application of ASR in assessing and improving reading skills, particularly in children. Sabu and Rao [21] sought to evaluate word reading accuracy and identify prosodic elements like phrase breaks and emphasized words in children studying English as a second language. The complexities of the prosodic events were reflected in the lower precision estimation shown by the system, revealing the importance of accurate feature extraction. Shivakumar and Georgiou [23] used transfer learning from adult ASR models as a way to circumvent the lack of extensive availability of children's speech data. Strategies to adapt to varying amounts of data and children's ages were also recommended. Bai et al. [5] explored the evaluation of decoding skills in Dutch children to provide feedback meant to impact reading accuracy and speed. The study was further expanded in [4] to also highlight the most problematic words encountered by the students. Bachiri et al. [2] proposed an interactive learning system as a Moodle plugin to aid in child reading assessment in underserved educational settings. The authors make a case for the integration of AI-based speech recognition to enhance literacy development due to the positive results observed. In an
attempt to investigate how AI performs in assessing students' reading fluency and level compared to human experts, Yıldız et al. [25] found that AI offered reliable predictions that could contribute to lowering costs and improving efficiency.

The recent publication by Henkel et al. [9] has a significant overlap with the intended goals of the current work. The automation of reading assessment for students in a resource-constrained context is investigated to compare the potential of Whisper V2 and wav2vec 2.0 models against human raters. The models produce transcriptions of audio recordings of Ghanaian students reading to provide an estimate of the students' oral reading fluency. The fluency scores given by Whisper V2 were found to be highly correlated with assessments by human experts, even without fine-tuning. It is important to note that the way these models were used to score reading fluency in students would not be suitable for the goal of the current work, since the lack of fine-tuning may result in overcorrections that ignore mispronunciations or in the misinterpretation of accents, as demonstrated in Section 4.1. As the present work aims to evaluate the pronunciation of words and letters by children in the Xhosa language, fine-tuning of the models is necessary to classify samples as correct or incorrect according to the target language.

The above studies underscore the effectiveness of speech-to-text models for low-resource languages and child speech recognition. To the best of our knowledge, this is the first work to evaluate end-to-end models on words and letters for a child reading assessment problem.

3 Methodology

3.1 Dataset Description

The dataset was collected in 2024 as part of the Early Grade Reading Assessment (EGRA), conducted in grades 1 to 4 at schools in South Africa. This study focuses on the following ten letters and words in the Xhosa language: d, v, n, ewe, hayi, hl, inja, kude, molo and ng.
A question is presented to the student, asking to pronounce one of these words or letters, followed by a recording. The recording stops when the student presses a button for the next question. The average duration of the recordings is around 4 seconds, with added variability mostly due to multiple attempts at pronouncing the same question.

A total of 14,971 recordings from the EGRA collection, with over 19 hours of recording time collected over 8 months, are made available upon request. Each recording is labeled as correct or incorrect by three fluent Xhosa speakers, resulting in a dataset of 44,913 labeled entries. Since some recordings received conflicting labels, a validation experiment was conducted, as described in Section 3.2. Figure 1 illustrates the distribution of these labels by question, showing that certain items can be more challenging to classify. For example, most children correctly pronounced the letter n, whereas a considerable number struggled with the consonant hl. To enable cost-effective and efficient data collection and labeling, custom mobile and server applications were developed as part of the project. A more detailed description of these tools will be provided in a forthcoming publication.

Fig. 1. Distribution of All correct, Mostly correct, Mostly incorrect, and All incorrect samples for each word and letter in the dataset.

Although each recording is intended to capture a single spoken word, even those pronounced correctly often include additional artifacts, such as classroom noise or multiple attempts at pronouncing the word on the app's screen. This variability in recording size and quality makes the classification problem more complicated and is further illustrated in Figure 2.

Fig. 2. A spectrogram of an eight-second recording containing a correct pronunciation of the word "hayi" and other ambient sounds.

3.2 Data Labeling and Validation

Selecting appropriate validation criteria for training data is another significant challenge. To ensure high-quality recordings, each sample is labeled by three independent markers, and only those with consensus are included in training. To validate this method, 400 recordings were randomly selected from ten words in the dataset, with each word contributing 40 samples. These were evenly distributed across four marking scenarios: (a) all three markers labeled it as correct, (b) all three labeled it as incorrect, (c) one marked it incorrect while the other two marked it correct, and (d) one marked it correct while the other two marked it incorrect. An expert in traditional EGRA marking for the Xhosa language then assessed these samples, and Table 1 presents the agreement rate between the expert and the original markers. An agreement rate per question is also presented in Figure 3. Thus, the consensus-based approach retains approximately 85% of the original dataset, while maximizing data reliability. Consequently, while the provided dataset includes all 14,971 recordings, the experimental results in Section 4 are based only on the 12,747 consensus-marked recordings.

Table 1. Agreement rates and number of recordings for each condition.
Condition | Agreement rate | Total # of recordings
All data | 71.25% | 14,971
Consensus | 85.00% | 12,747 (85.1%)
Consensus & one incorrect | 80.00% | 14,235 (95.0%)
Consensus & one correct | 75.00% | 13,483 (90.0%)

Fig. 3. Agreement rate between an expert and the reviewers, considering different validation criteria.

3.3 Fine-tuned Classification Architectures

In this work, we consider three SOTA speech recognition architectures: wav2vec 2.0 [3], HuBERT [10] and Whisper [19]. In order to select baseline models, we conducted preliminary experiments with the pre-trained versions of these architectures, varying hyperparameters such as learning rate and batch size. While this is not meant to be an exhaustive optimization, this step is necessary to provide a good baseline for the presented dataset. The pre-trained models are fine-tuned with a learning rate of 3×10^-5 over 1000 steps, with a batch size of four and gradient accumulation over two steps to achieve an effective batch size of eight.

This study used a snapshot of the wav2vec 2.0 model^1, which has been pre-trained on the Multilingual LibriSpeech^2, CommonVoice^3, and BABEL^4 datasets. This extensive training allows for cross-lingual representation learning via a self-supervised approach that extracts meaningful features from raw audio data. Additionally, we employed the multilingual mHuBERT 147^5 model, which was trained on 90,430 hours of openly licensed audio from 147 languages across 16 independent datasets. Finally, we selected a distilled version of Whisper^6 from OpenAI, an end-to-end Transformer-based architecture for sequence-to-sequence tasks.

^1 https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft
^2 https://www.openslr.org/94/
^3 https://commonvoice.mozilla.org/en/languages
^4 https://catalog.ldc.upenn.edu/byyear
^5 https://huggingface.co/utter-project/mHuBERT-147
^6 https://huggingface.co/distil-whisper/distil-medium.en

3.4 Experimental Evaluation and Metrics

A test set of 50 positive and 50 negative samples is randomly selected from the dataset for each of the ten questions, resulting in a test set of 1000 recordings. In order to produce more reliable results, we repeat this partitioning five times and evaluate each model's configuration on these five independent runs. Thus, most of the results presented in Section 4 are an average of 50 experiments: 5 randomly selected test sets and 10 questions from the dataset.

One of the main objectives of this work is to assess how varying amounts of training data affect the baseline models. As discussed in Section 3.1, certain questions often yield unbalanced samples for correct (positive) and incorrect (negative) pronunciations. To address this, we train the baseline models on 50, 100, 200, and 300 randomly selected positive and negative samples for each question, resulting in 16 distinct training configurations. Each model may be trained on a single question (as a binary classifier) or on multiple questions simultaneously, applying the same sampling strategy per question.

As each fine-tuned classifier is evaluated on a balanced set of 50 positive and 50 negative samples per question, we define the confusion matrix categories as follows:
– True positive (TP): When both the ground truth label and the prediction correspond to the question.
– True negative (TN): When neither the ground truth label nor the prediction correspond to the question.
– False negative (FN): When the ground truth label corresponds to the question but the prediction does not.
– False positive (FP): When the ground truth label does not correspond to the question but the prediction does.

We then define the false positive rate (FPR) as FPR = FP / (FP + TN), the false negative rate (FNR) as FNR = FN / (TP + FN), and the diagnostic efficiency (DE) as DE = (TP + TN) / (TP + TN + FP + FN).

4 Results and Discussion

Figure 4 provides an overview of the false positive and false negative rates obtained on the test set by the three baseline architectures. As described in Section 3.4, the models are trained on a variable number of samples and questions at a time, while evaluated on a fixed test subset. The presented data is an average across 10 questions and 5 independent runs, making a total of 50 samples. The results indicate that changes in the training data can have a significant impact on the performance of the models, and both wav2vec 2.0 and Whisper generally achieve more favorable FNR-FPR trade-offs than the HuBERT architecture. Additionally, Table 2 compares the computational requirements of these architectures. While HuBERT is significantly more efficient in terms of training

Fig. 4. Distribution of FPR and FNR for each model, averaged across ten questions. The dot size is proportional to the standard deviation over 50 samples. The highlighted results are the best ones in terms of the diagnostic efficiency.

Table 2.
Comparison of training (1000 steps) and inference time (single recording) on a Google Colab L4 instance, averaged over 10 runs for each model.

Model        Training time (s)  Inference time (s)
HuBERT       343                0.44
wav2vec 2.0  1332               1.70
Whisper      2511               1.54

and inference, the wav2vec 2.0 model requires less computational resources for training than Whisper.

The best results obtained from each model are also presented in more detail in Table 3, considering the top five configurations in terms of the diagnostic efficiency. While wav2vec 2.0 and Whisper provide mostly equivalent results in terms of DE, it should be noted that no single configuration was able to obtain the overall best performance in terms of both FPR and FNR, and that the wav2vec 2.0 model benefits from being trained on multiple questions at a time. This effect is further illustrated in Figure 5, which shows how training set size influences diagnostic efficiency under a limited-data scenario (i.e., when correct and incorrect samples are restricted to 50 or 100).

Fig. 5. The effect of training set size on the wav2vec 2.0 performance under limited data conditions.

Table 3. Top-five results per model in terms of Diagnostic Efficiency (DE). Highlighted entries are not significantly different (p > 0.05) from the best value in each column. The Set column indicates the number of questions used to train the model at a time.
Model    Set  Correct  Incorrect  DE % (±std)   FPR % (±std)  FNR % (±std)
HuBERT   1    200      300        87.04 (5.41)  9.32 (4.79)   4.00 (3.64)
HuBERT   1    100      200        86.99 (5.55)  8.40 (4.06)   5.04 (4.26)
HuBERT   1    100      300        86.66 (7.11)  7.08 (5.08)   6.76 (5.39)
HuBERT   1    300      300        86.61 (5.51)  10.44 (4.97)  3.28 (3.42)
HuBERT   1    200      200        85.08 (7.35)  11.60 (6.31)  3.80 (3.57)
W2v 2.0  3    300      200        91.70 (5.28)  5.04 (3.50)   3.44 (4.10)
W2v 2.0  3    200      300        91.66 (6.02)  3.04 (2.56)   5.48 (5.46)
W2v 2.0  5    300      200        91.30 (5.65)  4.60 (3.34)   4.32 (4.56)
W2v 2.0  5    300      100        91.10 (4.88)  5.88 (3.44)   3.20 (3.92)
W2v 2.0  3    200      200        90.86 (5.62)  4.96 (3.71)   4.40 (4.59)
Whisper  1    300      300        92.22 (4.99)  3.60 (3.38)   4.36 (3.40)
Whisper  1    300      200        91.41 (5.22)  4.96 (3.94)   3.84 (3.28)
Whisper  1    200      200        90.89 (5.19)  4.28 (3.33)   5.04 (4.45)
Whisper  1    200      300        90.65 (5.40)  3.04 (2.84)   6.52 (4.67)
Whisper  1    300      100        89.28 (5.75)  8.24 (4.96)   2.72 (2.93)

A per-question breakdown of the top five models is presented in Figure 6. This suggests that while an average diagnostic efficiency of around 91% is possible, the presented dataset is especially challenging for short sounds.

Fig. 6. Per-question average performance of the top five models from Table 3.

4.1 Comparison with an ASR Model

In order to highlight the advantages of the classification approach proposed in this work, we also evaluate a recently released checkpoint of the wav2vec 2.0 architecture, fine-tuned on 1,162 languages, including Xhosa, using the Massive Multilingual Speech (MMS-1B) framework [18]. For testing, we supplied the correct pronunciations of each question in our dataset as input to MMS-1B in speech-to-text mode, paired with a Xhosa tokenizer. Overall, only 6.91% of these transcriptions precisely matched the expected word or letter. Table 4 provides a breakdown of the model's performance for
each question, indicating that while this approach performs better on words than on letters, it struggles significantly with noisy recordings, such as the example illustrated in Figure 2. A sample of transcriptions^1 in different scenarios is also provided, suggesting that direct multilingual ASR usage in classroom conditions may be less effective for short word- or letter-level inputs.

Table 4. Transcription accuracy by question, using the MMS-1B model for speech-to-text.

Label  Samples  Correct  Accuracy (%)
d      503      28       5.57
ewe    689      282      40.93
hayi   688      40       5.81
hl     423      1        0.24
v      796      4        0.50
n      1139     8        0.70
molo   709      177      24.96
kude   557      64       11.49
ng     538      2        0.37
inja   691      195      28.22

1 https://osf.io/g83vs/?view_only=19403ae222d44fe2bdd669c12fbce017

5 Conclusion

This paper presents a cost-effective method for collecting and labeling child speech data in a low-resource language, and for training an AI model to assess reading proficiency, tailored for early-grade learners. We introduce a challenging dataset of Xhosa child speech, recorded under realistic classroom conditions that include noise and multiple pronunciation attempts. The dataset contains 14,971 audio recordings, each containing a single word or letter, which are part of the Early Grade Reading Assessment (EGRA) system. The recordings were labeled online by three independent markers, and a sample was validated by an expert EGRA reviewer to confirm reliability.

Three state-of-the-art speech recognition architectures were fine-tuned as binary classifiers, and the experimental results indicate that data balancing and multi-question training strategies substantially affect performance. In particular, wav2vec 2.0 and Whisper are generally found to outperform HuBERT, achieving lower false positive and false negative rates. Although wav2vec 2.0 and Whisper demonstrate comparable accuracy, wav2vec 2.0 demands fewer computational resources during the training phase.
Further comparisons with a large-scale multilingual ASR model (MMS-1B) confirm the effectiveness of the proposed approach, especially for short, noisy speech segments in a classroom setting. Future work is planned to focus on further improving the sample efficiency of the models, while providing phoneme-based feedback to the user. Preliminary experiments on another LRL dataset, Sepedi, indicate similar performance for wav2vec 2.0, and this additional dataset is expected to be released in forthcoming research. Ultimately, the goal of this project is to develop a cost-effective reading tutoring system for early-grade learners, thereby supporting the preservation and educational advancement of low-resource languages.

References

1. Abaskohi, M., et al.: Automatic speech recognition for speech assessment of persian preschool children. In: Proceedings of the International Conference on Speech and Language Processing. pp. 123–130 (2023)
2. Bachiri, Y.A., Mouncif, H., Bouikhalene, B., Hamzaoui, R.: Integrating ai-based speech recognition technology to enhance reading assessments within morocco's tarl program. Turkish Online Journal of Distance Education 25(4), 1–15 (2024)
3. Baevski, A., Zhou, Y., Mohamed, A., Auli, M.: wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems 33, 12449–12460 (2020)
4. Bai, Y., Tejedor-García, C., Hubers, F., Cucchiarini, C., Strik, H.: Automatic speech recognition technology and reading skill development in primary school. In: ICERI2021 Proceedings. pp. 6188–6195. IATED (2021)
5. Bai, Y., Hubers, F., Cucchiarini, C., Strik, H.: Asr-based evaluation
and feedback for individualized reading practice. In: Interspeech 2020. pp. 3870–3874 (2020). https://doi.org/10.21437/Interspeech.2020-2842
6. Dossou, B.F.P., Emezue, O.: Okwugbé: End-to-end speech recognition for fon and igbo. In: Proceedings of the 2024 International Conference on Speech and Language Technology. pp. 123–134 (2021)
7. Doumbouya, M.L., et al.: Using radio archives for low-resource speech recognition: Towards an intelligent virtual assistant for illiterate users. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 1234–1238 (2021)
8. Dubeck, M.M., Gove, A.: The early grade reading assessment (EGRA): Its theoretical foundation, purpose, and limitations. Int. J. Educ. Dev. 40, 315–322 (Jan 2015)
9. Henkel, O., Horne-Robinson, H., Hills, L., Roberts, B., McGrane, J.: Supporting literacy assessment in west africa: Using state-of-the-art speech models to assess oral reading fluency. International Journal of Artificial Intelligence in Education pp. 1–22 (2025)
10. Hsu, W.N., Bolte, B., Tsai, Y.H.H., Lakhotia, K., Salakhutdinov, R., Mohamed, A.: Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM transactions on audio, speech, and language processing 29, 3451–3460 (2021)
11. Jain, R., Barcovschi, A., Yiwere, M.Y., Bigioi, D., Corcoran, P., Cucu: A wav2vec2-based experimental study on self-supervised learning methods to improve child speech recognition. IEEE Access 11, 46938–46948 (2023)
12. Kim, S., Park, S., Kim, K., Jung, K., So, S., Kim, M.: Automatic pronunciation assessment using self-supervised speech representation learning. arXiv preprint arXiv:2204.03863 (2022)
13. Magueresse, A., Carles, V., Heetderks, E.: Low-resource languages: A review of past work and future challenges. arXiv preprint arXiv:2006.07264 (2020)
14. Mesham, S., Hayward, L., Shapiro, J., Buys, J.: Low-resource language modelling of south african languages.
arXiv preprint arXiv:2104.00772 (2021)
15. Namasivayam, A.K., Coleman, D., O'Dwyer, A., van Lieshout, P.: Speech sound disorders in children: An articulatory phonology perspective. Front. Psychol. 10 (Jan 2020)
16. Obiang, J.P., et al.: Improving tone recognition performance using wav2vec 2.0-based learned representation in yoruba, a low-resourced language. Journal of Speech and Language Technology 37, 101–120 (2024)
17. Peng, L., Gao, Y., Bao, R., Li, Y., Zhang, J.: End-to-end mispronunciation detection and diagnosis using transfer learning. Applied Sciences 13(11), 6793 (2023)
18. Pratap, V., Tjandra, A., Shi, B., Tomasello, P., Babu, A., Kundu, S., Elkahky, A., Ni, Z., Vyas, A., Fazel-Zarandi, M., Baevski, A., Adi, Y., Zhang, X., Hsu, W.N., Conneau, A., Auli, M.: Scaling speech technology to 1,000+ languages. arXiv (2023)
19. Radford, A., Kim, J.W., Xu, T., Brockman, G., McLeavey, C., Sutskever, I.: Robust speech recognition via large-scale weak supervision. In: International conference on machine learning. pp. 28492–28518. PMLR (2023)
20. Ranathunga, S., Lee, E.S.A., Prifti Skenduli, M., Shekhar, R., Alam, M., Kaur, R.: Neural machine translation for low-resource languages: A survey. ACM Computing Surveys 55(11), 1–37 (2023)
21. Sabu, K., Rao, P.: Automatic assessment of children's oral reading using speech recognition and prosody modeling. CSI Transactions on ICT 6, 221–225 (2018)
22. Shekar, J., et al.: Assessment of non-native speech intelligibility using wav2vec2-based mispronunciation detection and multi-level goodness of pronunciation transformer. Journal of Speech Processing 34, 123–145 (2023)
23. Shivakumar, P.G., Georgiou, P.: Transfer learning from adult to children for speech recognition: Evaluation, analysis and recommendations. Computer speech & language 63, 101077 (2020)
24.
Sobti, R., Guleria, K., Kadyan, V.: Comprehensive literature review on children automatic speech recognition system, acoustic linguistic mismatch approaches and challenges. Multimed. Tools Appl. (Mar 2024) 25. Yıldız, M., Keskin, | https://arxiv.org/abs/2505.17371v1 |
Value-Guided Search for Efficient Chain-of-Thought Reasoning

Kaiwen Wang*1, Jin Peng Zhou*1, Jonathan Chang*4, Zhaolin Gao1, Nathan Kallus1,3, Kianté Brantley2, and Wen Sun1

1Cornell University  2Harvard University  3Netflix  4Databricks

Abstract

In this paper, we propose a simple and efficient method for value model training on long-context reasoning traces. Compared to existing process reward models (PRMs), our method does not require a fine-grained notion of "step," which is difficult to define for long-context reasoning models. By collecting a dataset of 2.5 million reasoning traces, we train a 1.5B token-level value model and apply it to DeepSeek models for improved performance with test-time compute scaling. We find that block-wise value-guided search (VGS) with a final weighted majority vote achieves better test-time scaling than standard methods such as majority voting or best-of-n. With an inference budget of 64 generations, VGS with DeepSeek-R1-Distill-1.5B achieves an average accuracy of 45.7% across four competition math benchmarks (AIME 2024 & 2025, HMMT Feb 2024 & 2025), reaching parity with o3-mini-medium. Moreover, VGS significantly reduces the inference FLOPs required to achieve the same performance of majority voting. Our dataset, model and codebase are open-sourced at https://github.com/kaiwenw/value-guided-search.

Figure 1: Performance and Efficiency of Value Guidance: (Left) Value-guided search improves the overall quality of DeepSeek-R1-Distill responses across four combined competition math benchmarks (AIME & HMMT). The inference budget for 1.5B, 7B and 14B is 256, 128 and 64 generations, respectively.
(Right) Value-guided search also reduces the inference FLOPs required to achieve the same accuracy levels as majority voting, a standard TTC scaling baseline, showing value guidance is promising for improving efficiency.

*Equal contribution. Correspondence to {kw437,jz563}@cornell.edu. Preprint. Under review. arXiv:2505.17373v1 [cs.LG] 23 May 2025

1 Introduction

Recent large language models (LLMs), such as OpenAI o1 & o3, Claude Sonnet 3.7, Gemini Pro 2.5 and DeepSeek R1 [18], are trained via reinforcement learning (RL) to "think" for many tokens before generating a final answer. Through multi-step reasoning and self-correction, these reasoning models have state-of-the-art performance in competition math, coding [16] and scientific research [37], often surpassing the average human. However, this enhanced capability comes at a cost: each generation involves a long chain-of-thought (CoT), thus requiring more inference compute. Further, these CoT traces can often be repetitive and get stuck in unproductive loops [30]. This raises two questions. Can we extract the same performance at a fraction of the inference compute by refining the thinking process? Can we improve the performance ceiling of these models with productive search methods?

Search with guidance models is a natural answer: a guidance model manages the exponential growth of possible reasoning paths by identifying the most promising routes [15, 39, 35]. Prior works that combined search with LLMs proposed to guide search with process reward models (PRMs), predicting the correctness of each step (e.g., delimited by newlines) in the model-generated solution [23, 44, 50]. While PRM-guided search has been shown to improve test-time compute (TTC)
[6, 36, 34, 25], it is challenging to scale existing PRM training techniques to long-context reasoning models. First, existing methods require a pre-defined notion of "step," but, per Guo et al. [18], "it is challenging to explicitly define a fine-grain step in general reasoning." Second, even if we can define a "step," collecting step-wise labels is prohibitively expensive, since it requires annotations from humans [23], LLM-as-a-Judge [50], or multiple Monte Carlo roll-outs [44, 27]. Thus, there has been limited success in scaling PRMs to long-context reasoning models [18].

We propose value-guided search (VGS) – a block-level search method guided by a token-level value model – as a promising approach to scale TTC for reasoning models. In Section 2, we present an effective pipeline for value model training on tasks with outcome labels, such as competition math. Our data pipeline collects solution prefixes from various models and then, starting from random prefixes, generates completed solutions using a lean reasoning model (e.g., DeepSeek-R1-Distill-1.5B). Notably, our data collection does not require a pre-defined notion of step and is more efficient than existing techniques [44, 27]. With this pipeline, we collect a dataset of 2.5 million math reasoning traces (over 30 billion tokens) from a filtered subset of the OpenR1-Math dataset [2]. Then, we train a 1.5B token-level value model called DeepSeek-VM-1.5B by regressing (via classification) the final reward of the completed solution.

Next, in Section 3, we apply our value model to perform block-wise search with DeepSeek models [18] on competition math, where we evaluate on four prestigious high school math competitions in the US (AIME 2024 & 2025 and HMMT 2024 & 2025). Our experiments show that block-wise VGS significantly improves TTC compared to majority voting or weighted majority voting, strong baselines from the literature [45, 6].
We also show that VGS with DeepSeek-VM-1.5B leads to higher performance than searching with state-of-the-art PRMs, demonstrating that our value model can provide better feedback. When given an inference budget of 64 generations, VGS on DeepSeek-R1-Distill-Qwen-1.5B can outperform o3-mini-medium, and VGS on DeepSeek-R1-Distill-Qwen-14B (total size with value model is 15.5B) is on par with DeepSeek-R1 (671B) on our competition math evaluations (Fig. 1 left). Moreover, we show that VGS reduces the amount of inference compute required to attain the same performance as majority voting (Fig. 1 right). In summary, we find that block-wise VGS not only improves the performance ceiling of reasoning models, but also reduces the amount of inference compute required to match the performance of standard TTC methods. Our contributions are summarized below:

1. A simple recipe for token-level value model training that does not require a pre-defined notion of "step" and scales to long-context reasoning traces (Section 2).
2. Block-wise search, guided by our 1.5B value model, achieves impressive performance on four challenging math competitions, outperforming standard TTC methods (e.g., best-of-N, majority voting) and search with existing PRMs (Section 3).
3. We open-source our dataset of 2.5 million reasoning traces, value model, and codebase (including data filtering and distributed training scripts) to support future work on applying VGS to other domains. https://github.com/kaiwenw/value-guided-search.

Please see Appendix A
for a detailed discussion of related works.

2 Methods

We present an end-to-end pipeline for training a token-level value model and applying it to guide block-wise search. In Section 2.1, we introduce necessary notation and present a regression-via-classification algorithm for learning the token-level value model [14]. Then, in Section 2.2, we outline an efficient data pipeline for creating our dataset of 2.5 million reasoning traces from DeepSeek models. Finally, in Section 2.3, we describe several TTC methods and baselines, e.g., best-of-N, (weighted) majority voting and search algorithms that can leverage our value model. While we focus on competition math in this paper, we remark that our pipeline can in principle be applied to any task with automated outcome supervision (e.g., a reward model). In Appendix B, we summarize a simple recipe for applying VGS to other such domains.

2.1 Learning Algorithm for Value Model

We describe our training process for a language value model by performing regression via classification [7]. Let $\mathcal{V}$ be the vocabulary and let $\mathcal{S} = \bigcup_{n \in \mathbb{N}} \mathcal{V}^n$ denote the input sequence space. Given a problem prompt $x \in \mathcal{S}$ and a response $y \in \mathcal{S}$, let $\kappa = \Gamma(x, y) \in \{0, 1, \ldots, K-1\}$ denote its class label, where $K$ is the number of classes. Furthermore, let $r = R(x, y)$ denote the scalar reward, which we assume to be binary since we focus on competition math (see Appendix B for the general case). For our value model, $\kappa = 2$ if the response is an incomplete generation (i.e., exceeds max generation length), $\kappa = 0$ if the response finished and is incorrect, and $\kappa = 1$ if the response finished and is correct. Thus, the event that $\kappa = 1$ corresponds to $r = 1$ (correct answer), and $\kappa \in \{0, 2\}$ corresponds to $r = 0$ (incorrect or exceeds max length). We adopt this convention in the rest of the paper. We remark that regression-via-classification is a standard approach that leads to better down-stream decision making than regressing via squared error [7, 20, 17, 3, 42, 43].
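As a concrete illustration, the three-class outcome encoding above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released code; the function names and constants are ours.

```python
# Sketch of the three-class outcome labeling (kappa) described above.
# Class indices follow the paper's convention; names are illustrative.
INCORRECT, CORRECT, INCOMPLETE = 0, 1, 2  # K = 3 classes

def label_response(finished: bool, correct: bool) -> int:
    """Map a full response to its class label kappa."""
    if not finished:               # exceeded the max generation length
        return INCOMPLETE          # kappa = 2
    return CORRECT if correct else INCORRECT

def reward_from_label(kappa: int) -> int:
    """Binary reward: r = 1 iff the response finished and is correct."""
    return 1 if kappa == CORRECT else 0
```

Under this convention, the value model's predicted probability of class 1 is exactly the expected reward of a roll-out.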
We employ datasets of the form $\mathcal{D} = \{(x_i, y_i, z_i, \kappa_i)\}_{i \in [N]}$, where $x_i$ is the problem prompt, $y_i$ is a partial response (which we call a "roll-in"), $z_i$ is a completion starting from $y_i$ (which we call a "roll-out"), and $\kappa_i = \Gamma(x_i, y_i, z_i)$ is the label of the full response, where $x, y, z$ denotes the concatenation of $x$, $y$ and $z$. In this paper, we assume that the completions / roll-outs $z_i$ are generated by a fixed roll-out policy $\pi_{\mathrm{ref}}$, i.e., $z_i \sim \pi_{\mathrm{ref}}(\cdot \mid x_i, y_i)$ for all $i$. We remark that a good choice for $\pi_{\mathrm{ref}}$ is a cost-efficient model which is capable of producing diverse responses with positive reward, e.g., a distilled version of a large reasoning model. We train a classifier $f_\theta : \mathcal{S} \to \Delta([K])$ via gradient descent on the following loss on data batch $B$:
$$\mathcal{L}(\theta; B) = \frac{1}{|B|} \sum_{(x_i, y_i, z_i, \kappa_i) \in B} \frac{1}{|z_i|} \sum_{h \in \mathrm{range}(|z_i|)} \ell_{\mathrm{ce}}(f_\theta(x_i, y_i, z_i^{:h}), \kappa_i),$$
where $\ell_{\mathrm{ce}}(\hat{p}, \kappa) = -\ln(\hat{p}[\kappa])$ is the standard cross-entropy loss for classification and $z_i^{:h}$ denotes the first $h$ tokens of $z_i$. The rationale for the inner average is analogous to next-token prediction training of autoregressive models: since $z_i$ is generated autoregressively by $\pi_{\mathrm{ref}}$, any suffix $z_i^{h:}$ is also a roll-out from $\pi_{\mathrm{ref}}$ and
hence can be viewed as another data-point. We found this to be an important training detail for performance, which is consistent with prior work that used a similar objective for training an outcome reward model [14, 23]. We can now view the classifier as a value model. Since $\kappa = 1$ corresponds to the event that $r = 1$, we have that $V_\theta(x) := f_\theta(x)[1]$ predicts the correctness probability of roll-outs from $\pi_{\mathrm{ref}}$.

Figure 2: Summary of Methods. (Left) Diagrams how we collect multiple roll-ins (grey circles representing tokens) per problem, and branch off multiple roll-outs per roll-in at random points. The class label for each roll-out token is the outcome label at the very end. (Right) Shows the beam search process (beam width 2 and budget 4) guided by a value model.

Algorithm 1 Beam Search with Width w
1: Input: prompt x.
2: Set num beams B ← N/w.
3: Initialize beams y_1, ..., y_B ← x.
4: while ∃ j s.t. y_j is not finished do
5:   For each j s.t. y_j is not finished, sample w i.i.d. blocks {b_{i,j}}_{i∈[w]} from π(· | y_j).
6:   Update unfinished beams to be the best continuations with the highest V(y_j, b_{i,j}).
7: end while
8: return BoN or WMV on {y_1, ..., y_B}.

Algorithm 2 Best-of-N
1: Input: prompt x, responses {y_i}_{i∈[N]}.
2: return y_bon = arg max_{y_i} V(x, y_i).

Algorithm 3 (Weighted) Majority Vote
1: Input: prompt x, responses {y_i}_{i∈[N]}, weights {w_i}_{i∈[N]}, equivalence relation ∼.
2: Partition {y_i}_i into equivalence classes {p_k}_k.
3: return A response from the highest weight partition arg max_{p_k} Σ_{y_i ∈ p_k} w_i.
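Algorithms 2 and 3 can be sketched as follows. The equivalence relation is simplified here to exact equality of final answers, and all names are illustrative rather than the paper's code:

```python
from collections import defaultdict

def best_of_n(responses, scores):
    """Algorithm 2: return the response with the highest value score."""
    return max(zip(responses, scores), key=lambda rs: rs[1])[0]

def weighted_majority_vote(answers, weights):
    """Algorithm 3: group responses by final answer (a simplified
    equivalence relation) and return the answer whose group has the
    largest total weight."""
    totals = defaultdict(float)
    for ans, w in zip(answers, weights):
        totals[ans] += w
    return max(totals, key=totals.get)
```

Plain majority voting is recovered by setting all weights to 1; in VGS the weights are the value model's scores on the completed responses.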
Indeed, if $f^\star$ denotes the optimal classifier that minimizes the population-level loss, then $f^\star(x, y)[1] = \mathbb{E}_{z \sim \pi_{\mathrm{ref}}(\cdot \mid x, y)}[R(x, y, z) \mid x, y]$, which is the expected reward of a completed response from rolling out $\pi_{\mathrm{ref}}$ starting from $x, y$. In sum, our value model is learned via predicting labels (one of which corresponds to reward 1), and the training objective is the standard cross-entropy loss.

2.2 Dataset Creation Process

We describe our process for creating OpenR1-VM, a novel dataset of 2.5 million reasoning responses from DeepSeek models, across 45k math problems from OpenR1-Math [2].

Pre-Filtering. We start from the OpenR1-Math dataset (default split) [2] which contains 94k math problems with solutions that were already filtered for quality. Upon manual inspection, we found that this dataset still contained unsolvable problems (e.g., problems that require web browsing but our models cannot access the web) and ambiguous or unverifiable answers (e.g., multiple \boxed{} expressions or unparsable answers). We filter out all such problematic problems, producing a cleaned subset of 50k problems with solutions verifiable by sympy or math-verify [22]. We call this pre-filtered dataset OpenR1-Cleaned.

Response Generation. Next, we collect roll-ins and roll-outs from DeepSeek models [18]. We fix the roll-out policy $\pi_{\mathrm{ref}}$ as DeepSeek-R1-Distill-Qwen-1.5B. To ensure diversity in the roll-in distribution, we sample 14 independent roll-ins
from four DeepSeek-R1-Distill-Qwen model sizes: 1.5B, 7B, 14B, and 32B, by generating until the end of thinking token <\think>.^1 For each roll-in $\tilde{y}_i$, we then sample four random prefix locations where we generate complete roll-outs $\{z_i^j\}_{j \in [4]}$ from $\pi_{\mathrm{ref}}$. Finally, to compute the class label (incomplete, incorrect, or correct), we parse the response for the final answer (enclosed in \boxed{}) and use math-verify to check for correctness against the ground truth answer. In total, this process (illustrated in Fig. 2 left) yields 56 labeled roll-in, roll-out pairs per problem, leading to 2.8 million datapoints.

Post-Filtering. We filter out problems where all 56 roll-outs for that problem were incomplete or incorrect (i.e., have reward 0). This post-filtering removes any ambiguous or unanswerable problems that we missed during pre-filtering, and also removes problems that are too difficult for $\pi_{\mathrm{ref}}$ and do not provide a useful learning signal. This step filters roughly 10% of problems, yielding a final dataset of 2.5 million datapoints.

Notably, our approach does not require a fine-grained notion of step and our data collection is cheaper than existing PRM techniques. Specifically, Lightman et al. [23] used per-step annotations by human experts, Zhang et al. [50] used per-step annotations via LLM-as-a-Judge, and Wang et al. [44] used multiple Monte Carlo roll-outs at every step. Since the number of newlines in reasoning CoT traces can grow very quickly, per-step labels are quite expensive to collect for reasoning models. In contrast, our approach only requires a handful of roll-ins (from any policy) and roll-outs (from $\pi_{\mathrm{ref}}$) per problem, and this number can be flexibly tuned up or down to trade off data coverage and data collection cost. Please refer to Appendix D for further details on each step. We also release our filtering code and datasets to support future research.
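The post-filtering step above can be sketched as below. This is an illustrative sketch over per-problem reward lists, not the released filtering code:

```python
def post_filter(rollout_rewards):
    """Keep only problems with at least one reward-1 roll-out.
    `rollout_rewards` maps a problem id to the binary rewards of its
    56 roll-outs; problems never solved by pi_ref are dropped, since
    they provide no useful learning signal."""
    return {pid: rewards
            for pid, rewards in rollout_rewards.items()
            if any(r == 1 for r in rewards)}
```

For example, a problem whose roll-outs are all incorrect or incomplete (all rewards 0) is removed, while a problem with even one correct roll-out is retained.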
1 DeepSeek-R1 and its distilled variants output CoT reasoning between tokens <think> and <\think>, followed by a final solution, which is usually a summarization of the CoT reasoning.

2.3 Algorithms for Test-Time Compute and Search

Equipped with a value model $V : \mathcal{S} \to \mathbb{R}$, we can now apply it to scale test-time compute of a generator model $\pi$. For search-based approaches, we focus on block-wise search where a "block" refers to a sequence of tokens (e.g., blocks of 4096 tokens worked best in our experiments). We let $N$ denote the inference budget, which is the number of generations we can sample (e.g., generating four responses and taking a majority vote is $N = 4$).

BFS. Breadth-first search (BFS) [47, 28] is a natural search method that approximates the optimal KL-regularized policy given a good value model [52]. Given a prompt $x$, BFS samples $N$ parallel blocks $b_i$ from $\pi$ and selects the block with the highest value $b^\star = \arg\max_{b_i} V(x, b_i)$, which gets added to the prompt, i.e., $x \leftarrow x, b^\star$. The process repeats until the response finishes. Note the number of tokens generated from $\pi$ is roughly equivalent to $N$ independent generations from $\pi$.

Beam Search. One weakness of BFS is that parallel blocks are correlated because they share the same prefix, which limits diversity. Beam search with width $w$ (Algorithm 1) is a generalization that
keeps $B = N/w$ (assumed to be an integer) partial responses and branches $w$ parallel blocks from each one [26, 5, 40, 6, 36]. Given a prompt $x$, beam search first generates $N$ parallel blocks. However, unlike BFS, beam search keeps the top $B$ beams with the highest scores, and then samples $w$ parallel blocks per beam at the next step. Since $B \times w = N$ blocks are sampled at each step, the compute budget is also $N$. We illustrate beam search with $N = 4$ and $w = 2$ in Fig. 2 (right).

DVTS. Diverse verifier tree search (DVTS) is a meta-algorithm that further increases diversity by running parallel searches each with smaller budgets [6]. Specifically, DVTS-M runs $M$ parallel beam searches each with budget $N/M$ (assumed to be an integer), and aggregates responses into a final answer. We remark that a crucial detail of beam search and DVTS is how the final set of beams/responses is aggregated. Prior works [6, 36, 34] select the response with the highest score, which is analogous to a final best-of-N (BoN; Algorithm 2). Instead, we found that taking a weighted majority vote (WMV; Algorithm 3) led to much better performance, which is demonstrated by Fig. 3 (left).

Computational Efficiency of Block-wise Search. Since value scores are only used at the end of each block or at the end of the whole response, the FLOPs required for block-wise value model guidance is a tiny fraction (≪1%) of the generation cost from $\pi$. We compute FLOP estimates in Appendix H to concretely show this.

3 Experiments

We extensively evaluate value-guided search (VGS) with our 1.5B value model DeepSeek-VM-1.5B, focusing on guiding the CoT reasoning of DeepSeek models [18]. The best VGS setup for our value model is beam search with final WMV aggregation, beam width 2, block size 4096, and with DVTS (for larger inference budgets). We show this setup outperforms other test-time compute methods (e.g., MV, WMV, BoN) and other scoring models (e.g., existing 7B PRMs and a 1.5B Bradley-Terry reward model trained on our dataset).
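As a recap of Section 2.3, the block-wise beam search of Algorithm 1 can be sketched as follows. Here `sample_block`, `value` and `finished` are stand-ins for the generator, the value model and the end-of-response test; finished beams are carried over and only the highest-value continuations of unfinished beams survive. This is a simplified sketch, not the released implementation:

```python
def beam_search(prompt, sample_block, value, finished, N=4, w=2):
    """Sketch of Algorithm 1: beam search with width w and budget N.
    Keeps B = N/w beams; each unfinished beam branches w candidate
    blocks per step, and the best continuations (by value) survive."""
    B = N // w
    beams = [prompt] * B
    while any(not finished(b) for b in beams):
        done = [b for b in beams if finished(b)]
        cands = [b + sample_block(b)
                 for b in beams if not finished(b)
                 for _ in range(w)]
        cands.sort(key=value, reverse=True)
        beams = done + cands[:B - len(done)]
    return beams  # aggregate with BoN or a weighted majority vote
```

With toy callables where a block is a single character, the value is the response length and a response is finished at length 3, `beam_search("p", lambda b: "x", len, lambda b: len(b) >= 3)` returns two copies of "pxx".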
We remark that our search results use a fixed beam width and block size for all problems; this is more practical than prior works on "compute-optimal scaling," which vary search parameters for each problem and require estimating each problem's difficulty [6, 36, 25]. Please see Appendices E and F for additional details on value model training and inference.

Benchmarks. We evaluate on the 2024 and 2025 editions of the American Invitational Mathematics Exam (AIME) and the February Harvard-MIT Mathematics Tournament (HMMT).^2 Both AIME and HMMT are prestigious high school math competitions in the US that have also been used to evaluate frontier LLMs [31, 18, 1]. We use AIME I & II and the individual part of HMMT, yielding 30 problems per competition. To mitigate overfitting on a single widely used benchmark, we report the overall averaged accuracy unless otherwise stated. Per-benchmark plots are relegated to Appendix C.

Baseline Models. We evaluate two state-of-the-art 7B PRMs with distinct training styles: Math-Shepherd-Mistral-7B-PRM [44] and Qwen2.5-Math-PRM-7B [50]. Math-Shepherd uses Monte Carlo roll-outs from each step to estimate per-step value, while the Qwen2.5 PRM uses LLM-Judge annotation for each step, similar to
the per-step human annotation of PRM800K [23]. As a step-level value model, Math-Shepherd-PRM-7B is more closely related to our token-level value model. Finally, we also evaluate a 1.5B Bradley-Terry (BT) [8] model, called DeepSeek-BT-1.5B, which we trained on our dataset (see Appendix G for training details).

^2 https://maa.org/maa-invitational-competitions and https://www.hmmt.org

Test-time scaling DeepSeek-1.5B (N = 256)    AIME-24     AIME-25     HMMT-24     HMMT-25     AVG
VGS w/ DeepSeek-VM-1.5B (ours)               72.0±0.4    46.7±0.7    31.4±2.0    32.8±0.8    45.7±1.0
WMV w/ DeepSeek-VM-1.5B (ours)               69.6±3.9    45.1±2.2    29.1±2.6    28.9±2.6    43.2±1.4
VGS w/ DeepSeek-BT-1.5B (ours)               73.1±1.4    40.6±0.8    28.1±1.9    27.5±0.0    42.3±0.5
WMV w/ DeepSeek-BT-1.5B (ours)               72.0±3.3    40.5±2.9    25.3±2.3    24.6±4.7    40.6±1.6
VGS w/ Qwen2.5-Math-PRM-7B                   71.1±1.0    38.9±1.4    26.7±1.2    24.2±0.2    40.2±0.5
WMV w/ Qwen2.5-Math-PRM-7B                   70.6±3.1    39.1±2.1    25.4±2.4    24.0±3.2    39.8±1.4
VGS w/ MathShepherd-PRM-7B                   70.6±3.1    41.9±1.4    30.0±1.4    23.9±1.4    41.6±0.9
WMV w/ MathShepherd-PRM-7B                   71.2±3.2    40.0±2.5    27.9±2.3    25.6±3.1    41.2±1.4
MV@256                                       71.0±3.5    38.9±1.9    24.4±1.7    24.3±2.9    39.7±1.2

Test-time scaling larger models with our DeepSeek-VM-1.5B
VGS w/ DeepSeek-7B (N = 128)                 82.2±0.8    59.4±0.8    42.8±2.8    41.1±1.6    56.4±0.8
MV w/ DeepSeek-7B (N = 128)                  77.1±1.1    56.5±1.6    34.7±1.6    33.8±2.5    50.5±0.9
VGS w/ DeepSeek-14B (N = 64)                 86.7±2.7    59.6±0.6    46.7±2.7    51.1±1.6    61.0±0.9
MV w/ DeepSeek-14B (N = 64)                  80.6±1.2    67.0±2.0    40.6±1.8    50.1±2.0    59.6±0.9

Pass@N baselines for various models
DeepSeek-1.5B Pass@1                         28.2±6.1    22.4±4.1    13.9±4.2    13.0±3.9    19.4±1.1
DeepSeek-32B Pass@1                          77.2±4.9    60.4±6.0    38.0±4.6    42.1±5.2    54.4±1.3
DeepSeek-R1 (671B) Pass@1                    85.0±2.1    70.0±0.9    41.7±3.5    46.7±2.4    60.8±0.5
o3-mini-medium Pass@1                        49.2±6.8    45.8±6.6    32.4±5.4    36.6±6.0    41.0±1.5
o3-mini-medium Pass@8                        83.0±4.6    77.4±3.9    55.9±4.3    64.9±4.4    70.3±2.1
o4-mini-medium Pass@1                        85.4±4.3    82.3±4.5    50.4±5.0    61.1±6.4    69.8±2.5
o4-mini-medium Pass@8                        95.4±2.6    93.3±0.4    69.7±3.2    84.5±2.5    85.7±1.1

Table 1: (Top) Weighted majority vote (WMV) and VGS results for DeepSeek-1.5B with an inference budget of N = 256, using various scoring models. (Middle) Compares MV and VGS for larger DeepSeek models guided with our DeepSeek-VM-1.5B. (Bottom) Lists performance of DeepSeek models and strong closed-source reasoning models. For VGS, ± indicates standard deviation across 3 seeds; for MV, WMV, and Pass@N, ± denotes the bootstrap standard deviation over 100 repetitions. We bold the highest avg. accuracy and underline the second highest. Appendix C.1 contains more baselines.

3.1 Main Results (Table 1)

In the top section of Table 1, we fix the generator to DeepSeek-1.5B^3 and the test-time budget to N = 256, and compare VGS to weighted majority voting (WMV), using our value model, the BT model, and the baseline PRMs. We see that VGS and WMV with DeepSeek-VM-1.5B achieve the two highest scores, outperforming the BT reward model and prior PRMs. This shows that our value model is not only a strong outcome reward model (ORM) but also an effective value model for guiding search. Notably, with a budget of 256, our 1.5B value model can guide DeepSeek-1.5B (total parameter count is 3B) to reach parity with the pass@1 of o3-mini-medium, a strong math reasoning model from OpenAI. Intriguingly, while DeepSeek-BT-1.5B was only trained as an ORM, we find that
VGS also improves performance relative to WMV, suggesting that BT models may also provide meaningful block-wise feedback to guide search. We also observe that accuracies for the 7B baseline PRMs (Math-Shepherd and Qwen2.5-Math) are only slightly higher than MV@256 and do not improve with search, which suggests that these PRMs are likely out-of-distribution (OOD) for the long CoTs generated by DeepSeek-1.5B.

In the middle section of Table 1, we guide stronger DeepSeek models with sizes 7B and 14B, and compare VGS to MV, a standard TTC method that does not use an external scoring model. We see that VGS again achieves higher accuracy than MV for both 7B and 14B, which suggests that DeepSeek-VM-1.5B is also useful in guiding the CoT of stronger DeepSeek models. However, we observe that the gap between VGS and MV becomes smaller for larger DeepSeek models, suggesting that DeepSeek-14B CoTs may be becoming OOD for our value model, which was trained on DeepSeek-1.5B CoTs. To guide more capable models, new value models should be trained on rollouts from similarly capable models; however, we do not foresee this being a practical concern given the scalability of our training process (described in Section 2 and summarized in Appendix B). Finally, we note that the performance of all models on AIME-24 is consistently higher than on the other competitions, suggesting the importance of evaluating on diverse and newer competitions to reduce the risk of overfitting or data contamination.

^3 Throughout the paper, we use DeepSeek-XB as shorthand for DeepSeek-R1-Distill-Qwen-XB.

Figure 3: Test-Time Compute with DeepSeek-VM-1.5B. (Left) Compares best-of-N (BoN), weighted majority voting (WMV), and VGS with either BoN or WMV as the final aggregation. (Right) Compares VGS to majority voting (MV), a standard baseline that does not require a scorer.
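The two final-aggregation rules compared in Fig. 3 are easy to state precisely. The sketch below assumes answers have already been canonicalized (e.g., via a symbolic math checker) so that equivalent answers compare equal; the function names are illustrative.

```python
from collections import defaultdict

def best_of_n(answers, scores):
    """Best-of-N (BoN): return the single answer with the highest score."""
    return max(zip(answers, scores), key=lambda pair: pair[1])[0]

def weighted_majority_vote(answers, scores):
    """Weighted majority vote (WMV): each response votes for its final
    answer with weight equal to its score; the answer with the largest
    total weight wins. With all weights equal this reduces to plain MV.
    """
    totals = defaultdict(float)
    for ans, s in zip(answers, scores):
        totals[ans] += s
    return max(totals, key=totals.get)
```

The two rules can disagree: with answers `["42", "42", "7"]` and scores `[0.4, 0.4, 0.7]`, BoN picks the single highest-scored `"7"`, while WMV picks `"42"` because its accumulated weight (0.8) exceeds 0.7, illustrating how WMV pools evidence across responses.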
3.2 Test-Time Compute Scaling for Search

This section presents three experiments designed to analyze the TTC scaling properties of VGS. Our investigation addresses three key research questions:
1. Does VGS, with its block-wise guidance, demonstrate superior performance compared to response-level aggregation methods such as BoN or WMV?
2. How does the TTC scaling behavior of VGS compare to the standard score-free baseline MV?
3. How does the TTC scaling behavior of DeepSeek-VM-1.5B compare to baseline models?

Response-Level Selection vs Search-Based Block-Level Selection. While BoN and WMV represent standard approaches for selecting responses using an ORM, block-wise VGS guides response generation through sequential block-by-block selection. As Fig. 3 (left) illustrates, WMV consistently outperforms BoN across all inference budget scales, which demonstrates the benefit of combining MV with value scores. Furthermore, VGS (with WMV as the final aggregation step) yields additional improvements beyond WMV alone. This confirms the benefits of search and aligns with conclusions from previous studies [6, 36, 25]. Interestingly, we do not observe the same benefits of search if BoN is used as the final aggregation step, suggesting that WMV is a critical component of VGS.

Response Length for VGS. In addition to consistent performance gains, VGS also produces noticeably shorter responses compared to
the base DeepSeek-1.5B model. In Figure 15 (Appendix C.7), we present histograms of response lengths across all benchmarks. The results show that VGS consistently generates more concise outputs, whereas the base model often reaches the generation cap, with up to 50% of its responses being unfinished. On average, VGS responses are 11,219 tokens long, compared to 12,793 for DeepSeek-1.5B, a reduction of over 12% in token and thus FLOPs usage.

Figure 4: TTC Scaling of Various Scorers. Comparison of our 1.5B value model (VM), our 1.5B Bradley-Terry reward model (BT), and two 7B state-of-the-art PRMs for two TTC scaling methods: (Left) WMV or (Right) VGS (with WMV as the final aggregation step).

VGS vs Majority Voting. As Fig. 3 (right) demonstrates, VGS consistently achieves higher accuracy than MV, attaining equivalent performance while requiring substantially lower inference budgets (also shown in Fig. 1, right). Fully closing the gap with the oracle Pass@N curve may require a larger value model trained on more extensive datasets.

DeepSeek-VM-1.5B vs Baseline Scoring Models. Fig. 4 benchmarks DeepSeek-VM-1.5B against existing PRMs and our BT model. We observe that DeepSeek-VM-1.5B consistently delivers superior performance both as an ORM for WMV (left) and as a guidance mechanism for block-wise search (right). Notably, we find our BT model to be surprisingly effective as a search-guidance model, which suggests that our token-level dataset plays an important role in successful downstream search.

3.3 Scaling up the Generator Model Sizes

Figure 5: VGS + WMV Performance when Guiding Larger Models.
With the same DeepSeek-VM-1.5B providing guidance, search continues to improve with more test-time compute.

In Fig. 5, we scale up our experiments to guide the larger 7B and 14B DeepSeek models. Here, we run VGS with the same search parameters and the same value model DeepSeek-VM-1.5B. Although the 7B and 14B DeepSeek models are in theory OOD for our value model [52], which was trained on DeepSeek-1.5B roll-outs, we observe that VGS continues to scale without plateauing as test-time compute increases. This provides some evidence that a value model trained with a weaker verifier policy can generalize effectively and guide the CoTs of stronger models. Such generalization is particularly valuable, as it is significantly cheaper to collect training data from smaller πref models. This form of “weak-to-strong” generalization [10] appears to be a promising direction for future research.

4 Ablation Studies

Figure 6: Ablation: Search Block Size. AIME-24 accuracies for beam search (width 2) with varying block sizes from 16 to 16384. We found 4096 to be optimal across test-time budgets and benchmarks.

To investigate the role of key hyperparameters in search, we perform sensitivity analyses of block size and beam width on AIME-24 across varying inference budgets. We
also ablate the amount of DVTS parallelism. These tests suggest that there is a consistent choice of search hyperparameters that works well across inference budgets.

4.1 Different Search Parameters and Methods

Block Size. We perform beam search with width 2 using search block sizes from 16 to 16384. Fig. 6 shows AIME-24 accuracies across three inference budgets N, revealing that the optimal choice of 4096 stays consistent across different N. We see a decline in performance when searching with more fine-grained blocks.

Beam Width. We perform beam search with block size 4096 using varying beam widths, with breadth-first search (BFS) being the special case where the beam width equals N. Fig. 7 (left) shows AIME-24 accuracies across five inference budgets, demonstrating that beam width 2 is consistently optimal across different N. We note that our optimal beam width differs from prior works’, which found 4 to work best [6, 36, 25].

DVTS Parallelism. Fig. 7 (right) shows the effect of ablating DVTS from VGS. For each inference budget, we report average accuracies without DVTS and with the best DVTS parallelism M.

Figure 7: Ablations: Beam Width and DVTS. (Left) AIME-24 accuracies for beam search with various widths (Section 2.3) across inference budgets N. BFS is equivalent to setting the width to N. We find that the optimal beam width is robust across multiple TTC budgets. (Right) Averaged accuracy for beam width 2 with and without DVTS. For DVTS, we report the best result with parallelism M > 1 per inference budget N, which we find scales better at higher budgets.

We observe that DVTS becomes more effective at higher budgets and scales better than a single search tree, which is consistent with findings from prior works [6].
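The DVTS-M meta-algorithm ablated here has a simple shape: run M independent searches with budget N/M each, pool their final responses, and aggregate with a weighted majority vote. The sketch below is illustrative only; `run_search` and `score` are hypothetical stand-ins for one beam-search tree and the value model's final score.

```python
from collections import defaultdict

def dvts(prompt, run_search, score, M=4, N=256):
    """DVTS-M sketch: M independent search trees with budget N // M each,
    whose final responses are pooled and combined by weighted majority vote.

    `run_search(prompt, budget, seed)` is an assumed stand-in that runs one
    beam-search tree and returns its final answers; `score` stands in for
    the value model's outcome-level score.
    """
    pooled = []
    for seed in range(M):                 # M trees (run in parallel in practice)
        pooled += run_search(prompt, N // M, seed)
    votes = defaultdict(float)
    for ans in pooled:                    # weighted majority vote over the pool
        votes[ans] += score(ans)
    return max(votes, key=votes.get)
```

Because each tree starts from an independent seed, the pooled responses are more diverse than N samples from a single tree, which is one intuition for why DVTS scales better at large budgets.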
However, we find that DVTS is never worse than a single search tree even at smaller inference budgets, which is the opposite of the conclusion reached by prior works [6]. This discrepancy may be explained by the fact that we use WMV to combine the DVTS responses, which appears to be a more robust way to perform DVTS than the BoN used in prior works, given our findings from Fig. 3.

4.2 Random vs. Value-Guided Search

Figure 8: Ablation: Random Search. Random search is the same search process as VGS except that intermediate blocks are randomly selected instead of using our value model. Hybrid is a mixture where we flip a fair coin at the start of each search tree to decide whether to use random search or VGS. We see that selecting blocks with the highest value improves accuracy across inference budgets.

Finally, we directly ablate the role of our value model’s guidance during the search process. We perform VGS (with the same width, block size, and DVTS) but randomly select blocks instead of selecting the blocks with the highest value. We still aggregate the final beams via WMV with our value model, so the only change is how intermediate blocks are chosen. We call this process “random search”. Thus, if our value model is helpful for search, we should expect VGS to outperform random search. Indeed, Fig. 8 validates this hypothesis. We also evaluate a hybrid approach where half of DVTS’s parallel trees use random search and the other half use VGS. We find that this hybrid approach lands roughly between pure VGS and pure random search, again validating that block selection with our value model improves over random selection.

5 Conclusion

In this paper, we introduced block-wise Value-Guided Search (VGS), a simple yet effective strategy for steering long-context CoT reasoning models. We proposed a scalable token-level value model training pipeline that does not require a pre-defined notion of “step” or expensive per-step annotations. We collect a large dataset of reasoning CoTs (OpenR1-VM) and train a lean 1.5B value model (DeepSeek-VM-1.5B), which we show can effectively guide the CoTs of DeepSeek models up to 14B in size. With extensive experiments, we demonstrate that VGS with DeepSeek-VM-1.5B enjoys better test-time compute scaling than standard methods (e.g., majority voting, best-of-N) and other scoring models (e.g., existing PRMs and a BT model), achieving a higher performance ceiling while reducing the FLOPs needed to match the performance of baseline methods (Fig. 1). Our results point to VGS as a promising approach to scale the TTC of emerging reasoning models.

Discussion of Limitations. Our value model is trained exclusively on completions / roll-outs from a lean reasoning model πref (e.g., DeepSeek-R1-Distill-Qwen-1.5B). As frontier LLMs continue to advance, the distribution of their generated responses may increasingly diverge from our training distribution, potentially degrading scoring and search performance. To maintain optimal performance, new value models will need to be retrained on rollouts from updated generator policies.
However, we do not foresee this as a major practical concern given the simplicity and scalability of our pipeline. To facilitate retraining and adaptation to similar verifiable domains, we open-source our codebase and provide a step-by-step recipe in Appendix B for data collection, training, and search inference.

Acknowledgments

JPZ is supported by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) (567916). ZG is supported by a LinkedIn-Cornell Grant. Wen Sun is supported by NSF IIS-2154711, NSF CAREER 2339395, and DARPA LANCER: LeArning Network CybERagents. This research is also supported by grants from the National Science Foundation (IIS-1846210, IIS-2107161, IIS-1724282, and HDR-2118310), the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875), DARPA, arXiv, LinkedIn, Google, and the New York Presbyterian Hospital.

References

[1] Marah Abdin, Sahaj Agarwal, Ahmed Awadallah, Vidhisha Balachandran, Harkirat Behl, Lingjiao Chen, Gustavo de Rosa, Suriya Gunasekar, Mojan Javaheripi, Neel Joshi, et al. Phi-4-reasoning technical report. arXiv preprint arXiv:2504.21318, 2025.
[2] Loubna Ben Allal, Lewis Tunstall, Anton Lozhkov, Elie Bakouch, Guilherme Penedo, Hynek Kydlicek, and Gabriel Martín Blázquez. Open R1: Update #2. https://huggingface.co/blog/open-r1/update-2, February 2025. Hugging Face Blog.
[3] Alex Ayoub, Kaiwen Wang, Vincent Liu, Samuel Robertson, James McInerney, Dawen Liang, Nathan Kallus, and Csaba Szepesvári. Switching the loss reduces the cost in batch
(offline) reinforcement learning. arXiv preprint arXiv:2403.05385, 2024.
[4] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.
[5] Dhruv Batra, Payman Yadollahpour, Abner Guzman-Rivera, and Gregory Shakhnarovich. Diverse M-best solutions in Markov random fields. In Computer Vision–ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part V 12, pages 1–16. Springer, 2012.
[6] Edward Beeching, Lewis Tunstall, and Sasha Rush. Scaling test-time compute with open models, 2024. URL https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute.
[7] Marc G Bellemare, Will Dabney, and Rémi Munos. A distributional perspective on reinforcement learning. In International Conference on Machine Learning, pages 449–458. PMLR, 2017.
[8] Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.
[9] Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large language monkeys: Scaling inference compute with repeated sampling. arXiv preprint arXiv:2407.21787, 2024.
[10] Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, et al. Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390, 2023.
[11] Jonathan D Chang, Kiante Brantley, Rajkumar Ramamurthy, Dipendra Misra, and Wen Sun. Learning to generate better than your LLM. arXiv preprint arXiv:2306.11816, 2023.
[12] Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daumé III, and John Langford. Learning to search better than your teacher.
In International Conference on Machine Learning, pages 2058–2066. PMLR, 2015.
[13] Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. AlphaMath almost zero: Process supervision without process. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=VaXnxQ3UKo.
[14] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[15] Scott Davies, Andrew Y Ng, and Andrew Moore. Applying online search techniques to continuous-state reinforcement learning. In AAAI/IAAI, pages 753–760, 1998.
[16] Ahmed El-Kishky, Alexander Wei, Andre Saraiva, Borys Minaiev, Daniel Selsam, David Dohan, Francis Song, Hunter Lightman, Ignasi Clavera, Jakub Pachocki, et al. Competitive programming with large reasoning models. arXiv preprint arXiv:2502.06807, 2025.
[17] Jesse Farebrother, Jordi Orbay, Quan Vuong, Adrien Ali Taïga, Yevgen Chebotar, Ted Xiao, Alex Irpan, Sergey Levine, Pablo Samuel Castro, Aleksandra Faust, et al. Stop regressing: Training value functions via classification for scalable deep RL. arXiv preprint arXiv:2403.03950, 2024.
[18] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and others. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[19] Seungwook Han, Idan Shenfeld, Akash Srivastava, Yoon Kim, and Pulkit Agrawal. Value augmented sampling for language model alignment and personalization. arXiv preprint arXiv:2405.06639, 2024.
[20] Ehsan Imani, Kai Luedemann, Sam Scholnick-Hughes, Esraa Elelimy, and
Martha White. Investigating the histogram loss in regression. arXiv preprint arXiv:2402.13425, 2024.
[21] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
[22] Hynek Kydlíček. Math-Verify: Math verification library, 2025. URL https://github.com/huggingface/Math-Verify. A library for rule-based verification of mathematical answers.
[23] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. In The Twelfth International Conference on Learning Representations, 2023.
[24] Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, and Asli Celikyilmaz. Don’t throw away your value model! Generating more preferable text with value-guided Monte-Carlo tree search decoding. arXiv preprint arXiv:2309.15028, 2023.
[25] Runze Liu, Junqi Gao, Jian Zhao, Kaiyan Zhang, Xiu Li, Biqing Qi, Wanli Ouyang, and Bowen Zhou. Can 1B LLM surpass 405B LLM? Rethinking compute-optimal test-time scaling. arXiv preprint arXiv:2502.06703, 2025.
[26] Bruce T Lowerre. The Harpy speech recognition system. Carnegie Mellon University, 1976.
[27] Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592, 2024.
[28] Sidharth Mudgal, Jong Lee, Harish Ganapathy, YaGuang Li, Tao Wang, Yanping Huang, Zhifeng Chen, Heng-Tze Cheng, Michael Collins, Trevor Strohman, et al. Controlled decoding from language models. arXiv preprint arXiv:2310.17022, 2023.
[29] Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, and Tatsunori Hashimoto. s1: Simple test-time scaling. arXiv preprint arXiv:2501.19393, 2025.
[30] Ivo Petrov, Jasper Dekoninck, Lyuben Baltadzhiev, Maria Drencheva, Kristian Minchev, Mislav Balunović, Nikola Jovanović, and Martin Vechev. Proof or bluff? Evaluating LLMs on the 2025 USA Math Olympiad. arXiv preprint arXiv:2503.21934, 2025.
[31] Qwen Team. Qwen3: Think deeper, act faster, 2025. URL https://qwenlm.github.io/blog/qwen3/. Accessed: 2025-05-08.
[32] Nikhil Sardana, Jacob Portes, Sasha Doubov, and Jonathan Frankle. Beyond Chinchilla-optimal: Accounting for inference in language model scaling laws. arXiv preprint arXiv:2401.00448, 2023.
[33] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms, 2017. URL https://arxiv.org/abs/1707.06347.
[34] Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. Rewarding progress: Scaling automated process verifiers for LLM reasoning. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=A6Y7AqlzLW.
[35] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017.
[36] Charlie Victor Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling LLM test-time compute optimally can be more effective than scaling parameters for reasoning. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=4FWAwZtd2n.
[37] Giulio Starace, Oliver Jaffe, Dane | https://arxiv.org/abs/2505.17373v1 |
Sherburn, James Aung, Jun Shern Chan, Leon Maksin, Rachel Dias, Evan Mays, Benjamin Kinsella, Wyatt Thompson, et al. PaperBench: Evaluating AI’s ability to replicate AI research. arXiv preprint arXiv:2504.01848, 2025.
[38] Jiashuo Sun, Yi Luo, Yeyun Gong, Chen Lin, Yelong Shen, Jian Guo, and Nan Duan. Enhancing chain-of-thoughts prompting with iterative bootstrapping in large language models. In Kevin Duh, Helena Gomez, and Steven Bethard, editors, Findings of the Association for Computational Linguistics: NAACL 2024, pages 4074–4101, Mexico City, Mexico, June 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-naacl.257. URL https://aclanthology.org/2024.findings-naacl.257/.
[39] Richard S Sutton, Andrew G Barto, et al. Reinforcement Learning: An Introduction, volume 1. MIT Press, Cambridge, 1998.
[40] Ashwin K Vijayakumar, Michael Cogswell, Ramprasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424, 2016.
[41] Junlin Wang, Shang Zhu, Jon Saad-Falcon, Ben Athiwaratkun, Qingyang Wu, Jue Wang, Shuaiwen Leon Song, Ce Zhang, Bhuwan Dhingra, and James Zou. Think deep, think fast: Investigating efficiency of verifier-free inference-time-scaling methods. arXiv preprint arXiv:2504.14047, 2025.
[42] Kaiwen Wang, Kevin Zhou, Runzhe Wu, Nathan Kallus, and Wen Sun. The benefits of being distributional: Small-loss bounds for reinforcement learning. Advances in Neural Information Processing Systems, 36:2275–2312, 2023.
[43] Kaiwen Wang, Nathan Kallus, and Wen Sun. The central role of the loss function in reinforcement learning. Statistical Science, 2025. Forthcoming.
[44] Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-Shepherd: Verify and reinforce LLMs step-by-step without human annotations. arXiv preprint arXiv:2312.08935, 2023.
[45] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
[46] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=1PL1NIMMrw.
[47] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822, 2023.
[48] Di Zhang, Xiaoshui Huang, Dongzhan Zhou, Yuqiang Li, and Wanli Ouyang. Accessing GPT-4 level mathematical olympiad solutions via Monte Carlo tree self-refine with LLaMA-3 8B. arXiv preprint arXiv:2406.07394, 2024.
[49] Hanning Zhang, Pengcheng Wang, Shizhe Diao, Yong Lin, Rui Pan, Hanze Dong, Dylan Zhang, Pavlo Molchanov, and Tong Zhang. Entropy-regularized process reward model. arXiv preprint arXiv:2412.11006, 2024.
[50] Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, and Junyang Lin. The lessons of developing process reward models in mathematical reasoning. arXiv preprint arXiv:2501.07301, 2025.
[51] Lianmin Zheng, Liangsheng Yin, Zhiqiang Xie, Chuyue Livia Sun, Jeff Huang, Cody Hao Yu, Shiyi Cao, Christos Kozyrakis, Ion Stoica, Joseph E Gonzalez, et al. SGLang: Efficient execution of structured
language model programs. Advances in Neural Information Processing Systems, 37:62557–62583, 2024.
[52] Jin Peng Zhou, Kaiwen Wang, Jonathan Chang, Zhaolin Gao, Nathan Kallus, Kilian Q Weinberger, Kianté Brantley, and Wen Sun. Q♯: Provably optimal distributional RL for LLM post-training. arXiv preprint arXiv:2502.20548, 2025.

Appendices

Table of Contents
• Appendix A. Related Works
• Appendix B. Summary of VGS Pipeline
• Appendix C. Additional Experiment Results
• Appendix D. Further Details of Data Collection
• Appendix E. Further Details for Value Model Training
• Appendix F. Further Details for Inference with Search
• Appendix H. Inference FLOPs Computation

Note: In the appendix, we also provide additional empirical results in Appendix C. Two new results are worth highlighting here. First, in Appendix C.6, we provide test-time scaling results for guiding DeepSeek-1.5B further trained with PPO on our math dataset. We find that VGS improves test-time scaling compared to MV and WMV, which shows that our method nicely complements policy-based RL training. Moreover, in Appendix C.7, we include three qualitative examples of contrastive blocks that were selected or rejected by our value model during the beam search process. We see that our value model prefers blocks with more straightforward logical deductions, yielding more efficient and effective CoT for reasoning.

A Related Works

[Figure 9 shows a taxonomy diagram with nodes: Test-Time Compute, split into Score-Free (Majority Voting (MV), Sequential CoT) and Score-Based; Score-Based, split into Outcome-Level (Best-of-N, Weighted MV) and Process-Level (PRMs, Value Model).]

Figure 9: Taxonomy of TTC Methods. Score-free TTC methods do not require an external scoring model, e.g., by taking a majority vote. Score-based TTC methods require an external scoring model. The coarsest scoring model is an outcome reward model (ORM), which scores a whole response and can be used for best-of-N or weighted MV.
A more fine-grained type of scoring model is a process-level scorer, which includes process reward models (PRMs) and value models; these more fine-grained scoring models can be used for search.

Test-time compute (TTC) broadly refers to algorithms that improve problem-solving performance when given more compute (i.e., FLOPs) at test-time. Fig. 9 summarizes the taxonomy of TTC methods. The simplest TTC methods are score-free in the sense that they do not require access to an external scoring model. A notable example is majority voting (MV), which selects the most frequent answer among N responses, breaking ties randomly [14, 46, 9]. Also known as self-consistency, MV can be applied to tasks where the output space is equipped with an equivalence relation, e.g., mathematical formulae that can be symbolically checked for equality. Other score-free TTC methods include sequentially revising the response via CoT prompting [38, 29] and hybrid methods [41]. There are also score-based TTC methods that employ an external scorer. The coarsest type of scorer is an outcome reward model (ORM), which takes the full prompt and response as input and produces a scalar that measures the quality / correctness of the response. Popular examples of ORMs include Bradley-Terry reward models [8] and LLM-as-a-Judge [4]. ORMs can be used for best-of-N (BoN), which selects the response with the highest
score [14, 4]. ORMs can also be used for weighted majority voting (WMV), which generalizes MV by making the strength of a response’s vote proportional to its ORM score. WMV typically provides an improvement over vanilla (unweighted) MV [6, 25], which is also what we observe in our experiments (Fig. 3). Outcome-level TTC methods (e.g., BoN, WMV) may be further refined with process-level scorers that guide the generation process in a fine-grained manner. We remark that our value model can act as both an outcome-level and a process-level scorer. When queried with a partial response, the value model predicts the expected quality of future completions under πref. When queried at the end of a full response, the value model predicts the quality of the final response. Indeed, our best performing setup for value-guided search (VGS) uses intermediate values to guide block-wise beam search and uses final values via WMV to aggregate the final beams, employing both the process-level and outcome-level scoring capabilities of the value model. Finally, to the best of our knowledge, the combination of search with WMV is novel to this work, and we found it to be a crucial ingredient for effectively scaling the TTC of DeepSeek models.

Prior works on process-level TTC largely focused on step-wise search with process reward models (PRMs), which measure the correctness of a fine-grained step [23]. They showed that step-wise search can provide better performance than outcome-level TTC methods [28, 47, 24, 36, 48, 34, 13]. However, training step-wise PRMs requires a pre-defined notion of step, which is challenging to define explicitly for general reasoning [18]; e.g., prior works used newlines \n to separate steps, but DeepSeek’s CoTs often contain newlines in-between coherent thoughts.
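The contrast between newline-delimited "steps" and fixed-size blocks can be made concrete with a small sketch. This is illustrative only, using whitespace tokenization as a stand-in for a real tokenizer: fixed-size blocks are well-defined for any token sequence, whereas newline-splitting fragments a single coherent thought that happens to span several lines.

```python
def split_into_blocks(tokens, block_size):
    """Split a token sequence into fixed-size blocks; the last block may be
    shorter. Unlike newline-based step splitting, block boundaries are
    well-defined regardless of how the text uses newlines.
    """
    return [tokens[i:i + block_size] for i in range(0, len(tokens), block_size)]

# A single coherent thought spread over several lines: newline splitting
# produces three "steps", while block splitting ignores layout entirely.
cot = "so x = 2\nwait, check x\nyes, x = 2"
newline_steps = cot.split("\n")                    # 3 fragments of one thought
blocks = split_into_blocks(cot.split(), 4)         # layout-independent blocks
```

In the paper's setting the block size is counted in model tokens (e.g., 4096), but the principle is the same: the boundary depends only on position, not on the content of the CoT.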
Moreover, prior PRM training techniques require per-step annotations, via humans [23], LLM-Judges [50], or per-step MC rollouts [44, 27], which are expensive to collect for long reasoning traces; e.g., a single response from DeepSeek models typically contains hundreds of newlines. These limitations make it difficult to scale PRMs to long-context reasoning models [18], and these prior works could only evaluate on short-context models with easier benchmarks such as GSM8k [14] and MATH [23]. In contrast, our paper focuses on scaling process-level guidance to long-context reasoning models, and we propose a block-level search method that mitigates the above limitations. We train a token-level value model by collecting rollouts from random solution prefixes, which requires neither a pre-defined notion of step nor per-step annotations. We use our value model to guide a block-wise search process, where the block size is a hyperparameter, and we find there exists a consistent choice that works well across inference budgets (Fig. 6). Crucially, we are able to scale our value-guided search (VGS) to long-context DeepSeek models and demonstrate impressive performance and efficiency gains on challenging math competitions (Fig. 1). Closely related to our work is Setlur et al. [34], who propose to train a token-level process advantage verifier (PAV), which is the sum of π_ref's Q-function and an (off-policy) expert's advantage function, to guide step-wise search. This method is
similar to ours since the training also occurs at a token level and is agnostic to the definition of step. However, a limitation of the PAV is that if the expert disagrees with the underlying policy, then maximizing the PAV can lead to suboptimal behavior [12]. Our approach of directly using the value model does not have this issue. Moreover, Setlur et al. [34] proposed to use PAVs to guide step-wise search, which still requires a definition of step at inference time; in contrast, we propose to use block-wise search, which does not require a definition of step at inference. At a technical level, Setlur et al. [34] trained the PAV by minimizing the mean-squared error (MSE) loss; in contrast, we propose to use the cross-entropy loss, which has been shown to work better for downstream decision making [17, 43, 52]. We remark that some prior works proposed to use token-level value models to reweight the next-token distribution of the generator [28, 49, 19, 52]. However, these methods require one classifier call per token, which is more expensive than block-wise search. Moreover, token-level guidance might also be less effective because an imperfect value model may introduce cascading errors if queried at every token. We highlight that Mudgal et al. [28] also experimented with block-wise BFS and found it to be more effective at scaling test-time compute than reweighting the next-token distribution (i.e., token-wise guidance). One drawback of block-wise BFS is that the blocks may all become correlated due to sharing the same prefix. Thus, we build upon Mudgal et al. [28] by proposing to use beam search, which we show yields better test-time scaling for reasoning models (Fig. 6).
B Summary of VGS Pipeline
We provide a step-by-step recipe for running VGS on any verifiable domain of interest. This recipe is applicable to any task with a reward label for responses (i.e., outcome-level feedback).
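Concretely, for math tasks the outcome-level reward label can come from a simple rule-based verifier. The sketch below is a hypothetical, simplified checker (exact string match on the last \boxed{} answer); real pipelines typically also check symbolic equivalence and handle nested braces.

```python
import re

def boxed_answer(response: str):
    # Extract the last \boxed{...} answer (no nested braces handled).
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return matches[-1] if matches else None

def outcome_reward(response: str, label: str) -> int:
    # Reward 1 iff the parsed answer exactly matches the reference label.
    answer = boxed_answer(response)
    return int(answer is not None and answer.strip() == label.strip())

print(outcome_reward(r"... so the answer is \boxed{116}.", "116"))  # prints 1
print(outcome_reward("the model ran out of budget here", "116"))    # prints 0
```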
If the task has continuous rewards, a standard trick from distributional RL is to discretize the reward distribution as a histogram, and then the value model is simply the expected reward under the learned distribution [7, 20, 17, 42, 43].
1. Start with a verifiable domain, where responses are identified with a label and a reward.
2. Identify a good dataset of prompts.
3. Identify a set of roll-in policies and a single roll-out policy. The roll-in policies should provide a diverse distribution of solutions, and the roll-out policy should be strong enough to complete responses given a partial roll-in.
4. For each prompt, sample n roll-in responses from the set of roll-in policies.
5. For each roll-in response, sample m random indices {i_j}_{j∈[m]}, and collect a roll-out per index. Thus, there are nm roll-in, roll-out pairs per prompt.
6. Post-filter by removing prompts where all roll-out responses fail to complete or solve the prompt.
7. At this point, we have created our dataset of roll-in, roll-out pairs. We are now ready to train our value model.
8. Train a classifier / value model by following (Section 2.1). Sweep hyperparameters such as learning rate.
9. Choose a generator policy to be guided
by the value model. The most in-distribution choice is to use the roll-out policy π_ref.
10. Perform model selection by running outcome-level TTC (e.g., WMV) on some validation benchmark.
11. Sweep search parameters (e.g., block size, beam width, DVTS parallelism) on the validation benchmark.
12. Run the final model on the test benchmark with the best search parameters.
The sampling distribution for the cut-off index (Step 5) is also worth tuning. For example, values at earlier or middle indices may be harder to predict than final indices, so it is worth sampling more cut-off indices from these earlier regions.
C Additional Experiment Results
C.1 Full Main Results Table
We reproduce Table 1 with additional baselines: DeepSeek-7B, DeepSeek-14B, o1-mini-medium.

Test-time scaling DeepSeek-1.5B (N = 256) | AIME-24 | AIME-25 | HMMT-24 | HMMT-25 | AVG
VGS w/ DeepSeek-VM-1.5B (ours) | 72.0±0.4 | 46.7±0.7 | 31.4±2.0 | 32.8±0.8 | 45.7±1.0
WMV w/ DeepSeek-VM-1.5B (ours) | 69.6±3.9 | 45.1±2.2 | 29.1±2.6 | 28.9±2.6 | 43.2±1.4
VGS w/ DeepSeek-BT-1.5B (ours) | 73.1±1.4 | 40.6±0.8 | 28.1±1.9 | 27.5±0.0 | 42.3±0.5
WMV w/ DeepSeek-BT-1.5B (ours) | 72.0±3.3 | 40.5±2.9 | 25.3±2.3 | 24.6±4.7 | 40.6±1.6
VGS w/ Qwen2.5-Math-PRM-7B | 71.1±1.0 | 38.9±1.4 | 26.7±1.2 | 24.2±0.2 | 40.2±0.5
WMV w/ Qwen2.5-Math-PRM-7B | 70.6±3.1 | 39.1±2.1 | 25.4±2.4 | 24.0±3.2 | 39.8±1.4
VGS w/ MathShepherd-PRM-7B | 70.6±3.1 | 41.9±1.4 | 30.0±1.4 | 23.9±1.4 | 41.6±0.9
WMV w/ MathShepherd-PRM-7B | 71.2±3.2 | 40.0±2.5 | 27.9±2.3 | 25.6±3.1 | 41.2±1.4
MV@256 | 71.0±3.5 | 38.9±1.9 | 24.4±1.7 | 24.3±2.9 | 39.7±1.2
Test-time scaling larger models with our DeepSeek-VM-1.5B
VGS w/ DeepSeek-7B (N = 128) | 82.2±0.8 | 59.4±0.8 | 42.8±2.8 | 41.1±1.6 | 56.4±0.8
MV w/ DeepSeek-7B (N = 128) | 77.1±1.1 | 56.5±1.6 | 34.7±1.6 | 33.8±2.5 | 50.5±0.9
VGS w/ DeepSeek-14B (N = 64) | 86.7±2.7 | 59.6±0.6 | 46.7±2.7 | 51.1±1.6 | 61.0±0.9
MV w/ DeepSeek-14B (N = 64) | 80.6±1.2 | 67.0±2.0 | 40.6±1.8 | 50.1±2.0 | 59.6±0.9
Pass@N baselines for various models
DeepSeek-1.5B Pass@1 | 28.2±6.1 | 22.4±4.1 | 13.9±4.2 | 13.0±3.9 | 19.4±1.1
DeepSeek-1.5B Pass@256 | 81.9±1.7 | 62.6±3.6 | 54.2±4.9 | 57.1±3.8 | 63.9±0.9
DeepSeek-7B Pass@1 | 54.8±6.0 | 40.9±6.1 | 31.5±4.4 | 25.5±4.6 | 38.2±1.3
DeepSeek-14B Pass@1 | 72.4±5.4 | 53.9±5.5 | 36.4±4.8 | 36.5±5.5 | 49.8±1.3
DeepSeek-32B Pass@1 | 77.2±4.9 | 60.4±6.0 | 38.0±4.6 | 42.1±5.2 | 54.4±1.3
DeepSeek-R1 (671B) Pass@1 | 85.0±2.1 | 70.0±0.9 | 41.7±3.5 | 46.7±2.4 | 60.8±0.5
o1-mini-medium Pass@1 | 63.3±6.6 | 52.3±6.8 | 33.1±5.1 | 34.0±5.9 | 45.7±1.5
o1-mini-medium Pass@8 | 83.7±2.7 | 81.8±3.7 | 58.0±4.0 | 52.8±3.4 | 69.1±1.7
o3-mini-medium Pass@1 | 49.2±6.8 | 45.8±6.6 | 32.4±5.4 | 36.6±6.0 | 41.0±1.5
o3-mini-medium Pass@8 | 83.0±4.6 | 77.4±3.9 | 55.9±4.3 | 64.9±4.4 | 70.3±2.1
o4-mini-medium Pass@1 | 85.4±4.3 | 82.3±4.5 | 50.4±5.0 | 61.1±6.4 | 69.8±2.5
o4-mini-medium Pass@8 | 95.4±2.6 | 93.3±0.4 | 69.7±3.2 | 84.5±2.5 | 85.7±1.1

Table 2: (Top) Weighted majority vote (WMV) and VGS accuracies for DeepSeek-1.5B with an inference budget of N = 256, with various scoring models. (Middle) Compares MV and VGS for larger DeepSeek models guided with our DeepSeek-VM-1.5B. (Bottom) Lists performance of DeepSeek models and strong closed-source reasoning models. For VGS, ± indicates standard deviation across 3 seeds; for MV, WMV, and Pass@N, it denotes bootstrap with 100 repetitions. We bold the highest avg. accuracy and underline the second highest.
C.2 Per-benchmark Plots for Fig. 3
[Per-benchmark accuracy vs. inference budget (N) plots for AIME-24, AIME-25, HMMT-24, HMMT-25.]
Figure 10: Per-benchmark results for Fig. 3. (Left) Compares best-of-N (BoN), weighted majority voting (WMV) and VGS with either BoN or WMV for the final aggregation. (Right) Compares VGS to majority voting (MV), a standard baseline that does not require a scorer.
C.3 Per-benchmark Plots for Fig. 4
[Per-benchmark accuracy vs. inference budget (N) plots comparing scoring models.]
Figure 11: Per-benchmark results for Fig. 4. Comparison of our 1.5B value model (VM), our 1.5B Bradley-Terry reward model (BT), and two 7B state-of-the-art PRMs for two TTC scaling methods: (Left) WMV or (Right) VGS (with WMV as a final aggregation step).
C.4 Per-benchmark Plots for Fig. 5
[Per-benchmark accuracy vs. inference budget (N) plots for guiding 1.5B, 7B, and 14B generators.]
Figure 12: Per-benchmark results for Fig. 5. With the same DeepSeek-VM-1.5B providing guidance, search continues to improve with more test-time compute.
C.5 Per-benchmark Plots for Fig. 8
[Per-benchmark accuracy vs. inference budget (N) plots for VGS, random search, and hybrid.]
Figure 13: Per-benchmark results for Fig. 8. Random search is the same search process as VGS except intermediate blocks are randomly selected instead of using our value model.
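To make the random-search ablation in Figure 13 concrete, here is a minimal, illustrative sketch of the block-wise search loop with toy interfaces (in the paper, blocks are 4096 tokens, the generator is a DeepSeek model, and the scorer is DeepSeek-VM-1.5B); setting use_value=False recovers the random-search baseline.

```python
import random

def block_search(prompt, generate_block, value, beam_width=2,
                 n_blocks=4, use_value=True):
    # Block-wise beam search: each beam proposes `beam_width` candidate
    # blocks; keep the `beam_width` best prefixes by value (VGS) or a
    # random subset (the "random search" baseline of Fig. 13).
    beams = [prompt]
    for _ in range(n_blocks):
        candidates = [b + generate_block(b)
                      for b in beams for _ in range(beam_width)]
        if use_value:
            candidates.sort(key=value, reverse=True)
        else:
            random.shuffle(candidates)
        beams = candidates[:beam_width]
    return max(beams, key=value)  # final aggregation (WMV in the paper)

# Toy generator that deterministically cycles through the tokens "a", "b".
def make_cycler(tokens):
    state = {"i": 0}
    def gen(prefix):
        token = tokens[state["i"] % len(tokens)]
        state["i"] += 1
        return token
    return gen

# With value = number of "a"s, search greedily accumulates "a" blocks.
result = block_search("", make_cycler("ab"), lambda s: s.count("a"))
print(result)  # prints aaaa
```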
C.6 Results for Guiding a PPO Policy
Guo et al. [18] mention that the performance of distilled DeepSeek models can be further enhanced through reinforcement learning (RL). In this section, we explore whether we can guide the generation of an RL-trained policy. Specifically, we apply Proximal Policy Optimization (PPO) [33] to DeepSeek-1.5B using prompts from OpenR1-Cleaned and guide the trained model with DeepSeek-VM-1.5B. We perform full-parameter training with 8 H100 GPUs and use the same model as the policy for the critic. We use a rule-based reward function based solely on the correctness of the response, assigning +1 for correct answers and 0 for incorrect or incomplete ones. To ensure that the learned policy π remains close to the reference policy π_ref, an additional KL penalty is applied to the reward:
r(x, y) − γ_KL (ln π(y|x) − ln π_ref(y|x)),   (1)
where r(x, y) is the rule-based reward for prompt x and response y, and γ_KL controls the strength of the KL penalty. To further encourage exploration, we apply standard entropy regularization by subtracting the policy entropy from the loss, weighted by a coefficient γ_entropy:
L_PPO − γ_entropy H[π(· | x)].   (2)
The hyperparameter settings are shown below.

PPO Hyperparameter Settings
Generation (train): temperature: 1.0; top p: 1
PPO: batch size: 256; mini batch size: 128; micro batch size: 1; policy learning rate: 1e-6; critic learning rate: 1e-5; train epochs: 25; γ_entropy: 1e-3; γ_KL: 1e-4; GAE γ: 1; GAE λ: 1; clip ratio: 0.2; total number of steps: 2250

DeepSeek-1.5B | Pass@4 | Pass@8 | Pass@16 | Pass@32 | Pass@64 | Pass@128 | Pass@256
AIME-24 | 48.7±5.0 | 58.9±4.9 | 67.3±4.7 | 74.1±3.2 | 78.4±3.0 | 80.4±2.4 | 81.9±1.7
AIME-25 | 33.1±4.1 | 39.1±4.4 | 43.7±3.9 | 49.0±3.8 | 54.0±3.9 | 58.8±3.4 | 62.6±3.6
HMMT-24 | 24.0±4.6 | 28.6±3.9 | 32.9±3.6 | 37.7±4.4 | 42.9±4.2 | 48.1±3.7 | 54.2±4.9
HMMT-25 | 21.6±4.6 | 26.7±4.0 | 30.1±4.8 | 37.8±5.1 | 45.6±4.6 | 52.6±4.2 | 57.1±3.8
DeepSeek-1.5B-PPO | Pass@4 | Pass@8 | Pass@16 | Pass@32 | Pass@64 | Pass@128 | Pass@256
AIME-24 | 54.0±5.0 | 61.4±4.6 | 67.6±4.1 | 73.3±3.4 | 76.8±2.4 | 78.3±1.7 | 79.6±1.1
AIME-25 | 35.9±3.9 | 39.8±3.8 | 45.8±4.2 | 50.9±3.5 | 56.1±3.0 | 59.8±3.6 | 64.1±3.9
HMMT-24 | 27.2±4.2 | 32.8±4.4 | 37.6±3.6 | 41.5±3.6 | 45.0±3.3 | 48.8±3.3 | 52.4±3.4
HMMT-25 | 22.8±4.0 | 26.8±4.3 | 32.5±4.4 | 36.9±4.4 | 43.7±3.8 | 48.9±3.8 | 52.6±3.6

Table 3: Pass@N results for the DeepSeek-1.5B model and the PPO-trained DeepSeek-1.5B-PPO model.

Table 3 presents the comparison between the DeepSeek-1.5B model and the PPO-trained model (DeepSeek-1.5B-PPO). As N increases, the performance gap gradually narrows. While the PPO-trained model performs competitively at lower N values, it is surpassed by the base model at Pass@32 on both the AIME-24 and HMMT-25 datasets. This decline in performance could be attributed to the reduced entropy of the model after PPO training, which limits the diversity of model generations and negatively impacts performance at higher Pass@N. We report our value-guided results of the PPO-trained model in Fig. 14. We observe that VGS nicely complements PPO training and provides additional test-time compute gains in performance compared to WMV and MV.
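The reward shaping of Eqs. (1) and (2) can be illustrated with scalars; this is a hypothetical sketch, since in the actual implementation the log-probabilities and entropy are computed per token by the PPO trainer.

```python
def shaped_reward(rule_reward, logp_pi, logp_ref, gamma_kl=1e-4):
    # Eq. (1): r(x, y) - gamma_KL * (ln pi(y|x) - ln pi_ref(y|x)).
    return rule_reward - gamma_kl * (logp_pi - logp_ref)

def entropy_bonus(entropy, gamma_entropy=1e-3):
    # Eq. (2): the term gamma_entropy * H[pi(.|x)] subtracted from L_PPO.
    return gamma_entropy * entropy

# A correct response whose log-prob drifted above the reference policy's
# is penalized slightly for moving away from pi_ref.
print(round(shaped_reward(1.0, logp_pi=-10.0, logp_ref=-12.0), 6))  # prints 0.9998
```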
[Figure 14 shows per-benchmark accuracy vs. inference budget (N) plots for AIME-24, AIME-25, HMMT-24, HMMT-25, and overall.]
Figure 14: Guiding DeepSeek-1.5B Trained with PPO. Comparison of VGS, WMV and MV for TTC scaling our PPO policy.
C.7 Qualitative Examples
Figures 17, 18, and 19 (at the end of the paper) show representative qualitative examples where value scores from VGS are used to guide beam search. At each step, two blocks of tokens are proposed, and the one with the higher value is selected to continue the solution. Due to space constraints, parts of the beams are abridged with . . . , and for ease of visualization, blue highlights indicate correct reasoning steps, while red highlights denote incorrect ones. Low-scoring beams exhibit different types of failure. In Figure 17, the rejected beam alternates between correct and incorrect steps, resulting in confused and ultimately incorrect reasoning. In Figure 18, the beam begins with a plausible strategy involving GCD analysis but eventually resorts to ineffective trial and error. In Figure 19, the beam makes a critical error in an algebraic transformation early on and fails to recover. In contrast, the selected beams across all examples demonstrate systematic reasoning and successfully solve the problems. Interestingly, despite the
critical error in Figure 19, VGS assigns a moderately high score (0.337) to the rejected beam, higher than the scores for less subtle failures in earlier examples, suggesting that even significant mistakes can be difficult to detect when embedded in otherwise coherent reasoning. Finally, we empirically compare the distribution of generation lengths between the DeepSeek-1.5B base model and VGS with DeepSeek-1.5B across all benchmarks (Figure 15). On average, VGS generates noticeably shorter responses (11,219 tokens vs. 12,793 for the DeepSeek-1.5B base model), suggesting that beam search not only enhances accuracy but also promotes more concise reasoning. This trend is consistent with our qualitative analysis, where beam search tends to favor token blocks that are direct and solution-oriented, rather than verbose or meandering reasoning. Notably, the sharp peak near 16,000 tokens corresponds to the maximum generation length of DeepSeek models (16,384). For the base model, as many as 50% of the generations reach this limit, often resulting in incomplete outputs.
[Figure 15 shows per-benchmark histograms of generation length (2,000 to 16,000 tokens) for the base model vs. VGS.]
Figure 15: Histogram of generation lengths for the DeepSeek-1.5B base model vs. VGS. VGS consistently produces shorter responses across benchmarks, with average lengths of 11,219 and 12,793 tokens for VGS and the base model, respectively. The peak around 16,000 tokens reflects the generation cap of DeepSeek models, which the base model frequently hits, often resulting in incomplete outputs.
D Further Details of Data Collection
Pre-filtering process. Here we describe the pre-filtering process for constructing OpenR1-Cleaned in more detail.
Below is the sequence of filtering operations we performed on OpenR1-Math [2]. We arrived at these rules by manually inspecting the data: we sampled 100 random problems from the dataset and checked that all of their solutions looked reasonable to us. In OpenR1-Math, a solution is a fleshed-out solution to the math problem, and an answer is the final answer to the math problem.
1. Filter out all solutions with 0 or >1 boxed answers (enclosed in \boxed{}). For these, it is ambiguous and difficult to parse out the answer.
2. Filter out answers which are difficult to automatically parse or verify. This includes answers containing: 'or', 'and', \mathrm, \quad, as well as answers with equal signs, commas, semicolons, \cup, \cap, inequality symbols, or approximation symbols.
3. Filter out multiple-choice questions, which are labeled with question_type = 'MCQ'.
4. Filter out questions with multiple parts, as it is ambiguous which part the answer is for.
5. Filter out questions containing links (http:// or https://), since the models we test cannot access the web.
Roll-in roll-out process. We also provide further intuition for the roll-in vs. roll-out process (illustrated in Fig. 2 left). The roll-in and then roll-out process is a standard technique in imitation learning [12] and reinforcement learning [11]. Roll-out. The roll-out process uses a fixed policy π_ref to roll out from any partial solution provided by the roll-in process. The rationale for using a fixed roll-out policy is to fix the target of the classification / value regression
problem. In particular, the classifier is trained to predict the probability of each class under the roll-out policy, given the partial solution. Roll-in. The main point of the roll-in process is to create a diverse distribution of partial solutions to roll out from. By creating a diverse roll-in distribution with multiple roll-in policies, we can ensure that the classifier is trained on a diverse context distribution and will generalize better to new traces. To select where to cut a roll-in (to start the roll-out), we sample a cut index i from the distribution p(i) = √i / (Σ_{j∈[L]} √j), where L is the length of the roll-in. We chose this such that the cut index is more likely to occur at earlier positions of the roll-in. We want to encourage more learning at earlier positions since those prediction problems are more difficult than at later positions. The following figure (Fig. 16) illustrates the distribution of the length of roll-outs, which shows that this indexing scheme indeed yields many long roll-outs.
Figure 16: Distribution of roll-out length.
E Further Details for Value Model Training
Our value model DeepSeek-VM-1.5B uses the same base architecture as the DeepSeek-R1-Distill-Qwen-1.5B model, which is a 1.5B-parameter transformer with 28 layers and hidden size 1536. To turn this model into a value model, we replace the LM head with a scoring head, parameterized by a two-layer MLP, that outputs logits for three classes: 'correct', 'incorrect', and 'incomplete'. The number of classes can be modified to suit the task at hand.
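One natural way to turn the three-class head into a scalar value for search is the softmax probability of the 'correct' class; this mapping is an illustrative assumption here (the precise definition is given in Section 2.1).

```python
import math

CLASS_INDEX = {"incorrect": 0, "correct": 1, "incomplete": 2}

def value_from_logits(logits):
    # Softmax over the scoring head's 3-class logits; the value is the
    # predicted probability that a completion from pi_ref will be correct.
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return exps[CLASS_INDEX["correct"]] / sum(exps)

print(round(value_from_logits([0.0, 2.0, -1.0]), 3))  # prints 0.844
```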
Category | Parameter | Value
Model | Base Model Initialization | DeepSeek-R1-Distill-Qwen-1.5B
Model | Hidden size d_model | 1536
Model | Score Head | two-layer MLP with hidden size d_model
Model | Score Head Bias | False
Model | Score Head Labels | 0=Incorrect, 1=Correct, 2=Incomplete
Data | Dataset | OpenR1-VM
Data | Validation split | 500
Data | Max sequence length | 16384
Training | Batch size | 1024
Training | Learning-rate schedule | Cosine with max_lr=1e-4
Training | Warm-up steps | 10% of total steps
Training | Dropout | 0.05
Training | Number of epochs | 5
Optimizer | Optimizer type | AdamW
Optimizer | β1 | 0.9
Optimizer | β2 | 0.95
Optimizer | Weight decay | 0.1
Optimizer | Grad Norm Clip | 5
Compute | GPUs | 16 nodes of 8×NVIDIA H100
Compute | Wall-clock time (h) | 24 hours
Compute | Tokens Throughput (tokens/s) | 2.07M
Compute | Loss Tokens Throughput (loss tokens/s) | 835k
Compute | Total Tokens Processed (per epoch) | 35.7B
Compute | Total Loss Tokens Processed (per epoch) | 14.4B

Table 4: Value Model Training Parameters.

We swept learning rates 1e-4, 3e-4, 7e-5, and we save checkpoints at every epoch. We selected the best checkpoint via WMV performance on AIME-24.
F Further Details for Inference with Search
Given a problem, the prompt we used is:
<|begin_of_sentence|><|User|>{problem} Please think step-by-step and put your final answer within \boxed{}.<|Assistant|><think>\n
We use the same decoding parameters as in the original DeepSeek paper [18].

Category | Parameter | Value
Decoding | Inference Engine | SGLang [51]
Decoding | Max generation length | 16384
Decoding | Temperature | 0.6
Decoding | Top-p | 0.95
Decoding | Think Token | <think>
Decoding | End of Think Token | </think>
Best Parameters for Search (VGS) | Model | DeepSeek-VM-1.5B
Best Parameters for Search (VGS) | Beam width | 2
Best Parameters for Search (VGS) | Block size (tokens) | 4096
Best Parameters for Search (VGS) | Parallel branches (DVTS) | budget dependent
Best Parameters for Search (VGS) | Final aggregation rule | Weighted Majority Vote (WMV)

Table 5: Decoding and Search Parameters.
G Further Details for Training Bradley-Terry Reward Model
Dataset. Recall that our dataset for value model training (OpenR1-VM) contains 56 responses per problem. To construct a Bradley-Terry dataset, we sample up
to 4 response pairs per problem, where a pair consists of a response with reward 0 (the 'reject' response) and a response with reward 1 (the 'chosen' response). Some prompts may have fewer than 4 responses with reward 0 or 1, and in those cases, we include as many as possible. This yields a dataset of roughly 122k pairs. Model. We use the same model architecture as the value model (Appendix E) except that the score head outputs a single scalar score instead of a three-dimensional vector. We use the same training pipeline, except that the training loss is swapped to the standard BT loss [8]:
L_BT(θ, B) = (1 / |B|) Σ_{(x, y_r, y_c) ∈ B} −log σ(f_θ(x, y_c) − f_θ(x, y_r)),   (3)
where σ is the sigmoid function, y_r is the reject response, and y_c is the chosen response. We use a batch size of 128 pairs and we train for one epoch. We swept many learning rates: 3e-4, 1e-4, 7e-5, 3e-5, 7e-6. We found that the BT loss drops and plateaus much quicker than the value model loss, and all learning rates yielded similar final losses. We consider the last checkpoint of each run, and we selected lr=3e-5 as the best checkpoint via search (beam width 2, block size 4096, WMV aggregation) on AIME-24. We note that one detail for getting BT to work with weighted majority voting is to use the sigmoid of the BT score, i.e., take WMV with σ(f_θ(x, y)) instead of f_θ(x, y) itself. While this does not affect BoN performance, we found that taking the sigmoid was crucial for WMV performance to scale well.
H Inference FLOPs Computation
In this section, we compute the FLOPs for search and show that adding value model guidance at the block level introduces negligible compute overhead, as the vast majority of the compute comes from the generator model. We follow the approach outlined in Kaplan et al. [21], Sardana et al. [32] to compute the FLOPs for a single forward pass of the transformer, ignoring the embedding layers for simplicity.
Consider a transformer with n_layer layers, a d-dimensional residual stream, a d_ff-dimensional feed-forward layer, and d-dimensional attention outputs. Then the number of non-embedding parameters is N = 2 n_layer d (2d + d_ff), and the number of FLOPs for a single forward pass over a context of length n_ctx is
C(n_ctx) = 2N + 2 n_layer n_ctx d.   (4)
Then, in the regime where d_model > n_ctx / 12, Kaplan et al. [21], Sardana et al. [32] further approximate the above by ignoring the n_ctx term, i.e., C becomes independent of n_ctx. We adopt this approximation when estimating the inference FLOPs of our generator models. Thus, for a context length of n_ctx = 16,384, the inference FLOPs for one complete generation for each generator model is 2 N n_ctx:
1. DeepSeek-R1-Distill-Qwen-1.5B: 2 × 1.5B × 16384 = 49.1T.
2. DeepSeek-R1-Distill-Qwen-7B: 2 × 7B × 16384 = 229T.
3. DeepSeek-R1-Distill-Qwen-14B: 2 × 14B × 16384 = 459T.
4. DeepSeek-R1 (671B, with 37B activated params): 2 × 37B × 16384 = 1212T.
We now compute the FLOPs needed for one forward pass of the value model. Since we use a block size of 4096, there are at most 16384 / 4096 = 4 value model inferences per generation. Thus, the FLOPs from the value model is:
1. 1.5B classifier: 2 × 1.5B × 4 = 12B.
2. 7B classifier (baselines): 2 × 7B × 4 = 56B.
Thus, we can
see that the value model FLOPs are negligible compared to the generator model FLOPs. In particular, when guiding a 1.5B generator with a 1.5B classifier, the classifier FLOPs are only 0.024% of the generator FLOPs. With a compute budget of 256, this amounts to a total of (49.1T + 12B) × 256 = 12.6P FLOPs. When guiding with a 7B classifier, the total FLOPs is (49.1T + 56B) × 256 = 12.6P. Note that the FLOPs required for generating 256 independent generations is 49.1T × 256 = 12.6P. Thus, search has negligible overhead compared to (weighted) majority voting or best-of-N.
AIME-24 Problem: Jen enters a lottery by picking 4 distinct numbers from S = {1, 2, 3, ..., 9, 10}. 4 numbers are randomly chosen from S. She wins a prize if at least two of her numbers were 2 of the randomly chosen numbers, and wins the grand prize if all four of her numbers were the randomly chosen numbers. The probability of her winning the grand prize given that she won a prize is m/n where m and n are relatively prime positive integers. Find m+n.
Rejected beam with score 0.009: Okay, so I need to figure out the probability that Jen wins the grand prize given that she won a prize. That means I have to use conditional probability here. I remember that conditional probability is P(A|B) = P(A∩B)/P(B). In this case, event A is her winning the grand prize, and event B is her winning a prize. So, I need to find the probability that she won both the grand prize and a prize, which is just her winning the grand prize, divided by the probability that she won a prize. First, let me figure out the total number of possible ways Jen can pick 4 numbers from S, which has 10 numbers. That should be the combination of 10 choose 4, right? So, C(10,4) = 210. So, the total number of possible combinations is 210. So, she can win a prize in two scenarios: either exactly two of her numbers are the randomly chosen ones, or all four are. . . .
The number of ways she can have exactly two numbers correct is C(4,2) ways to choose the correct numbers, and then C(6,2) ways to choose the incorrect ones from the remaining 6 numbers. Similarly, the number of ways she can have all four numbers correct is just C(4,4) = 1. So, the total number of winning ways is C(4,2)*C(6,2) + C(4,4). . . . C(4,2) is 6, and C(6,2) is 15. So, 6*15 = 90. Then, C(4,4) is 1. So, total winning ways are 90 + 1 = 91. Alternatively, I can compute the probability of not winning a prize, which is the probability that she has 0 or 1 correct numbers. Let me compute that and subtract from 1 to see if it matches. Number of ways to have 0 correct numbers: C(4,0)*C(6,4) = 1*15 = 15. Number of ways to have 1 correct number: C(4,1)*C(6,3) = 4*20 = 80. So, total non-winning ways: 15 + 80 = 95. Therefore, probability of not winning a prize is 95/210, so probability of winning a prize
is 1 − 95/210 = (210 − 95)/210 = 115/210. Wait, that doesn't match with the previous result. Hmm, something's wrong here. Wait, why did I get 91 before? Let me see. I had C(4,2)*C(6,2) + C(4,4). So, that's 6*15 + 1 = 91. But that's only the number of winning ways where she has exactly two or exactly four correct numbers. So, 91 winning ways, 95 non-winning, which adds to 186, which is less than 210. So, the rest must be the impossible case? No, because in reality, she can't have 0 correct numbers if she has 4 numbers. Wait, no, she can have 0 correct numbers if all her numbers are among the 6 incorrect ones. So, that's 15 ways. Similarly, 1 correct number is 80 ways, and 2 correct numbers is 90 ways, and 4 correct numbers is 1 way. So, 15 + 80 + 90 + 1 = 186. Wait, so actually, 186 winning ways and 24 non-winning ways. So, probability of winning is 186/210, which simplifies to 31/35. . . . Therefore, P(A|B) = (1/210)/(186/210) = 1/186. So, m is 1 and n is 186. Are they relatively prime? 1 and 186 share no common factors besides 1, so yes. So, m+n = 1 + 186 = 187. . . .
Selected Beam with score 0.996: Okay, so this is a conditional probability problem. The formula for conditional probability is P(A|B) = P(A∩B)/P(B), where A is the event that she wins the grand prize, and B is the event that she wins a prize. Since winning the grand prize is a specific case of winning a prize, A is a subset of B. So, P(A∩B) is just P(A). Therefore, the probability we're looking for is P(A)/P(B). . . . Since she picks 4 numbers from 10, the total number of possible 4-number combinations is C(10,4). The number of favorable outcomes for A is 1 (since there's only one exact set of four numbers that could be the randomly chosen ones). So, P(A) is 1/C(10,4). . . . To find P(B), it might be easier to compute the probability of the complementary event, which is that Jen wins zero prizes or one prize, and subtract that from 1. So, P(B) = 1 - P(0 prizes) - P(1 prize).
Let me compute P(0 prizes) and P(1 prize). First, P(0 prizes): This is the probability that none of Jen's four numbers are among the four randomly chosen. So, all four numbers she picked are from the remaining six numbers (since S has 10 numbers, and if she picks 4, the remaining are 6). So, the number of ways to choose 4 numbers from 6 is C(6,4). Therefore, P(0 prizes) = C(6,4)/C(10,4). Similarly, P(1 prize): This is the probability that exactly one of Jen's numbers is among the four randomly chosen. So, we need to choose 1 number from her 4, and 3 numbers from the remaining 6 (since 10 - 4 = 6). So, the number of ways is C(4,1)*C(6,3). Therefore, P(1 prize) = [C(4,1)*C(6,3)]/C(10,4). . . . So, putting it all together: P(B) = 1 − [C(6,4)/C(10,4)] − [C(4,1)*C(6,3)/C(10,4)]. . . . Therefore, P(B) = 1 − 15/210 − 80/210.
. . . Therefore, P(B) is 23/42. . . . Therefore, the conditional probability P(A|B) is (1/210)/(23/42) = (1/210)*(42/23) = 42/(210*23). . . . Therefore, the probability becomes (1/5)/23 = 1/(5*23) = 1/115. . . . But 1/115 can't be simplified further, so m is 1 and n is 115, which are coprime. Therefore, m + n is 1 + 115 = 116. . . .
Figure 17: Example of selected and rejected beams during beam search with VGS. The high-scoring beam (score 0.996) follows a correct and coherent line of reasoning, arriving at the correct answer. In contrast, the rejected beam (score 0.009) contains several inconsistencies and incorrect steps, despite occasionally making correct logical deductions, demonstrating the effectiveness of VGS as a value model. Highlighting is added for clarity: blue indicates correct reasoning steps or results, while red indicates incorrect ones. Parts of the generations are abridged with . . . notation.
AIME-25 Problem: Find the sum of all positive integers n such that n + 2 divides the product 3(n + 3)(n^2 + 9).
Rejected beam with score 0.395: First, let's parse the problem. I need to find all positive integers n where n + 2 is a divisor of 3(n + 3)(n^2 + 9). So, mathematically, this means that n + 2 divides 3(n + 3)(n^2 + 9). In other words, when you divide 3(n + 3)(n^2 + 9) by n + 2, there should be no remainder. So, n + 2 must be a factor of this product. One way to approach this is by using the Factor Theorem. The Factor Theorem states that if n + 2 is a factor of a polynomial P(n), then P(−2) = 0. So, if I let P(n) = 3(n + 3)(n^2 + 9), then P(−2) should be equal to zero if n + 2 divides P(n). Let me compute P(−2): P(−2) = 3(−2 + 3)((−2)^2 + 9). Simplify each part: (−2 + 3) = 1, (−2)^2 = 4, so 4 + 9 = 13. Thus, P(−2) = 3*1*13 = 39. . . . Wait, perhaps I'm misapplying the Factor Theorem here. Let me think again. Alternatively, another approach is to express n + 2 as k, so k = n + 2. Then, n = k − 2.
Substitute n = k − 2 into the expression 3(n + 3)(n² + 9) and see what conditions must hold for k to divide this. … Therefore, the product becomes: 3(k + 1)(k² − 4k + 13). … Which implies that k divides 3(k + 1)(k² − 4k + 13). Since k divides the product, it must divide at least one of the factors. But k divides k, so we can consider the other factors. But wait, k divides 3(k + 1)(k² − 4k + 13), so k divides 3(k + 1)(k² − 4k + 13). Since k and k + 1 are consecutive integers, their GCD is 1. Similarly, k and k² − 4k + 13 may have a GCD. Let me check: Let me denote d = gcd(k, k² − 4k + 13). Then d divides k and k² − 4k + 13. So, d divides k² − 4k + 13 − k(k − 4) = k² − 4k + 13 − k² + 4k = 13. So, d divides 13. Therefore, d = 1 or 13. Wait, maybe I’m complicating things. Let me approach this differently. … But maybe there’s another way to approach this. Let me try plugging in small positive integers for n and see if n + 2 divides the product 3(n + 3)(n² + 9). That might
help confirm if n = 1 is the only solution. Testing n = 1: As above, product is 120, which is divisible by 3. Testing n = 2: n + 2 = 4. Product is 3(5)(13) = 195. 195 divided by 4 is 48.75, which isn’t an integer. So, 4 doesn’t divide 195. … Testing n = 9: n + 2 = 11. Product is 3(12)(90) = 3240. 3240 divided by 11 is approximately 294.545, not an integer. Testing n = 10: n + 2 = 12. Product is 3(13)(109) = 4209. 4209 divided by 12 is 350.75, not an integer. Wait, so up to n = 10, only n = 1 works. Maybe n = 1 is the only solution. …

Selected beam with score 0.998: First, let me understand what the problem is asking. It says that n + 2 divides 3(n + 3)(n² + 9). In other words, when we divide the product 3(n + 3)(n² + 9) by n + 2, the result should be an integer. So, I need to find all positive integers n for which this condition holds and then sum them up. Let me write this out mathematically. If n + 2 divides 3(n + 3)(n² + 9), then there exists some integer k such that: 3(n + 3)(n² + 9) = k(n + 2). But since we’re looking for integer solutions, maybe it’s better to think about this in terms of divisibility. So, n + 2 must be a divisor of 3(n + 3)(n² + 9). That suggests that n + 2 divides this product. I remember that if a number a divides another number b, then b modulo a is zero. So, maybe I can use modular arithmetic here. Let me see. Since n + 2 divides 3(n + 3)(n² + 9), it must be that: 3(n + 3)(n² + 9) ≡ 0 (mod n + 2). … First, n + 3 modulo n + 2. Let’s substitute n = −2 + k, but maybe that’s not necessary. Alternatively, we can note that n + 3 = (n + 2) + 1. So, modulo n + 2, n + 3 ≡ 1. Similarly, n² + 9. Let me compute n² modulo n + 2. Since n ≡ −2 (mod n + 2), then n² ≡ (−2)² = 4 (mod n + 2). Therefore, n² + 9 ≡ 4 + 9 = 13 (mod n + 2). So, putting it all together, modulo n + 2, the product becomes: 3(n + 3)(n² + 9) ≡ 3 × 1 × 13 = 39 (mod n + 2). So, this implies that 39 ≡ 0 (mod n + 2), which means that n + 2 divides 39. Therefore, n + 2 must be a positive divisor of 39. Now, the positive divisors of 39 are 1, 3, 13, and 39.
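The divisor-of-39 argument can be brute-forced as an editorial check (this sketch is not part of the generated transcript):

```python
# Check (n + 2) | 3(n + 3)(n^2 + 9) directly over a generous range.
solutions = [n for n in range(1, 1000)
             if 3 * (n + 3) * (n**2 + 9) % (n + 2) == 0]

print(solutions)                    # [1, 11, 37]
print([n + 2 for n in solutions])   # [3, 13, 39] -- exactly the divisors of 39 that exceed 2
```

The exhaustive search finds only n = 1, 11, 37, matching the conclusion that n + 2 must be a divisor of 39.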
But since n is a positive integer, n + 2 must be at least 3 (because if n = 1, n + 2 = 3). So, the possible values for n + 2 are 3, 13, and 39. Therefore, the corresponding values for n are:
- If n + 2 = 3, then n = 1.
- If n + 2 = 13, then n = 11.
- If n + 2 = 39, then n = 37.
… Therefore, the sum of all such positive integers n is 1 + 11 + 37. Let me compute that: 1 + 11 is 12, 12 + 37 is 49. …

Figure 18: Additional examples of selected and rejected beams
under VGS. The high-scoring beam applies a concise and effective modular reasoning strategy, whereas the rejected beam attempts several approaches—including an incorrect use of the Factor Theorem and trial-and-error—but ultimately fails. Color coding follows Figure 17. Parts of the generations are abridged with “…” notation.

HMMT-25 Problem: Mark writes the expression √(abcd) on the board, where abcd is a four-digit number and a ≠ 0. Derek, a toddler, decides to move the a, changing Mark’s expression to a√(bcd). Surprisingly, these two expressions are equal. Compute the only possible four-digit number abcd.

Rejected beam with score 0.337: Okay, so I have this problem where Mark writes the square root of a four-digit number, which is written as √(abcd), where a isn’t zero. Then Derek moves the first digit ’a’ to the end, making the expression a√(bcd). And the interesting part is that both expressions are equal. I need to find the only possible four-digit number abcd that satisfies this condition. First, let me try to understand the problem step by step. Mark’s expression is the square root of a four-digit number, so that’s √(1000a + 100b + 10c + d). Derek changes this by moving the first digit ’a’ to the end, so the new number is 1000b + 100c + 10d + a, and Derek’s expression is a multiplied by the square root of this new number, which is a√(1000b + 100c + 10d + a). And both of these are equal. So, mathematically, I can write this as: √(1000a + 100b + 10c + d) = a√(1000b + 100c + 10d + a). If I square both sides to eliminate the square roots, I get: 1000a + 100b + 10c + d = a²∗(1000b + 100c + 10d + a). …

Selected Beam with score 0.999: Let me write this down step by step. Let’s denote the four-digit number as N = 1000a + 100b + 10c + d. So, Mark’s expression is √N. Then, Derek changes it to a√(bcd), which is a√(100b + 10c + d).
According to the problem, these two expressions are equal, which means: √(1000a + 100b + 10c + d) = a√(100b + 10c + d). Let me square both sides to eliminate the square roots: 1000a + 100b + 10c + d = a² * (100b + 10c + d). Hmm, that gives me an equation: 1000a + 100b + 10c + d = a²(100b + 10c + d). Let me denote the four-digit number as N, so N = 1000a + 100b + 10c + d, and the three-digit number as M = 100b + 10c + d. So, the equation becomes: N = a² * M. But since N is a four-digit number and a is a single digit (from 1 to 9, since a ≠ 0), and M is a three-digit number (from 0 to 999). So, let me rewrite N as 1000a + M. So, 1000a + M = a² * M. Let me rearrange this equation: 1000a = a² * M − M. Factor out M on the right side: 1000a = M(a² − 1). Therefore, M = (1000a) / (a² − 1). Since M must be an integer (as it’s a
three-digit number), (1000a) must be divisible by (a² − 1). So, (a² − 1) must be a divisor of 1000a. Let me note that a is a digit from 1 to 9. So, I can try each a from 1 to 9 and see if (a² − 1) divides 1000a, and then check if M is a three-digit integer. Let’s compute for each a: Starting with a=1: M = 1000*1 / (1 − 1) = 1000 / 0, which is undefined. So, a=1 is invalid. a=2: M = 1000*2 / (4 − 1) = 2000 / 3 ≈ 666.666..., which is not an integer. So, a=2 is invalid. a=3: M = 1000*3 / (9 − 1) = 3000 / 8 = 375. That’s an integer. So, M=375. a=4: M = 1000*4 / (16 − 1) = 4000 / 15 ≈ 266.666..., not an integer. So, a=4 is invalid. a=5: M = 1000*5 / (25 − 1) = 5000 / 24 ≈ 208.333..., not an integer. a=6: M = 1000*6 / (36 − 1) = 6000 / 35 ≈ 171.428..., not an integer. a=7: M = 1000*7 / (49 − 1) = 7000 / 48 ≈ 145.833..., not an integer. a=8: M = 1000*8 / (64 − 1) = 8000 / 63 ≈ 127.450..., not an integer. a=9: M = 1000*9 / (81 − 1) = 9000 / 80 = 112.5, which is not an integer. So, only a=3 gives us an integer M=375, which is a three-digit number. Therefore, the only possible four-digit number is 3375. …

Figure 19: Additional examples of selected and rejected beams under VGS. The rejected beam misrepresents Derek’s transformation, leading to an incorrect equation that derails the solution. In contrast, the high-scoring beam correctly models the relationship and systematically solves for the unique valid value of a. Color coding follows Figure 17. Parts of the generations are abridged with “…” notation.
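The selected beam’s enumeration can be confirmed by an exhaustive search (an editorial sketch, not part of the generated transcript):

```python
# Search for digits with sqrt(1000a + M) == a * sqrt(M), i.e. N = a^2 * M,
# where N = 1000a + M and M = 100b + 10c + d.
hits = []
for a in range(1, 10):        # leading digit, a != 0
    for m in range(1000):     # M = 100b + 10c + d
        n = 1000 * a + m      # the four-digit number N
        if n == a * a * m:    # equivalent to the squared equation
            hits.append((a, m, n))

print(hits)  # [(3, 375, 3375)]
```

The search confirms that a = 3, M = 375 is the unique solution, so abcd = 3375 (and indeed √3375 = 3√375, since 3375 = 9 · 375).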
Chart-to-Experience: Benchmarking Multimodal LLMs for Predicting Experiential Impact of Charts

Seon Gyeom Kim (KAIST), Jae Young Choi (KAIST), Ryan Rossi (Adobe Research), Eunyee Koh (Adobe Research), Tak Yeon Lee (KAIST)

ABSTRACT
The field of Multimodal Large Language Models (MLLMs) has made remarkable progress in visual understanding tasks, presenting a vast opportunity to predict the perceptual and emotional impact of charts. However, it also raises concerns, as many applications of LLMs are based on overgeneralized assumptions from a few examples, lacking sufficient validation of their performance and effectiveness. We introduce Chart-to-Experience, a benchmark dataset comprising 36 charts, evaluated by crowdsourced workers for their impact on seven experiential factors. Using the dataset as ground truth, we evaluated the capabilities of state-of-the-art MLLMs on two tasks: direct prediction and pairwise comparison of charts. Our findings imply that MLLMs are not as sensitive as human evaluators when assessing individual charts, but are accurate and reliable in pairwise comparisons.

Index Terms: Computing methodologies—Artificial intelligence; Human-centered computing—Visualization—Visualization design and evaluation methods

1 INTRODUCTION
Researchers have shown interest in how images lead to distinct experiences when used in specific contexts. Regarding this, studies in data visualization have prioritized efficiency and effectiveness in objective and analytic tasks. However, recent studies showed that data visualizations are also utilized for provoking creativity and engagement or conveying emotions such as sadness, surprise, or trustworthiness [45, 3, 23, 24]. This has broadened the scope of considerations for data visualization creators, challenging them to refine their works for these user experiential factors. Some studies explored how charts affect specific groups of people [36] or focused on specific image features [3].
(Author e-mails: ksg0320@kaist.ac.kr, jaeyoungchoi@kaist.ac.kr, ryrossi@adobe.com, eunyee@adobe.com, takyeonlee@kaist.ac.kr)

Additionally, over the last decade, studies have incorporated experiential aspects as additional metrics [4, 13] and developed questionnaires [46] for assessing the quality of charts. Nonetheless, the field of data visualization lacks analytical methods or datasets for the automated prediction of such impacts. Recently, the field of Multimodal Large Language Models (MLLMs) has presented an opportunity to predict the experiential impact of charts without requiring a complex theoretical background or developing machine learning models. MLLMs have demonstrated both cost efficiency and the capability to understand human nuance [12, 51], while also offering subjective assessments of designs and emotion recognition of natural language dialogues. Despite these advantages, MLLMs often produce incoherent and inaccurate output [49], and researchers have argued that it is crucial to train MLLMs with a reliable and sizable dataset containing a wide range of use cases [40, 5]. However, little research has explored the systematic construction of chart datasets that cover a broad spectrum of designs, from simple charts to detailed infographics. Moreover, methods for constructing scalable datasets on emotions and perceptions through crowdsourced studies, as well as their potential applications for evaluation, remain largely unexplored. This paper presents Chart-to-Experience, a benchmark dataset containing 36 charts across three subjects (COVID-19, House Prices, and Global Warming) with their experiential impact on crowdsourced participants. The experiential impact consists of
two categories: 1) Emotional factors, including empathy, interest, and comfort; and 2) Perceptual factors, including memorability, trustworthiness, aesthetic pleasure, and intuitiveness. To construct the dataset, we recruited 216 crowdsourced workers¹, and asked them to rate their experiences using a 7-point Likert scale when viewing each chart, repeating this process across six different charts with the same subject. Subsequently, we evaluated the performance of three state-of-the-art MLLMs (GPT-4o, Claude 3.5 Sonnet, and Llama-3.2-11B-Vision-Instruct) on the dataset. The results were twofold: Firstly, the Likert scores generated by the MLLMs show smaller standard deviations and either higher or lower means than those of humans. This implies that MLLMs are hardly accurate and sensitive in predicting absolute scores. Secondly, MLLMs showed higher accuracy in comparison tasks when they are given chart pairs with larger score differences in human data. For the comparison task, we also suggest the possibility of deriving strategies to increase accuracy by comparing human data with the explanations given by MLLMs.

2 RELATED WORKS

Predicting Experiential Impact of Images
Recently, affective computing focuses on not only recognizing emotions in images but also predicting the emotional impact of visual stimuli on viewers [52]. Researchers often utilize established emotion models, such as Ekman’s set of “basic emotions” [37] and Mehrabian’s continuous dimensions of “valence” and “arousal” [33]. Moreover, researchers delve into complex and experiential aspects, such as memorability [20], aesthetics [9], and attitude change [19]. To analyze factors that influence such aspects, researchers have focused on visual elements, analyzing images using principles from artistic domains [53, 32], or examining the effects of low-level features such as shape [50] and color [44, 47].
However, recent advancements in MLLMs have made predicting potential impacts more accessible, as they can predict a wide range of experiential impacts via simple natural language prompts. For example, MLLMs can be aware of emotions related to a pair of images and captions similarly to humans [11]. Also, “GPT-4 with Vision” demonstrated superior performance in evaluating the aesthetics of general images [1]. Despite this versatility, it remains uncertain whether MLLMs can perform well in predicting the experiential impact of charts due to the lack of datasets focusing on charts and experiential factors at the same time.

¹ Recruited from Prolific (https://www.prolific.com/)

arXiv:2505.17374v1 [cs.HC] 23 May 2025

Figure 1: The collection of 12 charts on the topic of House Prices

Automatic Chart Understanding
Automatic chart understanding enables machines to interpret charts’ graphical elements and their spatial relationships to extract and analyze the data [17]. This includes tasks related to facts, such as chart-to-table conversion [31], question-and-answering [21], fact-checking [2] and captioning [41]. There have been studies [35] aimed at integrating image processing and natural language generation techniques for such tasks. Then, researchers delved into pre-training [55] for specific tasks, and the adoption of MLLMs for generalized usages [16]. Given that MLLMs have shown potential in chart understanding, modification, and generation, further research can readily focus on how users interact with charts. User experience in chart interaction has been extensively researched from various perspectives.
For example, chart quality has been evaluated based on perceptual factors such as readability [34, 43] and cognitive effort or time spent on particular tasks [4, 18], since data visualization primarily aims to facilitate analysis. Also, the affective roles of charts are increasingly identified, emphasizing aspects such as aesthetics, engagement, and effectiveness in evoking certain behaviors and emotions in readers [25, 27, 30]. This paper introduces a data collection that considers both perceptual and emotional factors, thereby aligning automatic chart understanding more closely with real-user contexts.

Evaluating MLLM as-a-Judge
Recent advancements of LLMs have given rise to the “LLM-as-a-judge” paradigm, where LLMs are utilized for tasks such as scoring, comparison, and ranking across various tasks and applications [28]. MLLMs further extend the paradigm to cover multimodal tasks, such as image captioning [22] and visual information querying [42]. Depending on the benchmarks used, the judging methods vary, involving scoring based on a specific rubric, choosing an answer from candidates, or comparing pairs of inputs. However, like other LLM applications, these judgments suffer from issues such as various biases and hallucinations. For example, while a renowned strategy named “Chain-of-Thought” can bias the judgments of LLMs, it is uncertain whether its use will enhance [48] or diminish [8] the performance. For pairwise comparisons, an ordering bias must be controlled, as LLMs tend to favor the first option presented [54]. Therefore, to assess MLLMs “as-a-judge” across diverse scopes, it is essential to establish benchmarks that can detect biases and hallucinations while assessing their alignment with human evaluations. To the best of our knowledge, few (if any) prior studies have focused on benchmarks for predicting the impact of charts regarding experiential aspects. One of the closest benchmarks was developed by Chen et al.
[8] using charts as visual stimuli, but the benchmark focuses on question-and-answering tasks. Lian et al. [29] evaluated “GPT-4 with Vision” for emotion recognition tasks focusing on eight basic emotions and sentiments evoked by general web images rather than predicting complex experiential aspects (e.g., memorability, trustworthiness) of charts. On the other hand, a few studies involved complex experiential impacts, such as aesthetic harmony in general images [26] or affective reasoning tasks in videos [15], but they did not focus on the experiential impact of charts.

3 CHART-TO-EXPERIENCE DATASET

3.1 Chart Collection
In total, we selected 36 charts across three topics (House Prices, COVID-19, and Global Warming) through internet search to create a set of 12 charts for each topic. Each set contains diverse charts in terms of chart types, color schemes, styles, and levels of information complexity. In particular, each set includes at least one instance of each common chart type, such as line, area, bar, pie, and heatmap. In addition, each set features scientific charts, visualizations commonly used in online journalism, and infographics that integrate text and graphics with high completeness and detail. The amount of information presented in each chart ranges from minimal (e.g., a simple chart with a title and short description) to complex (e.g.,
a combination of charts with detailed annotation and/or rich illustrations). Among this information, auxiliary visual elements such as creator logos or certification marks were neither removed nor added, for diverse coverage. Moreover, the collected charts contain text information, including titles, sources, label names, and annotations, which show that the six charts share a single subject.

3.2 Measures

Table 1: Factors and corresponding questions.

Factor | Question
Memorability | The chart is easily remembered.
Interest | The chart is interesting.
Trustworthiness | The chart appears trustworthy.
Empathy | I can empathize with the chart.
Aesthetic Pleasure | The chart is aesthetically pleasing.
Intuitiveness | The chart is intuitive.
Comfort | I feel comfortable with the chart.

To measure the impact of charts, we selected seven factors that are emotionally or perceptually relevant to the user experience of data visualization. We then composed corresponding questions, as shown in Table 1. We selected these factors mainly based on related papers on the identification of goals and anticipated impacts of data visualizations [45, 23, 25] and measurements of user experiences [4, 46, 13]. The detailed descriptions and rationale for the seven factors are described below.

Figure 2: The task page for crowdsourced online study

Memorability refers to the ability of a chart to be remembered after viewing. This is one of the basic cognitive concepts associated with effective data communication with viewers. Visual attributes and elements like color, visual complexity, and recognizable objects can influence this factor [6]. Interest measures the level of hedonic satisfaction and attention a viewer dedicates to a chart [38]. Since our study allows passive viewing only, we chose interest to capture initial and momentary responses, instead of how much viewers feel drawn into reading activities (engagement) or their satisfaction after fully experiencing the chart (enjoyment).
Trustworthiness evaluates the viewer’s confidence in the accuracy and reliability of the information presented [39]. Visual characteristics such as source, graphical integrity, and the use of chart junk can influence this factor. Since these elements may be subtly manipulated to effectively convey the intended message, their impact on viewers’ trust would depend on how interested the viewers are. Empathy assesses the capacity of a chart to evoke a personal response from the viewer [7]. Emotionally, this includes feelings of compassion and sympathy towards others, as well as emotions such as anxiety and discomfort that are triggered by others. On the other hand, empathy also pertains to the accuracy with which one comprehends others’ internal states, like thoughts and intentions. Aesthetic Pleasure pertains to the visual attractiveness of a chart and its impact on viewer satisfaction. This depends on the individual’s preferences on various elements. For example, some may prefer minimal chart designs with fewer non-essential elements, while others focus on how colors and composition are harmoniously used. Also, typography can contribute to overall aesthetic pleasure, since charts are text-rich images. Intuitiveness deals with how easily a chart communicates its message at first glance [14]. This can be enhanced through appropriate use of design elements like auxiliary annotations or color highlighting that
emphasize key data trends and the core message. Notably, a chart can remain intuitively understandable even if its design is unfamiliar, visually unappealing, or uncomfortable to read. Comfort assesses the overall ease and satisfaction with which a viewer interacts with a chart [10]. We included this metric to focus on whether participants felt that the charts were visually organized in a way that could be read as expected. We also anticipate that this metric will capture comfort derived from perception, such as the visual comfort provided by specific color saturation.

3.3 Crowdsourced Data Collection
We recruited 216 participants from an online crowdsourcing platform². Participants were aged from 18 to 66 years (Mean = 26.4, SD = 7.5), with 58.8% male. We set several filtering criteria so that all participants are fluent in English and have no color vision impairments. Participants who completed the study received compensation of £3.50. To ensure data quality and encourage responsible participation, we informed participants that compensation would not be provided for mismatched, random, or intentionally low-quality responses. Also, the entire study was restricted to being completed in 45 minutes.

² https://www.prolific.co/

At the beginning of the study, participants viewed the introductory page for the study overview and instructions. They then performed the task of assessing six charts. As shown in Figure 2, each chart image was accompanied with a slider to adjust its size. Below the chart, participants answered questions related to the seven factors, by rating their levels of agreement on a 7-point Likert scale, and by providing explanations for their ratings in a text box.
For questions that are difficult to agree or disagree with (e.g., feeling empathy toward purely informational content), participants had the option to select a check box labeled “Insufficient information to answer.” To minimize the risk of potential ordering effects, where the sequence of the charts could influence the responses of the participants, all charts were displayed in a systematically rotating order. This approach ensured that no single chart consistently appeared in the same position, helping to balance exposure and reduce bias introduced by presentation order.

3.4 Result of the Data Collection
As a result of the crowdsourced data collection, each factor of a chart received 36 scores, accompanied by reasons. Out of the total 9,072 quantitative scores, 362 responses (4.1%) were ignored as participants checked the “Insufficient information to answer.” In detail, empathy (117), trustworthiness (100), and intuitiveness (90) were the factors relatively often ignored, while comfort (26), interest (16), aesthetic pleasure (14), and memorability (9) were less frequently reported. Table 2 shows the means and standard deviations across all models and topics, excluding such ignored cases.

Table 2: Means (M) and standard deviations (SD) across topics and evaluators and correlation coefficients (Kendall’s τ) for the factors.

Figure 3: The accuracy of MLLMs in comparing pairs of charts across the seven experiential factors is binned by grouping comparisons based on the magnitude of the difference in human ratings between the chart pairs. The observed overall upward trend suggests that MLLMs perform more accurately
when comparing chart pairs with larger score disparities.

4 EVALUATING MLLMS AS A JUDGE
To evaluate the capability of MLLMs as predictors of the experiential impact of the charts, we conducted two studies. By comparing the study results with the crowdsourced data, we evaluated to what extent the judgments of the MLLMs are associated with human judgments. To avoid biases toward a specific model, three popular state-of-the-art MLLMs (OpenAI GPT-4o, Anthropic Claude 3.5 Sonnet, and Meta Llama-3.2-11B-Vision-Instruct) are used.

4.1 Task 1: Score Prediction
The first task for MLLMs was similar to what participants did in the crowdsourced data collection: rating the seven factors using a 7-point Likert scale. To simulate the variability inherent in human responses, we generated 216 unique personas³, and instructed MLLMs, acting as “data visualization experts”, to predict how a specific persona would respond to the given chart. Consequently, every chart was assessed by 36 distinct personas, with each persona being employed to evaluate six charts. Table 2 reports the statistical summary of how different evaluators predicted the seven experiential factors across three topics. While the mean scores from Claude 3.5 Sonnet and GPT-4o were consistently higher than human evaluators, Llama-3.2’s scores were often lower, particularly for the factors of memorability, trustworthiness, and empathy. Overall, MLLM-generated ratings (0.45 ≤ SD ≤ 1.22) are tightly clustered compared to human ratings (1.63 ≤ SD ≤ 2.20), suggesting that MLLMs are less sensitive to the differences in chart design and effectiveness. To further assess the alignment between MLLMs and humans as a judge, we performed a rank correlation analysis and reported Kendall’s τ in Table 2. The results indicate that while certain factors show moderate alignment with human ratings, others exhibit only weak or no alignment.
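Kendall’s τ, as used in this rank-correlation analysis, counts concordant versus discordant pairs of items. A minimal sketch with made-up ratings (the helper and the numbers below are illustrative assumptions, not the study’s data or code):

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall's tau-a: (concordant - discordant) / total pairs (no tie correction)."""
    pairs = list(combinations(range(len(xs)), 2))
    con = sum(1 for i, j in pairs if (xs[i] - xs[j]) * (ys[i] - ys[j]) > 0)
    dis = sum(1 for i, j in pairs if (xs[i] - xs[j]) * (ys[i] - ys[j]) < 0)
    return (con - dis) / len(pairs)

# Hypothetical mean ratings for six charts: humans vs. one model.
human = [4.2, 5.1, 3.0, 6.2, 2.4, 4.8]
model = [5.0, 5.2, 4.6, 5.9, 4.4, 4.9]
print(round(kendall_tau(human, model), 2))  # 0.87
```

Values near 1 indicate the model ranks charts like humans do; values near 0 indicate no alignment, matching the interpretation of the τ values reported in Table 2. (In practice a tie-corrected variant such as τ-b, e.g. `scipy.stats.kendalltau`, is typically used.)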
For instance, all MLLMs achieved moderate correlation (0.25 ≤ τ ≤ 0.33) for aesthetic pleasure. In the case of intuitiveness, GPT-4o and Claude 3.5 Sonnet demonstrate moderate alignments (τ = 0.33 and 0.37, respectively), whereas Llama-3.2 does not show correlation. For the other factors, MLLMs are either weakly correlated or uncorrelated with human ratings. Notably, there are no cases of significant negative correlations. Given the limited sensitivity of MLLMs and their moderate-to-weak alignment with human ratings, relying on MLLMs to directly predict scores may not be ideal, particularly for tasks that demand nuanced judgment.

³ See the supplementary materials for the example prompts.

4.2 Task 2: Pairwise Comparison

Table 3: Comparison accuracy across different models and factors

Model | Mem | Int | Tru | Emp | Aes | Itt | Cft | All
GPT-4o | 0.75 | 0.66 | 0.62 | 0.69 | 0.70 | 0.73 | 0.73 | 0.70
Claude 3.5 | 0.64 | 0.67 | 0.68 | 0.70 | 0.67 | 0.74 | 0.70 | 0.69
Llama-3.2 | 0.45 | 0.54 | 0.58 | 0.53 | 0.54 | 0.51 | 0.52 | 0.52

To address the lack of sensitivity found in Task 1, the second task employed MLLMs to compare pairs of charts and decide which would receive higher scores for each experiential factor from ordinary people. We also instructed MLLMs to provide brief explanations for their choices to enable post hoc analysis. However, the task did not incorporate advanced prompt engineering techniques such as Chain-of-Thought or Few-shot
learning. Although such techniques could potentially enhance the performance of MLLMs, they also introduce additional variables, complicating the evaluation process, and were therefore beyond the scope of this study. The accuracy of the comparisons was evaluated against human ratings as a benchmark. For example, if the human ratings for the memorability of two charts were 2.5 and 3.5, the comparison was deemed correct if the model identified the second chart as more memorable. As summarized in Table 3, GPT-4o and Claude 3.5 Sonnet demonstrated significantly higher accuracy overall, compared to Llama-3.2 (one-way ANOVA; p < 0.001). However, no statistically significant differences were observed across the seven factors (p = 0.884). Not all comparison tasks pose the same level of difficulty. Pairs of charts that received similar human ratings can be particularly challenging for MLLMs to predict which one would be more effective. Building on this idea, we investigated whether the difference in human ratings influenced the accuracy of the MLLMs’ performance in the comparison task. As shown in Figure 3, the relationship between accuracy and score differences (i.e., task difficulty) exhibits an upward trend, which indicates similar patterns between MLLMs and humans. MLLMs perform accurately on problems that are easy for humans, but show low accuracy on more challenging problems. For example, GPT-4o correctly compared all 28 pairs in the 1.4–1.6 bin for comfort. Likewise, Claude 3.5 Sonnet was accurate for 32 out of 33 pairs in the 1.4–1.6 bin for empathy. Llama-3.2 was an outlier, with its accuracy remaining consistent around 0.5.

5 DISCUSSION
Biases and variability issues. The findings of Task 1 suggest that MLLMs are not as sensitive as human evaluators in evaluating the experiential impact of charts. While human judgments reflect a wide range of responses based on varying experiential factors, MLLMs tend to produce overly consistent results.
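The accuracy criterion described for Task 2 can be sketched as follows (an illustrative reimplementation with hypothetical ratings; the helper name and data are assumptions, not the authors’ evaluation code):

```python
# A comparison is correct when the model picks the chart whose mean human
# rating is higher; each pair is (human mean for A, human mean for B, pick).
def pair_accuracy(pairs):
    correct = sum(1 for a, b, pick in pairs
                  if (a > b and pick == 'a') or (b > a and pick == 'b'))
    return correct / len(pairs)

# Hypothetical pairs, e.g. (2.5, 3.5, 'b') mirrors the memorability example.
pairs = [(2.5, 3.5, 'b'), (4.0, 3.8, 'a'), (5.1, 2.2, 'b'), (3.0, 3.1, 'b')]
print(pair_accuracy(pairs))  # 0.75
```

Binning such pairs by |a − b|, as in Figure 3, then shows how accuracy varies with task difficulty.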
Moreover, each MLLM demonstrates distinct biases in its evaluations. For example, GPT-4o consistently assigns higher scores on average compared to Llama-3.2 and human evaluators. Furthermore, the performance of MLLMs is not uniform between experiential factors. These observations suggest that relying on MLLMs to directly predict the experiential impact is not ideal, particularly for tasks that require fine-grained sensitivity. On the other hand, Task 2 highlights an alternative use-case: employing MLLMs for comparative evaluations. In this context, MLLMs demonstrate greater accuracy and reliability, particularly when comparing charts with large quality disparities. This underscores the promise of MLLMs in specific use cases, where comparative evaluations are sufficient to achieve evaluation goals.

Figure 4: The two charts which mainly decreased the accuracy for interest and aesthetic pleasure.

Limitations and opportunities of MLLMs in chart comparison. Although the results of the pairwise comparison exhibit similar patterns between MLLMs and humans, there are notable drops between 1.4 and 1.6 for interest and aesthetic pleasure in Figure 3. A close examination of the incorrect cases reveals that the MLLMs often overrated two specific charts in Figure 4. According to the generated explanations, the MLLMs perceived them as highly interesting and aesthetically pleasing due to
their “vivid color scheme”, “visual complexity”, and “analysis over extended timeframes.” In contrast, a majority of human evaluators described them as “hard to read”, “not pleasing”, and even “chaotic.” Once such limitations are identified, we believe advanced prompt engineering techniques such as Chain-of-Thought or few-shot prompting might be useful to address them.

Benchmark as a tool for innovation. Although the primary purpose of benchmarking is to evaluate the performance of AI models, benchmark data also have the potential to yield deeper insight into model behavior, as demonstrated above. To this end, we expanded the scope of a benchmark dataset by including not only ground-truth human ratings but also explanations for those ratings. In addition, we used the image-understanding and text-generation capabilities of MLLMs to generate explanations for their ratings. By comparing human- and MLLM-generated explanations, our benchmark dataset becomes a powerful tool for identifying critical issues in AI performance and gaining inspiration to improve MLLMs and their prompts. This approach highlights the potential of benchmark datasets to drive both evaluation and innovation in AI research.

6 CONCLUSION

To evaluate MLLMs' capability as a judge of the experiential impact of data visualizations, we developed a benchmark dataset comprising 36 charts, accompanied by human ratings and explanations collected from crowdsourced participants. Using this dataset as ground truth, we conducted two tasks: score prediction and pairwise comparison. Our findings reveal that while state-of-the-art MLLMs face challenges in directly predicting scores, they are highly capable at comparing pairs of charts. Lastly, we examined inaccurate cases and discussed the potential benefit of the benchmark for identifying issues to improve MLLM performance. This study has a few limitations that highlight opportunities for future research.
First, we did not explore the impact of advanced prompt engineering techniques like Chain-of-Thought or few-shot prompting on the performance of MLLMs. Second, demographic factors such as educational background, political views, and chart proficiency may influence attitudes and perceptions towards data visualization [36]. Future work could use demographic information from our study's human evaluators to tailor personalized predictions and enhance the MLLMs' sensitivity. Lastly, future work may introduce new factors depending on different use cases, such as the joyfulness or surprise of an interactive and animated chart or the persuasiveness of a chart incorporated with a narrative.

SUPPLEMENTAL MATERIALS

The supplemental materials4 include the chart images, human ratings with explanations from the crowdsourced evaluations, and results from Task 1 (direct score prediction) and Task 2 (pairwise comparison). In addition, we provide the prompts used in both tasks and interactive plots for exploring relationships between chart factors, human ratings, and MLLM performance, enabling further analysis and reproducibility.

REFERENCES

[1] Y. Abe, T. Daikoku, and Y. Kuniyoshi. Assessing the aesthetic evaluation capabilities of gpt-4 with vision: Insights from group and individual assessments. pp. 2Q1IS301–2Q1IS301, 2024.
[2] M. Akhtar, N. Subedi, V. Gupta, S. Tahmasebi, O. Cocarascu, and E. Simperl. Chartcheck: An evidence-based fact-checking dataset over real-world chart images. arXiv preprint arXiv:2311.07453, 2023.
[3] L. Bartram, A. Patra, and M. Stone.
Affective color in visualization. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 1364–1374, 2017.
[4] M. Behrisch, M. Blumenschein, N. W. Kim, L. Shao, M. El-Assady, J. Fuchs, D. Seebacher, A. Diehl, U. Brandes, H. Pfister, et al. Quality metrics for information visualization. In Computer Graphics Forum, vol. 37, pp. 625–662. Wiley Online Library, 2018.
[5] M. Binder, B. Heinrich, M. Hopf, and A. Schiller. Global reconstruction of language models with linguistic rules–explainable ai for online consumer reviews. Electronic Markets, 32(4):2123–2138, 2022.
[6] M. A. Borkin, A. A. Vo, Z. Bylinskii, P. Isola, S. Sunkavalli, A. Oliva, and H. Pfister. What makes a visualization memorable? IEEE Transactions on Visualization and Computer Graphics, 19(12):2306–2315, 2013.
[7] J. Boy, A. V. Pandey, J. Emerson, M. Satterthwaite, O. Nov, and E. Bertini. Showing people behind data: Does anthropomorphizing visualizations elicit more empathy for human rights data? In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 5462–5474, 2017.
[8] D. Chen, R. Chen, S. Zhang, Y. Liu, Y. Wang, H. Zhou, Q. Zhang, Y. Wan, P. Zhou, and L. Sun. Mllm-as-a-judge: Assessing multimodal llm-as-a-judge with vision-language benchmark. arXiv preprint arXiv:2402.04788, 2024.
[9] R. Datta, D. Joshi, J. Li, and J. Z. Wang. Studying aesthetics in photographic images using a computational approach. In Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006, Proceedings, Part III 9, pp. 288–301. Springer, 2006.
[10] A. Dave, A. Saxena, and A. Jha. Understanding user comfort and expectations in ai-based systems. 2023.
[11] K. Deng, A. Ray, R. Tan, S. Gabriel, B. A. Plummer, and K. Saenko. Socratis: Are large multimodal models emotionally aware? arXiv preprint arXiv:2308.16741, 2023.
[12] P. Duan, J. Warner, Y. Li, and B. Hartmann.
Generating automatic feedback on ui mockups with large language models. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1–20, 2024.
[13] N. Errey, J. Liang, T. W. Leong, and D. Zowghi. Evaluating narrative visualization: a survey of practitioners. International Journal of Data Science and Analytics, 18(1):19–34, 2024.

4 available at http://chart2experience.github.io

[14] S. Few and P. Edge. Data visualization effectiveness profile. Perceptual Edge, 10:12, 2017.
[15] Y. Guo, F. Siddiqui, Y. Zhao, R. Chellappa, and S.-Y. Lo. Stimuvar: Spatiotemporal stimuli-aware video affective reasoning with multimodal large language models. arXiv preprint arXiv:2409.00304, 2024.
[16] Y. Han, C. Zhang, X. Chen, X. Yang, Z. Wang, G. Yu, B. Fu, and H. Zhang. Chartllama: A multimodal llm for chart understanding and generation. arXiv preprint arXiv:2311.16483, 2023.
[17] K.-H. Huang, H. P. Chan, Y. R. Fung, H. Qiu, M. Zhou, S. Joty, S.-F. Chang, and H. Ji. From pixels to insights: A survey on automatic chart understanding in the era of large foundation models. arXiv preprint arXiv:2403.12027, 2024.
[18] W. Huang, P. Eades, and S.-H. Hong. Measuring effectiveness of graph visualizations: A cognitive load perspective. Information Visualization, 8(3):139–152, 2009.
[19] L. Huddy and A. H. Gunnthorsdottir. The persuasive effects of emotive visual imagery: Superficial manipulation or the product of passionate reason? Political Psychology, 21(4):745–778, 2000.
[20] P. Isola, J. Xiao, D. Parikh, A. Torralba, and A. Oliva. What makes a photograph memorable? IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1469–1482, 2013.
[21] K. Kafle, B. Price, S. Cohen, and C. Kanan. Dvqa: Understanding data visualizations via question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5648–5656, 2018.
[22] S. Kantharaj, R. T. K. Leong, X. Lin, A. Masry, M. Thakkar, E. Hoque, and S. Joty. Chart-to-text: A large-scale benchmark for chart summarization. arXiv preprint arXiv:2203.06486, 2022.
[23] X. Lan, Y. Shi, Y. Wu, X. Jiao, and N. Cao. Kineticharts: Augmenting affective expressiveness of charts in data stories with animation design. IEEE Transactions on Visualization and Computer Graphics, 28(1):933–943, 2021.
[24] X. Lan, Y. Shi, Y. Zhang, and N. Cao. Smile or scowl? looking at infographic design through the affective lens. IEEE Transactions on Visualization and Computer Graphics, 27(6):2796–2807, 2021.
[25] X. Lan, Y. Wu, and N. Cao. Affective visualization design: Leveraging the emotional impact of data. IEEE Transactions on Visualization and Computer Graphics, 2023.
[26] S. Lee, S. Kim, S. H. Park, G. Kim, and M. Seo. Prometheusvision: Vision-language model as a judge for fine-grained evaluation. arXiv preprint arXiv:2401.06591, 2024.
[27] E. Lee-Robbins and E. Adar. Affective learning objectives for communicative visualizations. IEEE Transactions on Visualization and Computer Graphics, 29(1):1–11, 2022.
[28] D. Li, B. Jiang, L. Huang, A. Beigi, C. Zhao, Z. Tan, A. Bhattacharjee, Y. Jiang, C. Chen, T. Wu, et al. From generation to judgment: Opportunities and challenges of llm-as-a-judge.
arXiv preprint arXiv:2411.16594, 2024.
[29] Z. Lian, L. Sun, H. Sun, K. Chen, Z. Wen, H. Gu, B. Liu, and J. Tao. Gpt-4v with emotion: A zero-shot benchmark for generalized emotion recognition. Information Fusion, 108:102367, 2024.
[30] J. Liem, C. Perin, and J. Wood. Structure and empathy in visual data storytelling: Evaluating their influence on attitude. In Computer Graphics Forum, vol. 39, pp. 277–289. Wiley Online Library, 2020.
[31] F. Liu, J. M. Eisenschlos, F. Piccinno, S. Krichene, C. Pang, K. Lee, M. Joshi, W. Chen, N. Collier, and Y. Altun. Deplot: One-shot visual language reasoning by plot-to-table translation. arXiv preprint arXiv:2212.10505, 2022.
[32] J. Machajdik and A. Hanbury. Affective image classification using features inspired by psychology and art theory. In Proceedings of the 18th ACM International Conference on Multimedia, pp. 83–92, 2010.
[33] A. Mehrabian. An approach to environmental psychology. Massachusetts Institute of Technology, 1974.
[34] L. Micallef, G. Palmas, A. Oulasvirta, and T. Weinkauf. Towards perceptual optimization of the visual design of scatterplots. IEEE Transactions on Visualization and Computer Graphics, 23(6):1588–1599, 2017.
[35] J. Obeid and E. Hoque. Chart-to-text: Generating natural language descriptions for charts by adapting the transformer
model. arXiv preprint arXiv:2010.09142, 2020.
[36] E. M. Peck, S. E. Ayuso, and O. El-Etr. Data is personal: Attitudes and perceptions of data visualization in rural pennsylvania. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–12, 2019.
[37] J. Sabini and M. Silver. Ekman's basic emotions: Why not love and jealousy? Cognition & Emotion, 19(5):693–712, 2005.
[38] B. Saket, A. Endert, and J. Stasko. Beyond usability and performance: A review of user experience-focused evaluations in visualization. In Proceedings of the Sixth Workshop on Beyond Time and Errors on Novel Evaluation Methods for Visualization, pp. 133–142, 2016.
[39] J. Stasko. Value-driven evaluation of visualizations. In Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, pp. 46–53, 2014.
[40] T. Sun, Y. Shao, H. Qian, X. Huang, and X. Qiu. Black-box tuning for language-model-as-a-service. In International Conference on Machine Learning, pp. 20841–20855. PMLR, 2022.
[41] B. J. Tang, A. Boggust, and A. Satyanarayan. Vistext: A benchmark for semantically rich chart captioning. arXiv preprint arXiv:2307.05356, 2023.
[42] J. Tang, Q. Liu, Y. Ye, J. Lu, S. Wei, C. Lin, W. Li, M. F. F. B. Mahmood, H. Feng, Z. Zhao, et al. Mtvqa: Benchmarking multilingual text-centric visual question answering. arXiv preprint arXiv:2405.11985, 2024.
[43] A. Tatu, P. Bak, E. Bertini, D. Keim, and J. Schneidewind. Visual quality metrics and human perception: an initial study on 2d projections of large multidimensional data. In Proceedings of the International Conference on Advanced Visual Interfaces, pp. 49–56, 2010.
[44] P. Valdez and A. Mehrabian. Effects of color on emotions. Journal of Experimental Psychology: General, 123(4):394, 1994.
[45] Y. Wang, A. Segal, R. Klatzky, D. F. Keefe, P. Isenberg, J. Hurtienne, E. Hornecker, T. Dwyer, and S. Barrass. An emotional response to the value of visualization.
IEEE Computer Graphics and Applications, 39(5):8–17, 2019.
[46] T. Willigen. Measuring the user experience of data visualization. Master's thesis, University of Twente, 2019.
[47] Y. Wu, C. Bauckhage, and C. Thurau. The good, the bad, and the ugly: Predicting aesthetic image labels. In 2010 20th International Conference on Pattern Recognition, pp. 1586–1589. IEEE, 2010.
[48] J. Ye, Y. Wang, Y. Huang, D. Chen, Q. Zhang, N. Moniz, T. Gao, W. Geyer, C. Huang, P.-Y. Chen, et al. Justice or prejudice? quantifying biases in llm-as-a-judge. arXiv preprint arXiv:2410.02736, 2024.
[49] J. Zamfirescu-Pereira, R. Y. Wong, B. Hartmann, and Q. Yang. Why johnny can't prompt: how non-ai experts try (and fail) to design llm prompts. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1–21, 2023.
[50] H. Zhang, E. Augilius, T. Honkela, J. Laaksonen, H. Gamper, and H. Alene. Analyzing emotional semantics of abstract art using low-level image features. In Advances in Intelligent Data Analysis X: 10th International Symposium, IDA 2011, Porto, Portugal, October 29-31, 2011. Proceedings 10, pp. 413–423. Springer, 2011.
[51] Y. Zhang, M. Wang, P. Tiwari,
Q. Li, B. Wang, and J. Qin. Dialoguellm: Context and emotion knowledge-tuned llama models for emotion recognition in conversations. arXiv preprint arXiv:2310.11374, 2023.
[52] S. Zhao, G. Ding, Q. Huang, T.-S. Chua, B. Schuller, and K. Keutzer. Affective image content analysis: A comprehensive survey. 2018.
[53] S. Zhao, Y. Gao, X. Jiang, H. Yao, T.-S. Chua, and X. Sun. Exploring principles-of-art features for image emotion recognition. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 47–56, 2014.
[54] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing, et al. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.
[55] M. Zhou, Y. R. Fung, L. Chen, C. Thomas, H. Ji, and S.-F. Chang. Enhanced chart understanding in vision and language task via cross-modal pre-training on plot table pairs. arXiv preprint arXiv:2305.18641, 2023.
AI-Augmented LLMs Achieve Therapist-Level Responses in Motivational Interviewing

Yinghui Huang, Yuxuan Jiang, Hui Liu, Yixin Cai, Weiqing Li

Abstract—Large language models (LLMs) like GPT-4 show potential for scaling motivational interviewing (MI) in addiction care, but require systematic evaluation of therapeutic capabilities. We present a computational framework assessing user-perceived quality (UPQ) through expected and unexpected MI behaviors. Analyzing human therapist and GPT-4 MI sessions via human-AI collaboration, we developed predictive models integrating deep learning and explainable AI to identify 17 MI-consistent (MICO) and MI-inconsistent (MIIN) behavioral metrics. A customized chain-of-thought prompt improved GPT-4's MI performance, reducing inappropriate advice (Cohen's d = -0.258) while enhancing reflections and empathy. Although GPT-4 remained marginally inferior to therapists overall (OR = 1.21), it demonstrated superior advice management capabilities (Cohen's d = -0.478). The model achieved measurable quality improvements through prompt engineering, yet showed limitations in addressing complex emotional nuances. This framework establishes a pathway for optimizing LLM-based therapeutic tools through targeted behavioral metric analysis and human-AI co-evaluation. Findings highlight both the scalability potential and current constraints of LLMs in clinical communication applications.

Index Terms—Motivational Interviewing; User-perceived Quality; Large Language Model; Prompt Engineering; Explainable Machine Learning; Integrative Modeling

I. INTRODUCTION

Globally, behavioral health issues such as substance abuse, chronic disease management, and lifestyle modifications are increasingly becoming major challenges to public health. According to the 2024 World Drug Report by the United Nations, the number of drug users worldwide reached 292 million in 2022, reflecting a 20% increase over ten years.
Yinghui Huang, Yuxuan Jiang, and Yixin Cai are with the Research Institute of Digital Governance and Management Decision Innovation, Wuhan University of Technology, Wuhan, 430070, China, and also with the School of Management, Wuhan University of Technology, Wuhan, 430070, China. E-mail: huangyh@whut.edu.cn, 338009@whut.edu.cn, 337982@whut.edu.cn. Hui Liu is with the Key Laboratory of Adolescent Cyberpsychology and Behavior (Central China Normal University), Ministry of Education, Wuhan 430079, China. E-mail: hlccnu@mails.ccnu.edu.cn. Weiqing Li is with the School of Economics and Management, Hubei University of Technology, Wuhan, 430068, China. E-mail: liwq@mails.ccnu.edu.cn.

MI, a client-centered counseling approach, has been widely demonstrated to be effective in guiding individuals toward positive behavioral change [1]. The significance of MI lies in its ability to help individuals explore and resolve internal conflicts, thereby eliciting intrinsic motivation and facilitating behavioral change [1]. Recent advancements in LLMs, such as GPT-4, have generated significant interest among clinicians, researchers, and the general public for their potential in providing therapeutic support through chatbots [2], [3], [4], [5]. This growing interest is particularly evident in the context of MI, where LLMs are being explored as tools to enhance therapeutic interventions [6]. It is driven by the stark reality that hundreds of millions of people worldwide lack access to necessary behavioral and mental health care, due to factors such as professional shortages, lack of insurance, high costs, and stigma associated with treatment [7], [8], [9],
[10]. These barriers not only degrade individuals' quality of life but also place a significant strain on social and economic development [11]. To address the growing gap in mental health services, individuals and research teams have increasingly turned to LLMs to enhance interactions between therapists and clients [12], [13], [14], [15], [16], [17], [18]. LLM-based self-help interventions in mental health can assist individuals in acquiring therapeutic skills, thereby improving psychological well-being during crisis situations [18]. Concurrently, therapeutic chatbots such as Woebot, Wysa, and Sonia are being developed to provide scalable, personalized support [18], [17]. Notably, many individuals are already using general-purpose LLMs like ChatGPT for therapeutic purposes, even though these models were not specifically designed for such purposes [19], [20], [21]. Both general-purpose and mental health-specific LLMs face rigorous scrutiny due to their potential to generate biased, harmful, or inaccurate information. Mental health professionals have raised significant concerns about the premature integration of LLMs into therapeutic settings without comprehensive evaluation [22], [23], [24]. These concerns are primarily driven by the risk that LLMs may compromise the effectiveness of therapeutic interventions, particularly in the context of MI, where nuanced and empathetic communication is crucial.

Xiangen Hu is with the Learning Sciences and Technologies, Hong Kong Polytechnic University, Hong Kong, 100872, China. E-mail: xiangen.hu@polyu.edu.hk. Corresponding authors: Hui Liu and Yixin Cai.

Furthermore, the use of unverified natural language processing
(NLP) interventions with vulnerable populations could introduce substantial risks and ethical challenges, including breaches of confidentiality, the exacerbation of mental health conditions, and the spread of misinformation. As such, it is essential to conduct thorough risk assessments and implement robust mitigation strategies to ensure the safe and ethical integration of LLMs into mental health interventions. This approach is critical for preserving the integrity of therapeutic outcomes and safeguarding the well-being of individuals receiving support. Established interventions, such as MI, are widely recognized as evidence-based practices for addressing a range of mental health issues, even when delivered through computer-assisted modalities, due to extensive research and rigorous evaluation [1], [25], [26]. While high-quality general-purpose LLMs may have the potential to match the effectiveness of MI delivered by human experts, there is still a lack of sufficient empirical evidence validating the efficacy, safety, and feasibility of using general-purpose LLMs in therapeutic settings, particularly when compared to human therapists. This gap highlights the urgent need to evaluate the feasibility, effectiveness, and safety of LLM-based interventions within the MI framework, in order to accurately assess their potential benefits for patient welfare. Accordingly, this study proposes Research Question 1 (RQ1): To what extent can general-purpose LLMs replicate the performance of human experts in Motivational Interviewing?

In the evaluation of LLMs, human assessments are effective at capturing system performance from the user's perspective, but they are often limited by subjectivity, high costs, inefficiency, and inconsistency [27]. In contrast, automated evaluation methods, while less adaptable, can process large datasets quickly and are praised for their efficiency and consistency [28]. To overcome these limitations, this study introduces a user-centered metric based on UPQ to quantitatively assess LLM performance in MI. Additionally, the study proposes a predictive modeling approach that uses machine learning and deep learning methods to simulate the computational assessment of user-LLM interactions. This dual approach seeks to balance objectivity and efficiency in the evaluation process, while mitigating ethical risks and ensuring that the integration of LLMs into MI practices preserves the integrity and effectiveness of therapeutic interventions.

Understanding the behavior of general-purpose LLMs in high-risk environments, such as mental health, is crucial due to their widespread accessibility and the potential for large-scale adverse outcomes that could severely impact vulnerable clients. These adverse behaviors include clinically inappropriate actions, such as offering solutions before adequately reflecting on a client's emotions and experiences—behaviors that are contraindicated in clinical practice [29]. In the absence of a tailored computational evaluation framework for verbal behaviors, detecting the presence or frequency of such harmful actions remains challenging.

This research was supported by funding from the National Natural Science Foundation of China (Award Number: 72204095, 72304090), the Humanities and Social Science Young Scientist Program sponsored by the Ministry of Education of the People's Republic of China (Award Number: 22YJC880022), the National Key R&D Program of China (Grant No. 2023YFE0197900 and No. 2023YFE0208300), and the China National Center for Mental Health and Prevention, China Education Development Foundation, Ministry of Education Student Service and Quality Development Center (Award Number: XS24A010). None of the funders had any involvement in carrying out this research.
Existing research mainly focuses on specific therapeutic modalities [30] and utilizes frameworks like the Cognitive Therapy Rating Scale (CTRS), which were originally designed for the manual assessment of human therapists [31]. However, these frameworks are often complex and not easily adaptable to the automation and large-scale evaluation needs of LLMs. Currently, efforts to translate these established guidelines into metrics suitable for automated and large-scale assessments are limited. This gap underscores the need for specialized computational evaluation frameworks capable of monitoring and mitigating clinically inappropriate behaviors in LLMs, ensuring their safe and ethical deployment in mental health interventions.

In the evaluation of LLMs, intrinsic metrics focus on the quality of the outputs generated by the models—such as coherence, consistency, and relevance—and are essential for understanding verbal behaviors and improving performance across various domains [32], [17]. However, these intrinsic metrics are constrained by their lack of flexibility and limited adaptability to different domains, which can lead to evaluation biases and challenges when addressing novel or unforeseen issues [33]. As a result, intrinsic metrics may not be fully effective for assessing LLMs' verbal behaviors specific to MI. In contrast, user-centered extrinsic metrics, such as UPQ, prioritize the impact on users, offering a complementary perspective that is crucial for comprehensive evaluation in high-stakes environments like mental health interventions. Therefore, this study proposes Research Question 2 (RQ2): How can intrinsic metrics related to
UPQ be identified and integrated to develop a framework for evaluating the verbal behaviors of general-purpose LLMs in MI? To address this, we employ explainable machine learning methods to introduce a computational evaluation framework that combines UPQ with relevant intrinsic metrics. This framework incorporates both intrinsic metrics—focusing on desirable MI-consistent and undesirable MI-inconsistent behaviors—and extrinsic metrics—focusing on user impact and perception. Additionally, it elucidates the relationships between these metrics. By integrating these components, the proposed framework aims to enhance the transparency of LLMs within therapeutic processes and foster trust between human users and AI systems [34]. This comprehensive approach ensures that the evaluation of LLMs in MI not only assesses the quality of their language generation but also considers the broader implications for user experience and therapeutic effectiveness.

Identifying both desirable and undesirable behaviors is crucial for understanding the potential benefits and risks associated with LLMs, thereby guiding their adoption, iterative development, and ongoing monitoring. Therefore, it remains to be evaluated whether this framework, particularly the intrinsic metrics related to UPQ, can provide valuable insights to enhance the capabilities of general-purpose LLMs. Given the critical role of prompt engineering in improving LLM performance for specific tasks, this study proposes Research Question 3 (RQ3): Can customized prompts enable general-purpose LLMs to achieve performance comparable to that of professional human experts in MI?
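The general shape of such an explainable pipeline, fitting a predictor of UPQ from intrinsic behavioral metrics and then asking which metrics drive the prediction, can be sketched as follows. Everything here is an assumption for illustration: the data are synthetic, the metric names are hypothetical stand-ins for MICO/MIIN-style counts, and a simple linear model with permutation importance stands in for the paper's actual deep learning and explainable AI components.

```python
# Illustrative sketch only: synthetic data, hypothetical metric names, and a
# linear model in place of the study's actual predictive/explainable models.
import numpy as np

rng = np.random.default_rng(0)
metrics = ["reflections", "open_questions", "unsolicited_advice"]  # hypothetical
X = rng.poisson(lam=[5, 4, 2], size=(200, 3)).astype(float)
# Simulated UPQ: helped by the MICO-like behaviors, hurt by the MIIN-like one.
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(0, 0.5, 200)

# Least-squares linear fit (with intercept) as the stand-in predictive model.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
predict = lambda M: np.c_[M, np.ones(len(M))] @ w
base_mse = np.mean((predict(X) - y) ** 2)

# Permutation importance: how much worse the fit gets when one metric is shuffled.
importance = {}
for j, name in enumerate(metrics):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance[name] = np.mean((predict(Xp) - y) ** 2) - base_mse

for name, gain in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: +{gain:.2f} MSE when shuffled")
```

The point of the explainability step is exactly what the framework requires: the metrics whose shuffling hurts UPQ prediction most are the behaviors worth targeting in prompt design.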
Since prompt engineering helps direct LLMs to generate more accurate responses by crafting prompts that better align with user intent and AI comprehension, we integrate insights from the evaluation into customized prompts and reassess the prompted LLM's capabilities to validate the effectiveness of the framework in enhancing LLM performance. This iterative process aims to ensure that LLMs not only meet the technical standards required for effective MI but also adhere to the therapeutic principles necessary for fostering meaningful and supportive interactions with clients.

This study presents an innovative integrated modeling approach for human-AI collaboration in evaluating LLMs [35]. Leveraging a dataset of MI conversations between humans and GPT-4, the proposed methodology unfolds across three primary steps. First, Predictive Modeling employs machine learning and deep learning techniques to develop a framework for evaluating the UPQ of LLMs in MI contexts. Second, Explainable Modeling identifies key intrinsic metrics that influence UPQ, enabling a detailed assessment of LLM verbal behaviors throughout the MI process. Third, Customized Prompt Development focuses on creating tailored prompts for MI tasks, assessing the efficacy of the evaluation framework in enhancing LLM performance. Additionally, this study conducts a comprehensive comparison between LLMs and human experts across all three steps, offering a thorough analysis of the LLMs' strengths and limitations.

The paper is organized as follows: Section 2 reviews relevant literature, including the research background, the principles of MI, and current applications of LLMs in MI. Section 3 outlines the research methodology, covering data collection, framework development, and analytical methods. Section 4 presents the experimental results, while Section 5 discusses the findings, highlights study limitations, and suggests directions for
future research. Finally, Section 6 concludes by emphasizing the primary contributions of this study to the fields of MI, mental health, and LLMs.

The contributions of this work are delineated as follows: (1) This study empirically demonstrated that LLMs achieve performance levels comparable to human therapists in core MI skills, resolving a decades-old limitation in AI's ability to simulate professional psychotherapeutic interactions. (2) We developed a theory-grounded computational framework based on Miller and Rollnick's MI principles, enabling the first systematic quantification and comparison of verbal behavior patterns between LLMs and human therapists, while providing a reproducible, transparent methodology for evaluating AI agents in psychotherapy contexts. (3) Through analysis of psychotherapy dialogues, this work revealed LLMs' emergent quasi-social interaction capabilities, thereby expanding the theoretical scope of machine behavior research and offering empirical evidence for understanding human-AI collaboration mechanisms in high-stakes scenarios.

II. LITERATURE REVIEW

A. MI and MI Quality

MI is a client-centered, collaborative counseling approach aimed at enhancing intrinsic motivation and facilitating behavioral change [36]. This approach emphasizes trust-building, addressing ambivalence, and co-developing actionable change plans [37]. Therapist behaviors in MI are classified into MICO, MIIN, and other types [38]. MICO behaviors, such as reflective listening and open-ended questioning, help clarify change intentions and enhance intervention efficacy. In contrast, MIIN behaviors, like confrontation and unsolicited advice, may trigger resistance. Other behaviors should be employed contextually to align with intervention goals [39]. Therefore, a comprehensive analysis of therapist behaviors and their impact on MI effectiveness is essential for optimizing the therapeutic process and outcomes.
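As a toy illustration of what coding therapist utterances into these behavior types involves (nothing close to a validated instrument such as MITI or MISC; the surface rules and utterances are invented), a minimal pattern-based tagger might look like:

```python
# Toy utterance tagger: hand-written surface patterns, illustrative only.
import re
from collections import Counter

def tag(utterance):
    u = utterance.strip()
    if u.endswith("?"):
        open_starters = r"^(what|how|why|tell me|in what way)\b"
        return "open_question" if re.match(open_starters, u, re.I) else "closed_question"
    if re.match(r"^(it sounds like|you feel|so you)\b", u, re.I):
        return "reflection"
    return "other"

session = [
    "What brings you here today?",
    "Do you drink every day?",
    "It sounds like you're torn about quitting.",
    "Have you tried cutting back before?",
]
print(Counter(tag(u) for u in session))
```

Real coding schemes rely on trained human raters or learned classifiers rather than regexes, but the output, counts of behavior types per session, is the kind of signal the fidelity measures discussed below aggregate.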
MI not only helps individuals resolve behavioral discrepancies but also fosters positive change through collaboration and empathy, making it widely applied in psychotherapy and behavioral interventions [1]. To guarantee MI quality and guide therapist development, a reliable MI competence assessment tool is crucial [40]. This tool must meet key criteria: inter-rater reliability, minimized evaluator bias [41], and cross-cultural consistency [42]. MI assessment metrics can be categorized into client-reported outcomes and therapist fidelity measures. Client-reported outcomes, referring to client behavioral changes such as decreased smoking frequency [43] or alcohol consumption [44], are often evaluated with tools like the Client Evaluation of Motivational Interviewing (CEMI) scale [45]. Therapist fidelity, measured by adherence to MI principles, is assessed with tools like the Motivational Interviewing Treatment Integrity (MITI) scale [41] and the Motivational Interviewing Skills Code (MISC) [46]. While client-reported outcomes directly indicate MI effectiveness, the absence of fidelity assessment may skew results by failing to account for whether the MI principles were effectively implemented [42]. Combining fidelity assessment is crucial for more accurate evaluations of MI competence [47]. Consequently, this paper proposes an integrated approach that merges client-reported outcomes (UPQ) with therapist fidelity (MI strategies and linguistic cues) to offer a more comprehensive and precise evaluation of MI effectiveness, providing a more robust measure of MI quality and a solid foundation for
ongoing therapist development. MI assessment methods can be broadly categorized into manual and automated approaches. Manual evaluations typically rely on standardized tools such as the MITI and MISC [48]. Artificial intelligence (AI) has significantly impacted various industries [49], with machine learning technologies widely applied in finance [50], healthcare [51], and education [52]. In psychological counseling, these technologies are increasingly utilized to automate the evaluation process. For instance, the ReadMI system uses NLP to analyze and assess MI dialogues in real time [53]. Compared to labor-intensive and time-consuming manual evaluation [54], automated assessment provides a more efficient and scalable solution, facilitating broader adoption and immediate feedback.

B. Evaluation of Large Language Models in Mental Health Applications

LLMs, based on the Transformer architecture [55], are trained on vast corpora to develop advanced language comprehension and generation [2]. These models can perform a wide range of tasks in a zero-shot manner, without task-specific data [56]. LLMs excel in natural language generation, semantic analysis, and cross-domain applications (e.g., healthcare and education), marking significant advancements in AI technology [57], [58]. LLMs excel at inferring context and generating coherent outputs aligned with prompts. Their integration into mental health is an emerging and promising research area [59]. Studies suggest LLM-powered chatbots can provide on-demand mental health support, eliminate human biases, enhance social confidence, and encourage self-reflection [60]. Furthermore, research has shown that users may feel more at ease sharing personal thoughts with chatbots rather than human therapists [61]. Consequently, LLM-powered chatbots may facilitate expression, assist in psychological issue identification, and support problem-solving.
Despite the promising potential of LLMs in mental health applications, their use involves risks [62]. First, LLMs demonstrate notable inconsistencies when addressing mental health concerns such as anxiety and depression, indicating relatively low reliability in these areas [63]. Second, LLMs may generate harmful or inaccurate information [64], [65], posing substantial risks to vulnerable individuals. Harmful outputs can even be misinterpreted as helpful, leading to phenomena like "pathological altruism" [66] and raising ethical concerns about automated decision-making [22]. For instance, weight loss advice given to someone with an eating disorder may worsen their condition, while the same advice might not have such effects on others. Further studies highlight risks of over-reliance on AI, which may contribute to real-life social withdrawal and heightened loneliness [67]. Additionally, the rise of the big data era raises significant concerns about user privacy and data security [68], underscoring the need for strong ethical guidelines in AI-driven mental health tools. The rapid development of LLMs has increased the demand for robust and standardized evaluation benchmarks [69]. Several scholars emphasize the importance of evaluation as a core discipline crucial to the success of LLMs and other AI systems [70]. LLM evaluation methods can be broadly classified into automated and manual assessments. Though manual evaluation offers real-world relevance and detailed insights [70], it demands substantial financial resources and human labor. Moreover, it suffers
from limited reproducibility, vulnerability to evaluator bias, and constraints in assessing LLMs' full potential [71]. Additionally, cultural differences can lead to substantial variability in manual evaluation outcomes [72]. In contrast, automated evaluation offers a more standardized, efficient, and consistent process, making it a preferred methodology in practice [70]. Consequently, this paper leverages machine learning and deep learning algorithms to implement automated evaluation, thereby optimizing efficiency and effectiveness. For evaluating an LLM's MI performance, metrics can be categorized into intrinsic and extrinsic types [73]. Intrinsic metrics focus on internal attributes and performance of the model's output, independent of downstream task impact, including factual accuracy, relevance, coherence, and informativeness [74]. In contrast, extrinsic metrics measure the impact of an LLM's output on downstream tasks, with helpfulness and effectiveness serving as key evaluation criteria [74]. Intrinsic metrics excel at capturing linguistic precision and the use of MI strategies, highlighting areas for technical refinement. Extrinsic metrics focus on intervention outcomes, such as promoting behavior change and emotional improvement, thereby highlighting the model's real-world applicability. By integrating both intrinsic and extrinsic metrics, a comprehensive evaluation can be achieved, providing a scientific foundation for enhancing MI quality.

C. Understanding and Measuring Mental Health Using Digital Language

Language serves as a mirror of attentional focus, revealing underlying emotional states, cognitive processes, and behavioral patterns. Computational linguistics tools facilitate the extraction of nuanced insights into the determinants, correlations, and outcomes of mental health from linguistic data, effectively addressing the limitations of self-report questionnaires and small-scale studies.
Empirical studies have demonstrated the capacity of natural language to represent psychological constructs like depression, particularly within multi-method validation frameworks [75]. For instance, language indicative of loneliness (e.g., "alone") is a stronger predictor of depressive mood than simply mentioning "sadness," underscoring the pivotal role of social isolation in depression [76]. Additionally, cognitive manifestations of depression, such as diminished self-worth, often emerge through interrogative terms like "why" and vague expressions like "apparently," indicating how uncertainty exacerbates depression [77]. Furthermore, the intersection of digital language and mental health opens avenues for the development of advanced assessment systems. Research indicates that analyzing digital language patterns can predict depression diagnoses [77], facilitating "early warning systems" that detect pathological states and predict behaviors [78]. These systems have been used to monitor various conditions, including alcohol dependence in trauma patients [79], suicidal ideation [80], adverse childhood experiences in Veterans Affairs patients [81], smoking behavior [66], and post-discharge mood fluctuations [82]. Moreover, predictive language patterns related to mental health, such as expressions of sadness, loneliness, physical discomfort, and hostility, offer clinicians real-time "symptom monitoring dashboards," enabling earlier and more targeted interventions. For example, an increase in loneliness or rumination can signal the need for therapeutic intervention, aligning with MI techniques to foster patient engagement and behavioral change. Upon validation, predictive models based on digital language can be directly applied to assess
the mental health status of populations, tracking temporal trends in psychological well-being [75]. Overall, the advancement of non-invasive group measurement methods via digital language offers a robust and scientifically validated approach, serving as a reliable data source for digital epidemiology [83], [84]. This enables real-time, population-wide mental health assessments across all demographic groups, including vulnerable populations, and ensures seamless integration with frameworks for MI and other therapeutic modalities. NLP [85] is an informatics methodology that extracts structured digital insights from large volumes of unstructured narrative data. Unlike traditional text mining, which focuses on word-level analysis, NLP captures the complexity of unstructured narratives by leveraging semantic relationships and contextual interdependencies [86]. This methodology encompasses syntactic analysis, information extraction, and meaning disambiguation. Recent research has employed NLP to evaluate healthcare quality and safety [87], identify integrated care components in primary health care settings [88], and detect alterations in clinical records after disclosure to service users [89]. Within mental health and MI, NLP offers powerful tools for analyzing therapeutic dialogues and patient narratives. By utilizing NLP techniques, researchers can identify linguistic patterns that reflect emotional states, cognitive processes, and behavioral intentions, thus enhancing the understanding of contributors to mental health outcomes and optimizing intervention effectiveness. For instance, NLP can assess MI quality by examining the alignment between client statements and therapeutic strategies, providing real-time feedback to clinicians. Additionally, NLP-driven analysis of patient narratives can reveal underlying themes and sentiments that inform personalized treatment plans, contributing to more targeted and effective mental healthcare.
Overall, integrating NLP into mental health and MI frameworks demonstrates the transformative potential of computational linguistics in converting qualitative narrative data into actionable insights, advancing both research and clinical practice. A dictionary-based approach in NLP utilizes predefined lexicons to identify and extract word categories or phrases from textual data, and is commonly employed in tasks like part-of-speech tagging, named entity recognition, and sentiment analysis. In this framework, dictionaries consist of predefined lexicons, where words or phrases are mapped to specific categories or labels. One key advantage of dictionary-based methods is their simplicity and ease of implementation, relying on predefined vocabularies without the need for complex algorithms or machine learning models. Linguistic Inquiry and Word Count (LIWC) is a widely used dictionary-based tool, particularly effective in identifying and analyzing emotional states within mental health-related texts [90]. For example, LIWC can quantify the frequency of emotion-related words, cognitive process terms, and social engagement indicators within patient narratives, providing valuable insights into psychological states. Despite their utility, dictionary-based methods have seen limited application in certain areas of mental health interventions. For instance, although these methods have been successfully applied in oncology settings to track changes in clinical records following patient visits [91], their application to clinical records generated by large language model-based chatbots in psychological interventions remains underexplored. This gap highlights an opportunity to leverage dictionary-based NLP
techniques to assess changes in dialogues and record-keeping practices following collaborative sessions with AI-driven conversational agents. In the context of MI, dictionary-based NLP methods are essential for evaluating intervention quality. By systematically classifying and comparing human-AI dialogue content, NLP can provide robust evaluation frameworks for assessing the alignment between therapeutic techniques and client responses. For instance, analyzing the frequency and context of motivational statements, reflections, and affirmations within dialogues can yield insights into intervention effectiveness and the strength of the therapeutic alliance. Overall, research on NLP-driven analysis of mental health intervention quality within dialogue systems remains nascent. However, the integration of dictionary-based approaches can support the development of comprehensive evaluation tools, enhancing the understanding of how AI-mediated interactions influence dialogue quality and therapeutic outcomes. This integration not only facilitates the systematic assessment of intervention fidelity and effectiveness but also contributes to the advancement of personalized and adaptive mental health care.

III. METHODOLOGY

This study adopted an integrative modeling approach combined with the REFRESH (Responsible and Efficient Feature Reselection guided by SHAP values) methodology to improve predictive performance and model interpretability through dynamic feature selection and iterative updates. The integrative modeling approach strategically combined predictive modeling, focused on outcome forecasting, with explanatory modeling, which sought to elucidate the underlying mechanisms driving observed phenomena. This synthesis ensured both high predictive precision and interpretability.
The REFRESH framework optimized performance metrics by dynamically reassessing and reselecting features based on SHAP (SHapley Additive exPlanations) values, which provided a principled way to quantify feature importance. This hybrid approach enhanced model accuracy, fairness, and robustness, while simultaneously maintaining transparency and reliability, thereby establishing a new paradigm for responsible AI applications across diverse contexts [92]. The research utilized publicly available MI demonstration videos from YouTube (Section 3.1), applying both predictive and explanatory modeling within the REFRESH framework to assess GPT-4's performance in MI tasks. In the predictive modeling phase (Section 3.2), two distinct feature engineering methods were employed: LIWC analysis to extract the therapist's psychological linguistic features, and pretrained word embedding models (BERT and RoBERTa) to generate dense vector representations of textual data. These feature sets were independently fed into machine learning and deep learning algorithms for training and evaluation, with the goal of identifying the optimal performing model. In the explanatory modeling phase (Section 3.3), key intrinsic metrics influencing UPQ were identified and a computational evaluation framework was developed. The experimental validation phase (Section 3.4) applied customized prompt engineering, informed by the computational evaluation framework, to optimize GPT-4's performance in MI tasks. Finally, GPT-4's performance was assessed based on intrinsic and extrinsic metrics, providing strong empirical evidence for the critical role of the computational framework in enhancing GPT-4's performance in MI contexts. Figure 1 shows a schematic diagram of the whole process.

A. Data Collection

The dataset employed in this study was curated by a research team led by Zixiu Wu, based on publicly available MI demonstration videos that illustrate typical therapist-client interactions.
The team transcribed these videos with high precision, and domain experts specializing in MI meticulously annotated key elements to ensure the dataset's validity, comprehensiveness, and reliability. The dataset captured therapist behaviors (such as reflective listening and questioning) as well as the types of client responses (such as change talk and sustain talk), providing a robust foundation for leveraging machine learning algorithms in the modeling of MI dialogues. The dataset originally contained only 133 video dialogues and exhibited significant class imbalance, with a disproportionate number of "high" instances in the UPQ category. To address this, this study employed GPT-4 to expand the dataset, resulting in an augmented dataset with 598 video dialogues. This approach effectively mitigated the class imbalance and enhanced the model's generalization and robustness by generating diverse, high-quality text samples, demonstrating the utility of LLMs in NLP data augmentation. The dataset fields and their corresponding definitions are as follows, and a descriptive analysis of the dataset is shown in Table I:

(1) UPQ: Classified as either "high" or "low", used to assess the therapist's MI performance.
(2) Therapist Intervention Type: Includes categories such as "information", "advice", "negotiation", and "options".
(3) Therapist Reflective Listening Type: Divided into simple reflection and complex reflection.
(4) Therapist Question Type: Divided into open-ended and closed-ended questions.

TABLE I
THE DESCRIPTIVE ANALYSIS OF THE DATASET

                                Utterance Length
Statistic  Utterance Rounds   Overall  Therapist  Client
Min                6               2         3        2
Max              598            1212      1034     1212
Mean           25.28           93.72    103.83    83.20
Std            45.78           86.92     88.63    83.82

B. Evaluating GPT-4's Capability in MI Through Predictive Modeling

This study developed a predictive model to assess GPT-4's MI performance, with the UPQ as the dependent variable and linguistic cues and MI strategies derived from the therapist's dialogue as predictors.
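Dictionary-based linguistic-cue extraction of the kind LIWC performs reduces to counting lexicon hits per category and normalizing by utterance length. A minimal sketch, using a toy lexicon (the real LIWC dictionary is proprietary and far larger; the category names and words below are illustrative only):

```python
# Toy LIWC-style category counter; the lexicon is an illustrative stand-in.
import re
from collections import Counter

LEXICON = {
    "negemo": {"sad", "alone", "hopeless"},
    "cogproc": {"why", "think", "apparently"},
    "social": {"we", "together", "friend"},
}

def category_rates(text: str) -> dict:
    """Return each category's share of total tokens (LIWC reports percentages)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    counts = Counter()
    for tok in tokens:
        for cat, words in LEXICON.items():
            if tok in words:
                counts[cat] += 1
    return {cat: counts[cat] / total for cat in LEXICON}

rates = category_rates("I feel sad and alone, and I keep asking why we drifted apart.")
```

Each utterance thus becomes a fixed-length vector of category rates, which is the form the classifiers in this section consume.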
Linguistic cues were extracted using two distinct methods: the LIWC dictionary and pretrained word embedding models (BERT and RoBERTa). Each method was independently evaluated as a predictor to identify the model yielding the highest performance. A range of machine learning and deep learning classification algorithms were employed for training and prediction, including Random Forest (RF), XGBoost, and Support Vector Machines (SVM), as well as Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN), and Long Short-Term Memory (LSTM) networks. In the preprocessing stage of feature engineering, the Jieba library, paired with a psychology-specific stopword list, was employed to segment the dialogue and eliminate non-essential terms, thereby isolating core information. Subsequently, linguistic cues reflecting the therapist's behavior were extracted using two distinct methodologies: LIWC and pretrained word embedding models. Each method generated a separate feature set for comparison. LIWC proved an effective tool for capturing key psychological dimensions of the therapist's dialogue, including emotional tone, social engagement, and cognitive processes [93]. Language Style Matching (LSM), a pivotal predictor of MI effectiveness, quantified the linguistic alignment between therapist and client. Greater LSM strengthens the therapeutic alliance, thereby enhancing MI outcomes [94]. LSM was computed based on LIWC analysis using (1) and (2) to measure the alignment of nine functional word categories (e.g., pronouns,
prepositions), with the mean score representing the overall LSM, where higher values indicate stronger linguistic alignment [94]. Additionally, pretrained models such as BERT and RoBERTa were employed to generate word embeddings, capturing more nuanced emotional and behavioral patterns in the therapist's speech [95]. Finally, MI strategies, such as reflection and negotiation, were integral in enhancing MI effectiveness and fostering behavioral change [96]. The frequency of these strategies within the dialogue was quantified, with the resulting features, as well as the linguistic cues, incorporated into the predictive model.

LSM_prep = 1 - |prep_1 - prep_2| / (prep_1 + prep_2 + 0.0001)   (1)

LSM = (LSM_prep + LSM_article + LSM_auxverb + LSM_adverb + LSM_conj + LSM_ppron + LSM_ipron + LSM_negate) / 8   (2)

Here prep_1 and prep_2 denote the proportions of prepositions in the therapist's and client's utterances, respectively; the scores for the remaining function-word categories are computed analogously to (1).

The extracted features were subsequently treated as independent variables, with the initial selection conducted using the Recursive Feature Elimination with Cross-Validation (RFECV) method in combination with the XGBoost model. The synergistic application of RFECV and XGBoost for feature selection significantly mitigates feature redundancy while refining the feature set, thereby improving classification accuracy and model robustness. This approach is particularly well suited for high-dimensional and imbalanced datasets, leading to marked improvements in model performance for practical applications [97]. The selected features were subsequently fed into multiple classification algorithms, with hyperparameter tuning conducted via GridSearchCV to optimize predictive performance. Model evaluation was carried out using standard classification metrics, including accuracy, precision, recall, F1-score, and ROC-AUC. This methodology facilitated the identification of the optimal predictive model for automated assessment of GPT-4's extrinsic metrics in MI tasks.
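Equations (1) and (2) translate directly into code. The sketch below assumes each speaker's LIWC output is a dict mapping the function-word categories to usage proportions; the per-category score of Eq. (1) is computed first and then averaged per Eq. (2):

```python
# LSM per Eqs. (1)-(2): category-level alignment, averaged across categories.
CATEGORIES = ["prep", "article", "auxverb", "adverb",
              "conj", "ppron", "ipron", "negate"]

def lsm_category(p1: float, p2: float) -> float:
    """Eq. (1): 1 - |p1 - p2| / (p1 + p2 + 0.0001)."""
    return 1 - abs(p1 - p2) / (p1 + p2 + 0.0001)

def lsm(therapist: dict, client: dict) -> float:
    """Eq. (2): mean category-level LSM; higher means stronger alignment."""
    return sum(lsm_category(therapist[c], client[c]) for c in CATEGORIES) / len(CATEGORIES)

# Identical usage rates across all categories yield the maximum score of 1.
score = lsm({c: 0.10 for c in CATEGORIES}, {c: 0.10 for c in CATEGORIES})
```

The 0.0001 term guards against division by zero when a category is absent from both speakers' utterances.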
Finally, the selected predictive model was used to predict the UPQ of GPT-4's responses obtained through the OpenAI API, and the McNemar test was applied to externally compare the MI competence of GPT-4 with that of human therapists.

C. Evaluating GPT-4's Verbal Behaviors in MI Through Explanatory Modeling

During the explanatory modeling phase, SHAP values were employed to derive a comprehensive global interpretation of the optimal model identified in the predictive modeling phase, while concurrently performing feature selection. This methodology enabled the identification of pivotal intrinsic metrics that exert an influence on the extrinsic metric, thereby contributing to the iterative refinement of the model. Building on the global insights, a subsequent local interpretation was carried out, culminating in the establishment of a UPQ-centered computational evaluation framework. SHAP values, grounded in the Shapley value principle from cooperative game theory, provide a consistent and equitable allocation of feature contributions, thereby making complex black-box models interpretable. This methodology is broadly applicable and instrumental in identifying the underlying factors driving model predictions, thus ensuring the reliability and transparency of interpretative outcomes. SHAP values supported both local and global interpretations, offering insights into the overall model behavior and the specific contributions of key features [98]. Within the REFRESH framework, SHAP values were computed to provide a global interpretation based on the optimal predictive model identified during the predictive modeling phase, thereby allowing for the quantification of the cumulative contribution of each intrinsic metric and its directional influence on the extrinsic metric. Features were ranked by cumulative contribution and sequentially incorporated into the predictive model in descending order
for forward stepwise selection and cross-validation, aiming to identify the top n features that optimize model performance. This methodology improved model performance [99], enhanced interpretability [100], mitigated overfitting [101], and illuminated the relationships between features and the target variable, thereby ensuring heightened transparency and robustness of the model [102]. Subsequently, a local interpretation of the refined model was performed, leading to the development of a computational evaluation framework, which includes the identification of intrinsic metrics with a significant impact on extrinsic metrics and their corresponding influence directions. Finally, based on the computational evaluation framework, a paired-sample t-test was conducted to internally assess the MI competence of GPT-4.

D. Evaluating the Framework's Enhancement of GPT's Performance in MI Through Prompt Engineering

The CoT (Chain-of-Thought) prompting framework, a key technique in prompt engineering, enhances a model's capacity to tackle complex tasks and improves the logical coherence of its outputs by guiding step-by-step reasoning [103]. Based on the computational evaluation framework established in the explanatory modeling phase, a customized CoT-based prompt was developed to optimize GPT-4's performance in MI tasks. The effectiveness of this customized prompt was rigorously evaluated by comparing the extrinsic and intrinsic metrics of the prompted GPT-4 responses against those of human therapists and the baseline GPT-4 model, thereby validating the effectiveness of the computational evaluation framework in enhancing GPT-4's MI capabilities.

E. Summary

The research framework of this study is illustrated in Fig. 1. The dataset utilized was systematically curated by Zixiu Wu, drawing upon publicly available MI videos from the YouTube platform.
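The REFRESH-style forward stepwise step described above can be sketched as a loop over SHAP-ranked features, keeping the prefix that maximizes a validation score. Here score_fn is a stand-in for cross-validated model performance, and the feature names are hypothetical:

```python
# Forward stepwise selection over features pre-ranked by SHAP contribution.
def forward_select(ranked_features, score_fn):
    """Add features in descending SHAP order; keep the best-scoring prefix."""
    best_set, best_score = [], float("-inf")
    current = []
    for feat in ranked_features:
        current = current + [feat]          # grow the candidate prefix
        score = score_fn(current)           # e.g., cross-validated F1
        if score > best_score:
            best_set, best_score = list(current), score
    return best_set, best_score

# Toy scores: performance peaks once the two informative features are in.
scores = {("lsm",): 0.70,
          ("lsm", "reflection_freq"): 0.85,
          ("lsm", "reflection_freq", "word_count"): 0.83}
chosen, best = forward_select(
    ["lsm", "reflection_freq", "word_count"],
    lambda feats: scores[tuple(feats)])
```

Because every prefix is still scored, a feature that degrades performance (word_count above) is simply excluded from the returned set, which is the overfitting-mitigation effect cited in [101].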
The dialogue content in these videos was rigorously annotated by multiple subject matter experts, systematically capturing the therapists' MI strategies as well as other relevant contextual factors. Secondly, in the predictive modeling phase, feature engineering generated two distinct feature sets: therapists' linguistic cues (including LIWC analysis results and LSM) or word embeddings, together with MI strategies. These feature sets were subsequently used as independent variables in various machine learning and deep learning classification algorithms for training. The models were evaluated using standard classification metrics, which allowed for the identification of the highest-performing model. The optimal model was then applied to predict the extrinsic metric of GPT-4's responses, and McNemar's test was conducted to compare the UPQ between GPT-4 and human therapists, aimed at evaluating GPT-4's MI performance. In the explanatory modeling phase, the optimal feature set derived from the predictive modeling phase was fed into the predictive model for a global SHAP analysis, and the features were ranked according to their cumulative contributions. The features were subsequently introduced into the predictive model in descending order for forward stepwise feature selection and cross-validation to refine the feature set and improve model performance. A local SHAP analysis was then conducted on the refined model to establish a computational evaluation framework. Finally, a paired-sample t-test within the computational framework evaluated GPT-4's MI competence. During the validation phase,
the computational evaluation framework was employed to design CoT-based customized prompts to enhance GPT-4's performance. These prompts were then deployed via the OpenAI API to generate GPT-4-prompted responses. McNemar's test and paired-sample t-tests were conducted to compare the extrinsic and intrinsic metrics between human therapists and GPT-4-prompted responses, thereby validating whether the computational framework improved GPT-4's MI performance.

Fig. 1. The step-by-step workflow of the proposed work. The workflow commenced with data collection and preprocessing. Subsequently, the predictive modeling phase employed two types of feature sets alongside multiple machine-learning and deep-learning models to predict the MI capabilities of GPT-4 and conducted a comparison with human therapists in terms of extrinsic metrics. Next, the explanatory modeling phase established a computational evaluation framework to assess GPT-4's MI capabilities from both intrinsic and extrinsic metrics. Finally, the validation phase generated prompts based on the computational evaluation framework and compared the MI capabilities of GPT-4 and human therapists after targeted prompting, so as to verify the correctness of the computational evaluation framework.

IV. RESULTS

This section sequentially presents the results of predictive modeling and explanatory modeling, the development and evaluation of the Integrative Evaluation Framework, and ethical risks and potential biases, concluding with limitations and future research.

A. Evaluation of GPT-4's Performance in Motivational Interviewing Based on UPQ by Predictive Modeling

In this study, multiple classification algorithms were trained and evaluated, with the most effective model selected for the development of a predictive model to assess the UPQ of GPT-4's responses.
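The two significance tests used above can be sketched in pure Python: an exact McNemar test on the discordant high/low UPQ pairs, and the paired-sample t statistic for an intrinsic metric. The counts and metric values below are illustrative, not data from the study:

```python
# Exact McNemar test and paired-sample t statistic, implemented from scratch.
from math import comb, sqrt
from statistics import mean, stdev

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value from discordant-pair counts b and c."""
    n = b + c
    if n == 0:
        return 1.0
    p = 2 * sum(comb(n, i) for i in range(min(b, c) + 1)) * 0.5 ** n
    return min(p, 1.0)

def paired_t(x, y):
    """Paired t statistic: mean(d) / (sd(d) / sqrt(n)) for differences d = x - y."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

# e.g., 3 dialogues where only GPT-4 scored "high" vs. 4 where only the human did
p_value = mcnemar_exact(3, 4)

# e.g., per-dialogue LSM for prompted GPT-4 vs. matched human-therapist sessions
t_stat = paired_t([0.82, 0.79, 0.88, 0.75], [0.80, 0.74, 0.83, 0.72])
```

In practice one would use a statistics library for these tests; the point of the sketch is that McNemar operates only on the discordant pairs, while the paired t-test operates on within-dialogue differences, matching the paired designs described above. The p-value for t_stat would then come from a t distribution with n - 1 degrees of freedom.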
The UPQ scores of GPT-4's responses were then compared to those of human therapists to evaluate the extent to which GPT-4's MI performance approximates human-level capabilities.

1) Performance Analysis of the Predictive Model for UPQ

Following feature engineering, classification models were trained and evaluated based on two distinct sets of independent variables. The first set encompassed therapists' linguistic features extracted from the LIWC tool, LSM between the therapist and client, and the frequency of MI strategies employed by the therapist. The second set consisted of word embeddings for both therapists and clients, generated by pretrained language models (BERT and RoBERTa), as well as the frequency of MI strategies. A range of classification algorithms were utilized to construct predictive models, and the corresponding performance metrics are presented in Table II.

TABLE II
GRAND-AVERAGE PREDICTED PERFORMANCES OF DIFFERENT CLASSIFICATION MODELS (ACCURACY, PRECISION, RECALL, F1-SCORE, AND ROC AUC)

Features  Model Type        Model             Accuracy  Precision  Recall  F1-Score  ROC AUC
LIWC      Machine Learning  RF                0.8940    0.9089     0.8848  0.8958    0.9597
                            SVM               0.9636    0.9742     0.9575  0.9653    0.9923
                            Logistic          0.9525    0.9394     0.9583  0.9483    0.9898
                            Ridge             0.9426    0.9351     0.9416  0.9381    0.9870
                            XGBoost           0.9145    0.9299     0.9059  0.9168    0.9704
          Deep Learning     RNN               0.7650    0.8068     0.5167  0.6238    0.8192
                            BiLSTM            0.8158    0.8307     0.6592  0.7302    0.8897
                            BiLSTM-Attention  0.7463    0.7431     0.5456  0.6190    0.7455
BERT      Machine Learning  RF                0.8252    0.8271     0.6257  0.7108    0.9037
                            SVM               0.8326    0.8329     0.6525  0.7296    0.8557
                            Logistic          0.8238    0.8097     0.8281  0.8182    0.9059
                            Ridge             0.8297    0.8097