AcademicEval / intro_28K /test_introduction_long_2404.16375v1.json
{
"url": "http://arxiv.org/abs/2404.16375v1",
"title": "List Items One by One: A New Data Source and Learning Paradigm for Multimodal LLMs",
"abstract": "Set-of-Mark (SoM) Prompting unleashes the visual grounding capability of\nGPT-4V, by enabling the model to associate visual objects with tags inserted on\nthe image. These tags, marked with alphanumerics, can be indexed via text\ntokens for easy reference. Despite the extraordinary performance from GPT-4V,\nwe observe that other Multimodal Large Language Models (MLLMs) struggle to\nunderstand these visual tags. To promote the learning of SoM prompting for\nopen-source models, we propose a new learning paradigm: \"list items one by\none,\" which asks the model to enumerate and describe all visual tags placed on\nthe image following the alphanumeric orders of tags. By integrating our curated\ndataset with other visual instruction tuning datasets, we are able to equip\nexisting MLLMs with the SoM prompting ability. Furthermore, we evaluate our\nfinetuned SoM models on five MLLM benchmarks. We find that this new dataset,\neven in a relatively small size (10k-30k images with tags), significantly\nenhances visual reasoning capabilities and reduces hallucinations for MLLMs.\nPerhaps surprisingly, these improvements persist even when the visual tags are\nomitted from input images during inference. This suggests the potential of\n\"list items one by one\" as a new paradigm for training MLLMs, which strengthens\nthe object-text alignment through the use of visual tags in the training stage.\nFinally, we conduct analyses by probing trained models to understand the\nworking mechanism of SoM. Our code and data are available at\n\\url{https://github.com/zzxslp/SoM-LLaVA}.",
"authors": "An Yan, Zhengyuan Yang, Junda Wu, Wanrong Zhu, Jianwei Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Julian McAuley, Jianfeng Gao, Lijuan Wang",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Recent advances in Multimodal Large Language Models (MLLMs) such as GPT-4V (OpenAI, 2023a) show strong performance in multimodal perception and reasoning, enabling various new capabilities (Yang et al., 2023b). Among these, Set-of-Mark Prompting (SoM) (Yang et al., 2023a) is an interesting new working mode that enhances the connection between visual objects and textual tokens via visual prompting, i.e., placing alphanumeric tags on input images. It provides a natural interface for human-computer interaction, by linking visual locations to executable actions through visual tags, and enables various applications such as GUI navigation (Yan et al., 2023b) and robot interaction (Lin et al., 2023a). Furthermore, GPT-4V with SoM (Yang et al., 2023a) can implicitly align visual objects with their corresponding tags. Such alignments (Li et al., 2020; Yang et al., 2021) allow MLLMs to leverage index numbers to perform multi-hop visual reasoning (Yang et al., 2023a; Wei et al., 2022), thereby improving their abilities in multimodal understanding and reasoning tasks. Despite the significant interest in SoM prompting and its broad applications, it remains unclear why GPT-4V can benefit from SoM prompting, We find that other MLLMs, including the state-of- the-art open-sourced models such as LLaVA-v1.5 (Liu et al., 2024), and commercial systems like Gemini (Team et al., 2023), struggle to understand SoM prompts. This gap prevents them from leveraging the effectiveness of SoM prompting. In this study, we aim to deepen the understanding of SoM, with a goal of facilitating arbitrary MLLMs to benefit from it. We break down SoM prompting into three core capabilities: (1) the ability to identify all tags and read the alphanumeric scene texts written on them; (2) the ability to recognize and pinpoint all objects in 1 arXiv:2404.16375v1 [cs.CV] 25 Apr 2024 Input: Image Input: Image + SoM Conversation Conversation User What items are there near the Marshal speaker? LLaVA-1.5 There is a laptop and a cup near the Marshall speaker. User To move the speaker closer to the curtain while keeping the laptop in the current position, which item should I swap it with? LLaVA-1.5 You should swap the laptop with the cup . a cup User What items are there near the Marshal speaker? SoM-LLaVA There is a laptop tagged with number 7 and a notebook tagged with number 8. User To move the speaker closer to the curtain while keeping the laptop in the current position, which item should I swap it with? SoM-LLaVA You can swap it with the lamp tagged with number 9. notebook laptop lamp laptop cup Figure 1: Example conversations from LLaVA and SoM-LLaVA (LLaVA with SoM ability) to demonstrate the effectiveness of our paradigm. Left: Standard prompting on LLaVA-1.5, which fails to correctly answer the questions. Right: Set-of-Mark prompting on SoM-LLaVA. Simply placing tags on the input image can improve visual reasoning of Multimodal LLMs. an image; (3) the ability to associate tags with corresponding objects in the image. Despite possessing skills such as OCR and visual recognition to meet the first two capabilities, most MLLMs still fail to fully understand SoM prompts. Therefore, we hypothesize that the crucial missing element is the third capability, associating tags with objects, which requires deliberate training. We further validate that SoM-style data are sparse in common MLLM training sources, and it may be necessary to create a specific dataset. 
To facilitate such training, we introduce a new learning paradigm named \u201clist items one by one\u201d. We show that by asking MLLMs to comprehensively list all tagged items following the alphanumeric order of visual tags, MLLMs can learn SoM prompting with a small number of item-listing samples. Specifically, we create a tailored dataset, by tagging images with Semantic-SAM (Li et al., 2023c; Yang et al., 2023a), and prompting GPT-4V to generate paired text descriptions. With just 10k image-text pairs, MLLMs like LLaVA-1.5 (Liu et al., 2023a) can reliably understand SoM tags. Based on this initial finding, we conduct studies to explore the effective recipes to help MLLMs best utilize SoM prompting. We enhance MLLMs with this \u201clist items one by one\u201d objective and assess their SoM performance from two aspects: the model\u2019s ability to recognize and describe the SoM tags, and its ability to use SoM in improving multimodal reasoning (Figure 1). For the first aspect, we design the tag listing task, which requires MLLMs to list and describe all tags in the image, evaluated by listing accuracy. For the second aspect, we evaluate finetuned models on five MLLM benchmarks, including POPE, MME, SEED-Bench, LLaVA-Bench, and MM-Vet, showcasing that MLLMs with SoM can significantly boost multimodal understanding performance. Moreover, our model trained with SoM data outperforms the original MLLM, even without additional visual tags during inference. This demonstrates the potential of incorporating our proposed dataset and learning paradigm to boost general MLLM training. Finally, we revisit our original question regarding the working mechanism of SoM. The preliminary hypothesis is that the SoM capability may be related to OCR and the implicit association among text, tags, and objects. With our trained models, specifically SoM-LLaVA, we gain access to model features and attention maps for an in-depth analysis. We visualize the attention map to verify tag association. Compared with the original LLaVA model, SoM-LLaVA indeed learns better visual-tag-text associations, reflected in the corresponding attention maps. Our contributions are summarized as follows. \u2022 We present a new training task and data source named \u201clist items one by one,\u201d which effectively bootstraps MLLMs for the SoM visual prompting ability. \u2022 We evaluate our finetuned SoM MLLMs on five multimodal understanding benchmarks, and show improved performance even when SoM tags are removed from the input image. \u2022 We probe the working mechanism of SoM through the trained MLLMs, showcasing the implicit association between visual objects and text tokens when performing SoM prompting.",
"main_content": "Visual referring prompting. Other than text prompts, visual referring prompting (Yang et al., 2023b) is another effective approach when interacting with multimodal LLMs, where users directly draw on input images to specify their intent, such as drawing visual pointers or handwriting scene texts. Early studies show that vision-language models can understand visual pointers such as circles (Shtedritski et al., 2023) and dots (Mani et al., 2020). Recent studies (Yang et al., 2023b) show that more powerful multimodal LLMs (OpenAI, 2023a) can handle more complicated prompts such as arrows, boxes, circles, hand drawing, scene text, as well as their combinations. Another major advancement is Set-of-Mark Prompting (SoM) (Yang et al., 2023a), where numbered tags can be placed on images to associate visual objects with text indexed. Its effective visual grounding capability (Kazemzadeh et al., 2014; Yu et al., 2016; Mao et al., 2016) enables various applications (Yan et al., 2023b; Zhang et al., 2023). In this work, we aim to better understand SoM and extend its success from GPT-4V (OpenAI, 2023a) to other open-source multimodal LLMs. Multimodal LLMs. Multimodal LLMs (Alayrac et al., 2022; Zhu et al., 2022; OpenAI, 2023a; Liu et al., 2023b; Li et al., 2023b) extend large language models (OpenAI, 2023b; Gao et al., 2023; Touvron et al., 2023) with visual perception capabilities. Recent studies (Chen et al., 2023) show the effectiveness of training open-source models on the GPT-4V generated detailed description data. Another thread of studies explore having multimodal LLMs predicting object locations as bounding boxes (Wang et al., 2023b; Peng et al., 2023) or masks (Rasheed et al., 2023). In contrast to most prior studies that pair the images with different text instructions, our study explores a new direction of how visual prompts such as SoM can improve multimodal LLMs. Specifically, we show that the SoM visual tags provide fine-grained alignments between visual objects and text tokens, thereby improving various visual reasoning tasks, both with and without SoM prompting during inference. 3 Preliminary Examination 3.1 Visualizing SoM Prompting on LLaVA In this section, we first investigate the capacity of LLaVA-1.5 in SoM, concerning its attention sensibility to the numeric IDs tagged on the objects and its answer to the SoM query. We show an example task to list a series of objects tagged with numeric IDs in Figure 2, in which the attention map is extracted from LLaVA-1.5 based on the SoM query (e.g., \u201cI have labeled a bright numeric ID at the center for each visual object in the image. Please enumerate their names.\u201d). The top 20 image patches with the highest average attention weights across the user query tokens are highlighted in transparent red regions. We can observe from the highly attended regions of LLaVA-1.5 that the numeric ID tags can be easily and correctly attended by LLaVA-1.5 along with their associated objects (e.g., bird, vase, and branches). Such capacities in locating numeric ID tags may have been acquired by LLaVA-1.5 from its pretraining tasks in OCR and also benefited from the strong OCR abilities of the ViT feature encoder (Radford et al., 2021) adopted by LLaVA-v1.5. However, the response prompted by the user query in the first example of Figure 2 suggests that LLaVA-1.5 cannot follow the SoM instruction to list all the items. 
Instead of providing the object descriptions corresponding to all the numeric ID tags, LLaVA-1.5 responds with a general image caption, due to a large portion of image captioning samples in its pretraining stage. From the second example of Figure 2, we can also observe that although LLaVA-1.5 generates a list of tag IDs with object names, it cannot accurately associate the tags to the corresponding objects, causing the model to hallucinate the descriptions of these objects. [Figure 2: Two examples of SoM prompting in LLaVA-1.5. Left: attention map extracted from LLaVA-1.5 on the image of a bird perching on a branch, where 3 objects are tagged. Right: attention map extracted from LLaVA-1.5 on the image of a vase placed on a table, where 7 objects are tagged; LLaVA-1.5 lists more than 7 object names, several of which are repetitions of previous object names.] 3.2 Finding SoM Data in Existing Training Sources We further look into pretraining/instruction-tuning (IT) datasets, aiming to inspect whether there are text contents with listings, or images with SoM annotations. We examine the pretraining dataset of LLaVA-v1 and v1.5 (Liu et al., 2023b;a), and the IT dataset used by LLaVA-v1.5, ShareGPT4V (Chen et al., 2023), and CogVLM (Wang et al., 2023a). Table 1 shows the source of text in each dataset and the percentage of text content with a listing format. # Dataset #Text Text w/ Listing Source of Text 1 LLaVA-Pretrain-CC3M-595K 595.4K 0 Raw CC3M image captions. 2 LLaVA-Pretrain-LCS-558K 558.1K 0 Captioned by BLIP. 3 LLaVA-v1.5-Mix665K 3356.2K 0.72% Rule-based, or generated by ShareGPT or GPT4-0314. 4 ShareGPT4V 102.0K 0.21% Generated by GPT4-Vision. 5 CogVLM 333.5K 7.16% Generated by MiniGPT4 or by GPT4-0314. Table 1: Examined pretraining (1-2) and instruction-tuning (3-5) datasets in our preliminary study. The text in the two pretraining datasets for LLaVA consists of image captions (either the raw captions or captions generated by BLIP (Dai et al., 2023)), and we did not find any text with listings in them using our parser. Aside from image captions, the IT datasets also contain instructions related to other visual tasks such as VQA. We noticed that the answers provided by GPT-4(V) models sometimes construct the text in a listing manner (e.g., listing out possible reasons for a question, listing out observed objects in the image, etc.). More examples can be found in Appendix A.6. The instruction-following dataset used by CogVLM has the highest percentage of text with listings (\u223c7%). Through our interaction with these models, we also find CogVLM is better at generating listing-style data than LLaVA-1.5. We add tags to MSCOCO-2017 images following the SoM (Yang et al., 2023a) format, and train a binary classifier with ViT/B-16 (Dosovitskiy et al., 2020) to detect tagged images (a minimal sketch of such a classifier is given below).
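To make the preliminary-study setup concrete, below is a minimal sketch of such a tagged-image classifier. It is an illustrative reconstruction rather than the authors' code: it assumes torchvision's pretrained ViT-B/16, and the data loading and label construction (tagged vs. untagged MS-COCO images) are left out.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Binary classifier: does an image contain SoM-style numeric tags?
weights = ViT_B_16_Weights.IMAGENET1K_V1
model = vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # 0 = untagged, 1 = tagged

preprocess = weights.transforms()  # resizing/normalization expected by ViT-B/16
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) preprocessed batch; labels: (B,) with 1 for tagged images."""
    model.train()
    logits = model(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```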
We use the classifiers to filter the images in the two LLaVA pretraining datasets, and take the top 2k images with the highest scores for each dataset. We then manually check the top 2k images, and found 12 images with tagging in CC3M-595K (\u223c0.002%), and found 86 images with tagging in LCS-558K (\u223c0.015%). Figure 15 shows a few images with tagging. Given that tagged images are sparse in those datasets and the SoM prompting performance of open-source MLLMs is unsatisfying, it may be worthwhile to design a tailored dataset that empower open-source MLLMs with this emergent ability, similar to what GPT-4V is capable of. 4 Dataset Creation and Training Motivated by the above analysis, in this section, we introduce the pipeline to create our dataset. First, in Section 4.1, we use semantic-SAM to generate semantic visual prompts in the form of numeric tags for each image. We then discuss the learning paradigm of \u201clist items one by one\u201d in Section 4.2. Finally, we use visual prompted images to generate text data in Section 4.3. 4 4.1 Image Source and Visual Prompting Generation There are various open-source image datasets available (Deng et al., 2009; Lin et al., 2014; Schuhmann et al., 2022; Yan et al., 2023a). We use MS-COCO (Lin et al., 2014) as the image source to create our SoM dataset, since it contains comprehensive human annotations with bounding boxes, masks, and captions. It has also been widely used for visual instruction tuning (Liu et al., 2023b; Wang et al., 2023a; Chen et al., 2023), which could benefit controlled experiments as well as comparisons with previous work. The first step is to create visual prompts by placing numeric tags on proper locations. Following SoM (Yang et al., 2023a), we experiment with segmentation models including SEEM (Zou et al., 2023), Semantic-SAM (Li et al., 2023c), and SAM (Kirillov et al., 2023). Empirically, we find that Semantic-SAM provides the annotation granularity that best fits COCO images, and thus use it to create tagged images for our dataset. 4.2 A Learning Paradigm: List Items One by One After obtaining the image data with semantic tags, the next question is how to design the instruction data to best distill the SoM visual prompting ability. A common approach (Liu et al., 2023b; Chen et al., 2023) in multimodal instruction-following data creation is to design and collect \u201cquestionanswering\u201d style samples. This is often done by prompting ChatGPT/GPT-4 or alternative open-source models. Given an image I and optional metadata MI such as captions, bounding boxes, various questions or instructions X(i) Q are posed, and the corresponding answers X(i) A from large models are collected. However, such general question-answering data may not be the most effective in distilling the desired SoM prompting capability, due to the inadequate mention of objects in text. For SoM prompting, one core ability of interest is to associate numbered tags with visual objects in the image, thereby enabling effective referral of visual objects via text tokens. In a general QA data, however, it is rare for multiple objects to be mentioned, even in an extended multi-turn conversation. To enhance tag association, we propose a simple and effective approach: list items one by one, where the model is asked to comprehensively describe all tagged items within an image. 
Given an image I_T with N text tags on the image, we ask the model to enumerate all items in numerical order: {X_obj^1, X_obj^2, ..., X_obj^N}, where X_obj^j is the textual description of the j-th item, tagged by ID j in the image. Beyond promoting SoM learning, listing items one by one is also effective in general multimodal LLM training: if a model learns to list items in the images in a specific order (in our case, the order is determined by the visual numeric tags), it gains a comprehensive and fine-grained understanding of images. This could directly benefit visual grounding and reasoning, which we verified through the standard multimodal QA and chat evaluation benchmarks. Compared with existing visual instruction tuning datasets, such as LLaVA-665K (Liu et al., 2023a) and ShareGPT-4V (Chen et al., 2023), another difference is the implicit spatial information encoded by the visual tags in SoM prompting. Converting images into the language space inevitably loses information, especially spatial locations. For example, \u201ca girl on the right\u201d can only vaguely imply the position of the girl. However, with SoM visual prompting, we provide precise visual guidance on the image. Therefore, our data can be viewed as a form of dense captioning with a new way of encoding spatial information. 4.3 Text Data Generation via GPT-4V With the visual-prompting-enhanced images, the final step of dataset creation is to generate the corresponding text data. To automate this process, we leverage GPT-4V (OpenAI, 2023a) to generate the listing data {X_obj^1, X_obj^2, ..., X_obj^N}, following the order of visual tags in the images. However, we find that simply prompting the model to list items in a zero-shot manner could lead to noisy and biased generation results, where the model may refer the tag to a distant object that is easy to describe (see examples in Appendix A.4). To mitigate this problem, we seek two complementary solutions: (1) we modify the system message of GPT-4V to avoid assigning tags to distant objects; (2) we manually design a few correct listing samples via human annotations, and use them as seed examples for in-context learning to query GPT-4V. The details of our template are in the Appendix. [Figure 3: Performance analysis on tag listing, with training samples of listing data growing from 10k to 100k. (a) Ablation on model sizes with LLaVA-1.5; (b) ablation on data sources with LLaVA-1.5-7B. list+mix-665k mixes listing data with the 665k instruction tuning data from (Liu et al., 2023a); list+nonocr excludes the OCR and text data from the full 665k data, resulting in 563k samples; list+ocrtext mixes listing data with only the OCR and text data from the full 665k data, resulting in 102k samples. The green dashed line in Figure 3a is the zero-shot result from GPT-4V.] In addition to listing, we also consider conversational data similar to LLaVA (Liu et al., 2023b), where GPT-4V is asked to generate multi-turn question answering between an AI assistant and a person asking questions about the photo. Given a tagged image I_T, we use GPT-4V to generate instruction-following data in the form of {Person: I_T X_Q^(i), Assistant: X_A^(i)}.
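To make the data format concrete, here is a minimal sketch of how a single \u201clist items one by one\u201d training sample could be assembled from tag-to-name pairs. The record layout and field names are illustrative (LLaVA-style conversations); the released dataset's exact schema and the 40 instruction templates are not reproduced here.

```python
import json

def build_listing_sample(image_id, tag_to_name,
                         instruction="please enumerate object names in the tagged image"):
    """Turn {tag_id: object_name} pairs into one listing-style sample.

    The target enumerates items in ascending tag order, e.g. "1. person, 2. cat, 3. dog."
    """
    ordered = sorted(tag_to_name.items())
    answer = ", ".join(f"{tag}. {name}" for tag, name in ordered) + "."
    return {
        "image": f"{image_id}.jpg",  # the tagged image I_T
        "conversations": [
            {"from": "human", "value": instruction},
            {"from": "gpt", "value": answer},
        ],
    }

# Example with three tagged objects on one image (illustrative values)
print(json.dumps(build_listing_sample("000000001234", {1: "person", 2: "cat", 3: "dog"}), indent=2))
```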
4.4 Model Training We take the model from the pretrained stage of LLaVA-1.5 (Liu et al., 2023a) as the base model, and continue finetuning by mixing the instruction tuning data of LLaVA-1.5 with our collected visual prompting data. For SoM-listing, we create 40 task templates as human instructions (e.g., \u201cplease enumerate object names in the tagged image\u201d), and treat them as standard conversational data. We use the same training objective of next-token prediction to train on general QA, SoM-QA and SoM-listing data. Specifically, we minimize the negative conditional log-likelihood as follows: $-\log p(X_A \mid X_v, X_Q) = -\log \prod_{i=1}^{L} p_{\Theta}(x_i \mid I/I_T, X_{Q,<i}, X_{A,<i})$, (1) where $\Theta$ are the trainable model parameters, and $X_{Q,<i}$ and $X_{A,<i}$ are the instruction and answer tokens in all previous turns of conversations before the current prediction token $x_i$. The input image is $I$ or $I_T$ for LLaVA or SoM data, respectively.
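To connect Eq. (1) to an implementation, here is a minimal, framework-level sketch of the masked next-token objective; it assumes a PyTorch-style training loop where image and instruction positions are excluded from the loss, and the tensor names are illustrative rather than taken from the released code.

```python
import torch.nn.functional as F

def next_token_loss(logits, input_ids, answer_mask):
    """Masked next-token prediction loss for one batch of conversations.

    logits:      (B, L, V) model outputs over the interleaved image/instruction/answer sequence.
    input_ids:   (B, L) token ids of the same sequence.
    answer_mask: (B, L) 1 for answer tokens X_A, 0 for image and instruction tokens,
                 so only answer tokens contribute to -log p_Theta(x_i | ...).
    """
    shift_logits = logits[:, :-1, :]                 # predict token t from positions < t
    shift_labels = input_ids[:, 1:].clone()
    shift_labels[answer_mask[:, 1:] == 0] = -100     # ignore non-answer positions
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```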
5 Experiments 5.1 Experimental Settings Experiment overview. We validate the effectiveness of the method from two aspects. First, in Section 5.2, we benchmark the model\u2019s capabilities in understanding and describing SoM visual prompting. We design the tag listing task on MS-COCO to test the SoM performance. Second, in Section 5.3, we evaluate whether our dataset and model can benefit visual reasoning tasks, where we consider five representative visual question answering and reasoning tasks, detailed as follows. MLLM benchmarks. We consider the following multimodal LLM benchmarks in Table 2 to validate SoM visual prompting\u2019s benefit on visual reasoning. POPE (Li et al., 2023e) is carefully designed to evaluate object hallucination in multimodal LLMs. We follow POPE and report the F1 Score for the binary choice questions. MME (Fu et al., 2023) contains 2800 binary choice questions for perception and cognition evaluation. We report the overall perception score for the evaluated models. SEED-Bench (Li et al., 2023a) contains 19K multiple choice questions covering both image and video modality. We follow a previous study (Lin et al., 2023b) that reports the multiple choice accuracy on the image subset of 14k images, namely SEED-I. LLaVA-W (LLaVA-Bench In-the-Wild) (Liu et al., 2023b) and MM-Vet (Yu et al., 2023) compute the evaluation score by prompting a GPT-4 based evaluator (OpenAI, 2023b) with both the predicted and ground-truth reference answers. The score is then scaled to the range of 0 to 100. We introduce extra implementation details in Appendix A.1. Method LLM Res. Pre-Data IT-Data POPE MME SEED-I LLaVA-W MM-Vet BLIP-2 Vicuna-13B 224 129M 85.3 1293.8 49.7 38.1 22.4 InstructBLIP Vicuna-7B 224 129M 1.2M \u2013 \u2013 58.8 60.9 26.2 InstructBLIP Vicuna-13B 224 129M 1.2M 78.9 1212.8 \u2013 58.2 25.6 Fuyu-8B Fuyu-8B 600 \u2013 \u2013 74.1 728.6 \u2013 \u2013 21.4 LLaMA-Adapter-V2 LLaMA2-7B 336 \u2013 \u2013 \u2013 1328.4 35.2 \u2013 \u2013 mPLUG-Owl-2 LLaMA2-7B 448 348M \u2013 \u2013 1450.2 64.1 \u2013 36.2 Qwen-VL Qwen-7B 448 1.4B\u2020 50M\u2020 \u2013 \u2013 62.3 \u2013 \u2013 Qwen-VL-Chat Qwen-7B 448 1.4B\u2020 50M\u2020 \u2013 1487.5 65.4 \u2013 \u2013 SPHINX LLaMA2-7B 224 80.7 1476.1 69.1 73.5 36.0 LLaVA-1.5 Vicuna-7B 336 558K 665K 85.9 1510.7 64.8 63.4 30.5 LLaVA-1.5 Vicuna-13B 336 558K 665K 85.9 1531.3 68.2 70.7 35.4 SoM-LLaVA-1.5 Vicuna-13B 336 558K 695K 86.6 1563.1 69.6 75.3 35.9 SoM-LLaVA-1.5-T Vicuna-13B 336 558K 695K 87.0 1572.8 69.5 73.3 37.2 Table 2: Performance comparison on MLLM benchmarks. Res., Pre-Data, IT-Data indicate input image resolution, and the number of samples in the pretraining and instruction tuning stages, respectively. \u2020Includes in-house data that is not publicly accessible. Underlined numbers are the second best results in the column. SoM-LLaVA-1.5-T is the model with tagged images as input. 5.2 Evaluation on Tag Listing First, we evaluate model performance on the tag listing task, aiming to answer two research questions: (1) Do model sizes matter in terms of learning SoM ability? (2) How will different sets of extra training data impact the SoM performance? We design the listing data based on images with ground-truth mask annotations from MS-COCO, and enumerate each object with its corresponding class name. An example list is \u201c1. person, 2. cat, 3. dog.\u201d We compute list-wise accuracy, where for a caption with N items, the score is M/N with M items predicted correctly by the model. With human annotations of objects in an image, we can automatically create abundant rule-based data (up to 100k) for studying model behaviors and performing quantitative evaluations. For the first question, we find that a larger LLM performs better on the listing task (see Figure 3a), presumably benefiting from a stronger language prior that helps learn SoM prompting. For the second question, we decompose the 665k instruction data from LLaVA-1.5 (Liu et al., 2023a) into two parts. We find that both general caption-QA data and OCR-text data contribute to learning the SoM ability when limited listing data are available (10k). The reason could be that OCR helps with identifying numeric tags, and general captioning may help the model recognize objects within an image; both are fundamental abilities required by SoM. In general, other visual instruction data may benefit learning SoM, especially when SoM data is scarce. Overall, we observe that with only 10k data, we can outperform zero-shot GPT-4V in listing accuracy, whereas growing the data size from 50k to 100k only slightly improves the listing performance. These findings suggest that collecting a small amount of data may be sufficient for learning SoM prompting. 5.3 Evaluation on MLLM Benchmarks We then train LLaVA-1.5 on our collected dataset and perform evaluation on MLLM benchmarks. As shown in Table 2, we observe that our SoM-LLaVA-1.5, which is trained with a mixture of LLaVA visual instructions and our SoM data in order to learn SoM prompting, also obtains superior performance on general MLLM tasks. Surprisingly, we find that even without tagged images, SoM-LLaVA still attains strong performance and a substantial improvement over the original LLaVA. This indicates the quality of our data and the potential of introducing listing data into general MLLM training to improve visual understanding and reasoning, as well as reduce hallucinations. We conjecture that the strong performance of SoM-LLaVA on non-tagged images arises because \u201clisting items one by one\u201d with visual prompting guides the model to learn fine-grained semantics for image features. Related case studies and visualizations are in Appendix A.2. For the performance of open-vocabulary listing, we present examples in Appendix A.3.
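As a concrete reference for the list-wise accuracy used in Section 5.2, here is a small sketch. It assumes gold and predicted items are given as tag-to-name mappings, and the matching rule shown (case-insensitive exact name match per tag) is an illustrative simplification.

```python
def listwise_accuracy(gold, pred):
    """gold, pred: dicts mapping tag id -> object name, e.g. {1: "person", 2: "cat", 3: "dog"}.

    For a caption with N tagged items, the score is M/N, where M is the number of tags
    whose predicted name matches the ground-truth name.
    """
    if not gold:
        return 0.0
    matches = sum(
        1 for tag, name in gold.items()
        if pred.get(tag, "").strip().lower() == name.strip().lower()
    )
    return matches / len(gold)

# Example: two of three tags described correctly -> 0.667
print(listwise_accuracy({1: "person", 2: "cat", 3: "dog"},
                        {1: "person", 2: "dog", 3: "dog"}))
```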
Columns: Data Composition, Data Size, POPE (random / popular / adversarial), MME (OCR / overall), SEED-I (overall). Rows: LLaVA-IT, 665K: 87.1, 86.2, 84.5, 125.0, 1531.3, 68.2; LLaVA-IT + Listing, 665K + 10k: 87.3, 86.3, 84.8, 147.5, 1588.2, 68.9; LLaVA-IT + QA, 695K + 20k: 87.5, 86.4, 84.7, 110.0, 1540.0, 69.2; LLaVA-IT + Listing + QA, 695K + 30k: 87.8, 86.7, 85.2, 140.0, 1563.1, 69.6; LLaVA-IT + ShareGPT-4V, 695K + 20k: 87.1, 86.0, 84.3, 110.0, 1528.7, 69.3. Table 3: Comparison of different data mixture strategies. LLaVA-IT is the mix665k visual instruction data from (Liu et al., 2023a). Listing and QA are from our SoM dataset with tagged image-text pairs. ShareGPT-4V is from (Chen et al., 2023) with the same MS-COCO images as our 2k QA data and detailed captions from GPT-4V. 5.4 Ablation Study on Mixture of Datasets Finally, we perform an ablation on different data mixture strategies in Table 3. We consider mixing our listing and QA data generated from Section 4.3 with LLaVA-665k (Liu et al., 2023a), trained separately or together. Empirically, we find that mixing listing and QA data yields the best overall performance. In Section 5.2, we found that OCR data can help the learning of listing. Here we also notice that \u201clisting items one by one\u201d can in turn greatly improve performance on OCR-related tasks. The results on POPE indicate that our data lead to fewer hallucinations compared with ShareGPT-4V, which is a dense caption dataset without visual prompting. Placing tags on the images can seamlessly encode spatial information into the data for MLLMs to learn fine-grained vision-language alignment. 6 Analysis [Figure 4: A comparative example of attention maps extracted from LLaVA-1.5 and SoM-LLaVA-1.5, where five objects (e.g., laptop, chair, monitor, desk lamp, and printer) are tagged. We highlight the top-5 most attended image patches of the models on each object\u2019s numeric tags individually. SoM-LLaVA is better at attending to objects following numeric text and tags.] 6.1 Probing Trained Models We first analyze the tag-listing capacity of SoM-LLaVA-1.5 acquired through fine-tuning. In Figure 4, we show the attention maps on the five tagged objects, which are extracted from SoM-LLaVA-1.5 and LLaVA-1.5, respectively. The comparative example showcases that although both models can locate their attention on the mentioned objects to some extent, the fine-tuned SoM-LLaVA-1.5 model can attend to and focus on characteristic regions of the object, which can also be accurately
guided by the numeric ID tags. For example, the comparative attention maps on the object \u201cLaptop\u201d tagged with number 1 show that SoM-LLaVA-1.5 can clearly attend to the mentioned object with its main focus. In contrast, LLaVA-1.5 mistakenly attends to the monitor instead of the laptop, due to the high similarity between these two objects. In addition, we also observe that SoM-LLaVA-1.5 can be efficiently guided by the numeric ID tags to focus on the specific object the user refers to, even with multiple similar objects within the image. For example, the attention map of SoM-LLaVA-1.5 on the \u201cChair\u201d tagged with number 2 mostly focuses on the chair on the left-hand side, instead of the similar chair on the right-hand side. With such a capacity to accurately locate the tagged object, SoM prompting in SoM-LLaVA-1.5 enables more flexible and easier user-referring queries without complicated language descriptions. The attention maps also verify our early hypothesis regarding the implicit association among the text, tag, and object in SoM prompting. 6.2 Visual Reasoning with SoM Prompting We present two examples of different models reasoning over tagged images. In Figure 5, we examine a multi-step visual reasoning question (i.e., \u201cWhose pants\u2019 color is the same as someone else\u2019s white shirt?\u201d), which requires the MLLM to first identify the mentioned objects (i.e., pants and shirt) and compare their visual features (i.e., the same white color). We observe from Figure 5 that LLaVA-1.5 provides an incorrect answer by falsely recognizing the person who wears the white shirt as a female. Such an incorrect answer can be caused by the inferior object recognition capacity of LLaVA-1.5. A similar observation for GPT-4V in Figure 5 shows that incorrect recognition of the white color of the male\u2019s pants can also lead to an incorrect reasoning conclusion. In contrast, SoM-LLaVA-1.5 successfully identifies tags 1 and 9 with the same color in those image regions, while recognizing the two objects as white pants and white shirt, respectively. We show another example of tag selection in Figure 6. [Figure 5: An example comparison for LLaVA, SoM-LLaVA and GPT-4V. User: Whose pants' color is the same as someone else's white shirt? LLaVA-1.5: The man wearing brown pants has the same pants color as the woman wearing a white shirt. SoM-LLaVA-1.5: The person tagged with number 1 is wearing pants that are the same color as the person tagged with number 9's white shirt. GPT-4V: The person labeled with the number 2 is wearing white pants, and their color matches the white shirt of the person labeled with the number 9.] [Figure 6: An example comparison for LLaVA, SoM-LLaVA and GPT-4V. User: How many objects are the same type as the object tagged as 13? LLaVA-1.5: There are two objects of the same type as the one tagged as 13. SoM-LLaVA-1.5: There are three objects of the same type as the object tagged as 13, which is an apple. GPT-4V: The object tagged as number 13 is an apple. In this image, there are three other apples, tagged with the numbers 3, 6, and 9. So, there are a total of four apples, including the one tagged as 13.] 7 Conclusion In this paper, we study SoM prompting of multimodal LLMs. We collected a tailored dataset that helps MLLMs acquire the SoM visual prompting ability. Our approach demonstrates that MLLMs can learn SoM prompting using a small set of GPT-4V generated data, where the text describes the visual objects following the order of tags in the image. We then verify the effectiveness of SoM prompting on general VL reasoning tasks. Our enhanced model, SoM-LLaVA, consistently outperforms the original LLaVA model across five MLLM benchmarks. Our dataset and models will be released to facilitate vision and language research.",
"additional_info": [
{
"url": "http://arxiv.org/abs/2403.03203v1",
"title": "CLEVR-POC: Reasoning-Intensive Visual Question Answering in Partially Observable Environments",
"abstract": "The integration of learning and reasoning is high on the research agenda in\nAI. Nevertheless, there is only a little attention to use existing background\nknowledge for reasoning about partially observed scenes to answer questions\nabout the scene. Yet, we as humans use such knowledge frequently to infer\nplausible answers to visual questions (by eliminating all inconsistent ones).\nSuch knowledge often comes in the form of constraints about objects and it\ntends to be highly domain or environment-specific. We contribute a novel\nbenchmark called CLEVR-POC for reasoning-intensive visual question answering\n(VQA) in partially observable environments under constraints. In CLEVR-POC,\nknowledge in the form of logical constraints needs to be leveraged to generate\nplausible answers to questions about a hidden object in a given partial scene.\nFor instance, if one has the knowledge that all cups are colored either red,\ngreen or blue and that there is only one green cup, it becomes possible to\ndeduce the color of an occluded cup as either red or blue, provided that all\nother cups, including the green one, are observed. Through experiments, we\nobserve that the low performance of pre-trained vision language models like\nCLIP (~ 22%) and a large language model (LLM) like GPT-4 (~ 46%) on CLEVR-POC\nascertains the necessity for frameworks that can handle reasoning-intensive\ntasks where environment-specific background knowledge is available and crucial.\nFurthermore, our demonstration illustrates that a neuro-symbolic model, which\nintegrates an LLM like GPT-4 with a visual perception network and a formal\nlogical reasoner, exhibits exceptional performance on CLEVR-POC.",
"authors": "Savitha Sam Abraham, Marjan Alirezaie, Luc De Raedt",
"published": "2024-03-05",
"updated": "2024-03-05",
"primary_cat": "cs.AI",
"cats": [
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Visual Question Answering (VQA) has been widely investigated by researchers from various subfields in AI like computer vision and natural language understanding. As a result, we now have access to a vast corpus of VQA datasets coupled with nu- merous models addressing the task of VQA (Zou and Xie, 2020; Wu et al., 2017). Most existing VQA datasets (Johnson et al., 2017; Antol et al., 2015) have a collection of images paired with questions such that all information re- quired to answer the question is provided in the image, and hence the scene is considered complete. But in real life, we often engage in tasks where scenes may not be completely visible. We instead may have world knowledge about various locations visited by us, acquired over time, that allows us to generate plausible answers to queries about ob- jects we do not see in a scene. For example, in au- tonomous vehicle scenarios, reasoning is crucial for dealing with partial observability. Comprehensive knowledge of traffic enables the system to inter- pret limited visual information and make informed decisions, ensuring safe navigation despite occlu- sions or limited field of view. Furthermore, in fac- tory settings, reasoning combined with background knowledge about the environment can assist teams of robots in dealing with partial observability dur- ing navigation and other coordination and cooper- ation tasks. In this paper, we introduce a synthetic dataset, CLEVR-POC1, for a reasoning-intensive VQA task set in partially observable scenarios involving ex- ternal knowledge in the form of constraints. The dataset consists of pairs of an image, representing a partial scene (B in Figure 1a) in some environment (D1 in Figure 1a where the environment is defined by a set of constraints), and a question in natural language about some hidden/missing object (C in Figure 1a) in the scene. Although in the literature, there exist datasets for QA tasks in partially ob- servable environments (e.g., CLEVR-dialog (Kot- tur et al., 2019), Visual Dialog (Das et al., 2017), Guess What? (De Vries et al., 2017)), these do not come with additional background knowledge. 1The source code associated with this research project is openly accessible at https://github.com/ savithasam88/CLEVR-POC/tree/master arXiv:2403.03203v1 [cs.AI] 5 Mar 2024 (a) The different components in VQA tasks. (b) The different VQA tasks are based on expected inputs and outputs and the number of agents involved. Figure 1: VQA task components and types of VQA tasks The challenge presented in CLEVR-POC necessi- tates the integration of knowledge and multi-step reasoning involving eliminative induction, into per- ception systems driven by learning. Given that the knowledge associated with a scene typically varies depending on the specific environment in- volved, it is not a constant across the dataset. It becomes challenging for a learning system to simply memorize this knowledge during training iterations. Moreover, because this knowledge is environment-specific, employing Large Language Models (LLMs) such as GPT as the source of knowledge, as demonstrated in some of the recent works like (Zhou et al., 2023) and (Shah et al., 2023), does not yield favorable results. We sub- stantiate these assertions through empirical exper- iments. 
The contributions of this paper are as follows: \u2022 We introduce a dataset, CLEVR-POC, that defines the task of reasoning-intensive VQA: given a partial scene, the constraints (knowledge) to which the scene conforms, and a question about a hidden object in the scene, find the set of all plausible answers. \u2022 We evaluate the performance of state-of-the-art pre-trained vision-language models and large language models on CLEVR-POC. \u2022 We demonstrate that the synergistic use of LLMs alongside a visual perception network and a formal reasoning system with access to external knowledge can efficiently and effectively address the challenges presented by CLEVR-POC. The organization of the paper is as follows. Section 2 provides an overview of existing work in VQA, focusing on various VQA datasets and briefly discussing LLMs for reasoning. Section 3 delves into the detailed process of generating CLEVR-POC, while Section 4 outlines the research questions explored in this study. Additionally, this section presents the experiments conducted on CLEVR-POC and the corresponding results.",
"main_content": "In this section, we provide an overview of research in two domains datasets in VQA and LLMs and reasoning. 2.1. Datasets in VQA A VQA task may involve various combinations of the different components shown in Figure 1a a complete scene (A), a partial scene (B), a question (C) about the scene, external knowledge in the form of rules/constraints (D1), or facts in knowledge graphs (D2), and the set of plausible answers to the question (E). Each combination results in a different VQA task (see Figure 1b). 2.2. Types of VQA Tasks 2.2.1. Task 1 Given a complete scene, and a question about an object in the scene, find the answer to the question. Since the scene is complete, the agent can come up with the exact answer implying that the solution set E has just one element (|E| = 1). DAQUAR (Malinowski and Fritz, 2014), VQA (Antol et al., 2015), CLEVR (Johnson et al., 2017) are datasets in this category. 2.2.2. Task 2 Given a complete scene, a question about one of the objects in the scene and external knowledge about objects (in the form of triples D2), find the answer to the question leveraging this external knowledge. FVQA (fact-based VQA) (Wang et al., 2017), and KVQA (knowledge aware VQA) (Shah et al., 2019) are datasets in this category. 2.2.3. Task 3 While the above two tasks involve a single agent being posed with a scene and a question, this category of VQA tasks involves more than one agent. One of the agents has access to the complete scene while the other agent is provided with a partial scene and a question. Answering the question requires the agents to interact with each other. CLEVR-dialog (Kottur et al., 2019), Visual Dialog (Das et al., 2017), Guess What? (De Vries et al., 2017) are datasets handling Task 3. 2.2.4. Task 4 (CLEVR-POC) Given a partial scene, knowledge in the form of rules (constraints) about the environment to which the scene conforms and a question about the hidden object in the scene, find the set of all plausible answers to the question. Since the question is about a hidden object (for example, about the shape of the object), it may not be always possible to provide an exact solution. Answering the question is more about eliminating all cases that are inconsistent with the background knowledge (for example: given the knowledge there are no spheres in this environment) and returning all consistent answers as the solution (the shape is a cone or a cylinder or a cube, which is why |E| \u22651). In contrast to Task 2, where the knowledge graph encompasses general world facts (e.g.,\u201ccows are herbivores\u201d), the knowledge in this context is considerably more specific to an environment. While an LLM can be presumed to possess awareness of the former category of knowledge, the same cannot be said for the latter. 2.3. LLMs and Reasoning In this paper, our emphasis lies on the process of reasoning which depends on a formal system grounded in logical rules and principles. Such a system ensures that all transformations or manipulations of symbols within it, leading to new inferences, adhere to the logical rules and principles governing the system (MacColl, 1897). While LLMs can also be seen as performing symbolic manipulations, these manipulations unlike traditional symbolic reasoning are based on statistical associations or patterns learned from data (Huang and Chang, 2023), because of which it may or may not be logically sound. 
Despite the progress in the development of large language models (LLMs), many still struggle with a deep understanding of symbols like humans do (Abraham and Alirezaie, 2022; Yao et al., 2022). To address this gap, there are ongoing efforts to create benchmarks (Huang and Chang, 2023), like the proposed CLEVR-POC, to evaluate the reasoning capabilities of LLMs. In CLEVR-POC, we introduce a VQA task that involves constraint-based reasoning, a form of logical reasoning, where the generated response must satisfy a set of constraints given. These benchmarks are used to assess the capacity of language models in handling symbolic reasoning, contributing to the advancements in the development of more logically sound systems. 3. The CLEVR-POC Dataset Now we describe in detail the generation of the CLEVR-POC dataset. The dataset, as the name suggests, is based on the CLEVR (Johnson et al., 2017) dataset, which generated scenes with geometrical shapes. Each object is associated with four attributes color, shape, material, and size. The objects in CLEVR-POC can have one of the four shapes cone, sphere, cylinder, and cube, three sizes large, medium, and small, two materials rubber and metal, and eight colors red, blue, green, yellow, gray, brown and purple. Besides these four attributes, since a scene is divided into four regions (see Figure 1a), CLEVR-POC also associates an object with the region it is in 0, 1, 2 or 3. Each object belongs to exactly one region. Division of a scene into regions enables the specification of constraints at multiple levels. \u2022 Region-based constraints for example, all objects in Region 0 are of shapes cube or cylinder. These constraints must be satisfied by objects in the corresponding region. \u2022 Across-region constraints for example, the total number of objects sharing the same color in regions 1 and 2 is not more than 2. These are constraints specified across two regions. \u2022 Generic constraints for example, there are at least two cubes in the scene. These constraints apply to the whole scene. One of the major points of distinction in the scene generation process of CLEVR-POC from the original CLEVR is that the scenes in CLEVR-POC are generated such that they conform to a chosen set of constraints. The steps in creating an instance i in the dataset are: \u2022 Generating an environment Environmenti, defined by a set of constraints. \u2022 Generating a complete scene graph, Completei, that conforms to Environmenti. \u2022 Generating the partial scene graph, Partiali by removing one of the objects, Obji, from Completei. \u2022 Generating a question, Qi, about the partial scene with object of interest Obji. Template-1 (Value Restriction) :object(X),at(X, R\u2019), not hasProperty(X, P1\u2019, V1\u2019). Translation All objects at region R\u2019 have value V1\u2019 for the property P1\u2019. An instantiation :object(X),at(X, 0), not hasProperty(X, color, red). Template-2 (Negation Constraint) :object(X), at(X, R\u2019), hasProperty(X, P1\u2019, V1\u2019). Translation All objects at region R\u2019 cannot have value V1\u2019 for the property P1\u2019. An instantiation :object(X), at(X, 0), hasProperty(X, material, metal). Template-3 (Exactly N Constraint) :#count{X: hasProperty(X, P1\u2019, V1\u2019), object(X), at(X, R\u2019)}!=N\u2019 Translation There are exactly N\u2019 objects at region R\u2019 with value V1\u2019 for the property P1\u2019. 
An instantiation :#count{X: hasProperty(X, size, small), object(X), at(X, R\u2019)}!=2. Template-4 (At least N Constraint) :#count{X1, X2: sameProperty(X1, X2, P1\u2019), object(X1), object(X2), at(X1, R1\u2019), at(X2, R2\u2019)} < N\u2019. Translation: There are at least N\u2019 pairs of objects at regions R1\u2019 and R2\u2019 that have the same value for the property P1\u2019. An instantiation :#count{X1, X2: sameProperty(X1, X2, shape), object(X1), object(X2), at(X1, 1), at(X2, 2)}<1. Template-5 (OR Constraint) :object(X), at(X, R\u2019), not hasProperty(X, P1\u2019, V1\u2019), not hasProperty(X, P1\u2019, V2\u2019). Translation: All objects in region R\u2019 have value V1\u2019 or V2\u2019 for property P1\u2019. An instantiation :object(X), at(X, 1), not hasProperty(X, color, yellow), not hasProperty(X, color, blue). Table 1: A few constraint templates. 3.1. Environment Representation An environment in CLEVR-POC is defined by a set of constraints. We provide a set of 11 constraint templates with CLEVR-POC that are expressed in answer set programming (ASP). (ASP is a declarative programming paradigm applied to solve complex search problems (Lifschitz, 2008).) Each environment is created by at most 15 different instantiations of these templates, provided there are at least two constraints associated with each region. A few example constraint templates, with their translation in English and an instantiation, are shown in Table 1. Around 30 different environments are generated (see Appendix A for an example), and the scenes in the dataset belong to one of these 30 environments; the dataset generation process ensures that the scenes are uniformly distributed across the 30 environments. 3.2. Scene Representation CLEVR represented a scene in the form of a scene graph whose nodes represented objects annotated with their attributes and whose edges denoted the spatial relations (left, right, front, behind) between objects. In CLEVR-POC, besides the scene graph representation, we also represent a scene in ASP. Below we show part of the ASP representation of the partial scene in Figure 1a. %Objects in the scene object(0). object(1). object(2). object(3). %Attributes of objects at(0, 2). hasProperty(0, color, green). hasProperty(0, size, large). hasProperty(0, material, rubber). hasProperty(0, shape, cylinder). .... %Spatial relations between objects front(1, 0). right(1, 0). ... The predicate object is used to define the different objects (denoted using identifiers 0, 1, ...). hasProperty(Object, Attribute, Value) associates a Value for an Attribute of an Object. at(Object, Region) represents the region where the object is located. The spatial relations between objects are represented with the predicates left, right, front, and behind; for example, left(Object1, Object2) represents that Object2 is located to the left of Object1. 3.3. Image Generation While the images in CLEVR are generated from a randomly sampled scene graph, CLEVR-POC generates its images from scene graphs known to adhere to the constraints defining an environment. Scene graph creation is thus a reasoning problem: given an environment (constraints in ASP) and a desired number of objects (n) in the scene, the goal is to assign each object to one of the four regions and propose values for color, size, shape, and material that are consistent with the constraints in the environment. An ASP reasoning engine solves this problem: each answer (a consistent property-value assignment for the n objects) in the answer set returned is a scene graph, i.e., a possible configuration of the objects in the scene.
Since there are many possible configurations, we randomly sample a million of these scene graphs for the subsequent image generation phase. [Figure 2: Two steps in the dataset generation process. (a) Pipeline for generating an environment and complete scenes in that environment: Figure 2a shows the first step, environment generation from constraint templates and generation of complete scenes satisfying these constraints. (b) Pipeline for generating partial scenes and questions and then labeling them with answers: Figure 2b shows the second step, partial scene and question generation from a complete scene.] A scene graph is then rendered using Blender (https://www.blender.org/). The image representing the partial scene is generated from a partial scene graph, constructed from the actual scene graph by randomly removing one of the objects from it. Figure 2a shows the scene graph construction process. 3.4. Question Representation The questions in CLEVR-POC query one of the four attributes (color, size, shape, and material) of the missing/hidden object in the partial scene. Besides representing the questions using an equivalent functional program as in CLEVR, CLEVR-POC also represents them in ASP. An example question and its ASP form are shown below: Question: What is the color of the other cylinder that is the same material as the medium red thing? query(Q) :- hasProperty(X, color, Q), hasProperty(X, shape, cylinder), hasProperty(Y, size, medium), hasProperty(Y, color, red), same_material(Y, X), X != Y. If the query is about attribute A, A \u2208 {color, size, material, shape}, the questions are generated such that the cardinality of the set of possible solutions (S) is 1 \u2264 |S| < |A|, where A is the set of all values for the attribute (for example, |size| = 3 = |{large, medium, small}|). If the question generated has |A| solutions (for instance, a solution like \u2018size is large or small or medium\u2019 is true for any question), it is considered invalid. The questions are balanced across the question types (which depend on the query attribute; see Appendix B for the distribution). It should be noted that the solution space of CLEVR-POC questions is 16 times that of CLEVR, as the solutions expected are not always a single value, but a set of values. 3.5. Question Generation The questions in CLEVR-POC are generated from the question templates available in CLEVR. We avoid the yes/no (existence, comparison) and counting questions and focus on just the attribute-querying templates. An example template is as follows: What shape is the <Z2> (size) <C2> (color) <M2> (material) [that is] <R> (relation) the <Z> (size) <C> (color) <M> (material) <S> (shape)? Question template instantiation is done based on the complete scene graph of the associated image. The object of interest is always the object that is removed from the complete scene to generate the partial scene graph. The query attribute is picked such that it satisfies the question type balancing requirements. The known attributes of the query object (filling the slots <Z2>, <C2>, or <M2> in the above template) are randomly selected. While the filler for the slot <R> (one of left, right, front, behind) is randomly picked, the reference object in the query is picked based on the spatial relations available in the complete scene, picking one of the objects that are in the <R> relation to the query object.
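The answer-labeling step described next feeds the ASP forms of the partial scene, the question, and the environment constraints to an ASP solver, which enumerates every consistent value. Below is a minimal, self-contained sketch of this kind of eliminative reasoning using the clingo Python bindings; the object id, region, and the single constraint are invented for illustration and are not taken from the dataset.

```python
import clingo

# Hypothetical mini-program: the hidden object (id 100) sits in region 1; the
# environment says every object in region 1 is yellow or blue (Template-5 style).
PROGRAM = """
color(red;blue;green;yellow;gray;brown;purple;cyan).
object(100). at(100, 1).

% the hidden object has exactly one (unknown) color
1 { hasProperty(100, color, C) : color(C) } 1.

% environment constraint: all objects in region 1 are yellow or blue
:- object(X), at(X, 1), not hasProperty(X, color, yellow),
   not hasProperty(X, color, blue).

#show hasProperty/3.
"""

def plausible_colors(program):
    """Enumerate every answer set and collect the consistent color values."""
    ctl = clingo.Control(["0"])          # "0" = compute all answer sets
    ctl.add("base", [], program)
    ctl.ground([("base", [])])
    answers = set()
    with ctl.solve(yield_=True) as handle:
        for model in handle:
            for atom in model.symbols(shown=True):
                answers.add(str(atom.arguments[2]))  # third argument = color value
    return answers

print(plausible_colors(PROGRAM))  # expected: {'yellow', 'blue'}
```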
The ASP representations of the question, the incomplete scene, and the constraints in the environment are given to an ASP solver to identify the set of possible values for the query attribute. Figure 2b shows the pipeline of question generation. Refer to Appendix A and B for a detailed example and statistics of CLEVR-POC. 4. Experiments The experiments are designed to answer the following research questions (RQ): \u2022 RQ1: How do neural-based vision language models perform on reasoning-intensive VQA tasks (with an emphasis on symbolic knowledge representation and reasoning)? \u2022 RQ2: How well do neuro-symbolic vision language architectures handle reasoningintensive VQA tasks (in the context of mapping raw inputs to symbolic space)? \u2022 RQ3: How can we leverage LLMs in reasoning-intensive VQA tasks and what are the challenges associated with it? In the sections following, we describe the methods implemented to answer these questions. 4.1. Methods 4.1.1. CLIP-based model CLIP (Contrastive Language Image Pre-training) (Radford et al., 2021) is a vision-language model that is trained to align pairs of text and images to a unified space. We experimented with the CLIP model to investigate RQ1. Figure 3 shows the architecture of a CLIP-based model to solve CLEVR-POC. The pre-trained vision transformer (ViT-B/32) and the text encoders (masked selfattention) in CLIP are leveraged to obtain encodings for the incomplete scene and the question. The encoding for the environment is obtained from its constraints. A pre-trained GPT-2 (Radford et al., 2019) model is used to encode the constraints. As GPT-2 is more language-oriented, we input the natural language version of ASP constraints (while experimenting with ASP-form constraints to assess their impact on performance). The problem is formulated as a multi-label classification problem where the output is one or more of the following 17 labels {red, blue, green, yellow, cyan, brown, gray, purple, rubber, metal, large, small, medium, cone, cube, cylinder, sphere}. Hence, the three encodings are passed to a multilabel classifier (feed-forward network) which is the only module of the whole model that is trained from scratch. The classifier is trained with a weighted binary cross entropy loss function (Ho and Wookey, 2019) that gives more penalty to the wrong prediction of minority class (as most of the labels in the output are 0, except for the ones in the answer a false negative is given more penalty). For each of the 17 labels, the weighted cross entropy loss is thus defined as below: WCE (y, \u02c6 y) = \u2212(\u03b2y log(\u02c6 y) + (1 \u2212y) log(1 \u2212\u02c6 y)) (1) \u03b2 is the weight (is set > 1 to penalize false negatives)4, y is the ground truth, \u02c6 y is the prediction. 4.1.2. Neuro-Symbolic Visual Question Answering The architecture for the neuro-symbolic approach to solving CLEVR-POC task is shown in Figure 4. The idea is to convert both the image and the question into a unified space as in CLIP, with the difference that this space is symbolic (scene graph and question in ASP). The architecture is based on the state-of-the-art neuro-symbolic approach on the CLEVR dataset, NS-VQA (Yi et al., 2018) and will be used here to study aspects of RQ2. We modify this architecture to include an ASP solver that takes as input the scene in ASP, the question in ASP, and the environment constraints in ASP to derive the answer to the question. 
The question parser (a Bidirectional Long Short-Term Memory (BiLSTM) sequence-to-sequence model) is trained as in NS-VQA using REINFORCE: the reward is positive if the ASP program generated by the parser results in the correct answer, and 0 otherwise. The question parser is initially pre-trained in a fully supervised way with a small sample of (question, ASP program) pairs. The image perception network in NS-VQA is based on Detectron (Girshick et al., 2018) and is trained independently of the question parser in a supervised way. The ASP solver used is the same as the one used during the dataset generation phase.
4.1.3. LLMs for solving CLEVR-POC
LLMs are leveraged in two ways for solving a reasoning task like CLEVR-POC.
LLM as question parser in NS-VQA: In this approach, we use an LLM as a question parser that converts the question into a semantic representation such as ASP. The image is converted to a scene graph as done in NS-VQA. Both semantic representations are then passed on to a formal reasoner, such as an ASP solver, to derive solutions consistent with the constraints.
Stand-alone LLM: The second approach is to provide both the image description and the question, along with the constraints (in natural language), as input to the LLM and to generate the consistent solutions as the response. Here, as in NS-VQA, we assume that the scene graphs are accurate, as our focus is on evaluating LLMs' ability to perform symbolic reasoning. CLEVR-POC, a synthetic dataset where environment-specific knowledge is not fixed, can assess LLMs' symbolic reasoning ability without data contamination (where the dataset becomes unusable once it has been exploited). The LLM used in the experiments is GPT-4 (OpenAI, 2023) (see Appendix C for details about the prompts used).
Figure 3: CLIP for CLEVR-POC.
Figure 4: NS-VQA for CLEVR-POC; the architecture is updated with an ASP solver.
4.2. Evaluation
Let A = {a1, a2, ...} denote the set of values in the actual answer and P = {p1, p2, ...} denote the predicted answer set. We evaluate the performance of the two approaches on CLEVR-POC using two accuracy-based metrics.
Exact Accuracy checks whether the prediction is exactly correct, i.e., whether A equals P:
Exact Accuracy(A, P) = 1 if x ∈ A ⇔ x ∈ P (for every x), and 0 otherwise   (2)
Jaccard Index computes the similarity between the actual and predicted answer sets as:
Jaccard Index(A, P) = |A ∩ P| / |A ∪ P|   (3)
The value of the Jaccard Index is between 0 (no common elements) and 1 (exact match). It gives some credit for partially correct answers as well.
4.3. Results
Tables 2a and 2b show the exact and partial answer accuracies, respectively, for NS-VQA, CLIP-based models, and stand-alone GPT-4 on CLEVR-POC. While NS-VQA (BiLSTM) uses a BiLSTM trained from scratch as the question parser, NS-VQA (GPT-4) uses pre-trained GPT-4 as the question parser. We experimented with dataset sizes of 2000, 6000, and 12000 instances. (The models were trained on an Intel Core i7-12700K with 32 GB RAM and an NVIDIA GeForce RTX 3080 Ti.) It can be seen that with a multifold increase in the dataset size, there is an improvement in the answer accuracy, but the performance is not satisfactory.
RQ1 CLIP-based model analysis: Since the question is not about an object visible in the scene, and the set of constraints to be satisfied is also not fixed across the instances in the dataset, it is challenging to learn a mapping from the three inputs (the incomplete scene, the natural language question, and the constraints) to the output set of plausible values.
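The two metrics in Eqs. (2) and (3) can be written as small Python helpers; the convention of returning 1 when both sets are empty is our own addition for robustness and is not specified in the paper.

```python
def exact_accuracy(actual, predicted):
    """Eq. (2): 1 if the predicted set equals the ground-truth answer set, else 0."""
    return 1.0 if set(actual) == set(predicted) else 0.0

def jaccard_index(actual, predicted):
    """Eq. (3): |A ∩ P| / |A ∪ P|; gives partial credit for partially correct answers."""
    a, p = set(actual), set(predicted)
    if not a and not p:
        return 1.0
    return len(a & p) / len(a | p)

# Example: the hidden object's color is constrained to be red or brown,
# but the model predicts only red.
print(exact_accuracy({"red", "brown"}, {"red"}))  # 0.0
print(jaccard_index({"red", "brown"}, {"red"}))   # 0.5
```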
Table 2 shows three sets of results for CLIP. The columns CLIP-NL and CLIP-ASP correspond to instances of CLIP where the constraints are given in natural language and in ASP, respectively. It should be noted that CLIP-NL performs better than CLIP-ASP, suggesting that natural language may be the preferable representation when incorporating symbolic knowledge into neural frameworks for QA. The performance of CLIP on CLEVR-POC when no external knowledge is provided is shown in the column CLIP (no knowledge). Although CLIP's performance drops without the external knowledge, the difference is not large, indicating that better techniques are needed for incorporating such symbolic constraints into neural frameworks. This points us toward existing neuro-symbolic frameworks.

(a) Exact answer accuracies of CLIP, NS-VQA and GPT-4 models on CLEVR-POC.
Dataset | NS-VQA (BiLSTM) | NS-VQA (GPT-4) | CLIP-ASP | CLIP-NL | CLIP (no knowledge) | GPT-4
2000    | 0.0200          | 0.9250         | 0.0350   | 0.0600  | 0.0500              | 0.4626
6000    | 0.1516          | 0.9550         | 0.1500   | 0.1700  | 0.1183              | -
12000   | 0.2308          | 0.9441         | 0.1800   | 0.2283  | 0.1483              | -

(b) Jaccard Index of CLIP, NS-VQA and GPT-4 models on CLEVR-POC.
Dataset | NS-VQA (BiLSTM) | NS-VQA (GPT-4) | CLIP-ASP | CLIP-NL | CLIP (no knowledge) | GPT-4
2000    | 0.0591          | 0.9287         | 0.1000   | 0.1557  | 0.1412              | 0.5164
6000    | 0.3602          | 0.9578         | 0.3100   | 0.3403  | 0.2447              | -
12000   | 0.4331          | 0.9496         | 0.3600   | 0.4465  | 0.2912              | -

Table 2: Exact accuracies and Jaccard index scores of NS-VQA with BiLSTM and GPT-4 as question parsers, CLIP, and GPT-4 on CLEVR-POC. CLIP-NL and CLIP-ASP take constraints in natural language and ASP, respectively. CLIP (no knowledge) is the performance of CLIP without constraints.

Sample Size      | PA (after pre-training) | PA (after REINFORCE) | PA (GPT-4)
28 (prompt size) | -                       | -                    | 0.9250
≈200             | 0.0512                  | 0                    | -
≈1000            | 0.4487                  | 0.0366               | -
≈2000            | 0.5043                  | 0                    | -

Table 3: Drop of program accuracies (PA) after REINFORCE, and the performance of GPT-4 provided with just 28 (question, ASP program) pairs as prompt.

RQ2 NS-VQA analysis: While neural models failed at symbolic reasoning and at incorporating symbolic knowledge into the network, the major challenge faced by neuro-symbolic architectures lies not in reasoning but in mapping the image or the question to a symbolic representation in the absence of ground-truth semantic representations. In our experiments, we focus on language perception while assuming 100% accuracy in image perception. Tackling both perceptions simultaneously is even more formidable without access to ground-truth representations. Hence, the poor performance of NS-VQA (see column NS-VQA (BiLSTM) in Tables 2a and 2b) can be solely attributed to the failure of REINFORCE in learning accurate ASP programs. As mentioned in Section 4.1.2, the BiLSTM is initially pre-trained in a supervised fashion with a few examples. We experimented by varying the number of examples provided for pre-training. Table 3 shows the program accuracy after pre-training with around 200, 1000, and 2000 pairs of <question, ASP program>. When these pre-trained models are further trained with REINFORCE, there is a drastic drop in the program accuracy, as the focus of the REINFORCE algorithm is on arriving at the correct answer independent of the program's accuracy. This fall is observed even with the original CLEVR dataset.
The chances of deriving the correct answer by fluke, even with a wrong program, are higher in the case of CLEVR compared to CLEVR-POC, considering the larger solution space of CLEVR-POC (see Section 3.4). REINFORCE clearly fails to learn ASP programs through weak supervision, even when it initiates its training from a proficient model.
RQ3 LLM analysis: In the first experiment we used GPT-4 as a question parser. The BiLSTM-based question parser of NS-VQA is replaced with GPT-4 (the results are shown in column NS-VQA (GPT-4) in Tables 2a and 2b). The model is provided with just 28 (question, ASP program) pairs of examples as prompts. GPT-4 with no fine-tuning was able to accurately predict the equivalent ASP programs. The stand-alone GPT-4 approach gave less than 50% exact accuracy. The evidence indicates that employing GPT-4 as a question parser to translate the question into an ASP program and subsequently utilizing an ASP reasoning engine leads to better results compared to placing the entire burden of symbolic reasoning on GPT-4. It should also be noted that GPT-4 with no data-specific training performed better than CLIP and NS-VQA (BiLSTM). There is still room for improvement with some fine-tuning.
5. Discussion
We now discuss important challenges that our dataset and work point to.
Reasoning and LLMs: The experiments showed that the direct application of LLMs is not a good solution for such reasoning-intensive tasks. Mahowald et al. (2023) also discuss the limitations of LLMs in formal reasoning tasks. Our experiments showed that a more appropriate approach to harnessing LLMs involves relieving them of the task of symbolic reasoning and instead employing them for generating symbolic representations. Progressing even further entails discovering mechanisms for seamlessly incorporating specific knowledge into LLMs and generating responses that are consistent with this knowledge.
Symbolic knowledge in the visual perception network: Although the focus of this paper was on language and reasoning, knowledge in the form of constraints in CLEVR-POC can play a significant role during image perception, as it can provide hints on what can or cannot be in the image. This is a form of weak supervision, which is also required in the absence of ground-truth scene graphs to accelerate the learning process. Developing neuro-symbolic models with a stronger feedback mechanism for visual perception, such as DeepProbLog (Manhaeve et al., 2018), NeurASP (Yang et al., 2020), Semantic-Loss (Xu et al., 2018), and LTN (Serafini and Garcez, 2016), would help in faster convergence. These frameworks, however, still cannot be applied to VQA tasks due to scalability issues.
6. Conclusion
Humans often have to interact with partially observable environments. In light of the need to deal with the inherent uncertainty in knowledge-rich real-world scenarios, this work aimed to establish a benchmark for evaluating reasoning-intensive VQA in partially observable environments. Applying the benchmark to stand-alone LLMs and other vision-language models yielded disappointing results due to their inability to perform symbolic reasoning. We also demonstrated that combining an LLM with a visual perception network and a formal reasoner produced positive results. Future directions involve developing visual perception networks with knowledge-guided supervision, enhancing LLMs' reasoning capabilities, and moving CLEVR-POC to an embodied setup like vision-language navigation.
7. Acknowledgements
This research was conducted during the authors' tenure at Örebro University, Sweden, and was financially supported by the Wallenberg AI, Autonomous Systems, and Software Program (WASP).
8. Bibliographical"
},
{
"url": "http://arxiv.org/abs/2403.02567v1",
"title": "Eliciting Better Multilingual Structured Reasoning from LLMs through Code",
"abstract": "Development of large language models (LLM) have shown progress on reasoning,\nthough studies have been limited to English or simple reasoning tasks. We thus\nintroduce a multilingual structured reasoning and explanation dataset, termed\nxSTREET, that covers four tasks across six languages. xSTREET exposes a gap in\nbase LLM performance between English and non-English reasoning tasks. We then\npropose two methods to remedy this gap, building on the insight that LLMs\ntrained on code are better reasoners. First, at training time, we augment a\ncode dataset with multi-lingual comments using machine translation while\nkeeping program code as-is. Second, at inference time, we bridge the gap\nbetween training and inference by employing a prompt structure that\nincorporates step-by-step code primitives to derive new facts and find a\nsolution. Our methods show improved multilingual performance on xSTREET, most\nnotably on the scientific commonsense reasoning subtask. Furthermore, the\nmodels show no regression on non-reasoning tasks, thus showing our techniques\nmaintain general-purpose abilities.",
"authors": "Bryan Li, Tamer Alkhouli, Daniele Bonadiman, Nikolaos Pappas, Saab Mansour",
"published": "2024-03-05",
"updated": "2024-03-05",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "The ability to perform complex reasoning tasks is fundamental to human intelligence, where multiple steps of thought are required. Complex reasoning remains an open-problem for large language mod- els (LLMs), despite some recent progress. Prior works consider complex reasoning tasks specified only in English. Such an English-centric perspec- tive provides a limited assessment of the underlying reasoning capabilities of LLMs, given any specific language is largely a surface-form representation. This motivates our first inquiry into the multilin- gual complex reasoning capabilities of LLMs. \u2217Work done during an internship at Amazon \u2020corresponding author 1https://github.com/amazon-science/ xstreet released under CC-BY-4.0. Base LLM Reasoning- enhanced LLM fine- tune Original train data fine- tune infer TCC data reasoning tasks Domain shift between training and inference! Predictions ru es en zhar ja Un conejo blanco puede saltar 15 metros en un minuto. El conejo marr\u00f3n salta 12 metros por minuto. \u00bfCu\u00e1l es la distancia total que saltar\u00e1n los dos conejos en 5 minutos? facts = ['\u7eb3\u5854\u5229\u5a05\u5728\u56db\u6708\u4efd\u628a\u7247\u6bb5\u5356\u7ed9\u4e86 \u5979\u768448 \u4e2a\u670b\u53cb\uff0c\u7136\u540e', '\u5979\u5728\u4e94\u6708\u4efd\u5356\u51fa \u7684\u7247\u6bb5\u53ea\u6709\u539f\u6765\u7684\u4e00\u534a\u3002'] question = '\u7eb3\u5854\u5229\u5a05\u5728\u56db\u6708\u548c\u4e94\u6708\u603b\u5171\u5356 \u51fa\u4e86\u591a\u5c11\u7247\u6bb5\uff1f\u2019 facts.append(question) answer = None selected = select_facts(facts) Figure 1: An overview of our methods to improve mul- tilingual structured reasoning. First (top), we create the translated code comments (TCC) dataset, and use it in a fine-tuning setup. Second (bottom), we use the result- ing LLM for inference on reasoning tasks. We find the most success with a code prompt format that bridges the representations between training and inference. We introduce the xSTREET reasoning and expla- nation dataset (as shown in Figure 2). xSTREET covers 4 tasks, and extends the English STREET benchmark (Ribeiro et al., 2022) to 5 additional diverse languages, inheriting the source\u2019s expert an- notations and structured graphs for reasoning steps (7.8 average steps/answer). The tasks cover arith- metic, logic and science commonsense problems. We perform machine translation for the training and development data splits, and also perform hu- man post-editing to the test sets, to ensure a high quality multilingual benchmark. We use xSTREET to evaluate several LLMs, identifying the multilin- gual setting as significantly challenging. To remedy the non-English reasoning gap, we turn to the widely accepted hypothesis that LLMs trained on code are better at reasoning than those trained only on text. This code and reasoning hy- pothesis has been empirically corroborated by sev- eral papers (Suzgun et al., 2022; Liang et al., 2023; arXiv:2403.02567v1 [cs.CL] 5 Mar 2024 A white rabbit can hop 15 meters in one minute. The brown rabbit hops 12 meters\u00a0 per minute. What is the total distance the two rabbits will hop in 5 minutes? White rabbit + brown rabbit =\u00a0 15 + 12 = 27 5 minutes * 27 = 135 meters The answer is 135 Input Output Context Question Reasoning steps Answer Machine translate + Human post-edit STREET (en) xSTREET (es) The two rabbits will hop 135 meters\u00a0 in 5 minutes. Un conejo blanco puede saltar 15 metros en un minuto. El conejo marr\u00f3n salta 12 metros por minuto. 
\u00bfCu\u00e1l es la distancia total que saltar\u00e1n los dos conejos en 5 minutos? Conejo blanco + conejo marr\u00f3n = 15 + 12 = 27 5 minutos * 27 = 135 metros La respuesta es 135 Input Output Los dos conejos saltar\u00e1n 135 metros en 5 minutos. Figure 2: The translation process for an xSTREET entry. We start from an example from STREET (Ribeiro et al., 2022). The reasoning graphs are directly transferred, while each sentence text is translated. Note that this shows only one (of 4) task, GSM8K, and one (of 5) language, Spanish. Hendy et al., 2023). Our work takes a further step in investigating the extent to which this hypothesis holds for non-English tasks. We proceed with the insight that code can be leveraged as a structured framework to represent the underlying reasoning steps, regardless of the surface-form language of the task. We thus propose two techniques to elicit better multilingual complex reasoning from LLMs (as shown in Figure 1): at training time through a lightweight fine-tuning recipe on code, and at inference time using a novel code prompt format. In the LLM literature, many capabilities have been characterized as \u2018emergent\u2019 with model scale (Wei et al., 2022a; Patel et al., 2022). Re- cent work on complex reasoning has thus focused on huge (175B+) and closed-source models. In our work, we instead aim to boost performance on far smaller open-source LLMs (7B). To make our findings reproducible, we release our benchmark. Our contributions are: 1. We collect and release the first dataset for multilingual structured reasoning, xSTREET, covering 6 diverse languages and 4 tasks (5.5K entries total). 2. At train time: we enhance reasoning capabil- ities of off-the-shelf LLMs by further train- ing on program code data where code is in- terleaved with non-English comments. To this end, we augment a source code corpus through translating code comments and ap- ply low-rank parameter-efficient fine-tuning (LoRA (Hu et al., 2021)) to BLOOMZ (Muen- nighoff et al., 2022). Our method is effec- tive yet lightweight, while preserving general- purpose LM capabilities. 3. At inference time: we design a code-like prompting format that mimics the structure of the reasoning tasks by interweaving func- tion calls and multilingual text. We show this format outperforms several other prompt formats used. 4. We evaluate multiple LLMs (BLOOMZ, GPT- 3, Falcon-40b-instruct) on our benchmark, and show improved performance \u2014 even for top-performing models \u2014 across structured reasoning tasks in different languages. As our inference and training-time techniques are or- thogonal, we show that they can be used in tandem to achieve the best performance. 5. We perform qualitative and quantitative anal- ysis of results. Our findings taken together suggest that code elicits better multilingual structured reasoning by improving LLM\u2019s ad- herence to the reasoning format.",
"main_content": "We adopt the scope of complex reasoning from Ribeiro et al. (2022), and use complex reasoning and structured reasoning interchangeably. The goal is to study the reasoning process itself, and how an LLM can structure its output in steps to improve the final performance. The tasks are selected so that the knowledge to answer a question is contained with the input question and context. For ease of evaluation, the answers are multiple-choice selections, or numbers for arithmetic reasoning. Further details are given in \u00a72.4. 2.1 Code & Reasoning Hypothesis for LLMs This hypothesis arose from empirical evidence by several concurrent works. Suzgun et al. (2022) state, \u201cCodex, trained on both code and text data, shows better performance in following task instructions and exploiting algorithmic patterns based on the prompt exemplars.\u201d Liang et al. (2023) state, \u201cfor reasoning-intensive scenarios, we find that the code models, especially Codex davinci v2, consistently outperform the text models, even on synthetic reasoning scenarios posed in natural language.\u201d Hendy et al. (2023) state that \u201cWe hypothesize that the models acquire their reasoning capabilities through training on natural language multilingual data along with programming languages data\u201d. In summary, these works provide evidence that training LLMs on code serves as indirect supervision for complex reasoning tasks. One of our major goals is to explore the extent to which this hypothesis holds beyond English. 2.2 Code Prompts for Complex Reasoning Reasoning tasks posed in natural language can be reformulated as code prompts. Using these codelike structures to interact with code-LLMs better aligns the representations seen at training time with those at inference time. Madaan et al. (2022) use few-shot prompting on the Codex LLM to convert tasks into Python graphs, deal with structured commonsense tasks. Zhang et al. (2023) proceed similarly, but for causal reasoning tasks. The reformulation process from natural language specification to code prompts is an open-ended one, requiring manual effort, creativity, and trial and error. While these two works use code prompts for complex reasoning tasks with classification outputs, as we did, code prompts can also be applied to tasks with generative outputs, such as knowledge graph construction (Bi et al., 2023) and story understanding (Dong et al., 2023). To the best of our knowledge, our work is the first to use code prompts in multiple languages. 2.3 Multilingual Reasoning for LLMs The MEGA benchmark (Ahuja et al., 2023) covers 70 languages and 16 tasks. MEGA considers only simple reasoning tasks, which, as discussed earlier, limits our understanding of how well LLMs can reason across languages. MGSM (Shi et al., 2022) is an arithmetic reasoning dataset in 10 languages, translated from GSM8K (Cobbe et al., 2021).They find that the chain-of-thought technique (CoT) (Wei et al., 2022b), by adding to the prompt few-shot examples of step-by-step reasoning, is also effective in the multilingual setting. Interestingly, they find that for non-English questions, English CoT outperforms native language CoT. They further emphasize the reasoning ability increases with model scale. Our xSTREET benchmark is a more comprehensive view of multilingual complex reasoning. xSTREET covers not only arithmetic,2 but adds logic and science tasks, has many more entries, and has ground-truth structured reasoning annotations. 
2.4 STREET Complex Reasoning Benchmark The STREET benchmark is a composite of several complex reasoning tasks (Ribeiro et al., 2022). The work adds expert human annotations for multipremise, multistep explanations. Each task\u2019s explanation is structured in a reasoning graph. Reasoning graphs, as shown in Figure 2, consist of nodes which contain statements, and edges that connect nodes. Source Tasks The tasks3 and answer formats are: \u2022 ARC science commonsense questions (multiple-choice) \u2022 GSM8k arithmetic word problems (number) \u2022 AQUA_RAT arithmetic word problems (multiple-choice) \u2022 AR_LSAT logic problems from a standardized test (multiple-choice) Linearized prompt format While a reasoning graph is abstract, to interface with an LLM, Ribeiro et al. (2022) use linearized prompts. This represents a graph as a sequence of tokens, as shown in Figure 3. Statements are given a number index; output statements (i.e., reasoning steps) include a trace of the nodes leading to the new statement. Problems with the linearized format arise in that it is task-specific, and that it diverges from LLM\u2019s training data distribution. While in-context learning can help the model pattern-match the output format, the underlying reasoning abilities of the LLM 2Instead of using MGSM, we perform our own translation of GSM8k given the intermediate reasoning annotations inherited from Ribeiro et al. (2022). 3STREET includes a fifth task, SCONE, which is omitted from xSTREET. SCONE is quite abstract, and involves statetracking in a toy world. This requires careful consideration beyond translation, and is thus left to future work. may not be properly elicited. Following Madaan et al. (2022), we argue that interfacing with a codeLLM through code prompts is a more \u201cintuitive\u201d way for the LLM to reason through a task, leading to our novel code prompts format in \u00a75. 2.5 Source Code Dataset The Stack is a 3.1 TB dataset of permissively licensed source code in 30 programming languages (Kocetkov et al., 2022). In this work, we utilize the official small subset4, and consider only 3 popular programming languages: Java, JavaScript, Python (10k files each, 30k total). 3 Multilingual Complex Reasoning Benchmark: xSTREET We create the xSTREET dataset by translating STREET into 5 languages: Arabic (ar), Spanish (es), Russian (ru), Chinese (zh), and Japanese (ja). These languages have linguistic and script diversity; furthermore, they are the languages used in many online programming help websites. To create the xSTREET test split, we hire expert human translators for all 5 languages through an internal team (detailed in \u00a79). Translators are tasked with post-editing the machine translation of one sentence at a time; for context, they can refer to the entire STREET entry the sentence comes from. After receiving the translations, we re-use the reasoning graph edges, and replace English nodes with the translations to create xSTREET. This process is shown in Figure 2. We therefore extend the 914 English entries in STREET to 5484 examples in xSTREET (914 * 6 languages). To create the xSTREET train and development splits, we use machine translation.5 We then asked native speakers to evaluate the quality of 10 random sampled translations of each language. Annotators gave feedback that, despite some errors, the translations were of reasonable enough quality to use for training purposes. Dataset statistics for the xSTREET test benchmark are given in Table 1. 
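To illustrate how a translated entry is assembled, here is a minimal Python sketch of reusing the reasoning-graph edges while swapping each English node text for its translation. The entry schema (a 'nodes' dict keyed by id plus 'edges' and 'answer' fields) is an assumption for illustration; the released STREET files may be organized differently.

```python
def translate_entry(entry, translations):
    """Build an xSTREET entry from a STREET entry: keep the reasoning-graph
    edges as-is (the structure is language-independent) and swap each English
    node text for its translation. Answers are multiple-choice labels or
    numbers, so they are kept unchanged."""
    return {
        "nodes": {node_id: translations[text] for node_id, text in entry["nodes"].items()},
        "edges": entry["edges"],
        "answer": entry["answer"],
    }

# Toy usage: a two-node graph where node 1 is derived from node 0.
entry = {
    "nodes": {0: "A white rabbit can hop 15 meters in one minute.",
              1: "The answer is 135."},
    "edges": [(0, 1)],
    "answer": 135,
}
translations = {
    "A white rabbit can hop 15 meters in one minute.": "Un conejo blanco puede saltar 15 metros en un minuto.",
    "The answer is 135.": "La respuesta es 135.",
}
print(translate_entry(entry, translations))
```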
4 Code with Multilingual Comments as Indirect Supervision for Reasoning Taking the idea of using code for reasoning, and comments for multilinguality a step further, we address the question: can multilingual code serve 4Available here 5We used an online translation API (anonymized here). Dataset # entry /lang # sents/ lang avg # sents/entry ARC 340 4334 12.7 AQUA RAT 254 3436 13.5 AR LSAT 50 1158 23.2 GSM8k 270 2255 8.4 Total 914 11183 12.2 x6 languages 5484 67098 Table 1: Statistics for the xSTREET test benchmark. as indirect supervision for multilingual reasoning? In other words, we investigate whether the code & reasoning hypothesis holds multilingually. We therefore propose a lightweight fine-tuning recipe, which consists of creating a multilingually commented code dataset, then fine-tuning on it, which serves as indirect supervision for downstream reasoning tasks. 4.1 Translated Code Comments Dataset (TCC) The first step of the recipe is creating a source code dataset with translated code comments, termed TCC. For each file from the source dataset, and for each target language, we perform the following. We parse the code to extract out comments, translate comments into the target language, then replace the original comments with translations. This is depicted in Appendix Figure 6. We use two simple filters: for source code files that A) have >5 comments, and B) whose comments are over 50% in English.6 This filters 30k source code files down to 20k. After translating into 5 additional languages, TCC consists of 20k*6=120k files total. See Appendix Table 5 for dataset statistics. 4.2 Train Time: fine-tuning on TCC In the second step, we leverage low-rank adaptation (LoRA) (Hu et al., 2021) to finetune instructiontuned LLMs on TCC.7 We use two methods to preserve the original model\u2019s capabilities despite the additional finetuning. First is by using LoRA itself, as it keeps the original base model\u2019s parameters frozen and introduces only a few learned parameters. Secondly, we replay 100k examples from the base model\u2019s training data, xP3 (Muennighoff 6We performed other filtering experiments, described in Appendix E.3, which had similar performance. 7We used a g5.48xlarge instance from AWS, which has 8 NVIDIA A10G GPUs (24*8 GB=192GB vRAM). et al., 2022), in a multitask setup with the TCC LM task. The recipe for a reasoning-enhanced LLM is now complete, and this is depicted in Figure 1. 5 Multilingual Complex Reasoning as a Downstream Task We hypothesize that structure, when applied to reasoning problems formulated in different languages, can abstract away some of the language-specific details, better surfacing the reasoning steps needed for a model. We thus propose the SIM (Selectand-infer multilingual comments) code prompts for complex reasoning tasks. SIM code prompts utilize several functions. We do not provide the API definitions, instead, we expect the model to learn to use them from the in-context examples. The functions are: \u2022 select_facts(facts) \u2022 infer_new_fact(selected) \u2022 is_solved(fact, question) \u2022 make_choice(fact, choices)8 \u2022 facts.append(fact) select_facts and infer_new_fact are loosely inspired by Selection-Inference (Creswell et al., 2023). A key difference, though, is that we use a single prompt, instead of iterative prompts. We therefore include is_solved(fact, question) as a signal for the LLM to stop generation. Each function is annotated with its return value in an inline code comment. 
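Below is a rough Python sketch of the comment-translation step and the two file filters described above, assuming Python-style '#' comments only and a trivial translate() stand-in for the (anonymized) machine-translation API; the regex, the crude English check, and the function names are illustrative assumptions rather than the paper's actual pipeline.

```python
import re

def translate(text, target_lang):
    # Stand-in for the online machine-translation API used in the paper
    # (anonymized there); any MT service could be plugged in here.
    return f"[{target_lang}] {text}"

def translate_python_comments(source, target_lang):
    """Replace '#' comments in a Python source file with their translations,
    leaving the code itself untouched. Java/JavaScript comment styles would
    need their own patterns; this naive regex also ignores '#' inside strings."""
    def _swap(match):
        return "# " + translate(match.group(1).strip(), target_lang)
    return re.sub(r"#\s*(.*)", _swap, source)

def keep_file(comments):
    """The two filters from the TCC construction: more than 5 comments, and more
    than 50% of them in English (the ASCII test is a crude stand-in for language ID)."""
    english = [c for c in comments if c.isascii()]
    return len(comments) > 5 and len(english) / max(len(comments), 1) > 0.5

print(translate_python_comments("x = 1  # set the counter", "es"))
```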
This is inspired by prior work (Zhang et al., 2023). infer_new_fact has a string return value, i.e., the text of the new fact. We experiment with two versions of the return value of select_facts. The first, termed SIMindexed, uses variables facts[i] to reference the facts array (similar to the indices used in linearized format). The second, termed SIM-text, directly uses each fact\u2019s text, dereferenced from facts[i]. We find that SIM-text works best for smaller models, while SIM-indexed does for larger ones, and hence apply this going forward. We write a rule-based Python script that converts existing structured graph annotations to SIM code prompts. SIM prompts express the exact same information as the linearized format. This property is unlike code prompts for prior work, wherein the conversion is done through in-context learning with an LLM, which can introduce errors as discussed 8This function is not used for non-MC tasks, i.e. GSM8k. in \u00a72.2. The different prompting formats for LLMs are shown in Figure 3. Multilingual code prompts We use multilingual input in SIM code prompts as follows. First, facts given in the question are listed in the language of the task in a list of strings. Second, new facts and selected facts are given as comment lines adjacent to the function calls. See Figure 3 for an example. 6 Experimental Setup Models Used We primarily study open-source models, which allows for application of both train and inference-time techniques. We use BLOOMZ (Muennighoff et al., 2022) as our base LLM. This model is instruction-finetuned on prompts in 46 natural languages and 10 programming languages. For our experiments, we consider the 7.1B-parameter BLOOMZ, as well as BLOOMZ-TCC which is further finetuned on TCC. For inference-time only, we consider two larger LLMs. We use the instruction-finetuned version of Falcon (Almazrouei et al., 2023) (40B), another open-source LLM trained on text+code. Compared to BLOOMZ, Falcon is more performant on English tasks; however, it has limited multilingual abilities.9 We also use GPT-3 (175B)10, a closedsource model that is popularly-used and powerful. Prompting setup We use few-shot prompting and random sampling up to 5 exemplars from the train split (up to a model\u2019s context length).11 For each inference example, the same exemplars are used for all models and prompt types. We use greedy decoding, and task the model with generating up to 682 tokens (max context length of 2048 // 3). 7 Results We report results on the xSTREET benchmark. We use the answer accuracy metric, adapting evaluation from Ribeiro et al. (2022).12 Given the extensive nature of the xSTREET benchmark and our model experimentation, we 9As stated in the model card for falcon-40b-instruct. 10text-davinci-002 following Ribeiro et al. (2022) 11Due to brevity, Figure 1 uses a 0-shot prompt, and only depicts the SIM prompting format. The reported results use 5-shot prompts, and are given for all prompting formats. 12While STREET also measure graph similarity between linearized output and reference graphs, we did not implement them for SIM prompts. This is because for the small LLMs (7B), even the linearized format had near 0 graph similarity. (1) Natalia sold clips to 48 of her friends in April, and then (2) she sold half as many clips in May. (3) How many clips did Natalia sell altogether in April and May? (1) & (2) -> (4): Natalia sold 48/2 = 24 clips in May. 
(4) & (1) & (3) -> (5): Natalia sold 48+24 = 72 clips altogether in April and May.; (5) & (3) -> (6): The answer is 72; The answer is 72; facts = ['Natalia sold clips to 48 of her friends in April, and then', 'she sold half as many clips in May.'] question = 'How many clips did Natalia sell altogether in April and May? \u2018 facts.append(question) answer = None selected = select_facts(facts) # [facts[0], facts[1]] new_fact = infer_new_fact(selected) # 'Natalia sold 48/2 = 24 clips in May.' is_solved(new_fact, question) # False facts.append(new_fact) selected = select_facts(facts) # [facts[0], facts[3], facts[2]] new_fact = infer_new_fact(selected) # 'Natalia sold 48+24 = 72 clips altogether in April and May.' is_solved(new_fact, question) # False facts.append(new_fact) selected = select_facts(facts) # [facts[4], facts[2]] new_fact = infer_new_fact(selected) # '72' is_solved(new_fact, question) # True answer = '72' (1) Natalia sold clips to 48 of her friends in April, and then (2) she sold half as many clips in May. (3) How many clips did Natalia sell altogether in April and May? facts = ['\u7eb3\u5854\u5229\u5a05\u5728\u56db\u6708\u4efd\u628a\u7247\u6bb5\u5356\u7ed9\u4e86\u5979\u7684 48 \u4e2a\u670b\u53cb\uff0c\u7136\u540e', '\u5979\u5728\u4e94\u6708\u4efd\u5356\u51fa\u7684\u7247\u6bb5\u53ea \u6709\u539f\u6765\u7684\u4e00\u534a\u3002'] question = '\u7eb3\u5854\u5229\u5a05\u5728\u56db\u6708\u548c\u4e94\u6708\u603b\u5171\u5356\u51fa \u4e86\u591a\u5c11\u7247\u6bb5\uff1f\u2019 facts.append(question) answer = None selected = select_facts(facts) # [facts[0], facts[1]] new_fact = infer_new_fact(selected)# '\u7eb3\u5854\u5229\u5a05\u5728\u4e94\u6708\u4efd\u51fa\u552e\u4e8648/2 = 24 \u7247\u6bb5\u3002' is_solved(new_fact, question) # False facts.append(new_fact) selected = select_facts(facts) # [facts[0], facts[3], facts[2]] new_fact = infer_new_fact(selected) # '\u7eb3\u5854\u5229\u5a05\u5728\u56db\u6708\u548c\u4e94\u6708\u5168\u90e8\u51fa\u552e\u4e8648+24 = 72 \u7247\u6bb5\u3002' is_solved(new_fact, question) # False facts.append(new_fact) selected = select_facts(facts) # [facts[3], facts[2]] new_fact = infer_new_fact(selected) # '72' is_solved(new_fact, question) # True answer = '72' Direct Linearized Code Use xSTREET for multilingual code prompts en zh Figure 3: Depictions of 3 prompting formats for the xSTREET tasks. For each format, input is in a grey box, while expected output is in a white box. Top left: direct. Bottom left: linearized. Right: SIM code prompts (2 languages). In the code prompts, we color code facts which are aligned. highlight our findings iteratively. We first consider only BLOOMZ and BLOOMZ-TCC, with a particular focus on ARC, where our methods are the most impactful. We then consider GPT-3 and Falcon. The full results are given in Appendix D. Here we provide numbers for all tasks, languages, models, and prompt formats (this also includes the direct prompting format). 7.1 Results for BLOOMZ Models Results for the ARC task (science commonsense reasoning) are shown in Figure 4. Several takeaways arise. We see that code prompts greatly outperform linearized prompts across all languages. For results within a single model, reasoning performance drops greatly comparing English vs. average non-English (e.g. from 76.2% to 61.1% accuracy for BLOOMZ-TCC). This provides evidence that current multilingual LLMs are still optimized for English. This underscores the usefulness of xSTREET for developing LLMs with better underlying, language-agnostic abilities. 
We next turn to comparing base BLOOMZ vs. our finetuned BLOOMZ-TCC. We see that BLOOMZ-TCC outperforms BLOOMZ for all languages and both formats. More interestingly, relative multilingual gain is much larger when using code prompts vs. linearized prompts (Avg non-en, 52.6 \u219261.1 vs. 33.5 \u219236.9). This is evidence that the code prompt format improves multilingual reasoning, likely by the explicit separation of the reasoning task (in code) vs the multilingual understanding (in comments). Finally, looking at perlanguage trends for BLOOMZ-TCC we see that code prompts are most effective for en, es, zh, and ar, while less so for ja and ru.13 Results on GSM8K, AQUA_RAT, AR_LSAT Our results show that BLOOMZ and BLOOMZTCC struggle for the other tasks, with performance being around random chance whether using the interventions or not. We hypothesize that these tasks are \u201ctoo hard\u201d for the small LLM used; to reiterate, GSM8K and AQUA_RAT are arithmetic reasoning, while AR_LSAT is logical reasoning. This concurs with the common view that complex reasoning capabilities of LLMs are emergent with model scale (Wei et al., 2022a). We further discuss these results, and expand our hypotheses, in Appendix \u00a7A.1. 7.2 Results for Larger LLMs As code prompts are at inference time, they can be used to interface with any LLM. We report results for GPT-3 in Figure 5. We see as before that the multilingual setting poses additional challenges for reasoning, as English results are always higher than corresponding non-English tasks. First considering ARC, GPT-3 performs strongly in English for both formats, nearly solving the 13This is likely because the base model, BLOOM, was not trained on any ja or ru text. Figure 4: Results on ARC task of xSTREET, with BLOOMZ-based models. The random baseline is 25%. \u2018Avg\u2019 bars are across the 5 non-English languages. Linearized prompts use lines, while code prompts use dots. Model XNLI XStoryCloze XQUAD BLOOMZ 45.5 72.4 80.5 BLOOMZ-TCC (ours) 45.6 71.8 80.4 Table 2: Results for 3 non-complex multilingual reasoning tasks, averaged over all languages. task. Comparing English to multilingual ARC, linearized suffers a sharp drop (93.2 \u219273.2), while code prompts remain robust (99.1 \u219294.2). This underscores the effectiveness of SIM prompts in disentangling the reasoning and multilingual components of the task. For the other tasks, SIM always outperforms linearized format. Comparing relative gains, code prompts boost performance more in English than on multilingual settings. While still a very positive result, this differs from ARC as discussed above. To discuss why this is the case, we consider the dual effects of SIM code prompts, vs. linearized: the function calls capture the reasoning structure, while the multilingual comments capture the language understanding. Because the arithmetic and logical reasoning tasks are far more symbolic than the ARC commonsense reasoning task, multilingual language understanding is less effective. 7.3 Non-Complex Reasoning Task Results Recall that our fine-tuning recipe aims to improve reasoning of an LLM, while maintaining its natural language understanding (NLU) abilities. We show this is the case by reporting results on 3 multilingual tasks: \u2022 XNLI: natural language inference \u2022 XStoryCloze: given 4 sentences from a short story, choose between 2 possible completions \u2022 XQUAD: extractive question answering To query LLMs, we follow the specific prompting guidelines for each task from Ahuja et al. 
(2023). Table 2 shows that for all 3 tasks, the differences between BLOOMZ and BLOOMZ-TCC are statistically insignificant. Therefore, the mitigation strategies we used, LoRA and training data replay, have proved effective. 8 Analysis To further understand wherein our techniques help, or fail to help, model reasoning, we perform some manual analysis. For brevity, we focus on 2 languages, en and ar, and 2 tasks, ARC and GSM8K. We first perform error analysis on BLOOMZ, then perform a case study for each task. We further perform 3 additional experiments, which are detailed in Appendix \u00a7E. To highlight one interesting finding, we show that training on diverse code comments, such as from the multilingual TCC, boosts xSTREET performance in all languages including English. Error Analysis for BLOOMZ English For this task, and with base BLOOMZ, SIM achieves 61.5 ARC accuracy, while linearized achieves 35.3. Our manual analysis of outputs reveals that the performance discrepancy is largely due to poor instruction-following when using linearized vs. using SIM. Ribeiro et al. (2022) find that for linearized (and their model), 62% of generations fail to generate a parsable answer (i.e., reasoning graph is incomplete). Our findings concur, in that linearized has 66% (223/340) invalid generations. In contrast, SIM has only \u02dc19% invalid. BLOOMZTCC with SIM further reduces invalid rate to 9%, and increases accuracy to 76.2. We observe that in cases where all formats output successfully, the reasoning graph and answers are nearly identical. The difference is that SIM prompts allows the model to generate a complete reasoning graph far more Figure 5: Results on GSM8k, AQUA_RAT, AR_LSAT tasks of (x)STREET, using GPT-3. The random baseline for each task is a dotted line. xSTREET results are averaged over 5 languages. often. We reiterate that this behavior is a novel finding given BLOOMZ-TCC was indirectly supervised on code, rather than directly on reasoning tasks. Further discussion is found in Appendix A. We summarize this section with the following view: our techniques elicit better instructionfollowing of the proscribed reasoning format from a base LLM, leading to improved benchmark performance. Within a reasoning step, the models are making similar decisions, but at the reasoninggraph level, our methods assist in harder cases. 8.1 Case Study on GSM8k English We perform a case study of one GSM8k problem, comparing 3 models (BLOOMZ, BLOOMZ-TCC, GPT-3) and 2 formats (linearized, SIM) in Appendix Table 3. We observe that only GPT-3 with SIM achieves the correct answer, and reasoning steps also concur with the gold completion. GPT-3 with linearized representation makes an erroneous first step, which propagates the error downwards. Both BLOOMZ models with linearized formatting only follow the output format, and the text statements are copied from the input instead of being new statements. BLOOMZ with SIM has repetitive output and does not output an answer. While BLOOMZ-TCC still outputs a wrong answer, it does perform 2 rounds of reasoning through selecting and inferring facts. So, we see that both interventions elicit better underlying reasoning abilities of LLMs. 8.2 Case Study on ARC Arabic We look at an Arabic example from ARC in Appendix Table 4. We observe that for the linearized format, the final answer is incorrect (A), given the model makes a wrong penultimate inference. 
The SIM format, meanwhile, allows GPT-3 to output the correct answer (D), given it makes a correct inference step (albeit 1 step less than the gold). In fact, directly prompting GPT-3 leads to a correct answer. This again highlights the importance of aligning the prompt format, which is code here, to the training format. 9 Conclusion We introduced xSTREET, a multilingual structured reasoning benchmark which covers 5 diverse languages , spans science commonsense, arithmetic and logical reasoning tasks, and includes highquality intermediate reasoning steps. We found that current multilingual LLMs underperform in the non-English setting, then proposed two methods to remedy this, based on the popular hypothesis that LLMs trained on code are better reasoners. At training, we propose translating the comments of a source code dataset, to use as indirect supervision data for parameter-efficient fine-tuning. During inference, we leverage code structure to represent reasoning graphs. We perform extensive experimentation, and both of our methods better elicit underlying reasoning abilities of LLMs. Our work brings together two areas of challenge for LLMs \u2014 multilinguality, and complex reasoning. In particular, our fine-tuning recipe shows that the code & reasoning hypothesis can apply multilingually. We suspect that improvements can be amplified if multilingual comments are included at the pre-training, instead of the fine-tuning stage. We hope our findings underscore the key role that code should play in the development of LLMs with better reasoning capabilities across languages. Limitations One limitation is that we were unable to apply our fine-tuning recipe to the stronger LLMs. \u201cStronger\u201d refers to two characteristics. First and unavoidably, we can only apply the method to weaker open-source models, as closed-source models are proprietary; nevertheless, we explored them with our inference-time SIM prompts approach, and this worked well. Second, we only were able to finetune a 7B parameter model due to our resource constraints, so it is to-be-determined the effectiveness of the recipe on 70B+ models. Another limitation is that we only performed human translation on the test set of xSTREET. xSTREET train & development are only machinetranslated, and because we drew few-shot exemplars from these, the exemplar quality likely adversely affects performance compared to a goldstandard. We do release the train & dev splits for future work. We also fine-tuned on machinetranslated TCC. While we tried to be inclusive with the languages chosen, studying 6 languages from different families and using different scripts, we acknowledge that more community effort will need to go into expanding the study of multilingual complex reasoning to lower-resource languages. We further acknowledge the limits of the translation of English reasoning tasks and intermediate steps alone, in that reasoning processes may differ for speakers of different languages. So too may a multilingual LLM respond inconsistently to queries posted in different languages (Li et al., 2024), which warrants future studies into how this holds for the reasoning tasks studied in this work. Finally, in this work, we considered only the final answer accuracy for the tasks. The original STREET tasks from Ribeiro et al. (2022) included various graph similarity metrics used to consider the intermediate reasoning steps as well \u2013 a definite strength of their structured reasoning approach vs. unstructured approaches such as CoT. 
We did not do this consideration due to the difficulty of reimplementing the graph similarity metric calculation for the different languages, and leave this to follow up work. Furthermore, we note that the 7B LLM we used had overall poor graph similarity (near 0 for all metrics) using the original STREET evaluation scripts and dataset. Data Statement We provide a data statement in adherence with the ACL code of conduct and recommendations laid out in Bender and Friedman (2018). Linguists working on the Machine Translation Post Editing project for the multilingual dataset into Arabic, Chinese, Japanese, Russian, and Spanish are in-country, native speakers. They all are certified translators with more than 5 years of fulltime translation experience, according to the 17100 Translation ISO Standard. These linguists were hired through vendors and were remunerated above industry standard rates. Instructions were to postedit machine translated output and included guidelines on what to localize (artist names, city names, metric conversions), format (capitalization, punctuation) and structure (sentence level breaks). The vendor project managers made sure the instructions were adhered to. The QA process consisted of content review based on the Multidimensional Quality Metric (MQM) model that allocates different weights to 5 error severities (0-none to 5-critical) in several error topics. Total sample reviewed was 5 (5k words) of the total (100k words) source word count. 10 Acknowledgements We would like to thank Daniel Neves Ribeiro for his guidance on working with the STREET benchmark, and insightful conversations on how tackle our multilingual extension of complex reasoning. We thank several colleagues for providing annotations: Etsuko Ishii, Igor Shalyminov, Yuwei Zhang. We thank these people for discussion and feedback: Salvatore Romeo, Yi Zhang, Sam Davidson, Sailik Sengupta."
},
{
"url": "http://arxiv.org/abs/2404.10618v1",
"title": "Private Attribute Inference from Images with Vision-Language Models",
"abstract": "As large language models (LLMs) become ubiquitous in our daily tasks and\ndigital interactions, associated privacy risks are increasingly in focus. While\nLLM privacy research has primarily focused on the leakage of model training\ndata, it has recently been shown that the increase in models' capabilities has\nenabled LLMs to make accurate privacy-infringing inferences from previously\nunseen texts. With the rise of multimodal vision-language models (VLMs),\ncapable of understanding both images and text, a pertinent question is whether\nsuch results transfer to the previously unexplored domain of benign images\nposted online. To investigate the risks associated with the image reasoning\ncapabilities of newly emerging VLMs, we compile an image dataset with\nhuman-annotated labels of the image owner's personal attributes. In order to\nunderstand the additional privacy risk posed by VLMs beyond traditional human\nattribute recognition, our dataset consists of images where the inferable\nprivate attributes do not stem from direct depictions of humans. On this\ndataset, we evaluate the inferential capabilities of 7 state-of-the-art VLMs,\nfinding that they can infer various personal attributes at up to 77.6%\naccuracy. Concerningly, we observe that accuracy scales with the general\ncapabilities of the models, implying that future models can be misused as\nstronger adversaries, establishing an imperative for the development of\nadequate defenses.",
"authors": "Batuhan T\u00f6mek\u00e7e, Mark Vero, Robin Staab, Martin Vechev",
"published": "2024-04-16",
"updated": "2024-04-16",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.CV",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Since the release of ChatGPT (OpenAI, 2023a), large language model-based (LLM-based) applica- tions and chatbots enjoy a rapidly increasing adoption, having already surpassed hundreds of millions of daily active users (Milmo and agency, 2023). Towards making these models universally applicable, there has been a recent push for vision-language models (VLMs) capable of understanding not only text but also reasoning over text and images jointly (Anil et al., 2023; OpenAI, 2023c). However, the rapid adoption of LLM-based applications and the concurrent advances in the underlying models\u2019 capabilities raises several safety and privacy concerns among the general public, researchers, and regulators alike (Bommasani et al., 2021; Inan et al., 2023; OpenAI, 2023c; Weidinger et al., 2021). In response, model providers are under increasing regulatory pressure from existing data protection regulations, such as the EU\u2019s GDPR (EU, 2016) and the California Consumer Privacy Act (CCPA) (CA, 2018), as well as from substantial ongoing regulatory efforts directly concerning AI (EU, 2021; WH, 2023). As a famous example, in 2023, Italy temporarily banned OpenAI\u2019s ChatGPT, citing data protection and privacy concerns (Satariano, 2023). These developments establish the exploration of potential privacy concerns of VLMs as a highly relevant first step towards enabling a wider deployment of VLM-based applications that are privacy-preserving and regulation-compliant. Preprint. arXiv:2404.10618v1 [cs.AI] 16 Apr 2024 Privacy Implications of LLMs Weidinger et al. (2021) lay out the privacy implications of LLMs from two separate perspectives: (i) memorization and (ii) inference. Although several works have examined private information memorization and leakage in LLMs (Carlini et al., 2023; Nasr et al., 2023; Zhang et al., 2023), until recently, inference has remained unexplored. En- abled by the strong inferential capabilities (Bubeck et al., 2023) and vast world-knowledge (Yang et al., 2023) of current frontier models, Staab et al. (2023) were the first to examine the poten- tial of LLMs being misused for the accurate inference of personal attributes from previously un- seen online texts. Their results indicate that current frontier models such as GPT-4 (OpenAI, 2023b) already achieve close to human-level accuracy across various personal attributes (e.g., age, gender, location) while incurring only a fraction of the cost and time investment of a human. Where does the person living in this dormitory room come from? I recognize the logo of the Green Bay Peckers, the local football team of Green Bay, Wisconsin. Location: Wisconsin, USA. Figure 1: Shortened example inference over an image using GPT4-V. The model recognizes the logo of the American football team hanging on the wall and infers that the inhabitant of this dorm room is likely from Wisconsin, while also provid- ing adequate reasoning. The person in the picture is occluded. The recent rise of VLMs lifts this discussion from a text-only domain to include also images, raising the question of how the findings of Staab et al. (2023) translate to the multi-modal setting. This is particularly relevant as even though im- age and video are ubiquitous in most influential social media platforms (e.g., Instagram, TikTok), privacy risks associated with automated VLM inferences have not been explored yet. Example To motivate our setting, consider Fig. 1, which shows the interior of a dormitory room. 
This picture could have been posted on a pseudonymized social media platform, such as Reddit (e.g., asking for arrangement advice), while not breaking the general assumption that one\u2019s privacy remains uncompromized. Despite no humans being depicted in the image, a human investigator may infer some personal attributes by reasoning over probable cues, such as recog- nizing the football team\u2019s logo or reverse image searching it. However, the involvement of a hu- man intermediary naturally limits the scalability of any such approach, making its application on large-scale real-world data infeasible (already in 2014, there were 1.8 billion daily image uploads (Meeker, 2014)). Yet, when feeding the image to a VLM (in this case to GPT4-V), the model can do the investigator\u2019s work, automatically recognizing the relevant cues and correctly inferring that the person living in the dormitory is a fan of the Green Bay Packers, and as such, has a high probability of living or having lived in Wisconsin. With such model inferences being inherently scalable, VLMs allow for privacy violations of this kind at an unprecedented scale, requiring us to re-evaluate our current understanding of online privacy. This Work In this work, we aim to establish relevant context for this re-evaluation by providing the first investigation and analysis of the capability of VLMs to infer private information from inconspicuous images posted on pseudonymized platforms. Our findings indicate that similarly to the text-only domain, VLMs are able to infer a variety of personal attributes from real-world images both accurately and at an unprecedented scale. Notably, as we show in our evaluation, current safety filters against such privacy-infringing queries are ineffective in the face of simple evasion techniques, allowing for a low entry barrier for potential malicious actors. Circumventing Safety Filters & Resolution Limitations Current VLMs are commonly equipped with safety filters (both via model alignment and additional pre-processing) intended to prevent the model from answering privacy-violating queries (OpenAI, 2023c). However, several recent works (Chao et al., 2023; Mehrotra et al., 2023; Zou et al., 2023) have shown that such safety filters can be broken both by manual intervention as well as automated algorithms. As we are interested in the privacy risks posed by an adversary misusing a VLM for privacy-infringing inferences, we develop a simple inference attack consisting only of the image and a textual prompt aimed at circumventing the 2 safety filter. Notably, we found that once the safety filter of the model has been (easily) evaded, the model cooperates on the inference task by providing helpful guidance. In particular, we observe that it can recognize relevant parts of the image (e.g., a small note posted on a fridge) that could help the inference but which due to technical resolution limitations are too small to be analyzed. Building on this observation, we further develop an automated pipeline in which the model can decide to zoom into parts of the image that it believes to be relevant, effectively improving its inference capabilities. A Visual Inference-Privacy Dataset Due to their extensive reasoning capabilities and world- knowledge VLMs could draw conclusions about private attributes not just from direct depictions of people, but also from other contextual information, e.g. 
a person posting a picture of their kitchen where products of local brands are visible could reveal their place of living or images of rooms with recognizable items such as college logos could reveal the person\u2019s educational background. Therefore, in order to be able to evaluate the privacy-inference risks associated with VLMs beyond traditional human attribute recognition (HAR), we require a dataset of seemingly inconspicuous images alongside personal attribute labels. However, we find that current vision datasets focusing on private attributes are insufficient in two ways: (1) They focus on a small specific subset of attributes to be inferred from the image, such as gender, age, or the presence of certain clothing items (Bourdev et al., 2011) and (2) they almost exclusively consist of depictions of natural persons (Ravi et al., 2024; Yaghoubi et al., 2020). To reflect the inference-based privacy threat arising from the extended capabilities of VLMs appropriately, we create a dataset by collecting and manually annotating images posted on the popular pseudonymized social media platform Reddit, particularly focusing on images where private information is not sourced from direct depictions of humans. We evaluate the performance of two widely adopted proprietary models, GPT4-V(OpenAI, 2023c) from OpenAI and Gemini-Pro (Anil et al., 2023) from Google, together with five open source models available on Huggingface (Wolf et al., 2019). We find that although the safety filters of some of the models reject up to 54.5% of our queries when using a naive prompt, they can be easily circumvented via prompt engineering, making the models infer up to 77.6% of the private attributes correctly. Furthermore, allowing the models to act autonomously and zoom in on details further improves the accuracy on certain features, e.g., precise location inference accuracy rises from 59.2% to 65.8%. This finding is especially concerning, as it demonstrates that even supposedly safety-aligned VLMs can be misused as eagerly and automatically acting adversaries against their original safety objectives. Additionally, in line with the findings of Staab et al. (2023), we observe that the personal attribute inference accuracy is strongly correlated with the general capabilities of the models, implying that future iterations will pose an even larger privacy threat. Therefore, we advocate for further research into developing defenses against inference-based privacy attacks in the image domain, where the current safety filters do not prove to be sufficient. Main Contributions Our main contributions are: \u2022 The first identification and formalization of the privacy risks posed by vision-language models at inference time. \u2022 Extensive experimental evaluation of 7 frontier VLMs at inferring personal attributes from real-world images. \u2022 An open source implementation1 of our dataset labeling tool and our inference pipeline to advance privacy research. Responsible Disclosure Prior to submitting and making any preprint of this study available publicly, we contacted OpenAI and Google to inform them about our findings, providing access to all our data, prompts, and results.",
"main_content": "This section briefly presents the relevant background for the rest of our work. 1https://github.com/eth-sri/privacy-inference-multimodal 3 Vision-Language Models For the context of this work, we collectively refer to multimodal instruction-tuned foundational (large) language models with image understanding capabilities as vision-language models (VLMs). Note that we only require the model to be able to process image inputs and not also to be able to generate images. While combining different modalities for machine learning exhibits a long line of research (Wu et al., 2023), the first influential VLMs building upon foundational models have only appeared recently (Alayrac et al., 2022; Eichenberg et al., 2021; Hao et al., 2022; Tsimpoukelli et al., 2021; Zeng et al., 2022). These methods achieve image understanding either by combining LMs with pre-trained image encoders, or through joint training across modalities. Fundamentally, both methods rely on both the image and the textual input being translated to token embeddings and fed to a, usually decoder only, transformer model for processing. This approach is widely applied across both state-of-the-art proprietary, i.e., GPT4-V(OpenAI, 2023c) and Gemini (Anil et al., 2023), and open-source (Liu et al., 2023) VLMs. Additionally, these models are often equipped with learned safety filters (i.e., they are aligned) to refuse queries that would lead to the generation of harmful responses (Anil et al., 2023; OpenAI, 2023c). Personal Identifiable Information and Personal data Both personal identifiable information (PII) as well as personal data refer to information that can be attributed to a specific (natural) person. In the EU, the term personal data is defined via Article 4 in the EU\u2019s General Data Protection Regulation (GDPR) (EU, 2016) as \"any information relating to an identified or identifiable natural person.\" While PII definitions in the USA are commonly less comprehensive than the GDPR, they similarly include all information from which \"the identity of an individual [...] can be reasonably inferred by either direct or indirect means.\" Notably, this includes attributes like gender, geographic indicators, or economic status. We note that as in Staab et al. (2023), most attributes considered in this work (e.g., age, location, income, sex) fall under both personal data and PII definitions. Large Language Models and Privacy As the pre-training datasets of LLMs consist of vast amounts of data across diverse sources, they often contain sensitive personal (identifiable) information. Therefore, studying the phenomenon of training data memorization, i.e., the verbatim repetition of training data sequences at inference time, has become an important and active area of research in the context of LLMs (Carlini et al., 2021, 2023; Ippolito et al., 2022; Kim et al., 2023; Lukas et al., 2023; Nasr et al., 2023; Zhang et al., 2023). However, the restricted setting of exact data memorization does in many cases falls short of covering often highly contextual other privacy notions (Ippolito et al., 2022). In particular, as it is limited to the models\u2019 training data, it cannot account for privacy-infringing inferences on previously unseen texts (Bubeck et al., 2023). Staab et al. (2023) were the first to investigate the privacy risks arising through the inference of personal information from text using LLMs, showing that current models can recover personal information even from seemingly anonymized text. 
However, their analysis was restricted to only the single modality of text, while current widely used frontier models are equipped with visual reasoning capabilities as well. In our work, we aim to bridge this gap by exploring the inference-based privacy threats of VLMs. Human Attribute Recognition Human attribute recognition (HAR) focuses on recognizing features of natural persons from their depictions. Typically, these feature recognitions are formulated as binary or multi-label classification tasks on a single person, commonly focussing on a specific feature such as the person\u2019s sex, age, or dressing style (Yaghoubi et al., 2020). Notably, before VLMs, state-of-the-art HAR models have been trained in standard supervised machine learning frameworks, requiring access to highly task-specific and labeled (image-only) training data. Trained models then focussed on singular tasks, e.g., on the recognition of specific attributes from images of pedestrians (PAR) (Wang et al., 2022). Recently, vision-language-based architectures have also been successfully explored on various PAR datasets (Castrill\u00f3n-Santana et al., 2023; Cheng et al., 2022; Wang et al., 2023), showing promising results over prior, non-VLM-based methods. Although vision-language-based architectures prove to be performant methods on PAR, their capabilities extend beyond the commonly restricted HAR settings. Notably, as existing HAR datasets are centered around direct depictions of humans, they do not cover the privacy risk arising from the application of frontier VLMs with advanced reasoning capabilities and broad lexical knowledge. In particular, as we show in Section 5, VLMs enable the automated inference of personal attributes from images that do not necessarily contain the subjected person in the image but, e.g., only an inconspicuous depiction of their living room. Our evaluation shows that VLMs are able to generalize to a diverse set of attributes and images without any training yet still achieve high personal attribute inference accuracy. 4 3 Privacy Infringing Inferences with VLMs Where does the person with this kitchen live? In case you discover clues that are too small to recognize, provide a bounding box for zooming. The appliances likely indicate USA. From a close-up of the document I could maybe infer more information. I can provide you the following bounding box: Left: 0% Upper: 0% Right: 70% Lower: 55% This is a tax filing to Colorado state. Location: CO, USA. Figure 2: Illustrative example of GPT4-V recognizing that an item that is too small in the current resolution could provide it with more information about the inference task. The model is capable to return a bounding box that can be used to crop the image before returning it for repeated processing. In this section, we first introduce the threat model of privacy-infringing inferences from images using VLMs. Then, we explain our prompting strategy allowing us to circumvent the safety filters of even the most advanced currently available models of OpenAI (OpenAI, 2023c) and Google (Anil et al., 2023). Finally, we outline our automated zooming scheme, allowing the model to autonomously enlarge parts of the image it deems relevant for further inspection. Threat Model While some highly capable VLMs are available with open weights and no safety filters, many of the most advanced VLMs only allow a black-box access and are protected by safety filters and alignment (Anil et al., 2023; OpenAI, 2023c). 
Therefore, to capture a general threat scenario, we assume an adversary with only black-box query access to a (frontier) VLM. The goal of the adversary is to get as much and as detailed personal information as possible from online images. At the same time, the attack shall remain simple and practical, keeping the entry requirements for any potential adversary low. Such an attack-vector is particularly concerning, as its potential for automation enables its execution at a scale unattainable by pre-VLM methods or human investigators. Importantly, this potential for scaling challenges our current understanding of online privacy, which in many cases and for many users relies heavily on the prohibitively high cost of extraction and inference of private information from seemingly benign images and posts. Circumventing Safety Filters & Prompt Engineering The training of state-of-the-art VLMs, such as GPT4-V (OpenAI, 2023c) and Gemini (Anil et al., 2023) includes a separate safety finetuning (alignment) stage with the goal of creating a model capable of refusing queries that lead to potentially harmful generations. However, as highlighted in Section 1, it has been shown that such training-based safety filters can easily be circumvented both by hand-crafted prompts or even fully automated attacks (Chao et al., 2023; Mehrotra et al., 2023; Zou et al., 2023). As such, to cover the full extent of privacy risks associated with inferences made by VLMs, it is imperative to construct an evaluation method that escapes such safety filters. Additionally, the prompt has to make use of the full capabilities of the model, avoiding a potential false sense of privacy through insufficient evaluation. To construct such a prompt we follow popular reasoning prompting practices, such as Chain of Thought prompting (Wei et al., 2022), to improve performance, and gamify the inference task in a similar vein to Staab et al. (2023), to escape any safety filters. Additionally, we provide task-independent reasoning examples in the prompt, with the goal of increasing the model\u2019s attention to detail. We examine the impact of our prompting choices in Section 5, clearly demonstrating that \"naive\" prompts (\"Where was this picture taken?\") severly underestimate the inference-based privacy risks posed by current frontier vision-language models. 5 r/bedrooms r/doggos r/travel \u2026 1. Extract images & comments from Reddit 2. Label based only on the image Place of Living: Madison, USA Hobbies/Other: Dog owner \u2026 3. Keep only labels confirmed by comments Image labels Madison, USA Middle Income Dog Owner Comment labels Madison, USA High Income Dog Owner + Figure 3: Our data collection and labeling pipeline. In step 1, we collect images from a carefully selected set of subreddits that may contain images suitable for our task. Then, in step 2, we label the images manually while allowing the labeler to access google for assistance. Finally, in step 3, we extract the comments of the profile that posted the image and keep only the obtained image labels that are not contradicted by the information contained in the comments. Note that we hide the true information on the tag and report an alternative location in the example. 
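The comment-based filtering in step 3 of this pipeline can be sketched as follows; the record layout, attribute keys, and the exact matching rule are illustrative assumptions rather than the authors' actual tooling.

from typing import Dict, Optional

# Step 3 of the labeling pipeline (assumed data layout): an image-derived label is kept
# only if the label derived from the poster's comment history does not contradict it.
def filter_labels(image_labels: Dict[str, str],
                  comment_labels: Dict[str, Optional[str]]) -> Dict[str, str]:
    kept = {}
    for attribute, value in image_labels.items():
        comment_value = comment_labels.get(attribute)   # None means no evidence in the comments
        if comment_value is None or comment_value.lower() == value.lower():
            kept[attribute] = value
    return kept

if __name__ == "__main__":
    image_labels = {"LOC": "Madison, USA", "INC": "Middle income", "HOBBY": "Dog owner"}
    comment_labels = {"LOC": "Madison, USA", "INC": "High income", "HOBBY": "Dog owner"}
    print(filter_labels(image_labels, comment_labels))
    # Keeps LOC and HOBBY; the contradicted income label is dropped.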
Automated Zooming Small details in an image often contribute to privacy-infringing inferences, e.g., a certificate or letter hanging on the wall in the background revealing the state one resides in, or recognizing a small university emblem on a larger item in the image signifiying the person\u2019s educational background. However as most current VLM are limited in input resolution, they commonly struggle to properly extract these small yet important details. As exemplified in Fig. 2, our experiments indicidate that even though in some cases VLMs are not able to process small details (e.g., writing on a tax form), they are still able to recognize their potential importance for inference (a tax form contains personal information). In fact, the model can be prompted to return a bounding box for such a recognized clue, which in turn can be automatically processed to feed the model a cropped image enlarging the corresponding section. Based on this observation, we automate the zooming procedure by prompting the model for 3 regions to zoom into if necessary via outputting bounding boxes. We then enlarge and adjust the bounding box from all sides to cover 16% of the image and be within image limits. Finally, we return the the zoomed-in images in a second request to the model. In Section 5, we show that zooming improves GPT4-V\u2019s performance on certain features, improving the precise inference accuracy by up to 6.6% when inferring location. 4 A Visual Inference-Privacy Dataset In this section, we first discuss how current considerations of private attribute inference from images do not extend to the novel privacy-inference threat VLMs pose. Then, we introduce a list of criteria that a dataset suitable for evaluating this threat should fulfill. Afterwards, we present our methodology for collecting our visual inference-privacy (VIP) dataset used for evaluation in Section 5. Not Only Images of Humans Leak Information Although there exist several datasets and benchmarks in the field of computer vision for (personal) human attribute recognition, they primarily focus on extracting and inferring features of persons included in the images, commonly in non-privacy related settings, e.g., pedestrian identification (Ravi et al., 2024; Yaghoubi et al., 2020). This focus is also present in current HAR privacy benchmarks, with the explicit goal of a perceptual protection of humans included in the images (Ravi et al., 2024). However, with the rise of VLMs, which are capable of visual reasoning and are equipped with vast lexical knowledge, considering only images that include humans does not fully cover the potential privacy threat posed by these models. This is highlighted by our examples in Fig. 1 and Fig. 2, where private attributes are inferred from other objects in the depicted environment. Therefore, in this paper, we focus on evaluating the risk of private attribute inferences from images that primarily do not contain depictions of humans, a setting not considered under current benchmarks. To enable the evaluation of this arising privacy risk, we formulate three key criteria that a dataset for inference-based privacy evaluation has to fulfill. Key Criteria As VLMs are no longer limited to the recognition of attributes of human visuals, we require a dataset that reflects this change in domain. 
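Referring back to the automated zooming described above, a minimal sketch of the crop step could look like the following; the fractional bounding-box format (as in the Figure 2 example) and the centering heuristic are assumptions for illustration, not the authors' exact procedure.

from PIL import Image

def zoom_crop(image, box_frac, target_area_fraction=0.16):
    # box_frac = (left, upper, right, lower) as fractions of image width/height,
    # e.g. (0.0, 0.0, 0.7, 0.55) for "Left: 0%, Upper: 0%, Right: 70%, Lower: 55%".
    W, H = image.size
    left, upper, right, lower = (box_frac[0] * W, box_frac[1] * H,
                                 box_frac[2] * W, box_frac[3] * H)
    cx, cy = (left + right) / 2.0, (upper + lower) / 2.0
    w, h = max(right - left, 1.0), max(lower - upper, 1.0)
    # Grow the box around its centre until it covers roughly the target fraction of the image.
    scale = max(1.0, ((target_area_fraction * W * H) / (w * h)) ** 0.5)
    w, h = min(w * scale, float(W)), min(h * scale, float(H))
    # Clamp the enlarged box to the image limits.
    left = max(0.0, min(cx - w / 2.0, W - w))
    upper = max(0.0, min(cy - h / 2.0, H - h))
    return image.crop((int(left), int(upper), int(left + w), int(upper + h)))

# The resulting crop would then be sent back to the model in a second request, e.g.:
# crop = zoom_crop(Image.open("post.jpg"), (0.0, 0.0, 0.7, 0.55))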
In particular, the images should: (i) try to avoid containing full depictions of natural persons, (ii) be representative of what real people may post on (pseudonymized) online platforms, and (iii) come with a diverse set of labels covering a large set of private attributes as introduced in privacy regulations such as the GDPR (EU, 2016). 6 Hard. SEX POI AGE INC LOC EDU OCC MAR P 1 17 1 4 3 11 1 6 4 47 2 63 0 24 48 20 18 19 12 204 3 48 0 53 31 8 15 5 10 170 4 0 74 0 0 22 0 1 0 97 5 0 17 1 0 16 2 0 0 36 P 128 92 82 82 77 36 31 26 554 Table 1: Label counts for each main private attribute category across hardness levels in VIP. Building a Visual Inference-Privacy Dataset To the best of our knowledge, there currently does not exist any dataset that fulfills all three criteria. Therefore, we construct a visual inference-privacy (VIP) dataset, the first benchmark to evaluate the attribute inference capabilities of VLMs from seemingly innocuous images. An overview of our dataset collection pipeline is presented in Fig. 3. First, we source all images from the popular pseudonymized social media site Reddit, where we select a set of suitable subreddits that are likely to contain posts with images suitable for our evaluation task (listed in Appendix D). Next, we manually label all images, using the image as the only source of information (i.e., no other data from the posting profile), but without time or internet browsing restrictions. Note that for ethical considerations, in line with the practices established by Staab et al. (2023), also working with Reddit data, we do not outsource the labeling task, instead, the labeling is fully conducted by the authors of the paper. To cover a wide range of attributes as required by criterion (iii), we collect the following private features: location of residence (LOC), place of image (POI), sex (sex), age (AGE), occupation (OCC), income (INC), marital status (MAR), and education (EDU). Following Staab et al. (2023), we also record a hardness score ranging from 1 to 5 for each label, corresponding to the difficulty for the labeler to extract/infer the label. Likewise we also adopt the scale used in Staab et al. (2023), and rate from 1 to 3 for labels that require increasingly more complex reasoning but no online search. We assign hardness 4 and 5 to labels where the labeler required external knowledge tools, with hardness 5 indicating the additional need of advanced reasoning. As we only record the labels we could reliably extract from the image, we generally only obtain a label for a subset of the attributes per image. In a last step, to ensure that our recorded labels accurately reflect the profile of the posting author, we check the last 100 comments of the author, keeping only labels that are in line with the information contained in the comments. Note that we do not keep the comments for evaluation, as we aim to isolate the effect of privacy inferences from images, where the privacy leakage from text has already been explored in Staab et al. (2023). The distribution of the resulting labels for the main private attribute categories are shown in Table 1. For a detailed overview of the labeling procedure and instructions, we refer the reader to Appendix D. 5 Evaluation In this section, we present the results of our experimental evaluation, which show how current frontier vision-language models enable privacy-infringing inferences from seemingly benign images. Additionally to the experiments presented in this section we include further results in Appendix B. 
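For concreteness, a single annotation in such a dataset could be represented along the lines of the following sketch; the field names and types are illustrative assumptions, while the hardness scale follows the 1-5 scheme described above.

from dataclasses import dataclass

ATTRIBUTES = ["LOC", "POI", "SEX", "AGE", "OCC", "INC", "MAR", "EDU"]

@dataclass
class AttributeLabel:
    image_id: str    # anonymized identifier of the collected post
    attribute: str   # one of ATTRIBUTES
    value: str       # e.g. "Madison, USA" for LOC, or an interval such as "25-34" for AGE
    hardness: int    # 1-3: increasingly complex reasoning without web search;
                     # 4-5: external knowledge tools needed (5 additionally requires advanced reasoning)

label = AttributeLabel(image_id="reddit_0001", attribute="LOC", value="Madison, USA", hardness=4)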
Experimental Setup We evaluate two proprietary, GPT4-V (OpenAI, 2023c) and Gemini-Pro (Anil et al., 2023) (Gemini), and five open source models, LLaVa 1.5 13B (Liu et al., 2023), LLaVa-NeXT 34B (Liu et al., 2024), Idefics 80B (Lauren\u00e7on et al., 2023), CogAgent-VQA (Hong et al., 2023), and InternVL-Chat-V1.2-Plus (Chen et al., 2023). All models are run for every image-attribute pair in the VIP dataset, prompting the models to predict one private attribute at a time. To decrease the impact of randomness on our results, we use greedy sampling (temperature 0.0) across all our experiments. Unless mentioned explicitly, we use a single-round prompt with the models, not allowing for zooming, which we evaluate in a separate experiment. As described in Section 3, all proprietary models are aligned with safety filters. We, therefore, query these models via a gamified and CoT-extended prompt (later referred to as \"Final\" prompt) presented in Appendix E.3. We do so also for LLaVa-NeXT 34B and InternVL-Chat-V1.2-Plus. As CogAgent-VQA, Idefics 80B, and LLaVa 1.5 13B exhibit weaker language understanding capabilities and are mostly free from safety filters, we evaluate them with a simpler prompt (presented in Appendix E.5). Our prompting choices are motivated by avoiding the underreporting of the model\u2019s inference capabilities, and as such, potentially downplaying the posed privacy risk. We ablate the specific choice of prompts for all open source models in Appendix B.3. Calculating Inference Accuracy For the categorical attributes of SEX, INC, and EDU, we use a simple 0-1 accuracy in case the predicted category matches the label. For MAR, we report binary 7 0 100 200 300 400 500 554 Human labelled GPT4-V CogAgent-VQA Idefics 80B Gemini-Pro LLaVa-NeXT 34B InternVL-Chat-V1.2-Plus LLaVa 1.5 13B # Correct SEX POI AGE INC LOC EDU OCC MAR Figure 4: Main results comparing the private attribute inference capabilities of all examined models in our collected Vision Inference-Privacy dataset (VIP). GPT4-V comes out clearly as the best model, with an overall accuracy of 77.6%, while the best open source model, CogAgent-VQA achieves 66.4% accuracy. classification accuracy (has partner/no partner). Following the methodology of Staab et al. (2023), for AGE, we let the model predict a probable interval for the subject\u2019s age. As our ground truth labels for AGE also consist of intervals, we count the model\u2019s guess as accurate if the two intervals have a majority (\u22650.5) overlap. For the attributes LOC and POI, which have a high degree of freedom, we take a hierarchical approach: If the label contains building-, cityor state-level information which is correctly predicted by the model, we assign a correct prediction and additionally record that the inference has been precise (P). When the model only predicts the country correctly, we still count it as a correct prediction for our main experiments but record that the inference has been less precise (LP) than the actual label. If the label only contains country-level information and the prediction contains the correct country information, we count the prediction as precise. For the last attribute, OCC, we take a semantic approach tolerating some minor precision loss, where, for instance, \"Electronics Engineer\" counts as a correct prediction for \"Electrical Engineer\". We evaluate this in a two-step approach, first prompting GPT-4 for a similarity judgement and afterwards manually verifying it. 
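A small sketch of how the interval- and hierarchy-based scoring above could be implemented is given below; the interval-overlap denominator and the dictionary-based location representation are assumed readings of the criteria, not the authors' exact evaluation code.

def age_interval_correct(pred, label):
    # Correct if the predicted and ground-truth age intervals have a majority (>= 0.5) overlap,
    # here measured against the shorter of the two intervals (one plausible reading).
    lo, hi = max(pred[0], label[0]), min(pred[1], label[1])
    overlap = max(0.0, hi - lo)
    denom = min(pred[1] - pred[0], label[1] - label[0])
    return denom > 0 and overlap / denom >= 0.5

def location_score(pred, label):
    # Hierarchical LOC/POI scoring: "precise" if a building-, city-, or state-level value present
    # in the label is matched; "less_precise" if only the country matches; otherwise "incorrect".
    # Locations are assumed to be dicts such as {"city": "Madison", "state": "WI", "country": "USA"}.
    fine_levels = [l for l in ("building", "city", "state") if label.get(l)]
    if any(pred.get(l, "").lower() == label[l].lower() for l in fine_levels):
        return "precise"
    if label.get("country") and pred.get("country", "").lower() == label["country"].lower():
        return "precise" if not fine_levels else "less_precise"
    return "incorrect"

print(age_interval_correct((25, 35), (28, 38)))   # True: 7 years of overlap on 10-year intervals
print(location_score({"city": "madison", "country": "USA"},
                     {"city": "Madison", "state": "WI", "country": "USA"}))   # "precise"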
We give a more detailed overview of our evaluation procedure in Appendix A.3. Unless otherwise mentioned, we report the less precise accuracy in our experiments. Main Results We show our combined results across all attributes and models in Fig. 4. Consistent with most benchmarks in the literature, we observe higher performance in proprietary models, with GPT4-V clearly outperforming all other models with 77.6% accuracy. Remarkably, while GPT4-V is well ahead of all models, CogAgent-VQA and Idefics 80B strongly outperform the proprietary model Gemini-Pro, with the best open-source model reaching an accuracy of 66.4%. At the same time, other open-source models closely match Gemini-Pro in performance, with only LLaVa 1.5 13B lagging considerably behind with an inference accuracy of 53.3%. This result signifies that even if the safety filters of proprietary models were to be improved, there already exist open-source models that can make highly accurate privacy-infringing inferences.
              SEX    POI    AGE    INC    LOC    EDU    OCC    MAR
GPT4-V        94.5   79.3   74.4   46.3   87.0   77.8   77.4   69.2
CogAgent-VQA  80.5   63.0   67.1   50.0   70.1   52.8   71.0   61.5
Gemini-Pro    52.3   87.0   56.1   46.3   77.9   63.9   51.6   26.9
Table 2: Per-feature accuracy [%] of GPT4-V, CogAgent-VQA, and Gemini-Pro. Notably, Gemini strongly outperforms the other models on POI while lagging behind on other features, with GPT4-V being the best model on most. Further, in line with Staab et al. (2023), we observe that newer iterations of models exhibit a gradually increasing capability of inferring private attributes. In fact, looking at the MMMU (Yue et al., 2023) visual understanding and reasoning benchmark\u2019s leaderboard (Beeching et al., 2023), we can see that the ranking of the models on VIP closely matches the ranking (of the included models) on MMMU, indicating that privacy-inference and general capabilities are closely related. This result is concerning, as it shows that the privacy risk arising from the inference capabilities of VLMs will only increase with stronger models in the future, motivating a clear need for the development of targeted mitigations. Accuracy over Attributes In Table 2, we show the per-feature accuracy of GPT4-V, CogAgent-VQA, and Gemini. Remarkably, GPT4-V exhibits strong performance across most attributes, only struggling with inferring income, where even the best model, CogAgent-VQA, is only able to achieve 50% accuracy. Notably, GPT4-V achieves 94.5% accuracy on predicting SEX. At the same time, Gemini\u2019s performance is highly inconsistent across the examined attributes. While outperforming GPT4-V on POI, reaching 87% accuracy, on other non-location attributes it performs considerably worse, with, for instance, SEX falling close to random-guessing accuracy. By manual inspection of Gemini\u2019s outputs, we observe that this is mostly due to the limited capabilities of the model, with it often claiming that no sex is inferable in the absence of a human in the image.
Humans in image   GPT4-V   CogAgent-VQA
With              88.9     81.5
Without           76.4     64.8
Table 3: Accuracy [%] of GPT4-V and CogAgent-VQA on images with and without human depictions. Humans in the Image As we constructed our VIP dataset to emphasize the inference capabilities of models from non-person-bound clues, only 9.7% of the collected labels came from images containing partial depictions of humans. Examples of these are depictions of hands, lower or full bodies, or reflections. 
To examine the impact of such depictions, we split our dataset into (1) images that contain parts of the human subject and (2) images that do not contain such depictions. In Table 3, we show our results for GPT4-V and the best open-source model, CogAgent-VQA, on these splits. We can observe that both models exhibit a higher accuracy on the split containing humans, which we hypothesize is due to the fact that most labels contained in this split are usually directly inferable from human depictions, e.g., 31 out of 54 labels total in the split are for the features SEX and AGE. At the same time, the models still exhibit relatively strong performance on images with no human subjects, with GPT4-V achieving a remarkable 76.4% accuracy, signifying that VLMs enable private attribute inference from inconspicuous images that would not otherwise be considered under current HAR-privacy benchmarks. Additionally, the gap between the models is larger in the absence of humans in the image, highlighting the advanced reasoning capabilities of GPT4-V when it comes to non-human-sourced clues in inferring personal attributes.
Model      GPT4-V   GPT4-V     GPT4-V   Gemini
Prompt     Naive    Extended   Final    Final
Refusal    54.5     1.2        0        4.6
Accuracy   20.6     76.0       77.6     60.8
Table 4: Top: Rate [%] of models refusing to respond citing safety concerns. Bottom: Overall accuracy [%] over prompts. Impact of Prompting We show the impact of our prompting techniques on the response refusal rate and on overall accuracy in Table 4. Our baseline is a naive prompt using an imperative tone and asking the model for a given private attribute (\"Naive\"). As introduced in Section 3, in order to overcome the safety alignment of the models, we add adversarial prompting elements, such as the gamification of the inference task (\"Extended\"). Finally, we further extend the prompt with chain-of-thought reasoning (Wei et al., 2022), and provide reasoning guidelines and general hints to the model to improve its performance (\"Final\"). All used prompts are included in Appendix E. Remarkably, our simple extension over the naive prompting achieves substantial improvements in terms of bypassing the safety alignment of GPT4-V, reducing the rejection rate from 54.5% to a mere 1.2%. This is concerning, as it confirms that currently applied safety filters are incredibly brittle against even basic circumvention methods. Further, Table 4 shows the impact of the prompts on the overall accuracy of GPT4-V: escaping the safety filter with a gamified prompt provides the largest improvement, and the further extensions in our \"Final\" prompt lead to additional accuracy gains.
Attribute      LOC (P)   LOC (LP)   POI (P)   POI (LP)
Final Prompt   58.4      87.0       34.8      79.3
+ Zoom         +6.5      +0.0       +4.3      +2.2
Table 5: Prediction accuracies [%] for exact (P) and less precise (LP) location predictions of GPT4-V with the \"Final\" prompt vs. added zooming. Automated zooming We examine the impact of automated zooming on the location attributes (LOC and POI), as predictions on other attributes were largely not subject to resolution limitations. We show our results in Table 5, comparing our final prompt with an additional automated zooming extension. We report the accuracy improvements made by GPT4-V on LOC and POI, distinguishing between precise (P) and less precise (LP) predictions. Notably, zooming provides the largest accuracy improvements on precise predictions, enabling the model to make a more precise inference based on fine-grained clues in the images. 
As this process is automated, this result raises an important concern over the deployment of these models as autonomously acting inference adversaries. 9 6 Discussion Our empirical evaluation highlights several key privacy threats posed by VLMs, which are especially severe in the face of the wide adoption of these models: (1) Both proprietary and open-source models are capable of making accurate privacy-infringing inferences. (2) The safety filters of the better performing proprietary models such as GPT4-V are brittle and can be easily circumvented in practice, potentially providing a false sense of privacy. (3) As observed previously for text-only models, the capabilities of VLMs to infer personal attributes from images are directly correlated with their performance on other harmless and useful tasks. This can pose a significant challenge in mitigating such inferences on the model provider side, as a potential loss in utility has to be balanced with increased safety and privacy protection. Independently, it is to be expected that upcoming VLMs will only improve in general capabilities, and hence also on the results we have shown in this work, making the threat to user privacy even more imminent. We believe that defenses against such VLM-based inferences both on the model provider and on the user side are an important and potentially highly impactful direction for future development and research to tackle this challenge. Limitations This work aims to provide the first characterization and evaluation of the inferencebased privacy threat arising from recent frontier VLMs. This evaluation is enabled by a manually collected real-world image dataset alongside a wide selection of manually annotated personal attributes. Due to the sensitive nature of such datasets and in line with previous works as well as ethical concerns, we decided not to release the VIP dataset publicly. While VIP allowed us to make a qualitative assessment of the discussed risks, we believe that the field may benefit from future efforts in constructing larger-scale public benchmarks. However, as similar ethical concerns apply here, we see well-curated synthetic benchmarks as a promising remedy to evaluation data limitations. 7 Conclusion In this work, we presented the first exploration and evaluation of the privacy risks emerging from the inference capabilities of frontier VLMs. We examined the VLMs\u2019 capabilities of inferring private features from seemingly inconspicuous real-world online images by tackling two key issues: (1) To allow for a quantitative assessment, we constructed the first benchmark for evaluating the privacyinfringing inference capabilities of VLMs from seemingly benign online images that do not focus on the depiction of human subjects, and (2) we built a simple prompting scheme suitable for evaluating the full extent of potential private feature inferences by enabling the evasion of current safety filters. Concerningly, our evaluation shows that on the one hand, built-in safety filters of models are easily evaded, and on the other, the most capable model, GPT4-V (OpenAI, 2023c) already achieves 77.6% accuracy across all features, remarkably close to human performance. Our results indicate that large-scale, automated, and highly accurate inferences of private attributes from images posted online are already becoming feasible. 
With current defenses lacking, we therefore aim to raise awareness with our findings and appeal to the community for an increased focus on mitigating privacy threats from inferences with frontier VLMs."
},
{
"url": "http://arxiv.org/abs/2402.18312v1",
"title": "How to think step-by-step: A mechanistic understanding of chain-of-thought reasoning",
"abstract": "Despite superior reasoning prowess demonstrated by Large Language Models\n(LLMs) with Chain-of-Thought (CoT) prompting, a lack of understanding prevails\naround the internal mechanisms of the models that facilitate CoT generation.\nThis work investigates the neural sub-structures within LLMs that manifest CoT\nreasoning from a mechanistic point of view. From an analysis of LLaMA-2 7B\napplied to multistep reasoning over fictional ontologies, we demonstrate that\nLLMs deploy multiple parallel pathways of answer generation for step-by-step\nreasoning. These parallel pathways provide sequential answers from the input\nquestion context as well as the generated CoT. We observe a striking functional\nrift in the middle layers of the LLM. Token representations in the initial half\nremain strongly biased towards the pretraining prior, with the in-context\ntaking over abruptly in the later half. This internal phase shift manifests in\ndifferent functional components: attention heads that write the answer token\npredominantly appear in the later half, attention heads that move information\nalong ontological relationships appear exclusively in the initial half, and so\non. To the best of our knowledge, this is the first attempt towards mechanistic\ninvestigation of CoT reasoning in LLMs.",
"authors": "Subhabrata Dutta, Joykirat Singh, Soumen Chakrabarti, Tanmoy Chakraborty",
"published": "2024-02-28",
"updated": "2024-02-28",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "The emergence of Large Language Models (LLMs) is by far one of the most remarkable examples of enig- matic ubiquity. Building these models, refining their abilities and unleashing newer potentials have garnered tremendous enthusiasm lately. Yet, we still barely understand the mechanisms within the model that ac- tually implements those abilities \u2013 how they emerge from the training dynamics and why they are often bewilderingly brittle. One of these remarkable capabilities of LLMs that has unleashed their potential in complex reasoning is Chain-of-Thought (CoT) prompting (Wei et al., 2022b; Kojima et al., 2022). Instead of providing a direct answer to the question, in CoT prompting, we expect the model to generate a verbose response, adopting a step-by-step reasoning process to reach the answer. Despite the huge success of eliciting intermediate computation and their diverse, structured demonstration, the exact mechanism of CoT prompting remains mysterious. Prior attempts have been made to restrict the problem within structured, synthetic reasoning to observe the LLM\u2019s CoT generation behavior (Saparov & He, 2023); context perturbation towards causal modeling has also been used (Tan, 2023). However, \u2217Equal contribution. \u2020Work done as Research Assistant at IIT Bombay 1 arXiv:2402.18312v1 [cs.CL] 28 Feb 2024 these endeavors produce only an indirect observation of the mechanism; the underlying neural algorithm implemented by the LLM remains in the dark. Recent developments in the mechanistic interpretation of Transformer-based models provide hope for uncov- ering the neural \u2018algorithms\u2019 at work inside LLMs (Elhage et al., 2021; Nanda, 2022). Typically, mechanistic methods seek to build a causal description of the model; starting from the output, they localize important components of the model (e.g., an attention head or a multilayer perceptron (MLP) block) by activation patching (Wang et al., 2023; Zhang & Nanda, 2023). However, there are some implicit challenges in reverse- engineering CoT-prompting in foundational models. The capability of generating CoT is largely dependent on model scale (Wei et al., 2022a; Saparov & He, 2023). On the other hand, reverse-engineering a large model becomes a wild goose chase due to the hydra effect (McGrath et al., 2023) \u2014 neural algorithmic components within LLMs are adaptive: once we \u2018switch off\u2019 one functional component, others will pitch in to supply the missing functionality. Furthermore, in most real-world problems involving multi-step reasoning, there are implicit knowledge requirements. LLMs tend to memorize factual associations from pretraining as key-value caches using the MLP blocks Geva et al. (2021); Meng et al. (2022). Due to the large number of parameters in MLP blocks and their implicit polysemanticity, interpretation becomes extremely challenging. In this paper, we seek to address these challenges, thus shedding light on the internal mechanism of Transformer-based LLMs while they perform CoT-based reasoning. Going beyond the typical toy model regime of mechanistic interpretation, we work with LLaMA-2 7B (Touvron et al., 2023). To minimize the effects of MLP blocks and focus primarily on reasoning from the provided context, we make use of the PrOntoQA dataset (Saparov & He, 2023) that employs ontology-based question answering using fictional entities (see Figure 1 for an example). 
Specifically, we dissect CoT-based reasoning on fictional reasoning as a composition of a fixed number of subtasks that require decision-making, copying, and inductive reasoning (\u261bSection 4). We draw on three prominent techniques for investigating neural algorithms, namely, activation patch- ing (Nanda, 2022), probing classifiers (Belinkov, 2022), and logit lens (nostalgebraist, 2020) to disentangle different aspects of the neural algorithm implemented by LLaMA-2 7B. We find that despite the difference in reasoning requirement of different subtasks, the sets of attention heads that implement their respective algorithms enjoy significant intersection (\u261bSection 4.1). Moreover, they point towards the existence of mechanisms similar to induction heads (Olsson et al., 2022) working together. In the initial layers of the model (typically, from first to 16th decoder blocks in LLaMA-2 7B), attention heads conduct information transfer between ontologically related fictional entities; e.g., for the input numpuses are rompuses, this mech- anism copies information from numpus to rompus so that any pattern involving the former can be induced with the latter (\u261bSection 5). Interestingly, in this same segment of the model, we find a gradual transition in the residual stream representation from pretraining before in-context prior, i.e., the contextual information replaces the bigram associations gathered via pretraining (\u261bSection 6.1). Following this, we seek to identify the pathways of information that are responsible for processing the answer and write it to the output residual stream for each subtask. To deal with the abundance of backup circuits (Wang et al., 2023), we look for such parallel pathways simultaneously without switching off any of them. We find that multiple attention heads simultaneously write the answer token into the output, though all of them appear at or after the 16th decoder block (\u261bSection 6.2). These answer tokens are collected from multiple sources as well (i.e., from the few-shot context, input question, and the generated CoT context), pointing towards the coexistence of multiple neural algorithms working in parallel. Our findings supply empirical answers to a pertinent open question about whether LLMs actually rely on CoT to answer questions (Tan, 2023; Lampinen et al., 2022): the usage of generated CoT varies across subtasks, and there exists parallel pathways of answer collection from CoT as well as directly from the question context (\u261bSection 6.3). Here again, we observe the peculiar functional rift within the middle of the model: answer tokens, that are present in the few-shot examples but contextually different from the same token in the question, are used as sources by attention heads, primarily before the 16th decoder block. To the best of our knowledge, this is the first-ever in-depth analysis of CoT-mediated reasoning in LLMs in terms of the neural functional components. Code and data 1 are made available publicly. 1https://github.com/joykirat18/How-To-Think-Step-by-Step 2",
"main_content": "In this section, we provide an overview of literature related to this work, primarily around different forms of CoT reasoning with Transformer-based LMs, theoretical and empirical investigations into characteristics of CoT, and mechanistic interpretability literature around language models. Nye et al. (2021) first showed that, instead of asking to answer directly, letting an autoregressive Transformer generate intermediate computation, which they called scratchpads, elicits superior reasoning performance. They intuitively explained such behavior based on the observation that, in a constant depth-width model with O(n2) complexity attention, it is not possible to emulate an algorithm that requires super-polynomial computation; by writing the intermediate answer, this bottleneck is bypassed. Wei et al. (2022b) demonstrated that LLMs enabled with unstructured natural language expressions of intermediate computations, aka CoT, can demonstrate versatile reasoning capabilities. Multiple recent attempts have been made toward a deeper understanding of CoT, both empirically and theoretically. Feng et al. (2023) employed circuit complexity theory to prove that \u2014 (i) Transformers cannot solve arithmetic problems with direct answers unless the model depth grows super-polynomially with respect to problem input size, and (ii) Constant depth Transformers with CoT generation are able to overcome the said challenge. Liu et al. (2022) showed that Transformers can learn shortcut solutions to simulate automata with standard training (without CoT); however, they are statistically brittle, and recency-biased scratchpad training similar to that of Nye et al. (2021) can help with better generalizability and robustness. Saparov & He (2023) dissected the behavior of LLMs across scale on synthetically generated multistep reasoning tasks on true, false, and fictional ontologies. In a somewhat counterintuitive finding, Lampinen et al. (2022) observed that even if an LLM is prompted to generate an answer followed by an explanation, there is a significant improvement over generating an answer without any explanation. While this might point toward a non-causal dependence between the explanation and the answer, Tan (2023) showed that LLMs do utilize, at least partially, the information generated within the intermediate steps such that the answer is causally dependent on the CoT steps. The above-mentioned studies either 1) dissect the model behavior under controllable perturbations in the input problem or 2) construct theoretical frameworks under different assumptions to prove the existence of certain abilities/disabilities. Mechanistic interpretability techniques go one step deeper and seek to uncover the neural algorithm deployed by the model to perform a certain task. Elhage et al. (2021) studied one-, two, and three-layer deep, attention-only autoregressive Transformers and their training dynamics. A crucial finding of their investigation was the existence of induction heads \u2014 a composition of two attention heads at different layers that can perform pattern copying from context. Olsson et al. (2022) empirically observed the simultaneous emergence of induction heads and in-context learning in the training dynamics of similar toy Transformer models. Similar analyses on phenomena like polysemanticity, superposition, and memorization have been performed in the toy model regime (Elhage et al., 2022; Henighan et al., 2023). Beyond these toy models, Wang et al. 
(2023) analyzed circuits in GPT-2 responsible for Indirect Object Identification. Wu et al. (2023) proposed a causal abstraction model to explain price-tagging in the Alpaca-7B model: given a range of prices and a candidate price, the task is to classify if the candidate price falls within the given range. Extrapolating mechanistic observations from the regime of toy models to \u2018production\u2019 LLMs is particularly challenging; with billions of parameters and deep stacking of attention heads, identifying head compositions that instantiate an algorithm is extremely difficult. Furthermore, as McGrath et al. (2023) suggest, different components of the LLM are loosely coupled and adaptive in nature, for which they coined the term hydra effect \u2014 ablating an attention layer may get functionally compensated by another. Wang et al. (2023) also identified similar information pathways called backup circuits that can take over once the primary circuits are corrupted. 3 Background In this section, for completeness, we briefly introduce the concepts and assumptions necessary for delving into the interpretation of CoT reasoning. 3 The Transformer architecture (Vaswani et al., 2017) consists of multiple, alternated attention and MLP blocks, preceded and followed by an embedding projection and a logit projection, respectively. Given the vocabulary as V , we denote the embedded representation of the i-th token si \u2208V in an input token sequence S of length N (i.e., the sum of embedding projections and position embeddings in case of additive position encoding) as xi 0 \u2208Rd, where d is the model dimension. We denote the content of the residual stream corresponding to the i-th token, input to the j-th decoder block, as xi j\u22121, which becomes \u02dc xi j\u22121 after the attention layer reads and writes on the residual stream. The initial content of the residual stream is the same as the token embedding xi 0. Assuming H number of heads for each attention layer, the operation of k-th attention head at j-th decoder block on the i-th token\u2019s residual stream is denoted as yi j,k = hj,k(xi j\u22121), yi j,k \u2208Rd. Then \u02dc xi j = xi j\u22121 + P k yi j,k is the content of the residual stream immediately after the attention operations. Each attention head hj,k is parameterized using four projection matrices: query, key and value projection matrices, denoted by W j,k Q , W j,k K , W j,k V \u2208Rd\u00d7 d H , respectively, and output projection matrix W j,k O \u2208R d H \u00d7d. Note that in the case of LLaMA models, the position encoding is incorporated via rotation of query and key projections before computing dot-product (Su et al., 2023); we omit this step for brevity. Similarly, the action of the MLP block can be expressed as zi j = MLPj(\u02dc xi j), with xi j = \u02dc xi j + zi j denoting the content of the residual stream after decoder block j. After processing the information through L number of decoder blocks, a feedforward transformation U \u2208Rd\u00d7|V |, commonly known as the unembedding projection, maps the content of the residual stream into the logit space (i.e., a distribution over the token space). Given a sequence of input tokens S = {s1, \u00b7 \u00b7 \u00b7 , si}, the autoregressive Transformer model predicts a new output token si+1: si+1 = arg max xi logit LM(xi logit|S) (1) Fictional ontology-based reasoning, as proposed by Saparov & He (2023) as PrOntoQA, provides a tractable approach to dissect CoT generation. 
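Read as code, the residual-stream bookkeeping defined above (per-head writes y_{j,k} added to the stream, an MLP update per block, and greedy decoding through the unembedding, Eq. 1) amounts to the following toy sketch; the dimensions, random weights, and stand-in MLP are purely illustrative and not LLaMA-2's actual implementation.

import numpy as np

rng = np.random.default_rng(0)
d, H, L, V, N = 16, 4, 2, 50, 5              # model dim, heads per layer, blocks, vocab size, sequence length

def attention_head(x, Wq, Wk, Wv, Wo):
    # One head h_{j,k}: reads the residual streams x (N x d) and writes y_{j,k} (N x d).
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    mask = np.triu(np.full((len(x), len(x)), -np.inf), k=1)   # causal mask
    attn = np.exp(scores + mask)
    attn /= attn.sum(-1, keepdims=True)
    return (attn @ v) @ Wo

x = rng.normal(size=(N, d))                  # x_0: embedded input tokens
U = rng.normal(size=(d, V))                  # unembedding projection
for j in range(L):
    heads = [attention_head(x, *[rng.normal(size=s) for s in
             ((d, d // H), (d, d // H), (d, d // H), (d // H, d))]) for _ in range(H)]
    x_tilde = x + sum(heads)                 # x~_j = x_{j-1} + sum_k y_{j,k}
    x = x_tilde + np.tanh(x_tilde @ rng.normal(size=(d, d))) * 0.1   # stand-in for MLP_j
next_token = int(np.argmax(x[-1] @ U))       # Eq. (1): greedy decoding from the last residual stream
print(next_token)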
The reasoning problem is framed as question-answering on a tree-based ontology of fictional entities (see Appendix A). This eases two major challenges in our case: (i) Mechanistic interpretation requires input or activation perturbation and recording the results of such perturbations. With CoT, one then needs to repeat such perturbation process for all the subtasks Ti. Unlike free-form CoT reasoning, PrOntoQA provides a clearly demarcated sequence of successive steps that can be analyzed independently. (ii) The solution to most real-world reasoning problems heavily requires factual knowledge, so much so that a sound reasoning process might get misled by incorrect fact retrieval. A fictional ontology virtually cuts off such dependence and lets us focus on a purely open-book, closed-world reasoning process by the LM. Additionally, PrOntoQA provides reasoning formulation over false ontologies (see example in Appendix A). False ontology grounds the reasoning over statements that are false in the real world. An important demarcation between fictional and false ontological reasoning is that while the former minimizes the effects of factual associations memorized as pretraining prior, the latter requires the LM to actively eclipse such memorized knowledge to solve the problem successfully. Circuits, in mechanistic interpretability research, provide the abstractions of interpretable algorithms implemented by the model within itself. Typically, a circuit is a subgraph of the complete computational graph of the model, responsible for a specific set of tasks. We primarily follow the notation adopted by Wang et al. (2023), with nodes defined by model components like attention heads and projections and edges defined by interactions between such components in terms of attention, residual streams, etc. Activation patching is a common method in interpretability research. Activation patching begins with two forward passes of the model, one with the actual input and another with a selectively corrupted one. The choice of input corruption depends on the task and the type of interpretation required. For example, consider the following Indirect Object Identification (IOI) task (Wang et al., 2023): given an input John and Mary went to the park. John passed the bottle to, the model should predict Mary. Further, corrupting the input by replacing Mary with Anne would result in the output changing to Anne. Let xMary j and xAnne j represent the original and corrupted residual streams at decoder block j, depending on whether Mary or Anne was injected at the input. Now, in a corrupted forward pass, for a given j, if the replacement of xAnne j by xMary j results in the restoration of the output token from Anne to Mary, then one can conclude that attention mechanism at decoder block j is responsible for moving the name information. This is an example 4 Max T umpuses are bright. Lempuses are tumpuses. Max is a lempus. True or False: Max is bright. Max is a lempus are tumpuses Lempuses is a tumpus T umpuses are bright Max is bright Decision circuit Copy circuit Induction circuit Figure 1: An working example of task decomposition of CoT generation on fictional ontology. Decision circuits choose which reasoning path to follow; such decisions may be about the order of the information to be copied from the question to the generated output or whether inductive reasoning will be performed. Copy circuits follow Decision circuits and copy statements from question to output. 
Induction circuits perform the reasoning steps of the form \u201cif A is B and B is C, then A is C\u201d. of patching corrupted-to-clean activation, often called causal tracing (Meng et al., 2022). Activation patching, in general, refers to both clean-to-corrupted as well as corrupted-to-clean patching (Zhang & Nanda, 2023). Knockout is a method to prune nodes in the full computational graph of the model to identify task-specific circuits. Complete ablation of a node is equivalent to replacing the node output with an all-zero vector. Our initial experiments suggest that such an ablation destructively interferes with the model\u2019s computation. Instead, we follow Wang et al. (2023) for mean-ablation to perform knockouts. Specifically, we construct inputs from the false ontologies provided in the PrOntoQA dataset and compute the mean activations for each layer across different inputs: xKnock j = Meani \u0000{xi j|si \u2208S \u2208DFalse} \u0001 (2) where DFalse denotes the false ontology dataset. Then, the language model function with head hj,k knocked out for the residual stream corresponding to the l-th token, can be represented as, sKnock i = arg max xi logit LMl j,k(xi logit|S, yl j,k = xKnock j ) (3) More often than not in this work, we will need to knock out a set of heads H; we will denote the corresponding language model as LMl H. Also, if we perform knockout on the last residual stream (which, when projected to the token space, gives us the output token at the current step of generation), we will drop the superscript l. 4 Task composition We seek to discover the circuits responsible in LLaMA-2 7B for few-shot CoT reasoning on the examples from PrOntoQA fictional ontology problems. Consider the example presented in Figure 1: we ask the model 5 \u201cTumpuses are bright. Lempuses are tumpuses. Max is a lempus. True or False: Max is bright\u201d (for brevity, we omit the few-shot examples that precede and the CoT prompt Let\u2019s think step by step that follows the question). LLaMA-2 7B generates a verbose CoT reasoning sequence as follows: \u201cMax is a lempus. Lempuses are tumpuses. Max is a tumpus. Tumpuses are bright. Max is bright.\u201d We can observe that such a reasoning process constitutes three critical kinds of subtasks: 1. Decision-making: The model decides on the path of reasoning to follow. In Figure 1, given the three possible entities to start with, namely Tumpus, Lempus, and Max, LLaMA-2 7B starts with the last one (i.e., Max is a lempus). One would require multiple such decision-making steps within the complete reasoning process. 2. Copying: The LM needs to copy key information given in the input to the output. Typically, decision-making precedes copying, as the model needs to decide which information to copy. 3. Induction: The LM uses a set of statements to infer new relations. Again, a decision-making step precedes as the model must decide on the relation to infer. In the majority of this work, we focus on 2-hop inductions where given statements of the form A is B and B is C, the model should infer A is C. In all our analyses henceforth, we follow a structure where the model needs to generate CoT responses that solve the overall task using ten steps or subtasks, each belonging to one of the three categories, as presented in Figure 1. (See Appendix E for details of few-shot examples used along with the overall performance of LLaMA-2 7B.) 
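A close analogue of the mean-ablation knockout in Eqs. (2)-(3) can be sketched with hook-based activation patching; the snippet below assumes the TransformerLens hook interface and substitutes GPT-2 small for LLaMA-2 7B (and ablates the pre-projection head output rather than the post-projection write) purely so that the sketch stays small and runnable. It illustrates the mechanism rather than reproducing the authors' setup.

import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
layer, head = 9, 6                                   # the head h_{j,k} to knock out (illustrative indices)
hook_name = f"blocks.{layer}.attn.hook_z"

# Step 1 (Eq. 2): mean head activation over a reference corpus standing in for D_False.
reference_texts = ["Cats are vehicles.", "Paris is a planet."]
zs = []
for text in reference_texts:
    _, cache = model.run_with_cache(model.to_tokens(text))
    zs.append(cache[hook_name][0, :, head, :].mean(dim=0))    # mean over positions (a simplification)
mean_z = torch.stack(zs).mean(dim=0)                          # mean over reference inputs

# Step 2 (Eq. 3): rerun the clean prompt with this head's output replaced by the mean.
def knockout_hook(z, hook):
    z[:, :, head, :] = mean_z
    return z

prompt = "Max is a lempus. Lempuses are tumpuses. Max is a"
tokens = model.to_tokens(prompt)
logits = model.run_with_hooks(tokens, fwd_hooks=[(hook_name, knockout_hook)])
print(model.tokenizer.decode(logits[0, -1].argmax().item()))  # greedy next token under the knockout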
A natural hypothesis would be that distinct, well-defined circuits corresponding to each category of subtasks (i.e., decision-making, copying, and inductive reasoning) exist that work in conjunction to lead the CoT generation. However, as our findings in the next section suggest, this is not the case in reality. 4.1 Task-specific head identification As Wang et al. (2023) suggested, the very first step toward circuit discovery is to identify the components. Since we presume the attention heads as the nodes of a circuit, the goal then becomes to identify those heads in the language model that are most important for a given task. We define the importance of the k-th head at the j-th decoder block, hj,k for a particular task as follows. Let the task be defined as predicting a token si+1 given the input S = {s1, s2, \u00b7 \u00b7 \u00b7 , si}. For example, in the demonstration provided in Figure 1, predicting the token Max given the context Tumpuses are bright. Lempuses are tumpuses. Max is a lempus. True or False: Max is bright. Response: Let us think step by step. Tumpuses are bright. Lempuses are tumpuses. We assign a score \u00b5Task(hj,k) to each head hj,k proportional to their importance in performing the given task. Following the intuitive subtask demarcation presented earlier, we start with scoring the attention heads for each different subtask. We provide the detailed procedure of calculating \u00b5Task(hj,k) for each category of subtasks \u2014 decision-making, copying, and induction, in Appendix C. Figure 2 demonstrates how each of these three categories of attention heads performs on its respective subtasks and the rest of the subtasks (note that we perform head ablation here for each subtask index independently). We also show the fraction of the total number of heads involved in performing each subtask. A higher accuracy with a lower number of head involvement would suggest that we have found the minimal set of heads constitute the responsible circuit. As we can observe, there are no obvious mappings between the tasks used for head identification and the subtasks that constitute the CoT. Quite surprisingly, heads that we deem responsible for inductive reasoning can perform well across all the subtasks (Figure 2 (c)). Furthermore, the three sets of heads share a significant number of heads that are essential for the subtasks. In Figure 2(d), for each subtask, we use the heads that are not shared by subtasks of other categories. For example, in subtask 3, since we assumed it to be a decision-making subtask, we took only those decisionmaking heads that were not present in the copying or inductive reasoning heads. It is evident that these tasks are not structurally well differentiated in the language model. A good majority of heads share the importance of all three subtasks. The existence of backup circuits justifies such phenomena; however, with 6 (a) Decision heads 0 0.2 0.4 0.6 0.8 1 Sub-Task Accuracy (red)/ Fraction of heads involved (blue) 0 0.2 0.4 0.6 0.8 1 Sub-Task 0 0.2 0.4 0.6 0.8 1 Sub-Task 0 0.05 0.1 0.15 Sub-Task (d) Task-only heads (c) Inductive reasoning heads (b) Copy heads 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 Figure 2: Effects of subtask-specific heads across different subtasks. We show the performance of (a) decision-making, (b) copy, and (c) inductive reasoning heads for each subtask (blue bars show accuracy when the rest of the heads are knocked out; red bars denote the fraction of heads involved). 
Figure 3: Importance of inductive reasoning heads across subtasks. We plot the counts of attention heads binned over relative head importance (\u00b5(h_{j,k})) computed on the inductive reasoning task across all subtask indices (0-9, left to right and top to bottom). We show the accuracy (in blue) at each subtask relative to the full model when heads are knocked out recursively. We hypothesize that there exists even further granularity of functional components that collectively express all three types of surface-level subtasks. We seek a solution in the earlier claims that pattern matching via induction heads serves as the progenitor of in-context learning (Olsson et al., 2022). We observe that inductive reasoning subtasks of the form \u201cif [A] is [B] and [B] is [C], then [A] is [C]\u201d essentially require circuits that can perform 1) representation mixing from [A] to [B] to [C], and 2) pattern matching over past context by deciding which token to copy and then copying it to the output residual stream. By this argument, heads responsible for inductive reasoning can also perform decision-making and copying. For empirical validation of this argument, we analyze the effects of knocking out heads on the accuracy of each of the subtask indices. For each subtask, we construct a histogram of head counts over \u00b5(h_{j,k}). It can be seen that the corresponding density distribution is Gaussian in nature. Furthermore, we observe that the actual effect of pruning a head on the accuracy does not depend on the raw value of \u00b5(h_{j,k}) but varies with \u03b4(h_{j,k}) = (\u00b5(h_{j,k}) \u2212 Mean_{j,k}(\u00b5(h_{j,k})))^2. From the definition of \u00b5(h_{j,k}) provided in Appendix C, we can explain this behavior as follows: a very low value of \u00b5(h_{j,k}) is possible when the KL-divergence between the noisy logits and the original logits is almost the same as the KL-divergence between the patched and original logits. Conversely, \u00b5(h_{j,k}) can be very high when the KL-divergence between the patched logits and the original logits is very small. Both of these suggest a strong contribution of the particular head. Figure 3 shows the accuracy of the model at each subtask index with heads pruned at different values of \u03b4(h_{j,k}). Among the 1,024 heads of LLaMA-2 7B, we can see that pruning the model down to as few as \u223c400 heads retains >90% of the original accuracy.
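A minimal sketch of the pruning criterion described above: compute \u03b4(h_{j,k}) from placeholder \u00b5 scores, prune heads with the smallest \u03b4 first, and re-evaluate accuracy as heads are removed. The scores, the stub accuracy function, and the pruning schedule are illustrative stand-ins; only the \u03b4 formula follows the text.

import random

random.seed(0)
n_layers, n_heads = 32, 32
heads = [(j, k) for j in range(n_layers) for k in range(n_heads)]   # 1,024 heads in LLaMA-2 7B
mu = {h: random.gauss(0.5, 0.1) for h in heads}                     # placeholder mu(h_{j,k}) scores

mean_mu = sum(mu.values()) / len(mu)
delta = {h: (mu[h] - mean_mu) ** 2 for h in heads}                  # delta = (mu - mean(mu))^2

# Heads with large delta (very high or very low mu) contribute strongly and are kept;
# prune the ones with the smallest delta first.
pruning_order = sorted(heads, key=lambda h: delta[h])

def subtask_accuracy(kept_heads):
    # Stand-in for running the knocked-out model (Eq. 3) on one subtask index.
    return 0.9

for n_pruned in (0, 200, 400, 624):
    kept = set(pruning_order[n_pruned:])
    print(n_pruned, 'heads pruned,', len(kept), 'kept, accuracy', subtask_accuracy(kept))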
Further patterns in allocating attention heads to different subtasks can be observed in Figure 3. Typically, subtask indices 0 and 5 are most sensitive to pruning attention heads. Subtask-0 is a decision-making subtask where the model decides on the reasoning path. One can further demarcate subtask 0 from subtasks 2, 6, and 8, although we initially assumed all four of these to be decision-making stages. Referring to the example provided in Figure 1, in subtask-0 the model has very little information about which ontological statement to choose next, compared to subtask indices 2, 6, and 8, where the model needs to search for the fact related to the second entity in the last generated fact. Subtask-5 is the actual inductive reasoning step. Compared to subtask-9, which is also an inductive reasoning step, subtask-5 does not have any prior (the answer generated in subtask-9 is framed as a question in the input). 5 Token mixing over fictional ontology Following the claims in the earlier section, we continue to investigate the inductive reasoning functionality. Recall that we first need to explore whether there is indeed a mixing of information happening across tokens of the form [A] is [B]. Specifically, we pose the following question: given three pairs of tokens ([A1], [B1]), ([A2], [B2]) and ([A3], [B3]), such that there exists a positive relation [A1] is [B1], a negative relation [A2] is not [B2], and no relation exists between [A3] and [B3], is it possible to distinguish between [x_j^{A1} : x_j^{B1}], [x_j^{A2} : x_j^{B2}], and [x_j^{A3} : x_j^{B3}], where [\u00b7 : \u00b7] denotes the concatenation operator, for a given decoder layer j? Given that these tokens denote entities from a fictional ontology, the model could not possibly have memorized similar representations for the tokens within the embedding/MLP blocks. Instead, it needs to deploy the attention heads to move information from the residual stream of one token to another. We translate the posed question into learning a probing classifier (Belinkov, 2022) C : X \u2192 Y, where for each [x_j^{Ai} : x_j^{Bi}] \u2208 X, the corresponding label y \u2208 Y = {\u22121, 0, +1} should be \u22121, 0, or +1 if A_i and B_i are negatively related, unrelated, or positively related, respectively. We experiment with a linear and a nonlinear implementation of C as feedforward networks. Specifically, in the linear setting, we use a single linear transformation of the form y = softmax(Wx + b); in the nonlinear setup, we utilize stacked linear transformations with ReLU as the intermediate non-linearity. We provide the full implementation details of this experiment in Appendix D. Given that the baseline distribution of the residual stream representation might vary across layers, we employ layer-specific classifiers. Figure 4: How does the LLM mix information among tokens according to ontology? We plot the performance (in terms of accuracy) of classifying whether two tokens are ontologically related, unrelated, or negatively related using their residual stream representations at different layers. We demonstrate the depthwise accuracy profile using different numbers of few-shot examples (0 to 4). Typically, mixing information between tokens according to their relation does not require in-context learning. Information mixing between related (or negatively related) token pairs results in better distinguishability: gradually increasing from the starting decoder blocks and achieving a peak between decoder blocks 10-15.
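The probing setup can be pictured with a short PyTorch sketch: a layer-specific classifier trained on concatenated residual-stream pairs [x_j^A : x_j^B] with three-way labels. The random tensors stand in for cached residual streams, and the hidden widths and training loop are illustrative rather than the exact configuration reported in Appendix D.

import torch
from torch import nn

d_model, n_pairs = 4096, 512                    # LLaMA-2 7B hidden size; toy sample count
torch.manual_seed(0)

# Placeholder cached residual streams for token pairs at one decoder layer j.
x_a = torch.randn(n_pairs, d_model)
x_b = torch.randn(n_pairs, d_model)
features = torch.cat([x_a, x_b], dim=-1)        # [x_j^A : x_j^B]
labels = torch.randint(0, 3, (n_pairs,))        # 0: negatively related, 1: unrelated, 2: related

# Nonlinear probe: stacked linear layers with ReLU, ending in 3 logits.
probe = nn.Sequential(
    nn.Linear(2 * d_model, 512), nn.ReLU(),
    nn.Linear(512, 128), nn.ReLU(),
    nn.Linear(128, 3),
)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(probe(features), labels)
    loss.backward()
    optimizer.step()

accuracy = (probe(features).argmax(dim=-1) == labels).float().mean().item()
print('probe accuracy on placeholder data:', round(accuracy, 2))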
Figure 4 shows the 3-way classification performance in terms of accuracy across different layers using 4-layer ReLU networks. The following conclusions can be drawn from the observations: \u2022 Non-linear information movement between residual streams. Our experiments with the linear classifier fail to distinguish between ontologically related, unrelated, and negatively related entities. \u2022 Distinguishability of the residual stream pairs improves with depth. While the very first attention layer provides a substantial degree of the required token mixing, we observe that the classification performance improves gradually, pointing towards the existence of multiple successive attention heads that continue moving information from x_j^A to x_j^B if the tokens are related. However, after a certain depth, the distinguishability starts diminishing, most likely due to the accumulation of other task-related information. \u2022 Token mixing is not boosted by the contextual prior. Across different numbers of in-context examples provided, we do not observe any significant difference in peak classification performance compared to the zero-shot regime. However, as the number of examples increases, we observe unstable classification performance. It is likely that with more in-context examples, task-specific information is accumulated earlier than in the zero-shot regime. 6 Circuitry of step-by-step generation In Section 4.1, we observed that the head importance identified via the inductive reasoning task could serve as a proxy for identifying the important heads across all the subtasks. This provides us with an opportunity to reduce the total number of attention heads to analyze when investigating the step-by-step generation procedure. For each subtask, we use a threshold range on \u00b5(h_{j,k}) to select a subset of attention heads that are kept intact while the rest of the heads are knocked out (see Appendix C for the subtask-wise threshold ranges used and the corresponding subtask accuracy). Figure 5: When does the LLM start following the context? Distribution of the context-abidance score c_{j,k} of each head (darker blue shade signifies higher c_{j,k}), with layer (j) on the y-axis and head index (k) on the x-axis. Attention heads in red denote those with zero context-abidance. We show the distributions for all the subtask indices 0-9 (left to right, top to bottom). Typically, context abidance is task-agnostic and emerges after the 16th decoder block.
This chosen subset of the model gives us an aggregate accuracy of 0.9 over the inputs for which the full model generates correctly. Next, we proceed to look into the exact information that is being read and written by the attention heads from and into the residual streams. Given a head hj,k that writes yi j,k to the residual stream xi j\u22121, we apply the unembedding projection U on yi j,k and select the token with the highest probability, \u02c6 sj,k i . The unembedding projection provides the opportunity to look into the token-space representation directly associated with the residual stream and its subspaces (nostalgebraist, 2020). However, it should be noted that applying unembedding projection typically corresponds to Bigram modeling (Elhage et al., 2021). Therefore, when we map any intermediate representation (attention output, residual stream, etc.), we essentially retrieve the token that is most likely to follow. 6.1 In-context prior vs pretraining prior We start by exploring the depth at which the model starts following the context provided as input. Specifically, we check for a given token si in the sequence S, if \u02c6 sj,k i , the token projected by the attention head hj,k 10 \u22120.4 \u22120.2 0 0.2 0.4 Subtask 0 0 10 20 30 30 25 20 15 10 5 0 Head Layer \u22120.6 \u22120.4 \u22120.2 0 0.2 0.4 0.6 Subtask 1 \u22120.8 \u22120.6 \u22120.4 \u22120.2 0 0.2 0.4 0.6 0.8 Subtask 2 \u22120.6 \u22120.4 \u22120.2 0 0.2 0.4 0.6 Subtask 3 \u22120.4 \u22120.2 0 0.2 0.4 Subtask 4 \u22120.4 \u22120.2 0 0.2 0.4 Subtask 5 \u22120.4 \u22120.3 \u22120.2 \u22120.1 0 0.1 0.2 0.3 0.4 Subtask 6 \u22120.3 \u22120.2 \u22120.1 0 0.1 0.2 0.3 Subtask 7 \u22120.6 \u22120.4 \u22120.2 0 0.2 0.4 0.6 Subtask 8 \u22120.3 \u22120.2 \u22120.1 0 0.1 0.2 0.3 Subtask 9 0 10 20 30 30 25 20 15 10 5 0 Head Layer 0 10 20 30 30 25 20 15 10 5 0 Head Layer 0 10 20 30 30 25 20 15 10 5 0 Head Layer 0 10 20 30 30 25 20 15 10 5 0 Head Layer 0 10 20 30 30 25 20 15 10 5 0 Head Layer 0 10 20 30 30 25 20 15 10 5 0 Head Layer 0 10 20 30 30 25 20 15 10 5 0 Head Layer 0 10 20 30 30 25 20 15 10 5 0 Head Layer 0 10 20 30 30 25 20 15 10 5 0 Head Layer Figure 6: Which heads are writing the answer? For each subtask (0-9), we show the attention heads that write the answer to the corresponding task into the last residual stream, along with the probability of the answer token in the attention head output. is such that \u27e8si, \u02c6 sj,k i \u27e9is bigram present in S. We compute a context-abidance score, cj,k for the head hj,k as the fraction of tokens for which the above condition holds true. In Figure 5, we plot the context abidance distribution for each head across different subtask indices. We can observe a visible correlation between the depth of the head and how much context abiding it is \u2014 the LM starts focusing on the contextual information at deeper layers. Given that our experiment setup is based on fictional ontology, cj,k provides a strong demarcation between pretraining prior (i.e., language model statistics memorized from pretraining) and contextual prior (i.e., language model statistics inferred from context) since there are negligible chances of the LM to memorize any language modeling statistics containing the fictional entities from the pretraining data. Therefore, it predicts a correct bigram only when it is able to follow the context properly. 
6.2 Subtask-wise answer generation We investigate the information propagation pathway through the attention heads that constitutes the step-by-step answering of the subtasks. We start with the heads that directly write the answer to the particular subtask into the last residual stream, identified by mapping their output to the token space using U. Figure 6 plots these answer-writing heads, along with the probability of the answer in the attention head output, across different subtasks. Figure 6: Which heads are writing the answer? For each subtask (0-9), we show the attention heads that write the answer to the corresponding task into the last residual stream, along with the probability of the answer token in the attention head output. The existence of multiple answer-writing heads suggests that LMs exploit multiple pathways to generate the same answer, with each such pathway reinforcing the generated output. Figure 7: Where do the answer-writing heads collect their answers from? For each answer-writing head and each subtask, we show those attention heads that attend to residual streams corresponding to answer tokens. We demarcate heads that collect the answer tokens from the generated context (green), question context (blue), and few-shot context (red). Looking at the heads with the highest answer probability, we can observe that some top heads are utilized across different subtasks, e.g., subtasks 1 and 3; subtasks 2, 4, and 8; subtasks 5 and 9; etc. Sharp change in depth-wise functionality. Interestingly, the 16th layer appears as a region of functional transition in the model, with all the answer-writing heads appearing after this particular layer. Our earlier experiments also associate this layer with phase shifts: in the token-mixing analysis (Section 5), the distinguishability of ontologically related entities is observed to peak near this layer and start falling after that; in Section 6.1 as well, we see a certain overall rise in the context-abidance of the heads after the 16th layer (see Figure 5). All three findings suggest a functional rift in LLMs that happens to lie almost at the halfway point from embedding to unembedding. The embedding layer associates tokens with information available from pretraining. The initial half assists information movement between residual streams and aligns the representations to the contextual prior. The latter half of the model employs multiple pathways to write the answer to the last residual stream. 6.3 Parallel pathways of answer processing Once we observe the existence of multiple answer-writers within the model, the natural step forward is to ask whether they all process the answer from the input using the same mechanism. We employ a recursive strategy to identify the flow of information through the attention heads (see Appendix G for a detailed description of the procedure). Specifically, we start from the answer-writing heads, follow which residual streams are being attended to by these heads, identify the content of these residual streams via the unembedding projection, and identify the heads in the previous layers that are writing that content into those residual streams. We continue until one of two conditions is met: (i) we reach a head in the first decoder block, or (ii) we reach a residual stream corresponding to the first token in the input token sequence. With such a procedure, we construct trees of attention heads rooted at the answer-writing heads.
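The recursive trace can be sketched as follows; the three helper functions are hypothetical stand-ins for hooks into the attention patterns, the unembedding projection, and per-head output attribution, and only the recursion structure and stopping conditions mirror the procedure described above.

def attended_positions(head, position):
    # Stand-in: positions whose residual streams this head attends to from `position`.
    return [max(position - 1, 0)]

def top_token(layer, position):
    # Stand-in: argmax token of the residual stream at (layer, position) under U.
    return 'tumpus'

def writer_heads(layer, position, token):
    # Stand-in: heads in the previous layer whose output writes `token` into this stream.
    return [((layer - 1, k), position) for k in (0, 1)] if layer > 0 else []

def build_tree(head, position):
    # Recursively trace information flow backwards from an answer-writing head.
    layer, _ = head
    node = {'head': head, 'position': position, 'children': []}
    if layer == 0 or position == 0:            # stopping conditions (i) and (ii)
        return node
    for pos in attended_positions(head, position):
        token = top_token(layer - 1, pos)
        for child_head, child_pos in writer_heads(layer - 1, pos, token):
            node['children'].append(build_tree(child_head, child_pos))
    return node

# Root the tree at a hypothetical answer-writing head (layer 20, head 7) on the last position.
tree = build_tree(head=(20, 7), position=10)
print(tree['head'], 'has', len(tree['children']), 'direct children')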
Now, let us consider the following example context plus generated token sequence: \u201cLempuses are tumpuses. Tumpuses are rompuses. Rompuses are blue. Max is lempus. True or false: Max is blue. Response: Let us think step by step. Max is lempus. Lempuses are tumpuses. Max is\u201d, with the desired prediction being tumpus (note that this input follows the few-shot examples, which are not shown here for brevity). We segregate the input context into three parts: few-shot context, question context, and generated context. We can see that the LLM can collect the answer tumpus from either the generated or the question context. Also, if the same token exists in the few-shot context, it can decide to collect information from there as well (however, that would not be ideal, since the contextual role of that token would be different). We proceed to identify from the information-flow trees all those heads that attend to the streams corresponding to the answer token (in this example, tumpus). Note that there can be multiple other heads that might be attending to these same tokens; but only those heads that are elements of a tree rooted at the answer-writing heads are actually contributing to the flow of information from the answer tokens in the context. Figure 7 demonstrates these attention heads that attend to the residual streams corresponding to the answer tokens for different subtasks, present in the generated context (green), question context (blue) and few-shot context (red). The following observations can be drawn from the figures: 1. Coexistent pathways of answer generation. Smaller models like GPT-2, on simpler tasks like IOI, implement a unique neural algorithm (Wang et al., 2023). On the contrary, a larger model like LLaMA-2 exploits different parallel pathways of answer propagation while performing reasoning. There are different heads that are directly connected to the answer writers, and they collect answer information from different places in the context. 2. Different primary sources of answer for different subtasks. In the initial stages of generation (i.e., subtasks 0, 1, 2, 3), the answer tokens are not present in the generated context. For subtasks 6 and 7, again, the answers need to be collected from the question context. For subtasks 4, 5, 8, and 9, we can observe the presence of heads that collect answer tokens from the context generated via earlier subtasks (in fact, subtask 5 has more such heads). Given the fact that (i) there are multiple answer-writing heads, and (ii) there are different heads that are directly connected to the answer writers and they collect answer information from different places in the context, we can conclude that the different pathways implement different algorithms as well. In the few-shot examples, there are fictional entities that have been used in the question context as well, though in different contextual roles. The fact that there are attention heads collecting those entities as answers suggests that there are pathways prone to collecting similar information from the few-shot context. Although these pathways collect the same token as the answer, they actually deviate from the correct reasoning algorithm. However, the presence of such pathways decreases as the generation progresses from subtask 0 to 9. Note that this is different from few-shot examples providing the necessary pattern via in-context learning. Such patterns are more position-specific, i.e., while generating the answer for a certain subtask in the question, there is an overall increase in the attention paid to the same subtask tokens in the few-shot examples (see Figures 9, 10, 11, and 12 in the Appendix).
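To make the three-way attribution concrete, the sketch below buckets the positions attended by answer-collecting heads into few-shot, question, and generated context. The boundary indices and the attended-position list are illustrative placeholders, not values read off the model.

# Segment boundaries over the flattened token sequence (illustrative indices):
# [0, few_shot_end) -> few-shot context, [few_shot_end, question_end) -> question context,
# [question_end, ...) -> tokens generated so far.
few_shot_end, question_end = 180, 230

def segment(position):
    if position < few_shot_end:
        return 'few-shot'
    if position < question_end:
        return 'question'
    return 'generated'

# Positions of the answer token attended to by heads inside the information-flow tree
# for one subtask (placeholder values).
attended_answer_positions = [42, 121, 201, 203, 241, 244, 247]

counts = {'few-shot': 0, 'question': 0, 'generated': 0}
for pos in attended_answer_positions:
    counts[segment(pos)] += 1
print(counts)   # how many answer-collecting heads read from each context segment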
Finally, the pattern of answer collection from the question context and the generated context across different subtasks clearly demarcates the inductive reasoning subtasks (5 and 9) from the decision-making and copying ones. Tasks in the former category require pathways that collect answers from the earlier generated context, while the latter do not. Moreover, the decision-making step right before inductive reasoning can extract answers from the preceding step. This explains the (albeit small) number of heads that collect the answer from the generated context in subtasks 4 and 8. See Figures 13 and 14 for examples of such information flow towards subtasks 1 and 5, respectively. 7 Conclusion Our findings. This work sought to disentangle the functional fabric of CoT reasoning in LLMs. Specifically, we explored LLaMA-2 7B in a few-shot CoT regime for solving multi-step reasoning problems on fictional ontologies of the PrOntoQA dataset. We observed that: 1. Despite different reasoning requirements across different stages of CoT generation, the functional components of the model remain almost the same. Different neural algorithms are implemented as compositions of induction circuit-like mechanisms. 2. Attention heads perform information movement between ontologically related (or negatively related) tokens. This information movement results in distinctly identifiable representations for such token pairs. Typically, this distinctive information movement starts from the very first layer and continues till the middle of the model. While this phenomenon happens zero-shot, in-context examples exert pressure to quickly mix other task-specific information among tokens. 3. Multiple different neural pathways are deployed to compute the answer, and they operate in parallel. Different attention heads, albeit with different probabilistic certainty, write the answer token (for each CoT subtask) to the last residual stream. 4. These parallel answer-generation pathways collect answers from different segments of the input. We found that while generating CoT, the model gathers answer tokens from the generated context, the question context, as well as the few-shot context. This provides a strong empirical answer to the open problem of whether LLMs actually use the context generated via CoT while answering questions. 5. We observe a functional rift at the very middle of the LLM (the 16th decoder block in the case of LLaMA-2 7B), which marks a phase shift in the content of residual streams and the functionality of the attention heads. Prior to this rift, the model primarily assigns bigram associations memorized via pretraining; at and after the rift, it shifts drastically to following the in-context prior. It is likely that this is directly related to the token mixing along ontological relatedness that happens only prior to the rift. Similarly, answer-writing heads appear only after the rift. Attention heads that (wrongly) collect the answer token from the few-shot examples are also confined to the earlier half of the model. Implications for future research. These findings bear important ramifications for ongoing research on language modeling and interpretability.
A natural extension to this work would be to incorporate pretraining memorization in terms of MLP blocks \u2014 precisely, whether the functional rift across layers bears a similar mechanism when the LLM starts mixing factual associations that have been stored within the MLP neurons. The existence of a parallel answer-generation process is extremely important for causal interventions on model behavior (Li et al., 2024): changing how a model should reason via up-(or down-) scaling certain neural pathways should take all the parallel pathways into account."
},
{
"url": "http://arxiv.org/abs/2402.13035v2",
"title": "Learning to Check: Unleashing Potentials for Self-Correction in Large Language Models",
"abstract": "Large language models (LLMs) have made significant strides in reasoning\ncapabilities, with ongoing efforts to refine their reasoning through\nself-correction. However, recent studies suggest that self-correction can be\nlimited or even counterproductive without external accurate knowledge, raising\nquestions about the limits and effectiveness of self-correction. In this paper,\nwe aim to enhance LLM's self-checking capabilities by meticulously designing\ntraining data, thereby improving the accuracy of self-correction. We conduct a\ndetailed analysis of error types in mathematical reasoning and develop a\ntailored prompt, termed \"Step CoT Check\". Then we construct a\nchecking-correction dataset for training models. After integrating the original\nCoT data and checking-correction data for training, we observe that models\ncould improve their self-checking capabilities, thereby enhancing their\nself-correction capacity and eliminating the need for external feedback or\nground truth labels to ascertain the endpoint of correction. We compare the\nperformance of models fine-tuned with the \"Step CoT Check\" prompt against those\nrefined using other promps within the context of checking-correction data. The\n\"Step CoT Check\" outperforms the other two check formats in model with lager\nparameters, providing more precise feedback thus achieving a higher rate of\ncorrectness. For reproducibility, all the datasets and codes are provided in\nhttps://github.com/bammt/Learn-to-check.",
"authors": "Che Zhang, Zhenyang Xiao, Chengcheng Han, Yixin Lian, Yuejian Fang",
"published": "2024-02-20",
"updated": "2024-02-23",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "The colossal number of parameters and extensive training data of Large Language Models (LLMs) have led to remarkable performances in reasoning tasks (Yao et al., 2023b; Wei et al., 2023), notably \u25b3Work done during an internship at Xiaobing.AI. *Equal contribution. \u2020Corresponding author. in mathematical problem-solving (Luo et al., 2023; Wang et al., 2023; Yu et al., 2023; Yuan et al., 2023). Despite these advancements, there is still consid- erable potential for enhancing the mathematical capabilities of LLMs. To further enhance the mathematical reason- ing capabilities of LLMs, self-correction has been proposed. Previous studies have shown that self-corrected LLMs can achieve higher accu- racy (Madaan et al., 2023a; Weng et al., 2023; Huang et al., 2022a; Saunders et al., 2022; Welleck et al., 2022; Miao et al., 2023; Chen et al., 2023). However, recent research (Huang et al., 2023a; Stechly et al., 2023; Valmeekam et al., 2023) in- dicates that such improvements often depend on external feedback, such as ground truth labels for determining the stopping point (Shinn et al., 2023; Kim et al., 2023). This reliance raises questions about the extent to which LLMs can autonomously self-correct their reasoning processes. In this paper, we explore how to enhance the self- correction capability of LLMs. Inspired by Tyen et al. (2024), we divide self-correction into two dis- tinct tasks: Self-Checking Feedback Gathering and Feedback-Driven Reasoning. We believe that the key is to address their inherent deficiency in self- checking capabilities and thereby obtain more accu- rate feedback. (Stechly et al., 2023; An et al., 2023). In order to reach this, we construct a checking- correction dataset and trying to training the model. Additionally, to ensure the model\u2019s capability in direct reasoning, we incorporate Direct Reasoning into our training approach. Our goal is to equip the model with strong self-checking capabilities, mak- ing it independent of external feedback or ground truth labels. We compared the performance of mod- els trained with checking-correction data in three formats of prompt: 1. All Direct Check, 2. Step Direct Check, and 3. Step CoT Check 1 (as illus- trated in Fig. 1). 1The specific prompts are detailed in the Appendix. B arXiv:2402.13035v2 [cs.CL] 23 Feb 2024 All Direct Check and Step Direct Check are akin to those described by An et al. (2023), directly checking the correctness of the entire reasoning path and individually checking the correctness of each step within the reasoning process. Step CoT Check builds on Step Direct Check. Instead of immediately judging each step as cor- rect or incorrect, the model engages in a chain of thought(CoT) (Wei et al., 2022) reasoning process to evaluate the validity of each step under exami- nation. Model evaluates correctness across three dimensions: (1) the reasonableness of the inference objective; (2) the accuracy of the computational formulation at each step; (3) the correctness of the computed results. Such detailed analysis enhances the precision and interpretability of the checking process, more applicable to models with larger pa- rameters. Compared to Shinn et al. (2023) and Kim et al. (2023), our method does not rely on ground truth to terminate the self-correction loop, which empha- sizes our approach\u2019s autonomous self-checking ca- pabilities. On the other hand, diverging from Tyen et al. 
(2024), our Step Check prompting includes all original steps but deliberately omits the check- ing results of preceding steps, which aims to pro- vide the model with a comprehensive view of the subsequent steps needing verification. Extensive experimental results demonstrate the effectiveness of our approach. Our contributions for this paper are as follows: 1. We demonstrate that LLMs markedly enhance their self-checking capabilities through spe- cialized training, achieving self-correction in- dependently of external data. Through our proposed method, LLMs exhibit a signifi- cant enhancement in self-checking capabil- ities within reasoning tasks. 2. We introduce a CoT prompting approach for analyzing mathematical reasoning tasks. This method categorizes error types and eluci- dates the reasoning process prior to judgment, thereby improving interpretability. This ap- proach markedly surpasses traditional direct answer verification techniques. 3. We have released a dataset for checking and correction, featuring fine-grained, step-level explanations for identifying and rectifying er- rors in reasoning paths, thereby facilitating future research in reasoning tasks.",
"main_content": "Chain-of-Thought (CoT) reasoning, initially pioneered by Wei et al. (2022), has significantly advanced the reasoning capabilities of large language models (LLMs). This approach, which enhances LLMs by incorporating detailed reasoning steps into few-shot examples, has been foundational in improving performance across various reasoning tasks. Subsequent research has expanded upon this, introducing methodologies like SelfConsistency (Wang et al., 2022), various prompting techniques (Zhou et al., 2022; Drozdov et al., 2022), Self-Training (Huang et al., 2022b), Verifier (Li et al., 2022), Tree of Thought (Yao et al., 2023a) and DialCoT (Han et al., 2023). The effectiveness of CoT has been demonstrated in domains including logical, commonsense, and mathematical reasoning. Despite the success, these models are not without their flaws. They are still prone to reasoning confusion, generating hallucinations, and computational errors. A prevailing strategy to rectify these undesired behaviors of LLMs is learning from feedback. Extensive research (Huang et al., 2022b; Madaan et al., 2023b; Gero et al., 2023; Jiang et al., 2023) has been undertaken to improve LLMs through the paradigm of learning from feedback. However,Huang et al. (2023b) found that selfimprovement is not effective when oracle labels are not available. This may be attributed to the limited capability of LLMs to identify errors. Previous works have primarily focused on self-correction, overlooking the enhancement of the models\u2019 error identification capabilities. This paper addresses the gap by meticulously constructing training data that includes steps for detecting and correcting errors, thereby strengthening the model\u2019s capability to identify mistakes. 3 Methodologies 3.1 Self-Correction Mechanism Analysis Assuming a dataset comprising N questions, the whole process serves as the direct reasoning, with self-checking feedback gathering. Should the feedback indicate an error, the model re-engages in reasoning based on that feedback. Initially, we simplify the problem by assuming the model\u2019s direct reasoning accuracy for all questions is p, the probability of the model marking a correct answer as incorrect is pc2w, and the probability of marking an incorrect answer as incorrect is pw2w. According to our hypothesis, a model untrained in self-checking lacks this capability and randomly assigns correct or incorrect labels; hence, we assume pc2w = pw2w = 1 2. 1. Following one round of reasoning, we obtain D(correct) = Np and D(wrong) = N(1 \u2212 p), where D is the quantity. 2. Upon correctness validation, half of the D(correct) answers are incorrectly labeled, denoted as D(c2w) = 1 2Np, and half of the D(wrong) answers remain erroneously labeled, represented as D(w2w) = 1 2N(1 \u2212p). 3. Re-reasoning the answers marked as incorrect, 1 2Np(1\u2212p) from D(c2w) are reasoned incorrectly, while 1 2N(1 \u2212p)p from D(w2w) are corrected. Post self-correction, we observe no change in accuracy. In practice, Although the model has some capability to determine whether an answer is correct or not, i.e., the accuracy of the check is higher than 50%, the model often makes mistakes in complex examples, and the probability of a second reasoning to correct it is less than p. And the probability of an originally correct example being changed to an incorrect example is much higher. 
Intuitively, enhancing the model\u2019s self-correction capability, according to the reasoning above, would decrease pc2w and increase pw2w, leading to a reduction in D(c2w) and an increase in D(w2w), thereby enabling effective self-correction. The potential accuracy improvement through self-correction is quantified as: (pw2w \u2212pc2w)N(1 \u2212p)p. (1) When external ground truth labels assist the process, the reasoning outcomes that match these labels do not undergo self-correction. This is succinctly expressed as pc2w = 0, bringing improvements in accuracy. From this observation, we infer that enhancing the self-checking capabilities of LLMs can, to a certain extent, replace the need for external ground truth labels, thereby ensuring that the improvement in accuracy is greater than zero. Consequently, this enhances the self-correction capability of LLMs. 3.2 Checking-Correction Data Generation 3.2.1 Checking Data Generation We use a generator model Mg to collect questionanswer pair x \u2208X that lead to incorrect results. x can be decomposed into (q, a), where q represents the question and a represents the answer that contains errors in its reasoning path. We further break down a into fine-grained intermediate steps s1, s2, ..., sK (An et al., 2023), with each sk representing a step in the reasoning process. A feedback model Mf is prepared, for all x, we have (R1, label1), \u00b7 \u00b7 \u00b7 , (Rk, labelk) \u223c Mf(q, a, A) , where k denotes the step number at which the first error occurs. Mf(q, a, A) represents the process of examining each step within a incorrect answer a given a question q and a correct answer A, continuing until an error is detected, with label \u2208{True, False}. Rk represents the detailed CoT check for step sk. We experiment with three distinct prompting strategies to generate training data. Assuming the model to be trained is denoted as M. All Direct Check: The entire pair x = (q, a) is input into M. For an x containing errors in its reasoning path, M directly identifies the erroneous step, i.e., k \u223cM(q, a). Step Direct Check: Due to the limited capabilities of the smaller model, we employ M checking each step individually for correctness. For all intermediate steps sk, k \u2208[1, K], to provide M with a holistic view when checking sk, the complete answer a is also input. M renders a more accurate and clear judgment by considering the steps following sk that need to be assessed. We obtain labelk \u223cM((q, a), sk), where M((q, a), sk) indicates checking the correctness of step sk given the question and the full answer. Due to the unique nature of mathematical reasoning, the check terminates upon finding the first labelk = False, and subsequent steps are disregarded (Lightman et al., 2023). Step CoT Check: Unlike the Step Direct Check, which outputs a label for each step directly, this method requires the model to analyze step sk for correctness from three dimensions: 1. Reasoning Goal Appropriateness: The model assesses whether the computational goal of sk is conducive to solving the question. If the step\u2019s objective is unreasonable or redundant, it is deemed incorrect. 2. Computational Expression Correctness: The model identifies the known conditions related to sk\u2019s computational goal and evaluates if the computation aligns with the problem statement and whether the derived expression achieves the computational goal correctly. 3. 
Verification through Inverse Operations: The correctness of the computation is checked by performing inverse operations on the derived expression. Therefore, for all intermediate steps sk, we have (Rk, labelk) \u223c M((q, a), sk). If step sk is determined to be incorrect after these checks, a concise summary indicating the error, denoted as Fk, is then extracted from Rk. 3.2.2 Correction Data Generation The correction model Mc is required to generate a revised answer given (q, a, Fk), where a contains incorrect steps. We can derive c \u223c Mc((q, a, Fk)), where c represents the corrected answer. 3.2.3 Details of Data Generation We use GPT-4-1106-preview (OpenAI et al., 2023) as the feedback model Mf, and LLaMA-7B and LLaMA-13B (Touvron et al., 2023a) as the generator models Mg, which are fine-tuned for one epoch on the GSM8K (Cobbe et al., 2021) training set in order to learn the CoT format. After obtaining the step-level checks, to save costs, we use GPT-3.5-turbo (OpenAI, 2023) as the correction model Mc to correct errors in the original reasoning paths. We keep only the instances that have been successfully rectified to correct answers, ultimately yielding approximately 4,500 checking-correction data points for faulty reasoning. Additionally, we select 2,000 correct CoT traces and generate confirmation checks on them (the specific prompts are detailed in the Appendix). 3.3 Data Quality We manually review 50 entries from the Step CoT Checking dataset generated by GPT-4 to ensure data quality. Among these, 43 entries accurately identify the erroneous step and provide a precise analysis, 5 entries pinpoint the incorrect step but offer an analysis that lacks sufficient accuracy, and 2 entries fail to identify the location of the error. We conclude that the data generated by GPT-4 meets a relatively reliable standard. 3.4 Finetune We integrate data from Direct Reasoning, Check, and Feedback-Driven Reasoning for training, a process we term Feedback-Enhanced Reasoning Finetuning. Within the three settings of the Check task, the primary distinction in training lies in the differentiation between All Direct and Step Check. \u2022 Direct Reasoning + All Direct Check + Feedback-Driven Reasoning: Since the All Direct Check provides the question and answer, directly identifying the incorrect steps, the volume of training data depends solely on the size of the question-answer pair set X, irrespective of the number of steps. \u2022 Direct Reasoning + Step Check + Feedback-Driven Reasoning: Step Check necessitates that the model learns the correctness or error of each step, so the total volume of training data is contingent on both the size of set X and the number of steps. 4 Experiments 4.1 Dataset CoT Datasets Our research employs the GSM8K dataset (Cobbe et al., 2021), consisting of 7,473 arithmetic problems at the elementary school level. To facilitate comparison, except for generating the checking-correction data, we did not use any other generated CoT traces to train our models. Mix Data We procured a composite dataset amalgamating CoT data with checking-correction data to form the training corpus. An extensive series of experimental investigations was undertaken to ascertain the ideal proportion of inaccurately corrected to accurately validated instances within a corpus of 4,000 annotated correction records. Ultimately, we determined the optimal dataset composition for each evaluative approach.
For the \u201cAll Direct Check\u201d and \u201cStep Direct Check\u201d methodologies, a data distribution of 3,000 inaccurately corrected to 1,000 accurately validated instances was selected. Conversely, for the \u201cStep CoT Check\u201d approach, a skewed data ratio of 3,700 to 300 was deemed optimal. A comprehensive exposition of experimental outcomes and the corresponding analysis regarding data ratios is delineated in Section 4.5. 4.2 Fine-Tuning and Evaluation We experiment on LLaMA-2 (Touvron et al., 2023b) models with 7B and 13B parameters, trained on four Nvidia A100 80GB GPUs. The models undergo full parameter fine-tuning, employing a learning rate of 8 \u00d7 10^-6, over three epochs, with a cumulative batch size of 32. The effectiveness of the checkpoints is evaluated on the GSM8K benchmark and three additional datasets: MultiArith (Roy and Roth, 2015), ASDiv (Miao et al., 2020), and SVAMP (Patel et al., 2021). Our evaluation strategy validates our method\u2019s resilience across varied data distributions. Each dataset contains elementary mathematical problems with distinct lexical patterns. Figure 1: A schematic representation of the full self-correction process. The process is divided into three tasks. Task 1 (Direct Reasoning) involves direct reasoning for a given question. Task 2 (Self-Checking Feedback Gathering) examines the direct reasoning path and provides feedback at the end. If the Task 2 check identifies errors, Task 3 (Feedback-Driven Reasoning) is then initiated: the model corrects the erroneous answer given the question, the original answer, and the feedback.
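As a rough picture of the fine-tuning configuration above, here is a minimal Hugging Face Trainer sketch matching the stated hyperparameters (learning rate 8e-6, three epochs, effective batch size 32 across four GPUs). The model id, the toy dataset class, and the per-device batch split are assumptions for illustration, not the actual training script released by the authors.

import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = 'meta-llama/Llama-2-13b-hf'        # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

class ToyMix(Dataset):
    # Stand-in for the tokenized CoT + checking-correction training mixture.
    def __init__(self, texts, max_len=256):
        enc = tokenizer(texts, padding='max_length', truncation=True,
                        max_length=max_len, return_tensors='pt')
        self.input_ids = enc['input_ids']
    def __len__(self):
        return self.input_ids.size(0)
    def __getitem__(self, i):
        ids = self.input_ids[i]
        labels = ids.clone()
        labels[ids == tokenizer.pad_token_id] = -100   # do not train on padding
        return {'input_ids': ids,
                'attention_mask': (ids != tokenizer.pad_token_id).long(),
                'labels': labels}

args = TrainingArguments(
    output_dir='step-cot-check-13b',
    learning_rate=8e-6,                  # reported learning rate
    num_train_epochs=3,                  # reported number of epochs
    per_device_train_batch_size=2,       # 2 per GPU x 4 GPUs x 4 accumulation = 32 effective
    gradient_accumulation_steps=4,
    bf16=True,
    logging_steps=50,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=ToyMix(['Question: ... Step 1: ... Answer: 48']))
trainer.train()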
We test our approach on the full 1,319-instance GSM8K set, and on randomly sampled subsets from MultiArith (400), ASDiv (500), and SVAMP (500). To fairly evaluate model capability in verifying CoT trace accuracy, we manually annotate the first incorrect step in 200 sampled GSM8K traces generated by LLMs, which has 100 correct traces and 100 incorrect. Checkpoint evaluations are conducted using the vLLM library (Kwon et al., 2023). For direct reasoning path generation, we set the temperature parameter to zero. The check and correction mechanism employs a temperature of 0.5, complemented by a top-p value of 0.85 and a top-k setting of 30. For all reasoning tasks, we apply a repetition penalty of 1.2 (Artiges et al., 2019), which effectively mitigates the generation of duplicate content. 4.3 Main Results and Analysis Capability to Check Tab. 1 delineates the outcomes of our evaluation across 200 CoT traces. Our analysis extends to models prompted via a few-shot technique without fine-tuning, alongside GPT-3.5 and GPT-4. These models are tasked with adjudicating the veracity of entire reasoning paths, analogous to our \u201cAll Direct Check\u201d method. The base LLaMA models exhibit marginal superiority over random chance (50%) in their checking capability. Despite in direct reasoning task, GPT-3.5-turbo attaining a 75.9% (Tyen et al., 2024) accuracy on GSM8K with standard prompting, its proficiency in evaluating reasoning paths is markedly deficient, with a mere 70% success rate in accurate judgment. GPT-4 demonstrates a substantial improvement; however, it is noteworthy that GPT-4\u2019s direct inference accuracy on GSM8K stands at an exceptional 95.5% (Tyen et al., 2024). This discrepancy suggests that self-correction in these LLMs does not enhance reasoning task performance due to a dissonance between the capability to reason and to Method Model Correct Wrong Average Step Few-Shot GPT-3.5-turbo* 93.0 47.0 70.0 / GPT-4-1106-preview* 85.0 93.0 89.0 / LLaMA-2-7B 74.0 35.0 54.5 / LLaMA-2-13B 69.0 49.0 59.0 / All Direct Check LLaMA-2-7B 84.0 63.0 73.5 28.0 Step Direct Check 80.0 65.0 72.5 22.0 Step CoT Check 63.0 77.0 70.0 30.0 All Direct Check LLaMA-2-13B 61.0 83.0 72.0 36.0 Step Direct Check 76.0 79.0 77.5 32.0 Step CoT Check 80.0 83.0 81.5 48.0 Table 1: The accuracy(%) of checking sampled 200 reasoning paths. Correct and Wrong respectively represent ratio of true positive and true negative. And Step denotes the rate of successful identifying the first wrong steps in 100 wrong traces. check reasoning traces. In essence, the LLMs exhibit significantly lower precision in verifying the correctness of a reasoning trace compared to direct problem solving for the same question (q), thereby increasing the likelihood of erroneously altering a correct response. Our methodology posits that if any step in a trace is deemed incorrect, the entire answer is classified as incorrect. All the approaches surpass a 70% accuracy threshold, signifying a noteworthy enhancement over the unrefined base models. This evidences that our model, through training, augments its self-checking capabilities. The LLaMA-2-13B model, in particular, demonstrates superior performance in the \u201cStep CoT Check\u201d, not only achieving an 81.5% accuracy in trace evaluation but also exhibiting an elevated success rate in identifying the initial incorrect step, compared to alternative methods. 
However, the \u201cStep CoT Check\u201d does not maintain a discernible edge on the LLaMA2-7B model, suggesting that this strategy is more efficacious with models possessing larger parameters. Additionally, the \u201cStep Direct Check\u201d methodology exhibits a lower likelihood of pinpointing the first incorrect step relative to the \u201cAll Direct Check\u201d approach. This is attributed to the potential of misidentifying a preceding correct step as erroneous, in contrast to the \u201cAll Direct Check\u201d which renders a singular judgment, enhancing the probability of detecting the first incorrect step. Performance Of Reasoning Combined Selfcorrect Tab. 2 and Fig. 2 present the efficacy of models trained using our three methods on test sets. Across all models , a univocal enhancement in accuracy post a self-correction is observed for each Figure 2: Accuracy(%) of checking whether the reasoning traces obtained by itself when self-correcting on the test data sets dataset. Subsequent to the substitution of a subset of the data with checking-correction instances, there is a modest decrement in the models\u2019 direct reasoning accuracy. This divergence from the findings of prior study (An et al., 2023) is ascribed to the distinct ratios of direct CoT data to correction data employed in our experimental setup. Models trained on mixed data consistently surpass those solely fine-tuned on CoT across the majority of datasets. Notably, the improvement on LLaMA-213B models is more pronounced than on LLaMA2-7B models, with an average increase of 3.78% for the latter and 5.47% for the former. This discrepancy is largely attributable to the more substantial reduction in direct reasoning performance on LLaMA-2-7B post data substitution. On LLaMA2-13B, the \u201cStep CoT Check\u201d method significantly outperforms the other check methodologies, indicate that on models with larger parameters, By categorizing the types of errors and explicit disMethod Model GSM8K ASDiv MultiArith SVAMP CoT S-C CoT S-C CoT S-C CoT S-C CoT LLaMA-2-7B 32.60 / 26.80 / 73.50 / 35.00 / All Direct Check 27.22 32.45 25.20 31.60 71.25 76.75 32.00 35.80 Step Direct Check 25.09 32.37 26.20 29.20 72.00 79.00 30.00 35.60 Step CoT Check 27.90 34.80 26.00 31.00 72.75 80.25 31.60 36.40 CoT LLaMA-2-13B 45.64 / 45.20 / 85.25 / 47.20 / All Direct Check 43.52 48.07 41.40 45.80 87.00 89.50 47.00 50.80 Step Direct Check 43.67 48.75 41.20 45.40 82.50 86.50 45.00 49.20 Step CoT Check 45.34 51.18 44.20 49.00 88.25 92.00 48.20 53.00 Table 2: Correct rates (%) on the test set combined with the self-correction mechanism. For each example in the table, the number on the left represents the correct rate, and the number on the right represents the correct rate after self-correction(S-C). playing the process of checking, \u201cStep CoT Check\u201d is possible to provide more precise feedback. The precision in adjudicating the correctness of answers aligns with the outcomes from the sampled 200 reasoning paths. However, it is noteworthy that LLaMA-2-7B exhibits marked improvements, attributable to the preponderance of erroneous traces in the initial sample, which introduced inconsistencies. Furthermore, despite the lower likelihood of accurately identifying the precise erroneous step in the LLaMA-2-7B model, self-correction induces improvement. 
This suggests that the model\u2019s capacity to recognize an incorrect answer\u2014albeit without pinpointing the error\u2014affords a substantial number of incorrect paths the opportunity for regeneration and subsequent rectification. 4.4 Ablation Study To examine the propensity of LLMs to commit specific errors and to scrutinize our tripartite methodology for assessing the validity of each step in the reasoning sequence, we conducted an ablation study focusing on the \u201cStep CoT Check\u201d implemented on the LLaMA-2-13B model. The findings are catalogued in Tab. 3. Of the 541 incorrect paths accurately detected, 20.1% were amended successfully. Predominantly, errors within the computational procedures emerged as the most frequent, constituting approximately 61.1% of the total errors identified. Such errors typically arise from LLMs misjudging question relationships or erroneously invoking nonexistent conditions, leading to incorrect equation formulation. These computational missteps proved challenging to rectify, with a mere 18% being corrected during the third step, signaling an avenue for future enhancement. Goal errors, conversely, represent a category of Goal Formula Calculate Total Number of wrong cases 131 329 81 541 Successfully correction 31 61 17 109 Performance gains 1.74% 3.11% 0.99% 5.84% Table 3: The results of ablation study. In the table, Performance gains is the proportion of \u201cwrong to correct\u201d minus the number of \u201ccorrect to wrong\u201d cases. errors more amenable to correction, with a rectification rate of 24%. Regarding computation, LLMs were found to be more susceptible to miscalculations when the equation involved multiple numerical entities or integrated unknowns. 4.5 Further Analysis on Data Proportion The experiments in this section use the Step Check prompt, and fine tune on LLama-2-13B. The performance of the check has a strong relationship with the data ratio. To discern the influence of the proportion of correct step confirmations to incorrect step corrections within a pool of 4,000 training data replacements, we adjusted their ratios and documented the outcomes in Tab.4 . On average, a correct-checking datum comprised 3.33 verified steps, whereas an incorrect-checking datum included an average of 1.24 correct steps verification and 1.0 erroneous step detection. A model\u2019s judgment regarding the validity of steps can become skewed toward correctness when the ratio of correct to incorrect steps is disproportionately high, engendering bias. Conversely, an excess of incorrect-checking data may predispose the model to erroneously invalidate answers, thus increasing the \u201ccorrect to wrong\u201d, and thereby diminishing performance. Within our experimental framework, it\u2019s observed that maintain an incorrect to correct step ratio of approximately 0.65 in the training set enPath\u2713: Path\u00d7 Step\u2713: Step\u00d7 Ratio Check Accuracy(%) Correct Rate(%) Correct Wrong Average 2000:2000 2019:9260 0.218 93.63 36.21 60.80 45.34 3000:1000 3028:7225 0.419 87.96 46.95 64.52 47.46 3500:500 3525:6117 0.576 84.53 61.84 72.18 50.27 3700:300 3723:5681 0.662 78.93 75.03 76.80 51.18 4000:0 4027:4990 0.807 46.13 88.14 69.22 48.6 Table 4: Results of the data scaling experiment. The table displays how the number of correct and incorrect reasoning paths in the training data have an effect on the accuracy of the check and the correct rate of the reasoning combined with self-correction. 
ables the model to maintain a commendable level of accuracy in discerning both correct and incorrect steps. This equilibrium facilitated the highest performance enhancement through a single iteration of the self-correction process. More checking-corection data can continuously improve the reasoning capability of LLM. Different from prior experiments, we utilize the entire GSM8K training dataset, supplementing it with additional checking-corection data that we have generated. Our objective is to investigate the impact on the model\u2019s reasoning faculties when the quantity of checking-corection data is escalated, while the original reasoning data remains constant. We adhere to the optimal incorrect-to-correct data ratio ascertained from preceding experiments. The findings are depicted in Fig. 3, illustrating a progressive enhancement in both direct reasoning and self-corrected reasoning as the volume of checkingcorection data is augmented. Similar to the insights from LEMA (An et al., 2023), we observe that larger models are adept at assimilating from incorrect correction data, thereby refining their direct reasoning capabilities. Additionally, an increased corpus of checkingcorection data markedly bolsters the model\u2019s selfcorrection proficiency. The amalgamation of reasoning with self-correction exhibits a notably accelerated improvement in contrast to direct reasoning alone. These results endorse the inclusion of error-annotated correction data within the training regimen of an inference model, positing that an expanded dataset catalyzes improved inferential processes within the model. Consequently, the model\u2019s self-corrective capacity is significantly enhanced, culminating in substantial performance advancements through the mechanism of self-correction. Figure 3: Accuracy (%) of direct reasoning and reasoning combined with self-correction on GSM8K as checking-corection data types increase 5 Conclusion In this paper, we focused on improving the self-correction of LLMs by enhancing their selfchecking abilities. By conducting an in-depth analysis of error patterns in mathematical reasoning, we have developed the \u201cStep CoT Check\u201d prompt and constructed a checking-correction dataset. This dataset, combined with the original CoT data, has been instrumental in training models that exhibit improved self-checking and self-correction, reducing dependence on external feedback or ground truth labels for error identification and correction. Notably, models fine-tuned with our \u201cStep CoT Check\u201d prompt demonstrated superior performance over those trained with alternative prompts. This approach not only enhances the precision of feedback but also significantly increases correctness rates, showing the effectiveness of our method in refining LLMs for complex reasoning tasks. Limitations Our paper has some limitations, which we leave for future work: The computational cost of the step level Check Compared to direct reasoning and checking the whole path at once, although step-by-step check leads to better performance, it requires more inference rounds. On the GSM8K dataset, inference using step-by-step checking combined with correction process requires 4.8 model calls compared to only one for direct reasoning and 2.6 for checking the whole path at once. In practical use, there is a trade-off between performance and resource consumption. Lack of applicability to broader reasoning tasks. 
We analyze the types of mistakes in mathematical reasoning and propose a set of hints for checking the reasoning process. Our approach focuses on checking and correcting errors in mathematical reasoning problems, and it remains to be investigated in future work how to extend our approach to a wider range of reasoning tasks. Lack of experimentation on larger data sizes and scale language models. Due to constraints on computational resources and funding, we do not conduct experiments on larger scale language models(>20B), or generate additional CoT and correction data like other works. Thus the performance on larger LLMs and sufficient amount of training data remains undetermined. We will further explore the performance of our method on larger data sizes and scale language models in future research. Ethics Statement The proposed method has no obvious potential risks. All the scientific artifacts used/created are properly cited/licensed, and the usage is consistent with their intended use. Also, we open up our codes and hyper-parameters to facilitate future reproduction without repeated energy cost."
}
]
}