AcademicEval / intro_28K /test_introduction_long_2404.16807v1.json
{
"url": "http://arxiv.org/abs/2404.16807v1",
"title": "Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning",
"abstract": "Generative Commonsense Reasoning (GCR) requires a model to reason about a\nsituation using commonsense knowledge, while generating coherent sentences.\nAlthough the quality of the generated sentences is crucial, the diversity of\nthe generation is equally important because it reflects the model's ability to\nuse a range of commonsense knowledge facts. Large Language Models (LLMs) have\nshown proficiency in enhancing the generation quality across various tasks\nthrough in-context learning (ICL) using given examples without the need for any\nfine-tuning. However, the diversity aspect in LLM outputs has not been\nsystematically studied before. To address this, we propose a simple method that\ndiversifies the LLM generations, while preserving their quality. Experimental\nresults on three benchmark GCR datasets show that our method achieves an ideal\nbalance between the quality and diversity. Moreover, the sentences generated by\nour proposed method can be used as training data to improve diversity in\nexisting commonsense generators.",
"authors": "Tianhui Zhang, Bei Peng, Danushka Bollegala",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Commonsense reasoning is the ability to make logi- cal deductions about concepts encountered in daily life, and is considered as a critical property of intel- ligent agents (Davis and Marcus, 2015). Concepts are mental representations of classes and are ex- pressed using words in a language (Liu et al., 2023). Given the inputs, the GCR task requires a model to generate a high quality sentence that is gram- matical and adheres to commonsense, evaluated by its similarity to a set of human-written reference sentences covering the same set of concepts (Lin et al., 2020). Often there exists multiple relationships between a given set of concepts, leading to alternative rea- soning paths that take diverse view points. For ex- ample, given the four concepts dog, frisbee, throw and catch, different sentences can be generated as Dog; Catch; Frisbee; Throw A dog leaps to catch a thrown frisbee. The dog catches the frisbee when the boy throws it. A man throws away his dog's favourite frisbee expecting him to catch it in the air. A\u00a0dog catches\u00a0a\u00a0frisbee thrown\u00a0to it. A dog catches a frisbee thrown by its owner. A dog jumps in the air to catch a frisbee thrown by its owner. Figure 1: An example of diverse generated sentences sets in CommonGen (Lin et al., 2020) dataset. The gen- eration shown at the bottom (in green ) are considered by human annotators to be more diverse than those at the top (in red ). shown in Figure 1. Although all sentences shown in Figure 1 are grammatical, the bottom set ex- presses diverse view points (e.g. from the dog\u2019s as well as the man\u2019s) compared to the set at the top. Apart from the generation quality, diversity is also an important factor in text generation because the low-diversity texts tend to be dull, repetitive or biased towards a particular view point (Tevet and Berant, 2021). Diversity is an important considera- tion in many Natural Language Generation (NLG) applications, such as story generation (Li et al., 2018), paraphrase generation (Gupta et al., 2018), and GCR (Yu et al., 2022; Liu et al., 2023). In GCR tasks, the input text often provides insuffi- cient information to support diverse reasoning and generate multiple plausible outputs. Therefore, the diversity present in GCR task enables the explo- ration of different perspectives or all possible out- comes for a real-world situation. Existing methods promote diversity through special decoding strate- gies, such as nucleus sampling (Holtzman et al., 2019), or encoding interventions such as random noise injection (Gupta et al., 2018) or Mixture of Experts (MoE) approaches (Shen et al., 2019). We propose In-Context Diversification (ICD), a computationally-efficient and accurate method to improve the diversity in GCR, where the sentences are generated from a pre-trained LLM, and strikes arXiv:2404.16807v1 [cs.CL] 25 Apr 2024 a fine-balance between the output diversity and quality. ICD uses an ICL approach to increase the diversity of the sentences generated by an LLM, while maintaining the quality of the generation. ICD is a two-step process where it first lets an LLM to freely generate high-quality sentences that are grammatical, commonsense bearing and cover all the given input concepts. Next, ICD uses a user- specified diversity metric to evaluate the diversity of the generated sentences. If the diversity is low, ICD provides feedback to the LLM, instructing it to generate more diverse sentences considering the already generated sentences. 
Given that ICD uses LLMs to generate diverse sentences via ICL, without updating the parameters of the LLMs, an interesting and open question is whether an LLM can accurately judge the diversity of a given set of sentences covering a common set of concepts. To answer this question, we conduct an experiment where we instruct GPT3.5-turbo to judge the diversity of a set of input sentences according to a five-scale grading system, and convert the predicted grades into binary judgements (i.e. diverse vs. non-diverse). We compare the LLM-assigned grades against those by a group of human annotators, and find a moderate-level agreement (Cohen's Kappa of 0.409) between human and LLM judgements, demonstrating that LLMs can indeed be instructed to produce diversity judgements for GCR tasks. We evaluate ICD on three GCR tasks/datasets: CommonGen (Lin et al., 2020), ComVE (Wang et al., 2020), and DimonGen (Liu et al., 2023). We find that our proposed ICD balances diversity and quality appropriately, improving their harmonic mean by at least 6% over that of a default baseline. Moreover, the sentences generated by ICD can be used as training data to improve diversity in a Seq2Seq model (Sutskever et al., 2014; Lewis et al., 2020), producing results that are comparable to models trained on knowledge graphs or human-written text corpora (Liu et al., 2021; Fan et al., 2020; Li et al., 2021).",
"main_content": "Diverse Text Generation. A variety of methods have been proposed to enhance the diversity of NLG. Sampling-based decoding is an effective method to increase the generation diversity. Holtzman et al. (2019) proposed nucleus sampling to generate diverse content at the generation stage. Truncated sampling (Fan et al., 2018) prunes and then samples the tokens based on the probability distribution. Furthermore, Shen et al. (2019) proposed an MoE approach to diversify translation outputs. Moreover, incorporating external corpora in the MoE further promotes diversity, such as by using a knowledge graph (Yu et al., 2022; Hwang et al., 2023) or by a collection of retrieved sentences (Liu et al., 2023). Although LLMs have reported superior performance in numerous Natural Language Processing (NLP) tasks (Touvron et al., 2023; OpenAI, 2023b,a), to the best of our knowledge, diversifying their generations in commonsense reasoning with ICL has not been explored in prior work on GCR. In-Context Learning. Recent studies demonstrate that LLMs can exhibit robust few-shot performance on a variety of downstream tasks through ICL (Brown et al., 2020). ICL is a technique for instructing an LLM using one or more examples for a particular text generation task. The generated text is conditioned on both the input as well as the instruction prompt. Wang et al. (2023) show that in ICL, label words in the demonstration examples function as anchors, which aggregate semantic information to their word representations in the shallow (closer to the input) layers, while providing that information to the final predictions performed by the deeper (closer to the output) layers. In contrast to fine-tuning-based methods, ICL is computationally lightweight because it does not update the parameters of the LLM. Therefore, ICL is an attractive method when integrating task-specific knowledge to an LLM by simply changing the prompt and the few-shot examples (Dong et al., 2022). 3 In-context Diversification We consider the problem of generating a set of diverse sentences that express commonsense reasoning, either by covering a set of given concepts (in CommonGen and DimonGen) or by providing an explanation for a given counterfactual statement (in ComVE). Formally, given a sequence (a set of concepts or a statement) X = {x1, . . . , xm}, the goal of GCR is to generate a set of grammatically correct and commonsense bearing sentences Y = {y1, . . . , yn}, where yi is the i-th output generated by the model with probability p(yi|X). Moreover, we require that the generated sentences {y1, . . . , yn} to be lexically as well as semantically diverse. Default Examples: Given several key words: [SRC], generate one coherent sentences using background commonsense knowledge: [TGT] Test instruction: Given several key words: [INPUT], generate one coherent sentence using background commonsense knowledge: [OUTPUT] Diversified Examples: Given several key words: [SRC], generate one coherent sentence using background commonsense knowledge: [TGT] Test instruction: Step1: Given several key words: [INPUT], generate [N] different and coherent sentences using background commonsense knowledge: [PRV] (If the diversity of [PRV] is low) Step2: You have generated the following sentences: [PRV], try to provide other reasonable sentences: [OUTPUT] (a) (b) Figure 2: An example of default and diversified prompts is shown for an instance selected from the CommonGen dataset. Here, the default prompt shown in Figure 2a is taken from Li et al. (2023). 
3.1 Sentence Generation

To explain our proposed ICD, let us consider GCR on CommonGen, where we must generate a set of sentences Y such that each sentence contains all of the input concepts X, as shown in Figure 2a. Given an LLM, we can design a prompt that contains a task-specific instruction and one or more examples containing the input concepts (denoted by [SRC] in Figure 2) and the corresponding human-written sentences containing all given input concepts (denoted by [TGT]), to instruct the LLM to generate output sentences Y (denoted by [OUTPUT]) for a given set of input concepts X (denoted by [INPUT]). We refer to a prompt of this nature as a default prompt, and to the corresponding set of generated sentences as Sdef. Note that the default prompt does not necessarily guarantee that the generated set of sentences will be diverse, and an LLM could return sentences that are highly similar to each other. To address this issue, we propose a diversified prompt, as shown in Figure 2b. Specifically, the diversified prompt operates in two steps. In Step 1, we require the LLM to generate N sentences that are different, in addition to being coherent and commonsense-bearing. Next, we use a suitable diversity metric to evaluate the level of diversity among the generated set of sentences. If the diversity of the generated sentences is low, in Step 2, we show those sentences to the LLM and instruct it to generate sentences that are different from those. As the criterion for triggering Step 2, we check whether the exact same sentence has been generated multiple times by the LLM during Step 1. The final set of generated sentences is denoted by Sdiv.

3.2 Diversity-based Sampling

Because of the limited availability of human-written reference sentences for evaluating GCR models, there exists a trade-off between quality and diversity when generating sentences for GCR tasks (this trade-off is further empirically verified in § 5.1). Simply maximising for diversity often leads to generations that do not cover the input concepts in a natural way. For example, a randomly selected set of sentences would be highly diverse, yet unlikely to capture the input concept sets. On the other hand, if we force an LLM to generate sentences that contain all of the input concepts, it might find it difficult to generate semantically diverse sentences and resort to trivial lexical or syntactic diversity tricks such as morphological inflections or word-order permutations. To address this issue, we propose a diversity-based sampling method, shown in Algorithm 1.

Algorithm 1: In-Context Diversification (ICD)
Input: sets of sentences Sdef and Sdiv, generated respectively from the default and diversified prompts; the number of desired output sentences N; and a diversity metric f.
Output: output set of sentences S*
  S* <- {}; alpha <- 0
  for each subset S of (Sdef U Sdiv) do
    if (|S| = N) and (f(S) >= alpha) then
      alpha <- f(S); S* <- S
    end if
  end for
  return S*
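The candidate pool in Algorithm 1 is small enough to enumerate directly. Below is a minimal Python sketch of the subset search, assuming diversity_fn is any user-specified metric that returns a non-negative score for a set of sentences (e.g. 100 minus self-BLEU); the function names are ours, not from the paper's code.

# Minimal sketch of the subset search in Algorithm 1.
from itertools import combinations

def icd_select(s_def, s_div, n, diversity_fn):
    # Union of the default-prompt and diversified-prompt generations.
    pool = list(set(s_def) | set(s_div))
    best_set, best_score = None, 0.0
    # C(|pool|, n) subsets; small for the values of n used in GCR tasks.
    for subset in combinations(pool, n):
        score = diversity_fn(subset)
        if score >= best_score:
            best_score, best_set = score, list(subset)
    return best_set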
Consider that the default prompt provides a set Sdef of sentences that have not been optimised for diversity (and are thus likely to have higher quality), while the diversified prompt provides a set Sdiv of sentences that are further refined for diversity (and are thus likely to have higher diversity). We wish to find a set of sentences that simultaneously satisfies the following criteria: (a) it must contain exactly N sentences, as specified by the user, and (b) it must have a high diversity score, measured using a user-specified diversity metric f, where f(S) >= 0. We formalise this as a subset search problem, where we compute the union Sdef U Sdiv and search for the subset S* that jointly satisfies those criteria, following the procedure detailed in Algorithm 1. Although the total number of subsets of size N is C(|Sdef U Sdiv|, N), it is sufficiently small for the values of N in our GCR tasks, which makes this subset search fast in practice.

4 Experimental Settings

4.1 Tasks and Datasets

We evaluate ICD on three GCR tasks as follows.

Constrained Commonsense Reasoning: In the CommonGen (Lin et al., 2020) benchmark, a model is required to generate a sentence covering a given set of concepts such that background commonsense knowledge associated with the input concepts is reflected. This dataset contains 35K distinct concept sets (train = 32651, dev = 993, and test = 1497) with corresponding human-written sentences (train = 67389, dev = 4018, and test = 6042). Each instance contains on average 3-5 input concepts.

Commonsense Explanation Reasoning: ComVE (Wang et al., 2020) is part of the SemEval 2020 commonsense validation task, where, for a given counterfactual statement, a model is required to generate an explanation describing why the statement is nonsensical. This dataset contains 10K examples (train = 8532, dev = 476, and test = 992), where each example contains three reference outputs.

Diversified GCR: DimonGen (Liu et al., 2023) involves generating diverse sentences that describe the relationships between two given concepts. It is a challenging task because it requires generating reasonable scenarios for a given pair of concepts without any context. This dataset contains 17109 instances (train = 15263, dev = 665, test = 1181), where each instance has 3-5 references.

4.2 Evaluation Metrics

We measure both the quality and the diversity of the sentences generated by models, using the metrics described next.

4.2.1 Quality Metrics

We compare a sentence generated by a model against a set of human-written references to evaluate the quality of the generation using several metrics: BLEU (Papineni et al., 2002) measures n-gram precision against human reference texts, SPICE (Anderson et al., 2016) measures the semantic propositional overlap between two sentences, and BERTScore (Zhang et al., 2020) uses contextualised word embeddings to measure the semantic similarity between tokens in two sentences. In alignment with prior work (Yu et al., 2022; Liu et al., 2023; Hwang et al., 2023), when multiple candidate sentences are generated for a test case, we select the highest-scoring candidate for evaluating quality.

4.2.2 Diversity Metrics

Pairwise Diversity: We use self-BLEU (Zhu et al., 2018) to measure n-gram overlap among sentences within each generated set. The metric computes the average sentence-level similarity between all pairwise combinations of the generations in the generated set. 
Note that unlike BLEU, self-BLEU does not require human-generated references for measuring diversity. We use self-BLEU-3/4 (corresponding to n = 3 and n = 4) in our experiments. Lower self-BLEU scores indicate higher lexical diversity.

Corpus Diversity: To measure the variety within the generated text corpus, we employ Distinct-k (Li et al., 2016), which calculates the ratio of unique k-grams to the total number of k-grams. This metric is particularly useful for adjusting for the bias of LLMs toward generating longer sequences, ensuring that diversity is not artificially inflated by sentence length. Additionally, we use Entropy-k to evaluate the distributional uniformity of k-gram occurrences, considering word frequencies for a more nuanced view of diversity. Higher Distinct-k and Entropy-k scores indicate higher diversity.

Semantic Diversity: All previously described diversity metrics are limited to evaluating lexical diversity. To measure diversity at a semantic level, we propose self-cosSim, the average pairwise cosine similarity between generated sentences, computed using sentence embeddings obtained from SimCSE (Gao et al., 2021). Likewise, we define self-BERTScore as a diversity metric that averages the BERTScores over all generated sentence pairs. Lower self-cosSim and self-BERTScore values indicate higher semantic diversity.

4.2.3 Combined Metrics

We would prefer GCR models that have both high quality and high diversity. To incorporate both aspects into a single metric, we compute the Harmonic Mean between (a) self-BLEU-4 as the diversity metric and (b) BERTScore as the quality metric. (We use self-BLEU-4 for diversity and BERTScore for quality in the Harmonic Mean due to their reliability in preliminary evaluations; other metric pairs are reported in Appendix D.) As discussed in § 3.2, there exists a trade-off between quality and diversity in GCR; therefore, the harmonic mean is suitable when averaging quality and diversity scores.

Alihosseini et al. (2019) proposed Fréchet BERT Distance (FBD) as a joint metric for simultaneously measuring both the quality and the diversity of NLG. FBD is inspired by the Fréchet Inception Distance (FID), proposed by Heusel et al. (2017) for measuring the quality of image generation. Specifically, FBD computes the pooler output of a sentence (the last layer's hidden state of the first token of the sequence, further processed by a linear layer and a tanh activation) as its embedding (Devlin et al., 2019), and represents a set of sentences using the mean vector and the covariance matrix computed from their sentence embeddings. Next, the Wasserstein-2 distance is computed between the set of reference sentences and the set of generated sentences, which captures both the distance between the means and the variance of the distributions. Lower FBD scores indicate higher combined performance.

4.3 Implementation Details

We use GPT3.5-turbo and Vicuna-13b-v1.5 (https://huggingface.co/lmsys/vicuna-13b-v1.5) as LLMs, with the temperature set to 1.0 in our experiments. By using two LLMs with significantly differing numbers of parameters, and by including Vicuna, an open-source LLM, we aim to improve the reliability and reproducibility of our results. The maximum response length is set to 25 tokens. The inference times for the CommonGen, ComVE and DimonGen datasets are respectively 5-6, 2-3 and 1-2 hours. The costs of running ICD with GPT3.5-turbo are ca. $6, $4 and $4 respectively for CommonGen, ComVE and DimonGen. On the other hand, the costs of fine-tuning GPT3.5-turbo are much higher, at $58.8 for CommonGen, $24.7 for ComVE and $32.0 for DimonGen. Moreover, fine-tuning with LoRA (Hu et al., 2022) with a rank of 8 and an alpha of 16 on Vicuna takes ca. 34 hours. We use BART-large (https://huggingface.co/facebook/bart-large) for the MoE-based models. 
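Before turning to the results, the lexical diversity metrics of § 4.2 can be made concrete with a short sketch, assuming NLTK is installed. The helper names are ours, and the 100-minus-self-BLEU inversion in harmonic_mean is an assumption about how a lower-is-better diversity score is combined with a quality score, not a detail stated in the paper.

# Minimal sketches of self-BLEU, Distinct-k and the Harmonic Mean (Section 4.2).
from itertools import permutations
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(sentences, n=3):
    # Average BLEU-n over all ordered pairs of generations (lower = more diverse).
    tokenized = [s.split() for s in sentences]
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1
    pairs = list(permutations(tokenized, 2))
    if not pairs:
        return 0.0
    scores = [sentence_bleu([ref], hyp, weights=weights, smoothing_function=smooth)
              for hyp, ref in pairs]
    return 100.0 * sum(scores) / len(scores)

def distinct_k(sentences, k=4):
    # Ratio of unique k-grams to total k-grams (higher = more diverse).
    ngrams = [tuple(toks[i:i + k])
              for toks in (s.split() for s in sentences)
              for i in range(len(toks) - k + 1)]
    return 100.0 * len(set(ngrams)) / max(len(ngrams), 1)

def harmonic_mean(diversity, quality):
    # Assumes both scores lie in [0, 100] with higher = better, e.g.
    # diversity = 100 - self_bleu(...) and quality = 100 * BERTScore.
    return 2.0 * diversity * quality / (diversity + quality)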
We use GPT3.5-turbo to generate sentences for the CommonGen train/dev/test sets using the default prompt, the diversified prompt, and ICD. For model training, we use the Adam optimiser (Kingma and Ba, 2015) with a batch size of 64, a learning rate of 3e-5 and a beam size of 5. All of the MoE-based models are trained for 20 epochs and are required to generate k = 3 sentences. All experiments, except those with GPT3.5-turbo, are conducted on a single RTX A6000 GPU.

5 Results and Discussion

5.1 Commonsense Generation

We compare the commonsense generations made by ICD against those using the default and diversified prompts. For this purpose, we use GPT3.5-turbo as the LLM and use the same 10 few-shot examples in all prompts for ICL. The full templates of the default and diversified prompts used for each task are given in Appendix E. To assess the impact of ICL, we compare against a fine-tune method, wherein GPT3.5-turbo is fine-tuned on the entire training set of each dataset. Specifically, we use the multiple human-written sentences available in the training data of the three datasets to separately fine-tune the models for each task. It is noteworthy that the fine-tune method uses a substantially larger dataset for training (e.g., 67,389 sentences from CommonGen) compared to the 10 examples used by the ICL-based approaches. We use self-BLEU-3 as the diversity metric f in Algorithm 1 for ICD in this evaluation. The outcomes, presented in Table 1, highlight the diversity and quality metrics of these methods across the CommonGen, ComVE, and DimonGen datasets. Additionally, a human baseline is introduced to evaluate the diversity of sentences written by humans, where we compare the human-written sentences for each input in the benchmark instances pairwise using the diversity metrics. Note, however, that the human baseline must not be considered an upper bound for diversity, because there are only a small number of human-written sentences per instance in the benchmark datasets. From Table 1, we see that fine-tune generates sentences that have high semantic and corpus diversity, and outperforms the human baseline. However, recall that fine-tune requires a much larger training set and is computationally costly compared to the ICL-based methods. Moreover, we see that ICD can strike a good balance between quality and diversity in the sentences generated. 
[Table 1: Diversity and quality scores on CommonGen, ComVE and DimonGen with the GPT3.5-turbo LLM. In the original paper, the best results on each task for each metric are shown in italics and the best ICL results in bold. Arrows mark whether lower (v) or higher (^) is better.

Dataset | Method | self-cosSim v | self-BERTScore v | Entropy-4 ^ | Distinct-4 ^ | self-BLEU-3 v | self-BLEU-4 v | BLEU-3 ^ | BLEU-4 ^ | SPICE ^ | BERTScore ^ | Harmonic ^ | FBD v
CommonGen | Human | 67.3 | 60.6 | 10.9 | 91.0 | 25.4 | 17.6 | - | - | - | - | - | -
CommonGen | Fine-tune | 64.7 | 55.9 | 11.4 | 91.1 | 26.9 | 17.9 | 41.2 | 32.1 | 30.3 | 64.2 | 72.1 | 51.9
CommonGen | default | 93.3 | 88.7 | 10.2 | 53.7 | 77.2 | 72.4 | 50.8 | 40.9 | 30.1 | 70.4 | 39.6 | 60.2
CommonGen | diversified | 85.2 | 69.8 | 11.0 | 83.7 | 44.4 | 34.9 | 44.3 | 34.6 | 28.5 | 65.0 | 65.4 | 53.9
CommonGen | ICD | 83.5 | 66.2 | 11.0 | 88.5 | 31.0 | 21.0 | 47.4 | 37.7 | 29.1 | 67.4 | 72.7 | 51.8
ComVE | Human | 62.7 | 47.0 | 9.6 | 96.1 | 12.4 | 8.1 | - | - | - | - | - | -
ComVE | Fine-tune | 59.8 | 42.6 | 9.8 | 95.2 | 13.4 | 10.3 | 27.4 | 19.4 | 33.1 | 53.7 | 67.2 | 47.6
ComVE | default | 83.9 | 73.5 | 9.6 | 74.3 | 50.8 | 45.2 | 27.5 | 19.7 | 36.2 | 55.1 | 54.9 | 50.9
ComVE | diversified | 76.1 | 56.5 | 9.7 | 88.0 | 23.2 | 16.5 | 30.5 | 21.8 | 35.8 | 56.5 | 67.4 | 47.9
ComVE | ICD | 72.5 | 51.1 | 9.8 | 90.1 | 13.7 | 8.7 | 29.0 | 20.8 | 36.1 | 55.5 | 69.0 | 48.7
DimonGen | Human | 56.8 | 47.0 | 10.1 | 85.6 | 14.7 | 8.7 | - | - | - | - | - | -
DimonGen | Fine-tune | 43.4 | 33.0 | 10.4 | 98.7 | 6.8 | 3.4 | 17.7 | 10.7 | 15.5 | 42.0 | 58.5 | 51.6
DimonGen | default | 75.7 | 71.3 | 10.0 | 83.2 | 43.4 | 37.3 | 15.9 | 9.5 | 16.4 | 44.5 | 52.1 | 68.2
DimonGen | diversified | 57.1 | 46.9 | 10.5 | 95.9 | 11.2 | 6.5 | 11.4 | 6.4 | 15.2 | 39.9 | 55.9 | 69.0
DimonGen | ICD | 56.7 | 45.7 | 10.4 | 96.3 | 6.5 | 3.5 | 13.2 | 7.6 | 15.4 | 41.7 | 58.2 | 68.0]

Among the ICL-based methods, ICD achieves the best diversity scores on all diversity metrics in all three datasets. It also exhibits higher diversity than the human-written references. Moreover, ICD outperforms default and diversified according to the Combined metrics, and achieves a Harmonic Mean comparable to that of the fine-tune baseline. Although default reports the best quality scores, it has low diversity and is consistently outperformed by diversified and ICD on the diversity metrics. On the other hand, diversified generally scores lower on the quality metrics. Compared to default and diversified, ICD enhances generation diversity while maintaining a satisfactory level of quality. ICD is also more stable than fine-tune with respect to sampling settings such as temperature, as shown in Appendix B. Note that fine-tune is not an ICL setting (the focus of this paper) and is included only as a baseline to demonstrate the level of performance that can be achieved by fine-tuning on a much larger dataset. Despite this, ICD outperforms fine-tune on Pairwise Diversity in all three datasets, and on the Combined metrics in the CommonGen dataset. As an open-source alternative LLM to GPT3.5-turbo, we repeat this evaluation with Vicuna-13b (Zheng et al., 2023) in Table 2. The same 10 few-shot examples as used with GPT3.5-turbo are used in this experiment for the ICL-based methods. The full table on all three datasets is shown in Appendix C.

[Table 2: GCR on CommonGen using Vicuna-13b; ICD uses self-BLEU-3. Here, SCS: self-CosSim, SBS: self-BERTScore, E-4: Entropy-4, D-4: Distinct-4, SB-3: self-BLEU-3, HM: Harmonic Mean. In the original paper, the best results for each metric are shown in italics and the best ICL results in bold.

Method | SCS v | SBS v | E-4 ^ | D-4 ^ | SB-3 v | BLEU-3 ^ | SPICE ^ | HM ^ | FBD v
Fine-tune | 59.6 | 49.9 | 11.4 | 93.3 | 22.8 | 35.8 | 27.6 | 69.9 | 52.4
Default | 82.2 | 73.8 | 10.9 | 74.9 | 52.9 | 44.6 | 29.1 | 60.2 | 56.2
Diversified | 59.1 | 53.3 | 11.3 | 91.3 | 23.6 | 32.6 | 24.3 | 68.6 | 53.2
ICD | 59.3 | 49.8 | 11.3 | 93.7 | 11.3 | 34.2 | 25.5 | 73.4 | 51.0]

Table 2 reconfirms ICD's ability to balance both quality and diversity according to the Combined metrics (i.e. Harmonic Mean and FBD) on this dataset. Interestingly, we see that the methods that use Vicuna-13b are more diverse than those that use GPT3.5-turbo, while the latter show better generation quality. 
In Table 3, we use different diversity metrics as f in Algorithm 1 to study their effect on the text generated by ICD.

[Table 3: The effect of using different diversity metrics f in Algorithm 1 for ICD, using GPT3.5-turbo as the LLM on the CommonGen dataset. Abbreviations as in Table 2.

f | SCS v | SBS v | E-4 ^ | D-4 ^ | SB-3 v | BLEU-3 ^ | SPICE ^ | HM ^ | FBD v
self-BLEU-3 | 83.5 | 66.2 | 11.0 | 88.5 | 31.0 | 47.4 | 29.1 | 72.7 | 51.8
self-CosSim | 81.0 | 70.1 | 10.9 | 82.5 | 44.5 | 47.6 | 29.3 | 65.7 | 51.8
self-BERTScore | 83.1 | 62.8 | 11.0 | 87.0 | 36.3 | 46.5 | 28.9 | 69.6 | 51.8]

We see that self-BLEU-3 and self-CosSim perform similarly across the quality metrics. Self-BERTScore shows a slightly lower quality (BLEU-3 and SPICE), which indicates some level of overfitting to the diversity metric being used. According to the combined metrics, any of those diversity metrics can be used with ICD to obtain comparable performance.

[Table 4: Downstream evaluation of the LLM-generated sentences. Methods in the top block use human-generated resources for training, while those in the bottom block are trained on LLM-generated sentences; MoE approaches appear in the middle and bottom blocks. BART-large is used as the generator for all MoE-based methods. In the original paper, the best results for each metric are shown in bold, and the best-performing MoE for quality is underlined.

Method | self-cosSim v | self-BERTScore v | Entropy-4 ^ | Distinct-4 ^ | self-BLEU-3 v | self-BLEU-4 v | BLEU-3 ^ | BLEU-4 ^ | SPICE ^ | BERTScore ^ | Harmonic Mean ^ | FBD v
KG-BART | - | - | - | - | - | - | 42.1 | 30.9 | 32.7 | - | - | -
EKI-BART | - | - | - | - | - | - | 46.0 | 36.1 | 33.4 | - | - | -
KFCNet-w/o FC | - | - | - | - | - | - | 50.2 | 42.0 | 35.9 | - | - | -
KFCNet | - | - | - | - | - | - | 57.3 | 51.5 | 39.1 | - | - | -
MoE | 89.3 | 81.9 | 9.7 | 61.6 | 63.1 | 56.6 | 49.0 | 38.5 | 33.5 | 70.6 | 53.8 | 61.7
MoKGE | 88.7 | 80.6 | 9.9 | 65.2 | 60.4 | 53.6 | 48.8 | 38.4 | 33.1 | 70.3 | 55.9 | 60.8
default+MoE | 90.8 | 84.2 | 9.7 | 61.2 | 65.6 | 58.8 | 51.8 | 41.3 | 34.7 | 73.1 | 52.7 | 61.9
diversified+MoE | 85.3 | 79.9 | 9.8 | 63.2 | 58.3 | 52.6 | 51.4 | 41.4 | 34.6 | 71.6 | 57.0 | 54.5
ICD+MoE | 90.4 | 82.3 | 9.8 | 64.9 | 58.4 | 50.5 | 53.2 | 43.1 | 35.4 | 73.8 | 59.3 | 62.5]

[Figure 3: Human vs. GPT3.5 diversity ratings for randomly sampled sets of sentences generated by ICD. Cohen's kappa = 0.409 indicates a moderate agreement.]

5.2 Downstream Evaluation

The experiments presented in § 5.1 show the ability of our proposed ICD to generate diverse and commonsense-bearing sentences. Therefore, an important question with practical implications is whether we can use the sentences generated by ICD as additional training data to improve both the diversity and the quality of previously proposed models on the GCR task, which can be seen as a downstream (extrinsic) evaluation. For this purpose we select MoE (Shen et al., 2019), which diversifies the generation by selecting outputs from a mixture of experts. Each expert is assigned a randomly generated sequence of tokens, which is used as a prefix for all inputs sent to that expert. For each input, an expert is selected according to the value of a latent variable, which is trained using the hard-EM algorithm. We follow Liu et al. (2023) 
and train three experts that retrieve sentences from the collection of sentences generated by ICD for the concept sets in the CommonGen train split (210,846 sentences in total). We use BART-large (Lewis et al., 2020), which has been shown to produce high-quality commonsense generations (Zhang et al., 2023), as the base model and the generator for all experts (see Appendix A for further details). We denote this method by ICD+MoE. As baselines for comparison, we repeat the above process using the sentences generated by default and diversified, which we denote respectively as default+MoE and diversified+MoE in Table 4. Moreover, we compare the performance against two previously proposed MoE models: MoE (Shen et al., 2019) and MoKGE (Yu et al., 2022). MoE relies solely on the base model, whereas MoKGE requires each expert to use different sets of concepts from the ConceptNet (Speer et al., 2017) knowledge graph (KG). Because Yu et al. (2022) do not evaluate their MoKGE method on CommonGen, we ran their original implementation (https://github.com/DM2-ND/MoKGE) on CommonGen and report the results in Table 4. All previously proposed GCR methods are exclusively trained using human-created data (e.g. sentences written by humans and/or manually compiled KGs such as ConceptNet), whereas the methods described thus far in this section are trained on sentences generated by an LLM (GPT3.5-turbo). Therefore, to evaluate the feasibility of using LLM-generated sentences for training GCR models, we include the following previously proposed GCR models that are trained using a combination of corpora and KGs: KG-BART (Liu et al., 2021), EKI-BART (Fan et al., 2020) and KFCNet (Li et al., 2021). For KFCNet, we present two results: KFCNet w/o FC, which relies only on sentences including the input concepts, without further processing, and KFCNet, which additionally ranks candidates and adds contrastive modules to the encoder and the decoder (Li et al., 2021). However, note that those methods do not consider diversification and do not report performance using diversity metrics. Therefore, we report only their published results for generation quality in Table 4.
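The hard-EM expert selection described above can be sketched as follows. This is a minimal illustration assuming a Hugging Face BART checkpoint; the expert prefix strings and the helper name are our assumptions, not code released by Shen et al. (2019) or by this paper.

# Sketch of one hard-EM training step for prefix-based MoE diversification.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained('facebook/bart-large')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large')

# Each expert is identified by a random token prefix prepended to every input
# (assumed strings; in practice they would be added to the tokenizer vocabulary).
EXPERT_PREFIXES = ['<exp1>', '<exp2>', '<exp3>']

def hard_em_step(source, target):
    # E-step: pick the expert whose prefix yields the lowest loss on (source, target).
    labels = tokenizer(target, return_tensors='pt').input_ids
    losses = []
    with torch.no_grad():
        for prefix in EXPERT_PREFIXES:
            inputs = tokenizer(f'{prefix} {source}', return_tensors='pt')
            losses.append(model(**inputs, labels=labels).loss.item())
    best = min(range(len(EXPERT_PREFIXES)), key=losses.__getitem__)
    # M-step: backpropagate through the selected expert only.
    inputs = tokenizer(f'{EXPERT_PREFIXES[best]} {source}', return_tensors='pt')
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    return best, loss.item()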
From Table 4 we see that diversified+MoE always outperforms the original MoE on all diversity metrics, which shows that sentences generated by LLMs can be used to diversify MoE-based GCR. ICD+MoE closely matches the performance of diversified+MoE on the diversity metrics, while consistently outperforming both diversified+MoE and default+MoE on the quality metrics. In particular, the quality metrics reported by ICD+MoE (underlined in Table 4) are competitive against those obtained by the models trained on human-compiled resources (in the top block), except against KFCNet. This finding hints at potential improvement gains for GCR by using hybrid training resources that combine both human-compiled and LLM-generated data, which we highlight as an interesting future research direction.

5.3 Diversity-Awareness of LLMs

Given that we use LLMs to produce diverse generations via ICL, it remains an open question whether an LLM would agree with humans on the diversity of a given set of sentences. To answer this question, we use 210 randomly selected sentences (35 sets, each containing 6 sentences) generated by ICD (using self-BLEU-3 as the diversity metric) for the input concept sets in the CommonGen dataset. We instruct GPT3.5-turbo to rate the diversity of a given set of sentences according to five diversity ratings 1-5, with 1 being highly similar and 5 being highly diverse (the detailed prompt templates are shown in Appendix E). We provide the same instruction as the annotation guidelines for eight human annotators, who are graduate students in NLP. To reduce the subjective variability in human judgements, we average and then normalise the ratings following the Likert scale. In Figure 3, we plot the GPT-assigned ratings against those by humans. We further split the ratings into high vs. low diversity depending on whether the rating is greater or less than 3. The majority of the data points are distributed along the diagonal quadrants, and a Cohen's Kappa of 0.409 indicates a moderate level of agreement between GPT and human ratings for diversity.

The sentences generated using the default prompt and ICD, together with the human references, for a single test instance from each of the CommonGen and ComVE datasets are shown in Figure 4.

[Figure 4: Sentences generated by the default prompt and ICD against those written by humans on CommonGen and ComVE test instances. ICD generates more diverse and higher-quality sentences than default.

CommonGen input: (piece, use, tool, metal)
Human:
- The group will use the tool to make a piece of art out of metal.
- I use a tool to cut a piece of metal out of the car.
- The man used a piece of metal and the tools.
Default:
- A piece of metal is being used as a tool.
- A piece of metal was used as a tool in the construction project.
- A metal tool is being used to shape a piece of metal.
ICD:
- The piece of metal is essential for any handyman's toolkit.
- The metal tool is a useful piece for working with metal.
- With the right tools, any piece of metal can be transformed into something useful.

ComVE input: My friend like to eat electronic goods.
Human:
- No one can digest electronic goods.
- Electronic products must not be eaten.
- You would die if you ate electronics.
Default:
- Electronic goods are not edible and are not meant for consumption.
- Electronic goods are not edible and cannot be consumed as food.
- Electronic goods are not edible and are meant for functional use rather than consumption.
ICD:
- Eating electronic goods can damage the digestive system and cause serious health issues.
- It is not healthy or safe to eat electronic goods as they are made up of toxic materials.
- Electronic goods are not edible and cannot be consumed as food.]

From Figure 4 we see that the sentences generated using the default prompt often exhibit significant token overlap, thereby lowering diversity. On the other hand, ICD generates both lexically and semantically diverse sentences, covering the diverse viewpoints in the human references.

6 Conclusion

We proposed ICD, an ICL-based method for achieving an optimal balance between diversity and quality in text generation via LLMs. Our experiments, conducted on three GCR tasks, demonstrate that ICD significantly improves diversity without substantially compromising quality. Furthermore, we found that by training on the sentences generated by ICD, we can improve the diversity of previously proposed GCR methods.

7 Limitations

This study primarily focuses on the generation of English sentences using pre-trained LLMs, a limitation shaped by the datasets we employed. Specifically, we used the ComVE (Wang et al., 2020), CommonGen (Lin et al., 2020) and DimonGen (Liu et al., 2023) datasets, which are well-regarded for evaluating diversified commonsense reasoning in English. 
Therefore, our evaluation of the generation quality was limited to English, which is a morphologically limited language. Future research could expand this scope to include multilingual pre-trained models, thereby encompassing a broader linguistic spectrum. Our approach is primarily geared towards optimizing the trade-off between diversity and quality in text generation. Consequently, we maintained consistent default instructions across all experiments, adopting the standard commonsense generation prompts used by Li et al. (2023) as our default instructions. We conducted our experiments using both a closed model (i.e. GPT3.5-turbo) and an open-source one (i.e. Vicuna-13b-v1.5) to promote the reproducibility of our results, which are reported on multiple publicly available benchmarks. However, there exist many other LLMs, with varying numbers of parameters and trained on different corpora. Therefore, we consider it important to evaluate our proposed method on a broad range of LLMs to verify its generalisability. However, conducting such a broad analysis can be computationally costly and expensive. For example, although GPT-4 is known to have superior text generation capabilities, it incurs substantially greater costs (being 30 times more expensive than GPT3.5-turbo at the current pricing). Nevertheless, ICD is adaptable and could be extended to other LLMs.

8 Ethical Considerations

In this work, we did not create or release any manually annotated data. Our work is based on the publicly available datasets CommonGen, ComVE, and DimonGen. To the best of our knowledge, no ethical issues have been reported for those datasets. Therefore, we do not foresee any data-related ethical issues arising from our work. However, LLMs are known to generate responses that may reflect societal biases and potentially harmful content. We have not verified whether the GPT3.5-turbo and Vicuna-13b LLMs that we use in our experiments have such problems. Therefore, it is important to test on existing benchmarks for social biases and harmful generations before the proposed method is deployed to diversify existing GCR methods used by human users. To elicit human judgements of diversity for the sentences generated by ICD, we used annotators who are familiar with working with LLMs. It is possible that their subjective (and possibly biased) viewpoints might have influenced the ratings provided. Therefore, it will be important to conduct the evaluation with a group of annotators from different backgrounds to validate the findings reported in this analysis.",
"additional_info": [
{
"url": "http://arxiv.org/abs/2403.00816v1",
"title": "CFRet-DVQA: Coarse-to-Fine Retrieval and Efficient Tuning for Document Visual Question Answering",
"abstract": "Document Visual Question Answering (DVQA) is a task that involves responding\nto queries based on the content of images. Existing work is limited to locating\ninformation within a single page and does not facilitate cross-page\nquestion-and-answer interaction. Furthermore, the token length limitation\nimposed on inputs to the model may lead to truncation of segments pertinent to\nthe answer. In this study, we introduce a simple but effective methodology\ncalled CFRet-DVQA, which focuses on retrieval and efficient tuning to address\nthis critical issue effectively. For that, we initially retrieve multiple\nsegments from the document that correlate with the question at hand.\nSubsequently, we leverage the advanced reasoning abilities of the large\nlanguage model (LLM), further augmenting its performance through instruction\ntuning. This approach enables the generation of answers that align with the\nstyle of the document labels. The experiments demonstrate that our methodology\nachieved state-of-the-art or competitive results with both single-page and\nmulti-page documents in various fields.",
"authors": "Jinxu Zhang, Yongqi Yu, Yu Zhang",
"published": "2024-02-26",
"updated": "2024-02-26",
"primary_cat": "cs.IR",
"cats": [
"cs.IR",
"cs.AI",
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Document visual question answering possesses considerable practical significance, enabling the swift and accurate extraction of answers from vo- luminous documents in response to user queries. The challenge of this task lies in comprehending the document content and pinpointing the answers within certain documents solely from the query, such as forms, web pages, newspapers, and vari- ous types of information, including large amounts of texts, and complex document layouts (Mathew et al., 2021; Tanaka et al., 2021; Mathew et al., 2022). Currently, Fine-tuning pre-trained visual docu- ment understanding models has produced impres- sive outcomes in question-answering tasks involv- Pretrained model Single page What entity is certified by CoreTrustSeal? In-Context Retrieval (a) Pretrained Document Understanding Models (b) ICRet-DVQA Framework \u2713 \u2718 Noise from irrelevant content Limitation on content length \u2718 Relevant context Long content \u2713 Efficient-tuned LLM What does the the Open Government Directive require agencies to do? a short summary of your agency\u2019s compliance with existing records management requirements and a link to the explanation on your agency\u2019s records webpage. Single & multi page Figure 1: Two approaches to solve the DVQA task. (a) Pre- trained models on large-scale documents have questions such as inaccurate answer positioning and limited context length. (b) A framework for contextual retrieval and efficient tuning, which can handle multi-page documents and target answers more accurately. The contents of the boxes all represent the context relevant to the question, where green is the location of the answer. ing visually rich documents (VRDs) (Xu et al., 2020; Huang et al., 2022; Appalaraju et al., 2023; Yu et al., 2023; Kim et al., 2022; Lee et al., 2023). This suggests that incorporating large-scale, unla- beled training documents in the pre-training phase of document understanding models can signifi- cantly enhance their performance in answering questions from VRDs. These approaches invest heavily in comprehending document images. De- spite notable advancements, most of them can only accept fixed-length document information, and can- not handle long documents or multi-page docu- ments, there is still a significant journey towards practical application. Large languages models (LLMs), such as GPT- 3 (Brown et al., 2020), LLaMA (Touvron et al., 2023), PaLM (Chowdhery et al., 2023), develop quickly and have shown remarkable results in vari- ous natural language processing (NLP) tasks. Re- cently, Some methods have attempted to incor- porate visual features of documents into LLMs arXiv:2403.00816v1 [cs.IR] 26 Feb 2024 for reasoning (Ye et al., 2023a,b; Zhang et al., 2023c). While certain achievements have been re- alized, these LLMs fundamentally language-based, exhibit a significant disconnect with visual ele- ments. Consequently, they are incapable of fully comprehending visual information, potentially in- troducing noise. Additionally, the proficiency of LLMs in managing DVQA tasks remains unex- plored. Especially for multi-page documents, the current method still focuses on the single-page doc- ument method, that is, initially identify the single page pertinent to the answer, and subsequently uti- lize the relevant page along with the questions to decode the corresponding answer. In this paper, we propose CFRet-DVQA, a sim- ple and effective retrieval-augmented and efficient tuning framework for LLMs to perform the DVQA task. 
Our method comprises three distinct modules: (1) an OCR engine that extracts text from document images; (2) a retrieval module that locates relevant document content based on a given question; and (3) a large language model (LLM) which harmonizes the style of the data labels and infers appropriate answers based on both the question and the pertinent document content.

Building upon this foundation, we enhanced both the retrieval and instruction-tuning modules. For the retrieval module, a multi-stage retrieval and sorting method was adopted to effectively filter out irrelevant information. Specifically, documents were initially segmented and sorted using a coarse-grained approach, followed by a fine-grained retrieval and selection of highly relevant documents. Ultimately, the LLM determines the final chunks of text based on higher-order semantics. Regarding the instruction-tuning module, we integrated techniques such as prefix tuning (Li and Liang, 2021; Lester et al., 2021; Liu et al., 2021), bias tuning and LoRA (Low-Rank Adaptation) (Hedegaard et al., 2022; Hu et al., 2021; Zhang et al., 2023a) to unfreeze a small number of parameters of the large model. This approach facilitates efficient tuning and better adaptation to the style of the fine-tuned data.

Experiments are conducted on five widely used benchmark datasets (DocVQA (Mathew et al., 2021), VisualMRC (Tanaka et al., 2021), InfographicVQA (Mathew et al., 2022), MP-DocVQA (Tito et al., 2023), and DUDE (Van Landeghem et al., 2023)), encompassing three single-page and two multi-page datasets that cover various fields. Experimental results show that our framework achieves state-of-the-art or comparable results relative to previous methods.

Our contributions in this work are four-fold:
- We first propose a simple and effective framework for both single-page and multi-page document question answering across various domains.
- We propose a coarse-to-fine retrieval method to accurately retrieve the relevant context according to the question for DVQA.
- We integrate efficient tuning approaches that require only 22M training parameters for DVQA.
- CFRet-DVQA achieves state-of-the-art performance on 4 out of 5 document datasets, across multiple domains.",
"main_content": "This part mainly reviews the research on document understanding and retrieval augmented generation and highlights the differences between our work and previous work. 2.1 Visually Rich Document Understanding (VRDU) The Visual Reading of Document Understanding (VRDU) is designed to interpret content within document images and is recognized as a challenging task. Existing approaches to VRDU can be broadly categorized based on the use of Optical Character Recognition (OCR) tools. There are two primary types of models: (1)Two-Stage Models Using OCR Tools: These models initially utilize OCR to extract text and layout information from document images. In this approach, specific pre-training tasks are devised to align visual and textual features within a semantic space. For instance, LayoutLMv3 (Huang et al., 2022) incorporates tasks like Masked Image Modeling and Word-Patch Alignment, aiming to harmonize the relationship between textual content and its spatial arrangement in documents. Similarly, UDOP (Tang et al., 2023) introduces tasks such as text-to-layout and layout-to-text conversions, as well as image reconstruction tasks. (2) End-to-End Models Based on Image Features: The pre-training objectives of this method typically involve text recognition tasks akin to OCR, focusing on the nuanced understanding of document images. For example, Donut (Kim et al., 2022) introduces a pretraining task designed to generate all texts present in a document image. Pix2Struct (Lee et al., 2023) pushes the boundaries of traditional document understanding by requiring the model to infer and reconstruct the underlying HTML structure based solely on visual cues from the webpage\u2019s layout. Ureader (Ye et al., 2023b) fine-tuned multiple document understanding datasets, including question answering and document summary tasks. While the aforementioned methods utilize multimodal information from document images, they entail substantial resource consumption for pretraining alignment tasks. Moreover, Most of them can only handle documents with less information on a single page, and they are unable to fully comprehend the nuances of document information, often contending with excessive noise. In our work, recognizing that such tasks are predominantly governed by textual information, we capitalize on the reasoning capabilities of large models. By refining the instruction-tuning method, we achieve a precise generation of answers. 2.2 Retreival Augmented Generation (RAG) RAG significantly enhances the input capabilities of LLMs by integrating retrieved text passages (Lewis et al., 2020; Guu et al., 2020), leading to notable improvements in knowledge-intensive tasks. This enhancement is evident post-fine-tuning and even when used with off-the-shelf LLMs (Ram et al., 2023). Currently, RAG plays a pivotal role in addressing two key challenges associated with LLMs: the hallucination of knowledge and the need for up-to-date information. A more recent advancement (Luo et al., 2023) in the field involves instruction-tuning a Language Model (LM) by appending a fixed number of retrieved passages to the input. This approach is designed to enrich the model\u2019s context and understanding by providing additional, relevant information upfront. Furthermore, some methodologies involve jointly pre-training a retriever and an LM, which is then followed by few-shot fine-tuning on specific task datasets. 
These methods, while effective in addressing open-domain questions, also encounter significant shortcomings, notably the retrieval of non-relevant information. To address this, we have innovatively applied RAG to the task of DVQA. Our approach enhances the relevance of the retrieved information by implementing a multi-stage retrieval method. This method is meticulously designed to accurately isolate the specific paragraph containing the answer and to eliminate extraneous content.

3 CFRet-DVQA

In this section, we introduce a novel framework for executing the task of document visual question answering. We first give an overview of the procedure of the framework and then elaborate on the technical design of the model architecture.

3.1 Overview of the CFRet-DVQA Framework

CFRet-DVQA comprises two primary stages: (1) multi-stage retrieval of pertinent document content, and (2) streamlined fine-tuning and inference utilizing the retrieved documents. The process primarily incorporates a vector-similarity retrieval module and an LLM, proving to be both straightforward and efficacious.

[Figure 2: Overview of the CFRet-DVQA framework, which consists of three stages: (1) high-performance context retrieval, (2) efficient parameter instruction tuning, and (3) answer inference. In the first stage, highly relevant contextual information is obtained through multi-stage retrieval and large-model ranking. Subsequently, a model aligned with domain-specific data is developed through efficient parameter fine-tuning. Ultimately, this fine-tuned model is employed for reasoning on the pertinent dataset.]

In the practical execution of document visual question answering, answers frequently reside within one or several segments of the text. Recognizing this, we propose a multi-stage retrieval approach. The primary objective of this method is to incrementally sift through extensive document information to isolate content pertinent to the query. This approach is particularly vital in the context of multi-page or long-document visual question answering. Despite the advanced zero-shot capabilities of LLMs, they lack control over the length and style of the generated content. For instance, answers derived from document data can be either extracted or generated. Extraction typically yields brief responses, confined to specific entities; we aim to acquire only the pertinent entity information in such cases. Conversely, for generated answers, detailed reasoning information is desirable. Tailoring tuning instructions to the varying data types effectively addresses this issue. By integrating retrieval and instruction tuning, our framework adeptly manages both the question answering of large-scale documents and the constraints of large models, thereby producing customized responses tailored to different document data types.

3.2 Coarse-to-fine Retrieval

Our proposed coarse-to-fine retrieval method effectively singles out the content most relevant to the answer. Initially, larger text chunks of coarse granularity are selected (for example, setting the length of text chunks to 1024 tokens). Subsequently, smaller text chunks that are most pertinent to the question are retrieved from these larger chunks, with the length set to 512 tokens or smaller. 
All the smaller text chunks are then reordered, and the top-ranked chunks are concatenated to form the context for the model's input. This specific process unfolds as follows.

Initial Coarse Retrieval. The set Dcoarse consists of larger text chunks retrieved from the extensive document collection D. Semantic similarity matching uses a pre-trained embedding model, and the similarity is calculated against the question Q, targeting text chunks with a length of 1024 tokens:

Dcoarse = doc_vec.match(Q, D)    (1)

where doc_vec indicates the embedding vectors of the document chunks and match represents the similarity calculation.

Fine-grained Retrieval. Fine-grained retrieval takes the top-ranked large chunks and extracts the most similar, smaller chunks from each of them:

Dfine = U_{di in Dcoarse} coarse_vec.match(Q, di)    (2)

where coarse_vec represents the embedding vectors of the coarse-grained text chunks di in Dcoarse and Q is the question or query, targeting text chunks with a length of 512 tokens.

ReRank. Finally, the smaller text chunks in Dfine are ranked based on their relevance to the query Q using a large language model (Pradeep et al., 2023), and the top k ranked chunks are concatenated to form the input context C for the model:

C = concat(LLM_rank(Dfine, Q, k))    (3)

3.3 Efficient Tuning

[Figure 3: Fusion of the different tuning techniques: LoRA updates on the attention projections (WQ, WK, WV), a learnable prefix on keys and values, and bias & norm tuning on top of the word embeddings.]

In this phase, we integrate multiple parameter-efficient tuning techniques, such as LoRA (Hedegaard et al., 2022; Hu et al., 2021; Zhang et al., 2023a), prefix tuning (Li and Liang, 2021; Lester et al., 2021; Liu et al., 2021), and bias tuning, while incorporating the zero-initialization attention mechanism from prior methodologies (Zhang et al., 2023b; Gao et al., 2023; Wang et al., 2023), enabling us to achieve sophisticated fine-tuning effects with a minimal number of training parameters.

In the context of LoRA, trainable low-rank matrices are introduced to modify the query and value matrices within the multi-head attention layer. The specific computation is implemented as follows. Two low-rank matrices, W_A and W_B, are initialized; these matrices have dimensions that are significantly smaller than the original query and value matrices, thereby reducing the number of trainable parameters. The original query and value matrices, denoted as Q and V respectively, are modified using the low-rank matrices. This modification is not a direct replacement but an additive update, which can be mathematically represented as:

Q' = Q + W_A^Q W_B    (4)
V' = V + W_A^V W_B    (5)

Here, Q' and V' represent the updated query and value matrices, respectively.

In prefix tuning, a prefix of length l is strategically positioned preceding the key and value matrices within each multi-head attention layer. This approach effectively equates to adding l additional soft prompt tokens alongside each original token for the computation of similarity measures. The aggregation of these calculations is conducted as follows:

head_i = Attn(x W_q^(i), concat(P_k^(i), C W_k^(i)), concat(P_v^(i), C W_v^(i)))    (6)

where C in R^(m x d) denotes the input token sequence of length m; W_q^(i), W_k^(i), W_v^(i) in R^(d x dh), with d the embedding size and dh the attention hidden size; and P_k^(i), P_v^(i) in R^(l x d/Nh), where Nh is the number of attention heads.
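A minimal PyTorch sketch of the fused updates above follows: a LoRA-style low-rank addition on a frozen linear projection (Eqs. 4-5), a learnable prefix prepended to keys and values (Eq. 6), and the per-layer bias and scale described next (Eqs. 7-8). Module names and dimensions are illustrative assumptions, not the paper's implementation.

# Sketch of the fused efficient-tuning techniques of Section 3.3.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base.requires_grad_(False)        # frozen pre-trained weight W
        self.w_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.w_b = nn.Parameter(torch.zeros(rank, base.out_features))
        # bias tuning: y = s * (W x + b), initialised to the identity (s=1, b=0)
        self.scale = nn.Parameter(torch.ones(base.out_features))
        self.bias = nn.Parameter(torch.zeros(base.out_features))

    def forward(self, x):
        h = self.base(x) + x @ self.w_a @ self.w_b    # Eq. (4)/(5)-style update
        return self.scale * (h + self.bias)           # Eq. (7) bias and scale

class PrefixKV(nn.Module):
    # Learnable prefix of length l prepended to keys and values (Eq. 6).
    def __init__(self, l: int, dim: int):
        super().__init__()
        self.p_k = nn.Parameter(torch.zeros(l, dim))  # zero-initialised prefix
        self.p_v = nn.Parameter(torch.zeros(l, dim))

    def forward(self, k, v):
        # k, v: (seq_len, dim); prepend the prefix along the sequence axis.
        return torch.cat([self.p_k, k], dim=0), torch.cat([self.p_v, v], dim=0)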
3.3 Efficient Tuning

[Figure 3: Fusion of different tuning techniques — LoRA updates on the query and value projections of the attention layer, learnable prefixes on the keys and values, and bias & norm tuning over the word embeddings.]

In this phase, we integrate multiple parameter-efficient tuning techniques, namely LoRA (Hedegaard et al., 2022; Hu et al., 2021; Zhang et al., 2023a), prefix tuning (Li and Liang, 2021; Lester et al., 2021; Liu et al., 2021), and bias tuning, while incorporating the zero-initialization attention mechanism from prior methodologies (Zhang et al., 2023b; Gao et al., 2023; Wang et al., 2023). This enables us to achieve sophisticated fine-tuning effects with a minimal number of training parameters.

In LoRA, trainable low-rank matrices are introduced to modify the query and value matrices within the multi-head attention layer. The computation is implemented as follows. Two low-rank matrices, W_A and W_B, are initialized; their dimensions are significantly smaller than those of the original query and value matrices, thereby reducing the number of trainable parameters. The original query and value matrices, denoted Q and V respectively, are then modified with an additive update rather than a direct replacement:

Q' = Q + W_A^Q W_B^Q    (4)
V' = V + W_A^V W_B^V    (5)

Here, Q' and V' represent the updated query and value matrices, respectively.

In prefix tuning, a prefix of length l is positioned before the key and value matrices within each multi-head attention layer. This effectively equates to adding l soft prompt tokens alongside the original tokens in the computation of attention similarities. The resulting per-head computation aggregates these as follows:

head_i = Attn(x W_q^{(i)}, concat(P_k^{(i)}, C W_k^{(i)}), concat(P_v^{(i)}, C W_v^{(i)}))    (6)

where C \in R^{m \times d} is the input token sequence of length m; W_q^{(i)}, W_k^{(i)}, W_v^{(i)} \in R^{d \times d_h}, with d the embedding size and d_h the attention hidden size; and P_k^{(i)}, P_v^{(i)} \in R^{l \times d/N_h}, where N_h is the number of attention heads.

To effectively handle instruction-following data, following LLaMA-Adapter V2 (Gao et al., 2023), we first unfreeze all normalization layers within LLaMA. For each linear layer in the Transformer, we introduce a bias and a scale factor, both serving as learnable parameters. Denoting the input and pre-trained weights of a given linear layer as x and W, respectively:

y = W · x  →  y = s · (W · x + b),    (7)
where b = Init(0), s = Init(1).    (8)

We initialize the bias and scale factors with zeros and ones, respectively, to stabilize training at the early stages.
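The fused tuning scheme can be summarized in a compact PyTorch sketch: a frozen linear projection augmented with the additive low-rank update of Eqs. (4)-(5) and the learnable bias and scale of Eqs. (7)-(8). This is a minimal illustration under assumed initialization details (the small random init of W_A is a common convention, not specified by the paper), and the prefix tokens of Eq. (6) are omitted for brevity.

```python
import torch
import torch.nn as nn

class LoRABiasScaleLinear(nn.Module):
    """Frozen pre-trained linear layer plus (i) a low-rank LoRA update and
    (ii) the learnable bias b and scale s of Eq. (7): y = s * (W @ x + b)."""

    def __init__(self, base: nn.Linear, r: int = 8):  # paper uses LoRA rank r = 8
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pre-trained weights stay frozen
        # Low-rank factors W_A, W_B; W_B starts at zero so the additive update
        # is a no-op at initialization, stabilizing early training.
        self.w_a = nn.Parameter(torch.randn(base.in_features, r) * 0.01)
        self.w_b = nn.Parameter(torch.zeros(r, base.out_features))
        # Eq. (8): bias initialized to zeros, scale initialized to ones.
        self.bias = nn.Parameter(torch.zeros(base.out_features))
        self.scale = nn.Parameter(torch.ones(base.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x) + x @ self.w_a @ self.w_b   # additive update, Eqs. (4)-(5)
        return self.scale * (y + self.bias)          # Eq. (7)
```

Wrapping the query and value projections of each attention layer with this module reproduces the LoRA updates of Eqs. (4)-(5); prefix tuning would additionally prepend the learned P_k, P_v to the keys and values, as in Eq. (6).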
4 Experiments

This section describes the relevant datasets and technical implementation details, and compares our method with others. We then present our experimental results and findings.

4.1 Experimental Setup

Datasets. We experiment on five widely used DVQA datasets, including three datasets focused on single-page DVQA and two datasets dedicated to multi-page DVQA. The DocVQA dataset (Mathew et al., 2021) comprises a substantial collection of scanned and handwritten documents, where the answers, typically one or more entities, are extracted from the text. In contrast, the VisualMRC dataset (Tanaka et al., 2021), sourced from content-rich web pages, requires answers to be deduced from the original text rather than directly extracted. The InfographicVQA dataset (Mathew et al., 2022) features documents with a range of chart information and complex layouts, often necessitating inferential calculations and presenting significant challenges. MP-DocVQA (Tito et al., 2023) and DUDE (Van Landeghem et al., 2023) are datasets designed for multi-page document visual question answering, where the answer is confined to a specific page; it is therefore crucial to first accurately locate the page containing the answer before attempting to respond.

Compared Methods. We evaluated three types of pre-trained models based on their use of different modal information. The first type includes models like BERT (Kenton and Toutanova, 2019) and T5 (Raffel et al., 2020), which utilize plain-text information. The second type encompasses models such as LayoutLMv3 (Huang et al., 2022), UDOP (Tang et al., 2023) and HiVT5 (Tito et al., 2023), which leverage a combination of text, image, and layout information; among them, HiVT5 is a model designed specifically for multi-page document question answering. Both of these types employ OCR tools for text extraction, and OCR recognition errors can significantly impact their performance. With LLMs, however, the construction of sentence semantics is robust enough that minor word errors do not substantially alter the overall sentence meaning and can often be auto-corrected; the OCR tool thus becomes a powerful asset for LLMs. The third type, represented by models like Donut (Kim et al., 2022), Pix2struct (Lee et al., 2023), and UReader (Ye et al., 2023b), relies on pure image data at high resolution. Document images, distinct from natural scene images, are predominantly text-based and carry richer semantic content. To maximize the reasoning capabilities of LLMs, we use OCR to extract plain-text information; by selecting content relevant to the query, we aim to achieve a comprehensive understanding of the document.

Implementation Details. In our experiments, the effectiveness of the retrieval module is important, leading us to compare several embedding models [1], including e5-large, instructor-large, and bge-large. Given the extensive volume of data involved in our testing, we ultimately selected the open-source bge-large as our retrieval model due to its robust performance. For the retrieval framework, we utilized Langchain [2], known for its efficiency in handling complex retrieval tasks. Furthermore, we employed the open-source RankVicuna (Pradeep et al., 2023) as our ranking model; this choice was driven by its advanced language understanding capabilities. During the instruction-tuning experiments, we opted for LLaMA 7B as the backbone and utilized a single NVIDIA Tesla A100 80G GPU to train on each dataset for 10 epochs. The batch size was set to 8, with a learning rate of 5e-5 and a weight decay of 0.02. We also set the prefix length to 10, applying insertions exclusively in the last 30 layers of the model. For LoRA, we set the rank r to 8.

[1] https://huggingface.co/spaces/mteb/leaderboard
[2] https://www.langchain.com

4.2 Main Results

Table 1 presents the performance comparison of CFRet-DVQA with eight models on five datasets.

Model | Train Param | Modality | DocVQA | VisualMRC | InfoVQA | MP-DocVQA | DUDE
BERT | 334M | T | 67.5 | 34.8 | 25.5 | - | -
T5 | 223M | T | 70.4 | 318.6 | 36.7 | 41.8 | 38.7
HiVT5 | 316M | T+L+V | - | - | - | 62.0 | 23.1
LayoutLMv3 | 125M | T+L+V | 78.7 | - | 45.1 | 42.7 | 20.3
UDOP | 794M | T+L+V | 84.7 | - | 47.4 | - | -
Donut | 176M | V | 67.5 | 93.91 | 11.6 | - | -
Pix2struct | 1.3B | V | 76.6 | - | 40.0 | - | -
UReader | 86M | V | 65.4 | 221.7 | 42.2 | - | -
CFRet-DVQA (Ours) | 22M | T | 78.9 | 378.0 | 52.4 | 65.2 | 61.3

Table 1: Results of comparing CFRet-DVQA with existing pre-trained VDU models fine-tuned on five document visual question answering datasets (DocVQA, VisualMRC, and InfoVQA are single-page DVQA; MP-DocVQA and DUDE are multi-page DVQA). Following previous work, DocVQA, InfoVQA, MP-DocVQA, and DUDE are evaluated by ANLS, and VisualMRC is measured by CIDEr. Modality T, L, and V denote text, layout, and vision.

Besides DocVQA, CFRet-DVQA demonstrates superior results on all remaining datasets. Although UDOP currently achieves the most advanced result on DocVQA, its training cost is relatively high. In contrast, we maximize proficiency in plain-text analysis and employ the fewest training parameters of the compared methods, outperforming several multimodal approaches, including LayoutLMv3. For the challenging VisualMRC and InfoVQA datasets, our method demonstrates a substantial lead, outperforming competitors by 60 and 5 points, respectively, thereby achieving the most advanced results to date. Furthermore, while HiVT5 shows improved performance on MP-DocVQA, it falls short on DUDE.
In contrast, our method consistently achieves state-of-the-art results across these datasets, far exceeding the baselines (e.g., surpassing HiVT5 by 3.2 points on MP-DocVQA and T5 by 23.4 percentage points on DUDE), with solid performance on multi-page documents. More detailed analysis can be found in Appendix A.

4.3 Ablation Study

In this section, we conduct a detailed analysis of CFRet-DVQA and its components. To evaluate the impact of individual components on model efficacy, we executed ablation studies across three categories of single-page documents, methodically omitting one module at a time. Note that VisualMRC is measured by CIDEr (Vedantam et al., 2015), while the other datasets are evaluated by ANLS (Biten et al., 2019).

Setting | DocVQA | InfoVQA | VisualMRC
CFRet-DVQA | 78.9 | 52.4 | 378.0
w/o RE | 75.0 | 51.6 | 328.5
w/o PB | 77.7 | 51.2 | 289.6
w/o LO | 75.6 | 51.5 | 295.6

Table 2: The effect of different components in CFRet-DVQA. RE means Retrieval; PB means Prefix and Bias tuning; LO means LoRA.

Effect of Retrieval Module. The majority of documents exceed 512 tokens, leading most models to employ a truncation strategy that risks omitting crucial answer segments. Our retrieval approach, in contrast, accommodates the input-length constraints while retaining the most pertinent sections, significantly enhancing model performance. Table 2 shows that excluding the retrieval module results in diminished performance. Notably, the retrieval module exerts the most substantial impact of the three components, underscoring the efficacy of our retrieval technique across diverse test sets.

Effect of Prefix & Bias Tuning and LoRA. Prefix & Bias tuning and LoRA are orthogonal approaches that do not interfere with each other. Prefix tuning allows the injection of distinct information across the various layers of the model, while LoRA further minimizes the number of parameters required without compromising model performance. The results in Table 2 show that Prefix & Bias tuning and LoRA perform differently on different datasets, and that better results are obtained by combining the two.

Effect of Retrieval Strategy. Table 3 presents a comparative analysis of two approaches: employing the retrieval model to directly fetch the context of relevant text chunks, versus a multi-stage retrieval process followed by ranking the text chunks with an LLM. This comparison focuses specifically on multi-page documents, offering a more illustrative perspective. The findings indicate that our adopted strategy is markedly more effective, particularly for multi-page documents, where it leads to significant improvements.

Retrieval Strategy | MP-DocVQA | DUDE
SR | 59.5 | 58.4
MR2 | 65.2 | 61.3

Table 3: The effect of different retrieval strategies. SR indicates single-stage retrieval; MR2 means multi-stage retrieval followed by LLM ranking.

Effect of Text Embedding. The success of our retrieval process is heavily reliant on the quality of the text embedding, emphasizing the need for an embedding model with superior semantic-matching capabilities. To highlight embedding performance, we use text chunks of 256 tokens and compare three open-source embedding models: e5-large, instructor-large, and bge-large.

[Figure 4: The effect of different text embedding models (e5-large, instructor-large, bge-large) on MP-DocVQA and DUDE.]
As indicated in Figure 4, the comparative evaluation reveals that bge-large consistently outperforms the others in average performance. We therefore selected bge-large as the retrieval model for all datasets.

4.4 Qualitative Results

Figure 5 shows qualitative results produced by CFRet-DVQA on different types of documents. CFRet-DVQA is adept at extracting answers from documents with complex layouts (case a) and performing reasoning based on the document's content (case b). Furthermore, in multi-page documents, which typically contain extensive textual information, CFRet-DVQA can precisely pinpoint the location of an answer, as exemplified in case c.

[Figure 5: Qualitative results of CFRet-DVQA; crucial regions are enlarged for clearer visualization. Examples: "Which reading is higher than 120 in Hypertensive crisis stage?" -> "Diastolic" (✓); "What is the second tip for managing blood pressure?" -> "Move More" (✓); "What does FSF stand for?" -> "Free Software Foundation" (✓); "Why do most GPL violations occur?" -> "Most GPL violations occur by mistake, without ill will" (✓); "Is the handwritten note on the left or right side?" -> "left" (✗); "How many million cases of foodborne illnesses?" -> "76 million" (✓).]

Nonetheless, the use of OCR tools in CFRet-DVQA introduces certain constraints. Specifically, text extraction occurs sequentially from top-left to bottom-right, resulting in the omission of layout and visual information from the document content. Therefore, questions such as case d cannot be answered; we will address this in future work. More qualitative results can be found in Appendix B.

5 Conclusion

In this study, we introduce CFRet-DVQA, a comprehensive and versatile framework designed for DVQA tasks across diverse disciplines. The framework is adept at processing extensive documents, including those that span multiple pages. Moreover, we enhance the existing retrieval module by implementing a multi-stage retrieval strategy, thereby enabling the precise extraction of pertinent context. Additionally, we develop an instruction-tuning approach that amalgamates various techniques while minimizing parameter usage to optimize performance. CFRet-DVQA has demonstrated state-of-the-art performance on a wide array of datasets, encompassing both single-page and multi-page documents across various fields, surpassing the capabilities of existing multimodal models.

6 Limitations

Our experimental results affirm the efficacy of the CFRet-DVQA framework in processing text-centric document images. However, a limitation of the current iteration of CFRet-DVQA is its omission of layout and image information within documents, which hampers its ability to address visual characteristics such as image content, style, and positioning. Future work will focus on developing more robust methods to compensate for these deficiencies. A significant challenge we currently face is how to integrate image and layout information into the framework without compromising the efficiency and capacity of larger models. Furthermore, while our framework enhances retrieval accuracy, it reduces efficiency; moving forward, we plan to explore further methods to optimize retrieval performance.

7 Ethics Statement

This paper proposes a general framework for document visual question answering.
We worked within the purview of acceptable privacy practices and strictly followed the data usage policy. In all experiments, we use public datasets in a manner consistent with their intended use. We neither introduce any social/ethical bias into the model nor amplify any bias in the data, so we do not foresee any direct social consequences or ethical issues."
},
{
"url": "http://arxiv.org/abs/2403.12596v1",
"title": "Chart-based Reasoning: Transferring Capabilities from LLMs to VLMs",
"abstract": "Vision-language models (VLMs) are achieving increasingly strong performance\non multimodal tasks. However, reasoning capabilities remain limited\nparticularly for smaller VLMs, while those of large-language models (LLMs) have\nseen numerous improvements. We propose a technique to transfer capabilities\nfrom LLMs to VLMs. On the recently introduced ChartQA, our method obtains\nstate-of-the-art performance when applied on the PaLI3-5B VLM by\n\\citet{chen2023pali3}, while also enabling much better performance on PlotQA\nand FigureQA.\n We first improve the chart representation by continuing the pre-training\nstage using an improved version of the chart-to-table translation task by\n\\citet{liu2023deplot}. We then propose constructing a 20x larger dataset than\nthe original training set. To improve general reasoning capabilities and\nimprove numerical operations, we synthesize reasoning traces using the table\nrepresentation of charts. Lastly, our model is fine-tuned using the multitask\nloss introduced by \\citet{hsieh2023distilling}.\n Our variant ChartPaLI-5B outperforms even 10x larger models such as PaLIX-55B\nwithout using an upstream OCR system, while keeping inference time constant\ncompared to the PaLI3-5B baseline. When rationales are further refined with a\nsimple program-of-thought prompt \\cite{chen2023program}, our model outperforms\nthe recently introduced Gemini Ultra and GPT-4V.",
"authors": "Victor Carbune, Hassan Mansoor, Fangyu Liu, Rahul Aralikatte, Gilles Baechler, Jindong Chen, Abhanshu Sharma",
"published": "2024-03-19",
"updated": "2024-03-19",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Visual language, where text and images work to- gether to deliver information, can be expressed through charts, plots, and diagrams. Multimodal reasoning within this context is challenging, as it involves linking visual properties (like color, line style, and positioning) with textual content (such as legends and units). \u2217Correspondence to: vcarbune@google.com Many recent advances of vision-language mod- els (VLMs) come from techniques enabling better representations (Dosovitskiy et al., 2021; Lee et al., 2023), giving the model the ability to understand core elements of the image, a necessary building block for basic reasoning. However, complex rea- soning capabilities which combine the core repre- sentation of the image with semantic understanding of a question to provide an answer, have been rather limited. Models oftentimes are not able to contextu- ally combine image and text representations. One technique that improves reasoning capabilities in large-language models (LLMs) includes in-context learning for eliciting reasoning such as chain-of- thought prompting (Wei et al., 2023), decompos- ing tasks (Zhou et al., 2023) or composing stored facts in weights (Press et al., 2023). Fine-tuning on datasets with rationales (Magister et al., 2023; Hsieh et al., 2023) has been shown to be effective for smaller models. In this work, we tackle improv- ing reasoning capabilities in VLMs through better learned image representations, followed by fine- tuning on synthetic datasets with reasoning traces generated by more capable LLMs. We also ex- plore a hybrid online setup for numerical reasoning refinements. We empirically show that this indeed improves per- formance through experiments on ChartQA (Masry et al., 2022). Visual-question answering on charts quantifies the ability of a VLM to reason using com- plex information presented. Oftentimes answering the question requires implicit or explicit informa- tion extraction, followed by intermediate grouping or computations using the extracted information, and reasoning with the final quantities, as shown in Figure 1. Vision-language models (VLMs) such as PaLI-X and PaLI-3 are hybrid model architectures which use a vision and a language backbone to solve vi- sual tasks (Chen et al., 2023b,c). The training recipe typically involves a pre-training stage fo- arXiv:2403.12596v1 [cs.CL] 19 Mar 2024 Question: What's the difference between the highest value of the red line and the lowest value of the green line? Answer: 79 Fig. 1: Example from the ChartQA validation set. cused on learning a good internal representation, followed by a downstream fine-tuning stage. Chen et al. (2023c) note that PaLI-3 falls behind PaLI- X on ChartQA likely due to its limited reasoning capabilities. Results presented in this work sug- gest that the lack of a pre-training task for learning better chart representations, as done in Liu et al. (2023b), may be another reason. Enhancing the reasoning capabilities of large lan- guage models (LLMs) such as PaLM-2 (Anil et al., 2023) or GPT-4 (OpenAI, 2023) is a very active research area. While reasoning is considered an emerging property with scale (Wei et al., 2022), Press et al. (2023) argue that simply scaling only en- ables better memorization of knowledge and does not enable composing multiple stored facts into an answer. On the other hand, prompting techniques enacting complex reasoning on downstream tasks have been shown to be very effective (Wei et al., 2023) (Zhou et al., 2023). 
Transferring reasoning capabilities from large to small models reduces serving costs while increasing task performance. Hsieh et al. (2023) have introduced an effective multi-task framework which enables small models to outperform their much larger counterparts using less data. They do so by leveraging rationale generation as a separate task, instead of more standard distillation approaches, which first infer the rationale, followed by the answer (Magister et al., 2023). We apply this framework for the first time on multimodal tasks.

Contributions. Our main results can be summarized as follows: (i) we introduce an efficient recipe consisting of a pre-training task and a fine-tuning task with synthetic datasets using a multi-task setup for improving reasoning capabilities; (ii) we obtain SoTA performance by significantly improving PaLI-3 performance on the ChartQA benchmark with our recipe, using 10x fewer parameters than prior work; (iii) we perform numerous ablation experiments quantifying the impact of the techniques used in our recipe.

The remainder of this paper is structured as follows. Section 2 describes related work, followed by Section 3, which introduces the construction of the training datasets. Section 4 illustrates our novel pre-training and fine-tuning recipe, followed by Section 5, describing the experimental setup and main results. Lastly, Section 8 delivers a conclusion and recommendations for future work, followed by Section 9, where we acknowledge limitations of the current work.",
"main_content": "VLM landscape Vision-language models usually combine a vision backbone with a language backbone. Frequently it is a Vision Transformer (ViT) (Dosovitskiy et al., 2021) coupled with a Large Language Model via an encoder-decoder (Chen et al., 2023b) or decoder-only (Alayrac et al., 2022) architecture. More recently, models such as Fuyu-8B (Bavishi et al., 2023) explore projecting the image directly through the language backbone. In this work we extend PaLI-3, an encoder-decoder architecture with ViT-3B as vision and UL2-2B as language backbones. We refer the reader to Chen et al. (2023c) for a complete overview. PaLI-3 is a SoTA model and hence we decided to build on top of it to further focus on improving the results with our methods. Existing approaches for chart understanding The task of answering questions on charts is, alongside documents and infographics, part of a broader set of tasks commonly referred to visually-situated language understanding, where text and image cannot be treated separately (Lee et al., 2023). Finetuned models on downstream ChartQA include PaLI-3 (Chen et al., 2023c), MatCha (Liu et al., 2023b) and UniChart (Masry et al., 2023). Among these, UniChart takes the most similar approach to ours, pre-training a chart image encoder as vision backbone and BART decoder (Lewis et al., 2019) as language backbone. Alternatively, Liu et al. (2023a) took the approach of decomposing question-answering into first translating the chart into a table, then querying an LLM in a plug-andplay fashion. Here our main focus is on fine-tuned self-contained models, however we show that a simple refinement using a much larger LLM, continues to improve performance as well. The role of upstream OCR systems A chart usually has an underlying equivalent tabular representation of the data. However, decoding the tabular representation remains a challenging problem. Alternatively, charts can be passed through an OCR system to extract an unstructured text representation of the image. (Luo et al., 2021) combine chart-specific extraction logic with an OCR system to extract key information from the charts. As intuitively expected, usually the use of an OCR system improves downstream quality. In this work, we assume the model only has access to the chart image. Improving chart reasoning with synthetic data Having the pre-training mixture specialize on chart tasks is effective (Liu et al., 2023b). We further extend the chart derendering task, which translates charts to code or to table. Similar to our approach, Methani et al. (2020) and Masry et al. (2023) have made use of programmatic templates to a synthesize complex QA pairs. However, instead of using an LLM to generate chart summaries as in Masry et al. (2023), here we use it to generate additional QA pairs with rationales. These generated examples together with synthetic programmatic examples are key in the pre-training and fine-tune stages of our model. 3 Dataset 3.1 Brief description of ChartQA ChartQA is one of the widely adopted visual question-answering benchmarks for reasoning capabilities of VLMs. The standard ChartQA benchmark has two components: (a) human set and (b) augmented generated set. The augmented set has been machine generated and is more simplistic in nature than the human set. The charts in the dataset come from four sources (Statista, Pew, Our World in Data and OECD). Gold tables are available for all sources, except for Pew, where the tables are inferred with ChartOCR model (Luo et al., 2021). 
Although we observed mistakes in the inferred tables, our method seems to be fairly resilient to them.

3.2 Synthetic Generation Methods

In this work, we use LLMs to synthesize additional examples paired with rationales generated using chain-of-thought prompting. We use the tabular representation of the charts present in the training set as a way to mediate the lack of vision input into LLMs. The data we synthesize increases the diversity of the original training set, especially with examples that require extracting multiple quantities from the chart and reasoning over them. We combine two approaches that focus on this type of example: an LLM for synthesizing rationales and extra question-answer pairs, and a programmatic approach for generating arithmetic question-answer pairs.

Rationale Generation We augment the original training set with synthetic explanations of why an answer is reached. We achieve this by using PaLM 2-S to predict a rationale for an input tuple of (table, question, answer) with a 4-shot prompt, as illustrated in Figure 4. We refer to this set as ChartQA-Rationale-S. By requesting the model to provide justifications for ground-truth answers, which are typically accurate, we witness a significant reduction in hallucinations. A notable exception is when the answer itself is wrong, which happens more frequently for the ChartQA augmented set than for the human set; we did not, however, perform a detailed investigation of this aspect in the generated training sets. An instance of a generated rationale can be seen in Figure 2.

[Figure 2: ChartQA-Rationale-S: for each example of the original training set, we synthesize a rationale based on the table, the question and the answer.
Question: "Find the difference between the largest value and the median of all values?"
Table: "TITLE | Change in death rate from tuberculosis, by age, Equatorial Guinea, 2004 / 70+ years old | 451.03 / 50-69 years old | 180.56 / 15-49 years old | 28.81 / Under-5s | 17.65 / 5-14 years old | 1.7"
Answer: 422.22
Rationale: "The table shows the change in death rate from tuberculosis by age in Equatorial Guinea in 2004. The largest value is 451.03 and the median is 17.65. The difference between the largest value and the median is 422.22."]

ExtraQA Generation We hypothesize that the original training set is too small to contain enough diversity in the examples to enable solving more complex QA questions such as the ones present in the human validation set. Therefore, we used a 1-shot prompt, illustrated in Figure 5, to generate additional examples covering the types of errors we identified by examining model performance on the validation set. The prompt is adapted from the one used by Liu et al. (2023a). An example of a generated sample can be seen in Figure 7. We used both PaLM 2-S and PaLM 2-L to generate the examples and refer to the respective datasets as ChartQA-ExtraQAR-S/L. We perform only lightweight filtering of generated examples that deviate from the imposed structure: if we cannot parse all three elements from the LLM response, we simply drop the example. However, we do not verify the generated examples for hallucinations or fluency, nor perform any other model-based verification.
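As a concrete illustration of this lightweight filtering, the sketch below shows one way to parse the (question, answer, rationale) triple out of an LLM response and drop malformed generations. The field labels and regular expression are assumptions for illustration; the paper does not specify its parsing code.

```python
import re
from typing import Optional

# Hypothetical response layout: "Question: ...\nRationale: ...\nAnswer: ..."
FIELDS = ("Question", "Rationale", "Answer")

def parse_synthetic_example(response: str) -> Optional[dict]:
    """Extract the three required fields; return None (drop) if any is missing."""
    example = {}
    for field in FIELDS:
        match = re.search(rf"{field}:\s*(.+?)(?=\n(?:{'|'.join(FIELDS)}):|\Z)",
                          response, flags=re.S)
        if match is None or not match.group(1).strip():
            return None                      # structure deviates -> drop example
        example[field.lower()] = match.group(1).strip()
    return example

# Keep only well-formed generations; no hallucination or fluency checks are
# applied, matching the lightweight filtering described above.
def filter_generations(responses):
    return [ex for ex in (parse_synthetic_example(r) for r in responses) if ex]
```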
ArithmeticQA Generation It is well known that large language models have difficulty performing arithmetic computations accurately. For ChartQA, this is particularly exacerbated by the fact that the small training dataset is inadequate for the specific kinds of arithmetic questions one can ask about charts (as represented by the test set). We programmatically create examples which either require numeric reasoning or a comparative analysis of multiple chart elements; examples are illustrated in Figure 8 and Figure 9. We abstracted the questions into templates and used a fixed set of mathematical operations such as median, max, and min. For each template we created a rationale that teaches the model a plan for solving the arithmetic problem; for example, computing the mean requires first looking up the values, then adding them up, and finally dividing the total by the count. For each type of arithmetic we created multiple templates, both for the questions and for the rationales. The source data we used are only the ChartQA human examples, using the available tables. The types of questions and their counts can be found in Table 1.

Question Type | Count
Mean | 235K
Subtraction | 90K
Other | 32K
Total | 357K

Table 1: Examples are mostly means or subtractions.

3.3 Resulting Dataset

The resulting dataset, described in Table 2, is roughly 20x larger than the original one, with further details on its statistics in Section D. Sampling was done using greedy decoding with temperature τ = 0. We used both the augmented and human sets to generate examples.

Dataset | Hum # | Aug # | Question type | Total | Rate
ChartQA-Rationale-S | 7398 | 20901 | R [13%], V [11%], C [43%], B [33%] | 28.3K | 15%
ChartQA-ExtraQAR-S | 23261 | 69433 | R [57%], C [43%] | 92.7K | 15%
ChartQA-ExtraQAR-L | 16388 | 50468 | R [60%], C [40%] | 66.9K | 30%
ChartQA-ArithmQAR | 357000 | - | C [100%] | 357.0K | 40%
ChartQA-Synth (Total) | | | | 544.9K |

Table 2: Overview of the synthetic dataset, which is 20x larger than the original one. The suffix denotes the size of the PaLM 2 model used. The rate refers to the final mixture. The categorization of question types follows Masry et al. (2022): Retrieval, Visual, Compositional, or Both visual and compositional.

PaLM 2-S vs. 2-L The same prompt was used for all examples in the synthetic dataset. We note that using samples from both LLMs improves performance, but ablation studies do not indicate that one is better than the other. We hypothesize that diversity matters more than model size, though we have not investigated sampling strategies.

4 Method

Our work builds on top of the PaLI-3 architecture and pre-training recipe, which consists of two backbones, a Vision Transformer ViT-2B and a Text Encoder-Decoder UL2-3B. Our starting point is the recipe described by Chen et al. (2023c): a uni-modal pre-training stage trains the vision encoder using a contrastive objective (the SigLIP loss), while the language encoder-decoder is pre-trained using the UL2 loss; both backbones are then pre-trained jointly in a multi-modal stage; lastly, a resolution-increase stage enables the vision encoder to work with 812x812-resolution images. We continue pre-training from this checkpoint.

4.1 Pre-training: Chart2Table Mixture

Extending the work done by Liu et al. (2023a), we use a chart-to-table dataset mixture to continue pre-training with the ViT backbone unfrozen, which facilitates learning an internal representation of the chart. We do not explicitly use the tabular conversion further downstream.

Dataset For learning this representation, we combine several chart-to-table derendering tasks into a mixture: (1) synthetic chart-to-table data similar to the synthetic mixture introduced by Liu et al. (2023a).
We traverse different combinations of plotting options in matplotlib and seaborn to randomly plot tables from Wikipedia into charts of different layouts. (2) The chart-to-table mixture introduced by Masry et al. (2023). (3) The chart-table pairs from the train set of DVQA (Kafle et al., 2018). (4) The chart-table pairs from the train set of TaTA (Gehrmann et al., 2022). (5) The chart-table pairs introduced in the Benetech Making Charts Accessible Kaggle challenge [1]. A complete listing of data sources, sampling weights, and numbers of examples is shown in Table 3. The existing table representation is used as is from the datasets or, as described earlier, for a small fraction, tables are created programmatically. Tables are also normalized to a standardized format.

[1] https://www.kaggle.com/competitions/benetech-making-graphs-accessible

Component | Rate | Size
Synthetic | 44.0% | 1.2M
UniChart | 39.5% | 612K
DVQA | 3.2% | 200K
ChartQA | 3.2% | 22K
TaTa | 3.2% | 6.7K
Chart2Text | 3.2% | 24K
Benetech Challenge | 3.2% | 21K
PlotQA | 0.5% | 224K
Total | | 2.37M

Table 3: Pre-training datasets for learning chart representations include examples from numerous tasks that pair chart images with table representations.

4.2 Fine-tuning: Multi-task Loss

After the pre-training stage, which enables the ViT backbone to work better with charts, we use the synthetic data to fine-tune the model for the downstream task. We investigate two ways of incorporating the rationales available in the extended dataset.

The first is to change the task target from "answer" to "rationale, answer", which has been shown to be effective by Magister et al. (2023). We refer to this approach as the singletask setup. However, it increases inference time by predicting the rationale, together with an increased sequence length during training; an unintended side effect of training to jointly predict rationales and answers is that rationale tokens become as important as the answer tokens.

The second is inspired by Hsieh et al. (2023), which addresses both concerns by constructing a multi-task setup where the answer and rationale are treated as independent tasks, distinguished by different prefixes similar to T5 (Raffel et al., 2023), such as "Rationale:" and "Question:". The training loss balances the strength of the two tasks using a hyper-parameter λ:

Loss = (1 − λ) · Loss_ans + λ · Loss_rat

Our experiments are the first application of this setup to a multimodal task. We further confirm the observation from text domains that not only does inference time remain constant, but quality also improves.
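For concreteness, the following is a minimal PyTorch-style sketch of this multi-task objective, assuming answer and rationale targets are tokenized separately and scored by the same model under the two task prefixes. The `model` call signature and batch keys are illustrative placeholders, not the authors' code.

```python
import torch
import torch.nn.functional as F

def multitask_loss(model, batch, lam: float = 0.5):
    """Multi-task loss of Section 4.2: Loss = (1 - lam) * L_ans + lam * L_rat.
    Rationale tokens never compete with answer tokens inside a single target
    sequence, because each target is scored under its own task prefix."""
    ans_out = model(pixel_values=batch["image"],
                    input_text=["Question: " + q for q in batch["question"]],
                    labels=batch["answer_ids"])
    rat_out = model(pixel_values=batch["image"],
                    input_text=["Rationale: " + q for q in batch["question"]],
                    labels=batch["rationale_ids"])

    # Token-level cross-entropy over (batch * sequence) positions.
    loss_ans = F.cross_entropy(ans_out.logits.flatten(0, 1),
                               batch["answer_ids"].flatten(), ignore_index=-100)
    loss_rat = F.cross_entropy(rat_out.logits.flatten(0, 1),
                               batch["rationale_ids"].flatten(), ignore_index=-100)
    return (1 - lam) * loss_ans + lam * loss_rat
```

At inference time only the "Question:" prefix is used, which is why this setup keeps inference cost identical to the answer-only baseline.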
5 Experiments

We describe the general learning hyper-parameters for the pre-training and fine-tuning stages, followed by an interpretation of the results.

5.1 Setup

Pre-training We continue pre-training the PaLI-3 model with the ViT unfrozen on the Chart2Table data mixture for train_steps=6K, batch_size=256, and learning_rate=5e-3 with normalized square-root decay using decay_factor=2e-6 and dropout_rate=0.1.

Fine-tuning We then freeze the ViT encoder and continue fine-tuning on the synthetic ChartQA dataset for train_steps=5K, batch_size=256, and learning_rate=1e-3 with linear decay using decay_factor=1e-4 and dropout_rate=0.1.

Multitask We use λ = 0.5 and do not find significant differences when using other values.

5.2 Results on ChartQA

We validate the effectiveness of the different techniques by reporting downstream task performance on the ChartQA test set. All following experiments are on PaLI-3.

Pre-training Continuing the pre-training stage for the PaLI-3 model using the Chart2Table mixture enables learning a better general representation of charts. We intuitively expect this better representation to enable the model to more accurately identify quantities in the images. We confirm this first through the results reported in Table 4; later, as we scale the dataset size, we show that it continues to play an important role.

Pre-training Strategy | Avg. | Hum. | Aug.
Original PT (Chen et al., 2023c) | 70.00 | - | -
Chart2Table PT (our run) | 70.84 | 48.96 | 92.72

Table 4: PaLI-3 performance on ChartQA (RA%) slightly increases with our chart-to-table pre-training phase. As expected, the increase is predominantly in the augmented set, given that the pre-training mixture is constructed synthetically as well.

Singletask vs. Multitask We first study the effect of introducing rationales using only ChartQA-Rationale-S, which adds rationales to the original ChartQA dataset. When using the rationales in the singletask setup, the performance difference is not significant compared to not using them. However, when used in the multitask setup, we note a quality improvement, particularly noticeable on the more difficult human set. We refer to the former as Singletask-Rationale and to the latter as Multitask-Rationale in Table 5.

Fine-tuning setup | Avg. | Hum. | Aug.
C2T PT + Singletask-Rationale | 70.80 | 49.36 | 92.24
C2T PT + Multitask-Rationale | 71.72 | 50.72 | 92.72

Table 5: Multitask performance (ChartQA RA%) stands out compared to Singletask on the more difficult human-written set.

We hypothesize that the improvement comes from better use of the rationales, guiding the model to internally produce a form of reasoning before producing the final answer. This is done without paying the cost of predicting the rationale tokens.

Learning with augmented dataset We use the ChartQA-Synth dataset from Table 2 to study the extent to which we can transfer reasoning capabilities from PaLM-2 to PaLI-3. We perform an ablation experiment to understand the role of the extra questions, rationales, and pre-training stage, and report our results in Table 6. We denote experiments using the original pre-trained checkpoint as Orig PT, and those on the checkpoint further pre-trained with chart-to-table translation as C2T. We report a clear improvement, further strengthening our observation that the internal representation plays an important role.

Fine-tuning Setup | Avg. | Hum. | Aug.
Orig PT + Singletask-ExtraQAR | 72.43 | 53.20 | 91.67
Orig PT + Multitask-ExtraQAR | 73.15 | 55.20 | 91.10
C2T PT + ExtraQA (w/o Rationale) | 74.67 | 56.39 | 92.96
C2T PT + Singletask-ExtraQAR | 75.16 | 55.84 | 94.48
C2T PT + Multitask-ExtraQAR | 75.36 | 56.80 | 93.92
C2T PT + Singletask-ChartQA-Synth | 76.60 | 59.04 | 94.16
C2T PT + Multitask-ChartQA-Synth | 77.28 | 60.88 | 93.68

Table 6: Ablation results (ChartQA RA%) confirm the importance of each step in our recipe. ChartQA-Synth is the mixture described in Table 2.

We also ran an experiment without rationales, but with all of the synthetically generated QA pairs. The increase in examples alone improves over the original ChartQA performance reported in Table 4. However, the use of rationales continues to improve quality for both the singletask and multitask setups, and we observe that in high-data regimes there is no longer a significant difference between the two.
Given the neutral impact of the multi-task setup on inference time, paired with its slightly improved performance on the human-written queries of ChartQA, multi-task is the preferred option in practice. Further, we refer to the best-performing fine-tuned setup in Table 6 as ChartPaLI-5B.

5.3 Results on FigureQA and PlotQA

ChartQA is currently the most challenging benchmark. To show that our method is general, we investigate performance on the related chart-understanding tasks FigureQA and PlotQA. We study three operation regimes: (i) zero-shot: no task-specific pre-training or fine-tuning, (ii) quick adaptation: 1K fine-tuning steps, and (iii) convergence: 5K fine-tuning steps. We report relaxed accuracy on 10K examples from the validation set for FigureQA (Table 8) and from the test set for PlotQA (Table 9).

Model | ZShot (v1 | v2) | Quick (v1 | v2) | Conv (v1 | v2)
PaLI-3 (original) | 41.9 | 42.4 | 57.2 | 58.1 | 89.9 | 89.3
ChartPaLI-5B | 51.0 | 51.2 | 92.7 | 93.0 | 96.3 | 96.2

Table 8: ChartPaLI-5B exhibits strong generalization (RA%) on the FigureQA task, for which no examples are present in pre-training or fine-tuning.

For PlotQA, images from the training subset are present in our pre-training mixture, while validation and test subset images are not. Therefore, we do not study zero-shot performance, as training images would give an unfair advantage.

Model | Quick adapt. (v1 | v2) | Convergence (v1 | v2)
PaLI-3 (original) | 62.0 | 15.7 | 71.5 | 23.6
ChartPaLI-5B | 79.1 | 53.3 | 86.0 | 70.7

Table 9: ChartPaLI-5B also generalizes strongly (RA%) on the PlotQA task, whose validation and test images are unseen during training.

ChartPaLI-5B outperforms PaLI-3 in all operation regimes. In general, our recipe significantly increases chart-understanding performance after only a few quick adaptation steps. In particular, we reach the SoTA performance regime on FigureQA (roughly 96%+) and report very strong relative performance on the difficult PlotQA v2 (roughly +47.1 points at convergence time).

5.4 Errors and Challenges

To understand the effect of our method and investigate further opportunities for improvement, we manually inspected predictions on the ChartQA validation set, comparing the baseline PaLI-3 outputs with those of the model fine-tuned with our recipe. We report our findings below.

General The model predicts the rationale or the answer, depending on the task prefix. (Although the table is not used during inference, the rationales contain the word "table" due to its use in the prompts.) Because the answer is not conditioned on the rationale, the two can differ. One general improvement area we note is the ability to extract the necessary intermediate quantities (Fig. 11) and operate with them (Fig. 12).

[Fig. 3: Correct numeric approximations on answers.
Question: What is the total of Macy's, Bloomingdale's and Bluemercury brand stores worldwide in the 2019?
Answer [baseline]: 1122
Answer [ours]: 832 (inexact, but correct because it is within 5% of the ground truth)
Rationale [ours]: "The table shows the number of Macy's, Bloomingdale's and Bluemercury brand stores worldwide in the years 2015, 2016, 2017, 2018, 2019 and 2020. In 2019, Macy's had 613 brand stores, Bloomingdale's had 55 brand stores and Bluemercury had 171 brand stores. Therefore, the total number of Macy's, Bloomingdale's and Bluemercury brand stores worldwide in the 2019 is 613 + 55 + 171 = 824."]

Numerical reasoning Despite improvements, computation of mathematical expressions continues to be very challenging. The rationales correctly extract (Fig. 3) or infer missing chart values (Fig. 13); however, the computed value is frequently incorrect.
This does not always prevent the final answer from being correct (Fig. 15). This is in line with observations by Wang et al. (2023a), who also conclude that corruption of the chain-of-thought reasoning trace does not always degrade the final answer. Due to the frequency of this numeric computation error, we explore a simple refining technique in Section 5.5.

Fine-tuned VLMs (up to 55B) | Source | ChartQA (RA%)
Fuyu-8B | our eval (Bavishi et al., 2023) | 42.1
Pix2Struct-1.3B | (Lee et al., 2023) | 58.6
MatCha-300M | (Liu et al., 2023b) | 64.2
UniChart-201M | (Masry et al., 2023) | 66.2
ChartLlama-13B | (Han et al., 2023) | 69.6
PaLI-5B | (Chen et al., 2023c) | 70.0
PaLI-55B | (Chen et al., 2023b) | 70.9
PaLI-55B (Soft Mixture of Low-rank Experts) | (Wu et al., 2023) | 73.8
ChartPaLI-5B | our work | 77.3
Hybrid VLMs/LLMs (undisclosed size) | |
GPT-4V [4-shot with CoT] | (OpenAI, 2023) | 78.5
DePlot-300M + FlanPaLM + Codex with PoT SC | (Liu et al., 2023a) | 79.3
Gemini Ultra [0-shot] | (Gemini Team, Google, 2023) | 80.8
ChartPaLI-5B + PaLM 2-S PoT SC @ 5 | our work | 81.3

Table 7: State-of-the-art performance among fine-tuned VLMs on the ChartQA benchmark.

Color reasoning Our synthetic data has no color metadata, as only the table was used in the generation process. Therefore, the model continues to struggle when the reasoning trace requires working with colors (Fig. 10). This is an area worth investigating next, with applicability well beyond the specifics of chart understanding.

Complex reasoning Reasoning about multiple values and checking for a matching condition that requires arithmetic computations is another example of a remaining difficult task (Fig. 14, Fig. 16). The increased complexity stemming from the internal inability of VLMs to perform numeric operations, paired with enumerating chart elements through semantic descriptions, is likely fairly difficult to overcome without the use of external tools.

Task leakage Due to the training methodology, we observe that when conditioned with the Question task prefix, the model may behave similarly to when the Rationale prefix is used: instead of directly outputting an answer, it may generate a longer explanation that resembles a rationale or a fragment of one.

5.5 Refinement with Program of Thoughts

Despite the improved ability to construct numeric equations using the required values from the charts (Fig. 3), the exact numeric computation continues to be wrong. This is unsurprising, since both the visual and the language backbone treat numbers as tokens; making the problem worse, the character sequence forming a number may be split and encoded in arbitrary chunks. Chen et al. (2023a) proposed replacing chain-of-thought (CoT) prompting with program-of-thoughts (PoT) to delegate arithmetic computation to a program interpreter. This has previously been explored by Liu et al. (2023a), although in a much more computationally involved setup than the one we describe here.

Through our fine-tuning approach, both the singletask and multitask setups can produce CoT rationales for which an LLM prompted with PoT can write equivalent code that performs the numeric computation. We take the approach of using a simple 4-shot prompt (Fig. 6), constructed on the validation set, to generate code with PaLM 2-S for the numeric computation present in a rationale. We run this online refinement only if the rationale contains an arithmetic operator ('+', '-', '/' or '*').
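A minimal sketch of this trigger-and-refine step is given below. The prompt assembly, the `llm_generate` callable, and the assumption that the generated program stores its result in an `answer` variable are all illustrative placeholders; the paper specifies only the 4-shot prompt (Fig. 6) and the operator check.

```python
import re

ARITHMETIC = re.compile(r"[+\-*/]")

def refine_rationale(rationale: str, pot_prompt: str, llm_generate) -> str:
    """Online PoT refinement (Section 5.5): if the rationale contains an
    arithmetic operator, ask an LLM to emit code for the computation and
    execute it; otherwise return the rationale's answer unchanged."""
    if not ARITHMETIC.search(rationale):
        return rationale
    code = llm_generate(pot_prompt + "\n\nRationale: " + rationale)  # 4-shot prompt
    scope: dict = {}
    exec(code, {}, scope)        # sketch only; sandbox generated code in practice
    return str(scope.get("answer", rationale))
```

The self-consistency variant described next would sample several rationales, refine each, and majority-vote over the resulting answers.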
Self-consistency is an effective way to improve chain-of-thought rationales by selecting an answer with majority voting from a pool of sampled rationales (Wang et al., 2023b). We apply this approach by sampling with temperature τ_Rat = 0.4 and generating N = 5 rationales, which are then refined with PaLM 2-S using temperature τ_Ref = 0.0.

Setup | Avg. | Hum. | Aug.
ChartPaLI-5B (from Table 6) | 77.28 | 60.88 | 93.68
ChartPaLI-5B + PaLM 2-S PoT | 80.80 | 67.92 | 93.68
ChartPaLI-5B + PaLM 2-S PoT SC @ 5 | 81.32 | 68.96 | 93.68

Table 10: PoT refinement (ChartQA RA%) improves performance on the human set, while not affecting the augmented set.

The results presented in Table 10 highlight the utility of the method, particularly with K = 5 for self-consistency. They also highlight the simplicity of the augmented set compared to the human set; the refinement has no impact on the augmented set, either because it contains no arithmetic computations or because they are simple enough for the fine-tuned VLM to already get right.

6 Performance Overview

We position our results relative to prior work in Table 7. We extracted the results from the referenced papers, with the exception of the Fuyu-8B model (Bavishi et al., 2023), which we evaluated ourselves, as the authors have not provided results on the ChartQA benchmark. Our work significantly outperforms prior models specialized on the ChartQA benchmark. Concurrent to our work, ChartLlama-13B also uses synthetically generated data, but with a fairly different approach. Although outside the scope of our work, the approach taken to train the much smaller MatCha and UniChart models may be combinable with the approach presented here, potentially leading to improved performance with even fewer computational resources.

The method introduced in this work can be uniquely combined with much larger models through rationale generation. As shown in the results, rationales generated by VLMs can suffice for larger LLMs to operate on effectively, providing a text representation of the chart conditioned on the question. Our method matches the recently introduced Gemini Ultra model and outperforms previous approaches.

7 Future Work

We highlighted several drawbacks of our approach in Section 5.4. The training mixtures have no examples in which colors are used to construct reasoning traces. Bootstrapping such examples, for instance by running a smaller model with questions that extract color-related information and then combining them, would likely improve quality. Very complex reasoning examples are also limited; specifically, semantically identifying chart elements and performing numeric computations to solve questions would further improve quality.

8 Conclusion

We introduced a novel recipe that significantly improves the reasoning capabilities of VLMs. Applied to PaLI-3, our method significantly outperforms even the 10x larger PaLI-X on the ChartQA benchmark, establishing a new state-of-the-art. We demonstrated how the pre-training stage improves downstream performance. Our synthetic data generation technique, coupled with the multitask setup, successfully transfers reasoning capabilities from larger LLMs to smaller VLMs. Moreover, our method enables a computationally more expensive setup in which predicted rationales are refined using program-of-thoughts with PaLM 2-S.
The composite solution outperforms Gemini Ultra and GPT-4V on the ChartQA benchmark.

9 Limitations

We acknowledge the limitations of our approach.

Table representation Although our final model works on pixels only, our synthetic data generation method requires access to a table version of the charts in order to leverage LLMs for constructing rationales, additional question/answer pairs, and related assets for the training datasets. Although inferred tables or the output of an OCR model may to some degree replace gold tables, doing so will likely affect final model quality.

PaLI-3 The pre-training and fine-tuning recipe for synthetic data creation, as well as the training methodology, should be broadly applicable to open-source models as well. However, we acknowledge that PaLI-3, a proprietary flavor of VLMs, is not as good a choice as an openly available model.

Risks associated with synthetic dataset Since the method for constructing our dataset relies on LLMs, it carries certain inherent risks, for example that of hallucination. Although our technique extends the publicly available ChartQA dataset, additional care needs to be taken when planning to openly release models or datasets built this way. Although the metrics are state-of-the-art, it cannot be guaranteed that model outputs cannot be abused if trained in this manner.

Reasoning limitations We acknowledge limitations stemming from the empirical prompt-creation process, which is based on human inspection of model errors. The LLM capabilities used for synthetic data creation, although impressive, continue to have numerous limitations, as reported by the community.

10 Acknowledgements

We thank Srinivas Sunkara and Maria Wang for their contributions to the infrastructure that enabled us to run these experiments. Further, we thank Xi Chen for his tireless support and insights into PaLI-3 details and training recipes, and Cheng-Yu Hsieh and Yasuhisa Fujii for detailed discussions on the multi-task setup. Daniel Keysers and Radu Soricut provided detailed feedback that significantly improved the paper. Matt Sharifi and Ewa Dominowska provided senior leadership support for this work. Lastly, feedback from anonymous reviewers rPKR and 453J motivated additional experiments, further strengthening the contribution of this work by showcasing that the method is generally applicable."
},
{
"url": "http://arxiv.org/abs/2402.14809v2",
"title": "CriticBench: Benchmarking LLMs for Critique-Correct Reasoning",
"abstract": "The ability of Large Language Models (LLMs) to critique and refine their\nreasoning is crucial for their application in evaluation, feedback provision,\nand self-improvement. This paper introduces CriticBench, a comprehensive\nbenchmark designed to assess LLMs' abilities to critique and rectify their\nreasoning across a variety of tasks. CriticBench encompasses five reasoning\ndomains: mathematical, commonsense, symbolic, coding, and algorithmic. It\ncompiles 15 datasets and incorporates responses from three LLM families.\nUtilizing CriticBench, we evaluate and dissect the performance of 17 LLMs in\ngeneration, critique, and correction reasoning, i.e., GQC reasoning. Our\nfindings reveal: (1) a linear relationship in GQC capabilities, with\ncritique-focused training markedly enhancing performance; (2) a task-dependent\nvariation in correction effectiveness, with logic-oriented tasks being more\namenable to correction; (3) GQC knowledge inconsistencies that decrease as\nmodel size increases; and (4) an intriguing inter-model critiquing dynamic,\nwhere stronger models are better at critiquing weaker ones, while weaker models\ncan surprisingly surpass stronger ones in their self-critique. We hope these\ninsights into the nuanced critique-correct reasoning of LLMs will foster\nfurther research in LLM critique and self-improvement.",
"authors": "Zicheng Lin, Zhibin Gou, Tian Liang, Ruilin Luo, Haowei Liu, Yujiu Yang",
"published": "2024-02-22",
"updated": "2024-03-08",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "The advent of large language models (LLMs) has revolutionized artificial intelligence, showcasing re- markable proficiency across diverse tasks (Brown et al., 2020; Ouyang et al., 2022; Achiam et al., 2023; Chowdhery et al., 2023; Touvron et al., * Equal contribution. \u2020 Corresponding author. 1 Code and data are available at https://github.com/ CriticBench/CriticBench. LLaMa-2-7b Vicuna-7b LLaMa-2-13b Vicuna-13b Vicuna-33b LLaMa-2-70b GPT-3.5 GPT-4 Model 0 20 40 60 80 100 Percentage 16.6% 21.5% 35.1% 34.0% 28.2% 38.7% 52.8% 70.1% 2.7% 7.9% 5.4% 3.1% 3.5% 4.8% 6.1% 2.5% 3.1% 2.1% 2.7% 2.9% 4.8% 3.8% 4.6% 2.9% 3.6% 9.1% 5.8% 20.6% 17.8% 12.1% 14.6% 21.9% 13.9% 8.3% 7.1% 46.2% 50.2% 46.0% 41.8% 36.4% 28.0% 22.9% 15.1% generation & critique & correction generation & critique generation & correction critique & correction generation critique correction fail Figure 1: The distribution of the correct sets and in- tersection for model generation (G), critique (Q), and correction (C) on CRITICBENCH, with \"&\" represent- ing intersection. 2023; Team et al., 2023). Their potential for self- evaluation and improvement is particularly fasci- nating, with studies suggesting that LLMs can ef- fectively assess model outputs (Liu et al., 2023; Fu et al., 2023a; Chiang et al., 2023), and even engage in self-reflection and correction (Bai et al., 2022; Saunders et al., 2022; Madaan et al., 2023; Gou et al., 2024a). This capability rests on the LLMs\u2019 critical reasoning skills, which involves (1) critique - identifying issues in provided responses, and (2) correct - proposing suitable modifications. However, a comprehensive understanding of LLMs\u2019 critical reasoning abilities remains elusive. Prior research (Lightman et al., 2023; Li et al., 2024; Luo et al., 2024) has focused on a narrow range of models and datasets and has yielded incon- sistent findings (Madaan et al., 2023; Huang et al., 2024), underscoring the need for a thorough inves- tigation. It is imperative to systematically gauge LLMs\u2019 proficiency in critiquing and correcting pro- arXiv:2402.14809v2 [cs.CL] 8 Mar 2024 20 40 60 80 100 Math Commonsense Symbolic Code Algorithmic Generation 20 40 60 80 100 Math Commonsense Symbolic Code Algorithmic Critique 20 40 60 80 95 Math Commonsense Symbolic Code Algorithmic Correction LLaMa-2-7b LLaMa-2-13b LLaMa-2-70b Vicuna-7b Vicuna-13b Vicuna-33b Mistral-7b Mixtral-8x7b Mixtral-8x7b inst Phi-2 GPT-3.5 GPT-4 Figure 2: Models\u2019 performance on CRITICBENCH. vided answers. To address these challenges, we present CRIT- ICBENCH, a comprehensive benchmark designed to evaluate the critique and correction skills of LLMs. CRITICBENCH encompasses 15 datasets spanning five task categories: mathematical, com- monsense, symbolic, coding, and algorithmic. We leverage eight models from the LLaMA, Vicuna, and GPT series to create the responses that are to be critiqued and corrected. Moreover, we in- clude GPT-4 and undertake manual data reviews to ensure data quality, culminating in 3.8K data instances (see Table 2 for detailed data collection information). We conduct extensive experiments on CRITICBENCH with 17 LLMs, including closed- source models GPT-3.5 and GPT-4, open-source models like Phi-2 (Javaheripi et al., 2023), the LLaMA family (Touvron et al., 2023), the Vicuna family (Chiang et al., 2023), and the Mistral family (Jiang et al., 2023, 2024), as well as two models specifically trained for critiquing, namely Auto-J (Li et al., 2024) and UltraCM (Cui et al., 2023). 
Partial results are shown in Figure 2. Our contributions are summarized as follows:

• We present CRITICBENCH, a benchmark comprising five different domains to systematically assess critique and correction reasoning in various LLMs. Utilizing CRITICBENCH, we investigate the impact of base models, training strategies, prompt strategies, and oracle feedback on the critique-correct reasoning performance of LLMs.

• We reveal that LLMs exhibit a linear relationship in their generation, critique, and correction (GQC) capabilities, despite limited training on critique tasks.

• The type of task has a significant impact on correction performance. LLMs struggle more with incorrect answers in detail-oriented tasks like algorithmic tasks than in logic-centric tasks like code generation.

• By comparing the sets of questions where models correctly generate, critique, and correct, we find that a model's knowledge is inconsistent across these three tasks, with stronger models showing more consistency in GQC capabilities.

• We observe that although stronger models are better at critiquing, models with weaker generative abilities can still accurately evaluate responses from stronger models, sometimes even outperforming the latter in self-critique.",
"main_content": "LLM Reasoning The advent of few-shot learning and Chain of Thought (CoT) prompting (Brown et al., 2020; Wei et al., 2022) has significantly improved the performance of LLMs in reasoning tasks. Since then, various advanced prompting methods (Wang et al., 2022; Zhou et al., 2022; Fu et al., 2022; Zheng et al., 2023a) have achieved remarkable results, and researchers have proposed to use external tools like search engines (Nakano et al., 2021; Gou et al., 2024a) and Python interpreters (Chen et al., 2022; Gao et al., 2023; Gou (a) Question Collection (b) Response Collection (c) Response Annotation Original Dataset: AmbigQA, StrategyQA, HotpotQA, CSQA, TabMWP, AQuA. MATH, GSM8K, HumanEval, MBPP, Object Counting, Repeat Copy, Penguins, Colored Obiects, Date. Question Answer Question Prompt Valid Responses Valid Responses Filtering by Rules Source dataset distribution Model: ChatGPT GPT-4 Vicuna LLaMa 2 1. Rule-based answer matching 3. Manual Review 2. GPT-4 Evaluation Question: Ryan plants 2 flowers a day in his garden. After 15 days, how many flowers does he have if 5 did not grow? Type: Mathematical Reasoning Answer: 25 Source Dataset: GSM8K Solution: Ryan plants 2 flowers a day for 15 days, so he planted 2 * 15 = 30 flowers. If 5 did not grow, then he has 30 5 = 25 flowers. The answer is 25. Solution Label: True Model: GPT-4 Question: Ryan plants 2 flowers a day in his garden. After 15 days, how many flowers does he have if 5 did not grow? Type: Mathematical Reasoning Answer: 25 Source Dataset: GSM8K Response: Ryan plants 2 flowers a day for 15 days, so he planted 2 * 15 = 30 flowers. If 5 did not grow, then he has 30 5 = 25 flowers. The answer is 25. Response Label: True Model: GPT-4 Figure 3: An overview for the CRITICBENCH construction. et al., 2024b) to further augment LLM reasoning in various tasks (Schick et al., 2024; Ma et al., 2024). LLM Critiquing & Correction As LLMs continue to evolve in their capabilities across various domains, many studies seek to enhance performance by requiring LLMs to provide critiques of the generated responses in various forms, including utilizing internal feedback (Bai et al., 2022; Saunders et al., 2022; Welleck et al., 2022; Madaan et al., 2023; Zheng et al., 2023b), leveraging external feedback (Kim et al., 2023; Shinn et al., 2023; Gou et al., 2024a; Chen et al., 2024), and employing multiple models for critiquing and debating answers (Liang et al., 2023; Du et al., 2023; Yin et al., 2023). These studies all demonstrate the potential of LLMs in Critique-Correcting Reasoning. Additionally, some researchers have trained critique models to provide supervisory signals for generative models (Ke et al., 2023; Li et al., 2024; Ye et al., 2023; Wang et al., 2023b; Cui et al., 2023). However, these works focus on specific methods without providing a comprehensive evaluation and analysis of Critique-Correcting Reasoning, which also limits the development of this field. Critiquing Tasks Many researchers have advanced the field by creating critiquing datasets, covering areas such as text generation (Stiennon et al., 2020; Matiana et al., 2021), semantic understanding (Pougu\u00e9-Biyong et al., 2021), factuality (Thorne et al., 2018), alignment (Li et al., 2024), and mathematics (Lightman et al., 2023; Luo et al., 2024). However, these datasets are typically limited to specific tasks and models. 
In contrast, our work introduces CRITICBENCH to provide the first comprehensive and comparative analysis of LLMs\u2019 abilities in generation, critique, and correction (GQC). 3 CRITICBENCH 3.1 Overview of CRITICBENCH CRITICBENCH is designed to assess the two key aspects of LLMs\u2019 critical reasoning: critique and correction. By combining these two aspects, LLMs can critique a given response and apply corrective reasoning to produce an updated answer. In this section, we detail the principles and processes involved in the construction of CRITICBENCH; the construction process is illustrated in Figure 3. CRITICBENCH is designed to follow these collection principles: (1) it encompasses multiple task types, aimed at comprehensively showcasing the model\u2019s abilities; (2) it incorporates diverse models for response generation, promoting response variety; (3) it employs acknowledged datasets, enabling straightforward comparisons with the models\u2019 generation capabilities; and (4) it ensures data quality through both GPT-4 and manual review. [Figure 4: Evaluation process on CRITICBENCH.] 3.2 Question Collection This section describes the data collection methodology, following the defined principles. A specific quantity of data is extracted from each existing dataset, utilizing any relevant subset if available, or alternatively, selecting randomly from the dataset. Mathematical Reasoning We selected GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021), AQuA (Ling et al., 2017), and TabMWP (Lu et al., 2023) for mathematical reasoning. We utilized the existing subsets from Shi et al. (2022) and Lightman et al. (2023) respectively, randomly sampled 300 questions from TabMWP, and incorporated all questions from AQuA. Commonsense Reasoning To thoroughly evaluate the GQC ability of LLMs in commonsense reasoning tasks, we employ four datasets: CSQA (Talmor et al., 2019), AmbigNQ (Min et al., 2020), StrategyQA (Geva et al., 2021), and HotpotQA (Yang et al., 2018). We randomly sample 300 questions from CSQA, HotpotQA, and AmbigNQ, and include all questions in StrategyQA. Symbolic Reasoning To enrich the variety of symbolic reasoning question types, we utilized three datasets from BIG-Bench (Srivastava et al., 2023), namely Penguins, which requires understanding of tabular data; Colored Objects, which involves analyzing the relative positions, absolute positions, and colors of various objects; and Date, which involves understanding and calculating dates. For these three datasets, we used the complete sets. Code Generation For code generation, we selected MBPP (Austin et al., 2021) and HumanEval (Chen et al., 2021) to construct our dataset. For MBPP, we randomly sampled 300 questions, while for HumanEval, we used its complete set. Algorithmic Tasks We utilized Object Counting and Repeat Copy, sourced from BIG-Bench (Srivastava et al., 2023), to evaluate the model\u2019s ability to manage details. Object Counting involves enumerating items presented within questions, while Repeat Copy requires the generation of word sequences based on instructions. The complete sets of these tasks are employed to form the detail-oriented algorithmic task of CRITICBENCH. By utilizing the aforementioned datasets, we ensure that the questions in CRITICBENCH cover a diverse range of examination angles. 
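To make the collection recipe concrete, here is a minimal sketch of the subsampling logic (our illustration, not the authors\u2019 code; dataset loading and the reused GSM8K/MATH subsets are assumed to exist):

```python
# Illustrative sketch of the question-collection recipe above: some datasets
# are randomly subsampled to 300 questions, others are kept whole, and
# GSM8K/MATH reuse previously published subsets (loading is assumed).
import random

SAMPLED_300 = {"TabMWP", "CSQA", "HotpotQA", "AmbigNQ", "MBPP"}
FULL_SETS = {"AQuA", "StrategyQA", "Penguins", "Colored Objects", "Date",
             "HumanEval", "Object Counting", "Repeat Copy"}

def collect_questions(datasets, seed=0):
    """datasets: dict mapping dataset name -> list of question records."""
    rng = random.Random(seed)
    selected = []
    for name, questions in datasets.items():
        if name in SAMPLED_300:
            selected += rng.sample(questions, min(300, len(questions)))
        else:  # full sets, or the reused GSM8K/MATH subsets passed in directly
            selected += questions
    return selected
```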
The five types of tasks correspond to a variety of knowledge domains, with the detail-focused algorithmic tasks, the detail- and logic-encompassing mathematical reasoning, and the logic-focused code generation corresponding to different styles of reasoning processes. Specific data statistics can be seen in Appendix A. 3.3 Response Collection Following the collection of benchmark questions, we employ various LLMs, including GPT-3.5, GPT-4, LLaMa-2 (7B, 13B, and 70B variants), and Vicuna (7B, 13B, and 33B variants), to generate a response for each question, using greedy decoding. The details of the prompts used are available in Appendix E. Next, we filter out the responses that do not provide valid reasoning. We then apply a random sampling strategy to maintain a consistent number of model-generated responses across each dataset. 3.4 Response Annotation Response correctness is initially determined by rule-based matching, followed by a more detailed evaluation using GPT-4 to assess response precision. This includes flagging mathematically correct answers with incorrect reasoning and recognizing near-correct commonsense responses. Discrepancies between the GPT-4 evaluations and the initial annotations are resolved through manual review. During this review, we identified questions in the Date dataset that lack correct options, with detailed examples provided in Figure 6. Examples of annotations are provided in Appendix B. Table 1: Average performance on CRITICBENCH. Columns: Model | Type | Generation | Critiquing (ZS-AO / ZS-CoT / FS) | Correction (ZS-CoT / FS / FS (oracle)). The values in parentheses under \"Critiquing\" show comparisons to the Baseline critique score of random guessing (50.80); those under \"Correction\" show changes relative to the Baseline generation score of the original responses (48.37), with positive values indicating improvement and negative values decline. Type: BASE refers to the pretrained model, SIFT means its enhancement via Supervised Instruction Finetuning, RLHF denotes further training with Reinforcement Learning from Human Feedback, and CT represents Critique Training. A \"-\" marks settings that were not evaluated (BASE models are evaluated only in the FS setting).
Baseline | - | - | 50.80 (random guessing) | 48.37 (original responses)
Phi-2 | SIFT | 45.23 | 39.04 (-11.76) / 24.55 (-26.25) / 25.78 (-25.02) | 27.69 (-20.68) / 45.39 (-2.98) / 51.22 (+2.85)
LLaMa-2-7b | BASE | 31.66 | - / - / 41.33 (-9.47) | - / 42.27 (-6.10) / 51.01 (+2.64)
LLaMa-2-7b chat | RLHF | 34.22 | 60.47 (+9.67) / 46.81 (-3.99) / 42.31 (-8.49) | 21.49 (-26.88) / 38.51 (-9.86) / 51.87 (+3.50)
Vicuna-7b | SIFT | 31.95 | 6.45 (-44.35) / 11.80 (-39.00) / 40.56 (-10.24) | 32.73 (-15.64) / 41.31 (-7.06) / 51.56 (+3.19)
Mistral-7b | BASE | 47.37 | - / - / 55.70 (+4.90) | - / 42.61 (-5.76) / 53.23 (+4.86)
LLaMa-2-13b | BASE | 39.37 | - / - / 32.47 (-18.33) | - / 45.78 (-2.59) / 50.88 (+2.51)
LLaMa-2-13b chat | RLHF | 41.67 | 58.41 (+7.61) / 42.87 (-7.93) / 47.79 (-3.01) | 28.89 (-19.48) / 41.67 (-6.70) / 52.34 (+3.97)
Vicuna-13b | SIFT | 39.58 | 40.99 (-9.81) / 11.84 (-38.96) / 46.05 (-4.75) | 30.77 (-17.60) / 42.72 (-5.65) / 51.82 (+3.45)
Vicuna-33b | SIFT | 42.27 | 23.96 (-26.84) / 45.64 (-5.16) / 51.83 (+1.03) | 39.27 (-9.10) / 42.61 (-5.76) / 52.34 (+3.97)
LLaMa-2-70b | BASE | 55.53 | - / - / 52.48 (+1.68) | - / 46.93 (-1.44) / 55.35 (+6.98)
LLaMa-2-70b chat | RLHF | 51.53 | 67.64 (+16.84) / 53.20 (+2.40) / 59.92 (+9.12) | 30.51 (-17.86) / 44.84 (-3.53) / 55.66 (+7.29)
Mixtral-8\u00d77b | BASE | 58.43 | - / - / 63.98 (+13.18) | - / 49.78 (+1.41) / 56.16 (+7.79)
Mixtral-8\u00d77b inst | SIFT | 60.03 | 33.36 (-17.44) / 43.34 (-7.46) / 53.67 (+2.87) | 41.91 (-6.46) / 51.32 (+2.95) / 56.44 (+8.07)
GPT-3.5 | RLHF | 62.72 | 69.94 (+19.14) / 51.44 (+0.64) / 59.88 (+9.08) | 44.71 (-3.66) / 51.24 (+2.87) / 61.22 (+12.85)
GPT-4 | RLHF | 74.33 | 81.62 (+30.82) / 78.75 (+27.95) / 86.04 (+35.24) | 56.65 (+8.28) / 69.96 (+21.59) / 74.80 (+26.43)
Average | - | 47.726 | 48.19 (-2.61) / 41.02 (-9.78) / 50.65 (-0.15) | 35.46 (-12.91) / 46.46 (-1.91) / 55.06 (+6.69)
Auto-J-13b | CT | - | - / - / 65.29 (+14.49) | -
UltraCM-13b | CT | - | - / - / 61.11 (+10.31) | -
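As an illustration of the first, rule-based stage of the annotation pipeline in \u00a73.4, a minimal sketch follows; the answer-extraction pattern is our assumption, since real matching is task-specific:

```python
# Sketch of rule-based answer matching (Section 3.4). This illustrative
# version extracts the text after "The answer is" and compares it to the
# gold answer; unparsable or disputed cases escalate to GPT-4 and then to
# manual review, as described in the text.
import re

def rule_based_label(response: str, gold: str):
    m = re.search(r"[Tt]he answer is\s*([^\n\.]+)", response)
    if m is None:
        return None                       # no parsable answer -> GPT-4 review
    pred = m.group(1).strip().rstrip(".")
    return pred == gold.strip()           # True/False correctness label
```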
3.5 Evaluation Evaluation Process The evaluation process on CRITICBENCH is illustrated in Figure 4. First, a critique prompt is constructed using a response from CRITICBENCH, prompting the model to perform a critique. Subsequently, a correction prompt is built incorporating the critique to obtain the model\u2019s correction results. Generation and Correction Metrics We use the accuracy metric S_a to assess models\u2019 generation and correction capability as follows: S_a = c / N, (1) where c is the number of correct predictions and N is the total number of questions. Critique Metrics We assess the critique ability of LLMs by prompting them to evaluate the correctness of given responses. To address potential issues like class imbalance, biases (Wang et al., 2023a), and the unreliability (Gou et al., 2024a) of LLM-based evaluations, we utilize the F1 score as a more robust and reproducible metric for critiquing errors: S_p = \\frac{\\sum_{i=1}^{m} q_i}{m}, (2) S_r = \\frac{\\sum_{i=1}^{n} q_i}{n}, (3) q_i = \\begin{cases} 1 & \\text{if response } i \\text{ is correctly classified as wrong,} \\\\ 0 & \\text{otherwise,} \\end{cases} (4) S_f = \\frac{2 \\times S_p \\times S_r}{S_p + S_r}, (5) where S_p is the precision score, S_r is the recall score, m is the number of responses classified as wrong, n is the number of actually wrong responses, and q_i indicates the correct discrimination of a response as wrong. 
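The metrics in Equations (1)-(5) can be computed directly from binary labels; the following is a minimal sketch (ours, not the authors\u2019 evaluation code):

```python
# Minimal sketch of the CriticBench metrics (Eqs. 1-5).
# labels[i] = True if response i is actually wrong;
# preds[i]  = True if the model classified response i as wrong.

def critique_f1(labels, preds):
    tp = sum(1 for y, p in zip(labels, preds) if y and p)  # correctly flagged
    m = sum(preds)              # number classified as wrong
    n = sum(labels)             # number actually wrong
    sp = tp / m if m else 0.0   # precision S_p (Eq. 2)
    sr = tp / n if n else 0.0   # recall S_r (Eq. 3)
    return 2 * sp * sr / (sp + sr) if sp + sr else 0.0  # F1 S_f (Eq. 5)

def accuracy(num_correct, num_questions):
    return num_correct / num_questions  # S_a = c / N (Eq. 1)
```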
4 Experiments 4.1 Experimental Setup To conduct a comprehensive assessment, we have selected the following models: Phi-2, the LLaMa family, the Vicuna family, the Mistral family, and the GPT family. We evaluate these models on CRITICBENCH in three phases. (1) Generation: models are tasked with answering questions through the utilization of CoT (Wei et al., 2022). (2) Critique: this phase evaluates the correctness of provided responses. After testing three prompts from Huang et al. (2024), we selected the most effective one for zero-shot use. By modifying the original prompt, we experimented with two prompts: a zero-shot answer-only prompt (ZS-AO) and a zero-shot chain-of-thought prompt (ZS-CoT) requesting analysis. For pretrained models with weaker instruction adherence (labeled BASE), we focused on the few-shot (FS) setting. [Figure 5: Average model scores for Generation, Critique, and Correction on the different types of tasks.] Additionally, to assess the effects of critique training, we evaluated the 13B models Auto-J (Li et al., 2024) and UltraCM (Cui et al., 2023) by extracting their discrimination of the response using a rule-based method. (3) Correction: after the critique, this phase refines responses by addressing the identified inaccuracies. Besides ZS-CoT and FS, we evaluated FS (oracle), applying corrections solely to inaccurately generated responses. The prompts for the above phases are detailed in Appendix E. For all experimental setups, we set the temperature to 0 during the three phases. 4.2 Results and Analysis Table 1 showcases the performance of LLMs on CRITICBENCH. Specifically, we are interested in exploring the following research questions: RQ1: What factors influence the model\u2019s generation, critique, and correction? RQ2: What is the interrelationship between a model\u2019s capabilities in generation, critique, and correction? RQ3: How do critique-correct reasoning capabilities differ across various task types? RQ4: Is the model\u2019s knowledge consistent across generation, critique, and correction? RQ5: How do inter-model critiquing patterns manifest among models of varying capability? In the following sections, we discuss these research questions in turn. 4.2.1 RQ1: Key Factors in LLM Critical Reasoning Base Model & Scale Observations reveal that Phi-2 (2.7B), despite excelling in generation tasks, exhibits weaker performance in critique and correction tasks compared to models with similar generation performance (e.g., LLaMa-2-13b, Vicuna-33b). This suggests Phi-2\u2019s training focus leans heavily towards generation, lacking proficiency in critique and correction tasks, and underscores the necessity of evaluating generation, critique, and correction collectively to achieve a comprehensive assessment of a model\u2019s mastery of knowledge. Additionally, Mistral-7b stands out as the top performer among models of similar size, even outperforming Vicuna-33b. However, as Figure 2 shows, GPT-4 consistently maintains a significant lead in GQC across all types of tasks. Despite this, other models like LLaMA-70b and Mixtral-8\u00d77b demonstrate competitiveness against GPT-3.5. Furthermore, it is observed that models with more than 13 billion parameters exhibit certain critique capabilities (surpassing the baseline of random guessing), while only models at the scale of Mixtral-8\u00d77b and above are capable of effective correction (exceeding the baseline generation score). Training Strategy Comparing the results of different models in the LLaMa family, it is observed that the alignment tax has limited the RLHF models\u2019 generation performance. However, for critique and correction, RLHF consistently outperforms BASE, suggesting that RLHF might suppress the expression of knowledge in generation. Meanwhile, in critique, CT demonstrates results surpassing those of GPT-3.5 at a smaller parameter size (13B), proving the effectiveness of CT. Prompt Strategy The critique results in the zero-shot settings are sensitive to prompts. For instance, Vicuna-13b\u2019s ZS-AO flagged 22.04% of responses as incorrect, compared to only 4.8% in ZS-CoT, against an actual error rate of 51.63%. This inconsistency likely stems from the model\u2019s insufficient training on critique tasks, making it struggle without clear examples. Meanwhile, in correction, few-shot also significantly outperforms zero-shot. Therefore, FS is always the better choice in both critique and correction, and in the subsequent analysis we primarily focus on FS results. Oracle Feedback We also explored an oracle setting for corrections by modifying only the incorrect responses in CRITICBENCH. Results from FS (oracle) outperformed those without an oracle, showing that reliable external feedback can significantly enhance correction efficiency. However, for the more advanced models (from LLaMa-2-70b to GPT-4), corrections in the oracle setting still fell short of direct generation, indicating that they are still influenced by incorrect responses from other models. [Figure 6: Interrelationship between a model\u2019s capabilities in generation, critique, and correction. Each point represents a model, with coordinates indicating its performance in Generation (G), Critique (Q), and Correction (C). Blue lines show the fits for Q/G, C/G, and C/Q, a red dashed line marks the ideal growth line (y=x), and a green dashed line marks the original accuracy of responses from CRITICBENCH.] 
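For concreteness, the two zero-shot critique settings above can be templated roughly as follows; these templates are hypothetical stand-ins, as the paper\u2019s actual prompts are given in its Appendix E:

```python
# Hypothetical prompt templates for the two zero-shot critique settings
# (ZS-AO and ZS-CoT); the exact wording here is our assumption.
ZS_AO = (
    "Question: {question}\nResponse: {response}\n"
    "Is the response correct? Answer only 'correct' or 'incorrect'."
)
ZS_COT = (
    "Question: {question}\nResponse: {response}\n"
    "Analyze the response step by step, then conclude with "
    "'correct' or 'incorrect'."
)

def build_critique_prompt(question, response, cot=True):
    template = ZS_COT if cot else ZS_AO
    return template.format(question=question, response=response)
```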
4.2.2 RQ2: Correlations of GQC Abilities The capabilities of generation, critique, and correction exhibit a positive correlation. Figure 6 illustrates the interconnectedness among the three capabilities. It is observed that there is a positive linear relationship between generating and critiquing. The improvement rates of generation and critique are nearly identical, even though the model primarily focuses on learning tasks related to generation during training. However, the linear correlation between the generation and correction capacities is less prominent. Weaker models tend to exhibit diminished correctness upon correction, as compared to the initial benchmark responses. This observation indicates that a model\u2019s limited capability to generate precise answers impacts its ability to correct those responses. Similarly, the relationship between critique and correction reveals that, even if a model can discriminate between correct and incorrect responses, it is not necessarily able to rectify them. 4.2.3 RQ3: Impact of Task Type The model\u2019s critique and correction capability depends on whether the task focuses on details or logic. In Figure 7, we illustrate the variability in critique and correction capabilities across various task types. Figure 7 (a), (b), and (c) demonstrate the varying relationships between Q and G across different types of tasks. Specifically, the models exhibit weaker critique performance in detail-oriented algorithmic tasks compared to their generation abilities, indicated by dots below the ideal growth line y=x. In contrast, for mathematical reasoning and code generation tasks, their critique capabilities surpass their generation capabilities. The ability to correct errors in algorithmic tasks is also limited, even when the model answers correctly. For mathematical tasks requiring detailed and logical reasoning, significantly higher accuracy in generation is needed for effective corrections, explaining the lack of improvement on math in Self-Refine (Madaan et al., 2023). Interestingly, for the logic-focused code generation tasks, improvements are realized as long as the model\u2019s generation performance surpasses that of the original responses. This result indicates that models, when performing critique and correction, are easily disrupted by incorrect answers in tasks that focus on details, but not in those that emphasize logic. Additionally, Figure 5 displays the models\u2019 average GQC scores on different types of tasks. It can be observed that, similar to algorithmic tasks, the model struggles to critique effectively in detail-oriented symbolic reasoning, where its performance is significantly lower than that in generation and correction. This further demonstrates the model\u2019s lack of critique ability on such tasks. For more detailed results, please refer to Appendix D. 
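The fitted lines in Figures 6 and 7 can be reproduced with an ordinary least-squares fit of per-model critique scores against generation scores; a minimal sketch (assuming numpy is available):

```python
# Sketch of the Q/G (or C/G) line fits in Figures 6-7: least-squares fit
# of one capability's scores against another's across models, compared
# against the ideal growth line y = x.
import numpy as np

def fit_line(gen_scores, crit_scores):
    slope, intercept = np.polyfit(gen_scores, crit_scores, deg=1)
    r = np.corrcoef(gen_scores, crit_scores)[0, 1]   # linear correlation
    return slope, intercept, r
```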
4.2.4 RQ4: Consistency of GQC Knowledge GQC knowledge inconsistencies persist across all models. The relationship between the human abilities of generation, critique, and correction suggests that generation falls under critique, with correction closely linked to generation (West et al., 2024). This implies that humans can identify errors without necessarily knowing the correct answer, whereas generating the right answer also implies the ability to critique and correct it. However, the knowledge acquired by LLMs is not entirely consistent across the generation, critique, and correction tasks. [Figure 7: Critique/Generation (Q/G) and Correction/Generation (C/G) lines in Algorithmic Tasks, Mathematical Reasoning, and Code Generation.] [Figure 8: Inter-model critique results, with criticized models (weak to strong) on one axis and critiquing models (including Auto-J and UltraCM) on the other; the self-critique scores of LLMs are set to 0 for comparison. Models are arranged from weakest to strongest in order of generation accuracy.] Figure 1 delineates the overlap and distinctiveness of GQC, highlighting the inconsistencies of knowledge. As a model\u2019s parameter size increases, its knowledge coherence across GQC improves, and instances of complete task failure (where G, Q, and C all fail) decrease. Notably, questions correctly critiqued alone always occupy a significant portion, indicating that the model possesses a considerable amount of knowledge that is not expressed through generation or correction. 4.2.5 RQ5: Patterns of Inter-Model Critique Figure 8 presents a visualization illustrating the inter-model critiquing results. Overall, it can be observed that strong models consistently have a superior ability to critique compared with weak models, and the responses of weak models are more easily critiqued accurately, possibly because the errors made by weak models are more evident. Interestingly, certain weaker models approach or even exceed the self-critique scores of stronger models, suggesting that their critique capacity against stronger models might surpass the latter\u2019s self-critique. 
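A sketch of one plausible reading of the Figure 8 normalization, in which each cell is a critic\u2019s score on a target model\u2019s responses minus the target\u2019s self-critique score (so every diagonal entry becomes 0):

```python
# Illustrative normalization for an inter-model critique matrix (our
# reading of Figure 8, not the authors' plotting code).
def normalize_critique_matrix(scores, models):
    """scores[(critic, target)] -> critique F1 of `critic` on `target`'s responses."""
    return {
        (critic, target): scores[(critic, target)] - scores[(target, target)]
        for critic in models
        for target in models
    }
```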
Post-critique training, models such as Auto-J and UltraCM show an enhanced ability to assess response correctness across different models, with UltraCM\u2019s critique of GPT-4\u2019s responses nearing GPT-4\u2019s self-critique level, underscoring the value of critique training. 5 Conclusion In conclusion, our investigation through CRITICBENCH has illuminated the capacities and limitations of LLMs in GQC reasoning. Our exploration of the relationships among models\u2019 GQC abilities revealed a linear correlation and subtle inconsistencies between them, while our analysis across different task types found that models perform better in Q and C for tasks focused on logic compared to those requiring attention to detail. Additionally, by examining the outcomes of models critiquing each other, we discovered that weaker models could sometimes correct the outputs of stronger models more effectively than those models could self-correct. These findings underscore the effectiveness of CRITICBENCH in evaluating and analyzing the GQC capabilities of LLMs. Limitations Effectively measuring a model\u2019s critique ability has always been a challenge. In this paper, we use discrimination results as a valid indicator of critique ability, mainly for the following reasons: (1) Fine-grained indicators that provide scores based on evaluation principles are only suitable for specific tasks and lack generality. Moreover, different tasks have different focuses, and the evaluation principles valued by humans may change dynamically; it is somewhat idealistic to exhaustively list all evaluation principles once and for all. (2) Scores based on evaluation principles rely on human annotations or results from GPT-4 for validation. However, reliable human annotations incur high costs, and GPT-4 may contain errors and biases; using GPT-4\u2019s results also makes it impossible to evaluate its own critique ability. Additionally, in reasoning tasks, the most important aspect of critique and correction is to judge whether there are errors in the reasoning process and its results. In light of the above, we use a binary metric to measure the results. Future work should address these challenges by exploring alternative evaluation methodologies that mitigate reliance on costly human annotations. Additionally, there is a need to develop more nuanced critique metrics that can effectively capture the diverse aspects of model performance across various tasks and evaluation scenarios. Ethics Statement We constructed the benchmark based on existing public datasets and models, as detailed in Section 3, and annotated the results using GPT-4 and human evaluation. We acknowledge that, despite employing rule-based filtering, GPT-4 review, and human review, unpredictable errors may still exist in the responses generated by different models. Additionally, when using the critique ability of Large Language Models (LLMs), it is also important to be aware of the risks involved, such as potential biases. When the GQC capability of an LLM is inconsistent, and its critique ability surpasses the other two aspects, it is necessary to carefully discern whether its discrimination results contain harmful biases."
},
{
"url": "http://arxiv.org/abs/2404.02078v1",
"title": "Advancing LLM Reasoning Generalists with Preference Trees",
"abstract": "We introduce Eurus, a suite of large language models (LLMs) optimized for\nreasoning. Finetuned from Mistral-7B and CodeLlama-70B, Eurus models achieve\nstate-of-the-art results among open-source models on a diverse set of\nbenchmarks covering mathematics, code generation, and logical reasoning\nproblems. Notably, Eurus-70B beats GPT-3.5 Turbo in reasoning through a\ncomprehensive benchmarking across 12 tests covering five tasks, and achieves a\n33.3% pass@1 accuracy on LeetCode and 32.6% on TheoremQA, two challenging\nbenchmarks, substantially outperforming existing open-source models by margins\nmore than 13.3%. The strong performance of Eurus can be primarily attributed to\nUltraInteract, our newly-curated large-scale, high-quality alignment dataset\nspecifically designed for complex reasoning tasks. UltraInteract can be used in\nboth supervised fine-tuning and preference learning. For each instruction, it\nincludes a preference tree consisting of (1) reasoning chains with diverse\nplanning strategies in a unified format, (2) multi-turn interaction\ntrajectories with the environment and the critique, and (3) pairwise data to\nfacilitate preference learning. UltraInteract allows us to conduct an in-depth\nexploration of preference learning for reasoning tasks. Our investigation\nreveals that some well-established preference learning algorithms may be less\nsuitable for reasoning tasks compared to their effectiveness in general\nconversations. Inspired by this, we derive a novel reward modeling objective\nwhich, together with UltraInteract, leads to a strong reward model.",
"authors": "Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing Xie, Yankai Lin, Zhenghao Liu, Bowen Zhou, Hao Peng, Zhiyuan Liu, Maosong Sun",
"published": "2024-04-02",
"updated": "2024-04-02",
"primary_cat": "cs.AI",
"cats": [
"cs.AI",
"cs.CL",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Current alignment techniques have significantly advanced the development of open-source large language models (LLMs) that effectively meet user expectations and align with human values (Touvron et al., 2023; Tunstall et al., 2023). On complex reasoning, success has been achieved by specializing models for specific capabilities, such as coding (Wei et al., 2023; Guo et al., 2024a; Zheng et al., 2024) and solving math problems (Fu et al., 2023; Yue et al., 2023; Luo et al., 2023a; Toshniwal et al., 2024). However, these models still fall short, by large margins, of the most advanced proprietary models in their all-around capabilities to tackle a diverse range of challenging problems. We conjecture that this performance gap can be primarily attributed to (1) the lack of high-quality alignment data and (2) the underexploration of preference learning techniques for improving models\u2019 complex reasoning capabilities. In this paper, we take strides towards bridging this gap by addressing both factors and developing EURUS. EURUS consists of a suite of LLMs finetuned from Mistral-7B (Jiang et al., 2023a) and CodeLLaMA-70B (Roziere et al., 2023). Across a diverse set of complex reasoning bench- marks that are mostly out-of-distribution (OOD), EURUS achieves state-of-the-art overall performance among all open-source models. In particular, EURUS excels in solving chal- lenging problems that often require sophisticated planning, reasoning, tool integration, and the ability to interact with and learn from the environment and users. As shown in Figure 1, on university-level STEM questions TheoremQA (Chen et al., 2023) and competition-level coding problems LeetCode Contest (Guo et al., 2024a), EURUS-70B significantly outperforms all open-source models, achieving comparable performance to GPT-3.5 Turbo. EURUS models are trained on ULTRAINTERACT, our newly-curated, large-scale, and high-quality alignment data specifically designed to improve LLMs\u2019 reasoning capabilities. ULTRAINTERACT consists of a diverse set of instructions spanning math, coding, and logical reasoning problems from 12 established datasets. For each instruction, ULTRAINTERACT collects a preference tree that includes: (1) Diverse planning strategies in a unified pattern, such as sequential processing (Wei et al., 2022) and tool creation (Qian et al., 2023), followed by executing step-by-step actions formatted in either text or code, to provide divserse reasoning trajectories. (2) Multi-turn interaction trajectories with the environment and the critique, to improve models\u2019 capabilities to learn from feedback and correct previous errors (Wang et al., 2023b). (3) Paired correct and incorrect actions organized in tree structures, to facilitate preference learning. In total, ULTRAINTERACT contains 86K instructions and 220K action pairs, where each pair consists of an instruction, a correct response, and an incorrect one. Conceptually, ULTRAINTERACT\u2019s data resemble imbalanced binary trees as shown in Figure 2. ULTRAINTERACT can be used in both supervised fine-tuning and preference learning. Our experiments show that, using ULTRAINTERACT along with established datasets in instruction fine-tuning already achieves strong performance. ULTRAINTERACT further facilitates preference learning for reasoning tasks, improving the performance even further with KTO (Ethayarajh et al., 2024) and NCA (Chen et al., 2024a). Surprisingly, applied to an instruction finetuned EURUS model, DPO (Rafailov et al., 2023) hurts the performance. 
Through careful analysis, we provide evidence that the performance in reasoning correlates with the value of the rewards of the chosen data\u2014a higher final reward often indicates better reasoning capability. Besides, our investigation suggests that DPO may be less suitable for reasoning tasks than KTO and NCA. Inspired by this fresh finding, we devise a new objective for reward modeling to augment the Bradley-Terry objective (Bradley & Terry, 1952), explicitly encouraging training to increase the absolute rewards of chosen solutions and decrease those of rejected data. Furthermore, ULTRAINTERACT leads to our reward model EURUS-RM-7B, which achieves a better correlation with human annotators than all existing models on AutoJ (Li et al., 2023a) and MT-Bench (Zheng et al., 2023), including GPT-4 (OpenAI, 2023). EURUS-RM-7B demonstrates especially strong preference modeling performance on reasoning tasks. Checkpoints of our EURUS models, accompanied by the ULTRAINTERACT alignment data to reproduce this research, will be publicly available.",
"main_content": "A U A A C R U A U A U C R C R U C R C Chosen Action Rejected Action Observation R User Instruction U A Unpaired Action O O O&C O&C Observation & Critique O&C O Figure 2: Left: CodeActInstruct (Wang et al., 2024) and Code-Feedback (Zheng et al., 2024); Middle: HH-RLHF (Bai et al., 2022); Right: ULTRAINTERACT. Each instruction in ULTRAINTERACT is constructed as a preference tree. A U A A C R U A U A U C R C R U C R C Chosen Action Rejected Action Observation R User Instruction U A Unpaired Action O O O&C O&C Observation & Critique O&C O Figure 2: Left: CodeActInstruct (Wang et al., 2024) and Code-Feedback (Zheng et al., 2024); Middle: HH-RLHF (Bai et al., 2022); Right: ULTRAINTERACT. Each instruction in ULTRAINTERACT is constructed as a preference tree. Solving complex problems often requires the model\u2019s capability in planning and reasoning, integrating with tools, and interacting with and learning from both the environment and the users. This is reflected in ULTRAINTERACT\u2019s design choices: (1) Its instructions are diverse, challenging, and of a large scale (\u00a72.1); (2) It provides multi-turn trajectories that solve the input instruction through multiple turns of interaction with and learning from the environment and critique. At each turn, it breaks down the problem into smaller ones (\u00a72.2). (3) ULTRAINTERACT includes pairwise data to facilitate preference learning (\u00a72.3). Conceptually, ULTRAINTERACT collects a preference tree for each instruction, with the instruction being the root and each action a node (Figure 2). A trajectory is a root-to-leaf path consisting of a sequence of actions. In each preference tree, all nodes of correct actions and all trajectories ending with correct actions can be used for SFT. Paired correct and incorrect nodes or trajectories can be used for preference learning. 2.1 Instruction Selection Emphasizing Complexity, Quality, and Diversity We target three representative reasoning tasks: math problem-solving, code generation, and logical reasoning. The complexity, quality, and diversity of the alignment data are crucial to the model\u2019s performance (Liu et al., 2023). Following Wang et al. (2023b), we select challenging problems that GPT-3.5-Turbo fails to solve. We intentionally restrict the selection of the datasets to those with ground-truth solutions, aiming to ensure highquality oversight signals rather than relying on LLM-as-a-judge annotation (Weyssow et al., 2024). Besides, the gold solutions also serve as references for the critique model to generate feedback. To promote ULTRAINTERACT\u2019s diversity, we pick datasets of different categories. For each dataset, we include distinct reasoning patterns based on question categories or formulations necessary to solve the problems. Table 6 summarizes the datasets selected by ULTRAINTERACT. Except for MATH, none of the training datasets is used in our evaluation. 2.2 Decomposition and Interaction at Each Turn Figure 3 provides an illustrative example. In what follows, we connect the actor model with a Python interpreter as the \u201cenvironment\u201d. Unless otherwise specified, we use GPT-3.5 Turbo as the actor model. Following Wang et al. (2024), the actor model first decomposes the input problem into several sub-problems and then solves each by generating Python code pieces as actions and using the environment to execute them. 
To promote solution diversity, the actor model randomly samples one reasoning schema in the form of either CoT (Wei et al., 2022) or modularization programming (Qian et al., 2023; Yuan et al., 2023). The actor then generates actions in text or code to solve each sub-problem, with each step marked by explicit notations. Multi-turn interactions with the environment are often necessary to solve challenging problems (Wang et al., 2023b). To improve such capabilities of the models, ULTRAINTERACT collects trajectories in which the actor model interacts with the environment and a critique model (a proxy for the user) and refines its action based on their feedback. [Figure 3: An illustrative example of an ULTRAINTERACT trajectory over two turns. In each turn, the actor model generates step-by-step reasoning chains, and the environment and the critique model provide observations and textual critique respectively.] The environment receives an action from the actor model along with the interaction history, and then the code interpreter returns two kinds of \u201cObservation\u201d: (1) Python execution results, either program outputs or error traceback messages; and (2) binary feedback, indicating whether the solution is correct or not. Then, the observations along with the history are passed to a critique model, which locates the errors and provides suggestions for improvements. To avoid the potential bias introduced by self-correction (Wang et al., 2023b; Xu et al., 2024), we adopt a stronger model, GPT-4, as the critique, and ensure critique quality by providing GPT-4 with the ground-truth answers as references. This procedure resembles Wang et al. (2024). However, we adopt more diverse reasoning patterns to teach LLMs to learn rationales rather than simply memorizing answers (Mitra et al., 2023), and to learn to create and use tools (Qian et al., 2023; Yuan et al., 2023; Qin et al., 2023). Besides, we believe that it is important for LLMs to learn from the feedback provided by the critique rather than solely from observations of the environment. 
2.3 Preference Trees Facilitate Preference Learning Across Multiple Turns Unlike open-ended conversations, where human preference is ambiguous and challenging to specify, many reasoning tasks have clear and objective preferences for correct actions. The preference annotation is therefore an evaluation of the correctness of the solutions conditioned on the ground-truth ones, which come with the datasets in ULTRAINTERACT. This eliminates the need for human or LLM-based preference annotation and ensures high data quality. To facilitate preference learning, ULTRAINTERACT pairs correct and incorrect actions. Sampling Paired Correct and Incorrect Actions at Each Turn. For each instruction in ULTRAINTERACT, we sample, from the actor model, a pair of correct and incorrect actions following \u00a72.2. We follow Cui et al. (2023) to sample the pair from different actor models to ensure response diversity. To prevent models from exploiting shortcuts based on surface features, we exclude instances that fail to pass the Python syntax check. Certain challenging problems in ULTRAINTERACT pose difficulties in obtaining correct actions, even using strong actors such as GPT-4, with nearly zero pass@100 accuracies. To improve the pass rates of the actor models while keeping the expense under control, we sequentially take the following steps: (1) directly sampling 20 actions and randomly keeping a correct one, if any; (2) if no correct action is obtained, repeating the above process up to three times, progressively switching from more cost-effective models to the strong yet expensive GPT-4 Turbo; and (3) for the remaining difficult problems where no correct action is acquired after the previous two steps, providing the actor with ground-truth rationales and answers, and then applying various techniques to elicit correct actions. The specific information provided and the techniques applied vary depending on the tasks (Appendix A.2). Tree-structured Action Pairs Across Multiple Turns. After each turn, the correct action concludes its trajectory. We expand the incorrect action into the next turn, and have the actor interact with the environment and the critique to refine its solution (\u00a72.2). We then repeat the procedures introduced earlier in this section to collect an additional action pair. By expanding the incorrect action, ULTRAINTERACT can provide data to help models learn from feedback, and collect multiple action pairs for preference learning across multiple turns. Conceptually, for every instruction, ULTRAINTERACT constructs a binary preference tree with each action being a node (Figure 2). We cap the tree at a maximum of five turns. Additional Instruction-action Pairs for Challenging Problems. We believe the challenging instructions that make it to step (3) above can provide valuable training signals. Therefore, for a subset of these problems with multiple ground-truth solutions, we further sample additional correct actions to cover all ground truths. Accordingly, we further sample incorrect actions to pair with these additional correct actions, so that they can be used in both supervised fine-tuning and preference learning. With the tree-structured data, ULTRAINTERACT enables comparisons at every turn, in contrast to comparing only at the last turn (Bai et al., 2022), and thus can improve the models\u2019 interaction ability. 
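The per-turn pairing can be pictured with a small data structure; the following sketch uses illustrative field names rather than the released data schema:

```python
# Illustrative sketch of a preference-tree turn and per-turn pair
# extraction: the correct action ends its trajectory, while the incorrect
# one (plus its critique) is carried forward into the next turn's context.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    action: str
    critique: str = ""              # feedback attached to incorrect actions

@dataclass
class TurnPair:
    chosen: Node                    # correct action: trajectory ends here
    rejected: Node                  # incorrect action: expanded next turn
    next_turn: Optional["TurnPair"] = None

def collect_pairs(root: TurnPair, max_turns: int = 5):
    pairs, history, node, turn = [], (), root, 1
    while node is not None and turn <= max_turns:   # trees cap at 5 turns
        pairs.append((history, node.chosen.action, node.rejected.action))
        history += (node.rejected.action, node.rejected.critique)
        node, turn = node.next_turn, turn + 1
    return pairs    # one (context, chosen, rejected) pair per turn
```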
Closing this section, Table 1 summarizes some statistics of ULTRAINTERACT, and more details are in Appendix A.4. Table 1: Some statistics of ULTRAINTERACT. For each task type (with or without multi-turn interaction and tool use), the columns give: # Instructions | # trajectories with 1-5 turns (T1-T5) | # Tokens per Trajectory | Avg. # Trajectories per Instruction | Total # Pairs | # Correct Answers.
Math (w/ interaction, w/ tool) | 22,928 | T1-T5: 10,440 / 4,122 / 1,898 / 904 / 5,564 | 1,750.0 | 1.0 | 42,780 | 68,033
Math (w/o interaction, w/ tool) | 2,757 | T1: 16,154 | 439.1 | 5.9 | 13,217 | 16,154
Math (w/ interaction, w/o tool) | 22,639 | T1-T5: 10,708 / 3,521 / 1,459 / 723 / 6,228 | 1,521.9 | 1.0 | 44,750 | 62,182
Math (w/o interaction, w/o tool) | 2,083 | T1: 16,348 | 538.1 | 7.8 | 12,624 | 16,348
Coding (w/ interaction) | 20,463 | T1-T5: 13,265 / 2,584 / 987 / 379 / 3,248 | 1,728.5 | 1.0 | 18,106 | 22,215
Coding (w/o interaction) | 8,495 | T1: 92,618 | 1,070.4 | 5.5 | 78,634 | 92,618
Logic (w/ interaction, w/ tool) | 2,086 | T1-T5: 1,685 / 298 / 72 / 8 / 23 | 1,299.8 | 1.0 | 1,750 | 2,198
Logic (w/ interaction, w/o tool) | 4,467 | T1-T5: 2,453 / 1,674 / 340 / 0 / 0 | 1,266.7 | 1.0 | 7,958 | 7,231
Total | 85,918 | T1-T5: 163,671 / 12,199 / 4,756 / 2,014 / 15,063 | 1,201.8 | 2.3 | 219,819 | 286,979
3 EURUS: State-of-the-art Open LLMs in Reasoning ULTRAINTERACT helps us develop EURUS, a suite of LLMs and a reward model (RM). Supervised Fine-Tuning. EURUS-7B-SFT is fine-tuned from Mistral-7B (Jiang et al., 2023a) and EURUS-70B-SFT from CodeLLaMA-70B (Roziere et al., 2023). First, we perform SFT using all correct actions (287K) in ULTRAINTERACT. We find it yields better performance to discard the interaction history and train only on the correct leaf nodes of each tree. To improve general instruction-following ability, we include in our SFT data mixture UltraChat (Ding et al., 2023), ShareGPT (https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset), and OpenOrca (Lian et al., 2023). Please find the mixture ratios in Appendix B. Preference Learning. Based on the EURUS-SFT models, we explore three preference learning algorithms: DPO (Rafailov et al., 2023), KTO (Ethayarajh et al., 2024), and NCA (Chen et al., 2024a). Differently from SFT, here we include all multi-turn trajectory pairs from ULTRAINTERACT (220K) and all UltraFeedback (Cui et al., 2023) pairs (340K). Reward Modeling. Similarly to preference learning, we use all 220K multi-turn trajectory pairs from ULTRAINTERACT; this is further augmented with the 240K single-turn action pairs from ULTRAINTERACT. More details are in Appendix B. We include all 340K pairs from UltraFeedback and one pair for each instruction from UltraSafety (Guo et al., 2024b), totaling 3K. EURUS-RM-7B is initialized from EURUS-7B-SFT with a new linear layer. Our findings in \u00a76 indicate that the absolute values of rewards make a big difference in the models\u2019 reasoning performance. We therefore augment the established Bradley-Terry (BT) objective L_{BT} with an additional term L_{DR} to directly increase the rewards of the chosen actions for instances from ULTRAINTERACT, and decrease those of the rejected ones: 
L_{\\text{ULTRAINTERACT}} = \\underbrace{-\\log\\sigma\\big(r_\\theta(x, y_c) - r_\\theta(x, y_r)\\big)}_{L_{BT}\\text{: optimize relative rewards}} \\underbrace{-\\log\\sigma\\big(r_\\theta(x, y_c)\\big) - \\log\\sigma\\big(-r_\\theta(x, y_r)\\big)}_{L_{DR}\\text{: increase } r_\\theta(x, y_c) \\text{ and decrease } r_\\theta(x, y_r)}. For instances from other datasets, we train with L_{BT}. \u03b8 denotes the reward model\u2019s parameters, and r_\\theta(x, y_c) and r_\\theta(x, y_r) denote the rewards on the chosen and rejected actions, respectively. Our ablation study demonstrates the importance of both L_{BT} and L_{DR}. 
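A minimal PyTorch-style sketch of this objective (ours, not the released training code):

```python
# Sketch of the reward modeling objective above: the Bradley-Terry term
# optimizes the relative margin, while the L_DR term pushes chosen rewards
# up and rejected rewards down in absolute terms. Per the text, instances
# from datasets other than UltraInteract would use only the first term.
import torch
import torch.nn.functional as F

def rm_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor,
            use_dr: torch.Tensor):
    """use_dr: 1.0 for UltraInteract instances, 0.0 for other datasets."""
    l_bt = -F.logsigmoid(r_chosen - r_rejected)                  # relative
    l_dr = -F.logsigmoid(r_chosen) - F.logsigmoid(-r_rejected)   # absolute
    return (l_bt + use_dr * l_dr).mean()
```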
4 Evaluation of EURUS-7B and EURUS-70B Evaluation Setup. We consider both single-turn and multi-turn reasoning. For single-turn evaluation, we consider HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and LeetCode (Guo et al., 2024a) for coding; GSM-Plus (Li et al., 2024), MATH, TheoremQA (Chen et al., 2023), SVAMP (Patel et al., 2021), and ASDiv (Miao et al., 2020) for math; and BBH-Hard (Suzgun et al., 2022) for reasoning. We evaluate with pass@1 accuracy. We also use IFEval (Zhou et al., 2023) to assess the instruction-following ability and report the prompt-level loose score. For multi-turn evaluation, we adopt MINT (Wang et al., 2023b) and only consider the coding and math problems. We report the success rate at Turn 5. Please find further details on evaluation setups and evaluations beyond reasoning in Appendix C. As shown in Table 2, we compare our EURUS with general-purpose models and with models of various sizes specialized in coding and math. We also summarize the results of GPT-3.5 Turbo and GPT-4 reported in previous works.
Table 2: Open-source LLM baselines that we compare to.
General Purpose: Mistral-7B-Instruct-v0.2 (Jiang et al., 2023a), Zephyr-7B-\u03b2 (Tunstall et al., 2023), OpenChat-3.5-1210 (Wang et al., 2023a), Starling-LM-7B-\u03b1 (Zhu et al., 2023), Mixtral-8x7B-Instruct (Jiang et al., 2023a), DeepSeek-LLM-67B-Chat (DeepSeek-AI, 2024), QWen1.5-72B-Chat (Bai et al., 2023).
Coding: Magicoder-S-DS-6.7B (Wei et al., 2023), OpenCodeInterpreter (OpenCI for short, DS-6.7B/CL-70B) (Zheng et al., 2024), DeepSeek-Coder-33B-Instruct (Guo et al., 2024a), and CodeLLaMA-70B-Instruct (Roziere et al., 2023).
Math: MAmmoTH-7B-Mistral (Yue et al., 2023), WizardMath-7B-v1.1 (Luo et al., 2023a), OpenMath (Mistral-7B/CodeLLaMA-70B) (Toshniwal et al., 2024).
Table 3: Overall performance. All test sets except MATH are out-of-distribution to our models and most baselines. MAmmoTH, OpenChat, and Starling-LM have been trained on TheoremQA test sets; we strike through the contaminated numbers. Columns: Model | HumanEval | MBPP | LeetCode | GSM-Plus | MATH | TheoremQA | SVAMP | ASDiv | BBH | IFEval | MINT Code | MINT Math | Avg.
\u223c7B:
Mistral-7B-Instruct-v0.2 | 39.0 | 30.8 | 6.1 | 15.7 | 9.5 | 8.5 | 42.9 | 49.5 | 62.4 | 44.4 | 7.4 | 26.2 | 28.5
Zephyr-7B-\u03b2 | 29.3 | 35.8 | 2.2 | 23.3 | 5.0 | 7.8 | 19.1 | 28.0 | 61.8 | 39.7 | 5.2 | 16.9 | 22.8
OpenChat-3.5-1210 | 64.0 | 61.7 | 11.7 | 46.7 | 28.1 | 19.1 | 75.4 | 77.0 | 67.0 | 50.3 | 21.3 | 32.4 | 46.2
Starling-LM-7B-\u03b1 | 46.3 | 51.1 | 8.9 | 23.7 | 21.5 | 12.0 | 26.3 | 39.8 | 67.1 | 26.1 | 18.4 | 28.9 | 30.8
Magicoder-S-DS-6.7B | 75.6 | 70.4 | 23.9 | 16.4 | 19.9 | 13.1 | 61.6 | 62.8 | 57.0 | 21.1 | 27.9 | 8.0 | 38.1
OpenCI-DS-6.7B | 76.8 | 66.2 | 16.1 | 41.5 | 31.6 | 16.1 | 74.5 | 79.8 | 53.9 | 22.6 | 5.9 | 1.3 | 40.5
MAmmoTH-7B-Mistral | 24.4 | 42.4 | 7.2 | 40.1 | 36.0 | 26.3 | 60.7 | 72.3 | 57.7 | 34.9 | 3.7 | 6.7 | 34.4
WizardMath-7B-v1.1 | 50.0 | 53.9 | 6.7 | 54.6 | 30.0 | 16.5 | 57.8 | 73.5 | 64.4 | 22.6 | 16.2 | 8.9 | 37.9
OpenMath-Mistral-7B | 33.5 | 46.6 | 11.7 | 59.4 | 39.1 | 13.1 | 83.4 | 79.8 | 58.6 | 15.0 | 2.9 | 5.3 | 37.4
EURUS-7B-SFT | 55.5 | 59.1 | 20.0 | 52.1 | 32.6 | 20.0 | 82.2 | 84.1 | 64.6 | 44.0 | 15.4 | 28.4 | 46.5
+ DPO | 50.6 | 52.1 | 8.3 | 51.0 | 28.3 | 20.9 | 78.7 | 83.8 | 65.0 | 42.5 | 20.6 | 32.4 | 44.5
+ KTO | 56.1 | 58.6 | 18.9 | 55.0 | 33.2 | 20.6 | 84.4 | 85.0 | 67.6 | 43.1 | 19.1 | 43.6 | 48.8
+ NCA | 55.5 | 60.2 | 14.4 | 54.9 | 34.2 | 20.9 | 84.6 | 85.4 | 64.3 | 42.7 | 21.3 | 38.7 | 48.1
\u223c40B:
Mixtral-8x7B-Instruct | 50.6 | 50.1 | 5.6 | 49.6 | 25.9 | 20.4 | 66.4 | 68.8 | 73.5 | 48.8 | 12.5 | 37.3 | 42.5
DeepSeek-Coder-33B-Ins | 82.3 | 73.9 | 27.8 | 29.5 | 20.2 | 21.9 | 75.2 | 85.0 | 61.5 | 26.1 | 35.3 | 21.8 | 46.7
\u223c70B:
CodeLLaMA-70B-Instruct | 56.7 | 58.6 | 14.4 | 34.9 | 12.0 | 8.4 | 63.5 | 70.1 | 74.5 | 24.0 | 3.7 | 14.2 | 36.3
DeepSeek-LM-67B-Chat | 70.7 | 65.7 | 20.0 | 65.0 | 41.0 | 17.9 | 74.0 | 84.0 | 78.9 | 52.7 | 30.9 | 41.8 | 53.5
QWen1.5-72B-Chat | 71.3 | 56.9 | 15.6 | 65.4 | 43.4 | 18.5 | 79.5 | 79.1 | 78.0 | 53.4 | 27.2 | 38.2 | 52.2
OpenCI-CL-70B | 77.4 | 71.7 | 20.0 | 46.1 | 29.2 | 18.8 | 76.1 | 79.4 | 66.7 | 26.8 | 30.9 | 12.0 | 46.3
OpenMath-CL-70B | 39.0 | 52.6 | 15.0 | 62.2 | 45.9 | 15.9 | 86.6 | 82.8 | 59.9 | 15.7 | 14.0 | 0.4 | 40.8
EURUS-70B-SFT | 75.6 | 74.2 | 33.3 | 58.1 | 40.6 | 28.0 | 86.3 | 88.5 | 79.9 | 49.2 | 31.6 | 40.4 | 57.1
+ KTO | 76.8 | 68.2 | 26.1 | 62.2 | 41.3 | 30.6 | 90.4 | 89.0 | 80.8 | 46.4 | 39.0 | 49.8 | 58.4
+ NCA | 79.3 | 71.9 | 33.3 | 62.8 | 41.7 | 32.6 | 89.5 | 90.3 | 80.0 | 49.2 | 38.2 | 39.6 | 59.0
Proprietary Models:
GPT-3.5 Turbo | 76.8 | 82.5 | 23.3 | 61.2 | 37.8 | 35.6 | 83.0 | 90.6 | 70.1 | 56.6 | 29.4 | 36.9 | 57.0
GPT-4 | 85.4 | 83.5 | 41.8 | 85.6 | 69.7 | 52.4 | 94.8 | 92.6 | 86.7 | 79.7 | 59.6 | 65.8 | 74.8
4.1 Results Results are shown in Table 3. We summarize the takeaways as follows: EURUS, both the 7B and 70B variants, achieves the best overall performance among open-source models of similar sizes, and even outperforms specialized models in their corresponding domains in many cases. Notably, EURUS-7B outperforms baselines that are 5\u00d7 larger, and EURUS-70B achieves better performance than GPT-3.5 Turbo. EURUS\u2019s instruction-following performance is among the best of the general-purpose models, and substantially better than that of the specialized ones. Preference learning with ULTRAINTERACT can further improve the performance, especially in math and in the multi-turn ability. KTO and NCA consistently improve the models\u2019 performance on all five math benchmarks and in multi-turn evaluations, while their effects vary on the others. Since the SFT models only use the single-turn data from ULTRAINTERACT while preference learning uses the multi-turn ones, the improvements in interaction ability should also be attributed to ULTRAINTERACT rather than to the algorithms alone. Surprisingly, we observe that DPO hurts model performance on most benchmarks. DPO training of our 70B model fails, since the rewards go down to \u2212\u221e. We analyze this phenomenon in \u00a76.1. 5 Evaluation of EURUS-RM-7B Evaluation Setup. We evaluate EURUS-RM-7B on three RM benchmarks: RewardBench (Lambert et al., 2024), AutoJ (Li et al., 2023a), and MT-Bench (Zheng et al., 2023). 
Aiming for a more realistic OOD evaluation, we exclude the \u201cprior sets\u201d split from RewardBench, since many baselines train on the datasets that this split contains. We compare with PairRM (Jiang et al., 2023b), Starling-RM-7B/34B (Zhu et al., 2023), UltraRM-13B (Cui et al., 2023), GPT-3.5 Turbo, and GPT-4. To further explore EURUS-RM-7B\u2019s potential in improving models\u2019 performance through reranking, we use it to rerank Mistral-7B-Instruct-v0.2\u2019s responses on HumanEval, MBPP, GSM8K, and MATH. We report the results of random sampling, self-consistency, and Starling-RM-34B as baselines. 5.1 Results Table 4 summarizes the reward modeling performance, and Figure 4 plots some reranking results, with the others in Appendix D.1. EURUS-RM-7B stands out as the best 7B RM overall, and achieves similar or better performance than much larger baselines. Particularly, it outperforms GPT-4 in certain tasks. EURUS-RM-7B achieves a better correlation with human experts than all existing models on AutoJ and MT-Bench, and it achieves comparable performance to the 5\u00d7 larger Starling-RM-34B on RewardBench. On RewardBench, EURUS-RM-7B outperforms all baselines on the \u201cChat-Hard\u201d split while achieving very competitive performance on the \u201cReasoning\u201d split. Across the AutoJ splits, EURUS-RM-7B outperforms nearly all existing models, with the only exception being GPT-4\u2019s results on Coding. Our training objective is beneficial in improving RM performance on hard problems and reasoning. Table 4 shows that optimizing L_{DR} improves the RM\u2019s reasoning ability, but BT modeling is still beneficial in equipping the RM with abilities in general chatting, as suggested in the \u201cChat-Hard\u201d column, though its effect on reasoning may vary. ULTRAINTERACT is compatible with other datasets like UltraFeedback and UltraSafety, and mixing these datasets can balance different RM abilities. Improving the RM\u2019s capabilities in reasoning with ULTRAINTERACT does not sacrifice the others, which indicates that ULTRAINTERACT can be a great ingredient for the training data mixture of reward models. EURUS-RM-7B improves LLMs\u2019 reasoning performance by a large margin through reranking. EURUS-RM-7B consistently improves pass@1 accuracy across all tasks and performs better than the 5\u00d7 larger baseline Starling-RM-34B. Also, EURUS-RM-7B\u2019s reranking performance scales well with the number of responses per instruction, except for a slight decrease on HumanEval when increasing the response number from 8 to 16. In contrast, Starling-RM-34B suffers from a severe performance drop on HumanEval, and it consistently hurts model accuracy on MATH. [Figure 4: Results on reranking Mistral-7B-Instruct-v0.2\u2019s responses: pass@1 accuracy vs. number of responses per instruction on HumanEval, MBPP, GSM8K, and MATH, comparing self-consistency, Starling-RM-34B, and Eurus-RM-7B. Full results in Table 9.] 
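Best-of-n reranking with a reward model amounts to scoring each sampled response and keeping the top one; a minimal sketch with an assumed `reward_model(question, response) -> float` interface:

```python
# Sketch of reward-model reranking and the resulting pass@1 metric.
def rerank(question, responses, reward_model):
    return max(responses, key=lambda r: reward_model(question, r))

def rerank_pass1(items, reward_model):
    """items: list of (question, responses, is_correct) triples, where
    is_correct(response) -> bool checks the task's gold answer."""
    hits = sum(
        1 for q, resps, is_correct in items
        if is_correct(rerank(q, resps, reward_model))
    )
    return hits / len(items)   # pass@1 after reranking
```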
Table 4: Results on reward modeling benchmarks. UF: UltraFeedback; US: UltraSafety. The best performance in each benchmark is in bold and the second best one is underlined. Most baseline results are from Jiang et al. (2023b) and Lambert et al. (2024). Columns: Model | RewardBench (Chat / Chat-Hard / Safety / Reasoning / Avg.) | AutoJ (Code / Math / Others / Overall) | MT-Bench.
PairRM | 90.2 / 53.0 / 31.5 / 60.0 / 58.7 | 58.3 / 52.8 / 58.9 / 59.1 | 59.0
Starling-RM-7B | 98.0 / 43.4 / 88.6 / 74.6 / 76.2 | 59.2 / 47.2 / 61.4 / 60.8 | 56.8
Starling-RM-34B | 96.9 / 59.0 / 89.9 / 90.3 / 84.0 | 65.8 / 54.2 / 62.3 / 62.6 | 60.4
UltraRM-13B | 96.1 / 55.3 / 45.8 / 82.0 / 69.8 | 55.0 / 43.1 / 59.6 / 59.9 | 56.0
GPT-3.5 Turbo | - | 36.6 / 40.3 / 41.2 / 42.7 | 57.1
GPT-4 | - | 69.2 / 51.4 / 61.4 / 61.9 | 63.9
EURUS-RM-7B | 96.5 / 65.3 / 80.7 / 87.0 / 82.4 | 67.5 / 62.5 / 63.6 / 64.5 | 72.9
w/o L_{DR} | 96.4 / 59.9 / 79.5 / 77.5 / 78.3 | 64.2 / 59.7 / 64.7 / 65.0 | 72.8
w/o L_{BT} | 96.8 / 58.5 / 83.8 / 84.2 / 80.8 | 67.5 / 66.7 / 64.8 / 65.6 | 72.6
w/o US | 96.5 / 66.2 / 67.7 / 81.7 / 73.3 | 66.7 / 61.1 / 65.0 / 65.7 | 72.6
w/o UF + US | 95.1 / 61.1 / 63.7 / 73.4 / 78.0 | 55.8 / 58.3 / 59.0 / 58.7 | 67.2
6 Analysis [Figure 5: Reward patterns of EURUS-7B preference learning with DPO, KTO, and NCA. For all algorithms, the rewards of rejected data keep decreasing and the margins between chosen and rejected data keep increasing. However, the rewards of chosen data decrease below zero in DPO (final value -1.26), while they keep increasing and stay positive in KTO (0.4) and NCA (0.16). The absolute values of the rewards in the last step of the three algorithms positively correlate with their performance in Table 3.] 6.1 Explicit Reward as a Proxy? Hypothesis for Preference Learning in Reasoning We investigate the reason why DPO behaves differently from KTO and NCA. We start by empirically inspecting the rewards throughout the preference learning process, as shown in Figure 5. The rewards for both chosen and rejected data keep decreasing through DPO, though the rewards for chosen data are still higher, hence the loss decreases. In KTO and NCA, the rewards of chosen data keep increasing while those of rejected data decrease. Therefore, we hypothesize that it is this distinction in the trend of rewards that leads to the performance gap between DPO and the other two algorithms. This distinction can be attributed to the fact that DPO, derived from the Bradley-Terry model, only optimizes the relative differences between chosen and rejected data, overlooking the absolute values of the rewards. This is a non-issue in alignment with general human values, where preference is \u201crelative\u201d and there can be many valid answers to the same input. However, in reasoning tasks, the space of correct answers is much smaller than that of incorrect ones. Further, we notice that the rewards of chosen data in the last training step follow the ranking order KTO > NCA > DPO, positively correlating with their performance trends. Therefore, we believe that increasing the rewards of the chosen data is especially beneficial in preference learning for reasoning tasks. 6.2 Ablation Study Table 5: Ablation study of SFT data.
Model | Coding | Math | BBH | IFEval | Avg.
EURUS-7B-SFT | 44.9 | 58.5 | 64.6 | 44.0 | 53.6
Ground-truth | 33.9 | 46.1 | 64.4 | 42.9 | 44.0
Open-source Only | 31.2 | 33.5 | 65.3 | 43.6 | 37.0
ULTRAINTERACT Only | 37.3 | 56.2 | 67.0 | 17.4 | 47.7
We study the impact of ULTRAINTERACT and other open-source alignment data on EURUS-7B-SFT\u2019s performance. 
6.2 Ablation Study
We study the impact of ULTRAINTERACT and other open-source alignment data on EURUS-7B-SFT\u2019s performance. We consider three settings: (1) with original ground-truth answers, which replaces the generated actions with ground-truth rationales and answers from the original datasets; if no rationales are available, we use those from ULTRAINTERACT; (2) open-source data only; and (3) ULTRAINTERACT only. We evaluate with the same setting as \u00a74 and report the averaged scores. See full results in Appendix E.
Table 5: Ablation study of SFT data.
Model | Coding | Math | BBH | IFEval | Avg.
EURUS-7B-SFT | 44.9 | 58.5 | 64.6 | 44.0 | 53.6
Ground-truth | 33.9 | 46.1 | 64.4 | 42.9 | 44.0
Open-source Only | 31.2 | 33.5 | 65.3 | 43.6 | 37.0
ULTRAINTERACT Only | 37.3 | 56.2 | 67.0 | 17.4 | 47.7
In Table 5, EURUS outperforms the \u201cGround-truth\u201d model on all tasks, confirming the advantage of ULTRAINTERACT\u2019s designs of divide-and-conquer and code-as-action patterns, in line with the conclusions of concurrent work (Chen et al., 2024b; Wang et al., 2024). Training only on open-source data without ULTRAINTERACT greatly hurts the reasoning performance, confirming the effectiveness of ULTRAINTERACT. Meanwhile, training only on ULTRAINTERACT suffers a performance drop except on BBH, especially in instruction following; we attribute this drop to a weaker instruction-following ability. This suggests the necessity of mixing ULTRAINTERACT with other alignment data for better all-around supervised fine-tuning.
7 Related Work
Open LLMs in Reasoning. Open-source LLMs have shown remarkable progress in building specialists that excel in mathematics reasoning (Luo et al., 2023a; Yue et al., 2023; Toshniwal et al., 2024) or coding abilities (Roziere et al., 2023; Wei et al., 2023; Guo et al., 2024a; Zheng et al., 2024). On the contrary, mastering general reasoning capabilities still challenges open models, and the most advanced ones (DeepSeek-AI, 2024; Bai et al., 2023; Touvron et al., 2023; Jiang et al., 2024) are well behind proprietary models. Moreover, these cutting-edge open general-purpose models keep their alignment recipes confidential, which further hinders the replication and development of open-source reasoning models.
Preference Learning for Reasoning. Aligning language models from human or AI preferences has emerged as a prevalent approach in the open-source community (Tunstall et al., 2023; Bai et al., 2023) with the proposal of DPO (Rafailov et al., 2023) and high-quality preference datasets (Cui et al., 2023; Zhu et al., 2023). Different from open-domain chatbots, preference learning is largely underexplored in complex reasoning. Recent research showed performance degradation when applying DPO to reasoning tasks, but some newly proposed algorithms demonstrated a positive effect (Ethayarajh et al., 2024; Chen et al., 2024a; Mitra et al., 2024; Shao et al., 2024). However, a deep understanding of preference learning, specifically its efficacy on complex reasoning, is not yet established.
8 Conclusion
We strive to narrow the huge gap between open-source models and proprietary models from the perspective of alignment. Our work pushes the boundaries of open-source reasoning generalists by (1) releasing a high-quality multi-turn reasoning dataset ULTRAINTERACT with preference trees, (2) introducing the EURUS series of LLMs, which achieve new SOTA on challenging reasoning benchmarks, and (3) providing insights on preference learning for reasoning through analysis, leading to new reward modeling objectives as well as a powerful reward model for reasoning."
},
{
"url": "http://arxiv.org/abs/2404.08589v1",
"title": "Enhancing Visual Question Answering through Question-Driven Image Captions as Prompts",
"abstract": "Visual question answering (VQA) is known as an AI-complete task as it\nrequires understanding, reasoning, and inferring about the vision and the\nlanguage content. Over the past few years, numerous neural architectures have\nbeen suggested for the VQA problem. However, achieving success in zero-shot VQA\nremains a challenge due to its requirement for advanced generalization and\nreasoning skills. This study explores the impact of incorporating image\ncaptioning as an intermediary process within the VQA pipeline. Specifically, we\nexplore the efficacy of utilizing image captions instead of images and\nleveraging large language models (LLMs) to establish a zero-shot setting. Since\nimage captioning is the most crucial step in this process, we compare the\nimpact of state-of-the-art image captioning models on VQA performance across\nvarious question types in terms of structure and semantics. We propose a\nstraightforward and efficient question-driven image captioning approach within\nthis pipeline to transfer contextual information into the question-answering\n(QA) model. This method involves extracting keywords from the question,\ngenerating a caption for each image-question pair using the keywords, and\nincorporating the question-driven caption into the LLM prompt. We evaluate the\nefficacy of using general-purpose and question-driven image captions in the VQA\npipeline. Our study highlights the potential of employing image captions and\nharnessing the capabilities of LLMs to achieve competitive performance on GQA\nunder the zero-shot setting. Our code is available at\n\\url{https://github.com/ovguyo/captions-in-VQA}.",
"authors": "\u00d6vg\u00fc \u00d6zdemir, Erdem Akag\u00fcnd\u00fcz",
"published": "2024-04-12",
"updated": "2024-04-12",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM AND Reasoning",
"gt": "Visual Question Answering (VQA) is a complex multi- modal task that demands a high-level understanding of several aspects, such as object and attribute identification, object localization, comprehension of the relationship be- tween the image and the question, and reasoning about the context and the scene. The common steps of a typical VQA model involve generating embeddings of the image and the question using encoders for each, combining the image and question embeddings with a fusing module, and generating answers using a text generator or a classifier. For a general overview of the VQA techniques, the reader may refer to [33, 34]. The inherent multimodal nature of the VQA problem is the primary factor contributing to its complexity. Combin- ing different types of information, such as text and images, makes the model\u2019s training more complex, as the model must understand and utilize the connections and interac- tions between these different modalities. Several studies [4, 15, 18, 28, 32] propose an approach to tackle multi- modality for the VQA problem. However, these methods indicate limitations in their capacity to adapt to new tasks, particularly in zero-shot settings. Recent advances in high-capacity large language mod- els (LLMs) [1, 5, 36] have marked a dramatic milestone in the domain. LLMs are predominantly trained with mil- lions (or billions) of parameters and utilized for process- ing textual data. LLMs show outstanding performance in a variety of natural language tasks. The ongoing research challenge lies in extending the capabilities of LLMs to the intersection of different modalities, e.g., textual and visual data. Recently, GPT-4 [1] and Gemini [36] stand out as re- markable examples of multimodal LLMs, adept at success- fully processing textual and visual modalities for various downstream tasks, including VQA. Several alternative ap- proaches [2, 7, 19, 20, 22] have also been proposed in the realm of large-scale vision-language integration. The chal- lenge in multimodal training lies in the extensive compu- tational and data costs required to align the representation spaces of vision and language. Some recent studies [11, 37, 42] delve into the poten- arXiv:2404.08589v1 [cs.CV] 12 Apr 2024 tial of utilizing image captions with unimodal LLMs in the zero-shot VQA setting. Our study differs from these stud- ies in the following aspects. Firstly, we focus on examining the representation capacity of image captions from various vision-language models on the VQA performance. Second, our study investigates whether image captions can be in- formative for specific types of questions by evaluating the results in structurally and semantically different questions. Within this scope, we also evaluate the influence of feeding LLMs with general-purpose and question-driven captions, and only the most relevant sentence in the caption during the QA stage. Numerous VQA datasets are available in the literature, including CLEVR [17], VQA [3], VQA 2.0 [9], OK-VQA [25], GQA [16]. Among these sets, although each serves various purposes effectively, GQA stands out for its empha- sis on testing compositional and grounded reasoning abili- ties and its relatively diverse Q/A set. In this study, we con- duct our experiments on the GQA dataset and focus on mea- suring performance on semantically and structurally differ- ent questions. We structure the VQA task into two fundamental compo- nents: image captioning and question-answering. 
The goal is to leverage the respective strengths of these tasks, aiming for a more thorough comprehension of both the visual content and the corresponding questions. We carry out experiments with state-of-the-art vision-language models, including CogVLM [40], BLIP-2 [20], and FuseCap [31] to comprehend their scene representation capacity in the VQA pipeline. We outline our contributions as follows:
\u2022 We evaluate the image captioning performance of various vision-language models incorporating them with LLMs for zero-shot VQA, analyzing their effectiveness across various question types.
\u2022 We propose a straightforward question-driven captioning approach to better transfer the context into LLMs for question-answering.
The rest of the paper is organized as follows. Section 2 reviews related works. Section 3 mentions the components of the proposed pipeline. In Section 4, we present the experiments designed for our study. Section 5 discusses evaluation results. Section 6 outlines the conclusions drawn from our findings and discusses potential avenues for future research.",
"main_content": "2.1. Large Language Models LLMs [5, 27, 38] trained on extensively rich web-scale corpus usually employ autoregressive methods to generate target tokens. LLMs demonstrate remarkable proficiency in processing and generating text with humanlike characteristics. This attribute renders them suitable instruments for various language-related tasks, including question-answering, text generation, machine translation, etc. Expanding the scope of LLMs to include additional modalities results in the creation of multimodal LLMs [1, 20, 22, 36, 40], which boosts the performance for many downstream tasks including image captioning, visual question answering, text-to-image synthesis. 2.2. Visual Question Answering The main challenge of the VQA domain comes from bridging the gap between visual understanding and natural language. Numerous studies have been proposed to tackle questions related to visual content. Relation Networks [32] involves employing a compact and straightforward neural network module that takes pairs of features as input and generates a score indicative of the relationship between these feature pairs. LXMERT [35] is a large-scale transformer model that fuses textual and visual representations with a cross-modality encoder. MDETR [18] is an end-toend modulated detector which is an improved version of the object detection model DETR [6] by adding the capability of processing free-form texts. Alternatively, neurosymbolic approaches in VQA have gained attention to enhance model interpretability. A neuro-symbolic approach in VQA combines two main parts: neural network modules for handling images and text modalities, and a symbolic reasoning module for managing logic and knowledge representation. NS-VQA [43] and NS-CL [24] use neural networks for scene parsing and dividing questions into program instructions, and propose a symbolic module executing the program instructions on the scene representation. An alternative hybrid approach, ProTo [44], proposes program-guided transformers that use semantic and structural information of the programs being parsed from the questions by a sequence-to-sequence model. A recent approach, namely VisProg [12], generates program instructions from questions using LLMs and employs instructions on images benefiting from different modules for object detection, visual question answering, image classification, and segmentation. Recent large-scale multimodal approaches used for VQA are mentioned in Section 1 and Section 2.1. 2.3. Image Captioning Image captioning aims to produce a caption describing visual content in natural language. Conventional approaches in image captioning are based on attention and encoderdecoder structure [13, 14, 41]. A typical image captioning model consists of an encoder for gathering visual cues and a textual decoder to produce the final caption. Like VQA, this requires bridging the gap between visual and natural language understanding. Recently, large-scale multimodal models [1, 12, 19, 20, 26, 36, 40] have resulted in notable enhancements in performance and demonstrated adaptability to various downstream applications, including image captioning. 2.4. Question Answering Question-answering (QA) models aim to provide contextually appropriate responses based on a document or text, often requiring an understanding of linguistic rules, syntax, and contextual nuances. Recent models in QA leverage transformer architectures and large-scale pre-training on diverse datasets [5, 8, 23, 29]. 3. Methodology 3.1. 
3.1. Caption Generation
The primary and most crucial element in the suggested pipeline is the creation of image captions with high visual representation capability. Image captions provide a summarized version of the visual content, and specific visual details may be lost, which could affect the VQA performance. We survey image captioning models, selecting ones that provide more detailed captions while taking into account our computational resource limitations. Consequently, we evaluate several zero-shot vision-language models, including CogVLM [40], FuseCap [31], and BLIP-2 OPT2.7b [20], by integrating them into the VQA pipeline. We employ both the chat and visual grounding variants of CogVLM, considering their potential performance impacts across different question types. VQA performance is assessed across the various image captions according to structurally and semantically different question categories. More details about the question categories are given in Section 4.1. Two approaches are utilized in this paper to generate captions. First, each image is captioned without considering the questions associated with it, which we refer to as \u201cgeneral-purpose captioning\u201d throughout the paper. However, general-purpose captions are designed to provide a broad description of the visual content, and they may lack the precision needed to address detailed and specific queries. Therefore, in our second approach, we create image captions for each image-question pair, a process we refer to as \u201cquestion-driven image captioning\u201d. For this purpose, KeyBERT [10] is employed to extract keywords from the questions. KeyBERT utilizes BERT embeddings along with a basic cosine similarity measure to identify the most representative words that encapsulate the content of the entire text. Extracted keywords are fed into the image captioning model along with the corresponding image, as illustrated in Figure 1. We also investigate whether less relevant portions of an image caption could potentially introduce confusion or result in inaccurate answers for the QA model/LLM. Hence, in our analysis, we experiment with keeping only the most relevant sentence of the image caption and providing it to the LLM during the QA step. To achieve this, we utilize Sentence-BERT [30], specifically employing the MiniLM-L6 model (https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6cos-v1), to extract the most relevant sentence from the image caption based on the given question.
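As a concrete illustration of the keyword-extraction step, the following minimal sketch (ours, not the authors' code; it assumes only the keybert package and the prompt wording reported in Section 4.5) builds a question-driven captioning prompt:

```python
from keybert import KeyBERT

kw_model = KeyBERT()  # defaults to a small sentence-transformers backbone

def question_driven_prompt(question: str, top_n: int = 3) -> str:
    """Build a captioning prompt steered by keywords extracted from the question."""
    # extract_keywords returns (keyword, score) pairs ranked by relevance.
    keywords = [kw for kw, _ in kw_model.extract_keywords(question, top_n=top_n)]
    # Prompt wording follows Section 4.5 of the paper.
    return "Describe the scene in this image. Consider the keywords: " + ", ".join(keywords)

# Example with a GQA-style question:
print(question_driven_prompt("Which kind of vehicle is waiting for the traffic light?"))
```

The resulting prompt, together with the image, is what the captioning model receives in place of the generic "Describe the scene in this image" instruction.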
3.2. Question Answering
As in the pipeline shown in Figure 1, the QA model takes the image caption and the question as input, leveraging information from the image caption to generate an answer. During the QA step, we utilize GPT-3.5, recognized for its high zero-shot performance in QA benchmarks [39]. Despite the superior performance of the more recent LLM, GPT-4, across various natural language tasks including QA, we choose not to use GPT-4 in our experiments to keep our pipeline cost-effective. In future work, the integration of higher-performing LLMs with the pipeline could be explored. We derive answers with open-ended generation, specifically using the GPT-3.5-turbo API provided by OpenAI. The answer size is restricted to a maximum of two words, aligning with the answer size distribution in the GQA dataset. Optimal prompts are given in Section 4.5.
Figure 1. VQA pipeline exploiting general and the proposed question-driven (QD) image captioning as an intermediate step.
4. Experimental Setup
4.1. Dataset
We conduct experiments on the GQA [16] dataset, specifically the balanced version of the test-dev subset, comprising 12,578 questions. Each image in the dataset is linked to multiple questions, and the overall number of images included is 398. This subset contains a diverse distribution of questions across various categories, with a primary focus on categorization based on structure and semantics. The structural type is determined by the final operation in the functional program of the question, encompassing categories such as verify, query, choose, logical, and compare. The semantic type specifies the primary focus of the question and includes categories like object, attribute, category, relation, and global. Table 1 presents an overview of question types, corresponding descriptions, and the respective number of questions in the GQA test-dev [16].
Table 1. Overview of the question types.
Question type | Description | Example | No. samples
verify | yes/no questions | Does the device under the picture frame look black? | 2252
query | open questions | Which kind of vehicle is waiting for the traffic light? | 6805
choose | choosing from alternatives | What color is the hair, gray or red? | 1128
logical | logical inference | Are the flags triangular and red? | 1803
compare | comparison of objects | Which is larger, the pasture or the horse? | 589
object | existence questions | Are there both a horse and a fence in the image? | 778
attribute | object properties/position | On which side of the picture are the pens? | 5185
category | object identification | What kind of clothing is yellow? | 1149
relation | relations with objects/subjects | Is the toaster to the right of a refrigerator? | 5308
global | overall properties | Is it an outdoors scene? | 157
4.2. Competing VQA Methods
To evaluate zero-shot VQA performance, we use the chat variation of CogVLM (https://huggingface.co/THUDM/cogvlm-chat-hf) and BLIP-2 FlanT5XL (https://huggingface.co/Salesforce/blip2-flan-t5-xl). CogVLM is an open-sourced pre-trained vision-language model with 10B visual and 7B language parameters. CogVLM outperforms many vision-language models, e.g., InstructBLIP [7] and LLaVA-1.5 [21], in VQA benchmarks. Due to our resource constraint of 16 GB VRAM, we apply 4-bit quantization to CogVLM. BLIP-2 FlanT5XL, with 4.1B parameters, also indicates high performance, surpassing BLIP-2 OPT6.7B and Flamingo [2] in VQA benchmarks. We employ BLIP-2 FlanT5XL with F16 precision.
4.3. Image Captioning Methods
We examine the VQA performance attributed to the semantic and structural question types mentioned in Section 4.1. Image captions are obtained through the visual grounding (https://huggingface.co/THUDM/cogvlm-grounding-generalist-hf) and chat (https://huggingface.co/THUDM/cogvlm-chat-hf) variations of CogVLM, FuseCap (https://github.com/RotsteinNoam/FuseCap), and BLIP-2 OPT2.7b (https://huggingface.co/Salesforce/blip2-opt-2.7b). When determining the image captioning method, we pay attention to both its alignment with our resource capacity and its high performance in image captioning benchmarks. We apply 4-bit quantization to CogVLM and use F16 precision for BLIP-2 OPT2.7b.
4.4. Evaluation
Before the evaluation, GPT-3.5 predictions undergo post-processing, which involves the removal of punctuation. During the evaluation process, we employ the accuracy metric, calculated as the ratio of correctly predicted answers to the total number of answers. Given that answers are derived through open-ended generation using LLMs and might include variations, we do not seek an exact match between the prediction and the ground truth. Instead, we evaluate semantic similarity using cosine similarity in a vector space with a threshold of 0.70. If two strings are closely aligned in meaning, the prediction is accepted as correct; for example, accepting couch as correct for the label sofa. We determine the similarity threshold through manual observation of the results. At lower thresholds, we observe that predictions incorporating words related to each other, yet lacking identical meanings, are also considered correct. For instance, the similarity value between the words blue and brown is found to be 0.67. We additionally assess performance across higher cosine similarity thresholds, e.g., 0.8 and 0.9, and for exact matching (EM).
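A minimal sketch of this soft-matching evaluation, assuming the sentence-transformers library; the encoder choice here (the same multi-qa MiniLM model used for sentence selection) is an illustrative assumption, not necessarily the authors' exact setup.

```python
from sentence_transformers import SentenceTransformer, util

# Assumed encoder; any sentence-embedding model with cosine-comparable outputs works.
encoder = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")

def is_correct(prediction: str, label: str, threshold: float = 0.70) -> bool:
    """Accept a prediction if it is semantically close enough to the label."""
    embeddings = encoder.encode([prediction, label], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item() >= threshold

# The paper reports 'couch' accepted for the label 'sofa' at the 0.70 threshold,
# while 'blue' vs. 'brown' (similarity ~0.67) falls below it.
print(is_correct("couch", "sofa"), is_correct("blue", "brown"))
```

Raising the threshold to 0.8 or 0.9, or requiring an exact string match, recovers the stricter settings the paper also reports.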
4.5. Prompt Details
A brief prompt, \u2018Describe the scene in this image\u2019, is supplied to the image captioning model to create general-purpose image captions. To create question-driven captions, \u2018Consider the keywords: [keywords]\u2019 is added to the prompt. In the QA stage, the LLM prompt involves \u2018Answer the question in a maximum of two words based on the text. Consider the type of question in your answer. For example, if it is a yes/no question, the answer should be yes or no. Text: [text], Question: [question]\u2019. We notice a positive impact on the results when we include an instruction in the prompt to consider the question type. In the decoding step for the answer generation, we set the temperature to 0.2 and top_p to 1, and set both the frequency penalty and the presence penalty to 0.
Table 2. Comparison of the performances of different image captioning methods in the context of VQA on GQA test-dev. Image captioning methods are employed with GPT-3.5 as the question-answering (QA) method. Two variants of CogVLM, namely visual grounding (CogVLM-V) and chat model (CogVLM-C), are utilized for image captioning. QD and SB refer to question-driven and sentence-based captions, respectively. Answers with a cosine similarity of 0.7 or higher with the label have been considered correct. Accuracy values are compared with the performance of zero-shot VQA models based on various question categories.
Question type | CogVLM-C Cap.+GPT-3.5 QA | CogVLM-V Cap.+GPT-3.5 QA | CogVLM-C QD Cap.+GPT-3.5 QA | CogVLM-C SB Cap.+GPT-3.5 QA | FuseCap Cap.+GPT-3.5 QA | BLIP-2 Cap.+GPT-3.5 QA | CogVLM VQA | BLIP-2 VQA
verify | 63.01 | 58.53 | 66.83 | 61.06 | 53.60 | 55.82 | 83.04 | 56.48
query | 36.91 | 31.08 | 38.34 | 31.51 | 29.61 | 31.87 | 54.11 | 41.31
choose | 65.25 | 60.90 | 65.51 | 60.90 | 58.07 | 60.82 | 87.32 | 56.91
logical | 59.51 | 60.29 | 59.07 | 58.46 | 57.07 | 56.07 | 77.54 | 54.24
compare | 51.78 | 51.95 | 51.95 | 49.07 | 54.50 | 48.22 | 62.65 | 46.52
object | 61.95 | 63.24 | 59.13 | 58.87 | 59.38 | 58.35 | 84.45 | 57.07
attribute | 51.75 | 46.42 | 54.62 | 50.80 | 45.11 | 46.63 | 70.45 | 49.33
category | 47.35 | 44.21 | 50.39 | 42.56 | 43.52 | 42.47 | 63.19 | 53.35
relation | 42.56 | 38.32 | 42.97 | 35.76 | 34.98 | 37.23 | 59.91 | 43.31
global | 49.04 | 45.86 | 45.86 | 44.59 | 43.95 | 45.22 | 56.05 | 40.13
total | 48.06 | 43.83 | 49.50 | 44.12 | 41.58 | 42.99 | 66.02 | 47.52
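For completeness, here is a minimal sketch of the QA call with the decoding parameters reported above, using the openai Python client; the client version and message framing are assumptions, and the format-string placeholders stand in for the paper's [text] and [question] slots.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Prompt wording from Section 4.5; {text}/{question} replace the paper's [text]/[question].
QA_PROMPT = (
    "Answer the question in a maximum of two words based on the text. "
    "Consider the type of question in your answer. For example, if it is "
    "a yes/no question, the answer should be yes or no. "
    "Text: {text}, Question: {question}"
)

def answer(caption: str, question: str) -> str:
    """Query GPT-3.5-turbo with the image caption as context for the question."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": QA_PROMPT.format(text=caption, question=question)}],
        temperature=0.2,        # decoding parameters from Section 4.5
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
    )
    return response.choices[0].message.content.strip()
```

The returned string is then post-processed (punctuation removed) before the similarity-based evaluation described in Section 4.4.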
5. Results
5.1. Main Findings
Table 2 summarizes our results and demonstrates that employing our suggested QD image captioning approach for VQA enhances performance across most question categories compared to general-purpose image captioning. Also, Table 3 indicates that the QD image captioning approach utilizing the CogVLM-chat variant surpasses other image captioning methods in evaluations at both different cosine similarity thresholds and exact matching.
Table 3. Comparison of overall accuracy for exact matching (EM) and different cosine similarity thresholds.
Models | EM | sim=0.9 | sim=0.8
CogVLM-C Cap. + GPT-3.5 QA | 36.77 | 38.21 | 43.01
CogVLM-V Cap. + GPT-3.5 QA | 36.21 | 37.51 | 41.21
CogVLM-C QD Cap. + GPT-3.5 QA | 37.64 | 39.24 | 44.48
CogVLM-C SB Cap. + GPT-3.5 QA | 34.14 | 35.06 | 39.41
FuseCap Cap. + GPT-3.5 QA | 33.17 | 34.18 | 37.64
BLIP-2 Cap. + GPT-3.5 QA | 34.77 | 35.53 | 39.11
CogVLM VQA | 58.43 | 59.23 | 62.79
BLIP-2 VQA | 37.82 | 38.57 | 42.33
Significant performance enhancements are evident with QD image captions, particularly in the verify category for yes/no questions, as well as in the attribute and category types primarily focused on identifying and describing a single object\u2019s properties. However, challenges arise in the object category, which often asks which of two objects exists in the frame. Particularly in this category of questions, despite the QD image captions containing the relevant information, inaccuracies emerge due to the behavior of the QA model, as elaborated in Section 5.2. We also notice that the QD captioning emphasizing question keywords is linked to a performance decline in the global type questions. Global-type questions typically pertain to the overall content of an image. This suggests that the emphasis on question keywords in the caption negatively affects the model\u2019s ability to make inferences about the entire image. On the other hand, it is quite possible to give other answers to questions of this type that are meaningful and contextually correct but do not match the label. In most of the cases, we observe that GPT-3.5 predicts answers that could be correct but do not precisely match the expected label (see examples in Figure 3). In most question categories, the accuracy achieved by combining QD image captions with GPT-3.5 for VQA exceeds the performance of BLIP-2 FlanT5XL in the zero-shot setting. However, all image captioning-based approaches show inferior performance compared to the CogVLM-chat model used directly for VQA. We are intrigued to discover a notable disparity between the performance obtained when the image captions extracted by the CogVLM-chat model are provided to the LLM and the VQA performance of the CogVLM-chat model itself, unlike the case with BLIP-2. Among the FuseCap, BLIP-2 OPT2.7b, CogVLM-chat, and CogVLM-visual grounding models, the most informative captions for VQA are obtained through the CogVLM-chat variant. The CogVLM-visual grounding variant shows the highest performance only in the object and logical question categories. This suggests that visual grounding models may provide an advantage in these question categories with their capacity to connect language queries to relevant visual elements and reason about object-related relationships. Limiting image captions to the most relevant sentence reduces the overall performance of the CogVLM-chat model, though the impact varies across question types, with verify, query, choose and relation types being more negatively affected. This suggests that sentences less directly related to the questions do not cause confusion or inaccuracies for the LLM during QA.
Conversely, generating more comprehensive and context-rich image captions is necessary for optimal performance. Figures 2 and 3 feature examples of both correct and incorrect outcomes, where image captions are generated by the CogVLM-chat model using question-driven captioning and then fed to GPT-3.5 for answer prediction.
Figure 2. Examples of correct predictions when QD image captioning is applied.
Figure 3. Examples of wrong predictions when QD image captioning is applied.
5.2. Error Analysis
When examining incorrect predictions based on question types, we discover some common issues. We notice that 27% of the incorrect predictions are related to yes/no questions. A closer look reveals that in 11% of the incorrectly answered yes/no questions, GPT-3.5 provides a response using a word other than yes or no. For instance, when the provided caption is \u2018The image showcases a skateboarder in action, possibly performing a trick on a ramp. The skateboarder is wearing protective gear, including a helmet, knee pads, and elbow pads. The background features a clear blue sky, trees, and a building. The overall ambiance suggests an outdoor skateboarding event or practice session.\u2019, in response to the question \u2018Are there salt shakers or skateboards in the picture?\u2019 GPT-3.5\u2019s prediction is skateboards, while the ground-truth is yes. We observe that most similar inaccuracies are associated with questions related to the object and logical types, often connecting more than one object or attribute using conjunctions like \u2018and\u2019 or \u2018or\u2019, as in the example given. We posit that this issue can be alleviated by crafting more effective prompts for GPT-3.5 or by employing a more powerful LLM for QA. We also assess the instances where the LLM fails to provide an answer based on the information present in the image caption. Specifically, we examine the occurrences of \u2018not mentioned\u2019 and \u2018not visible\u2019 responses from GPT-3.5. Our findings indicate that, for the best general-purpose image captioning model, GPT-3.5 is not able to respond to 1.7% of the questions. Notably, when employing question-driven captioning, this rate decreases to 0.5%.
6. Conclusion
This study aims to develop a zero-shot VQA pipeline, leveraging LLMs with the inclusion of image captioning as an intermediate step, and to evaluate its performance on the GQA benchmark. The proposed approach involves question-driven image captioning to transfer contextual information to the QA model. The study includes a thorough evaluation of zero-shot models for image captioning in the VQA context, comparing the impact of general-purpose and question-driven image captions across various types of questions. Our comparative analysis suggests that incorporating question-driven image captions into the VQA process has a more favorable effect on overall performance, surpassing the VQA performance of BLIP-2. Future endeavors may explore the integration of larger-scale LLMs, e.g., GPT-4, to further enhance performance. Additionally, evaluating the pipeline in a few-shot setting could offer a more comprehensive comparison. To enhance transparency, replacing the QA model with an interpretable alternative, such as graph-based QA models, can be explored.
Acknowledgements
This work is partially supported by Middle East Technical University Scientific Research Projects Coordination Unit (METU-BAP), under the project number ADEP-704-202411482."
}
]
}