AcademicEval / intro_28K /test_introduction_long_2404.16821v2.json
{
"url": "http://arxiv.org/abs/2404.16821v2",
"title": "How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites",
"abstract": "In this report, we introduce InternVL 1.5, an open-source multimodal large\nlanguage model (MLLM) to bridge the capability gap between open-source and\nproprietary commercial models in multimodal understanding. We introduce three\nsimple improvements: (1) Strong Vision Encoder: we explored a continuous\nlearning strategy for the large-scale vision foundation model -- InternViT-6B,\nboosting its visual understanding capabilities, and making it can be\ntransferred and reused in different LLMs. (2) Dynamic High-Resolution: we\ndivide images into tiles ranging from 1 to 40 of 448$\\times$448 pixels\naccording to the aspect ratio and resolution of the input images, which\nsupports up to 4K resolution input. (3) High-Quality Bilingual Dataset: we\ncarefully collected a high-quality bilingual dataset that covers common scenes,\ndocument images, and annotated them with English and Chinese question-answer\npairs, significantly enhancing performance in OCR- and Chinese-related tasks.\nWe evaluate InternVL 1.5 through a series of benchmarks and comparative\nstudies. Compared to both open-source and proprietary models, InternVL 1.5\nshows competitive performance, achieving state-of-the-art results in 8 of 18\nbenchmarks. Code has been released at https://github.com/OpenGVLab/InternVL.",
"authors": "Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, Ji Ma, Jiaqi Wang, Xiaoyi Dong, Hang Yan, Hewei Guo, Conghui He, Botian Shi, Zhenjiang Jin, Chao Xu, Bin Wang, Xingjian Wei, Wei Li, Wenjian Zhang, Bo Zhang, Pinlong Cai, Licheng Wen, Xiangchao Yan, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, Wenhai Wang",
"published": "2024-04-25",
"updated": "2024-04-29",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Large language models (LLMs) have been instrumental in advancing artificial general intelligence (AGI) systems, * equal contribution; \u2020 interns at OpenGVLab, Shanghai AI Labora- tory; B corresponding author (wangwenhai@pjlab.org.cn). Benchmark InternVL 1.5 Grok-1.5V GPT-4V Claude-3 Opus Gemini Pro 1.5 MMMU Multi-discipline 45.2% 53.6% 56.8% 59.4% 58.5% MathVista Math 53.5% 52.8% 49.9% 50.5% 52.1% AI2D Diagrams 80.7% 88.3% 78.2% 88.1% 80.3% TextVQA Text reading 80.6% 78.1% 78.0% - 73.5% ChartQA Charts 83.8% 76.1% 78.5% 80.8% 81.3% DocVQA Documents 90.9% 85.6% 88.4% 89.3% 86.5% RealWorldQA Real-world understanding 66.0% 68.7% 61.4% 49.8% 67.5% Figure 1. InternVL 1.5 versus proprietary commercial models. The results of these benchmarks show that InternVL 1.5 achieves performance comparable to leading proprietary models. demonstrating remarkable abilities in processing open- world language tasks. Leveraging the advancements in LLMs, multimodal large language models (MLLMs) [5, 18, 23, 62, 63, 84, 92, 116, 142] have made significant strides, facilitating complex vision-language dialogues and interac- tions that bridge the gap between textual and visual infor- mation. Despite these achievements, there remains a notice- able divide between the capabilities of open-source mod- els and proprietary commercial models, e.g., GPT-4V [87], 1 arXiv:2404.16821v2 [cs.CV] 29 Apr 2024 Gemini series [92, 107], and Qwen-VL-Max [5]. This gap is mainly reflected in the following three as- pects: (1) Parameter Scale: Recent proprietary commer- cial MLLMs [5, 87, 92, 102] typically scales not less than 100 billion parameters, while open-source models com- monly employ a 300 million parameter vision foundation model (VFM), which is integrated with either a 7 billion or 13 billion LLMs. (2) Image Resolution: Proprietary commercial models typically employ a dynamic resolution approach, preserving the original aspect ratio to facilitate detailed scene and document understanding. In contrast, open-source models generally train with fixed resolutions [18, 23, 62, 71, 117, 142], such as 336\u00d7336 and 448\u00d7448, leading to a considerable gap in capabilities relative to com- mercial counterparts. (3) Multilingual Capability: Propri- etary models often leverage extensive multilingual datasets for training, enhancing their performance across diverse languages. However, open-source models predominantly utilize English data, relying on the zero-shot capabilities of LLMs for other languages, e.g. LLaVA-NeXT [64]. This re- sults in sub-optimal performance in non-English scene un- derstanding and OCR tasks. To bridge the gap, we introduce InternVL 1.5, integrat- ing three major improvements to enhance its performance and usability. (1) We implement a continuous learning ap- proach to a large-scale VFM\u2014InternViT-6B [18], refining it using high-quality image-text data. This process not only enhances the model\u2019s ability to understand visual content but also improves its adaptability across various LLMs. In addition, using InternLM2-20B [11] as the language foun- dation model also offers robust initial language processing capabilities. (2) We adopt a dynamic high-resolution strat- egy that segments images into 448\u00d7448 tiles, with the num- ber of tiles ranging from 1 to 40 (i.e., 4K resolution) based on the aspect ratio and resolution of the images. To capture global context, we additionally include a thumbnail view. 
(3) We gather a diverse collection of public datasets, covering high-quality natural scenes, charts, documents, and conversations in both English and Chinese. Additionally, we develop a data translation pipeline using open-source LLMs, which can be easily extended to more languages. These designs endow our model with several advantages: (1) Flexible Resolution: Similar to the “low” or “high” modes available in GPT-4V [87], InternVL 1.5 enables users to select the optimal resolution for their images, such as using low resolution for scene subject description and high resolution (up to 4K) for document understanding, effectively balancing computational efficiency with detail preservation. (2) Bilingual Proficiency: InternVL 1.5 exhibits robust bilingual capabilities, proficiently handling multimodal perception and understanding tasks in both English and Chinese. Notably, in tasks related to Chinese, our model generally outperforms the leading commercial model GPT-4V [87]. (3) Strong Visual Representation: By implementing a continuous learning strategy, we enhance the visual representation capabilities of InternViT-6B [18], making it robust to flexible input resolutions and various visual domains. Benefiting from InternViT-6B's massive parameter count, our model achieves a level of visual representation that rivals the linguistic capabilities of LLMs with more than 20 billion parameters. This synergy between visual and linguistic processing endows our system with robust multimodal capabilities. Figure 2. Characteristics of InternVL 1.5. InternVL 1.5 features strong visual representation through continuous learning, flexible resolution capabilities, and robust bilingual proficiency in English and Chinese, positioning it as a competitive MLLM. (Summarized components: strong foundation models, InternViT-6B-448px-V1.5 + InternLM2-20B; dynamic high resolution, 448 up to 4K; a high-quality bilingual dataset covering captioning, general QA, science, charts, mathematics, knowledge, OCR, documents, grounding, and conversation in English and Chinese.) We evaluated InternVL 1.5 on 18 representative multimodal benchmarks, which are categorized into four specific groups: OCR-related, general multimodal, mathematical, and multi-turn conversation benchmarks. Compared to both open-source and proprietary models, InternVL 1.5 shows competitive performance, achieving state-of-the-art results in 8 of 18 benchmarks. Notably, as shown in Figure 1, it even surpasses leading proprietary models like Grok-1.5V [125], GPT-4V [87], Claude-3 Opus [3], and Gemini Pro 1.5 [92] in four specific benchmarks, particularly in OCR-related datasets such as TextVQA [100], ChartQA [81], and DocVQA [82]. This evaluation indicates that InternVL 1.5 has effectively narrowed the gap between open-source models and leading commercial models. We hope that our approach and open-source model weights can contribute to the development of the MLLM community.",
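The dynamic high-resolution scheme summarized above (and detailed in Section 3.3 of the main content) amounts to a small amount of preprocessing: match the image to one of a pre-defined set of tile grids, cut it into 448×448 tiles, and add a thumbnail, with each tile contributing 256 visual tokens after the pixel shuffle. The Python sketch below is our own illustrative reconstruction under the constraints stated in the paper, not the released InternVL implementation; the helper names are invented.

```python
from itertools import product
from PIL import Image

TILE = 448             # base tile size used by InternVL 1.5
TOKENS_PER_TILE = 256  # 1024 ViT patches -> 256 tokens after pixel shuffle (1/4)

def candidate_ratios(max_tiles=12, min_tiles=1):
    """All (cols, rows) grids whose tile count lies in [min_tiles, max_tiles].
    For max_tiles=12 this yields exactly the 35 combinations cited in Sec. 3.3."""
    return sorted(
        {(w, h) for w, h in product(range(1, max_tiles + 1), repeat=2)
         if min_tiles <= w * h <= max_tiles},
        key=lambda r: r[0] * r[1],
    )

def match_ratio(width, height, ratios):
    """Pick the grid whose aspect ratio is closest to the image's (smallest
    absolute difference). On ties, prefer a grid whose area does not exceed
    twice the input area, so low-resolution images are not over-enlarged."""
    ar = width / height
    best, best_diff = (1, 1), float("inf")
    for w, h in ratios:
        diff = abs(ar - w / h)
        if diff < best_diff or (
            diff == best_diff and width * height * 2 >= w * h * TILE * TILE
        ):
            best, best_diff = (w, h), diff
    return best

def dynamic_tiles(img: Image.Image, max_tiles=12, use_thumbnail=True):
    """Resize to the matched grid, cut into 448x448 tiles, add a thumbnail."""
    cols, rows = match_ratio(img.width, img.height, candidate_ratios(max_tiles))
    resized = img.resize((cols * TILE, rows * TILE))
    tiles = [
        resized.crop((c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE))
        for r in range(rows) for c in range(cols)
    ]
    if use_thumbnail and len(tiles) > 1:
        tiles.append(img.resize((TILE, TILE)))  # global-context thumbnail
    return tiles

# e.g., an 800x1300 image matches the 2:3 grid (896x1344): 6 tiles + thumbnail
# -> 7 * 256 = 1,792 visual tokens; the training-time maximum of 12 tiles plus
# the thumbnail gives 3,328 tokens, and 40 tiles plus one at test time gives 10,496.
```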
"main_content": "2.1. Proprietary Commercial MLLMs Large language models (LLMs) [1, 4, 7, 8, 11, 25, 104, 106, 108, 112, 113, 122, 123, 141] have greatly advanced AGI by enabling complex language tasks previously thought human-exclusive. Building on this, the development of proprietary commercial MLLMs represents a significant evolution. For example, OpenAI\u2019s GPT-4V [87] extends GPT4\u2019s capabilities by incorporating visual inputs, allowing it to handle both text and image content, which stands as a significant development in the domain of MLLMs. Afterward, Google\u2019s Gemini series progresses from Gemini 1.0 [107] to Gemini 1.5 [92], enhancing MLLMs with the ability to process text, images, and audio and support up to 1 million tokens, which boosts performance significantly. The QwenVL-Plus/Max are Alibaba\u2019s leading models in the QwenVL series [5], renowned for superior capacity in multimodal tasks without needing OCR tools. Recent advancements in proprietary MLLMs include Anthropic\u2019s Claude-3V series [3], HyperGAI\u2019s HPT Pro [35], Apple\u2019s MM1 [84], StepFun\u2019s Step-1V [102], and xAI\u2019s Grok-1.5V [125]. 2.2. Open-Source MLLMs The development of open-source MLLMs [2, 13, 43, 48, 51, 55, 56, 69, 70, 103, 110, 118, 120, 124, 138, 139] has significantly influenced the AGI landscape by integrating and enhancing capabilities in processing both visual and textual data. Over the past year, many open-source MLLMs have become well-known, including the LLaVA series [62\u2013 64], MiniGPT-4 [142], VisionLLM [116], Qwen-VL [5], CogVLM [117], Shikra [15], and others [18, 23, 90, 119]. However, these models are typically trained on images with small, fixed resolutions such as 336\u00d7336, or 448\u00d7448, which leads to sub-optimal performance on images with unusual aspect ratios or document data. To address this issue, many approaches have been explored for training on high-resolution images. Currently, there are two common technical routes: one involves designing a dual-branch image encoder [32, 53, 76, 77, 121], and the other involves dividing a high-resolution image into many low-resolution tiles [24, 33, 47, 55, 57, 64, 68, 126, 127]. Despite these explorations in high-resolution training, these open-source models still exhibit significant gaps in understanding documents, charts, and infographics, as well as recognizing scene texts, compared to leading commercial models. 2.3. Vision Foundation Models for MLLMs Vision foundation models (VFMs) are a focal point of research within the MLLM community. Currently, models like CLIP-ViT [91] and SigLIP [136] are prevalently utilized; however, many studies have been conducted to find the most suitable vision encoders for MLLMs [57, 71, 76, Pixel Shuffle Dynamic High Resolution InternViT-6B InternLM2 Tokenizer InternLM2-Chat-20B MLP Projector User Message Figure 3. Overall Architecture. InternVL 1.5 adopts the ViTMLP-LLM architecture similar to popular MLLMs [62, 64], combining a pre-trained InternViT-6B [18] with InternLM2-20B [11] through a MLP projector. Here, we employ a simple pixel shuffle to reduce the number of visual tokens to one-quarter. 111]. For instance, Tong et al. [111] observed notable differences in the visual patterns of CLIP and DINOv2 [88], leading to the development of a mixture-of-features module that combines these two VFMs. LLaVA-HR [76] introduced a dual-branch vision encoder utilizing CLIP-ViT for low-resolution pathways and CLIP-ConvNext for highresolution pathways. 
Similarly, DeepSeek-VL [71] adopted a dual vision encoder design, using SigLIP-L for low-resolution images and SAM-B for high-resolution images. In this report, we propose a continuous learning strategy for our vision foundation model, InternViT-6B [18], which continuously boosts the visual understanding capabilities and can be transferred and reused across different LLMs. 3. InternVL 1.5 3.1. Overall Architecture As illustrated in Figure 3, InternVL 1.5 employs an architecture akin to widely-used open-source MLLMs, specifically the “ViT-MLP-LLM” configuration referenced in various existing studies [18, 23, 62–64, 71, 142]. Our implementation of this architecture integrates a pre-trained InternViT-6B [18] with a pre-trained InternLM2-20B [11] using a randomly initialized MLP projector. During training, we implemented a dynamic resolution strategy, dividing images into tiles of 448×448 pixels, with the number of tiles ranging from 1 to 12 based on the aspect ratio and resolution of the input images. During testing, this can be zero-shot scaled up to 40 tiles (i.e., 4K resolution). To enhance scalability for high resolution, we simply employed a pixel shuffle operation to reduce the number of visual tokens to one-quarter of the original. Therefore, in our model, a 448×448 image is represented by 256 visual tokens. Figure 3. Overall Architecture. InternVL 1.5 adopts the ViT-MLP-LLM architecture similar to popular MLLMs [62, 64], combining a pre-trained InternViT-6B [18] with InternLM2-20B [11] through an MLP projector. Here, we employ a simple pixel shuffle to reduce the number of visual tokens to one-quarter. (Diagram components: dynamic high-resolution image input, InternViT-6B, pixel shuffle, MLP projector, InternLM2 tokenizer for the user message, and InternLM2-Chat-20B.) 3.2. Strong Vision Encoder In existing MLLMs [5, 23, 62–64, 78, 142], the most commonly used vision foundation model is typically a contrastively pre-trained ViT [18, 36, 91, 136]. However, these ViTs are commonly trained on image-text pairs crawled from the Internet at a fixed low resolution (e.g., 224×224), so their performance degrades when tasked with processing high-resolution images or images from sources other than the Internet, such as document images. InternViT-6B-448px-V1.2. To address this issue, the InternVL 1.2 update involved continuous pre-training of the InternViT-6B model. First, we found that the features from the fourth-to-last layer perform best for multimodal tasks, so we directly discarded the weights of the last three layers, reducing InternViT-6B from 48 layers to 45 layers. Then, we increased the resolution of InternViT-6B from 224 to 448 and integrated it with Nous-Hermes-2-Yi-34B [130]. To equip the model with high-resolution processing and OCR capabilities, both the vision encoder and the MLP were activated for training, utilizing a mix of image captioning [10, 17, 90, 93, 100] and OCR-specific datasets [29, 94]. The newly derived InternViT weights from this process were released as InternViT-6B-448px-V1.2 (https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2). InternViT-6B-448px-V1.5. The development of InternVL 1.5 continues the pre-training of the strong foundation of InternViT-6B-448px-V1.2. In this update, the resolution of training images is expanded from fixed 448×448 to dynamic 448×448, where the basic tile size is 448×448 and the number of tiles ranges from 1 to 12. Additionally, we enhance the data scale, quality, and diversity of the pre-training dataset, resulting in the powerful robustness, OCR capability, and high-resolution processing capability of our 1.5 version model (https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5). Details of the dynamic resolution and training datasets are described in Sections 3.3 and 3.4. It is noteworthy that despite the LLM in InternVL 1.5 being changed from Nous-Hermes-2-Yi-34B to InternLM2-20B [11], the InternViT maintained excellent compatibility and portability with the new LLM.
This suggests that the visual features learned by InternViT-6B during the pre-training stage of MLLMs are broadly applicable and not tightly bound to the specific LLM. 3.3. Dynamic High-Resolution Inspired by UReader [127], we adopt a dynamic high-resolution training approach that effectively adapts to the varying resolutions and aspect ratios of input images. This method leverages the flexibility of segmenting images into tiles, enhancing the model's ability to process detailed visual information while accommodating diverse image resolutions. Figure 4. Illustration of dynamic high resolution. We dynamically match an optimal aspect ratio from pre-defined ratios, dividing the image into tiles of 448×448 pixels and creating a thumbnail for global context. This method minimizes aspect ratio distortion and accommodates varying resolutions during training. (In the illustrated example, an 800×1300 input image is matched to the pre-defined 2:3 aspect ratio, resized to 896×1344, divided into 448×448 tiles, and paired with a thumbnail.) It mainly consists of the following steps: Dynamic Aspect Ratio Matching. As shown in Figure 4, to maintain natural aspect ratios during processing, we dynamically match the optimal aspect ratio from a pre-defined set of aspect ratios. Due to limited computational resources, we allow a maximum of 12 tiles during training. Consequently, this set includes all 35 possible combinations of aspect ratios formed by 1 to 12 tiles, such as {1:1, 1:2, 2:1, 3:1, ..., 2:6}. During the matching process, for each input image, we calculate its aspect ratio and compare it with the 35 pre-defined aspect ratios by measuring the absolute difference. If multiple pre-defined aspect ratios match (e.g., 1:1 and 2:2), we prioritize the one not exceeding twice the input image's area, thereby preventing excessive enlargement of low-resolution images. Image Division & Thumbnail. Once an appropriate aspect ratio is determined, the image is resized to the corresponding resolution. For example, an 800×1300 image will be resized to 896×1344. The resized image is then divided into tiles of 448×448 pixels. Alongside the tiles, we include a thumbnail of the entire image to capture the global context. This thumbnail is scaled down to 448×448, aiding the model in understanding the overall scene. Therefore, during training, the number of visual tokens ranges from 256 to 3,328. During testing, the number of tiles can increase to a maximum of 40, resulting in 10,496 visual tokens. 3.4. High-Quality Bilingual Dataset Pre-training Dataset. The pre-training dataset utilized in our InternVL 1.5 encompasses a diverse range of publicly accessible sources. We provide an overview of these datasets in Table 1a. Captioning (53.9%): Laion-EN (en) [93], Laion-ZH (zh) [93], COYO (zh) [10], GRIT (zh) [90], COCO (en) [17], TextCaps (en) [99]. Detection (5.2%): Objects365 (en&zh) [97], GRIT (en&zh) [90], All-Seeing (en&zh) [119]. OCR, large (32.0%): Wukong-OCR (zh) [29], LaionCOCO-OCR (en) [94], Common Crawl PDF (en&zh). OCR, small (8.9%): MMC-Inst (en) [61], LSVT (zh) [105], ST-VQA (en) [9], RCTW-17 (zh) [98], ReCTs (zh) [137], ArT (en&zh) [19], SynthDoG (en&zh) [41], COCO-Text (en) [114], ChartQA (en) [81], CTW (zh) [134], DocVQA (en) [82], TextOCR (en) [101], PlotQA (en) [85], InfoVQA (en) [83]. (a) Datasets used in the pre-training stage.
Captioning: TextCaps (en) [99], ShareGPT4V (en&zh) [16]. General QA: VQAv2 (en) [28], GQA (en) [34], OKVQA (en) [80], VSR (en) [59], VisualDialog (en) [22]. Science: AI2D (en) [39], ScienceQA (en) [73], TQA (en) [40]. Chart: ChartQA (en) [81], MMC-Inst (en) [61], DVQA (en) [38], PlotQA (en) [85], LRV-Instruction (en) [60]. Mathematics: GeoQA+ (en) [12], TabMWP (en) [74], MathQA (en) [132], CLEVR-Math/Super (en) [54, 58], Geometry3K (en) [72]. Knowledge: KVQA (en) [96], A-OKVQA (en) [95], ViQuAE (en) [45], Wikipedia (en&zh) [31]. OCR: OCRVQA (en) [86], InfoVQA (en) [83], TextVQA (en) [100], ArT (en&zh) [19], COCO-Text (en) [114], CTW (zh) [134], LSVT (zh) [105], RCTW-17 (zh) [98], ReCTs (zh) [137], SynthDoG (en&zh) [41], ST-VQA (en) [9]. Document: DocVQA (en) [20], Common Crawl PDF (en&zh). Grounding: RefCOCO/+/g (en) [79, 131], Visual Genome (en) [42]. Conversation: LLaVA-150K (en&zh) [63], LVIS-Instruct4V (en) [115], ALLaVA (en&zh) [14], Laion-GPT4V (en) [44], TextOCR-GPT4V (en) [37], SVIT (en&zh) [140]. Text-only: OpenHermes2.5 (en) [109], Alpaca-GPT4 (en) [106], ShareGPT (en&zh) [141], COIG-CQIA (zh) [6]. (b) Datasets used in the fine-tuning stage. Table 1. Summary of datasets used in InternVL 1.5. To construct large-scale OCR datasets, we utilized PaddleOCR [49] to perform OCR in Chinese on images from Wukong [29] and in English on images from LAION-COCO [94]. These datasets span multiple tasks, including captioning, which predominantly uses datasets such as Laion-EN [93], Laion-ZH [93], COYO [10], and GRIT [90], constituting 53.9% of the total data. Detection and grounding tasks utilize datasets like Objects365 [97], GRIT [90], and All-Seeing [119], making up 5.2%. For OCR tasks, we utilized large-scale datasets such as Wukong-OCR, LaionCOCO-OCR, and Common Crawl PDFs, which constitute 32.0% of our data. These datasets were constructed using PaddleOCR [49] to perform OCR on Chinese images from Wukong [29] and on English images from LaionCOCO [94]. Smaller OCR datasets include MMC-Inst [61], LSVT [105], ST-VQA [9], RCTW-17 [98], ArT [19], and others, accounting for 8.9% of the data, which focus on more specific or constrained OCR challenges. This diverse dataset assembly ensures robust model pre-training of InternVL, catering to varied linguistic and visual elements across tasks. Fine-tuning Dataset. During the fine-tuning stage, we meticulously selected datasets to enhance model performance across a wide range of multimodal tasks. System: You are a translator proficient in English and {language}. Your task is to translate the following English text into {language}, focusing on a natural and fluent result that avoids “translationese.” Please consider these points: 1. Keep proper nouns, brands, and geographical names in English. 2. Retain technical terms or jargon in English, but feel free to explain in {language} if necessary. 3. Use {language} idiomatic expressions for English idioms or proverbs to ensure cultural relevance. 4. Ensure quotes or direct speech sound natural in {language}, maintaining the original's tone. 5. For acronyms, provide the full form in {language} with the English acronym in parentheses. User: Text for translation: {text} Assistant: {translation results} Figure 5. Explanation of our data translation pipeline. Based on this prompt, we translate English data into Chinese while keeping the language natural and smooth. Here, {language} represents the target language, {text} refers to the original English text, and {translation results} indicates the translated text.
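The translation pipeline in Figure 5 above is a fixed prompt template instantiated per target language and text snippet. A minimal sketch of filling that template and handing it to a chat-style LLM is shown below; the function name and the chat_llm call are placeholders of ours, since the report does not specify a particular API.

```python
# Prompt template taken from Figure 5; {language} and {text} are filled in per request.
TRANSLATION_SYSTEM_PROMPT = """You are a translator proficient in English and {language}. Your task is to translate the following English text into {language}, focusing on a natural and fluent result that avoids "translationese." Please consider these points:
1. Keep proper nouns, brands, and geographical names in English.
2. Retain technical terms or jargon in English, but feel free to explain in {language} if necessary.
3. Use {language} idiomatic expressions for English idioms or proverbs to ensure cultural relevance.
4. Ensure quotes or direct speech sound natural in {language}, maintaining the original's tone.
5. For acronyms, provide the full form in {language} with the English acronym in parentheses."""


def build_translation_messages(text: str, language: str = "Chinese") -> list:
    """Fill the Figure 5 template; swapping `language` extends the pipeline
    to other target languages without any manual annotation."""
    return [
        {"role": "system", "content": TRANSLATION_SYSTEM_PROMPT.format(language=language)},
        {"role": "user", "content": f"Text for translation: {text}"},
    ]


# messages = build_translation_messages("A red panda rests on a branch.")
# translation = chat_llm(messages)  # hypothetical call to an open-source LLM or GPT-3.5
```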
The datasets used in this phase are summarized in Table 1b. For image captioning, we included TextCaps [99] and bilingual ShareGPT4V [16], which help the model learn to generate descriptive captions in both English and Chinese. In the domain of general QA, datasets such as VQAv2 [28], GQA [34], and VisualDialog [22] teach the model to handle diverse question-answering scenarios. For scientific image understanding, datasets like AI2D [39], ScienceQA [73], and TQA [40] provide content-rich scenarios to enhance the model\u2019s ability to interpret scientific diagrams and texts. Chart interpretation is bolstered by ChartQA [81], MMC-Inst [61], and PlotQA [85], which train the model to analyze and understand chart images. Mathematics datasets such as GeoQA+ [12], TabMWP [74], and MathQA [132] introduce complex numerical and geometric problem-solving tasks. Knowledge-based QA benefits from the inclusion of datasets like KVQA [96] and bilingual Wikipedia [31], enabling the model to extract and reason with factual information across multiple languages. For tasks involving OCR, we utilize OCRVQA [86], TextVQA [100], and several datasets focused on Chinese and English text recognition, such as SynthDoG [41], to improve text recognition from images. Document understanding is advanced through datasets like DocVQA [82] and Common Crawl PDFs, which help the model for real-world document analysis. Visual grounding is trained using RefCOCO [79, 131] and Visual Genome [42], aiding the model 5 openOCR-related Benchmarks General Multimodal Benchmarks Math model source #param DocVQA ChartQA InfoVQA TextVQA OCRBench MME RWQA AI2D MMMU MMB-EN/CN CCB MMVet SEED HallB MathVista GPT-4V [1] \u2717 \u2212 88.4 78.5 \u2212 78.0 645 1926.6 61.4 78.2 56.8 77.0 / 74.4 46.5 67.6 71.6 46.5 49.9 Gemini Ultra 1.0 [107] \u2717 \u2212 90.9 80.8 80.3 82.3 \u2212 \u2212 \u2212 79.5 59.4 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 53.0 Gemini Pro 1.0 [107] \u2717 \u2212 88.1 74.1 75.2 74.6 659 1933.4 \u2212 73.9 47.9 73.6 / 74.3 52.5 64.3 70.7 45.2 45.2 Gemini Pro 1.5 [92] \u2717 \u2212 86.5 81.3 72.7 73.5 \u2212 \u2212 67.5 80.3 58.5 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 52.1 Qwen-VL-Max [5] \u2717 \u2212 93.1 79.8 73.4 \u2212 723 2433.6 \u2212 79.3 51.3 77.6 / 75.7 63.5 66.6 \u2212 41.2 51.0 Qwen-VL-Plus [5] \u2717 \u2212 91.4 78.1 \u2212 \u2212 694 2183.4 \u2212 75.9 45.2 67.0 / 70.7 55.1 61.1 72.7 40.6 43.3 Claude-3 Opus [3] \u2717 \u2212 89.3 80.8 \u2212 \u2212 694 1586.8 49.8 88.1 59.4 63.3 / 59.2 26.3 58.1 \u2212 37.8 50.5 Claude-3 Sonnet [3] \u2717 \u2212 89.5 81.1 \u2212 \u2212 646 1625.9 51.9 88.7 53.1 67.8 / 64.2 27.8 \u2212 \u2212 41.3 47.9 Claude-3 Haiku [3] \u2717 \u2212 88.8 81.7 \u2212 \u2212 658 1453.2 \u2212 86.7 50.2 60.7 / 57.2 24.5 \u2212 \u2212 39.2 46.4 HPT Pro [35] \u2717 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 52.0 77.5 / 76.7 \u2212 \u2212 73.1 \u2212 \u2212 MM1 [84] \u2717 30B \u2212 \u2212 \u2212 73.5 \u2212 2069.0 \u2212 \u2212 44.7 75.1 / \u2212 \u2212 48.7 72.1 \u2212 39.4 Step-1V [102] \u2717 100B \u2212 \u2212 \u2212 \u2212 625 2206.4 \u2212 79.2 49.9 80.7 / 79.9 71.2 63.3 70.3 48.4 44.8 Grok-1.5V [125] \u2717 \u2212 85.6 76.1 \u2212 78.1 \u2212 \u2212 68.7 88.3 \u2212 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 52.8 Text-Monkey [68] \u2713 10B 66.7 59.9 28.6 64.3 561 \u2212 \u2212 \u2212 \u2212 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 DocOwl-1.5 [33] \u2713 8B 82.2 70.2 50.7 68.6 599 \u2212 \u2212 \u2212 \u2212 \u2212/ \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 
Mini-Gemini [53] \u2713 35B \u2212 \u2212 \u2212 74.1* \u2212 2141.0 \u2212 \u2212 48.0 80.6 / \u2212 \u2212 59.3 \u2212 \u2212 43.3 LLaVA-NeXT [64] \u2713 35B 84.0 68.7 51.5 69.5* 574 2028.0 \u2212 74.9 51.1 81.1 / 79.0 49.2 57.4 75.9 34.8 46.5 InternVL 1.2 (ours) \u2713 40B 57.7 68.0 39.5 72.5* 569 2175.4 67.5 79.0 51.6 82.2 / 81.2 59.2 48.9 75.6 47.6 47.7 InternVL 1.5 (ours) \u2713 26B 90.9 83.8 72.5 80.6 724 2187.8 66.0 80.7 45.2 82.2 / 82.0 69.8 62.8 76.0 49.3 53.5 Table 2. Comparison with SoTA models on 16 multimodal benchmarks. OCR-related benchmarks include: DocVQA test [82], ChartQA test [81], InfographicVQA test [83], TextVQA val [100], and OCRBench [67]. General multimodal benchmarks encompass: MME [26], RealWorldQA [125], AI2D test [39], MMMU val [135], MMBench-EN/CN test [66], CCBench dev [66], MMVet [133], SEED Image [46], and HallusionBench [30]. Additionally, the math dataset includes MathVista testmini [75]. * denotes that Rosetta OCR tokens are used in the testing of TextVQA. The MME results we report are the sum of the perception and cognition scores. The results of OCRBench, MMBench, CCBench, and HallusionBench are collected from the OpenCompass leaderboard [21]. in precise object localization within images. In the realm of multimodal conversation, datasets like LLaVA-150K [63] and ALLaVA [14] enhance the model\u2019s dialogic capabilities by simulating interactive and engaging scenarios. Lastly, text-only datasets include OpenHermes2.5 [109], AlpacaGPT4 [106], among others [6, 141], which are used to maintain the original linguistic capabilities of the LLM. In summary, these datasets together establish a rich and diverse foundation for fine-tuning, which enhances our model\u2019s ability to handle a wide range of multimodal tasks and ensures its readiness for practical applications. Data Translation Pipeline. As shown in Figure 5, to enhance our model\u2019s multilingual capabilities, we implemented a data translation pipeline. This pipeline utilizes state-of-the-art open-source LLMs [4, 11, 130] or GPT-3.5 to convert English datasets to another language (e.g., Chinese), maintaining consistency and precision in bilingual labeling. Moreover, it can readily expand to encompass more languages by adjusting the language prompt, without relying on manual annotation processes. In Table 1, we have annotated the language for each dataset. For a dataset that was originally in English, an annotation as \u201czh\u201d indicates that we have translated it into Chinese using the translation pipeline. For example, COYO [10] and GRIT [90] were originally English datasets, and we have translated them into Chinese. By leveraging this translation pipeline, the Chinese capabilities of InternVL 1.5 have been greatly enhanced. 4. Experiments 4.1. Implementation Details. InternVL 1.5 was developed by integrating the InternViT6B [18] vision encoder with the InternLM2-20B [11] language model, using a dynamic high-resolution strategy. In this approach, images are segmented into 448\u00d7448 pixel tiles, with the number of tiles ranging up to 12 based on the image\u2019s aspect ratio and resolution during training. In testing phases, the model could handle up to 40 tiles, equivalent to 4K resolution, demonstrating its adaptability to highresolution inputs in a zero-shot manner. Notably, we built our model based on the chat version of InternLM2-20B rather than the base model. The training of InternVL 1.5 was divided into two stages. 
Initially, the pre-training stage focused on training the InternViT-6B vision encoder and the MLP projector to optimize visual feature extraction. Subsequently, the entire model\u2019s 26 billion parameters were fine-tuned to enhance multimodal capabilities. In both two stages of training, we use a context length of 4096 and adopt the same response formatting prompts as LLaVA 1.5 [52]. Additionally, the evaluation was mainly supported by VLMEvalKit [21]. 4.2. Comparison with State-of-the-Art MLLMs 4.2.1 Quantitative Results on 18 Benchmarks In this section, we conduct an extensive evaluation across a series of benchmarks to assess our model\u2019s multimodal un6 openConvBench (Pairwise Grading) ConvBench (Direct Grading) model source #param R1 R2 S1 S2 S3 SO R1 R2 S1 S2 S3 SO GPT-4V [1] \u2717 \u2212 39.51 38.47 38.47 39.34 37.61 40.55 7.09 7.30 7.30 7.48 7.12 6.88 Claude-3 Opus [3] \u2717 \u2212 36.60 37.49 38.99 39.17 34.32 35.70 6.54 6.75 6.53 7.04 6.68 6.32 Reka Flash [89] \u2717 \u2212 25.60 24.67 25.13 27.56 21.32 26.52 6.78 6.86 6.93 7.25 6.41 6.70 Gemini Pro 1.0 [107] \u2717 \u2212 8.44 8.55 9.01 9.36 7.28 8.32 4.42 4.60 5.18 4.95 3.66 4.24 ShareGPT4V-13B [16] \u2713 13B 17.56 17.45 17.85 18.72 15.77 17.68 4.85 5.03 5.16 5.06 4.86 4.67 LLaVA-1.5-13B [62] \u2713 13B 16.93 18.08 20.45 18.02 15.77 15.77 4.94 5.14 5.03 5.41 4.99 4.74 XComposer2 [23] \u2713 8B 15.83 16.41 17.16 19.06 13.00 15.25 5.82 5.98 5.98 6.17 5.78 5.66 mPLUG-Owl2 [128] \u2713 8B 14.93 15.83 17.50 17.16 12.82 14.04 5.04 5.17 4.98 5.38 5.14 4.91 Qwen-VL-Chat [5] \u2713 10B 14.33 14.62 16.29 18.37 9.19 14.04 5.54 5.65 5.96 5.78 5.22 5.43 MiniGPT-4 [142] \u2713 8B 10.95 10.80 11.61 11.27 9.53 11.09 3.85 4.04 3.99 4.40 3.73 3.66 LLaMA-A-V2 [27] \u2713 7B 9.04 9.59 8.84 10.92 9.01 8.49 4.77 4.91 4.77 5.47 4.48 4.64 InternVL 1.2 (ours) \u2713 40B 21.17 22.41 24.96 21.31 20.97 19.93 5.49 5.69 5.80 5.88 5.39 5.29 InternVL 1.5 (ours) \u2713 26B 17.65 20.22 26.00 17.33 17.33 15.08 5.60 5.76 6.11 5.93 5.25 5.43 openMMT-Bench model source #param Overall Overall* GPT-4V [1] \u2717 \u2212 62.0 55.5 Qwen-VL-Plus [4] \u2717 \u2212 62.3 56.6 Gemini Pro 1.0 [107] \u2717 \u2212 61.6 55.1 Claude-3 Haiku [3] \u2717 \u2212 52.2 46.4 LLaVA-NeXT [64] \u2713 35B 60.8 56.3 XComposer2 [23] \u2713 8B 55.7 50.0 BLIP-2-XXL [50] \u2713 12B 54.8 49.1 Yi-VL-34B [130] \u2713 35B 54.2 48.6 Monkey-Chat [107] \u2713 10B 53.4 46.0 DeepSeek-VL [71] \u2713 7B 53.2 46.5 CogVLM-Chat [117] \u2713 17B 51.6 44.2 InternVL 1.2 (ours) \u2713 40B 63.4 58.2 InternVL 1.5 (ours) \u2713 26B 59.0 56.2 Table 3. Comparison with SoTA models on ConvBench and MMT-Bench. ConvBench [65] is a multi-turn conversation evaluation benchmark designed for MLLMs. The table presents win rates against humans, where S1, S2, and S3 represent the scores for perception, reasoning, and creation, respectively. R2 is calculated as (S1 + S2 + S3)/3, reflecting the average performance across three turns. R1 is derived from (R2 + S0)/2, indicating the model\u2019s overall score. MMT-Bench [129] is a comprehensive benchmark designed to assess MLLMs across massive multimodal tasks requiring expert knowledge and deliberate visual recognition, localization, reasoning, and planning. The overall score is computed across 162 subtasks, excluding visual recognition as denoted by *. derstanding and reasoning capability. The benchmarks employed in our study are categorized into four distinct types: OCR-related, general multimodal, mathematical, and multiturn conversation benchmarks. 
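The ConvBench aggregation described in the Table 3 caption above is plain arithmetic over the per-turn scores; the small sketch below (ours, for illustration only) makes the relationship explicit.

```python
def convbench_scores(s0: float, s1: float, s2: float, s3: float):
    """Aggregate ConvBench results as described in the Table 3 caption:
    S1, S2, and S3 are the perception, reasoning, and creation scores,
    R2 = (S1 + S2 + S3) / 3 is the average over the three turns, and
    R1 = (R2 + S0) / 2 is the model's overall score."""
    r2 = (s1 + s2 + s3) / 3
    r1 = (r2 + s0) / 2
    return r1, r2


# InternVL 1.5 under direct grading: S0=5.43, S1=6.11, S2=5.93, S3=5.25
# gives R2 = 5.76 and R1 = 5.60, matching the values reported in Table 3.
print(convbench_scores(5.43, 6.11, 5.93, 5.25))
```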
As depicted in Table 2, InternVL 1.5 exhibits leading performance across the majority of these benchmarks. OCR-related Image Understanding. We evaluate the model performance across four key dimensions of OCR: document comprehension (DocVQA [82]), chart understanding (ChartQA [81]), infographic understanding (InfographicVQA [83]), and scene text interpretation (TextVQA [100]). Additionally, we employ OCRBench [67] to perform a comprehensive evaluation of the model\u2019s overall OCR capabilities. As shown in Table 2, our model demonstrated comparable performance to proprietary models on these benchmarks and significantly outperformed the opensource LLaVA-NeXT [64] as well as InternVL 1.2, the predecessor of InternVL 1.5. Notably, our model achieves state-of-the-art performance on ChartQA and OCRBench, outperforming all competing proprietary models. General Multimodal Evaluation. In addition to OCRrelated benchmarks, we tested our model on several general multi-modal benchmarks. We used RealWorldQA [125] to evaluate the model\u2019s real-world spatial understanding capabilities. HallusionBench [30] was employed to assess its ability to control hallucinations. Additionally, MMMU [135] was utilized to evaluate the model\u2019s multidisciplinary capabilities, and AI2D [39] to assess its understanding of science diagrams. We also tested the model\u2019s proficiency in Chinese and understanding of Chinese culture with the MMBench-CN test [66] and CCBench [66], respectively. Other comprehensive benchmarks such as MME [26], MMBench-EN [66], MMVet [133], SEED [46], and MMT-Bench [129] were also used to assess the model\u2019s visual understanding and reasoning abilities. Compared to other open-source models like TextMonkey [68], DocOwl-1.5 [33], and LLaVA-NeXT [64], our InternVL 1.5 significantly closes the gap with proprietary models in these benchmarks. Specifically, our model achieves the best performance on HallusionBench [30], demonstrating its outstanding ability to reduce hallucinations. Moreover, thanks to our high-quality bilingual dataset, our model exhibits robust Chinese language capabilities, significantly surpassing both open-source and proprietary methods on MMBench-CN and CCBench. However, while InternVL 1.5 surpasses MM1 [84] and is comparable to Gemini Pro 1.0 [107] on MMMU, it shows a slight decline from its predecessor, InternVL 1.2. We attribute this modest decrement to the smaller size of the language model, a phenomenon similarly observed in the MMT-Bench [129] results, as shown in Table 3. Math Reasoning. MathVista [75] is a benchmark designed to integrate challenges from various mathematical and visual tasks. Completing these tasks requires a deep understanding of visuals, logical thinking, and math knowledge\u2014areas where many proprietary commercial models encounter significant difficulties. As shown in Table 2, our model outperforms others, including GPT-4V [87], by a clear margin in this benchmark, showcasing its ability to handle mathematically demanding tasks. Multi-Turn Conversation. Compared to single-turn dialogues, multi-turn conversations align more with human preferences. In practical usage, multi-turn dialogue is the preferred mode for general-purpose assistants to engage with humans in solving a variety of tasks. Therefore, we opt to utilize ConvBench [65] for evaluating multi-turn conversations, which progressively assesses the perception, reasoning, and creativity capabilities of MLLMs. 
As depicted in Table 3, InternVL exhibits leading performance among open-source models, albeit still trailing behind GPT-4V by a considerable margin. Going forward, we will continue refining InternVL's capabilities in multi-turn conversations. 4.3. Ablation Study Larger LLMs need Larger VFMs. In this study, we investigate the interplay between LLMs and VFMs. The comparison involves two open-source MLLMs, LLaVA-NeXT [64] and InternVL 1.2, each equipped with LLMs of 34 billion parameters. Notably, although both models employ LLMs of the same scale, InternVL 1.2 incorporates a significantly larger VFM, with 6 billion parameters, compared to LLaVA-NeXT's 300 million parameters. Since the data for LLaVA-NeXT is not available, we created a similar dataset ourselves. Additionally, InternVL 1.2 was trained at a fixed resolution of 448×448, while LLaVA-NeXT used a higher dynamic resolution of 672×672. Therefore, this comparison is not entirely fair or equivalent. Nevertheless, the findings still reveal noteworthy insights. For example, after excluding five OCR-related datasets, ConvBench, and RealWorldQA, InternVL 1.2 outperformed LLaVA-NeXT in 9 out of the remaining 11 datasets. This performance difference supports our hypothesis that for a large-scale LLM (e.g., 34B), a larger VFM (e.g., 6B) can effectively improve the model's ability to handle complex multimodal tasks, thereby enhancing the overall performance. Dynamic Resolution Matters. As shown in Figure 6, we investigated the effectiveness of dynamic resolution across various multimodal benchmarks. We found that not all tasks require high resolution. Specifically, tasks related to OCR, such as DocVQA, InfoVQA, TextVQA, and OCRBench, benefit from increased resolution. However, tasks like AI2D, MMMU, MMBench, and HallusionBench exhibit a slight decline in performance at higher resolutions. Overall, InternVL 1.5 demonstrates strong robustness to dynamic resolution. It can adjust the resolution based on the specific requirements of each task, ensuring optimal performance where high resolution is beneficial and conserving resources where it is not. Figure 6. Comparison of InternVL 1.5 performance across different image resolutions. The X-axis represents the number of tiles, while the Y-axis indicates benchmark performance. The highest value and its corresponding number of tiles are highlighted. The scores of MME [26] and OCRBench [67] have been normalized to a maximum score of 100. We found that although only 1 to 12 tiles were used during training, it is possible to zero-shot scale up to 40 tiles (i.e., 4K resolution) during testing. Note that since MMMU [135] includes multiple images per sample, it may run out of memory when the number of tiles is large. Therefore, we only tested up to 18 tiles maximum, and MMMU was not included when calculating the average score. (Peak score and the tile count at which it is reached: DocVQA val 90.5 at 18 tiles; ChartQA test 83.8 at 12; InfoVQA val 72.3 at 24; TextVQA val 80.6 at 24; OCRBench 72.4 at 24; MME 73.1 at 36; RealWorldQA 66.1 at 40; AI2D test 80.7 at 6; MMMU val 45.2 at 6; MMBench-EN test 82.2 at 6; MMBench-CN test 82.0 at 6; CCBench dev 70.2 at 18; MMVet 62.8 at 24; SEED Image 76.1 at 36; HallusionBench 49.6 at 4; average without MMMU val 74.3 at 24.) 4.3.1 Qualitative Results on Different Scenes In previous sections, we evaluated our model across various benchmarks and observed its strong performance. In this section, we conduct a qualitative comparison of our model with GPT-4V [87] across diverse scenarios, including General QA, OCR-related QA, Scientific Understanding, Chinese Traditional Culture, Object Localization, and Multi-Image Dialogue. We aim to demonstrate the practicality and versatility of our model in real-world applications, offering insights from the perspective of actual user experience. User: What action is taking place in the picture? Which color jersey is the attacking player wearing? Explain in detail. GPT-4V: In the image, we see a basketball game in progress where a player in a black jersey is in the act of shooting the ball. The player is extending their arms upwards towards the basket, and it looks like they're attempting a jump shot. The player is wearing a black jersey with the word “SPURS” across the front, which indicates they are a member of the San Antonio Spurs team. There's a defensive player from the opposing team, wearing a red jersey with the number “9” and the name “VUCEVIC” on it, who is trying to contest the shot. InternVL 1.5: The image depicts a basketball game in progress. A player in a black jersey, who is likely on the offensive team, is in the act of shooting the basketball. He is airborne, with one arm extended upwards towards the basket, indicating that he is in the process of taking a shot. The player in the red jersey is on defense, with his arm raised in an attempt to block the shot. The background shows the interior of an arena with spectators and a scoreboard, suggesting that this is a professional or organized basketball match. The attacking player's jersey is black, and the defending player's jersey is red. User: Tell me a person who had made a similar pose to the cute duck. GPT-4V: I can't identify or make assumptions about a specific person who has made a similar pose to the duck in the image you've uploaded, but the stance the duck is taking, with what appears to be a martial arts belt and a weapon, is reminiscent of a martial artist demonstrating a pose or ready stance. This is a common pose in many martial arts, as practitioners often pose for photos demonstrating their skills or in the midst of performing kata (a sequence of movements in martial arts). If you're looking for something specific, you might find martial artists or actors who play martial arts roles striking similar poses. InternVL 1.5: The pose of the cute duck is reminiscent of a karate stance, which is often associated with martial artists. One famous person who made a similar pose is Bruce Lee, a martial artist and actor known for his contributions to martial arts and film. Figure 7. Examples for General QA. Blue highlights the standout answers, while red highlights incorrect responses. General QA. To compare the general capabilities of InternVL 1.5 and GPT-4V, we first conducted an experiment involving simple user queries with images requiring general knowledge. As shown on the left side of Figure 7, both models respond accurately to the query, showcasing their proficiency in general topics.
As shown on the right side of Figure 7, GPT-4V may excessively refuse to answer some questions due to its involvement in personal privacy. OCR-Related QA. We conducted an evaluation to compare the OCR capabilities of our InternVL 1.5 model against GPT-4V. On the left side of Figure 8, the first prompt aimed to measure the models\u2019 ability to understand Chinese scenes. In this instance, GPT-4V cannot extract all useful information in the image. On the right side of Figure 8, both GPT-4V and our model have good performance on chart understanding. Scientific Understanding. Evaluating the capabilities of models in scientific understanding reasoning tasks is essential for advancing computational intelligence, particularly in contexts requiring in-domain knowledge and logical reasoning. In our study, we compared the performance of our InternVL 1.5 model with GPT-4V by administering complex multi-disciplinary problems designed to assess the accuracy of their reasoning. In Figure 9, for the first question, both models accurately answered and provided an analysis from an aerodynamic perspective. For the second question, our model precisely analyzed the elements depicted in the image and provided the correct response, whereas GPT-4V speculated on the trend of amino acid transport. These results suggest that our method and GPT-4V exhibit comparable capabilities in scientific understanding and reasoning. Chinese Traditional Culture. We selected two typical multimodal examples related to traditional Chinese art to evaluate our model. As illustrated in Figure 10, both InternVL 1.5 and GPT-4V correctly recognize the Chinese traditional culture depicted in the image. Notably, InternVL 1.5 demonstrates a deeper understanding of this culture, as 9 User: Describe this image in detail. GPT-4V: The image shows a signboard with Chinese characters on it. From the text, we can deduce that the sign is likely for a business. The sign features a design that includes a martini glass with a flamingo, indicating that the establishment might serve drinks. There\u2019s additional text on the right side, providing specific details such as business hours. It appears to be a nighttime business, likely a bar or restaurant, operating from 16:30 to 22:30. The other text provides further information about the services or specialties of the place. Unfortunately, I can\u2019t provide a translation of the text, as my capabilities don\u2019t include translating text within images. InternVL 1.5: The image shows a signboard with Chinese text, indicating a business establishment. The signboard reads: \u201c\u6cb3\u5927\u897f\u95e8\u7b2c\u4e00\u5bb6\u201d which translates to \u201cThe first shop at the west gate of the river.\u201d \u201c\u6b63\u5b97\u8001\u5b57\u53f7\u4e94\u9999\u5473\u201d which means \u201cGenuine, old brand, five-spice flavor.\u201d \u201c\u66f9\u8bb0\u7116\u70e4\u9e21\u817f\u201d which translates to \u201cCao\u2019s roasted and baked chicken legs.\u201d \u201c\u8425\u4e1a\u65f6\u95f416:30-22:30\u201d indicating the business hours from 4:30 PM to 10:30 PM. \u201c7\u70b9\u4e4b\u540e\u5728\u95e8\u53e3\u8def\u706f\u4e0b\u51fa\u591c\u5e02\u5730\u644a\u201d which suggests that after 7 PM, the shop operates in the doorway under the streetlights as a night market stall. The signboard also includes a logo that appears to be a stylized rooster, which might be the brand or shop\u2019s emblem. 
The overall style of the signboard is quite traditional and straightforward, commonly seen in Chinese street markets or small businesses. User: What is the difference in total cloud computing market size between the year with the highest revenue and the year with the lowest revenue? GPT-4V: To find the difference in total cloud computing market size between the year with the highest revenue and the year with the lowest revenue, we need to: 1. Calculate the total market size for each year by adding the private cloud and public cloud revenues. 2. Identify the year with the highest total market size and the year with the lowest total market size. 3. Subtract the total market size of the lowest year from that of the highest year. Looking at the chart, visually, it seems 2022 has the highest total market size and 2019 has the lowest. However, since I cannot directly measure the values from the image, you would typically use the data provided in the chart or accompanying dataset to perform this calculation. Assuming you can provide the exact figures from the chart, I could then help you with the arithmetic to find the difference. InternVL 1.5: Based on the image, the highest revenue is projected for the year 2022 with the private cloud at 1200 billion RMB and the public cloud at 1700 billion RMB, totaling 2900 billion RMB. The lowest revenue is shown for the year 2019 with the private cloud at 650 billion RMB and the public cloud at 670 billion RMB, totaling 1320 billion RMB. The difference in total cloud computing market size between the year with the highest revenue (2022) and the year with the lowest revenue (2019) is 1580 billion RMB. Figure 8. Examples for OCR-related QA. Blue highlights the standout answers, while red highlights incorrect responses. evidenced by its more detailed descriptions of the cultural elements in its response. Object Localization. Evaluating machine learning models for their proficiency in object localization tasks is essential, especially in applications requiring precise spatial awareness. In our comparative analysis, the performance of the InternVL 1.5 model was juxtaposed with GPT-4V, focusing on their ability to accurately detect and localize objects within various environments. Our assessments ranged from simple object recognition in cluttered scenes to complex scenarios involving dynamic interactions among multiple entities. As illustrated in Figure 11, the results demonstrate that InternVL 1.5 not only localized objects with high accuracy but also exhibited a comparable understanding of spatial relationships, matching the performance of GPT-4V. Multi-Image Dialogue. As shown in Figure 12, in this experiment, we ask InternVL 1.5 and GPT-4V to compare the similarities and differences between the two images. As can be seen, both GPT-4V and InternVL 1.5 provide detailed and accurate responses. Through this experiment, we discovered that although InternVL 1.5 was trained solely on single-image inputs, it exhibits strong zero-shot capabilities for multi-image dialogues. 5. Conclusion This work introduced InternVL 1.5, an open-source MLLM designed to narrow the performance gap between opensource and proprietary models in multimodal understanding. By integrating a strong vision encoder with continuous learning capabilities, adopting a dynamic high-resolution strategy, and utilizing a high-quality bilingual dataset, InternVL 1.5 has demonstrated robust performance across a variety of benchmarks. 
Our evaluations indicate that the model achieves competitive performance with leading proprietary models, excelling particularly in OCR-related tasks and showing significant improvements in Chinese-related scene understanding. While InternVL 1.5 has contributed to the open-source multimodal understanding, the field continues to evolve with many challenges ahead. We aspire to further enhance InternVL\u2019s capabilities and invite collaboration with the global research community, hoping to enrich and expand the reach of open-source models together. 10 User: Which car in the picture is more aerodynamic? GPT-4V: Based on the image you\u2019ve provided, determining which car is more aerodynamic involves analyzing the shape and design of each vehicle. The car on the right appears to have a more streamlined shape, with a sloped front that would likely reduce air resistance better than the boxy design of the car on the left. Therefore, the car on the right would be considered more aerodynamic. InternVL 1.5: The car on the right appears to be more aerodynamic. This is inferred from the shape of the car, which has a more streamlined and elongated form. The design includes a pointed front end and a tapered rear, which are common features in aerodynamic design to reduce air resistance. The car on the left, with its boxy shape, does not exhibit the same level of aerodynamic efficiency. User: If the pH outside the cell decreases, would you expect the amount of amino acids transported into the cell to increase or decrease? GPT-4V: The image depicts a sodium-glucose symporter, which is a type of secondary active transport mechanism. This transport mechanism is specifically for glucose and does not directly provide information about amino acid transporters. However, speaking generally about amino acid transporters, if the pH outside the cell decreases, the effect on amino acid transport would depend on the type of amino acid transporter involved. Some amino acid transporters are proton-dependent, meaning that they use the proton gradient (which increases as pH decreases) to transport amino acids into the cell. For these transporters, a decrease in extracellular pH could potentially increase the transport of amino acids into the cell. To give a precise answer, one would need specific information about the type of amino acid transporter present in the cell membrane. Since this information is not provided in the image or the description, the impact of a decrease in extracellular pH on amino acid transport cannot be determined from the provided content alone. InternVL 1.5: Based on the image, if the pH outside the cell decreases, we would expect the amount of amino acids transported into the cell to increase. This is because the diagram shows a gradient of amino acids moving from an area of lower concentration (outside the cell) to an area of higher concentration (inside the cell), which is a typical representation of active transport. Active transport is a process that moves substances against their concentration gradient, often using energy, and it is typically more efficient at higher concentrations of the substance being transported. Therefore, a decrease in pH outside the cell would likely increase the concentration gradient, leading to an increase in the transport of amino acids into the cell. Figure 9. Examples for Scientific Understanding. Blue highlights the standout answers. User: \u56fe\u4e2d\u6240\u793a\u7684\u4f20\u7edf\u8868\u6f14\u5f62\u5f0f\u4e3a? 
Figure 10. Examples for Chinese Traditional Culture. Blue highlights the standout answers. Figure 11. Examples for Object Localization. Blue highlights the standout answers, while red highlights incorrect responses. Figure 12. Examples for Multi-Image Dialogue. Blue highlights the standout answers.",
"additional_info": [
{
"url": "http://arxiv.org/abs/2404.15846v1",
"title": "From Complex to Simple: Enhancing Multi-Constraint Complex Instruction Following Ability of Large Language Models",
"abstract": "It is imperative for Large language models (LLMs) to follow instructions with\nelaborate requirements (i.e. Complex Instructions Following). Yet, it remains\nunder-explored how to enhance the ability of LLMs to follow complex\ninstructions with multiple constraints. To bridge the gap, we initially study\nwhat training data is effective in enhancing complex constraints following\nabilities. We found that training LLMs with instructions containing multiple\nconstraints enhances their understanding of complex instructions, especially\nthose with lower complexity levels. The improvement can even generalize to\ncompositions of out-of-domain constraints. Additionally, we further propose\nmethods addressing how to obtain and utilize the effective training data.\nFinally, we conduct extensive experiments to prove the effectiveness of our\nmethods in terms of overall performance, training efficiency, and\ngeneralization abilities under four settings.",
"authors": "Qianyu He, Jie Zeng, Qianxi He, Jiaqing Liang, Yanghua Xiao",
"published": "2024-04-24",
"updated": "2024-04-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Large language models (LLMs) have become the backbone for real-world applications (Anil et al., 2023; Touvron et al., 2023; Achiam et al., 2023). Given natural language instructions, LLMs can solve unseen tasks with few or no examples (Brown et al., 2020). The capability of LLMs to accurately understand instructions and convey the desired out- put, known as Instruction Following (Lou et al., 2024), is crucial for the safety (Mu et al., 2023) and reliability (Zhou et al., 2023a) of LLMs. It is imperative for LLMs to follow instructions with elaborate requirements (Yin et al., 2023; Xu et al., 2023) (i.e. Complex Instructions), such as formatting specifications outlined in Fig. 1. On one * Equal contribution. \u2020 Corresponding author. GOALS: 1. write a weather report for SF today Requirements: 1. ~4000 word limit for short term memory. 2. Exclusively use the commands listed in double quotes e.g. \"command name\u201d Commands: 1. search: useful for when you need to answer questions about current events. You should ask targeted questions, args json schema: {\"query\": {\"title\": \"Query\", \"type\": \"string\"}} 2. write_file: \u2026 You should only respond in JSON format as described below Response Format: { \"thoughts\": { \"text\": \"thought\", \"reasoning\": \"reasoning\", \u2026 } \u2026 } Make a short introduction and list a few popular songs from the album: Back To Black. There should be exactly two paragraphs in your response, separated by the markdown divider: ***. Do not say the word \"popular\" in the response and answer in lowercase letters only. The response should end with the phrase \"really love their song!\". Instructions with Detailed Requirements Instructions with Multiple Constraints Numerical Constraints Format Constraints Format Constraints Format Constraints 1 2 4 5 6 Semantic Constraints 1 2 3 4 To study Complex Instructions Following ability Complex Instruction Model Outputs : \u2026 album by the iconic British singer-songwriter Amy Winehouse \u2026 vocalists of her generation.\\n Some standout tracks \u2026 love their song! 3 1 : \"back to black\" is \u2026 of the 21st century.***some standout tracks from \u2026 21st century.***some standout tracks from \u2026 love their song! 1 3 Semantic Constraints GPT3.5 GPT4 Figure 1: Real-world applications generally involve instructions with multiple constraints (i.e. Complex Instructions), posing challenges for models. hand, the ability to follow detailed instructions alle- viates the need for annotating samples, which can be costly and challenging for intricate tasks (Zeng et al., 2023a). On the other hand, complex instruc- tions hardly appear in the training data (Zhou et al., 2024). Hence, the ability to follow complex in- structions demonstrates models to have better gen- eralization ability to unseen tasks (Yin et al., 2023). Specifically, satisfying the multiple constraints in the instructions simultaneously (i.e. Constraints Following) poses a significant challenge in com- plex instruction following (Jiang et al., 2023; He et al., 2024). As shown in Fig. 1, whether models can satisfy the multiple constraints in the instruc- tions determines their ability to follow complex arXiv:2404.15846v1 [cs.CL] 24 Apr 2024 Make a short introduction and list a few popular songs from the album: Back To Black. There should be exactly two paragraphs in your response, separated by the markdown divider: ***. Do not say the word \"popular\" in the response and answer in lowercase letters only. 
The response should end with the phrase \"love their song!\". \u2026 Released in 2006, ... \\n\\nSome tracks include \u2026These songs leave an impact on listeners who love their song! Generation Back to Black is a \u2026 by Amy Winehouse, ... *** Here are some popular tracks \u2026 I love their song! back to black is a \u2026 by amy winehouse, ... *** here are some standout tracks \u2026 i love their song! back to black is a \u2026 by amy winehouse, ... *** here are some popular tracks \u2026 i love their song! Discrimination Student Model Why can I see the moon during the day? Do not say the word \u201cmoon\u201d Recommend 5 films to me. End with the phrase \u2018That\u2019s all.\u2019 How can I increase my productivity? Don\u2019t mention the word \u201ctalent\u201d and end with \u201cwork harder!\u201d Teacher Model Composition(C) A Complex Instruction Fix Fix 1 Output 2 3 4 3 2 A C A A C 3 1 \ud835\udc90\ud835\udc97 \ud835\udc90\ud835\udfcf \ud835\udc90\ud835\udfd0 SFT: Positive Samples \ud835\udc90\ud835\udfd0 \ud835\udcdb\ud835\udc7a\ud835\udc6d\ud835\udc7b Training Objective \ud835\udc90\ud835\udfd0 \ud835\udc90\ud835\udfcf Generate \ud835\udc90\ud835\udc97 \ud835\udc90\ud835\udfd0> \ud835\udc90\ud835\udc97 \ud835\udc90\ud835\udfd0> \ud835\udc90\ud835\udfcf \ud835\udcdb\ud835\udc6b\ud835\udc77\ud835\udc76 \ud835\udc90\ud835\udfd0\ud835\udcdb\ud835\udc7a\ud835\udc6d\ud835\udc7b Training Objective Train Fix 3 Fix 2 Train What training data is effective for Complex Instruction Following ? How to use the data? How to obtain the data (the effective training data)? Atom(A) 3 2 2 RLFT: Positive and Negative Samples Atom(A) Figure 2: The framework of our study. We first study what training data is effective in enhancing complex instruction following abilities via an empirical study. Then, we design a discrimination-based method to address how to obtain the data. Finally, we propose a method for effectively utilizing positive and negative samples obtained through the discrimination-based method. instructions. Hence, in our work, we explore com- plex instruction following by examining LLMs\u2019 ability to follow instructions with multiple con- straints (Yin et al., 2023; Lou et al., 2024). On one hand, human instructions are subjective and am- biguous, while constraints within these instructions facilitate the automatic evaluation of instruction following ability (Zhou et al., 2023a; Wang et al., 2024). On the other hand, the compositional na- ture of constraints enables the automatic creation of instructions with unseen compositions of con- straints (Zhou et al., 2023b; Yao et al., 2023). These instructions hardly appear in the training data, thus effectively assessing the model\u2019s ability to general- ize to unseen tasks (Aksu et al., 2023). Complex constraints following is a challenging task for LLMs (Jiang et al., 2023; He et al., 2024; Qin et al., 2024). As shown in Fig. 1, even ad- vanced LLMs struggle to meet the four specified constraints in complex instructions. However, it remains under-explored how to enhance LLMs to follow multi-constraint complex instructions. First, the existing works on constraints following mainly focus on evaluation without proposing methods for enhancement (Jiang et al., 2023; Chen et al., 2024; Xia et al., 2024). 
Additionally, even when the improvement methods are proposed, they mainly consider instructions with few constraints, thereby failing to showcase the complexity of human in- structions in practical applications (Chen et al., 2022; Zhang et al., 2023; Wang et al., 2024). More- over, although some studies construct complex in- structions with multiple constraints and fine-tune LLMs on them (Aksu et al., 2023; Sun et al., 2024), one key research question remains under-explored: What training data is effective in enhancing com- plex constraint-following abilities? This leads to two follow-up questions: (1) How to obtain the effective training data? and (2) How to utilize the data effectively? In this work, we systematically study how to enhance the ability of LLMs to follow complex in- structions, with the framework shown in Fig. 2. We initially explore the effective training data for this purpose through an empirical study. We found that training LLMs on instructions containing multi- ple constraints (compositional data) enhances their understanding of complex instructions more effec- tively than training on atomic constraints (atomic data). Moreover, the improvement in performance is related to the number of constraints, the model size (\u00a73), and can even generalize to the composi- tions of out-of-domain constraints found in \u00a75.3.1. To obtain high-quality compositional data, we generate initial output via a student model (vanilla model) and then correct via a teacher model (ad- vanced model), termed the Discrimination method. This approach yields higher-quality output than us- ing the teacher model to generate directly. To lever- age the positive and negative samples collected dur- ing the Discrimination method, we introduce a con- trastive method with reinforcement learning fine- tuning (RLFT) (Rafailov et al., 2023). Our method surpasses the SFT training paradigm on the instruc- tion following benchmark (Zhou et al., 2023a) with fewer training steps. It also demonstrates superior generalization across out-of-domain, in-domain, and adversarial settings while preserving overall capabilities. Overall, our contributions are mainly three-fold: (1) We systematically improve LLMs\u2019 instruction- following ability by exploring effective training data. (2) We design a discrimination-based method to obtain effective training data. We also propose a method for utilizing positive and negative samples obtained through this approach. (3) We conduct extensive experiments to prove the effectiveness and efficiency of our method. We also validate its generalization ability under four settings.",
"main_content": "2.1 Instruction Following There are various perspectives for assessing the ability of LLMs to follow instructions. A line of work perturbs the answer space to assess whether the model truly understands instructions or recites the answer (Zeng et al., 2023b; Li et al., 2023a; Wu et al., 2023). Another line of work exemplifies models\u2019 ability to follow instructions by incorporating verifiable constraints within them, such as lexical, numerical, format, and semantic constraints (Sun et al., 2023; Jiang et al., 2023). These constraints can be compositional, allowing one instruction to contain multiple constraints simultaneously (Aksu et al., 2023; Zhou et al., 2023b; Yao et al., 2023). Such complex instructions containing multiple user-specified constraints present greater challenges for LLMs to follow (He et al., 2024; Qin et al., 2024). Our work falls into this latter category. The existing works on constraints following solely either focus on evaluation (Chen et al., 2024; Xia et al., 2024) or only consider instructions with few constraints (Chen et al., 2022; Zhang et al., 2023; Chen and Wan, 2023; Wang et al., 2024). Different from existing works, we systematically investigate how to enhance complex instructions with multiple constraints. 2.2 Complex Instruction Tuning Complex Instructions can refer to instructions that involve more reasoning steps (Mukherjee et al., 2023), intricate input (Zhou et al., 2024), or multiple constraints (Luo et al., 2023a). Many studies have demonstrated that fine-tuning with complex instructions can boost performance in tasks such as instruction following (Xu et al., 2023), reasoning (Mitra et al., 2023), or code generation (Luo et al., 2023b). However, our work differs from these studies in two main aspects. First, we focus on improving LLMs\u2019 ability to follow complex instructions containing multiple constraints, which is crucial for the practicality and safety of LLMs (Zhou et al., 2023a; Mu et al., 2023). Furthermore, traditional supervised fine-tuning (SFT) uses only positive samples, whereas we use both positive and negative samples to enhance the complex instruction-following ability of LLMs effectively and efficiently. 3 Empirical Studies A common approach to improve LLMs\u2019 ability to follow complex instructions is to construct corresponding instances and fine-tune the LLMs on them (Aksu et al., 2023; Sun et al., 2024). Yet, one key research question remains under-explored: What training data is effective in enhancing complex constraint-following abilities? To enhance the LLM\u2019s capacity to follow complex instructions, two types of training data can be utilized: (1) Initially train models to understand atom constraints (atomic data), enabling them to resolve compositional constraints (compositional data) automatically. (2) Train models with compositional data, leading them to understand instructions with atomic or varying compositions of constraints spontaneously. Examples are shown in Fig. 2. To compare these training data types, we split the instructions in existing instructions following benchmarks (Zhou et al., 2023a; Jiang et al., 2023) into training and test sets. The training set contains atomic data (mostly with 1 constraint) and compositional data (mostly with over 3 constraints). Original benchmarks lack corresponding outputs, we first generate them via GPT-3.5-turbo. 
To improve the quality of the training set, we further filter the datasets to only keep outputs that satisfy all instruction constraints using GPT-3.5-turbo and rules for training. The remaining data forms the test set. Details on data construction and statistics are provided in the Appx. A.1. We compare three methods: (1) Backbone, the backbone model without further training. (2) Atom and (3) Composition, continue training the backbone model with atomic data and compositional data respectively. To prevent models from catastrophic forgetting (McCloskey and Cohen, 1989), we mix training data with ShareGPT data (Chiang et al., 2023) for Atom and Composition checkpoint. We leverage two backbone models (Zheng et al., 2024; Touvron et al., 2023) and adopt two accuracy metrics (Zhou et al., 2023a; Jiang et al., 2023):
acc_{ins} = \frac{1}{m} \sum_{i=1}^{m} \prod_{j=1}^{n} c_i^j, \quad acc_{con} = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} c_i^j,
where c_i^j equals 1 if the j-th constraint of the i-th instruction is satisfied, otherwise 0. Overall, achieving Instruction-level accuracy (acc_{ins}) is more challenging than Constraint-level accuracy (acc_{con}).
Table 1: The Instruction-level accuracy of backbone models without further training (Backbone), training with atomic data (Atom), and compositional data (Comp) on FollowBench. Level x indicates there are x constraints in the instructions. Avg. indicates the average performance across 5 levels. The results are evaluated by GPT-4 using the FollowBench prompt template. The bold and underlined represent the first and second rankings among the open-source LLMs, respectively.
Backbone | Methods | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Avg.
Vicuna-7B-V1.5 (Zheng et al., 2024) | Backbone | 39.07 | 44.71 | 37.28 | 30.93 | 19.06 | 34.21
Vicuna-7B-V1.5 (Zheng et al., 2024) | Atom | 39.17 | 39.50 | 42.07 | 30.23 | 16.97 | 33.59
Vicuna-7B-V1.5 (Zheng et al., 2024) | Comp | 39.44 | 55.90 | 47.49 | 22.27 | 16.65 | 36.35
LLaMA2-13B-Chat (Touvron et al., 2023) | Backbone | 33.10 | 41.71 | 42.26 | 23.89 | 22.07 | 32.61
LLaMA2-13B-Chat (Touvron et al., 2023) | Atom | 38.99 | 39.78 | 36.61 | 20.74 | 14.83 | 30.19
LLaMA2-13B-Chat (Touvron et al., 2023) | Comp | 37.02 | 44.66 | 42.55 | 21.62 | 22.36 | 33.64
Table 2: The performance of backbone models without further training (Backbone), training with atomic data (Atom), and compositional data (Comp) on IFEval. The I-level and C-level denote the Instruction-level and Constraint-level accuracy respectively.
Backbone | Methods | ChangeCase | Combination | Content | Format | Keywords | Language | Length | Punctuation | Startend | I-level | C-level
Vicuna-7B-V1.5 | Backbone | 27.87 | 15.91 | 74.07 | 44.09 | 48.57 | 80.00 | 30.69 | 10.71 | 40.00 | 26.89 | 37.47
Vicuna-7B-V1.5 | Atom | 29.50 | 31.82 | 48.14 | 63.44 | 36.19 | 25.00 | 31.68 | 16.07 | 40.00 | 27.17 | 37.29
Vicuna-7B-V1.5 | Comp | 37.70 | 50.00 | 40.74 | 55.91 | 36.19 | 25.00 | 32.67 | 14.29 | 50.00 | 28.85 | 38.76
LLaMA2-13B-Chat | Backbone | 42.62 | 11.36 | 81.48 | 55.91 | 45.71 | 15.00 | 32.67 | 00.00 | 25.00 | 25.77 | 36.38
LLaMA2-13B-Chat | Atom | 42.62 | 00.00 | 37.04 | 54.84 | 42.86 | 35.00 | 34.65 | 12.50 | 37.50 | 26.33 | 35.83
LLaMA2-13B-Chat | Comp | 40.98 | 02.27 | 66.67 | 54.84 | 38.10 | 50.00 | 36.63 | 16.07 | 40.00 | 26.05 | 37.84
The performance of the three methods on the test sets is shown in Tab. 1 and Tab. 2. First, with regard to the overall performance, training with compositional data generally surpasses both the backbone model and atomic data training. This demonstrates that training with compositional data can generally enhance models' ability to follow complex instructions. Surprisingly, according to Tab. 1, training with atomic data (mostly with 1 constraint) can generally decrease performance compared to the backbone model for instructions with more than 1 constraint. 
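To make the two metrics defined above concrete, the following is a minimal illustrative sketch (not the authors' evaluation code) of how Instruction-level and Constraint-level accuracy can be computed from a boolean satisfaction matrix; the function and variable names are ours:

    import numpy as np

    def accuracy_metrics(satisfied):
        # satisfied: an m x n array where satisfied[i, j] is 1/True if the j-th
        # constraint of the i-th instruction is met by the model output, else 0/False.
        # Assumes every instruction carries the same number of constraints n.
        satisfied = np.asarray(satisfied, dtype=bool)
        acc_ins = satisfied.all(axis=1).mean()  # instruction-level: all constraints satisfied
        acc_con = satisfied.mean()              # constraint-level: fraction of constraints satisfied
        return acc_ins, acc_con

For example, accuracy_metrics([[1, 1, 1], [1, 0, 1]]) returns (0.5, 0.833...), matching acc_{ins} = (1/m) sum_i prod_j c_i^j and acc_{con} = (1/(mn)) sum_i sum_j c_i^j.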
Also, training with compositional data (usually 3 to 5 constraints) boosts performance on instructions with 1 to 3 constraints significantly but shows less enhancement or even a decline for those with 4 to 5 constraints. This suggests that training with compositional data (instructions with multiple constraints) can better generalize to lower-level complex instructions (instructions with fewer constraints). Moreover, this effect is more pronounced in smaller LLMs (7B), likely due to their weaker generalization ability (Magister et al., 2022; Fu et al., 2023). Later in \u00a75.3.1, we found that training with compositional data can even generalize to the compositions of out-of-domain constraints. We have found that training with compositional data can better enhance LLM's ability to follow complex instructions compared with atomic data. A follow-up research question is how to obtain high-quality compositional data. Existing datasets either only provide compositional instructions without output (Zhou et al., 2023a; Jiang et al., 2023) or directly generate responses using advanced LLMs and refine them manually (Sun et al., 2024). We compare the outputs generated by three methods: (1) Vanilla: output generated directly using the backbone model. (2) Generation: output generated directly using GPT-3.5-turbo. (3) Discrimination: first, we identify the constraints that Vanilla outputs failed to adhere to using test scripts (Zhou et al., 2023a); then, we rectify the Vanilla outputs constraint by constraint using GPT-3.5-turbo (the framework is shown in Fig. 2; please refer to \u00a74.2 for details). With regard to the complex instructions, the instructions in IFEval (Zhou et al., 2023a) originally had only 1 to 3 constraints, which were not complex enough. We construct 1500 complex instructions, each with 3 to 5 constraints from IFEval that are objective and can be automatically verified (please refer to \u00a74.1 for details). We leverage LLaMA2-13B-Chat (Touvron et al., 2023) as the backbone and evaluate the performance of the three methods using the test script from Zhou et al. (2023a). As shown in Tab. 3, outputs generated directly by advanced LLMs (Generation) are of higher quality than those from weaker LLMs (Vanilla). However, the outputs from weaker LLMs that are then refined by advanced LLMs (Discrimination) significantly outperform the outputs generated by advanced LLMs directly (Generation). We believe this is because slight changes in the instruction (i.e. constraint) can cause substantial output differences, which the discrimination-based method captures better than the generation-based method.
Table 3: The performance of different methods on IFEval.
Methods | ChangeCase | Combination | Content | Format | Keywords | Language | Length | Punctuation | Startend | I-level | C-level
Vanilla | 21.19 | 08.89 | 77.26 | 56.67 | 61.60 | 10.60 | 30.85 | 00.26 | 16.84 | 06.40 | 41.33
Generation | 56.50 | 30.37 | 68.95 | 74.96 | 72.29 | 33.01 | 52.91 | 36.76 | 79.51 | 21.53 | 62.68
Discrimination | 66.56 | 25.00 | 68.11 | 68.27 | 77.32 | 81.95 | 52.27 | 70.90 | 85.60 | 35.04 | 68.30
4 Method According to \u00a73, we propose a discrimination-based method to obtain effective training data. A subsequent question is how to effectively utilize the data obtained through the discrimination-based method. Hence, we introduce a reinforcement learning fine-tuning (RLFT) based method that leverages both positive and negative samples to improve complex instruction following. The framework is shown in Fig. 2. 
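Both the empirical comparison above and the correction step in \u00a74.2 rely on test scripts (Zhou et al., 2023a) that check each constraint automatically. The following is a minimal illustrative sketch of such rule-based checks, written here only for a hypothetical instruction with typical IFEval-style constraints (exactly two paragraphs separated by ***, a forbidden word, lowercase-only text, and a required closing phrase); the real test scripts cover many more constraint types:

    def check_example_constraints(response):
        # Illustrative checks for one multi-constraint instruction of the kind shown in Fig. 1.
        # Each entry maps a constraint to whether the response satisfies it.
        parts = response.split("***")
        return {
            "exactly_two_paragraphs_divided_by_markdown_divider": len(parts) == 2,
            "does_not_contain_forbidden_word": "popular" not in response.lower(),
            "answer_in_lowercase_only": response == response.lower(),
            "ends_with_required_phrase": response.rstrip().endswith("love their song!"),
        }

An output counts as following the whole instruction only if every entry is True, which is exactly the instruction-level criterion used in \u00a73.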
4.1 Complex Instruction Synthesis According to \u00a73, the effective training data is complex instructions with multiple constraints (compositional data). To obtain compositional data, we first collect seed instructions from three widely used instruction-tuning datasets. Then, we rewrite the instructions to incorporate multiple constraints. To ensure the coverage and diversity of the seed instructions, we consider three sources: (1) Open Assistant (K\u00f6pf et al., 2024): human-written instructions when interacting with chatbots. We only consider rank 0 instructions (annotated by humans as the highest quality) and the first turn of the conversation (Li et al., 2023b). (2) Self-Instruct (Wang et al., 2022a): 175 manually written instructions covering diverse topics to facilitate instruction generation for new tasks. (3) Super-Natural (Wang et al., 2022b): A collection of natural language processing (NLP) tasks formatted with human instructions. We first exclude tasks with finite output sets using rules (e.g., classification, tagging), since the outputs are too simple for the corresponding instructions to incorporate constraints. This leaves us with 318 remaining tasks. Next, we randomly select one instruction for each task. From these three sources, we finally gather 1500 seed instructions. Subsequently, we integrate constraints into these seed instructions. Initially, we randomly sample 3 to 5 constraints and utilize the provided scripts to resolve conflicts among the constraints provided by Zhou et al. (2023a). Next, given that, semantically equivalent but textually distinct instructions can substantially affect model outcomes (Yan et al., 2024; Chen et al., 2024), we employ eight diverse expressions to describe each type of constraint. Specifically, we manually select three common descriptions from the test set as seed descriptions, generate five similar descriptions using GPT-3.5turbo, and refine them manually. For each sampled constraint ci, we randomly select one description di from the description pool and append it to the instructions, formulated as: Ic = LLM(Is \u2295di \u2295... \u2295dn), where Is, Ic and di denote the seed instruction, its corresponding synthesized complex instruction, and appended constraint using a specific description, respectively. The number of constraints n ranges from 3 to 5. 4.2 Teacher Correction As introduced in \u00a73, we propose a discriminationbased approach for obtaining the output, shown to be more effective than directly generating output with advanced LLMs. The details of this approach are as follows. Initially, we utilize LLaMA2-13B-Chat (Touvron et al., 2023) (student model) to generate results for our synthesized complex instructions. Then, we utilize the test scripts from Zhou et al. (2023a) to identify the constraints the model failed to follow since the constraints are objective and automatically verifiable. Finally, we adopt advanced LLMs (teacher model) GPT-3.5-turbo to correct the failed constraints one by one. Specifically, each complex instruction Ic contains multiple constraints. In \u00a74.2, we utilize the test script to pinpoint the f constraints C = {c1, c2, ..., cf} that the student model\u2019s vanilla output ov fails to follow. The teacher model sequentially corrects these failed constraints, yielding an output set O = {ov, o1, o2, ..., of}: o1 = LLM(ov, c1), . . . , of = LLM(of\u22121, cf), where GPT-3.5-turbo is employed as the teacher model with prompts sourced from Tab. 9. 
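The correction chain o_1 = LLM(o_v, c_1), ..., o_f = LLM(o_{f-1}, c_f) above can be read as a simple loop. Below is a minimal illustrative sketch of the discrimination-based collection step, with hypothetical callables standing in for the student model, the teacher model (GPT-3.5-turbo in the paper), and the constraint test scripts; it is a sketch of the idea, not the authors' implementation:

    def collect_outputs(instruction, constraints, student_generate, teacher_fix, satisfies):
        # student_generate(instruction)        -> vanilla output o_v from the student model
        # satisfies(output, constraint)        -> True if the output meets the constraint (test script)
        # teacher_fix(instruction, output, c)  -> teacher rewrite so that the single failed constraint c is met
        outputs = [student_generate(instruction)]                       # o_v
        failed = [c for c in constraints if not satisfies(outputs[-1], c)]
        for c in failed:                                                # correct failed constraints one by one
            outputs.append(teacher_fix(instruction, outputs[-1], c))    # o_1, ..., o_f
        # outputs[-1] is the positive sample o_f; all earlier outputs can serve as negative samples.
        return outputs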
4.3 Contrastive Method In \u00a74.2, for each instruction I_c, we can gather a positive sample set {o_f} and a negative sample set {o_v, o_1, ..., o_{f-1}}. Supervised fine-tuning (SFT) solely utilizes positive samples successfully meeting the constraints specified in complex instructions (Radford et al., 2019; Howard and Ruder, 2018). However, the negative samples from \u00a74.2, failing to meet certain constraints, also offer valuable supervision signals. Hence, we leverage the positive and negative samples through reinforcement learning fine-tuning (Rafailov et al., 2023). Specifically, given the output set O = {o_v, o_1, o_2, ..., o_f} for each complex instruction I_c, we can form a training dataset D comprising f contrastive triplets: D = \{(I_c, o_i, o_f)\}_{i=1}^{f} = \{(I_c, o_v, o_f), (I_c, o_1, o_f), ..., (I_c, o_{f-1}, o_f)\}. In each training triplet, the final corrected output o_f (positive sample) is preferred over o_i (negative sample), as o_f follows more constraints specified in the complex instruction I_c. Following this, Direct Preference Optimization (DPO) (Rafailov et al., 2023) can be applied to model the preference information. The loss function is a maximum likelihood objective for the language model parameters \pi_\theta: L_{DPO}(\pi_\theta; \pi_{ref}) = -\mathbb{E}_{(I_c, o_f, o_i) \sim D}[\log \sigma(\beta \log \frac{\pi_\theta(o_f \mid I_c)}{\pi_{ref}(o_f \mid I_c)} - \beta \log \frac{\pi_\theta(o_i \mid I_c)}{\pi_{ref}(o_i \mid I_c)})], where the reference model parameter \pi_{ref} is set to \pi_\theta initially and remains fixed throughout training, \beta is a hyperparameter, and \sigma is the sigmoid function. The goal of L_{DPO} is to maximize the log probability of the preferred output o_f relative to the dispreferred output o_i. However, solely relying on L_{DPO} may lead to low probabilities for both chosen and rejected outputs, yet with a significant disparity between them. Therefore, we additionally integrate the SFT loss L_{SFT} to constrain \pi_\theta from deviating from the preferred data distribution (Xu et al., 2024; Hejna et al., 2023): L_{SFT}(\pi_\theta) = -\mathbb{E}_{(I_c, o_f) \sim D}[\log \pi_\theta(o_f \mid I_c)]. Finally, our training procedure is to optimize L_{DPO} and L_{SFT} jointly: L_{Ours} = L_{DPO} + L_{SFT}. 5 Experiments We conduct experiments to verify the effectiveness of our method, focusing on overall performance, training efficiency, and generalization ability. 5.1 Experiment Setup Models. Our baselines comprise popular open-source and closed-source LLMs. With regard to our framework, utilizing the synthesized complex instructions (\u00a74.1), we compare three methods: (1) Ours-13B-Generation directly generates output with GPT-3.5-turbo and trains the backbone model via supervised fine-tuning (SFT). (2) Ours-13B-Discrimination generates output via the backbone model, then refines it with GPT-3.5-turbo (\u00a74.2), and trains the backbone model via SFT. (3) Ours-13B-Contrastive utilizes DPO for training to model positive and negative samples (\u00a74.3). The backbone model for all three methods is LLaMA2-13B-Chat, with the instructions of the training data being the same; only the outputs of the training data and the training paradigms differ. Specifically, continuous training may cause catastrophic forgetting (McCloskey and Cohen, 1989). To address this, we utilize the replay strategy (Ke and Liu, 2022), mixing the training data with 10,000 ShareGPT samples (Chiang et al., 2023) to maintain the general abilities of models during training. Evaluation. We evaluate all models on IFEval (Zhou et al., 2023a), a widely-used instruction-following benchmark. 
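As one possible way to implement the joint objective from \u00a74.3 (L_{Ours} = L_{DPO} + L_{SFT}), the following is a minimal PyTorch-style sketch (ours, not the authors' training code), assuming the summed log-probabilities of each output sequence under the trained policy and the frozen reference model have already been computed; the function and argument names are ours:

    import torch
    import torch.nn.functional as F

    def dpo_plus_sft_loss(logp_pos, logp_neg, ref_logp_pos, ref_logp_neg, beta=0.1):
        # logp_pos / logp_neg        : log pi_theta(o_f | I_c) and log pi_theta(o_i | I_c)
        # ref_logp_pos / ref_logp_neg: the same quantities under the frozen reference model pi_ref
        margin = beta * ((logp_pos - ref_logp_pos) - (logp_neg - ref_logp_neg))
        loss_dpo = -F.logsigmoid(margin)   # DPO term: prefer o_f over o_i
        loss_sft = -logp_pos               # SFT term: stay close to the preferred output o_f
        return (loss_dpo + loss_sft).mean()

Each element of the batch corresponds to one triplet (I_c, o_i, o_f) from D; beta is the DPO temperature hyperparameter.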
The test set consists of 541 samples, each containing 1 to 3 constraints. All the constraints are objective and can be automatically verified, such as length constraints and detectable formats. The metrics are the same as \u00a73. 5.2 Results Overall Performance. The performance on IFEval is presented in Tab. 4. First, using the same backbone model, Ours-13B-Generation performs worse than many popular open-source models (Vicuna, WizardLM), even when the constraints in the test set have been seen in the instructions. This Models BaseModel ChangeCase Combination Content Format Keywords Language Length Punctuation Startend I-level C-level LLaMA2-13B-Chat (Touvron et al., 2023) LLaMA2 37.08 07.69 83.02 60.51 57.06 25.81 37.76 00.00 29.85 29.94 42.21 LLaMA2-70B-Chat (Touvron et al., 2023) LLaMA2 42.70 24.62 79.25 63.69 68.71 16.13 39.86 12.12 62.69 38.45 50.36 Qwen-14B-Chat (Bai et al., 2023) Qwen 57.30 23.08 75.47 57.96 58.28 83.87 33.57 21.21 68.66 37.89 51.08 Vicuna-13B-V1.5 (Zheng et al., 2024) LLaMA2 56.18 32.31 75.47 62.42 57.06 93.55 42.66 16.67 64.18 42.33 53.48 WizardLM-13B-V1.2 (Xu et al., 2023) LLaMA2 49.44 16.92 75.47 67.52 66.26 83.87 46.85 15.15 64.18 43.07 54.56 OpenChat-13B-V3.2 (Wang et al., 2023) LLaMA2 49.44 26.15 88.68 68.15 66.26 87.10 47.55 19.70 71.64 46.03 57.43 Ours-13B-Generation LLaMA2 64.04 20.00 66.04 70.06 53.99 35.48 44.06 21.21 74.63 41.22 52.88 Ours-13B-Discrimination LLaMA2 60.67 06.15 79.25 64.97 60.12 96.77 43.36 51.52 79.10 46.21 57.43 Ours-13B-Contrastive LLaMA2 65.17 10.77 84.91 66.88 60.74 93.55 47.55 43.94 86.57 48.24 59.71 PaLM2-S* (Anil et al., 2023) PaLM N/A N/A N/A N/A N/A N/A N/A N/A N/A 43.07 55.76 GPT3.5-turbo GPT 58.43 70.77 88.68 88.54 71.17 98.35 53.85 18.18 76.12 58.96 68.47 GPT4* (Achiam et al., 2023) GPT N/A N/A N/A N/A N/A N/A N/A N/A N/A 76.89 83.57 Table 4: The overall performance of models on IFEval (each with 1 to 3 constraints). The asterisk (*) indicates that the results are directly sourced from IFEval. N/A denotes that IFEval does not provide the results for specific constraints. Models ChangeCase Combination Content Format Keywords Language Length Punctuation Startend I-level C-level LLaMA2-13B-Chat 17.86 00.00 68.42 58.54 61.43 27.27 34.43 00.00 27.03 09.50 42.27 WizardLM-13B-V1.2 16.67 13.64 56.58 53.66 64.29 100.00 40.98 17.39 48.65 14.00 47.20 OpenChat-13B-V3.2 25.00 00.00 76.32 56.71 61.43 86.36 35.25 15.22 55.41 16.50 49.07 Ours-13B-Discrimination 48.81 00.00 67.11 50.61 58.57 90.91 36.89 60.87 67.57 15.00 53.33 Ours-13B-Contrastive 35.71 04.55 63.16 50.61 65.00 86.36 47.54 63.04 79.73 19.00 55.73 Table 5: The performance of models on instructions within the same constraint category (each with 3 to 5 constraints) but with varying phrasing and detailed requirements, assessing our methods\u2019 in-domain generalization ability. Models ChangeCase Combination Content Format Keywords Language Length Punctuation Startend I-level C-level LLaMA2-13B-Chat 25.71 08.70 67.44 47.41 60.71 28.00 26.92 02.38 21.90 01.00 40.15 WizardLM-13B-V1.2 28.57 00.00 54.26 50.00 66.67 72.00 34.62 15.48 52.38 07.00 46.60 OpenChat-13B-V3.2 31.43 04.35 62.79 56.03 60.71 72.00 31.73 23.81 49.52 07.30 47.64 Ours-13B-Discrimination 51.43 04.35 57.36 35.34 65.48 48.00 31.25 59.52 69.52 05.00 49.53 Ours-13B-Contrastive 40.95 08.70 50.39 45.69 72.22 64.00 37.50 55.95 74.29 07.50 53.05 Table 6: The performance of models on more challenging complex instructions with 6 to 7 constraints. 
The adversarial setting stress tests the generalization ability of LLMs in following complex instructions. Figure 3: The performance of training efficiency (left) and out-of-domain generalization (right). D and C denote Ours-13B-Discrimination and Ours-13B-Contrastive respectively. This highlights the difficulty in obtaining high-quality output for complex instructions. Next, Ours-13B-Discrimination achieves significant performance improvement, indicating that discrimination surpasses the generative paradigm in achieving high-quality output. Moreover, Ours-13B-Contrastive performs the best, proving that our method excels in capturing subtle variations in complex instructions for the output. Training Efficiency. We compare the training efficiency of Ours-13B-Discrimination and Ours-13B-Contrastive. Both use the same training data but employ different training methods: the former uses the next-token-prediction generation approach, while the latter uses our contrastive objective. As shown in Fig. 3 (left), Ours-13B-Contrastive achieves better performance with the same training steps and ultimately outperforms Ours-13B-Discrimination. This proves that our method utilizing both positive and negative samples can enhance complex instruction following ability more effectively and efficiently. 5.3 Generalization Experiments We investigate the generalizability of our framework from four perspectives. 5.3.1 Out-of-Domain Generalization We investigate whether the ability to follow complex instructions extends to unseen constraints. To achieve this, we evaluate models on another instruction-following benchmark, FollowBench (Jiang et al., 2023), which has the following features to outline: (1) It contains almost entirely different constraints from IFEval, such as style, scenario, and example constraints. (2) It includes complex instructions of five difficulty levels. The difficulty level is denoted by incrementally increasing the same type of constraint to a seed instruction at each level. (3) Specifically, to mirror real-world scenarios, it introduces a Mixed Category. Instructions within this category encompass multiple constraints, akin to the compositional data in our study, while incorporating different constraints. As shown in Fig. 3 (right), first, the performance of our methods generally drops compared to the backbone model when tested on individual, unseen constraints. This suggests that models trained with certain constraints can hardly generalize to unseen constraints directly. However, surprisingly, our methods show a remarkable 12.92% improvement in performance in the Mixed Category. This proves that tuning with compositional data enhances the models' capacity to follow instructions covering multiple constraints, even if these constraints differ greatly from those in the training set. 5.3.2 In-Domain Generalization We construct a new test set to evaluate our methods' in-domain generalization, focusing on the same constraint but with varied wording and specific requirements. First, we select 200 instructions from the Open Assistant dataset (introduced in \u00a74.1) not in our training set. 
Next, we randomly choose 3 to 5 constraints from IFEval, pair them with descriptions from our description pool (\u00a74.1), and utilize GPT-3.5-turbo to paraphrase them, ensuring distinct descriptions from the training data. Additionally, we manually adjust specific requirements in the instructions, changing symbols (e.g., \"separated by 6 asterisk symbols ******\" to \"separate the responses with 6 hash signs: ######\") and formats (e.g., \"wrap the entire output in JSON format\" to \"I want the entire output in XML format\"). As shown in Tab. 5, Ours-13B-Contrastive remains the top performer. Additionally, the performance gap between Ours-13B-Contrastive and the best open-source model (OpenChat-13B-V3.2) has increased from 2.28 to 6.66. These results highlight the robustness of our method in handling complex instructions across different phrasing and detailed requirements within the same constraint category. Models ARC (25-shot) HellaSwag (10-shot) MMLU (5-shot) TruthfulQA (0-shot) Avg. LLaMA2-13B-Chat 59.04 81.94 54.64 44.12 59.94 WizardLM-13B-V1.2 59.04 82.21 54.64 47.27 60.79 OpenChat-13B-V3.2 59.64 82.68 56.68 44.49 60.87 Ours-13B-Discrimination 56.74 78.39 53.01 48.17 59.08 Ours-13B-Contrastive 57.76 79.95 53.79 48.15 59.91 Table 7: The performance of models on general tasks. 5.3.3 Adversarial Setting We compare models\u2019 performance on more challenging complex instructions with increased constraints. This adversarial setting stress tests the generalization capacity of LLMs in following complex instructions. Specifically, we utilize the same 200 seed instructions from \u00a75.3.2 and the method introduced in \u00a74.1 to append 6 to 7 constraints to the seed instructions. These new instructions are challenging since our training data contains 3 to 5 constraints. As shown in Tab. 6, Ours-13B-Contrastive outperforms all other models and significantly performs better than Ours-13B-Discrimination. This demonstrates our method utilizing positive and negative samples generalizes better to complex instructions than SFT only utilizing positive samples. 5.3.4 General Ability We test whether training with our synthesized complex instructions compromises LLMs\u2019 general ability. To achieve this, we evaluate models on four widely adopted benchmarks, reflecting the models\u2019 knowledge capability (MMLU (Hendrycks et al., 2020), TruthfulQA (Lin et al., 2021), ARC (Clark et al., 2018)), complex reasoning (HellaSwag (Zellers et al., 2019)). As shown in Tab. 7, our methods perform on par with other open-source LLMs, validating that our methods enhance the complex instructions following ability while maintaining the models\u2019 general ability. 6 Conclusion In this paper, we systematically study how to enhance the ability of LLMs to follow complex instructions. Initially, we study effective training data and methods for obtaining high-quality data through two empirical studies. Based on our findings, we introduce a method utilizing positive and negative samples to enhance LLMs\u2019 complex instruction-following capability. Our experiments show that our method more effectively and efficiently captures subtle instruction differences leading to significant output changes compared to the traditional supervised fine-tuning (SFT). Additionally, we evaluate the generalization capabilities of our framework through extensive experiments. 7 Limitations We analyze the limitations of our work as follows. 
First, we investigate complex instruction-following by testing LLMs\u2019 ability to adhere to instructions with multiple constraints. Even if the model meets all the constraints simultaneously, it may not fully follow complex instructions due to reasoning or knowledge limitations. However, we see complex constraint-following as a significant challenge worth studying. In constructing the training data, we primarily use hard constraints from IFEval, although real-world scenarios often include soft constraints like semantic constraints. We focus on hard constraints because they can be objectively and automatically evaluated, and we believe experiments based on them can yield valuable insights into complex instruction-following."
},
{
"url": "http://arxiv.org/abs/2404.12558v1",
"title": "Just Like Me: The Role of Opinions and Personal Experiences in The Perception of Explanations in Subjective Decision-Making",
"abstract": "As large language models (LLMs) advance to produce human-like arguments in\nsome contexts, the number of settings applicable for human-AI collaboration\nbroadens. Specifically, we focus on subjective decision-making, where a\ndecision is contextual, open to interpretation, and based on one's beliefs and\nvalues. In such cases, having multiple arguments and perspectives might be\nparticularly useful for the decision-maker. Using subtle sexism online as an\nunderstudied application of subjective decision-making, we suggest that LLM\noutput could effectively provide diverse argumentation to enrich subjective\nhuman decision-making. To evaluate the applicability of this case, we conducted\nan interview study (N=20) where participants evaluated the perceived\nauthorship, relevance, convincingness, and trustworthiness of human and\nAI-generated explanation-text, generated in response to instances of subtle\nsexism from the internet. In this workshop paper, we focus on one troubling\ntrend in our results related to opinions and experiences displayed in LLM\nargumentation. We found that participants rated explanations that contained\nthese characteristics as more convincing and trustworthy, particularly so when\nthose opinions and experiences aligned with their own opinions and experiences.\nWe describe our findings, discuss the troubling role that confirmation bias\nplays, and bring attention to the ethical challenges surrounding the AI\ngeneration of human-like experiences.",
"authors": "Sharon Ferguson, Paula Akemi Aoyagui, Young-Ho Kim, Anastasia Kuzminykh",
"published": "2024-04-19",
"updated": "2024-04-19",
"primary_cat": "cs.HC",
"cats": [
"cs.HC"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Human-AI collaborative decision-making aims for a complemen- tary performance [3, 4, 10, 15, 29], where human and AI partners together achieve a better outcome than they would individually. In such ambiguous and open-to-interpretation scenarios where there is no ground truth, Ferguson et al. [11] have explored the use of Large Language Models (LLMs) to support human-decision makers by surfacing various viewpoints [24]. One such example of these ambiguous scenarios is in the domain of hate speech detection, par- ticularly sexism, where subtle hate speech is more challenging to identify and remove automatically [20]. Benokraitis [13] describes subtle sexism as a less visible form of discrimination that is based on gender and is oftentimes undetected, accepted as normal, or even considered to be benevolent. In fact, research has shown assessment of sexism can be highly subjective, depending on an individual\u2019s personal values, gender ideologies and, thus, is open to interpre- tation [25]. Specifically in a social media context, for example, the same tweet can be considered sexist by one person, but not sexist by another [14]. This ambiguity can be risky: while there is a risk of harm if hateful posts are spread through the internet [37], studies have also proven that hate speech filtering algorithms can unin- tentionally harm LGBTQ communities when mistakenly flagging words that would be considered offensive in other contexts [27, 30]. They also significantly over-restrict African American English [9]. Recent research suggests that human-AI collaboration, towards any goal, requires the AI not just to make a recommendation (i.e., to remove or not remove a potentially sexist post from social media) but also to be able to explain the reasoning behind it in a way that is relevant, convincing and trustworthy for the user [33, 38]. Further, when we specifically consider these subjective cases, Alm [1] argue that we need to move beyond traditional metrics, and evaluate user satisfaction, which can be measured in numerous ways. Firstly, relevance is often used as an evaluation criterion for explanations [16, 22, 33], largely based on how useful they are to the arXiv:2404.12558v1 [cs.HC] 19 Apr 2024 TREW @ CHI \u201924, May 11\u201316, 2024, Honolulu, HI, USA Ferguson et al. human-decision maker and how closely they represent the scenario or decision in question. Secondly, explanations must be convincing to enable collaborative decisions, especially in ambiguous settings [33] and previous work [28] has shown LLM-produced arguments can be as persuasive as those human-authored ones. Then thirdly, trustworthiness is pointed to as a key criterion for acceptance of a recommendation in decision-making tasks, with plenty of work exploring how Explainable AI (XAI) can be leveraged to build [12, 18, 23, 38], and calibrate trust [7, 39, 41, 42]. While truthfulness is undoubtedly also an important metric in objective decision-making, it is a challenging evaluation metric for subjective cases where there is no one ground truth. In summary, in many subjective decision-making scenarios, one\u2019s personal values and lived experiences can heavily influence one\u2019s assessment of a scenario. Given the importance of values, beliefs, and experiences, it would follow that the presence of beliefs or experiences in AI-generated input influences how a human per- ceives the input. 
While we know that current LLMs can generate uniquely human-like attributes, and humans find this troubling [35], little is known about how users perceive personal opinion and experience representations in explanations when considered in subjective decision-making. In the context of politics, one study found that sharing personal experiences about harm was more con- vincing than sharing facts [21], as everyone could agree that harm should be avoided. Other work has shown that LLMs can generate arguments [28], and that they commonly reiterate certain opinions [32]. However, we don\u2019t yet know how these findings expand to other contexts, and how humans evaluate them. Thus, we address the following research question: 1) Are opinions and experiences perceived by humans in AI-generated explanations? And, 2) How are these opinions and experiences perceived by users? To answer these research questions, we conducted an empirical study with 20 participants to explore how they evaluate human and AI-generated text explanations in the context of subtle sexism. We ask participants to imagine that they are in a decision-making context and have to evaluate whether the provided scenario is sexist or not using the explanation provided. We ask them to assess who authored the explanation, and how relevant, convincing or trustwor- thy it is. We found that personal opinions and personal experiences were identified in both human- and AI-authored explanations, and participants described these as trustworthy. Further, an alignment between the opinions and experiences shown in the explanation and the participant\u2019s own beliefs exaggerated this effect, suggest- ing harmful cognitive biases at play. We hope to bring this scoped finding to the CHI community to start a discussion on how the human-like ability to generate personal beliefs and experiences in- fluences perceptions of trust, and how we can consider this finding in the design of collaborative systems.",
"main_content": "We conducted a set of semi-structured interviews with 20 participants to gauge their perception of human and AI-generated explanations. This study was approved by the university\u2019s research ethics board, and all participants provided informed consent. Participants were shown eight scenarios with accompanying explanations of subtle sexism, and asked to imagine themselves as part of collaborative decision-making on whether the scenario constitutes a case of subtle sexism. Our research methodology can be seen in Figure 1. The scenarios and human explanations, were collected from online discussion sites such as Reddit1, The Everyday Sexism Project2, and Twitter3. We selected scenarios (descriptions of events) that were paired with an interpretation or explanation of the scenario and why it is or is not sexist. The complete dataset contained 117 scenarios and accompanying explanations, which are representative of the \u201ceveryday explanations\u201d that humans use when discussing sexism. To collect the AI-generated explanation text for these scenarios, we used GPT-3 [5], which was the state-of-the-art large language model at the time. We prompted the model using the question-answer feature, asking \u201cIs this scenario sexist: {{scenario}}. Why or why not?\u201d To ensure that a coherent explanation-text was generated for each scenario, we prompted the model three times per scenario, resulting in 351 AI-generated explanation texts. To keep the interviews at an appropriate length, we chose eight scenario and explanation pairs to present to the participants \u2014 four explanations which were generated by GPT-3, and four which were collected from online discussion sites. From the larger dataset, we chose eight scenario and explanation pairs based on the following criteria: the explanation was coherent; there was a balance of argumentative stance (it is sexist vs. it is not sexist); and the length was appropriate for an interview, with both the scenario and explanation being less than five sentences. These chosen explanations also represented some of the higher-quality human explanations in the dataset. The text-based output from LLMs can be displayed to users in multiple modalities, which we know influence the perception of explanations [31]. As part of the larger study, we manipulated whether users were presented the explanation in text or audio form, though we do not focus on the outcome of this manipulation in this short paper. In the semi-structured interviews, we collected demographic information and asked introductory questions to gauge participants\u2019 familiarity with AI technology. Of the 20 participants, 10 identified as women, nine as men and one as non-binary. Participants averaged 30 years old (min: 20, max: 56) and spanned various roles from student to company executive both within and outside of AI. Most participants said they often use conversational AI, while few used chatbots regularly. The rest of the interview contained eight scenario and explanation pairs, each with the same line of questioning. We started this portion of the study by explaining the collaborative decision-making context, and asking participants to imagine that they had to decide on whether a given scenario was sexist or not, and they had input from another party, who could be a human or an AI, but they were not aware of which one. 
We also briefly described how the AI-generated texts were produced by stating that it was not a model specifically trained on sexism, but just a general language model. We showed participants the scenario and explanation, and asked whether the explanation was generated by a human or an AI model, and why. Figure 1: Overview of the research methodology. We then asked participants to rate and explain their rating for the explanation on three qualities: relevance, convincingness, and trustworthiness, based on the dictionary definitions for these terms. At the end of the interview, if participants were interested, we shared which explanations were human and AI-generated. Literature shows relevance, convincingness, and trustworthiness can be evaluated objectively and subjectively [6, 16\u201318, 33, 38]. Thus, we collected both quantitative (scales) and qualitative data for eight explanations across 20 participants. To protect the privacy of the posters whose scenarios and explanations are used in this study, we provide descriptions for the collected scenarios and explanations in Table 1, but do not provide the verbatim text. Verbatim text is provided for AI-generated content. We anonymized and transcribed interviews before following the Braun and Clarke thematic analysis method [8]. Two researchers went through two rounds of initial coding, where three randomly chosen interviews were open-coded individually by both researchers each time. The researchers met after each coding round to consolidate and organize the open codes, resulting in two iterations of the coding scheme before it was finalized. At this time, the researchers re-coded all twenty interviews using this finalized coding scheme. While we also collected quantitative data for the relevant, convincing, and trustworthy scales, in this paper we focus on the qualitative results. 3 RESULTS In this section, we provide evidence for the perception of both personal opinions and experiences within human and AI explanation text. 3.1 Recognizing Opinions and Experiences Participants recognized the elements of personal opinion and personal experience in explanations, both those authored by humans and AI-generated. Personal opinion was simply identified 20 times in our interviews, mostly in reference to both actual and perceived human-authored explanations. Personal opinions were defined as when the explanation revealed personal beliefs or points of view, for example: \u201c...So it had a very strong opinion and then it supported the opinion afterwards by again, just kind of distilling down what the scenario was talking about...\u201d [P4] Personal experiences were identified 56 times in the interviews, and were primarily discussed in terms of both actual and perceived human examples. Personal experiences refer to the presence of a personal example or the way something personally affected the explanation author, such as: \u201cThis sounds like it comes from a place of having experienced this\u201d. [P5] We found that these explanation elements were identified by participants across scenarios, thus being scenario independent: opinions and experiences were recognized by participants in reference to seven of eight scenarios. We also found no evidence that individual differences between participants drove this identification: opinions were identified by 11 out of 20 participants, and experiences by 17. 
Both opinion and experience were often brought up when describing why a participant believed the explanation was written by a human (note that this does not mean the explanation was actually human-generated): \u201cIt\u2019s coming from a very personal story perspective. So it\u2019s one person\u2019s perspective which is good. So I wouldn\u2019t say it\u2019s a trustworthy news source, but at the same time, I do trust it as someone\u2019s own personal opinion\u201d [P12] Participants described how the specific examples contained in explanations had to come from lived experiences, and would be challenging to train an AI to replicate: \u201c...It\u2019s actually...coming up with real-life circumstances and scenarios of why this might not be sexist...So I feel like whoever explained this has real-world experience TREW @ CHI \u201924, May 11\u201316, 2024, Honolulu, HI, USA Ferguson et al. Table 1: Description of the Scenarios and Explanations used in the interview study. Verbatim scenarios and explanations (written by humans) are not included to protect poster privacy, though AI-generated text is included verbatim in italics. H=Human-authored, AI = AI-authored. # Scenario # Explanation 1 The scenario contains commentary on the \u201cmassively disproportionate\u201d number of women taking STEM subjects in school, and how this may be caused by \u201csociety\u2019s attitude towards women and these areas\u201d. They discuss people providing surprised comments when women say they are studying high-level science. H1 The explanation states that while these comments are not meant to offend, they are often made at critical times when children are choosing a career path, and can thus cause women to stay out of science to prevent them from being seen as \u201cabnormal\u201d. 2 The scenario where an adult women is referred to as a girl, by a man, in a workplace setting. H2 The explanation states that this scenario drives the reader \u201cinsane\u201d and it would not be appropriate to do the same to a man. 3 The scenario just contains the word \u201chysterical.\u201d H3 The explanation describes how the word is triggering because the Greek root of the word refers to the word for \u201cuterus\u201d, and the word is rarely used to describe a man. 4 The scenario describes a family where the father is upset if the daughter does not help to cook or clean, but the brothers are allowed to relax. H4 The explanation states that while people talk about \u201cmen\u2019s work\u201d, we should not assign chores based on gender. They describe their experience enjoying yard work growing up, and how some men would prefer to be in the kitchen. They say both ways are sexist. 5 The scenario describes many examples of men boiling thing down to a woman\u2019s \u201cbiology\u201d. The examples include being emotional due to your period, being able to handle pain due to childbirth, and women getting drunk faster than men. AI1 It is sexist. It\u2019s a way of undermining women by reducing their capabilities as human beings to nothing more than their physical features. It\u2019s a way of saying that women have no control over their behavior. It\u2019s insulting. And it\u2019s not helping. 6 The scenario contains two quotes from books containing the lines \u201ccourage uncommon in a woman\u201d and \u201cgrip that was decidedly unfeminine.\u201d AI2 Yes, and I think that\u2019s sexist because it is saying that a woman not acting like a woman is admirable. 
7 The scenario describes when the term \u201cguys\u201d is used when addressing both men and women. AI3 No, it isn\u2019t sexist. The word \u2019guys\u2019 is not a gendered word. It is not sexist. 8 The scenario describes \u201cbenevolent sexism\u201d and gives the examples of women being called \u201cdear\u201d by senior male colleagues or \u201cgirls\u201d when hanging out with other female co-workers. AI4 I am not sure that it is sexist. It is a term that refers to the way men can try to be inclusive of women or supportive of women but with condescending or patronising undertones. I think it is important to look at how we treat each other and to look at how we might be sexist without meaning it. and it\u2019s not just something that was trained to say the right thing\". [P5] Even going as far as to say that it would be unnerving to think of an AI that could generate text as if from personal experience: \u201c...it was kind of reflecting on their own domestic chore experiences and bringing that into the argument that just instantly made me feel like it was a human cause you don\u2019t want to think about an AI like that. It\u2019s just a bit unnerving...\u201d [P9] In terms of evaluation of the explanation, participants noted that personal opinions are worth considering: \u201cTrustworthy? Yeah. I mean it is based on someone\u2019s opinion and it is based on a different situation that the speaker had provided. So is it reliable? Yes, it\u2019s reliable.\u201d [P8]. One participant even shared that personal experiences are only convincing when they are real \u2013 meaning, they reflect a real event that a human experienced, suggesting that an AI imitation of this would not be convincing: \u201cBut I think if I was standing there and I was in a conversation and there was a human woman saying this in rebuttal, and I would say it\u2019s a four or a five because it\u2019s someone who is speaking from lived experience.\u201d [P15] However, many also noted that sometimes an author\u2019s opinion or description of their experience is not enough to make an explanation trustworthy and convincing. For that effect to be achieved, the opinion or experience must be backed by facts, supporting evidence or sources. \u201cOkay, I don\u2019t see any sort of source for these facts that are being stated. I don\u2019t see any, yeah so it\u2019s just an opinion of AI or a person and I would have to see evidence.\u201d [P2] Some participants argued that a trustworthy explanation would contain both personal components such as opinions and experiences, as well as logic, facts, or statistics: \u201cI think yes [it is trustworthy], because it seems like it doesn\u2019t come only from academic experience but also from personal experience firsthand. So it seems, yeah I think it appeals to our senses to trust one that can handle both [facts and opinions].\u201d [P10] In summary, participants recognize the presence of opinions and experiences displayed in humanand AI-generated explanations, across various scenarios. The presence of these explanation elements made explanations more convincing and trustworthy when used in subjective decision-making, though they should be combined with objective facts and sources. 3.2 Comparing Opinions and Experiences Interestingly, even though we did not directly inquire about the participant\u2019s opinions about a scenario or their similar personal experiences, they often offered these as a justification for their answers. 
In fact, they recognized the author\u2019s opinion in the explanation and discussed how this opinion did or did not align with their personal opinion, and how this alignment, or lack thereof, influenced their assessment of the explanation. For example: Just Like Me TREW @ CHI \u201924, May 11\u201316, 2024, Honolulu, HI, USA \u201c...it has a lot of truths to it. You don\u2019t really hear someone use the word hysterical to describe another man.\u201d [P7] This comparison with their own opinion was discussed 97 times, in reference to all scenarios, by 19 participants, and overwhelmingly in response to perceived human explanations. Despite the fact that humans often made this comparison in regard to explanations they thought were human-authored, these explanations were often actually AI-authored. This comparison echoes literature [25] that exposed the weight of personal values when considering sexism. Overall, when an opinion displayed in the explanation aligned with the participant\u2019s opinion, they were more likely to assess a human wrote it. When an explanation aligned, or related, to their personal experience in the world, participants described that this felt \u201chuman\u201d: \u201cThis feels like a response that I would have, personally. This is something probably that I would see myself saying. So I would guess that this is a human response.\u201d [P4] \u201cI find it relatable, that explanation to, to my career, to my job and everything. So I feel connected to that explanation or that makes me feel that it was done by a human.\u201d [P11] In terms of comparing experiences displayed in the explanation to their own experiences, this was found 48 times in our interviews, in regard to all scenarios and brought up by 17 participants. In this case, participants made this comparison in explanations that both were actually and perceived to be human-authored. While in general, we found that opinions and experiences had a more important role in the evaluation of convincing and trustworthy, rather than relevant, there were notable examples where participants mentioned alignment with personal experience in their assessment of relevance. Perhaps they used their personal experience as a middle ground between the scenario and explanation. If the scenario aligned with their personal experience, as did the explanation, the explanation was relevant to the scenario: \u201cI think it\u2019s definitely relevant, and even I believe that the word hysterical is typically associated with women and that kind of reinforces this misconception that we as a society have that women are the ones who super emotional and they get carried away and they can act in a crazy manner or be hysterical. So yeah, I think the explanation makes sense to me\u201d [P13] Furthermore, when participants noticed alignment with their personal opinions and experiences within the explanation, they would find the explanation to be more Convincing: \u201cI like the reasoning, it\u2019s pretty similar... It\u2019s just in line with my personal values system\" [P3] \u201cI did relate to it. I don\u2019t see hysterical referred to, [or] used as a descriptor for men or haven\u2019t historically seen that. And so that\u2019s the piece where it was like, oh yeah, that is an observed behavior that I\u2019ve also noticed. 
So I feel convinced by that...\u201d[P15] Further, a lack of alignment makes things less convincing: \u201c...if I was a guy and you\u2019re trying to convince me based on this explanation, I wouldn\u2019t really be convinced because that\u2019s what I\u2019m used to hearing the entire time. I\u2019m used to saying that\u2019s what I, that\u2019s the way everyone around me talks. Yeah. So I wouldn\u2019t be convinced by this explanation.\u201d [P15] In terms of trustworthiness, participants described an emotional connection, or an emotional appeal, that was brought about when the explanation aligned with their experience: \u201dAnd I think because it also speaks to some of my own experiences and the experiences of some of my friends and colleagues growing up, I\u2019m, it just intuitively fosters this connection...[it] speaks about an experience that a lot of people have had growing up and choosing what they wanna do in life and their career paths. So there is some emotional appeal that is going on there\u201d [P9] In addition, a few participants also considered that while they agree with the explanation\u2019s stance, someone else with a different opinion might make a different assessment: \u201cAgain, same thing. I do agree with what\u2019s been said, so I\u2019m like, yeah, I trust that a lot. But if I put [myself] in someone else\u2019s shoes, if I did not agree with what\u2019s been said would it be trustworthy? I think it\u2019s a three. I could react or someone could react back and say, \u2018Ah, I have a different opinion\u2019. Of course, go ahead. So based on what I know and what I believe, I trust it a lot...\" [P8] And lastly, we found that in some instances, participants said that the lack of alignment with their own opinion made it hard to evaluate the explanation. In this case, the participant recognized the impact that this lack of alignment had on their evaluation, and was thus unable to provide a rating: \u201cI don\u2019t really have enough information to make the decision if it\u2019s convincing or trustworthy mainly because I don\u2019t agree with the explanation. So I\u2019m not really finding an answer to whether it\u2019s trustworthy or convincing.\u201d [P17] In summary, participants tended to automatically compare the opinions and experiences they perceived within the explanation to their own opinions and experiences. Whether or not these opinions and experiences aligned with their own influenced their evaluation of the explanation, making it more convincing and trustworthy, and evening making it challenging to evaluate the explanation if there is a lack of alignment. 4 DISCUSSION AND IMPLICATIONS In this work, we argue that as LLM\u2019s abilities advance, they are becoming more suitable collaboration partners for humans, specifically in the context of subjective decision-making. The role of AI output in these contexts is to present new information and perspectives to the human decision-maker. Explanations in collaborative, subjective decision-making are less likely to be grounded in facts than explanations for objective decisions; it is more common that opinions and personal experiences comprise these new perspectives. As opinions and personal experiences may be considered uniquely human attributes, we were interested in whether and how TREW @ CHI \u201924, May 11\u201316, 2024, Honolulu, HI, USA Ferguson et al. these attributes were perceived in AI explanations. 
We found that humans did identify opinions and experiences in explanations for subtle sexism scenarios, and perceived them to be both convincing and trustworthy, making them important for subjective decisionmaking. While an argumentative stance (i.e., sexist or not sexist) can be depicted in some non-textual explanation formats, it would be hard to share personal opinions (such as \u201cI don\u2019t think women should be confined to traditionally feminine interests\u201d) and personal experiences, which are normally described in a narrative format, in forms other than text. Thus, word-based explanations are perceived to contain opinions and experiences which are important for subjective decision-making. Personal opinions and experiences were also often attributed to human authorship. As has also been shown in past work in the context of emotions [36], we have demonstrated that these modern LLMs can generate plausibly human text elements; in our case, opinions and experiences. Thus, while these elements may aid in subjective decision-making, we have to consider the ethical implications of participants potentially believing these explanations come from humans. In fact, we provide evidence that the relationship between the explanation author and the explanation evaluation is causal in some cases \u2014 some participants described how they value human\u2019s individual experiences, and thus they would trust a human\u2019s explanation of their experience, but not an AI-generated replication. This means that if we use LLMs in subjective decisionmaking collaborative systems, we must be sure to disclose their contributions as AI-generated, even if this might harm the trustworthiness of the explanation. In many contexts, humans distrust AI-generated text [19, 40], thus future work can investigate which elements or factors need to be present in AI-generated text to calibrate trust. Explanations which featured opinions and experiences were also found to be convincing and trustworthy, which aligns with past work in other contexts, where researchers found that personal experiences regarding politics are more convincing than facts [21], and make people seem more rational and worthy of respect [34]. However, opinions alone were often noted as not enough to completely convince participants. This finding suggests one way in which we may need to fine-tune language models if used in this context. Our results also show that personal opinions and experiences, in conjunction with statistics or other forms of evidence, would be most convincing \u2014 as was also suggested in past work on political disagreement [21]. Future work can identify how to prompt or adjust the design of language models to provide this balance. Perhaps the more troublesome finding was that participants unpromptedly judged how these opinions and experiences aligned with their own, greatly influencing their overall perception. When an explanation contains an opinion or experience similar to the participant\u2019s own, they tend to rate the explanation overall as more trustworthy and convincing. This is known as a confirmation bias \u2013 defined as \u201cseeking or interpreting evidence in ways that are partial to existing beliefs\u201d [26, p. 175] \u2013 which poses the risk of reinforcing the user\u2019s existing opinions and negating the intention of providing new perspectives to the decision-making process. Thus, we have to be careful when deciding on the opinions and experiences present in these explanations. 
Perhaps in these collaborative decision-making settings, we should present participants with multiple LLM outputs, representing multiple opinions and experiences, or even prompt models to describe different perspectives. It has been shown that these models can provide different opinions, although they may provide one more commonly than another [32]. Further, recent work has shown that LLMs can be trained to generate widely accepted outputs that can help people with diverse viewpoints reach consensus [2], suggesting that these models could be used both to generate different perspectives and resolve them. This would have the benefit of being perceived as convincing and trustworthy, but still providing new perspectives and information that can help participants make their decisions. 5 LIMITATIONS Due to our collection of subtle sexism scenarios from naturally occurring internet discussion sites, we limit the variation of scenarios studied. For instance, we found that the large majority of the posts regarding subtle sexism argued for why a scenario was sexist, limiting our ability to assess human explanations arguing why the scenario was not sexist. Further, because we could only fit a small number of scenario-explanation pairs in the interview, we also cannot comment on the generalizability of our findings to subtle sexism in various contexts and other types of hate speech. Another limitation of this work is the small sample size. The majority of our study participants (though not all) came from Western communities, and may have different beliefs and lived experiences from those in other parts of the world. Additionally, because online forums and social media tend to be populated mostly by Western communities, this could influence the alignment that we see between experiences displayed in explanations and the participant\u2019s own experiences. Future work should extend this investigation to understand how cultural differences may influence the perception of alignment. Furthermore, LLMs have been shown to reflect societal biases, and can create hate-speech themselves. While we filter out this type of content in our study, future work can measure the biases in these LLM-generated texts, and how biased opinions and experiences influence human perceptions. Lastly, as part of the larger scope of the study, we asked participants to describe why they believe a text to be human or AIauthored. This focus prevented us from being able to analyze how knowledge of the explanation source affected perception, which is an important next step in the work. 6 CONCLUSION & FUTURE WORK In this work, we present an interview study of participants\u2019 perceptions of human and AI explanations for subjective decision-making, specifically using the example of identifying subtle sexism. We argue that in subjective cases, multiple perspectives presented in collaboration can be helpful. To make this feasible, we motivate this work with the idea that LLMs can be used to generate these perspectives. We ask participants to evaluate human and AI-generated explanations as if they were participating in this collaborative process. We found that participants often perceived the explanations, both those authored by humans and AI, to contain personal opinions and experiences. The presence of these elements typically leads participants to view the explanation as convincing and trustworthy and Just Like Me TREW @ CHI \u201924, May 11\u201316, 2024, Honolulu, HI, USA also believe that it was written by a human. 
Further, we show that whether these opinions and experiences are aligned with the participant\u2019s own opinions and experiences is even more important for trust, highlighting a troubling tendency to conform to confirmation bias, negating the original intent of collaborative decision-making. Thus, we show that these elements of explanations are particularly important for collaboration in subjective hate-speech detection, and we motivate future work to address how we might best provide multiple, differing opinions and experiences to collaborative human decision-makers, and how we can avoid ethical challenges that arise when AI generates human-like opinions and experiences."
},
{
"url": "http://arxiv.org/abs/2404.16160v1",
"title": "Domain-Specific Improvement on Psychotherapy Chatbot Using Assistant",
"abstract": "Large language models (LLMs) have demonstrated impressive generalization\ncapabilities on specific tasks with human-written instruction data. However,\nthe limited quantity, diversity, and professional expertise of such instruction\ndata raise concerns about the performance of LLMs in psychotherapy tasks when\nprovided with domain-specific instructions. To address this, we firstly propose\nDomain-Specific Assistant Instructions based on AlexanderStreet therapy, and\nsecondly, we use an adaption fine-tuning method and retrieval augmented\ngeneration method to improve pre-trained LLMs. Through quantitative evaluation\nof linguistic quality using automatic and human evaluation, we observe that\npre-trained LLMs on Psychotherapy Assistant Instructions outperform\nstate-of-the-art LLMs response baselines. Our Assistant-Instruction approach\noffers a half-annotation method to align pre-trained LLMs with instructions and\nprovide pre-trained LLMs with more psychotherapy knowledge.",
"authors": "Cheng Kang, Daniel Novak, Katerina Urbanova, Yuqing Cheng, Yong Hu",
"published": "2024-04-24",
"updated": "2024-04-24",
"primary_cat": "cs.CL",
"cats": [
"cs.CL",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Large Language Models (LLMs) have demonstrated impres- sive generalization capabilities, such as in-context learning [1], chain-of-thoughts reasoning [2], and biomedical diag- nosing [3]. Instruction-tuning of LLMs has enabled them to follow natural language instructions and perform real- world tasks [4]. Two main methods have been developed for instruction-tuning LLMs: (1) fine-tuning the model on a wide range of tasks using human-annotated prompts and \u2217Corresponding Author. \u2020Thanks to Research Center Informatics (No. CZ.02.1.01/0.0/0.0/16 019/0000765), Brain Dynamics (No. CZ.02.01.01/00/22 008/0004643) and Student Grant in Czech Techni- cal University in Prague (NO.SGS22/165/OHK3/3T/13) for funding. Katerina Urbanova is also with the National Institute of Mental Health in Prague, Czech Republic \u2021The work also was supported during the author\u2019s internship at The Uni- versity of Hong Kong. Yuqing Cheng is also with the Shenzhen Mental Health Centre, China. Question: What is DSM-5 and ICD-10 Coding? Output: The depressive disorders grouping in the fifth edition of the Diagnostic and Statistical Manual of mental disorders or DSM-5 contains a number of distinct disorders, each given it's own unique diagnostic code. Domain: Depressive Disorders. Concept Explanation Question Answering Dialogue Generation Depressive Disorders Addictive Disorders Anxiety Disorders ... Psychotherapy Instructions Template Instruction: Input: Output: Task: Domain: Readability Professional Match Psychotherapy Data on Alexander Street Natural Task Identification Domain Knowledge Reward on Specific Tasks Mental Status Assessment Psychological Counseling Information Extraction ... Step one Step three Step four Step two Fig. 1. Schematic representation of Assistant-Instructional prompts in psychotherapy domains. Step one: Task identi- fication; Step two: Knowledge expansion; Step three: Evalu- ation. feedback [5], and (2) supervised fine-tuning using public benchmarks and datasets augmented with manually or auto- matically generated instructions [6]. Reinforcement Learning on Human Feedback (RLHF) has proven to be an effective way to improve LLMs in various domains, such as medicine [7], knowledge graphs [8], multimodal data fusion [9] and biomedical applications [10], but it comes with a high cost. Self-Instruct tuning [11, 12] and Guess-Instruction tuning methods have shown better performance in aligning LLMs with human intent by learning from instruction-following data generated by state-of-the-art instruction-tuned teacher LLMs (e.g., GPT-3, GPT-3.5, and even GPT-4). These lines of instruction-tuning research have proven effective in im- proving the zero and few-shot generalization abilities of LLMs. To improve the professional knowledge of LLMs on psychotherapy domains, our paper presents the psychother- apy Assistant-Instruction approach, which aims to (1) achieve generalization over different psychological consulting tasks and (2) incorporate psychological knowledge into natural common LLMs. Figure 1 provides an overview of our pro- posed approach, in which a single model can perform various NLP tasks in specific psychotherapy domains. To achieve human-level professional responses in instruc- arXiv:2404.16160v1 [cs.CL] 24 Apr 2024 Pre- trained LLMs Psychotherapy Training Transcripts Assistant Assistant- Instructions on Psychotherapy Data Generated Response Inhibited LoRA Or RAG Embedding Inputs The Server: identify tasks, generate instructions and score relevance. Bot Evaluation Expert 1. 
Identification, 2. Generation, 3. Relevance (Fig. 2: the schematic of our system). To achieve human-level professional responses in instruction tuning for psychotherapy, we propose a novel approach using GPT-4 as an assistant for Assistant-Instruct tuning (a half self-instruct tuning method) on psychotherapy consulting tasks (see Figure 1). Our method makes the following contributions: (a) it covers a wide range of psychological topics and incorporates feedback knowledge generated by GPT-4; (b) it absorbs psychotherapy knowledge from professional data and enables LLMs to generate content close to GPT-4; (c) it demonstrates the effectiveness of using assistant-LLM-revised instruction data to tune LLMs in psychotherapy domains, providing practical insights for building a general-purpose instruction-following agent powered by assistant LLMs (e.g., GPT-4).",
"main_content": "2.1. Assistant on Annotation and Task Identification To arrange psychotherapy data to correct tasks, such as (1) concept explanation, (2) question answering, (3) mental status assessment, (4) psychological counseling and (5) information extraction, (6) dialogue generation, (7) sentiment analysis, (8) event ordering, we use an assistant LLM \u2013 GPT-4 to identify which task the human-constructed instruction should be. We directly prompt the LLM in a few-shot way to determine this, using 8 classification instructions from the seed tasks. The prompting template is shown in Table 1. Can the following task be regarded as a question answering task with finite output on [***] domain? Input: \u201dJEFFREY MISHLOVE Yeah! Well we\u2019re running out of time. . . time. I supposed the point is that you\u2019ve been successful in... in developing these devices and. . . and using them in the laboratory? Output: \u201dSTEPHEN LABERGE Yes! That\u2019s right! Well, not just in the laboratory, but we developed devices that... that do have sensors built under the mass that could be used at home, so that... that of course was one of our major goals which used to have make lucid dreaming available in general to people so that they could make better lucid dream.\u201d Result: Yes Table 1. Prompt used for identifying the type of tasks. 2.2. Assistant on Generation, and Evaluation Our approach involves two main steps. Firstly, we optimize formulations that retain the content of the original instructions. We prompt a language model to reformulate the tasks in the core data for each generated task. In some instruction formulations, we embed the input into or add it behind the \u201cINPUT\u201d template \u2013 \u201dWe are talking about [***].\u201d \u2013 to emphasize the topic. This manually constructed \u201cINPUT\u201d also captures the content discussed by members of the audience in Alexander Street Video, merging the discussed topic with the point of interest for the audience or visitors. Secondly, following [13], we use GPT-4 as an assistant to evaluate the retrieved passage\u2019s relevance.The prompting template is shown in Table 2. Prompt for Generation: \u201dMake a more professional instruction and output based on given context of conversation in [***] domain. Remove people\u2019s names and UNKNOWN. Then, improve them all based on your knowledge. If you cannot do that, output nothing.\u201d Prompt for Evaluation: \u201dGiven an instruction and an output in [***] domain, rate whether the response appears to be a helpful and informative answer to the query, from 1 (lowest) 5 (highest). The detailed criterion is as follows: 5: The response provides a complete, highly detailed, and informative response to the query, fully satisfying the information needs. 4: The response mostly fulfills the need in the query, while there can be some minor improvements such as discussing more detailed information, having better structure of the response, or improving coherence. 3: The response is acceptable, but some major additions or improvements are needed to satisfy users\u2019 needs. 2: The response still addresses the main request, but it is not complete or not relevant to the query. 1: The response is barely on-topic or completely irrelevant..\u201d Table 2. Prompt used for generation and evaluation. 3. EXPERIMENTS 3.1. 
Data Collection Alexander Street Press is a website known for its vast collection of video transcripts and recordings from therapy and counseling sessions, covering topics such as depression, abuse, trauma, and mental disorders. The video transcript dataset was specifically collected from the Counseling and Therapy channel on the website. We curated the dataset to include only English-language sessions recorded between 1980 and 2023, resulting in a set of 1,333 videos and accompanying transcripts. After filtering out short-length and non-informative videos, the final dataset comprises 1,179 video transcripts, containing a total of 188,421 dialogue turns. To ensure data quality, we performed a cleaning process to remove Unicode characters, pauses, and other unnecessary elements, resulting in a dataset with 3,141,520 words and a vocabulary size of 30,438.1 On the Alexander Street Press website, most video transcripts and recordings consist of knowledge presentations and counseling talks. For knowledge presentations, there are no instruction questions or instance inputs, and the output is the content presented by the speaker. In the first step, we manually set instructions and instance inputs based on the discussed topics (e.g., Depressive disorders, Addiction, etc.). In 1https://alexanderstreet.com/ the second step, we used the GPT-4 API to revise and generate instructions and instance inputs based on the contents. 3.2. Models We conducted an evaluation of the language models mentioned above for the task of response generation in the psychotherapy domain, specifically focusing on therapeutic counseling. For generating the assistant instructions based on new psychotherapy data, we utilized the GPT-4 API as the Assistant-LLM. To fine-tune the generated instruction data effectively, we employed the inhibition adaption finetuning method [14] and self-RAG [13] on Llama2-7B [15], as well as ChatGLM2-6B [16]. The fine-tuned LLMs were then evaluated by two psychologists on psychotherapy data. The fine-tuning process required two weeks for ChatGLM26B/Llama2-7B when using eight NVIDIA Tesla A100 GPUs with 40GB graphic memory cards.2 3.3. Metrics As shown in Table 5, our fine-tuned LLMs are evaluated under two main methods: Automatic evaluation and Human evaluation. For the automatic evaluation, we submit 200 generated answers and use Rouge-L [17] and Fluency [18] to compare the generated output with a reference text and calculate a score for the similarity evaluation. For the human evaluation, we also submit 60 generated outputs (20 examples for each tasks, as we have 3 main tasks) without model and domain information to two professional psychologists, and based on their professional psychotherapy knowledge, they will measure the quality of the generated content according to the questions and answers. There are six grades that they will use to mark the quality of generated content: Extremely Bad (1), Bad (2), Neutral (3), Acceptable (4), Good (5) and Very Good (6). They should consider the readability, professional and match score (that measures the match with the domain knowledge). 3.4. Analytic Experiments 3.4.1. Performance on Revision Due to numerous vocabulary and grammar errors in the original psychotherapy data, manual processing and revision of these errors were not feasible. In Table 1, we presented the one command that requires generated instructions from GPT4. 
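The three assistant roles described in Section 2 (task identification, instruction generation, and relevance scoring, Tables 1 and 2) amount to three single-turn calls to GPT-4. The sketch below is only an illustration of that loop, not the authors' released code: the model id, helper names, and exact prompt strings are assumptions that loosely follow the templates above.

```python
# Illustrative sketch of the three assistant calls from Section 2 (Tables 1-2).
# Model id, helper names, and prompt wording are assumptions, not released code.
from openai import OpenAI

client = OpenAI()          # expects OPENAI_API_KEY in the environment
ASSISTANT_MODEL = "gpt-4"  # assumed model id

def ask(prompt: str) -> str:
    """Single-turn chat call; returns the assistant's text reply."""
    resp = client.chat.completions.create(
        model=ASSISTANT_MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

def classify_task(transcript: str, domain: str) -> str:
    """Step 1: decide whether a transcript snippet is a QA-style task (Table 1)."""
    return ask(
        f"Can the following task be regarded as a question answering task with "
        f"finite output on [{domain}] domain?\n{transcript}\nResult:"
    )

def revise_instruction(conversation: str, domain: str) -> str:
    """Step 2: rewrite raw transcript text into a professional instruction/output pair."""
    return ask(
        f"Make a more professional instruction and output based on the given context "
        f"of conversation in [{domain}] domain. Remove people's names and UNKNOWN. "
        f"Then, improve them all based on your knowledge.\n{conversation}"
    )

def rate_relevance(instruction: str, output: str, domain: str) -> str:
    """Step 3: rate helpfulness of the pair from 1 (lowest) to 5 (highest), as in Table 2."""
    return ask(
        f"Given an instruction and an output in [{domain}] domain, rate whether the "
        f"response appears to be a helpful and informative answer to the query, "
        f"from 1 (lowest) to 5 (highest).\nInstruction: {instruction}\nOutput: {output}"
    )
```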
The right panel of Table 3 contains additional information contributed by GPT-4, such as common knowledge about depressive disorders, including psychotherapy, cognitivebehavioral therapy, medication, and a combination of treatment approaches. These recommendations from GPT-4 are 2https://github.com/ChengKang520/ psychotherapy-assistant_instruction Instruction: Instruction: What suggestions or comments you can provide to address or alleviate the following topics? Kindly provide professional suggestions or comments on effectively addressing and alleviating [Depressive Disorders]. Input: Input: We are talking about [Depressive Disorders]. We are discussing [Depressive Disorders]. Output: Output: A major depressive episode has a number of characteristic features, which take place most of the day, nearly every day, where relevant. The two main characteristics are a depressed mood, and a noticeably decreased interest in most activities. Other characteristics include trouble sleeping, diminished hunger or pronounced weight loss, trouble concentrating or making decisions, fatigue or loss of energy, recurring suicidal thoughts with or without intent to act on these thoughts, feelings of worthlessness or excessive or inappropriate guilt, psychomotor agitation or retardation, and feelings of low self-worth or pronounced guilt. These symptoms must cause marked distress in the patient\u2019s work or social life. For all of these episodes the clinician should ensure that the symptoms are not better explained by another medical condition or by the affects of a substance. A major depressive episode is characterized by a range of distinct features that persist for most of the day, nearly every day. The primary indicators include a persistent low mood and a significant decrease in interest or pleasure in most activities. Other symptoms may involve sleep disturbances, changes in appetite or weight, difficulty concentrating or making decisions, fatigue, recurring thoughts of suicide with or without intent to act on them, feelings of worthlessness or excessive guilt, psychomotor agitation or retardation, and feelings of low self-worth. These symptoms should cause marked distress in the individual\u2019s work, social life, or daily functioning. When addressing Depressive Disorders, clinicians should ensure that the symptoms are not better explained by another medical condition or the effects of substance use. As professionals, let\u2019s explore effective treatment options, such as psychotherapy, cognitivebehavioral therapy, medication, or a combination of these approaches. Encouraging patients to engage in regular physical activity, maintain a healthy diet, and establish a supportive social network can also be beneficial. Furthermore, it is crucial to foster a compassionate and understanding environment to help individuals with Depressive Disorders seek appropriate help and support. Perplexity: 6.71 Perplexity: 2.15 Table 3. The manually constructed Instruction and GPT-4 revised Assistant-Instruction on the Depressive Disorder domain. considered common-sense by psychologists, but the original psychotherapy data lacked sufficient common knowledge (because it always merely has professional knowledge). The instructions and instance inputs were also refined by GPT-4 based on the given command. Perplexity is a metric that gauges the language model\u2019s ability to predict a sequence of words. The perplexity of the right panel is lower than that of the left panel. 
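The perplexity figures quoted for the two panels of Table 3 follow the standard definition: the exponentiated mean negative log-likelihood of the text under a causal language model. A minimal sketch is below; the GPT-2 checkpoint is an arbitrary stand-in, since the paper does not state which LM produced the reported numbers.

```python
# Sketch: perplexity of a passage under a causal LM (exp of the mean token NLL).
# The checkpoint is an arbitrary example, not the model used for Table 3.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids the model returns the mean cross-entropy
        # over predicted tokens; exponentiating it gives perplexity.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

print(perplexity("A major depressive episode is characterized by a persistent low mood."))
```

Lower perplexity on the GPT-4-revised panel simply means the rewritten text is easier for a language model to predict, which is what the comparison above reports.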
This revision process makes the content more LLM-friendly, transforming \u201dprofessional knowledge format\u201d into a more accessible \u201dcommon knowledge format\u201d (or open-domain knowledge format). 3.4.2. Generation on Psychotherapy Domain When comparing to the content generated by pre-trained ChatGLM2-6B, we observed that the RAG method contains additional professional knowledge (can be seen from Table 4). While this extra knowledge provides more professional information, pre-trained LLMs could not offer further professional insights into autism. When comparing the generated content of ChatGLM2-6B fine-tuned on inhibited LoRA to that of ChatGLM2-6B, we found that the former provides more refinedly revised information (However, this difference is not obvious). For example, it offers insights on professional Instruction: Please provide professional suggestions or comments to effectively address and alleviate [Autism]. Input: We are discussing [Autism]. Output from Zero-Shot: Autism is a developmental disorder that can cause significant social, communication and behavioral challenges. There is no known single cause for autism, but increased awareness and funding support research into possible causes and better ways to treat the disorder. Output from Inhibited LoRA: Autism is a developmental disorder that can cause significant social, communication and behavioral challenges. It is a spectrum disorder, which means that symptoms and severity vary widely among individuals. Output from Inhibited LoRA + RAG: Autism is a developmental disorder that can cause significant social, communication and behavioral challenges. It is a spectrum disorder, which means that symptoms can range from mild to severe. Some people with autism may also have other conditions, such as attention deficit hyperactivity disorder (ADHD), anxiety disorders, or epilepsy. Applied Behavior Analysis (ABA) a widely recognized and evidence-based therapy for Autism. ABA techniques focus on promoting positive behaviors and reducing challenging behaviors. Speech and language therapy can help improve communication skills, such as speech articulation, expressive language, and social communication. Table 4. Based on Llama2-7B, we illustrate the performance of Zero-Shot, inhibited LoRA Tuned and RAG methods on Psychotherapy data. explanation, such as \u201d It is a spectrum disorder, which means that symptoms and severity vary widely among individuals.\u201d. 3.4.3. Evaluation We present a performance summary of different instructiontuning methods applied to two pre-trained LLMs in Table 5. While the Rouge and Fluency evaluation results show improvement with the use of Assistant-Instruction. To validate the performance, we use a selected portion of psychotherapy data as a validation set. Through content revising and leveraging additional common knowledge from GPT-4, both of these two LLMs show significant enhancement in matching the revised answers. Pre-trained LLMs can provide clients with comments to address psychological problems, but the quality of generated content may not always be fully accepted by psychologists. From Table 5, we observe that psychologists tend to prefer models that have been fine-tuned on psychotherapy data. As most LLMs lack specialization in a specific domain, they often require more domain-specific knowledge to improve their performance in professional domains. 
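The RAG rows in Tables 4 and 5 come from Self-RAG [13]; the sketch below is not that method, only a generic dense-retrieval stand-in (encoder checkpoint, corpus, and prompt format are assumptions) that shows the basic retrieve-then-prepend pattern supplying the extra professional knowledge discussed above.

```python
# Generic retrieval-augmented prompting sketch (NOT Self-RAG [13]): embed a small
# corpus of psychotherapy passages, retrieve the closest one, and prepend it to
# the instruction before generation. All names and texts here are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed retrieval encoder

corpus = [
    "Applied Behavior Analysis (ABA) is a widely used, evidence-based therapy for autism.",
    "Cognitive-behavioral therapy is a common treatment for depressive disorders.",
]
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list:
    q = encoder.encode([query], normalize_embeddings=True)
    scores = (corpus_emb @ q.T).squeeze(-1)  # cosine similarity (embeddings are normalized)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def build_prompt(instruction: str, domain: str) -> str:
    context = "\n".join(retrieve(instruction))
    return (
        f"Context: {context}\n"
        f"We are discussing [{domain}].\n"
        f"Instruction: {instruction}\nOutput:"
    )

print(build_prompt("Please provide professional suggestions to address [Autism].", "Autism"))
```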
Because LLMs have been pre-trained on a vast corpus, they have an inherent advantage in readability, and the size of tokens used does not seem to affect their performance significantly. Regarding the professionalism of the generated content, the psychologists gave higher scores to models that had been fine-tuned on psychotherapy instruction data compared to the corresponding original LLMs. Inhibited LoRA Finetuning (without / with Assistant-Instruction) Pretrained LLM Automatic Human Evaluation Rouge-L \u2191 Fluency \u2193 Read Prof Match ChatGLM2-7B 24.3/27.1 49.4/48.7 4.8/4.9 2.9/3.3 2.1/2.5 Llama2-7B 15.1/16.9 20.9/20.5 5.0/5.2 3.0/3.2 1.9/2.3 Retrieval Augmented Generation (without / with Assistant-Instruction) Pretrained LLM Automatic Human Evaluation Rouge-L \u2191 Fluency \u2193 Read Prof Match ChatGLM2-7B 25.1/32.8 56.4/46.7 4.6/5.3 3.9/4.2 2.9/3.3 Llama2-7B 15.4/22.4 30.3/20.7 4.8/5.2 3.7/4.1 3.0/3.4 Table 5. To evaluate the performance of LLMs on the psychotherapy domain, two methods, inhibited LoRA and RAG, were applied to two pre-trained LLMs, without and with tuning on Assistant-Instruction data. 3.4.4. Human Evaluation Agreement To assess the reliability of our human evaluation, we conducted an inter-rater agreement analysis [6] between our two evaluators. We used Cohen\u2019s \u03ba to measure inter-rater agreement for categorical items. The 6-level rating scale (ranging from 0 to 5) was treated as a categorical variable for each aspect under consideration. The resulting \u03ba value was 0.63, indicating a moderate level of agreement according to common practice. Furthermore, we computed the Spearman correlation coefficient \u03c1 between the ratings of our two evaluators, treating the ratings as ordinal variables (ranging from 0 to 5). The obtained coefficient was \u03c1 = 0.81, demonstrating a high correlation between the two evaluators. These results indicate a reasonably reliable human evaluation process for our study. 4. CONCLUSION We propose a novel method called ASSISTANT-INSTRUCT for fine-tuning or retrieving information from language models (LMs) to improve their instruction-following ability. This method combines both common knowledge and psychotherapy professional knowledge to generate instruction data with the help of experts. It retains the general knowledge already present in pre-trained LMs and incorporates psychotherapy-specific knowledge from expert-presented instructions. To enhance fine-tuning, as well as retrieval knowledge, we format the psychotherapy data, such as presentations, talks, and conversations, to make it more compatible with LMs. Human evaluation of this method demonstrates significant improvement compared to existing instruction methods. ASSISTANT-INSTRUCT can serve as an initial step to align pre-trained LMs with LM-revised instructions, and further research can build upon this method to enhance instruction-following models."
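The agreement statistics reported above (Cohen's kappa of 0.63 and Spearman's rho of 0.81) follow their standard definitions. A minimal sketch with scikit-learn and SciPy, using made-up ratings purely for illustration, is:

```python
# Sketch: inter-rater agreement between two evaluators on a 6-level rating scale.
# The rating vectors are illustrative placeholders, not the study's data.
from sklearn.metrics import cohen_kappa_score
from scipy.stats import spearmanr

rater_a = [5, 4, 3, 4, 2, 5, 3, 4, 1, 4]
rater_b = [5, 4, 4, 4, 2, 4, 3, 5, 1, 4]

kappa = cohen_kappa_score(rater_a, rater_b)   # ratings treated as categories
rho, p_value = spearmanr(rater_a, rater_b)    # ratings treated as ordinal values

print(f"Cohen's kappa: {kappa:.2f}, Spearman's rho: {rho:.2f} (p = {p_value:.3f})")
```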
},
{
"url": "http://arxiv.org/abs/2404.15592v1",
"title": "ImplicitAVE: An Open-Source Dataset and Multimodal LLMs Benchmark for Implicit Attribute Value Extraction",
"abstract": "Existing datasets for attribute value extraction (AVE) predominantly focus on\nexplicit attribute values while neglecting the implicit ones, lack product\nimages, are often not publicly available, and lack an in-depth human inspection\nacross diverse domains. To address these limitations, we present ImplicitAVE,\nthe first, publicly available multimodal dataset for implicit attribute value\nextraction. ImplicitAVE, sourced from the MAVE dataset, is carefully curated\nand expanded to include implicit AVE and multimodality, resulting in a refined\ndataset of 68k training and 1.6k testing data across five domains. We also\nexplore the application of multimodal large language models (MLLMs) to implicit\nAVE, establishing a comprehensive benchmark for MLLMs on the ImplicitAVE\ndataset. Six recent MLLMs with eleven variants are evaluated across diverse\nsettings, revealing that implicit value extraction remains a challenging task\nfor MLLMs. The contributions of this work include the development and release\nof ImplicitAVE, and the exploration and benchmarking of various MLLMs for\nimplicit AVE, providing valuable insights and potential future research\ndirections. Dataset and code are available at\nhttps://github.com/HenryPengZou/ImplicitAVE",
"authors": "Henry Peng Zou, Vinay Samuel, Yue Zhou, Weizhi Zhang, Liancheng Fang, Zihe Song, Philip S. Yu, Cornelia Caragea",
"published": "2024-04-24",
"updated": "2024-04-24",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.IR",
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Attribute Value Extraction (AVE) identifies the value of product attributes from the product infor- mation, which is critical in e-commerce for product representation, recommendation, and categoriza- tion (Yang et al., 2022; Wang et al., 2020; Khan- delwal et al., 2023; Yang et al., 2023; Fang et al., 2024). The attribute values can be categorized into two types: (1) Explicit values can be directly found as a segment in the product text (Yang et al., 2022; Wang et al., 2020), while (2) Implicit values are never mentioned in the text and can only be in- ferred from the product image, contextual clues, \u00d8 Product Text: [Title] Transparent Waterproof Kawaii Shoes [Category] Outdoor Shoes \u00d8 Attribute: Boot Style \u00d8 Attribute Value: Rain Boot Figure 1: An example of implicit attribute value. The attribute value \u201cRain Boot\u201d is not mentioned explicitly in the product text, but can be inferred from text context, product image, or prior knowledge. or prior knowledge (Zhang et al., 2023). Consider the example in Figure 1. The value \u201crain boot\u201d of the attribute \u201cboot style\u201d is implicit since it is not explicitly stated in the product text but can be inferred from its image or context from keywords such as \u201ctransparent\u201d and \u201cwaterproof.\u201d Nonetheless, existing datasets for attribute value extraction exhibit several key limitations: (1) They predominantly focus on explicit attribute values, neglecting implicit attribute values (Zheng et al., 2018; Wang et al., 2020), which are more challeng- ing and commonly encountered in real-world sce- narios; (2) Many datasets lack product images (Yan et al., 2021; Yang et al., 2022), limiting their ap- plicability in multimodal contexts; (3) The limited number of publicly available datasets lack human inspection and cover only a few domains, resulting in inaccurate and restricted benchmarks (Xu et al., 2019; Zhang et al., 2023). Table 1 compares these aspects for various AVE datasets. To address these issues, we present ImplicitAVE, the first publicly available multimodal dataset for implicit attribute value extraction. We initially sourced product text data from the MAVE dataset (Yang et al., 2022) and then curated the data by eliminating unhelpful attributes and redundant or ir- relevant values. Subsequently, we transformed and 1 arXiv:2404.15592v1 [cs.CV] 24 Apr 2024 Dataset Implicit Values Multimodality Publicly Available Human Annotation Multiple Domains Language OpenTag (Zheng et al., 2018) \u2717 \u2717 \u2717 \u2713 \u2713 English AE-110K (Xu et al., 2019) \u2717 \u2717 \u2713 \u2717 \u2713 Chinese MEPAVE (Zhu et al., 2020) \u2717 \u2713 \u2713 \u2713 \u2713 Chinese AdaTag (Yan et al., 2021) \u2717 \u2717 \u2717 \u2713 \u2717 English MAVE (Yang et al., 2022) \u2717 \u2717 \u2713 \u2717 \u2713 English DESIRE (Zhang et al., 2023) \u2713 \u2713 \u2717 \u2713 \u2717 Chinese ImplicitAVE (Ours) \u2713 \u2713 \u2713 \u2713 \u2713 English Table 1: Comparison of existing AVE datasets. While several explicit AVE datasets exist, implicit AVE is much more challenging and under-explored. Our work introduces the first open-source dataset that is expressly designed to address the task of implicit AVE. Our dataset is considerably different from DESIRE, as detailed in Appendix A. expanded the dataset to include implicit attribute value extraction and multimodality and finally vali- dated the test set annotations through two rounds of human inspection. 
This yields a more refined and quality-improved dataset of 68k training and 1.6k testing data spanning five diverse domains with 25 attributes and corresponding attribute values suit- able for implicit attribute value extraction. Detailed statistics of our dataset are shown in Tables 2, 3. Given the cutting-edge performance of Multi- modal Large Language Models (MLLMs) (Li et al., 2023; Liu et al., 2023b,a; Bai et al., 2023; Ye et al., 2023; Luo et al., 2023) and the absence of previ- ous exploration of their application to implicit at- tribute value extraction, we establish a comprehen- sive benchmark for MLLMs on our ImplicitAVE dataset. We cover six recent MLLMs with 11 vari- ants and compare them with the fine-tuned previ- ous SOTA method. We evaluate their performance across diverse settings, including full/few-shot and zero-shot scenarios, domain-level and attribute- level performance, and single/multi-modality per- formance. We find that implicit value extraction re- mains a challenging task for open-source MLLMs despite their effective capabilities. Our contributions are summarized as follows: (1) The development and release of ImplicitAVE, the first open-source multimodal dataset for im- plicit AVE; (2) The exploration and benchmarking of various MLLMs for implicit attribute value ex- traction across diverse settings, revealing intriguing insights and potential future research directions.",
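In the zero-shot setting, implicit AVE amounts to asking an MLLM a constrained question about a product image and its value-removed text. A minimal sketch with the Hugging Face BLIP-2 checkpoint (one of the benchmarked model families) follows; the prompt wording, candidate list, and image path are assumptions rather than the benchmark's exact template.

```python
# Sketch: zero-shot implicit attribute value extraction with BLIP-2 via Hugging Face.
# Prompt format, image path, and candidate values are illustrative assumptions.
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl")

image = Image.open("product.jpg")  # placeholder product image
candidates = ["Rain Boots", "Snow Boots", "Chelsea", "Combat", "Western/Cowboy", "Motorcycle"]
prompt = (
    "Product title: Transparent Waterproof Kawaii Shoes. "
    f"Question: what is the Boot Style of this product? Choose one of: {', '.join(candidates)}. Answer:"
)

inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip())
```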
"main_content": "We outline our approach to constructing the first open-source multimodal implicit attribute value extraction dataset, ImplicitAVE. The dataset construction pipeline is illustrated in Figure 2. It contains four steps: data collection, curation, expansion, and validation. Next, we explain them in detail. 2.1 Initial Data Collection Initially, we sourced product text information, including titles, categories, and corresponding attribute-value annotations, from the publicly available MAVE dataset (Yang et al., 2022), comprising 2.2 million products spanning diverse ecommerce domains. Despite its extensive coverage, the MAVE dataset exhibits several significant limitations, making it unsuitable for implicit AVE: (1) It contains inappropriate attributes and values that are not facilitative to implicit AVE tasks (see Step 2); (2) It is designed solely for explicit attributevalue extraction; (3) It solely comprises textual information and lacks multimodal data sources; (4) Annotations within the MAVE dataset are machinegenerated and lack human inspection, resulting in notable inaccuracies. 2.2 Data Curation for Implicit AVE We further refine the sourced data by removing unhelpful attributes and redundant or irrelevant values for Implicit AVE. Concretely: \u2776Removing Inference-Infeasible Attributes. We manually inspect and remove attributes where the specific values are almost impossible to infer if the values are not mentioned explicitly in the text, such as display resolution, storage capacity, and battery life; \u2777Removing Subjective Attributes. The attributes that are rather subjective and ambiguous, such as the degree of comfort and product quality, are also removed; \u2778Value Merging and Cleaning. Attribute values with similar semantic meanings are consolidated. This includes unifying variations in grammar forms (e.g., Short-Sleeve, Short sleeves, short sleeved for the attribute Sleeve Style), eliminating extraneous words (e.g., running and running shoes), and merging synonyms (e.g., floral and flower, leopard and cheetah, crew neck and round neck, plaid and tartan, etc.) In addition, we notice some values are irrelevant to their parent 2 Initial Data Collection \u00d8 Product Text: Transparent Waterproof Rain Boot Shoes \u00d8 Attribute: Boot Style \u00d8 Attribute Value: Rain Boot MAVE Data Curation for Implicit AVE Subjective InferenceInfeasible Attributes Attribute Values Irrelevant Very Similar Filtering Merging Sampling Data Expansion \u00d8 Product Text: Transparent Waterproof Rain Boot Shoes Product ID \u00d8 Product Text: Transparent Waterproof Shoes Human Inspection & Re-Annotation \u00d8 Attribute: Boot Style \u00d8 Attribute Value: Rain Boot N/A Agree? 
attributes (e.g., the value \u201cClear Stamps\u201d of the attribute \u201cMaterial of Artwork\u201d), so these values are removed as well. The value merging and cleaning is achieved collaboratively by lexicon-based scripts, prompting with GPT-4, and human inspection. This curation results in a more refined and quality-improved dataset with 25 attributes and corresponding attribute values spanning five domains suitable for implicit attribute value extraction. We randomly sample up to 1000 instances per attribute value to limit the dataset size. The selected domains and attributes in ImplicitAVE are shown in Table 2. [Figure 2: Steps for constructing our ImplicitAVE dataset (initial data collection from MAVE, data curation for implicit AVE, data expansion, and human inspection & re-annotation with majority vote); the resulting dataset is publicly available, human-annotated, multimodal, covers multiple domains, and targets implicit attribute values. A detailed explanation is provided in Section 2.] Domain # Train # Eval # Values # Attributes Attributes Clothing 18868 226 23 4 [\u2019Sleeve Style\u2019, \u2019Neckline\u2019, \u2019Length\u2019, \u2019Shoulder Style\u2019] Footwear 21442 317 29 5 [\u2019Shaft Height\u2019, \u2019Athletic Shoe Style\u2019, \u2019Boot Style\u2019, \u2019Heel Height\u2019, \u2019Toe Style\u2019] Jewelry&GA 13061 220 20 3 [\u2019Pattern\u2019, \u2019Material\u2019, \u2019Shape\u2019] Food 3617 390 41 5 [\u2019Form\u2019, \u2019Candy Variety\u2019, \u2019Container\u2019, \u2019Occasion\u2019, \u2019Flavor\u2019] Home Product 11616 457 45 8 [\u2019Season\u2019, \u2019Material\u2019, \u2019Location\u2019, \u2019Animal Theme\u2019, \u2019Special Occasion\u2019, \u2019Size\u2019, \u2019Attachment Method\u2019, \u2019Shape\u2019] All 68604 1610 158 25 Table 2: Domain-level dataset statistics. 2.3 Data Expansion To extend the data for implicit attribute value extraction and multimodality, we perform the following processing steps: \u2776Implicit Value Creation.
We remove all explicit attribute value mentions from the input text for its corresponding attribute for each data point. As a result, attribute values in these data can only be inferred from the product images, indirect text context, or prior knowledge. That is, these values become implicit attribute values given the modified inputs. We then drop instances with the same product ID or image to prevent potential information leakage across instances based on the same product. \u2777Multimodality Creation. We systematically collect product images from the Amazon website using the product identification number and thus expand our dataset with multimodal information. 2.4 Human Inspection & Re-Annotation Through manual inspection, we observed that the original attribute-value annotations from MAVE contain noticeable errors. This is because they were annotated by ensembling predictions from five variations of AVEQA models (Wang et al., 2020)1 without human inspection. To rectify incorrect annotations and ensure a high-quality test set for implicit attribute value extraction and MLLMs evaluation, we engage five Ph.D. students to manually inspect and re-annotate our evaluation set. This process first involves sampling ten instances per attribute value from the constructed dataset, resulting in 1,676 instances. The human inspection and re-annotation process then unfold in two rounds: In the first round, annotators assess each instance\u2019s product image, text contexts, and relevant attributes to determine the correctness of the 1AVEQA (Wang et al., 2020) is a question-answering model that regards each query attribute as a question and determines the answer span that matches the attribute value within the product text information. 3 Domains Attributes # Train # Eval # Values Attribute Values Sleeve Style 3957 50 5 [\u2019Short Sleeve\u2019, \u2019Long Sleeve\u2019, \u20193/4 Sleeve\u2019, \u2019Sleeveless\u2019, \u2019Strappy\u2019] Neckline 8141 110 11 [\u2019Crew Neck\u2019, \u2019V-Neck\u2019, \u2019Henley\u2019, \u2019Polo\u2019, \u2019Scoop Neck\u2019, \u2019Strapless\u2019, \u2019Button Down\u2019, ...] Clothing Length 4937 40 4 [\u2019Mini/Short\u2019, \u2019Midi\u2019, \u2019Long Dress/Gown\u2019, \u2019Capri\u2019] Shoulder Style 1833 26 3 [\u2019One Shoulder\u2019, \u2019Off Shoulder\u2019, \u2019Cold Shoulder\u2019] Shaft Height 4546 60 5 [\u2019Ankle Boot\u2019, \u2019Bootie\u2019, \u2019Knee High\u2019, \u2019Mid Calf\u2019, \u2019Over The Knee\u2019] Athletic Shoe Style 8165 119 12 [\u2019Hiking Boot\u2019, \u2019Soccer\u2019, \u2019Golf\u2019, \u2019Running Shoe\u2019, \u2019Basketball\u2019, \u2019Tennis\u2019, \u2019Walking\u2019, ...] Footwear Boot Style 5145 68 6 [\u2019Western/Cowboy\u2019, \u2019Chelsea\u2019, \u2019Combat\u2019, \u2019Snow Boots\u2019, \u2019Motorcycle\u2019, \u2019Rain Boots\u2019] Heel Height 2457 50 4 [\u2019High Heel\u2019, \u2019Flat\u2019, \u2019Mid Heel\u2019, \u2019Low Heel\u2019] Toe Style 1129 20 2 [\u2019Round Toe\u2019, \u2019Pointed Toe\u2019] Pattern 8418 111 10 [\u2019Floral\u2019, \u2019Camouflage\u2019, \u2019Plaid\u2019, \u2019Leopard\u2019, \u2019Stripe\u2019, \u2019Paisley\u2019, \u2019Polka Dot\u2019, \u2019Argyle\u2019, ...] 
Jewelry&GA Material 2390 59 5 [\u2019Leather\u2019, \u2019Canvas\u2019, \u2019Synthetic\u2019, \u2019Wooden\u2019, \u2019Metal\u2019] Shape 2253 50 5 [\u2019Heart\u2019, \u2019Cross\u2019, \u2019Round\u2019, \u2019Oval\u2019, \u2019Crucifix\u2019] Form 1423 86 9 [\u2019Bags/Packets\u2019, \u2019Powder\u2019, \u2019Teabags\u2019, \u2019Rub\u2019, \u2019Bottles\u2019, \u2019Soup Mix\u2019, \u2019Flakes\u2019, \u2019Sticks\u2019, \u2019Sliced\u2019] Candy Variety 798 82 9 [\u2019Gummy/Chewy\u2019, \u2019Gum\u2019, \u2019Hard Candy\u2019, \u2019Mints\u2019, \u2019Licorice\u2019, \u2019Jelly Beans\u2019, \u2019Mint\u2019, \u2019Lollipop\u2019] Food Container 563 40 4 [\u2019Bag\u2019, \u2019Box\u2019, \u2019Tin\u2019, \u2019Case\u2019] Occasion 148 43 5 [\u2019Easter\u2019, \u2019Other Holiday\u2019, \"Valentine\u2019s\", \u2019Halloween\u2019, \u2019Christmas\u2019] Flavor 685 139 14 [\u2019Vanilla\u2019, \u2019Salted\u2019, \u2019Butter\u2019, \u2019Chocolate\u2019, \u2019Original\u2019, \u2019Strawberry\u2019, \u2019Habanero\u2019, \u2019Caramel\u2019, ...] Season 215 40 5 [\u2019All Seasons\u2019, \u2019Autumn\u2019, \u2019Spring\u2019, \u2019Summer\u2019, \u2019Winter\u2019] Material 7523 158 13 [\u2019Metal\u2019, \u2019Ceramic/Melamine\u2019, \u2019Fabric\u2019, \u2019Bamboo\u2019, \u2019Silicone\u2019, \u2019Wood\u2019, \u2019Plastic\u2019, \u2019Glass\u2019, ...] Location 50 47 4 [\u2019Bedroom\u2019, \u2019Kitchen\u2019, \u2019Outdoor\u2019, \u2019Bathroom\u2019] Animal Theme 134 46 5 [\u2019Cat\u2019, \u2019Dog\u2019, \u2019Owl\u2019, \u2019Bird\u2019] Home Special Occasion 1002 76 8 [\u2019Christmas\u2019, \u2019Halloween\u2019, \u2019Wedding\u2019, \u2019Birthday\u2019, \u2019Graduation\u2019, \u2019Patriotic\u2019, \u2019Easter\u2019, ...] Size 655 30 4 [\u2019Queen\u2019, \u2019King\u2019, \u2019Full\u2019, \u2019Twin\u2019] Attachment Method 441 20 2 [\u2019Grommet\u2019, \u2019Rod Pocket\u2019] Shape 1596 40 4 [\u2019Square\u2019, \u2019Rectangular\u2019, \u2019Oval\u2019, \u2019Round\u2019] All 68604 1610 158 Table 3: Attribute-level dataset statistics. The detailed ontology of our data and examples of products in different domains, with different attributes and values are provided in Appendix B. original attribute value annotation. If annotators think the original annotation is incorrect, they select the best attribute value from the corresponding value list (of that attribute) or mark \u2018N/A\u2019 if the annotator believes no suitable value is provided or multiple values are suitable. Additionally, annotators can suggest improvements such as merging, removing, adding, or replacing attribute values. Of the total instances, 1,448 original annotations are correct, 172 are incorrect, and 56 are marked as \u2018N/A,\u2019 yielding an agreement rate of 86.4%. Ten, one, one, and one attribute values are suggested for merging, removing, adding, and replacing, respectively. Instances with disagreed annotations are subject to a second round of inspection and reannotation, wherein three well-trained annotators participate, and a majority vote determines the final annotation for each instance. 2.5 Dataset Statistics The overall domain-level dataset statistics is provided in Table 2. We have 68,604 training instances and 1,610 high-quality evaluation instances. Our dataset covers 5 diverse domains and 25 carefully curated attributes specially for the task of implicit attribute value extraction. We also provide detailed attribute-level statistics in Table 3. 
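Two of the preprocessing steps above are simple enough to illustrate in a few lines: the lexicon-based value merging of Section 2.2 and the removal of explicit value mentions of Section 2.3. The sketch below is illustrative only; the tiny lexicon and the regular expressions are assumptions, not the released curation scripts.

```python
# Illustrative sketch of two curation steps (Sections 2.2-2.3), not the released code:
# (1) merge near-duplicate attribute values via a small lexicon,
# (2) strip explicit value mentions from product text so the value becomes implicit.
import re

VALUE_LEXICON = {            # tiny example lexicon; the real one is much larger
    "short-sleeve": "Short Sleeve",
    "short sleeves": "Short Sleeve",
    "short sleeved": "Short Sleeve",
    "flower": "Floral",
    "tartan": "Plaid",
    "round neck": "Crew Neck",
}

def canonicalize(value: str) -> str:
    """Map a surface variant to its canonical attribute value."""
    return VALUE_LEXICON.get(value.strip().lower(), value.strip())

def make_implicit(title: str, value: str) -> str:
    """Delete explicit mentions of the gold value (and its variants) from the title."""
    variants = [value] + [k for k, v in VALUE_LEXICON.items() if v == value]
    for v in variants:
        title = re.sub(re.escape(v), "", title, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", title).strip()

print(canonicalize("short sleeved"))                                          # Short Sleeve
print(make_implicit("Transparent Waterproof Rain Boot Shoes", "Rain Boot"))   # Transparent Waterproof Shoes
```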
Different attributes contain different numbers of value options that are meticulously selected and processed and we have a total of 158 diverse attribute values. In addition, we visualize the data distribution of domains, attributes and their values for our training set and evaluation set in Figure 3(a) and 3(b), respectively. It can be observed that compared to the training set, each attribute in the evaluation set has a considerably balanced value distribution, making it more suitable for zero-shot MLLMs evaluation. 3 Experiment & Benchmark In this section, we describe our experiment results evaluating the effectiveness of various MLLMs and the previous SOTA method on our ImplicitAVE dataset in diverse settings. 3.1 Experimental Setting Evaluation Setup We benchmark different models on our datasets from both attribute and domain levels: \u2022 Attribute-Level Results refer to the micro-F1 score calculated between the ground truth answer and the model-generated answer for each query/interested attribute. \u2022 Domain-Level Results refer to the micro-F1 score calculated between the ground truth answer and the model-generated answer for all query/interested attributes in each domain. We determine whether the generated answer is correct by checking whether the generated answer contains the true answer. 4 Method Language Model Clothing Footwear Jewelry&GA Food Home Product All Zero-shot methods BLIP-2 FlanT5XL-3B 38.05 49.21 72.72 61.54 70.02 59.75 BLIP-2 FlanT5XXL-11B 55.31 55.21 82.72 71.02 71.33 67.39 InstructBLIP Vicuna-7B 47.79 48.26 76.81 61.28 63.02 59.43 InstructBLIP FlanT5XXL-11B 62.83 63.41 83.18 73.58 73.96 71.49 LLaVA Vicuna-7B 22.12 39.74 62.72 49.23 57.76 47.82 LLaVA-1.5 Vicuna-7B 26.54 67.72 41.95 73.85 66.96 59.69 LLaVA-1.5 Vicuna-13B 49.12 63.72 81.81 76.15 80.31 71.86 Qwen-VL-Chat Qwen-7B 32.30 41.01 67.27 55.64 57.11 51.49 Qwen-VL Qwen-7B 59.73 57.72 84.09 76.92 73.96 70.86 GPT-4V 77.43 81.39 90.45 90.77 89.93 86.77 Representative & SOTA methods (Fine-Tuned) DEFLATE (ACL 2023) T5-Base-770M 54.42 71.61 67.73 52.56 61.71 61.24 LAVIN (NeurIPS 2023) LLaMA-7B 65.93 75.39 78.64 60.77 64.33 67.83 Table 4: Domain-level results. Analysis and representative error cases are provided in Section 3.2.1. Bold black shows best results in each block (zero-shot or finetuning), bold blue shows best results overall. Models for Zero-Shot We utilize the following multimodal LLM frameworks in zero-shot settings: \u2022 BLIP-2 (Li et al., 2023) proposes a Query Transformer and employs an efficient two-stage visionand-language pre-training strategy leveraging a frozen image encoder and an LLM. We provide benchmarks of BLIP-2 with two backbone LLM models, FLAN-T5-XL and FLAN-T5-XXL. \u2022 InstructBLIP (Dai et al., 2023) enhances visionlanguage models through instruction tuning with an instruction-aware Query Transformer introduced. We also report the performance with two backbone LLMs, Vicuna-7B and FLAN-T5-XXL. \u2022 LLaVA (Liu et al., 2023b) connects the visual encoder of CLIP (Radford et al., 2021) with the language decoder, and performs fine-tuning on GPT-4 generated language-image instructions. We provide benchmarks of LLaVA with Vicuna-13B. \u2022 LLaVA-1.5 (Liu et al., 2023a) advances its predecessor by focusing on efficient visual instruction tuning, integrating a fully-connected visionlanguage cross-modal connector for enhanced interaction between visual and textual modality. We provide benchmarks of LLaVA-1.5 using Vicuna7B and Vicuna-13B as the language models. 
\u2022 Qwen-VL (Bai et al., 2023) proposes a novel visual receptor and a position-aware adapter, optimizing through a three-stage training pipeline on a multilingual and multimodal dataset. We report the performance of both Qwen-VL and the chat version, Qwen-VL-Chat.
\u2022 GPT-4V2 integrates vision into the GPT-4 architecture and is one of the cutting-edge closed-source LLMs fine-tuned with reinforcement learning from human feedback.
2https://chat.openai.com/
(a) Training Set (b) Evaluation Set
Figure 3: Data distribution of domains, attributes, and attribute values for training and evaluation sets. (A full-size version is attached to our appendix Figure 10.)
Models for Finetuning Due to resource constraints, we fine-tuned and evaluated the following two open-source models in both few-shot and full-data tuning settings:
\u2022 LaVIN (Luo et al., 2023) introduces a novel mix-of-modularity adaptation module, optimizing the integration of visual inputs into large language models through lightweight adapters and enabling efficient end-to-end training.
\u2022 DEFLATE (Zhang et al., 2023) is a multi-modal generative-discriminative framework designed for both explicit and implicit attribute value extraction and is the previous SOTA model for implicit AVE.
Domains | Attributes | # Values | InstructBLIP (FlanT5XXL-11B) | LLaVA 1.5 (Vicuna 13B) | Qwen-VL (Qwen-7B) | GPT-4V | DEFLATE (T5-Base-770M) | LAVIN (LLaMA-7B)
Food | Flavor | 14 | 72.66 | 84.17 | 89.21 | 97.12 | 51.08 | 53.24
Home | Material | 13 | 74.05 | 61.39 | 67.09 | 84.81 | 77.22 | 82.28
Jewelry&GA | Pattern | 10 | 81.08 | 80.18 | 89.19 | 90.99 | 61.26 | 78.38
Footwear | Athletic Shoe Style | 12 | 73.95 | 63.03 | 57.98 | 84.03 | 80.67 | 78.15
Clothing | Neckline | 11 | 53.64 | 25.45 | 52.73 | 78.18 | 50.91 | 57.27
Food | Form | 9 | 70.93 | 59.30 | 75.58 | 86.05 | 67.44 | 81.40
Home | Special Occasion | 8 | 90.79 | 92.11 | 88.15 | 98.68 | 72.37 | 68.42
Clothing | Sleeve Style | 5 | 62.00 | 46.00 | 66.00 | 66.00 | 34.00 | 70.00
Footwear | Boot Style | 6 | 76.47 | 73.53 | 72.05 | 88.24 | 75.00 | 83.82
Jewelry&GA | Material | 5 | 81.36 | 93.22 | 88.14 | 94.92 | 77.97 | 86.44
Food | Container | 4 | 87.50 | 95.00 | 80.00 | 87.50 | 52.50 | 60.00
Footwear | Heel Height | 4 | 58.00 | 54.00 | 54.00 | 86.00 | 62.00 | 72.00
Clothing | Shoulder Style | 3 | 88.46 | 42.31 | 80.77 | 80.77 | 69.23 | 61.54
Home | Attachment Method | 2 | 45.00 | 100.00 | 100.00 | 100.00 | 90.00 | 90.00
Table 5: Attribute-level results. Analysis and representative error cases are provided in Section 3.2.2. Best results per attribute are shown in bold blue.
3.2 Experimental Results
3.2.1 Domain-Level Results
We present the domain-level results of all evaluated models in Table 4. GPT-4V outperformed every other model, in both the zero-shot and fine-tuned settings, in every single domain. Of the two fine-tuned models, LAVIN outperformed DEFLATE in every domain, by a minimum of 2.62 points (in the Home Product domain) and a maximum of 11.51 points (in the Clothing domain). Among the open-source MLLMs, no single model outperformed all others across all domains, but Qwen-VL had the best scores in the Jewelry&GA and Food domains. From Table 4 we also note that, apart from LLaVA 1.5 in the Footwear domain, every model with multiple variants of different LLM sizes performed significantly better on average with the larger-LLM variant than with the smaller one in each domain. For example, in the Clothing domain, there was a minimum improvement of 15.04 micro-F1 points from the variant with the smaller LLM (InstructBLIP w/ Vicuna 7B) to the variant with the larger LLM (InstructBLIP w/ FLAN-T5-XXL), and an overall average increase of 18.29 micro-F1 points when using a larger-LLM variant in the Clothing domain. Similar trends can be seen across all domains. Additionally, among zero-shot methods, Clothing had the lowest micro-F1 of all domains for every model and model variant except BLIP-2 w/ FLAN-T5-XL and Qwen-VL. This leads us to believe that the Clothing domain is the most challenging domain in the dataset.
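The correctness criterion behind all of the scores above is the containment check described in Section 3.1: a generated answer counts as correct if it contains the ground-truth value. A minimal Python sketch of that check and of the resulting micro-F1 (which, with exactly one gold value per query, coincides with accuracy) is given below; the function names and the simple lowercase matching are our own illustrative assumptions rather than the exact evaluation code.

def is_correct(generated: str, gold: str) -> bool:
    # An answer is counted as correct if the generated text contains the true value.
    return gold.lower() in generated.lower()

def micro_f1(predictions: list[str], golds: list[str]) -> float:
    # With one prediction and one gold value per query, micro precision, micro recall,
    # and micro-F1 all reduce to the fraction of correct answers.
    assert len(predictions) == len(golds)
    correct = sum(is_correct(p, g) for p, g in zip(predictions, golds))
    return correct / len(golds) if golds else 0.0

print(micro_f1(['This dress has a v-neck collar'], ['V-Neck']))  # 1.0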
We performed a comprehensive manual investigation and we believe there are two primary reasons why the Clothing domain presents more challenges, while other domains such as the Home domain are comparatively easier (we show examples from our manual investigation in Figures 8 and 9 for clarity): (1) Attributes within the Clothing domain demand a more nuanced understanding of local details in product images, for example, the attribute \u2018Sleeve Style\u2019 in cases 1-4 and \u2018Neckline\u2019 in cases 7-12 (Figure 8). In contrast, attributes in the Home domain only require a global understanding of product pictures and text, such as the attribute \u2018Special Occasion\u2019 in cases 13-16, and \u2018Shape\u2019 and \u2018Material\u2019 in cases 17 and 21 (Figure 9). (2) The values of attributes in the Home domain are significantly more straightforward to identify than those in the Clothing domain. For instance, the attribute \u2018Special Occasion\u2019 includes values like [\u2018Birthday\u2019, \u2018Christmas\u2019, \u2018Easter\u2019, \u2018Graduation\u2019, \u2018Halloween\u2019, \u2018Patriotic\u2019, \u2018Thanksgiving\u2019], which are clearly more distinguishable than the values for \u2018Sleeve Style\u2019 [\u2018Sleeveless\u2019, \u2018Long Sleeve\u2019, \u20183/4 Sleeve\u2019, \u2018Strappy\u2019, \u2018Short Sleeve\u2019] in the Clothing domain.
3.2.2 Attribute-Level Results
Table 5 presents the attribute-level performance of all evaluated models. As was observed in Table 4, GPT-4V vastly outperforms all other models. We can see in Table 5 that only for the \u2018Shoulder Style\u2019 (InstructBLIP), \u2018Container\u2019 (LLaVA 1.5), and \u2018Sleeve Style\u2019 (LAVIN) attributes does a model outperform GPT-4V.
Domains | GPT-4V | Qwen-VL | LLaVA-1.5 | InstructBLIP | BLIP-2
Clothing | 77.43 | 59.73 | 49.12 | 62.83 | 55.31
Footwear | 81.39 | 57.72 | 67.72 | 63.41 | 55.21
Attributes | | | | |
Sleeve Style | 66.00 | 66.00 | 46.00 | 62.00 | 50.00
Shaft Height | 63.33 | 35.00 | 61.66 | 26.67 | 30.00
Season | 65.00 | 57.50 | 65.00 | 60.00 | 62.50
Neckline | 78.18 | 52.73 | 25.45 | 53.64 | 48.18
Average | 68.13 | 52.81 | 49.53 | 50.58 | 47.67
Table 6: Examples of challenging domains & attributes.
InstructBLIP struggled significantly with the \u2018Attachment Method\u2019 attribute, as did LLaVA 1.5 with \u2018Shoulder Style\u2019, compared to other models. On the other hand, Table 5 shows that both fine-tuned models perform better than all of the open-source MLLMs in the zero-shot setting for the \u2018Heel Height\u2019 attribute. This may indicate that there are attributes in the dataset for which the pretrained knowledge of MLLMs is not sufficient for implicit value extraction, and fine-tuning is needed to learn the mapping between instances of these attributes and their correct attribute values. In addition, all models struggled on the \u2018Sleeve Style\u2019 and \u2018Neckline\u2019 attributes relative to their performance on other attributes. Representative error cases for different attributes are presented in Figures 8 and 9 in Appendix C along with a comprehensive error analysis. Here we provide our observations from the attribute-level error analysis: (1) Models often confuse attribute values that are similar yet distinct, such as \u20183/4 Sleeve\u2019 versus \u2018Long Sleeve\u2019 in cases 1-2, \u2018Short Sleeve\u2019 versus \u2018Sleeveless\u2019 in cases 3-4, and \u2018Crew Neck\u2019 versus \u2018Scoop Neck\u2019 in case 8 (Figure 8).
(2) Attributes that demand a detailed understanding of small image parts typically challenge models, leading to errors; for instance, mistakes in identifying \u2018Shoulder Style\u2019 in cases 5-6 and \u2018Neckline\u2019 in cases 7-9 (Figure 8). (3) Errors can also arise from conflicting modality inferences, as seen in case 13 (Figure 9), where the phrase \u2018Snow Village\u2019 in the product text suggested Christmas, but the image aligned more with Halloween.
3.2.3 Challenges and Opportunities
Challenging Domains & Attributes: It can be observed in Tables 4 and 5 that GPT-4V works well on some domains and attributes, but not on all of them; e.g., it only achieves 77.4% micro-F1 on the Clothing domain and 66.0% for the Sleeve Style attribute. Some examples of challenging domains and attributes, together with the performance of various MLLMs on them, are highlighted in Table 6. Moreover, the open-source models lag behind GPT-4V in many domains and attributes; our dataset provides a benchmark that exposes this gap and gives researchers an opportunity to close it. Furthermore, inspired by the error cases in Section 3.2.2 and Appendix C, we point out some remaining challenges and opportunities:
Model-Aspect: (1) Enhance the ability to understand image details, including small areas and text in images; (2) Devise mechanisms to distinguish similar attribute values; (3) Properly handle conflicting modality inferences; (4) Reduce the performance gap in implicit AVE between open-source models and advanced closed-source models like GPT-4V.
Dataset-Aspect: Our ImplicitAVE dataset does not consider multi-valued attributes and negative instances, i.e., \"none\" as attribute values. We leave this extension for future work.
Figure 4: Performance comparison in few-shot settings (5, 10, 15, 25, and 50 shots) of different domains.
3.2.4 Few-Shot Results
Figure 4 shows the performance comparison of the DEFLATE and LAVIN models in various few-shot settings. We note that in most K-shot settings, LAVIN outperforms DEFLATE by a noticeable amount. We also notice that the two models behave differently across domains. In the 5-shot setting, \u2018Food\u2019 was the lowest-scoring domain for DEFLATE whereas \u2018Food\u2019 was the highest-scoring domain for LAVIN, but at 10 shots the domain trends for both models became similar (i.e., the \u2018Food\u2019 and \u2018Home Product\u2019 domains were the two best-performing domains and \u2018Clothing\u2019 and \u2018Footwear\u2019 were the worst-performing domains). For DEFLATE, from 25-shot to 50-shot, the largest increase in micro-F1 was for the \u2018Clothing\u2019 and \u2018Footwear\u2019 domains, whereas the increase was less significant for the other domains. This indicates that the model\u2019s ability to learn the attributes and attribute values in the \u2018Clothing\u2019 and \u2018Footwear\u2019 domains may continue to improve as the number of training examples increases. On the contrary, LAVIN saw its biggest increase in micro-F1 for the \u2018Jewelry&GA\u2019 and \u2018Footwear\u2019 domains, hinting that increasing the training examples for these domains would enable LAVIN to substantially improve its ability to categorize instances of these two domains.
(a) Comparison of methods. (b) Comparison of modality.
Figure 5: Performance comparison of DEFLATE, LAVIN, and GPT-4V on different modalities.
3.2.5 Modality-Level Results
Figure 5 visualizes performance comparisons of DEFLATE, LAVIN, and GPT-4V with different modalities. Firstly, it is evident that for LAVIN and DEFLATE, the image-only modality performed extremely poorly compared to the text-only and combined modalities. This leads us to believe that these models\u2019 image understanding capabilities may be too weak to extract implicit values from product images. However, it is worth noting that in all domains except \u2018Footwear\u2019 for DEFLATE, both LAVIN and DEFLATE perform better with multimodal inputs than with the text-only modality, indicating that the image information does in fact help the models predict attribute values. With GPT-4V we notice very high performance in the image-only modality and only minimal further improvement in the multimodal setting. This speaks to the strength of GPT-4V\u2019s zero-shot image classification capabilities, especially in comparison to LAVIN and DEFLATE. Even though GPT-4V boasts impressive performance in most regards, it is worth noting that GPT-4V\u2019s text-only performance in the \u2018Clothing\u2019 domain was especially poor: it scored even lower than the text-only scores of LAVIN and DEFLATE, and, in the \u2018Clothing\u2019 domain, the multimodal performance of GPT-4V was lower than its image-only performance, indicating that the text component confused the model and caused it to perform worse than it did without the text.
Figure 6: The influence of prompts (detailed in Table 7 in Appendix E) on different models.
3.3 Ablation Study on Prompt Templates
In order to obtain baseline results that accurately reflect the quality of our dataset, we conducted ablations on the prompt for the open-source MLLMs. Observing drastic micro-F1 score differences on the evaluation set when using different prompts in the early stages of experimentation led us to conduct a standardized ablation study on 8 different prompts, listed in Table 7.
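As a concrete illustration of the standardized setup used in this ablation, the sketch below assembles a prompt from the three components described in the next paragraph (context, question, and options) and records the decoding hyperparameters that were fixed across models. The prompt wording itself is hypothetical, since the eight actual templates are only listed in Table 7; only the hyperparameter values are taken from this section.

def build_prompt(context: str, attribute: str, options: list[str]) -> str:
    # Three components shared by all prompt variants: product context with the explicit
    # value removed, a question about the attribute, and the candidate options.
    option_str = ', '.join(options)
    return (
        f'Context: {context}\n'
        f'Question: What is the {attribute} of this product?\n'
        f'Options: {option_str}\n'
        'Answer:'
    )

# Decoding settings fixed across all models and prompts (values from this section).
RANDOM_SEED = 42
GENERATION_CONFIG = dict(temperature=1.0, top_p=0.8, max_new_tokens=17,
                         min_length=1, num_beams=5)

print(build_prompt('Cozy cotton tee with ribbed collar', 'Sleeve Style',
                   ['Short Sleeve', 'Long Sleeve', '3/4 Sleeve', 'Sleeveless', 'Strappy']))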
Each prompt had three components: context containing the title of the product with explicit mention of the attribute value removed, question, and options to answer from. To conduct a fair evaluation of the prompts, across all models we fixed the random seed at 42 as well as the hyperparameters: temperature = 1, top_p = 0.8, max_new_tokens = 17, min_length = 1, and num_beams = 5. Our results are shown in Fig 6 and the best prompt for each model type was used for all variants of that model. 8 4 Related Work 4.1 Attribute Value Extraction Dataset Attribute Value Extraction (AVE) has emerged as a crucial task for online shopping, aiming to identify the values of product attributes from various data sources. At the heart of many e-commerce applications, such as product comparison, retrieval, recommendation, and the construction of product graphs and online shop assistants, lies the extraction of attribute values (Zalmout et al., 2021). Although several AVE datasets have been introduced, each exhibits certain limitations, as shown in Table 1. The OpenTag dataset (Zheng et al., 2018), one of the early datasets collected from Amazon, highlights the importance of open-world value sets. In contrast, the AE-110K dataset (Xu et al., 2019) expands the scope of AVE datasets to include more products, a broader range of attributes, and denser attribute coverage per product, though it lacks human expert annotation. The AdaTag dataset (Yan et al., 2021) focuses on the rich information contained in product bullets, excluding product descriptions, which facilitates more efficient training and inference for such tasks but lacks diversity in product domains and is not publicly available. The MAVE dataset (Yang et al., 2022), a large public dataset for AVE research, encompasses a wide range of categories and diverse attributes, constructing structured product files as text inputs. However, in real-world scenarios, text information alone may not imply certain attributes of interest, making product images a complementary source of information for indicating or validating the answers to specific attributes. To address this, the MEPAVE (Zhu et al., 2020) and DESIRE (Zhang et al., 2023) datasets were introduced to include multimodal product information such as product titles, descriptions and images. While several explicit AVE datasets exist, implicit AVE is much more challenging and under-explored. To advance multi-modal AVE research further, we introduce the first publicly available multimodal implicit AVE dataset, ImplicitAVE, featuring careful human annotation and a versatile range of items from multiple domains. Our dataset is considerably different from DESIRE, as detailed in Appendix A. 4.2 Multimodal Large Language Models Multimodal Large Language Models (MLLMs) have demonstrated impressive performance on a variety of tasks (Li et al., 2023; Liu et al., 2023b,a; Bai et al., 2023; Ye et al., 2023; Luo et al., 2023; Dong et al., 2024). BLIP-2 (Li et al., 2023) uses frozen pre-trained image models and language models, and proposes a lightweight querying transformer Q-Former to bridge the two modalities. InstructBLIP (Dai et al., 2023) outperforms BLIP2 (Li et al., 2023) by using vision-language instruction tuning, where the instruction tuning data is collected from publicly available datasets, by manually transforming them into instruction tuning format. 
To improve the diversity and in-depth reasoning in the instruction, LLaVa (Liu et al., 2023b) proposes to use language-only GPT-4 to construct multimodal language-image instruction tuning data. mPLUG-Owl (Ye et al., 2023) and Qwen-VL (Bai et al., 2023) propose novel training paradigms for LLMs. However, since most popular open-source MLLMs are parameter-heavy, LAVIN (Luo et al., 2023) proposes a novel and efficient solution for vision-language instruction tuning by adopting lightweight modules, i.e., adapters, to bridge the gap between LLMs and vision modules, which does not require expensive vision-language pretraining to align text and image embedding beforehand. Despite achieving significant progress, the performance of MLLMs on implicit AVE has not been well-studied. Recent work EIVEN (Zou et al., 2024) finetuned an efficient MLLM framework for implicit AVE but did not compare with exiting MLLMs in zero-/few-shot settings. Our work establishes the first comprehensive benchmark of multimodal LLMs for implicit AVE under diverse settings and reveals intriguing insights and potential future research directions in Section 3.2.3. 5 Conclusion In this paper, we introduced ImplicitAVE, the first publicly accessible multimodal dataset specifically designed for implicit attribute value extraction, aimed at overcoming the limitations of existing datasets focused on explicit attribute values. By carefully curating attribute values and incorporating both implicit attribute values and product images, ImplicitAVE comprises 6.8K training instances and 1.6K human re-annotated high-quality evaluation instances across five diverse domains. Moreover, we benchmarked the performance of six recent multimodal large language models on it under diverse settings, highlighting the challenges of implicit value extraction. In the future, we plan to further expand our ImplicitAVE dataset to include multi-valued attributes and negative instances. 9 Acknowledgements This research is partially supported by NSF grant #210751 and UIC DPI Seed Program. Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF. We thank our reviewers for their insightful feedback and comments which helped improve the quality of our paper. 6 Limitation Our ImplicitAVE dataset does not consider multivalued attributes and negative instances, i.e. \"none\" as attribute values. We leave this extension as future work. Due to computational resource constraints and limited budgets, we did not evaluate open MLLMs with parameters larger than 13B. 7 Ethics Statement The datasets that we sourced from are publicly available. In this work, we propose a multimodal Implicit AVE dataset and provide a comprehensive benchmark of MLLMs. We do not expect any direct ethical concern from our work."
},
{
"url": "http://arxiv.org/abs/2404.14180v1",
"title": "Metric Distortion under Group-Fair Objectives",
"abstract": "We consider a voting problem in which a set of agents have metric preferences\nover a set of alternatives, and are also partitioned into disjoint groups.\nGiven information about the preferences of the agents and their groups, our\ngoal is to decide an alternative to approximately minimize an objective\nfunction that takes the groups of agents into account. We consider two natural\ngroup-fair objectives known as Max-of-Avg and Avg-of-Max which are different\ncombinations of the max and the average cost in and out of the groups. We show\ntight bounds on the best possible distortion that can be achieved by various\nclasses of mechanisms depending on the amount of information they have access\nto. In particular, we consider group-oblivious full-information mechanisms that\ndo not know the groups but have access to the exact distances between agents\nand alternatives in the metric space, group-oblivious ordinal-information\nmechanisms that again do not know the groups but are given the ordinal\npreferences of the agents, and group-aware mechanisms that have full knowledge\nof the structure of the agent groups and also ordinal information about the\nmetric space.",
"authors": "Georgios Amanatidis, Elliot Anshelevich, Christopher Jerrett, Alexandros A. Voudouris",
"published": "2024-04-22",
"updated": "2024-04-22",
"primary_cat": "cs.GT",
"cats": [
"cs.GT"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "One of the main subjects of study in (computational) social choice theory is to identify the capabilities and limitations of making appropriate collective decisions when given the preferences of individuals (or, agents) over alternative outcomes. Tis is done either by an axiomatic analysis of the potential decision-making mechanisms (which are also referred to as voting rules) [Brandt et al., 2016], or a qualitative analysis that aims to quantify the possible loss of e\ufb03ciency when the agents have private cardinal utilities or costs for the alternatives but are only able to communicate partial information about their preferences, for example using ordinal information. Tis ine\ufb03ciency is quanti\ufb01ed by the notion of distortion which compares the quality of the computed outcome to that of the ideal outcome that could have been computed if full information about the underlying utilities of the agents was available. Since its introduction more than 15 year ago, distortion has been studied for many di\ufb00erent social choice problems (such as voting applications, resource allocation, and facility location) and under di\ufb00erent restrictions about the cardinal preferences of the agents (such as assuming unit-sum utilities or metric costs). For a more detailed overview see our discussion of the related work below and the survey of Anshelevich et al. [2021]. With few exceptions, the distortion literature has focused on voting setings in which the agents are assumed to be independent of each other. As such, the quality of the di\ufb00erent outcomes is typically calculated using measures such as the social welfare (de\ufb01ned as the total or average utility of all agents) or the egalitarian welfare (de\ufb01ned as the minimum utility over all agents). However, there are social choice applications in which, while the agents can act autonomously, they are also part of larger groups 1 and care about the overall welfare of the members of their groups, but not that much about other groups. As a toy example, consider the case of a university department, the academics of which are members of di\ufb00erent research groups. For several maters, such as electing the head of the department, each academic participates individually in the voting process, but the outcome might a\ufb00ect di\ufb00erent groups in di\ufb00erent ways. Due to this, objectives such as the social and the egalitarian welfare do not fully capture the quality of an outcome according to the structure of the problem. Instead, we would like objectives that take into account the partition of the agents into groups to measure e\ufb03ciency and also satisfy other desired properties such as fairness or some form of balance among di\ufb00erent groups. Two such natural objectives were \ufb01rst introduced by Anshelevich et al. [2022] who studied a metric district-based single-winner voting seting, where the agents have costs for the alternatives that are determined by their distances in a metric space, and the agents are furthermore partitioned into groups that are called districts. 
Te \ufb01rst objective is Max-of-Avg, de\ufb01ned as the maximum over all groups of the average total distance of the agents within each group from the chosen alternative, and the second one is Avg-of-Max, de\ufb01ned as the average over all groups of the maximum distance among any agent within each group from the chosen alternative.1 By their de\ufb01nition, to optimize them, we need to choose outcomes that strike a balance between the average or maximum cost of any group as a whole, thus achieving fairness among di\ufb00erent groups, on top of absolute e\ufb03ciency. 1.1 Our Contribution We study a single-winner voting seting with n agents and m alternatives that lie in a metric space. Furthermore, the agents are partitioned into k disjoint groups. Given some information about the groups of agents, as well as the distances between agents and alternatives in the metric space, our goal is to choose an alternative as the winner that is (approximately) e\ufb03cient with respect to the Max-of- Avg and Avg-of-Max objectives that were de\ufb01ned above. In particular, we show tight bounds on the distortion of di\ufb00erent classes of deterministic mechanisms, depending on the type of information they have access to in order to decide the winner. We start by considering group-oblivious mechanisms which have no knowledge of the groups of agents. In Section 3, we consider the class of full-information group-oblivious mechanisms which have complete information about the distances between agents and alternatives in the metric space. For such mechanisms, we show a tight bound of 3 for Max-of-Avg, a tight bound of 3 for Avg-of-Max on instances in which the groups are symmetric (i.e., all groups have the same size), and a tight bound of k for Avg-of-Max on general instances. In Section 4, we turn our atention to ordinal-information group-oblivious mechanisms which are given as input the ordinal preferences of the agents over the alternatives in the form of rankings from the smallest distance to the largest. We show a tight bound of 5 for Max-of-Avg, a tight bound of 5 for Avg-of-Max on instances with symmetric groups, and a tight bound of 2k + 1 for Avg-of-Max on general instances. An overview of our results for group-oblivious mechanisms is given in Table 1. In Section 5 we turn our atention to group-aware mechanisms which know the structure of the groups of agents. Having full information about the metric space on top of this knowledge about the groups makes the problem of optimizing the Max-of-Avg and the Avg-of-Max objectives trivial by simply calculating the cost of every alternative. Consequently, we consider group-aware mechanisms that have access to the ordinal preferences of the agents over the alternatives instead. For instances with two alternatives, we prove a tight bound of 3 on the distortion of such mechanisms for both objectives. For general instances, we show that the distortion is still 3 when we are allowed to exploit more information about the metric space for the upper bound. In particular, we assume access to the 1Observe that both of these objectives are essentially combinations of the social cost and the egalitarian cost, which are the analogues of the social welfare and egalitarian welfare when the agents have costs for the alternatives rather than utilities. 
2 Full-information Ordinal-information Max-of-Avg 3 (Teorems 3.1, 3.2) 5 (Teorems 4.1, 4.3) Avg-of-Max Symmetric 3 (Teorems 3.3, 3.4) 5 (Teorems 4.4, 4.5) Asymmetric k (Teorems 3.5, 3.6) 2k + 1 (Teorems 4.6, 4.7) Table 1: An overview of our tight distortion bounds for the class of group-oblivious mechanisms. distances between the alternatives. Resolving the distortion of group-aware mechanisms is probably the most challenging open question that our work leaves open; we discuss this in Section 6. 1.2 Related Work Inspired by worst-case analysis, Procaccia and Rosenschein [2006] introduced the distortion as a means of quantifying the ine\ufb03ciency of voting mechanisms that base their decisions on the ordinal prefer- ences of the agents over the alternative outcomes. Without restricting the possible underlying cardinal utilities of the agents, the distortion can be shown to be unbounded for most natural mechanisms. Tis led to subsequent works to study voting setings where it is assumed that the agents have underlying normalized utilities [Boutilier et al., 2015, Caragiannis et al., 2017, Ebadian et al., 2022, 2023a], or costs determined by distances in an unknown metric space [Anshelevich et al., 2018, Gkatzelis et al., 2020, Kizilkaya and Kempe, 2022, Charikar and Ramakrishnan, 2022, Charikar et al., 2024, Caragiannis et al., 2022, Jaworski and Skowron, 2020], or combinations of the two [Gkatzelis et al., 2023]. Te distortion has also been studied for other social choice problems, such as participatory budgeting [Benad` e et al., 2021], matching [Filos-Ratsikas et al., 2014, Amanatidis et al., 2022], as well as clustering [Anshelevich and Sekar, 2016, Burkhardt et al., 2024] and other graph problems where only ordinal information is available [Abramowitz and Anshelevich, 2018]. We refer to the survey of Anshelevich et al. [2021] for a more detailed exposition of the distortion framework and the problems it has been applied to. While the bulk of the distortion literature has focused on setings where ordinal or even less than ordinal information is available about the preferences of the agents, there has been recent interest in setings where it is also possible to elicit some cardinal information. For example, the agents might be able to communicate a number of bits about their preferences [Mandal et al., 2019, 2020, Kempe, 2020], or answer value queries related to their utilities about the alternatives [Amanatidis et al., 2021, 2022, 2024, Ma et al., 2021, Caragiannis and Fehrs, 2023, Burkhardt et al., 2024], or provide more information in the form of intensities [Abramowitz et al., 2019, Kahng et al., 2023] or threshold approvals [Bhaskar et al., 2018, Benad` e et al., 2021, Ebadian et al., 2023b, Anshelevich et al., 2024, Lati\ufb01an and Voudouris, 2024]. In our work, we also consider more than ordinal information in the case of full-information group-oblivious mechanisms, where the main source of ine\ufb03ciency comes from not knowing the struc- ture of the groups of agents. As already previously mentioned, the particular objective functions (Max-of-Avg and Avg-of-Max) that we consider in this paper have been studied in the context of distortion by Anshelevich et al. [2022] for single-winner distributed metric voting, and subsequently by Voudouris [2023] for the same seting, and by Filos-Ratsikas et al. [2024] for distributed facility location on the line. 
In those setings, similarly to our model here, the agents are partitioned into groups that are called districts, and a mechanism works in two steps: First, for each district, it decides a representative alternative or location based on given information about the preferences of the agents in the district, and then it decides a winner or a facility location based on information about the district representatives. Such distributed mechanisms can be thought of as members of the class of group-aware mechanisms in our seting when the groups are assumed to be known. However, they are very restricted as they essentially forget any detailed 3 in-group information in the second step and instead rely only on the group representatives to make \ufb01nal decisions. Te Max-of-Avg and Avg-of-Max objectives have also been considered in the context of mechanism design without money for altruistic facility location problems by Zhou et al. [2022, 2024].",
"main_content": "An instance I of our voting problem consists of a set N of n \u22652 agents and a set A of m \u22652 alternatives. Agents and alternatives are represented by points in a metric space. We denote by d(x, y) the distance between any two points x and y in the metric space; the distance function satisfies the properties d(x, x) = 0, d(x, y) = d(y, x), and the triangle inequality d(x, y) \u2264d(x, z) + d(z, y) for any x, y, z \u2208N \u222aA. Te agents are partitioned into k \u22652 pairwise disjoint groups which may be known or unknown; Let G := {g1, . . . , gk} be the set of groups, and denote by ng the size of any group g \u2208G. If the groups are symmetric, to simplify our notation we write ng = \u03bb = n/k. A mechanism M takes as input some information info(I) related to the groups of agents and the distances between agents and alternatives in the metric space. Based on this information, it outputs one of the alternatives as the winner, denoted by M(info(I)). When the groups are assumed to be unknown (Sections 3 and 4), we consider two different classes of group-oblivious mechanisms depending on the type of information related to the metric space they have access to: \u2022 Full-information group-oblivious mechanisms have complete knowledge of the metric space, that is, they have access to the distances between all agents and alternatives. \u2022 Ordinal-information group-oblivious mechanisms have access to the ordinal preferences of the agents over the alternatives according to their distances; that is, if d(i, x) < d(i, y) for an agent i and alternatives x and y, then i ranks x higher y. When the groups are assumed to be known (Section 5), we consider group-aware mechanisms that have access to the ordinal preferences of the agents and\u2014potentially\u2014information related to the distances between alternatives (but not between agents, or between agents and alternatives). We are interested in designing socially efficient mechanisms according to collective cost objective functions that take the groups of the agents into account. In particular, we focus on the following two objectives: \u2022 Te Max-of-Avg cost of an alternative x in a given instance I is the maximum over all groups of the average total distance of the agents within each group from x, that is, Max-of-Avg(x | I) = max g\u2208G \ufffd1 ng ng \ufffd i\u2208g \ufffd i\u2208g d(i, x) \ufffd . \u2022 Te Avg-of-Max cost of an alternative x in a given instance I is the average over all groups of the maximum distance of any agent within each group from x, that is, Avg-of-Max(x | I) = 1 k k \ufffd g\u2208G \ufffd g\u2208G max i\u2208g d(i, x). Whenever the cost objective is clear from context, we will simplify our notation and write cost(x | I) for the cost of alternative x in a given instance I. We will simplify our notation even more and write cost(x) when the instance is also clear from context. 4 Since the mechanisms we consider only have partial information about the groups of agents or the metric space, they cannot always identify the alternatives that optimize cost objectives which depend on the structure of the groups, like Max-of-Avg and Avg-of-Max. Te loss of e\ufb03ciency of a mechanism M is captured by its distortion, which is the worst-case ratio (over all possible instances) of the cost of the alternative chosen by M over the minimum possible cost of any alternative, that is sup I cost(M(info(I)) | I) minx cost(x | I) . 
Observe that the distortion of any mechanism is always at least 1; we de\ufb01ne 0/0 = 1 for consistency. We aim to reveal the best possible distortion that can be achieved by mechanisms in this group voting seting. 3 Full-Information Group-Oblivious Mechanisms We start the presentation of our technical results with the class of full-information group-oblivious mechanisms; recall that such mechanisms have complete access to the distances between all agents and alternatives, which means that their ine\ufb03ciency is solely due to being oblivious to how the agents are partitioned into groups. For the Max-of-Avg objective, we show a tight bound of 3 on the distortion of full-information mechanisms (Section 3.1). For the Avg-of-Max objective, we \ufb01rst show a tight bound of 3 for instances in which the groups are symmetric, and a tight bound of k for general instances with asymmetric groups (Section 3.2). 3.1 Max-of-Avg We begin by showing a lower bound of 3 on the distortion of full-information group-oblivious mechanisms for the Max-of-Avg objective using an instance with symmetric groups. Teorem 3.1. For Max-of-Avg, the distortion of any full-information group-oblivious mechanism is at least 3 \u2212\u03b5 for any \u03b5 > 0, even when there are only two alternatives and the groups are symmetric. Proof. Let \u03b5 > 0 be any constant and \u03bb \u2208N be such that \u03bb > 6 \u03b5 \u22122. Consider the following instance with n = \u03bb(\u03bb + 1) agents and two alternatives with known locations on the line of real numbers: \u2022 Alternative a is at 1 and alternative b is at 3; \u2022 Tere are \u03bb agents at 0, \u03bb(\u03bb \u22121) agents at 2, and \u03bb agents at 4. Due to the symmetric locations of the alternatives and the agents, any of the two alternatives can be chosen as the winner. We assume the winner is a, without loss of generality. Te agents might be partitioned into the following k = \u03bb + 1 symmetric groups of size \u03bb each: \u2022 Te \ufb01rst group consists of all the \u03bb agents at 4; \u2022 Each of the remaining \u03bb groups consists of one agent at 0 and \u03bb \u22121 agents at 2. Te total distance of the agents in the \ufb01rst group is 3\u03bb from a and \u03bb from b, whereas the total distance of the agents in each of the remaining groups is \u03bb from a and \u03bb + 2 from b. Hence, cost(a) = 3 and cost(b) = 1 + 2 \u03bb, leading to a distortion of at least 3\u03bb \u03bb+2 = 3 \u2212 6 \u03bb+2 > 3 \u2212\u03b5. 5 It is not hard to obtain a matching upper bound of 3 by using a mechanism that chooses the winner to be any alternative that minimizes the total distance of all agents. In Appendix A we present a re\ufb01ned analysis of this mechanism, by characterizing the worst-case distortion instances, and we obtain a distortion upper bound of 3 \u22122\u00b5 n , where \u00b5 is the smallest group size and n is the number of agents. Teorem 3.2. For Max-of-Avg, the distortion of a mechanism that returns an alternative who minimizes the total distance from all agents is at most 3. Proof. Let w be an alternative that minimizes the total distance from all agents, and let o be an optimal alternative (that minimizes the Max-of-Avg cost according to the unknown groups of the agents). By the de\ufb01nition of w, there must exist some group \u03b3 such that P i\u2208\u03b3 d(i, w) \u2264P i\u2208\u03b3 d(i, o); otherwise, the total distance of o from all agents would be strictly less than that of w, thus contradicting the choice of w. 
By the de\ufb01nition of the objective function, we also have that cost(o) \u2265 1 ng P i\u2208g d(i, o) for every group g. Denoting by gw the group that determines the cost of w and using the triangle inequality, we have cost(w) = 1 ngw X i\u2208gw d(i, w) \u2264 1 ngw X i\u2208gw (d(i, o) + d(w, o)) \u2264cost(o) + d(w, o). Using the triangle inequality and the property of group \u03b3 mentioned above, we further have that d(w, o) = 1 n\u03b3 X i\u2208\u03b3 d(w, o) \u22641 n\u03b3 X i\u2208\u03b3 (d(i, w) + d(i, o)) \u22642 \u00b7 1 n\u03b3 X i\u2208\u03b3 d(i, o) \u22642 \u00b7 cost(o). Combining the two, we obtain cost(w) \u22643 \u00b7 cost(o), i.e., the desired upper bound of 3. 3.2 Avg-of-Max For the Avg-of-Max objective, we \ufb01rst focus on instances where the groups are symmetric (that is, every group consists of the same number \u03bb = n/k of agents) and show a tight bound of 3. Teorem 3.3. For Avg-of-Max, the distortion of any full-information group-oblivious mechanism is at least 3 \u2212\u03b5 for any \u03b5 > 0, even when there are two alternatives and the groups are symmetric. Proof. Let \u03b5 > 0 be any constant and \u03bb \u2208N be such that \u03bb > 8 \u03b5 \u22123. We consider the same instance construction as in the proof of Teorem 3.1 on the line of real numbers. Recall that: \u2022 Alternative a is at 1 and alternative b is at 3; \u2022 Tere are \u03bb agents at 0, \u03bb(\u03bb \u22121) agents at 2, and \u03bb agents at 4. We assumed that the winner is a, which is without loss of generality due to symmetry. Te agents are partitioned into the k = \u03bb + 1 symmetric groups: \u2022 Te \ufb01rst group consists of all the \u03bb agents at 4; \u2022 Each of the remaining \u03bb groups consists of one agent at 0 and \u03bb \u22121 agents at 2. Terefore, cost(a) = (3 + \u03bb) /(\u03bb + 1) and cost(b) = (1 + 3\u03bb) /(\u03bb + 1), leading to a distortion of at least 3\u03bb+1 \u03bb+3 = 3 \u2212 8 \u03bb+3 > 3 \u2212\u03b5. 6 Te tight upper bound follows again by choosing any alternative that minimizes the total distance from all agents; hence, this very simple mechanism is best possible in terms of both the Max-of-Avg objective for general instances and the Avg-of-Max objective for instances with symmetric groups. Teorem 3.4. For Avg-of-Max and symmetric groups, the distortion of a mechanism that returns an alternative who minimizes the total distance from all agents is at most 3. Proof. Let w be an alternative that minimizes the total distance from all agents, and denote by o an optimal alternative (that minimizes the Avg-of-Max cost according to the k unknown groups of agents). Let S1, . . . , S\u03bb be any partition of the agents into \u03bb = n/k disjoint sets of size k such that each set consists of one agent per group; note that there are multiple such partitions of the agents since the groups are symmetric. By the de\ufb01nition of w, there must exist some \u2113\u2208[\u03bb] such that P i\u2208S\u2113d(i, w) \u2264 P i\u2208S\u2113d(i, o) since, otherwise, the total distance of o from the agents would be strictly less than that of w, thus contradicting the choice of w. Let ig be a most-distant agent in group g from w, i.e., ig \u2208 arg maxi\u2208g d(i, w). By matching each ig to a unique agent f(ig) \u2208S\u2113(i.e., f : {ig1, . . . , igk} \u2192S\u2113is a bijection), we can rewrite the property of set S\u2113as X g\u2208G d(f(ig), w) \u2264 X g\u2208G d(f(ig), o). 
In addition, by the de\ufb01nition of the objective function, we have that cost(o) \u22651 k X g\u2208G d(ig, o) and cost(o) \u22651 k X i\u2208S\u2113 d(i, o) = 1 k X g\u2208G d(f(ig), o). Hence, by applying the triangle inequality twice, we obtain cost(w) = 1 k X g\u2208G d(ig, w) \u22641 k X g\u2208G \u0000d(ig, o) + d(f(ig), o) + d(f(ig), w) \u0001 \u22643 \u00b7 cost(o), which shows the desired upper bound of 3. We now turn our atention to the general case where the groups might be asymmetric and show a tight bound of k. Teorem 3.5. For Avg-of-Max, the distortion of any full-information group-oblivious mechanism is at least k, even when there are two alternatives. Proof. Consider the following instance with n = 2k agents and two alternatives on the line of real numbers: \u2022 Alternative a is at 0 and alternative b is at 1; \u2022 Tere are k agents at 0 and k agents at 1. 7 Due to symmetry, given only this information, any of the two alternatives can be chosen as the winner. Without loss of generality, we assume the winner is a. In that case, however, the agents might be split into k groups as follows: \u2022 Te \ufb01rst group consists of all agents at 0 and one agent at 1; \u2022 Each of the remaining k \u22121 groups consists of a single agent at 1. Hence, cost(a) = 1 and cost(b) = 1/k, leading to a distortion of k. For the upper bound, we \ufb01rst remark that choosing any alternative that minimizesthe total distance from all agents (as we did in the case of Avg-of-Max, or Max-of-Avg with symmetric groups) leads to a distortion of at least 2k + 1. Nevertheless, we can achieve a matching bound of k by choosing any alternative that minimizes the maximum distance from the agents. Teorem 3.6. For Avg-of-Max, the distortion of a mechanism that returns an alternative who minimizes the maximum distance from any agent is at most k. Proof. Let w be the chosen alternative and o an optimal alternative. Let iw and io be the most distant agents from w and o, respectively. Ten, by the de\ufb01nition of w, d(iw, w) \u2264d(io, o). By the de\ufb01nition of iw, d(i, w) \u2264d(iw, w) for every agent i. Hence, cost(w) = 1 k X g\u2208G max i\u2208g d(i, w) \u22641 k X g\u2208G d(iw, w) = d(iw, w). On the other hand, cost(o) = 1 k X g\u2208G max i\u2208g d(i, o) \u22651 k d(io, o) \u22651 k d(iw, w). Consequently, the distortion is at most k. 4 Ordinal-Information Group-Oblivious mechanisms We now consider mechanisms that are given access to ordinal information about the preferences of the agents over the alternatives, but are still oblivious to how the agents are partitioned into groups. Recall that every agent i reports a ranking of the alternatives such that, if d(i, x) < d(i, y) for alternatives x and y, then i ranks x higher y. For the Max-of-Avg objective, we show a tight bound of 5 on the distortion of ordinal-information group-oblivious mechanisms. For the Avg-of-Max objective, we show that the distortion is exactly 5 when the groups are symmetric, and is exactly 2k + 1 when the groups are asymmetric. 4.1 Max-of-Avg We start by showing a lower bound of 5 on the distortion of any mechanism. Teorem 4.1. For Max-of-Avg, the distortion of any ordinal-information group-oblivious mechanism is at least 5 \u2212\u03b5 for any \u03b5 > 0, even when there are only two alternatives and the groups are symmetric. 8 Proof. Let \u03b5 > 0 be any constant and \u03bb \u22653 be an odd integer such that \u03bb > 4 \u03b5 \u22121. 
Consider an instance with n = \u03bb2 + \u03bb = \u03bb(\u03bb + 1) agents and two alternatives {a, b}; clearly \u03bb2 + \u03bb is an even number. Half of the agents prefer a and the other half prefer b. With this information, any of the two alternatives can be chosen as the winner, so assume that the winner is a. Te agents are partitioned into k = \u03bb + 1 symmetric groups of \u03bb agents each. Consider the scenario where the metric space is the line of real numbers and the grouping of the agents is as follows: \u2022 a is at 0 and b is at 2; \u2022 In the \ufb01rst group, all \u03bb agents prefer b and are all positioned at 2 + \u03bb+1 2\u03bb . \u2022 In each of the remaining \u03bb groups, there are \u03bb+1 2 agents that prefer a and are positioned at 1, and \u03bb\u22121 2 agents that prefer b and are positioned at 2. Te total distance of the agents in the \ufb01rst group is \u03bb \u00002 + \u03bb+1 2\u03bb \u0001 = 5\u03bb+1 2 from a and \u03bb\u00b7 \u03bb+1 2\u03bb = \u03bb+1 2 from b. In each of the remaining \u03bb groups, the total distance of the agents therein is \u03bb+1 2 + 2 \u00b7 \u03bb\u22121 2 = 3\u03bb\u22121 2 from a and \u03bb+1 2 from b. Consequently, cost(a) = 5\u03bb+1 2\u03bb (realized by the \ufb01rst group) and cost(b) = \u03bb+1 2\u03bb (realized by any of the groups), leading to a distortion of at least 5 \u2212 4 \u03bb+1 > 5 \u2212\u03b5. We now show that there are ordinal-information group-oblivious mechanisms which do achieve this best possible bound of 5. Te domination graph of an alternative x is a bipartite graph Gx = (N, N, Ex) with the set of agents on both sides and set of (directed) edges such that (i, j) \u2208Ex if and only if i prefers x over the most-preferred alternative top(j) of j, that is, d(i, x) \u2264d(i, top(j)). We focus on alternatives whose domination graphs atain perfect matchings. Tere are several voting rules that compute alternatives with this property, such as PluralityMatching [Gkatzelis et al., 2020] and PluralityVeto [Kizilkaya and Kempe, 2022]. Te distortion of these rules in terms of the social cost (the total distance of the agents) is known to be exactly 3. We show the following property of such alternatives, which will be useful in some of our upper bounds. Lemma 4.2. Given an instance, let x be some alternative whose domination graph atains a perfect matching, and y any other alternative. Ten, d(x, y) \u22644 n \u00b7 X g\u2208G X i\u2208g d(i, y). Proof. Let \u00b5 = (\u00b5(i))i be the perfect matching in the domination graph Gx of x; that is, agent i is matched to agent \u00b5(i). By the triangle inequality, the property of the domination graph that d(i, x) \u2264 d(i, top(\u00b5(i))), the fact that M is a perfect matching, and the fact that d(i, top(i)) \u2264d(i, y) for any i, we have n \u00b7 d(x, y) = X i\u2208N d(x, y) \u2264 X i\u2208N d(i, x) + X i\u2208N d(i, y) \u2264 X i\u2208N d(i, top(\u00b5(i))) + X i\u2208N d(i, y) \u2264 X i\u2208N \u0000d(i, y) + d(\u00b5(i), y) + d(\u00b5(i), top(\u00b5(i))) \u0001 + X i\u2208N d(i, y) \u2264 X i\u2208N d(i, y) + 2 \u00b7 X i\u2208N d(\u00b5(i), y) + X i\u2208N d(i, y) = 4 \u00b7 X g\u2208G X i\u2208g d(i, y). Te statement now follows by dividing each side of the inequality by n. 9 We are now ready to show the upper bound of 5 for Max-of-Avg. Teorem 4.3. For Max-of-Avg, the distortion of a mechanism that returns an alternative whose domination graph has a perfect matching is at most 5. Proof. 
We are now ready to show the upper bound of 5 for Max-of-Avg.

Theorem 4.3. For Max-of-Avg, the distortion of a mechanism that returns an alternative whose domination graph has a perfect matching is at most 5.

Proof. Let w be the chosen alternative (whose domination graph has a perfect matching), and o an optimal alternative. Let g_w be the group that determines the maximum cost of w. By the definition of Max-of-Avg, we have n_g · cost(o) ≥ Σ_{i∈g} d(i, o) for any group g. Since n = Σ_g n_g, by adding all these inequalities together, we have
\[
n \cdot \mathrm{cost}(o) \ge \sum_{g \in G} \sum_{i \in g} d(i, o). \qquad (1)
\]
By the triangle inequality, we have
\[
\mathrm{cost}(w) = \frac{1}{n_{g_w}} \sum_{i \in g_w} d(i, w)
\le \frac{1}{n_{g_w}} \sum_{i \in g_w} d(i, o) + \frac{1}{n_{g_w}} \sum_{i \in g_w} d(w, o)
\le \mathrm{cost}(o) + d(w, o). \qquad (2)
\]
By Lemma 4.2 with x = w and y = o, and using (1), we have
\[
d(w, o) \le \frac{4}{n} \sum_{g \in G} \sum_{i \in g} d(i, o) \le 4 \cdot \mathrm{cost}(o),
\]
which, combined with (2), leads to cost(w) ≤ 5 · cost(o), which directly implies the desired upper bound.

4.2 Avg-of-Max

For the Avg-of-Max cost, we first consider the case of symmetric groups, in which n_g = λ for every g, and show a tight bound of 5 on the distortion of ordinal-information group-oblivious mechanisms.

Theorem 4.4. For Avg-of-Max, the distortion of any ordinal-information group-oblivious mechanism is at least 5, even when there are only two alternatives and the groups are symmetric.

Proof. Let ε > 0 be any constant and λ ≥ 2 be an integer such that λ > 8/ε. Consider an instance with n = 2λ(λ − 1) agents and two alternatives {a, b}; clearly, n is even. Half of the agents prefer a while the remaining half prefer b. With this information, any of the two alternatives can be chosen as the winner, so assume that the winner is a. The agents might be split into k = 2(λ − 1) groups of λ agents each as follows:
• There are λ groups, each consisting of λ − 1 agents that prefer a and one agent that prefers b;
• There are λ − 2 groups, each consisting of λ agents that prefer b.

Further, consider the metric space being the line of real numbers and the positioning of the alternatives and the agents being as follows:
• a is at 0 and b is at 2;
• All agents that prefer a are at 1 − ε/10;
• The λ agents that prefer b and are part of the first λ groups (in which there are agents that prefer a) are at 3;
• The remaining λ(λ − 2) agents that prefer b are at 2.

We have that
\[
k \cdot \mathrm{cost}(a) = \lambda \cdot 3 + (\lambda - 2) \cdot 2 = 5\lambda - 4
\quad\text{and}\quad
k \cdot \mathrm{cost}(b) = \lambda \cdot (1 + \varepsilon/10) + (\lambda - 2) \cdot 0 = \lambda (1 + \varepsilon/10),
\]
leading to a distortion of at least
\[
\frac{5\lambda - 4}{\lambda(1 + \varepsilon/10)} = \frac{5}{1 + \varepsilon/10} - \frac{4}{\lambda(1 + \varepsilon/10)} > 5 - \frac{\varepsilon}{2} - \frac{4}{\lambda} > 5 - \varepsilon,
\]
where the first inequality is a matter of simple calculations and the last one holds since λ > 8/ε.
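The construction above can be checked numerically with the following short Python sketch (illustrative only; the function names are ours), which builds the instance for a given λ and ε and reports the resulting Avg-of-Max ratio:

def avg_of_max(groups, x):
    # Average over groups of the maximum distance from x within each group.
    return sum(max(abs(p - x) for p in g) for g in groups) / len(groups)

def theorem_4_4_instance(lam, eps):
    a, b = 0.0, 2.0
    near_a = 1.0 - eps / 10.0
    # lam groups with lam-1 agents preferring a (at near_a) and one agent at 3,
    # plus lam-2 groups consisting of lam agents all positioned at 2.
    groups = [[near_a] * (lam - 1) + [3.0] for _ in range(lam)]
    groups += [[2.0] * lam for _ in range(lam - 2)]
    return groups, a, b

if __name__ == "__main__":
    lam, eps = 100, 0.5  # lam > 8 / eps, as required in the proof above
    groups, a, b = theorem_4_4_instance(lam, eps)
    ratio = avg_of_max(groups, a) / avg_of_max(groups, b)
    print(round(ratio, 3))  # about 4.724, which exceeds 5 - eps = 4.5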
For the upper bound, we consider again mechanisms that output alternatives whose domination graphs have perfect matchings, and show an upper bound of 5 with a proof similar to the one used for the Max-of-Avg objective.

Theorem 4.5. For Avg-of-Max and symmetric groups, the distortion of a mechanism that returns an alternative whose domination graph has a perfect matching is at most 5.

Proof. Consider any instance with k symmetric groups, each consisting of λ = n/k agents. Let w be an alternative whose domination graph has a perfect matching, and o an optimal alternative. For every group g, let i_g and i*_g be most-distant agents from w and o, respectively. Clearly,
\[
\mathrm{cost}(o) = \frac{1}{k} \sum_{g \in G} d(i^*_g, o) \ge \frac{1}{k} \sum_{g \in G} d(i_g, o).
\]
By the triangle inequality, we have
\[
\mathrm{cost}(w) = \frac{1}{k} \sum_{g \in G} d(i_g, w)
\le \frac{1}{k} \sum_{g \in G} d(i_g, o) + \frac{1}{k} \sum_{g \in G} d(w, o)
\le \mathrm{cost}(o) + d(w, o). \qquad (3)
\]
By Lemma 4.2 with x = w and y = o, and since k = n/λ, we have
\[
d(w, o) \le \frac{4}{n} \sum_{g \in G} \sum_{i \in g} d(i, o)
\le \frac{4}{n} \sum_{g \in G} \lambda \cdot \max_{i \in g} d(i, o)
= \frac{4}{k} \sum_{g \in G} d(i^*_g, o)
= 4 \cdot \mathrm{cost}(o).
\]
Using this, (3) becomes cost(w) ≤ 5 · cost(o), giving us the desired bound of 5 on the distortion.

For general instances with asymmetric groups, we show a tight bound of 2k + 1.

Theorem 4.6. For Avg-of-Max, the distortion of any ordinal-information group-oblivious mechanism is at least 2k + 1, even when there are only two alternatives.

Proof. Consider the following instance with n = 2k agents and two alternatives located on the line of real numbers:
• Alternative a is at 0 and alternative b is at 2;
• There are k agents that prefer alternative a and k agents that prefer alternative b.

Since there is no way of distinguishing between the two alternatives given the preferences of the agents, we may assume that the winner is a, without loss of generality. The agents might be partitioned into the following k groups:
• The first group consists of k + 1 agents: the k agents that prefer a, who are located at 1, and one agent that prefers b, who is located at 3;
• Each of the remaining k − 1 groups consists of just one agent that prefers b, who is located at 2.

Hence, k · cost(a) = 3 + (k − 1) · 2 = 2k + 1 and k · cost(b) = 1, leading to a distortion of 2k + 1.

The matching upper bound follows easily by choosing any alternative that is ranked first by some agent.

Theorem 4.7. For Avg-of-Max, the distortion of a mechanism that returns an alternative that is the most-preferred alternative of some agent is at most 2k + 1.

Proof. For any group g, let i_g and i*_g be agents that are most-distant from the winner w and the optimal alternative o, respectively. Let S be the set of groups in which there is at least one agent with w as her most-preferred alternative, and observe that |S| ≥ 1, and thus |G \ S| ≤ k − 1. We make the following observations:
• For any group g ∈ S, let j_g be an agent who ranks w first. By the triangle inequality, for any g ∈ S, we have that
\[
d(i_g, w) \le d(i_g, o) + d(j_g, o) + d(j_g, w) \le d(i_g, o) + 2\, d(j_g, o) \le 3 \cdot d(i^*_g, o).
\]
In addition, since there is an agent j_g that prefers w over o,
\[
\frac{1}{2} \cdot d(w, o) \le \frac{1}{2} \bigl( d(j_g, w) + d(j_g, o) \bigr) \le d(j_g, o) \le d(i^*_g, o).
\]
• For any group g ∉ S, by the triangle inequality, we have that
\[
d(i_g, w) \le d(i_g, o) + d(w, o) \le d(i^*_g, o) + d(w, o).
\]
Also, d(i*_g, o) ≥ 0.

Using these, we can now bound the distortion as follows:
\[
\frac{\mathrm{cost}(w)}{\mathrm{cost}(o)}
= \frac{\sum_{g \in S} d(i_g, w) + \sum_{g \notin S} d(i_g, w)}{\sum_{g \in G} d(i^*_g, o)}
\le \frac{3 \sum_{g \in S} d(i^*_g, o) + \sum_{g \notin S} \bigl( d(i^*_g, o) + d(w, o) \bigr)}{\sum_{g \in G} d(i^*_g, o)}
\]
\[
\le 3 + \frac{\sum_{g \notin S} d(w, o)}{\sum_{g \in S} d(i^*_g, o)}
\le 3 + \frac{|G \setminus S| \cdot d(w, o)}{|S| \cdot \frac{1}{2} \cdot d(w, o)}
\le 3 + 2(k - 1) = 2k + 1,
\]
as desired.
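The following small Python sketch (ours, for illustration; all names are ours) implements the mechanism of Theorem 4.7, namely returning any alternative that some agent ranks first, and evaluates it on the instance from the proof of Theorem 4.6, where the adversarial grouping forces the ratio 2k + 1:

def some_agents_top(rankings, alternatives):
    # Pick the first alternative (in the given order) that some agent ranks first.
    tops = {rk[0] for rk in rankings}
    return next(x for x in alternatives if x in tops)

def avg_of_max(groups, x):
    # Avg-of-Max cost of a point x for agents positioned on the real line.
    return sum(max(abs(p - x) for p in g) for g in groups) / len(groups)

if __name__ == "__main__":
    k = 4
    a, b = 0.0, 2.0
    # k agents at 1 prefer a; one agent at 3 and k-1 agents at 2 prefer b.
    rankings = [["a", "b"]] * k + [["b", "a"]] * k
    w = some_agents_top(rankings, ["a", "b"])  # both are someone's top; this picks "a"
    groups = [[1.0] * k + [3.0]] + [[2.0] for _ in range(k - 1)]
    print(w)
    print(avg_of_max(groups, a) / avg_of_max(groups, b))  # 2k + 1 = 9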
5 Group-Aware Mechanisms

In the previous two sections, we focused on mechanisms that are oblivious to the partition of the agents into groups. It is thus natural to wonder whether improved distortion bounds can be achieved by mechanisms that are aware of the groups. Clearly, we can optimize both objectives exactly if we are also given full information about the locations of the agents and the alternatives in the metric space, so this question makes sense when we only have access to partial information about the metric space, such as ordinal information. In this section, we consider such group-aware mechanisms and show tight bounds on the distortion in two cases: (1) there are only two alternatives; (2) there are m ≥ 2 alternatives and the distances between them are known.

5.1 The Case of Two Alternatives

Here, we consider the case of two alternatives a and b. For both objectives (Max-of-Avg and Avg-of-Max), we show a tight bound of 3 on the distortion of ordinal-information mechanisms. We start with the lower bounds, which are implied by the classic voting setting without groups.

Theorem 5.1. For both Max-of-Avg and Avg-of-Max, the distortion of any ordinal-information group-aware mechanism is at least 3, even when there are only two alternatives and the groups are symmetric.

Proof. The lower bounds for both objectives follow by considering instances in which the agents are partitioned into singleton groups. Then, the Max-of-Avg objective reduces to the egalitarian cost (the maximum distance over all agents), while the Avg-of-Max objective reduces to the average social cost (the average total distance of the agents). When there are no groups (or, equivalently, there are singleton groups), the best possible distortion in terms of the egalitarian or the average social cost is 3, even when there are only two alternatives [Anshelevich et al., 2018, Gkatzelis et al., 2020, Kizilkaya and Kempe, 2022].

Next, we present the tight upper bounds. For the Max-of-Avg objective, we consider the Group-Proportional-Majority mechanism which chooses the winner w to be an alternative that has the largest proportional majority within any group. In particular, for any alternative x ∈ {a, b}, let n_g(x) be the number of agents in group g that prefer x. Then,
\[
w \in \arg\max_{x \in \{a, b\}} \max_{g \in G} \frac{n_g(x)}{n_g}.
\]

Theorem 5.2. For Max-of-Avg and two alternatives, the distortion of Group-Proportional-Majority is at most 3.

Proof. For any group g, let S_g(x) be the set of agents in g that prefer x; thus, n_g(x) = |S_g(x)|. By the definition of the mechanism, there is a group γ such that n_γ(w)/n_γ ≥ n_g(o)/n_g for every group g. Clearly, for any agent i ∈ S_γ(w), d(i, w) ≤ d(i, o), and thus, by the triangle inequality, d(i, o) ≥ d(w, o)/2. Using this, for any group g, we can bound the optimal cost as follows:
\[
\mathrm{cost}(o) \ge \frac{1}{n_\gamma} \sum_{i \in \gamma} d(i, o)
\ge \frac{1}{n_\gamma} \sum_{i \in S_\gamma(w)} d(i, o)
\ge \frac{n_\gamma(w)}{n_\gamma} \cdot \frac{d(w, o)}{2}
\ge \frac{n_g(o)}{n_g} \cdot \frac{d(w, o)}{2}
\]
or, equivalently,
\[
\frac{n_g(o)}{n_g} \cdot d(w, o) \le 2 \cdot \mathrm{cost}(o). \qquad (4)
\]
Now, let g_w be the group that determines the cost of w. Using the fact that d(i, w) ≤ d(i, o) for every agent i ∈ S_{g_w}(w) and the triangle inequality, we have
\[
\mathrm{cost}(w) = \frac{1}{n_{g_w}} \sum_{i \in g_w} d(i, w)
= \frac{1}{n_{g_w}} \sum_{i \in S_{g_w}(w)} d(i, w) + \frac{1}{n_{g_w}} \sum_{i \in S_{g_w}(o)} d(i, w)
\]
\[
\le \frac{1}{n_{g_w}} \sum_{i \in S_{g_w}(w)} d(i, o) + \frac{1}{n_{g_w}} \sum_{i \in S_{g_w}(o)} \bigl( d(i, o) + d(w, o) \bigr)
\le \mathrm{cost}(o) + \frac{n_{g_w}(o)}{n_{g_w}} \cdot d(w, o).
\]
Using (4) for g = g_w, we finally obtain cost(w) ≤ 3 · cost(o), as desired.
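A direct implementation of Group-Proportional-Majority for two alternatives is straightforward; the sketch below (ours, with illustrative names) represents each group by the list of its members' preferred alternatives:

def group_proportional_majority(groups, alternatives=("a", "b")):
    # Winner: the alternative attaining the largest within-group support fraction,
    # with ties broken by the order of `alternatives`.
    def best_fraction(x):
        return max(sum(1 for p in g if p == x) / len(g) for g in groups)
    return max(alternatives, key=best_fraction)

if __name__ == "__main__":
    # Group 1 unanimously prefers a; groups 2 and 3 lean towards b (2/3 each).
    groups = [["a", "a", "a"], ["b", "b", "a"], ["b", "b", "a"]]
    print(group_proportional_majority(groups))  # "a": fraction 1.0 beats 2/3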
For Avg-of-Max, we consider the Group-Score mechanism which, for any alternative x ∈ {a, b}, assigns 2 points to x for any group in which all agents prefer x, and 1 point for any group in which some agents prefer x while the remaining agents prefer the other alternative. The winner w is the alternative with maximum score, breaking possible ties arbitrarily.

Theorem 5.3. For Avg-of-Max and two alternatives, the distortion of Group-Score is at most 3.

Proof. Let w be the alternative chosen by the mechanism, and o an optimal alternative; clearly, if w = o, the distortion is 1, so we assume that w ≠ o. We partition the groups into three sets:
• S_w contains the groups that are in favor of w, in which all agents prefer w over o;
• S_o contains the groups that are in favor of o, in which all agents prefer o over w;
• S_m contains the groups that are mixed, in which some agents prefer w over o and some agents prefer o over w.

For any group g, let i_g be a most-distant agent from w and i*_g a most-distant agent from o; hence, cost(w) = (1/k)·Σ_g d(i_g, w) and cost(o) = (1/k)·Σ_g d(i*_g, o). We make the following observations:
• For any g ∈ S_w, both i_g and i*_g prefer w over o. Hence, d(i_g, w) ≤ d(i_g, o) ≤ d(i*_g, o) and, using the triangle inequality, d(i*_g, o) ≥ (1/2)·d(w, o).
• For any g ∈ S_o, by the triangle inequality, d(i_g, w) ≤ d(i_g, o) + d(w, o) ≤ d(i*_g, o) + d(w, o). Also, recall that d(i*_g, o) ≥ 0.
• For any g ∈ S_m, as above, d(i_g, w) ≤ d(i*_g, o) + d(w, o). Also, since there is at least one agent that prefers w over o, it must be the case that d(i*_g, o) ≥ (1/2)·d(w, o).

Using first the upper bounds on the distances from w, and then the lower bounds on the distances from o, we can write the distortion as follows:
\[
\frac{\mathrm{cost}(w)}{\mathrm{cost}(o)}
= \frac{\sum_{g} d(i_g, w)}{\sum_{g} d(i^*_g, o)}
\le \frac{\sum_{g} d(i^*_g, o) + (|S_o| + |S_m|) \cdot d(w, o)}{\sum_{g} d(i^*_g, o)}
= 1 + \frac{(|S_o| + |S_m|) \cdot d(w, o)}{\sum_{g} d(i^*_g, o)}
\]
\[
\le 1 + \frac{(|S_o| + |S_m|) \cdot d(w, o)}{(|S_w| + |S_m|) \cdot \frac{1}{2} \cdot d(w, o)}
= 1 + 2 \cdot \frac{|S_o| + |S_m|}{|S_w| + |S_m|}.
\]
By the definition of the mechanism, w is chosen as the winner because 2|S_w| + |S_m| ≥ 2|S_o| + |S_m| or, equivalently, |S_w| ≥ |S_o|. Using this, the distortion is at most
\[
1 + 2 \cdot \frac{|S_o| + |S_m|}{|S_w| + |S_m|} \le 1 + 2 \cdot \frac{|S_o| + |S_m|}{|S_o| + |S_m|} = 3,
\]
as claimed.
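Group-Score is equally simple to implement; in the sketch below (ours, with illustrative names), each group is again represented by the list of its members' preferred alternatives:

def group_score(groups, alternatives=("a", "b")):
    # A group contributes 2 points to an alternative preferred by all of its members,
    # and 1 point to each alternative when the group is mixed.
    def score(x):
        s = 0
        for g in groups:
            if all(p == x for p in g):
                s += 2
            elif any(p == x for p in g):
                s += 1
        return s
    # Ties are broken arbitrarily (here, by the order of `alternatives`).
    return max(alternatives, key=score)

if __name__ == "__main__":
    groups = [["a", "a"], ["a", "b"], ["b", "a"], ["b", "b"], ["a", "a"]]
    print(group_score(groups))  # "a": score 6 versus 4 for "b"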
5.2 Known Distances between Alternatives

We finally consider the general case of m ≥ 2 alternatives, but when slightly more information than just ordinal preferences is available. In particular, besides knowing the ordinal preferences of the agents over the alternatives, we assume that the distances between the alternatives in the metric space are also known. This is a natural assumption in various important applications (such as in facility location problems) and it has thus been examined in previous work on the distortion for different voting settings [Anshelevich and Zhu, 2021, Anshelevich et al., 2024]. Before we continue, we remark that the lower bound of 3, and even the lower bounds in the previous sections, still hold for this setting where the distances between the alternatives are known, since they have been proven using instances with just two alternatives.

To show a tight bound of 3 for the two objectives, we consider mechanisms that virtually map each agent i to its most-preferred alternative top(i), and then choose the winner to be an alternative that minimizes the objective under consideration for these most-preferred alternatives. In particular, the winner for the Max-of-Avg objective is
\[
w \in \arg\min_{x \in A} \max_{g \in G} \biggl( \frac{1}{n_g} \sum_{i \in g} d(\mathrm{top}(i), x) \biggr),
\]
while the winner for the Avg-of-Max objective is
\[
w \in \arg\min_{x \in A} \biggl( \frac{1}{k} \sum_{g \in G} \max_{i \in g} d(\mathrm{top}(i), x) \biggr).
\]
We will refer to these two mechanisms as Virtual-MiniMax-of-Avg and Virtual-MiniAvg-of-Max, respectively.

Theorem 5.4. When the distances between the alternatives are known, the distortion of Virtual-MiniMax-of-Avg is at most 3 for Max-of-Avg, and the distortion of Virtual-MiniAvg-of-Max is at most 3 for Avg-of-Max.

Proof. We first show the bound for the Max-of-Avg objective. Let w be the alternative chosen by the Virtual-MiniMax-of-Avg mechanism, and denote by o an optimal alternative. By definition, cost(o) ≥ max_{g∈G} ((1/n_g)·Σ_{i∈g} d(i, o)). Let g_w be the group that determines the cost of w. By the triangle inequality, the fact that d(i, top(i)) ≤ d(i, o) for any agent i, the definition of w (which minimizes the Max-of-Avg cost of the most-preferred alternatives of all agents), and the fact that the maximum of a set of additive functions is subadditive, we obtain
\[
\mathrm{cost}(w) = \frac{1}{n_{g_w}} \sum_{i \in g_w} d(i, w)
\le \frac{1}{n_{g_w}} \sum_{i \in g_w} d(i, \mathrm{top}(i)) + \frac{1}{n_{g_w}} \sum_{i \in g_w} d(\mathrm{top}(i), w)
\]
\[
\le \frac{1}{n_{g_w}} \sum_{i \in g_w} d(i, o) + \max_{g \in G} \biggl( \frac{1}{n_g} \sum_{i \in g} d(\mathrm{top}(i), w) \biggr)
\le \mathrm{cost}(o) + \max_{g \in G} \biggl( \frac{1}{n_g} \sum_{i \in g} d(\mathrm{top}(i), o) \biggr)
\]
\[
\le \mathrm{cost}(o) + \max_{g \in G} \biggl( \frac{1}{n_g} \sum_{i \in g} d(i, \mathrm{top}(i)) \biggr) + \max_{g \in G} \biggl( \frac{1}{n_g} \sum_{i \in g} d(i, o) \biggr)
\le 3 \cdot \mathrm{cost}(o).
\]

The proof for the Avg-of-Max objective is quite similar. Now let w be the alternative chosen by the Virtual-MiniAvg-of-Max mechanism. For the optimal alternative o, by definition, we have cost(o) ≥ (1/k)·Σ_{g∈G} max_{i∈g} d(i, o). Let i_g be the most-distant agent from w in group g. Again, using the triangle inequality, the fact that d(i, top(i)) ≤ d(i, o) for any agent i, the definition of w (which now minimizes the Avg-of-Max cost of the most-preferred alternatives of all agents), and the fact that max is a subadditive function, we obtain
\[
\mathrm{cost}(w) = \frac{1}{k} \sum_{g \in G} d(i_g, w)
\le \frac{1}{k} \sum_{g \in G} d(i_g, \mathrm{top}(i_g)) + \frac{1}{k} \sum_{g \in G} d(\mathrm{top}(i_g), w)
\]
\[
\le \frac{1}{k} \sum_{g \in G} d(i_g, o) + \frac{1}{k} \sum_{g \in G} \max_{i \in g} d(\mathrm{top}(i), w)
\le \frac{1}{k} \sum_{g \in G} \max_{i \in g} d(i, o) + \frac{1}{k} \sum_{g \in G} \max_{i \in g} d(\mathrm{top}(i), o)
\]
\[
\le \mathrm{cost}(o) + \frac{1}{k} \sum_{g \in G} \max_{i \in g} \bigl( d(i, \mathrm{top}(i)) + d(i, o) \bigr)
\le \mathrm{cost}(o) + \frac{1}{k} \sum_{g \in G} \max_{i \in g} \bigl( 2 \cdot d(i, o) \bigr)
\le 3 \cdot \mathrm{cost}(o),
\]
as claimed.
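Both virtual mechanisms can be phrased compactly once the alternative-to-alternative distances are available; the following Python sketch (ours; dist is an assumed distance matrix between alternatives and all names are illustrative) represents every agent by her top choice:

def virtual_minimax_of_avg(groups_tops, dist, alternatives):
    # groups_tops[g] lists top(i) for each agent i in group g.
    def max_of_avg(x):
        return max(sum(dist[t][x] for t in g) / len(g) for g in groups_tops)
    return min(alternatives, key=max_of_avg)

def virtual_miniavg_of_max(groups_tops, dist, alternatives):
    def avg_of_max(x):
        return sum(max(dist[t][x] for t in g) for g in groups_tops) / len(groups_tops)
    return min(alternatives, key=avg_of_max)

if __name__ == "__main__":
    alternatives = ["a", "b", "c"]
    # Known metric distances between alternatives (here, points 0, 2, 5 on a line).
    dist = {"a": {"a": 0, "b": 2, "c": 5},
            "b": {"a": 2, "b": 0, "c": 3},
            "c": {"a": 5, "b": 3, "c": 0}}
    groups_tops = [["a", "a", "b"], ["c", "b"], ["b", "b", "b"]]
    print(virtual_minimax_of_avg(groups_tops, dist, alternatives))   # "b"
    print(virtual_miniavg_of_max(groups_tops, dist, alternatives))   # "b"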
6 Conclusion and Open Problems

In this paper, we considered a metric voting setting in which the agents are partitioned into groups. When the groups are unknown, we showed tight bounds on the distortion of oblivious full-information and oblivious ordinal-information mechanisms in terms of two objectives that take the groups into account, the Max-of-Avg and the Avg-of-Max objectives. On the other hand, when the groups are known, we showed tight bounds on the distortion of group-aware ordinal mechanisms when there are just two alternatives or when we also have access to the distances between the alternatives in the metric space.

There are multiple avenues for further research in the group voting model we considered here. The most important problem that our work leaves open is to resolve the distortion of group-aware ordinal mechanisms for more than two alternatives. While this is a very challenging task in general, we remark that constant distortion can be achieved by using the two-step distributed mechanisms of Anshelevich et al. [2022], which are, by definition, group-aware. However, those mechanisms do not fully exploit the structure of the groups, and we therefore expect that better distortion bounds can be achieved by unlocking the full potential of group-aware mechanisms. Other interesting directions would be to consider randomized mechanisms and other objective functions that take the groups into account, beyond Max-of-Avg and Avg-of-Max."
}
]
}