| { |
| "url": "http://arxiv.org/abs/2404.16538v1", |
| "title": "OpenDlign: Enhancing Open-World 3D Learning with Depth-Aligned Images", |
| "abstract": "Recent advances in Vision and Language Models (VLMs) have improved open-world\n3D representation, facilitating 3D zero-shot capability in unseen categories.\nExisting open-world methods pre-train an extra 3D encoder to align features\nfrom 3D data (e.g., depth maps or point clouds) with CAD-rendered images and\ncorresponding texts. However, the limited color and texture variations in CAD\nimages can compromise the alignment robustness. Furthermore, the volume\ndiscrepancy between pre-training datasets of the 3D encoder and VLM leads to\nsub-optimal 2D to 3D knowledge transfer. To overcome these issues, we propose\nOpenDlign, a novel framework for learning open-world 3D representations, that\nleverages depth-aligned images generated from point cloud-projected depth maps.\nUnlike CAD-rendered images, our generated images provide rich, realistic color\nand texture diversity while preserving geometric and semantic consistency with\nthe depth maps. OpenDlign also optimizes depth map projection and integrates\ndepth-specific text prompts, improving 2D VLM knowledge adaptation for 3D\nlearning efficient fine-tuning. Experimental results show that OpenDlign\nsignificantly outperforms existing benchmarks in zero-shot and few-shot 3D\ntasks, exceeding prior scores by 8.0% on ModelNet40 and 16.4% on OmniObject3D\nwith just 6 million tuned parameters. Moreover, integrating generated\ndepth-aligned images into existing 3D learning pipelines consistently improves\ntheir performance.", |
| "authors": "Ye Mao, Junpeng Jing, Krystian Mikolajczyk", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", |
| "gt": "3D understanding, which involves tasks such as point cloud classification and 3D object detection, is pivotal for advancing augmented/virtual reality [1; 2], autonomous vehicles [3; 4], and robotics [5; 6]. Traditional 3D models [7; 8; 9; 10; 11; 12; 13] are closed-world, which can only recognize pre-defined categories and struggle with \u2019unseen\u2019 ones. The emergence of Vision-Language Models (VLMs) like CLIP [14], renowned for their success in identifying \u2018unseen\u2019 categories in 2D images through open-world representation learning [15; 16; 17; 18], has sparked interest in applying these models to develop robust open-world 3D representations for 3D vision tasks. Existing open-world 3D learning methods can be categorized into depth-based and point-based methods. Depth-based methods [19; 20; 21] project point clouds into multi-view depth maps and employ the pre-trained CLIP image encoder for 3D representations. However, this process encounters a domain gap because CLIP is primarily trained with RGB images rather than depth maps. To bridge this gap, methods like [21] incorporate an additional depth encoder and utilize contrastive learning to align depth features from this encoder with image and text features from pre-trained CLIP encoders, as illustrated in Fig. 1(a). The images used here, specifically rendered from CAD models for feature alignment, are not employed in the zero-shot inference phase. Point-based methods [22; 23; 24; 25; 26; 27] directly learn 3D representations from point clouds, avoiding the latency of Preprint. Under review. arXiv:2404.16538v1 [cs.CV] 25 Apr 2024 \u201cA point cloud of a small airplane.\u201d Image Encoder Text Encoder Point Encoder Align (b) Point-based Method (a) Depth-based Method (c) OpenDlign (ours) \u201cA depth map of a small airplane.\u201d Image Encoder Text Encoder Depth Encoder Align Depth Map Rendered Image (Training only) Text Point Cloud Rendered Image (Training only) Text \u201cA depth map of a small airplane.\u201d Image Encoder Text Encoder Align Depth Map Depth-Aligned Image (Training only) Text CAD-rendered images Depth-aligned images vs. Figure 1: Top: OpenDlign vs. Conventional Open-World 3D Learning Frameworks: OpenDlign enhances multimodal alignment using depth-aligned images, providing more detailed geometric and semantic information along with enhanced color and texture compared to previously used rendered images. It refines 3D representation by fine-tuning the CLIP image encoder directly, eliminating the extra encoder pre-training required by other methods. Note that both rendered and depth-aligned images are used exclusively for learning alignment. Bottom: Visual comparison between CAD- rendered and corresponding depth-aligned multi-view images. depth map projection. However, due to the inherent data format differences between images and point clouds, these methods also need an additional point encoder for extracting 3D features, akin to depth-based methods (See Fig. 1(b)). Thus, aligning 3D data (e.g., depth maps or point clouds) with the image-text modalities pre-aligned by CLIP is a standard step in current 3D open-world methods. Depth-based and point-based methods encounter two primary challenges in the alignment process. First, the CAD-rendered images used for aligning 3D data typically display consistent color and texture styles across various views. Over-aligning with these low-diversity images compromises the generalizability of learned 3D representations. 
Secondly, the 3D datasets used for encoder pre-training, like ShapeNet [28] and Objaverse [29], contain less than 1 million synthetic 3D objects, significantly smaller than the DFN5B [30] and LAION-5B [31] datasets with 5 billion images used to train the cutting-edge CLIPs. This data volume disparity, which is due to the high cost of 3D data acquisition, results in the sub-optimal transfer of CLIP\u2019s knowledge to 3D representations. While fine-tuning CLIP\u2019s encoders yields more direct knowledge transfer, it restricts the input to depth maps. Unfortunately, 3D representations from depth maps still underperform in downstream 3D tasks compared to those from point clouds, due to two factors: (1) The absence of a robust projection method for creating dense depth maps with smooth contours from point clouds. (2) The current widely used CLIP text prompt templates are tailored for matching with RGB images, not depth maps. To address these challenges, this paper proposes OpenDlign, a novel framework that learns Open-world 3D representations via aligning multi-view depth maps projected from point clouds with Depth-aligned images produced by a generative model [32]. These images offer enhanced color and texture diversity compared to CAD-rendered images while maintaining geometric and semantic consistency with the depth maps (See Fig. 1). Additionally, as shown in Fig. 1(c), OpenDlign fine-tunes the CLIP image encoder rather than pre-training a separate depth encoder, thus maximally adapting CLIP\u2019s existing knowledge for effective 3D learning, even with a limited 3D dataset. Specifically, fine-tuning is limited to the attention layers of the last transformer block, comprising just 6 million parameters. Moreover, OpenDlign employs a new projection pipeline to generate dense depth maps with clear contours. For zero-shot inference, OpenDlign employs depth-specific text prompts and a logit aggregation method, emphasizing depth-related features and combining results from various viewpoint depth maps. Experimental results show that OpenDlign greatly surpasses the prior state-of-the-art, pre-trained on ShapeNet [28], with accuracy gains of 8.0% on ModelNet40 and 16.4% on OmniObject3D, the largest real-world 3D shape dataset. Notably, using realistic depth-aligned images significantly boosts the performance of existing SOTA models, like those pretrained on ShapeNet or 3D Ensemble datasets [24]. This consistent improvement across all benchmarks highlights the versatility of depth-aligned images in any 3D open-world learning pipeline. The main contributions of this paper are outlined as follows: \u2022 We propose a multimodal alignment framework that aligns features from depth maps and depth-aligned images to learn a unified depth map, image, and text representation. \u2022 We develop a contour-aware projection pipeline to produce dense and contour-preserving multi-view depth maps from point clouds. \u2022 We introduce depth-specific text prompt templates for zero-shot inference to accurately capture both the semantic and visual traits in depth maps. \u2022 We design a logit aggregation strategy that derives final 3D representations from both CLIP and OpenDlign visual encoders, reducing catastrophic forgetting in alignment.", |
| "main_content": "2.1 Open-World 3D Representation Learning Vision and Language models such as CLIP [14] have revolutionized 2D representation learning in open-world settings through contrastive learning with large-scale image-text pairs [33; 34; 35; 36]. Building on this, recent studies have adapted CLIP for 3D representation learning, achiving impressive performance in diverse 3D zero-shot tasks [24; 25]. PointCLIP [20], as a pioneering study, utilizes the CLIP image encoder for extracting 3D representations from depth maps of point clouds, achieving zero-shot recognition by aligning with text embeddings of semantic categories. To address CLIP\u2019s training bias towards RGB images, Zhu et al. [19] introduced GPT-generated 3D-specific prompts and a denser depth map projection, while CLIP2Point [21] pre-trains a depth encoder for closer alignment with CLIP\u2019s encoders. These methods derive representations from depth maps with noisy contours, causing a loss of key shape features needed for precise recognition. Moreover, their reliance on either natural image text prompts or depth-specific prompts generated by GPT-3 [37] for certain categories highlights a lack of versatility in handling diverse 3D contexts. Alternative methods [23; 23; 24; 25; 27] avoid depth map projection by directly aligning point clouds, images, and text using specialized 3D encoders. By scaling up the dataset and encoder sizes, these methods show promise in diverse 3D tasks. However, these methods are limited by their reliance on CAD-rendered images, which have limited texture diversity across views, leading to less generalizable representations. Additionally, the smaller volume of 3D datasets compared to CLIP\u2019s training data hinders effective knowledge transfer to point cloud encoders. In this paper, we substitute rendered images with AI-generated, depth-aligned images to enhance texture diversity. We also fine-tune the CLIP image encoder for 3D representation learning instead of training a new 3D encoder from scratch, reducing the reliance on large 3D datasets. 2.2 Continual Learning in CLIP Fine-Tuning Continual Learning (CL) in CLIP aims to mitigate catastrophic forgetting [38], ensuring retention of zero-shot capabilities across varied data distributions while fine-tuning to new tasks. CL methods fall into three categories: adaptive-plasticity methods [39; 40; 41; 42; 43; 44], replay methods [45; 46; 47], and architecture-based methods [48; 49]. Adaptive-plasticity methods limit the plasticity of the essential model parameters for past tasks during fine-tuning. For instance, the IMM-Mean [44] method achieves CL by simply averaging parameters of pre-trained and fine-tuned models for inference, although its efficacy might be limited for complex tasks [50]. Replay methods leverage stored exemplars to enable CLIP to recall previously learned knowledge, while they encounter scalability challenges. Without relying on exemplars, architecture-based CL methods dynamically adjust the model\u2019s architecture to accommodate new information without losing existing knowledge [50]. In this study, we align the depth map with the RGB image by freezing the pre-trained CLIP encoder weights and incorporating a trainable transformer-based branch for encoding depth maps, adhering to architecture-based principles. Inspired by IMM-Mean [44], we use pre-trained and fine-tuned model weights to compute classification logits for multi-view depth maps. 3 3 Methodology Fig. 
2 illustrates the OpenDlign framework, which learns effective open-world 3D representations by aligning embeddings from projected depth maps and depth-aligned images. Initially, a contour-aware projection method is employed to create shape-preserved, dense depth maps from point clouds. These maps then guide a generative model to produce depth-aligned images with rich color and texture diversity. OpenDlign then uses contrastive learning to align features between depth maps and generated images by fine-tuning a transformer block linked to the CLIP image encoder. This step enables the extraction of robust embeddings from \u2019unseen\u2019 multi-view depth maps at test time, using both fine-tuned and pre-trained states of the image encoder. These embeddings are matched with depth-specific text embeddings, which encode the depth maps\u2019 semantic and visual traits, to compute logits for each viewpoint and aggregate these logits to enable zero-shot classification. Alternatively, these embeddings can be refined using a logistic regressor for few-shot classification. 3.1 Contour-Aware Depth Map Projection The contour-aware projection method transforms the input point cloud into multi-view depth maps with clear contours. Inspired by the pipeline in [19], this method involves four main steps: Quantize, Densify, Smooth, and Squeeze. In the Quantize step, for the i^{\\text{th}} view of point cloud P_i, the 3D coordinates (x, y, z) \\in P_i are normalized to [0, 1] and mapped onto a discrete grid G \\in \\mathbb{R}^{H \\times W \\times B}, where H and W correspond to the dimensions required by the CLIP image encoder, and B is a pre-defined depth dimension. Next, the Densify step enhances G by updating each voxel to the maximum value within its 7 \\times 7 \\times 7 neighborhood, yielding a denser map G'. Subsequently, the Smooth step applies bilateral filtering to each voxel v_i in G', adjusting its intensity I_{v_i} to I'_{v_i} using: I'_{v_i} = \\frac{1}{W_v} \\sum_{v_j \\in S} G_{\\sigma_1}(\\|v_i - v_j\\|) G_{\\sigma_2}(|I_{v_i} - I_{v_j}|) I_{v_j} (1) where W_v = \\sum_{v_j \\in S} G_{\\sigma_1}(\\|v_i - v_j\\|) G_{\\sigma_2}(|I_{v_i} - I_{v_j}|) is the normalization factor that ensures voxel weights sum to 1.0. The Gaussian functions G_{\\sigma_1} and G_{\\sigma_2} adjust the influence of each neighboring voxel v_j within the 5 \\times 5 \\times 5 kernel from set S around v_i, based on spatial and intensity differences, enhancing contour sharpness and reducing jagged edges in G'. Finally, the Squeeze step applies minimal pooling on the depth channel of the smoothed G', then triples the output to mimic RGB intensity, producing the final depth map D \\in \\mathbb{R}^{H \\times W \\times 3}. 3.2 Depth-Aligned Image Generation We generated 524,700 depth-aligned images from ShapeNet [28], one of the leading public 3D CAD datasets containing around 52,470 models, each annotated with semantic metadata. To align with prior experimental protocols [24; 23], we sampled a point cloud of 10,000 points from each model, projecting these onto 10 contour-aware depth maps. A conditional image generative model (ControlNet v1.1 [32]) then produced depth-aligned images for each map (D), using 1 \u2212 D and the model\u2019s metadata as conditions. This approach ensures that the images remain consistent with the depth maps both geometrically and semantically, while also adding texture diversity across different views.
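As a concrete illustration of this generation step, the following is a minimal sketch (not the authors' released pipeline) of producing one depth-aligned image from a projected depth map with ControlNet v1.1 through the diffusers library; the model identifiers, prompt wording, and helper name are assumptions for illustration only.

```python
# Illustrative sketch: one depth-aligned image per projected depth map via ControlNet v1.1
# (diffusers). Model IDs and the prompt template are assumptions, not the paper's exact setup.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

def generate_depth_aligned(depth_map: np.ndarray, category: str) -> Image.Image:
    """depth_map: HxW array in [0, 1] from the contour-aware projection; category from metadata."""
    # ControlNet's depth conditioning expects brighter = closer, hence the 1 - D inversion.
    cond = Image.fromarray((255 * (1.0 - depth_map)).astype(np.uint8)).convert("RGB")
    prompt = f"a photo of a {category}, realistic color and texture"  # metadata-derived prompt
    return pipe(prompt, image=cond, num_inference_steps=30).images[0]
```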
The conditioning of ControlNet utilizes 1 \u2212 D instead of D because it is predominantly pre-trained on depth images, in which brighter regions indicate closer proximity. The supplemental material details the positive and negative prompts used in ControlNet to achieve high-fidelity and noise-free depth-aligned image generation. 3.3 Multimodal Representation Alignment OpenDlign aligns representations from multi-view depth maps and depth-aligned images by fine-tuning a transformer block that is residually connected to the final block of the pre-trained CLIP image encoder, using contrastive learning. As CLIP pre-training already aligns image and text modalities, OpenDlign implicitly aligns depth maps with the shared image and text space. Multimodal Feature Extraction. Given a 3D point cloud input, let D = \\{D_i\\}_{i=1}^N represent the set of its N projected depth map views, and R = \\{R_i\\}_{i=1}^N the corresponding set of depth-aligned images. Figure 2: Overview of OpenDlign. In (a), OpenDlign converts point clouds into multi-view depth maps using a contour-aware projection, which then helps generate depth-aligned RGB images with diverse textures, geometrically and semantically aligned with the maps.
A transformer block, residually connected to the CLIP image encoder, is fine-tuned to align depth maps with depth-aligned images for robust 3D representation. For zero-shot classification (b), OpenDlign aggregates multi-view logits from both pre-trained and fine-tuned encoders for label prediction; for few-shot classification (c), it employs a logistic regressor trained on multi-view features from the encoders. Each image R_i is encoded through L layers of a pre-trained CLIP image encoder, \\{\\text{T}_l(\\cdot)\\}_{l=1}^L, to obtain feature representations I^R_i = \\text{T}_{1\\ldots L}(R_i). Each depth map D_i is processed up to layer \\text{T}_{L-1}, obtaining preliminary features \\text{T}_{1\\ldots L-1}(D_i). Subsequently, these features are passed through the frozen layer \\text{T}_L and its trainable counterpart \\text{T}^t_L, yielding the feature for the i^{\\text{th}} depth map view I^D_i = \\text{T}_{1\\ldots L}(D_i) + \\text{T}^t_L(\\text{T}_{1\\ldots L-1}(D_i)). Inspired by [17], only the layers for spatial interaction in \\text{T}^t_L (i.e., attention layers) are trainable. The final feature vectors for multi-view depth maps D and depth-aligned images R are \\mathbf{h}^D = \\frac{1}{N} \\sum_{i=1}^N \\|I^D_i\\| and \\mathbf{h}^R = \\frac{1}{N} \\sum_{i=1}^N \\|I^R_i\\|, respectively. Loss Functions. The alignment of \\mathbf{h}^D and \\mathbf{h}^R is achieved by minimizing a composite loss function, comprising the contrastive loss \\mathcal{L}_{\\text{cont}} and the feature distance loss \\mathcal{L}_{\\text{dist}}, defined as: \\mathcal{L}_{\\text{total}} = \\underbrace{\\sum_{(i,j)} -\\frac{1}{2} \\log \\frac{\\exp\\left(\\mathbf{h}_i^{D} \\cdot \\mathbf{h}_j^{R} / \\tau\\right)}{\\sum_k \\exp\\left(\\mathbf{h}_i^{D} \\cdot \\mathbf{h}_k^{R} / \\tau\\right)} - \\frac{1}{2} \\log \\frac{\\exp\\left(\\mathbf{h}_i^{D} \\cdot \\mathbf{h}_j^{R} / \\tau\\right)}{\\sum_k \\exp\\left(\\mathbf{h}_k^{D} \\cdot \\mathbf{h}_j^{R} / \\tau\\right)}}_{\\mathcal{L}_{\\text{cont}}} + \\underbrace{\\sum_{(i,j)} \\|\\mathbf{h}^D_i - \\mathbf{h}^R_j\\|_2}_{\\mathcal{L}_{\\text{dist}}} (2) In each training batch, (\\mathbf{h}^D_i, \\mathbf{h}^R_j) represents a positive pair and k \\neq i, j. Here, \u03c4 is a learnable temperature parameter, similar to CLIP [14]. 3.4 3D Zero-Shot Transfer The alignment between depth maps and depth-aligned RGB images facilitates 3D zero-shot classification by aggregating multi-view classification logits. Each logit represents the similarity between features of a single-view depth map and text features specific to category candidates. Depth-Specific Text Generation. We generate 80 depth-specific text prompt templates based on 80 ImageNet zero-shot recognition prompts1, integrating keywords such as "depth map", "white background image", "raytraced image", and "silhouette of [CLASS]". These keywords guide OpenDlign to target depth-related features, such as the distance of object surfaces from a viewpoint. To identify these keywords, we use the CLIP-Interrogator tool [51] to analyze depth maps from ShapeNet [28], seeking text prompts that best match their visual features. The 10 most recurring prompts from this analysis are chosen as our essential keywords. In zero-shot inference, we employ our depth-specific templates to generate 80 text descriptions for each label l. These descriptions \\{t_i\\}_{i=1}^{80} are encoded by a text encoder F(\u00b7), normalized, and then merged into a unified text feature F_l via average pooling, calculated as \\frac{1}{80} \\sum_{i=1}^{80} \\|F(t_i)\\|.
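To make the composite loss in Eq. (2) above concrete, here is a minimal PyTorch-style sketch (an illustrative reading, not the authors' released code); it assumes h_d and h_r are already normalized batch embeddings of paired depth maps and depth-aligned images, and tau is a learnable temperature as in CLIP.

```python
# Minimal sketch of Eq. (2): a symmetric InfoNCE-style contrastive term plus an L2
# feature-distance term. Tensor names and the per-batch averaging are illustrative assumptions.
import torch
import torch.nn.functional as F

def alignment_loss(h_d: torch.Tensor, h_r: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """h_d, h_r: (B, C) normalized embeddings of matched depth-map / image pairs."""
    logits = h_d @ h_r.t() / tau                          # (B, B) similarity matrix
    targets = torch.arange(h_d.size(0), device=h_d.device)
    l_cont = 0.5 * (F.cross_entropy(logits, targets) +     # depth -> image direction
                    F.cross_entropy(logits.t(), targets))  # image -> depth direction
    l_dist = (h_d - h_r).norm(dim=-1).mean()               # L2 distance of positive pairs
    return l_cont + l_dist
```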
Multi-View Logits Aggregation. To calculate classification logits, we first gather visual features from multi-view depth maps \\{V_i\\}_{i=1}^N, aiming to align with depth-specific text features of M candidate labels F = \\{F_i\\}_{i=1}^M. The feature extraction utilizes a dual-encoder strategy: the first half of the views \\{V_i\\}_{i=1}^{N/2} utilizes a pre-trained CLIP image encoder, while the second half of the views \\{V_i\\}_{i=N/2+1}^{N} employs a fine-tuned encoder. The strategy ensures that OpenDlign maintains its capability to recognize previously identifiable depth maps after learning multimodal alignment via fine-tuning. As shown in Fig. 2(b), the logit for a single depth map view is the product of V_i and F, with the overall classification logit being the sum of logits across all views, calculated as \\sum_{i=1}^{N} V_i F^T. 4 Experiments 4.1 Zero-Shot 3D Classification We first evaluated OpenDlign under the zero-shot shape classification task on three benchmark datasets: ModelNet40 [52], ScanObjectNN [53], and OmniObject3D [54]. ModelNet40 offers synthetic 3D CAD models in 40 categories. ScanObjectNN provides real-scanned objects in 15 categories from the OBJ_ONLY version. OmniObject3D, the largest, includes 5,911 real-scanned objects in 216 categories, well-suited for fine-grained, real-world classification evaluation. Point cloud sizes are 10,000 points for ModelNet40, 2,048 for ScanObjectNN, and 4,096 for OmniObject3D. OpenDlign was compared against existing methods, including three depth-based methods: PointCLIP [20], PointCLIP V2 [19], and CLIP2Point [21], and three point-based methods: ULIP [23], OpenShape [24], and TAMM [27]. Additionally, we improved the OpenShape and TAMM models by retraining them with depth-aligned and CAD-rendered images from an integrated dataset provided by OpenShape, which combines four distinct collections: Objaverse [29], ShapeNet [24], 3D-Future [55], and ABO [56]. Our aim was to investigate if depth-aligned images consistently enhance the performance of existing 3D open-world methods. Moreover, we evaluated OpenDlign\u2019s scalability by training it with various CLIP variants to adapt to the complexity of pre-trained image-text encoders. Table 1 shows OpenDlign substantially outperforms existing methods trained on ShapeNet on three benchmarks, exceeding the previous best, TAMM-SparseConv trained on ShapeNet, by margins of 8.0% on ModelNet40, 1.6% on ScanObjectNN, and 16.4% on OmniObject3D in top-1 accuracy. OpenDlign also greatly exceeds the leading depth-based method, PointCLIP V2\u2014by 19% on ModelNet40 and 27.4% on OmniObject3D. Significantly, OpenDlign outshines all methods pre-trained on the ensemble dataset in the ScanObjectNN benchmark. Moreover, OpenDlign\u2019s performance scales linearly with the complexity of CLIP variants, surpassing most of the baseline models on ModelNet40 and OmniObject3D benchmarks, even when employing the light ViT-B-16 CLIP model. Moreover, the use of depth-aligned images consistently boosts the performance of OpenShape and (Footnote 1: Text Prompts for ImageNet: ImageNet Prompt Engineering.) Table 1: Zero-shot classification results on ModelNet40 [52], ScanObjectNN [53] and OmniObject3D [54]. Best: bolded. Second-best: underlined.
Training 3D Open-World CLIP ModelNet40 [52] ScanObjectNN [53] OmniObject3D[54] Source Methods Variant Top1 Top3 Top5 Top1 Top3 Top5 Top1 Top3 Top5 2D inferences PointCLIP [20] ResNet-50 19.3 28.6 34.8 10.5 20.8 30.6 0.3 1.0 1.8 No Training PointCLIP V2 [19] ViT-B-16 63.6 77.9 85.0 42.2 63.3 74.5 3.9 9.6 14.4 CLIP2Point [21] ViT-B-32 49.5 71.3 81.2 25.5 44.6 59.4 1.4 3.7 7.1 ULIP-PointBERT [23] SLIP [57] 60.4 79.0 84.4 51.5 71.1 80.2 8.4 15.2 19.7 OpenShape-PointBERT [24] ViT-bigG-14 70.3 86.9 91.3 51.3 69.4 78.4 13.0 23.3 29.4 OpenShape-SparseConv [24] ViT-bigG-14 72.9 87.2 93.0 52.7 72.7 83.6 13.7 24.2 30.0 TAMM-PointBERT [27] ViT-bigG-14 73.1 88.5 91.9 54.8 74.5 83.3 14.9 26.2 33.4 TAMM-SparseConv [27] ViT-bigG-14 74.6 88.2 94.0 57.9 75.3 83.1 ShapeNet OpenShape-PointBERT(+Dlign) ViT-bigG-14 73.7 87.1 91.3 52.7 72.4 82.6 13.4 23.7 29.9 OpenShape-SparseConv (+Dlign) ViT-bigG-14 74.9 89.5 94.1 56.3 75.2 85.4 15.0 26.1 32.8 TAMM-PointBERT(+Dlign) ViT-bigG-14 73.7 89.1 92.2 57.3 73.6 82.3 15.8 27.4 33.0 OpenDlign-B32 ViT-B-32 68.4 86.4 92.6 46.7 72.0 83.0 17.3 29.2 36.3 OpenDlign-B16 ViT-B-16 74.2 90.5 95.4 49.3 74.0 84.4 23.2 37.5 44.3 OpenDlign-L ViT-L-14 77.8 93.1 96.4 52.1 74.6 82.8 27.5 41.3 47.8 OpenDlign-H ViT-H-14 82.6 96.2 98.4 59.5 76.8 83.7 31.3 46.7 53.2 OpenShape-SparseConv [24] ViT-bigG-14 83.4 95.6 97.8 56.7 78.9 88.6 33.7 49.3 57.4 OpenShape-PointBERT [24] ViT-bigG-14 84.4 96.5 98.0 52.2 79.7 88.7 34.0 49.7 57.9 TAMM-PointBERT [27] ViT-bigG-14 85.0 96.6 98.1 55.7 80.7 88.9 37.1 53.5 61.8 Ensemble TAMM-SparseConv [27] ViT-bigG-14 85.4 96.4 98.1 58.5 81.3 89.5 OpenShape-SparseConv (+Dlign ) ViT-bigG-14 85.0 96.1 97.9 56.2 78.5 87.8 34.1 50.5 58.5 OpenShape-PointBERT (+Dlign) ViT-bigG-14 85.4 96.5 98.2 51.1 77.4 88.2 35.6 50.4 57.9 TAMM-PointBERT(+Dlign) ViT-bigG-14 86.2 96.6 97.5 60.5 82.5 90.4 37.5 54.9 62.1 TAMM variants pre-trained on the ShapeNet dataset across all benchmarks. It also improves the performance of variants pre-trained on the ensemble dataset in at least two benchmarks, despite depth-aligned images being available only for the 3D data from ShapeNet, which represents no more than 10% of the ensemble dataset. Significantly, TAMM-PointBERT (+Dlign) achieves a 4.8% top-1 accuracy improvement on the ScanObjectNN dataset, and OpenShape-PointBERT (+Dlign) gains a 1.6% increase on the most challenging OmniObject3D benchmark. These results validate that using depth-aligned images is a universally effective strategy to enhance any 3D open-world pipeline. 4.2 Few-Shot 3D Classification We then assessed OpenDlign\u2019s few-shot classification capability by training a logistic regressor with linear probing on features from N-shot, 10-view depth maps. Similar to the zero-shot scenario, we extracted multi-view features using both fine-tuned and pre-trained OpenDlign encoders (see Fig. 2). At inference, the regressor aggregates logits from 10 views to predict the final label. We compared OpenDlign\u2019s few-shot performance with variants of ULIP [23], OpenShape [24], and TAMM [27], which extract features for training regressor from point clouds using their pre-trained point encoders. Table 2 shows OpenDlign outperforms all baselines across varied few-shot scenarios with 1 to 16 training samples per class. OpenDlign significantly outperforms the leading baseline on the OmniObject3D dataset, exceeding it by 8.8% and 11.8% in 4-shot and 8-shot classification, respectively. This underscores the robustness and transferability of its 3D representations. 
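For readers who want to reproduce this protocol in spirit, the sketch below shows few-shot linear probing on multi-view depth-map features with scikit-learn; the per-view training and probability aggregation are one plausible reading of the description above, and all names and shapes are illustrative assumptions rather than the released implementation.

```python
# Illustrative sketch of few-shot linear probing: a logistic regressor is fit on per-view
# depth-map features (half from the pre-trained encoder, half from the fine-tuned one),
# and view-level probabilities are aggregated at inference. Shapes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_few_shot(view_feats: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """view_feats: (num_objects, num_views, dim); labels: (num_objects,)."""
    n, v, d = view_feats.shape
    clf = LogisticRegression(max_iter=1000)
    # Train on individual views, replicating each object's label once per view.
    clf.fit(view_feats.reshape(n * v, d), np.repeat(labels, v))
    return clf

def predict_few_shot(clf: LogisticRegression, view_feats: np.ndarray) -> np.ndarray:
    n, v, d = view_feats.shape
    probs = clf.predict_proba(view_feats.reshape(n * v, d)).reshape(n, v, -1)
    return probs.sum(axis=1).argmax(axis=1)  # aggregate over the views of each object
```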
Table 2: Few-shot classification results on ModelNet40 [52], ScanObjectNN [53] and OmniObject3D [54]. Our results are averaged over 10 random seeds. ModelNet40 [52] ScanObjectNN [53] OmniObject3D [54] Model 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot 1-Shot 2-Shot 4-Shot 8-Shot 16-Shot ULIP-PointBERT [23] 54.4 64.3 74.1 79.3 81.3 46.7 55.1 62.5 70.7 73.9 37.5 41.2 44.1 49.7 53.4 OpenShape-PointBERT [24] 57.5 70.1 76.5 80.4 82.1 47.9 55.6 62.7 67.0 72.0 34.5 34.1 37.8 41.9 45.6 OpenShape-SparseConv [24] 62.8 72.0 78.9 82.9 85.7 47.3 56.3 64.5 68.2 74.0 36.0 37.0 41.5 44.7 48.6 TAMM-PointBERT [27] 62.4 73.3 81.7 83.8 85.9 48.2 57.1 63.6 72.1 76.5 38.9 41.6 46.3 50.1 54.2 OpenDlign (ours) 65.6 73.9 82.9 85.5 87.6 48.9 58.5 67.9 74.2 79.0 42.1 46.9 55.1 61.9 65.8 4.3 Zero-Shot 3D Object Detection We evaluated OpenDlign\u2019s capabilities in zero-shot 3D object detection using the ScanNet V2 dataset [58], which contains richly annotated 3D indoor scenes in 18 object categories. Following the PointCLIP V2 methodology [19], we began with the pre-trained 3DETR-m model to pinpoint 3D regions of interest, successfully delineating 3D bounding boxes and extracting the points inside each box. Finally, we applied OpenDlign to these points to generate our predictions. Table 3 illustrates OpenDlign\u2019s zero-shot detection prowess using mean Average Precision (mAP) at IoU thresholds of 0.25 and 0.5, achieving scores of 50.72% and 37.97%, respectively. Table 3: Zero-shot 3D object detection results on ScanNet V2 [58]. Method Mean Cabinet Bed Chair Sofa Table Door Window Counter Desk Sink Bathtub AP25: PointCLIP [20] 6.00 3.99 4.82 45.16 4.82 7.36 4.62 2.19 1.02 4.00 13.40 6.46 PointCLIP V2 [19] 18.97 19.32 20.98 61.89 15.55 23.78 13.22 17.42 12.43 21.43 14.54 16.77 OpenDlign (ours) 50.72 38.91 67.27 86.33 72.01 58.72 44.58 32.07 50.49 62.04 51.98 64.29 AP50: PointCLIP [20] 4.76 1.67 4.33 39.53 3.65 5.97 2.61 0.52 0.42 2.45 5.27 1.31 PointCLIP V2 [19] 11.53 10.43 13.54 41.23 6.60 15.21 6.23 11.35 6.23 10.84 11.43 10.14 OpenDlign (ours) 37.97 17.04 66.68 73.92 54.96 50.03 24.73 12.84 20.44 41.64 34.17 64.29 It significantly outperforms PointCLIP V2 by more than 31.75% and 26.44%. Remarkably, OpenDlign can detect the \u2019Sofa\u2019 shape with an AP50 of 54.96%, whereas PointCLIP and V2 score below 10, demonstrating OpenDlign\u2019s superior capability in extracting robust 3D representations from sparse and noisy point clouds in real-world indoor scenes. 4.4 Cross-Modal Retrieval 3D shapes were retrieved by computing the cosine similarity between the embeddings of a query and those generated by OpenDlign, followed by a k-nearest neighbors (kNN) analysis to find the most similar shapes. Fig. 3 illustrates OpenDlign\u2019s capability in matching 3D shapes to image and text queries. Column (a) illustrates its precision in distinguishing sub-categories like grand versus upright pianos from image queries. Column (b) demonstrates successful shape retrieval using distinct text descriptions, such as "Batmobile armored". Notably, averaging image and text query embeddings allows OpenDlign to find shapes that combine elements of both inputs. For example, merging a running horse image with the text "man" results in the retrieval of both a centaur and a running man, as shown in Fig. 3(c). A house image combined with "tree" retrieves a treehouse.
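A minimal sketch of this retrieval step follows (illustrative only; the embedding arrays, normalization, and function name are assumptions, not the paper's code): cosine similarity against precomputed shape embeddings, with image and text queries optionally averaged before the k-nearest-neighbour lookup.

```python
# Minimal sketch of cross-modal retrieval: shape, image, and text embeddings are assumed
# to be precomputed and L2-normalized, so cosine similarity reduces to a dot product.
import numpy as np

def retrieve(shape_embs: np.ndarray, query_embs: list, k: int = 2) -> np.ndarray:
    """shape_embs: (num_shapes, dim); query_embs: one or more (dim,) query vectors."""
    q = np.mean(query_embs, axis=0)           # e.g., average an image and a text embedding
    q = q / np.linalg.norm(q)
    sims = shape_embs @ q                      # cosine similarity with normalized rows
    return np.argsort(-sims)[:k]               # indices of the k most similar shapes
```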
Figure 3: 3D shape retrieval results. (a) Two most similar shapes for each query image. (b) Most similar shapes for each query text. (c) Two most similar shapes for combined image and text queries. 4.5 Ablation Study Ablation studies were conducted on zero-shot classification benchmarks to assess the contribution of each component in OpenDlign. Consistently, all OpenDlign variants used in these studies employed OpenCLIP-ViT-H-14 as their backbone. ShapeNet was the default training dataset for all models. Contour-Aware Projection. Replacing PointCLIP V2\u2019s projection pipeline [19] with our contour-aware version, as shown in Table 4, enables a pre-trained CLIP to reach 68.8% zero-shot accuracy on ModelNet40, even outperforming several baselines that need extra training. This suggests that through large-scale contrastive learning, CLIP can understand depth maps as well as RGB images, as long as key shape features are maintained during projection. Multimodal Alignment. Table 4 shows that alignment between depth maps and depth-aligned images (depth-daRGB) substantially boosts performance. It improves top-1 accuracy by over 10% across datasets, indicating that depth-daRGB alignment effectively generalizes CLIP to depth maps, with consistent gains in zero-shot inference, regardless of depth-specific text prompts. Further analysis compared depth-daRGB alignment against three alternatives: depth-rendRGB (aligning depth maps with CAD-rendered RGB images), daRGB-text & depth (aligning depth-aligned images with text before depth-daRGB alignment), and depth-text & daRGB (simultaneous alignment of depth maps with text and depth-aligned images). Table 4: Ablation study for OpenDlign on ModelNet40 [52] and ScanObjectNN [53]. Acc. improvements over the baseline (first row) are highlighted in green. Contour-Aware Projection Multimodal Alignment Depth-Specific Texts Logits Aggregation ModelNet40 [52] Top 1 Top 3 Top 5 ScanObjectNN [53] Top 1 Top 3 Top 5 \u2717 \u2717 \u2717 \u2717 59.7 79.6 86.3 42.8 66.7 78.4 \u2713 \u2717 \u2717 \u2717 68.8 (+9.1) 85.8 (+6.2) 91.6 (+5.3) 44.6 (+1.8) 68.3 (+1.6) 78.9 (+0.5) \u2713 \u2713 \u2717 \u2717 79.2 (+19.5) 94.4 (+14.8) 97.6 (+11.3) 56.9 (+14.1) 75.5 (+8.8) 83.8 (+5.4) \u2713 \u2717 \u2713 \u2717 75.9 (+16.2) 91.0 (+11.4) 95.4 (+9.1) 49.3 (+6.5) 69.8 (+3.1) 79.2 (+0.8) \u2713 \u2713 \u2713 \u2717 80.2 (+20.5) 95.3 (+15.7) 97.7 (+11.4) 58.1 (+15.3) 75.2 (+8.5) 84.2 (+5.8) \u2713 \u2713 \u2717 \u2713 81.0 (+21.3) 95.2 (+15.6) 97.6 (+11.3) 56.8 (+14.0) 74.6 (+7.9) 81.6 (+3.2) \u2713 \u2713 \u2713 \u2713 82.6 (+22.9) 96.2 (+16.6) 98.4 (+12.1) 59.5 (+16.7) 76.8 (+10.1) 83.7 (+5.3) Table 5 shows depth-daRGB outperforming depth-rendRGB by 6.8% on the ScanObjectNN dataset, confirming concerns that alignment with rendered images may lead to overfitting on specific 3D shapes. Moreover, daRGB-text & depth performs worst, suggesting that pre-aligning depth-aligned images with text compromises CLIP\u2019s ability to generate robust image representations, thus affecting subsequent depth-daRGB alignment efficacy.
Depth-daRGB\u2019s superior performance on ModelNet40 and OmniObject3D compared to depth-text & daRGB shows that aligning depth maps with depth-aligned images already aligns them with text indirectly, making additional text alignment unnecessary and, if added, potentially limiting OpenDlign\u2019s generalization. Depth-Specific Texts. Table 4 indicates that OpenDlign performs better in zero-shot classification when using depth-specific prompts, whether or not it incorporates multimodal alignment or logit aggregation. This implies that part of the recognition inaccuracy results from processing input data as typical RGB images rather than as depth maps. Logits Aggregation. Results in Table 4 show that multi-view logit aggregation improves zero-shot classification on all datasets by combining logits from pre-trained and fine-tuned encoders. This approach effectively mitigates the catastrophic forgetting problem in OpenDlign\u2019s multimodal alignment, enabling it to recognize 3D objects identifiable by both pre-trained CLIP and OpenDlign. Varying Number of Depth Views. OpenDlign, like other depth-based methods, necessitates extracting multiple embeddings from multi-view depth maps for zero-shot inference. Figure 4 illustrates that OpenDlign\u2019s zero-shot accuracy on both ModelNet40 and OmniObject3D increases as the number of depth map views rises. Notably, OpenDlign achieves top benchmark performance, comparable to TAMM-PointBERT, with no more than two views, indicating a good balance between latency in embedding extraction and effective zero-shot classification. Furthermore, we observed a slower performance improvement on OmniObject3D, reflecting its finer-grained classification requirements. Table 5: Ablation study on various alignment strategies. Aligning with the text modality was achieved by fine-tuning the image encoder. Alignment Strategy MNet40 Top 1 Top 5 ScanNN Top 1 Top 5 Omni3D Top 1 Top 5 depth-rendRGB 78.8 96.8 52.7 82.5 29.4 51.8 daRGB-text & depth 78.6 96.4 51.1 79.6 29.1 51.6 depth-text & daRGB 79.4 98.0 60.7 86.0 29.5 52.7 depth-daRGB (ours) 82.6 98.4 59.5 83.7 31.3 53.2 Figure 4: Impact of the number of views on OpenDlign\u2019s zero-shot performance. 5 Conclusion and Future Work In this study, we introduce OpenDlign, an open-world framework that enhances 3D representation by efficiently fine-tuning CLIP with depth-aligned images, which exhibit more diverse textures and colors than CAD-rendered images. Our experiments demonstrate OpenDlign\u2019s superior performance in various 3D zero-shot and few-shot tasks, especially with real-scanned objects. However, generating depth-aligned images with the ControlNet model is slower than direct CAD rendering, which extends training dataset preparation time. Moreover, depth-aligned images can be created from both CAD objects and real 3D scenes, where the texture diversity gap between depth-aligned and CAD-rendered scenes is likely even greater, which could further showcase OpenDlign\u2019s 3D scene understanding capabilities.", |
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2311.18681v1", |
| "title": "RaDialog: A Large Vision-Language Model for Radiology Report Generation and Conversational Assistance", |
| "abstract": "Conversational AI tools that can generate and discuss clinically correct\nradiology reports for a given medical image have the potential to transform\nradiology. Such a human-in-the-loop radiology assistant could facilitate a\ncollaborative diagnostic process, thus saving time and improving the quality of\nreports. Towards this goal, we introduce RaDialog, the first thoroughly\nevaluated and publicly available large vision-language model for radiology\nreport generation and interactive dialog. RaDialog effectively integrates\nvisual image features and structured pathology findings with a large language\nmodel (LLM) while simultaneously adapting it to a specialized domain using\nparameter-efficient fine-tuning. To keep the conversational abilities of the\nunderlying LLM, we propose a comprehensive, semi-automatically labeled,\nimage-grounded instruct dataset for chest X-ray radiology tasks. By training\nwith this dataset, our method achieves state-of-the-art clinical correctness in\nreport generation and shows impressive abilities in interactive tasks such as\ncorrecting reports and answering questions, serving as a foundational step\ntoward clinical dialog systems. Our code is available on github:\nhttps://github.com/ChantalMP/RaDialog.", |
| "authors": "Chantal Pellegrini, Ege \u00d6zsoy, Benjamin Busam, Nassir Navab, Matthias Keicher", |
| "published": "2023-11-30", |
| "updated": "2023-11-30", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV", |
| "cs.CL" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", |
| "gt": "Radiology plays a key role in clinical decision-making, with radiology reports acting as the major way of communica- tion between radiologists and other clinicians [14]. Within radiology, chest X-rays are the most frequent imaging exam and are crucial for diagnosing common thoracic diseases such as pneumonia [19]. However, writing accurate and concise chest X-ray reports is time-intensive and demands significant expertise, while the daily amount of images to be examined is rising [40]. In this context, automated re- port generation is a potential solution to reduce radiologists\u2019 workload and support fast and accurate diagnostic decision- *Equal contribution Please write a radiology report for this image. No evidence of consolidation to suggest pneumonia is seen. There is some retrocardiac atelectasis. A small left pleural effusion may be present. No pulmonary edema. A right granuloma is unchanged. The heart is mildly enlarged, unchanged. There is tortuosity of the aorta. Is the patient\u2019s heart healthy? No, the patient has mild cardiomegaly. I think there is no atelectasis. Can you adapt the report? No evidence of consolidation to suggest pneumonia is seen. A small left pleural effusion may be present. No pulmonary edema. A right granuloma is unchanged. The heart is mildly enlarged, unchanged. There is tortuosity of the aorta. Figure 1. RaDialog is a vision-language model that integrates au- tomated radiology report generation with interactive dialog. This facilitates a conversational workflow, allowing for real-time query resolution and report adjustments by the radiologist. making [20]. Further, with the rise of conversational chat- bots, there lies an unexplored potential beyond mere report generation: interactive conversational assistance. Such in- teractivity could revolutionize the radiology workflow, en- abling a collaborative diagnostic process between expert ra- diologists and AI-based tools. The latest generations of large language models (LLMs) have shown excellent performance in a large number of tasks, demonstrating their abilities to understand context, have coherent conversations, and integrate domain knowl- edge [6, 8, 9, 35, 48]. In the medical field, LLMs have shown promise in tasks such as solving medical exams [43, 47], medical literature comprehension [42], and conversa- tional diagnosis assistance [26, 47]. Large Vision-Language models (VLLMs) aim to pair these powerful LLMs with im- age information, building a bridge between the visual and the textual domain [1, 10, 24, 29, 30]. As medical imaging forms a core part of diagnosis and treatment, the potential for VLLMs in radiology is immense. However, applying such models to medical images poses unique challenges due arXiv:2311.18681v1 [cs.CV] 30 Nov 2023 Image Encoder CheXpert Classifier LLM Alignment module 32 image tokens 0 0 1 0 0 0 0 0 0 1 0 0 1 0 structured finding labels Image Features: Predicted Findings: opacity, pneumonia, edema <instruction> ,... , Image Feature Extraction Prompt Construction <answer to the instruction> ,... , opacity, pneumonia, edema Figure 2. Visualization of our pipeline. A chest X-ray image undergoes feature extraction using the Image Encoder. The visual features are aligned with the text space to produce 32 image tokens. Concurrently, the CheXpert Classifier provides structured findings of the image. 
The Prompt Construction Module then processes both of these image descriptors to create a comprehensive prompt for the Large Language Model (LLM), which in turn generates the response depending on the task instruction. While some very recent works propose medical VLLMs [23, 33, 49, 52], they are either private models trained with proprietary data or focus on general medical visual question answering (VQA). At the same time, while state-of-the-art radiology report generation methods perform well in generating coherent reports, their factual correctness is limited, and no conversational assistance is possible [7, 16, 34, 50, 51, 53]. We hypothesize that LLM-based interactive dialog systems can improve factual correctness in report generation and enhance the radiology workflow through quick clarifications, report refinements, collaborative insights for complex cases, and reduced mental load for routine tasks. Moreover, such a model could also be used for more general tasks, such as asking knowledge questions or explaining a report to a patient with limited medical knowledge. To this end, we propose RaDialog, an interactive radiology assistant with a focus on automated report generation and auxiliary, interactive downstream tasks (see Fig. 1). RaDialog integrates both image features and structured pathology findings with an LLM, significantly improving over previous methods in the clinical correctness of generated reports. Furthermore, our model can provide interactive assistance and human-AI collaboration, which we demonstrate on a wide range of downstream tasks. We achieve this by parameter-efficient fine-tuning on our proposed instruct dataset, consisting of a mixture of different tasks, such as radiology report generation, correction, and question answering. This diverse dataset allows us to keep the general capacities of LLMs while learning radiology-specific knowledge and style. We encourage open research in advanced medical VLLMs by providing a public model that can be trained with limited compute and data resources. In summary, our main contributions are as follows: \u2022 We present the first validated, publicly available model that harnesses the power of LLMs for radiology report generation, improving the state-of-the-art by 7.3% in clinical efficacy on MIMIC-CXR [19]. \u2022 We design a semi-automatically labeled image-grounded instruct dataset with a unique emphasis on interactivity, enabling a wide range of dialog-based downstream tasks. \u2022 We propose a novel strategy to incorporate medical image information in LLMs using parameter-efficient fine-tuning to integrate both text-aligned image features and structured findings.", |
| "main_content": "Large Vision-Language Models Large language models (LLMs) have achieved remarkable success across a multitude of Natural Language Processing tasks. The integration of visual information into LLMs represents a significant step in multi-modal machine learning, combining the foundational knowledge of LLMs with image recognition. One of the first VLLMs, Flamingo [1], bridges powerful vision-only and language-only models for handling interleaved image and text data by repeatedly fusing visual features into each block of the frozen language model. In the following, BLIP-2 [24] proposed a more parameter-efficient approach by training a BERT-based [12] alignment module for generating language-aligned tokens as input into a frozen LLM. Other works, such as LLaVA [30], opt to fine-tune the entire LLM, leading to improved results but much higher training costs. We fuse the use of a resourceefficient alignment module and parameter-efficient LLM fine-tuning [15], which only trains a very small portion of the LLM\u2019s weights, to keep training costs low while allowing domain adaptation. To enable conversations, LLaVA proposes to generate instruction-following data using GPT4, which motivates our instruct dataset. Very recently, a handful of works proposed multi-modal biomedical chatbots and generalist models [23, 33, 49, 52]. The most similar model, ELIXR, directly builds upon the BLIP-2 framework [24], training for contrastive and generative tasks on X-ray image-report pairs. However, unlike our model, it is neither trained for interactive tasks nor tested on report generation and remains private due to the proprietary PaLM-2 [2] model. MedPaLM [49] proposes a private, PaLM-based [9] generalist model with impressive results for a variety of tasks and different types of medical images, including VQA, image classification and report generation. In contrast, we focus on only X-ray images, outperforming MedPaLM in report generation for comparable model sizes while only using public data. This indicates that for limited resources, focusing on a more narrow domain can be beneficial. Publicly available models include LLaVAMed [23] and Med-Flamingo [33], built upon LLaVA [30] and Flamingo [1] respectively. Both utilize image-caption pairs from a wide spectrum of medical literature, facilitating a generalized understanding of diverse medical imagery and have been evaluated using established medical VQA datasets. However, neither focuses on X-ray report generation and understanding and they seem to lack clinical correctness when tested on medical image interpretation. Radiology Report Generation The automatic generation of radiology reports has become a significant research focus in recent years [7, 50, 51, 53]. To improve clinical correctness, some works adopt a two-step pipeline, first predicting core concepts and then generating reports from these [34, 46]. RGRG [46] explicitly detects relevant and abnormal anatomical regions and generates separate sentences per region to then merge them into a complete report. M2 Tr [34] first predicts global context descriptions and reforms them into a coherent text with a transformer decoder. Several other works have proposed a form explicit knowledge injection using finding labels [28, 45], abnormal region detection [46] or symptom graphs [16, 25, 28]. Liu et al. [28] explicitly predict relevant topics to form topicrelated image features and utilize a static medical knowledge graph. 
In KiUT [16] and DCL [25], a dynamic symptom graph updated with knowledge from a disease classifier [16] or reports from similar images [25] is used. Their success at using structured knowledge has inspired the integration of structured findings into our pipeline. However, all these methods are only trained for report generation and, therefore, cannot perform any other tasks. In contrast, our VLLM-based pipeline provides more clinically accurate reports and supports dialog-based interactivity for a multitude of tasks. Therefore, we consider RaDialog not just a report generation method but a conversational radiology assistant. 3. Method Our method leverages Large Language Models (LLMs) and visual feature extraction techniques to address the complexities of medical imaging diagnostics, particularly focusing on X-ray image report generation. In this section, we present our model architecture, training and inference pipeline, and instruct dataset. 3.1. Model Architecture We propose to adopt a large language model for interactive radiology report generation and conversational assistance. Our architecture consists of three main components: an Image Feature Extraction Module, which extracts both visual encodings and structured labels from the given image; a Prompt Construction Module; and a Large Language Model (LLM), which outputs a response to the instruction. An overview of the architecture is provided in Fig. 2. Image Feature Extraction We integrate a combination of structured findings and visual features into the image encoding process to capture both the presence of pathologies on a high level as well as more nuanced details in the image. Visual Feature Encoding Given a chest X-ray image as input, we first extract patch-wise image embeddings using BioViL-T [4], a pre-trained domain-specific X-ray encoder. BioVil-T is pre-trained using contrastive languageimage learning on chest X-rays paired with radiology reports, making it a useful foundation model for understanding X-ray images. The patch-based features are passed to an alignment module, which transforms them into 32 embedded language model tokens. Inspired by BLIP-2 [24], we use a BERT [12] model as an alignment module to get textaligned image features. Then, we incorporate the aligned image features into the input of the language model prompt. Structured Findings Unlike the visual feature encoder, our CheXpert Classifier is specifically designed to provide structured findings for the medical image, which ensures the clinical efficacy of our model. Concretely, our model solves the task of multi-label classification, where each class corresponds to one pathology. We train this model separately using CheXbert [44] labels predicted on the findings section of the MIMIC-CXR [19] reports. Prompt Construction The image features, structured findings, and instruction are converted into one prompt as input for the LLM. First, the 32 image tokens from the alignment module are added to the prompt as \"Image Information: <IMG>\", providing the LLM with contextual image features. Next, the structured findings from the CheXpert Classifier are introduced with \"Predicted Findings: <findings>\". This gives the LLM a clear understanding of the image\u2019s key observations, improving clinical accuracy. The prompt concludes with an instruction, like \"Write a radiology report.\" to specify the expected output. This method ensures that the generated report is relevant, precise, and meets user specifications. 
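To illustrate this assembly step, the following is a small sketch of how such a prompt could be built; the helper name and exact string layout are assumptions for illustration and follow the description above rather than the released RaDialog code.

```python
# Illustrative sketch of RaDialog-style prompt construction: 32 aligned image tokens,
# the CheXpert classifier's predicted findings, and the task instruction are concatenated
# into one LLM prompt. Function name and formatting are assumptions, not the released code.
from typing import List

def build_prompt(image_token_placeholder: str, findings: List[str], instruction: str) -> str:
    findings_str = ", ".join(findings) if findings else "no predicted findings"
    return (
        f"Image Information: {image_token_placeholder}\n"  # stands in for the 32 aligned image tokens
        f"Predicted Findings: {findings_str}\n"
        f"{instruction}"
    )

prompt = build_prompt("<IMG>", ["opacity", "pneumonia", "edema"],
                      "Write a radiology report for this chest X-ray.")
```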
Exact instruction prompt formulations are provided in the supplementary material. Language Model We utilize an LLM to process the prompt and produce an instruct-specific response. As the training data of generalist LLMs usually consists of only limited medical information, we choose to fine-tune our language model on radiology reports as well as instructions, improving both its medical knowledge and aligning its writing style with the one of radiologists. Furthermore, this fine-tuning also teaches it to work with image features and structured finding labels. 3.2. Model Training and Inference We train our entire architecture in multiple stages, where every network except the image encoder is fine-tuned. Training the Alignment Module We perform a visual language alignment training to fine-tune the alignment module. This training follows the stage-1 pre-training of BLIP-2 [24], which jointly optimizes for three objectives: image-text contrastive learning using a contrastive loss, image-grounded text generation optimized using language modeling loss, and image-text matching modeled as a classification task and trained with cross-entropy loss. All tasks are performed on X-ray image-report pairs from the MIMIC-CXR [19] training set. Training the Chexpert Classifier We extract CheXpert labels for our dataset using the publicly available CheXbert model [44]. We use a domain-specific image encoder [4] but fine-tune it for the task of multi-label CheXpert label prediction. We apply log-weighted cross-entropy loss during training and evaluate with macro F1 to encourage the model to focus on all pathologies rather than only frequent ones. Adapting the LLM to Images and Radiology Generalpurpose, off-the-shelf LLMs have no ability to work with images, and while they have some general radiology knowledge, they lack specific information, as well as the correct free-text report style necessary to perform well on our tasks. Therefore, we fine-tune the LLM on a dataset comprising pairs of radiology images and their reports [19], enabling it to work with the image embeddings coming from the alignment module, structured finding labels and adapting its knowledge and style to the radiology domain. We include the images and finding labels in our prompt and perform parameter-efficient fine-tuning with language modeling loss [15], allowing us to fine-tune with limited resources. Inference Pipeline During inference, an input image is first processed by the image encoder, which passes its features to the alignment module. The aligned features and the output of the CheXpert Classifier are then fed into the Prompt Construction Module. The assembled prompt, containing all necessary image-derived context, is provided to the fine-tuned LLM, which generates a task-specific output, such as a report. Through this structured approach, the system aims to provide reliable and accurate radiological responses, assisting professionals in their diagnosis. 4. Instruct Dataset Training only on image and report pairs leads to catastrophic forgetting [31] in the LLM, where its ability to perform tasks other than report generation reduces significantly. To ensure our model is capable of many diverse downstream tasks and keeps general conversation abilities, we design a new instruct dataset, which consists of eight different tasks as shown in Fig. 3. For each of the seven instruct tasks, excluding report generation, we formulate ten different prompts, from which we choose randomly to generate samples. 
4. Instruct Dataset Training only on image and report pairs leads to catastrophic forgetting [31] in the LLM, where its ability to perform tasks other than report generation reduces significantly. To ensure our model is capable of many diverse downstream tasks and keeps general conversation abilities, we design a new instruct dataset, which consists of eight different tasks as shown in Fig. 3. For each of the seven instruct tasks, excluding report generation, we formulate ten different prompts, from which we choose randomly to generate samples. For the tasks of report generation, complete and binary CheXpert QA, and natural language explanations, we utilize existing datasets [19, 21] to retrieve the corresponding ground truth. For the remaining tasks, namely correction, summarization, easy language, and region QA, we use the non-fine-tuned version of the LLM to generate pseudo ground truth answers, similar to replay-based continual learning [41]. The answers are generated based on a free-text radiology report and a sampled instruction prompt. When using the instruct dataset for training, the model is asked to produce the corresponding ground truth, depending on the instruction, significantly reducing catastrophic forgetting. In the following, we describe the individual tasks:
Report Generation: Produce a free-text radiology report given an X-ray. We use the image-report pairs from the MIMIC-CXR dataset [19] as ground truth.
Findings QA: Answer a question about the CheXpert labels by either listing all findings in the image (complete mode) or providing a straightforward yes/no answer about a specific finding (binary mode). We employ MIMIC-CXR CheXbert [44] labels for supervision.
Region QA: Answer a question about a specific region, such as the heart or lung, which can be binary as well as open-ended. The supervision signal is LLM-generated.
Easy Language: Reformulate the produced report into simpler and more understandable language. The supervision signal is LLM-generated.
Summarization: Summarize the report as bullet points or a short text. The supervision signal is LLM-generated.
Correction: Correct an error in the produced report. The training prompts are generated by detecting wrongly predicted CheXpert labels on reports predicted by the non-fine-tuned LLM. The supervision signal is LLM-generated.
Natural Language Explanation: Clarify and explain which part of the report indicates a specific pathology. We utilize the MIMIC-NLE dataset [21] as ground truth.
Figure 3. Illustration of the Instruct Dataset for LLM training. It covers eight task categories, including report generation, question answering, correction, summarization, easy language, and explanations. Some tasks derive from existing datasets [19, 21], and others employ a non-fine-tuned LLM for pseudo ground truth answer generation.
Method | CE | BS | B-1 | B-4 | MTR | R-L
R2Gen [7] | 27.6 | 0.27* | 35.3 | 10.3 | 14.2 | 27.7
MDT+WCL [53] | 29.4 | 0.28* | 37.3 | 10.7 | 14.4 | 27.4
M2 Tr. [34] | 30.8 | 0.39* | 37.8 | 10.7 | 14.5 | 27.2
ITA [50] | 30.8 | - | 39.5 | 12.1 | 14.7 | 28.4
METransformer [51] | 31.1 | - | 38.6 | 12.4 | 15.2 | 29.1
KiUT [16] | 32.1 | - | 39.3 | 11.3 | 16.0 | 28.5
RaDialog-INS | 38.6 | 0.39 | 34.0 | 9.7 | 13.6 | 27.0
RaDialog-RG | 39.4 | 0.40 | 34.6 | 9.5 | 14.0 | 27.1
Table 1. Performance comparisons of RaDialog with existing methods on the test set of the MIMIC-CXR dataset [19] with respect to CE [44] and NLG [3, 27, 36, 55] metrics. The best values are highlighted in bold. The proposed model outperforms previous methods in CE by a large margin and achieves state-of-the-art BertScore results. Values marked with * are taken from [18].
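For the tasks without existing annotations described above, the pseudo ground truth is produced by prompting the non-fine-tuned LLM with a report and a sampled instruction. The snippet below is a minimal, hypothetical sketch of that generation loop; the checkpoint name, prompt list, and query format are assumptions, not the authors' exact setup.

```python
import random
from transformers import pipeline

# Sketch of pseudo ground-truth generation for instruct tasks without annotations
# (summarization, easy language, region QA, correction). Names are illustrative.
generator = pipeline("text-generation", model="lmsys/vicuna-7b-v1.5")  # assumed base checkpoint

SUMMARY_PROMPTS = [
    "Write a short summary of this report, please.",
    "Summarize the report as bullet points.",
    # ... the actual dataset uses ten prompt variants per task
]

def make_pseudo_target(report: str, prompts: list[str]) -> dict:
    instruction = random.choice(prompts)                 # sample one task prompt
    query = f"Report: {report}\nInstruction: {instruction}"
    answer = generator(query, max_new_tokens=256, return_full_text=False)[0]["generated_text"]
    # The (instruction, report, answer) triple becomes one instruct-dataset sample.
    return {"instruction": instruction, "input": report, "target": answer}
```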
5. Experimental Setup Dataset We use the official train, validation and test split of the MIMIC-CXR dataset [19] to train and evaluate our model. It includes 377,110 chest X-ray images and corresponding free-text radiology reports. A report usually consists of an indication section describing the reason for the examination, a findings section outlining the observations of the radiologist, and an impression section summarizing the major findings. Similar to previous work [7, 32, 46], we use the findings section as the ground truth report and exclude all samples with an empty findings section. Evaluation Metrics To evaluate our two models, one trained only on report generation (RaDialog-RG) and the second trained with the instruct dataset (RaDialog-INS), we report clinical efficacy (CE) [44], embedding-based text similarity [55], and common Natural Language Generation (NLG) [3, 27, 36, 55] metrics. To calculate the CE score, we use the CheXbert labeler [44] to predict 14 common chest X-ray pathologies given an input text. By comparing the occurrence of these in the ground truth and the generated report, we capture the diagnostic correctness of our model. We report macro F1 to reflect the performance on all pathologies. The embedding-based similarity makes the evaluation less reliant on exact word matches and is realized using BertScore (BS) [55]. Finally, we use BLEU-1 (B-1), BLEU-4 (B-4) [36], ROUGE (R-L) [27], and METEOR (MTR) [3] scores as NLG metrics. These metrics rely on word and phrase-level similarity, therefore mainly judging the closeness of the exact formulations in the reference reports. Given the wide variability in report formulations with the same clinical meaning, there is a growing consensus in the community that these conventional NLG metrics alone are not well suited to judge the clinical correctness of generated radiology reports [5, 22, 37\u201339, 46, 54]. However, we have included these metrics to maintain comparability with existing studies and provide a comprehensive overview. Implementation Details We use vicuna-7b [8] as our base LLM and fine-tune it using LoRA [15]. We use the BioViL-T image encoder [4], an adapted ResNet-50 model pre-trained on radiology data, for visual feature extraction and as initialization for our CheXpert classifier. We train our model on a single Nvidia A-40 GPU. For LoRA training, we use the PEFT library (https://huggingface.co/docs/peft/index) from Hugging Face with a learning rate (LR) of 3 \u00d7 10\u22124 and train for up to five epochs, using early stopping on the validation set. For the alignment module and CheXpert classifier training, we use pytorch-lightning 1.6.5 [13]. CheXpert classification is trained for six epochs with an LR of 5 \u00d7 10\u22125. For alignment model pre-training, we use a cosine annealing LR between 1 \u00d7 10\u22125 and 1 \u00d7 10\u22124 with a linear warmup over 1000 steps starting from an LR of 1 \u00d7 10\u22126, and train for four epochs.
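The clinical efficacy score described above reduces to comparing CheXbert label vectors of generated and reference reports with macro F1. A minimal sketch follows; the `chexbert_label` callable is a stand-in for the CheXbert labeler, assumed to return a binary vector over the 14 pathology classes.

```python
from sklearn.metrics import f1_score

# Sketch of the clinical efficacy (CE) computation: CheXbert labels are extracted
# from both the generated and the reference report, then macro F1 is computed
# over the 14 pathology classes. `chexbert_label` is a hypothetical stand-in.
def clinical_efficacy(generated: list[str], reference: list[str], chexbert_label) -> float:
    y_pred = [chexbert_label(r) for r in generated]   # (n_reports, 14) binary vectors
    y_true = [chexbert_label(r) for r in reference]
    return f1_score(y_true, y_pred, average="macro", zero_division=0)
```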
6. Results and Discussion 6.1. Radiology Report Generation We evaluate our model on radiology report generation and provide comparisons to other methods in Tab. 1. For a fair comparison, we include only methods evaluated on the findings sections of the official MIMIC-CXR test set. RaDialog outperforms all prior works in the clinical efficacy metric by a large margin of 7.3%, demonstrating our model\u2019s ability to infer a correct clinical diagnosis. While we achieve inferior results in the word-based NLG metrics, we also outperform previous methods in the BertScore, indicating that RaDialog often does not predict the exact formulation of the ground truth report but the correct content. We argue that while an LLM understands context and semantics more deeply, a smaller model trained only on a specific dataset may mirror the dataset\u2019s exact wording more closely, resulting in higher BLEU, METEOR, and ROUGE scores due to better n-gram overlap without necessarily improving clinical correctness. Fig. 4 shows qualitative report generation results on a frontal and a lateral chest X-ray. It can be observed that our model captures a majority of the findings. We further compare our model to MedPaLM [49] in Tab. 2. We separated this comparison because MedPaLM is the only state-of-the-art method that uses the indication section of the report as input, making it not comparable to other methods. We also evaluate our model with this additional input information and show that using the indication section leads to a significant jump in performance in the NLG metrics. We hypothesize that the indication section helps the model to understand which topics it should mention in the report, as it often includes information such as the reason for the examination. Even though MedPaLM relies on image and text encoders pre-trained with large-scale private data, we outperform MedPaLM-12b in all metrics and the 84b variant in the text-based metrics while having comparable clinical efficacy. This shows the value of our model as an open-source model trained with public data, allowing further development on top of this model.
Method | Public | Ind. | CE | B-1 | B-4 | R-L
MedPaLM-12b | \u00d7 | \u2713 | 37.3 | 30.9 | 10.4 | 26.2
MedPaLM-84b | \u00d7 | \u2713 | 39.8 | 32.2 | 11.3 | 27.3
RaDialog-RG | \u2713 | \u00d7 | 39.4 | 34.6 | 9.5 | 27.1
RaDialog-RG | \u2713 | \u2713 | 39.2 | 39.2 | 14.8 | 31.6
Table 2. Comparison to MedPaLM [49], a closed-source model using encoders trained with private data. RaDialog (7b parameters) outperforms MedPaLM-12b and has a comparable CE score with much better NLG metrics than the 12x larger MedPaLM-84b. \u201dInd.\u201d denotes if the indication section was used as input.
To set our results in the context of current developments in the domain of VLLMs, we provide a qualitative comparison to LLaVA-Med [23], a public biomedical vision-language model, and GPT4-Vision (GPT4-V) [35] in Fig. 5. While RaDialog excels in capturing very detailed medical findings on the image, both LLaVA-Med and GPT4-V provide trivial and inaccurate reports. LLaVA-Med has difficulties meeting the style of radiology reports, describing how a normal chest X-ray typically looks. GPT4-V seems to be aware of the correct style for report writing but predicts a very generic report of a healthy patient, indicating it cannot really understand the image. This shows the importance of developing domain-specific models targeted at clinical correctness and evaluating them systematically on the application of interest. 6.2. Interactive Downstream Tasks Apart from report generation, we further evaluate our model on different interactive downstream tasks.
Report Correction To perform a quantitative evaluation of report correction, we automatically generate correction prompts for the entire MIMIC-CXR [19] test set by asking to correct all incorrect pathologies found by the CheXbert [44] labeler in the initially predicted report. As other report generation works do not allow interactive prompting, we cannot compare with them. Instead, we compare our base model to our instruct model, which was trained on our instruct dataset. We evaluate both our models on report generation before and after correction and report the change in performance (see Tab. 3). As our correction prompts directly target the classes used in calculating the clinical efficacy and mainly affect single words, the language-based performance metrics change only minimally. The greatly improved clinical efficacy indicates that our model correctly reacts to the instruction. Comparing the RG model to the instruct model, it is clear that the instruct training was very effective, increasing the improvement by about 20%.
Method | CE | BS | B-1 | B-4 | MTR | R-L
RaDialog-RG | +10.5 | -0.02 | +2.3 | +0.6 | +0.2 | -0.6
RaDialog-INS | +33.1 | +0.0 | +1.5 | +0.1 | +0.4 | -0.1
Table 3. Report generation performance improvement (compared to results in Tab. 1) after correction based on CheXpert [17] findings incorrectly mentioned in the first iteration report. Our instruct-trained model (RaDialog-INS) is clearly superior to RaDialog-RG, which is trained only on report generation.
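The correction prompts can be derived automatically by comparing the CheXbert labels of the first generated report with the ground truth labels. The sketch below illustrates this; the prompt wording is modeled on the examples shown in Fig. 3 and Fig. 6, and the helper is a hypothetical reconstruction rather than the authors' code.

```python
# Sketch of automatic correction-prompt generation: pathologies whose CheXbert
# label differs between the generated report and the ground truth are turned
# into one correction instruction. Wording is an illustrative assumption.
def make_correction_prompt(pred_labels: set[str], gt_labels: set[str]) -> str | None:
    false_positives = pred_labels - gt_labels   # mentioned in the report but not present
    false_negatives = gt_labels - pred_labels   # present but missing from the report
    parts = []
    if false_positives:
        parts.append("I don't think the patient has " + ", ".join(sorted(false_positives)))
    if false_negatives:
        parts.append("the patient also has " + ", ".join(sorted(false_negatives)))
    if not parts:
        return None                              # labels already match, nothing to correct
    return "; ".join(parts) + ". Please update the report."
```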
Finding Prediction As a second downstream task, we ask the model to predict the main CheXpert findings for an image after it generates a report, in either \u201dbinary\u201d or \u201dcomplete\u201d mode. Again, we compare the base model to the instruct model and report F1, precision, and recall on this task. To extract the prediction from the generated answer, we use simple text matching after lower-casing. For the binary task, we check if the words \u201dyes\u201d or \u201dno\u201d occur in the answer. For the complete prediction, we check for all occurrences of the 14 CheXpert labels. While the base model struggles at these tasks, our instruct model achieves significantly superior results, showing it understands which findings it mentioned in its own report (see Tab. 4).
Method | Mode | F1 | Prec | Rec
RaDialog-RG | binary | 1.8 | 17.3 | 7.5
RaDialog-INS | binary | 39.7 | 37.5 | 43.5
RaDialog-RG | complete | 9.8 | 16.0 | 12.5
RaDialog-INS | complete | 40.3 | 39.9 | 42.0
Table 4. Binary and complete CheXpert findings prediction performance of our non-instruct (RG) and instruct (INS) model variants.
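The text-matching extraction described above is simple enough to sketch directly. The snippet below uses word-boundary matching for the binary case and substring matching over the 14 standard CheXpert class names for the complete case; the exact matching code is an assumption.

```python
import re

CHEXPERT_LABELS = [
    "atelectasis", "cardiomegaly", "consolidation", "edema", "enlarged cardiomediastinum",
    "fracture", "lung lesion", "lung opacity", "no finding", "pleural effusion",
    "pleural other", "pneumonia", "pneumothorax", "support devices",
]

# Sketch of the simple text matching used to read findings out of the LLM answer.
def parse_binary(answer: str) -> bool | None:
    a = answer.lower()
    if re.search(r"\byes\b", a):
        return True
    if re.search(r"\bno\b", a):
        return False
    return None                                  # no clear yes/no in the answer

def parse_complete(answer: str) -> list[str]:
    a = answer.lower()
    return [label for label in CHEXPERT_LABELS if label in a]
```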
Figure 4. Qualitative report generation results of RaDialog on both frontal and lateral chest X-rays.
Figure 5. Qualitative comparison of RaDialog\u2019s performance to LLaVA-Med and ChatGPT on an out-of-domain image from the IU-Xray dataset [11], all initially prompted for report generation.
Qualitative Results To demonstrate other interactive tasks, we provide conversation examples in Fig. 6, which show the unique capabilities of our model to interact in dialog and carry out instructions. Some of the tasks, such as correction and easy language, were part of our instruct dataset, while others, such as translation to another language and knowledge questions, were not seen during training. This shows how our training procedure could adapt the model to radiology-specific tasks while maintaining the general capabilities of the underlying LLM.
Figure 6. Qualitative examples of conversations with RaDialog. The examples show a multitude of tasks, including correction, translation, easy language, and knowledge QA.
6.3. Ablation Studies In Tab. 5 and Tab. 6, we show different ablations of our architecture and training procedure. Comparing different sizes of the LLM (Tab. 5), we observe that just scaling up the LLM size does not lead to a relevant performance increase. Therefore, we opt to use the seven billion parameter version for our experiments, leading to faster training and inference speeds. Comparing the first two rows in Tab. 6, we can see the effect of adapting our language model compared to directly using the pre-trained LLM without finetuning. For this comparison, we cannot use visual features as input, as the base LLM was not trained to work with images. The comparison demonstrates that domain-specific fine-tuning is crucial in order to adapt to medical tasks, such as radiology report generation. Additionally, in rows two to four in Tab. 6, we ablate the effect of the structured and visual image information on the performance. The results prove that both components are essential for the final model performance. Incorporating predicted pathology labels explicitly has a major effect on clinical efficacy. Using visual features improves the CE metric further by an additional 2% as well as all other metrics, indicating that the model uses the visual input to both correct the label information and include further details not captured by the structured findings.
LLM size | Sec. | CE | BS | B-1 | B-4 | MTR | R-L
7b | 1.2 | 39.4 | 0.40 | 34.6 | 9.5 | 14.0 | 27.1
13b | 1.9 | 39.4 | 0.39 | 34.8 | 9.5 | 14.0 | 27.1
33b | 7.9 | 39.0 | 0.40 | 35.0 | 9.5 | 14.1 | 27.0
Table 5. Ablation of different LM sizes (vicuna-7b, vicuna-13b and vicuna-33b [8]). Sec. shows the approximate seconds to generate one report. All available model sizes perform similarly.
Method | Feats. | CE | BS | B-1 | B-4 | MTR | R-L
RaDialog-NF | S | 35.8 | 0.20 | 5.5 | 0.4 | 4.7 | 11.7
RaDialog-RG | S | 37.3 | 0.39 | 32.6 | 8.2 | 12.8 | 25.9
RaDialog-RG | V | 26.1 | 0.39 | 31.3 | 9.0 | 13.0 | 27.1
RaDialog-RG | V+S | 39.4 | 0.40 | 34.6 | 9.5 | 14.0 | 27.1
Table 6. Ablations on using a non-fine-tuned LLM (NF) and using only visual (V) or only structured (S) input.
7. Conclusion In this work, we introduced RaDialog, a novel approach to integrating advanced vision-language models for the generation, interactive modification, and analysis of radiology reports. Our model demonstrates not only state-of-the-art results in creating clinically accurate reports but also the ability to engage in a dialog, answer follow-up questions, and incorporate feedback, enabling intuitive quality control through experts in the loop. This represents a significant leap forward from static automated report generation to a more dynamic, collaborative tool that mirrors the interactive nature of clinical practice. However, as this is a pioneering work, there are areas for further development. The reliance on a findings classifier, for instance, improves clinical accuracy but also bears the risk of error propagation from the classifier. Additionally, RaDialog is currently limited to single X-ray images, and expanding it to multi-view or longitudinal inputs or incorporating more patient data would further enhance its utility in medical diagnostics. Further, the next important step is conducting a clinical evaluation to verify its effectiveness in practice.
We believe RaDialog will inspire further research in radiology image understanding, encouraging the community to explore new collaborative medical image understanding approaches with expert radiologists. Acknowledgements The authors gratefully acknowledge the financial support by the Federal Ministry of Education and Research of Germany (BMBF) under project DIVA (13GW0469C) and the Bavarian Ministry of Economic Affairs, Regional Development and Energy (StMWi) under project ThoraXAI (DIK-2302-0002)." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2311.09216v1", |
| "title": "Assessing Translation capabilities of Large Language Models involving English and Indian Languages", |
| "abstract": "Generative Large Language Models (LLMs) have achieved remarkable advancements\nin various NLP tasks. In this work, our aim is to explore the multilingual\ncapabilities of large language models by using machine translation as a task\ninvolving English and 22 Indian languages. We first investigate the translation\ncapabilities of raw large language models, followed by exploring the in-context\nlearning capabilities of the same raw models. We fine-tune these large language\nmodels using parameter efficient fine-tuning methods such as LoRA and\nadditionally with full fine-tuning. Through our study, we have identified the\nbest performing large language model for the translation task involving LLMs,\nwhich is based on LLaMA.\n Our results demonstrate significant progress, with average BLEU scores of\n13.42, 15.93, 12.13, 12.30, and 12.07, as well as CHRF scores of 43.98, 46.99,\n42.55, 42.42, and 45.39, respectively, using 2-stage fine-tuned LLaMA-13b for\nEnglish to Indian languages on IN22 (conversational), IN22 (general),\nflores200-dev, flores200-devtest, and newstest2019 testsets. Similarly, for\nIndian languages to English, we achieved average BLEU scores of 14.03, 16.65,\n16.17, 15.35 and 12.55 along with chrF scores of 36.71, 40.44, 40.26, 39.51,\nand 36.20, respectively, using fine-tuned LLaMA-13b on IN22 (conversational),\nIN22 (general), flores200-dev, flores200-devtest, and newstest2019 testsets.\nOverall, our findings highlight the potential and strength of large language\nmodels for machine translation capabilities, including for languages that are\ncurrently underrepresented in LLMs.", |
| "authors": "Vandan Mujadia, Ashok Urlana, Yash Bhaskar, Penumalla Aditya Pavani, Kukkapalli Shravya, Parameswari Krishnamurthy, Dipti Misra Sharma", |
| "published": "2023-11-15", |
| "updated": "2023-11-15", |
| "primary_cat": "cs.CL", |
| "cats": [ |
| "cs.CL", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", |
| "gt": "Generative Large Language Models (LLMs) have made significant performance improvements in various natural language processing (NLP) tasks, showcasing exceptional progress in a wide range of applications (Xuanfan and Piji, 2023; Figure 1: LLMs based Machine Translation perfor- mance comparison with public systems for English to Indian Languages. BLEU and chrF scores are aver- aged over 22 Indian Languages and 5 different bench- mark data-sets. The available MT systems are GPT-3.5 (GPT-3.5 Davinci, by OpenAI), IndicTrans-2, Google Translation, LTRC-IIIT-H, SeamlessM4T. LLaMA-2- 7b and LLaMA-2-13b are evaluated as LLM based fine-tuned MT systems are namely LLaMA-2-7b+lora (Multi), LLaMA-2-13b+lora (Multi), and LLaMA-2- 13b+FF+lora (Multi). Xi et al., 2023). These tasks span from open domain question answering, where LLMs excel at providing accurate and coherent responses, to instruction-based tasks such as code completion, where LLMs can generate code snippets based on given prompts (Vaithilingam et al., 2022). LLMs have also demonstrated proficiency in tasks like essay writing, grammar checking (Wu et al., 2023a), and text summarization, where they can produce high-quality outputs (Chang et al., 2023). These advancements have primarily been observed in English-centric tasks. The popular LLMs support several of natural languages. The performance for some languages other than English is not yet on par or yet to be evaluated (Lai et al., 2023; Zhu et al., 2023). A multilingual country like India, where arXiv:2311.09216v1 [cs.CL] 15 Nov 2023 over 364+ languages and dialects 1 are spoken across its vast territory, presents a multitude of challenges across various domains due to language barriers (Zieli\u00b4 nski et al., 2021), such as day-to-day communication, education (Steigerwald et al., 2022), business, healthcare (Mehandru et al., 2022), tourism, governance, and more. Recent advancements in the field of Large Language Models may offer solutions to these challenges tailored to Indian languages. To test whether LLM can effectively overcome language barriers, it is crucial to evaluate the proficiency of large language models in handling Indian languages. Machine Translation, as a critical multilingual task, could be an ideal option to explore the multilingual capabilities of existing models. Hence, we can formulate the question to assess the proficiency of large language models in handling Indian languages as follows: How effectively do large language models perform in multilingual tasks like Machine Translation, particularly when dealing with Indian languages? In this work, our major contribution is to ad- dress the following points in response to the above question. \u2022 What are the directions for utilizing or adapt- ing Large Language Models for Indian Lan- guages? \u2013 How do LLMs perform in translating a wide range of Indian languages un- der zero-shot and in-context learning set- tings? \u2013 Does LLM fine-tuning improve the trans- lation capabilities of Large Language Models? How do they perform on low- resourced MT languages? \u2013 The impact of LLM Vocabulary on the Performance of Large Language Models in Translation Tasks. 
To address the above questions, we assess the translation capabilities of popular large language models (opt, bloom, LLaMA-1, MPT, Falcon, LLaMA-2, and Mistral [Section 3]) involving English and 22 scheduled Indian languages (As- samese, Bangla, Bodo, Dogri, Konkani, Gujarati, 1https://en.wikipedia.org/wiki/Linguistic_ Survey_of_India Hindi, Kannada, Kashmiri, Maithili, Malayalam, Marathi, Meitei, Nepali, Odia, Punjabi, Sanskrit, Santali, Sindhi, Tamil, Telugu, and Urdu). We ini- tially examine the translation capabilities of above mentioned raw large language models [Section 5.1]. Subsequently, we explore their in-context learning abilities [Section 5.1]. Additionally, we fine-tune the base models using parameter-efficient fine-tuning methods specifically LoRa [Section-6]. Furthermore, we investigate the potential of 2-stage fine-tuning for large language models, which involves full parameter fine-tuning in the first stage, followed by LoRa-based adaptor fine-tuning [Section 6]. The key findings of our work, as summarized in Figure 1, highlight the performance of our LLM-based machine translation fine-tuned models compared to various known translation engines. These engines range from commercial (Google2, GPT-3.53) to open source (IndicTrans-24, LTRC- IIIT-H5, seamlessm4t6), traditional supervised encoder-decoder translation models (Google, IndicTrans-2, LTRC-IIIT-H) and decoder-driven causal large language model-based translation systems (GPT-3.5). Our findings underscore the significant po- tential of large language models for translation tasks involving English and Indian Languages. While raw LLMs (LLaMA-2-7b and LLaMA- 2-13b) not perform well on translation tasks, our two-stage MT fine-tuned models (LLaMA- 2-13b+FF+lora(Multi)) yields comparative results even with minimal parallel corpora. This suggests that LLMs have the potential to possess multilin- gual capabilities for translating into underrepre- sented languages, which can be further enhanced through fine-tuning. This work will be a crucial and pioneering milestone in evaluating LLMs for lan- guage representation and assessing their translation capabilities for a diverse range of Indian languages, especially those with limited available resources. 2https://translate.google.co.in/ 3https://chat.openai.com/ 4https://github.com/AI4Bharat/IndicTrans2 5https://ssmt.iiit.ac.in/translate 6https://github.com/facebookresearch/seamless_ communication", |
| "main_content": "Recent advancements in machine translation have shown that neural machine translation (NMT) has made significant strides in terms of output fluency and translation quality, especially when ample parallel data is available (Barrault et al., 2020). However, the scarcity or absence of parallel data poses a challenge for most language pairs. In the case of Indian languages, recent developments have tried to addressed this issue by introducing a new state-of-the-art approach: multilingual machine translation involving Indian languages and English (Wang et al., 2021; Dabre et al., 2020). This approach leverages a single script for machine translation, capitalizing on the lexical and syntactic similarities that arise from the genetic and contactrelatedness among Indian languages (Gala et al., 2023; Eriguchi et al., 2022; Bapna and Firat, 2019). In the field of LLM driven machine translation, in-context learning has gained significant attention (Wu et al., 2023b). The use of large language models (LLMs) for multilingual machine translation has been a subject of interest (Zhang et al., 2023). Recent studies have evaluated the translation capabilities of LLMs for different language directions, with a focus on models like ChatGPT (Bang et al., 2023). Notably, Xu et al. proposed a two-stage fine-tuning approach for machine translation using LLMs, involving fine-tuning on monolingual data followed by fine-tuning on a small set of highquality parallel data. Our work represents the first study that specifically explores machine translation involving Indian languages using large language models. 3 Large Language Models Language modeling, a well-established task in the field of natural language processing, has garnered significant attention over the years (Bellegarda, 2004; Bengio et al., 2000). This task involves predicting the probability of the next token in a sequence of words. Transformers have emerged as the fundamental architecture underlying many existing Large Language Models (Vaswani et al., 2017). Transformers based autoregressive models like GPT (Brown et al., 2020; Radford et al., 2019) have played a crucial role in advancing Natural Language Processing (NLP). GPT-3, with 175 billion parameters, is a standout in this category. It is similar in structure to GPT-2 and GPT-1 but benefits from a more extensive and varied dataset, making it exceptionally powerful in NLP. Further, prompt-based ChatGPT (GPT-3.5 text-davinci-003 and GPT-3.5 turbo) has been performing exceptionally by utilizing the reinforcement-based human feedback strategy. Although these models exhibit impressive performance on several NLP tasks, privacy and bias of the models have been a bottleneck. To mitigate such issues, LLaMA (Touvron et al., 2023a) is an open-sourced foundation model trained on publicly available datasets. Similarly, Falcon-40B (Almazrouei et al., 2023) is another open-source LLM trained on a RefinedWeb corpus of 1500 billion tokens. Falcon even comes with 7 and 40 billion instruction versions trained on conversation data. The recent adaptation of Large Language Models (LLMs) for instruction tuning has proven to be a promising approach in improving the performance of various natural language processing tasks. Specifically, in languages like Chinese and Swedish demonstrates the impressive zero-shot and generation abilities of the low-rank adaptation of LLaMA for non-English languages (Cui et al., 2023; Holmstr\u00f6m and Doostmohammadi, 2023). 
However, it is worth noting that the current focus of these instruction models is primarily on English. Therefore, there is an immediate need to explore ways to adapt these models to low-resource Indian languages. 3.1 Base models In this work, we used the following base LLM models to test the levels of language coverage and explore their potential for machine translation tasks involving English and Indian languages. \u2022 opt-6.7b7 : The OPT-6.7b (Zhang et al., 2022) model has been extensively trained on the objective of causal language modeling (CLM) using English text. Although the majority of the training data is in English, a small portion of non-English data from CommonCrawl has also been included. This model utilizes 6.7 billion parameters, consisting of 32 layers and 32 attention heads, and employs an embedding size of 4096. \u2022 Bloom-7B8 : BLOOM (Scao et al., 2022) was 7https://huggingface.co/facebook/opt-6.7b 8https://huggingface.co/bigscience/bloom-7b1 the first largest multilingual large language model with causal language modeling objective and supports 46 languages and 13 programming languages. Its overall training data contains 1.1% of Indian languages. We opted for Bloom model with 7,069,016,064 parameters with 30 layers, 32 attention heads, 4096 embedding dimensional where maximum token length is 2048. \u2022 LLaMA-7B9: LLaMA is a collection of foundation language models ranging from 7B to 65B parameters. These models are multilingual models and trained on trillions of tokens. The data includes CCNet, C4, GitHub, Wikipedia, Books, ArXiv, Stack Exchange. In our experiments we evaluated LLaMA model with 7B parameters where 4096 is embedding dimensions and 32 layers and 32 attention head. \u2022 MPT-7B10 : Similar to above models MPT-7B model is trained on a large amount of data 1T tokens on causal language modeling objective. \u2022 Falcon11 : Falcon (Penedo et al., 2023a) is another large language model trained on causal language modeling (CLM) objective. Here, we utilised Falcon-7B model which is a 7B parameters and trained on 1.5 trillion tokens of RefinedWeb (a novel massive web data-set based on CommonCrawl) enhanced with curated corpora. The model has multilingual capabilities but no Indian languages are explicitly present. We have used Falcon-7B for our experiments. \u2022 LLaMA-2-7B12 and LLaMA-2-13B13 : LLaMA 2 based models (Touvron et al., 2023b) are also trained on causal language modeling (CLM) objective and pretrained on 2 trillion tokens of data from publicly available sources of till September 2022. These models are available in different range parameters from 7 billion to 70 billion. These models have 4k sub-words as context length. In 9https://huggingface.co/decapoda-research/ llama-7b-hf 10https://huggingface.co/mosaicml/mpt-7b 11https://huggingface.co/tiiuae/falcon-7b 12https://huggingface.co/meta-llama/ Llama-2-7b-hf 13https://huggingface.co/meta-llama/ Llama-2-13b-hf our experiments we have experimented with 7B and 13B LLaMA-2 models. LLaMA-2-7B network has 32 layers and 32 attention heads while LLaMA-2-13B has 40 layers and 40 attention heads. \u2022 Mistral-7B14 : Mistral-7B Large Language Model (LLM) (Jiang et al., 2023) is a pretrained on causal language modeling (CLM) objective with 7 billion parameters. It uses Sliding Window Attention (SWA) to handle longer sequences at smaller cost and Groupedquery attention (GQA) for faster inference which reduces the memory requirement during decoding. 
It has 4096 embedding dimension, 32 layers and 32 attention heads with context length of 8192 context length. 4 Indian Languages representation in LLMs Pre-trained (or Raw) large language models are trained on a huge amount of language data, and some of the these models are trained on multiple languages (Naveed et al., 2023). However, their training primarily focuses on English text (Penedo et al., 2023b). Emphasis on English is due to its substantial presence on the internet and its widespread usage in business contexts. For the purpose of this work, our objective is to assess the effectiveness of these models in Machine Translation tasks that involve both English and Indian Languages. Consequently, it becomes crucial to investigate the representation of Indian languages within these large language models. An approach to investigating the representation of Indian languages within a large language model can involve analyzing the frequency of languagespecific words and sentences used during the training of these models. Unfortunately, it is not possible to perform this analysis as the training data used for these models are not publicly accessible. LLaMA-2, in particular, has mentioned that its pretraining corpus primarily consists of English and may not be optimal for other languages (Touvron et al., 2023b). However, it is worth mentioning that approximately 8.38% of the data does include languages other than English and codes in LLaMA-2. On the other hand, studying the vocabulary (or letters/characters) of a corpus can provide valuable 14https://huggingface.co/mistralai/ Mistral-7B-v0.1 Language Family Indo-Aryan Dravidian Sino-Tibetan Austroasiatic Language asm ban kas snd urd doi hin gom mai mar nep san guj odi pan kan mal tam tel mni brx sat Language Script Bangla Perso-Arabic Devanagari Gujarati Odia Gurmukhi Kannada Malayalam Tamil Telugu Meitei Devanagari Ol Chik No of Letters in Unicode 96 256 128 91 91 80 91 118 72 100 56 96 48 Models (Vocab) BLOOM (250680) (48,48) (49,207) (67,61) (57,34) (56,35) (55,25) (62,29) (66,52) (46,26) (61,39) (00,56) (67,29) (00,48) FALCON (65024) (00,96) (12,244) (2,126) (00,91) (00,91) (00,72) (0,100) (00,56) (02,70) (04,96) (00,56) (02,94) (00,48) LLAMA-1,2 (32024) (24,72) (45,211) (38,90) (01,90) (00,91) (04,76) (02,89) (33,155) (19,53) (01,99) (00,56) (38,90) (00,48) MISTRAL (32052) (34,62) (47,209) (43,85) (05,86) (00,91) (02,78) (18,73) (04,116) (22,50) (11,89) (00,56) (43,53) (00,48) MPT (50277) (05,91) (35,221) (22,106) (02,89) (00,91) (00,80) (00,91) (01,117) (05,67) (03,97) (00,56) (22,106) (00,48) OPT (50265) (00,96) (13,243) (1,127) (00,91) (00,91) (00,80) (00,91) (0,118) (00,72) (0,100) (00,56) (01,95) (00,48) Table 1: The language support of various LLMs for 22 Indian languages, along with the corresponding families, scripts, and letters representing each language. In each tuple (xx, yy), the first value represents the number of language-specific characters, while the second value indicates the count of byte-supported characters in respective LLM and for respective language. insights into the representation and coverage of language within that corpus. The writing system or script used plays a crucial role in representing a language. Therefore, analyzing the vocabulary can be considered a proximal task. Fortunately, we have access to the sub-word vocabulary for the considered large language models. 
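One way to exploit this access, sketched below under the assumption of Hugging Face tokenizers, is to count how many code points of a script's Unicode block surface anywhere in the sub-word vocabulary, with the remainder only representable through byte-fallback pieces. The model identifier and the Odia block range shown here are example values for illustration.

```python
from transformers import AutoTokenizer

# Sketch of a vocabulary-coverage check for one script: count how many of its
# Unicode code points appear inside the tokenizer's sub-word vocabulary.
# This is an approximation; byte-level BPE vocabularies need extra decoding.
def script_coverage(model_name: str, start: int, end: int) -> tuple[int, int]:
    tok = AutoTokenizer.from_pretrained(model_name)
    vocab_text = "".join(tok.get_vocab().keys())            # concatenation of all sub-word strings
    covered = sum(1 for cp in range(start, end + 1) if chr(cp) in vocab_text)
    missing = (end - start + 1) - covered                    # only reachable via byte pieces
    return covered, missing

# Example (illustrative): Odia block U+0B00-U+0B7F against LLaMA-2-7b.
# print(script_coverage("meta-llama/Llama-2-7b-hf", 0x0B00, 0x0B7F))
```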
By comparing the characters present in the sub-word vocabulary with those in the corresponding language script, we can approximate the language representation within the respective LLM. For this work, we included a total of 22 scheduled Indian languages for translation, which can be categorized into four main language families: Indo-Aryan, Dravidian, Sino-Tibetan, and Austroasiatic. These 22 Indian languages are written using 13 major scripts. It is interesting to note that most of these scripts can be traced back to the Brahmi script 15, which served as the foundation for the development of several Indian scripts (Salomon, 1995). Each of these 13 writing systems has its own unique set of letters and characters 16, reflecting the phonetic and linguistic characteristics of the respective languages they represent. Table 1 presents an overview of the scripts, the languages utilizing these scripts, and the corresponding sub-word vocabulary sizes for LLMs. The numbers indicated in \u2018(X,Y)\u2019 represent the counts of native script letters (characters in unicode 17) present and not present in the respective LLM. Specifically, X denotes the number of native language characters present in the vocabulary, while Y denotes the number of characters represented as 15https://www.education.gov.in/sites/upload_ files/mhrd/files/upload_document/languagebr.pdf 16https://en.wikipedia.org/wiki/Official_ scripts_of_the_Republic_of_India 17https://unicode.org/ pre-defined (multiple) hexadecimal values. Upon analysis, we observe that, in general, the 22 Indian languages have a limited presence in most of the LLMs. However, the Devanagari, Perso-Arabic, and Bangla scripts demonstrate the most extensive sub-word vocabularies, while other scripts have minimal or near-zero representation within the vocabulary. 5 Experiment setup: Machine Translation using LLMs To evaluate the performance of the large language models (LLMs) in machine translation tasks involving English and 22 Indian languages, we conducted two experiments. The first experiment focused on assessing the performance of the pre-trained (raw) LLM. In the second experiment, we utilized example-based in-context learning for the same machine translation task. Both translation directions were explored, including English to 22 Indian languages and 22 Indian languages to English. All experiments were conducted using translation benchmark data, as discussed in Section 6. As part of our experimental setup, we used the prompting pipeline depicted in Figure 2. This pipeline involved using a Prompt Generator to generate specific prompts for the source and target language along with source text. Subsequently, an LLM call is triggered to generate a response, which was then processed by a translation parser to obtain the actual translation. To ensure high-throughput and memory-efficient inference and serving for LLMs, we utilized the vLLM library18 (Kwon et al., 2023). We conducted all experiments using a temperature parameter of 0, which ensures that the model behaves deterministically. By setting the temperature to 0, the model is constrained to se18https://github.com/vllm-project/vllm Figure 2: Prompting Mechanism for Translation lect the word with the highest probability, effectively limiting its choice to the most likely option (Aksitov et al., 2023). All of our experiments are conducted using vLLM library on A100, 40GB GPUs. 
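A rough illustration of this pipeline (prompt generation, a deterministic vLLM call, and a simple parser) is given below. The prompt wording mirrors the raw-LLM templates listed in the next subsection, while the parsing step is a simplified stand-in for the translation parser; model name and decoding length are illustrative choices.

```python
from vllm import LLM, SamplingParams

# Sketch of the zero-shot prompting pipeline: build a translation prompt,
# run a deterministic (temperature 0) generation with vLLM, strip the answer.
llm = LLM(model="meta-llama/Llama-2-7b-hf")
params = SamplingParams(temperature=0.0, max_tokens=256)     # greedy, deterministic decoding

def translate(src_lang: str, tgt_lang: str, text: str) -> str:
    prompt = (
        f"Translate this to {tgt_lang} from {src_lang}\n"
        f"Text: {text}\n"
        f"Translated Text:"
    )
    output = llm.generate([prompt], params)[0]
    return output.outputs[0].text.strip()                    # simplified translation parser

# Example call (hypothetical): translate("English", "Hindi", "The weather is pleasant today.")
```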
5.1 Machine Translation on Raw LLM To optimize the machine translation task on our selected LLMs, we conducted manual trials with various prompts. Through these trials, we discovered that directly asking for the translation and presenting the text in JSON format yielded better results, as the models seemed to comprehend the JSON structure more effectively (Reinauer et al., 2023). After multiple iterations, we finalized two prompts for translating sentences using raw (pretrained) LLMs, as illustrated in below examples. These prompts were used to evaluate the efficiency of the models. Example: Translation Prompt-1 Translate this to <Target Language> from <Source Language> Text: <Source Language Text> Translated Text: Example: Translation Prompt-2 Translate this from <Source Language> to <Target Language> <Source Language>: <Source Language Text> <Target Language>: Example: ICL Translation Prompt If the <Source Language> to <Target Language> translation for \"<Source Example>\" is \"<Target Example>\" from <Source Language>, following that, translate this to <Target Language> from <Source Language> Text: <Source Language Text> Translated Text: Similarly, we identified and modified the prompt for example-based in-context learning with LLM. This prompt is specified in Example above (ICL Translation Prompt). In the case of in-context learning, all of our experiments involved providing a single translation sample as a contextual learning example prior to the actual translation command. We ensured that this example remained consistent for the same language pair across the sentences. The sample itself was randomly selected from the Human-BPCC translation training corpus (AI4Bharat et al., 2023). We present the outcomes of both of these experiments in the Performance and Discussions section. 5.2 Fine-tuning LLM for Machine Translation To examine the potential improvement in multilingual understanding or translation performance of LLMs beyond the pre-trained LLM baseline, we conducted fine-tuning experiments for the translation task. 5.2.1 Training Data To fine-tune large language models (LLMs) for the machine translation task, we utilized the Bharat Parallel Corpus Collection (BPCC). This corpus is publicly available and specifically for English to 22 Indic languages translation. 
It consists of two main parts: BPCC-Mined and BPCC-Human, comprising a total of approximately 230 million English#Sents S-AvgL T-AvgL S-Words T-Words S-Types T-Types Assamese (asm) 138208 16.88 14.39 2333583 1988395 125480 185151 Bangla (ban) 180219 17.80 15.07 3208203 2715959 161820 227468 Bodo (brx) 113139 17.79 13.96 2012274 1579042 116963 227180 Dogri (doi) 24157 15.32 17.68 370047 427110 48256 41370 Konkani (gom) 97555 17.13 14.03 1671465 1368512 82783 145300 Gujarati (guj) 135664 17.71 15.96 2402552 2164831 123935 174886 Hindi (hin) 222356 17.84 19.69 3966247 4378231 183737 202423 Kannada (kan) 117222 16.83 12.44 1972881 1458053 100778 208803 Kashmiri (kas) 19824 16.02 17.68 317634 350577 43197 66210 Maithili (mai) 23690 16.11 15.79 381720 374042 52920 57423 Malayalam (mal) 137950 16.30 11.13 2248081 1535654 120999 299146 Marathi (mar) 175893 17.94 14.81 3154904 2604119 167822 299983 Meitei (mni) 56617 17.77 15.73 1006271 890828 86175 161043 Nepali (nep) 85442 16.76 14.13 1431858 1207687 105411 145175 Odia (odi) 36923 17.07 15.49 630148 571958 68765 79932 Punjabi (pan) 80951 17.22 18.29 1394286 1480835 63510 74451 Sanskrit (san) 33189 16.30 11.69 541034 387957 61591 119856 Santali (sat) 24368 16.95 19.28 412918 469791 51307 56053 Sindhi (sin) 10503 17.10 19.32 179592 202952 28945 30782 Tamil (tam) 150254 17.76 13.34 2668252 2004981 139214 290917 Telugu (tel) 111808 16.81 12.64 1879737 1413466 96105 191792 Urdu (urd) 150747 17.62 20.20 2656814 3044480 144001 129856 Table 2: English to Indian Languages machine translation Fine-tuning data from BPCC-Human (AI4Bharat et al., 2023). In this, the term \"#Sents\" refers to the total number of parallel sentences. \"S-AvgL\" and \"T-AvgL\" represent the average sentence length, in terms of words, for the source and target languages, respectively. Likewise, \"Words\" denotes the total number of words, while \"Type\" represents the total number of unique words. Method Hyper-parameter Value LoRA LoRA modules PEFT19 rank 8 dropout 0.05 learning rate 1e-4 global batch size 8 epochs 6 Full-parameter FSDP learning rate 1e-4 global batch size 4 epochs 5 Table 3: Hyper-parameter configurations of LoRA based and full fine-tuning for 4*A100 40GB GPUs parallel text pairs. For the fine-tuning process, we focused on the BPCC-Human dataset, which contains 2.2 million English-Indic pairs. Additionally, this dataset includes subsets derived from English Wikipedia sentences and everyday usage scenarios. For more information about this corpus, are presented in Table 2. 5.2.2 Fine-tuning Details Considering the raw LLM performance, model parameters, and resource constraints, we selected a subset of LLMs for the fine-tuning process. Specifically, we chose LLaMA-2-7b, LLaMA-2-13b, and Mistral-7B for the fine-tuning experiment. For the selected LLMs, we decided to conduct fine-tuning using multiple parameters to enhance their performance. These parameters included bi-lingual translation fine-tuning, multi-lingual translation fine-tuning, low-rank adaptation-based fine-tuning, and a two-stage fine-tuning approach: full fine-tuning followed by low-rank adaptationbased fine-tuning. Due to limitations in training resources, we prioritized full fine-tuning as the chosen option. Specifically, we performed LoRa-based finetuning (Hu et al., 2021) for all English to 22 Indian languages (in both directions) under bi-lingual settings using LLaMA-2-7b and LLaMA-2-13b. Additionally, we conducted TestSet #Sent Details IN22_conv_test 1502 AI4Bharat et al. 
released MT benchmark data covering English to 22 Indian Languages. IN22_gen_test 1023 Flores200-dev 997 Goyal et al. released MT benchmark data which includes English to 17 Indian Language pairs considered in this work. Flores200-devtest 1012 Newstest2019 1997 Federmann et al. released MT benchmark data which includes English to 10 Indian Language pairs considered in this work. Table 4: Benchmark data details covering English to 22 Indian Languages Figure 3: Evaluation of English 22 Indic language Translation over 5 benchmark-sets (averaged): Raw LLM vs In Context Learning (ICL); Raw LLM models: LLaMA-2-7b, LLaMA-2-13b) multi-lingual LoRa-based fine-tuning for English to the combined 22 Indian languages, as well as for the combined 22 Indian languages to English, using LLaMA-2-7b, LLaMA-2-13b, and Mistral-7B. Based on the overall performance, we proceeded with a two-stage fine-tuning approach for the multi-lingual translation task specifically on LLaMA-2-13b. In the first stage, we performed full fine-tuning as a multi-lingual translation setup. Subsequently, in the second stage, we conducted multi-lingual LoRa-based fine-tuning on the same fully fine-tuned model. For both types of fine-tuning LLMs, we utilized the llama-recipes codebase20 which provides an efficient implementation for LoRa-based adaptor fine-tuning with PEFT (Mangrulkar et al., 2022). For more details, please refer to the llama-recipes documentation 21. The hyperparameters for the fine-tuning process are specified in Table 3. The training data used for the fine-tuning experiments will be presented in the sub-section 5.2.1. 6 Machine Translation Benchmark Data We evaluate the performance of multilingual translation using three different benchmark datasets, as outlined in Table 4. The table provides a compre20https://github.com/facebookresearch/ llama-recipes/ 21https://github.com/facebookresearch/ llama-recipes/blob/main/docs/LLM_finetuning.md hensive overview of each dataset, highlighting the availability of n-way parallel data for the specified number of Indian languages from English as a source direction. 7 Performance Evaluation We evaluated the performance of the translation outputs using BLEU (Papineni et al., 2002) and chrF (Popovi\u00b4 c, 2015) evaluation methods on benchmark data described in Section 6. However, we did not include COMET (Rei et al., 2022) as an evaluation method due to the absence of support for many low-resource Indian languages at the time of evaluation. We used sacreBLEU library (Post, 2018) for BLEU 22 and chrF 23 calculation. To mitigate the impact of randomness in scores, we present our findings as the average of two runs for all of our results. Raw (Zero shot) vs ICL based Translation on LLMs Figure 3 presents the comparison of overall results when evaluating translation quality for Raw LLMs and In Context Learning (ICL) based LLMs outputs. The left sub-figure represents the results for English to 22 Indian languages, while the right sub-figure presents the results for 22 Indian languages to English translation. We 22footprint for BLEU: nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.1.0 23footprint for chrF: nrefs:1|case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.1.0 Figure 4: Performance comparison of GPT-3.5 vs our Fine-Tuned LLM Translation models (LLaMA-2-7b+lora (Multi), LLaMA-2-13b+lora (Multi), and LLaMA-2-13b+FF+lora (Multi)): English to 22 Indian Languages over 5 benchmark-sets (averaged). 
Here, LORA stands for Low-Rank Adaptation of Large Language Models based finetuning. Multi stands for the multilingual model, FF for full-finetuning, and FF+lora stands for 2-stage fine-tuning. observed amplified performance for the Bloom large language model for certain languages, which can be attributed to the known MT benchmark data leak in the pre-training (Zhu et al., 2023). Consequently, we decided to exclude this language model from further experiments. LLMs models such as OPT, MPT, LLAMA-1 and Falcon exhibited poor performance, which can be correlated with the no or minimal presence of characters of our focused Indian Languages in their vocabulary (Table 1). Therefore, we have omitted reporting the results for these models. Figure 3 indicates that the Llama-2 models show relatively better performance with ICL settings compared to the raw models. Detailed results are presented in appendix. Through manual analysis, we observed that for less-represented languages such as Gujarati, Kannada, Odia, etc. (Table 1), the ICL-driven translation tends to repeat the same translation given in the context as learning. On the other hand, the raw models tend to hallucinate and repeat words throughout the translation (Guerreiro et al., 2023). One important finding from the manual analysis is that these raw LLMs demonstrate the ability to accurately identify languages (e.g., when asked for Gujarati translation, it gives inaccurate translations but correctly hallucinate text in the Gujarati script). This is a positive aspect and indicates a significant advantage of these LLMs in terms of their understanding and differentiation of languages and language scripts. In response to the question asked in Introduction, it is true that the major available LLMs are primarily focused on English. However, they do exhibit minimal potential for zero-shot and example-based translation capabilities. Fine-Tuned LLM driven Translations: English to Indian Languages We conducted an evaluation to compare the performance of our Fine-Tuned LLM models with GPT-3.5, as both models use the same decoder-based approach. Figure 4 illustrates the comparison for English to 22 Indian language translation. The scores for GPT-3.5 are generally lower compared to our fine-tuned methods, also our fine-tuned models have higher numbers than our previously mentioned zero-shot and example-based learning baseline. This indicates that with minimal translation corpora, we are able to achieve considerable translations for translating into Indian languages from English. Additionally, we observed that multilingual fine-tuning yielded better overall performance compared to bilingual fine-tuning. The two-stage fine-tuning approach also outperformed other fine-tuning methods for the translation task. The impressive results of the two-stage fine-tuning approach, as shown in Figure 4, are comparable to those of traditional encoder-decoder based translation models. It is worth noting that this performance improvement was achieved using only a few thousand parallel data, whereas traditional NMT models typically require a larger amount of data. From Figure 4, we can see that translating to Figure 5: Performance comparison of GPT-3.5 vs our Fine-Tuned LLM Translation models (LLaMA-2-7b+lora (Multi), LLaMA-2-13b+lora (Multi), and LLaMA-2-13b+FF+lora (Multi)): English to 22 Indian Languages over 5 benchmark-sets (averaged). Here, LORA stands for Low-Rank Adaptation of Large Language Models based fine-tuning. Multi stands for the multilingual model. 
low-resource languages such as Dogri, Konkani, Kashmiri, Meitei, Sanskrit, and Sindhi yielded favorable evaluation numbers (detailed results are presented in the appendix) compared to existing translation systems. In answer to the question posed in the introduction, fine-tuning LLMs does enhance translation capabilities, particularly when employing multilingual fine-tuning. These models demonstrate proficiency in translating low-resource languages as well. Fine-Tuned LLM-driven Translations: Indian Languages to English Figure 5 showcases the comparison for Indian language to English translation. The scores for GPT-3.5 are generally lower than those of our fine-tuned methods, and our fine-tuned models also outperform the previously mentioned zero-shot and example-based in-context learning LLM results. Notably, the performance improvement for Indian language to English translation is comparatively lower than that for English to Indian language translation. In contrast to translation from English to Indian languages, LoRA-based single-stage fine-tuning performs best here among all the fine-tuning approaches. Detailed results are presented in the appendix. This disparity can be attributed to the vocabulary representation of Indian languages in these LLMs. As presented in Table 1, the subword vocabulary for Indian languages is limited in the considered LLMs. Consequently, when processing input in Indian languages, characters that are not present in the vocabulary receive multiple hexadecimal byte representations from the vocabulary. This creates a bottleneck in understanding the underlying meaning, making it challenging for the larger LLM network to establish corresponding semantic translations. However, this issue does not arise when translating from English to Indian languages: the underlying understanding of English is robust, allowing the network to effectively map the respective target-language translations. Hence, this suggests the need for LLMs with adequate representation of these languages, and future development of LLMs must address this. 8 Limitations In order to conduct our experiments, we relied on high-performance GPUs, specifically the A100 40GB. However, we acknowledge that not everyone may have access to such powerful computing resources, making it challenging to reproduce our experiments and achieve identical results. To overcome this limitation, our objective is to provide open access to all outputs, including models and results, to facilitate further research and exploration. By making these resources openly available, we aim to promote collaboration and enable others to build upon our work. 9 Conclusion Our experiments and results have provided promising insights into the use of LLMs for translation tasks. We have found that LLMs have the potential to perform translations involving English and Indian languages without the need for an extensive collection of parallel data, which distinguishes them from traditional translation models. Furthermore, our findings indicate that LLaMA-2-based models outperform other models in zero-shot and in-context example-based learning. Notably, the LLaMA-2-13b-based model demonstrates superior performance compared to its counterparts. To enhance the LLM\u2019s understanding of English and Indian languages, we have introduced a two-stage fine-tuning process. This process begins with initial full fine-tuning, followed by LoRA-based fine-tuning.
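As a rough illustration of the second stage of this recipe (attaching LoRA adapters with the PEFT library, as used via llama-recipes, on top of a fully fine-tuned LLaMA-2-13b checkpoint), the sketch below shows the general pattern; the checkpoint path, rank, alpha, dropout, and target modules are illustrative assumptions rather than the hyperparameters reported in Table 3.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

# Hypothetical path to the stage-1 (fully fine-tuned, multilingual) checkpoint.
base_ckpt = "checkpoints/llama-2-13b-fullft-en-indic"

model = AutoModelForCausalLM.from_pretrained(base_ckpt)
tokenizer = AutoTokenizer.from_pretrained(base_ckpt)

# Stage 2: wrap the model with LoRA adapters; only these low-rank matrices are trained.
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # assumed rank, not the paper's reported value
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # common choice for LLaMA-style attention blocks
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()          # confirms the backbone stays frozen
```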
Through this two-stage approach, we have significantly improved the LLM\u2019s comprehension of content in both languages. However, our experiments suggest that further work on LLMs is required to surpass the performance of traditional encoder-decoder based translation models. This work could involve the development of Indian language-specific LLMs, which would enhance vocabulary and alphabet coverage, resulting in better representation of Indian languages. In the future, we plan to incorporate Indian-to-Indian language translation using LLMs. Additionally, our aim is to develop a single LLM capable of translating all Indian languages, as well as English, in both directions. By doing so, we strive to push the boundaries of language capabilities within LLMs and further advance the field. Acknowledgement We express our gratitude to Pruthwik Mishra, Arafat Ahsan and Palash Gupta for their contributions throughout the different phases of this project. This undertaking is funded by the Ministry of Electronics and Information Technology, Government of India, as evidenced by the Sanction Order: 11(1)/2022-HCC(TDIL)-Part(2)/A/B/C and the Administrative Approval: 11(1)/2022-HCC(TDIL)Part(2)." |
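For completeness, the BLEU and chrF scoring described in Section 7 uses the sacreBLEU library with its default signatures (quoted in the footnotes); a minimal sketch of that scoring, with toy placeholder strings standing in for real system outputs and references, could look as follows.

```python
from sacrebleu.metrics import BLEU, CHRF

# Toy placeholders: one hypothesis stream and a single reference stream (nrefs:1).
hypotheses = ["this is a toy system output"]
references = [["this is a toy reference translation"]]

bleu = BLEU(tokenize="13a", smooth_method="exp")   # matches tok:13a|smooth:exp
chrf = CHRF(char_order=6, word_order=0)            # matches nc:6|nw:0

print(bleu.corpus_score(hypotheses, references), bleu.get_signature())
print(chrf.corpus_score(hypotheses, references), chrf.get_signature())
```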
| }, |
| { |
| "url": "http://arxiv.org/abs/2312.04265v5", |
| "title": "Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation", |
| "abstract": "In this paper, we first assess and harness various Vision Foundation Models\n(VFMs) in the context of Domain Generalized Semantic Segmentation (DGSS).\nDriven by the motivation that Leveraging Stronger pre-trained models and Fewer\ntrainable parameters for Superior generalizability, we introduce a robust\nfine-tuning approach, namely Rein, to parameter-efficiently harness VFMs for\nDGSS. Built upon a set of trainable tokens, each linked to distinct instances,\nRein precisely refines and forwards the feature maps from each layer to the\nnext layer within the backbone. This process produces diverse refinements for\ndifferent categories within a single image. With fewer trainable parameters,\nRein efficiently fine-tunes VFMs for DGSS tasks, surprisingly surpassing full\nparameter fine-tuning. Extensive experiments across various settings\ndemonstrate that Rein significantly outperforms state-of-the-art methods.\nRemarkably, with just an extra 1% of trainable parameters within the frozen\nbackbone, Rein achieves a mIoU of 78.4% on the Cityscapes, without accessing\nany real urban-scene datasets.Code is available at\nhttps://github.com/w1oves/Rein.git.", |
| "authors": "Zhixiang Wei, Lin Chen, Yi Jin, Xiaoxiao Ma, Tianle Liu, Pengyang Ling, Ben Wang, Huaian Chen, Jinjin Zheng", |
| "published": "2023-12-07", |
| "updated": "2024-04-18", |
| "primary_cat": "cs.CV", |
| "cats": [ |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", |
| "gt": "Prior works [35, 37, 39, 64, 73, 76, 82] in Domain Gener- alized Semantic Segmentation (DGSS) focus on improving prediction accuracy across multiple unseen domains with- out accessing their data, thus enabling a high generaliza- tion for real applications. Since models are fine-tuned using datasets [12, 65] that are either limited in scale or different in image style from the target domain, complex data aug- mentation approaches [5, 61, 83] and domain invariant fea- ture extraction strategies [10, 59, 72, 77] have been widely explored in previous DGSS. These methods result in en- hanced generalization when applied to classic backbones, e.g., VGGNet [71], MobileNetV2 [68], and ResNet [26]. In recent years, large-scale Vision Foundation Models (VFMs) like CLIP [63], MAE [27], SAM [42], EVA02 [18, 19], and DINOv2 [58] have significantly advanced the arXiv:2312.04265v5 [cs.CV] 18 Apr 2024 Previous DGSS methods Frozen backbone of VFMs Methods GTR[61] AdvStyle[83] WildNet[45] SPC[30] PASTA[5] TLDR[40] CLIP-ViT-L[63] MAE-L[27] SAM-H[42] EVA02-L[18] DINOv2-L[58] Publications TIP21 NIPS22 CVPR22 CVPR23 ICCV23 ICCV23 ICML21 CVPR22 ICCV23 arXiv23 arXiv23 mIoU (Citys) 43.7 43.4 45.8 46.7 45.3 47.6 53.7 43.3 57.0 56.5 63.3 mIoU (BDD) 39.6 40.3 41.7 43.7 42.3 44.9 48.7 37.8 47.1 53.6 56.1 mIoU (Map) 39.1 42.0 47.1 45.5 48.6 48.8 55.0 48.0 58.4 58.6 63.9 mIoU (Average) 40.8 41.9 44.9 45.3 45.4 47.1 52.4 43.0 54.2 56.2 61.1 Table 1. Performance benchmarking of multiple VFMs and previous DGSS methods under the GTAV \u2192Cityscapes (Citys) + BDD100K (BDD) + Mapillary (Map) generalization setting. Without specialized design, frozen VFMs demonstrate stronger performance. boundaries of performance in a variety of computer vision challenges. Giving the remarkable generalization of these VFMs across various unseen scenes, two intuitive questions emerge: How do VFMs perform in the context of DGSS? And How to harness VFMs for DGSS? We attempt to an- swer these questions as follows: Stronger: We begin by evaluating and comparing the performance of various VFMs against existing DGSS meth- ods. To ensure a fair comparison, we use image encoders from a variety of VFMs as the backbone for feature extrac- tion in all cases. These backbones are coupled with the widely-used decode head, i.e., Mask2Former [9], to gen- erate semantic predictions. As illustrated in Tab. 1, while previous DGSS methods have showcased commendable results, they perform less effectively compared to frozen VFMs. This finding clearly demonstrates the powerful po- tential of VFMs in DGSS, outperforming traditional back- bones like ResNet [26] and MobileNetV2 [68], thereby es- tablishing VFMs as a meaningful benchmark in the field. Fewer: Although VFMs have exhibited impressive gen- eralization capabilities, fine-tuning them for DGSS tasks poses a challenge. The datasets [12, 65] commonly used in DGSS tasks are significantly smaller in scale compared to ImageNet [13], and fine-tuning VFMs with their huge number of trainable parameters on these datasets result in limited generalizability [36]. To address this issue, instead of the difficult task of large datasets collection, we resort to fine-tuning VFMs with fewer trainable parameters. How- ever, most existing parameter-efficient fine-tuning strate- gies, which fine-tune a large-scale model with fewer train- able parameters, are primarily designed for adapting large language models [28, 29, 46, 49, 51, 80, 85] or classifica- tion networks [7, 31]. 
These methods are not developed for refining features for distinct instances within a single image, thereby limiting their effectiveness in DGSS contexts. Superior: In this work, we introduce a robust and ef- ficient fine-tuning approach, namely \u201cRein\u201d. Tailored for DGSS tasks, Rein employs fewer trainable parameters to harness stronger VFMs for achieving superior generaliza- tion. At its core, Rein comprises a set of randomly ini- tialized tokens, each directly linked to different instances. These tokens, through a dot-product operation with VFMs features, generate an attention-like similarity map. This map enables Rein to perform precise refinement tailored to each instance within an image, significantly boosting VFMs in the context of DGSS. Moreover, to reduce the number of trainable parameters, we employ shared weights across MLPs in different layers and design our learnable tokens by multiplying two low-rank matrices. Extensive experi- ments on various DGSS settings demonstrate that the pro- posed Rein outperforms existing DGSS methods by a large margin with fewer trainable parameters. In a nutshell, the main contributions of this paper are as follows: \u2022 We first assess various Vision Foundation Models (VFMs) in the context of Domain Generalized Semantic Segmentation (DGSS). Our extensive experiments in the DGSS framework highlight the impressive generalization capabilities of VFMs. The findings confirm that VFMs serve as Stronger backbones, thereby establishing a sig- nificant benchmark in this field. \u2022 We present a robust fine-tuning method, namely \u201cRein\u201d, to parameter-efficiently harness VFMs. At its core, Rein consists of a set of learnable tokens, each directly linked to instances. With deliberate design, this linkage enables Rein to refine features at an instance-level within each layer. As a result, Rein reinforces the ability of VFMs in DGSS tasks, achieving this with Fewer trainable param- eters while preserving the pre-trained knowledge. \u2022 Comprehensive experiments across various DGSS set- tings demonstrate that Rein employs Fewer trainable parameters to effectively leverage Stronger VFMs for achieving Superior generalizability. This performance surpasses existing DGSS methods by a large margin. No- tably, Rein is designed to integrate smoothly with existing plain vision transformers, improving their generalization ability and making training more efficient.", |
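As a concrete illustration of the "fewer trainable parameters" setting discussed above (a frozen VFM backbone with only a lightweight module and the decode head left trainable), a minimal PyTorch sketch might look as follows; the module sizes are placeholders, not the actual Rein or Mask2Former components.

```python
import torch
import torch.nn as nn

def count_trainable(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters() if p.requires_grad)

# Placeholder stand-ins for a ViT-Large backbone, an efficient adapter, and a decode head.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True), num_layers=24
)
adapter = nn.Linear(1024, 1024)
head = nn.Linear(1024, 19)   # 19 classes, as in Cityscapes-style benchmarks

# "Freeze"-style baseline plus a trainable adapter/head: backbone parameters are
# excluded from optimization entirely.
for p in backbone.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.AdamW(
    list(adapter.parameters()) + list(head.parameters()), lr=1e-4
)
print(f"backbone trainable: {count_trainable(backbone):,}")
print(f"adapter + head trainable: {count_trainable(adapter) + count_trainable(head):,}")
```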
| "main_content": "Domain Generalized Semantic Segmentation. Domain Generalized Semantic Segmentation (DGSS) focuses on enhancing model generalizability. This field involves training models on a set of source domain to enhance their performance on distinct and unseen target domain. Various approaches [6, 14, 22, 23, 30, 32\u201334, 48, 62, 75] have been proposed to address this issue, with methods including splitting the learned features into domain-invariant and domainspecific components [72, 77], or employing meta-learning to train more robust models [38]. A standard scenario in \ud835\udc53 \ud835\udc56 \ud835\udc53 \ud835\udc56\u00d7\ud835\udc47\ud835\udc56 \ud835\udc47 S Similarity Map \ud835\udc46\ud835\udc56 MLP \ud835\udc53 \ud835\udc56\u2032 Frozen Backbone, Tunable Rein \ud835\udc46\ud835\udc56\u00d7\ud835\udc40(\ud835\udc47\ud835\udc56) Layer \ud835\udc3f\ud835\udc56+1 \u2026 \ud835\udc44 \u2026 Head \ud835\udc40 \ud835\udc53 \ud835\udc40\ud835\udc44 \ud835\udc40\ud835\udc60 \ud835\udc40 \ud835\udc3f \ud835\udc43 Layer \ud835\udc3f\ud835\udc56 Input Image \u2026 Learnable Tokens \ud835\udc47\ud835\udc56(\ud835\udc5a\u00d7\ud835\udc50) \u2026 \u2026 \u00d7 \ud835\udc34 \ud835\udc35 \ud835\udc50 = \ud835\udc5f \ud835\udc5a \ud835\udc5f MLP \u2026 MLP max & avg & last (\ud835\udc5f\u226a\ud835\udc50) softmax \ud835\udc53 \ud835\udc5c \ud835\udc4a\ud835\udc47\ud835\udc56 \ud835\udc4a \ud835\udc53\ud835\udc56 \ud835\udc4a \ud835\udc44\ud835\udc56 \ud835\udc4a \ud835\udc44 Figure 2. An overview of proposed Rein. Rein primarily consists of a collection of low-rank learnable tokens, denoted as T = {T1, T2, . . . , TN}. These tokens establish direct connections to distinct instances, facilitating instance-level feature refinement. This mechanism results in the generation of an enhancement feature map f \u2032 i = fi + Rein(fi) for each layer within backbone. All parameters of MLPs are layer-shared to reduce the number of parameters. Mf, MQ, and MS are features module, queries module, and segmentation module, respectively. The notation max & avg & last refers to the equation Eq. (8) and Eq. (10). Backbone Fine-tune Method Trainable Params\u2217 mIoU Citys BDD Map Avg. CLIP [63] (ViT-Large) Full 304.15M 51.3 47.6 54.3 51.1 Freeze 0.00M 53.7 48.7 55.0 52.4 Rein 2.99M 57.1 54.7 60.5 57.4 MAE [27] (Large) Full 330.94M 53.7 50.8 58.1 54.2 Freeze 0.00M 43.3 37.8 48.0 43.0 Rein 2.99M 55.0 49.3 58.6 54.3 SAM [42] (Huge) Full 632.18M 57.6 51.7 61.5 56.9 Freeze 0.00M 57.0 47.1 58.4 54.2 Rein 4.51M 59.6 52.0 62.1 57.9 EVA02 [18, 19] (Large) Full 304.24M 62.1 56.2 64.6 60.9 Freeze 0.00M 56.5 53.6 58.6 56.2 Rein 2.99M 65.3 60.5 64.9 63.6 DINOV2 [58] (Large) Full 304.20M 63.7 57.4 64.2 61.7 Freeze 0.00M 63.3 56.1 63.9 61.1 Rein 2.99M 66.4 60.4 66.1 64.3 Table 2. Performance Comparison with the proposed Rein across Multiple VFMs as Backbones under the GTAV \u2192Cityscapes (Citys) + BDD100K (BDD) + Mapillary (Map) generalization setting. Mark \u2217denotes trainable parameters in backbones. DGSS is generalizing from one urban-scene dataset to another, for instance, from the synthetic GTAV [65] dataset to the real-world Cityscapes [12]. In this classic setting, certain techniques [10, 59, 60] have achieved notable performance through learning feature normalization/whitening schemes, while others [45] have improved segmentation results through feature-level style transfer and the introduction of additional data. 
Additionally, strong data augmentation [5, 17, 61, 83] often simply and effectively enhances model robustness. However, most of previous DGSS methods generally utilize outdated backbones like ResNet [26], VGGNet [71], MobileNetV2 [68], and ShuffleNetV2 [54], thereby leaving the efficacy of stronger Vision Foundation Models (VFMs) in DGSS relatively unexplored. Vision Foundation Models. The concept of a Foundation Model, initially introduced by Bommasani et al. [2] in the field of Natural Language Processing (NLP), defined as \u201cthe base models trained on large-scale data in a selfsupervised or semi-supervised manner that can be adapted for several other downstream tasks\u201d. While models like the ViT [15] and Swin Transformer [52] have demonstrated excellent performance, the quest for a Vision Foundation Model (VFM) similar to their NLP counterparts is ongoing. This pursuit has yielded significant advancements with the advent of models such as CLIP [63], which learn highquality visual representation by exploring contrastive learning with large-scale image text pairs; MAE [27], utilizing a masked image modeling for learning latent image representations; SAM [42], which develops a promptable model and pre-train it on a broad dataset for segmentation task; EVA02 [18, 19], which integrates Masked Image Modeling pre-training with CLIP\u2019s vision features; and DINOv2 [58], which is pretrained on extensive, curated datasets without explicit supervision. These VFMs have shown remarkable performance in downstream applications. Yet, a dedicated investigation into their performance in the specific context of DGSS tasks remains unexplored. Parameter-Efficient Fine-tuning. In the realm of NLP, parameter-efficient fine-tuning (PEFT) has achieved notable success by freezing most parameters of VFMs and finetuning a select few. Various approaches have been developed, such as BitFit [80], which adjusts only the model\u2019s bias terms; Prompt-tuning [46], introducing soft prompts to adapt frozen language models; Adapter-tuning [28], adding lightweight modules to each transformer layer; and notably, LoRA [29], which injects trainable rank decomposition matrices into transformer layers, yielding significant influence. Target ACDC[67] (test) Cityscapes-C[56] (level-5) Night Snow Fog Rain All Blur Noise Digital Weather Avg. Motion Defoc Glass Gauss Gauss Impul Shot Speck Bright Contr Satur JPEG Snow Spatt Fog Frost HGFormer 52.7 68.6 69.9 72.0 67.2 64.1 67.2 61.5 63.6 27.2 35.7 32.9 63.1 79.9 72.9 78.0 53.6 55.4 75.8 75.5 43.2 59.4 Ours 70.6 79.5 76.4 78.2 77.6 68.5 71.7 69.7 68.7 6.2 23.0 13.1 63.7 81.5 78.9 80.6 68.8 63.8 73.6 79.5 47.9 60.0 Table 3. Results on Cityscapes \u2192ACDC (test) and Cityscapes-C (level-5) datasets, utilizing a batch size of 8. Backbone Fine-tune Method Trainable Params\u2217 mIoU Citys BDD Map Avg. 
EVA02 (Large) [18, 19] Full 304.24M 62.1 56.2 64.6 60.9 +AdvStyle [83] 304.24M 63.1 56.4 64.0 61.2 +PASTA [5] 304.24M 61.8 57.1 63.6 60.8 +GTR-LTR [61] 304.24M 59.8 57.4 63.2 60.1 Freeze 0.00M 56.5 53.6 58.6 56.2 +AdvStyle [83] 0.00M 51.4 51.6 56.5 53.2 +PASTA [5] 0.00M 57.8 52.3 58.5 56.2 +GTR-LTR [61] 0.00M 52.5 52.8 57.1 54.1 +LoRA [29] 1.18M 55.5 52.7 58.3 55.5 +AdaptFormer [7] 3.17M 63.7 59.9 64.2 62.6 +VPT [31] 3.69M 62.2 57.7 62.5 60.8 +Rein (ours) 2.99M 65.3 60.5 64.9 63.6 DINOv2 (Large) [58] Full 304.20M 63.7 57.4 64.2 61.7 +AdvStyle [83] 304.20M 60.8 58.0 62.5 60.4 +PASTA [5] 304.20M 62.5 57.2 64.7 61.5 +GTR-LTR [5] 304.20M 62.7 57.4 64.5 61.6 Freeze 0.00M 63.3 56.1 63.9 61.1 +AdvStyle [83] 0.00M 61.5 55.1 63.9 60.1 +PASTA [5] 0.00M 62.1 57.2 64.5 61.3 +GTR-LTR [5] 0.00M 60.2 57.7 62.2 60.0 +LoRA [29] 0.79M 65.2 58.3 64.6 62.7 +AdaptFormer [7] 3.17M 64.9 59.0 64.2 62.7 +VPT [31] 3.69M 65.2 59.4 65.5 63.3 +Rein (ours) 2.99M 66.4 60.4 66.1 64.3 Table 4. Performance Comparison of the proposed Rein against other DGSS and PEFT methods under the GTAV \u2192Cityscapes (Citys) + BDD100K (BDD) + Mapillary (Map) generalization setting. Mark \u2217denotes trainable parameters in backbones. The application of PEFT methods is also expanding into the field of computer vision [16, 43], with notable examples such as Visual Prompt Tuning (VPT) [31], which prepends prompts into the input sequence of transformer layers; AdaptFormer [7], replacing the MLP block in the transformer encoder with an AdaptMLP; LP-FT [43] find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution; and Prompt-ICM [20], applying largescale pre-trained models to the task of image coding for machines. Contrasting with these methods, we aim to refine feature maps for each instance within an image, thereby achieving superior performance in the realm of DGSS. 3. Methods 3.1. Preliminary Driven by the motivation that Leveraging Stronger pretrained models and Fewer trainable parameters for SuSource Domain Cityscapes mIoU GTAV 66.4 +Synthia 68.1 +UrbanSyn 78.4 +1/16 of Cityscapes Training set 82.5 Table 5. Synthetic data + 1/16 of Citys. \u2192Citys. val set. perior generalizability, we choose to fine-tune VFMs with a reduced parameter set. A straightforward thought might involve a smaller decode head; however, this method merely acts as a passive receiver of feature maps from the backbone, lacking the flexibility to effectively adapt a frozen backbone for generating task-specific or scene-specific features. In contrast, we propose to embed a mechanism, named \u201cRein\u201d, between the layers within the backbone. Rein actively refines and forwards the feature maps from each layer to the subsequent one. This approach allows us to more effectively utilize the powerful capabilities of VFMs, much like using rein to control a horse. Given a pre-trained VFM with parameters \u03a6M, consisting of a sequence of layers L1, L2, . . . , LN, a decode head H parameterized by \u03b8h, and the Rein strategy with parameters \u03b8R, the optimization objective can be written as: \\ lab el {e q: p eft _optimal} \\mathop {\\a rg \\min }\\limits _{\\theta _{R},\\theta _{h}} \\sum _{i=1}^{N_d} \\mathcal {L}oss(\\mathcal {H}_{\\theta _{h}}(\\mathcal {F}_{\\Phi _{M},\\theta _{R}}(x_i)),y_i), (1) where xi and yi denote the input image and its corresponding ground truth, respectively, and Nd signifies the total number of samples. F\u03a6M,\u03b8R represents the forward process of VFM after applying the Rein strategy. 3.2. 
Core of Rein For simple implementation across different VFMs, we opt not to modify MLP weights at specific positions as described in the [7, 29]. Instead, our approach focuses on refining the output feature maps at each layer within the VFMs, as illustrated in Fig. 2. Precisely, for the features fi produced by the i-th layer Li, Rein produces enhanced feature maps for the next layer as follows: \\ be gin {align ed } f_{1 }&=L _ {1}~(Em b ed(x ) ) ~~ ~~ ~ ~ ~ ~ ~ ~ f_ {1}\\ i n \\ mathbb {R}^{n\\times c}, \\\\ f_{i+1}&=L_{i+1}(f_{i}+\\Delta f_{i})~~~~~~~~~~i=1,2,\\ldots ,N-1, \\\\ f_{out}&=f_N+\\Delta f_N, \\end {aligned} \\label {eq:delta} (2) where f \u2032 i = fi + \u2206fi symbolizes the refined feature map, x is the input image, Embed denotes the patch embedding layer in VFMs, n represents the number of patches, N denotes the number of layers, and c is the dimensionality of f1, f2, . . . , fN. Note that the layers L1, L2, . . . , LN are kept frozen, and our focus is on training an efficient module, Rein, to generate \u2206fi as follows: \\ D elta f_i =Re i n(f_i ) ~ ~~ ~~ ~ ~ ~ \\Delta f_i\\in \\mathbb {R}^{n\\times c},i=1,2,\\dots ,N. (3) In the context of DGSS, an ideal \u2206fi should assist VFMs to bridge two types of gaps. The first is gap in scene between pre-training dataset and target scene, exemplified by the contrast between ImageNet [13] and urban-scene images [12, 65]. The second is task divergence between pretraining and fine-tuning, such as the differences between masked image modeling and semantic segmentation tasks. To establish this dual bridge, Rein starts with a set of learnable tokens T = {Ti \u2208Rm\u00d7c | i \u2208N, 1 \u2264i \u2264N}, where each token sequence Ti is randomly initialized, and m denotes the sequence length of Ti. Rein freezes the backbone and embeds knowledge learned from the finetuning dataset into these tokens, thereby bridging the gap in scene relative to the pre-training dataset. Moreover, considering the essential need in semantic segmentation to discern multiple instances within a single image, Rein implements an attention-inspired mechanism, which enables VFMs to make tailored adjustments to the features of distinct instances, thereby aiding VFMs in adapting to the differences between semantic segmentation and pre-training tasks. Specifically, Rein employs a dot-product operation to generate a similarity map Si, which captures the associations between feature vectors in fi and the tokens in T: \\ la b e l {e q :dotproduct} S_i=f_i\\times T_i^\\text {T}~~~~~~~~ S_i\\in \\mathbb {R}^{n\\times m}, (4) where Ti represents the token sequence of the i-th layer, m indicates the number of tokens in Ti. As S quantitatively evaluates the relationships between various tokens and feature vectors, Rein can apply a softmax function to align each patch with a unique instance: \\ label {eq: s o f t ma x} S_i=Softmax(\\frac {f_i\\times T_i^\\text {T}}{\\sqrt {c}}). (5) Leveraging the feature-to-token similarity map Si, we can preliminarily estimates of \u2206fi using the equation: \\l a bel { e q :o b t ain_ d el t a} \\ Delta \\bar {f_i}=S_i(:,2:m)\\times [~T_i(2:m)\\times W_{T_i} + b_{T_i}], (6) where WTi and bTi denote the weights and biases of a MLP, respectively. This MLP enables the transformation of Ti across different feature spaces during the computation of Si and \u2206\u00af fi. Optionally, Rein can pre-calculate Ti \u00d7 WTi + bTi to reduce inference time. 
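Taken together with the final modification step of Eq. (7) and the low-rank token parameterization of Eq. (12) described below, the per-layer refinement of Eqs. (4)-(6) can be sketched in PyTorch roughly as follows; this is a single-layer simplification (the paper shares the MLP weights across layers and keeps a separate token sequence per layer), not a faithful re-implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReinLayerSketch(nn.Module):
    """One layer's refinement: S_i = softmax(f_i T_i^T / sqrt(c)),
    delta_bar = S_i(:, 2:m) [T_i(2:m) W_T + b_T], delta_f = (delta_bar + f_i) W_f + b_f,
    with T_i = A_i B_i and rank r << c. In the full method the two MLPs are shared
    across all layers (Eq. 11), while A_i, B_i differ per layer."""

    def __init__(self, c: int = 1024, m: int = 100, r: int = 16):
        super().__init__()
        self.A = nn.Parameter(torch.empty(m, r).uniform_(-0.02, 0.02))  # Eq. (12)
        self.B = nn.Parameter(torch.empty(r, c).uniform_(-0.02, 0.02))
        self.mlp_T = nn.Linear(c, c)   # W_T, b_T in Eq. (6)
        self.mlp_f = nn.Linear(c, c)   # W_f, b_f in Eq. (7), zero-initialized as in Algorithm 1
        nn.init.zeros_(self.mlp_f.weight)
        nn.init.zeros_(self.mlp_f.bias)
        self.c = c

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (batch, n, c) feature map produced by one frozen VFM layer.
        T = self.A @ self.B                                  # (m, c) low-rank tokens
        S = F.softmax(f @ T.t() / self.c ** 0.5, dim=-1)     # (batch, n, m), Eq. (5)
        delta_bar = S[..., 1:] @ self.mlp_T(T[1:])           # drop the first token, Eq. (6)
        delta = self.mlp_f(delta_bar + f)                    # Eq. (7)
        return f + delta                                     # refined feature f'_i

rein = ReinLayerSketch()
f = torch.randn(2, 1024, 1024)   # 2 images, 1024 patches (32 x 32 grid), c = 1024
print(rein(f).shape)             # torch.Size([2, 1024, 1024])
```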
The sum of Si equals one due to the softmax function; however, this can induce unneeded changes when all features are precise. To avoid this, Si(:, 2 : m) is designed to choose columns 2 to m of Si, and Ti(2 : m) denotes the selection of rows 2 to m of Ti. This strategic selection allows models to sidestep unnecessary adjustments by assigning a high value to the first token and subsequently discarding it. This approach allows the sum of each row in Si to vary from 0 to 1, thus reducing the risk of inappropriate changes. To enhance the flexibility in feature adjustment, Rein utilizes a MLP composed of Wfi and bfi to produce the final feature modifications \u2206fi: \\ l ab e l { eq: f ina l delta} \\Delta f_i=(\\Delta \\bar {f_i}+f_i) \\times W_{f_i} +b_{f_i}. (7) Benefiting from these instance-level \u2206fi adjustments, Rein is capable of generating diverse modifications for various categories within a single image. The details of Rein will be explained in the next section. 3.3. Details of Rein Linking tokens to instances. At the core of Rein, we establish an implicit yet effective linkage between tokens and instances, which has demonstrated notable performance, as detailed in Sec. 4. This connection is further reinforced by utilizing object queries, a key component in DETR[3]-style decode heads [8, 9, 81], as intermediaries. These queries are empirically proven to establish a direct association with instances. Specifically, we generate layer-wise queries Qi from our learnable tokens Ti via linear transformation: \\ la b el { eq: li n k} Q_i=T_i \\times W_{Q_i}+b_{Q_i}~~~~~~~~Q_i\\in \\mathbb {R}^{m\\times c'}, (8) where WQi and bQi signify the weights and biases, respectively, and c\u2032 denotes the dimension of Qi. However, due to the complexity arising from the large numbers of various layers in VFMs, transforming the diverse Qi into a single query Q poses computational challenges. To address this, Rein computes both the maximal component Qmax \u2208Rm\u00d7c\u2032 and the average component Qavg \u2208Rm\u00d7c\u2032 using the following equation: \\begi n { ali gned} Q_{ma x}(j, k)& =\\max _ {i = 1 , 2 , \\ld ots , N}{Q_i(j,k)},\\\\ Q_{avg}(j,k)&=\\frac {1}{N}\\sum _{i=1}^N{Q_i(j,k)}. \\end {aligned} \\label {eq:max_avg} (9) Subsequently, Q is derived as: \\label {eq:li nk2} Q&=C o nc a t([Q_{max},Q_{avg},Q_N])\\times W_Q + b_Q. (10) By mapping T onto Q, which subsequently links to instances, Rein achieves enhanced performance with a marginal increase in parameters. Layer-shared MLP weights. To address the redundancy of parameters in the layer-specific MLP weights, specifically WTi in Eq. (6), Wfi in Eq. (7), and WQi in Eq. (8), which collectively contribute to a substantial trainable parameter count, we adopt a new strategy. Since the learnable Ti is capable of producing distinct \u2206fi for each layer, we design the role of the MLP to primarily perform consistent linear transformations across different feature spaces for each layer within the backbone. To this end, we employ shared MLP weights across layers as outlined in the equations: \\be gin { align ed} & [ W _ { T_1} , b_{ T_ 1}]=[ W_{T _ 2},b_ {T_2 } ] = \\ l dots =[W _{ T_N}, b_{T _ N}],\\ \\ &[ W _ { f _ 1},b _ {f_ 1}]~=[W_{f_2},b_{f_2}]~~=\\ldots =[W_{f_N},b_{f_N}],\\\\ &[W_{Q_1},b_{Q_1}]=[W_{Q_2},b_{Q_2}]=\\ldots =[W_{Q_N},b_{Q_N}].\\label {eq:shareweight} \\end {aligned} (11) Low-rank token sequence. 
Recognizing the potential for information overlap among diverse learnable tokens, such as the high similarity between tokens representing a car\u2019s headlight and a bicycle\u2019s light, Rein adopts a strategy to generate a low-rank token sequence T as follows: \\ la b el { e q:low r a nk} Ti=A_i\\times B_i, ~~~~~~~A\\in \\mathbb {R}^{m\\times r}, B\\in \\mathbb {R}^{r\\times c}, (12) where c denotes the dimension of Ti, m is the length of sequence Ti, and r represents the rank, with r \u226ac. Here, matrices A and B are constructed as low-rank matrices. To reduce inference time, Rein can precompute and store T. By implementing this low-rank token sequence approach, Rein significantly reduces the number of parameter. 4. Experiments 4.1. Settings Visual Foundation Models. To thoroughly assess the influence of Visual Foundation Models (VFMs) within the context of DGSS, we analyze five distinct VFMs, each with different training strategies and datasets. Our selection includes CLIP [63], a language-image pre-training model; MAE [27], known for its masked pre-training approach; SAM [42], which leverages a large-scale segmentation dataset; EVA02 [18, 19] combines CLIP with masked image modeling; and DINOv2 [58], based on self-supervised pretraining with curated dataset. For balancing precision and efficiency, we mainly employ the ViT-Large architecture for these VFMs, except SAM, which utilizes a ViT-Huge image encoder, as described in its original paper [42]. We establish two fundamental baselines for VFMs: \u201cFull\u201d, where we fine-tune the entire network, and \u201cFreeze\u201d, in which all backbone parameters are fixed, with training solely on the segmentation head. More details about VFMs and PEFT methods are available in the supplementary material. Datasets. We evaluate VFMs and proposed methods on both real-world datasets (Cityscapes [12], BDD100K [78], Mapillary [57]) and synthetic datasets (GTAV [65], Synthia [66], UrbanSyn [24]). In detail, Cityscapes (denoted as Citys) is an autonomous driving dataset that contains 2975 training images and 500 validation images, each with the resolution of 2048 \u00d7 1024. BDD100K (shortened to BDD) and Mapillary (denoted by Map) offer 1,000 (1280 \u00d7 720) and 2,000 (1902 \u00d7 1080) validation images, respectively. GTAV, a synthetic dataset, presents 24,966 labeled images obtained from the game. Synthia, a synthetic dataset, provides 25,000 images created by photo-realistic rendering. UrbanSyn, a synthetic dataset consists of 7,539 images. Implementation details. We utilize the MMSegmentation [11] codebase for our implementation. For superior performance, mask2former [9], a widely-used segmentation head, is integrated with various VFMs that serve as the backbone. Additional experiments involving other decode heads are detailed in the supplementary material. For the training phase, the AdamW optimizer [53] is employed, setting the learning rate at 1e-5 for the backbone and 1e-4 for both the decode head and the proposed Rein. Aiming to efficient training process, we utilize a configuration of 40,000 iterations with a batch size of 4, and crop images to a resolution of 512 \u00d7 512. Our approach includes only basic data augmentation, following Mask2Former [9]. Thanks to our streamlined training configuration and reduced number of trainable parameters, Rein can fine-tune models like DINOv2-Large or EVA02-Large on a single RTX 3090Ti GPU within 12 hours for superior generalization ability. 4.2. 
Comparison with State-of-The-Art Methods In this section, we comprehensively evaluate Rein over five datasets within three generalization settings: GTAV \u2192 Citys + BDD + Map, GTAV + Synthia \u2192Citys + BDD + Map, and Citys \u2192BDD + Map. Rein is benchmarked against state-of-the-art (SOTA) methods, which can be classified into two groups, including domain generalized semantic segmentation (DGSS) methods[5, 10, 14, 30, 38, 45, 59, 61, 62, 74, 79, 83], and parameter-efficient fine-tuning (PEFT) approaches [7, 29, 31]. Investigation of various VFMs. Our analysis of VFMs and proposed Rein in the GTAV \u2192Citys + BDD + Map setting is presented in Tables 1 and 2. In this setup, models are fine-tuned using GTAV and evaluated on Cityscapes, BDD100K, and Mapillary. Note that, due to the fixed and relatively small number of trainable parameters in the decode head (20.6M), the count of trainable parameters presented in the tables are focused solely on the backbone and the PEFT module. Our results, as detailed in Table 1, indicate that frozen VFMs significantly outperform previous DGSS methods without specialized design. Moreover, as shown in Table 2, VFMs with full parameter fine-tuning exhibit enhanced performance relative to their frozen counterparts. Remarkably, Rein achieves even superior generalization capabilities, surpassing the full parameter fine-tuning with merely an extra 1% of trainable parameters compared to the original backbone. Visual samples for qualitative comparison are given in Fig. 3. Backbone Fine-tune Method Trainable Params\u2217 road side. build. wall fence pole light sign vege terr. sky pers. rider car truck bus train moto. bicy. mIoU EVA02 (Large) [18, 19] Full 304.24M 89.3 46.9 89.9 47.7 45.6 50.1 56.8 42.2 88.8 48.4 89.9 75.8 49.0 90.5 45.3 69.2 55.9 44.4 55.1 62.2 Freeze 0.00M 93.1 52.7 88.0 47.4 31.1 41.7 46.0 39.6 85.7 41.4 89.5 67.5 39.7 89.0 47.0 72.8 46.3 19.2 35.2 56.5 Rein-core 52.84M 91.1 53.8 90.0 50.3 47.7 46.6 56.4 42.9 87.8 44.2 90.4 73.5 44.2 91.8 58.1 77.2 57.3 43.4 57.3 63.4 + Rein-link 59.33M 90.9 48.5 90.0 52.6 49.4 49.1 57.2 39.8 88.9 46.5 90.5 74.4 44.0 91.0 52.3 80.7 67.3 44.3 60.3 64.1 + Rein-share 5.02M 92.7 54.3 90.0 51.8 48.6 48.8 55.3 45.0 88.9 46.7 89.8 73.7 43.3 90.6 49.5 81.1 69.6 41.7 50.2 63.4 + Rein-lora 2.99M 91.7 51.8 90.1 52.8 48.4 48.2 56.0 42.0 89.1 44.1 90.2 74.2 47.0 91.1 54.5 84.1 78.9 47.2 59.4 65.3 DINOv2 (Large) [58] Full 304.20M 89.0 44.5 89.6 51.1 46.4 49.2 60.0 38.9 89.1 47.5 91.7 75.8 48.2 91.7 52.5 82.9 81.0 30.4 49.9 63.7 Freeze 0.00M 92.1 55.2 90.2 57.2 48.5 49.5 56.7 47.7 89.3 47.8 91.1 74.2 46.7 92.2 62.6 77.5 47.7 29.6 47.2 61.1 Rein-core 52.84M 92.4 57.8 90.6 56.8 50.7 50.5 57.5 44.8 89.8 47.0 91.1 75.9 47.2 91.9 60.1 80.3 59.8 37.9 52.3 64.9 + Rein-link 59.33M 91.2 55.5 90.6 55.6 52.5 51.1 59.7 45.1 89.8 47.1 91.1 75.8 47.1 92.6 64.6 82.2 65.5 40.4 52.7 65.8 + Rein-share 5.02M 93.5 61.2 90.7 57.7 53.2 52.4 58.0 50.1 89.7 49.9 90.7 74.8 45.0 91.7 58.5 80.1 66.3 36.9 50.7 65.8 + Rein-lora 2.99M 92.4 59.1 90.7 58.3 53.7 51.8 58.2 46.4 89.8 49.4 90.8 73.9 43.3 92.3 64.3 81.6 70.9 40.4 54.0 66.4 Table 6. Ablation Study about Rein under Cityscapes \u2192BDD100K generalization in terms of mIoU. Components are sequentially incorporated. To better illustrate the gains contributed by each component, we employ varying shades of yellow to demonstrate the relative performance of the Freeze and Rein methods. The best results across all methods are highlighted. Input RobustNet GTR Ours GT Citys BDD Map WildNet Figure 3. 
Qualitative Comparison under GTAV \u2192Cityscapes (Citys) + BDD100K (BDD) + Mapillary (Map) generalization setting. Comparing Rein with SOTA. We conduct a comprehensive performance comparison of the proposed Rein against existing DGSS and PEFT methods under the GTAV \u2192 Citys + BDD + Map setting, as detailed in Table 4. Owing to the robust feature extraction capabilities inherent in VFMs, DGSS methods, which typically enhance generalizability through strong data augmentation or consistency constraints, (e.g., AdvStyle, PASTA, and GTR), do not exhibit significant performance improvement. On the other hand, PEFT methods have demonstrated notable advancements. For instance, AdaptFormer outperforms the \u201cFreeze\u201d baseline using EVA02 as the backbone, while VPT shows improved performance over \u201cFull\u201d with DINOv2. Employing the same backbones (DINOv2 and EVA02), proposed Rein achieves superior performance and surpass previous DGSS and PEFT methods. Real-to-Real generalization of Rein. The generalization from one real-world dataset to others is pivotal for practical applications in the field. To this end, we conduct experiments under the Citys \u2192ACDC, Citys \u2192Cityscapes-C, and Citys \u2192BDD + Map generalization setting. As shown in Table 3 and 7, Rein, when coupled with the DINOv2-Large, demonstrates superior performance across all datasets. This underscores the effectiveness of Rein in generalizing to diverse real-world scenarios. Synthetic-to-real generalization of Rein. As Tab. 11 illustrates, trained on synthetic UrbanSyn+GTAV+Synthia datasets, Rein achieved a 78.4% mIoU on the Cityscapes validation set. Further improvement is possible with additional synthetic data and higher-quality images generated by diffusion models, like [1]. This result can also be a valuable pre-trained weight for data-efficient training, reaching an 82.5% mIoU with 1/16 of Cityscapes training set. This is a significant performance for semi-supervised semantic segmentation. More backbones. We extend our analysis to integrating Rein with Convolutional Networks, such as ResNet and ConvNeXt, and smaller scale architectures like DINOv2S/B. As shown in Table 8, our findings reveal that Rein exhibits remarkable performance with diverse backbones. 4.3. Ablation Studies and Analysis We conduct extensive ablation studies within two settings: GTAV \u2192Citys and GTAV \u2192Citys + BDD + Map. Analysis of the key components. Table 6 is dedicated to thoroughly examining the effectiveness of each component within Rein. In the GTAV \u2192Citys generalization setting, we sequentially incorporate different components of Rein and assess their impact. Interestingly, we observe that the \u201cFreeze\u201d occasionally exhibit better recognition for specific Methods Backbone Trainable Parameters\u2217 mIoU BDD Map Avg. IBN [59] ResNet50 [26] 23.58M 48.6 57.0 52.8 DRPC [79] ResNet50 [26] 23.58M 49.9 56.3 53.1 GTR [61] ResNet50 [26] 23.58M 50.8 57.2 54.0 SAN-SAW [62] ResNet50 [26] 23.58M 53.0 59.8 56.4 WildNet [45] ResNet101 [26] 42.62M 50.9 58.8 54.9 HGFormer [14] Swin-L [52] 196.03M 61.5 72.1 66.8 Freeze EVA02-L [18] 0.00M 57.8 63.8 60.8 Rein (Ours) EVA02-L [18] 2.99M 64.1 69.5 66.8 Freeze DINOv2-L [58] 0.00M 63.4 69.7 66.7 Rein (Ours) DINOv2-L [58] 2.99M 65.0 72.3 68.7 Table 7. Performance Comparison of the Rein against other DGSS methods under Cityscapes \u2192BDD100K (BDD) +Mapillary (Map) generalization. The best results are highlighted. Avg. 
mIoU ResNet (50) ResNet (101) ConvNeXt (Large) DINOv2 (S) DINOv2 (B) Full 38.9 46.1 52.2 51.8 56.7 Ours 46.6 46.3 55.5 55.7 59.1 Table 8. Results for ConvNets and smaller backbones. categories, e.g., \u2018road, sidewalk\u2019, compared to the \u201cFull\u201d. This suggests that VFMs lose some pre-training knowledge during fine-tuning, and \u201cFreeze\u201d helps to prevent. Similarly, our methods mitigate this knowledge forgetting. Furthermore, our methods show improved recognition capabilities for the majority of the 19 categories. For example, in recognizing \u2018wall, motorcycle, bicycle\u2019, our approach significantly outperforms both the \u201cFull\u201d and \u201cFreeze\u201d baselines. Overall, \u201cRein-core\u201d boosts the average performance across 19 classes. Furthermore, \u201cRein-link\u201d further boosts accuracy for certain objects, including \u2018car, bus, train, motorcycle\u2019, especially for DINOv2. Employing layer-shared MLP weights and low-rank token sequence efficiently reduces the number of trainable parameters and positively influences the performance of the model. Study on token length m. The core component of Rein is learnable tokens T \u2208Rm\u00d7c. We explored various lengths m for the token sequence, ranging from 25 to 200. As demonstrated in Fig. 4, models with m = 100 and m = 150 both achieve a strong mIoU of 64.3%. We ultimately selected m = 100 as the most suitable parameter. Study on rank r. As shown in Table 12, we turn attention to the effect of rank r on model performance. With DINOv2 as the backbone, the optimal results are observed at r = 16 and r = 32. Consequently, unlike LoRA [29], we opt for a comparatively higher value of r = 16 for our model. Speed, memory, and storage. For practical applications, training speed, GPU memory usage, and model storage requirements are crucial. As shown in Table 13, compared to \u201cFull\u201d baseline, proposed Rein improves training speed and reduces GPU memory usage. A significant advantage of Rein is that models trained under different settings can share the same backbone parameters. This means that for switch 25 50 75 100 125 150 175 200 Length m 62.5 63.0 63.5 64.0 64.5 Average mIoU (%) 62.50 62.70 63.60 62.40 62.40 63.80 64.20 64.30 64.30 62.30 EVA02 DINOv2 2.96 2.97 2.99 3.01 3.03 Trainable Parameters (M) Figure 4. Ablation study on token length m. Rank r 4 8 16 32 64 Params 2.67M 2.77M 2.99M 3.42M 4.28M DINOv2 (Large) [58] Citys 65.8 66.1 66.4 66.1 66.4 BDD 60.2 60.3 60.4 60.7 61.0 Map 65.2 65.1 66.1 65.9 65.0 Avg. 63.7 63.9 64.3 64.3 64.1 Table 9. Ablation study on lora dim r. VFMs Method Training Time GPU Memory Storage DINOv2 (Large) Full 11.2 h 14.7 GB 1.22 GB Rein 9.5 h 10.0 GB 1.23 GB Table 10. Training Time, GPU Memory, and Storage. in diverse tasks and settings, we can only store and swap the rein weights (0.01GB) and head weights (0.08GB), rather than all parameters. 5. Conclusions In this paper, we assess and harness Vision Foundation Models (VFMs) in the context of DGSS. Driven by the motivation that Leveraging Stronger pre-trained models and Fewer trainable parameters for Superior generalizability, we first investigate the performance of VFMs under diverse DGSS settings. Subsequently, we introduce a robust fine-tuning approach, namely Rein, to parameter-efficiently harness VFMs for DGSS. With a fewer trainable parameters, Rein significantly enhance generalizability of VFMs, outperforming SOTA methods by a large margin. 
Rein can be seamlessly integrated as a plug-and-play adapter for existing VFMs, improving generalization with efficient training. Extensive experiments demonstrate the substantial potential of VFMs in the DGSS field, validating the effectiveness of proposed Rein in harnessing VFMs for DGSS. 6. Acknowledgements This work was supported in part by the Anhui Provincial Key Research and Development Plan 202304a05020072, in part by the Postdoctoral Fellowship Program of CPSFGZB20230713, and in part by the National Natural Science Foundation of China under Grant 61727809. Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation Supplementary Material Freeze 0.00M Rein 2.53M Full 304.24M 50 55 60 65 Test mIoU EVA02 + SemFPN 0.00 0.05 0.10 0.15 0.20 Training Loss Freeze 0.00M Rein 2.53M Full 304.20M 50 55 60 65 Test mIoU DINOv2 + SemFPN 0.00 0.05 0.10 0.15 0.20 Training Loss Freeze 0.00M Rein 2.99M Full 304.24M 50 55 60 65 Test mIoU EVA02 + Mask2Former 15.0 17.5 20.0 22.5 25.0 Training Loss Freeze 0.00M Rein 2.99M Full 304.20M 50 55 60 65 Test mIoU DINOv2 + Mask2Former 15.0 17.5 20.0 22.5 25.0 Training Loss Figure 5. The curves of training loss and test metrics display consistent trends across different VFMs and decode heads: intuitively, as trainable parameters increase from 0.00M(Freeze) \u21922.53M(Rein) \u2192304.24M(Full), the training loss monotonically decreases, indicating that a greater number of trainable parameters indeed better fit the training dataset. However, the test metrics on the target dataset initially rise and then fall, forming an inverted U-shape. This pattern suggests that the \u201cFull\u201d baseline overfits the training data, leading to diminished test performance. These findings are aligned with our motivation that Leveraging Stronger pre-trained models and Fewer trainable parameters for Superior generalizability. The blue bar charts in the figure represent the average mIoU tested on the Cityscapes, BDD100K, and Mapillary datasets, while the yellow line denotes the training loss during fine-tuning on GTAV dataset. 7. Fewer Trainable Parameters Classical neural network theory [21, 25] points out that as model capacity increases, the empirical risk (or training risk) monotonically decreases, indicating an improved fit to training data. Conversely, the true risk (or test risk) typically exhibits a \u201cU-shaped\u201d curve, initially decreasing and then increasing, a phenomenon known as overfitting. From a modern viewpoint, the scaling law [36] suggests that on a smaller fixed dataset, performance stops to improve as model parameters increase, leading to overfitting. In the majority of general tasks, the practice of earlystopping, based on evaluation data, can partly mitigate overfitting. However, in the field of domain generalization, the unknown test data distribution makes acquiring a valid evaluation dataset unavailable. Moreover, fine-tuning datasets are often smaller compared to ImageNet [13] or LVD142M [58]. Hence, employing fewer trainable parameters emerges as a strategic approach to mitigate overfitting. In our main paper, extensive experiments comprehensively demonstrate Rein\u2019s pivotal role in enhancing the generalization capabilities of VFMs. 
This enhancement may be attributed to two factors: 1) Rein\u2019s improved fitting capability for VFMs, ensuring better alignment with training data; 2) Rein\u2019s reduction of overfitting in VFMs during finetuning on smaller datasets, thus exhibiting enhanced generalization in testing. To delve into this, we analyze and compare the average training loss in the final 1000 iterations of the fine-tuning phase and their corresponding test metrics for various VFMs and decode heads. Fig. 5 showcases a consistent trend across four different configurations. As trainable parameters increase from 0.00M(Freeze) \u21922.53M(Rein) \u2192304.24M(Full), the training loss monotonically decreases. However, the test metrics on the target dataset peak with Rein, which employs 2.53 million parameters and incurs a sub-optimal training loss. In contrast, the \u201cFull\u201d baseline, despite recording the lowest training loss, only achieves sub-optimal test performance, a clear indicator of overfitting when compared to other setups. This observation aligns with the conclusions in [21, 36], supporting ours observation that leveraging Stronger pre-trained models and Fewer trainable parameters can lead to Superior generalizability. Source Domain Cityscapes mIoU GTAV 66.4 +Synthia 68.1 +UrbanSyn 78.4 +1/16 of Cityscapes Training set 82.5 Table 11. Results on Cityscapes validation set. 8. Value of synthetic data As Tab. 11 illustrates, trained on synthetic UrbanSyn [24]+GTA+Synthia datasets, Rein achieved a 78.4% mIoU on the Cityscapes validation set. Further improvement is possible with additional synthetic data and higherquality images generated by diffusion models, like [1]. This result can also be a valuable pre-trained weight for dataefficient training, reaching an 82.5% mIoU with 1/16 of Rank r 4 8 16 32 64 Params 2.67M 2.77M 2.99M 3.42M 4.28M EVA02 (Large) [18] Citys 62.6 63.5 65.3 63.8 63.4 BDD 58.5 58.9 60.5 60.5 60.2 Map 63.7 63.8 64.9 64.5 64.3 Avg. 61.6 62.1 63.6 62.9 62.7 Table 12. Ablation study on lora dim r. VFMs Method Training Time GPU Memory Storage EVA02 (Large) Full 11.8 h 15.9 GB 1.22 GB Rein 10.5 h 12.5 GB 1.23 GB Table 13. Training Time, GPU Memory, and Storage. Cityscapes training set. This is a significant performance for semi-supervised semantic segmentation. 9. Ablation on decode head Our experiments on Rein employ the Mask2Former [9] decode head, which shares structures or core concepts with numerous methods in dense prediction tasks [3, 8, 47, 55, 81]. The universality of Mask2Former highlights the significance of our findings for a range of segmentation tasks, including instance and panoptic segmentation. Furthermore, to demonstrate Rein\u2019s effectiveness in enhancing backbone generalization and its robustness across various decode heads, we conduct supplementary experiments using the popular SemFPN decode head [41], in the GTAV\u2192 Cityscapes + BDD100K + Mapillary setting. As shown in Table 14, Rein surpasses the \u201cFull\u201d and \u201cFreeze\u201d baselines, employing 2.53 million trainable parameters within the backbone, while the SemFPN decode head comprises 1.63 million parameters. Owing to the absence of object queries in SemFPN, the \u201clinking tokens to instance\u201d mechanism, described in Sec.3.3, is not utilized, resulting in a reduction of Rein\u2019s trainable parameters from 2.99 million to 2.53 million. 
When compared to the complete Rein configuration using the Mask2Former, using SemFPN achieves sub-optimal performance, evident in the 64.3% mIoU reported in Table 2 and 62.1% mIoU in Table 9, both implemented with DINOv2-Large. As shown in Table 15, the Mask2Former brings the 11.7% mIoU for ResNet101.These findings guide our decision to focus on experiments involving Mask2Former in the main paper. 10. Ablation on EVA02 Study on rank r As shown in Table 12, with EVA02 as the backbone, the optimal results are observed at r = 16. Speed, memory, and storage. As shown in Table 13, compared to \u201cFull\u201d baseline, proposed Rein improves training speed and reduces GPU memory usage. Backbone Fine-tune Method Trainable Params\u2217 mIoU Citys BDD Map Avg. EVA02 [18, 19] (Large) Full 304.24M 58.5 56.9 62.0 59.1 Freeze 0.00M 54.1 51.2 54.3 53.2 Rein 2.53M 61.4 58.5 62.0 60.7 DINOv2 [58] (Large) Full 304.20M 61.2 55.9 62.5 59.9 Freeze 0.00M 58.9 56.4 60.3 58.5 Rein 2.53M 63.6 59.0 63.7 62.1 Table 14. Performance Comparison with the proposed Rein with SemFPN [41] as Backbones under the GTAV \u2192Cityscapes (Citys) + BDD100K (BDD) + Mapillary (Map) generalization setting. Models are fine-tuned on GTAV and tested on Cityscapes, BDD100K and Mapillary. The best results are highlighted. \u2217denotes trainable parameters in backbones. Backbone Decoder Tune mIoU ResNet101 [45] DeeplabV3plus Full 34.4 ResNet101 Mask2Former Full 46.1 DINOv2 Mask2Former Full 61.7 DINOv2 Mask2Former Ours 64.3 Table 15. Results on GTAV\u2192Citys+BDD+Map. Metrics for first line are from Wildnet. Methods Publication mIoU Citys BDD Map Avg. RobustNet [10] CVPR 21 37.7 34.1 38.5 36.8 PintheMem [38] CVPR 22 44.5 38.1 42.7 41.8 SAN-SAW [62] CVPR 22 42.1 37.7 42.9 40.9 WildNet [45] CVPR 22 43.7 39.9 43.3 42.3 DIGA [74] CVPR 23 46.4 33.9 43.5 41.3 SPC [30] CVPR 23 46.4 43.2 48.2 45.9 EVA02 Frozen [18, 19] arXiV 23 55.8 55.1 59.1 56.7 EVA02 + Rein 63.5 60.7 63.9 62.7 DINOv2 Frozen [58] arXiV 23 64.8 60.2 65.2 63.4 DINOv2 + Rein 68.1 60.5 67.1 65.2 Table 16. Performance Comparison of the proposed Rein against other DGSS methods under GTAV + Synthia \u2192Cityscapes (Citys) + BDD100K (BDD) +Mapillary (Map) generalization. 11. Multi-source generalization. In this part, we compare Rein against other DGSS methods under GTAV + Synthia \u2192Citys + BDD + Map setting, in which networks are fine-tuned using both GTAV and Synthia datasets, and tested on Cityscapes, BDD100K, and Mapillary. As shown in Table 16, we report the performance of Rein employing two VFMs, EVA02 and DINOv2. Our results demonstrate that Rein significantly surpasses existing DGSS methods by a large margin in average mIoU (from 45.9% to 65.2%). 12. More details about VFMs CLIP. In our study, we utilize the ViT-Large architecture, setting the patch size to 16 \u00d7 16. Each layer of this architecture outputs features with a dimensionality of 1024, making use of the pre-trained weights from the foundational work [63]. Our model undergoes a pre-training phase through contrastive learning, employing publicly available image-caption data. This data is compiled through a blend of web crawling from select websites and integrating widely-used, existing image datasets. For the model\u2019s pre-trained weights, which have a patch size of 14 \u00d7 14 and an original pre-training image size of 224 \u00d7 224, we adopt bilinear interpolation to upscale the positional embeddings to a length of 1024. 
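A minimal sketch of that positional-embedding resizing, assuming a square patch grid and a leading class token (the authors' exact routine may differ), is given below; resizing to 1024 positions corresponds to a 32 x 32 grid, i.e. 512 x 512 crops at patch size 16.

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed: torch.Tensor, new_len: int) -> torch.Tensor:
    """Bilinearly resize ViT positional embeddings, keeping the class token.

    pos_embed: (1, 1 + old_len, dim) with the class token first; both the old
    and the new grids are assumed to be square.
    """
    cls_tok, grid = pos_embed[:, :1], pos_embed[:, 1:]
    old_size = int(grid.shape[1] ** 0.5)
    new_size = int(new_len ** 0.5)
    grid = grid.reshape(1, old_size, old_size, -1).permute(0, 3, 1, 2)  # (1, dim, H, W)
    grid = F.interpolate(grid, size=(new_size, new_size), mode="bilinear", align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, new_size * new_size, -1)
    return torch.cat([cls_tok, grid], dim=1)

# CLIP ViT-L/14 pre-trained at 224 x 224 has a 16 x 16 patch grid (256 positions).
pos = torch.randn(1, 1 + 16 * 16, 1024)
print(resize_pos_embed(pos, 1024).shape)  # torch.Size([1, 1025, 1024])
```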
Moreover, trilinear interpolation is utilized to enlarge the kernel size of the patch embed layer to 16 \u00d7 16. Our model comprises 24 layers, and the features extracted from the 7th, 11th, 15th, and 23rd layers (counting from the zeroth layer) are subsequently channeled into the decoding head. MAE. Employing the ViT-Large architecture, our model outputs features from each layer with a dimensionality of 1024, maintaining a patch size of 16 \u00d7 16. This model capitalizes on the pre-trained weights as delineated in the original work [27], and it undergoes self-supervised training using masked image modeling on ImageNet-1K. The architecture is composed of 24 layers, directing features from the 7th, 11th, 15th, and 23rd layers directly into the decoding head. SAM. Aligning with the methodology described in the foundational paper [42], we employ the ViT-Huge architecture as our image encoder, making use of pre-trained weights that were trained on SA-1B [42] for a promptable segmentation task. The patch size of this model is set to 16\u00d716, and each layer is designed to output features with a dimensionality of 1280, summing up to a total of 32 layers. The positional embeddings of the model are upscaled to a length of 1024 via bicubic interpolation. From this model, we extract features from the 7th, 15th, 23rd, and 31st layers and feed them into the decoder. EVA02. In our approach, we adopt the largest scale configuration, EVA02-L, as our structural backbone, as suggested in the paper [18]. This particular model configuration determines its patch size as 16, with each layer producing feature maps of 1024 dimensions, across a total of 24 layers. EVA02 undergoes training through a combination of CLIP and Masked Image Modeling techniques on an aggregated dataset that includes IN-21K [13], CC12M [4], CC3M [70], COCO [50], ADE20K [84], Object365 [69], and OpenImages [44]. Mirroring the approach used in previous models, we upscale the positional embeddings to 1024 through bilinear interpolation, and the patch embed layer\u2019s convolutional kernel size is augmented to 16 \u00d7 16 via bicubic interpolation. Features from the 7th, 11th, 15th, and 23rd layers are then processed through the decode head. DINOv2. Our choice of backbone for this study is DINOv2-L, which has been distilled from DINOv2-g. As noted in the original documentation, DINOv2-L occasionally surpasses the performance of DINOv2-g [58]. Sharing the same patch size, dimensionality, and layer count as EVA02-L, we apply equivalent processing to both the positional embeddings and patch embed layer of DINOv2-L. The features extracted from the 7th, 11th, 15th, and 23rd layers are subsequently fed into the decode head. DINOv2 is originally pretrained in a self-supervised fashion on the LVD-142M [58] dataset, following the procedures outlined in its respective paper. VPT, LoRA, and AdaptFormer. Based on extensive experimentation, we have optimized the implementation of PEFT methods for DINOv2, utilizing configurations that enhance performance. These methods include: 1) VPT: It is deep and has 150 tokens. 2) LoRA: Applied to the query and value MLP components, LoRA is configured with a rank of 8. Additionally, it incorporates a minimal dropout rate of 0.1%. 3) AdaptFormer: This method employs a bottleneck design with a width of 64, initialized using LoRA. Notably, it omits layer normalization. 13. 
Algorithm of Proposed Rein Algorithm 1 outlines the training procedure for Rein, wherein the weights conform to the constraints specified in Eq. (11). In this context, the variable c represents the number of channels in the feature maps of model M, N denotes the total number of layers within M, T indicates the overall number of training iterations, and r is defined as a hyperparameter that is considerably smaller than c. 14. Qualitative Results and Future works In this section, we showcase our prediction results across various datasets, including Cityscapes, BDD100K, and Mapillary, as depicted in Fig.6, Fig.8, and Fig.7. All models are trained on the GTAV dataset without any fine-tuning on real-world urban-scene datasets. Our method outshines other approaches in accuracy, especially in categories like traffic signs, bicycles, traffic lights, sidewalks, roads, and trucks, demonstrating high precision for both large objects and smaller targets. Notably, despite not specifically optimizing for night-time segmentation, Rein\u2019s performance during night conditions is surprisingly high, almost akin to daytime performance, as illustrated in Fig.6. With the rapid development of generative models research, we anticipate that our work could leverage highquality generated samples to approach the performance of models trained with supervision on real datasets. Furthermore, we are prepared to investigate how VFMs can enhance the performance of semantic segmentation models trained on real datasets under various adverse weather conditions or on special road types. Finally, further exploration is necessary to investigate how Rein can be extended to Algorithm 1: Training process of Rein. Input: A sequence of input data and corresponding labels {(xi, yi) | t \u2208N, 1 \u2264i \u2264Nd}; Pre-trained Vision Foundation Model M, consisting of a patch embed layer Lemb, and layers L1, L2, . . . , LN; a decode head H; and a proposed module Rein R. The module Rein comprises the following matrices and vectors, initialized as specified: \\ begin {array}{ ll} A_i \\in \\m a thbb {R}^{m \\t imes r}, & \\ tex t {uni formly in itialized}, \\\\ B _i \\i n \\mathbb { R} ^{r \\ tim e s c}, & \\text { uniformly in iti a liz ed}, \\\\ W_{ T_ i} \\i n \\ m ath bb {R}^{c \\ ti mes c }, & \\te xt {uniform ly initialized}, \\\\ W_{f_i} \\in \\mathbb {R}^{c \\times c}, & \\text {initialized to zero}, \\\\ W_{Q_i} \\in \\mathbb {R}^{c \\times c'}, & \\text {uniformly initialized}, \\\\ b_{T_i} \\in \\mathbb {R}^{c}, & \\text {initialized to zero}, \\\\ b_{f_i} \\in \\mathbb {R}^{c}, & \\text {initialized to zero}, \\\\ b_{Q_i} \\in \\mathbb {R}^{c'}, & \\text {initialized to zero}, \\\\ \\end {array} for each i \u2208N, 1 \u2264i \u2264N. Additionally, WQ \u2208R3c\u2032\u00d7c\u2032 is uniformly initialized, and bQ \u2208Rc\u2032 is initialized to zero. Output: The optimized H and R. for t \u21901 to T do Get batch data:(x, y) f0 = Lemb(x) for i \u21901 to N do fi = Li(fi\u22121) Ti = Ai \u00d7 Bi Si = Softmax( fi\u00d7T T i \u221ac ) \u2206\u00af fi = Si(:, 2 : m) \u00d7 [Ti(2 : m) \u00d7 WTi + bTi] \u2206fi = (\u2206\u00af fi + fi) \u00d7 Wfi + bfi Qi = Ti \u00d7 WQi + bQi fi = fi + \u2206fi Ft \u2286{f0, f1, . . . , fN} Calculate Qmax and Qavg by Eq. (9) Q = Concat([Qmax, Qavg, QN]) \u00d7 WQ + bQ \u00af yt = H(Ft, Q) Optimize H and R by Loss(\u00af y, y) tasks such as instance segmentation, panoptic segmentation, open-vocabulary segmentation, and even object detection. Input RobustNet GTR Ours GT WildNet Figure 6. 
Figure 6. Prediction results of DINOv2+Rein on the BDD100K validation set (columns: Input, RobustNet, GTR, Ours, GT, WildNet). The model is fine-tuned exclusively on the GTAV dataset, without access to any real-world urban-scene datasets.
Figure 7. Prediction results of DINOv2+Rein on the Cityscapes validation set (columns: Input, RobustNet, GTR, Ours, GT, WildNet). The model is fine-tuned exclusively on the GTAV dataset, without access to any real-world urban-scene datasets.
Figure 8. Prediction results of DINOv2+Rein on the Mapillary validation set (columns: Input, RobustNet, GTR, Ours, GT, WildNet). The model is fine-tuned exclusively on the GTAV dataset, without access to any real-world urban-scene datasets." |
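As a small, self-contained illustration of the backbone adaptations used in the entry above (upscaling positional embeddings and enlarging the patch-embed convolution kernel via interpolation), the following sketch shows one common way to do it in PyTorch. The helper names and the assumption of a ViT-style [CLS]-plus-patch positional embedding are illustrative and not taken from the paper's code.

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed: torch.Tensor, new_len: int) -> torch.Tensor:
    """Interpolate a (1, old_len, dim) positional embedding to (1, new_len, dim).

    Assumes the first token is a [CLS] embedding that should be kept as-is and that
    the remaining tokens form a square grid.
    """
    cls_tok, grid = pos_embed[:, :1], pos_embed[:, 1:]
    old_side = int(grid.shape[1] ** 0.5)
    new_side = int((new_len - 1) ** 0.5)
    grid = grid.reshape(1, old_side, old_side, -1).permute(0, 3, 1, 2)   # (1, dim, H, W)
    grid = F.interpolate(grid, size=(new_side, new_side), mode="bicubic", align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, new_side * new_side, -1)
    return torch.cat([cls_tok, grid], dim=1)

def resize_patch_kernel(weight: torch.Tensor, new_size: int = 16) -> torch.Tensor:
    """Enlarge a patch-embedding conv kernel (out, in, k, k) to (out, in, new_size, new_size)."""
    return F.interpolate(weight, size=(new_size, new_size), mode="bicubic", align_corners=False)
```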
| }, |
| { |
| "url": "http://arxiv.org/abs/2309.09443v2", |
| "title": "Enhancing Multilingual Speech Recognition through Language Prompt Tuning and Frame-Level Language Adapter", |
| "abstract": "Multilingual intelligent assistants, such as ChatGPT, have recently gained\npopularity. To further expand the applications of multilingual artificial\nintelligence assistants and facilitate international communication, it is\nessential to enhance the performance of multilingual speech recognition, which\nis a crucial component of speech interaction. In this paper, we propose two\nsimple and parameter-efficient methods: language prompt tuning and frame-level\nlanguage adapter, to respectively enhance language-configurable and\nlanguage-agnostic multilingual speech recognition. Additionally, we explore the\nfeasibility of integrating these two approaches using parameter-efficient\nfine-tuning methods. Our experiments demonstrate significant performance\nimprovements across seven languages using our proposed methods.", |
| "authors": "Song Li, Yongbin You, Xuezhi Wang, Ke Ding, Guanglu Wan", |
| "published": "2023-09-18", |
| "updated": "2023-09-19", |
| "primary_cat": "eess.AS", |
| "cats": [ |
| "eess.AS", |
| "cs.CL", |
| "cs.SD" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", |
| "gt": "Recently, multilingual AI assistants represented by ChatGPT have brought great convenience to people\u2019s work and life. As an important component of speech interaction, multilingual speech recognition has also received much attention from researchers to promote wider applications of speech interaction [1\u20137]. The multilingual speech recognition systems mentioned in this paper refer to architectures that can recognize multiple languages using a single model. However, this all-in-one architecture may decrease speech recognition performance for certain languages due to language confusion. Generally, low-resource languages benefit from the architecture, while high-resource languages may experience a decline [1]. Therefore, improving language discrimination ability to enhance the performance of multilingual speech recognition systems has become a key research topic. Currently, there are two mainstream approaches to enhancing the language discrimination ability of multilingual speech recognition: incorporating language information and self-supervised training. For the former, language information can be directly integrated into the model, or additional components can be introduced to provide language bias and improve the performance of specific languages. Ref. [6,7] introduced an additional language identification (LID) module to predict language information, while Ref. [2] treated language information as a special textual token and concatenated it to the input of the decoder of the autoregressive speech recognition model, achieving joint modeling of speech recognition and language identification. Ref. [3] provided language information directly as prior information to speech recognition models; this can be achieved by encoding language information as a one-hot vector or embedding and concatenating it with acoustic features. Alternatively, language-specific sub-modules can be built based on prior language information, such as language-specific adapters [4], mixtures of experts [5], and decoders [1]. For the latter, self-supervised learning-based methods employ a large corpus of multilingual audio-only data for pre-training. By utilizing contrastive learning or masked language modeling loss functions, the model acquires language discrimination ability and rich multilingual acoustic-semantic knowledge, and is subsequently fine-tuned with multilingual annotated data [8]. Language-agnostic and language-configurable speech recognition are two major application scenarios of multilingual speech recognition systems. The former is suitable for scenarios where the language is unknown in advance, while the latter provides users with a switch to select the language to be recognized. In previous research, these two scenarios were usually studied separately. In this paper, we explore simple and parameter-efficient methods to study both scenarios simultaneously and investigate the feasibility of integrating them. Specifically, for language-configurable scenarios, we propose simple and efficient language prompt tuning based methods to achieve language adaptation. For language-agnostic scenarios, we propose using a frame-level language adapter to enhance the model\u2019s ability to distinguish between languages. Finally, we explore the possibility of merging the models for both scenarios into a unified model using parameter-efficient fine-tuning methods. The rest of the paper is organized as follows. 
In Section 2, the details of our proposed parameter-efficient methods are described. In Section 3, we introduce specific experimental details. Experimental results are presented in Section 4. Finally, the paper is concluded in Section 5.", |
| "main_content": "In this paper, we focus on exploring methods to enhance the performance of CTC-based multilingual speech recognition through the utilization of language information. Inspired by large language models (LLMs), we first propose prompt tuning to improve language-configurable scenarios. Next, we introduce the frame-level language adapter to enhance language-agnostic scenarios. Finally, we merge these strategies into a unified model using parameter-efficient fine-tuning methods. 2.1. Language Prompt Tuning A common approach to leveraging language information is to encode it as a one-hot vector or embedding [3], and then concatenate it with acoustic features to form the input for a multilingual speech recognition model. This can be achieved using the Concat method described in Figure 1(a), or the Add method described in Figure 1(b). For the Add method, as shown in Figure 1(c), an attention mechanism can also be used to obtain the weights of the acoustic features and the language embedding, followed by a weighted sum. (Fig. 1. Methods of utilizing language information.) Recently, multilingual intelligent assistants such as ChatGPT have introduced new ideas for developing multi-task unified models. By providing different prompts to the model, a single model can perform various tasks. Inspired by this, we attempt to introduce prompt tuning [9] into CTC-based end-to-end multilingual speech recognition. As described in Figure 1(e), we use a prompt encoder to encode language information (in the form of discrete LIDs) into a prompt embedding with the same dimension as the acoustic features. The prompt embedding serves as a language prompt, guiding the model to accurately recognize specific languages. Using the transformer encoder as the acoustic encoder, we can position the prompt embedding as a prefix or suffix to the acoustic features. Another implementation of prompt tuning is prefix tuning [10], as shown in Figure 1(d), which concatenates the prompt embedding to the key and value of each transformer layer. Compared to the Concat and Add methods, prompt tuning enables adaptive allocation of language information to each frame at every layer of the transformer encoder through the self-attention mechanism, providing greater flexibility for the model to utilize language information at each layer. 2.2. Frame-level Language Adapter To enhance a model\u2019s language discrimination ability, a typical approach is to equip it with language identification (LID) capabilities [2, 6, 7]. For instance, an additional branch can be introduced in the model for language identification, and the identified language can then be fed into the subsequent layers of the model in the form of an embedding or one-hot encoding to provide language information. (Fig. 2. Description of the frame-level language adapter.) Previous approaches mainly predicted language information at the utterance level through a pooling layer or the last output of LSTM networks. However, relying solely on global language information may not be adequate to equip the model with fine-grained language information. Therefore, we propose the frame-level language adapter (FL-Adapter) to finely provide language information for each frame of acoustic features. As shown in Figure 2, for parameter efficiency, we designed a structure similar to the adapter network [11], which introduces a branch from the middle layer of the encoder for frame-level language information prediction. 
By leveraging a residual connection and incorporating the output of the encoder\u2019s intermediate layer, the language bias information is effectively infused into each frame feature. For the frame-level LID loss function, cross-entropy loss (expanding LID labels to the same length as the acoustic features) or CTC loss (expanding LID labels to the same length as the text labels) can be used. The total loss function is shown as follows:
L = L_CTC + \u03b1 L_LID   (4)
where \u03b1 is a hyperparameter that balances the two losses. 2.3. Parameter-efficient Fine-tuning For scenarios that require both language-agnostic and language-configurable capabilities, we can deploy both models introduced in Sections 2.1 and 2.2 simultaneously and choose which model to use based on whether language information is provided in advance. However, this approach can be costly. To address this issue, we attempted to combine the two models using parameter-efficient fine-tuning methods. Specifically, we froze the parameters of the model introduced in Section 2.2 and used language prompt tuning, as introduced in Section 2.1, for fine-tuning. To further enhance the fine-tuning effect, we also introduced the residual adapter [11] after the feedforward module in each transformer encoder layer. Through these methods, we aim to equip the model with both language-agnostic and language-configurable capabilities. 3. EXPERIMENTAL SETUP 3.1. Data Sets To ensure the reproducibility of our proposed methods, we conducted experiments on multilingual open-source datasets. For Chinese, we used AISHELL-1 [12], Aidatatang 200zh (https://openslr.org/62/), Magicdata (https://openslr.org/68/), and data extracted from Wenetspeech [13]. For Korean, we used the Zeroth-Korean dataset (https://github.com/goodatlas/zeroth). Detailed information on the other language datasets can be found in Table 1.
Table 1. Data partitions (in hours)
Language | Train | Test
Chinese (ZH) | 6200 | 10
English (EN) [14] | 961 | 5.3
Japanese (JA) [15] | 930 | 10
Spanish (ES) [16] | 435 | 27
Tibetan (BO) [17] | 84.3 | 3.0
Korean (KO) | 51.6 | 1.2
Uyghur (UG) [18] | 21 | 2.7
Total | 8723 | 59.2
Our experimental data is unbalanced in terms of language, which aims to verify whether our methods still work in such a complex situation. The acoustic features we used are 80-dimensional energy-based log Mel-filterbanks (FBank) computed on a 25 ms window with a 10 ms shift. 3.2. Modeling Unit Chinese, Japanese, and Korean typically employ characters or words as their modeling units. However, directly using characters or words as modeling units in multilingual speech recognition would result in a large modeling vocabulary, increasing memory usage and decreasing decoding speed. To address this issue, we converted all languages\u2019 texts to the byte level and used a byte pair encoder (BPE) to achieve shared modeling units across different languages. Ultimately, we obtained a multilingual modeling vocabulary of size 6000. 3.3. Implementation Details Our basic architecture for all experiments is Transformer-CTC, which includes 12 encoder layers with d_model=768, d_ffn=3072, and n_head=12. A convolutional front-end was used to subsample the acoustic features by a factor of 6. For the baseline system, we trained a language-agnostic model by pooling data from all languages together. For prompt tuning, the prompt encoder is only one embedding layer. 
For prefix tuning, the prompt encoder consists of one embedding layer and one linear layer, which maps to num_layer * d_model * 2 dimensions; the output is split and concatenated onto each layer\u2019s key and value as the prompts. For the Concat method, we compared using one-hot encoding and embedding, while for the Add and Attention methods, we default to using embedding. For FL-Adapter, the projection dimension is the number of language classes plus one (an extra CTC blank). For the parameter-efficient fine-tuning mentioned in Section 2.3, we may need multiple prompt tokens; this can be achieved by expanding the output layer dimension of prompt tuning and prefix tuning by num_prompt and then splitting it into num_prompt prompt tokens. All experiments are conducted with the PyTorch toolkit. During training, we adopt the Adam optimizer with \u03b21 = 0.9, \u03b22 = 0.98, and a Noam learning rate decay strategy with d = 768, warmup_step = 25000, and factor = 1.0. 4. RESULTS 4.1. The Effects of Language Prompt Tuning We compared various methods that utilize prior language information to achieve language-configurable multilingual speech recognition. As shown in Table 2, the prompt tuning-based methods outperformed the other methods, while methods such as Add and Concat performed worse than the baseline on some test sets. This can be attributed to the adaptive nature of the prompt tuning-based methods, which enables the model to allocate language information to each frame using self-attention at each layer, as opposed to a blanket imposition on every frame. Additionally, prompt tuning outperforms prefix tuning, because the latter\u2019s method of inserting prompts at every layer could potentially result in overfitting. Furthermore, for prompt tuning, appending the language prompt as a suffix yields superior performance, as it ensures the continuity of acoustic features and the integrity of semantic content. 4.2. The Effects of Frame-level Language Adapter Table 2 presents a comparison of the two frame-level LID losses, CE (with \u03b1=0.2 being optimal) and CTC (with \u03b1=0.5 being optimal), for FL-Adapter.
Table 2. Comparison of word error rates (WERs, %) for different multilingual speech recognition models.
Model | Params | ZH | EN | JA | ES | BO | KO | UG | Avg
Baseline | 91.6169M | 4.97 | 26.61 | 14.33 | 24.41 | 15.61 | 13.25 | 83.18 | 26.05
(Language-configurable multilingual model)
Add | 91.6174M | 4.93 | 11.22 | 14.67 | 24.03 | 14.18 | 15.27 | 58.22 | 20.36
Attention | 91.6176M | 4.87 | 12.89 | 14.95 | 25.62 | 14.71 | 15.24 | 59.79 | 21.15
Concat (one-hot) | 91.6241M | 4.97 | 10.83 | 14.85 | 24.66 | 15.62 | 13.97 | 56.39 | 20.19
Concat (embedding) | 91.6303M | 4.93 | 10.73 | 14.86 | 24.46 | 15.61 | 13.98 | 55.36 | 19.99
Prefix Tuning | 91.7927M | 4.68 | 11.10 | 14.58 | 24.49 | 13.84 | 14.16 | 56.72 | 19.94
Prompt Tuning (prefix) | 91.6174M | 4.66 | 9.78 | 14.32 | 23.76 | 13.06 | 11.96 | 56.95 | 19.21
Prompt Tuning (suffix) | 91.6174M | 4.54 | 9.33 | 14.01 | 22.51 | 12.42 | 11.66 | 54.17 | 18.38
Prompt Tuning (both) | 91.6174M | 4.63 | 10.67 | 13.96 | 23.52 | 13.84 | 12.72 | 56.35 | 19.39
(Language-agnostic multilingual model)
FL-Adapter (CE, \u03b1=0.2) | 91.6272M | 4.85 | 14.57 | 14.34 | 23.88 | 15.01 | 13.20 | 68.19 | 22.01
FL-Adapter (CTC, \u03b1=0.2) | 91.6300M | 4.71 | 12.09 | 14.30 | 23.48 | 14.35 | 13.14 | 64.18 | 20.89
FL-Adapter (CTC, \u03b1=0.5) | 91.6300M | 4.60 | 10.62 | 14.14 | 23.51 | 14.91 | 12.99 | 56.70 | 19.64
FL-Adapter (CTC, \u03b1=0.5) | 98.7171M | 4.87 | 11.02 | 13.76 | 23.31 | 14.24 | 14.69 | 63.22 | 20.73
FL-Adapter (CTC, \u03b1=0.8) | 91.6300M | 4.85 | 11.63 | 14.13 | 24.41 | 14.31 | 15.03 | 65.28 | 21.38
FL-Adapter (CTC, \u03b1=1.0) | 91.6300M | 5.22 | 13.44 | 15.26 | 25.32 | 15.43 | 15.26 | 68.95 | 22.70
(Unified language-agnostic and configurable model)
FL-Adapter (\u03b1=0.5, frozen) | 91.6300M | 4.60 | 10.62 | 14.14 | 23.51 | 14.91 | 12.99 | 56.70 | 19.64
+ Prompt Tuning (1 token) | 91.6305M | 4.60 | 10.60 | 14.13 | 23.50 | 14.81 | 12.96 | 56.70 | 19.61
+ Prompt Tuning (5 tokens) | 91.6328M | 4.66 | 10.62 | 14.18 | 23.48 | 14.46 | 12.96 | 56.60 | 19.57
+ Prompt Tuning (10 tokens) | 91.6356M | 4.60 | 10.60 | 14.12 | 23.48 | 14.46 | 12.96 | 56.55 | 19.54
+ Prefix Tuning (1 token) | 91.8328M | 4.60 | 10.18 | 13.87 | 23.00 | 13.90 | 12.54 | 56.31 | 19.20
+ Prefix Tuning (5 tokens) | 92.6438M | 4.58 | 9.99 | 13.79 | 22.70 | 13.68 | 12.11 | 55.32 | 18.88
+ Prefix Tuning (10 tokens) | 93.6576M | 4.56 | 9.94 | 13.77 | 22.61 | 13.68 | 11.99 | 55.06 | 18.80
+ Prefix Tuning (15 tokens) | 94.6713M | 4.54 | 9.84 | 13.76 | 22.59 | 13.54 | 11.99 | 55.05 | 18.76
+ Residual Adapter (dim=32) | 95.2338M | 4.52 | 9.59 | 13.58 | 22.18 | 12.91 | 11.71 | 54.09 | 18.37
The results indicate that the CTC loss function yields better performance. This is because speech signals may contain non-linguistic or silent segments, which can be aligned to the blank label using the CTC loss function instead of being forced to align to the language label. In addition, Table 2 illustrates that FL-Adapter can effectively provide language bias information, thereby enhancing the performance of multilingual speech recognition. 4.3. The Effects of Parameter-efficient Fine-tuning In order to unify language-agnostic and language-configurable scenarios, we froze the optimal FL-Adapter model and fine-tuned it using prompt tuning-based methods. To demonstrate that our gains are not solely due to an increase in the number of model parameters, we also trained an FL-Adapter model with increased parameters (13 layers) as a new reference. As shown in Table 2, prompt tuning does not lead to significant improvement, as it only adds learnable language prompt tokens to the input layer and thus has limited fine-tuning ability. In contrast, prefix tuning introduces language prompt tokens in every Transformer layer, resulting in greater fine-tuning capability, and the performance of prefix tuning improves as the number of prompt tokens increases. In addition, we added residual adapter networks to each layer of the Transformer encoder on top of prefix tuning, which further improves the speech recognition performance. 
This is because prefix tuning mainly fine-tunes the self-attention module of the Transformer layers, while the residual adapter network complements it by fine-tuning the feedforward module of the Transformer layers, and the integration of the two culminates in superior outcomes. Finally, through these parameter-efficient fine-tuning methods, we improved the performance of the FL-Adapter model to be comparable to that of the language-configurable model; this allows the model to perform well across different languages while still being configurable for specific languages. 5. CONCLUSIONS This paper proposed two parameter-efficient methods to enhance the performance of CTC-based end-to-end multilingual speech recognition in both language-agnostic and language-configurable scenarios. We also explored parameter-efficient fine-tuning methods to merge the two scenarios into a unified model. The proposed methods achieved significant performance improvements across seven languages with imbalanced data. In the future, we plan to further validate our methods on larger models and a wider range of languages." |
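To ground the description in Sections 2.1 and 2.2 of the entry above, here is a minimal, hedged PyTorch sketch that appends a language-prompt embedding as a suffix to the acoustic features and trains a frame-level LID branch jointly with the ASR CTC loss, following Eq. (4). The module and argument names, the assumption that the encoder preserves sequence length, and the default alpha are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LanguagePromptedCTC(nn.Module):
    """Sketch: suffix language prompt (Sec. 2.1) plus a frame-level LID branch (Sec. 2.2),
    trained with L = L_CTC + alpha * L_LID as in Eq. (4)."""

    def __init__(self, encoder: nn.Module, d_model: int, vocab_size: int,
                 num_langs: int, alpha: float = 0.5):
        super().__init__()
        self.encoder = encoder                               # assumed length-preserving Transformer encoder
        self.lang_prompt = nn.Embedding(num_langs, d_model)  # prompt encoder: a single embedding layer
        self.asr_head = nn.Linear(d_model, vocab_size)
        self.lid_head = nn.Linear(d_model, num_langs + 1)    # +1 for an extra CTC blank
        self.ctc = nn.CTCLoss(blank=0, zero_infinity=True)   # targets are assumed not to use index 0
        self.alpha = alpha

    def forward(self, feats, feat_lens, lang_id, text, text_lens, lid_targets, lid_lens):
        # feats: (B, T, d_model) acoustic features; lang_id: (B,) discrete language IDs.
        prompt = self.lang_prompt(lang_id).unsqueeze(1)      # (B, 1, d_model)
        h = self.encoder(torch.cat([feats, prompt], dim=1))  # suffix variant of prompt tuning
        asr_logp = self.asr_head(h).log_softmax(-1).transpose(0, 1)  # (T+1, B, vocab)
        lid_logp = self.lid_head(h).log_softmax(-1).transpose(0, 1)  # (T+1, B, num_langs+1)
        loss_ctc = self.ctc(asr_logp, text, feat_lens + 1, text_lens)
        loss_lid = self.ctc(lid_logp, lid_targets, feat_lens + 1, lid_lens)
        return loss_ctc + self.alpha * loss_lid              # Eq. (4): L = L_CTC + alpha * L_LID
```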
| }, |
| { |
| "url": "http://arxiv.org/abs/2309.00964v2", |
| "title": "eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models", |
| "abstract": "Since Large Language Models or LLMs have demonstrated high-quality\nperformance on many complex language tasks, there is a great interest in\nbringing these LLMs to mobile devices for faster responses and better privacy\nprotection. However, the size of LLMs (i.e., billions of parameters) requires\nhighly effective compression to fit into storage-limited devices. Among many\ncompression techniques, weight-clustering, a form of non-linear quantization,\nis one of the leading candidates for LLM compression, and supported by modern\nsmartphones. Yet, its training overhead is prohibitively significant for LLM\nfine-tuning. Especially, Differentiable KMeans Clustering, or DKM, has shown\nthe state-of-the-art trade-off between compression ratio and accuracy\nregression, but its large memory complexity makes it nearly impossible to apply\nto train-time LLM compression. In this paper, we propose a memory-efficient DKM\nimplementation, eDKM powered by novel techniques to reduce the memory footprint\nof DKM by orders of magnitudes. For a given tensor to be saved on CPU for the\nbackward pass of DKM, we compressed the tensor by applying uniquification and\nsharding after checking if there is no duplicated tensor previously copied to\nCPU. Our experimental results demonstrate that \\prjname can fine-tune and\ncompress a pretrained LLaMA 7B model from 12.6 GB to 2.5 GB (3bit/weight) with\nthe Alpaca dataset by reducing the train-time memory footprint of a decoder\nlayer by 130$\\times$, while delivering good accuracy on broader LLM benchmarks\n(i.e., 77.7% for PIQA, 66.1% for Winograde, and so on).", |
| "authors": "Minsik Cho, Keivan A. Vahid, Qichen Fu, Saurabh Adya, Carlo C Del Mundo, Mohammad Rastegari, Devang Naik, Peter Zatloukal", |
| "published": "2023-09-02", |
| "updated": "2023-09-13", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Parameter AND Efficient AND Fine AND Tuning", |
| "gt": "Large language models or LLMs, and especially Generative Pre-trained Transformer (GPT) models, have shown excellent performance on many complex language tasks [11,23]. Such breakthroughs lead to the desire to run these LLMs locally on mobile devices for user privacy [20,21], but even small LLMs are too big for on-device execution. For example, the smallest LLaMA model has 7B parameters, which is 14GB in FP16 [18], while high-end mobile devices have only up to 18GB DRAM. Therefore, aggressively compressing LLMs via train-time optimizations, such as sparsification, quantization, or weight clustering, is a crucial step for on-device LLM deployment [3,5,6,8,12,13,14,15,16,19,20,22,24]. However, train-time optimization of LLMs is highly expensive due to the model size and computational resource overheads. In particular, the computational resource demand of train-time differentiable weight clustering in DKM [3], one of the state-of-the-art weight clustering algorithms, is prohibitively high, as it needs to analyze the interactions between all the weights and all possible clustering options. Accordingly, many existing LLM compression techniques, such as GPTQ [7] and AWQ [9], rely on post-training optimization. In this work, we propose memory optimization techniques to enable train-time weight clustering and their application to DKM [3], leading to eDKM. Our techniques include cross-device tensor marshaling and weight matrix uniquification/sharding. When we used eDKM to fine-tune and compress the LLaMA 7B model to 3 bits per weight, we achieved about a 130\u00d7 memory footprint reduction for a decoder stack, yet outperformed the existing 3bit compression techniques.", |
| "main_content": "Figure 1: General overview of weight optimization systems. For DKM [3], an attention map for differentiable weight clustering is created inside the system. DKM has shown the state-of-the-art trade-off between compression ratio and accuracy. Therefore, using DKM for LLM compression would yield high-quality results. However, DKM computes a large attention map with O(|W||C|) memory complexity (i.e., the matrix in Fig. 1) for forward/backward passes (see the Appendix in [3]), which is particularly challenging for LLM compression. For example, a LLaMA 7B model needs at least 224GB just to compute an attention map for 4bit weight clustering. Accordingly, we need to tap into CPU memory to handle such a large memory demand by overflowing to CPU memory and copying back to GPU when needed later. However, this incurs significant traffic between GPU and CPU (slowing down the training) and needs immense CPU memory capacity. Hence, it is critical to reduce the number of transactions between CPU and GPU, and to minimize the traffic of each transaction. To address such challenges, we introduce two novel memory optimization techniques in PyTorch. \u2022 Cross-Device Tensor Marshaling: We track tensors being copied across devices and avoid redundant copying to reduce the memory footprint and expedite training. \u2022 Weight Uniquification and Sharding: We use the fact that 16-bit weights have only 2^16 unique values to reduce the attention map (in Fig. 1) representation and further shard it over multiple learners. 2.1 Cross-device Tensor Marshaling PyTorch represents a tensor with data storage that links to the actual data layout and metadata that keeps the tensor shapes, types, and so on. Such a tensor architecture lets PyTorch reuse the data storage whenever possible and efficiently reduces the memory footprint. However, when a tensor moves to another device (e.g., from GPU to CPU), the data storage cannot be reused and a new tensor needs to be created. Table 1 shows an example of the memory footprint overhead when a tensor moves between devices in PyTorch. The tensor x0, allocated in line 0, consumes 4MB on the GPU. When its view is changed in line 1, no additional GPU memory is required, as the underlying data storage can be reused (i.e., x0 and x1 are effectively identical). However, when x0 and x1 move to the CPU as in lines 2 and 3, the CPU memory consumption becomes 8MB, although y0 and y1 could share the same data storage on the CPU, which leads to redundancy in CPU memory and increases GPU-CPU traffic.
line | code | GPU (MB) | CPU (MB)
0 | x0 = torch.rand([1024,1024]) | 4 | 0
1 | x1 = x0.view(-1,1) | 4 | 0
2 | y0 = x0.to(\u2018cpu\u2019) | 4 | 4
3 | y1 = x1.to(\u2018cpu\u2019) | 4 | 8
Table 1: LLM fine-tuning may need to use CPU memory to offload large activations. Lacking cross-device tensor management can lead to redundant copies across devices (especially when the computation graph is complex), which can be particularly undesirable for LLM train-time optimization. For example, although x0 and x1 are the same tensor with just a different view, when copied to CPU, the resulting tensors y0 and y1 do not share the data storage, while x0 and x1 do on GPU.
Figure 2: When the proposed cross-device tensor marshaling is applied to the case in Table 1 ((a) without marshaling; (b) with marshaling), we can avoid duplication on the CPU side, which saves memory and traffic. Before copying x1 to CPU, our marshaling scheme checks if there exists a tensor with the same data storage on the CPU (i.e., y0). 
If there is, we reuse the reference for y0 along with the required ops (view in this case) for future retrieval. To address such inefficiency, we place a marshaling layer as in Fig. 2 (b), where the black represents actual data storage and metadata, and the gray indicates only the metadata. Fig. 2 (a) illustrates the example in Table 1 (with the corresponding line numbers), where x1 shares the data layout with x0 but y0 and y1 have independent/duplicated data storage on CPU. By inserting a marshaling layer as in Fig. 2 (b), we avoid such redundancy and reduce the GPU-CPU traffic. We use the save-tensor-hook in PyTorch (see [2] for reference) to implement such a marshaling scheme, where we examine whether the same data storage has already been copied. However, checking whether the same tensor exists on the destination device is prohibitively expensive when using a conventional scheme like hashing. Therefore, when a new tensor enters our marshaling system, we turn to the forward graph and check if there exists another tensor that is already on CPU and is reachable via only data-storage-invariant operations (i.e., view, transpose, ...) from the new tensor within a few hops. If not found, the tensor is copied and a reference to the tensor is generated. If found, we return the reference of the existing tensor and the list of operations tracing back to the new tensor. For the example in Fig. 2 (b), instead of copying x1 to CPU, we simply return the reference to y0 and the view operation between x1 and y0. Navigating the computation graph costs extra compute cycles, but saving an unnecessary copy can compensate for such overhead. We found that searching within 4 hops is sufficient to detect all the qualified cases in the computation graph from the original DKM implementation. 2.2 Weight Uniquification and Sharding In most LLM training, 16-bit formats (e.g., BF16 or FP16) are widely used for weights, which means that although there are multi-billion parameters in LLMs, there are only 2^16 unique coefficients due to the bit-width. This allows an opportunity to significantly compress the attention map between the weights and the centroids, as in Fig. 3.
Figure 3: Weight Uniquification and Sharding: since w_i and w_k have the same bit value (BA45), both can share the same attention to the centroids in the attention table, yet use the bit value as the offset into the table in the index list.
By computing the attention to the centroids once for each unique weight value, the attention map can be converted into an attention table with O(|C|) memory complexity and an index list with O(|W|). Note that the number of rows in the attention table is at most 65,536. The index list (denoted L) can be further sharded over a set of learners (i.e., GPUs) in a fully synchronous training setup [4], as the weights are identical in each learner at any moment (and thus the attention map and index list too). Such sharding brings down the memory complexity to O(|W|/|L|). Uniquifying and sharding come with higher communication and computation costs, as the sharded weights need to be all-gathered, and the attention table and index list need to be converted back to the attention map for backward propagation (see Table 2 for the runtime overhead). Assume {w_i, w_j, w_k} \u2208 W and {c_p, c_q, c_r} \u2208 C, which denote the weights and centroids, respectively, in Fig. 3. Further consider the case where {w_i, w_k} have the same 16-bit representation BA45 and w_j has CB1F. Then, when an attention map is computed during the forward pass, w_i and w_k shall have the same attention to C. 
After uniquification, the attention map is decomposed into an attention table with O(|C|) memory complexity and an index list with O(|W|) complexity. For example, the 16-bit value BA45 of w_i and w_k can serve as an offset into the attention table from the index list. The index list can be further sharded over |L| learners to reduce the complexity in each learner to O(|W|/|L|). The original attention map needs to be reconstructed for the backward pass to stay compatible with the existing autograd implementation. Therefore, we take the reverse steps to restore the attention map by performing all-gather and look-up. 3. EXPERIMENTAL RESULTS We used PyTorch 2.0.0 and applied Fully Sharded Data Parallel (FSDP) to fine-tune the pretrained LLaMA 7B model in brainfloat16 with the Alpaca dataset [17]. We fine-tuned for 2 epochs while compressing the model on a single node with 8\u00d7 A100-80GB GPUs using eDKM. The maximum sequence length during fine-tuning was 256. We used the AdamW optimizer with a learning rate of 5e-5, weight decay of 0, and betas of (0.9, 0.95). The global batch size was 64, and gradient norm clipping of 1.0 was used. 3.1 LLM Accuracy We compared eDKM against other quantization-based compression schemes: round-to-nearest (RTN), SmoothQuant, GPTQ [7], AWQ [9], and LLM-QAT [10]. For eDKM, we also compressed the embedding layers with 8 bits. Table 3 reports the accuracy on Common Sense Reasoning and Few-Shot benchmarks for the compressed LLaMA 7B models from each technique.
Table 3: When comparing our technique against the state-of-the-art compression schemes, eDKM offers the smallest model size, yet similar or better accuracy on the broader set of benchmarks with the 3bit compressed LLaMA 7B model. (Common Sense Reasoning: PIQA, HellaSwag, Winograde, ARC-e, ARC-c; Few-shot: TriviaQA, MMLU.)
Method | bits | Model Size (GB) | PIQA | HellaSwag | Winograde | ARC-e | ARC-c | TriviaQA | MMLU
LLaMA-7B | 16 | 12.6 | 79.3 | 76.1 | 70.0 | 73.0 | 48.0 | 57.0 | 35.2
RTN | 4 | 3.5 | 77.3 | 72.7 | 66.9 | 68.8 | 46.4 | 44.9 | 28.9
GPTQ g128 (c) | 4 | 3.7 | 77.2 | 54.0 | 65.7 | 61.6 | \u2013 (a) | \u2013 | \u2013
AWQ g128 | 4 | 3.7 | 78.1 | 55.8 | 65.8 | 66.8 | \u2013 | \u2013 | \u2013
LLM-QAT | 4 | 3.5 | 78.3 | 74.0 | 69.0 | 70.0 | 45.0 | 50.8 | 30.8
GPTQ g128 | 3 | 3.0 | 70.9 | 46.8 | 60.9 | 66.1 | \u2013 | \u2013 | \u2013
AWQ g128 | 3 | 3.0 | 76.7 | 53.6 | 66.1 | 65.7 | \u2013 | \u2013 | \u2013
eDKM | 3 | 2.5 | 77.7 | 54.6 | 66.1 | 72.3 | 40.3 | 35.2 (b) | 30.3
(a) The result is not reported for the corresponding scheme; (b) one-shot is applied; (c) group size is 128.
\u2022 eDKM allows the 3bit compressed LLaMA 7B model to outperform all other schemes in the 3bit configuration. \u2022 eDKM even delivers the best accuracy for the ARC-e benchmark across 3bit and 4bit configurations. \u2022 eDKM yields competitive performance for the PIQA and MMLU benchmarks with 4bit compressed models. 3.2 Ablation Study For the ablation study, we used one attention layer from the LLaMA 7B decoder stack and measured the trade-off between memory footprint and forward-backward speed with 3bit compression, as shown in Table 2.
Table 2: Ablation study to understand the effects of each technique: with the proposed techniques, the memory footprint can be reduced by 130\u00d7 with a 1.7\u00d7 slowdown. (M: using marshaling layer; S: using sharding; U: using uniquification.)
M | S | U | Memory (MB) | Memory Reduction (\u00d7) | Runtime (sec)
 | | | 1600 | 1 | 8.67
\u2713 | | | 544 | 2.9 | 8.97
\u2713 | \u2713 | | 68 | 23.5 | 9.5
\u2713 | | \u2713 | 97 | 16.4 | 15.9
\u2713 | \u2713 | \u2713 | 12 | 129.9 | 14.9
Cross-device tensor marshaling alone reduces the memory footprint by 2.9\u00d7 with little runtime overhead, and the additional savings of 23.5\u00d7 and 16.4\u00d7 are achieved with sharding and uniquification, respectively. When all techniques are combined, as in Fig. 3, eDKM offers about a 130\u00d7 reduction. Although these steps require extra computation and communication (e.g., all-gather), the runtime overhead is insignificant, as the traffic between GPU and CPU decreases substantially. 4. CONCLUSION In this work, we propose a memory-efficient differentiable weight clustering scheme, eDKM, to provide train-time compression for LLMs. With the proposed techniques, the memory consumption was reduced by almost 130\u00d7, and the resulting 3bit compressed LLaMA model yields state-of-the-art accuracy on various LLM-harness benchmarks. 5. ADDITIONAL AUTHORS" |
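To make the uniquification idea in the entry above concrete, the following is a minimal PyTorch sketch (not the authors' implementation) that builds the O(|C|) attention table and O(|W|) index list from a 16-bit weight tensor. The distance-based softmax used for the soft assignment is an assumption standing in for DKM's attention; sharding and the all-gather needed for the backward pass are omitted.

```python
import torch

def uniquify_attention(weights: torch.Tensor, centroids: torch.Tensor, temperature: float = 1.0):
    """Decompose the |W| x |C| attention map into an O(|C|) attention table and an O(|W|) index list.

    weights:   a 16-bit (BF16/FP16) weight tensor, so it has at most 2**16 distinct values.
    centroids: cluster centers, shape (|C|,).
    """
    flat = weights.flatten().float()
    unique_vals, index_list = torch.unique(flat, return_inverse=True)   # index list, O(|W|)
    # Assumed DKM-style soft assignment: softmax over negative squared distances.
    dist = (unique_vals.unsqueeze(1) - centroids.float().unsqueeze(0)) ** 2
    attention_table = torch.softmax(-dist / temperature, dim=1)         # (n_unique <= 65536, |C|)
    return attention_table, index_list

# Hypothetical usage: the index list would be sharded across learners, and the full
# map rebuilt (after an all-gather, omitted here) only when the backward pass needs it.
w = torch.randn(1024 * 1024, dtype=torch.bfloat16)
c = torch.linspace(-1.0, 1.0, 8)            # 8 centroids, i.e., 3-bit clustering
table, idx = uniquify_attention(w, c)
full_attention = table[idx]                 # reconstructed (|W|, |C|) attention map
```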
| } |
| ] |
| } |