diff --git "a/20241127/2408.14776v2.json" "b/20241127/2408.14776v2.json" new file mode 100644--- /dev/null +++ "b/20241127/2408.14776v2.json" @@ -0,0 +1,519 @@ +{ + "title": "MROVSeg: Breaking the Resolution Curse of Vision-Language Models in Open-Vocabulary Image Segmentation", + "abstract": "Pretrained vision-language models (VLMs), e.g. CLIP, are increasingly used to bridge the gap between open- and close-vocabulary recognition in open-vocabulary image segmentation. As VLMs are generally pretrained with low-resolution images (e.g. ), most previous methods operate only on downscaled images. We question this design, as low-resolution features often fail to preserve fine details. A typical solution is to employ additional image backbones for high-resolution inputs, but this also introduces significant computation overhead. Therefore, we propose MROVSeg, a multi-resolution training framework for open-vocabulary image segmentation with a single pretrained CLIP backbone, which uses sliding windows to slice the high-resolution input into uniform patches, each matching the input size of the well-trained image encoder. Its key components include a Multi-Res Adapter, which restores the spatial geometry and grasps local-global correspondences across patches by interacting with multi-resolution features. To achieve accurate segmentation, we introduce a Multi-grained Masked Attention scheme to aggregate multi-grained semantics from multi-resolution CLIP features to object queries. Through comprehensive experiments, we demonstrate the superiority of MROVSeg on well-established open-vocabulary image segmentation benchmarks, establishing new standards for open-vocabulary image segmentation.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### ###figure_2### Open-vocabulary image segmentation aims to segment semantic pixels belonging to arbitrary classes beyond pre-defined categories and datasets. 
In recent years, large-scale vision-language pretrained models (VLMs), such as CLIP [26 ###reference_b26###] and ALIGN [14 ###reference_b14###], have demonstrated remarkable generalization capabilities for recognizing open-vocabulary categories. This motivates the research community to investigate the potential of VLMs in open-vocabulary image segmentation. To address the discrepancy between the per-pixel semantic requirements and the image-level labels provided by VLMs, initial studies [6 ###reference_b6###, 42 ###reference_b42###, 43 ###reference_b43###] modified the CLIP model by removing its final pooling layer to obtain dense category embeddings for per-pixel classification. However, these approaches typically necessitate fine-tuning VLMs on a base segmentation dataset with limited images and categories, which is demonstrated [42 ###reference_b42###] to impair the transferability of VLM features, leading to unsatisfactory zero-shot performance on downstream tasks.
Recent approaches [35 ###reference_b35###, 10 ###reference_b10###, 36 ###reference_b36###, 28 ###reference_b28###, 22 ###reference_b22###] reformulate open-vocabulary image segmentation as a region-level recognition problem. These methods typically adopt a two-branch meta-architecture (as in Fig. 1(a) ###reference_sf1###): one branch extracts image features and generates mask proposals, and the other branch classifies the predicted proposals with a pretrained VLM. Although these methods are promising, we note the following limitation. Because pretrained VLMs exhibit inferior size adaptability, most open-vocabulary image segmentation methods (e.g. [10 ###reference_b10###, 36 ###reference_b36###, 19 ###reference_b19###, 35 ###reference_b35###, 34 ###reference_b34###, 5 ###reference_b5###, 28 ###reference_b28###, 22 ###reference_b22###]) so far need to downsample images to fit the pretrained resolution (e.g. ) of the VLM to perform region-level recognition (as in Fig. 1(a) ###reference_sf1###). 
However, low-resolution input usually lacks segmentation details. Although na\u00efvely applying sliding window inference [6 ###reference_b6###, 35 ###reference_b35###] could partly compensate for the details, the spatial structure across windows is corrupted and local-global modeling is also absent.
In light of the limitations and challenges faced by previous methods, we propose MROVSeg, a VLM-based Multi-Resolution training framework for Open-Vocabulary Image Segmentation. As illustrated in Fig. 1(b) ###reference_sf2###, first, MROVSeg uses downsampled low-resolution images as VLM input to extract global low-resolution features. Second, MROVSeg splits the high-resolution images into slices and feeds them to the VLM to extract detailed high-resolution features. The key components of MROVSeg include a Multi-Res Adapter, in which we employ depthwise convolution layers to restore the spatial geometry across slices. To effectively capture global long-range context, inspired by previous multi-scale training frameworks [4 ###reference_b4###, 13 ###reference_b13###, 38 ###reference_b38###], we employ an image-dependent Scale-aware Attention [4 ###reference_b4###] to dynamically adjust the trustworthiness of high-resolution and low-resolution VLM features based on their relevance. The resulting multi-resolution features are fused hierarchically and then employed for precise mask proposal generation.
To achieve accurate mask class recognition, we propose a Multi-grained Masked Attention mechanism. The core hypothesis is that multi-resolution CLIP features of the same image input hold semantic consistency. Based on this, we reuse the CLIP [CLS] token, and manipulate its attention map on multi-resolution features in CLIP attention layers with resolution-aware attention masks. 
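The high-resolution slicing described above can be sketched as follows; `slice_into_windows` and the window size are illustrative, not the paper's actual implementation:

```python
import torch

def slice_into_windows(image: torch.Tensor, window: int) -> torch.Tensor:
    """Slice a (C, H, W) image into non-overlapping window x window patches.

    Assumes H and W are multiples of `window` (MROVSeg pads/downsamples
    beforehand). Returns (N, C, window, window), one entry per slice, so
    every slice matches the pretrained CLIP input resolution and the whole
    batch can be fed through the shared CLIP ViT encoder in one pass.
    """
    c, h, w = image.shape
    assert h % window == 0 and w % window == 0
    # unfold along H then W: (C, H//window, W//window, window, window)
    patches = image.unfold(1, window, window).unfold(2, window, window)
    # flatten the slice grid into a batch dimension
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, window, window)
```

For example, with a 512-pixel window a 1024x1024 input yields 4 slices, which are processed alongside the downsampled global view.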
We find this resolution-aware design can enforce the low- and high-resolution attention maps to focus on global contexts and spatial details respectively, and thus effectively aggregate multi-grained semantics.
With extensive experiments on well-established open-vocabulary semantic segmentation and panoptic segmentation benchmarks, we report that our method achieves new state-of-the-art performance, demonstrating the advancements of MROVSeg in the domain of open-vocabulary image segmentation. Our contributions can be summarized as follows:
We propose a novel end-to-end multi-resolution training framework to tackle the task of open-vocabulary image segmentation. It enables improved open-vocabulary segmentation by leveraging multi-resolution vision-language features.
A multi-grained masked attention scheme is proposed to effectively aggregate regional and universal semantics from multi-resolution vision-language features.
The efficacy of our method is confirmed by state-of-the-art performance on well-established open-vocabulary segmentation benchmarks." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related work", + "text": "###figure_3### Pretrained Vision-language Models\u2003Recently, large-scale pretrained, contrastive-learning-based methods [26 ###reference_b26###, 14 ###reference_b14###] demonstrate powerful open-vocabulary image classification capability by learning shared image-text feature representations. Pretrained CLIP [26 ###reference_b26###] has been generalized to many downstream computer vision tasks such as object detection [40 ###reference_b40###], image captioning [12 ###reference_b12###], image generation [24 ###reference_b24###] and image segmentation [36 ###reference_b36###, 1 ###reference_b1###, 34 ###reference_b34###, 39 ###reference_b39###, 35 ###reference_b35###]. 
Our method invokes the natural-language perception ability of pretrained VLMs, aiming to explore their application boundary in open-vocabulary semantic segmentation tasks. 
Multi-Resolution Training\u2003As computational overhead grows quadratically with the number of tokens, recent multimodal large language models [21 ###reference_b21###, 18 ###reference_b18###, 37 ###reference_b37###] (MLLMs) employ a sliding-window technique to divide high-resolution images into patches, thereby achieving competitive performance while maintaining computational efficiency. Unlike prevalent MLLMs, MROVSeg adaptively restores the spatial geometry of multi-resolution features across patches, and effectively extracts global contexts that benefit segmentation tasks.
Open Vocabulary Image Segmentation\u2003Pioneering works [1 ###reference_b1###, 32 ###reference_b32###] use learned language embeddings to align the feature space of class texts and visual semantics. Recently, some works [35 ###reference_b35###, 10 ###reference_b10###, 19 ###reference_b19###, 22 ###reference_b22###] develop two-stage training approaches. More recently, end-to-end [36 ###reference_b36###] frameworks emerge in the community, which unify mask generation and region classification into the same model. SAN [36 ###reference_b36###] proposes a lightweight side adapter to effectively adapt CLIP features. ODISE [34 ###reference_b34###] employs a Stable Diffusion UNet [27 ###reference_b27###] to generate mask proposals. With the extracted dense pixel semantic embeddings [42 ###reference_b42###], CAT-Seg [6 ###reference_b6###] proposes to finetune CLIP with cost aggregation. EBSeg [28 ###reference_b28###] integrates the Segment Anything Model [16 ###reference_b16###] into a CLIP-based framework with image embedding balancing. 
\nDiscussion with Previous Methods\u2003Our method MROVSeg is inspired by [36 ###reference_b36###, 39 ###reference_b39###], but has significant differences: (1) Distinct from previous open-vocabulary segmentation methods, MROVSeg adapts multi-resolution CLIP features to open-vocabulary segmentation. (2) The introduced Multi-grained Masked Attention scheme explicitly enforces the mask class recognition to aggregate both local and global semantics, which takes advantage of the internal consistency between multi-resolution features." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Method", + "text": "In the open-vocabulary image segmentation task setting [35 ###reference_b35###, 19 ###reference_b19###], an image is processed by a segmentation model with parameters to produce a set of masks associated with categories:
The segmentation model is trained on a base segmentation dataset (e.g., COCO [2 ###reference_b2###]) annotated with a fixed label set of categories. During testing, the model is expected to segment objects of category set , which generally contains novel categories ().
Accurate segmentation needs high-resolution image inputs. Because low-resolution images are used in vision-language pretraining, previous open-vocabulary methods employ extra image backbones (such as SAM [28 ###reference_b28###] and ResNet [22 ###reference_b22###]) to provide segmentation details. Although recent studies [33 ###reference_b33###, 39 ###reference_b39###] adapt convolution-based CLIP models for high-resolution training, directly applying these methods to ViT-based CLIP models results in suboptimal performance [36 ###reference_b36###] due to the undesirable size adaptability of ViTs. To this end, we propose MROVSeg, a ViT-based training framework to provide multi-resolution vision-language features for open-vocabulary image segmentation."
+ }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Framework Overview", + "text": "The overview of our training framework MROVSeg for open-vocabulary image segmentation is shown in Fig. 2 ###reference_###. At a high level, an image is downsampled and padded to a low resolution (such as ) and processed by a pretrained CLIP ViT to extract the global feature. To capture high-resolution local details, the high-resolution image is split into slices and input to the shared CLIP ViT encoder. These multi-resolution features are then concatenated with learnable queries and fed into a Multi-Res Adapter (Sec. 3.2 ###reference_###) to produce the fused features, query features, and attention masks used for mask prediction (Sec. 3.3 ###reference_###) and mask classification (Sec. 3.4 ###reference_###)." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Multi-Res Adapter", + "text": "As depicted in Fig. 3 ###reference_###, denote the slice (high-resolution) features of the -layer as , and the global (low-resolution) feature as , where , is the slice number, is the token length, and is the channel number. Na\u00efvely concatenating the high-resolution slice features for the subsequent segmentation prediction is promising, but has two defects: 1) the spatial geometry, such as the positional information, is corrupted across slice features ; 2) the long-range dependencies and global context are missing. Thus, we propose the Multi-Res Adapter to effectively restore the spatial geometry of slice features and capture long-range context from the global feature. In the Multi-Res Adapter, the -th slice features are first concatenated with learnable queries and input to vanilla ViT blocks to build the query features for each object. 
Then for target fusion layer , the slice features and global feature are fused through a Multi-Res Fusion (MRF) Module and then injected into the ViT branch.
###figure_4### The Multi-Res Fusion (MRF) Module first reshapes the global feature to , and restores the slice features to , where . To retain the spatial geometry of high-resolution features, we employ depth-wise separable convolutions to fuse the restored feature. To effectively model the local-global correspondence, we train a Scale-aware Attention [4 ###reference_b4###] to fuse the multi-res features into as the fused feature
Then is added to the visual tokens in the Multi-Res Adapter. The scale attention decoder learns to predict the scale attention for layer to weigh the trustworthiness of low-resolution context and high-resolution detail. The sigmoid function ensures the weight lies in , where means focus on high-resolution detail. In practice, we empirically select the features from a CLIP layer set to apply in the Multi-Res Adapter. For instance, for the model based on the CLIP ViT-L model, . The fused features are used for hierarchical mask decoding. The final-layer output slice features are restored to as the visual feature for hierarchical mask decoding and multi-grained masked attention. The output queries are projected as the query feature for hierarchical mask decoding and multi-grained masked attention." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Mask Prediction", + "text": "Hierarchical Mask Decoding\u2003High-resolution features preserve more spatial detail and thus benefit segmentation, especially mask prediction [3 ###reference_b3###]. However, directly upsampling features is computationally demanding. Thus, similar to FPN, we first upsample the multi-resolution features from the Multi-Res Adapter by to build the feature pyramid. 
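The scale-aware fusion inside the MRF module described above can be sketched as follows, assuming a per-pixel sigmoid gate between restored high-resolution slice features and upsampled low-resolution global features; layer names and sizes here are illustrative, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ScaleAwareFusion(nn.Module):
    """Minimal sketch of MRF-style fusion: depthwise-separable convs restore
    spatial geometry of the slice features, and a small decoder predicts a
    per-pixel weight in (0, 1) via sigmoid -- 1 trusts the high-resolution
    detail, 0 trusts the low-resolution context."""

    def __init__(self, dim: int):
        super().__init__()
        # depthwise + pointwise convolution over the restored slice features
        self.dw = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        self.pw = nn.Conv2d(dim, dim, 1)
        self.attn = nn.Conv2d(2 * dim, 1, 1)  # scale-attention decoder

    def forward(self, high: torch.Tensor, low: torch.Tensor) -> torch.Tensor:
        # high: (B, C, 2H, 2W) restored slice features; low: (B, C, H, W)
        low_up = nn.functional.interpolate(
            low, size=high.shape[-2:], mode="bilinear", align_corners=False)
        high = self.pw(self.dw(high))
        w = torch.sigmoid(self.attn(torch.cat([high, low_up], dim=1)))
        return w * high + (1 - w) * low_up
```

The gate is image-dependent: it is predicted from both feature maps, so each pixel decides how much to rely on local detail versus global context.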
Then we gradually concatenate the multi-resolution features with the final visual feature along the channel dimension and upsample by 2 transposed convolution layers . Finally, we project the upsampled feature to the pixel feature space by an MLP and then decode the mask by the inner product of the query feature and the mask feature
where the query feature is from the Multi-Res Adapter, described in Sec. 3.2. is the mask prediction, and we omit the sigmoid function in Eq. 5." + }, + { + "section_id": "3.4", + "parent_section_id": "3", + "section_name": "Mask Classification", + "text": "###figure_5### Recent ViT-based open-vocabulary segmentation methods [10 ###reference_b10###, 36 ###reference_b36###, 28 ###reference_b28###] perform mask class recognition by predicting attention masks to guide the attention maps of the original CLIP [CLS] token onto the regions of interest in the intermediate layers. We observe that the predicted attention masks (shown in Fig. 4 ###reference_###(b)) in these methods tend to be overwhelmed by heavy background noise that is spatially related to high-norm tokens (shown in Fig. 4 ###reference_###(a)), leading to unsatisfactory classification performance. While a prior ViT pretraining technique [8 ###reference_b8###] indicates that these high-norm tokens contain rich global contexts, recent advances in training-free open-vocabulary segmentation [31 ###reference_b31###, 29 ###reference_b29###, 17 ###reference_b17###] reveal that high-norm tokens can easily disturb the spatial correlations in CLIP features. Inspired by these observations, we hypothesize that CLIP features hold internal consistency among multi-resolution inputs, and propose to simultaneously aggregate semantics from multi-resolution CLIP features by predicting decoupled attention masks. 
Decoupled Attention Mask Decoding\u2003To sufficiently aggregate multi-grained semantics from CLIP, we first duplicate the [CLS] token to the query number and create learnable positional embeddings for them, dubbed as the . 
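Returning to Eq. 5, the inner-product mask decoding can be sketched as follows; shapes and names are illustrative:

```python
import torch

def decode_masks(queries: torch.Tensor, pixel_feats: torch.Tensor) -> torch.Tensor:
    """Decode masks as the inner product of query and pixel features.

    queries: (B, N, C) projected query features from the adapter branch;
    pixel_feats: (B, C, H, W) mask features from the upsampling path.
    Returns (B, N, H, W) mask logits; a sigmoid (omitted here, as in the
    paper's Eq. 5) turns them into soft masks.
    """
    return torch.einsum("bnc,bchw->bnhw", queries, pixel_feats)
```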
We aim to enforce to extract the global image-wise and local object-specific semantics from the low- and high-resolution CLIP features, respectively. Thus, for visual feature , we first extract global contexts with max pooling and train MLPs to project them to the attention space
where denote the local and global attention features, respectively. Then we decode local and global per-head attention masks by the inner product with
where is the output query feature described in Sec. 3.2 ###reference_###. We show this decoupled resolution-aware attention decoding benefits the multi-grained aggregation in Fig. 4 ###reference_###.
Multi-grained Masked Attention\u2003As shown in Fig. 5 ###reference_###, we perform cross-attention to update the with multi-resolution CLIP features, with the predicted attention masks and ,
where is the query embedding. Denote the low- and high-resolution CLIP tokens as and . and are the key embeddings of the low- and high-resolution CLIP visual tokens, respectively. and are value embeddings. , and are projection weights of the cross-attention layer. The final output is projected to the shared vision-language space, and we compute cosine similarity with text embeddings to obtain the proposal logits : , where is the number of categories, and and are projection weights. Finally, the final segmentation map is produced by
###figure_6### Image-conditioned Text Feature\u2003Recent studies [15 ###reference_b15###, 6 ###reference_b6###] reveal that the CLIP text encoder struggles to generate discriminative text embeddings for similar categories. Thus, we follow MAFT-Plus [15 ###reference_b15###] to condition the text embeddings with learnable cross-attention layers between text embeddings and regionally pooled visual features: , where is depicted in Sec. 3.2 ###reference_###. is the original text embedding extracted by the CLIP text encoder."
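A minimal single-head sketch of the multi-grained masked attention above, omitting the learned projections: the low- and high-resolution branches attend under their respective decoupled masks and their outputs are summed. This is a simplification of the paper's formulation, with illustrative names:

```python
import torch

def multi_grained_masked_attention(cls_q, low_kv, high_kv, mask_low, mask_high):
    """cls_q: (B, N, C) duplicated [CLS] queries; low_kv: (B, L, C) low-res
    CLIP tokens; high_kv: (B, H, C) high-res tokens. mask_low / mask_high
    are additive attention masks of shape (B, N, L) / (B, N, H): 0 on
    attended regions, -inf elsewhere. The low branch aggregates global
    semantics, the high branch local ones; the results are summed."""
    d = cls_q.shape[-1] ** 0.5
    a_low = torch.softmax(cls_q @ low_kv.transpose(-1, -2) / d + mask_low, dim=-1)
    a_high = torch.softmax(cls_q @ high_kv.transpose(-1, -2) / d + mask_high, dim=-1)
    return a_low @ low_kv + a_high @ high_kv
```

Masking a branch down to a single token makes that branch return exactly that token, which is the mechanism the decoupled masks exploit to focus each branch on its own granularity.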
+ }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Settings", + "text": "For the open-vocabulary semantic segmentation task, we train our models on the COCO-Stuff [2 ###reference_b2###] dataset, which comprises 164K images with densely annotated masks spanning 171 categories. Then we evaluate MROVSeg on five well-established open-vocabulary semantic segmentation benchmarks for standard evaluation. We further evaluate MROVSeg on the Cityscapes [7 ###reference_b7###] benchmark to explore its ability to handle high-resolution image input. We follow common practice [35 ###reference_b35###, 1 ###reference_b1###] and measure segmentation performance by the mean intersection over union (mIoU) score. For the open-vocabulary panoptic segmentation task, we train MROVSeg on the COCO-Panoptic [20 ###reference_b20###] dataset. Then we evaluate the zero-shot performance of MROVSeg on the ADE [41 ###reference_b41###] panoptic benchmark, and measure panoptic segmentation performance by panoptic quality (PQ), segmentation quality (SQ) and recognition quality (RQ).
Datasets\u2003The standard semantic segmentation benchmarks contain three datasets: ADE [41 ###reference_b41###], Pascal Context [23 ###reference_b23###], and Pascal VOC [11 ###reference_b11###]. The ADE dataset contains around 20K and 2K images for training and validation, respectively. This dataset is annotated with 150 and 847 categories, resulting in two separate segmentation benchmarks, namely ADE-150 and ADE-847. Similarly, the Pascal Context dataset has 5K images for both training and validation. It is annotated with 59 and 459 classes, forming two benchmarks known as PC-59 and PC-459. 
The Pascal VOC dataset comprises 1464 and 1449 images for training and validation, encompassing annotated masks across 20 semantic categories.
Implementation Details\u2003We adopt the vanilla ViT block as the transformer block in the Multi-Res Adapter, and we use 6 blocks with 12 attention heads, 100 query tokens, and a feature dimension of 768 by default. We choose OpenAI-pretrained ViT-based CLIP [26 ###reference_b26###] models in all experiments for better reproducibility. For the CLIP ViT-B/16 model, we empirically take the [CLS] token from CLIP layer 9 and use the subsequent layers for Multi-grained Masked Attention, i.e., the last 3 blocks. For the CLIP ViT-L/14 model, we take the [CLS] token from CLIP layer 18 and use the subsequent layers for Multi-grained Masked Attention. Our models are trained with image input resolution , and sliced and downsampled to to fit the CLIP input resolution. More implementation details are in the Supplementary Material.
###figure_7### ###figure_8###" + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Accuracy Evaluation", + "text": "Open-Vocabulary Semantic Segmentation\u2003We compare the semantic segmentation performance of MROVSeg with current state-of-the-art methods in Tab. 1 ###reference_###. First of all, our method significantly outperforms other state-of-the-art methods on most open-vocabulary semantic segmentation benchmarks. Specifically, our method surpasses the state-of-the-art method CAT-Seg [6 ###reference_b6###] with the same CLIP backbones on four benchmarks by remarkable margins ( mIoU for ADE-847, mIoU for PC-459, mIoU for ADE-150, mIoU for PC-59, and for VOC-20 with the CLIP ViT-B backbone; mIoU for ADE-847, mIoU for PC-459, mIoU for PC-59, and for VOC-20 with the CLIP ViT-L backbone). Compared to methods with additional image backbones, our model outperforms EBSeg [28 ###reference_b28###] (with SAM) by mIoU% for ADE-847, PC-459, ADE-150, PC-59, and VOC-20, respectively. 
In addition, our models outperform the convolution-based, high-resolution-trained methods FC-CLIP [39 ###reference_b39###] and SED [33 ###reference_b33###]. Fig. 8 ###reference_### shows the qualitative comparison between MROVSeg and state-of-the-art methods (SAN [36 ###reference_b36###] and EBSeg [28 ###reference_b28###]). Evidently, MROVSeg can segment objects more precisely (the first row, class sofa), and provides more detailed mask predictions (the second and third rows, more accurate object boundaries).
The size adaptability of ViTs has been demonstrated [39 ###reference_b39###] to be worse than that of ConvNets. Fig. 7 ###reference_### shows that the image sizes of the datasets evaluated in Tab. 1 ###reference_### are primarily distributed from . To evaluate the ability to handle high-resolution image input, we retrain MROVSeg on COCO-Stuff with resolution and evaluate performance on the Cityscapes benchmark. Notably, MROVSeg reaches performance comparable to convolution-based methods [39 ###reference_b39###] when scaling up the backbone, which is a great enhancement over other ViT-based methods (EBSeg [28 ###reference_b28###] and SAN [36 ###reference_b36###]).
###figure_9### Open-Vocabulary Panoptic Segmentation\u2003In Tab. 2 ###reference_###, we evaluate the panoptic segmentation performance of MROVSeg on the mainstream open-vocabulary panoptic segmentation benchmark ADE [41 ###reference_b41###]. It is noteworthy that our method surpasses previous art on panoptic quality (PQ) and recognition quality (RQ), achieving new state-of-the-art open-vocabulary panoptic segmentation performance. Furthermore, the close-set panoptic segmentation results indicate that training the MROVSeg model does not lead to a severe overfitting issue on the base dataset." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Efficiency Evaluation", + "text": "Memory Consumption\u2003Denoting the slice ratio as , we examine the effect of the value in Tab. 3 ###reference_###. 
Notably, only using a single-resolution feature, i.e., , or directly adopting the high-resolution image as CLIP input, i.e., , leads to significant performance degradation. While some values with overlapped slicing (such as ) obtain better performance on some datasets (such as ADE-150 and VOC-20), we choose the default value as , considering the computation overhead.
Computation Overhead\u2003We compare the computation overhead of MROVSeg with recent methods [10 ###reference_b10###, 19 ###reference_b19###, 6 ###reference_b6###, 28 ###reference_b28###, 39 ###reference_b39###] in Tab. 4 ###reference_###. We measure the number of parameters, GFLOPs, inference FPS, and training time. Our method exhibits strong efficiency among these methods in terms of both training and inference. This is achieved by 1) a single CLIP backbone that needs no additional image backbones, and 2) a slice-then-input strategy that avoids quadratic computation cost with respect to the input image size. More detailed parameter settings are in the Supplementary Material." + }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Ablation Study", + "text": "For ablation experiments, except for Tab. 8 ###reference_###, we employ our CLIP ViT-B/16-based model as the ablated baseline.
Components Analysis\u2003Tab. 5 ###reference_### shows the performance effect of the main components in MROVSeg. The baseline adopts a single-resolution plain CLIP ViT feature for mask decoding, and uses MaskPooling [34 ###reference_b34###] for mask class recognition. The baseline obtains mIoU scores of 52.3%, 22.4%, 8.5%, 10.8% and 93.6% for PC-59, ADE-150, ADE-847, PC-459 and VOC-20. We first introduce multi-resolution CLIP features, which marginally outperforms the baseline. Then we introduce the Multi-Res Adapter; the model significantly outperforms the baseline by 4.8%, 5.1%, 2.3%, 7.2% and 1.5%. 
Next, we integrate Masked Attention [36 ###reference_b36###] into the model, which slightly outperforms MaskPooling. Finally, we integrate Multi-grained Masked Attention; the model performance reaches 58.7%, 32.4%, 12.9%, 19.2% and 95.8% for PC-59, ADE-150, ADE-847, PC-459 and VOC-20, outperforming the baseline by 6.4%, 7.0%, 3.9%, 8.4% and 2.2%. Furthermore, we show the effect of Hierarchical Mask Decoding and Image-conditioned Text Feature in Tab. 6 ###reference_###, both showing consistent improvements.
Effect of Multi-Res Adapter\u2003We first conduct experiments to examine the different micro-design choices of the Multi-Res Adapter in Tab. 3 ###reference_###. In (a), we examine different spatial restoration strategies in the MRF module. The depthwise convolution is significantly better than concatenation. In (b), we present the impact of local-global modeling methods, and the best results on all benchmarks are achieved by scale attention. Then, for the ViT block settings, we present the effect of the block number in (c). Increasing the block number to 9 brings limited improvement while incurring heavy computation cost. In (d), we examine the effect of the CLIP layers whose features we adopt. In (e), we present the impact of the channel width. Finally, we examine the effect of the query number in (f). We further compare the performance of MROVSeg with ViT-based and convolution-based CLIP models in Tab. 8 ###reference_###. The results indicate that a ViT-based encoder with a sliding-window approach outperforms its convolution-based counterpart.
###figure_10### Effect of Multi-grained Masked Attention\u2003As mentioned before, Tab. 5 ###reference_### and Fig. 4 ###reference_### quantitatively and qualitatively show the effectiveness of Multi-grained Masked Attention, respectively. We further visualize the query [CLS] embeddings (i.e., ) by t-SNE [30 ###reference_b30###] dimensionality reduction within the ADE-150 [41 ###reference_b41###] benchmark in Fig. 9 ###reference_###. 
We color the embeddings based on the Hungarian matching with the ground truth. In (a), we can observe that the queries from the same classes are well-grouped, as masked attention can aggregate semantics. Conversely, with decoupled attention masks, the query embeddings from different classes are split further apart in (b), indicating the effectiveness of multi-grained masked attention in mask classification." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this paper, we introduce MROVSeg, a multi-resolution training framework designed to enhance open-vocabulary image segmentation by leveraging multi-resolution VLM features. The exceptional quantitative and qualitative results obtained on well-established open-vocabulary segmentation benchmarks serve as compelling evidence of its effectiveness and versatility. We hope our method can serve as a strong baseline for future research." + } + ], + "appendix": [], + "tables": { + "1": { + "table_html": "
\n
Table 1: Semantic segmentation performance comparison with state-of-the-art methods. \u2020:\u00a0We cite the reproduction performance of these methods[35, 9] trained with full COCO Stuff dataset from previous works[36, 6]. PC: Pascal Context, VOC: Pascal VOC.
\n
Method | VL-Model | Training Dataset | End-to-End Training | Extra Backbone | ADE-847 | PC-459 | ADE-150 | PC-59 | VOC-20
Zegformer\u2020 [9] | CLIP ViT-B/16 | COCO Stuff | \u2717 | ResNet-101 | 5.6 | 10.4 | 18.0 | 45.5 | 89.5
ZSSeg\u2020 [35] | CLIP ViT-B/16 | COCO Stuff | \u2717 | ResNet-101 | 6.9 | 9.7 | 21.1 | 51.9 | 91.8
OvSeg [19] | CLIP ViT-B/16 | COCO Stuff | \u2717 | ResNet-101c | 7.1 | 11.0 | 24.8 | 53.3 | 92.6
SAN [36] | CLIP ViT-B/16 | COCO Stuff | \u2713 | - | 10.7 | 13.7 | 28.9 | 55.4 | 94.6
EBSeg [28] | CLIP ViT-B/16 | COCO Stuff | \u2717 | SAM-B | 11.1 | 17.3 | 30.0 | 56.7 | 94.6
SCAN [22] | CLIP ViT-B/16 | COCO Stuff | \u2717 | ResNet-101 | 10.8 | 13.2 | 30.8 | 58.4 | 97.0
SED [33] | CLIP ConvNeXt-B | COCO Stuff | \u2713 | - | 11.2 | 18.6 | 31.6 | 57.3 | 94.4
CAT-Seg [6] | CLIP ViT-B/16 | COCO Stuff | \u2713 | - | 12.0 | 19.0 | 31.8 | 57.5 | 94.6
MROVSeg | CLIP ViT-B/16 | COCO Stuff | \u2713 | - | 12.9 | 19.2 | 32.4 | 58.7 | 95.8
OvSeg [19] | CLIP ViT-L/14 | COCO Stuff | \u2717 | Swin-B | 9.0 | 12.4 | 29.6 | 55.7 | 94.5
MaskCLIP [10] | CLIP ViT-L/14 | COCO Panoptic | \u2717 | ResNet-50 | 8.2 | 10.0 | 23.7 | 45.9 | -
ZSSeg\u2020 [35] | CLIP ViT-L/14 | COCO Stuff | \u2717 | ResNet-101 | 7.1 | 10.2 | 21.7 | 52.2 | 92.3
SAN [36] | CLIP ViT-L/14 | COCO Stuff | \u2713 | - | 13.7 | 17.1 | 33.3 | 60.2 | 95.5
ODISE [34] | CLIP ViT-L/14 | COCO Panoptic | \u2717 | Stable Diffusion | 11.0 | 13.8 | 28.7 | 55.3 | -
EBSeg [28] | CLIP ViT-L/14 | COCO Stuff | \u2713 | SAM-B | 13.7 | 21.0 | 32.8 | 60.2 | 96.4
SCAN [22] | CLIP ViT-L/14 | COCO Stuff | \u2717 | ResNet-101 | 14.0 | 16.7 | 33.5 | 59.3 | 97.2
FC-CLIP [39] | CLIP ConvNeXt-L | COCO Panoptic | \u2717 | - | 14.8 | 18.2 | 34.1 | 58.4 | 95.4
SED [33] | CLIP ConvNeXt-L | COCO Stuff | \u2713 | - | 13.7 | 22.1 | 35.3 | 60.9 | 96.1
CAT-Seg [6] | CLIP ViT-L/14 | COCO Stuff | \u2713 | - | 16.0 | 23.8 | 37.9 | 63.3 | 97.0
MAFT+ [15] | CLIP ConvNext-L | COCO Stuff | \u2713 | - | 15.1 | 21.6 | 36.1 | 59.4 | 96.5
MROVSeg | CLIP ViT-L/14 | COCO Stuff | \u2713 | - | 16.4 | 24.0 | 36.9 | 64.3 | 97.6
\n
\n
", + "capture": "Table 1: Semantic segmentation performance comparison with state-of-the-art methods. \u2020:\u00a0We cite the reproduction performance of these methods[35, 9] trained with full COCO Stuff dataset from previous works[36, 6]. PC: Pascal Context, VOC: Pascal VOC." + }, + "2": { + "table_html": "
\n
Table 2: Panoptic segmentation performance comparison with state-of-the-art methods.
 | ADE (open-vocabulary) | COCO (close-set)
Method | PQ | SQ | RQ | PQ | SQ | RQ
FreeSeg [25] | 16.3 | - | - | - | - | -
MaskCLIP [10] | 15.1 | 70.4 | 19.2 | - | - | -
ODISE [34] | 22.6 | - | - | - | - | -
OPSNet [5] | 19.0 | 52.4 | 23.0 | - | - | -
FCCLIP [39] | 26.8 | 71.5 | 32.2 | 54.4 | 44.6 | 63.7
MAFT-Plus [15] | 27.1 | 73.5 | 32.9 | - | - | -
MROVSeg | 27.3 | 72.8 | 33.4 | 52.0 | 41.4 | 60.9
\n
", + "capture": "Table 2: Panoptic segmentation performance comparison with state-of-the-art methods. " + }, + "3": { + "table_html": "
\n
Table 3: Effect of different crop ratio . We also report #GFLOPS and GPU memory consumption (MB). For , we adopt overlapped slicing.
Mem. | GFLOPS | A-150 | PC-59 | A-847 | PC-459 | VOC-20
6768 | 116.5 | 28.6 | 54.7 | 10.6 | 16.4 | 94.5
13493 | 293.1 | 28.8 | 54.5 | 10.5 | 15.6 | 94.4
16261 | 225.7 | 31.5 | 54.0 | 11.5 | 15.9 | 95.1
12942 | 184.5 | 32.1 | 59.1 | 12.0 | 19.6 | 95.6
9779 | 142.8 | 32.4 | 58.7 | 12.9 | 19.2 | 95.8
7560 | 92.5 | 27.5 | 55.7 | 9.9 | 16.0 | 93.7
\n
", + "capture": "Table 3: Effect of different crop ratio . We also report #GFLOPS and GPU memory consumption (MB). For , we adopt overlapped slicing. " + }, + "4": { + "table_html": "
\n
Table 4: Efficiency comparisons. We report the #GFLOPS and inference FPS of MROVSeg running on an RTX 3090. Training time is measured with 2 NVIDIA H100 GPUs.
Method | Params. | #GFLOPS | Inference FPS | Training Time
ZSSeg [35] | 530.8 | 22302.1 | 0.3 | 15h59min
OVSeg [19] | 532.6 | 19345.6 | 0.4 | 17h54min
CAT-Seg [6] | 433.7 | 2121.1 | 2.0 | 7h41min
FC-CLIP [39] | 221.3 | 680.0 | 2.3 | 2d5h
EBSeg [28] | 210.9 | 867.5 | 4.7 | 18h33min
MROVSeg | 162.1 | 640.4 | 10.5 | 9h40min
\n
", + "capture": "Table 4: Efficiency comparisons. We report the and inference FPS of MROVSeg running on a RTX 3090. Training time is mesured with 2 NVIDIA H100." + }, + "5": { + "table_html": "
\n
Table 5: Ablation study of components. We show the effect of integrating each module into the baseline.
Method | PC-59 | A-150 | A-847 | PC-459 | VOC-20
Baseline | 52.3 | 25.4 | 9.0 | 10.8 | 92.9
+ High-res Features | 54.1 | 28.1 | 10.4 | 13.0 | 93.6
+ Multi-Res Adapter | 55.6 | 28.7 | 10.5 | 14.9 | 94.7
+ Masked Attention | 56.4 | 30.9 | 11.8 | 17.9 | 95.5
+ Multi-grained Masked Attention | 58.7 | 32.4 | 12.9 | 19.2 | 95.8
\n
", + "capture": "Table 5: Ablation study of components. We show the effects of integrating each modules into the baseline." + }, + "6": { + "table_html": "
\n
Table 6: Effect of Hierarchical Mask Decoding and Image-conditioned Text Feature.
Method | PC-59 | A-150 | A-847 | PC-459 | VOC-20
MROVSeg w/o Hier. Mask Dec. | 57.1 | 30.5 | 11.3 | 18.0 | 95.1
MROVSeg w/o Img-cond. Text Feat. | 58.5 | 32.0 | 12.1 | 19.6 | 95.5
MROVSeg | 58.7 | 32.4 | 12.9 | 19.2 | 95.8
\n
", + "capture": "Table 6: Effect of Hierarchical Mask Decoding and Image-conditioned Text Feature." + }, + "7": { + "table_html": "
\n
Table 7: Ablation study on various designs in Multi-Res Adapter. The default settings are underlined.
\n
(a) Spatial Fusion | A-847 | PC-459 | A-150 | PC-59 | VOC-20
Concat. | 12.0 | 18.4 | 31.3 | 57.1 | 94.8
Depth.Conv. | 12.9 | 19.2 | 32.4 | 58.7 | 95.8
(b) Multi-Res Fusion | A-847 | PC-459 | A-150 | PC-59 | VOC-20
Add | 12.0 | 17.7 | 31.2 | 55.9 | 94.5
Concat. | 12.4 | 19.8 | 31.8 | 58.7 | 95.6
Scale.Attn. | 12.9 | 19.2 | 32.4 | 58.7 | 95.8
(c) Block Num | A-847 | PC-459 | A-150 | PC-59 | VOC-20
3 | 9.8 | 12.5 | 27.7 | 53.9 | 94.1
6 | 12.9 | 19.2 | 32.4 | 58.7 | 95.8
9 | 12.7 | 19.3 | 31.9 | 59.2 | 95.4
(d) Fusion Layer | A-847 | PC-459 | A-150 | PC-59 | VOC-20
{3,6,9} | 11.8 | 19.1 | 31.6 | 56.5 | 95.9
{3,6,9,12} | 12.4 | 19.8 | 32.0 | 58.0 | 95.4
{stem,3,6,9} | 12.8 | 19.4 | 32.6 | 58.1 | 94.9
{stem,3,6,9,12} | 12.9 | 19.2 | 32.4 | 58.7 | 95.8
(e) Channel Width | A-847 | PC-459 | A-150 | PC-59 | VOC-20
384 | 10.4 | 17.0 | 27.5 | 56.1 | 94.2
768 | 12.9 | 19.2 | 32.4 | 58.7 | 95.8
1024 | 11.1 | 19.7 | 31.0 | 59.5 | 96.1
(f) Query Num | A-847 | PC-459 | A-150 | PC-59 | VOC-20
100 | 12.9 | 19.2 | 32.4 | 58.7 | 95.8
200 | 11.9 | 18.3 | 31.8 | 57.0 | 93.8
300 | 12.2 | 18.7 | 32.4 | 57.5 | 95.2
\n
\n
", + "capture": "Table 7: Ablation study on various designs in Multi-Res Adapter. The default setting are marked underline." + }, + "8": { + "table_html": "
\n
Table 8: Comparison between different VLMs. Multi-grained Masked Attention is disabled to adapt to ConvNeXt.
Method | VLM | PC-59 | A-150 | A-847 | PC-459 | VOC-20
MROVSeg | ConvNext-L | 57.0 | 28.4 | 10.1 | 15.8 | 93.3
MROVSeg | ViT-L | 58.5 | 30.4 | 13.4 | 20.1 | 95.4
\n
", + "capture": "Table 8: Comparison between different VLMs. All Multi-grained Masked Attention is disabled to adapt ConvNext." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2408.14776v2_figure_1(a).png", + "caption": "(a) Previous open-vocabulary image segmentation.\nFigure 1: Comparison between other training frameworks and MROVSeg. Previous methods (a) adopt additional image backbone to provide mask feature. The mask prediction is class-unaware. Our method (b) provide multi-resolution CLIP feature for both mask decoding and mask classification, and the whole framework is class-aware.", + "url": "http://arxiv.org/html/2408.14776v2/x1.png" + }, + "1(b)": { + "figure_path": "2408.14776v2_figure_1(b).png", + "caption": "(b) MROVSeg open-vocabulary image segmentation.\nFigure 1: Comparison between other training frameworks and MROVSeg. Previous methods (a) adopt additional image backbone to provide mask feature. The mask prediction is class-unaware. Our method (b) provide multi-resolution CLIP feature for both mask decoding and mask classification, and the whole framework is class-aware.", + "url": "http://arxiv.org/html/2408.14776v2/x2.png" + }, + "2": { + "figure_path": "2408.14776v2_figure_2.png", + "caption": "Figure 2: The overall pipeline of MROVSeg. For an high-resolution input image, its downsampled image and are fed into CLIP visual encoder to extract multi-resolution CLIP features. The Multi-Res Adapter adapts these features for mask decoder and attention mask decoder. The generated attention masks are employed to aggregate semantics from the multi-resolution CLIP features.", + "url": "http://arxiv.org/html/2408.14776v2/x3.png" + }, + "3": { + "figure_path": "2408.14776v2_figure_3.png", + "caption": "Figure 3: Multi-Res Adapter. 
The slice features from CLIP layer 0 {\ud835\udc0fi0}i=1Ssuperscriptsubscriptsuperscriptsubscript\ud835\udc0f\ud835\udc560\ud835\udc561\ud835\udc46\\{\\mathbf{P}_{i}^{0}\\}_{i=1}^{S}{ bold_P start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT 0 end_POSTSUPERSCRIPT } start_POSTSUBSCRIPT italic_i = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_S end_POSTSUPERSCRIPT are concatenated with learnable queries and fed to ViT Blocks. The slice features from various CLIP layers are first adapted by MRF module to restore spatial geometry and capture long-range global contexts, then are injected to the intermediate ViT Blocks. The final output visual tokens and projected queries are utilized for downstream mask prediction and classification.", + "url": "http://arxiv.org/html/2408.14776v2/x4.png" + }, + "4": { + "figure_path": "2408.14776v2_figure_4.png", + "caption": "Figure 4: Effect of decoupled attention decoding for multi-grained semantics. With single attention mask decoding, the spatial cues are overwhelmed by background noise (b). Our decoupled attention mask decoding effectively splits the global and local semantics, producing relatively clean global (c) and local (d) attention masks.", + "url": "http://arxiv.org/html/2408.14776v2/x5.png" + }, + "5": { + "figure_path": "2408.14776v2_figure_5.png", + "caption": "Figure 5: Multi-grained Masked Attention. 
Object [CLS] tokens \ud835\udc17propsubscript\ud835\udc17prop\\mathrm{\\mathbf{X}}_{\\texttt{prop}}bold_X start_POSTSUBSCRIPT prop end_POSTSUBSCRIPT perform cross attention with high- and low-resolution CLIP features \ud835\udc17LRsubscript\ud835\udc17LR\\mathbf{X}_{\\texttt{LR}}bold_X start_POSTSUBSCRIPT LR end_POSTSUBSCRIPT and \ud835\udc17HRsubscript\ud835\udc17HR\\mathbf{X}_{\\texttt{HR}}bold_X start_POSTSUBSCRIPT HR end_POSTSUBSCRIPT with decoupled attention masks.", + "url": "http://arxiv.org/html/2408.14776v2/x6.png" + }, + "6": { + "figure_path": "2408.14776v2_figure_6.png", + "caption": "Figure 6: Visualization of image resolution distribution histogram of the datasets in Tab.1.\n", + "url": "http://arxiv.org/html/2408.14776v2/x7.png" + }, + "7": { + "figure_path": "2408.14776v2_figure_7.png", + "caption": "Figure 7: Effect of scaling up backbone on Cityscapes.\n", + "url": "http://arxiv.org/html/2408.14776v2/x8.png" + }, + "8": { + "figure_path": "2408.14776v2_figure_8.png", + "caption": "Figure 8: Qualitative comparison with SAN [36] and EBSeg [28].", + "url": "http://arxiv.org/html/2408.14776v2/x9.png" + }, + "9": { + "figure_path": "2408.14776v2_figure_9.png", + "caption": "Figure 9: Effects of decoupled attention decoding for multi-grained semantics. 
The t-SNE [30] visualization shows that decoupled attention masks can further split the query embedding from different classes, making clear classification boundaries.", + "url": "http://arxiv.org/html/2408.14776v2/x10.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Zero-shot semantic segmentation.", + "author": "Maxime Bucher, Tuan-Hung Vu, Matthieu Cord, and Patrick P\u00e9rez.", + "venue": "Advances in Neural Information Processing Systems, 32, 2019.", + "url": null + } + }, + { + "2": { + "title": "Coco-stuff: Thing and stuff classes in context.", + "author": "Holger Caesar, Jasper Uijlings, and Vittorio Ferrari.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1209\u20131218, 2018.", + "url": null + } + }, + { + "3": { + "title": "End-to-end object detection with transformers.", + "author": "Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko.", + "venue": "In European Conference on Computer Vision, pages 213\u2013229. 
Springer, 2020.", + "url": null + } + }, + { + "4": { + "title": "Attention to scale: Scale-aware semantic image segmentation.", + "author": "Liang-Chieh Chen, Yi Yang, Jiang Wang, Wei Xu, and Alan L Yuille.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3640\u20133649, 2016.", + "url": null + } + }, + { + "5": { + "title": "Open-vocabulary panoptic segmentation with embedding modulation.", + "author": "Xi Chen, Shuang Li, Ser-Nam Lim, Antonio Torralba, and Hengshuang Zhao.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1141\u20131150, 2023.", + "url": null + } + }, + { + "6": { + "title": "Cat-seg: Cost aggregation for open-vocabulary semantic segmentation.", + "author": "Seokju Cho, Heeseong Shin, Sunghwan Hong, Seungjun An, Seungjun Lee, Anurag Arnab, Paul Hongsuck Seo, and Seungryong Kim.", + "venue": "arXiv preprint arXiv:2303.11797, 2023.", + "url": null + } + }, + { + "7": { + "title": "The cityscapes dataset for semantic urban scene understanding.", + "author": "Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele.", + "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213\u20133223, 2016.", + "url": null + } + }, + { + "8": { + "title": "Vision transformers need registers.", + "author": "Timoth\u00e9e Darcet, Maxime Oquab, Julien Mairal, and Piotr Bojanowski.", + "venue": "arXiv preprint arXiv:2309.16588, 2023.", + "url": null + } + }, + { + "9": { + "title": "Decoupling zero-shot semantic segmentation.", + "author": "Jian Ding, Nan Xue, Gui-Song Xia, and Dengxin Dai.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11583\u201311592, 2022.", + "url": null + } + }, + { + "10": { + "title": "Open-vocabulary universal image segmentation with maskclip.", + "author": 
"Zheng Ding, Jieke Wang, and Zhuowen Tu.", + "venue": "In International Conference on Machine Learning, pages 8090\u20138102, 2023.", + "url": null + } + }, + { + "11": { + "title": "The pascal visual object classes (voc) challenge.", + "author": "Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman.", + "venue": "International Journal of Computer Vision, 88:303\u2013338, 2010.", + "url": null + } + }, + { + "12": { + "title": "Clipscore: A reference-free evaluation metric for image captioning.", + "author": "Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi.", + "venue": "arXiv preprint arXiv:2104.08718, 2021.", + "url": null + } + }, + { + "13": { + "title": "Hrda: Context-aware high-resolution domain-adaptive semantic segmentation.", + "author": "Lukas Hoyer, Dengxin Dai, and Luc Van Gool.", + "venue": "In European Conference on Computer Vision, pages 372\u2013391. Springer, 2022.", + "url": null + } + }, + { + "14": { + "title": "Scaling up visual and vision-language representation learning with noisy text supervision.", + "author": "Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.", + "venue": "In International Conference on Machine Learning, pages 4904\u20134916. PMLR, 2021.", + "url": null + } + }, + { + "15": { + "title": "Collaborative vision-text representation optimizing for open-vocabulary segmentation.", + "author": "Siyu Jiao, Hongguang Zhu, Jiannan Huang, Yao Zhao, Yunchao Wei, and Humphrey Shi.", + "venue": "In European Conference on Computer Vision, pages 399\u2013416. Springer, 2025.", + "url": null + } + }, + { + "16": { + "title": "Segment anything.", + "author": "Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. 
Berg, Wan-Yen Lo, Piotr Dollar, and Ross Girshick.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015\u20134026, 2023.", + "url": null + } + }, + { + "17": { + "title": "Proxyclip: Proxy attention improves clip for open-vocabulary segmentation.", + "author": "Mengcheng Lan, Chaofeng Chen, Yiping Ke, Xinjiang Wang, Litong Feng, and Wayne Zhang.", + "venue": "In European Conference on Computer Vision, 2024.", + "url": null + } + }, + { + "18": { + "title": "Monkey: Image resolution and text label are important things for large multi-modal models.", + "author": "Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, and Xiang Bai.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26763\u201326773, 2024.", + "url": null + } + }, + { + "19": { + "title": "Open-vocabulary semantic segmentation with mask-adapted clip.", + "author": "Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, and Diana Marculescu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7061\u20137070, 2023.", + "url": null + } + }, + { + "20": { + "title": "Microsoft coco: common objects in context.", + "author": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C Lawrence Zitnick.", + "venue": "In European Conference on Computer Vision, pages 740\u2013755. 
Springer, 2014.", + "url": null + } + }, + { + "21": { + "title": "Improved baselines with visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296\u201326306, 2024a.", + "url": null + } + }, + { + "22": { + "title": "Open-vocabulary segmentation with semantic-assisted calibration.", + "author": "Yong Liu, Sule Bai, Guanbin Li, Yitong Wang, and Yansong Tang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3491\u20133500, 2024b.", + "url": null + } + }, + { + "23": { + "title": "The role of context for object detection and semantic segmentation in the wild.", + "author": "Roozbeh Mottaghi, Xianjie Chen, Xiaobai Liu, Nam-Gyu Cho, Seong-Whan Lee, Sanja Fidler, Raquel Urtasun, and Alan Yuille.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 891\u2013898, 2014.", + "url": null + } + }, + { + "24": { + "title": "Styleclip: Text-driven manipulation of stylegan imagery.", + "author": "Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2085\u20132094, 2021.", + "url": null + } + }, + { + "25": { + "title": "Freeseg: Unified, universal and open-vocabulary image segmentation.", + "author": "Jie Qin, Jie Wu, Pengxiang Yan, Ming Li, Ren Yuxi, Xuefeng Xiao, Yitong Wang, Rui Wang, Shilei Wen, Xin Pan, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19446\u201319455, 2023.", + "url": null + } + }, + { + "26": { + "title": "Learning transferable visual models from natural language supervision.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela 
Mishkin, Jack Clark, et al.", + "venue": "In International Conference on Machine Learning, pages 8748\u20138763. PMLR, 2021.", + "url": null + } + }, + { + "27": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn Ommer.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684\u201310695, 2022.", + "url": null + } + }, + { + "28": { + "title": "Open-vocabulary semantic segmentation with image embedding balancing.", + "author": "Xiangheng Shan, Dongyue Wu, Guilin Zhu, Yuanjie Shao, Nong Sang, and Changxin Gao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 28412\u201328421, 2024.", + "url": null + } + }, + { + "29": { + "title": "Explore the potential of clip for training-free open vocabulary semantic segmentation.", + "author": "Tong Shao, Zhuotao Tian, Hang Zhao, and Jingyong Su.", + "venue": "In European Conference on Computer Vision, pages 139\u2013156. Springer, 2025.", + "url": null + } + }, + { + "30": { + "title": "Visualizing data using t-sne.", + "author": "Laurens Van der Maaten and Geoffrey Hinton.", + "venue": "Journal of Machine Learning Research, 9(11), 2008.", + "url": null + } + }, + { + "31": { + "title": "Sclip: Rethinking self-attention for dense vision-language inference.", + "author": "Feng Wang, Jieru Mei, and Alan Yuille.", + "venue": "In European Conference on Computer Vision, pages 315\u2013332. 
Springer, 2025.", + "url": null + } + }, + { + "32": { + "title": "Semantic projection network for zero-and few-label semantic segmentation.", + "author": "Yongqin Xian, Subhabrata Choudhury, Yang He, Bernt Schiele, and Zeynep Akata.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8256\u20138265, 2019.", + "url": null + } + }, + { + "33": { + "title": "Sed: A simple encoder-decoder for open-vocabulary semantic segmentation.", + "author": "Bin Xie, Jiale Cao, Jin Xie, Fahad Shahbaz Khan, and Yanwei Pang.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3426\u20133436, 2024.", + "url": null + } + }, + { + "34": { + "title": "Open-vocabulary panoptic segmentation with text-to-image diffusion models.", + "author": "Jiarui Xu, Sifei Liu, Arash Vahdat, Wonmin Byeon, Xiaolong Wang, and Shalini De Mello.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2955\u20132966, 2023a.", + "url": null + } + }, + { + "35": { + "title": "A simple baseline for open-vocabulary semantic segmentation with pre-trained vision-language model.", + "author": "Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Han Hu, and Xiang Bai.", + "venue": "In European Conference on Computer Vision, pages 736\u2013753. 
Springer, 2022.", + "url": null + } + }, + { + "36": { + "title": "Side adapter network for open-vocabulary semantic segmentation.", + "author": "Mengde Xu, Zheng Zhang, Fangyun Wei, Han Hu, and Xiang Bai.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2945\u20132954, 2023b.", + "url": null + } + }, + { + "37": { + "title": "Llava-uhd: an lmm perceiving any aspect ratio and high-resolution images.", + "author": "Ruyi Xu, Yuan Yao, Zonghao Guo, Junbo Cui, Zanlin Ni, Chunjiang Ge, Tat-Seng Chua, Zhiyuan Liu, Maosong Sun, and Gao Huang.", + "venue": "arXiv preprint arXiv:2403.11703, 2024.", + "url": null + } + }, + { + "38": { + "title": "Multi-scale context aggregation by dilated convolutions.", + "author": "Fisher Yu and Vladlen Koltun.", + "venue": "arXiv preprint arXiv:1511.07122, 2015.", + "url": null + } + }, + { + "39": { + "title": "Convolutions die hard: Open-vocabulary segmentation with single frozen convolutional clip.", + "author": "Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, and Liang-Chieh Chen.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "40": { + "title": "Regionclip: Region-based language-image pretraining.", + "author": "Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, et al.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16793\u201316803, 2022.", + "url": null + } + }, + { + "41": { + "title": "Semantic understanding of scenes through the ade20k dataset.", + "author": "Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba.", + "venue": "International Journal of Computer Vision, 127:302\u2013321, 2019.", + "url": null + } + }, + { + "42": { + "title": "Extract free dense labels from clip.", + "author": "Chong Zhou, Chen Change Loy, and Bo Dai.", + 
"venue": "In European Conference on Computer Vision, pages 696\u2013712. Springer, 2022.", + "url": null + } + }, + { + "43": { + "title": "Zegclip: Towards adapting clip for zero-shot semantic segmentation.", + "author": "Ziqin Zhou, Yinjie Lei, Bowen Zhang, Lingqiao Liu, and Yifan Liu.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11175\u201311185, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.14776v2" +} \ No newline at end of file