Title: SegCLIP: Patch Aggregation with Learnable Centers for Open-Vocabulary Semantic Segmentation
URL Source: https://arxiv.org/html/2211.14813
Abstract
Recently, contrastive language-image pre-training, e.g., CLIP, has demonstrated promising results on various downstream tasks. The pre-trained model can capture enriched visual concepts for images by learning from large-scale image-text data. However, transferring the learned visual knowledge to open-vocabulary semantic segmentation remains under-explored. In this paper, we propose a CLIP-based model named SegCLIP for open-vocabulary segmentation in an annotation-free manner. SegCLIP achieves segmentation based on ViT, and its main idea is to gather patches into semantic regions through learnable centers, trained on image-text pairs. The gathering operation dynamically captures semantic groups, which are used to generate the final segmentation results. We further propose a reconstruction loss on masked patches and a superpixel-based KL loss with pseudo-labels to enhance the visual representation. Experimental results show that our model achieves comparable or superior segmentation accuracy on PASCAL VOC 2012 (+0.3% mIoU), PASCAL Context (+2.3% mIoU), and COCO (+2.2% mIoU) compared with baselines. We release the code at https://github.com/ArrowLuo/SegCLIP.
Semantic Segmentation, Open-Vocabulary, ViT, CLIP
1 Introduction
Semantic segmentation, which aims to assign a label to each pixel of a given image, is an important task that has long been studied. CNN-based approaches (Long et al., 2015; Ronneberger et al., 2015; Chen et al., 2015; Zhao et al., 2017; Chen et al., 2018; Wen et al., 2022) and Transformer-based approaches (Cheng et al., 2021; Zheng et al., 2021; Xie et al., 2021; Cheng et al., 2022; Jain et al., 2022) have achieved impressive performance on this task. However, two significant limitations still need exploration: expensive pixel-level labeling, and restricted labeled categories that lead to weak generalization (Bucher et al., 2019; Xian et al., 2019).
Figure 1: Overview of our problem. The proposed SegCLIP can achieve open-vocabulary semantic segmentation through training with image-text pairs.
Recent works propose to leverage large-scale image-text pre-trained models to alleviate the above limitations. These works involve zero-shot or weakly supervised semantic segmentation because the large image-text corpora are class-agnostic. Because the goal is to segment an image with arbitrary categories instead of a fixed labeling vocabulary, this kind of method is also called open-vocabulary semantic segmentation (Ghiasi et al., 2021; Xu et al., 2022b; Liang et al., 2022; Ma et al., 2022). These methods can be roughly divided into two types. The first is the classification-based method, supervised by pseudo labels or text features extracted from a pre-trained model, e.g., CLIP (Radford et al., 2021); this type is usually built on a fully convolutional network or carries out prediction based on mask proposals (Zhou et al., 2022a; Xu et al., 2022b). The other is to group semantic regions while training on large-scale image-text datasets, which can be called the group-based method (Xu et al., 2022a). Though the routes differ, the fundamental logic behind them is that an image-text pre-trained model can extend vision-text alignment from image-level to pixel-level features. Interpretability methods, such as CAM (Selvaraju et al., 2017) and Transformer interpretability (Chefer et al., 2021), support this argument, e.g., in the work of (Zabari & Hoshen, 2021).
Following this research line of learning pixel-level alignment from image-text pairs, we explore semantic regions with the group-based method in this paper. Compared with the classification-based method, which involves mask proposals and label classification, the group-based method is straightforward: its objective is consistent with the pre-training model, e.g., training with a contrastive loss on image-text pairs. Further, the group-based model jointly learns visual and textual representations as humans do, so it has the potential to be improved from a multimodal perspective. Instead of training from scratch, the group-based method can also benefit from the pre-trained model.
To this end, we propose a group-based model, SegCLIP, to accomplish open-vocabulary semantic segmentation. SegCLIP can be regarded as segmentation + CLIP. Specifically, the proposed model has a similar architecture to CLIP but a modified image encoder based on the ViT (Vision Transformer) (Dosovitskiy et al., 2021). Instead of operating on regular grids, we design a pluggable semantic group module that aggregates patches with learnable centers. The learnable centers can dynamically merge visual patches into semantic concepts via a mapping matrix generated by a cross-attention mechanism. This module can be inserted into the middle layers of the image encoder to generate irregular-shaped segments; thus, SegCLIP can transfer knowledge from CLIP to semantic segmentation. In our experiments, we train the extra, randomly initialized parameters with a relatively small number of image-text pairs. Figure 1 illustrates the training and inference process. During inference, the label name is filled into a given prompt format, and the semantic segments are obtained by calculating the similarity between the text representation and the semantic groups.
Moreover, we propose two auxiliary losses to enhance the visual representation for semantic segmentation. One is a reconstruction loss, which aims to recover masked patches from their visual context. Such a reconstruction loss has proven effective in previous work (He et al., 2022; Wang et al., 2022a; Zhou et al., 2022b). The difference is that our reconstruction process operates on irregular-shaped segments through a mapping matrix instead of regular patches. The other is a KL loss (Kullback-Leibler divergence loss) used to learn a better mapping matrix via superpixel labels, which can be obtained with an off-the-shelf tool. The KL loss keeps pixel-level features consistent.
2 Model
Figure 2 presents SegCLIP as a dual-encoder architecture: one encoder for text representation and the other for image representation. We propose a pluggable semantic group module that aggregates patches with learnable centers in the image encoder, thus giving CLIP the capacity to handle semantic segmentation. The backbone of SegCLIP is the ViT version of CLIP; details can be found in (Radford et al., 2021). This section describes the architecture of SegCLIP, the training losses, and the inference process in detail.
Figure 2: The framework of SegCLIP. SegCLIP is a dual-encoder architecture containing a text and an image encoder. The semantic group module (zoomed in at the right) is proposed to gather regular patches into arbitrary-shaped semantic regions. Three losses, including contrastive loss, reconstruction loss, and superpixel-based KL loss, are used in training.
2.1 Main Architecture
The architecture of SegCLIP mainly consists of a text encoder $\mathcal{E}_T(\cdot)$ and an image encoder $\mathcal{E}_I(\cdot)$, similar to CLIP. Such a design can naturally transfer knowledge from the pre-trained weights of CLIP instead of training from scratch. Nevertheless, it is nontrivial to achieve semantic segmentation directly, because CLIP is pre-trained with image-level features and struggles with pixel-level tasks. We therefore propose a pluggable semantic group module within the image encoder, whose learnable centers aggregate the low-layer pixel features to achieve segmentation. The learnable centers can be regarded as semantic regions and gather semantically related pixels along the training process. Thus, SegCLIP can perform open-vocabulary semantic segmentation.
As shown in Figure 2, the model's input is a pair of text $\mathbf{T}=\{w_i\}_{i=1}^{M}$ and image $\mathbf{I}=\{p_j\}_{j=1}^{N}$, where $w_i$ denotes the $i$-th token of the text, $p_j$ denotes the $j$-th non-overlapping patch of the image, and $M$ and $N$ denote the number of text tokens and image patches, respectively. Following the ViT version of CLIP, tokens are generated via lower-cased byte pair encoding (BPE), and the token representations $\{\mathbf{e}_{w_i}\}_{i=1}^{M}$ and patch representations $\{\mathbf{e}_{p_j}\}_{j=1}^{N}$ are obtained by an embedding operation and a linear projection, respectively.
Then the token representations are fed into Transformer layers (Vaswani et al., 2017) to generate the final text feature $\mathbf{z}_w=\mathcal{E}_T\big(\{\mathbf{e}_{w_i}\}_{i=1}^{M}\big)$. The patch representations are fed into other Transformer layers plus the semantic group module to generate the final image feature $\mathbf{z}_p=\mathcal{E}_I\big(\{\mathbf{e}_{p_j}\}_{j=1}^{N}\big)$. Finally, the contrastive loss is calculated on the text feature $\mathbf{z}_w$ and the image feature $\mathbf{z}_p$. In our setting, the text feature $\mathbf{z}_w$ comes from a special token $\mathtt{[SEP]}$, which is appended as the last token of the text. The image feature $\mathbf{z}_p$ is generated by the last Transformer layer followed by a max-pooling operation.
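The two pooling choices above can be illustrated with a small sketch (toy shapes and random stand-ins for the Transformer outputs are assumptions): the text feature $\mathbf{z}_w$ is the hidden state of the final $\mathtt{[SEP]}$ token, and the image feature $\mathbf{z}_p$ is a max-pool over the last image layer's states.

```python
import numpy as np

rng = np.random.default_rng(0)
M_tokens, N_patches, H = 12, 49, 8
text_hidden = rng.normal(size=(M_tokens, H))    # last text Transformer layer
image_hidden = rng.normal(size=(N_patches, H))  # last image Transformer layer

z_w = text_hidden[-1]            # [SEP] is appended as the last token
z_p = image_hidden.max(axis=0)   # max-pooling over positions
assert z_w.shape == z_p.shape == (H,)
```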
2.2 Semantic Group Module
To gather the regular patches into arbitrary-shaped semantic regions, we design a semantic group module plugged into the Transformer layers of the image encoder. In other words, the semantic group module can be regarded as the second stage of the image encoder, with different Transformer layers as the first and third stages. Assume the patch representations are $\mathcal{H}_p=\{\mathbf{h}_{p_j}^{s}\}_{j=1}^{N}$ after passing through the first stage's $s$-th (also the last) Transformer layer. The semantic group module gathers different patches by calculating semantic similarity. Specifically, we first randomly initialize a group of learnable centers $\mathcal{H}_c=\{\mathbf{c}_k\}_{k=1}^{L}$, then obtain contextual centers $\hat{\mathcal{H}}_c=\{\hat{\mathbf{c}}_k\}_{k=1}^{L}$ through several cross-attention layers as follows,

$\hat{\mathcal{H}}_c^{t}=\texttt{CrossAttention}(\mathcal{H}_c^{t},\mathcal{H}_p,\mathcal{H}_p),\qquad(1)$

where $t$ is the layer index of cross-attention, the start $\mathcal{H}_c^{1}$ is $\mathcal{H}_c$, and $\hat{\mathcal{H}}_c$ is the last $\hat{\mathcal{H}}_c^{t}$. CrossAttention is a cross-attention layer, the same as the self-attention layer in Transformer (Vaswani et al., 2017) except that the inputs are separate embedding sequences: here, the query is $\mathcal{H}_c^{t}$, and the key and value are both $\mathcal{H}_p$.
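As a minimal sketch of Eq. (1), the following single-head cross-attention (the single-head form, toy shapes, and random projection weights are illustrative assumptions; the paper's layer follows the standard multi-head Transformer attention) lets the centers, as queries, attend over the patch features as keys and values:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(H_c, H_p, W_q, W_k, W_v):
    """H_c: (L, d) centers as queries; H_p: (N, d) patches as keys/values."""
    Q, K, V = H_c @ W_q, H_p @ W_k, H_p @ W_v
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # (L, N) attention weights
    return attn @ V                                  # (L, d) contextual centers

rng = np.random.default_rng(0)
L, N, d = 4, 16, 8
H_c, H_p = rng.normal(size=(L, d)), rng.normal(size=(N, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
H_c_hat = cross_attention(H_c, H_p, W_q, W_k, W_v)
assert H_c_hat.shape == (L, d)
```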
After obtaining the contextual centers $\hat{\mathcal{H}}_c$, we can assign each patch to a corresponding center via a mapping matrix $\mathcal{M}\in\mathbb{R}^{N\times L}$ generated by the Gumbel-Softmax operation (Jang et al., 2017; Xu et al., 2022a),

$\mathcal{M}=\texttt{Gumbel-Softmax}(\mathcal{H}_p\hat{\mathcal{H}}_c^{\top}),\qquad(2)$

where each row of $\mathcal{M}$ is a one-hot vector, and $\mathcal{M}_{jk}=1$ denotes that the $j$-th patch belongs to the $k$-th semantic center. $\mathcal{M}$ ensures that each patch belongs to one and only one center, which benefits the final semantic segmentation.
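A forward-only sketch of Eq. (2) (training additionally uses the straight-through trick for gradients, which is omitted here; shapes are toy assumptions) shows the one-hot property of the rows of $\mathcal{M}$:

```python
import numpy as np

def gumbel_softmax_hard(logits, rng, tau=1.0):
    """Hard (one-hot) Gumbel-Softmax sample; forward pass only."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    hard = np.zeros_like(y)
    hard[np.arange(y.shape[0]), y.argmax(axis=1)] = 1.0   # one-hot rows
    return hard

rng = np.random.default_rng(0)
N, L, d = 16, 4, 8
H_p = rng.normal(size=(N, d))          # patch features
H_c_hat = rng.normal(size=(L, d))      # contextual centers
M = gumbel_softmax_hard(H_p @ H_c_hat.T, rng)  # (N, L) mapping matrix
assert (M.sum(axis=1) == 1.0).all()    # each patch belongs to exactly one center
```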
Finally, we can calculate the representation of semantic regions $\hat{\mathcal{H}}_p$ with the patch representations $\mathcal{H}_p$, the mapping matrix $\mathcal{M}$, and the contextual centers $\hat{\mathcal{H}}_c$ as follows,

$\hat{\mathcal{H}}_p=\texttt{MLP}\big(\texttt{MEAN}(\mathcal{M}^{\top}\mathcal{H}_p)+\hat{\mathcal{H}}_c\big),\qquad(3)$

where MEAN averages, for each center, the patches belonging to it, and MLP is a multilayer perceptron block containing two fully-connected layers with a GELU (Hendrycks & Gimpel, 2016) between them.

The generated representation of semantic regions $\hat{\mathcal{H}}_p$ is fed to the Transformer layers of the third stage to further learn sufficiently interactive region features $\mathcal{Z}_p$.
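Eq. (3) can be sketched as follows (toy shapes; the MLP weights are random placeholders, and the tanh GELU approximation is an implementation convenience): each center's feature is the mean of its assigned patches, taken over the columns of the one-hot mapping matrix, added to the contextual center, then passed through a two-layer MLP with GELU.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def region_features(M, H_p, H_c_hat, W1, W2):
    """M: (N, L) one-hot mapping; H_p: (N, H) patches; H_c_hat: (L, H) centers."""
    counts = M.sum(axis=0)[:, None]                # patches per center, (L, 1)
    mean = (M.T @ H_p) / np.maximum(counts, 1.0)   # MEAN(M^T H_p), (L, H)
    return gelu((mean + H_c_hat) @ W1) @ W2        # MLP(MEAN(...) + H_c_hat)

rng = np.random.default_rng(0)
N, L, H = 16, 4, 8
M = np.eye(L)[rng.integers(0, L, size=N)]          # one-hot assignments
H_p, H_c_hat = rng.normal(size=(N, H)), rng.normal(size=(L, H))
W1, W2 = rng.normal(size=(H, H)) * 0.1, rng.normal(size=(H, H)) * 0.1
H_p_hat = region_features(M, H_p, H_c_hat, W1, W2)
assert H_p_hat.shape == (L, H)
```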
2.3 Reconstruction Loss
In addition to the contrastive loss, we propose a self-supervised reconstruction loss to enhance the visual representation for segmentation. As shown in Figure 3, the reconstruction loss aims to recover the masked patches through their visual context, similar to MAE (He et al., 2022). The difference is that our reconstruction process is designed based on irregular-shaped segments with a mapping matrix.
We first generate a masked version of the region representation $\hat{\mathcal{H}}_p^{(m)}$ and the mapping matrix $\mathcal{M}^{(m)}$ via the semantic group module on the unmasked patches for the MAE encoder. However, the region representation cannot be used to calculate the reconstruction loss directly because the unmasked patches have been gathered into different regions. We propose a reconstruction layer to restore the patch representations from $\hat{\mathcal{H}}_p^{(m)}$ as,

$\tilde{\mathcal{H}}_p^{(m)}=\texttt{GELU}\big(\texttt{Linear}(\mathcal{M}^{(m)})^{\top}\hat{\mathcal{H}}_p^{(m)}\big),\qquad(4)$

where Linear is a fully-connected layer and GELU is the activation function. Then we use extra Transformer layers, similar to the third stage of the image encoder, to obtain the final representation $\mathcal{Z}_p^{(m)}$ from $\tilde{\mathcal{H}}_p^{(m)}$.

We keep the MAE decoder as in (He et al., 2022) with the input $\mathcal{Z}_p^{(m)}$. Finally, the reconstruction loss is the mean squared error (MSE) between the reconstructed image $\mathbf{I}^{(m)}$ and the original image $\mathbf{I}$: $\mathcal{L}_{rec}=\texttt{MSE}(\mathbf{I}^{(m)},\mathbf{I})$.
Figure 3: Reconstruction Loss.
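The restoration step of Eq. (4) can be sketched as follows. Shapes here are an interpretation, not the paper's exact layout: the mapping matrix scatters region features back to patch positions, with the learnable linear layer folded in as an $L\times L$ weight. The MAE decoder and the pixel-space MSE of $\mathcal{L}_{rec}$ are stood in for by a direct MSE against random target features.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def restore_patches(M, H_regions, W):
    """M: (N, L) one-hot mapping of unmasked patches; H_regions: (L, H); W: (L, L)."""
    return gelu((M @ W) @ H_regions)      # (N, H) restored patch features

def mse(a, b):
    return float(((a - b) ** 2).mean())

rng = np.random.default_rng(0)
N, L, H = 16, 4, 8
M = np.eye(L)[rng.integers(0, L, size=N)]  # mapping over unmasked patches
H_regions = rng.normal(size=(L, H))        # masked-version region features
W = np.eye(L)                              # identity placeholder for Linear
restored = restore_patches(M, H_regions, W)
loss = mse(restored, rng.normal(size=(N, H)))  # stand-in reconstruction target
assert restored.shape == (N, H) and loss >= 0.0
```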
2.4 Superpixel based KL Loss
Besides the reconstruction loss, we propose a superpixel-based KL loss to guide the learning of the mapping matrix. The motivation is to keep pixel-level consistency when gathering patches into regions: intuitively, the pixels of one superpixel should be gathered into a single region rather than split across several. The calculation process is illustrated in Figure 4. For a given image, we first obtain its superpixels with the graph-based segmentation method of (Felzenszwalb & Huttenlocher, 2004), which is unsupervised and does not need training on any dataset. There are many other superpixel methods, but we chose this typical one as a demonstration.
Each pixel within the same superpixel shares the same label, e.g., the superpixel id. For each patch, we then assign a label, e.g., the floored average of the ids of its pixels, thereby obtaining super-patches corresponding to the superpixels. Intuitively, the patches within a super-patch are also covered by one superpixel. Note that a superpixel id is merely a number used to distinguish different superpixels; its value carries no meaning in the loss calculation. Every patch of a super-patch should have a consistent probability distribution in the mapping matrix $\mathcal{M}$ because these patches should be gathered into one region. In other words, the probability of a patch in the mapping matrix should be similar to the average probability of the patches within the same super-patch. Thus, a symmetric KL loss is designed as follows,

$\hat{\mathcal{P}}_j=\texttt{softmax}\Big(\frac{1}{|\mathcal{G}_j|}\sum_{\hat{j}\in\mathcal{G}_j}\mathcal{P}_{\hat{j}}\Big),\qquad(5)$

$\mathcal{L}_{sup}=\frac{1}{2N}\sum_{j=1}^{N}\big(\texttt{KL}(\mathcal{P}_j,\hat{\mathcal{P}}_j)+\texttt{KL}(\hat{\mathcal{P}}_j,\mathcal{P}_j)\big),\qquad(6)$

where KL is the Kullback-Leibler divergence, $\mathcal{P}_j$ is the region probability distribution of the $j$-th patch, obtained from the $j$-th row of $\mathcal{M}$ after a softmax operation, and $\mathcal{G}_j$ is the set of indexes of the patches contained in the super-patch that also contains the $j$-th patch. By decreasing $\mathcal{L}_{sup}$, the model tends to gather the patches within a superpixel together, which benefits the segmentation.
Figure 4: Superpixel based KL Loss.
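Eqs. (5)-(6) can be sketched with toy data (random distributions and integer group ids stand in for the mapping-matrix rows and super-patch labels): patches in the same super-patch share a target distribution, the softmax of their averaged region probabilities, and a symmetric KL divergence pulls each patch's distribution toward it.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kl(p, q, eps=1e-8):
    """Row-wise KL(p || q) for distributions along the last axis."""
    return (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)

def superpixel_kl_loss(P, groups):
    """P: (N, L) row distributions from M; groups: (N,) super-patch ids."""
    P_hat = np.zeros_like(P)
    for g in np.unique(groups):
        idx = groups == g
        P_hat[idx] = softmax(P[idx].mean(axis=0))            # Eq. (5)
    return float(0.5 * (kl(P, P_hat) + kl(P_hat, P)).mean())  # Eq. (6)

rng = np.random.default_rng(0)
P = softmax(rng.normal(size=(16, 4)))   # per-patch region probabilities
groups = rng.integers(0, 5, size=16)    # super-patch id per patch
loss = superpixel_kl_loss(P, groups)
assert loss >= 0.0
```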
2.5 Training and Inference
Training
In addition to the reconstruction loss $\mathcal{L}_{rec}$ and the superpixel-based KL loss $\mathcal{L}_{sup}$, the model is also trained with the contrastive loss $\mathcal{L}_{con}$ in an end-to-end manner. The total loss is the sum of them,

$\mathcal{L}_{total}=\mathcal{L}_{con}+\mathcal{L}_{rec}+\mathcal{L}_{sup}.\qquad(7)$

$\mathcal{L}_{con}$ is a symmetric cross-entropy loss calculated on the text feature $\mathbf{z}_w$ and image feature $\mathbf{z}_p$, similar to CLIP (Radford et al., 2021),

$\mathcal{L}_{con}=\frac{1}{2}(\mathcal{L}_{t2i}+\mathcal{L}_{i2t}),\qquad(8)$

$\mathcal{L}_{t2i}=-\frac{1}{\mathcal{B}}\sum_{i}^{\mathcal{B}}\log\frac{\exp\big(s(\mathbf{z}_w^{(i)},\mathbf{z}_p^{(i)})\big)}{\sum_{j=1}^{\mathcal{B}}\exp\big(s(\mathbf{z}_w^{(j)},\mathbf{z}_p^{(i)})\big)},\qquad(9)$

$\mathcal{L}_{i2t}=-\frac{1}{\mathcal{B}}\sum_{i}^{\mathcal{B}}\log\frac{\exp\big(s(\mathbf{z}_w^{(i)},\mathbf{z}_p^{(i)})\big)}{\sum_{j=1}^{\mathcal{B}}\exp\big(s(\mathbf{z}_w^{(i)},\mathbf{z}_p^{(j)})\big)},\qquad(10)$

where $s(\mathbf{z}_i,\mathbf{z}_j)=\frac{\mathbf{z}_i\mathbf{z}_j^{\top}}{\lVert\mathbf{z}_i\rVert\,\lVert\mathbf{z}_j\rVert}$ is the cosine similarity, $\mathcal{B}$ is the batch size, and the superscripts $(i)$ and $(j)$ denote the $i$-th and $j$-th samples in the batch, respectively.
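The symmetric contrastive loss of Eqs. (8)-(10) can be sketched as follows (the learnable temperature that CLIP multiplies into the logits is omitted here for brevity): matched text-image pairs sit on the diagonal of the cosine-similarity matrix, and the two directions normalize over its columns and rows, respectively.

```python
import numpy as np

def log_softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def contrastive_loss(z_w, z_p):
    """z_w, z_p: (B, H) text and image features of one batch."""
    z_w = z_w / np.linalg.norm(z_w, axis=1, keepdims=True)
    z_p = z_p / np.linalg.norm(z_p, axis=1, keepdims=True)
    S = z_w @ z_p.T                          # S[i, j] = s(z_w^(i), z_p^(j))
    diag = np.arange(len(S))
    l_t2i = -log_softmax(S, axis=0)[diag, diag].mean()  # Eq. (9): over texts j
    l_i2t = -log_softmax(S, axis=1)[diag, diag].mean()  # Eq. (10): over images j
    return 0.5 * (l_t2i + l_i2t)                        # Eq. (8)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = contrastive_loss(z, z)         # matched pairs on the diagonal
shuffled = contrastive_loss(z, z[::-1])  # mismatched pairs
assert aligned < shuffled                # alignment lowers the loss
```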
Inference
Thanks to the learned mapping matrix, SegCLIP can perform semantic segmentation without further finetuning on any dataset. For a segmentation task with candidate labels, the image feature is obtained through the image encoder. For the text feature, we use the template "a photo of a {label name}." to form the input of the text encoder for each label name. Specifically, we take $\mathcal{Z}_p \in \mathbb{R}^{L \times H}$ from the last Transformer layer of the image encoder as the representation of each region, where $L$ is the number of learnable centers and $H$ is the hidden size. The label features are denoted as $\{\mathbf{z}_w^{(\tau)}\}_{\tau=1}^{\mathcal{T}}$ if there are $\mathcal{T}$ candidate labels. After calculating the cosine similarity of each row of $\mathcal{Z}_p$ with each $\mathbf{z}_w^{(\tau)}$ via $s(\mathbf{z}_i,\mathbf{z}_j)$ from Eqs. (9-10), we obtain a similarity matrix $\hat{\mathcal{S}} \in \mathbb{R}^{L \times \mathcal{T}}$, in which each row gives the label probabilities of a region. The similarity $\mathcal{S} \in \mathbb{R}^{N \times \mathcal{T}}$ between patches and candidate labels is then computed with the mapping matrix $\mathcal{M} \in \mathbb{R}^{N \times L}$ via $\mathcal{S} = \mathcal{M}\hat{\mathcal{S}}$. Each patch is assigned the label with the highest similarity in its row of $\mathcal{S}$. Finally, we interpolate $\mathcal{S}$ from $N$ patches to the image size to obtain a pixel-level assignment matrix, and thus an irregular-shaped, pixel-level segmentation.
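The inference pipeline above can be sketched as follows. For simplicity, this sketch assigns labels per patch and upsamples the label map with nearest-neighbor indexing rather than interpolating the similarity map before the argmax; all shapes and names are illustrative:

```python
import numpy as np

def segclip_inference(Z_p, Z_w, M, out_hw):
    """Sketch of SegCLIP inference.
    Z_p: (L, H) region features from the last image-encoder layer.
    Z_w: (T, H) text features, one per candidate label.
    M:   (N, L) learned patch-to-region mapping matrix.
    out_hw: (height, width) of the output pixel-level label map."""
    Zp = Z_p / np.linalg.norm(Z_p, axis=1, keepdims=True)
    Zw = Z_w / np.linalg.norm(Z_w, axis=1, keepdims=True)
    S_hat = Zp @ Zw.T                  # (L, T) region-to-label similarity
    S = M @ S_hat                      # (N, T) patch-to-label similarity
    n = int(np.sqrt(len(S)))           # assume a square n x n patch grid
    patch_labels = S.argmax(axis=1).reshape(n, n)
    # nearest-neighbor upsampling from the patch grid to pixel resolution
    H, W = out_hw
    rows = np.arange(H) * n // H
    cols = np.arange(W) * n // W
    return patch_labels[rows][:, cols]  # (H, W) pixel-level label map
```

The actual model interpolates the similarity matrix itself, which yields smoother boundaries than upsampling hard labels as done here.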
3 Experiments
We first describe datasets and implementation details before ablating various settings of our model. Then we present the state-of-the-art results on three datasets in an annotation-free manner. Finally, we demonstrate some qualitative results of our model.
3.1 Datasets
We pretrain the SegCLIP on the training splits of Conceptual Captions (CC) (Sharma et al., 2018) and COCO (Lin et al., 2014), which contain 3M and 400K image-text pairs, respectively.
For semantic segmentation, we evaluate the model on the validation splits of PASCAL VOC 2012 (Everingham et al., 2010), PASCAL Context (Mottaghi et al., 2014), and COCO, which contain 20, 59, and 80 foreground classes, respectively. To distinguish the foreground classes from the background, we set thresholds of 0.75, 0.25, and 0.65 on the similarities for PASCAL VOC 2012, PASCAL Context, and COCO, respectively. The metric is mIoU, calculated between the predicted and ground-truth segmentation masks. The short side of each image is resized to 224 during inference.
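The paper states only that a per-dataset threshold on the similarities separates foreground from background. One plausible reading, sketched below as an assumption rather than the confirmed rule, is to assign background whenever a patch's best foreground similarity falls under the threshold:

```python
import numpy as np

def assign_with_background(S, threshold):
    """Assign each patch its best foreground label, or background (-1) when
    its highest similarity falls below the dataset-specific threshold.
    The exact thresholding rule is an assumption for illustration."""
    best = S.argmax(axis=1)          # best foreground label per patch
    best_score = S.max(axis=1)       # its similarity score
    best[best_score < threshold] = -1  # -1 marks the background class
    return best
```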
3.2 Experimental Details
The architecture is based on the ViT version of CLIP; the text encoder and the image encoder each consist of 12 Transformer layers. The image size is set to 224 × 224, and the patch size is 16 × 16. The maximum length of the text tokens is 32. By default, we initialize the embedding and Transformer layers from the CLIP pre-trained weights. The semantic group module is plugged after the 10th Transformer layer of the image encoder, a position chosen via grid search on the segmentation datasets. The number of cross-attention layers is set to 2, the MAE decoder has 3 layers, the patch mask rate is 0.75, and the number of learnable centers is 8. We randomly initialize the parameters of the semantic group module, the MAE decoder, and the remaining linear layers of the model. For optimization, we use the Adam optimizer with a cosine learning-rate schedule, following CLIP. The initial learning rate is 4e-6 for the embedding layers, the text encoder, and the Transformer layers of the image encoder before the semantic group module; for the rest of the parameters, it is 4e-3. We pretrain our model on 8 NVIDIA A100 GPUs with a batch size of 768 for 10 epochs, which takes approximately 6 hours.
3.3 Ablation Studies
In this section, we conduct ablation studies on the proposed losses and key hyperparameters to analyze their influence.
Effect of the Reconstruction Loss As shown in Table 1, training with the reconstruction loss improves the mIoU by 1.19%, 0.92%, and 0.66% on PASCAL VOC, PASCAL Context, and COCO, respectively, without the superpixel-based KL loss, and by 4.11%, 0.56%, and 0.52% with it. The results demonstrate that the reconstruction loss, which restores the masked patches from contextual visual features, can strengthen the encoder and make the mapping matrix a better match for the segmentation task.
Effect of the Superpixel-based KL Loss We also report the consistent improvement from the superpixel-based KL loss in Table 1. The gains are 0.54%, 0.72%, and 1.07% without the reconstruction loss, and 3.46%, 0.36%, and 0.93% with it, on PASCAL VOC, PASCAL Context, and COCO, respectively. We suppose that the pseudo superpixel labels keep the pixel-level visual features relatively consistent within segments.
Table 1: Ablation of the proposed losses (mIoU). R-Loss is the reconstruction loss, S-KL is the superpixel-based KL loss.
Table 2: Ablation of plugged layer (P-Ly) and center number (C-NO.) of semantic group module. The results are obtained with only the contrastive loss.
Influence of the Plugged Layer In Table 2, we experiment with different plugged layers for the semantic group module, from 6 to 11, with the same 8 learnable centers. The results show that plugging after layer 10 achieves better performance than the other positions, and both too-early and too-late positions decrease the mIoU significantly. We suppose that plugging in too early may harm the pre-trained CLIP weights, and that low-layer features are segment-irrelevant; features from too-late positions are likewise segment-irrelevant and cannot benefit the segmentation task.
Influence of the Center Number We also experiment with different numbers of learnable centers in the semantic group module, with the same plugged layer 10. The results in Table 2 show that 8 learnable centers achieve better or comparable performance, and the mIoU is not sensitive to 6, 8, or 10 centers. We choose 8 as the default hyperparameter in this work.
Influence of the Cross-Attention Layers Table 3 shows the results with different numbers of cross-attention layers in the semantic group module. Training with cross-attention layers achieves better performance than training without them, suggesting that cross-attention helps the learnable centers better match the patch features and focus on different parts of the given image. We also observe that two cross-attention layers achieve the best mIoU, though the results with 1 and 3 layers are comparable. A larger number of cross-attention layers, e.g., 4, may harm performance; we suspect our training datasets are insufficient to train deeper layers.
Table 3: Ablation of cross-attention layer (Cross-Att.) The plugged layer is 10, and the center NO. is 8. The results are obtained with only the contrastive loss.
Table 4: Comparison of different models on mIoU. ‘Arch.’ and ‘Sup.’ are short for architecture and supervision, respectively. ‘Init.’ indicates whether the model is initialized with CLIP. CC12M and YFCC are from (Changpinyo et al., 2021) and (Thomee et al., 2016), respectively. ♮ marks results from (Xu et al., 2022a). GroupViT$_{\text{1-s}}$ and GroupViT$_{\text{2-s}}$ are our implementations on the CC and COCO datasets, with one-stage and two-stage grouping blocks, respectively.
3.4 Comparisons with State-of-the-Art Methods
As shown in Table 4, we compare our model against class-supervised, visually self-supervised, and textually supervised baselines. The results of the class-supervised and visually self-supervised baselines are taken from (Xu et al., 2022a): they are pixel-wise classification models finetuned from the pre-trained ViT models, i.e., DeiT (Touvron et al., 2021), DINO (Caron et al., 2021), and MoCo (Chen et al., 2021), with a 1×1 convolutional layer as the semantic segmentation head, finetuned separately on the training sets of VOC and Context. Compared with the class-supervised model, our result on VOC (52.5%) is still comparable (vs. 53.0%), despite training without any manual pixel-level annotations.
Figure 5: Qualitative results on PASCAL VOC.
Figure 6: Qualitative results on PASCAL Context.
Compared with the state-of-the-art textually supervised method GroupViT, our initialized SegCLIP achieves gains of 0.3%, 2.3%, and 2.2% on VOC, Context, and COCO, respectively. For a fair comparison, we also train GroupViT on the CC and COCO datasets. Our SegCLIP trained from scratch achieves improvements of 5.2%, 4.3%, and 2.3% over GroupViT$_{\text{1-s}}$. Note that GroupViT$_{\text{1-s}}$ achieves higher accuracy than GroupViT$_{\text{2-s}}$ in our settings; we suppose that CC and COCO, being smaller than CC12M (Changpinyo et al., 2021) and YFCC (Thomee et al., 2016), may lead to unstable and insufficient training for the two-stage GroupViT. When initialized with the pre-trained CLIP, SegCLIP improves the mIoU by 19.3%, 5.6%, and 11.3% on VOC, Context, and COCO compared with training from scratch, respectively. This implies that our model benefits from the pre-trained CLIP, which also demonstrates the flexibility of the semantic group module.
Figure 7: Qualitative results on COCO.
3.5 Qualitative Results
We show qualitative results on PASCAL VOC, PASCAL Context, and COCO in Figures 5-7, respectively. The results indicate that SegCLIP can generate plausible segments and reasonable tags. Compared with SegCLIP trained from scratch, the initialized SegCLIP achieves better semantic segmentation. In Figure 5, the first row shows that the initialized SegCLIP obtains better semantics, e.g., the airplane area, and the last row shows that it obtains correct tags, e.g., dog, for the generated segments. The same conclusion can be drawn from Figure 6 and the first two rows of Figure 7. We also observe that SegCLIP captures a single object, multiple objects of the same class, and multiple objects from different classes. This suggests that training on a large scale of image-text pairs can induce fine-grained alignment between segments and tags.
4 Related Work
This paper is related to the vision-language pre-training and open-vocabulary semantic segmentation.
4.1 Vision-Language pre-training
The vision-language pre-training (VLP) is an emerging research topic with the increase of large-scale visual and linguistic pairs collected from the Internet (Tan & Bansal, 2019; Chen et al., 2020; Huang et al., 2020; Kim et al., 2021; Li et al., 2021; Wang et al., 2022b; Sun et al., 2019; Luo et al., 2020; Bain et al., 2021; Li et al., 2022b, c). The research directions commonly involve the design of new model architectures and pre-training objectives (Gan et al., 2022). For the architecture, the VLP models usually contain several modules, e.g., visual encoder, text encoder, multimodal fusion encoder, or decoder. For the objective, the representative pre-training tasks contain the masked language model (MLM) introduced in language pre-training (Devlin et al., 2019), vision-text matching (VTM), vision-text contrastive learning (VTC), and masked vision model (MVM). Besides the model, the available datasets are the key factor in pushing the development of this research field, e.g., Conceptual Captions (Sharma et al., 2018) and COCO (Lin et al., 2014) used in this work, CC12M (Changpinyo et al., 2021), YFCC (Thomee et al., 2016), LAION-400M (Schuhmann et al., 2021), and HowTo100M (Miech et al., 2019).
Most vision-language pre-training models are designed for image-text or video-text downstream tasks. Beyond that, some are designed mainly for visual tasks with text supervision. CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) are two typical models trained with VTC for image classification. Further, the works of (Yao et al., 2022; Zeng et al., 2022) consider fine-grained alignment, and the work of (Li et al., 2022e) adds self-supervision within each modality alongside other pre-training tasks. There are also pre-trained models for object detection (Gu et al., 2022; Zhong et al., 2022; Li et al., 2022d) and segmentation (Wu et al., 2020; Ghiasi et al., 2021; Lüddecke & Ecker, 2022; Rao et al., 2022; Ding et al., 2022b).
The proposed SegCLIP is a vision-language pre-training model for segmentation. Beyond the model architecture, we also design a reconstruction loss and a superpixel-based KL loss as additional training objectives. Although SegCLIP can be trained on a large-scale dataset, we focus on its transfer capability: reusing an existing pre-trained model, i.e., CLIP, for segmentation reduces the cost of training resources.
4.2 Open-Vocabulary Semantic Segmentation
Open-vocabulary semantic segmentation, also called semantic segmentation in the wild in the literature, has been widely researched along with vision-text pretraining. Its target is to segment an image with arbitrary categories described by texts instead of a fixed labeling vocabulary. As a pioneering work, ZS3Net (Bucher et al., 2019) combines a deep visual segmentation model with a generative model of class-dependent features; this architecture allows the generation of visual samples for unseen classes, which train a classifier together with real visual samples from seen classes. SPNet (Xian et al., 2019) transfers knowledge from previously seen classes to novel classes by incorporating class-level semantic information into any network designed for semantic segmentation.
Due to the impressive zero-shot transferability of CLIP (Radford et al., 2021) on various downstream tasks, one research line is to leverage it for open-vocabulary semantic segmentation. DenseCLIP (Rao et al., 2022) is a dense prediction framework that converts the original image-text matching problem in CLIP to a pixel-text matching problem and uses the pixel-text score maps to guide the learning of dense prediction models. Unlike DenseCLIP, which needs an image decoder to generate the segments and is trained with ground-truth labels, MaskCLIP (Zhou et al., 2022a) uses pseudo per-pixel labels generated from CLIP and self-training to achieve annotation-free segmentation. Similarly, (Zabari & Hoshen, 2021) uses model interpretability to obtain pixel-level pseudo-labels from CLIP to supervise single-image segmentation methods. ZegFormer (Ding et al., 2022a) decouples zero-shot semantic segmentation into two sub-tasks, i.e., grouping the pixels into segments and classifying the segments with CLIP. CLIPSeg (Lüddecke & Ecker, 2022) builds upon the CLIP model as a backbone and can generate image segmentations based on arbitrary prompts. OpenSeg (Ghiasi et al., 2021) also involves proposal generation and segment classification, like ZegFormer, but it needs training with class-agnostic mask annotations to generate mask proposals.
Similarly, ZSSeg (Xu et al., 2022b) proposes a two-stage semantic segmentation framework, with the first stage generating mask proposals and the second stage leveraging CLIP to classify the generated proposals. LSeg (Li et al., 2022a) uses a text encoder to provide a flexible label representation with a transformer-based image encoder trained with a contrastive objective to align pixel embeddings to the text embedding of the corresponding semantic class. OVSeg (Liang et al., 2022) proposes to finetune CLIP on a collection of masked image regions and their corresponding text descriptions. Fusioner (Ma et al., 2022) is a simple, lightweight cross-modality fusion module that can be used to explicitly bridge a variety of self-supervised pre-trained visual/language models for open-vocabulary semantic segmentation.
Unlike previous works, our model requires neither mask proposals nor a segmentation decoder; instead, it uses a plugged semantic group module to aggregate patches into segments. Our work follows the line of GroupViT (Xu et al., 2022a), which learns segmentation masks from text supervision. However, our architecture differs from GroupViT's, and the proposed semantic group module allows the model both to reuse the pre-trained weights from CLIP and to train from scratch on noisy image-text pairs. Moreover, we propose two novel objectives to further improve the visual representation.
5 Conclusion and Future Work
This paper proposes SegCLIP, a CLIP-based model for weakly-supervised semantic segmentation. The model generates plausible segmentation results after training only on annotation-free text-image datasets: it requires no training on segmentation labels, or even on the seen classes of segmentation datasets, before inference, demonstrating strong transferability. Another advantage is the flexibility of the plugged design of the semantic group module, which enables reusing the pre-trained CLIP weights. Moreover, the proposed reconstruction loss and superpixel-based KL loss improve performance, indicating that the image encoder's encoding capacity is essential for capturing semantics before assigning the proper tags. In summary, this work takes a further step toward achieving fine-grained alignment, e.g., semantic segmentation in this paper, by training only on a large scale of image-text pairs.
Limitations SegCLIP utilizes an interpolation operation to smooth the predicted boundaries. However, using regular image patches as the input to the image encoder often leads to rough predictions; reducing the patch size could achieve smoother and more precise boundaries.
Future Work To demonstrate the effect of patch size, we conduct experiments on the VOC, Context, and COCO datasets with a patch size of 32, which yields 49 patches per image. The resulting mIoU scores are 44.2%, 22.0%, and 21.4%, respectively. Compared with the mIoU scores of 52.5%, 24.7%, and 26.5% achieved with a patch size of 16 (196 patches per image), it is evident that larger patch sizes lead to inferior performance. Future research could therefore concentrate on pretraining models with smaller patch sizes.
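The patch counts quoted above follow directly from the image and patch sizes, since a ViT tiles the image into non-overlapping square patches:

```python
def num_patches(image_size, patch_size):
    """Number of non-overlapping patches a ViT extracts from a square image."""
    per_side = image_size // patch_size  # patches along one side
    return per_side * per_side

# 224/16 = 14 patches per side -> 196 patches; 224/32 = 7 per side -> 49 patches
assert num_patches(224, 16) == 196
assert num_patches(224, 32) == 49
```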
Furthermore, we perform additional experiments on the validation sets of ADE20K (Zhou et al., 2017) and Cityscapes (Cordts et al., 2016) to assess more complex scenes. SegCLIP achieves mIoU scores of 8.7% and 11.0% on ADE20K and Cityscapes, respectively, while GroupViT$_{\text{1-s}}$ achieves 4.9% and 4.2% on the same datasets. These results reveal that the complexity and intricacy of the scenes play a crucial role in performance, suggesting that handling complex scenes is a promising research direction for open-vocabulary segmentation.
The current superpixel generation in SegCLIP operates as an offline module and is not trained end-to-end, so exploring end-to-end training techniques for this module could enhance its effectiveness. In addition, overly fine superpixels may result in biased patches, so class-agnostic segmentation methods could be considered to generate better pseudo-labels and improve performance. Furthermore, although SegCLIP can use the pre-trained CLIP model as initialization, there is still room for improvement through post-pretraining on more extensive datasets such as CC12M and YFCC. These directions present both challenges and opportunities for further improving SegCLIP.
Acknowledgments
This work was supported by the National Key R&D Program of China (No. 2020AAA0108600) and the National Science Foundation of China (No. 62176221).
References
- Bain et al. (2021) Bain, M., Nagrani, A., Varol, G., and Zisserman, A. Frozen in time: A joint video and image encoder for end-to-end retrieval. In ICCV, pp. 1708–1718, 2021.
- Bucher et al. (2019) Bucher, M., Vu, T., Cord, M., and Pérez, P. Zero-shot semantic segmentation. In NeurIPS, 2019.
- Caron et al. (2021) Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. Emerging properties in self-supervised vision transformers. In ICCV, pp. 9630–9640, 2021.
- Changpinyo et al. (2021) Changpinyo, S., Sharma, P., Ding, N., and Soricut, R. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In CVPR, pp. 3558–3568, 2021.
- Chefer et al. (2021) Chefer, H., Gur, S., and Wolf, L. Transformer interpretability beyond attention visualization. In CVPR, pp. 782–791, 2021.
- Chen et al. (2015) Chen, L., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A.L. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2015.
- Chen et al. (2018) Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
- Chen et al. (2021) Chen, X., Xie, S., and He, K. An empirical study of training self-supervised vision transformers. In ICCV, pp. 9620–9629, 2021.
- Chen et al. (2020) Chen, Y., Li, L., Yu, L., Kholy, A.E., Ahmed, F., Gan, Z., Cheng, Y., and Liu, J. UNITER: universal image-text representation learning. In ECCV, volume 12375, pp. 104–120, 2020.
- Cheng et al. (2021) Cheng, B., Schwing, A.G., and Kirillov, A. Per-pixel classification is not all you need for semantic segmentation. In NeurIPS, pp. 17864–17875, 2021.
- Cheng et al. (2022) Cheng, B., Misra, I., Schwing, A.G., Kirillov, A., and Girdhar, R. Masked-attention mask transformer for universal image segmentation. In CVPR, pp. 1290–1299, 2022.
- Cordts et al. (2016) Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., and Schiele, B. The cityscapes dataset for semantic urban scene understanding. In CVPR, pp. 3213–3223, 2016.
- Devlin et al. (2019) Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pp. 4171–4186, 2019.
- Ding et al. (2022a) Ding, J., Xue, N., Xia, G., and Dai, D. Decoupling zero-shot semantic segmentation. In CVPR, pp. 11573–11582, 2022a.
- Ding et al. (2022b) Ding, Z., Wang, J., and Tu, Z. Open-vocabulary panoptic segmentation with maskclip. arXiv preprint arXiv:2208.08984, 2022b.
- Dosovitskiy et al. (2021) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
- Everingham et al. (2010) Everingham, M., Gool, L.V., Williams, C. K.I., Winn, J.M., and Zisserman, A. The pascal visual object classes (VOC) challenge. Int. J. Comput. Vis., 88(2):303–338, 2010.
- Felzenszwalb & Huttenlocher (2004) Felzenszwalb, P.F. and Huttenlocher, D.P. Efficient graph-based image segmentation. International journal of computer vision, 59(2):167–181, 2004.
- Gan et al. (2022) Gan, Z., Li, L., Li, C., Wang, L., Liu, Z., and Gao, J. Vision-language pre-training: Basics, recent advances, and future trends. arXiv preprint arXiv:2210.09263, 2022.
- Ghiasi et al. (2021) Ghiasi, G., Gu, X., Cui, Y., and Lin, T.-Y. Scaling open-vocabulary image segmentation with image-level labels. arXiv:2112.12143, 2021.
- Gu et al. (2022) Gu, X., Lin, T.-Y., Kuo, W., and Cui, Y. Open-vocabulary object detection via vision and language knowledge distillation. In ICLR, 2022.
- He et al. (2022) He, K., Chen, X., Xie, S., Li, Y., Dollár, P., and Girshick, R.B. Masked autoencoders are scalable vision learners. In CVPR, pp. 15979–15988, 2022.
- Hendrycks & Gimpel (2016) Hendrycks, D. and Gimpel, K. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415, 2016.
- Huang et al. (2020) Huang, Z., Zeng, Z., Liu, B., Fu, D., and Fu, J. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers. arXiv preprint arXiv:2004.00849, 2020.
- Jain et al. (2022) Jain, J., Li, J., Chiu, M., Hassani, A., Orlov, N., and Shi, H. Oneformer: One transformer to rule universal image segmentation. arXiv preprint arXiv:abs/2211.06220, 2022.
- Jang et al. (2017) Jang, E., Gu, S., and Poole, B. Categorical reparameterization with gumbel-softmax. In ICLR, 2017.
- Jia et al. (2021) Jia, C., Yang, Y., Xia, Y., Chen, Y., Parekh, Z., Pham, H., Le, Q.V., Sung, Y., Li, Z., and Duerig, T. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, volume 139, pp. 4904–4916, 2021.
- Kim et al. (2021) Kim, W., Son, B., and Kim, I. Vilt: Vision-and-language transformer without convolution or region supervision. In ICML, volume 139, pp. 5583–5594, 2021.
- Li et al. (2022a) Li, B., Weinberger, K.Q., Belongie, S., Koltun, V., and Ranftl, R. Language-driven semantic segmentation. In ICLR, 2022a.
- Li et al. (2022b) Li, J., Li, D., Xiong, C., and Hoi, S. C.H. BLIP: bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, volume 162, pp. 12888–12900, 2022b.
- Li et al. (2022c) Li, L., Gan, Z., Lin, K., Lin, C., Liu, Z., Liu, C., and Wang, L. LAVENDER: unifying video-language understanding as masked language modeling. arXiv preprint arXiv:2206.07160, 2022c.
- Li et al. (2022d) Li, L.H., Zhang, P., Zhang, H., Yang, J., Li, C., Zhong, Y., Wang, L., Yuan, L., Zhang, L., Hwang, J., Chang, K., and Gao, J. Grounded language-image pre-training. In CVPR, pp. 10955–10965, 2022d.
- Li et al. (2021) Li, W., Gao, C., Niu, G., Xiao, X., Liu, H., Liu, J., Wu, H., and Wang, H. UNIMO: towards unified-modal understanding and generation via cross-modal contrastive learning. In ACL/IJCNLP, pp. 2592–2607, 2021.
- Li et al. (2022e) Li, Y., Liang, F., Zhao, L., Cui, Y., Ouyang, W., Shao, J., Yu, F., and Yan, J. Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. In ICLR, 2022e.
- Liang et al. (2022) Liang, F., Wu, B., Dai, X., Li, K., Zhao, Y., Zhang, H., Zhang, P., Vajda, P., and Marculescu, D. Open-vocabulary semantic segmentation with mask-adapted CLIP. arXiv preprint arXiv:abs/2210.04150, 2022.
- Lin et al. (2014) Lin, T., Maire, M., Belongie, S.J., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C.L. Microsoft COCO: common objects in context. In ECCV, volume 8693, pp. 740–755, 2014.
- Long et al. (2015) Long, J., Shelhamer, E., and Darrell, T. Fully convolutional networks for semantic segmentation. In CVPR, pp. 3431–3440, 2015.
- Lüddecke & Ecker (2022) Lüddecke, T. and Ecker, A.S. Image segmentation using text and image prompts. In CVPR, pp. 7076–7086, 2022.
- Luo et al. (2020) Luo, H., Ji, L., Shi, B., Huang, H., Duan, N., Li, T., Chen, X., and Zhou, M. Univl: A unified video and language pre-training model for multimodal understanding and generation. arXiv preprint arXiv:2002.06353, 2020.
- Ma et al. (2022) Ma, C., Yang, Y., Wang, Y., Zhang, Y., and Xie, W. Open-vocabulary semantic segmentation with frozen vision-language models. arXiv preprint arXiv:abs/2210.15138, 2022.
- Miech et al. (2019) Miech, A., Zhukov, D., Alayrac, J., Tapaswi, M., Laptev, I., and Sivic, J. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In ICCV, pp. 2630–2640, 2019.
- Mottaghi et al. (2014) Mottaghi, R., Chen, X., Liu, X., Cho, N., Lee, S., Fidler, S., Urtasun, R., and Yuille, A.L. The role of context for object detection and semantic segmentation in the wild. In CVPR, pp. 891–898, 2014.
- Radford et al. (2021) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., and Sutskever, I. Learning transferable visual models from natural language supervision. In ICML, volume 139, pp. 8748–8763, 2021.
- Rao et al. (2022) Rao, Y., Zhao, W., Chen, G., Tang, Y., Zhu, Z., Huang, G., Zhou, J., and Lu, J. Denseclip: Language-guided dense prediction with context-aware prompting. In CVPR, pp. 18061–18070, 2022.
- Ronneberger et al. (2015) Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pp. 234–241, 2015.
- Schuhmann et al. (2021) Schuhmann, C., Vencu, R., Beaumont, R., Kaczmarczyk, R., Mullis, C., Katta, A., Coombes, T., Jitsev, J., and Komatsuzaki, A. LAION-400M: open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021.
- Selvaraju et al. (2017) Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, pp. 618–626, 2017.
- Sharma et al. (2018) Sharma, P., Ding, N., Goodman, S., and Soricut, R. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, pp. 2556–2565, 2018.
- Sun et al. (2019) Sun, C., Myers, A., Vondrick, C., Murphy, K., and Schmid, C. Videobert: A joint model for video and language representation learning. In ICCV, pp. 7463–7472, 2019.
- Tan & Bansal (2019) Tan, H. and Bansal, M. LXMERT: learning cross-modality encoder representations from transformers. In EMNLP, 2019.
- Thomee et al. (2016) Thomee, B., Shamma, D.A., Friedland, G., Elizalde, B., Ni, K., Poland, D., Borth, D., and Li, L. YFCC100M: the new data in multimedia research. Commun. ACM, 59(2):64–73, 2016.
- Touvron et al. (2021) Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. Training data-efficient image transformers & distillation through attention. In ICML, volume 139, pp. 10347–10357, 2021.
- Vaswani et al. (2017) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. Attention is all you need. In NeurIPS, pp. 5998–6008, 2017.
- Wang et al. (2022a) Wang, R., Chen, D., Wu, Z., Chen, Y., Dai, X., Liu, M., Jiang, Y.-G., Zhou, L., and Yuan, L. Bevt: Bert pretraining of video transformers. In CVPR, pp. 14733–14743, 2022a.
- Wang et al. (2022b) Wang, Z., Yu, J., Yu, A.W., Dai, Z., Tsvetkov, Y., and Cao, Y. Simvlm: Simple visual language model pretraining with weak supervision. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022, 2022b.
- Wen et al. (2022) Wen, X., Zhao, B., Zheng, A., Zhang, X., and Qi, X. Self-supervised visual representation learning with semantic grouping. In NeurIPS, 2022.
- Wu et al. (2020) Wu, C., Lin, Z., Cohen, S., Bui, T., and Maji, S. Phrasecut: Language-based image segmentation in the wild. In CVPR, pp. 10213–10222, 2020.
- Xian et al. (2019) Xian, Y., Choudhury, S., He, Y., Schiele, B., and Akata, Z. Semantic projection network for zero- and few-label semantic segmentation. In CVPR, pp. 8256–8265, 2019.
- Xie et al. (2021) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., and Luo, P. Segformer: Simple and efficient design for semantic segmentation with transformers. In NeurIPS, pp. 12077–12090, 2021.
- Xu et al. (2022a) Xu, J., De Mello, S., Liu, S., Byeon, W., Breuel, T., Kautz, J., and Wang, X. Groupvit: Semantic segmentation emerges from text supervision. In CVPR, pp. 18134–18144, 2022a.
- Xu et al. (2022b) Xu, M., Zhang, Z., Wei, F., Lin, Y., Cao, Y., Hu, H., and Bai, X. A simple baseline for open vocabulary semantic segmentation with pre-trained vision-language model. ECCV, 2022b.
- Yao et al. (2022) Yao, L., Huang, R., Hou, L., Lu, G., Niu, M., Xu, H., Liang, X., Li, Z., Jiang, X., and Xu, C. FILIP: fine-grained interactive language-image pre-training. In ICLR, 2022.
- Zabari & Hoshen (2021) Zabari, N. and Hoshen, Y. Semantic segmentation in-the-wild without seeing any segmentation examples. arXiv:2112.03185, 2021.
- Zeng et al. (2022) Zeng, Y., Zhang, X., and Li, H. Multi-grained vision language pre-training: Aligning texts with visual concepts. In ICML, volume 162, pp. 25994–26009, 2022.
- Zhao et al. (2017) Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J. Pyramid scene parsing network. In CVPR, pp. 6230–6239, 2017.
- Zheng et al. (2021) Zheng, S., Lu, J., Zhao, H., Zhu, X., Luo, Z., Wang, Y., Fu, Y., Feng, J., Xiang, T., Torr, P.H., and Zhang, L. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. In CVPR, 2021.
- Zhong et al. (2022) Zhong, Y., Yang, J., Zhang, P., Li, C., Codella, N., Li, L.H., Zhou, L., Dai, X., Yuan, L., Li, Y., and Gao, J. Regionclip: Region-based language-image pretraining. In CVPR, pp. 16772–16782, 2022.
- Zhou et al. (2017) Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., and Torralba, A. Scene parsing through ade20k dataset. In CVPR, pp. 633–641, 2017.
- Zhou et al. (2022a) Zhou, C., Loy, C.C., and Dai, B. Extract free dense labels from clip. In ECCV, 2022a.
- Zhou et al. (2022b) Zhou, J., Wei, C., Wang, H., Shen, W., Xie, C., Yuille, A.L., and Kong, T. iBOT: Image BERT pre-training with online tokenizer. In ICLR, 2022b.