Title: JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues

URL Source: https://arxiv.org/html/2310.09503

Published Time: Fri, 26 Jan 2024 14:28:04 GMT

Markdown Content:
Jiayi Ji, Haowei Wang, Changli Wu, Yiwei Ma, Xiaoshuai Sun, Rongrong Ji

J. Ji, H. Wang, C. Wu, Y. Ma, X. Sun, and R. Ji are with the Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China. J. Ji and H. Wang contributed equally to this work.

###### Abstract

The rising importance of 3D understanding, pivotal in computer vision, autonomous driving, and robotics, is evident. However, a prevailing trend that straightforwardly transfers 2D alignment strategies to the 3D domain encounters three distinct challenges: (1) Information Degradation: this arises from aligning 3D data with mere single-view 2D images and generic texts, neglecting the need for multi-view images and detailed subcategory texts. (2) Insufficient Synergy: these strategies align 3D representations to image and text features individually, hampering the overall optimization for 3D models. (3) Underutilization: the fine-grained information inherent in the learned representations is often not fully exploited, indicating a potential loss in detail. To address these issues, we introduce JM3D, a comprehensive approach integrating point cloud, text, and image. Key contributions include the Structured Multimodal Organizer (SMO), enriching vision-language representation with multiple views and hierarchical text, and the Joint Multi-modal Alignment (JMA), combining language understanding with visual representation. Our advanced model, JM3D-LLM, marries 3D representation with large language models via efficient fine-tuning.
Evaluations on ModelNet40 and ScanObjectNN establish JM3D’s superiority. The superior performance of JM3D-LLM further underscores the effectiveness of our representation transfer approach. Our code and models are available at [https://github.com/Mr-Neko/JM3D](https://github.com/Mr-Neko/JM3D).

###### Index Terms:

3D Understanding, Structured Multimodal Organizer, Joint Multi-modal Alignment, Large Language Model.

1 Introduction
--------------

The study of 3D model understanding[[1](https://arxiv.org/html/2310.09503v3#bib.bib1), [2](https://arxiv.org/html/2310.09503v3#bib.bib2), [3](https://arxiv.org/html/2310.09503v3#bib.bib3), [4](https://arxiv.org/html/2310.09503v3#bib.bib4), [5](https://arxiv.org/html/2310.09503v3#bib.bib5), [6](https://arxiv.org/html/2310.09503v3#bib.bib6), [7](https://arxiv.org/html/2310.09503v3#bib.bib7)] has gained prominence, especially in areas like augmented/virtual reality[[8](https://arxiv.org/html/2310.09503v3#bib.bib8), [9](https://arxiv.org/html/2310.09503v3#bib.bib9), [10](https://arxiv.org/html/2310.09503v3#bib.bib10)] and autonomous driving[[11](https://arxiv.org/html/2310.09503v3#bib.bib11), [12](https://arxiv.org/html/2310.09503v3#bib.bib12)]. However, limited data availability and suboptimal category representation pose challenges for 3D understanding, especially when compared to the abundance of image-text pair data.

Addressing the challenge of limited 3D data, recent studies[[13](https://arxiv.org/html/2310.09503v3#bib.bib13), [14](https://arxiv.org/html/2310.09503v3#bib.bib14), [15](https://arxiv.org/html/2310.09503v3#bib.bib15)] have leveraged the rich resources of other modalities. They utilize large-scale vision-language models like CLIP[[16](https://arxiv.org/html/2310.09503v3#bib.bib16)] to enhance 3D representation. The core idea is to integrate 3D features into the combined vision-language space, leveraging the robust zero-shot capabilities of foundational models.
Commonly, an image rendered from a specific view of a 3D model, paired with a basic category label, is fed into CLIP. The 3D features are subsequently aligned to the vision-language domain through a contrastive methodology. As demonstrated by ULIP[[14](https://arxiv.org/html/2310.09503v3#bib.bib14)] and CG3D[[13](https://arxiv.org/html/2310.09503v3#bib.bib13)], this approach, enriched by external information, markedly bolsters 3D understanding and showcases commendable transferability.

While prevalent methods[[13](https://arxiv.org/html/2310.09503v3#bib.bib13), [14](https://arxiv.org/html/2310.09503v3#bib.bib14), [15](https://arxiv.org/html/2310.09503v3#bib.bib15)] lean on 2D alignment techniques for 3D representation learning, they often overlook the inherent complexities of 3D models. This oversight results in three principal challenges: (1) Information Degradation: by aligning 3D models with single-view images and broad text descriptions, essential spatial and depth details are lost. For example, in single-view images such as Fig.[1](https://arxiv.org/html/2310.09503v3#S1.F1 "Figure 1 ‣ 1 Introduction ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), the airplane’s front render misses the wing specifics, much like its rear view. On the textual side, a generalized label like “airplane” does not distinguish between specific aircraft categories, such as airliners, jets, or bombers. (2) Insufficient Synergy: these methods align 3D representations with image features and text features separately, neglecting the joint modeling of the vision and language modalities. This complicates the optimization of 3D representations, making it difficult to determine whether to move closer to image features or text features, and to what extent, leading to incomplete information utilization.
(3) Underutilization: moreover, the learned representations frequently fall short in harnessing and further cultivating the available granular details. A successful transfer from 2D representations should theoretically endow the 3D model not just with categorical insights but also with finer attributes and subtler characteristics.

![Image 1: Refer to caption](https://arxiv.org/html/2310.09503v3/x1.png)

Figure 1: The visualization of JM3D. JM3D coherently aligns the 3D modality with the previously aligned vision and language modalities, forming a consolidated tri-modal representation. The derived representation then finds application in tasks such as image-3D retrieval and zero-shot 3D classification, and interfaces with an LLM to discern more granular information.

To address the above challenges, we introduce JM3D, a novel multi-modal approach for comprehensive 3D representation learning, as showcased in Fig.[1](https://arxiv.org/html/2310.09503v3#S1.F1 "Figure 1 ‣ 1 Introduction ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"). JM3D pivots on two primary modules: the Structured Multi-modal Organizer (SMO) and the Joint Multi-modal Alignment (JMA). We first propose the SMO to address information degradation; it enhances each modality independently. For visual enhancement, we assert that a 3D model should correlate with a continuous array of images spanning diverse angles. Consequently, a Continuous Image Sequence (CIS) is introduced to model multiple viewpoints concurrently. To capture intricate details, we encode attributes such as angle, color, and depth into these images. From a linguistic perspective, our Hierarchical Text Tree (HTT) augments the textual representation. By incorporating specific sub-categories, like “jet”, “airliner”, or “bomber”, the system gains granularity in understanding. Meanwhile, coarser classifications like “airplane” consolidate semantically related sub-categories, reinforcing the robustness of our model.
We then design the JMA to tackle the insufficient synergy. JMA synergistically aligns the visual and language modalities, engaging them concurrently during optimization. This synergy ensures that the 3D model effectively harnesses insights from both domains. Furthermore, we provide a theoretical framework underscoring our approach’s effectiveness, laying the groundwork for forthcoming explorations in the domain. Building upon the feature alignment accomplished by our JM3D framework, the 3D representation seamlessly synchronizes with textual descriptions, setting the stage for its integration into a Large Language Model (LLM). Motivated by the well-regarded technique of instruction tuning, we employ parameter-efficient fine-tuning to embed 3D semantics into the LLM. This fusion culminates in JM3D-LLM, enabling the LLM to parse more granular cues and thereby addressing the underutilization issue. To anchor this integration, we have curated a conversational dataset from Cap3D tailored for training.

Our comprehensive experiments on both ModelNet40 and ScanObjectNN underscore the potency of our introduced approach, JM3D, which sets a new benchmark in zero-shot 3D classification. In particular, compared to ULIP, JM3D demonstrates a substantial performance advantage: it improves top-1 accuracy in zero-shot 3D classification on ModelNet40 by approximately 4.3% with PointMLP, and by up to 6.5% with PointNet++. The results achieved with JM3D-LLM further reinforce the strength and adaptability of our representation. The nuanced and detailed descriptions generated by JM3D highlight its proficiency in discerning and capturing intricate features.

In summary, this paper presents three key contributions:

*   To combat the challenge of information degradation, we introduce the Structured Multimodal Organizer (SMO).
This framework generates a continuous sequence of multi-view rendered images and establishes a hierarchical text tree. Through the augmentation of both visual and textual modalities, the SMO offsets the diminishing 3D visual attributes, thereby facilitating a richer and more encompassing representation.
*   To address the challenge of inadequate synergy, we introduce the Joint Multi-modal Alignment (JMA). This method seamlessly integrates the textual and visual modalities, leading to a unified representation. By doing so, the JMA minimizes potential suboptimal outcomes and enhances the coherent interpretation of image-text pairings.
*   We introduce JM3D-LLM, which seamlessly embeds 3D representations into a Large Language Model (LLM) using an efficient training method. Evaluations on ModelNet40 and ScanObjectNN highlight JM3D’s excellence in zero-shot 3D classification. The exciting results achieved by JM3D-LLM further emphasize JM3D’s robust transfer learning capabilities and its ability to discern important features.

2 Related work
--------------

### 2.1 Representation Learning in 3D Space

3D representation learning seeks to derive semantic features of 3D models. Among the various representation techniques for such models, point clouds have emerged as a favored input format in deep learning due to their sparsity and discreteness[[17](https://arxiv.org/html/2310.09503v3#bib.bib17), [18](https://arxiv.org/html/2310.09503v3#bib.bib18), [19](https://arxiv.org/html/2310.09503v3#bib.bib19), [20](https://arxiv.org/html/2310.09503v3#bib.bib20), [21](https://arxiv.org/html/2310.09503v3#bib.bib21), [22](https://arxiv.org/html/2310.09503v3#bib.bib22), [7](https://arxiv.org/html/2310.09503v3#bib.bib7)]. Initial approaches[[21](https://arxiv.org/html/2310.09503v3#bib.bib21), [23](https://arxiv.org/html/2310.09503v3#bib.bib23)] relied on extracting point cloud data from voxels and applying convolutions for global feature capture.
However, subsequent strategies, such as PointNet[[24](https://arxiv.org/html/2310.09503v3#bib.bib24)], PointNext[[25](https://arxiv.org/html/2310.09503v3#bib.bib25)], and PointMLP[[26](https://arxiv.org/html/2310.09503v3#bib.bib26)], devised architectures tailored for point clouds. In particular, PointNet pioneered the extraction of permutation-invariant features from point clouds, laying the groundwork for future methodological designs. The recent PointMLP employs dual MLP blocks and a geometric transformer, achieving notable outcomes without necessitating intricate local geometric feature extractors.

The advent of transformers[[27](https://arxiv.org/html/2310.09503v3#bib.bib27)] has spurred self-supervised learning methods[[28](https://arxiv.org/html/2310.09503v3#bib.bib28), [29](https://arxiv.org/html/2310.09503v3#bib.bib29), [30](https://arxiv.org/html/2310.09503v3#bib.bib30)] that generate augmented point clouds. These methods leverage encoder-decoder architectures to reconstruct point clouds, a strategy found effective in models like PointBert[[31](https://arxiv.org/html/2310.09503v3#bib.bib31)], Point-MAE[[32](https://arxiv.org/html/2310.09503v3#bib.bib32)], and Point-M2AE[[33](https://arxiv.org/html/2310.09503v3#bib.bib33)].

Nevertheless, independent of architectural innovations, the paramount challenge in 3D representation learning remains the small scale of available datasets. Many methods grapple with rudimentary category annotations and scarce data, leading to compromised robustness in real-world scenarios.

### 2.2 Representation Learning in Multi-modal Space

Representation learning in multi-modal space mainly aims at aligning semantic features from different modalities. Contemporary techniques largely fall into two categories.
The first merges features from various modalities using intricate architectures[[34](https://arxiv.org/html/2310.09503v3#bib.bib34), [35](https://arxiv.org/html/2310.09503v3#bib.bib35), [36](https://arxiv.org/html/2310.09503v3#bib.bib36), [37](https://arxiv.org/html/2310.09503v3#bib.bib37)] and fosters alignment through tasks like retrieval and comprehension. Such methods[[34](https://arxiv.org/html/2310.09503v3#bib.bib34), [35](https://arxiv.org/html/2310.09503v3#bib.bib35), [38](https://arxiv.org/html/2310.09503v3#bib.bib38), [39](https://arxiv.org/html/2310.09503v3#bib.bib39), [40](https://arxiv.org/html/2310.09503v3#bib.bib40)] predominantly delve into the synergy between image regions and their corresponding textual descriptions.

On the other hand, methods exemplified by CLIP[[16](https://arxiv.org/html/2310.09503v3#bib.bib16)] lean on contrastive learning across the visual and linguistic domains, achieving direct feature alignment with an extensive set of positive and negative examples. Successive techniques have refined CLIP’s alignment, considering both the scope of data and its granularity. Specifically, the GLIP series[[41](https://arxiv.org/html/2310.09503v3#bib.bib41), [42](https://arxiv.org/html/2310.09503v3#bib.bib42)] seeks nuanced alignment through detection-oriented tasks, while Flamingo[[43](https://arxiv.org/html/2310.09503v3#bib.bib43)] employs a more expansive dataset. Concurrently, works[[44](https://arxiv.org/html/2310.09503v3#bib.bib44), [45](https://arxiv.org/html/2310.09503v3#bib.bib45), [46](https://arxiv.org/html/2310.09503v3#bib.bib46), [47](https://arxiv.org/html/2310.09503v3#bib.bib47), [48](https://arxiv.org/html/2310.09503v3#bib.bib48)] validate the CLIP model’s adaptability, underscoring its efficacy even when applied to diverse modalities, including video and text.
### 2.3 Enhancing 3D Representation through Multi-modality

The merits of multi-modal representation learning, especially its enhanced performance from assimilating diverse modalities, are becoming increasingly evident. Tapping into the rich knowledge reservoir of image-text pre-trained models to augment 3D model representations offers promising avenues[[49](https://arxiv.org/html/2310.09503v3#bib.bib49), [50](https://arxiv.org/html/2310.09503v3#bib.bib50), [51](https://arxiv.org/html/2310.09503v3#bib.bib51), [52](https://arxiv.org/html/2310.09503v3#bib.bib52)]. Pioneering this approach, the PointCLIP series[[53](https://arxiv.org/html/2310.09503v3#bib.bib53), [54](https://arxiv.org/html/2310.09503v3#bib.bib54)] leveraged a visual-language pre-trained model for point cloud models, transforming point clouds into a sequence of depth images that were subsequently input into CLIP for zero-shot classification. However, its potential to truly enhance the expressiveness of point clouds remained untapped. ULIP[[14](https://arxiv.org/html/2310.09503v3#bib.bib14)] and CG3D[[13](https://arxiv.org/html/2310.09503v3#bib.bib13)], in contrast, endeavor to adapt the CLIP paradigm directly, striving for a cohesive representation space encompassing point clouds, text, and images. They deploy a contrastive learning approach, delineating relations between the point cloud modality and the visual and language modalities independently. Yet such methods may be shortsighted, as they often equate the richness of a solitary image or text fragment to a comprehensive 3D model. Furthermore, this separation fails to capture the joint distribution intrinsic to image-text combinations.
In parallel developments, ULIP2[[55](https://arxiv.org/html/2310.09503v3#bib.bib55)] approached the challenge from a data-centric perspective, generating extensive descriptions for images from each viewpoint and significantly augmenting the scalability and comprehensiveness of 3D representations. In contrast, our work focuses on enhancing capabilities through innovative training strategies. The contributions of the two methodologies are thus orthogonal and complementary.

![Image 2: Refer to caption](https://arxiv.org/html/2310.09503v3/x2.png)

Figure 2: The framework of JM3D. On the left, the Continuous Image Sequence (CIS) and Hierarchical Text Tree (HTT) organize continuous multi-view images and hierarchical texts respectively, which are fed into a pre-trained (frozen) model to extract features. Then, the Joint Multi-modal Alignment (JMA) incorporates the features from the two modalities to generate the joint modeling features. Finally, contrastive learning is applied to align 3D features (training) with the joint features and subcategory texts, while 3D features are aggregated with the assistance of the parent category.

### 2.4 Large Language Model

In contemporary natural language processing (NLP) research, the emphasis has increasingly shifted towards Large Language Models (LLMs), characterized by their expansive parameter counts and intensive pre-training on vast datasets. The efficacy of LLMs is evident in their superior performance across varied linguistic tasks such as translation, reasoning, and conversational engagement. The GPT series[[56](https://arxiv.org/html/2310.09503v3#bib.bib56), [57](https://arxiv.org/html/2310.09503v3#bib.bib57), [16](https://arxiv.org/html/2310.09503v3#bib.bib16)] stands as a testament to this trend, consistently enhancing both model size and training data breadth.
In the quest for optimal data utilization, the concept of prompt learning[[58](https://arxiv.org/html/2310.09503v3#bib.bib58), [59](https://arxiv.org/html/2310.09503v3#bib.bib59), [60](https://arxiv.org/html/2310.09503v3#bib.bib60), [61](https://arxiv.org/html/2310.09503v3#bib.bib61)] emerged, facilitating a streamlined approach to diverse task formats. This has paved the way for models like InstructGPT[[62](https://arxiv.org/html/2310.09503v3#bib.bib62)], which boasts a staggering 175 billion parameters. Building on this foundation, techniques such as Supervised Fine-tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) have been introduced, refining LLM outputs to more closely resemble human-like interactions, as seen in ChatGPT and LLaMA[[63](https://arxiv.org/html/2310.09503v3#bib.bib63)]. The landscape further evolved with multi-modal approaches like MiniGPT-4[[64](https://arxiv.org/html/2310.09503v3#bib.bib64)], LLaVA[[65](https://arxiv.org/html/2310.09503v3#bib.bib65)], and LLaMA-Adapter[[66](https://arxiv.org/html/2310.09503v3#bib.bib66)], leveraging LLMs’ inferential prowess to decipher visual content. This progression signals an intriguing trajectory: harnessing LLMs to amalgamate 3D data for enriched comprehension. Concurrently, PointLLM[[67](https://arxiv.org/html/2310.09503v3#bib.bib67)] and Point-LLM[[68](https://arxiv.org/html/2310.09503v3#bib.bib68)] have also delved into this direction. Compared to their efforts, our work specifically investigates the benefits of finer-grained 3D representations in aiding the understanding capabilities of LLMs.
3 Joint Multi-modal 3D Representation Learning
----------------------------------------------

We first provide an overview of the contrastive learning framework in Sec.[3.1](https://arxiv.org/html/2310.09503v3#S3.SS1 "3.1 Preliminary ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), highlighting its key components and limitations. Next, we delve into the details of our proposed SMO in Sec.[3.2](https://arxiv.org/html/2310.09503v3#S3.SS2 "3.2 Structured Multi-modal Organizer ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"). Then, we introduce the JMA in Sec.[3.3](https://arxiv.org/html/2310.09503v3#S3.SS3 "3.3 Joint Multi-modal Alignment ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), focusing on how it optimizes the model’s alignment process. Finally, we outline the training objectives for constructing the unified triplet-modality model in Sec.[3.4](https://arxiv.org/html/2310.09503v3#S3.SS4 "3.4 Training Objective ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"). See Fig.[2](https://arxiv.org/html/2310.09503v3#S2.F2 "Figure 2 ‣ 2.3 Enhancing 3D Representation through Multi-modality ‣ 2 Related work ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues") for an overview of the framework.

### 3.1 Preliminary

ULIP[[14](https://arxiv.org/html/2310.09503v3#bib.bib14)] explores the transfer of 2D contrastive learning to 3D by constructing a dataset combining point clouds, rendered images, and language descriptions from ShapeNet55[[69](https://arxiv.org/html/2310.09503v3#bib.bib69)].
For each CAD model, a triplet sample $S_i:(I_i,T_i,C_i)$ is created, consisting of a rendered image $I_i\in\mathbb{R}^{H\times W\times 3}$, a text description $T_i$, and a sampled point cloud $C_i\in\mathbb{R}^{N_c}$, where $H$, $W$, and $N_c$ represent the height and width of the image and the number of sampled points, respectively.

Specifically, the image $I_i$ undergoes rendering at a random angle, whereas the description $T_i$ is generated by combining a fixed prompt with a broad category, _i.e._, “a 3D representation of [CLASS]”. To accommodate various point cloud backbones, the point cloud $C_i$ is evenly sampled.
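The triplet construction above can be sketched as follows. This is a minimal illustration only: `render_view` and `sample_points` are hypothetical placeholders standing in for the renderer and point sampler used in practice.

```python
import random

PROMPT = "a 3D representation of {cls}"  # the fixed prompt used for T_i


def render_view(model_id, angle):
    # Placeholder renderer: a real pipeline would rasterize the CAD model
    # at the given angle; here we return a dummy H x W x 3 image.
    H, W = 32, 32  # small placeholder resolution
    return [[[0.0] * 3 for _ in range(W)] for _ in range(H)]


def sample_points(model_id, n_points):
    # Placeholder standing in for the even sampling of the point cloud C_i.
    return [(random.random(), random.random(), random.random())
            for _ in range(n_points)]


def build_triplet(model_id, category, n_points=1024):
    """Assemble one triplet S_i = (I_i, T_i, C_i) for a CAD model."""
    angle = random.uniform(0, 360)              # I_i is rendered at a random angle
    image = render_view(model_id, angle)        # I_i in R^{H x W x 3}
    text = PROMPT.format(cls=category)          # T_i = fixed prompt + broad category
    cloud = sample_points(model_id, n_points)   # C_i with N_c sampled points
    return image, text, cloud
```

For a model labeled “chair”, `build_triplet` yields the caption “a 3D representation of chair” alongside one random-angle render and one sampled cloud.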
The objective is to bring the representations of the triplet modalities together in a cohesive semantic space, which can be stated as follows:

$$P(C,I,T)=P(C|I,T)\cdot P(I,T).\tag{1}$$

In prior research[[14](https://arxiv.org/html/2310.09503v3#bib.bib14), [13](https://arxiv.org/html/2310.09503v3#bib.bib13)], Eq.[1](https://arxiv.org/html/2310.09503v3#S3.E1 "1 ‣ 3.1 Preliminary ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues") has been simplified based on an approximate assumption, which posits the conditional independence of the vision modality $I$ and the language modality $T$. Consequently, the joint-modality conditional probability $P(C|I,T)$ is overlooked, and ULIP’s simplification of aligning each modality individually can be outlined as:

$$P(C,I,T)=P(C|I)\cdot P(C|T)\cdot P(I,T).\tag{2}$$

As Eq.[2](https://arxiv.org/html/2310.09503v3#S3.E2 "2 ‣ 3.1 Preliminary ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues") shows, although ULIP adopts contrastive learning to learn the tri-modal joint distribution effectively, it aligns the 3D features with the language and vision features separately.
Specifically, the joint probability distribution P⁢(I,T)𝑃 𝐼 𝑇 P(I,T)italic_P ( italic_I , italic_T ) can be obtained by employing a pre-trained vision-language model, such as CLIP[[16](https://arxiv.org/html/2310.09503v3#bib.bib16)]. This pre-trained model serves as a valuable resource for pre-aligned features, while various backbones[[26](https://arxiv.org/html/2310.09503v3#bib.bib26), [70](https://arxiv.org/html/2310.09503v3#bib.bib70), [31](https://arxiv.org/html/2310.09503v3#bib.bib31)] extract the 3D features. These operations can be formulated as the following: + +h i C,h i I,h i T=f C⁢(C i),f I⁢(I i),f T⁢(T i),formulae-sequence superscript subscript ℎ 𝑖 𝐶 superscript subscript ℎ 𝑖 𝐼 superscript subscript ℎ 𝑖 𝑇 subscript 𝑓 𝐶 subscript 𝐶 𝑖 subscript 𝑓 𝐼 subscript 𝐼 𝑖 subscript 𝑓 𝑇 subscript 𝑇 𝑖 h_{i}^{C},\ h_{i}^{I},\ h_{i}^{T}=f_{C}({C}_{i}),\ f_{I}(I_{i}),\ f_{T}(T_{i}),italic_h start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_C end_POSTSUPERSCRIPT , italic_h start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_I end_POSTSUPERSCRIPT , italic_h start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_T end_POSTSUPERSCRIPT = italic_f start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT ( italic_C start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) , italic_f start_POSTSUBSCRIPT italic_I end_POSTSUBSCRIPT ( italic_I start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) , italic_f start_POSTSUBSCRIPT italic_T end_POSTSUBSCRIPT ( italic_T start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ) ,(3) + +where f C subscript 𝑓 𝐶 f_{C}italic_f start_POSTSUBSCRIPT italic_C end_POSTSUBSCRIPT is a 3D backbone. 
$f_I$ and $f_T$ are the vision encoder and language encoder from the pre-trained vision-language model, adopting vanilla transformer[[27](https://arxiv.org/html/2310.09503v3#bib.bib27)] structures. The vectors $h_i^C\in\mathbb{R}^D$, $h_i^I\in\mathbb{R}^D$, and $h_i^T\in\mathbb{R}^D$ denote the 3D, vision, and language features, respectively, and $D$ is the dimension of the final representation vectors.
A contrastive approach[[16](https://arxiv.org/html/2310.09503v3#bib.bib16)] is then adopted to construct the conditional distribution between any two modalities in Eq.[2](https://arxiv.org/html/2310.09503v3#S3.E2 "2 ‣ 3.1 Preliminary ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), aligning $h_i^C$, $h_i^I$, and $h_i^T$ by:

$$\mathcal{L}_{(h^{M_1},h^{M_2})}=\sum_{i}-\frac{1}{2}\log\frac{\exp\left(\mathbf{h}_i^{M_1}\mathbf{h}_i^{M_2}\right)}{\sum_{k}\exp\left(\mathbf{h}_i^{M_1}\mathbf{h}_k^{M_2}\right)},\tag{4}$$

where $M=(M_1,M_2)\in\{(T,I),(C,T),(C,I)\}$ represents the combination of pairwise modalities.

### 3.2 Structured Multi-modal Organizer

The Structured Multi-modal Organizer (SMO) is a data-refinement module that fills the information gap between the 3D model $C_i$ and the image $I_i$ and text $T_i$. For example, from a visual perspective, consider a 3D model of a car: a single frontal rendered image may not capture crucial information about the rear end of the vehicle. Similarly, for the language aspect, the term “bottle” does not accurately represent specific models like “jug”, “beer”, or “flask”.
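Before moving on, the pairwise objective of Eq. (4) can be made concrete. The following is a minimal NumPy sketch, under the assumptions that features are already L2-normalized, that the loss is averaged (rather than summed) over the batch, and that the learnable temperature used in CLIP-style training is omitted:

```python
import numpy as np


def pairwise_contrastive_loss(h_m1, h_m2):
    """InfoNCE-style loss between two modalities, as in Eq. (4).

    h_m1, h_m2: (B, D) arrays of features for B paired samples
    (e.g. 3D vs. image features). Matching rows are positives;
    the other rows in the batch act as negatives.
    """
    logits = h_m1 @ h_m2.T                         # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # -1/2 * log p(i | i): the diagonal entries are the positive pairs.
    return float(-0.5 * np.mean(np.diag(log_prob)))


# Summing this loss over the (T, I), (C, T), and (C, I) pairs reproduces
# the separate per-modality alignment that Eq. (2) describes.
```

With perfectly matched features the loss is lower than with mismatched ones, which is exactly the behavior the alignment objective relies on.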
To alleviate this information loss, we adopt a multi-view approach to organize the data in the triplet sample $S_i$, redefining the triplet as:

$$
S_{i}:\left(\left[I_{i1},\cdots,I_{iv}\right],\left[T_{i}^{p},T_{i}^{s}\right],C_{i}\right).
\tag{5}
$$

To ensure a fairer and more comprehensive alignment among the three modalities, SMO incorporates $v$ images from the Continuous Image Sequence (CIS) and structured texts from the Hierarchical Text Tree (HTT). These additional visual and textual cues capture more accurate and detailed associations between the 3D models, their visual representations, and the accompanying textual descriptions.

#### 3.2.1 Continuous Image Sequence

On the visual side, a single synthetic image captured from a random angle provides only a partial representation of a 3D object, so we introduce a set of multi-view rendered images to enrich the semantic information.
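A minimal sketch of how such a constrained multi-view sample could be drawn, assuming 30 candidate renderings at 12-degree spacing and a 60-degree angular window; the function name, defaults, and sampling scheme are illustrative, not the paper's actual code:

```python
import random

def windowed_view_sample(num_candidates=30, step_deg=12, v=4, window_deg=60, seed=0):
    """Sample v distinct views whose rendering angles all fall inside one
    angular window, so the views used in a training step stay close together."""
    rng = random.Random(seed)
    slots = window_deg // step_deg                 # candidate slots per window
    start = rng.randrange(num_candidates)          # random window origin (wraps)
    window = [(start + k) % num_candidates for k in range(slots)]
    views = rng.sample(window, v)                  # v distinct views in the window
    angles = [i * step_deg for i in views]
    return views, angles
```

Any pair of sampled angles then differs by less than the window width (circularly), which is the stability constraint motivating the design.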
More concretely, we render RGB and depth images at regular intervals of 12 degrees, yielding a candidate image set $C_{I}\in\mathbb{R}^{30\times 3}$. However, large angular deviations between consecutively sampled images $[I_{1},\cdots,I_{v}]$ can destabilize training due to the discrete nature of the sampling process. To overcome this, during each training step we sample the $v$ images within a restricted angular range:

$$
\left[I_{1},\cdots,I_{v}\right]=WS(C_{I}),\quad\left|\angle I_{i}-\angle I_{j}\right|<\omega,\ \forall i,j\in[1,v],
\tag{6}
$$

where $WS(\cdot)$ denotes sampling within a specific angular range, $\angle I_{i}$ is the rendering angle of the $i$-th image, and $\omega$ is a hyperparameter set to $60^{\circ}$ based on our experiments.

The image encoder embeds the multi-view images into feature vectors. Since angle and depth carry positional information within the images, we additionally apply angle and depth encodings, inspired by the approach described in[[27](https://arxiv.org/html/2310.09503v3#bib.bib27)], to capture the inherent spatial information. As depicted in Fig.[2](https://arxiv.org/html/2310.09503v3#S2.F2), these encodings are combined with the visual features through addition:

$$
h_{iv}^{I}=\text{LayerNorm}\left(f_{I}(I_{iv})+\epsilon^{degree}[\angle I_{i}]+\epsilon^{depth}[\angle I_{i}]\right),
\tag{7}
$$

where $\text{LayerNorm}(\cdot)$[[71](https://arxiv.org/html/2310.09503v3#bib.bib71)] controls the range of the vision vectors $h_{iv}^{I}$, thereby expediting convergence.

#### 3.2.2 Hierarchical Text Tree

On the language side, existing methods such as ULIP rely solely on coarse parent categories (55 classes). While this simplifies the training target, it also limits robustness. The ShapeNet55 dataset[[69](https://arxiv.org/html/2310.09503v3#bib.bib69)], however, provides subcategories (205 classes) that offer more detailed distinctions. By introducing subcategories, the model can capture finer-grained representations and accurately differentiate visually similar subcategories.

Nevertheless, with limited dataset sizes, incorporating subcategories independently may overlook family relationships, which dictate that features of subcategories belonging to the same parent category should be more similar. To address this, we propose the Hierarchical Text Tree (HTT), which constructs a hierarchical category tree for each triplet, comprising coarse-semantic parent categories and more specific subcategories (e.g., "bed" $\rightarrow$ "bunk"). Where subcategory annotations are incomplete, the parent category is used as a replacement. Through HTT, each model is assigned structured category information $[T^{p},T^{s}]$, where $T^{p}$ is the parent category carrying the primary semantics and $T^{s}$ is the subcategory carrying fine-grained details.
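A toy illustration of the HTT's parent-to-subcategory structure with the parent-fallback rule; the category names and helper function are hypothetical:

```python
# Hypothetical parent -> subcategory tree in the spirit of HTT.
HTT = {
    "bed": ["bunk", "platform bed"],
    "bottle": ["jug", "beer bottle", "flask"],
}

def structured_categories(parent, sub=None):
    """Return the [T^p, T^s] pair; an incomplete subcategory annotation
    falls back to the parent category, as described above."""
    if sub is None or sub not in HTT.get(parent, []):
        sub = parent  # replacement rule for missing subcategory annotations
    return [parent, sub]
```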
To leverage the hierarchical categories, we design specific tasks for both the parent and subcategories. The subcategories $T^{s}$ undergo the process described in Eq.[3](https://arxiv.org/html/2310.09503v3#S3.E3), where the language encoder generates text features that are learned in a contrastive manner. The parent categories, in addition, guide the aggregation of point cloud features:

$$
\mathop{\arg\min}\limits_{\theta}\,g(\theta)=\left\|\theta(h_{i}^{C})-f(T_{i}^{p})\right\|^{2},
\tag{8}
$$

where $\theta$ is an MLP[[72](https://arxiv.org/html/2310.09503v3#bib.bib72)] and $f(\cdot)$ maps the parent-category annotation of the $i$-th sample to a numbered label. This strikes a balance between specific subcategories and their broader parent categories, enabling the model to exploit both detailed distinctions and overarching similarities within the hierarchical category structure. Consequently, the model becomes more robust, capturing intricate nuances while generalizing well across instances in the dataset.
![Image 3: Refer to caption](https://arxiv.org/html/2310.09503v3/x3.png)

Figure 3: The framework of JM3D-LLM. We take the LLM as the cornerstone to support further semantic-understanding tasks such as fine-grained 3D model captioning.

### 3.3 Joint Multi-modal Alignment

Previous methods assume conditional independence between the vision and language modalities, as in Eq.[2](https://arxiv.org/html/2310.09503v3#S3.E2). This approximation fails to fully capture the synergy between the two modalities expressed by Eq.[1](https://arxiv.org/html/2310.09503v3#S3.E1), leading to the _insufficient synergy_ issue, a main cause of suboptimal performance. To address this limitation and unlock the model's potential, we propose Joint Multi-modal Alignment (JMA), a more comprehensive approach that directly models the relationship between the two modalities, denoted $P(C|I,T)$.
This reformulated relationship can be expressed as:

$$
\begin{aligned}
P(C|I,T)&=\frac{P(C,I|T)}{P(I|T)}=\frac{P(C,I|T)P(T)}{P(I|T)P(T)}=\frac{P(C,I|T)P(T)}{P(I,T)}\\
&\propto\sum_{i}P(C,I|T_{i})P(T_{i})=\sum_{i}\sum_{j}P(C,I_{i,j}|T_{i})P(T_{i}),
\end{aligned}
\tag{9}
$$

which means that for any sample $S_{i}$ in Eq.[5](https://arxiv.org/html/2310.09503v3#S3.E5), both the language and the multi-view vision information should participate in the contrastive process. Specifically, JMA produces multi-view image features $h_{iv}^{I}\in\mathbb{R}^{V\times D}$ and a fine-grained text feature $h_{is}^{T}\in\mathbb{R}^{D}$. A similarity matrix between the two is calculated and used as weights to reconstruct the image features into joint-modality features:

$$
h_{i}^{J}=\sum_{v}^{V}\text{Softmax}(h_{iv}^{I}\times h_{is}^{T})\otimes h_{iv}^{I},
\tag{10}
$$

where $\times$ denotes matrix multiplication and $\otimes$ denotes element-wise multiplication.

Incorporating JMA allows the model to capture the underlying connections and semantic associations between the vision and language modalities, yielding a joint vision-language representation. This representation aligns more closely with the distribution of the joint semantic space, facilitating the convergence of 3D features into that space during contrastive learning.

TABLE I: Instruction conversation construction. "Point" marks the position where the point cloud features are injected.

### 3.4 Training Objective

During training, we establish two tasks for unified 3D representation learning. The first contrasts the point cloud features $h^{C}$ with the joint features $h^{J}$ and the text features $h_{s}^{T}$, as exemplified in Eq.[4](https://arxiv.org/html/2310.09503v3#S3.E4).
This task is formulated as:

$$
\mathcal{L}_{contrastive}=\lambda_{1}\mathcal{L}_{(h^{C},h^{J})}+\lambda_{2}\mathcal{L}_{(h^{C},h_{s}^{T})}+\lambda_{3}\mathcal{L}_{(h_{s}^{T},h^{J})},
\tag{11}
$$

where $\lambda_{1}$, $\lambda_{2}$, and $\lambda_{3}$ are hyperparameters. Through contrastive learning, the features from the three modalities are brought together into a unified semantic space.
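The JMA aggregation of Eq. (10) and the weighted combination of pairwise losses in Eq. (11) can be sketched in plain Python. This is a toy version: features are unnormalized, no temperature is used, and the loss is the one-directional form of Eq. (4):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def joint_feature(h_views, h_text):
    """Eq.-(10)-style aggregation: weight each view feature by the softmax of
    its similarity to the fine-grained text feature, then sum over views."""
    sims = [sum(a * b for a, b in zip(hv, h_text)) for hv in h_views]
    w = softmax(sims)
    dim = len(h_views[0])
    return [sum(w[v] * h_views[v][d] for v in range(len(h_views)))
            for d in range(dim)]

def info_nce(h_a, h_b):
    """One-directional contrastive loss in the spirit of Eq. (4):
    matched pairs (i, i) are positives, all (i, k) pairs are candidates."""
    loss = 0.0
    for i in range(len(h_a)):
        logits = [sum(x * y for x, y in zip(h_a[i], h_b[k]))
                  for k in range(len(h_b))]
        loss += -0.5 * math.log(softmax(logits)[i])
    return loss

def total_contrastive(h_c, h_j, h_t, lams=(1.0, 1.0, 1.0)):
    """Weighted combination of the three pairwise losses, as in Eq. (11)."""
    l1, l2, l3 = lams
    return (l1 * info_nce(h_c, h_j) + l2 * info_nce(h_c, h_t)
            + l3 * info_nce(h_t, h_j))
```

Matched features across modalities yield a lower loss than shuffled ones, which is the signal that pulls the three modalities into one space.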
In this process, the guidance provided by the $h_{s}^{T}$ feature plays a crucial role in capturing fine-grained details.

The second task is a classification task on the parent-category labels, following Eq.[8](https://arxiv.org/html/2310.09503v3#S3.E8), formulated as:

$$
\mathcal{L}_{classed}=-\frac{1}{N}\sum_{i}\sum_{j}^{|T^{p}|}f(T_{i,j}^{p})\log\left(\text{Softmax}(\theta(h_{i}^{C}))\right),
\tag{12}
$$

where $\theta$ denotes the parameters of an MLP and $f(T_{i,j}^{p})$ is the one-hot encoding of the $j$-th parent category for the $i$-th sample.
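The parent-category loss of Eq. (12) amounts to a cross-entropy between $\text{Softmax}(\theta(h^{C}))$ and the one-hot parent labels; a dependency-free sketch (names and shapes are illustrative):

```python
import math

def parent_category_loss(logits_batch, labels):
    """Cross-entropy of Softmax(theta(h^C)) against one-hot parent labels,
    averaged over the batch, in the spirit of Eq. (12)."""
    total = 0.0
    for logits, y in zip(logits_batch, labels):
        m = max(logits)                                   # stabilize the exp
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[y] - log_z)                     # -log p(true class)
    return total / len(labels)
```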
By classifying on parent categories, we introduce a softer constraint that alleviates the difficulty of fitting subcategories. This enables the model to capture the overall characteristics and shared features within a parent category while still accommodating the specific details and variations of its subcategories. The final loss is the sum of the two losses above, so that the model jointly optimizes the contrastive learning task and the aggregation of point cloud features under the guidance of structured vision and language information.

TABLE II: The results of zero-shot 3D classification on the ModelNet40 and ScanObjectNN datasets. PointMLP + JM3D outperforms the previous state-of-the-art methods by a large margin in various evaluation settings, notably achieving 12.3% and 13.7% improvements in the "Medium" and "Hard" modes on ModelNet40.

4 Integrating JM3D with Large Language Model
--------------------------------------------

After training with JM3D, the point cloud features gravitate towards linguistic features within a unified semantic space, meaning their distributions mirror their linguistic counterparts. On this foundation, we can harness natural language processing (NLP) techniques to interpret point cloud data and address tasks demanding richer semantic understanding, such as crafting intricate 3D descriptions. The recent emergence of prompt tuning and SFT-based methods[[65](https://arxiv.org/html/2310.09503v3#bib.bib65), [62](https://arxiv.org/html/2310.09503v3#bib.bib62)] has streamlined diverse NLP tasks, showcasing strong reasoning ability and adaptability across a range of applications.
Pioneering multi-modal research[[64](https://arxiv.org/html/2310.09503v3#bib.bib64), [63](https://arxiv.org/html/2310.09503v3#bib.bib63)] has intertwined image data with language, leveraging Large Language Models (LLM) as foundational reasoning aids. Drawing inspiration from these advances, we augment JM3D with LLM, leading to our refined model: JM3D-LLM. This hybrid aims to probe the potentials of 3D models within a sophisticated semantic framework. + +### 4.1 Instruct Conversations for Point Querying + +Drawing from methodologies in recent studies[[65](https://arxiv.org/html/2310.09503v3#bib.bib65), [64](https://arxiv.org/html/2310.09503v3#bib.bib64)], we first curated a comprehensive conversational dataset to fine-tune our model. Given that JM3D’s pre-training data is primarily centered around individual object models rather than entire scenes, we sought point cloud data that would harmoniously complement this domain. Thus, we sourced data from Cap3D[[75](https://arxiv.org/html/2310.09503v3#bib.bib75)], a database rich in detailed descriptions of various objects. + +For our instruction-based tasks, we employed a captioning strategy. The objective is to empower the LLM to generate in-depth descriptions of 3D models, effectively maximizing the semantic depth extracted from point cloud representations. In doing so, the LLM is enriched with a more granular understanding of the 3D spatial information. For our implementation, we adhered to the criteria outlined in Tab.[I](https://arxiv.org/html/2310.09503v3#S3.T1 "TABLE I ‣ 3.3 Joint Multi-modal Alignment ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), ensuring the diverse and consistent framing of instructional data. 
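A hypothetical instruction-template pairing in the spirit of Tab. I; the wording and the `<point>` placeholder string are illustrative stand-ins for the actual templates:

```python
# Hypothetical captioning templates; "<point>" marks where the point
# cloud tokens will later be injected.
TEMPLATES = [
    "Describe the 3D object <point> in detail.",
    "What does the object represented by <point> look like?",
]

def build_sample(caption, template_id=0):
    """One supervised fine-tuning sample: instruction plus target caption."""
    return {"instruction": TEMPLATES[template_id], "response": caption}
```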
### 4.2 JM3D-LLM Architecture

Within the JM3D-LLM architecture shown in Fig.[3](https://arxiv.org/html/2310.09503v3#S3.F3), a multi-layer perceptron (MLP) aligns the feature dimensions of the point cloud with those of the language tokens, and the point cloud features are then injected as specialized tokens into the LLM's input stream.

In more detail, given a paragraph from a conversation, a tokenizer $f_{t}$ converts all tokens into their corresponding IDs using the vocabulary $f_{v}$. In parallel, the pre-trained point cloud module $f_{c_{pretrain}}$ extracts a sequence of point cloud features $T_{c}=\{c_{1},c_{2},\dots,c_{n}\}\in\mathbb{R}^{n\times d_{1}}$, with $n$ denoting the token count of the point features.
These point cloud features then undergo a transformation:

$$
T^{\prime}_{c}=\text{MLP}(T_{c})\rightarrow D_{llm}(t;f_{v},f_{t}),
\tag{13}
$$

where $D_{llm}(t;f_{v},f_{t})$ denotes the token distribution within the LLM framework. The adjusted tokens $T^{\prime}_{c}$ are then positioned at the placeholders marked by $\langle\text{point}\rangle$.
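The placeholder substitution just described can be sketched as follows, with toy token ids standing in for embedded inputs (the real model splices projected feature vectors, not strings):

```python
def splice_point_tokens(token_ids, point_tokens, placeholder_id):
    """Expand the <point> placeholder id into the n projected point cloud
    tokens, leaving the surrounding language tokens untouched."""
    out = []
    for t in token_ids:
        if t == placeholder_id:
            out.extend(point_tokens)  # n point tokens expand in place
        else:
            out.append(t)
    return out
```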
The LLM, represented as $f_{llm}$, subsequently processes the remaining language tokens to produce a sequence of linguistic features $T_{l}=\{l_{1},l_{2},\dots,l_{m}\}\in\mathbb{R}^{m\times d_{1}}$. The aggregate set of tokens can be represented as $[T^{\prime}_{c},T_{l}]\in\mathbb{R}^{(n+m)\times d_{1}}$.

After these tokens are fed into $f_{llm}$, the LLM iteratively derives the associated hidden states $z_{i}$, which are projected onto the vocabulary space $f_{v}$ to yield $\widetilde{z_{i}}$.
A softmax then yields a distribution over the vocabulary, and the word $\widetilde{w}$ with maximal probability is decoded:

$$
\widetilde{w}_{i}=f^{-1}_{v}\left(\mathop{\arg\max}\limits_{w\in\widetilde{z}_{i}}\text{Softmax}(z_{i})\right).
\tag{14}
$$

![Image 4: Refer to caption](https://arxiv.org/html/2310.09503v3/x4.png)

Figure 4: Qualitative results of real-image-to-point-cloud retrieval. Given an image, we show the top-3 point cloud retrieval results from ModelNet40. All models perform well on the simple samples (the 1st and 3rd rows). However, on the challenging samples (the 2nd and 4th rows), JM3D retrieves more accurately than the previous state of the art (ULIP). JM3D trained with 4 view images outperforms the 2-view variant, benefiting from a more solid vision-modality prior.

5 Experiment
------------

### 5.1 Datasets

#### 5.1.1 Pretrain Dataset

We use ShapeNet55[[69](https://arxiv.org/html/2310.09503v3#bib.bib69)], the publicly available subset of ShapeNet, as the pre-training dataset. ShapeNet consists of 52.5K CAD models with multiple texture maps and corresponding category annotations.
The annotations comprise 55 basic categories and 205 fine-grained subcategories, with a small number of models missing subcategory labels. During training, we randomly sample different numbers of points from the CAD models to suit different networks.

#### 5.1.2 Downstream Datasets

Our downstream investigations are anchored on two datasets offering complementary perspectives on 3D object categorization.

ModelNet40[[20](https://arxiv.org/html/2310.09503v3#bib.bib20)]: Built from synthetic 3D CAD models, this dataset comprises 9,843 training samples and 2,468 testing samples across 40 categories. At test time we adhere to the protocol outlined in[[26](https://arxiv.org/html/2310.09503v3#bib.bib26)], downsampling the point clouds to 1024 points.

ScanObjectNN[[76](https://arxiv.org/html/2310.09503v3#bib.bib76)]: In contrast to ModelNet40, ScanObjectNN collects 3D objects scanned directly from real-world scenes. It contains 2,902 samples across 15 categories and comes in two variants distinguished by the presence of background noise: _OBJ\_ONLY_, a clean mesh free of background interference, and _OBJ\_BG_, which retains the background noise. We use the pre-processed data recommended by ULIP[[76](https://arxiv.org/html/2310.09503v3#bib.bib76)], further refined by[[31](https://arxiv.org/html/2310.09503v3#bib.bib31)], normalized and downsampled to 1024 points for uniformity.

TABLE III: The ablation study of CIS. Naive multi-view input biases the point cloud features towards the vision modality, decreasing zero-shot 3D classification performance; CIS effectively improves performance via the positional embeddings and within-window view sampling.

TABLE IV: The ablation study of HTT.
Results show that structured text is more effective than merely fine-grained text. Even when the language modality is enhanced, independent alignment still leaves the improvements unstable. + +#### 5.1.3 Instruct Dataset + +We source our instruction-tuning conversation data from Cap3D[[75](https://arxiv.org/html/2310.09503v3#bib.bib75)], a rich repository of point cloud samples curated from Objaverse[[77](https://arxiv.org/html/2310.09503v3#bib.bib77)], accompanied by detailed descriptions. Specifically, Cap3D contains 660k point cloud instances, each paired with a description; notably, 40k of these descriptions undergo manual annotation for enhanced precision. For instruction tuning, we adopt the methodology outlined in[[63](https://arxiv.org/html/2310.09503v3#bib.bib63)], devising 11 templates tailored to the nuances of the 3D modality. + +### 5.2 3D Backbone Networks + +To ascertain the efficacy of our novel JM3D framework, we conduct experiments on ModelNet40 and ScanObjectNN, employing a variety of 3D backbone networks, namely: + +PointNet++[[70](https://arxiv.org/html/2310.09503v3#bib.bib70)]: Serving as the successor to PointNet[[24](https://arxiv.org/html/2310.09503v3#bib.bib24)], PointNet++ uses an encoder-decoder architecture to extract hierarchical features of point sets. Its encoder consists of stacked set abstraction modules interleaved with farthest point sampling, the latter condensing the scale of the point sets. The decoder channels the outputs of the encoder’s multiple layers to a range of head networks, ensuring adaptability to a spectrum of tasks. + +PointMLP[[26](https://arxiv.org/html/2310.09503v3#bib.bib26)]: In contrast, PointMLP is a streamlined network that integrates residual MLP modules for enhanced feature extraction while sidestepping intricate architectures.
It innovatively incorporates a Geometric Affine Module, designed to transition points to a standard distribution. Two distinct MLP blocks then extract both representational and positional information. + +PointBert[[31](https://arxiv.org/html/2310.09503v3#bib.bib31)]: A model rooted in the transformer paradigm, PointBert integrates self-supervised learning into point cloud representation. Its strength lies in reconstructing masked point clouds, a strategy that has yielded strong results on unlabeled point cloud datasets. In essence, PointBert strategically obscures selected points and refines features by reconstructing these occluded points. + +### 5.3 Implementation Details + +#### 5.3.1 Pre-training + +We uniformly sample each point cloud into 1024, 2048, and 8192 points to match the recommended settings of the various backbones. The preprocessing of rendered images and texts adheres to the requirements of the pre-trained image-text encoder. Favoring performance, we opt for SLIP[[78](https://arxiv.org/html/2310.09503v3#bib.bib78)] over the conventional CLIP model. In our setup, both the image and text encoders remain frozen, mirroring the behavior of ULIP; throughout training, only the parameters of the point cloud backbone are updated. JM3D is trained for 250 epochs, with a batch size of 128 and a learning rate of 1e-3. For optimization, we employ AdamW alongside a cosine learning-rate schedule. + +#### 5.3.2 Zero-shot 3D Classification + +JM3D evaluates the distance between point cloud features and text features from new datasets, selecting the category with the shortest distance. The preprocessing steps for both text and point cloud mirror those of pre-training, eliminating the need for a fine-tuning phase. Our zero-shot evaluations target both ModelNet40 and ScanObjectNN.
While ModelNet40, with its synthetic nature and unseen categories, serves as a benchmark for evaluating the alignment capabilities of multi-modal features, ScanObjectNN, populated with real-world scanned data, tests the resilience of 3D pre-trained models. + +TABLE V: The ablation study for JMA. Independent alignment wastes the rich semantics brought by SMO, while JMA achieves significant improvements across all settings and datasets. + +#### 5.3.3 JM3D-LLM + +For JM3D-LLM, we integrate the 7B Vicuna[[79](https://arxiv.org/html/2310.09503v3#bib.bib79)] as the LLM foundation and employ PointMLP[[26](https://arxiv.org/html/2310.09503v3#bib.bib26)], known for its superior performance, as our point cloud encoder. A linear layer bridges the point cloud dimensions with the language domain. To enrich the point cloud feature set, we exclude the terminal classification layer of PointMLP, capturing 64 tokens from the point cloud for embedding into the query. The primary learning rate is 2e-3, but PointMLP is trained at a reduced 2e-5, with the LLM remaining frozen. We carry out all experiments on three 40G A100s.
The results for all test samples are summarized in the “All” column of Tab.[II](https://arxiv.org/html/2310.09503v3#S3.T2 "TABLE II ‣ 3.4 Training Objective ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"). Additionally, to address potential confounding factors arising from the presence of similar classes in both the ModelNet40 and pre-trained datasets, which can significantly impact the fairness of zero-shot evaluation, we create two distinct subsets within ModelNet40: a “Medium” set and a “Hard” set. The “Medium” set excludes classes that are common to both ModelNet40 and ShapeNet55, while the “Hard” set goes a step further by also eliminating semantically similar classes. For instance, the class “stool” is excluded from the “Hard” set as it already exists in the ShapeNet55 dataset. This rigorous selection process ensures that all categories within the “Hard” set remain uncompromised by any potential information leakage. For each of these subsets, we employ top-1 accuracy and top-5 accuracy as our evaluation metrics. Top-1 accuracy assesses the model’s capability to correctly identify the nearest text representation to the point cloud feature, and we believe it provides the more intuitive demonstration of the model’s zero-shot performance. + +#### 5.4.2 Experiment Results + +In Tab.[II](https://arxiv.org/html/2310.09503v3#S3.T2 "TABLE II ‣ 3.4 Training Objective ‣ 3 Joint Multi-modal 3D Representation Learning ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), we provide a comprehensive overview of our zero-shot evaluations conducted on the ModelNet40 and ScanObjectNN datasets. Remarkably, our JM3D model surpasses the former state-of-the-art ULIP[[14](https://arxiv.org/html/2310.09503v3#bib.bib14)] across all 3D backbones, exhibiting superior top-1 accuracy.
Specifically, when comparing JM3D+PointMLP to ULIP, we observe a substantial improvement in top-1 accuracy: a 4.3% increase on the “All” set, a 12.3% enhancement on the “Medium” set, and an impressive 13.7% boost on the “Hard” set. These results emphasize the clear superiority of JM3D in the context of zero-shot learning. Furthermore, our evaluations extend to the ScanObjectNN dataset, where JM3D + PointMLP outperforms ULIP by a notable 2.9% margin in terms of top-1 accuracy. In summary, our findings represent significant advancements over the previous state-of-the-art method, ULIP[[14](https://arxiv.org/html/2310.09503v3#bib.bib14)]. They underscore JM3D’s robust generalizability and its ability to deliver enhanced performance in real-world 3D scanning scenarios. + +TABLE VI: The semantic segmentation on S3DIS. + +TABLE VII: The Part segmentation on ShapeNet. + +### 5.5 3D Classification Fine-tuning + +To demonstrate the capabilities of JM3D, we perform fine-tuning experiments on ScanObjectNN using one of the state-of-the-art frameworks, PointMLP[[26](https://arxiv.org/html/2310.09503v3#bib.bib26)]. + +During the fine-tuning process, we train only the 3D encoder of JM3D on the _PB\_T50\_RS_ subset of ScanObjectNN. We choose _PB\_T50\_RS_ due to its challenging nature: it consists of real-world scanned objects accompanied by background noise. The PointMLP fine-tuning uses a learning rate of 0.03 and a weight decay of 3e-4 over 350 epochs. We initiate the process with the pre-trained parameters and maintain the original model’s conditions throughout. Consistent with standard conventions in the research community, we use OA (Overall Accuracy) and mAcc (Class Average Accuracy) as evaluation metrics. 
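For reference, the two metrics can be computed as follows; this is a standalone sketch with illustrative toy labels, not code from our release. Note that mAcc averages the per-class accuracies, so it penalizes errors on rare classes more heavily than OA.

```python
import numpy as np

def oa_and_macc(preds, labels, num_classes):
    """OA: fraction of correct predictions over all samples.
    mAcc: accuracy computed per class, then averaged over classes."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    oa = float(np.mean(preds == labels))
    per_class = [float(np.mean(preds[labels == c] == c))
                 for c in range(num_classes) if np.any(labels == c)]
    macc = float(np.mean(per_class))
    return oa, macc

# Toy check on an imbalanced 3-class problem:
labels = [0, 0, 0, 0, 1, 1, 2, 2]
preds = [0, 0, 0, 0, 1, 0, 2, 1]
oa, macc = oa_and_macc(preds, labels, 3)  # OA = 6/8, mAcc = (1.0 + 0.5 + 0.5) / 3
```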
+ +Tab.[VIII](https://arxiv.org/html/2310.09503v3#S5.T8 "TABLE VIII ‣ 5.5 3D Classification Fine-tuning ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues") illustrates that JM3D substantially boosts the performance of the baseline model. Specifically, JM3D augments PointMLP’s OA by 3.5% and mAcc by 4.0%. Moreover, PointMLP enhanced with JM3D surpasses the previous state-of-the-art, RepSurf-U (2×), by a margin of 3.2%. Employing a voting strategy, PointMLP* combined with JM3D establishes a new benchmark. Indeed, compared to the training approach used in ULIP, JM3D clearly demonstrates its ability to enhance existing models without requiring custom-designed architectures. + +TABLE VIII: 3D classification results on ScanObjectNN. We follow the default settings of the original method to train on the hardest set. JM3D achieves a significant improvement compared to the previous method, improving the original backbone by 3.5%. + +TABLE IX: Qualitative comparisons of the ULIP-based LLM and our JM3D-LLM on our benchmark. + +TABLE X: Qualitative comparisons of the ULIP-based LLM and our JM3D-LLM on our ShapeNet benchmark. + +### 5.6 Semantic and Partial Segmentation Experiments + +We conduct experiments on semantic segmentation using the S3DIS[[10](https://arxiv.org/html/2310.09503v3#bib.bib10)] dataset and on partial segmentation with the ShapeNet[[69](https://arxiv.org/html/2310.09503v3#bib.bib69)] dataset.
The results are presented in Tab.[VI](https://arxiv.org/html/2310.09503v3#S5.T6 "TABLE VI ‣ 5.4.2 Experiment Results ‣ 5.4 Zero-shot 3D Classification ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues") for S3DIS and Tab.[VII](https://arxiv.org/html/2310.09503v3#S5.T7 "TABLE VII ‣ 5.4.2 Experiment Results ‣ 5.4 Zero-shot 3D Classification ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues") for ShapeNet. Our proposed method consistently demonstrates strong performance, achieving scores of 57.1 on the S3DIS dataset and 82.1 on the ShapeNet dataset, improvements of 3.6 and 0.3 points respectively. These advancements in scene-understanding tasks validate the efficacy of our approach. + +### 5.7 Exploring Cross-Modal Capabilities of JM3D + +A salient feature of JM3D is its ability to endow the foundational point cloud model with enhanced cross-modal capabilities. In this section, we examine JM3D’s proficiency in preserving uncommon-viewpoint information, specifically its performance when images captured from unconventional viewpoints are used to retrieve 3D models. + +We utilize images from the real-world dataset Caltech101[[86](https://arxiv.org/html/2310.09503v3#bib.bib86)], aiming to retrieve 3D models from the ModelNet40 test set, a medium-scale dataset comprising over 2.5K models spanning 40 categories. Additionally, we construct several challenging samples characterized by their unusual perspectives, which are typically difficult for conventional models to recognize. The top-3 retrieved models from this experiment are showcased in Fig.[4](https://arxiv.org/html/2310.09503v3#S4.F4 "Figure 4 ‣ 4.2 JM3D-LLM Architecture ‣ 4 Integrating JM3D with Large Language Model ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues").
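Because image and point cloud features live in the same embedding space, the retrieval itself reduces to nearest-neighbor search under cosine similarity. A minimal sketch (the embedding dimension, gallery size, and synthetic features are illustrative assumptions, not the actual JM3D encoders):

```python
import numpy as np

def retrieve_top_k(image_feat, pc_feats, k=3):
    """Rank gallery point cloud embeddings by cosine similarity to the
    query image embedding and return the indices of the top-k, best first."""
    q = image_feat / np.linalg.norm(image_feat)
    g = pc_feats / np.linalg.norm(pc_feats, axis=1, keepdims=True)
    return np.argsort(-(g @ q))[:k]

# Toy gallery standing in for embedded ModelNet40 test models:
rng = np.random.default_rng(1)
gallery = rng.normal(size=(100, 512))
# A query image embedding constructed to lie close to gallery model 7:
query = 0.8 * gallery[7] + 0.2 * rng.normal(size=512)
top3 = retrieve_top_k(query, gallery, k=3)
```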
+ +Observing Fig.[4](https://arxiv.org/html/2310.09503v3#S4.F4 "Figure 4 ‣ 4.2 JM3D-LLM Architecture ‣ 4 Integrating JM3D with Large Language Model ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), we categorize samples into two classes, namely “airplane” and “laptop”. Each class is further bifurcated into two difficulty levels: simple (top) and challenging (bottom). For simpler samples, most models demonstrate commendable retrieval accuracy. However, for images captured from rarer viewpoints, ULIP struggles to match the correct point cloud. In stark contrast, JM3D trained with two views yields some correct matches. Moreover, when the image count increases to four in the CIS, JM3D almost flawlessly identifies the correct models. These results are a testament to our model’s capability to bridge meaningful features between visual and 3D modalities. Furthermore, as indicated by Tab.[III](https://arxiv.org/html/2310.09503v3#S5.T3 "TABLE III ‣ 5.1.2 Downstream Datasets ‣ 5.1 Datasets ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), while increasing the number of views in the CIS might marginally influence text-based performance, it significantly bolsters the model’s alignment prowess in the image domain. + +### 5.8 Detailed Captioning from JM3D-LLM + +While the experiments discussed thus far primarily assess the capabilities of 3D representations in classification tasks, this subsection seeks to explore the representational power of various features in terms of fine-grained cues. To achieve this, we incorporate 3D representations into a Large Language Model (LLM), leveraging the LLM to parse detailed description results. The outcomes are presented in Tab.[IX](https://arxiv.org/html/2310.09503v3#S5.T9 "TABLE IX ‣ 5.5 3D Classification Fine-tuning ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"). 
Compared with the ULIP-based LLM, JM3D-LLM generates markedly more precise descriptions rather than merely offering a simplistic categorization. As Tab.[IX](https://arxiv.org/html/2310.09503v3#S5.T9 "TABLE IX ‣ 5.5 3D Classification Fine-tuning ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues") reveals, JM3D not only grasps category information but also discerns material properties (sample 1), intricate instance details (samples 2 and 3), and even abstract concepts like car brands (sample 5). By bridging with the LLM, we can produce descriptions with abstract concepts or more detailed insights, such as the “teddy bear” in sample 9 or the comprehensive analysis of house structure in sample 10, that ULIP fails to capture. These findings underscore the linguistic prowess of the LLM and, crucially, attest that JM3D’s training strategy meticulously preserves fine-grained features. + +Furthermore, capitalizing on the diverse granular textual information introduced during training, we use JM3D-LLM to probe the fine-grained representational capabilities of our proposed JM3D. As illustrated in Tab.[X](https://arxiv.org/html/2310.09503v3#S5.T10 "TABLE X ‣ 5.5 3D Classification Fine-tuning ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), JM3D-LLM adeptly details the fine-grained categories of the model, such as “attack aircraft” and “armchair”. This substantiates the efficacy of our Hierarchical Text Tree module. + +### 5.9 Ablation Study + +To unpack the specific contributions of SMO and JMA during the pre-training phase, we conduct ablation studies on these two modules using PointMLP as our foundational model.
Given that the ultimate aim is to achieve alignment between point cloud features and image-text features, we employ zero-shot metrics on the ModelNet40 and ScanObjectNN datasets and use cross-modal retrieval as a qualitative measure. + +#### 5.9.1 Continuous Multi Views vs. Random One Look + +The results of this comparison are presented in Tab.[III](https://arxiv.org/html/2310.09503v3#S5.T3 "TABLE III ‣ 5.1.2 Downstream Datasets ‣ 5.1 Datasets ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"). We first examine the effect of varying the number of viewpoint images. Intriguingly, integrating multiple viewpoints directly results in a performance dip, suggesting that the semantic continuity across different viewpoint features is disrupted and the model struggles to reconcile the diverse information from multiple perspectives. However, incorporating view embeddings improves the model’s ability to discern between viewpoints, accounting for a 2.3% improvement, and within-view sampling further strengthens the semantic continuity of viewpoints, adding a 1.9% boost. On the flip side, an excess of images skews the model toward aligning predominantly with image features: performance diminishes as the number of images rises from 2 to 4, while image retrieval concurrently improves, a phenomenon elaborated upon in Sec.[5.7](https://arxiv.org/html/2310.09503v3#S5.SS7 "5.7 Exploring Cross-Modal Capabilities of JM3D ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"). + +#### 5.9.2 Hierarchical Text vs.
Pre-defined Text + +Following the validation of the efficacy of CIS, we highlight the significance of the text tree in Tab.[IV](https://arxiv.org/html/2310.09503v3#S5.T4 "TABLE IV ‣ 5.1.2 Downstream Datasets ‣ 5.1 Datasets ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"). Merely introducing subcategory text yields a negligible uptick, a mere 0.6% gain in top-1 accuracy, emphasizing that the granularity of the text alone is not the crux of the matter. Yet incorporating the structured category tree, specifically the HTT, delivers a solid 1.3% improvement. It is worth mentioning that the top-5 accuracy with subcategories decreases; this can be attributed to the enlarged category set, which increases the difficulty of aligning linguistic features. Overall, the HTT minimizes the semantic gap between samples clustered under a parent category, infusing structured semantic insights and optimizing the alignment between point cloud and textual features. + +#### 5.9.3 Collaborative Alignment vs. Independent Alignment + +As shown in Tab.[V](https://arxiv.org/html/2310.09503v3#S5.T5 "TABLE V ‣ 5.3.2 Zero-shot 3D Classification ‣ 5.3 Implementation Details ‣ 5 Experiment ‣ JM3D & JM3D-LLM: Elevating 3D Understanding with Joint Multi-modal Cues"), the inclusion of JMA yields a 2.7% improvement in top-1 accuracy, underscoring its efficacy. While SMO provides a more streamlined data organization, it falls short of markedly boosting the model’s performance on its own. Aligning 3D representations separately with images and text can lead to unstable optimization; by incorporating JMA, we transition from the rigid assumption of independent alignment in the text-image domain to a joint modeling approach, which significantly strengthens the alignment of point clouds.
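To make the contrast concrete, the following sketch shows one simplified reading of the two alignment schemes: independent alignment sums two separate contrastive terms, whereas joint alignment scores every (image, text) pair against the point cloud and normalizes over the joint set. This is our own toy formulation for illustration (the temperature, dimensions, and loss details are assumptions), not the exact JMA objective.

```python
import numpy as np

def _normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def _log_softmax(v):
    m = v.max()
    return v - m - np.log(np.sum(np.exp(v - m)))

def independent_loss(pc, imgs, txts, tau=0.07):
    """Independent alignment: one contrastive term per modality; index 0 is the positive."""
    loss = 0.0
    for targets in (imgs, txts):
        logits = (_normalize(targets) @ _normalize(pc)) / tau
        loss += -_log_softmax(logits)[0]
    return float(loss)

def joint_loss(pc, imgs, txts, tau=0.07):
    """Joint alignment: score each (image, text) pair by the sum of its two
    similarities to the point cloud; the positive pair is (image 0, text 0)."""
    p = _normalize(pc)
    s_img = _normalize(imgs) @ p
    s_txt = _normalize(txts) @ p
    logits = (s_img[:, None] + s_txt[None, :]) / tau  # (n_img, n_txt) pair scores
    return float(-_log_softmax(logits.ravel())[0])    # flat index 0 == pair (0, 0)

# Toy features: a point cloud consistent with BOTH positives vs. one
# consistent with a wrong (image, text) pair.
rng = np.random.default_rng(2)
imgs, txts = rng.normal(size=(8, 64)), rng.normal(size=(8, 64))
aligned = imgs[0] + txts[0]
misaligned = imgs[3] + txts[5]
```

Under either scheme the aligned point feature incurs the lower loss, but only the joint form ties the point cloud to the image and text simultaneously rather than to each in isolation.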
+ +6 Conclusion +------------ + +We introduce JM3D, a comprehensive pre-training framework that incorporates both the SMO and JMA modules. This framework harmoniously merges language, image, and point cloud features into a cohesive semantic space, circumventing the need for any specialized architecture. With the precision of the SMO module in harnessing information across various modalities and the JMA module’s novel approach to joint modeling, the alignment across these modalities is optimized. Delving deeper, we extend our framework to JM3D-LLM, which synergistically combines 3D representation with large language models through an efficient fine-tuning process. The outstanding results achieved by JM3D in zero-shot 3D classification and image retrieval tasks, establishing a new benchmark, underline its unmatched cross-modal prowess. The precise and rich descriptions provided by JM3D-LLM further attest to the formidable representational capacity of JM3D. + +Acknowledgments +--------------- + +This work was supported by National Key R&D Program of China (No.2022ZD0118201), the National Science Fund for Distinguished Young Scholars (No.62025603), the National Natural Science Foundation of China (No. U21B2037, No. U22B2051, No. 62176222, No. 62176223, No. 62176226, No. 62072386, No. 62072387, No. 62072389, No. 62002305 and No. 62272401), the National Natural Science Fund for Young Scholars of China (No. 62302411), China Postdoctoral Science Foundation (No.2023M732948), and the Natural Science Foundation of Fujian Province of China (No.2021J01002, No.2022J06001). + +References +---------- + +* [1] P.Achlioptas, O.Diamanti, I.Mitliagkas, and L.Guibas, “Learning representations and generative models for 3d point clouds,” in _International conference on machine learning_.PMLR, 2018, pp. 40–49. 
+* [2] Y.Liu, B.Fan, G.Meng, J.Lu, S.Xiang, and C.Pan, “Densepoint: Learning densely contextual representation for efficient point cloud processing,” in _Proceedings of the IEEE/CVF international conference on computer vision_, 2019, pp. 5239–5248. +* [3] Z.Liu, H.Hu, Y.Cao, Z.Zhang, and X.Tong, “A closer look at local aggregation operators in point cloud analysis,” in _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII 16_.Springer, 2020, pp. 326–342. +* [4] H.Ran, J.Liu, and C.Wang, “Surface representation for point clouds,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 18 942–18 952. +* [5] H.Xie, H.Yao, S.Zhou, J.Mao, S.Zhang, and W.Sun, “Grnet: Gridding residual network for dense point cloud completion,” in _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IX_.Springer, 2020, pp. 365–381. +* [6] M.Xu, R.Ding, H.Zhao, and X.Qi, “Paconv: Position adaptive convolution with dynamic kernel assembling on point clouds,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2021, pp. 3173–3182. +* [7] Z.Wang, X.Yu, Y.Rao, J.Zhou, and J.Lu, “P2p: Tuning pre-trained image models for point cloud analysis with point-to-pixel prompting,” _Advances in neural information processing systems_, vol.35, pp. 14 388–14 402, 2022. +* [8] Z.Liu, Z.Zhang, Y.Cao, H.Hu, and X.Tong, “Group-free 3d object detection via transformers,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 2949–2958. +* [9] T.Vu, K.Kim, T.M. Luu, T.Nguyen, and C.D. Yoo, “Softgroup for 3d instance segmentation on point clouds,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 2708–2717. +* [10] I.Armeni, O.Sener, A.R. 
Zamir, H.Jiang, I.Brilakis, M.Fischer, and S.Savarese, “3d semantic parsing of large-scale indoor spaces,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2016, pp. 1534–1543. +* [11] Y.Li, A.W. Yu, T.Meng, B.Caine, J.Ngiam, D.Peng, J.Shen, Y.Lu, D.Zhou, Q.V. Le _et al._, “Deepfusion: Lidar-camera deep fusion for multi-modal 3d object detection,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 17 182–17 191. +* [12] T.Yin, X.Zhou, and P.Krahenbuhl, “Center-based 3d object detection and tracking,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2021, pp. 11 784–11 793. +* [13] D.Hegde, J.M.J. Valanarasu, and V.M. Patel, “Clip goes 3d: Leveraging prompt tuning for language grounded 3d recognition,” _arXiv preprint arXiv:2303.11313_, 2023. +* [14] L.Xue, M.Gao, C.Xing, R.Martín-Martín, J.Wu, C.Xiong, R.Xu, J.C. Niebles, and S.Savarese, “Ulip: Learning unified representation of language, image and point cloud for 3d understanding,” _arXiv preprint arXiv:2212.05171_, 2022. +* [15] J.Zhang, R.Dong, and K.Ma, “Clip-fo3d: Learning free open-world 3d scene representations from 2d dense clip,” _arXiv preprint arXiv:2303.04748_, 2023. +* [16] A.Radford, J.W. Kim, C.Hallacy, A.Ramesh, G.Goh, S.Agarwal, G.Sastry, A.Askell, P.Mishkin, J.Clark _et al._, “Learning transferable visual models from natural language supervision,” in _International conference on machine learning_.PMLR, 2021, pp. 8748–8763. +* [17] M.Aubry, U.Schlickewei, and D.Cremers, “The wave kernel signature: A quantum mechanical approach to shape analysis,” in _2011 IEEE international conference on computer vision workshops (ICCV workshops)_.IEEE, 2011, pp. 1626–1633. +* [18] M.M. Bronstein and I.Kokkinos, “Scale-invariant heat kernel signatures for non-rigid shape recognition,” in _2010 IEEE computer society conference on computer vision and pattern recognition_.IEEE, 2010, pp. 1704–1711. 
+* [19] J.Sun, M.Ovsjanikov, and L.Guibas, “A concise and provably informative multi-scale signature based on heat diffusion,” in _Computer graphics forum_, vol.28, no.5.Wiley Online Library, 2009, pp. 1383–1392. +* [20] Z.Wu, S.Song, A.Khosla, F.Yu, L.Zhang, X.Tang, and J.Xiao, “3d shapenets: A deep representation for volumetric shapes,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2015, pp. 1912–1920. +* [21] D.Maturana and S.Scherer, “Voxnet: A 3d convolutional neural network for real-time object recognition,” in _2015 IEEE/RSJ international conference on intelligent robots and systems (IROS)_.IEEE, 2015, pp. 922–928. +* [22] Y.Zhao, H.Fei, W.Ji, J.Wei, M.Zhang, M.Zhang, and T.-S. Chua, “Generating visual spatial description via holistic 3D scene understanding,” in _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, 2023, pp. 7960–7977. +* [23] S.Shi, C.Guo, L.Jiang, Z.Wang, J.Shi, X.Wang, and H.Li, “Pv-rcnn: Point-voxel feature set abstraction for 3d object detection,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 10 529–10 538. +* [24] C.R. Qi, H.Su, K.Mo, and L.J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_, 2017, pp. 652–660. +* [25] G.Qian, Y.Li, H.Peng, J.Mai, H.Hammoud, M.Elhoseiny, and B.Ghanem, “Pointnext: Revisiting pointnet++ with improved training and scaling strategies,” _Advances in Neural Information Processing Systems_, vol.35, pp. 23 192–23 204, 2022. +* [26] X.Ma, C.Qin, H.You, H.Ran, and Y.Fu, “Rethinking network design and local geometry in point cloud: A simple residual mlp framework,” in _International Conference on Learning Representations_. +* [27] A.Vaswani, N.Shazeer, N.Parmar, J.Uszkoreit, L.Jones, A.N. 
Gomez, Ł.Kaiser, and I.Polosukhin, “Attention is all you need,” _Advances in neural information processing systems_, vol.30, 2017. +* [28] M.-H. Guo, J.-X. Cai, Z.-N. Liu, T.-J. Mu, R.R. Martin, and S.-M. Hu, “Pct: Point cloud transformer,” _Computational Visual Media_, vol.7, pp. 187–199, 2021. +* [29] H.Liu, M.Cai, and Y.J. Lee, “Masked discrimination for self-supervised learning on point clouds,” in _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part II_.Springer, 2022, pp. 657–675. +* [30] A.Xiao, J.Huang, D.Guan, X.Zhang, S.Lu, and L.Shao, “Unsupervised point cloud representation learning with deep neural networks: A survey,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2023. +* [31] X.Yu, L.Tang, Y.Rao, T.Huang, J.Zhou, and J.Lu, “Point-bert: Pre-training 3d point cloud transformers with masked point modeling,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 19 313–19 322. +* [32] Y.Pang, W.Wang, F.E. Tay, W.Liu, Y.Tian, and L.Yuan, “Masked autoencoders for point cloud self-supervised learning,” in _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part II_.Springer, 2022, pp. 604–621. +* [33] R.Zhang, Z.Guo, P.Gao, R.Fang, B.Zhao, D.Wang, Y.Qiao, and H.Li, “Point-m2ae: Multi-scale masked autoencoders for hierarchical point cloud pre-training,” in _Advances in Neural Information Processing Systems_. +* [34] L.H. Li, M.Yatskar, D.Yin, C.-J. Hsieh, and K.-W. Chang, “Visualbert: A simple and performant baseline for vision and language,” _arXiv preprint arXiv:1908.03557_, 2019. +* [35] X.Li, X.Yin, C.Li, P.Zhang, X.Hu, L.Zhang, L.Wang, H.Hu, L.Dong, F.Wei _et al._, “Oscar: Object-semantics aligned pre-training for vision-language tasks,” in _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXX 16_.Springer, 2020, pp. 121–137. 
+* [36] Y.-C. Chen, L.Li, L.Yu, A.El Kholy, F.Ahmed, Z.Gan, Y.Cheng, and J.Liu, “Uniter: Universal image-text representation learning,” in _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXX_.Springer, 2020, pp. 104–120. +* [37] H.Fei, Q.Liu, M.Zhang, M.Zhang, and T.-S. Chua, “Scene graph as pivoting: Inference-time image-free unsupervised multimodal machine translation with visual scene hallucination,” in _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_, 2023, pp. 5980–5994. +* [38] J.Lu, D.Batra, D.Parikh, and S.Lee, “Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks,” _Advances in neural information processing systems_, vol.32, 2019. +* [39] H.Tan and M.Bansal, “Lxmert: Learning cross-modality encoder representations from transformers,” in _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, 2019, pp. 5100–5111. +* [40] H.Wang, J.Ji, Y.Zhou, Y.Wu, and X.Sun, “Towards real-time panoptic narrative grounding by an end-to-end grounding network,” _arXiv preprint arXiv:2301.03160_, 2023. +* [41] L.H. Li, P.Zhang, H.Zhang, J.Yang, C.Li, Y.Zhong, L.Wang, L.Yuan, L.Zhang, J.-N. Hwang _et al._, “Grounded language-image pre-training,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 10 965–10 975. +* [42] H.Zhang, P.Zhang, X.Hu, Y.-C. Chen, L.Li, X.Dai, L.Wang, L.Yuan, J.-N. Hwang, and J.Gao, “Glipv2: Unifying localization and vision-language understanding,” _Advances in Neural Information Processing Systems_, vol.35, pp. 36 067–36 080, 2022. +* [43] J.-B. 
Alayrac, J.Donahue, P.Luc, A.Miech, I.Barr, Y.Hasson, K.Lenc, A.Mensch, K.Millican, M.Reynolds _et al._, “Flamingo: a visual language model for few-shot learning,” _Advances in Neural Information Processing Systems_, vol.35, pp. 23 716–23 736, 2022. +* [44] H.Luo, L.Ji, M.Zhong, Y.Chen, W.Lei, N.Duan, and T.Li, “Clip4clip: An empirical study of clip for end to end video clip retrieval and captioning,” _Neurocomputing_, vol. 508, pp. 293–304, 2022. +* [45] D.Li, J.Li, H.Li, J.C. Niebles, and S.C. Hoi, “Align and prompt: Video-and-language pre-training with entity prompts,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 4953–4963. +* [46] H.Xu, G.Ghosh, P.-Y. Huang, D.Okhonko, A.Aghajanyan, F.Metze, L.Zettlemoyer, and C.Feichtenhofer, “Videoclip: Contrastive pre-training for zero-shot video-text understanding,” in _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_, 2021, pp. 6787–6800. +* [47] C.Ju, T.Han, K.Zheng, Y.Zhang, and W.Xie, “Prompting visual-language models for efficient video understanding,” in _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXV_.Springer, 2022, pp. 105–124. +* [48] H.Fei, S.Wu, Y.Ren, and M.Zhang, “Matching structure for dual learning,” in _Proceedings of the International Conference on Machine Learning, ICML_, 2022, pp. 6373–6391. +* [49] Z.Chen and L.Jing, “Multimodal semi-supervised learning for 3d objects,” in _The British Machine Vision Conference (BMVC)_, 2021. +* [50] X.Yan, H.Zhan, C.Zheng, J.Gao, R.Zhang, S.Cui, and Z.Li, “Let images give you more: Point cloud cross-modal training for shape analysis,” in _Advances in Neural Information Processing Systems_. +* [51] Y.Ma, X.Zhang, X.Sun, J.Ji, H.Wang, G.Jiang, W.Zhuang, and R.Ji, “X-mesh: Towards fast and accurate text-driven 3d stylization via dynamic textual guidance,” _arXiv preprint arXiv:2303.15764_, 2023. 
+* [52] H. Wang, J. Tang, J. Ji, X. Sun, R. Zhang, Y. Ma, M. Zhao, L. Li, T. Lv, R. Ji _et al._, “Beyond first impressions: Integrating joint multi-modal cues for comprehensive 3d representation,” _arXiv preprint arXiv:2308.02982_, 2023.
+* [53] R. Zhang, Z. Guo, W. Zhang, K. Li, X. Miao, B. Cui, Y. Qiao, P. Gao, and H. Li, “Pointclip: Point cloud understanding by clip,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2022, pp. 8552–8562.
+* [54] X. Zhu, R. Zhang, B. He, Z. Guo, Z. Zeng, Z. Qin, S. Zhang, and P. Gao, “Pointclip v2: Prompting clip and gpt for powerful 3d open-world learning,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2023, pp. 2639–2650.
+* [55] L. Xue, N. Yu, S. Zhang, J. Li, R. Martín-Martín, J. Wu, C. Xiong, R. Xu, J. C. Niebles, and S. Savarese, “Ulip-2: Towards scalable multimodal pre-training for 3d understanding,” _arXiv preprint arXiv:2305.08275_, 2023.
+* [56] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever _et al._, “Improving language understanding by generative pre-training,” 2018.
+* [57] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever _et al._, “Language models are unsupervised multitask learners,” _OpenAI blog_, vol. 1, no. 8, p. 9, 2019.
+* [58] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig, “Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing,” _ACM Computing Surveys_, vol. 55, no. 9, pp. 1–35, 2023.
+* [59] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly, “Parameter-efficient transfer learning for nlp,” in _International Conference on Machine Learning_. PMLR, 2019, pp. 2790–2799.
+* [60] F. Petroni, T. Rocktäschel, S. Riedel, P. Lewis, A. Bakhtin, Y. Wu, and A. Miller, “Language models as knowledge bases?” in _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_. Association for Computational Linguistics, 2019.
+* [61] T. Schick and H. Schütze, “Exploiting cloze-questions for few-shot text classification and natural language inference,” in _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_, 2021, pp. 255–269.
+* [62] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray _et al._, “Training language models to follow instructions with human feedback,” _Advances in Neural Information Processing Systems_, vol. 35, pp. 27730–27744, 2022.
+* [63] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar _et al._, “Llama: Open and efficient foundation language models,” _arXiv preprint arXiv:2302.13971_, 2023.
+* [64] D. Zhu, J. Chen, X. Shen, X. Li, and M. Elhoseiny, “Minigpt-4: Enhancing vision-language understanding with advanced large language models,” _arXiv preprint arXiv:2304.10592_, 2023.
+* [65] H. Liu, C. Li, Q. Wu, and Y. J. Lee, “Visual instruction tuning,” _arXiv preprint arXiv:2304.08485_, 2023.
+* [66] R. Zhang, J. Han, A. Zhou, X. Hu, S. Yan, P. Lu, H. Li, P. Gao, and Y. Qiao, “Llama-adapter: Efficient fine-tuning of language models with zero-init attention,” _arXiv preprint arXiv:2303.16199_, 2023.
+* [67] R. Xu, X. Wang, T. Wang, Y. Chen, J. Pang, and D. Lin, “Pointllm: Empowering large language models to understand point clouds,” _arXiv preprint arXiv:2308.16911_, 2023.
+* [68] Z. Guo, R. Zhang, X. Zhu, Y. Tang, X. Ma, J. Han, K. Chen, P. Gao, X. Li, H. Li _et al._, “Point-bind & point-llm: Aligning point cloud with multi-modality for 3d understanding, generation, and instruction following,” _arXiv preprint arXiv:2309.00615_, 2023.
+* [69] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su _et al._, “Shapenet: An information-rich 3d model repository,” _arXiv preprint arXiv:1512.03012_, 2015.
+* [70] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, “Pointnet++: Deep hierarchical feature learning on point sets in a metric space,” _Advances in Neural Information Processing Systems_, vol. 30, 2017.
+* [71] J. L. Ba, J. R. Kiros, and G. E. Hinton, “Layer normalization,” 2016.
+* [72] I. O. Tolstikhin, N. Houlsby, A. Kolesnikov, L. Beyer, X. Zhai, T. Unterthiner, J. Yung, A. Steiner, D. Keysers, J. Uszkoreit _et al._, “Mlp-mixer: An all-mlp architecture for vision,” _Advances in Neural Information Processing Systems_, vol. 34, pp. 24261–24272, 2021.
+* [73] H. Zhao, L. Jiang, J. Jia, P. H. Torr, and V. Koltun, “Point transformer,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 16259–16268.
+* [74] H. Wang, J. Tang, J. Ji, X. Sun, R. Zhang, Y. Ma, M. Zhao, L. Li, Z. Zhao, T. Lv, and R. Ji, “Beyond first impressions: Integrating joint multi-modal cues for comprehensive 3d representation,” in _Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, Ottawa, ON, Canada, 29 October 2023 – 3 November 2023_. ACM, 2023, pp. 3403–3414. [Online]. Available: [https://doi.org/10.1145/3581783.3611767](https://doi.org/10.1145/3581783.3611767)
+* [75] T. Luo, C. Rockwell, H. Lee, and J. Johnson, “Scalable 3d captioning with pretrained models,” _arXiv preprint arXiv:2306.07279_, 2023.
+* [76] M. A. Uy, Q.-H. Pham, B.-S. Hua, T. Nguyen, and S.-K. Yeung, “Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2019, pp. 1588–1597.
+* [77] M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt, K. Ehsani, A. Kembhavi, and A. Farhadi, “Objaverse: A universe of annotated 3d objects,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023, pp. 13142–13153.
+* [78] N. Mu, A. Kirillov, D. Wagner, and S. Xie, “Slip: Self-supervision meets language-image pre-training,” in _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVI_. Springer, 2022, pp. 529–544.
+* [79] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez _et al._, “Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality,” _See https://vicuna.lmsys.org (accessed 14 April 2023)_, 2023.
+* [80] Z.-H. Lin, S.-Y. Huang, and Y.-C. F. Wang, “Convolution in the cloud: Learning deformable kernels in 3d graph convolution networks for point cloud analysis,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020, pp. 1800–1809.
+* [81] S. Lee, M. Jeon, I. Kim, Y. Xiong, and H. J. Kim, “Sagemix: Saliency-guided mixup for point clouds,” _Advances in Neural Information Processing Systems_, vol. 35, pp. 23580–23592, 2022.
+* [82] Y. Rao, J. Lu, and J. Zhou, “Spherical fractal convolutional neural networks for point cloud recognition,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019, pp. 452–460.
+* [83] C. Sun, Z. Zheng, X. Wang, M. Xu, and Y. Yang, “Self-supervised point cloud representation learning via separating mixed shapes,” _IEEE Transactions on Multimedia_, 2022.
+* [84] B. Wu, Y. Liu, B. Lang, and L. Huang, “Dgcnn: Disordered graph convolutional neural network based on the gaussian mixture model,” _Neurocomputing_, vol. 321, pp. 346–356, 2018.
+* [85] A. Hamdi, S. Giancola, and B. Ghanem, “Mvtn: Multi-view transformation network for 3d shape recognition,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2021, pp. 1–11.
+* [86] L. Fei-Fei, R. Fergus, and P. Perona, “Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories,” in _2004 Conference on Computer Vision and Pattern Recognition Workshop_. IEEE, 2004, pp. 178–178.