Activating Sparse Part Concepts for 3D Class Incremental Learning
Zhenya Tian, Jun Xiao*, Lupeng Liu, Haiyong Jiang*
School of Artificial Intelligence, University of Chinese Academy of Sciences
tianzhenya20@mails.ucas.ac.cn, xiaojun@ucas.ac.cn
liulupeng@ucas.ac.cn, haiyong.jiang@ucas.ac.cn
Abstract
This work tackles the challenge of 3D Class-Incremental Learning (CIL), where a model must learn to classify new 3D objects while retaining knowledge of previously learned classes. Existing methods often struggle with catastrophic forgetting, misclassifying old objects due to overreliance on shortcut local features. Our approach addresses this issue by learning a set of part concepts for part-aware features. Particularly, we only activate a small subset of part concepts for the feature representation of each part-aware feature. This facilitates better generalization across categories and mitigates catastrophic forgetting. We further improve the task-wise classification through a part relation-aware Transformer design. At last, we devise learnable affinities to fuse task-wise classification heads and avoid confusion among different tasks. We evaluate our method on three 3D CIL benchmarks, achieving state-of-the-art performance. Code is available at https://github.com/zenyatian/ILPC.
1. Introduction
A key aspect of human intelligence is the ability to continuously learn and adapt to new semantic concepts. This ability is crucial for 3D recognition in robotics and autonomous driving and mirrors Class-Incremental Learning (CIL) in machine learning [35, 36]. This work focuses on CIL for 3D recognition, where the network is trained task by task and each task contains 3D objects from different semantic categories. 3D CIL shares the same challenge as 2D CIL, namely catastrophic forgetting of old classes when adapting to a new task [3, 4, 8, 11, 39, 40, 47, 59-61]. Moreover, 3D CIL must also contend with texture-less shapes and unstructured points in its inputs, which give rise to shortcut features [44] that mislead shape classification. This problem further amplifies the catastrophic forgetting of 3D deep learning methods, e.g., [25, 26, 53].
2D CIL has been extensively studied and can be broadly categorized into two schemes: replay-based methods [3, 8, 20, 34, 40, 61] and dynamic network expansion-based methods [4, 32, 39, 47, 59, 60]. These methods achieve strong performance on 2D tasks and are a good starting point for 3D CIL. Existing 3D CIL methods [10, 12, 21, 33, 49, 54] mainly extend 2D CIL methods with 3D geometric structures, e.g., neighborhood-based feature aggregation. However, these methods still suffer from catastrophic forgetting. We attribute part of the catastrophic forgetting in 3D CIL to shortcut features [14, 16]. Particularly, 3D classifiers prioritize shortcut features for recognition, neglecting the importance of other shape parts and their overall composition. This can lead to misclassification of past tasks if new tasks share similar local features with old tasks. For instance, a 3D classifier that classifies a table perfectly in an old task may fail to classify chairs in a new task when it has learned a shortcut strategy relying on the legs.
This work tackles the limitations of 3D CIL with two key designs. First, we leverage sparsely activated part concepts for local part feature representation. This is because common part concepts are usually shared among different classes, ensuring good generalization across different tasks. For example, legs, planes, and bases learned in an old task already provide sufficiently discriminative information and make adaptation to novel tasks easier, e.g., chair recognition (see Fig. 1). An analysis of the generalization of part concepts is presented in Fig. 3. Second, the catastrophic forgetting of a model is mainly caused by confusion among different task heads. Therefore, learning a dynamic mixture of task-wise classification heads can relieve catastrophic forgetting.
To fulfill the above-mentioned observations, we address 3D Incremental Learning with Part Concepts Awareness, called ILPC. The overview is shown in Fig. 2. First, ILPC learns a set of shared part concepts as representative local geometric features among different classes. Then, we selectively activate related part concepts according to the similarity between a part concept and a geometric feature. Activated part concepts span the feature space of an input

Figure 1. Classes in different tasks can share a set of part concepts, which facilitate easy recognition with part compositions and their relations.
and are used to produce part-aware features for incremental learning. We further encode mutual relationships between part-aware features within each task using a task-wise Transformer classifier. To avoid confusion among different classification heads, we dynamically update a learnable affinity to fuse task-wise classification heads.
In conclusion, our contribution can be summarized as follows:
- A 3D CIL framework based on sparsely activated part concepts.
- Learnable affinities for fusing multi-task classification head.
- Extensive experiments to demonstrate the superiority over other baseline methods on three 3D CIL benchmarks.
2. Related Work
2.1. 2D Class-Incremental Learning
Class-incremental learning for image recognition has received considerable attention. Approaches mainly fall into two schemes: replay-based methods and dynamic network-based methods. Replay-based methods cache extra exemplars for early-stage task rehearsal during model updating, enabling the model to retain old knowledge while learning new concepts. Due to a limited memory buffer, representative exemplars can be selected from old tasks [2, 19, 29, 31, 34], and storage-efficient strategies have also been explored [3, 20, 22, 23, 40, 58]. Dynamic network-based methods [1, 4, 9, 17, 18, 24, 28, 32, 39, 45, 47,
50, 59, 60] design additional model components to fit each task while freezing model parameters for previous tasks. Additional model components can be a dynamically expanded network [17, 18, 24, 45, 50, 60], duplicate subnetworks [1, 9, 32, 47], and a task-specific attention module [4]. These 2D methods provide inspiration for 3D CIL.
2.2. 3D Class-Incremental Learning
3D Class-Incremental Learning for point clouds is important for autonomous driving and indoor robotics but remains rarely explored. Dong et al. [12] introduce 3D geometric information to learn distinctive 3D features in each class and correct biased weights caused by class imbalance to avoid forgetting. Liu et al. [21] propose a layer-wise task-shared knowledge factorization to reduce catastrophic forgetting. Chowdhury et al. [10] build a common set of basic descriptions to enhance the adaptability of the model to open-world data. Zhao et al. [54] propose a static-dynamic co-teaching technique with one teacher only preserving previously learned knowledge and the other consistently learning new knowledge. Tan et al. [33] decompose the learning tasks into a base task and new tasks, so the model can adapt to new tasks with task-specific layers. Yang et al. [49] utilize geometric information of point clouds to capture point-wise feature relations. In this work, we introduce part concepts and part compositions as prompts to help mitigate catastrophic forgetting in CIL.
2.3. Part-based 3D Recognition
3D shape parts play a crucial role in 3D object recognition. Existing works can be classified into supervised part segmentation and unsupervised part discovery. Supervised methods [46, 57] require annotated part instances and can achieve better performance. On the other hand, unsupervised methods [10, 55, 56] explore the generalization of 3D shape parts. Weng et al. [43, 44] devise class-specific part prototypes for open-set recognition and shared part prototypes for novel class discovery, respectively. Inspired by these works, we present sparsely activated part concepts as shared knowledge for novel tasks.
3. Methodology
This work aims at 3D class incremental learning, where the model is trained on a sequence of tasks and each task contains novel classes. The overall framework of the method is shown in Fig. 2. In light of the benefits of part compositions in countering 3D feature shortcuts, we build the method on a part concept-based method [43] (see Sec. 3.1). Then we present a selection mechanism and only use the most important part concepts to avoid confusion between novel and old classes (see Sec. 3.2). Afterward, we feed part features to a task-specific classification head that learns their mutual relations for task-aware shape classification (see Sec. 3.3).

Figure 2. The overall architecture. First, training dataset $\mathcal{D}_t$ for task $t$ and exemplar memory $\mathcal{E}_t$ for previous tasks are used to learn point-wise features from point cloud $x$ , which we further group into part-wise features $Z_l$ . By representing part features with a sparse set of part concepts from $P$ , we can construct part composite features $Z_p$ from $Z_l$ according to the concept activation map $S$ . Afterward, a task-specific classification head $f_t(\cdot)$ leverages part composite features $Z_p$ for mutual part relations for task-wise 3D recognition. At last, we fuse predictions from all task classification heads with learnable affinity as a unified classifier to mitigate the task bias.
Then we fuse task-specific classification heads for different tasks as a unified classifier (see Sec. 3.4).
3.1. Preliminaries
Problem Statement. 3D CIL learns a classifier from a sequence of tasks with different sets of classes. For the $t^{th}$ incremental task, the model takes a training dataset $\mathcal{D}_t = \{(x_t^i, y_t^i)\}_{i=1}^{N_d}$, where $x_t^i \in \mathbb{R}^{L \times 3}$ denotes an input sample and $y_t^i \in Y_t$ denotes its semantic label. Let $Y_t$ be the semantic label set of task $t$; then we have $Y_t \cap Y_i = \emptyset$ for any $i < t$. Due to data privacy and storage constraints, the full datasets of previous tasks are inaccessible, and only a small number of instances from previous tasks are kept as the exemplar set $\mathcal{E}_t \subseteq \cup_{i=1}^{t-1} \mathcal{D}_i$. The model is then trained on $\mathcal{D}_t \cup \mathcal{E}_t$ and evaluated on the test set of all seen classes.
3D Part Concept Learning. Part concepts are important to the analysis of 3D shapes. In this work, we adopt DNIK [43] to learn 3D part concepts. Given a 3D point cloud, DNIK extracts point-wise features with PointNet [25] and then uses farthest point sampling (FPS) to sample $N_p$ points. Based on the sampled points, we group $K$ neighboring points as a set of parts and accumulate their point features as part-level features $Z_l \in \mathbb{R}^{N_p \times D}$. DNIK constructs a projection space with a part codebook $P = \{P^m\}_{m=1}^{M}$, with each part concept $P^m \in \mathbb{R}^D$ representing the prototypical features of a 3D part. We project
part-level features $Z_l$ into the space spanned by $P^m$ as part composition features $Z_p$:

$$S = \operatorname{softmax}\left(-\varphi(Z_l, P)\right), \qquad Z_p = S P, \quad (1)$$

where $S \in \mathbb{R}^{N_p \times M}$ denotes the part activation map and the distance function $\varphi(\cdot, \cdot)$ compares part-level features and part concepts, measuring the distance in the hyperbolic space. We can sum $S$ along the first dimension and normalize the accumulated part activations with the $L_2$ norm as a distribution $M$ over different part concepts. During optimization, DNIK applies the supervised contrastive loss (denoted as $\mathcal{L}_c$) to encourage part activations to be similar for same-category shapes and dissimilar for different-category shapes.
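As a concrete illustration, this projection can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions: a Euclidean distance with an exponential kernel stands in for the hyperbolic distance $\varphi$, and all function names are illustrative, not from the paper's implementation.

```python
import numpy as np

def part_composition(Z_l, P, tau=1.0):
    """Project part-level features Z_l (N_p x D) onto a part codebook P (M x D).

    Returns the activation map S (N_p x M) and composite features Z_p = S @ P.
    A Euclidean distance plus normalization stands in for the hyperbolic metric.
    """
    d = np.linalg.norm(Z_l[:, None, :] - P[None, :, :], axis=-1)  # (N_p, M) distances
    S = np.exp(-d / tau)
    S = S / S.sum(axis=1, keepdims=True)  # normalize activations per part
    Z_p = S @ P                           # features in the span of the concepts
    return S, Z_p
```

Each row of `S` weights the codebook entries for one part, so a part close to a concept activates it most strongly.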
3.2. Learning Sparse Part Concepts for 3D CIL
Sparsely Activated Part Concepts. During incremental learning, different tasks may share some part concepts in common. By opting for part concepts that resonate most with the current task, the part codebook can leverage the homogeneity between part concepts in the new task and those learned in previous tasks and stored in the codebook. In Fig. 3, we demonstrate that part concepts learned on a set of old classes are already discriminative enough for both old classes and unseen new classes. Therefore learned part con-

Figure 3. Similarity histograms of a class and the other ones. We select five classes from old tasks and five classes from new tasks and learn the part concepts from old tasks. First, we take the average of part distribution $M$ within each class as the class-wise part distribution. Then we calculate the cosine similarity between each sample of one class and its class-wise part distribution and plot the similarity distribution in green. We also plot that of samples of the other classes and their class-wise part distribution in red.
cepts on old tasks can also ensure sufficient discriminability on new tasks without any further training.
However, part activations for different classes usually vary, and constructing part composite features with all part concepts may lead to unnecessary confusion. To this end, we introduce a selection mechanism to avoid the interference of irrelevant part concepts and an updatable part codebook to meet the demands of continual learning:

$$Z_p = \left(\operatorname{TopK}(S) \odot S\right) P, \quad (2)$$

where $\operatorname{TopK}(\cdot)$ is a one-hot mask that sets all elements of the output vector to zero except for those with the largest $k$ activations among different part concepts, and $\odot$ denotes element-wise multiplication.
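The selection step can be sketched as follows (an illustrative NumPy version with our own naming, not the paper's implementation):

```python
import numpy as np

def topk_mask(S, k):
    """Keep the k largest concept activations per part and zero out the rest,
    mimicking a TopK one-hot mask applied to the activation map S."""
    idx = np.argsort(S, axis=1)[:, -k:]       # top-k concept indices per part
    mask = np.zeros_like(S)
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return mask * S
```

Only the `k` strongest concepts survive per part, so unrelated codebook entries cannot contaminate the composite features.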
Regularization on Part Composite Features. Compared to the part loss in [43], we take a step further and devise a pseudo part label loss to encourage similar parts to be represented identically. The loss is based on the similarity of activated part maps. If the cosine similarity between the activated concept maps $S_i$, $S_j$ of two parts is greater than a threshold (0.7 in our implementation), they are most likely the same part, which we denote as $\diamond(S_i, S_j) = 1$; otherwise $\diamond(S_i, S_j) = 0$. If two parts share the same geometry, their part composite features should be as similar as possible; on the other hand, parts with different geometry should have different composite features. Based on this observation, we construct a pseudo-part loss as follows:

$$\mathcal{L}_{pp} = \frac{1}{|\mathcal{C}_p|} \sum_{i \in \mathcal{C}_p} \left(1 - \cos(Z_{p_i}, \overline{Z_{p_i}})\right), \quad (3)$$

where $\cos(\cdot)$ calculates the cosine similarity, $\mathcal{C}_p$ collects all FPS parts from $Q$, and $Z_{p_i}$ indexes the part composite features of part $i$. $\overline{Z_{p_i}}$ takes the mean of the part composition features of all parts $j$ with a similar activated concept map (i.e., $\diamond(S_i, S_j) = 1$).
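A direct reading of this regularizer can be sketched as below. This is our own simplified version, which only pulls features toward their pseudo-group mean; names are illustrative.

```python
import numpy as np

def cosine(a, b, eps=1e-8):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def pseudo_part_loss(S, Z_p, thresh=0.7):
    """For each part i, collect parts whose activation maps agree
    (cosine similarity above thresh), then penalize the cosine distance
    of Z_p[i] to that pseudo-label group's mean feature."""
    n = len(S)
    loss = 0.0
    for i in range(n):
        group = [j for j in range(n) if cosine(S[i], S[j]) > thresh]
        mean = Z_p[group].mean(axis=0)
        loss += 1.0 - cosine(Z_p[i], mean)
    return loss / n
```

When all parts in a group already share the same composite feature, the loss vanishes, which is the intended fixed point.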
3.3. Part-Aware Task-Specific Classification Head
After we obtain part composition features $Z_p$, we build a task-specific classification head for 3D recognition. Each task-specific classification head has independent parameters and only predicts the categories in its task. Prior works [48, 51, 52] have shown that self-attention can learn spatial relationships between local patches. So, after obtaining the input part composition features $Z_p$ and position embeddings $Z_c$, a classification head uses a shallow self-attention Transformer [13] with three layers to learn mutual relations between different shape parts as follows:

$$F_0 = \operatorname{CAT}(c, Z_p) + Z_c, \quad F_l' = \operatorname{MSA}(\operatorname{LN}(F_{l-1})) + F_{l-1}, \quad F_l = \operatorname{MLP}(\operatorname{LN}(F_l')) + F_l', \quad (4)$$

where $c \in \mathbb{R}^{1 \times D}$ denotes a learnable class token, $Z_c \in \mathbb{R}^{(N_p + 1) \times D}$ is a linearly projected centroid position embedding of the shape parts, and the centroid embedding is randomly initialized for the class token. We use $F_*$ to mark the intermediate features of each Transformer layer. $\operatorname{CAT}$ concatenates the class token $c$ and composition features $Z_p$. We compute the LayerNorm (LN) of the concatenated features as keys, values, and queries of a multi-head self-attention module (MSA) with separate projection matrices for keys, values, and queries. We further apply a residual connection followed by a feed-forward network (MLP). The MSA and MLP modules are executed two times. At last, we apply a linear layer and a softmax function to output the likelihood of each class in a task. The task-specific classification head $f_t(\cdot)$ for task $t$ is lightweight and can be adapted to easily

Figure 4. An illustration of task bias when fusing results of different tasks. Without task affinity, classes in different tasks may interfere with each other and be biased towards classes with more training examples.
fit different new tasks with a small scale of parameters and memory consumption.
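For readers who prefer code, the head's structure can be sketched as a pre-norm attention stack. This is our own single-head simplification: the multi-head split, the MLP sub-layer, and the final linear/softmax are omitted, and all names are illustrative.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def attention_block(F, Wq, Wk, Wv):
    """One pre-norm self-attention layer with a residual connection."""
    X = layer_norm(F)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # part-to-part relations
    return F + A @ V

def head_forward(c, Z_p, Z_c, layers):
    """Prepend the class token c to part features Z_p, add position
    embeddings Z_c, run the attention layers, and return the class-token
    feature used for classification."""
    F = np.vstack([c, Z_p]) + Z_c   # CAT(c, Z_p) + positions
    for Wq, Wk, Wv in layers:
        F = attention_block(F, Wq, Wk, Wv)
    return F[0]
```

The attention matrix `A` is exactly where mutual part relations are encoded: each row mixes the features of all parts when updating one token.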
3.4. Learnable Affinities for Multi-Task Fusion
For the training of task $t$, we freeze the part codebook-based backbone and the classifier heads $\{f_i(\cdot)\}_{i=1}^{t-1}$ of the previous tasks and train a new classifier head $f_t(\cdot)$ for the current task to recognize the new classes in task $t$. A vanilla approach to combining $\{f_i(\cdot)\}_{i=1}^{t-1}$ and $f_t(\cdot)$ can be written as:

$$\hat{y} = \operatorname{CAT}\left(f_1(Z_p), \ldots, f_{t-1}(Z_p), f_t(Z_p)\right). \quad (5)$$

However, this fusion method is usually biased towards the classes of task $t$, as there are very few stored examples for the learned classes of previous tasks. For example, in Fig. 4, a bench may be misclassified as a chair during the training of task 2, resulting in forgetting the old classes in task 1. To account for the mutual influences of classes from different tasks, we introduce a learnable affinity term $\alpha_{k,i}$ to adjust the relative importance of predictions between task $i$ and task $k$. For example, in Fig. 4, we learn an affinity $\alpha_{2,1}$ to balance the relative importance of the predictions of task 1 and task 2. To ensure the correct prediction, $\alpha_{2,1}$ is adjusted to a value greater than 1, leading to the correct prediction of the bench shape. The overall classifier head fusion can therefore be formulated as follows:

$$\hat{y} = \operatorname{CAT}\left(\alpha_{t,1} f_1(Z_p), \ldots, \alpha_{t,t-1} f_{t-1}(Z_p), f_t(Z_p)\right). \quad (6)$$
The overall objective sums a standard cross-entropy loss $\mathcal{L}_{ce}$ for classification, the contrastive loss $\mathcal{L}_c$, and the pseudo part loss $\mathcal{L}_{pp}$ so that part concepts can be generalized and properly learned. The weight terms $\lambda_*$ balancing the three losses are set to (1.0, 0.1, 0.3).
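The affinity-weighted fusion can be sketched as follows (illustrative names; a single scalar affinity per previous head, mirroring the $\alpha_{2,1}$ example above):

```python
import numpy as np

def fuse_heads(task_logits, alpha):
    """Concatenate per-task head outputs into one prediction, scaling each
    head's logits by its affinity before taking the arg-max class index."""
    fused = np.concatenate([a * l for a, l in zip(alpha, task_logits)])
    return int(np.argmax(fused))
```

With equal affinities the newer head tends to dominate; raising the affinity of the task-1 head above 1 can restore the old-class prediction, as in Fig. 4.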
4. Experiments
In this section, we conduct comparisons with state-of-the-art methods on three different benchmarks and ablate core designs of the proposed method.
4.1. Experimental Setup
Evaluation Datasets. We evaluate the method on three datasets: ShapeNetCore [7], Co3D [30], and nuScenes [5]. ShapeNetCore is composed of 51,127 3D CAD models from 55 common object categories. The total number of incremental tasks is set to 7, with the first task containing 25 classes and each subsequent task containing 5 classes. Co3D consists of 18,619 objects in 50 classes. The total number of incremental tasks is set to 6, with the initial task having 25 classes and each subsequent task having 5 classes. nuScenes contains 40k annotated point cloud frames in 23 classes. We extract foreground instance point clouds from each frame using the instance labels. The total number of incremental tasks is set to 5, with the first task containing 11 classes and each subsequent task having 3 classes.
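The three protocols above share one pattern, a larger base task followed by equal-size increments; a small helper (our own naming) reproduces the task counts:

```python
def incremental_splits(num_classes, base, inc):
    """Partition class indices into a base task plus fixed-size increments."""
    classes = list(range(num_classes))
    tasks = [classes[:base]]
    for start in range(base, num_classes, inc):
        tasks.append(classes[start:start + inc])
    return tasks
```

For example, ShapeNetCore (55 classes, 25 base, 5 per increment) yields 7 tasks, Co3D yields 6, and nuScenes (23 classes, 11 base, 3 per increment) yields 5, matching the settings above.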
Evaluation Metrics. Following other baseline methods [29, 38, 39, 47, 59], we use the top-1 mean accuracy [6, 42] of the predictions as the evaluation metric for the comparison experiments. We report the mean accuracy over all classes after the final task as the last accuracy. In addition, we calculate the mean accuracy over the seen classes at each task and take the average over tasks as the avg accuracy.
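The two metrics reduce to the following sketch (illustrative names; `acc_per_stage[t]` is the mean accuracy over all classes seen after task `t`):

```python
def last_and_avg_accuracy(acc_per_stage):
    """'Last' is the accuracy after the final task; 'Avg' averages the
    seen-class accuracies over all incremental stages."""
    last = acc_per_stage[-1]
    avg = sum(acc_per_stage) / len(acc_per_stage)
    return last, avg
```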
Baselines. We compare our method with typical CIL methods, including replay-based methods (e.g., ER [11] and iCaRL [29]), dynamic network-based methods (e.g., DER [47], FOSTER [39], and MEMO [59]), and other recent methods (e.g., DS-AL [62] and DGR [15]). We adopt the same backbone for all methods for a fair comparison. All of the above methods release their code; we implement their 3D CIL versions based on the released code and follow the same settings as the original papers.
4.2. Comparisons
In Tab. 1, we compare ILPC with competing methods on the Co3D [30], ShapeNet [7], and nuScenes [5] datasets. For a fair comparison, all baseline methods employ PointNet [25] as the backbone to obtain local features of a point cloud and are trained with the same data augmentation mechanism. The dynamic network-based methods, e.g., DER, FOSTER, and MEMO, achieve better results than the replay-based methods. This is because these methods expand new modules to learn knowledge in new tasks and freeze old modules to retain learned knowledge, while replay-based methods only select a set of representative exemplars for 3D CIL, leading to more serious catastrophic forgetting. We can observe that our method consistently outperforms other methods in last-task accuracy and average accuracy among all tasks on the three bench

Figure 5. Task-wise performance on each incremental state for different methods. The numbers are the mean accuracy of classes in each task.
Table 1. Comparison results on Co3D, ShapeNet, and nuScenes dataset.
| Method | Co3D Last | Co3D Avg | nuScenes Last | nuScenes Avg | ShapeNet Last | ShapeNet Avg |
| ER [11] | 62.82 | 69.76 | 67.40 | 75.96 | 74.10 | 78.68 |
| iCaRL [29] | 61.03 | 69.36 | 55.20 | 69.12 | 74.90 | 78.46 |
| DER [47] | 69.75 | 76.72 | 77.24 | 85.67 | 80.32 | 83.82 |
| FOSTER [39] | 74.60 | 80.18 | 78.28 | 82.65 | 77.65 | 83.40 |
| MEMO [59] | 70.27 | 77.12 | 76.92 | 85.26 | 77.31 | 82.21 |
| DS-AL [62] | 78.87 | 81.74 | 76.56 | 86.11 | 80.96 | 84.74 |
| DGR [15] | 72.06 | 76.22 | 74.58 | 80.61 | 78.88 | 83.67 |
| Ours | 81.18 | 86.37 | 82.44 | 87.27 | 82.27 | 86.08 |
| Improvement | +2.31 | +4.63 | +4.16 | +1.16 | +1.31 | +1.34 |
mark datasets. For ShapeNet, our method outperforms the runner-up method by 1%-2% on the last-task accuracy. The improvement on Co3D and nuScenes is much greater, reaching 4%-6%. The results demonstrate that ILPC achieves superior performance by leveraging part concept compositions and alleviating catastrophic forgetting for more robust 3D recognition.
Fig. 5 demonstrates the task-wise performance of different models. Our proposed method retains the performance of previous tasks while performing well on the new task. In contrast, the baseline models adapt to new tasks but forget the knowledge gained from previous tasks. By comparing the columns with the same task label, we can see that our method has a smaller decrease in mean class accuracy on most tasks compared with other methods. In particular, for the classes in task 1, the mean class accuracy of our method drops 13.2%, while the performances of the baseline methods drop 26.95% and 21.05%, respectively. This evidence demonstrates that our method performs better in maintaining learned knowledge and mitigating forgetting.
4.3. Ablation Study
We first analyze key components in our method and then evaluate the impact of different backbones. At last, we report results for different CIL settings and few-shot settings. All results are reported on the Co3D dataset.
Model Components. Tab. 2 demonstrates the effectiveness of different components. The baseline employs PointNet as the backbone for point-wise feature learning and then uses max-pooling to obtain global features. The global features are fed to a linear layer followed by a softmax for task-specific 3D classification. At last, the baseline fuses predictions from all classification heads with Eq. 5. The component w/ PC augments the baseline by grouping FPS part features and learning a part concept codebook for generalizable part composite feature encoding (see the first two rows). Results show this component significantly increases the overall performance, by about 5.1% and 3.9% in the last accuracy and the avg accuracy, respectively. Even when we add this component (w/ PC) to the baseline with a Transformer classification head and affinity fusion (see rows ⑤ and ⑥), the accuracy also improves, by about 3.1% and 6.9%, indicating that learned part concepts benefit 3D CIL recognition.
The Transformer-based classification head (i.e., the Transformer in Eq. 4) also plays a critical role. For example, adding the component to the baseline and to the baseline with an affinity fusion module improves the performance by large margins (7.5% and 10.1% for rows ① and ③, and 18.8% and 13.6% for rows ④ and ⑤). This suggests that learning mutual part relations with a Transformer is effective.
The role of affinity-based fusion (i.e., Eq. 5 → Eq. 6) stands out when the Transformer-based classification head is used. With the classification head of the baseline, adding affinity-based fusion (see rows ① and ④) leads to slight improvements (3.6% and 4.1%). However, combining the Transformer head and affinity-based fusion (see rows ① and ⑤) enhances the performance by
Table 2. Ablation experiments on model components. PC adds the part codebook for part concept learning. The second column replaces a mean pooling with Eq. 4 as the classification head.
| | w/ PC | Eq. 4 | Eq. 5→6 | $\mathcal{L}_c$ | $\mathcal{L}_{pp}$ | Last | Avg |
| ① | | | | | | 53.35 | 60.72 |
| ② | ✓ | | | | | 58.42 | 64.60 |
| ③ | | ✓ | | | | 60.85 | 70.80 |
| ④ | | | ✓ | | | 56.99 | 64.88 |
| ⑤ | | ✓ | ✓ | | | 75.81 | 78.48 |
| ⑥ | ✓ | ✓ | ✓ | | | 78.93 | 85.34 |
| ⑦ | ✓ | ✓ | ✓ | ✓ | | 79.97 | 85.73 |
| ⑧ | ✓ | ✓ | ✓ | | ✓ | 80.36 | 85.87 |
| ⑨ | ✓ | ✓ | ✓ | ✓ | ✓ | 81.18 | 86.37 |
Table 3. Results on different TopK values.
| TopK ratio | Last | Avg |
| 0.2 | 78.87 | 84.88 |
| 0.4 | 80.04 | 85.77 |
| 0.6 | 81.18 | 86.37 |
| 0.8 | 79.32 | 85.45 |
| 1.0 | 78.18 | 84.32 |
(22.5% and 17.8%). Therefore, we can conclude that affinity-based fusion properly adjusts the relative importance of different tasks for better performance. Moreover, the part relation-based classification head and affinity-based fusion mutually reinforce each other for a significant overall result.
The contrastive loss $\mathcal{L}_c$ encourages the model to learn more diverse part concepts and raises the performance (1.04% and 0.41%). The pseudo part loss $\mathcal{L}_{pp}$ is also helpful for 3D incremental learning (1.43% and 0.53%). By combining all of the above core components, the method achieves the best result.
We report the results for different $K$ in Tab. 3. We observe that if we use the full codebook, some useless part concepts bring negative impacts on the results. On the contrary, if we select too few part concepts, the geometric information carried by the part features is not sufficient for precise classification.
Different Backbones. To show the influence of different backbones, we further conduct comparisons on three popular 3D backbones: PointNet++ [26], DGCNN [41], and PointNeXt [27]. We replace the backbone of the compared methods with the above network architectures and evaluate their performance on the 3D CIL task. The results are shown in Tab. 4. The final accuracy of each baseline does not vary much across backbones, which indicates that designing a better network backbone alone cannot mitigate catastrophic forgetting. In contrast, our method surpasses all the baselines
Table 4. Results on different backbones.
| Method | PointNet++ Last | PointNet++ Avg | DGCNN Last | DGCNN Avg | PointNeXt Last | PointNeXt Avg |
| iCaRL | 76.33 | 81.59 | 74.36 | 81.69 | 73.33 | 81.09 |
| DER | 80.02 | 86.50 | 79.56 | 85.11 | 80.48 | 85.58 |
| MEMO | 79.68 | 84.47 | 77.71 | 83.88 | 78.35 | 83.84 |
| Ours | 82.97 | 86.60 | 82.58 | 87.09 | 82.79 | 86.34 |
Table 5. Results on different incremental settings. We denote each setting as (#classes in the first task)-(#classes in each subsequent task).
| Method | 5-5 Last | 5-5 Avg | 10-5 Last | 10-5 Avg | 10-10 Last | 10-10 Avg |
| iCaRL | 55.77 | 63.81 | 55.14 | 64.76 | 57.91 | 67.98 |
| DER | 59.24 | 68.67 | 58.26 | 67.43 | 64.03 | 67.25 |
| MEMO | 56.79 | 65.73 | 58.89 | 67.72 | 60.97 | 70.67 |
| Ours | 63.34 | 73.99 | 66.74 | 77.10 | 75.75 | 78.04 |
on final accuracy and average accuracy. This superiority is primarily attributed to our dedicated network designs, which are more discriminative for mitigating catastrophic forgetting, rather than to the choice of backbone.
3D CIL Settings. We report experiments with different numbers of base classes and incremental classes in Tab. 5. Our method still outperforms the other methods even when the number of classes in the first task and in each subsequent task changes, validating the robustness of our method in mitigating catastrophic forgetting across experimental setups. By comparing the 5-5 and 10-5 settings, we can see that the final accuracy increases when more base classes are given. This is because the first task has more data for learning a better feature representation, leading to good results in the final evaluation. The 10-10 setting achieves a much higher final accuracy than 10-5 because fewer learning tasks lead to less confusion between classes from different tasks and less forgetting of knowledge.
4.4. More Analysis
The Confusion Matrices. The confusion matrices of the final task are shown in Fig. 6 for different methods. The first 25 classes are base classes, and the remaining 25 classes are incremental classes. In these figures, brighter colors indicate higher accuracy while darker colors denote lower accuracy. We can see that the diagonal of our method shows brighter colors, while ER and DER perform worse, especially for the first 25 classes. Both ER and DER are more likely to produce wrong predictions above the diag-

Figure 6. Visualization of the confusion matrices after the last incremental task (DER, iCaRL, and ours).

Figure 7. Visualization of the embedding spaces of classes between two different tasks with 2D t-SNE for different methods (DER, iCaRL, and ours). (a) Five classes in an old task. (b) Five classes in an old task & another five classes in a new task. The first row shows the embedding of the five old classes, and the second row shows the embedding of the five new classes.
onal line than ours, indicating that these two methods bias the prediction towards classes from later tasks. The confusion matrices of ER also present more bright off-diagonal dots than those of DER and ours, suggesting that ER is more likely to classify new classes from later tasks as old classes from early tasks.
Feature Embedding. We visualize the embedding space in Fig. 7 with t-SNE [37], where the learned features of five classes from two different tasks are shown in different colors. Compared to other baselines, ILPC preserves a relatively compact embedding of old classes from previous tasks while pushing away the embedding regions of new classes to a greater extent. ILPC can thus discriminate classes from different tasks better, avoiding knowledge forgetting.
5. Conclusion
This work introduces a novel framework for 3D CIL that leverages part concepts and part-wise relations. These concepts, widely shared across different tasks, improve the model's ability to recognize shapes consistently. Additionally, learning task-wise affinities for classification head fusion minimizes task bias. Extensive experiments demonstrate that our method outperforms all baselines on all three benchmarks. This work opens doors for further research on fine-grained concept learning in 3D data.
While our method achieves strong performance, some limitations exist. Our approach does not explicitly enforce human-like part segmentation, potentially leading to a small percentage of learned part concepts that are not easily interpretable by humans. We believe incorporating a reconstruction task during training could encourage the model to learn more semantically meaningful parts. Another interesting direction is to investigate how pre-trained models can be adapted to different classes without too much training. These avenues present exciting opportunities for future research.
Acknowledgement
We thank all the anonymous reviewers for their insightful comments. We also thank Tingyu Weng for his helpful suggestions. This work was partially supported by the National Natural Science Foundation of China (62271467, 62476262, 62206263, 62306297, 62306296), Beijing Nova Program, Beijing Natural Science Foundation (4242053, L242096), and China Postdoctoral Science Foundation (2022T150639).
References
[1] Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3366-3375, 2017. 2
[2] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. Advances in neural information processing systems, 32, 2019. 2
[3] Jihwan Bang, Heesu Kim, YoungJoon Yoo, Jung-Woo Ha, and Jonghyun Choi. Rainbow memory: Continual learning with a memory of diverse samples. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8218-8227, 2021. 1, 2
[4] Prashant Shivaram Bhat, Bahram Zonooz, and Elahe Arani. Task-aware information routing from common representation space in lifelong learning. In The Eleventh International Conference on Learning Representations, 2022. 1, 2
[5] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621-11631, 2020. 5
[6] Stefano Ceri, Alessandro Bozzon, Marco Brambilla, Emanuele Della Valle, Piero Fraternali, and Silvia Quarteroni. An introduction to information retrieval. Web information retrieval, pages 3-11, 2013. 5
[7] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 5
[8] Arslan Chaudhry, Albert Gordo, Puneet Dokania, Philip Torr, and David Lopez-Paz. Using hindsight to anchor past knowledge in continual learning. In Proceedings of the AAAI conference on artificial intelligence, pages 6993-7001, 2021. 1
[9] Xiuwei Chen and Xiaobin Chang. Dynamic residual classifier for class incremental learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 18743-18752, 2023. 2
[10] Townim Chowdhury, Ali Cheraghian, Sameera Ramasinghe, Sahar Ahmadi, Morteza Saberi, and Shafin Rahman. Few-shot class-incremental learning for 3d point cloud objects. In ECCV, 2022. 1, 2
[11] P Dokania, P Torr, and M Ranzato. Continual learning with tiny episodic memories. In Workshop on Multi-Task and Life-long Reinforcement Learning, 2019. 1, 5, 6
[12] Jiahua Dong, Yang Cong, Gan Sun, Lixu Wang, Lingjuan Lyu, Jun Li, and Ender Konukoglu. Inor-net: Incremental 3-d object recognition network for point cloud representation. IEEE Transactions on Neural Networks and Learning Systems, 2023. 1, 2
[13] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020. 4
[14] Robert Geirhos, Jorn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673, 2020. 1
[15] Jiangpeng He. Gradient reweighting: Towards imbalanced class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16668-16677, 2024. 5, 6
[16] Katherine Hermann and Andrew Lampinen. What shapes feature representations? exploring datasets, architectures, and training. Advances in Neural Information Processing Systems, 33:9995-10006, 2020. 1
[17] Zhiyuan Hu, Yunsheng Li, Jiancheng Lyu, Dashan Gao, and Nuno Vasconcelos. Dense network expansion for class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11858-11867, 2023. 2
[18] Bingchen Huang, Zhineng Chen, Peng Zhou, Jiayin Chen, and Zuxuan Wu. Resolving task confusion in dynamic expansion architectures for class incremental learning. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 908-916, 2023. 2
[19] David Isele and Akansel Cosgun. Selective experience replay for lifelong learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2018. 2
[20] Yaoyao Liu, Yuting Su, An-An Liu, Bernt Schiele, and Qianru Sun. Mnemonics training: Multi-class incremental learning without forgetting. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 12245-12254, 2020. 1, 2
[21] Yuyang Liu, Yang Cong, Gan Sun, Tao Zhang, Jiahua Dong, and Hongsen Liu. L3doc: Lifelong 3d object classification. IEEE Transactions on Image Processing, 30:7486-7498, 2021. 1, 2
[22] Yaoyao Liu, Yingying Li, Bernt Schiele, and Qianru Sun. Wakening past concepts without past data: Class incremental learning from online placebos. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2226-2235, 2024. 2
[23] Zilin Luo, Yaoyao Liu, Bernt Schiele, and Qianru Sun. Class-incremental exemplar compression for class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11371-11380, 2023. 2
[24] Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Patrick Jahnichen, and Moin Nabi. Learning to remember: A synaptic plasticity driven framework for continual learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11321-11329, 2019. 2
[25] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 652-660, 2017. 1, 3, 5
[26] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems, 30, 2017. 1, 7
[27] Guocheng Qian, Yuchen Li, Houwen Peng, Jinjie Mai, Hasan Hammoud, Mohamed Elhoseiny, and Bernard Ghanem. Pointnext: Revisiting pointnet++ with improved training and scaling strategies. In Advances in Neural Information Processing Systems (NeurIPS), 2022. 7
[28] Jathushan Rajasegaran, Munawar Hayat, Salman H Khan, Fahad Shahbaz Khan, and Ling Shao. Random path selection for continual learning. Advances in Neural Information Processing Systems, 32, 2019. 2
[29] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 2001-2010, 2017. 2, 5, 6
[30] Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, and David Novotny. Common objects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10901-10911, 2021. 5
[31] David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy Lillicrap, and Gregory Wayne. Experience replay for continual learning. Advances in Neural Information Processing Systems, 32, 2019. 2
[32] Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016. 1, 2
[33] Yuwen Tan and Xiang Xiang. Cross-domain few-shot incremental learning for point-cloud recognition. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2024, Waikoloa, HI, USA, January 3-8, 2024, pages 2296-2305. IEEE, 2024. 1, 2
[34] Rishabh Tiwari, Krishnateja Killamsetty, Rishabh Iyer, and Pradeep Shenoy. Gcr: Gradient coreset based replay buffer selection for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 99-108, 2022. 1, 2
[35] Gido M. van de Ven, Hava T. Siegelmann, and Andreas Savas Tolias. Brain-inspired replay for continual learning with artificial neural networks. Nature Communications, 11, 2020. 1
[36] Gido M Van de Ven, Tinne Tuytelaars, and Andreas S Tolias. Three types of incremental learning. Nature Machine Intelligence, 4(12):1185-1197, 2022. 1
[37] Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9 (11), 2008. 8
[38] Fu-Yun Wang, Da-Wei Zhou, Liu Liu, Han-Jia Ye, Yatao Bian, De-Chuan Zhan, and Peilin Zhao. Beef: Bi-compatible class-incremental learning via energy-based expansion and fusion. In The Eleventh International Conference on Learning Representations, 2022. 5
[39] Fu-Yun Wang, Da-Wei Zhou, Han-Jia Ye, and De-Chuan Zhan. Foster: Feature boosting and compression for class incremental learning. In European conference on computer vision, pages 398-414. Springer, 2022. 1, 2, 5, 6
[40] Liyuan Wang, Xingxing Zhang, Kuo Yang, Longhui Yu, Chongxuan Li, HONG Lanqing, Shifeng Zhang, Zhenguo Li, Yi Zhong, and Jun Zhu. Memory replay with data compression for continual learning. In International Conference on Learning Representations, 2021. 1, 2
[41] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (tog), 38(5):1-12, 2019. 7
[42] Kun Wei, Cheng Deng, Xu Yang, and Dacheng Tao. Incremental zero-shot learning. IEEE Transactions on Cybernetics, 52(12):13788-13799, 2021. 5
[43] Tingyu Weng, Jun Xiao, and Haiyong Jiang. Decompose novel into known: Part concept learning for 3d novel class discovery. Advances in Neural Information Processing Systems, 36, 2024. 2, 3, 4
[44] Tingyu Weng, Jun Xiao, Hao Pan, and Haiyong Jiang. Partcom: Part composition learning for 3d open-set recognition. International Journal of Computer Vision, 132(4): 1393-1416, 2024. 1, 2
[45] Ju Xu and Zhanxing Zhu. Reinforced continual learning. Advances in Neural Information Processing Systems, 31, 2018. 2
[46] Mutian Xu, Junhao Zhang, Zhipeng Zhou, Mingye Xu, Xiaojuan Qi, and Yu Qiao. Learning geometry-disentangled representation for complementary understanding of 3d object point cloud. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3056-3064, 2021. 2
[47] Shipeng Yan, Jiangwei Xie, and Xuming He. Der: Dynamically expandable representation for class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3014-3023, 2021. 1, 2, 5, 6
[48] Xincheng Yang, Mingze Jin, Weiji He, and Qian Chen. Pointcat: Cross-attention transformer for point cloud. arXiv preprint arXiv:2304.03012, 2023. 4
[49] Yuwei Yang, Munawar Hayat, Zhao Jin, Chao Ren, and Yinjie Lei. Geometry and uncertainty-aware 3d point cloud class-incremental semantic segmentation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition,
CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 21759-21768. IEEE, 2023. 1, 2
[50] Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. In 6th International Conference on Learning Representations, ICLR 2018. International Conference on Learning Representations, ICLR, 2018. 2
[51] Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, and Jiwen Lu. Point-bert: Pre-training 3d point cloud transformers with masked point modeling. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19313-19322, 2022. 4
[52] Renrui Zhang, Ziyu Guo, Peng Gao, Rongyao Fang, Bin Zhao, Dong Wang, Yu Qiao, and Hongsheng Li. Point-m2ae: multi-scale masked autoencoders for hierarchical point cloud pre-training. Advances in neural information processing systems, 35:27061-27074, 2022. 4
[53] Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip HS Torr, and Vladlen Koltun. Point transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16259-16268, 2021. 1
[54] Na Zhao and Gim Hee Lee. Static-dynamic co-teaching for class-incremental 3d object detection. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 3436-3445, 2022. 1, 2
[55] Shizhen Zhao and Xiaojuan Qi. Prototypical votenet for few-shot 3d point cloud object detection. In Advances in Neural Information Processing Systems, 2022. 2
[56] Tianchen Zhao, Niansong Zhang, Xuefei Ning, He Wang, Li Yi, and Yu Wang. Codedvtr: Codebook-based sparse voxel transformer with geometric guidance. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 1425-1434. IEEE, 2022. 2
[57] Yongheng Zhao, Tolga Birdal, Haowen Deng, and Federico Tombari. 3d point capsule networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1009-1018, 2019. 2
[58] Bowen Zheng, Da-Wei Zhou, Han-Jia Ye, and De-Chuan Zhan. Multi-layer rehearsal feature augmentation for class incremental learning. In Forty-first International Conference on Machine Learning, 2024. 2
[59] Da-Wei Zhou, Qi-Wei Wang, Han-Jia Ye, and De-Chuan Zhan. A model or 603 exemplars: Towards memory-efficient class-incremental learning. In The Eleventh International Conference on Learning Representations, 2022. 1, 2, 5, 6
[60] Fei Zhu, Zhen Cheng, Xu-Yao Zhang, and Cheng-lin Liu. Class-incremental learning via dual augmentation. Advances in Neural Information Processing Systems, 34:14306-14318, 2021. 1, 2
[61] Fei Zhu, Xu-Yao Zhang, Chuang Wang, Fei Yin, and ChengLin Liu. Prototype augmentation and self-supervision for incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5871-5880, 2021. 1
[62] Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, and Zhiping Lin. Ds-al: A dual-stream analytic learning for
exemplar-free class-incremental learning. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 17237-17244, 2024. 5, 6