Introduction
[Formulating every step according to the problem statement. We can do this once everything else is done.]{style="color: red"}
Our bottom-up strategy starts by initializing a pool of sub-parts from the input point set, using a part proposal network as shown in Figure [fig:subpart]{reference-type="ref" reference="fig:subpart"}. A MergeNet then determines whether a pair of proposals should be merged. If so, we put the merged, larger sub-part into the sub-part pool, replacing the input pair of sub-parts. We repeat this process until no new sub-parts can be merged. The final pool is the segmentation result.
[The partness score will be used by the merging policy.]{style="color: red"}
[Given a sub-part, this module predicts its partness score. For the MergeNet, the input is a pair of sub-parts. The two modules seem somewhat redundant: we could replace the MergeNet with the partness score plus a threshold. The reason we keep two modules is that the regression task is too difficult to perform well, so I train the MergeNet as a binary classifier to alleviate this problem.]{style="color: red"}
The binary mask prediction task is very difficult: a sub-part proposal may contain points from multiple instances, and at inference time we cannot guarantee that the small ball does not cross an instance boundary. In our experiments, we found that binary mask prediction degrades substantially when the small ball crosses a boundary. All in all, we need a module to evaluate the quality of the proposals. Similar to the objectness score defined in 2D image object detection [@alexe2012measuring], we propose a partness score to evaluate proposal quality. For a proposal $P$, the partness score is defined as $S(P) = \max_{j \in \text{Labels}} \frac{|\{p_i \in P : \text{Label}(p_i) = j\}|}{|P|}$, i.e., the fraction of points in $P$ that carry the majority instance label. We feed the part proposal into a PointNet and optimize its parameters directly with an $\ell_2$ loss on this score. In the next step, we use this score to guide our merging policy.
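The partness score above reduces to a majority-label fraction. A minimal sketch in NumPy (the helper name `partness_score` is ours, not from the paper):

```python
import numpy as np

def partness_score(labels: np.ndarray) -> float:
    """Fraction of points in a proposal that share the majority
    instance label; 1.0 means the proposal is pure."""
    _, counts = np.unique(labels, return_counts=True)
    return counts.max() / labels.size

# A 5-point proposal where 4 points belong to instance 2:
print(partness_score(np.array([2, 2, 2, 2, 7])))  # 0.8
```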
[Formulating a clustering operation on a set.]{style="color: red"}
$$\text{Ultimate goal}: \max \; \text{mIoU}, \quad \text{Current objective}: |G_i - G_i^{gt}|$$
Given a pair of part proposals, MergeNet predicts whether they can be merged. In the early stage, we only consider a pair if the two proposals are very close in Euclidean distance. We put each merged proposal into the sub-part proposal pool and remove the input pair from it, repeating this process iteratively until no new proposal is generated. The final pool is the segmentation result.
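The iterative merging described above can be sketched as a greedy fixed-point loop. This is a simplified stand-in, not the paper's implementation: `should_merge` abstracts MergeNet's binary decision, and the centroid-distance gate with a hypothetical `radius` stands in for the early-stage Euclidean proximity filter.

```python
import itertools
import numpy as np

def merge_pool(pool, should_merge, radius=0.1):
    """Greedy bottom-up merging over a pool of sub-parts.

    pool: list of (N_i, 3) point arrays.
    should_merge: callable(a, b) -> bool, abstracting MergeNet.
    Repeats until no pair can be merged; returns the final pool.
    """
    merged = True
    while merged:
        merged = False
        for i, j in itertools.combinations(range(len(pool)), 2):
            a, b = pool[i], pool[j]
            # Only consider pairs whose centroids are close in
            # Euclidean distance (early-stage proximity filter).
            if np.linalg.norm(a.mean(0) - b.mean(0)) > radius:
                continue
            if should_merge(a, b):
                # Replace the pair by their union and rescan.
                pool = [p for k, p in enumerate(pool) if k not in (i, j)]
                pool.append(np.concatenate([a, b], axis=0))
                merged = True
                break
    return pool
```

With a trivial `should_merge` that always accepts, two nearby sub-parts collapse into one while a distant one survives unchanged.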
When the patches are small, the relations between patches are confined to a small local region. When the patches become large, the relations actually span the space, such as
[Formulating the merging policy.]{style="color: red"}
[The score for every pair before the Softmax will be the partness score $\times$ the MergeNet prediction.]{style="color: red"}
[On-policy gradient descent? Hao used this term.]{style="color: red"}
[The policy $\pi_{\text{purity}}$ simply feeds purity score $\times$ MergeNet prediction into a Softmax layer and chooses the argmax term. However, we might not always want the argmax term but also want to consider other information, such as the relative sizes. So we use a small network to learn the policy $\pi_{\text{strategy}}$.]{style="color: red"}
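The $\pi_{\text{purity}}$ scoring in the note above can be sketched as follows; the per-pair scores here are hypothetical numbers, not outputs of the trained networks:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    z = np.exp(x - x.max())
    return z / z.sum()

# Hypothetical scores for three candidate pairs: partness of the
# would-be merged sub-part times MergeNet's merge probability.
partness = np.array([0.95, 0.60, 0.88])
merge_prob = np.array([0.90, 0.70, 0.40])

probs = softmax(partness * merge_prob)
best_pair = int(np.argmax(probs))  # pi_purity picks the argmax pair
print(best_pair)  # 0
```

$\pi_{\text{strategy}}$ would replace the fixed argmax with a small learned network over these scores plus extra features (e.g. relative sizes).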
[Meta-learning: train the policy $\pi_{\text{purity}}$ on class A, fine-tune the small network for the policy $\pi_{\text{strategy}}$ on class B, and finally test on class C.]{style="color: red"}
During inference, in each iteration we choose the pair with the highest partness score and feed it into MergeNet to decide whether to merge. For training, we train all modules online, with training samples generated by running inference with our model.
[Currently, we have a very simple strategy that refines the sub-proposals between parts in the final stage.]{style="color: red"}