Chelsea707 committed on
Commit 791b5e2 · verified · 1 Parent(s): 936cd40

Add Batch 2243897a-9167-4b3f-8de3-4f908914984f data

This view is limited to 50 files because it contains too many changes. See raw diff

Files changed (50)
  1. .gitattributes +29 -0
  2. 2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/70b114f7-aea9-450a-a38b-661ed1d2e4cb_content_list.json +0 -0
  3. 2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/70b114f7-aea9-450a-a38b-661ed1d2e4cb_model.json +0 -0
  4. 2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/70b114f7-aea9-450a-a38b-661ed1d2e4cb_origin.pdf +3 -0
  5. 2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/full.md +582 -0
  6. 2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/images.zip +3 -0
  7. 2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/layout.json +0 -0
  8. 2023/Volumetric Optimal Transportation by Fast Fourier Transform/468f5fc6-f60a-4c98-879c-a2f5d8b676d8_content_list.json +0 -0
  9. 2023/Volumetric Optimal Transportation by Fast Fourier Transform/468f5fc6-f60a-4c98-879c-a2f5d8b676d8_model.json +0 -0
  10. 2023/Volumetric Optimal Transportation by Fast Fourier Transform/468f5fc6-f60a-4c98-879c-a2f5d8b676d8_origin.pdf +3 -0
  11. 2023/Volumetric Optimal Transportation by Fast Fourier Transform/full.md +1027 -0
  12. 2023/Volumetric Optimal Transportation by Fast Fourier Transform/images.zip +3 -0
  13. 2023/Volumetric Optimal Transportation by Fast Fourier Transform/layout.json +0 -0
  14. 2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/7aa139d3-a427-412b-84b8-883489a7c318_content_list.json +0 -0
  15. 2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/7aa139d3-a427-412b-84b8-883489a7c318_model.json +0 -0
  16. 2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/7aa139d3-a427-412b-84b8-883489a7c318_origin.pdf +3 -0
  17. 2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/full.md +0 -0
  18. 2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/images.zip +3 -0
  19. 2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/layout.json +0 -0
  20. 2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/d5f92e4c-b0b4-48f2-acb6-1c3d35000445_content_list.json +0 -0
  21. 2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/d5f92e4c-b0b4-48f2-acb6-1c3d35000445_model.json +0 -0
  22. 2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/d5f92e4c-b0b4-48f2-acb6-1c3d35000445_origin.pdf +3 -0
  23. 2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/full.md +464 -0
  24. 2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/images.zip +3 -0
  25. 2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/layout.json +0 -0
  26. 2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/95efb798-c3e3-43db-b4b3-c866d3d1db85_content_list.json +0 -0
  27. 2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/95efb798-c3e3-43db-b4b3-c866d3d1db85_model.json +0 -0
  28. 2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/95efb798-c3e3-43db-b4b3-c866d3d1db85_origin.pdf +3 -0
  29. 2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/full.md +633 -0
  30. 2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/images.zip +3 -0
  31. 2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/layout.json +0 -0
  32. 2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/b2c89086-3efa-4d35-8fb8-fa570d2c2733_content_list.json +0 -0
  33. 2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/b2c89086-3efa-4d35-8fb8-fa570d2c2733_model.json +0 -0
  34. 2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/b2c89086-3efa-4d35-8fb8-fa570d2c2733_origin.pdf +3 -0
  35. 2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/full.md +385 -0
  36. 2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/images.zip +3 -0
  37. 2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/layout.json +0 -0
  38. 2023/Weighted Clock Logic Point Process/3eef33de-4305-442c-87ae-f007ec3ea0e2_content_list.json +0 -0
  39. 2023/Weighted Clock Logic Point Process/3eef33de-4305-442c-87ae-f007ec3ea0e2_model.json +0 -0
  40. 2023/Weighted Clock Logic Point Process/3eef33de-4305-442c-87ae-f007ec3ea0e2_origin.pdf +3 -0
  41. 2023/Weighted Clock Logic Point Process/full.md +774 -0
  42. 2023/Weighted Clock Logic Point Process/images.zip +3 -0
  43. 2023/Weighted Clock Logic Point Process/layout.json +0 -0
  44. 2023/Weighted Ensemble Self-Supervised Learning/0c863f59-c784-4516-9026-d5e5e7ae916e_content_list.json +0 -0
  45. 2023/Weighted Ensemble Self-Supervised Learning/0c863f59-c784-4516-9026-d5e5e7ae916e_model.json +0 -0
  46. 2023/Weighted Ensemble Self-Supervised Learning/0c863f59-c784-4516-9026-d5e5e7ae916e_origin.pdf +3 -0
  47. 2023/Weighted Ensemble Self-Supervised Learning/full.md +0 -0
  48. 2023/Weighted Ensemble Self-Supervised Learning/images.zip +3 -0
  49. 2023/Weighted Ensemble Self-Supervised Learning/layout.json +0 -0
  50. 2023/What Can we Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers_/9da122df-288c-42c9-8090-73c7e3adccf9_content_list.json +0 -0
.gitattributes CHANGED
@@ -7544,3 +7544,32 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  2023/Visual[[:space:]]Imitation[[:space:]]Learning[[:space:]]with[[:space:]]Patch[[:space:]]Rewards/7ff3492f-7d60-4240-bcc1-cecd00ae1b72_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2023/Visually-Augmented[[:space:]]Language[[:space:]]Modeling/fea11c19-f3c1-4765-aaca-6a4d7a8ff1ea_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2023/VoGE_[[:space:]]A[[:space:]]Differentiable[[:space:]]Volume[[:space:]]Renderer[[:space:]]using[[:space:]]Gaussian[[:space:]]Ellipsoids[[:space:]]for[[:space:]]Analysis-by-Synthesis/babfe1c6-0687-43cf-a603-79217b17846f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Voint[[:space:]]Cloud_[[:space:]]Multi-View[[:space:]]Point[[:space:]]Cloud[[:space:]]Representation[[:space:]]for[[:space:]]3D[[:space:]]Understanding/70b114f7-aea9-450a-a38b-661ed1d2e4cb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Volumetric[[:space:]]Optimal[[:space:]]Transportation[[:space:]]by[[:space:]]Fast[[:space:]]Fourier[[:space:]]Transform/468f5fc6-f60a-4c98-879c-a2f5d8b676d8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Wasserstein[[:space:]]Auto-encoded[[:space:]]MDPs_[[:space:]]Formal[[:space:]]Verification[[:space:]]of[[:space:]]Efficiently[[:space:]]Distilled[[:space:]]RL[[:space:]]Policies[[:space:]]with[[:space:]]Many-sided[[:space:]]Guarantees/7aa139d3-a427-412b-84b8-883489a7c318_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Weakly[[:space:]]Supervised[[:space:]]Explainable[[:space:]]Phrasal[[:space:]]Reasoning[[:space:]]with[[:space:]]Neural[[:space:]]Fuzzy[[:space:]]Logic/d5f92e4c-b0b4-48f2-acb6-1c3d35000445_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Weakly[[:space:]]Supervised[[:space:]]Knowledge[[:space:]]Transfer[[:space:]]with[[:space:]]Probabilistic[[:space:]]Logical[[:space:]]Reasoning[[:space:]]for[[:space:]]Object[[:space:]]Detection/95efb798-c3e3-43db-b4b3-c866d3d1db85_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Weakly-supervised[[:space:]]HOI[[:space:]]Detection[[:space:]]via[[:space:]]Prior-guided[[:space:]]Bi-level[[:space:]]Representation[[:space:]]Learning/b2c89086-3efa-4d35-8fb8-fa570d2c2733_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Weighted[[:space:]]Clock[[:space:]]Logic[[:space:]]Point[[:space:]]Process/3eef33de-4305-442c-87ae-f007ec3ea0e2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Weighted[[:space:]]Ensemble[[:space:]]Self-Supervised[[:space:]]Learning/0c863f59-c784-4516-9026-d5e5e7ae916e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/What[[:space:]]Can[[:space:]]we[[:space:]]Learn[[:space:]]From[[:space:]]The[[:space:]]Selective[[:space:]]Prediction[[:space:]]And[[:space:]]Uncertainty[[:space:]]Estimation[[:space:]]Performance[[:space:]]Of[[:space:]]523[[:space:]]Imagenet[[:space:]]Classifiers_/9da122df-288c-42c9-8090-73c7e3adccf9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/What[[:space:]]Do[[:space:]]Self-Supervised[[:space:]]Vision[[:space:]]Transformers[[:space:]]Learn_/eb9117a9-6734-4afe-bd94-17080f9ab76e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/What[[:space:]]Is[[:space:]]Missing[[:space:]]in[[:space:]]IRM[[:space:]]Training[[:space:]]and[[:space:]]Evaluation_[[:space:]]Challenges[[:space:]]and[[:space:]]Solutions/ada9a8d8-393e-4e9c-91dc-8e9b9de8056a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/What[[:space:]]Makes[[:space:]]Convolutional[[:space:]]Models[[:space:]]Great[[:space:]]on[[:space:]]Long[[:space:]]Sequence[[:space:]]Modeling_/33760ea2-7ca5-43be-a157-6f11d24d15b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/What[[:space:]]shapes[[:space:]]the[[:space:]]loss[[:space:]]landscape[[:space:]]of[[:space:]]self[[:space:]]supervised[[:space:]]learning_/2fc00309-6678-46e3-bb56-f662dfd5b3bb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/When[[:space:]]Data[[:space:]]Geometry[[:space:]]Meets[[:space:]]Deep[[:space:]]Function_[[:space:]]Generalizing[[:space:]]Offline[[:space:]]Reinforcement[[:space:]]Learning/01fada97-5ce7-4d5f-a893-a0388d8d2a96_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/When[[:space:]]to[[:space:]]Make[[:space:]]and[[:space:]]Break[[:space:]]Commitments_/8ecf68b7-cbf5-414a-9452-a5b931a222f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Where[[:space:]]to[[:space:]]Diffuse,[[:space:]]How[[:space:]]to[[:space:]]Diffuse,[[:space:]]and[[:space:]]How[[:space:]]to[[:space:]]Get[[:space:]]Back_[[:space:]]Automated[[:space:]]Learning[[:space:]]for[[:space:]]Multivariate[[:space:]]Diffusions/f7a9c89f-158a-46a9-8f48-9b5bcdfbc0da_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Which[[:space:]]Layer[[:space:]]is[[:space:]]Learning[[:space:]]Faster_[[:space:]]A[[:space:]]Systematic[[:space:]]Exploration[[:space:]]of[[:space:]]Layer-wise[[:space:]]Convergence[[:space:]]Rate[[:space:]]for[[:space:]]Deep[[:space:]]Neural[[:space:]]Networks/38a6c3d5-1c42-41e7-83e7-973b9e617235_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Why[[:space:]](and[[:space:]]When)[[:space:]]does[[:space:]]Local[[:space:]]SGD[[:space:]]Generalize[[:space:]]Better[[:space:]]than[[:space:]]SGD_/306d38ac-f98a-4b3c-97a7-4af7a2c739ce_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Why[[:space:]]adversarial[[:space:]]training[[:space:]]can[[:space:]]hurt[[:space:]]robust[[:space:]]accuracy/b15d9063-140c-4e2d-a2bd-fd12553144a4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/WiNeRT_[[:space:]]Towards[[:space:]]Neural[[:space:]]Ray[[:space:]]Tracing[[:space:]]for[[:space:]]Wireless[[:space:]]Channel[[:space:]]Modelling[[:space:]]and[[:space:]]Differentiable[[:space:]]Simulations/41f81ce5-4453-4061-b257-336a66f472e8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Winning[[:space:]]Both[[:space:]]the[[:space:]]Accuracy[[:space:]]of[[:space:]]Floating[[:space:]]Point[[:space:]]Activation[[:space:]]and[[:space:]]the[[:space:]]Simplicity[[:space:]]of[[:space:]]Integer[[:space:]]Arithmetic/a62258ff-e367-4f69-b7af-d16c0a09ca72_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Words[[:space:]]are[[:space:]]all[[:space:]]you[[:space:]]need_[[:space:]]Language[[:space:]]as[[:space:]]an[[:space:]]approximation[[:space:]]for[[:space:]]human[[:space:]]similarity[[:space:]]judgments/bd5757f5-b64d-41a1-849d-8e09ed031d8e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Write[[:space:]]and[[:space:]]Paint_[[:space:]]Generative[[:space:]]Vision-Language[[:space:]]Models[[:space:]]are[[:space:]]Unified[[:space:]]Modal[[:space:]]Learners/fd6fd19a-99f2-4a1c-9940-84627f28fb05_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Your[[:space:]]Contrastive[[:space:]]Learning[[:space:]]Is[[:space:]]Secretly[[:space:]]Doing[[:space:]]Stochastic[[:space:]]Neighbor[[:space:]]Embedding/97d2e52c-457b-46f6-8c21-13d5d765eb07_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/Zeroth-Order[[:space:]]Optimization[[:space:]]with[[:space:]]Trajectory-Informed[[:space:]]Derivative[[:space:]]Estimation/c092ea5b-92fc-44ba-b455-d7307b3016a2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/f-DM_[[:space:]]A[[:space:]]Multi-stage[[:space:]]Diffusion[[:space:]]Model[[:space:]]via[[:space:]]Progressive[[:space:]]Signal[[:space:]]Transformation/04076be8-bdc7-4349-91f7-210b46dd8933_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/kNN-Diffusion_[[:space:]]Image[[:space:]]Generation[[:space:]]via[[:space:]]Large-Scale[[:space:]]Retrieval/91d6cd85-11f4-46c6-bd18-53dbb2f775b5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/simpleKT_[[:space:]]A[[:space:]]Simple[[:space:]]But[[:space:]]Tough-to-Beat[[:space:]]Baseline[[:space:]]for[[:space:]]Knowledge[[:space:]]Tracing/026aa3e3-fd2b-47eb-9b32-9efc86e03a5c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/wav2tok_[[:space:]]Deep[[:space:]]Sequence[[:space:]]Tokenizer[[:space:]]for[[:space:]]Audio[[:space:]]Retrieval/fdaea0fe-1baa-4dee-8dce-c076dd80d99a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/70b114f7-aea9-450a-a38b-661ed1d2e4cb_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/70b114f7-aea9-450a-a38b-661ed1d2e4cb_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/70b114f7-aea9-450a-a38b-661ed1d2e4cb_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b5c63e52812e7af5aa05bac7be47de3de07ece92994a41a64092cfde6a35ee7f
+ size 32955658
2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/full.md ADDED
@@ -0,0 +1,582 @@
# VOINT CLOUD: MULTI-VIEW POINT CLOUD REPRESENTATION FOR 3D UNDERSTANDING

Abdullah Hamdi

Silvio Giancola

Bernard Ghanem

King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia {abdullah.hamdi, silvio.giancola, bernard.ghanem}@kaust.edu.sa

# ABSTRACT

Multi-view projection methods have demonstrated promising performance on 3D understanding tasks like 3D classification and segmentation. However, it remains unclear how to combine such multi-view methods with the widely available 3D point clouds. Previous methods use unlearned heuristics to combine features at the point level. To this end, we introduce the concept of the multi-view point cloud (Voint cloud), representing each 3D point as a set of features extracted from several view-points. This novel 3D Voint cloud representation combines the compactness of 3D point cloud representation with the natural view-awareness of multi-view representation. Naturally, we can equip this new representation with convolutional and pooling operations. We deploy a Voint neural network (VointNet) to learn representations in the Voint space. Our novel representation achieves state-of-the-art performance on 3D classification, shape retrieval, and robust 3D part segmentation on standard benchmarks (ScanObjectNN, ShapeNet Core55, and ShapeNet Parts).
# 1 INTRODUCTION

A fundamental question in 3D computer vision and computer graphics is how to represent 3D data (Mescheder et al., 2019; Qi et al., 2017a; Maturana & Scherer, 2015). This question becomes particularly vital given how the success of deep learning in 2D computer vision has pushed for the wide adoption of deep learning in 3D vision and graphics. In fact, deep networks already achieve impressive results in 3D classification (Hamdi et al., 2021), 3D segmentation (Hu et al., 2021), 3D detection (Liu et al., 2021a), 3D reconstruction (Mescheder et al., 2019), and novel view synthesis (Mildenhall et al., 2020). 3D computer vision networks either rely on direct 3D representations, indirect 2D projection on images, or a mixture of both. Direct approaches operate on 3D data commonly represented with point clouds (Qi et al., 2017a), meshes (Feng et al., 2019), or voxels (Choy et al., 2019). In contrast, indirect approaches commonly render multiple 2D views of objects or scenes (Su et al., 2015), and process each image with a traditional 2D image-based architecture. The human visual system is closer to such a multi-view indirect approach for 3D understanding, as it receives streams of rendered images rather than explicit 3D data.

Tackling 3D vision tasks with indirect approaches has three main advantages: (i) mature and transferable 2D computer vision models (CNNs, Transformers, etc.), (ii) large and diverse labeled image datasets for pre-training (e.g. ImageNet (Russakovsky et al., 2014)), and (iii) multi-view images give context-rich features based on the viewing angle, which differ from geometric 3D neighborhood features. Multi-view approaches achieve impressive performance in 3D shape classification and segmentation (Wei et al., 2020; Hamdi et al., 2021; Dai & Nießner, 2018). However, the challenge with the multi-view representation (especially for dense predictions) lies in properly aggregating the per-view features with 3D point clouds. The appropriate aggregation is necessary to obtain representative 3D point clouds with a single feature per point suitable for typical point cloud processing pipelines. Previous multi-view works rely on heuristics (e.g. average or label mode pooling) after mapping pixels to points (Kundu et al., 2020; Wang et al., 2019a), or multi-view fusion with voxels (Dai & Nießner, 2018). Such setups might not be optimal for two reasons. (i) Such heuristics may aggregate information from misleading projections obtained from arbitrary view-points. For example, a view of an object from the bottom, processed independently, can contribute wrong information about the object's content when combined with other views. (ii) The views lack geometric 3D information.

![](images/0f87af169b3654c00a374430901e2066ece51e5796b3b55094f704f8b831430b.jpg)
Figure 1: 3D Voint Clouds. We propose the multi-view point cloud (Voint cloud), a novel 3D representation that is compact and naturally descriptive of view projections of a 3D point cloud. Each point in the 3D cloud is tagged with a Voint, which accumulates view-features for that point. Note that not all 3D points are visible from all views. The set of Voints constructs a Voint cloud.
To this end, we propose a new hybrid 3D data structure that inherits the merits of point clouds (i.e. compactness, flexibility, and 3D descriptiveness) and leverages the benefits of rich perceptual features of multi-view projections. We call this new representation multi-view point cloud (or Voint cloud) and illustrate it in Figure 1. A Voint cloud is a set of Voints, where each Voint is a set of view-dependent features (view-features) that correspond to the same point in the 3D point cloud. The cardinality of these view-features may differ from one Voint to another. In Table 1, we compare some of the widely used 3D representations and our Voint cloud representation. Voint clouds inherit the characteristics of the parent explicit 3D point clouds, which facilitates learning Voint representations for a variety of vision applications (e.g. point cloud classification and segmentation). To deploy deep learning on the new Voint space, we define basic operations on Voints, such as pooling and convolution. Based on these operations, we define a practical way of building Voint neural networks, which we dub VointNet. VointNet takes a Voint cloud and outputs point cloud features for 3D point cloud processing. We show how learning this Voint cloud representation leads to strong performance and improved robustness for the tasks of 3D classification, 3D object retrieval, and 3D part segmentation on standard benchmarks like ScanObjectNN (Uy et al., 2019) and ShapeNet (Chang et al., 2015).

Contributions: (i) We propose a novel multi-view 3D point cloud representation (denoted as Voint cloud), which represents each point (namely a Voint) as a set of features from different view-points. (ii) We define pooling and convolutional operations at the Voint level to construct a Voint Neural Network (VointNet) capable of learning to aggregate information from multiple views in the Voint space. (iii) Our VointNet reaches state-of-the-art performance on several 3D understanding tasks, including 3D shape classification, retrieval, and robust part segmentation, and achieves improved robustness to occlusion and rotation.
| 3D Representation | Explicitness | View-Based | Main Use | 3D Expressiveness |
|---|---|---|---|---|
| Point Clouds | Explicit | ✗ | 3D Understanding | Medium |
| Multi-View Projections | Implicit | ✓ | 3D Understanding | Low |
| Voxels | Explicit | ✗ | 3D Understanding | Medium |
| Mesh | Explicit | ✗ | 3D Modeling | High |
| NeRFs | Implicit | ✓ | Novel View Synthesis | Medium |
| Voint Clouds (ours) | Explicit | ✓ | 3D Understanding | Medium |

Table 1: Comparison of Different 3D Representations. We compare some of the widely used 3D representations to our proposed Voint cloud. Note that our Voint cloud shares the view-dependency of NeRFs (Mildenhall et al., 2020) while inheriting the merits of 3D point clouds.
# 2 RELATED WORK

Learning on 3D Point Clouds. 3D point clouds are widely used for 3D representation in computer vision due to their compactness, flexibility, and because they can be obtained naturally from sensors like LiDAR and RGBD cameras. PointNet (Qi et al., 2017a) paved the way as the first deep learning algorithm to operate directly on 3D point clouds. It computes point features independently and aggregates them using an order-invariant function like max-pooling. Subsequent works focused on finding neighborhoods of points to define point convolutional operations (Qi et al., 2017b; Wang et al., 2019c; Li et al., 2018; Han et al., 2019). Several recent works combine point cloud representations with other 3D modalities like voxels (Liu et al., 2019b; You et al., 2018) or multi-view images (Jaritz et al., 2019). We propose a novel Voint cloud representation for 3D shapes and investigate novel architectures that aggregate view-dependent features at the 3D point level.

Multi-View Applications. The idea of using 2D images to understand the 3D world was proposed as early as 1994 by Bradski and Grossberg (Bradski & Grossberg, 1994). This intuitive multi-view approach was combined with deep learning for 3D understanding in MVCNN (Su et al., 2015). A line of works continued developing multi-view approaches for classification and retrieval by improving the aggregation of the view-features from each image view (Kanezaki et al., 2018; Esteves et al., 2019; Cohen & Welling, 2016; Wei et al., 2020; Hamdi et al., 2021). In this work, we fuse the concept of multi-view into the 3D structure itself, such that every 3D point has an independent set of view-features according to the view-points available in the setup. Our Voints are aligned with the sampled 3D point cloud, offering a compact representation that allows for efficient computation and memory usage while maintaining the view-dependent component that facilitates view-based learning for vision.

Hybrid Multi-View with 3D Data. On the task of 3D semantic segmentation, a smaller number of works tried to follow the multi-view approach (Dai & Nießner, 2018; Kundu et al., 2020; Wang et al., 2019a; Kalogerakis et al., 2017; Jaritz et al., 2019; Liu et al., 2021b; Lyu et al., 2020). A problem arises when combining view features to represent local points/voxels while preserving local geometric features. These methods tend to average the view-features (Kundu et al., 2020; Kalogerakis et al., 2017), propagate the labels only (Wang et al., 2019a), learn from reconstructed points in the neighborhood (Jaritz et al., 2019), order points on a single grid (Lyu et al., 2020), or combine the multi-view features with 3D voxel features (Dai & Nießner, 2018; Hou et al., 2019). To this end, our proposed VointNet operates on the Voint cloud space while preserving the compactness and 3D descriptiveness of the original point cloud. VointNet leverages the power of multi-view features with learned aggregation on the view-features applied to each point independently.
# 3 METHODOLOGY

The primary assumption in our work is that the representation of a surface 3D point is a spherical function, i.e. it depends on the viewing angle from which the point is observed. This contrasts with most 3D point cloud processing pipelines, which assume a view-independent representation of 3D point clouds. The full pipeline is illustrated in Figure 2.

![](images/f18c527b4598c89667a2cfd0519a8caecd049ba9b97eafab366dccb7b44f9b72.jpg)
Figure 2: Learning from Voint Clouds. To construct a 3D Voint cloud $\widehat{\mathcal{X}}$, a renderer $\mathbf{R}$ renders the point cloud $\mathcal{X}$ from view-points $\mathcal{U}$, and image features are extracted from the generated images via a 2D backbone $\mathbf{C}$. The image features are then unprojected to the Voint cloud by $\Phi_{\mathbf{B}}$ and passed to VointNet $\widehat{\mathbf{F}}$. To learn both $\mathbf{C}$ and $\widehat{\mathbf{F}}$, a 3D loss on the output points is used, with an optional auxiliary 2D loss on $\mathbf{C}$.
# 3.1 3D VOINT CLOUD

From Point Clouds to Voint Clouds. A 3D point cloud is a compact 3D representation composed of sampled points on the surface of a 3D object or a scene; it can be obtained from different sensors like LiDAR (Chen et al., 2017) or as a result of reconstruction (Okutomi & Kanade, 1993). Formally, we define the coordinate function for the surface $g_{\mathrm{s}}(\mathbf{x}) : \mathbb{R}^3 \to \mathbb{R}$ as the Signed Distance Function (SDF) in the continuous Euclidean space (Park et al., 2019; Mescheder et al., 2019). The 3D iso-surface is then defined as the set of all points $\mathbf{x}$ that satisfy the condition $g_{\mathrm{s}}(\mathbf{x}) = 0$. We define a surface 3D point cloud $\mathcal{X} \in \mathbb{R}^{N \times 3}$ as a set of $N$ 3D points, where each point $\mathbf{x}_i \in \mathbb{R}^3$ is represented by its 3D coordinates $(x_i, y_i, z_i)$ and satisfies the iso-surface condition: $\mathcal{X} = \{\mathbf{x}_i \in \mathbb{R}^3 \mid g_{\mathrm{s}}(\mathbf{x}_i) = 0\}_{i=1}^N$. In this work, we aim to fuse view-dependency into 3D points. Inspired by NeRFs (Mildenhall et al., 2020), we assume that surface points also depend on the view direction from which they are being observed. Specifically, there exists a continuous implicit spherical function $\mathbf{g}(\mathbf{x}, \mathbf{u}) : \mathbb{R}^5 \to \mathbb{R}^d$ that defines the features of each point $\mathbf{x}$ depending on the view-point direction $\mathbf{u}$. Given a set of $M$ view-point directions $\mathcal{U} \in \mathbb{R}^{M \times 2}$, a Voint $\widehat{\mathbf{x}} \in \mathbb{R}^{M \times d}$ is the set of $M$ view-dependent features of size $d$ for the sphere centered at point $\mathbf{x}$, as follows.

$$
\widehat{\mathbf{x}}_i = \left\{ \mathbf{g}\left(\mathbf{x}_i, \mathbf{u}_j\right) \in \mathbb{R}^{d} \mid \mathbf{x}_i \in \mathcal{X} \right\}_{j=1}^{M} \tag{1}
$$

The Voint cloud $\widehat{\mathcal{X}} = \{\widehat{\mathbf{x}}_i\}_{i=1}^{N} \in \mathbb{R}^{N \times M \times d}$ is the set of all $N$ Voints $\widehat{\mathbf{x}}_i$ corresponding to the parent point cloud $\mathcal{X}$. Note that we typically do not have access to the underlying implicit function $\mathbf{g}$, and we approximate it with the following three steps.
1-Multi-View Projection. As mentioned earlier, a Voint combines multiple view-features of the same 3D point. These view-features come from a multi-view projection of the points by a point cloud renderer $\mathbf{R}:\mathbb{R}^{N\times 3}\to \mathbb{R}^{M\times H\times W\times 3}$ that renders the point cloud $\mathcal{X}$ from multiple view-points $\mathcal{U}$ into $M$ images of size $H\times W\times 3$. In addition to projecting the point cloud into the image space, $\mathbf{R}$ defines the index mapping $\mathbf{B}\in \{0,\dots,N\}^{M\times H\times W}$ from each pixel to one of the $N$ points it renders (or to the background). $\mathbf{R}$ also outputs the binary visibility matrix $\mathbf{V}\in \{0,1\}^{N\times M}$ for each point from each view. Since not all points appear in all the views due to pixel discretization, the visibility score $\mathbf{V}_{i,j}$ indicates whether the Voint $\widehat{\mathbf{x}}_i$ is visible in the view $\mathbf{u}_j$. The matrix $\mathbf{B}$ is crucial for unprojection, while $\mathbf{V}$ is needed for defining meaningful operations on Voints.

2-Multi-View Feature Extraction. The rendered images are processed by a function $\mathbf{C}:\mathbb{R}^{M\times H\times W\times 3}\to \mathbb{R}^{M\times H\times W\times d}$ that extracts image features, as shown in Figure 2. If $\mathbf{C}$ is the identity function, each view-feature is simply the RGB value of the corresponding point. However, $\mathbf{C}$ can be a 2D network dedicated to the downstream task that extracts useful global and local features about each view.

3-Multi-View Unprojection. We propose a module $\Phi_{\mathbf{B}}:\mathbb{R}^{M\times H\times W\times d}\to \mathbb{R}^{N\times M\times d}$ that unprojects the 2D features from each pixel to 3D view-features at the corresponding Voint. Using the mapping $\mathbf{B}$ created by the renderer, $\Phi_{\mathbf{B}}$ forms the Voint cloud features $\widehat{\mathcal{X}}$.

To summarize, the output Voint cloud is described by Eq (1), where $\mathbf{g}(\mathbf{x}_i,\mathbf{u}_j) = \Phi_{\mathbf{B}}\big(\mathbf{C}(\mathbf{R}(\mathcal{X},\mathbf{u}_j))\big)_i$ and the features are only defined for a view $j$ of Voint $\widehat{\mathbf{x}}_i$ if $\mathbf{V}_{i,j} = 1$.
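For illustration, a minimal PyTorch sketch of the unprojection $\Phi_{\mathbf{B}}$ could look as follows, assuming a precomputed index map `B` with 0 reserved for the background and averaging all pixels that map to the same point in a view; the tensor conventions, names, and the averaging choice are illustrative rather than the exact implementation.

```python
import torch

def unproject_features(feats_2d: torch.Tensor, B: torch.Tensor, N: int):
    """Gather pixel features into a Voint cloud (an illustrative sketch).

    feats_2d: (M, H, W, d) per-view pixel features from the 2D backbone C.
    B:        (M, H, W) int64 index map from the renderer R; here 0 marks the
              background and 1..N address the N points (indexing assumption).
    Returns the Voint cloud (N, M, d) and the visibility matrix V (N, M).
    """
    M, H, W, d = feats_2d.shape
    voints = torch.zeros(N + 1, M, d)          # slot 0 collects background pixels
    counts = torch.zeros(N + 1, M, 1)
    flat_idx = B.reshape(M, H * W)             # (M, HW) point index per pixel
    flat_feat = feats_2d.reshape(M, H * W, d)  # (M, HW, d)
    for j in range(M):                         # scatter-average, one view at a time
        voints[:, j].index_add_(0, flat_idx[j], flat_feat[j])
        counts[:, j].index_add_(0, flat_idx[j], torch.ones(H * W, 1))
    V = (counts[1:, :, 0] > 0).float()         # visible iff some pixel maps to it
    voints = voints[1:] / counts[1:].clamp(min=1)  # drop background, average pixels
    return voints, V
```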
# 3.2 OPERATIONS ON 3D VOINT CLOUDS

We show in the Appendix that a functional form of max-pooled individual view-features over a set of viewing angles can approximate any function in spherical coordinates. We provide a theorem that extends PointNet's theorem on the functional composition of point clouds (Qi et al., 2017a) and its universal approximation to the spherical functions underlying Voints. Next, we define a set of operations on Voints as building blocks for Voint neural networks (VointNet).
VointMax. We define VointMax as max-pooling on the visible view-features along the view dimension of the Voint $\widehat{\mathbf{x}}$. For all $i \in \{1,\dots,N\}$ and $j \in \{1,\dots,M\}$,

$$
\operatorname{VointMax}\left(\widehat{\mathbf{x}}_i\right) = \max_{j} \widehat{\mathbf{x}}_{i,j}, \quad \text{s.t.} \quad \mathbf{V}_{i,j} = 1 \tag{2}
$$
VointConv. We define the convolution operation $h_{\mathrm{V}}: \mathbb{R}^{N \times M \times d} \to \mathbb{R}^{N \times M \times d'}$ as any learnable function of $l_{V}$ layers that operates on the Voint space with weights shared across all the Voints, taking view-features of size $d$ as input and outputting view-features of size $d'$. A simple example of this VointConv operation is a shared MLP applied only on the visible view-features. We provide further details for such operations in Section 4.2, which result in different non-exhaustive variants of VointNet.
# 3.3 LEARNING ON 3D VOINT CLOUDS

VointNet. The goal of the VointNet model is to obtain multi-view point cloud features that can be subsequently used by any point cloud processing pipeline. The VointNet module $\widehat{\mathbf{F}}:\mathbb{R}^{N\times M\times d}\to \mathbb{R}^{N\times d}$ is defined as follows.

$$
\widehat{\mathbf{F}}(\widehat{\mathcal{X}}) = h_{\mathrm{P}}\left(\operatorname{VointMax}\left(h_{\mathrm{V}}(\widehat{\mathcal{X}})\right)\right), \tag{3}
$$

where $h_{\mathrm{P}}$ is any point convolutional operation (e.g. shared MLP or EdgeConv). VointNet $\widehat{\mathbf{F}}$ transforms the individual view-features using the learned VointConv $h_{\mathrm{V}}$ before VointMax is applied on the view-features to obtain point features.
VointNet Pipeline for 3D Point Cloud Processing. The full pipeline is described in Figure 2. The learning objective for this pipeline can be described as follows:

$$
\underset{\boldsymbol{\theta}_{\mathbf{C}},\, \boldsymbol{\theta}_{\widehat{\mathbf{F}}}}{\arg\min} \sum_{i=1}^{N} L\left(\widehat{\mathbf{F}}\left(\Phi_{\mathbf{B}}\left(\mathbf{C}\left(\mathbf{R}(\mathcal{X}, \mathcal{U})\right)\right)\right)_{i}, \mathbf{y}_{i}\right), \tag{4}
$$

where $L$ is a Cross-Entropy (CE) loss defined on all the training points $\mathcal{X}$, and $\{\mathbf{y}_i\}_{i=1}^N$ are the labels of these points. The other components $(\mathbf{R}, \Phi_{\mathbf{B}}, \mathcal{U}, \mathbf{C})$ are all defined above. The weights to be jointly learned are those of the 2D backbone $(\theta_{\mathbf{C}})$ and those of VointNet $(\theta_{\widehat{\mathbf{F}}})$, using the same 3D loss. An auxiliary 2D loss on $\theta_{\mathbf{C}}$ can optionally be added for supervision at the image level. For classification, the entire object can be treated as a single Voint, and the global features of each view become the view-features of that Voint. We analyze different setups in detail in Section 6.
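A sketch of this objective in code, with the optional auxiliary 2D term, is given below; the variable names, the `aux_weight` factor, and the use of `ignore_index` for background pixels are illustrative assumptions.

```python
import torch.nn.functional as F

def pipeline_loss(point_logits, point_labels, pixel_logits=None,
                  pixel_labels=None, aux_weight=1.0):
    """3D CE loss over points (Eq. 4) plus an optional auxiliary 2D CE loss.

    point_logits: (N, num_classes) VointNet output; point_labels: (N,).
    pixel_logits: (M, num_classes, H, W) 2D backbone output (optional);
    pixel_labels: (M, H, W) projected point labels, 0 = background (optional).
    """
    loss = F.cross_entropy(point_logits, point_labels)
    if pixel_logits is not None:
        # auxiliary image-level supervision; background pixels are ignored
        loss = loss + aux_weight * F.cross_entropy(
            pixel_logits, pixel_labels, ignore_index=0)
    return loss
```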
# 4 EXPERIMENTS

# 4.1 EXPERIMENTAL SETUP

Datasets. We benchmark VointNet on the challenging and realistic ScanObjectNN dataset for 3D point cloud classification (Uy et al., 2019). The dataset comes in three variants, includes background and occlusion, and contains 2,902 point clouds from 15 categories. For the shape retrieval task, we benchmark on ShapeNet Core55, a subset of ShapeNet (Chang et al., 2015) that consists of 51,162 3D mesh objects labeled with 55 object classes. We follow MVTN's setup (Hamdi et al., 2021) in sampling 5,000 points from each mesh object to obtain point clouds. For the task of shape part segmentation, we test on ShapeNet Parts (Yi et al., 2016), a subset of ShapeNet (Chang et al., 2015) that consists of 16,872 point cloud objects from 16 categories and 50 parts. For occlusion robustness, we follow MVTN (Hamdi et al., 2021) and test on ModelNet40 (Wu et al., 2015), which is composed of 40 classes and 12,311 3D objects.

| Method | Data Type | OBJ_BG | OBJ_ONLY | Hardest |
|---|---|---|---|---|
| PointNet (Qi et al., 2017a) | Points | 73.3 | 79.2 | 68.0 |
| SpiderCNN (Xu et al., 2018) | Points | 77.1 | 79.5 | 73.7 |
| PointNet++ (Qi et al., 2017b) | Points | 82.3 | 84.3 | 77.9 |
| PointCNN (Li et al., 2018) | Points | 86.1 | 85.5 | 78.5 |
| DGCNN (Wang et al., 2019c) | Points | 82.8 | 86.2 | 78.1 |
| SimpleView (Goyal et al., 2021) | M-View | - | - | 79.5 |
| BGA-DGCNN (Uy et al., 2019) | Points | - | - | 79.7 |
| BGA-PN++ (Uy et al., 2019) | Points | - | - | 80.2 |
| MVTN (Hamdi et al., 2021) | M-View | 92.6 | 92.3 | 82.8 |
| VointNet (ours) | Voints | **93.7** | **94.0** | **85.4** |

Table 2: 3D Point Cloud Classification on ScanObjectNN. We report the accuracy of VointNet in 3D point cloud classification on three different variants of ScanObjectNN (Uy et al., 2019). Bold denotes the best result in its setup. Note that the Hardest variant includes rotated and translated objects, which highlights the benefits of Voints in challenging scenarios.

Metrics. For 3D point cloud classification, we report the overall accuracy, while shape retrieval is evaluated using mean Average Precision (mAP) over test queries (Hamdi et al., 2021). 3D semantic segmentation is evaluated using mean Intersection over Union (mIoU) on points. For part segmentation, we report instance-averaged mIoU (Ins. mIoU).
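For reference, the instance-averaged mIoU can be computed per shape and then averaged over the test set; a small sketch of the standard convention (an empty part counted as IoU 1) follows, as an illustration rather than our exact evaluation code.

```python
import numpy as np

def instance_miou(pred: np.ndarray, gt: np.ndarray, part_ids) -> float:
    """Mean IoU over the parts of a single shape; averaging this value over
    all test shapes gives the instance-averaged mIoU (Ins. mIoU)."""
    ious = []
    for p in part_ids:                       # parts belonging to this category
        inter = np.sum((pred == p) & (gt == p))
        union = np.sum((pred == p) | (gt == p))
        ious.append(1.0 if union == 0 else inter / union)  # empty part counts as 1
    return float(np.mean(ious))
```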
Baselines. We include PointNet (Qi et al., 2017a), PointNet++ (Qi et al., 2017b), and DGCNN (Wang et al., 2019c) as baselines that use point clouds. We also compare against multi-view classification approaches like MVCNN (Su et al., 2015), SimpleView (Goyal et al., 2021), and MVTN (Hamdi et al., 2021) as baselines for classification and retrieval, and adopt some of the multi-view segmentation baselines (e.g. Label Fusion (Wang et al., 2019a) and Mean Fusion (Kundu et al., 2020)) for part segmentation.
# 4.2 VOINTNET VARIANTS

VointNet in Eq (3) relies on the VointConv operation $h_{\mathrm{V}}$ as its basic building block. Here, we briefly describe three examples of $h_{\mathrm{V}}$ operations that VointNet uses.

Shared Multi-Layer Perceptron (MLP). This is the most basic VointConv formulation. For a layer $l$, the features of Voint $i$ at view $j$ are updated to layer $l+1$ as $\mathbf{h}_{i,j}^{l+1} = \rho(\mathbf{h}_{i,j}^{l}\mathcal{W}_{\rho})$, where $\rho$ is the shared MLP with weights $\mathcal{W}_{\rho}$, followed by normalization and a nonlinear function (e.g. ReLU). This operation is applied on all Voints independently and only involves the visible view-features of each Voint. This formulation extends the shared MLP of PointNet (Qi et al., 2017a) to operate on Voints' view-features.
Graph Convolution (GCN). We define a fully connected graph for each Voint by creating a virtual center node connected to all the view-features to aggregate their information (similar to the "cls" token in ViT (Dosovitskiy et al., 2021)). The graph convolution is then defined as the shared MLP (as described above) applied on the edge features between all view-features, followed by a max-pool over the graph neighbors. An additional shared MLP is used before the final output.

Graph Attention (GAT). A graph attention operation can be defined just like the GCN operation above, but with learned attention weights on the graph neighbors' features before averaging them. A shared MLP computes these weights.
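The sketch below illustrates one way to realize the attention-style aggregation with a virtual center node; the single-linear attention scorer and the residual broadcast back to the views are our illustrative choices, not the exact architecture.

```python
import torch
import torch.nn as nn

class VointGATLayer(nn.Module):
    """Sketch of a GAT-style VointConv: a virtual center node attends over
    the visible view-features; scorer and residual update are illustrative."""

    def __init__(self, d: int):
        super().__init__()
        self.score = nn.Linear(d, 1)                    # shared MLP for attention
        self.mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU())

    def forward(self, voints: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
        # voints: (N, M, d); V: (N, M); assumes >= 1 visible view per point
        logits = self.score(voints).squeeze(-1)               # (N, M)
        logits = logits.masked_fill(V == 0, float("-inf"))    # visible views only
        attn = torch.softmax(logits, dim=1).unsqueeze(-1)     # (N, M, 1)
        center = (attn * voints).sum(dim=1, keepdim=True)     # virtual center node
        return self.mlp(voints + center)                      # broadcast back
```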
| Method | MVCNN (Su et al., 2015) | RotNet (Kanezaki et al., 2018) | ViewGCN (Wei et al., 2020) | MVTN (Hamdi et al., 2021) | VointNet (ours) |
|---|---|---|---|---|---|
| ShapeNet Retr. mAP | 73.5 | 77.2 | 78.4 | 82.9 | **83.3** |

Table 3: 3D Shape Retrieval. We report 3D shape retrieval mAP on ShapeNet Core55 (Chang et al., 2015; Sfikas et al., 2017). VointNet achieves state-of-the-art results on this benchmark.
| Method | Data Type | Part Seg. (Unrotated) | Part Seg. (Rotated) |
|---|---|---|---|
| PointNet (Qi et al., 2017a) | Points | 80.1 | 36.6 ±0.2 |
| DGCNN (Wang et al., 2019c) | Points | 80.1 | 37.1 ±0.2 |
| CurveNet (Xiang et al., 2021) | Points | 84.9 | 32.3 ±0.0 |
| Label Fuse (Wang et al., 2019a) | M-View | 80.0 | 61.4 ±0.2 |
| Mean Fuse (Kundu et al., 2020) | M-View | 77.5 | 62.0 ±0.2 |
| VointNet (ours) | Voints | 81.2 | **62.4 ±0.2** |

Table 4: Robust 3D Part Segmentation on ShapeNet Parts. We compare the Ins. mIoU of VointNet against other methods in 3D segmentation on ShapeNet Parts (Yi et al., 2016). At test time, we randomly rotate the objects and report the results over ten runs. Note how VointNet's performance largely exceeds the point-based baselines in the realistic rotated scenario, while also exceeding the multi-view baselines on the unrotated benchmark. All the results are reproduced in our setup.
# 4.3 IMPLEMENTATION DETAILS

Rendering and Unprojection. We choose the differentiable point cloud renderer $\mathbf{R}$ from PyTorch3D (Ravi et al., 2020) in our pipeline for its speed and compatibility with PyTorch (Paszke et al., 2017). We render point clouds into multi-view images of size $224 \times 224 \times 3$. We color the points by their normals' values, or keep them white if normals are not available. Following a procedure similar to (Wei et al., 2020; Hamdi et al., 2021), the view-point setup is randomized during training (using $M = 8$ views) and fixed to spherical views at test time (using $M = 12$ views).
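As an illustration of how the renderer can expose the index map $\mathbf{B}$ and visibility $\mathbf{V}$, the following sketch uses PyTorch3D's point rasterizer, whose fragments carry per-pixel point indices (-1 marks background); the camera distance and point radius here are arbitrary example values rather than our exact settings.

```python
import torch
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (FoVPerspectiveCameras, PointsRasterizationSettings,
                                PointsRasterizer, look_at_view_transform)

def render_index_maps(points: torch.Tensor, elev, azim, image_size: int = 224):
    """Rasterize one point cloud from M view-points and return the pixel-to-point
    index map B (M, H, W) and visibility V (N, M). A sketch, not the exact code.

    points: (N, 3) point cloud; elev, azim: lists of M camera angles in degrees.
    """
    M, N = len(elev), points.shape[0]
    R, T = look_at_view_transform(dist=2.0, elev=elev, azim=azim)
    cameras = FoVPerspectiveCameras(R=R, T=T)
    settings = PointsRasterizationSettings(image_size=image_size,
                                           radius=0.01, points_per_pixel=1)
    rasterizer = PointsRasterizer(cameras=cameras, raster_settings=settings)
    fragments = rasterizer(Pointclouds(points=[points] * M))
    B = fragments.idx[..., 0]                    # (M, H, W); -1 marks background
    B = torch.where(B >= 0, B % N, B)            # packed indices -> per-cloud indices
    V = torch.zeros(N, M)
    for j in range(M):
        vis = B[j][B[j] >= 0].unique()           # points hit by at least one pixel
        V[vis, j] = 1.0
    return B, V
```

For example, calling `render_index_maps(pts, elev=[30.0] * 12, azim=[i * 30.0 for i in range(12)])` would approximate a fixed 12-view spherical test configuration.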
Architectures. For the 2D backbone $\mathbf{C}$, we use ViT-B (Dosovitskiy et al., 2021) (with pretrained weights from the TIMM library (Wightman, 2019)) for classification and DeepLabV3 (Chen et al., 2018) for segmentation. We use the 3D CE loss on the 3D point cloud output and the 2D CE loss when the loss is defined on the pixels. The feature dimension of the VointNet architectures is $d = 64$, and the depth of $h_{\mathrm{V}}$ is $l_{V} = 4$ layers. The main results are based on VointNet (MLP), unless otherwise specified as in Section 6, where we study in detail the effect of the VointConv $h_{\mathrm{V}}$ and of $\mathbf{C}$.

Training Setup. We train our pipeline in two stages: we start by training the 2D backbone on the 2D projected labels of the points, then train the entire pipeline end-to-end while focusing the training on the VointNet part. We use the AdamW optimizer (Loshchilov & Hutter, 2017) with an initial learning rate of 0.0005, decayed to 33.3% every 12 epochs, for 40 epochs in total. The pipeline is trained on one NVIDIA Tesla V100 GPU. We do not use any data augmentation. More details about the training setup (loss and rendering), VointNet, and the 2D backbone architectures can be found in the Appendix.
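In PyTorch terms, the optimization setup above corresponds roughly to the following sketch; `model` and `train_one_epoch` are hypothetical placeholders, and the decay factor is our reading of the "33.3% every 12 epochs" schedule.

```python
import torch

# AdamW with the initial LR above; StepLR approximates the step schedule.
# `model` and `train_one_epoch` are placeholders for the actual pipeline.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=12, gamma=0.333)
for epoch in range(40):
    train_one_epoch(model, optimizer)
    scheduler.step()
```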
# 5 RESULTS

The main test results of our Voint formulations are summarized in Tables 2, 3, 4, and 5. We achieve state-of-the-art performance on the tasks of 3D classification, retrieval, and robust 3D part segmentation. More importantly, under the realistic rotated setups of ScanObjectNN and ShapeNet Parts, we improve by over 7.2% accuracy and 25% mIoU, respectively, compared to the point-based baselines (Qi et al., 2017a; Wang et al., 2019c). Following common practice (Hamdi et al., 2021), we report the best results out of four runs in the benchmark tables; detailed results are provided in the Appendix.
![](images/123fba4fd773417d0a4163ecd2fb10d4b2176e02cc5cd0cc864f083a9c86b7cd.jpg)
Figure 3: Qualitative Comparison for Part Segmentation. We compare our VointNet 3D segmentation predictions to Mean Fuse (Kundu et al., 2020), which uses the same trained 2D backbone. Note how VointNet distinguishes detailed parts (e.g. the car window frame).

![](images/cd772a1ce730ead1824f2b7f7a275f785539b7534c5bcf3091b1539db62ac8d3.jpg)

![](images/62f87428c6e762834173ef8afc04dd088d57d8d321339eb21332f097f9ee4c51.jpg)
# 5.1 3D SHAPE CLASSIFICATION

Table 2 reports the classification accuracy on the 3D point cloud classification task on ScanObjectNN (Uy et al., 2019), benchmarking VointNet against recent and strong baselines (Hamdi et al., 2021; Goyal et al., 2021). VointNet demonstrates state-of-the-art results on all the variants, including the challenging Hardest (PB_T50_RS) variant, which includes challenging scenarios with rotated and translated objects. The increase in performance (+2.6%) is significant on this variant, which highlights the benefits of Voints in challenging scenarios, with further affirming results in Section 5.4. We follow exactly the same procedure as in MVTN (Hamdi et al., 2021).
# 5.2 3D SHAPE RETRIEVAL

Table 3 benchmarks the 3D shape retrieval mAP on ShapeNet Core55 (Chang et al., 2015). VointNet achieves state-of-the-art performance on ShapeNet Core55. Baseline results are reported from Hamdi et al. (2021).
# 5.3 ROBUST 3D PART SEGMENTATION

Table 4 reports the instance-averaged segmentation mIoU of VointNet compared with other methods on ShapeNet Parts (Yi et al., 2016). Two variants of the benchmark are reported: the unrotated normalized setup and the rotated realistic setup. For the rotated setup, we follow the previous 3D literature (Liu et al., 2019a; Hamdi et al., 2020; 2021) in testing the robustness of trained models by perturbing the shapes in ShapeNet Parts with random rotations at test time (ten runs) and reporting the averages in Table 4. Note VointNet's improvement over Mean Fuse (Kundu et al., 2020) and Label Fuse (Wang et al., 2019a) on the unrotated setup, even though both baselines use the same trained 2D backbone as VointNet. In the rotated setup, point-based methods degrade drastically. All the results in Table 4 are reproduced by our code in the same setup (see the code attached in the supplementary material). Figure 3 shows qualitative 3D segmentation results for VointNet and Mean Fuse (Kundu et al., 2020) compared to the ground truth.
# 5.4 OCCLUSION ROBUSTNESS

Robustness to occlusion is one aspect of 3D classification models that has recently been studied, as detailed in MVTN (Hamdi et al., 2021). These simulated occlusions are introduced at test time, and the average test accuracy is reported for each cropping ratio. We benchmark our VointNet against recent baselines in Table 5: PointNet (Qi et al., 2017a) and DGCNN (Wang et al., 2019c) as point-based baselines, and MVTN (Hamdi et al., 2021) as a multi-view baseline.
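The cropping protocol can be simulated by removing a fixed ratio of points along a chosen direction before inference; below is a small sketch under our own assumption about the cropping axis, for illustration of the evaluation setup.

```python
import torch

def occlude(points: torch.Tensor, ratio: float, axis: int = 2) -> torch.Tensor:
    """Drop the top `ratio` fraction of points along one axis to simulate
    occlusion at test time (a sketch of the evaluation protocol)."""
    if ratio <= 0:
        return points
    order = torch.argsort(points[:, axis])        # sort along the cropping axis
    keep = order[: int(points.shape[0] * (1 - ratio))]
    return points[keep]
```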
| Method | Data Type | 0 | 0.1 | 0.2 | 0.3 | 0.5 |
|---|---|---|---|---|---|---|
| PointNet (Qi et al., 2017a) | Points | 89.1 | 88.2 | 86.1 | 81.6 | 53.5 |
| DGCNN (Wang et al., 2019c) | Points | 92.1 | 77.1 | 74.5 | 71.2 | 30.1 |
| PCT (Guo et al., 2021) | Points | 93.3 | 92.6 | 91.1 | 88.2 | 61.9 |
| MVTN (Hamdi et al., 2021) | M-View | 93.8 | 90.3 | 89.9 | 88.3 | 67.1 |
| VointNet (ours) | Voints | 92.8 | 91.6 | 91.2 | 89.1 | 66.1 |

Table 5: Occlusion Robustness for 3D Classification. We report the test accuracy on ModelNet40 (Wu et al., 2015) under different occlusion ratios of the data (column headers) to measure the occlusion robustness of different 3D methods.

![](images/ca2196b969d0352d172fa93d3fa3d0cc81b1a8cce986db6e983565528fffa072.jpg)
Figure 4: Effect of the Number of Views. We plot the Ins. mIoU of 3D segmentation vs. the number of views $(M)$ used at inference on ShapeNet Parts. Note VointNet's consistent improvement over Mean Fuse (Kundu et al., 2020) and Label Fuse (Wang et al., 2019a). Both baselines use the same trained 2D backbone as VointNet and are tested on the same unrotated setup.
# 6 ANALYSIS AND INSIGHTS

Number of Views. We study the effect of the number of views $M$ on the performance of 3D part segmentation using multiple views. We compare Mean Fuse (Kundu et al., 2020) and Label Fuse (Wang et al., 2019a) to our VointNet when all of them share the same trained 2D backbone. The views are randomly picked, and the experiments are repeated four times. Ins. mIoU with confidence intervals is shown in Figure 4. We observe a consistent improvement with VointNet over the other two baselines across different numbers of views.
| 2D Backbone | VointConv | Ins. mIoU |
|---|---|---|
| FCN | MLP | 78.8 ± 0.2 |
| FCN | GCN | 77.6 ± 0.2 |
| FCN | GAT | 77.1 ± 0.2 |
| DeepLabV3 | MLP | 80.6 ± 0.1 |
| DeepLabV3 | GCN | 77.2 ± 0.4 |
| DeepLabV3 | GAT | 80.4 ± 0.2 |

Table 6: Ablation Study for 3D Segmentation. We ablate different components of VointNet (the 2D backbone and the VointConv choice) and report Ins. mIoU performance on ShapeNet Parts.

Choice of Backbones. We ablate the choice of the 2D backbone and the VointConv operation used in VointNet and report the segmentation Ins. mIoU results in Table 6. Note how the 2D backbone greatly affects performance, while the VointConv operation type does not. This ablation highlights the importance of the 2D backbone in the VointNet pipeline and motivates the use of the simplest variant, VointNet (MLP). We provide a detailed study of more factors, as well as compute and memory costs, in the Appendix.
# 7 LIMITATIONS AND ACKNOWLEDGMENTS

One aspect limiting the performance of Voints is how well-trained the 2D backbone is for the downstream 3D task. In most cases, the 2D backbone must be pretrained with enough data to learn meaningful information for VointNet. Another aspect that limits the capability of the Voint cloud is how to properly select the view-points for segmentation. Addressing these limitations is an important direction for future work, as is extending Voint learning to more 3D tasks like 3D scene segmentation and 3D object detection.

Acknowledgments. This work was supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research through the Visual Computing Center (VCC) funding and the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI).
# REFERENCES

Gary Bradski and Stephen Grossberg. Recognition of 3-D objects from multiple 2-D views by a self-organizing neural architecture. In *From Statistics to Neural Networks*, pp. 349–375. Springer, 1994.

Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3D model repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at Chicago, 2015.

Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 801–818, 2018.

Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3D object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1907–1915, 2017.

Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4D spatio-temporal ConvNets: Minkowski convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3075–3084, 2019.

Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pp. 2990–2999, 2016.

Angela Dai and Matthias Nießner. 3DMV: Joint 3D-multi-view prediction for 3D semantic scene segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 452–468, 2018.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021.

Carlos Esteves, Yinshuang Xu, Christine Allen-Blanchette, and Kostas Daniilidis. Equivariant multi-view networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1568–1577, 2019.

Yutong Feng, Yifan Feng, Haoxuan You, Xibin Zhao, and Yue Gao. MeshNet: Mesh neural network for 3D shape representation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 8279–8286, 2019.

Ankit Goyal, Hei Law, Bowei Liu, Alejandro Newell, and Jia Deng. Revisiting point cloud shape classification with a simple and effective baseline. In ICML, 2021.

Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, and Shi-Min Hu. PCT: Point cloud transformer. Computational Visual Media, 7(2):187–199, 2021.

Abdullah Hamdi, Sara Rojas, Ali Thabet, and Bernard Ghanem. AdvPC: Transferable adversarial perturbations on 3D point clouds. In Computer Vision – ECCV 2020, pp. 241–257, Cham, 2020. Springer International Publishing. ISBN 978-3-030-58610-2.

Abdullah Hamdi, Silvio Giancola, and Bernard Ghanem. MVTN: Multi-view transformation network for 3D shape recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 1–11, October 2021.

Zhizhong Han, Xiyang Wang, Yu-Shen Liu, and Matthias Zwicker. Multi-angle point cloud-VAE: Unsupervised feature learning for 3D point clouds from multiple angles by joint self-reconstruction and half-to-half prediction. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10441–10450. IEEE, 2019.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. URL http://arxiv.org/abs/1512.03385.

Ji Hou, Angela Dai, and Matthias Nießner. 3D-SIS: 3D semantic instance segmentation of RGB-D scans. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4421–4430, 2019.

Wenbo Hu, Hengshuang Zhao, Li Jiang, Jiaya Jia, and Tien-Tsin Wong. Bidirectional projection network for cross dimension scene understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14373–14382, 2021.

Maximilian Jaritz, Jiayuan Gu, and Hao Su. Multi-view PointNet for 3D scene understanding. In Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019.

Evangelos Kalogerakis, Melinos Averkiou, Subhransu Maji, and Siddhartha Chaudhuri. 3D shape segmentation with projective convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3779–3788, 2017.

Asako Kanezaki, Yasuyuki Matsushita, and Yoshifumi Nishida. RotationNet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5010–5019, 2018.

Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, and Caroline Pantofaru. Virtual multi-view fusion for 3D semantic segmentation. In European Conference on Computer Vision (ECCV), pp. 518–535. Springer, 2020.

Yangyan Li, Rui Bu, Mingchao Sun, Wei Wu, Xinhan Di, and Baoquan Chen. PointCNN: Convolution on X-transformed points. In Advances in Neural Information Processing Systems (NIPS), pp. 820–830, 2018.

Yongcheng Liu, Bin Fan, Shiming Xiang, and Chunhong Pan. Relation-shape convolutional neural network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8895–8904, 2019a.

Ze Liu, Zheng Zhang, Yue Cao, Han Hu, and Xin Tong. Group-free 3D object detection via transformers. arXiv preprint arXiv:2104.00678, 2021a.

Zhengzhe Liu, Xiaojuan Qi, and Chi-Wing Fu. 3D-to-2D distillation for indoor scene parsing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4464–4474, 2021b.

Zhijian Liu, Haotian Tang, Yujun Lin, and Song Han. Point-voxel CNN for efficient 3D deep learning. In Advances in Neural Information Processing Systems, pp. 965–975, 2019b.

Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440, 2015.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.

Yecheng Lyu, Xinming Huang, and Ziming Zhang. Learning to segment 3D point clouds in 2D image space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12255–12264, 2020.
221
+ Ricardo Martin-Brualla, Noha Radwan, Mehdi SM Sajjadi, Jonathan T Barron, Alexey Dosovitskiy, and Daniel Duckworth. Nerf in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7210-7219, 2021.
222
+ Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 922-928. IEEE, 2015.
223
+ Leonard McMillan and Gary Bishop. Plenoptic modeling: An image-based rendering system. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pp. 39-46, 1995.
224
+ Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4460-4470, 2019.
225
+ Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, pp. 405-421. Springer, 2020.
226
+ Masatoshi Okutomi and Takeo Kanade. A multiple-baseline stereo. IEEE Transactions on pattern analysis and machine intelligence, 15(4):353-363, 1993.
227
+ Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 165-174, 2019.
228
+ Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
229
+ Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10318-10327, 2021.
230
+ Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 652-660, 2017a.
231
+ Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in neural information processing systems (NIPS), pp. 5099-5108, 2017b.
232
+ Nikhila Ravi, Jeremy Reizenstein, David Novotny, Taylor Gordon, Wan-Yen Lo, Justin Johnson, and Georgia Gkioxari. Accelerating 3d deep learning with pytorch3d. arXiv:2007.08501, 2020.
233
+ Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. Imagenet large scale visual recognition challenge. CoRR, abs/1409.0575, 2014. URL http://arxiv.org/abs/1409.0575.
234
+
235
+ Konstantinos Sfikas, Theoharis Theoharis, and Ioannis Pratikakis. Exploiting the PANorama Representation for Convolutional Neural Network Classification and Retrieval. In Ioannis Pratikakis, Florent Dupont, and Maks Ovsjanikov (eds.), Eurographics Workshop on 3D Object Retrieval, pp. 1-7. The Eurographics Association, 2017. ISBN 978-3-03868-030-7. doi: 10.2312/3dor.20171045.
236
+ Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In Proceedings of the IEEE international conference on computer vision, pp. 945-953, 2015.
237
+ Hugues Thomas, Charles R Qi, Jean-Emmanuel Deschaud, Beatrix Marcotegui, François Goulette, and Leonidas J Guibas. Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6411-6420, 2019.
238
+ Mikaela Angelina Uy, Quang-Hieu Pham, Binh-Son Hua, Duc Thanh Nguyen, and Sai-Kit Yeung. Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data. In International Conference on Computer Vision (ICCV), 2019.
239
+ Brian H Wang, Wei-Lun Chao, Yan Wang, Bharath Hariharan, Kilian Q Weinberger, and Mark Campbell. Ldls: 3-d object segmentation through label diffusion from 2-d images. IEEE Robotics and Automation Letters, 4(3):2902-2909, 2019a.
240
+ He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J Guibas. Normalized object coordinate space for category-level 6d object pose and size estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2642-2651, 2019b.
241
+ Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. Bronstein, and Justin M. Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG), 2019c.
242
+ Xin Wei, Ruixuan Yu, and Jian Sun. View-gcn: View-based graph convolutional network for 3d shape analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1850-1859, 2020.
243
+ Ross Wightman. Pytorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
244
+ Zhirong Wu, S. Song, A. Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1912-1920, 2015.
245
+ Tiange Xiang, Chaoyi Zhang, Yang Song, Jianhui Yu, and Weidong Cai. Walk in the cloud: Learning curves for point clouds shape analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pp. 915-924, October 2021.
246
+ Yifan Xu, Tianqi Fan, Mingye Xu, Long Zeng, and Yu Qiao. SpiderCNN: Deep learning on point sets with parameterized convolutional filters. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 87-102, 2018.
247
+ Li Yi, Vladimir G Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, and Leonidas Guibas. A scalable active framework for region annotation in 3d shape collections. ACM Transactions on Graphics (ToG), 35(6):1-12, 2016.
248
+ Haoxuan You, Yifan Feng, Rongrong Ji, and Yue Gao. Pvnet: A joint convolutional network of point cloud and multi-view for 3d shape recognition. In Proceedings of the 26th ACM international conference on Multimedia, pp. 1310-1318, 2018.
249
+ Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, and Angjoo Kanazawa. PlenOctrees for real-time rendering of neural radiance fields. In ICCV, 2021.
250
+ Hengshuang Zhao, Li Jiang, Jiaya Jia, Philip Torr, and Vladlen Koltun. Point transformer. arXiv preprint arXiv:2012.09164, 2020.
251
+
252
+ # APPENDIX
253
+
254
+ # A DETAILED FORMULATIONS
255
+
256
+ # A.1 TOY EXAMPLE
257
+
258
+ In the toy 2D example in Figure 5, the center point (represented by a circular function $g$) is viewed from various view-points $u_{j}$ that are agnostic to the underlying function itself. In many applications, it is desirable to have a single feature representing each point in the point cloud. However, when the projected values of $g$ from these $u_{j}$ view-points are aggregated together (e.g. by max/mean pooling) into a constant representation of that point, the underlying properties of $g$ are lost. We build our Voint representation to keep the structure of $g$ intact by using the full set $\{(u_{j},g(u_{j}))\}_{j = 1}^{5}$ when learning the aggregations.
259
+
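+ As a minimal numerical sketch of this toy setup (the code and all names are illustrative, not from a released implementation), the snippet below samples $g(u) = \mathrm{sign}(\cos u)$ at five arbitrary view angles and contrasts a pooled scalar with the full set representation that Voints retain:
+
+ ```python
+ import numpy as np
+
+ # Circular function describing the center point (toy example of Figure 5).
+ g = lambda u: np.sign(np.cos(u))
+
+ # Five arbitrary view angles in [0, 2*pi), agnostic to g itself.
+ rng = np.random.default_rng(0)
+ u = rng.uniform(0.0, 2.0 * np.pi, size=5)
+ views = g(u)  # projected values of g at the view-points
+
+ # Pooling to a single scalar discards the angular structure of g ...
+ pooled_max, pooled_mean = views.max(), views.mean()
+
+ # ... while the Voint-style representation keeps the full set
+ # {(u_j, g(u_j))}, so a learned set function f can still exploit it.
+ voint = list(zip(u, views))
+ print(pooled_max, pooled_mean, voint)
+ ```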
260
+ # A.2 FUNCTIONAL FORM OF VOINTNET
261
+
262
+ We can look at a simplified setup to decide on the functional form of the deep neural network that operates in the Voint space. In this simplified setup, we consider a 2D example (instead of 3D Voints) and assume that a circular function describes a point at the center. The center point assumes its value according to the viewing angle $u$. The following Theorem 1 proves that, for any continuous set function $f$ that operates on any set of $M$ angles $\{u_1, \dots, u_M\}$, there exists an equivalent composite function consisting of transformed max-pooled individual view-features. This composition is the functional form we adopt for Voint neural networks.
263
+
264
+ Theorem 1 Suppose $f: \mathcal{S} \to \mathbb{R}$ is a continuous set function operating on an angles set $\mathcal{S} = \{u \mid u \in [0,2\pi]\}$ . The continuity of $f$ is based on the Hausdorff distance $d_H$ between two sets of angles, where $d_H(\mathcal{S},\mathcal{S}') = \max_{u_i' \in \mathcal{S}'} \min_{u_i \in \mathcal{S}} d_A(u_i,u_i')$ , and $d_A$ is the smallest positive angle between two angles $d_A(u,u') = \min(|u - u'|, 2\pi - |u - u'|)$ . Then, for every $\epsilon > 0$ , and $\mathcal{U} = \{u_1,\dots,u_M\} \subset \mathcal{S}$ , there exists a continuous function $\mathbf{h}$ and a symmetric function $g(u_1,\dots,u_M) = \gamma \circ \mathrm{MAX}$ , such that:
265
+
266
+ $$
267
+ \left| f (\mathcal {U}) - \gamma \left(\operatorname {M A X} \left(\mathbf {h} \left(u _ {1}\right), \dots , \mathbf {h} \left(u _ {M}\right)\right)\right) \right| < \epsilon , \tag {5}
268
+ $$
269
+
270
+ where $\gamma$ is a continuous function, and MAX is an element-wise vector max operator.
271
+
272
+ Proof. By the continuity of $f$, we take $\delta_{\epsilon}$ so that $|f(\mathcal{U}) - f(\mathcal{U}')| < \epsilon$ for any $\mathcal{U}, \mathcal{U}' \subset \mathcal{S}$ if $d_H(\mathcal{U}, \mathcal{U}') < \delta_{\epsilon}$. Define $K = \lceil 2\pi / \delta_{\epsilon} \rceil$, which splits $[0, 2\pi]$ evenly into $K$ intervals, and define an auxiliary function that maps an angle to the beginning of the interval it lies in:
273
+
274
+ $$
275
+ \sigma (u) = \frac {\lfloor K u \rfloor}{K}
276
+ $$
277
+
278
+ Let $\tilde{\mathcal{U}} = \{ \sigma(u) : u \in \mathcal{U} \}$, then
279
+
280
+ $$
281
+ \left| f (\mathcal {U}) - f (\tilde {\mathcal {U}}) \right| < \epsilon \tag {6}
282
+ $$
283
+
284
+ Let $h_k(u) = e^{-d\left(u, \left[\frac{k-1}{K}, \frac{k}{K}\right]\right)}$ be a soft indicator function where $d\left(u, \left[\frac{k-1}{K}, \frac{k}{K}\right]\right) = \min\left(d_A\left(u, \frac{k-1}{K}\right), d_A\left(u, \frac{k}{K}\right)\right)$ is the distance between angle $u$ to interval $\left[\frac{k-1}{K}, \frac{k}{K}\right]$ . Let $\mathbf{h}(u) = [h_1(u); \ldots; h_K(u)]$ , then $\mathbf{h}: \mathbb{R} \to \mathbb{R}^K$
285
+
286
+ Let $q_{j}(u_{1},\ldots ,u_{M}) = \max \{h_{j}(u_{1}),\ldots ,h_{j}(u_{M})\}$ , indicating the occupancy of the $j$ -th interval by angles in $\mathcal{U}$ . Let $\mathbf{q} = [q_1;\dots;q_K]$ , then $\mathbf{q}:[0,2\pi ]^M\to \{0,1\} ^K$ is a symmetric function, indicating the occupancy of each interval by angles in $\mathcal{U}$ .
287
+
288
+ Define $\zeta : \{0,1\}^K \to S$ as $\zeta(\mathbf{q}) = \left\{\frac{k-1}{K} : q_k \geq 1\right\}$, which maps the occupancy vector to the set containing the left end of each occupied angle interval. It is straightforward to show:
289
+
290
+ $$
291
+ \zeta (\mathbf {q} (\mathcal {U})) \equiv \tilde {\mathcal {U}} \tag {7}
292
+ $$
293
+
294
+ Let $\gamma : \mathbb{R}^K \to \mathbb{R}$ be a continuous function such that $\gamma(\mathbf{q}) = f(\zeta(\mathbf{q}))$ for $\mathbf{q} \in \{0,1\}^K$ . Then from Eq (6) and Eq (7),
295
+
296
+ $$
297
+ \left| \gamma (\mathbf {q} (\mathcal {U})) - f (\mathcal {U}) \right| = \left| f \left(\zeta (\mathbf {q} (\mathcal {U}))\right) - f (\mathcal {U}) \right| < \epsilon \tag {8}
298
+ $$
299
+
300
+ ![](images/72e8bc6c21ce842713cf2f761952fd837f95ca87330b4317430bb398147d99b3.jpg)
301
+ Figure 5: A Toy 2D Example of Voints. Voints assume view-dependency for every 3D point. Here, we look at a single 2D point at the center with a circular function $g(u) = \mathrm{sign}(\cos u)$ viewed from five arbitrary view-points $\{u_j\}_{j=1}^5$. Reducing $g$ to a single value based on the $u_j$ projections undermines the underlying structure of $g$. We instead take the full set $\{(u_j, g(u_j))\}_{j=1}^5$ as a representation of $g$ and learn a set function $f$ on these view-features for a more informative aggregation of the representation.
302
+
303
+ Note that $\gamma (\mathbf{q}(\mathcal{U}))$ can be rewritten as follows:
304
+
305
+ $$
306
+ \begin{aligned} \gamma \left(\mathbf {q} \left(\mathcal {U}\right)\right) &= \gamma \left(\mathbf {q} \left(u _ {1}, \dots , u _ {M}\right)\right) \\ &= \gamma \left(\operatorname {MAX} \left(\mathbf {h} \left(u _ {1}\right), \dots , \mathbf {h} \left(u _ {M}\right)\right)\right) \\ &= (\gamma \circ \operatorname {MAX}) \left(\mathbf {h} \left(u _ {1}\right), \dots , \mathbf {h} \left(u _ {M}\right)\right) \end{aligned} \tag {9}
307
+ $$
308
+
309
+ Since $\gamma \circ$ MAX is a symmetric function, and from Eq (8) and Eq (9), we reach the main result in Eq (5). This concludes the proof.
310
+
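+ The construction above can be checked numerically. The rough sketch below (our simplification: a hard inside-interval test replaces the proof's purely soft indicator) builds the per-angle features $\mathbf{h}(u)$, max-pools them into the occupancy vector $\mathbf{q}$, and verifies that $\mathbf{q}$ is invariant to the ordering of the angle set:
+
+ ```python
+ import numpy as np
+
+ def d_angle(a, b):
+     """Smallest positive angle between two angles (d_A in Theorem 1)."""
+     d = np.abs(a - b) % (2 * np.pi)
+     return np.minimum(d, 2 * np.pi - d)
+
+ K = 8                                       # intervals splitting [0, 2*pi)
+ edges = 2 * np.pi * np.arange(K + 1) / K
+
+ def h(u):
+     """Per-angle feature vector h(u) in R^K, one entry per interval."""
+     inside = (edges[:-1] <= u) & (u < edges[1:])
+     d = np.minimum(d_angle(u, edges[:-1]), d_angle(u, edges[1:]))
+     return np.where(inside, 1.0, np.exp(-d))
+
+ angles = np.array([0.3, 2.1, 2.2, 5.0])     # the angle set U
+ q = np.max([h(u) for u in angles], axis=0)  # occupancy via MAX pooling
+ q_perm = np.max([h(u) for u in angles[::-1]], axis=0)
+ assert np.allclose(q, q_perm)               # gamma o MAX is symmetric
+ print(np.round(q, 3))
+ ```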
311
+ # A.3 3D VOINT CLOUD
312
+
313
+ Plenoptic and Spherical Coordinate Functions. The Plenoptic function was first introduced by McMillan and Bishop (McMillan & Bishop, 1995) in 1995 as a general function that describes the visible world. The Plenoptic function $P$ is a continuous spherical function that describes the visibility at any Euclidean 3D point in space $(V_x, V_y, V_z)$ when looking in any direction $(\theta, \phi)$ across wavelength $\lambda$ at time $t$. It is defined as $p = P(\theta, \phi, \lambda, V_x, V_y, V_z, t)$. Such a remarkable and compact formulation covers all observable images as just samples of the function $P$. For fixed time and wavelength, the reduced Plenoptic function becomes $p = P(\theta, \phi, V_x, V_y, V_z)$, which can describe any field in 3D space. This shortened formulation is what Neural Radiance Fields (NeRFs) (Mildenhall et al., 2020; Pumarola et al., 2021; Martin-Brualla et al., 2021) try to learn with MLPs to describe the radiance and RGB values in continuous Euclidean space with a dependency on the view direction $(\theta, \phi)$. In the same spirit as the Plenoptic function and NeRFs, the Voint cloud representation relies on the viewing angles $(\theta, \phi)$ to define the view-features. The problem with Plenoptic functions $P$, and subsequently NeRFs, is that they are very high dimensional, and any attempt to densely represent the scene with discrete and fixed data will cause memory and compute issues (Yu et al., 2021; Pumarola et al., 2021). Unlike NeRFs (Mildenhall et al., 2020) that define dense 3D volumes, we focus only on the surface of the 3D shapes with our Voint cloud representation. Our Voints are of the order of the sampled point cloud, offering a compact representation that allows for efficient computation and memory while maintaining the view-dependent component that facilitates view-based learning.
314
+
315
+ From Point Clouds to Voint Clouds. Implicit representations of 3D surfaces typically aim to learn an implicit function $g_{\mathrm{s}}(\mathbf{x}) : \mathbb{R}^3 \to \mathbb{R}$ that defines the Signed Distance Function
316
+
317
+ (SDF) or the occupancy in the continuous Euclidean space (Park et al., 2019; Mescheder et al., 2019). The 3D iso-surface is then defined as the set of all points $\mathbf{x}$ that satisfy the condition $g_{\mathrm{s}}(\mathbf{x}) = 0$ (assuming $g_{\mathrm{s}}(\mathbf{x})$ is an SDF hereafter). We define a surface 3D point cloud $\mathcal{X} \in \mathbb{R}^{N \times 3}$ as a set of $N$ 3D points, where each point $\mathbf{x}_i \in \mathbb{R}^3$ is represented by its 3D coordinates $(x_i, y_i, z_i)$ and satisfies the iso-surface condition as follows.
318
+
319
+ $$
320
+ \mathcal {X} = \left\{\mathbf {x} _ {i} \in \mathbb {R} ^ {3} \mid g _ {\mathrm {s}} (\mathbf {x} _ {i}) = 0 \right\} _ {i = 1} ^ {N} \tag {10}
321
+ $$
322
+
323
+ Here, we assume that surface points also depend on the view direction from which they are being observed. Specifically, there exists a continuous implicit spherical function $\mathbf{g}(\mathbf{x},\mathbf{u}):$ $\mathbb{R}^5\to \mathbb{R}^d$ that defines the features at each point $\mathbf{x}$ depending on the view direction $\mathbf{u}$ . Given a set of $M$ view-point directions $\mathcal{U}\in \mathbb{R}^{M\times 2}$ , a Voint $\widehat{\mathbf{x}}\in \mathbb{R}^{M\times d}$ is a set of $M$ view-dependent features of size $d$ for the sphere centered at point $\mathbf{x}$ . The Voint cloud $\widehat{\mathcal{X}}\in \mathbb{R}^{N\times M\times d}$ is the set of all $N$ Voints $\widehat{\mathbf{x}}$ .
324
+
325
+ $$
326
+ \widehat {\mathbf {x}} _ {i} = \left\{\mathbf {g} \left(\mathbf {x} _ {i}, \mathbf {u} _ {j}\right) \in \mathbb {R} ^ {d} \mid \mathbf {x} _ {i} \in \mathcal {X} \right\} _ {j = 1} ^ {M} \tag {11}
327
+ $$
328
+
329
+ $$
330
+ \widehat {\mathcal {X}} = \left\{\widehat {\mathbf {x}} _ {i} \in \mathbb {R} ^ {M \times d} \right\} _ {i = 1} ^ {N}
331
+ $$
332
+
333
+ Note that we typically do not have access to the underlying implicit function $\mathbf{g}$ and we approximate it by 2D projection, feature extraction, and then un-projection as we show next.
334
+
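+ For concreteness, the tensor shapes involved look as follows (a sketch with made-up sizes; in practice $\mathbf{g}$ is approximated by the projection, extraction, and unprojection steps below rather than evaluated directly):
+
+ ```python
+ import torch
+
+ N, M, d = 2048, 12, 64                  # points, view-points, feature size
+ X = torch.rand(N, 3)                    # surface point cloud (Eq. 10)
+ U = torch.rand(M, 2)                    # view directions (theta, phi)
+ X_hat = torch.zeros(N, M, d)            # Voint cloud: one d-dim feature
+                                         # per point per view (Eq. 11)
+ V = torch.ones(N, M, dtype=torch.bool)  # visibility of point i from view j
+ ```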
335
+ 1-Multi-View Projection. As mentioned earlier, a Voint combines multiple view-features of the same 3D point. These view-features come from a multi-view projection of the points by a point cloud renderer $\mathbf{R}:\mathbb{R}^{N\times 3}\to \mathbb{R}^{M\times H\times W\times 3}$ that renders the point cloud $\mathcal{X}$ from multiple view-points $\mathcal{U}$ into $M$ images of size $H\times W\times 3$. In addition to projecting the point cloud into the image space, $\mathbf{R}$ defines the mapping $\mathbf{B}\in \{0,\dots,N\}^{M\times H\times W}$, which assigns each pixel either to one of the $N$ points it renders or to the background. $\mathbf{R}$ also outputs the binary visibility matrix $\mathbf{V}\in \{0,1\}^{N\times M}$ for each point from each view. Since not all points appear in all the views due to pixel discretization, the visibility score $\mathbf{V}_{i,j}$ defines whether the Voint $\hat{\mathbf{x}}_i$ is visible in the view $\mathbf{u}_j$. The matrix $\mathbf{B}$ is crucial for unprojection, while $\mathbf{V}$ is needed for defining meaningful operations on Voints.
336
+
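+ The sketch below shows how the visibility matrix $\mathbf{V}$ can be recovered from the pixel-to-point map $\mathbf{B}$ (the 0-for-background, 1-based point indexing is our assumed convention, matching the definition of $\mathbf{B}$ above):
+
+ ```python
+ import torch
+
+ N, M, H, W = 2048, 12, 224, 224
+ B = torch.randint(0, N + 1, (M, H, W))  # renderer's map: 0 = background
+
+ V = torch.zeros(N, M, dtype=torch.bool)
+ for j in range(M):
+     idx = B[j][B[j] > 0] - 1            # point indices visible in view j
+     V[idx.unique(), j] = True           # mark them as visible
+ ```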
337
+ 2-Multi-View Feature Extraction. The rendered images are processed by a function $\mathbf{C}:\mathbb{R}^{M\times H\times W\times 3}\to \mathbb{R}^{M\times H\times W\times d}$ that extracts image features. If $\mathbf{C}$ is the identity function, all the view-features would be identical for each Voint (typically the RGB value of the corresponding point). However, the $\mathbf{C}$ function can be a 2D network dedicated to the downstream task and can extract useful global and local features about each view.
338
+
339
+ 3-Multi-View Unprojection. We propose a module $\Phi_{\mathbf{B}}:\mathbb{R}^{M\times H\times W\times d}\to \mathbb{R}^{N\times M\times d}$ that unprojects the 2D features from each pixel to 3D view-features at the corresponding Voint. This is performed using the mapping $\mathbf{B}$ created by the renderer to form the Voint cloud features $\widehat{\mathcal{X}}$. Note that the points are not necessarily visible from all the views, and some Voints that are not visible from any of the $M$ views will not receive any features. We post-process these empty points ($\sim 0.5\%$ of points during inference) by filling them with the features of their nearest 3D neighbors. The output Voint cloud features are described as follows.
340
+
341
+ $$
342
+ \widehat {\mathbf {x}} _ {i} = \left\{\mathbf {g} _ {i, j,:} \in \mathbb {R} ^ {d} \mid \mathbf {x} _ {i} \in \mathcal {X}, \mathbf {V} _ {i, j} = 1 \right\} _ {j = 1} ^ {M}
343
+ $$
344
+
345
+ $$
346
+ \mathbf {g} _ {:, j} = \Phi_ {\mathbf {B}} \left(\mathbf {C} \left(\mathbf {R} \left(\mathcal {X}, \mathbf {u} _ {j}\right)\right), \mathbf {B}\right) \tag {12}
347
+ $$
348
+
349
+ $$
350
+ \widehat {\mathcal {X}} = \left\{\widehat {\mathbf {x}} _ {i} \in \mathbb {R} ^ {M \times d} \right\} _ {i = 1} ^ {N}
351
+ $$
352
+
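+ A minimal sketch of the unprojection $\Phi_{\mathbf{B}}$ follows (names and the last-write-wins handling of pixel collisions are our assumptions; the actual pipeline may resolve collisions differently):
+
+ ```python
+ import torch
+
+ N, M, H, W, d = 2048, 12, 224, 224, 64
+ feats_2d = torch.rand(M, H, W, d)       # output of the 2D backbone C
+ B = torch.randint(0, N + 1, (M, H, W))  # pixel-to-point map, 0 = background
+
+ X_hat = torch.zeros(N, M, d)            # Voint cloud features
+ for j in range(M):
+     mask = B[j] > 0
+     # Several pixels can map to the same point; the last write wins here.
+     X_hat[B[j][mask] - 1, j] = feats_2d[j][mask]
+ # Voints left empty in every view are filled from 3D nearest neighbors.
+ ```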
353
+ # A.4 VOINT OPERATIONS
354
+
355
+ VointMax. In order to learn a neural network in the Voint space in the form dictated by Theorem 1, we need to define some basic differentiable operations on the Voint space. The
356
+
357
+ ![](images/4b68f906dad809b2cdbb8df39da391275635ec0e528c03019df6e17bc56ee754.jpg)
358
+ Figure 6: VointNet Variants. We propose three variants of VointNet that use three different examples of the VointConv operation $h_v$: shared MLP (MLP), Graph Convolution (GCN), and Graph Attention (GAT). Here we highlight the main difference between VointNet (MLP), which shares the MLP across all the view-features, and VointNet (GCN), which creates a fully connected graph on the view-features and learns an MLP on the edge view-features. VointNet (GAT) is similar to VointNet (GCN) but additionally learns attention weights for each view-feature for weighted-average aggregation.
359
+
360
+ max operation on the Voint cloud can be defined as follows.
361
+
362
+ $$
363
+ \operatorname {VointMax} (\widehat {\mathbf {x}}) _ {i} = \max _ {j} \widehat {\mathbf {x}} _ {i, j}, \quad \forall i \tag {13}
364
+ $$
365
+
366
+ $$
367
+ \mathrm {s.t.} \quad i \in \{1, 2, \dots , N \}, \; j \in \{1, 2, \dots , M \}, \; \mathbf {V} _ {i, j} = 1
368
+ $$
369
+
370
+ Equivalently, $\mathrm{VointMax}(\widehat{\mathbf{x}}) = \max_j\left(\widehat{\mathbf{x}}_{:,j} - \infty \overline{\mathbf{V}}_{:,j}\right)$ , where $\overline{\mathbf{V}}$ is the complement of $\mathbf{V}$ .
371
+
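+ In code, this masking trick reads as follows (a sketch; the function and argument names are ours):
+
+ ```python
+ import torch
+
+ def voint_max(x_hat: torch.Tensor, V: torch.Tensor) -> torch.Tensor:
+     """Masked max over the view dimension (Eq. 13).
+
+     x_hat: (N, M, d) Voint features; V: (N, M) boolean visibility."""
+     neg_inf = torch.finfo(x_hat.dtype).min
+     masked = x_hat.masked_fill(~V.unsqueeze(-1), neg_inf)
+     return masked.max(dim=1).values     # (N, d) per-point features
+ ```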
372
+ VointConv. We define the convolution operation $h_{\mathrm{V}}: \mathbb{R}^{N \times M \times d} \to \mathbb{R}^{N \times M \times d'}$ as any learnable function that operates on the Voint space with weights shared across all the Voints, takes view-features of size $d$ as input, outputs view-features of size $d'$, and consists of $l_{V}$ layers. Examples of this VointConv operation include the following operations applied only on the visible view-features: a shared MLP, a graph convolution, and a graph attention. We detail these operations later in Section A.6, resulting in different, non-exhaustive variants of VointNet.
373
+
374
+ # A.5 LEARNING ON 3D VOINT CLOUDS
375
+
376
+ VointNet. Typical 3D point cloud classifiers with a feature max pooling layer work as in Eq (14), where $h_{\mathrm{mlp}}$ and $h_{\mathrm{Pconv}}$ are the MLP and point convolutional ($1 \times 1$ or edge) layers, respectively. This produces a $K$-class classifier $\mathbf{F}$.
377
+
378
+ $$
379
+ \mathbf {F} (\mathcal {X}) = h _ {\mathrm {mlp}} \left(\max _ {\mathbf {x} _ {i} \in \mathcal {X}} \left\{h _ {\mathrm {Pconv}} \left(\mathbf {x} _ {i}\right) \right\}\right) \tag {14}
380
+ $$
381
+
382
+ Here, $\mathbf{F}:\mathbb{R}^{N\times 3}\to \mathbb{R}^K$ produces the logits layer of the classifier with size $K$. On the other hand, the goal of the VointNet model is to obtain multi-view point cloud features that can subsequently be used by any point cloud processing pipeline. The VointNet module $\widehat{\mathbf{F}}:\mathbb{R}^{N\times M\times d}\rightarrow \mathbb{R}^{N\times d}$ is defined as follows.
383
+
384
+ $$
385
+ \widehat {\mathbf {F}} (\widehat {\mathcal {X}}) = h _ {\mathrm {P}} \left(\operatorname {VointMax} \left(h _ {\mathrm {V}} (\widehat {\mathcal {X}})\right)\right), \tag {15}
386
+ $$
387
+
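+ A compact sketch of Eq. (15), using a plain shared MLP as a stand-in for $h_{\mathrm{V}}$ and a linear layer for $h_{\mathrm{P}}$ (both are placeholders, not the exact architecture):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class VointNetSketch(nn.Module):
+     def __init__(self, d: int = 64, l_v: int = 4):
+         super().__init__()
+         layers = []
+         for _ in range(l_v):
+             layers += [nn.Linear(d, d), nn.ReLU()]
+         self.h_v = nn.Sequential(*layers)  # VointConv: shared over (N, M)
+         self.h_p = nn.Linear(d, d)         # point-wise head h_P
+
+     def forward(self, x_hat, V):           # x_hat: (N, M, d), V: (N, M)
+         z = self.h_v(x_hat)                # per-view features
+         z = z.masked_fill(~V.unsqueeze(-1), torch.finfo(z.dtype).min)
+         z = z.max(dim=1).values            # VointMax -> (N, d)
+         return self.h_p(z)                 # per-point output features
+ ```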
388
+ # A.6 VOINTNET VARIANTS
389
+
390
+ We define the convolution operation $h_{\mathrm{V}} \colon \mathbb{R}^{N \times M \times d} \to \mathbb{R}^{N \times M \times d'}$ in VointNet from Eq (15) as any learnable function that operates on the Voint space with weights shared across all the
391
+
392
+ Voints, takes view-features of size $d$ as input, outputs view-features of size $d'$, and consists of $l_V$ layers. Examples of this VointConv operation include the following:
393
+
394
+ Shared MLP. This is the most basic Voint neural network. The features of Voint $i$ at view $j$ are updated from layer $l$ to layer $l + 1$ as follows:
395
+
396
+ $$
397
+ \mathbf {h} _ {i, j} ^ {l + 1} = \rho \left(\mathbf {h} _ {i, j} ^ {l} \mathcal {W} _ {\rho}\right), \forall i, j \tag {16}
398
+ $$
399
+
400
+ $$
401
+ \mathrm {s.t.} \quad i \in \{1, 2, \dots , N \}, \; j \in \{1, 2, \dots , M \}, \; \mathbf {V} _ {i, j} = 1
402
+ $$
403
+
404
+ where $\rho$ is the shared MLP with weights $\mathcal{W}_{\rho}$, followed by normalization and a nonlinearity (e.g., ReLU), applied independently to the visible view-features of every Voint. This formulation extends the shared MLP formulation of PointNet (Qi et al., 2017a) by sharing the MLP across both the Voints and the view-features.
405
+
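+ One such layer could be sketched as below (the normalization choice and the zeroing of invisible view-features are our simplifications):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class SharedMLPVointConv(nn.Module):
+     """One shared-MLP VointConv layer (Eq. 16), sketched."""
+     def __init__(self, d_in: int, d_out: int):
+         super().__init__()
+         self.rho = nn.Sequential(
+             nn.Linear(d_in, d_out), nn.LayerNorm(d_out), nn.ReLU())
+
+     def forward(self, h, V):            # h: (N, M, d_in), V: (N, M) bool
+         out = self.rho(h)               # same rho for every Voint and view
+         return out * V.unsqueeze(-1).to(out.dtype)  # mask invisible views
+ ```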
406
+ Graph Convolution (GCN). Just as DGCNN (Wang et al., 2019c) extended PointNet (Qi et al., 2017a) by taking neighborhood information into account and extracting edge features, we extend the basic VointNet formulation in Eq (15). We define a fully connected graph for each Voint along the views dimension by creating a virtual center node connected to all the view-features (similar to the classification token in ViT (Dosovitskiy et al., 2021)). This virtual center view-feature is assigned the index $j = 0$ and can be initialized with zeros, like the "cls" token in ViT (Dosovitskiy et al., 2021). Then, the Voint graph convolution operation can be defined as follows to update the activations from layer $l$ to $l + 1$:
407
+
408
+ $$
409
+ \mathbf {h} _ {i, j} ^ {l + 1} = \rho \left(\left(\max _ {k} \psi \left(\left(\mathbf {h} _ {i, j} ^ {l}, \mathbf {h} _ {i, k} ^ {l}\right) \mathcal {W} _ {\psi}\right)\right) \mathcal {W} _ {\rho}\right) \tag {17}
410
+ $$
411
+
412
+ $$
413
+ \forall i, j, k \quad \mathrm {s.t.} \quad i \in \{1, 2, \dots , N \}, \; j \in \{0, 1, \dots , M \},
414
+ $$
415
+
416
+ $$
417
+ k \in \{0, 1, \dots , M \}, \; k \neq j, \; \mathbf {V} _ {i, j} = 1
418
+ $$
419
+
420
+ where $\rho, \psi$ are two different shared MLPs as in Eq (16). The difference between VointNet (MLP) and VointNet (GCN) is highlighted in Figure 6.
421
+
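+ A sketch of one such layer follows (the dense pairwise expansion is for clarity, not efficiency, and all names are ours):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class VointGCNLayer(nn.Module):
+     """One Voint graph-convolution layer (Eq. 17), sketched."""
+     def __init__(self, d: int):
+         super().__init__()
+         self.psi = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
+         self.rho = nn.Sequential(nn.Linear(d, d), nn.ReLU())
+
+     def forward(self, h):               # h: (N, M, d) view-features
+         N, M, d = h.shape
+         h = torch.cat([h.new_zeros(N, 1, d), h], dim=1)  # center node j = 0
+         hi = h.unsqueeze(2).expand(N, M + 1, M + 1, d)   # h_{i,j}
+         hk = h.unsqueeze(1).expand(N, M + 1, M + 1, d)   # h_{i,k}
+         e = self.psi(torch.cat([hi, hk], dim=-1))        # edge features
+         mask = torch.eye(M + 1, dtype=torch.bool,
+                          device=h.device).view(1, M + 1, M + 1, 1)
+         e = e.masked_fill(mask, torch.finfo(e.dtype).min)  # drop k == j
+         return self.rho(e.max(dim=2).values)             # (N, M + 1, d)
+ ```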
422
+ Graph Attention (GAT). Similar to how Point Transformer (Zhao et al., 2020) extended graph convolution by adding attention to DGCNN (Wang et al., 2019c), we extend the basic Voint GraphConv formulation in Eq (17). The Voint graph attention operation can be defined as follows to update the activations from layer $l$ to $l + 1$:
423
+
424
+ $$
425
+ \mathbf {h} _ {i, j} ^ {l + 1} = \rho \left(\left(\sum_ {k = 0, k \neq j} ^ {M} \eta_ {k} \psi \left((\mathbf {h} _ {i, j} ^ {l}, \mathbf {h} _ {i, k} ^ {l}) \mathcal {W} _ {\psi}\right)\right) \mathcal {W} _ {\rho}\right) \tag {18}
426
+ $$
427
+
428
+ $$
429
+ \forall i, j \quad \mathrm {s.t.} \quad i \in \{1, 2, \dots , N \}, \; j \in \{0, 1, \dots , M \}
430
+ $$
431
+
432
+ $$
433
+ \eta_ {k} = \zeta \left(\mathbf {h} _ {i, k} ^ {l} \mathcal {W} _ {\zeta}\right), \mathbf {V} _ {i, j} = 1
434
+ $$
435
+
436
+ where $\rho, \psi, \zeta$ are three different shared MLPs as in Eq (16), and $\eta_{k}$ are the learned attention weights for each neighbor view-feature.
437
+
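+ A corresponding sketch is below; the softmax normalization of $\eta_k$ over the neighbors is our assumption, since Eq. (18) leaves the normalization of $\zeta$ unspecified:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class VointGATLayer(nn.Module):
+     """One Voint graph-attention layer (Eq. 18), sketched."""
+     def __init__(self, d: int):
+         super().__init__()
+         self.psi = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
+         self.zeta = nn.Linear(d, 1)     # attention scores eta_k
+         self.rho = nn.Sequential(nn.Linear(d, d), nn.ReLU())
+
+     def forward(self, h):               # h: (N, M + 1, d), center at j = 0
+         N, K, d = h.shape
+         hi = h.unsqueeze(2).expand(N, K, K, d)
+         hk = h.unsqueeze(1).expand(N, K, K, d)
+         e = self.psi(torch.cat([hi, hk], dim=-1))        # edge features
+         eta = self.zeta(hk)                              # (N, K, K, 1)
+         mask = torch.eye(K, dtype=torch.bool,
+                          device=h.device).view(1, K, K, 1)
+         eta = eta.masked_fill(mask, torch.finfo(eta.dtype).min)
+         eta = torch.softmax(eta, dim=2)                  # weights over k != j
+         return self.rho((eta * e).sum(dim=2))            # weighted average
+ ```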
438
+ # B DETAILED EXPERIMENTAL SETUP
439
+
440
+ # B.1 DATASETS
441
+
442
+ ScanObjectNN: 3D Point Cloud Classification. We follow the literature (Goyal et al., 2021; Hamdi et al., 2021) in testing 3D classification on the challenging ScanObjectNN (Uy et al., 2019) point cloud dataset, since it includes backgrounds and considers occlusions. The dataset is composed of 2,902 point clouds divided into 15 object categories. We use 2048 sampled points per object for Voint learning. We benchmark on its variants: Object only, Object with Background, and the hardest perturbed variant (PB_T50_RS). Figure 7 visualizes some of the renderings used to train the 2D backbone in our pipeline.
443
+
444
+ ShapeNet Core55: 3D Shape Retrieval. The shape retrieval challenge SHREC (Sfikas et al., 2017) uses ShapeNet Core55, a subset of ShapeNet (Chang et al., 2015), for benchmarking. The dataset consists of 51,162 3D mesh objects labeled with 55 object classes. The
445
+
446
+ training, validation, and test sets consist of 35,764, 5,133, and 10,265 shapes, respectively. We create a dataset of point clouds by sampling 5,000 points from each mesh object as in MVTN (Hamdi et al., 2021).
447
+
448
+ ShapeNet Parts: 3D Part Segmentation. ShapeNet Parts is a subset of ShapeNet (Chang et al., 2015) that consists of 13,998 point cloud objects for training and 2,874 objects for testing, spanning 16 categories and 50 parts. It is designed for the part segmentation task (Yi et al., 2016). Figure 10 visualizes some of the renderings used to train the 2D backbone in our pipeline, colored with the ground-truth segmentation labels.
449
+
450
+ ModelNet40: 3D Shape Classification Occlusion Robustness. ModelNet40 (Wu et al., 2015) is composed of 12,311 3D objects (9,843/2,468 in training/testing) labeled with 40 object classes. We sample point clouds of 2048 points from the objects following previous works (Qi et al., 2017b; Zhao et al., 2020). Figure 8 visualizes some of the renderings used to train the 2D backbone in our pipeline.
451
+
452
+ # B.2 METRICS
453
+
454
+ Classification Accuracy. The standard evaluation metric in 3D classification is accuracy. We report overall accuracy (percentage of correctly classified test samples) and average per-class accuracy (mean of all true class accuracies).
455
+
456
+ Retrieval mAP. Shape retrieval is evaluated by mean Average Precision (mAP) over test queries. For every query shape $\mathbf{S}_q$ from the test set, AP is defined as $AP = \frac{1}{\mathrm{GTP}}\sum_{n=1}^{N}\frac{\mathbb{1}(\mathbf{S}_n)}{n}$, where $\mathrm{GTP}$ is the number of ground truth positives, $N$ is the size of the ordered training set, and $\mathbb{1}(\mathbf{S}_n) = 1$ if the shape $\mathbf{S}_n$ has the same class label as the query $\mathbf{S}_q$. We average the retrieval AP over the test set to measure retrieval mAP.
457
+
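+ For a single query, AP can be computed as sketched below; we use the standard cumulative precision-at-$n$ form that the formula above abbreviates (names are ours):
+
+ ```python
+ import numpy as np
+
+ def average_precision(rel):
+     """rel[n] = 1 iff the (n+1)-th ranked shape shares the query's label."""
+     rel = np.asarray(rel, dtype=float)
+     gtp = rel.sum()                          # ground-truth positives (GTP)
+     ranks = np.arange(1, len(rel) + 1)
+     precision_at_n = np.cumsum(rel) / ranks  # precision among the top n
+     return float((precision_at_n * rel).sum() / gtp)
+
+ print(average_precision([1, 0, 1, 1, 0]))    # (1/1 + 2/3 + 3/4) / 3 ≈ 0.806
+ ```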
458
+ Segmentation mIoU. Semantic segmentation is evaluated by mean Intersection over Union (mIoU) over pixels or points. For every class label, we measure the size of the intersection mask between the ground-truth points of that label and the points predicted as that label, then divide by the size of the union mask of the same label to get the IoU. Repeating this procedure over all the labels and averaging the IoUs gives the mIoU. We report two types of mIoU: instance-averaged mIoU (averaging the mIoUs of all objects) and category-averaged mIoU (averaging the mIoUs of shapes within each category, then averaging across object categories).
459
+
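+ A per-shape sketch of this computation (the integer label encoding and the skipping of parts absent from both masks are our assumptions):
+
+ ```python
+ import numpy as np
+
+ def shape_miou(pred, gt, num_parts):
+     """Instance mIoU of one object from per-point part labels."""
+     ious = []
+     for c in range(num_parts):
+         inter = np.logical_and(pred == c, gt == c).sum()
+         union = np.logical_or(pred == c, gt == c).sum()
+         if union > 0:                    # skip parts absent in both masks
+             ious.append(inter / union)
+     return float(np.mean(ious))
+
+ pred = np.array([0, 0, 1, 1, 2])
+ gt = np.array([0, 1, 1, 1, 2])
+ print(shape_miou(pred, gt, num_parts=3))     # (1/2 + 2/3 + 1) / 3 ≈ 0.72
+ ```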
460
+ # B.3 BASELINES
461
+
462
+ Point Cloud Networks. We include PointNet (Qi et al., 2017a), PointNet++ (Qi et al., 2017b), DGCNN (Wang et al., 2019c), PVNet (You et al., 2018), KPConv (Thomas et al., 2019), Point Transformer (Zhao et al., 2020), and CurveNet (Xiang et al., 2021) as baselines that use point clouds. These methods leverage different convolution operators on point clouds by aggregating local and global point information.
463
+
464
+ Multi-View Networks. We also compare against multi-view classification approaches like MVCNN (Su et al., 2015) and MVTN (Hamdi et al., 2021) as baselines for classification and retrieval. Since there is no available multi-view pipeline for 3D part segmentation, we adapt some of the multi-view segmentation baselines (e.g. Label Fusion (Wang et al., 2019a) and Mean Fusion (Kundu et al., 2020)) to part segmentation in the Voint space.
465
+
466
+ # B.4 IMPLEMENTATION DETAILS
467
+
468
+ Rendering and Un-Projection. We choose the differentiable point cloud renderer $\mathbf{R}$ from PyTorch3D (Ravi et al., 2020) in our pipeline for its speed and compatibility with PyTorch (Paszke et al., 2017). We render multi-view images of size $224 \times 224 \times 3$. We color the points by their normals' values, or keep them white if normals are not available. Following a procedure similar to (Wei et al., 2020; Hamdi et al., 2021), the view-point setup is randomized during training (using $M = 8$ views) and fixed to spherical views in testing (using $M = 12$ views).
469
+
470
+ ![](images/b72ea466f6ac3834f0aaa6df8cef30defba9035c4d7883401a3643a21dbd1009.jpg)
471
+ Figure 7: ScanObjectNN Variants. We show examples of point cloud renderings of different variants of the ScanObjectNN (Uy et al., 2019). These renderings are used in training VointNet for 3D point cloud classification.
472
+
473
+ Architectures. For the 2D backbone, we use ViT (Dosovitskiy et al., 2021) (with pretrained weights from the TIMM library (Wightman, 2019)) for classification and DeepLabV3 (Chen et al., 2018) for segmentation. For part segmentation, we use parallel heads, one per object category, since the task is solely focused on parts. We use the 3D cross-entropy loss on the 3D point cloud output and the 2D cross-entropy loss when the loss is defined on the pixels. When used, the linear tradeoff coefficient of the 2D loss term is set to 0.003. To balance the frequency of objects in part segmentation, we multiply the loss by the frequency of the object class of each object we segment. The feature dimension of the VointNet architectures is $d = 64$, and the depth is $l_{V} = 4$ layers in $h_V$. The main results are based on the VointNet (MLP) variant unless otherwise specified. The coordinates $\mathbf{x}$ can be optionally appended to the input view-features $\hat{\mathbf{x}}$, which can improve performance but reduces rotation robustness, as we show later in Section C.1 and Table 9.
474
+
475
+ Training Setup. We train our pipeline in two stages: we first train the 2D backbone on the 2D projected labels of the points, and then train the full pipeline end-to-end while focusing the training on the VointNet part. We use the AdamW optimizer (Loshchilov & Hutter, 2017) with an initial learning rate of 0.0005 and a step schedule that decays the learning rate to 33.3% of its value every 12 epochs, for 40 epochs in total. The pipeline is trained on one NVIDIA Tesla V100 GPU. We do not use any data augmentation.
476
+
477
+ ![](images/37f5bed1a1b60474aaf51dd5ca7f2d0a5227449d29ac440e1539667849c0ab69.jpg)
478
+ Figure 8: ModelNet40. We show some examples of point cloud renderings of ModelNet40 (Wu et al., 2015) used for 3D classification robustness in our setup.
479
+
480
+ ![](images/67afcf6005c307d2f1f9b41ca7f0088ec0f12e2feb5ef06c405e1dae574656c4.jpg)
481
+ Figure 9: ShapeNet Core55. We show some examples of point cloud renderings of ShapeNet Core55 (Chang et al., 2015) used for 3D shape retrieval in our setup.
482
+
483
+ ![](images/e89f3da343521484611f5cf0479ba70b7aecbbbadd1cf045bfef60c2f4c38f75.jpg)
484
+ Figure 10: ShapeNet Parts. We show some examples of point cloud renderings of ShapeNet Parts (Yi et al., 2016) colored with ground truth segmentation labels. We use these renderings as 2D ground truth to pre-train the 2D backbone $\mathbf{C}$ for 2D segmentation before training VointNet's pipeline for 3D segmentation.
485
+
486
+ <table><tr><td>Method</td><td>Data Type</td><td>Classification (ModelNet40)</td><td>Shape Retrieval (ShapeNet Core55)</td></tr><tr><td>PointNet (Qi et al., 2017a)</td><td>Points</td><td>89.2</td><td>-</td></tr><tr><td>PointNet++ (Qi et al., 2017b)</td><td>Points</td><td>91.9</td><td>-</td></tr><tr><td>DGCNN (Wang et al., 2019c)</td><td>Points</td><td>92.2</td><td>-</td></tr><tr><td>KPConv (Thomas et al., 2019)</td><td>Points</td><td>92.9</td><td>-</td></tr><tr><td>PCT (Guo et al., 2021)</td><td>Points</td><td>93.3</td><td>-</td></tr><tr><td>CurveNet (Xiang et al., 2021)</td><td>Points</td><td>93.8</td><td>-</td></tr><tr><td>ReVGG (Sfikas et al., 2017)</td><td>M-View</td><td>-</td><td>74.9</td></tr><tr><td>MVCNN (Su et al., 2015)</td><td>M-View</td><td>90.1</td><td>73.5</td></tr><tr><td>ViewGCN (Wei et al., 2020)</td><td>M-View</td><td>93.3</td><td>78.4</td></tr><tr><td>MVTN (Hamdi et al., 2021)</td><td>M-View</td><td>93.8</td><td>82.9</td></tr><tr><td>VointNet (ours)</td><td>Voints</td><td>92.8</td><td>83.3</td></tr></table>
489
+
490
+ Table 7: 3D Shape Classification and Retrieval. We report VointNet's classification accuracy on ModelNet40 (Wu et al., 2015) and its 3D shape retrieval mAP on ShapeNet Core55 (Chang et al., 2015; Sfikas et al., 2017). Baseline results are reported from (Hamdi et al., 2021; Zhao et al., 2020; Xiang et al., 2021).
491
+
492
+ <table><tr><td rowspan="2">Method</td><td colspan="3">Rotation Perturbations Range</td></tr><tr><td>0°</td><td>±90°</td><td>±180°</td></tr><tr><td>PointNet (Qi et al., 2017a)</td><td>88.7</td><td>42.5</td><td>38.6</td></tr><tr><td>PointNet ++ (Qi et al., 2017b)</td><td>88.2</td><td>47.9</td><td>39.7</td></tr><tr><td>RSCNN (Liu et al., 2019a)</td><td>90.3</td><td>90.3</td><td>90.3</td></tr><tr><td>MVTN (Hamdi et al., 2021)</td><td>91.7</td><td>90.8</td><td>91.2</td></tr><tr><td>VointNet (ours)</td><td>91.5</td><td>90.9</td><td>91.1</td></tr></table>
493
+
494
+ Table 8: Rotation Robustness for 3D Classification. At test time, we randomly rotate objects in ModelNet40 (Wu et al., 2015) around the Y-axis (gravity) with different ranges and report the overall accuracy.
495
+
496
+ # C ADDITIONAL RESULTS
497
+
498
+ # C.1 MODEL ROBUSTNESS
499
+
500
+ Rotation Robustness for 3D Classification. We follow the standard practice in 3D shape classification literature by testing the robustness of trained models to perturbations at test time (Liu et al., 2019a; Hamdi et al., 2021). We perturb the shapes with random rotations around the Y-axis (gravity-axis) contained within $\pm 90^{\circ}$ and $\pm 180^{\circ}$ and report the test accuracy over ten runs in Table 8.
501
+
502
+ Rotation Robustness for 3D Segmentation. We follow the previous 3D literature by testing the robustness of trained models to perturbations at test time (Liu et al., 2019a; Hamdi et al., 2021; 2020). We perturb the shapes in ShapeNet Parts with random rotations in $SO(3)$ at test time (ten runs) and report Ins. mIoU in Table 9. Note how our VointNet largely exceeds the baselines in this realistic unaligned scenario. Augmenting the training with rotated objects improves the robustness of the baselines but loses performance on the unrotated setup. Adding xyz coordinates to the view-features of VointNet improves the performance on the unrotated setup but negatively affects the robustness to rotations. The discrepancy between the Voint results and the results of some point cloud methods stems from the fact that Voints heavily depend on the underlying 2D backbone and inherit all its biases, especially those from pretraining. Hence, the 2D backbone limits what the performance can reach with VointNet. We study the effect of the backbone in detail in Section C.2. Figure 11 shows qualitative 3D segmentation results for VointNet and Mean Fuse (Kundu et al., 2020) as compared to the ground truth.
503
+
504
+ <table><tr><td>Ground Truth</td><td>VointNet (ours)</td><td>Mean Fuse (Kundu et al., 2020)</td></tr><tr><td></td><td></td><td></td></tr><tr><td></td><td></td><td></td></tr><tr><td></td><td></td><td></td></tr><tr><td></td><td></td><td></td></tr><tr><td></td><td></td><td></td></tr></table>
505
+
506
+ Figure 11: Qualitative Comparison for 3D Part Segmentation. We compare our VointNet 3D segmentation prediction to Mean Fuse (Kundu et al., 2020), which uses the same trained 2D backbone. Note how VointNet distinguishes detailed parts (e.g. the car window frame). Beware that visualization colors can shift if an extra label is predicted (e.g. the motorbike labels are correct).
507
+
508
+ <table><tr><td rowspan="2">Method</td><td colspan="2">Segmentation Under Rotation</td></tr><tr><td>Unrotated</td><td>Rotated</td></tr><tr><td>PointNet (Qi et al., 2017a)</td><td>80.1</td><td>36.6 ±0.2</td></tr><tr><td>DGCNN (Wang et al., 2019c)</td><td>80.1</td><td>37.1 ±0.2</td></tr><tr><td>PointNet + Aug.</td><td>65.8</td><td>65.8 ±0.1</td></tr><tr><td>DGCNN + Aug.</td><td>60.7</td><td>60.7 ±0.2</td></tr><tr><td>Mean Fuse (Kundu et al., 2020)</td><td>79.1</td><td>61.6 ±0.1</td></tr><tr><td>Label Fuse (Wang et al., 2019a)</td><td>78.9</td><td>61.0 ±0.1</td></tr><tr><td>VointNet (w/o xyz)</td><td>79.6</td><td>65.4 ±0.1</td></tr><tr><td>VointNet (w/o xyz) + Aug.</td><td>68.0</td><td>68.5 ±0.1</td></tr><tr><td>VointNet (w/ xyz)</td><td>81.2</td><td>61.5 ±0.2</td></tr></table>
509
+
510
+ Table 9: Rotation Robustness for 3D Part Segmentation. At test time, we randomly rotate objects from ShapeNet Parts (Yi et al., 2016) and report the Ins. mIoUs of our VointNet compared to trained PointNet (Qi et al., 2017a) and DGCNN (Wang et al., 2019c). Note how VointNet's performance largely exceeds the baselines in realistic unaligned scenarios, highlighting the benefit of view dependency. If we use rotation augmentation in training for the baselines, the rotated performance improves, but the unrotated performance drops.
511
+
512
+ # C.2 DETAILED ANALYSIS
513
+
514
+ Effect of Pretraining. We study the effect of pretraining the 2D backbone $\mathbf{C}$ for 3D classification on ModelNet40. Training a ViT with Mean Fuse for 3D classification on ModelNet40 obtains 92.2 test accuracy with ImageNet pretraining and 80.0 test accuracy from scratch. Other multi-view networks, e.g. MVCNN (Su et al., 2015), ViewGCN (Wei et al., 2020), and MVTN (Hamdi et al., 2021), all use ImageNet pretraining, so this dependence is not unique to Voints.
515
+
516
+ Classification Backbone. We study the effect of ablating the 2D backbone $\mathbf{C}$ for 3D classification on ModelNet40. We show in Table 10 the performance of VointNet (MLP) when ViT-B (Dosovitskiy et al., 2021) and ResNet-18 (He et al., 2015) are used. We also show that following the per-point classification setup instead of the per-shape one for 3D shape classification leads to worse performance for both VointNet and the naive multi-view baseline. This is why we use the per-shape approach when adopting VointNet for 3D classification (using one Voint for the entire shape).
517
+
518
+ Number of points and visibility. Table 11 studies the effect of the number of points on 3D part segmentation performance when different numbers of views are used. The visibility ratio is also reported in each case.
519
+
520
+ Points color. We color the points with ground-truth normals, as in Figure 16, when they are available (ShapeNet Parts), and with white, as in Figure 9, when other baselines do not use normals. We ablate the color of the points on VointNet (MLP) with normals colors, white color, and NOCS colors (Wang et al., 2019b), and obtain the following segmentation mIoU results: (normals: 80.6), (white: 74.7), and (NOCS: 57.9).
521
+
522
+ Time and Memory Requirements. To assess the contribution of the Voint module, we take a macroscopic look at the time and memory requirements of each component in the pipeline. We record the number of floating-point operations (GFLOPs) and the time of a forward pass for a single input sample. In Table 12, the VointNet module contributes negligibly to the memory requirements compared to multi-view and point networks.
523
+
524
+ Feature Size $(d)$ . We study the effect of the feature size $d$ on the performance of VointNet (MLP) in 3D part segmentation on ShapeNet Parts (Yi et al., 2016) and plot the results (with confidence intervals) in Figure 12. We note that the performance peaks at $d = 128$ , but it is close to what we use in the main results $(d = 64)$ .
525
+
526
+ <table><tr><td rowspan="2">View Aggregation</td><td colspan="3">2D Backbone</td></tr><tr><td>ResNet18 (per-shape)</td><td>ViT-B (per-shape)</td><td>DeepLabV3 (per-point)</td></tr><tr><td>VointNet</td><td>91.2</td><td>92.8</td><td>10.2</td></tr></table>
527
+
528
+ Table 10: Ablation Study for 3D Classification. We study the effect of different 2D backbones on the ModelNet40 3D classification task. We compare VointNet's performance to a naive multi-view baseline (e.g. MVCNN (Su et al., 2015) or Mean Fuse (Kundu et al., 2020)) using the same 2D backbone. Note that using the per-point classification setup instead of the per-shape one for 3D shape classification leads to worse performance for both VointNet and the naive multi-view baseline.
529
+
530
+ <table><tr><td rowspan="2">Points #</td><td rowspan="2">Metric</td><td colspan="4">Number of Views</td></tr><tr><td>2</td><td>4</td><td>8</td><td>12</td></tr><tr><td rowspan="2">500</td><td>visibility</td><td>99.1</td><td>99.9</td><td>100</td><td>100</td></tr><tr><td>mIoU</td><td>69.2</td><td>73.9</td><td>76.0</td><td>76.4</td></tr><tr><td rowspan="2">1000</td><td>visibility</td><td>98.0</td><td>99.7</td><td>100</td><td>100</td></tr><tr><td>mIoU</td><td>69.5</td><td>74.3</td><td>76.5</td><td>77.1</td></tr><tr><td rowspan="2">2000</td><td>visibility</td><td>95.7</td><td>99.2</td><td>99.8</td><td>99.9</td></tr><tr><td>mIoU</td><td>69.7</td><td>75.0</td><td>77.7</td><td>78.5</td></tr></table>
531
+
532
+ Table 11: Analysis of the Number of Points and Visibility. We show the instance mIoUs and visibility ratio $(1 - \frac{\text{empty}}{\text{total}})\%$ of our VointNet on ShapeNet Parts when varying the number of points and the number of views.
533
+
534
+ Model Depth $(l_v)$. We study the effect of the model depth $l_v$ on the performance of VointNet (MLP) in 3D part segmentation on ShapeNet Parts (Yi et al., 2016) and plot the results (with confidence intervals) in Figure 13. We note that increasing the model depth of VointNet does not enhance the performance significantly. Our choice of $l_v = 4$ balances the performance and the memory/computation requirements of VointNet (MLP).
535
+
536
+ Distance to the Object. We study the effect of the distance to the object in rendering, as in Figure 17, on the performance of VointNet (MLP) in 3D part segmentation on ShapeNet Parts (Yi et al., 2016) and plot the results (with confidence intervals) in Figure 14. We note that our default choice of 1.0 is reasonable: it shows the object entirely (as illustrated in Figure 17) while covering the details needed for small-part segmentation (see Figure 11).
537
+
538
+ Image Size $(H,W)$. We study the effect of the image size $H$ and $W$ on the performance of the Mean Fuse (Kundu et al., 2020) baseline when training the 2D backbone for 3D part segmentation. We plot the results (with confidence intervals) in Figure 15.
539
+
540
+ Number of Views on Classification. We study the effect of the number of views ($M$) on the classification accuracy of VointNet on ModelNet40 (Wu et al., 2015) and report the results in Table 13.
541
+
542
+ Unprojection Operation Speed. We evaluate the speed of the unprojection operation $\Phi_{\mathbf{B}}$ and report the average latency over 10,000 runs (in ms) in Table 14.
543
+
544
+ Point Rendering Speed. We evaluate the speed of the point cloud renderer $\mathbf{R}$ from PyTorch3D (Ravi et al., 2020) used in the Voint pipeline and report the average latency over 1,000 renderings (in ms/image) in Table 15.
545
+
546
+ # C.3 VISUALIZATIONS
547
+
548
+ In Figures 16 and 17, we visualize the multi-view renderings of the point clouds along with the learned 2D features based on the DeepLabV3 (Chen et al., 2018) backbone. These features are then unprojected and transformed by VointNet to obtain 3D semantic labels.
549
+
550
+ <table><tr><td>Network</td><td>GFLOPs</td><td>Time (ms)</td><td>Parameters # (M)</td></tr><tr><td>MVCNN (Su et al., 2015)</td><td>43.72</td><td>39.89</td><td>11.20</td></tr><tr><td>ViewGCN (Wei et al., 2020)</td><td>44.19</td><td>26.06</td><td>23.56</td></tr><tr><td>ResNet 18 (He et al., 2015)</td><td>3.64</td><td>3.70</td><td>11.20</td></tr><tr><td>ResNet 50 (He et al., 2015)</td><td>8.24</td><td>9.42</td><td>23.59</td></tr><tr><td>ViT-B (Dosovitskiy et al., 2021)</td><td>33.70</td><td>12.46</td><td>86.57</td></tr><tr><td>ViT-L (Dosovitskiy et al., 2021)</td><td>119.30</td><td>29.28</td><td>304.33</td></tr><tr><td>FCN (Long et al., 2015)</td><td>53.13</td><td>10.34</td><td>32.97</td></tr><tr><td>DeepLabV3 (Chen et al., 2018)</td><td>92.61</td><td>20.62</td><td>58.64</td></tr><tr><td>PointNet (Qi et al., 2017a)</td><td>1.78</td><td>4.24</td><td>3.50</td></tr><tr><td>DGCNN (Wang et al., 2019c)</td><td>10.42</td><td>0.95</td><td>16.35</td></tr><tr><td>MVTN (Hamdi et al., 2021)</td><td>1.78</td><td>4.24</td><td>3.50</td></tr><tr><td>VointNet (MLP)</td><td>1.90</td><td>2.90</td><td>0.04</td></tr><tr><td>VointNet (GCN)</td><td>16.18</td><td>32.10</td><td>0.05</td></tr><tr><td>VointNet (GAT)</td><td>32.05</td><td>68.71</td><td>0.07</td></tr><tr><td>Full Voint pipeline</td><td>94.51</td><td>23.50</td><td>58.68</td></tr></table>
551
+
552
+ Table 12: Time and Memory Requirements. We assess the contribution of the Voint module to the time and memory requirements in the multi-view and point cloud pipeline. Note that VointNet (shared MLP) is almost 100 times smaller than PointNet (Qi et al., 2017a).
553
+
554
+ ![](images/3413317cf245b6b22fd5a908edc91f26e73ed14cffeb764360672238e7bce506.jpg)
555
+ Figure 12: The Effect of Feature Size $d$ . We plot Ins. mIoU of 3D segmentation vs. the feature size $d$ used in training on ShapeNet Parts (Yi et al., 2016). We note that the performance peaks at $d = 128$ , but it is close to what we use in the main results ( $d = 64$ ).
556
+
557
+ ![](images/01e61314143ad25c176cabb5dc11f6aaf12fcedd834931452f72552917cdb345.jpg)
558
+ Figure 13: The Effect of Model Depth $l_{v}$. We plot Ins. mIoU of 3D segmentation vs. the model depth $l_{v}$ used in training on ShapeNet Parts (Yi et al., 2016). We note that increasing the model depth of VointNet does not enhance the performance significantly. Our choice of $l_{v} = 4$ balances the performance and the memory/computation requirements of VointNet (MLP).
559
+
560
+ ![](images/1b0ad412327d57e53c249108a0b45a1fb42553e42323dd97babc2396302b514e.jpg)
561
+ Figure 14: The Effect of Distance to the Object. We plot Ins. mIoU of 3D segmentation vs. the distance to the object used in inference on ShapeNet Parts (Yi et al., 2016). We note that our default choice of 1.0 is reasonable: it shows the object entirely (as illustrated in Figure 17) while covering the details needed for small-part segmentation (see Figure 11).
562
+
563
+ ![](images/e3aa40f0be56a237c5c00ea85726607b465624d13e968d151ab38d5a45b193ed.jpg)
564
+ Figure 15: The Effect of Image Size $H, W$ . We plot Ins. mIoU of 3D segmentation vs. the image size used in inference on ShapeNet Parts (Yi et al., 2016).
565
+
566
+ ![](images/177dcc285a4a120e89a761450d372408dc02ebe73f3aab8f9b2f0e8c97be5aba.jpg)
567
+ Figure 16: Multi-view Projected Segmentation 1. We show how, after rendering the points, we can segment in the image space. For each example, we show (INPUT): the projections of the points (colored with normals) used in training with random view-points; (PRED 2D): the segmentation prediction of the 2D backbone (DeepLabV3; Chen et al., 2018); (PRED 3D): the unprojected 3D segmentation prediction; (GT): the 3D segmentation ground truth.
568
+
569
+ ![](images/5399f426231ab43347b5b869edd7def9af0be33cbd699d4076f8bc3b337f0b29.jpg)
570
+ Figure 17: Multi-view Projected Segmentation 2. We show how, after rendering the points, we can segment in the image space. For each example, we show (INPUT): the projections of the points (colored with normals) used in training with random view-points; (PRED 2D): the segmentation prediction of the 2D backbone (DeepLabV3; Chen et al., 2018); (PRED 3D): the unprojected 3D segmentation prediction; (GT): the 3D segmentation ground truth.
571
+
572
+ <table><tr><td rowspan="2">Method</td><td colspan="4">Number of Views</td></tr><tr><td>4</td><td>6</td><td>8</td><td>10</td></tr><tr><td>VointNet (Cls. Acc.)</td><td>90.3</td><td>90.8</td><td>92.0</td><td>92.3</td></tr></table>
573
+
574
+ Table 13: Effect of the Number of Views on Classification. We report the classification accuracy of VointNet vs. the number of views (M) used in the training on ModelNet40.
575
+
576
+ <table><tr><td rowspan="2">Method</td><td colspan="7">Number of Views</td></tr><tr><td>1</td><td>2</td><td>4</td><td>6</td><td>8</td><td>10</td><td>12</td></tr><tr><td>Features Unprojection</td><td>3.0</td><td>5.3</td><td>11.45</td><td>15.7</td><td>17.2</td><td>29.7</td><td>24.0</td></tr><tr><td>Labels Unprojection</td><td>2.6</td><td>2.5</td><td>3.4</td><td>3.1</td><td>3.0</td><td>3.2</td><td>3.6</td></tr></table>
577
+
578
+ Table 14: Unprojection Operation Speed. We report the average latency (in ms) over 10,000 runs of the unprojection operation in its two forms: feature unprojection (used with mean aggregation) and label unprojection (used with mode aggregation).
579
+
580
+ <table><tr><td rowspan="2">Criteria</td><td colspan="5">Number of Points</td></tr><tr><td>1e2</td><td>1e3</td><td>1e4</td><td>1e5</td><td>1e6</td></tr><tr><td>Point Rendering Speed (ms/image)</td><td>7.2</td><td>7.6</td><td>7.7</td><td>10.4</td><td>37.7</td></tr></table>
581
+
582
+ Table 15: Point Rendering Speed. We report the average rendering speed (in ms/image) over 1,000 renderings of the point cloud renderer (Ravi et al., 2020) used in the Voint pipeline.
2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8db3161437e2e62992f49e8aed224279e01ea8c5b4e4650d4370ce5f8bd8788b
3
+ size 1800055
2023/Voint Cloud_ Multi-View Point Cloud Representation for 3D Understanding/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Volumetric Optimal Transportation by Fast Fourier Transform/468f5fc6-f60a-4c98-879c-a2f5d8b676d8_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Volumetric Optimal Transportation by Fast Fourier Transform/468f5fc6-f60a-4c98-879c-a2f5d8b676d8_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Volumetric Optimal Transportation by Fast Fourier Transform/468f5fc6-f60a-4c98-879c-a2f5d8b676d8_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:13a314e552dbb5251d7ac7911371c568802a78352508b84faa906964f206d518
3
+ size 21978046
2023/Volumetric Optimal Transportation by Fast Fourier Transform/full.md ADDED
1
+ # VOLUMETRIC OPTIMAL TRANSPORTATION BY FAST FOURIER TRANSFORM
2
+
3
+ Na Lei*
4
+
5
+ Dalian University of Technology
6
+ nalei@dlut.edu.cn
7
+
8
+ Dongsheng An
9
+
10
+ Stony Brook University
11
+ doan@cs.stonybrook.edu
12
+
13
+ Min Zhang
14
+
15
+ Zhejiang University min_zhang@zju.edu.cn
16
+
17
+ Xiaoyin Xu
18
+
19
+ Harvard Medical School
20
+ xxu@bwh.harvard.edu
21
+
22
+ Xianfeng Gu
23
+
24
+ Stony Brook University gu@cs.stonybrook.edu
25
+
26
+ # ABSTRACT
27
+
28
+ The optimal transportation map finds the most economical way to transport one probability measure to another, and it has been applied in a broad range of applications in machine learning and computer vision. By the Brenier theory, computing the optimal transport map is equivalent to solving a Monge-Ampère equation, which is highly non-linear. Therefore, the computation of optimal transportation maps is intrinsically challenging. In this work, we propose a novel and powerful method, FFT-OT (fast Fourier transform-optimal transport), to solve 3-dimensional OT problems. The method is based on several key ideas: first, the Monge-Ampère equation is linearized to a sequence of linear elliptic PDEs with spatially and temporally varying coefficients; second, the obliqueness property of optimal transportation maps is reformulated as a Neumann boundary condition; and third, the varying coefficient elliptic PDEs are approximated by constant coefficient elliptic PDEs and solved by FFT on GPUs. We also prove that the algorithm converges linearly. Experimental results show that the FFT-OT algorithm is more than a hundred times faster than conventional methods based on convex geometry. Furthermore, the method can be directly applied to sampling from complex 3D density functions in machine learning and to magnifying volumetric data in medical imaging.
29
+
30
+ # 1 INTRODUCTION
31
+
32
+ Optimal transportation (OT) transports one probability measure to another in the most economical way, and it plays a fundamental role in areas like machine learning Courty et al. (2017); Altschuler et al. (2019), computer vision Arjovsky et al. (2017); Tolstikhin et al. (2018); An et al. (2020), and computer graphics Solomon et al. (2015); Nader & Guennebaud (2018). Given a Riemannian manifold $X$ , all the probability distributions on $X$ form an infinite dimensional space $\mathcal{P}(X)$ . Given any two distributions $\mu, \nu \in \mathcal{P}(X)$ , the optimal transportation map defines a distance between them, and the McCann interpolation McCann (1997) defines the geodesic connecting them. Hence optimal transportation equips $\mathcal{P}(X)$ with a Riemannian metric and defines its covariant differentiation, which provides a variational calculus framework for optimization in it.
33
+
34
+ As the optimal transportation problem is highly non-linear, it is quite challenging to compute OT maps. Recently, researchers have developed many algorithms. The geometric variational approach Aurenhammer et al. (1998); Gu et al. (2016); Levy (2015) based on the Brenier theorem Brenier (1991) is capable of achieving high accuracy for low dimensional problems, but it requires complicated geometric data structures and its storage complexity grows exponentially as the dimension increases. The Sinkhorn method Cuturi (2013) based on the Kantorovich theorem adds an entropic regularizer to the primal problem and can handle high dimensional tasks, but it suffers from an intrinsic approximation error.
35
+
36
+ We propose a novel method to tackle this challenging problem through the fast Fourier transform (FFT). According to the Brenier theorem Brenier (1991), under the quadratic distance cost, the optimal transportation map is the gradient of the Brenier potential, which satisfies the Monge-Ampère equation. With the continuity method Delanoë (1991), the Monge-Ampère equation can be linearized as a sequence of elliptic partial differential equations (PDEs) with spatially and temporally varying coefficients. By iteratively solving the linearized Monge-Ampère equations, we can obtain the OT map. Specifically, we propose to approximate the linearized Monge-Ampère equation by constant coefficient elliptic PDEs and solve them using the FFT on GPUs.
37
+
38
+ Our proposed FFT-OT method has many merits: (i) it generalizes to arbitrary dimensions; (ii) it has a linear convergence rate, namely the approximation error decays exponentially fast; (iii) in each iteration, the computational complexity of FFT is $O(n \log n)$ , so our algorithm can solve large scale OT problems; and (iv) it is highly parallelizable and can be efficiently implemented on GPUs. We demonstrate the efficiency of the FFT-OT algorithm by solving volumetric OT problems for machine learning and medical imaging applications, including sampling from given 3D density functions and a volumetric magnifier. The algorithm also has its own limitations: (i) although it can be generalized to any dimension, the storage complexity increases exponentially with respect to the dimension, so its power is limited by the memory size of the GPUs; (ii) since the algorithm uses FFT, the current version of the method only works well for continuous density functions; and (iii) in this work, we mainly focus on computing the OT map from the uniform distribution to another arbitrary continuous distribution. To extend the method to find the OT map between any two continuous measures, we can compute two OT maps from the uniform distribution to the two continuous measures and then combine them; the combination gives a reasonable approximation of the OT map Nader & Guennebaud (2018).
39
+
40
+ Though Lei and Gu Lei & Gu (2021) also use FFT to solve the 2-dimensional OT problem, our method differs from theirs in two aspects: (i) Lei and Gu's method uses the fixed point method to compute 2D OT problems, while ours is based on the linearization of the Monge-Ampère operator to solve 3D OT problems; these are two different methodologies in PDE theory. (ii) In our paper, we also provide a theoretical convergence analysis of the proposed method. For more detailed analysis and related work, please refer to Appendix A.
41
+
42
+ # 2 OPTIMAL TRANSPORTATION THEORY
43
+
44
+ In this section, we review the fundamental concepts and theorems of the OT problem and the Monge-Ampère equation; more details can be found in Villani (2008).
45
+
46
+ Optimal Transportation Map and the Monge-Ampère equation Suppose the source domain $\Omega$ is an open set in $\mathbb{R}^d$ equipped with the probability measure $\mu$ , and the target domain $\Sigma$ is equipped with the probability measure $\nu$ . Both $\mu$ and $\nu$ have density functions $d\mu(x) = f(x)dx$ and $d\nu(y) = g(y)dy$ , respectively, with equal total mass: $\int_{\Omega} f(x)dx = \int_{\Sigma} g(y)dy$ , which is called the balance condition.
47
+
48
+ Suppose $T: \Omega \to \Sigma$ is a measurable map. The mapping $T$ is called measure preserving and denoted as $T_{\#} \mu = \nu$ if the following relation
49
+
50
+ $$
51
+ \mu (T ^ {- 1} (A)) = \nu (A) \tag {1}
52
+ $$
53
+
54
+ holds for every Borel subset $A \subset \Sigma$ . A cost function $c: \Omega \times \Sigma \to \mathbb{R}$ measures the transportation cost for transporting the unit mass from $x \in \Omega$ to $y \in \Sigma$ .
55
+
56
+ Problem 1 (Monge). The optimal transportation problem finds the measure preserving map with the minimal total transportation cost,
57
+
58
+ $$
59
+ \min _ {T _ {\#} \mu = \nu} \int_ {\Omega} c (x, T (x)) f (x) d x
60
+ $$
61
+
62
+ The solution to Monge's problem is called the optimal transport map between $\mu$ and $\nu$ . The existence, uniqueness and regularity of OT maps depend on the boundedness and the continuity of the density functions, the convexity of the supporting domains, the continuity of their boundaries, and the cost function. In our current work, we focus on a setting similar to that of Saumier et al. (2013):
63
+
64
+ - The cost function is quadratic Euclidean distance $c(x, y) = \| x - y \|^2 / 2$ ;
65
+
66
+ - The supports of the source and the target measures are the canonical cube $\Omega = [-1, 1]^3$ , which is uniformly convex;
67
+ - The source and the target measures $\mu, \nu$ are absolutely continuous with respect to the Lebesgue measure, their densities $f, g$ are positive and bounded away from zero;
68
+
69
+ $$
70
+ 0 < m < f, g < M,
71
+ $$
72
+
73
+ and $f,g$ are of class $C^\alpha (\Omega)$
74
+
75
+ - The boundary condition is the second boundary condition (OT boundary condition), $T(\Omega) = \Omega$ .
76
+
77
+ Then according to (Villani (2003) Theorem 14.4, Saumier et al. (2013) Theorem 2.1), the OT map $T: \Omega \to \Omega$ exists and is unique and invertible ( $\mu$ -a.e.), and the Brenier potential is of class $C^{2,\beta}(\bar{\Omega})$ for some $0 < \beta < \alpha$ .
78
+
79
+ Theorem 2. Assume that $\Omega, \mu, \nu, f$ and $g$ are defined as above. Then there exists a convex function $u: \Omega \to \mathbb{R}$ , $u \in C^{2,\beta}(\Omega)$ for some $0 < \beta < \alpha$ , such that $\nabla u$ pushes $\mu$ forward to $\nu$ , $(\nabla u)_{\#} \mu = \nu$ . Moreover, $\nabla u$ is unique and invertible ( $\mu$ a.e.), and its inverse $\nabla v$ satisfies $(\nabla v)_{\#} \nu = \mu$ .
80
+
81
+ We call such a convex function $u$ the Brenier potential, it satisfies the Monge-Ampère equation,
82
+
83
+ $$
84
+ \det D ^ {2} u (x) = \frac {f (x)}{g \circ \nabla u (x)}. \tag {2}
85
+ $$
86
+
87
+ with the boundary condition $\nabla u(\Omega) = \Sigma$ . Then finding the optimal transportation map is equivalent to solving the corresponding Monge-Ampère equation. In the current work, the target measure is always the Lebesgue measure, and the source density $f$ is of class $C^{2,\alpha}(\Omega)$ .
88
+
89
+ Linearized Monge-Ampère Operator The Monge-Ampère operator is defined as
90
+
91
+ $$
92
+ \mathrm {M A} [ u ] = \det D ^ {2} u,
93
+ $$
94
+
95
+ which is highly non-linear. It can be linearized as following:
96
+
97
+ $$
98
+ \mathrm {M A} [ u + \varepsilon v ] = \det (D ^ {2} u + \varepsilon D ^ {2} v) \approx \det D ^ {2} u + \varepsilon \operatorname {T r a c e} (\operatorname {A d j} (D ^ {2} u) \cdot D ^ {2} v), \tag {3}
99
+ $$
100
+
101
+ where $\operatorname{Adj}(A)$ is the adjoint (co-factor) matrix of $A$ , $\operatorname{Adj}(A) := \det(A)A^{-T}$ . Therefore the linearized Monge-Ampère operator is defined as
102
+
103
+ $$
104
+ \mathrm {D M A} _ {u} [ v ] := \operatorname {T r a c e} \left(\operatorname {A d j} \left(D ^ {2} u\right) \cdot D ^ {2} v\right) = \sum_ {p, q = 1} ^ {d} u ^ {p q} (x) \partial_ {p} \partial_ {q} v (x), \tag {4}
105
+ $$
106
+
107
+ where $(u^{pq}) = \mathrm{Adj}(D^2 u)$ is the adjoint matrix of the Hessian of $u$ , and $\partial_p\partial_q\coloneqq \frac{\partial^2}{\partial x_p\partial x_q}$ .
108
+
109
+ Continuity Method For simplicity, we assume the source domain coincides with the target domain, that is $\Omega = \Sigma$ , and the target density is $g(x) \equiv 1$ . The Monge-Ampère equation Eqn. (2) is simplified as $\operatorname{det}D^{2}u(x) = f(x)$ . Define a flow of density as
110
+
111
+ $$
112
+ \rho (x, t) = (1 - t) + t f (x), \quad t \in [ 0, 1 ]. \tag {5}
113
+ $$
114
+
115
+ The corresponding flow of the Brenier potentials is $u(x,t):\Omega \times [0,1]\to \mathbb{R}$
116
+
117
+ $$
118
+ \det D _ {x} ^ {2} u (x, t) = \rho (x, t), \quad s. t. \nabla_ {x} u (x, t) (\Omega) = \Omega ,
119
+ $$
120
+
121
+ where $D_x^2 u(x,t)$ is the Hessian of $u(x,t)$ with respect to $x$ , and $u(x,1)$ is the solution to the initial Monge-Ampère equation Eqn. (2). Taking the derivative w.r.t. time $t$ on both sides and applying the linearized Monge-Ampère operator Eqn. (4), we obtain an elliptic PDE with spatially and temporally varying coefficients for the unknown $v(x,t) \coloneqq \dot{u} (x,t)$ , namely the "velocity" of the Brenier potential,
122
+
123
+ $$
124
+ \mathrm {D M A} _ {u} [ v ] = \sum_ {p, q = 1} ^ {d} u ^ {p q} (x, t) \partial_ {p} \partial_ {q} v (x, t) = \frac {\partial}{\partial t} \rho (x, t) = f (x) - 1. \tag {6}
125
+ $$
126
+
127
+ At time $t = 0$ , the initial Brenier potential is known to be $u(x,0) = \frac{1}{2}\| x\|^2$ . Suppose at time $t$ we have already obtained $u(x,t)$ ; then we can compute the adjoint matrix $u^{pq}(x,t)$ of the Hessian $D_x^2 u(x,t)$ and solve Eqn. (6) to get the velocity $v(x,t) = \dot{u} (x,t)$ . We then move forward to time $t + \delta t$ and update $u(x,t + \delta t)$ by $u(x,t) + \dot{u} (x,t)\delta t$ . By repeating this procedure, we eventually reach time $t = 1$ and obtain the solution $u(x)\coloneqq u(x,1)$ to the initial Monge-Ampère Eqn. (2).
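For concreteness, a minimal Python/NumPy sketch of this time-marching loop is given below. The callables `solve_linearized` (a solver for Eqn. (6), e.g. the FFT solver of Sec. 3.2) and `hessian_adjoint`, as well as the fixed number of steps, are illustrative placeholders rather than the paper's actual implementation.

```python
import numpy as np

def continuity_method(f, solve_linearized, hessian_adjoint, n_steps=20):
    """March the Brenier potential from t = 0 to t = 1 (illustrative sketch)."""
    u = np.zeros_like(f)   # u(x, 0) corresponds to the potential |x|^2 / 2
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        Hadj = hessian_adjoint(u)            # adjoint of I + D^2 u at time t
        v = solve_linearized(Hadj, f - 1.0)  # velocity of the potential, Eqn. (6)
        u = u + dt * v                       # u(x, t + dt) = u(x, t) + v(x, t) dt
    return u                                 # approximates u(x, 1)
```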
128
+
129
+ Obliqueness Boundary Condition Suppose the boundary of $\Omega$ is $C^1$ almost everywhere, so at a $C^1$ point $x\in \partial \Omega$ the outer normal $\mathbf{n}(x)$ is well defined. For almost every boundary point $x\in \partial \Omega$ , the obliqueness condition is represented as
130
+
131
+ $$
132
+ \langle \mathbf {n} (x), \mathbf {n} (\nabla u (x)) \rangle \geq 0. \tag {7}
133
+ $$
134
+
135
+ Suppose $\Omega$ is a cuboid with 6 faces. If a boundary point $x\in \partial \Omega$ lies on a face, then by the cyclic monotonicity of the map and the strict convexity of $u$ Villani (2008), its image $\nabla u(x)$ must lie on the same face as $x$ , namely,
136
+
137
+ $$
138
+ \langle \nabla u (x) - x, \mathbf {n} (x) \rangle = 0. \tag {8}
139
+ $$
140
+
141
+ We can rewrite the Brenier potential as $u(x_{1},x_{2},\ldots ,x_{d}) = \frac{1}{2}\sum_{i = 1}^{d}x_{i}^{2} + v(x_{1},\dots ,x_{d})$ , then $\nabla u(x) - x = \nabla v(x)$ . By Eqn. (8), $v(x)$ satisfies the Neumann boundary condition,
142
+
143
+ $$
144
+ \frac {\partial v}{\partial \mathbf {n}} (x) = 0, \quad x \in \partial \Omega . \tag {9}
145
+ $$
146
+
147
+ Similarly, the velocity of the (modified) Brenier potential $v$ in Eqn. (6) also satisfies the Neumann boundary condition. The analysis about the existence and regularity of the solutions to Eqn. (6) with boundary condition Eqn. (9) can be found in the supplementary material.
148
+
149
+ # 3 COMPUTATIONAL ALGORITHM
150
+
151
+ Here we introduce the 3-dimensional FFT-OT algorithm, which can be generalized to any dimension. We approximate the Monge-Ampère equation by a sequence of constant coefficient elliptic PDEs and solve them by FFT on GPUs. A more detailed analysis of the solution of the discretized Monge-Ampère equation, together with the proofs of the lemmas and theorems, is given in Appendix B.
152
+
153
+ # 3.1 CONTINUITY METHOD FOR SOLVING THE MONGE-AMPÈRE EQUATION
154
+
155
+ By using the continuity method, we can solve the Monge-Ampère equation iteratively. For simplicity, we assume the target measure is the Lebesgue measure with $g \equiv 1$ . At the $n$ -th iteration, the Brenier potential is represented as $\frac{1}{2} \| x \|^2 + u_n(x)$ , its Hessian matrix is $H_n(x) \coloneqq \mathrm{I} + D^2 u_n(x)$ , the corresponding density function is defined as the determinant of the Hessian $\rho_n = \operatorname*{det}(H_n)$ , and the velocity of the Brenier potential is $v_n(x)$ . In the beginning, the Brenier potential $u_0(x)$ is zero, the Hessian is $H_0 = \mathrm{I}$ and the density is $\rho_0 = 1$ . At the $n$ -th step, we compute the adjoint matrix $[H_n^{pq}(x)]$ of the Hessian matrix $H_n(x)$ for any $x \in \Omega$ . According to Eqn. (3), the velocity $v_n(x)$ satisfies the variant coefficient elliptic PDE induced by the linearized Monge-Ampère operator,
156
+
157
+ $$
158
+ \mathrm {D M A} _ {u _ {n}} [ v _ {n} ] = \sum_ {p, q = 0} ^ {2} H _ {n} ^ {p q} (x) \partial_ {p} \partial_ {q} v _ {n} (x) = \frac {1}{\tau} \left(f (x) - \rho_ {n} (x)\right). \tag {10}
159
+ $$
160
+
161
+ Note that the right hand side of Eqn. (6) is the difference between the initial and the target densities, whereas here it is replaced by the difference between the initial and the current densities. The step length parameter $\tau \geq 1$ can be chosen to guarantee the convergence Loeper & Rapetti (2005).
162
+
163
+ The elliptic PDE Eqn. (10) has spatially variant coefficients. Although the traditional finite element method (FEM) can solve it using the GMRES algorithm Saad (2003), that algorithm cannot be directly accelerated by GPUs. To overcome this difficulty, we approximate Eqn. (10) by a much simpler elliptic PDE with constant coefficients, which can be directly solved on GPUs using the FFT-OT pipeline Alg. 1 in Appendix C.
164
+
165
+ At the $n$ -th iteration, after obtaining the adjoint matrix $[H_n^{pq}(x)], x \in \Omega$ , we compute the mean adjoint matrix $[\bar{H}_n^{pq}]$
166
+
167
+ $$
168
+ \bar {H} _ {n} ^ {p q} := \frac {\int_ {\Omega} H _ {n} ^ {p q} (x) \rho_ {n} (x) d x}{\int_ {\Omega} \rho_ {n} (x) d x}, \quad p, q = 0, 1, 2 \tag {11}
169
+ $$
170
+
171
+ and replace the variant coefficient elliptic PDE Eqn. (10) by the constant coefficient elliptic PDE
172
+
173
+ $$
174
+ \overline {{\mathrm {D M A}}} _ {u _ {n}} [ v _ {n} ] = \sum_ {p, q = 0} ^ {2} \bar {H} _ {n} ^ {p q} \partial_ {p} \partial_ {q} v _ {n} (x) = \frac {1}{\tau} (f (x) - \rho_ {n} (x)), \tag {12}
175
+ $$
176
+
177
+ where $\overline{\mathrm{DMA}}$ is called the mean linearized Monge-Ampère operator.
178
+
179
+ Then we solve the constant coefficient elliptic PDE Eqn. (12) by the FFT solver Alg. 2 in Appendix C. Although the original variant coefficient PDE Eqn. (10) is replaced by its constant coefficient approximation Eqn. (12), the algorithm still converges to the solution with a linear convergence rate. This replacement allows the whole algorithm to be solved by FFT on GPUs, which greatly improves the computational efficiency.
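To illustrate the averaging in Eqn. (11), the density-weighted mean of the adjoint Hessian field can be computed in a few lines of NumPy; the array layout and the function name are assumptions of this sketch, not part of the paper's pseudocode.

```python
import numpy as np

def mean_adjoint(Hadj, rho):
    """Density-weighted mean of the adjoint Hessian field, Eqn. (11).

    Hadj : array of shape (3, 3, M, N, L), adjoint of I + D^2 u_n at each cell.
    rho  : array of shape (M, N, L), current density det(I + D^2 u_n).
    Returns the constant 3x3 coefficient matrix used in Eqn. (12).
    """
    w = rho / rho.sum()   # discrete weights rho_n(x) dx, normalized
    return np.tensordot(Hadj, w, axes=([2, 3, 4], [0, 1, 2]))
```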
180
+
181
+ Theorem 3 (main). Given a domain $\Omega \subset \mathbb{R}^d$ , which is the canonical cube $\Omega = [-1,1]^d$ , and a positive density function $f:\Omega \to \mathbb{R}$ with the balance condition $\int_{\Omega}f(x)dx = \int_{\Omega}dx$ , suppose the mirror reflection extension Eqn. (14) of $f$ to the flat torus $\tilde{f}:\mathbb{T}^d\to \mathbb{R}$ is $C^\alpha$ , $\alpha \in (0,1)$ ; then the Monge-Ampère equation,
182
+
183
+ $$
184
+ \det D ^ {2} u (x) = f (x), \quad \nabla u (\Omega) = \Omega
185
+ $$
186
+
187
+ can be solved using the FFT-OT Algorithm Alg. 1 in Appendix C. In particular, one can choose the step length parameter $\tau$ such that there is a constant $0 < \gamma < 1$ for which the approximation error satisfies
188
+
189
+ $$
190
+ \left\| f - \rho_ {n + 1} \right\| ^ {2} < C \gamma^ {n}, \tag {13}
191
+ $$
192
+
193
+ namely the algorithm has a linear convergence rate.
194
+
195
+ # 3.2 FFT SOLVER FOR CONSTANT COEFFICIENT ELLIPTIC PDES
196
+
197
+ To solve the constant coefficient elliptic PDE Eqn. (12), we first extend the PDE to the flat torus by mirror reflection, then discretize the domain and compute the differential operators by the central difference scheme. Finally, the PDE is converted to algebraic equations in the frequency domain by FFT and can be efficiently solved on GPUs.
198
+
199
+ Extension by Mirror Reflection Suppose $\Omega = [0,1]^3$ and $f:\Omega \to \mathbb{R}$ are given, we extend $\Omega$ to $\tilde{\Omega} = [-1,1]^3$ and $f$ to $\tilde{f}:\tilde{\Omega}\rightarrow \mathbb{R}$ by mirror reflection
200
+
201
+ $$
202
+ \tilde {f} (x, y, z) = f (| x |, | y |, | z |), \quad \forall (x, y, z) \in \tilde {\Omega}. \tag {14}
203
+ $$
204
+
205
+ By definition, $\tilde{f}$ satisfies the periodic boundary condition and can be treated as a function defined on the flat torus $\mathbb{T}^3$ , with $\tilde{\Omega}$ one of the fundamental domains of $\mathbb{T}^3$ . The constant coefficients $a^{p,q}$ remain unchanged. We then solve the constant coefficient elliptic PDE Eqn. (18) $L[\tilde{u}] = \tilde{f}$ with the periodic boundary condition. Finally, the restriction of $\tilde{u}$ to $\Omega$ gives the solution $u$ to the original problem $L[u] = f$ with the Neumann boundary condition.
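A minimal NumPy sketch of the mirror reflection Eqn. (14), assuming $f$ is sampled on a regular grid over $[0,1]^3$ (the function name is illustrative):

```python
import numpy as np

def mirror_extend(f):
    """Extend f on [0,1]^3 to [-1,1]^3 by mirror reflection, Eqn. (14).

    The reflected copy is prepended along each axis, so index 0 corresponds
    to coordinate -1; the result is periodic on the flat torus T^3.
    """
    for axis in range(3):
        f = np.concatenate([np.flip(f, axis=axis), f], axis=axis)
    return f
```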
206
+
207
+ In the following, to avoid using overly complicated symbols, we use $(u,f,\Omega)$ to represent $(\tilde{u},\tilde{f},\tilde{\Omega})$ for simplicity.
208
+
209
+ Tessellation Suppose $\Omega = [-1,1]^3$ is the canonical cube (a fundamental domain of a flat torus). We tessellate it into regular cells, and the centers of the cells form an $M\times N\times L$ grid. The Brenier potential $u:\Omega \to \mathbb{R}$ is discretized to a tensor $u_{i,j,k}$ with $\{i,j,k\} \in \{0,\dots ,M - 1\} \times \{0,\dots ,N - 1\} \times \{0,\dots ,L - 1\}$ . The spatial step lengths are $(h_x,h_y,h_z) = (2 / M,2 / N,2 / L)$ . The coordinate of each sample point is $(x_{i},y_{j},z_{k}) = (-1 + h_{x}(i + 1 / 2), - 1 + h_{y}(j + 1 / 2), - 1 + h_{z}(k + 1 / 2))$ . The periodic boundary condition is then formulated as
210
+
211
+ $$
212
+ u _ {i, j, k} = u _ {i + \alpha M, j + \beta N, k + \gamma L}, \quad \alpha , \beta , \gamma \in \mathbb {Z}. \tag {15}
213
+ $$
214
+
215
+ Finite Difference Differential Operator We use the standard central differences to compute the differential operators. The first order derivative $\mathcal{D}_x$ is approximated by
216
+
217
+ $$
218
+ \mathcal {D} _ {x} u _ {i, j, k} = \frac {u _ {i + 1 , j , k} - u _ {i - 1 , j , k}}{2 h _ {x}},
219
+ $$
220
+
221
+ where the index $i + 1$ is taken modulo $M$ . The operators $\mathcal{D}_y, \mathcal{D}_z$ are defined in a similar way. The second order derivative operators $\mathcal{D}_{xx}$ and $\mathcal{D}_{xy}$ are approximated by
222
+
223
+ $$
224
+ \mathcal {D} _ {x x} ^ {2} u _ {i, j, k} = \frac {u _ {i + 1 , j , k} + u _ {i - 1 , j , k} - 2 u _ {i , j , k}}{h _ {x} ^ {2}}
225
+ $$
226
+
227
+ $$
228
+ \mathcal {D} _ {x y} ^ {2} u _ {i, j, k} = \frac {u _ {i + 1 , j + 1 , k} + u _ {i - 1 , j - 1 , k} - u _ {i + 1 , j - 1 , k} - u _ {i - 1 , j + 1 , k}}{4 h _ {x} h _ {y}}
229
+ $$
230
+
231
+ The other operators $\mathcal{D}_{yy},\mathcal{D}_{zz},\mathcal{D}_{yz}$ and $\mathcal{D}_{xz}$ are defined similarly.
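Thanks to the periodic boundary condition, these stencils can be written compactly with circular shifts; a NumPy sketch with illustrative function names:

```python
import numpy as np

def dxx(u, hx):
    """Second derivative D_xx; the shift realizes the index i +/- 1 modulo M."""
    return (np.roll(u, -1, axis=0) + np.roll(u, 1, axis=0) - 2.0 * u) / hx**2

def dxy(u, hx, hy):
    """Mixed derivative D_xy by the four-point central stencil."""
    upp = np.roll(np.roll(u, -1, axis=0), -1, axis=1)  # u_{i+1, j+1, k}
    umm = np.roll(np.roll(u,  1, axis=0),  1, axis=1)  # u_{i-1, j-1, k}
    upm = np.roll(np.roll(u, -1, axis=0),  1, axis=1)  # u_{i+1, j-1, k}
    ump = np.roll(np.roll(u,  1, axis=0), -1, axis=1)  # u_{i-1, j+1, k}
    return (upp + umm - upm - ump) / (4.0 * hx * hy)
```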
232
+
233
+ Discrete Fourier Transformation The discrete Fourier transformation (DFT) of $u_{i,j,k}$ is given by
234
+
235
+ $$
236
+ \hat {u} _ {m, n, l} = \sum_ {i = 0} ^ {M - 1} \sum_ {j = 0} ^ {N - 1} \sum_ {k = 0} ^ {L - 1} u _ {i, j, k} \hat {\omega} _ {m n l} \tag {16}
237
+ $$
238
+
239
+ $$
240
+ u _ {i, j, k} = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \omega_ {m n l} \tag {17}
241
+ $$
242
+
243
+ where $\hat{\omega}_{mnl} = e^{-\iota \frac{2\pi mi}{M}}e^{-\iota \frac{2\pi nj}{N}}e^{-\iota \frac{2\pi lk}{L}}$ , $\omega_{mnl} = e^{\iota \frac{2\pi mi}{M}}e^{\iota \frac{2\pi nj}{N}}e^{\iota \frac{2\pi lk}{L}}$ and $\iota = \sqrt{-1}$ , $\{m,n,l\}$ are the indices of the frequency coefficients. By using DFT, the differential operators are converted to algebraic operators in the frequency domain.
244
+
245
+ Lemma 4. Suppose the discrete function is $u_{i,j,k}$ , with the discrete Fourier transformation Eqn. (16) and Eqn. (17), by using the central difference scheme, the first order differential operator is given by
246
+
247
+ $$
248
+ \mathcal {D} _ {x} u _ {i, j, k} = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {\sin \frac {2 \pi m}{M}}{h _ {x}} \omega_ {m n l}
249
+ $$
250
+
251
+ the second order differential operators are represented by
252
+
253
+ $$
254
+ \mathcal {D} _ {x x} ^ {2} u _ {i, j, k} = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {2 (\cos \frac {2 \pi m}{M} - 1)}{h _ {x} ^ {2}} \omega_ {m n l}
255
+ $$
256
+
257
+ $$
258
+ \mathcal {D} _ {x y} ^ {2} u _ {i, j, k} = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {- \sin \frac {2 \pi m}{M} \sin \frac {2 \pi n}{N}}{h _ {x} h _ {y}} \omega_ {m n l}
259
+ $$
260
+
261
+ The other differential operators $\mathcal{D}_y, \mathcal{D}_z, \mathcal{D}_{yy}, \mathcal{D}_{zz}, \mathcal{D}_{yz}$ and $\mathcal{D}_{xz}$ are also represented accordingly. The detailed proofs can be found in the supplementary material.
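Lemma 4 states that the central difference operators are diagonalized by the DFT. As a quick sanity check (a small 1D numerical experiment, not part of the paper's algorithm), the symbol of $\mathcal{D}_{xx}$ can be verified as follows:

```python
import numpy as np

M, hx = 64, 2.0 / 64
u = np.random.rand(M)
dxx_u = (np.roll(u, -1) + np.roll(u, 1) - 2.0 * u) / hx**2  # central difference
m = np.arange(M)
symbol = 2.0 * (np.cos(2.0 * np.pi * m / M) - 1.0) / hx**2  # Lemma 4 symbol
err = np.abs(np.fft.fft(dxx_u) - symbol * np.fft.fft(u)).max()
print(err)  # agrees up to round-off: the stencil is diagonal in the DFT basis
```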
262
+
263
+ FFT Solver Suppose we want to solve an elliptic PDE with constant coefficients on $\Omega \subset \mathbb{R}^3$
264
+
265
+ $$
266
+ L [ u ] := \left(\sum_ {p = 0} ^ {2} \sum_ {q = 0} ^ {2} a ^ {p, q} \partial_ {p} \partial_ {q} + \sum_ {r = 0} ^ {2} b ^ {r} \partial_ {r} + c\right) u (x) = f (x), \tag {18}
267
+ $$
268
+
269
+ with the periodic boundary condition, where $a^{p,q}, b^r, c$ are constants and the matrix $(a^{p,q})$ is positive definite, namely the PDE is uniformly elliptic. By the discrete Fourier transformation $\mathcal{F}$ , we convert the differential equation to an algebraic equation in the frequency domain,
270
+
271
+ $$
272
+ \sum_ {p = 0} ^ {2} \sum_ {q = 0} ^ {2} a ^ {p, q} \mathcal {F} (\partial_ {p} \partial_ {q} u) + \sum_ {r = 0} ^ {2} b ^ {r} \mathcal {F} (\partial_ {r} u) + c \mathcal {F} (u) = \mathcal {F} (f)
273
+ $$
274
+
275
+ By applying Lemma 4 and defining
276
+
277
+ $$
+ \begin{aligned} \lambda_ {m, n, l} = {} & a ^ {0, 0} \frac {2 (\cos \frac {2 \pi m}{M} - 1)}{h _ {x} ^ {2}} + a ^ {1, 1} \frac {2 (\cos \frac {2 \pi n}{N} - 1)}{h _ {y} ^ {2}} + a ^ {2, 2} \frac {2 (\cos \frac {2 \pi l}{L} - 1)}{h _ {z} ^ {2}} \\ & - \left(a ^ {0, 1} + a ^ {1, 0}\right) \frac {\sin \frac {2 \pi m}{M} \sin \frac {2 \pi n}{N}}{h _ {x} h _ {y}} - \left(a ^ {1, 2} + a ^ {2, 1}\right) \frac {\sin \frac {2 \pi n}{N} \sin \frac {2 \pi l}{L}}{h _ {y} h _ {z}} - \left(a ^ {0, 2} + a ^ {2, 0}\right) \frac {\sin \frac {2 \pi l}{L} \sin \frac {2 \pi m}{M}}{h _ {z} h _ {x}} \\ & + b ^ {0} \frac {\sin \frac {2 \pi m}{M}}{h _ {x}} + b ^ {1} \frac {\sin \frac {2 \pi n}{N}}{h _ {y}} + b ^ {2} \frac {\sin \frac {2 \pi l}{L}}{h _ {z}} + c \end{aligned} \tag {19}
+ $$
280
+
281
+ we obtain the algebraic equations in the frequency domain,
282
+
283
+ $$
284
+ \hat {u} _ {m, n, l} \lambda_ {m, n, l} = \hat {f} _ {m, n, l}
285
+ $$
286
+
287
+ With the $\hat{u}_{m,n,l}$ 's, we can easily obtain the $u_{i,j,k}$ 's by the Inverse Discrete Fourier Transform (IDFT), which solves the constant coefficient elliptic equation. The algorithm is described in Alg. 2 in Appendix C.
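A minimal NumPy sketch of such a solver is given below, assuming no lower order terms ($b^r = 0$, $c = 0$) as in Eqn. (12) and a zero-mean right hand side; only the vanishing zero-frequency symbol is treated specially here, and the function name and signature are assumptions of this sketch rather than the paper's Alg. 2.

```python
import numpy as np

def fft_elliptic_solve(A, f, h):
    """Solve sum_{p,q} A[p,q] d_p d_q u = f on a periodic grid (sketch).

    A : constant 3x3 coefficient matrix (positive definite).
    f : right-hand side of shape (M, N, L) with zero mean.
    h : grid spacings (h_x, h_y, h_z).
    """
    th = [2.0 * np.pi * np.arange(n) / n for n in f.shape]
    t = np.meshgrid(*th, indexing="ij")   # frequency angles 2*pi*m/M, etc.
    lam = np.zeros(f.shape)               # symbol lambda_{m,n,l} of Eqn. (19)
    for p in range(3):
        lam += A[p, p] * 2.0 * (np.cos(t[p]) - 1.0) / h[p] ** 2
        for q in range(p + 1, 3):
            lam -= (A[p, q] + A[q, p]) * np.sin(t[p]) * np.sin(t[q]) / (h[p] * h[q])
    f_hat = np.fft.fftn(f)
    lam[0, 0, 0] = 1.0       # lambda vanishes at zero frequency;
    f_hat[0, 0, 0] = 0.0     # fix the free additive constant via zero mean
    return np.real(np.fft.ifftn(f_hat / lam))
```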
288
+
289
+ The FFT for solving the constant coefficient elliptic PDE can be efficiently computed with GPUs. Moreover, the algorithm Alg. 2 solves the constant coefficient elliptic PDEs with a periodic boundary condition, which can be generalized to solving the same type of PDEs with Neumann boundary condition by extending the PDE to the flat torus $\mathbb{T}^3$ using mirror reflection Eqn. (14).
290
+
291
+ # 4 EXPERIMENTAL RESULTS
292
+
293
+ In this section, we first show that our proposed FFT-OT algorithm converges linearly and runs $100 \times$ faster than the conventional convex geometry based solver Levy (2015); we then demonstrate the method in two applications: 3D adaptive sampling and a volume magnifier. All the algorithms are developed in generic C++ with the CUDA Toolkit. All the experiments are conducted on a Windows laptop with an Intel Core i7-7700HQ CPU, 16 GB memory, and an NVIDIA GeForce GTX 1060 graphics card. More experiments can be found in Appendix D.
294
+
295
+ # 4.1 RUNNING TIME AND CONVERGENCE ANALYSIS
296
+
297
+ To show the performance of the proposed method, we experiment on density functions defined by Gaussian mixture models. To be specific, the domain is the cube $\Omega = [0,1]^3$ , and the 3-dimensional density function defined on $\Omega$ is set to be $f(x) = \sum_{i=1}^{30} p_i \mathcal{N}(\mu_i, \Sigma_i)$ , where $\mathcal{N}(\mu_i, \Sigma_i)$ represents the Gaussian distribution with mean $\mu_i$ and variance $\Sigma_i = \mathrm{diag}(\sigma_{i0}^2, \sigma_{i1}^2, \sigma_{i2}^2)$ . $\mu_i \in \mathbb{R}^3$ is uniformly sampled from $[0,1]^3$ , $\sigma_{ij}$ is uniformly sampled from $[0,0.5]$ , and $p_i \in \mathbb{R}$ is uniformly sampled from $[0.2,1]$ and normalized such that $\int_{\Omega} f(x) dx = 1$ . Thus the source distribution $\mu$ is a complicated Gaussian mixture distribution restricted to $\Omega$ . Then, by the mirror reflection of Sec. 3.2, we obtain a complex density function defined on $[-1,1]^3$ that satisfies the periodic boundary condition.
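A NumPy sketch of this test density follows; the random seed, the reduced grid resolution, and the small positive floor on the standard deviations (added to avoid degenerate components) are choices of this sketch, not of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 30, 64                                # 30 components, reduced grid size
mu = rng.uniform(0.0, 1.0, size=(K, 3))      # component means in [0, 1]^3
sig = rng.uniform(0.05, 0.5, size=(K, 3))    # axis-aligned standard deviations
p = rng.uniform(0.2, 1.0, size=K)            # raw mixture weights

ax = (np.arange(n) + 0.5) / n                # cell centers of [0, 1]^3
X = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"))  # shape (3, n, n, n)
f = np.zeros((n, n, n))
for k in range(K):
    e = sum(((X[d] - mu[k, d]) / sig[k, d]) ** 2 for d in range(3))
    f += p[k] * np.exp(-0.5 * e)
f /= f.mean()                                # balance condition on Omega
```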
298
+
299
+ We directly use the FFT-OT algorithm Alg. 1 to solve the linearized Monge-Ampère equation. With the approximation error threshold $\varepsilon = 1.0 \times 10^{-6}$ and the resolution $256 \times 256 \times 256$ , the running time of our FFT-OT algorithm with double precision on the GPU is less than 175 seconds. The conventional convex geometry based algorithm for 3D optimal transportation Levy (2015) can neither handle such large data sets nor be implemented on GPUs. It can only compute the OT map at a resolution no greater than $100 \times 100 \times 100$ on our system, which takes about 2700 seconds. When handling a problem at $128 \times 128 \times 128$ resolution, our FFT-OT takes about
300
+
301
+ ![](images/3bdbb91d57bc03d187391b82d1f4bbac774ffb0ed2c127ee3cfab9ecf8513c36.jpg)
302
+ Figure 1: Convergence Analysis.
303
+
304
+ 20.3 seconds, which is $130 \times$ faster than the power diagram based method Levy (2015).
305
+
306
+ Fig. 1 shows the approximation error for the above Gaussian mixture density with respect to the iterations, namely $\log \| f - \rho_n\| _2^2$ . The algorithm indeed converges linearly, consistent with the prediction of Eqn. (13) in Thm. 3; this experiment therefore validates the theorem.
307
+
308
+ # 4.2 3D ADAPTIVE SAMPLING
309
+
310
+ Generating random samples matching a given density function plays an essential role in applications like Monte-Carlo integration and stippling. Efficiently obtaining high quality samples is still an ongoing research topic Bauer et al. (2015); Perrier et al. (2018), and optimal transportation has been successfully applied to generating high quality 2D samples de Goes et al. (2012); Nader & Guennebaud (2018). Most of the current research focuses on generating 2D samples fitting a given density function. Here we apply the proposed 3D FFT-OT method to generate high quality 3D samples according to given complex density functions. To the best of our knowledge, this is the first work that uses OT to sample from 3D density functions.
311
+
312
+ Suppose the source probability distribution $d\mu (x) = f(x)dx$ is defined on $\Omega = [0,1]^3$ with $\mu (\Omega) = 1$ , and the target distribution $d\nu (y) = dy$ is the uniform distribution. We use the FFT-OT algorithm Alg. 1 to compute the OT map $T:\Omega \to \Omega ,T_{\#}\mu = \nu$ . The domain is tessellated into a $256\times 256\times 256$ grid. For each $x_{ijk},i,j,k\in \{0,1,\ldots ,255\}$ , the image $T(x_{ijk})$ can be obtained. We use $\{T(x_{ijk})\}$ as vertices to compute the Delaunay triangulation of $\Omega$ . Representing the OT map $T:(\Omega ,\mu)\rightarrow (\Omega ,\nu)$ as a piecewise linear map, the restriction of $T$ to each tetrahedron is linear, and the inverse OT map $T^{-1}:(\Omega ,\nu)\to (\Omega ,\mu)$ is also piecewise linear. Namely, given a grid point $y_{mnl}$ , we can find the tetrahedron containing it. Suppose the vertices of this tetrahedron are $\{T(x_i),T(x_j),T(x_k),T(x_l)\}$ ; then $y_{mnl}$ is expressed as
313
+
314
+ $$
315
+ y _ {m n l} = \lambda_ {i} T (x _ {i}) + \lambda_ {j} T (x _ {j}) + \lambda_ {k} T (x _ {k}) + \lambda_ {l} T (x _ {l}),
316
+ $$
317
+
318
+ ![](images/9e23e79754394ea47c674701a280bdd55feaa52de2d09dfbac5d654f3479e131.jpg)
319
+
320
+ ![](images/6c4973590e35572795f31cbf0ccd3c7259833b41ad4e2dbbced9a255031e0226.jpg)
321
+ (a) Density
322
+
323
+ ![](images/991c27da91d2b9a35751811c8d8b66d71b4f0b1d623541b6f4a316a83e509cc3.jpg)
324
+
325
+ ![](images/1c460ddc7081f3eddd7743544cfbccc8449855610c14e1204499168b0c73fe58.jpg)
326
+ (b) Rejection
327
+
328
+ ![](images/42274fe3238f11b799d712596f55048296cca366c58c386d20c9588fa5a82ca7.jpg)
329
+
330
+ ![](images/6393df53e6e6e271b41246b6537f60061ba7c4fb93111daf4b4d0b10c4620dfc.jpg)
331
+ (c) MH
332
+ Figure 2: 3D density function sampling. (a) The density functions in a slice. The slices in each row come from two different density functions. (b)-(f) The samples obtained by different sampling methods. (b) Rejection sampling. (c) Metropolis-Hastings (MH) algorithm Bishop (2006). (d) Slice sampling Neal (2003). (e) The sampling results by mapping random samples from the uniform distribution back to the desired distribution with $T^{-1}$ . (f) The sampling results by mapping the grid centers back with $T^{-1}$ . The scores at the top right give the results of the Chi-square goodness-of-fit test; smaller is better.
333
+
334
+ ![](images/1386ac29405621587ac210b503a5dea958f573ac4514d9efc83e5c0d92511114.jpg)
335
+
336
+ ![](images/eff55229582a2e4bb9d8e713da05b6ad53348e1f5cfef7d1f2838fb12460df68.jpg)
337
+ (d) Slice
338
+
339
+ ![](images/d85bc0ba5e2c8b1233922b11ac9edf9eca3bde735bd03384e7b6bf4159723282.jpg)
340
+
341
+ ![](images/8e2babcbb74fefc8012dd3da3d63874b37804f4626866aa9648c765bad8df38a.jpg)
342
+ (e) Ours-R
343
+
344
+ ![](images/12b78ae2688e92da4208cb0e7c2c9586e51e58ef5e36438feac9884a3758799f.jpg)
345
+
346
+ ![](images/8a65f5c0f7af9a0785980ddfcfbb259e088777602f0ed19e24504b5b81ae417e.jpg)
347
+ (f) Ours-G
348
+
349
+ where the non-negative barycentric coordinates satisfy $\lambda_{i} + \lambda_{j} + \lambda_{k} + \lambda_{l} = 1$ . Then the image of the inverse OT map is given by
350
+
351
+ $$
352
+ T ^ {- 1} \left(y _ {m n l}\right) = \lambda_ {i} x _ {i} + \lambda_ {j} x _ {j} + \lambda_ {k} x _ {k} + \lambda_ {l} x _ {l}. \tag {20}
353
+ $$
354
+
355
+ We generate random samples $\{y_k\}$ according to the uniform distribution $\nu$ on $\Omega$ , then their images $\{T^{-1}(y_k)\}$ are the desired random samples following the distribution $\mu$ .
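For a moderate number of grid points, the piecewise linear inversion of Eqn. (20) can be sketched with SciPy's Delaunay triangulation; the function name and array layout are assumptions of this sketch.

```python
import numpy as np
from scipy.spatial import Delaunay

def inverse_map(x_grid, Tx, y):
    """Evaluate the piecewise linear inverse map T^{-1} at points y, Eqn. (20).

    x_grid : (P, 3) grid points x_ijk;  Tx : (P, 3) their images T(x_ijk);
    y      : (S, 3) query points, assumed inside the convex hull of Tx.
    """
    tri = Delaunay(Tx)                 # Delaunay triangulation of {T(x_ijk)}
    s = tri.find_simplex(y)            # tetrahedron containing each query point
    Tm = tri.transform[s]              # barycentric transforms, shape (S, 4, 3)
    b = np.einsum("sij,sj->si", Tm[:, :3, :], y - Tm[:, 3, :])
    lam = np.c_[b, 1.0 - b.sum(axis=1)]          # barycentric coordinates
    return np.einsum("si,sid->sd", lam, x_grid[tri.simplices[s]])
```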
356
+
357
+ In our experiment, we use the same Gaussian mixture settings of the density function as in Sec. 4.1. Fig. 2 visualizes the generated samples. We randomly pick the $k$ -th slice along the $z$ -direction from the discretized volume, draw the source density function on this slice, and use pixel intensity to represent the density in Fig. 2(a). (i) We uniformly generate $100k$ random samples $\{y_k\} \subset \Omega$ and obtain the desired random samples by applying the inverse OT map $\{T^{-1}(y_k)\}$ . (ii) We also set $\{y_k\}$ to the grid centers of $\Omega$ and obtain the corresponding samples of the desired distribution $\mu$ . The samples around the $k$ -th slice for both sampling strategies are plotted in Fig. 2(e) and Fig. 2(f).
358
+
359
+ By visual comparison, it is obvious that the distributions of Fig. 2(e) and Fig. 2(f) are consistent with the density function in Fig. 2(a). The consistency of the boundaries of Fig. 2(e) and (f) with Fig. 2(a) also verifies the obliqueness boundary condition of the Monge-Ampère equation. To further show the performance of the proposed method, we compare it with the classical sampling methods, namely rejection sampling, the Metropolis-Hastings algorithm Bishop (2006) and slice sampling Neal (2003), shown in Fig. 2(b), Fig. 2(c) and Fig. 2(d). To quantitatively compare the sampling results, we use the Chi-square goodness-of-fit test, which first groups the data and then computes the $L^2$ norm of the difference between the actual number of observations in each group and the expected number of observations. In our experiment, we set the group number to $64 \times 64 \times 64$ and use 500K samples for the comparison. The corresponding $L^2$ norm of each method is shown at the top right of the corresponding figure. We can see that both sampling strategies of our method give smaller scores than the classical ones.
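One plausible implementation of this score in NumPy (the binning of the expected counts is an assumption of this sketch):

```python
import numpy as np

def fit_score(samples, f, bins=64):
    """Chi-square style goodness-of-fit score: L2 norm of observed minus
    expected bin counts, with expected counts derived from the density grid f
    (mean 1 on Omega). Assumes f.shape is a multiple of `bins` per axis."""
    obs, _ = np.histogramdd(samples, bins=(bins,) * 3,
                            range=((0, 1), (0, 1), (0, 1)))
    r = f.shape[0] // bins                      # coarsening ratio per axis
    exp = f.reshape(bins, r, bins, r, bins, r).mean(axis=(1, 3, 5))
    exp *= len(samples) / exp.sum()             # expected counts per group
    return np.linalg.norm(obs - exp)
```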
360
+
361
+ # 4.3 VOLUMETRIC MAGNIFIER
362
+
363
+ In reality, physical magnifiers can only magnify planar images. In medical image processing, it is highly desirable to magnify certain regions of 3D MRIs or CT images. Our algorithm can address such requests with a user-prescribed region of interest (ROI) and magnifying factor. Suppose the ROI is a symmetric region with center $(\bar{x},\bar{y},\bar{z})\in \Omega$ and radii $\sigma_x,\sigma_y,\sigma_z$ along the different directions. The density function $f$ of the source measure $\mu$ is defined as
364
+
365
+ $$
366
+ f (x, y, z) = 0. 5 + 0. 5 e ^ {- ((x - \bar {x}) ^ {2} / 2 \sigma_ {x} ^ {2} + (y - \bar {y}) ^ {2} / 2 \sigma_ {y} ^ {2} + (z - \bar {z}) ^ {2} / 2 \sigma_ {z} ^ {2})}
367
+ $$
368
+
369
+ We compute OT map $T: (\Omega, \mu) \to (\Omega, \nu)$ , where $\nu$ is the uniform distribution. Similar to the method in 3D adaptive sampling, we compute the Delaunay triangulation of the images $\{T(x_{ijk})\}$ , then the OT map $T$ is represented as a piecewise linear map. The inverse optimal transportation map
370
+
371
+ ![](images/9617a55da512e830fdb2957b053afbf67cde99d45ee37404750cdf5b376f62f5.jpg)
372
+ Figure 3: The volume magnifier of an aneurysm. The first column shows the original volumetric data, and the last three columns give the magnified data from the same viewpoints with different magnifying ratios. The yellow circle denotes the ROI/aneurysm. To obtain the results, we set $\sigma = \sigma_{x} = \sigma_{y} = \sigma_{z}$ to 0.83, 0.75 and 0.5, respectively.
373
+
374
+ ![](images/69a02b53691bc471fb821d5c6db1f725236ce6031bb0554dec9901a519abf12c.jpg)
375
+
376
+ ![](images/386aab8fb7671eec7accf9774320c70703684277bb174009e9564739ebd35a58.jpg)
377
+
378
+ ![](images/f7423a5b99e94f5d341b8159bc08750b17410a0ccaf0730d033af287440fc51a.jpg)
379
+
380
+ ![](images/fdcd37e360c4edc5d7205291a179e64e74a004e8852cee7f98b9ecdc15549027.jpg)
381
+
382
+ ![](images/d3a86e9df0f9fb6cc6fd84a90e8d1778a943ac75a38ee32c4e3d8969df70c85c.jpg)
383
+
384
+ ![](images/ebdff74f3bbc7c0e5b13f7d1bf88838e5fe7f7443c9a08fbce7813d4e1a0a048.jpg)
385
+
386
+ ![](images/3a119ac03a5c3ee978ad5e6cd7f6c4ada4884521e6cfa58455f65ee717b8c464.jpg)
387
+
388
+ ![](images/b5c18de141438767f08b957306c042f2c43ef845f677c23d259a0301c1abc03b.jpg)
389
+ Figure 4: The volume magnifier of the knee. The first row gives the original volumetric data with different ROIs denoted by the blue boxes from different viewpoints, and the second row shows the corresponding magnified results. In the experiments we set $\sigma_{x} = \sigma_{y} = \sigma_{z} = 0.75$ .
390
+
391
+ ![](images/7b8683c61289f9240fae83afa24973daf767edbc881a65835aed49339dc711b1.jpg)
392
+
393
+ ![](images/5e2ab732a05ed454bef976b47f9fba3031bd402970ee1fba72556b924a40eb13.jpg)
394
+
395
+ ![](images/6447c7ba8e3eff44e23870d8d42c9cf37c5a1d7ecfa3d6fa64a0ce607cbb3218.jpg)
396
+
397
+ $T^{-1}:(\Omega ,\nu)\to (\Omega ,\mu)$ is also piecewise linear. For each grid point $y_{mnl}\in \Omega$ we use Eqn. (20) to find its pre-image. Similarly, its corresponding intensity $I_{mnl}$ is computed by linear interpolation. Then we obtain the new volumetric data $\{I_{mnl}\}$ with the magnified ROI and visualize the result with Voreen Meyer-Spradow et al. (2009).
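Assuming the pre-images $T^{-1}(y_{mnl})$ have been evaluated on the whole grid via Eqn. (20), the intensity resampling can be sketched with SciPy's linear interpolation; the names and the array layout are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def magnify_volume(vol, preimage):
    """Resample the intensity volume at the pre-images T^{-1}(y_mnl).

    vol      : (M, N, L) intensity volume on [0, 1]^3.
    preimage : (3, M, N, L) pre-image coordinates in [0, 1]^3.
    """
    scale = np.array(vol.shape, dtype=float) - 1.0
    idx = preimage * scale.reshape(3, 1, 1, 1)       # to voxel index space
    return map_coordinates(vol, idx, order=1, mode="nearest")  # trilinear
```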
398
+
399
+ Fig. 3 demonstrates our volumetric magnifier by magnifying an aneurysm on a blood vessel Hansen & Johnson (2004). We choose the aneurysm region as the ROI. The first column gives the snapshot of the blood vessel, and the yellow circle denotes the location of the aneurysm. The last three columns show the magnified aneurysm with different magnifying ratios from the same viewpoints. Moreover, we show the magnified volumetric knee from different viewpoints with different ROIs denoted by the blue boxes in Fig. 4. Our method magnifies only the ROIs and keeps the other regions unchanged. Compared with the traditional workflow requiring tedious zooming in and out, our method magnifies only the ROI while keeping the whole subject in the field of view, which enables doctors to visualize the overall anatomy and scrutinize detailed anatomical structures at the same time.
400
+
401
+ # 5 CONCLUSION
402
+
403
+ In this paper, we propose the FFT-OT method to solve the optimal transportation problem. According to the Brenier theory, under the quadratic distance cost, finding the solution to the OT problem is equivalent to solving the Monge-Ampère equation, which can be linearized as a sequence of variant coefficient elliptic PDEs. These variant coefficient PDEs are then approximated by constant coefficient PDEs and solved by the fast Fourier transform. We also prove that the proposed method converges linearly. Experiments on volumetric data show that FFT-OT can be used to sample from complex 3D density functions and to magnify volumetric data in medical images.
404
+
405
+ # ACKNOWLEDGEMENT
406
+
407
+ This research was partially supported by National Key R&D Program of China 2021YFA1003003 and NSFC No. 61936002, T2225012. This work was also partially supported by NIH 3R01LM012434-05S1, 1R21EB029733-01A1, NSF FAIN-2115095 and NSF CMMI-1762287.
408
+
409
+ # REFERENCES
410
+
411
+ Mokhtar Z. Alaya, Maxime Berar, Gilles Gasso, and Alain Rakotomamonjy. Screening sinkhorn algorithm for regularized optimal transport. In Advances in Neural Information Processing Systems 32, 2019.
412
+ Jose I. Aliaga, Ernesto Dufrechou, Pablo Ezzatti, and Enrique S. Quintana-Orti. An efficient gpu version of the preconditioned gmres method. The Journal of Supercomputing, 75, 2019.
413
+ Jason Altschuler, Jonathan Niles-Weed, and Philippe Rigollet. Near-linear time approximation algorithms for optimal transport via sinkhorn iteration. In Advances in Neural Information Processing Systems 30, 2017.
414
+ Jason Altschuler, Francis Bach, Alessandro Rudi, and Jonathan Niles-Weed. Massively scalable sinkhorn distances via the nystrom method. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/f55cadb97eaff2ba1980e001b0bd9842-Paper.pdf.
415
+ Dongsheng An, Yang Guo, Na Lei, Zhongxuan Luo, Shing-Tung Yau, and Xianfeng Gu. Ae-ot: A new generative model based on extended semi-discrete optimal transport. In International Conference on Learning Representations, 2020.
416
+ Dongsheng An, Na Lei, and Xianfeng Gu. Efficient optimal transport algorithm by accelerated gradient descent. In The Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), 2022.
417
+ Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, pp. 214-223, 2017.
418
+ F. Aurenhammer, F. Hoffmann, and B. Aronov. Minkowski-type theorems and least-squares clustering. Algorithmica, 1998.
419
+ Martin Bauer, Sarang Joshi, and Klas Modin. Diffeomorphic density matching by optimal information transport. SIAM Journal on Imaging Sciences, 8, 2015.
420
+ J.D. Benamou, Y. Brenier, and K. Guittet. The Monge-Kantorovitch mass transfer and its computational fluid mechanics formulation. International Journal for Numerical Methods in Fluids, 2002.
421
+ Jean-David Benamou, Brittany D. Froese, and Adam M. Oberman. Numerical solution of the optimal transportation problem using the monge-ampère equation. J. Comput. Phys, 2014.
422
+ Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
423
+ Y. Brenier. Polar decomposition and increasing rearrangement of vector fields. C. R. Acad. Sci. Paris Sr. I Math., 305(19):805-808, 1987.
424
+ Y. Brenier. Polar factorization and monotone rearrangement of vector-valued functions. Comm. Pure Appl. Math., 44(4):375-417, 1991.
425
+ Dario Cordero-Erausquin. Sur le transport de mesures périodiques. Comptes Rendus de l'Académie des Sciences - Series I - Mathematics, 329: 199-202, 1999.
426
+ N. Courty, R. Flamary, D. Tuia, and A. Rakotomamonjy. Optimal transport for domain adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(9):1853-1865, 2017.
427
+
428
+ Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transportation distances. In International Conference on Neural Information Processing Systems, 2013.
429
+ F. de Goes, K. Breeden, V. Ostromoukhov, and M. Desbrun. Blue noise through optimal transport. ACM Trans. Graph. (SIGGRAPH Asia), 31, 2012.
430
+ Philippe Delanoë. Classical solvability in dimension two of the second boundary-value problem associated with the Monge-Ampère operator. Annales de l'I.H.P. Analyse non linéaire, 8(5): 443-457, 1991.
431
+ Pavel Dvurechensky, Alexander Gasnikov, and Alexey Kroshnin. Computational optimal transport: Complexity by accelerated gradient descent is better than by sinkhorn's algorithm. In Proceedings of the 35th International Conference on Machine Learning. PMLR, 2018.
432
+ Suli Endre. Lecture Notes on Finite Element Methods for Partial Differential Equations. University of Oxford, 2020.
433
+ David Xianfeng Gu, Feng Luo, Jian Sun, and Shing-Tung Yau. Variational principles for minkowski type problems, discrete optimal transport, and discrete monge-ampère equations. *Asian Journal of Mathematics*, 2016.
434
+ Charles D. Hansen and Chris R. Johnson. Visualization Handbook. Academic Press, 2004.
435
+ Jun Kitagawa, Quentin Mérigot, and Boris Thibert. Convergence of a newton algorithm for semi-discrete optimal transport. Journal of the European Mathematical Society, 2019.
436
+ Na Lei and Xianfeng Gu. Fft-ot: A fast algorithm for optimal transportation. In Proceedings of International Conference on Computer Vision (ICCV), 2021.
437
+ Bruno Levy. A numerical algorithm for L2 semi-discrete optimal transport in 3d. ESAIM: M2AN, 49 (6):1693-1715, 2015.
438
+ Grégoire Loeper and Francesca Rapetti. Numerical solution of the monge-ampère equation by a newton's algorithm. C. R. Acad. Paris, pp. 319-324, 2005.
439
+ Robert J. McCann. A convexity principle for interacting gases. Advances in Mathematics, 128:153-179, 1997.
440
+ Quentin Merigot. A multiscale approach to optimal transport. Computer Graphics Forum., 2011.
441
+ Jennis Meyer-Spradow, Timo Ropinski, Jörg Mensmann, and Klaus H. Hinrichs. Voreen: A rapid-prototyping environment for ray-casting-based volume visualizations. IEEE Computer Graphics and Applications, 2009.
442
+ Georges Nader and Gael Guennebaud. Instant transport maps on 2d grids. ACM Trans. Graph., 37 (6), 2018.
443
+ Radford M. Neal. Slice sampling. The Annals of Statistics, 2003.
444
+ Nicolas Papadakis, Gabriel Peyre, and Edouard Oudet. Optimal transport with proximal splitting. SIAM Journal on Imaging Sciences, 2014.
445
+ Hélène Perrier, David Coeurjolly, Feng Xie, Matt Pharr, Pat Hanrahan, and Victor Ostromoukhov. Sequences with low-discrepancy blue-noise 2-d projections. Computer Graphics Forum, 2018.
446
+ Gabriel Peyre and Marco Cuturi. Computational optimal transport. Found. Trends Mach. Learn., 11 (5-6):355-607, 2019.
447
+ Yousef Saad. Iterative Methods For Sparse Linear Systems. Society of Industrial and Applied Mathematics, 2003.
448
+ Filippo Santambrogio. Optimal Transport for Applied Mathematicians. Springer, 2015.
449
+ Louis-Philippe Saumier, Martial Agueh, and Boualem Khouider. An efficient numerical algorithm for the $l^2$ optimal transport problem with periodic densities. IMA Journal of Applied Mathematics, 80:135-157, 2013.
450
+
451
+ Yuliy Schwartzburg, Romain Testuz, Andrea Tagliasacchi, and Mark Pauly. High-contrast computational caustic design. ACM Trans. Graph., 33(4), July 2014. ISSN 0730-0301. doi: 10.1145/2601097.2601200. URL https://doi.org/10.1145/2601097.2601200.
452
+ Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, and Leonidas Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (TOG), 2015.
453
+ Kehua Su, Wei Chen, Na Lei, Junwei Zhang, Kun Qian, and Xianfeng Gu. Volume preserving mesh parameterization based on optimal mass transportation. Comput. Aided Des., 82:42-56, 2017.
454
+ Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein auto-encoders. In ICLR, 2018.
455
+ Cédric Villani. Topics in Optimal transportation. AMS, 2003.
456
+ Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
457
+
458
+ # A RELATED WORK
459
+
460
+ There is a huge literature about optimal transportation. Here we will only briefly review the most related works. For detailed reviews, we refer readers to Santambrogio (2015); Peyre & Cuturi (2019).
461
+
462
+ The first type of algorithms is based on the Kantorovich theory. When both the input and output domains are Dirac masses, the Kantorovich problem can be treated as a standard linear programming (LP) task. In order to tackle large data sets, Cuturi (2013) adds an entropic regularizer to the original LP problem, and the regularized problem can be quickly solved by the Sinkhorn algorithm. Recently, various algorithms have been proposed to further accelerate the computation by improving the efficiency of matrix-vector multiplications, including the Greenkhorn Altschuler et al. (2017), Screenkhorn Alaya et al. (2019) and NYS-SINK Altschuler et al. (2019) algorithms. Dvurechensky et al. Dvurechensky et al. (2018) also propose the adaptive primal-dual accelerated gradient descent algorithm (APDAGD) to solve the discrete OT problem. An et al. An et al. (2022) compute the approximate OT plan by smoothing the dual Kantorovich problem and solving it with the FISTA method. These methods have limitations: (i) they only give transport plans and cannot produce bijective transportation maps; and (ii) their computational complexity is too high to apply them in scenarios with a huge number of samples.
463
+
464
+ The second type of algorithms is based on the Brenier theory Brenier (1987) and its intrinsic connection with convex geometry Gu et al. (2016). The semi-discrete OT algorithm proposed in Aurenhammer et al. (1998) finds the transport map between a continuous distribution and a discrete measure via a variational approach by dynamically constructing power diagrams. Its efficiency can be further improved Levy (2015); Merigot (2011) by a multi-resolution strategy. The algorithms proposed in Kitagawa et al. (2019); Su et al. (2017) also improve the efficiency by applying Newton's method. When both the source and target measures are continuous, some interpolation methods are necessary Schwartzburg et al. (2014). The major drawback of this type of algorithms is the high computational complexity of constructing the dynamic power diagram, which prevents them from handling high dimensional tasks. For example, for 3D OT problems, these algorithms usually run very slowly.
465
+
466
+ The third type of algorithms is based on computational fluid dynamics Benamou et al. (2002); Papadakis et al. (2014). These methods aim at finding a special temporal-spatial flow field that transports the initial source density to the target density with the minimal total kinetic energy. Then the diffeomorphism induced by the flow gives the optimal transport map under the quadratic Euclidean distance cost. However, such algorithms are difficult to extend to high dimensional spaces.
467
+
468
+ The fourth type of algorithms directly solves the Monge-Ampère equation using numerical methods. Loeper and Rapetti Loeper & Rapetti (2005) propose to solve the linearized Monge-Ampère equation defined on a flat torus in each iteration. Its corresponding variant coefficient elliptic PDE is converted to a positive definite linear system using the finite-difference scheme, which can be solved by the BiCG algorithm Endre (2020). Benamou et al. Benamou et al. (2014) propose to solve the linearized Monge-Ampère equation on more general domains using Newton's method. Nader and Guennebaud Nader & Guennebaud (2018) apply a similar discretization strategy and solve the Monge-Ampère equation by the conjugate gradient method. Saumier et al. Saumier et al. (2013) propose to solve the linearized Monge-Ampère equation using FFT. In each iteration, the elliptic PDE with spatially and temporally variant coefficients is converted to a group of linear equations in the frequency domain, which is solved by the GMRES algorithm. Although the GMRES algorithm can be implemented on GPUs Aliaga et al. (2019), there is no available open source code. The work in Saumier et al. (2013) focuses on the periodic boundary condition, whereas our proposed work focuses on the general second boundary condition; the work in Saumier et al. (2013) concerns planar OT maps, while ours emphasizes volumetric OT maps, which have higher complexity. The work in Saumier et al. (2013) can handle more general target measures, while the proposed work currently only deals with the Lebesgue target measure; nevertheless, the current work can be directly generalized to handle general target measures as well. Lei and Gu Lei & Gu (2021) use the fixed point method to compute the 2-dimensional OT problem based on FFT, but it cannot be extended to solve 3-dimensional problems.
469
+
470
+ In this work, we combine the idea of linearizing the Monge-Ampère equation Loeper & Rapetti (2005) and the idea of FFT Saumier et al. (2013). The key novelty of our proposed method is to use the mean linearized Monge-Ampère operator Eqn. (12) to replace the conventional linearized Monge-Ampère operator Eqn. (10). This replacement allows the algorithm to be implemented on GPUs and makes the algorithm hundreds of times faster. In the following, we solve the 3-dimensional optimal transport problem by applying the proposed algorithm. Our method also runs more than $100 \times$ faster than the convex geometry based method Levy (2015).
473
+
474
+ # B APPENDIX THEORY
475
+
476
+ In this section, we give detailed proofs for several lemmas and theorems. Some of them are well known in the Monge-Ampère PDE field and the applied mathematics field; we include them for completeness.
477
+
478
+ # B.1 EXISTENCE OF THE SOLUTION TO THE TIME DEPENDENT MONGE-AMPERE EQUATION
479
+
480
+ Let $\mathbb{T}^n = \mathbb{R}^n / \mathbb{Z}^n$ be the $n$ -dimensional flat torus. Below we sometimes identify it with $\Omega = [0,1]^n$ and assume all data are periodic. The existence and regularity of solutions to the Monge-Ampère equation are given by the following theorem,
481
+
482
+ Theorem 5. Suppose a positive density function $f: \Omega \to \mathbb{R}$ is defined on $\Omega = [0,1]^n$ , such that $\int_{\Omega} f(x) dx = 1$ , and $f \in C^{\alpha}(\Omega)$ ; then the solution $u: \Omega \times [0,1] \to \mathbb{R}$ to the time-dependent Monge-Ampère equation
483
+
484
+ $$
485
+ \det D _ {x} ^ {2} u (x, t) = (1 - t) + t f (x), \quad \nabla_ {x} u (x, t) (\Omega) = \Omega \tag {21}
486
+ $$
487
+
488
+ exists and is unique up to a constant. Furthermore, there exist constants $0 < \lambda < \Lambda$ , such that
489
+
490
+ $$
491
+ \lambda \sum_ {p = 1} ^ {n} \xi_ {p} ^ {2} \leq \sum_ {p, q = 1} ^ {n} u ^ {p q} (x, t) \xi_ {p} \xi_ {q} \leq \Lambda \sum_ {p = 1} ^ {n} \xi_ {p} ^ {2}, \quad \forall \xi \in \mathbb {R} ^ {n}, \forall (x, t) \in \Omega \times [ 0, 1 ]. \tag {22}
492
+ $$
493
+
494
+ We refer readers to Cordero-Erasquin (1999) for a detailed proof.
495
+
496
+ Weak Solution In practice, we compute the weak solution of the linearized Monge-Ampère Eqn. (6) using numerical methods. We first rewrite the differential operator in divergence form, and then define a bilinear form.
497
+
498
+ Since $(u^{pq}(x,t))$ is the adjoint matrix of $D_x^2 u(x,t)$ , by direct computation, we obtain
499
+
500
+ $$
501
+ \sum_ {p = 1} ^ {n} \partial_ {p} u ^ {p q} (x, t) = 0, \quad \forall (x, t) \in \Omega \times [ 0, 1 ], \quad \forall q = 1, \dots , n. \tag {23}
502
+ $$
503
+
504
+ Hence, Eqn. (6) can be converted into divergence form:
505
+
506
+ $$
507
+ \sum_ {p = 1} ^ {n} \partial_ {p} \left(\sum_ {q = 1} ^ {n} u ^ {p q} \partial_ {q} v\right) = \sum_ {p, q = 1} ^ {n} u ^ {p q} \partial_ {p} \partial_ {q} v + \sum_ {q = 1} ^ {n} \left(\sum_ {p = 1} ^ {n} \partial_ {p} u ^ {p q}\right) \partial_ {q} v = \sum_ {p, q = 1} ^ {n} u ^ {p q} \partial_ {p} \partial_ {q} v,
508
+ $$
509
+
510
+ we obtain
511
+
512
+ $$
513
+ \sum_ {p = 1} ^ {n} \partial_ {p} \left(\sum_ {q = 1} ^ {n} u ^ {p q} (x, t) \partial_ {q} v (x, t)\right) = f (x) - 1, \tag {24}
514
+ $$
515
+
516
+ with the Neumann boundary condition
517
+
518
+ $$
519
+ \frac {\partial v (x , t)}{\partial \mathbf {n}} = 0, \quad \forall (x, t) \in \partial \Omega \times [ 0, 1 ]. \tag {25}
520
+ $$
521
+
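+ As a sanity check, the cofactor identity Eqn. (23) can be verified symbolically in two dimensions; the following is a small sketch assuming SymPy is available:
+
+ ```python
+ import sympy as sp
+
+ x, y = sp.symbols('x y')
+ u = sp.Function('u')(x, y)
+
+ # Hessian of u and its adjugate, i.e., the cofactor matrix (u^{pq}) above
+ H = sp.Matrix([[u.diff(x, x), u.diff(x, y)],
+                [u.diff(y, x), u.diff(y, y)]])
+ adj = H.adjugate()
+
+ # The row-wise divergence of the adjugate vanishes, as stated in Eqn. (23)
+ for q in range(2):
+     print(sp.simplify(adj[0, q].diff(x) + adj[1, q].diff(y)))  # prints 0 twice
+ ```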
522
+ For any $w\in H^{1}(\Omega)$ , by the product rule, we obtain
523
+
524
+ $$
525
+ \sum_ {p = 1} ^ {n} \partial_ {p} \left(\sum_ {q = 1} ^ {n} u ^ {p q} \partial_ {q} v\right) w + \sum_ {p = 1} ^ {n} \left(\sum_ {q = 1} ^ {n} u ^ {p q} \partial_ {q} v\right) \partial_ {p} w = \sum_ {p = 1} ^ {n} \partial_ {p} \left[ \left(\sum_ {q = 1} ^ {n} u ^ {p q} \partial_ {q} v\right) w \right]
526
+ $$
527
+
528
+ Integrating both sides and using the fact that $v$ satisfies the Neumann boundary condition, we deduce
529
+
530
+ $$
531
+ \int_ {\Omega} \sum_ {p = 1} ^ {n} \partial_ {p} \left(\sum_ {q = 1} ^ {n} u ^ {p q} \partial_ {q} v\right) w + \int_ {\Omega} \sum_ {p, q = 1} ^ {n} u ^ {p q} \partial_ {q} v \partial_ {p} w = \int_ {\partial \Omega} \sum_ {p = 1} ^ {n} \left(\sum_ {q = 1} ^ {n} u ^ {p q} \partial_ {q} v\right) w = 0. \tag {26}
532
+ $$
533
+
534
+ For any fixed time $t \in [0,1]$ , by the divergence form, we can construct a bilinear form $a: H^1(\Omega) \times H^1(\Omega) \to \mathbb{R}$ and a linear form $l: H^1(\Omega) \to \mathbb{R}$ ,
535
+
536
+ $$
537
+ a (v, w) = \sum_ {p, q = 1} ^ {n} \int_ {\Omega} u ^ {p q} \partial_ {p} v \partial_ {q} w, \quad l (w) = - \int_ {\Omega} (f - 1) w d x. \tag {27}
538
+ $$
539
+
540
+ A weak solution to Eqn. (24) is a function $v \in H^{1}(\Omega)$ , such that
541
+
542
+ $$
543
+ a (v, w) = l (w), \quad \forall w \in H ^ {1} (\Omega). \tag {28}
544
+ $$
545
+
546
+ By the uniform ellipticity Eqn. (22), the Lax-Milgram theorem Endre (2020) shows the existence of the weak solution.
547
+
548
+ # B.2 DISCRETE LINEARIZED MONGE-AMPERE EQUATION SOLVABILITY
549
+
550
+ Galerkin Method In practice, we construct a triangulation $\mathcal{T}$ of $\Omega$ , such that the ratio between the diameter and the inscribed-sphere radius of each simplex is bounded, and the variation of the diameters of all the simplices is small. We call such a $\mathcal{T}$ a quasi-uniform triangulation, and denote the largest diameter by $h$ . For each vertex $v_{i} \in \mathcal{T}$ , we construct a piecewise linear basis function $\varphi_{i}$ , such that $\varphi_{i}$ is linear on each simplex and $\varphi_{i}(v_{j}) = \delta_{ij}$ . We define a finite dimensional subspace $V_{h} \subset H^{1}(\Omega)$ ,
551
+
552
+ $$
553
+ V _ {h} := \left\{v _ {h} (x) := \sum_ {v _ {i} \in \mathcal {T}} \lambda_ {i} \varphi_ {i} (x), \lambda_ {i} \in \mathbb {R} \right\}.
554
+ $$
555
+
556
+ Given a function $u \in H^{1}(\Omega)$ , we use $u_{h} \in V_{h}$ to denote its approximation in $V_{h}$ . Furthermore, writing $u_{h} = \sum_{i} \lambda_{i} \varphi_{i}$ , we also use $u_{h}$ to represent the coefficient vector $(\lambda_1, \lambda_2, \dots, \lambda_k)^T$ , depending on the context. The weak solution Eqn. (28) to the Monge-Ampère equation (6) is equivalent to finding a $v \in H^{1}(\Omega)$ such that $a(v, w) = l(w)$ for all $w \in H^{1}(\Omega)$ . In the discrete case, we want to find $v_{h} \in V_{h}$ , such that
557
+
558
+ $$
559
+ a \left(v _ {h}, w _ {h}\right) = l \left(w _ {h}\right), \quad \forall w _ {h} \in V _ {h}. \tag {29}
560
+ $$
561
+
562
+ Eqn. (29) is equivalent to the linear system,
563
+
564
+ $$
565
+ \left( \begin{array}{c c c c} a \left(\varphi_ {1}, \varphi_ {1}\right) & a \left(\varphi_ {2}, \varphi_ {1}\right) & \dots & a \left(\varphi_ {N}, \varphi_ {1}\right) \\ a \left(\varphi_ {1}, \varphi_ {2}\right) & a \left(\varphi_ {2}, \varphi_ {2}\right) & \dots & a \left(\varphi_ {N}, \varphi_ {2}\right) \\ \vdots & \vdots & & \vdots \\ a \left(\varphi_ {1}, \varphi_ {N}\right) & a \left(\varphi_ {2}, \varphi_ {N}\right) & \dots & a \left(\varphi_ {N}, \varphi_ {N}\right) \end{array} \right) \left( \begin{array}{c} \lambda_ {1} \\ \lambda_ {2} \\ \vdots \\ \lambda_ {N} \end{array} \right) = \left( \begin{array}{c} l \left(\varphi_ {1}\right) \\ l \left(\varphi_ {2}\right) \\ \vdots \\ l \left(\varphi_ {N}\right) \end{array} \right) \tag {30}
566
+ $$
567
+
568
+ From the weak solution to the linearized Monge-Ampère equation (10), we obtain the linear system Eqn. (30). We denote the stiffness matrix by $A = (a(\varphi_i, \varphi_j))$ . By the uniform ellipticity Eqn. (22) and $V_h \subset H^1(\Omega)$ ,
569
+
570
+ $$
571
+ a (v, v) \geq \lambda \| \nabla v \| _ {L ^ {2} (\Omega)} ^ {2}
572
+ $$
573
+
574
+ Assume $\int_{\Omega} v dx = 0$ ; by the Poincaré inequality,
575
+
576
+ $$
577
+ \| \nabla v \| _ {L ^ {2} (\Omega)} ^ {2} \geq C _ {1} (\Omega) \| v \| _ {L ^ {2} (\Omega)} ^ {2}, \quad \forall v \in H ^ {1} (\Omega), \int_ {\Omega} v d x = 0,
578
+ $$
579
+
580
+ where the constant $C_1(\Omega)$ depends on $\Omega$ . Combining the above two inequalities, we obtain
581
+
582
+ $$
583
+ a (v, v) \geq c \| v \| _ {L ^ {2} (\Omega)} ^ {2}, \quad \forall v \in H ^ {1} (\Omega), \int_ {\Omega} v d x = 0. \tag {31}
584
+ $$
585
+
586
+ Similarly, by the uniform ellipticity Eqn. (22) and $V_{h}\subset H^{1}(\Omega)$ ,
587
+
588
+ $$
589
+ a (v, v) \leq \Lambda \| \nabla v \| _ {L ^ {2} (\Omega)} ^ {2}
590
+ $$
591
+
592
+ For linear finite elements on a quasi-uniform triangulation, we have the inverse Poincaré inequality,
593
+
594
+ $$
595
+ \left\| \nabla v _ {h} \right\| _ {L ^ {2}} ^ {2} \leq C _ {2} (\Omega) h ^ {- 2} \left\| v _ {h} \right\| _ {L ^ {2}} ^ {2},
596
+ $$
597
+
598
+ where $h$ is the element diameter (the mesh size). Combining the above two inequalities, we obtain
599
+
600
+ $$
601
+ a \left(v _ {h}, v _ {h}\right) \leq C \| v _ {h} \| _ {L ^ {2} (\Omega)} ^ {2}, \quad \forall v _ {h} \in V _ {h}. \tag {32}
602
+ $$
603
+
604
+ By combining the inequalities Eqn. (31) and Eqn. (32), we obtain
605
+
606
+ $$
607
+ \frac {1}{C _ {3}} \| v _ {h} \| _ {L ^ {2} (\Omega)} ^ {2} \leq a (v _ {h}, v _ {h}) \leq C _ {3} \| v _ {h} \| _ {L ^ {2} (\Omega)} ^ {2}, \quad \forall v _ {h} \in V _ {h}, \int_ {\Omega} v _ {h} = 0, \tag {33}
608
+ $$
609
+
610
+ where $C_3 > 1$ is a constant. Suppose $v_{h} = \sum_{i = 1}^{n}\xi_{i}\varphi_{i}$ , then
611
+
612
+ $$
613
+ \| v _ {h} \| _ {L ^ {2} (\Omega)} ^ {2} = \int_ {\Omega} v _ {h} ^ {2} d x = \sum_ {i, j = 1} ^ {n} \xi_ {i} \xi_ {j} \int_ {\Omega} \varphi_ {i} (x) \varphi_ {j} (x) d x = \xi^ {T} \Phi \xi ,
614
+ $$
615
+
616
+ where $\xi = (\xi_{i})$ and the matrix $\Phi = \left(\int_{\Omega}\varphi_{i}\varphi_{j}\right)$ is positive definite. Therefore,
617
+
618
+ $$
619
+ \frac {1}{C _ {4}} \| \xi \| ^ {2} \leq \xi^ {T} \Phi \xi \leq C _ {4} \| \xi \| ^ {2}. \tag {34}
620
+ $$
621
+
622
+ By $a(v_h,v_h) = \xi^T A\xi$ , combining inequalities Eqn. (33) and Eqn. (34), we obtain
623
+
624
+ $$
625
+ \frac {1}{C _ {3} C _ {4}} \| \xi \| ^ {2} \leq \xi^ {T} A \xi \leq C _ {3} C _ {4} \| \xi \| ^ {2}, \quad \forall \xi \in \mathbb {R} ^ {n}, \sum_ {i = 1} ^ {n} \xi_ {i} = 0, \tag {35}
626
+ $$
627
+
628
+ where $C_3C_4 > 1$ . This proves the following lemma,
629
+
630
+ Lemma 6. Using the Galerkin method with linear elements to numerically approximate the weak solution Eqn. (28) to the linearized Monge-Ampère Eqn. (6), if the uniform ellipticity Eqn. (22) holds and the triangulation $\mathcal{T}$ is quasi-uniform, then the stiffness matrix of the linear system Eqn. (30) is positive definite on the subspace $\sum_{i=1}^{n} \xi_i = 0$ ,
631
+
632
+ $$
633
+ \frac {1}{C _ {3} C _ {4}} \| \xi \| ^ {2} \leq \xi^ {T} A \xi \leq C _ {3} C _ {4} \| \xi \| ^ {2}, \quad \forall \xi \in \mathbb {R} ^ {n}, \sum_ {i = 1} ^ {n} \xi_ {i} = 0, \tag {36}
634
+ $$
635
+
636
+ where $C_3C_4 > 1$ .
637
+
638
+ Since the uniform ellipticity Eqn. (22) holds for any time $t \in [0,1]$ , we obtain
639
+
640
+ Corollary 7. Using the Galerkin method with linear elements on quasi-uniform triangulations, the linearized Monge-Ampère equation in the continuity method Eqn. (6) always has a solution $v_h \in V_h$ for any $t \in [0,1]$ .
641
+
642
+ Please note that the central difference scheme can be treated as the Galerkin method on a special uniform triangulation. Therefore, the above estimates still hold.
643
+
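+ Lemma 6 can also be illustrated numerically on a toy case (our own illustration): take the 1D periodic second-difference matrix as a stand-in for the stiffness matrix $A$ , and check positive definiteness on the zero-sum subspace.
+
+ ```python
+ import numpy as np
+
+ n = 16
+ # 1D periodic second-difference matrix (a toy stand-in for the stiffness matrix A)
+ A = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)
+
+ # Orthonormal basis of the zero-sum subspace {xi : sum_i xi_i = 0}
+ Q, _ = np.linalg.qr(np.eye(n) - np.ones((n, n)) / n)
+ B = Q[:, :n - 1]  # drop the direction spanned by the constant vector
+
+ eigs = np.linalg.eigvalsh(B.T @ A @ B)
+ print(eigs.min() > 0)  # True: A is positive definite on the subspace
+ ```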
644
+ # B.3 CONVERGENCE RATE
645
+
646
+ Theorem 8 (main). Given a domain $\Omega \subset \mathbb{R}^n$ , which is a canonical cuboid $\Omega = [-1,1]^n$ , and a positive density function $f:\Omega \to \mathbb{R}$ with the balance condition
647
+
648
+ $$
649
+ \int_ {\Omega} f (x) d x = \int_ {\Omega} 1 \cdot d x,
650
+ $$
651
+
652
+ suppose the mirror reflection extension Eqn. (14) of $f$ to the flat torus $\tilde{f} : \mathbb{T}^n \to \mathbb{R}$ is $C^\alpha$ , $\alpha \in (0,1)$ ; then the Monge-Ampère equation,
653
+
654
+ $$
655
+ \det D ^ {2} u (x) = f (x), \quad \nabla u (\Omega) = \Omega
656
+ $$
657
+
658
+ can be solved using the FFT-OT algorithm (Alg. 1). In particular, one can choose the step length parameter $\tau$ such that there is a constant $0 < \gamma < 1$ for which the approximation error satisfies
659
+
660
+ $$
661
+ \left\| f - \rho_ {k + 1} \right\| ^ {2} < C \gamma^ {k},
662
+ $$
663
+
664
+ namely the algorithm has a linear convergence rate.
665
+
666
+ Proof. Suppose at the $(k + 1)$ -th iteration, $\rho_{k + 1} = \operatorname*{det}(I + D^2 u_{k + 1})$ and $\| v_k\| \sim O(\tau^{-1})$ ; then
667
+
668
+ $$
669
+ \begin{array}{l} f - \rho_ {k + 1} = f - \det (I + D ^ {2} u _ {k} + D ^ {2} v _ {k}) \\ = f - \det (I + D ^ {2} u _ {k}) - \sum_ {p q} u _ {k} ^ {p q} \partial_ {p} \partial_ {q} v _ {k} + o (\tau^ {- 1}) \\ = \left(f - \rho_ {k}\right) - L _ {k} \left[ v _ {k} \right] + o \left(\tau^ {- 1}\right) \\ \end{array}
670
+ $$
671
+
672
+ where $L_{k}[v_{k}] = \sum_{pq}u_{k}^{pq}\partial_{p}\partial_{q}v_{k}$ . Hence by integration by parts Eqn. (27),
673
+
674
+ $$
675
+ \begin{array}{l} \left\| f - \rho_ {k + 1} \right\| _ {L ^ {2} (\Omega)} ^ {2} = \left\| f - \rho_ {k} \right\| _ {L ^ {2} (\Omega)} ^ {2} - 2 \int_ {\Omega} L _ {k} [ v _ {k} ] (f - \rho_ {k}) + o (\tau^ {- 1}) \\ = \left\| f - \rho_ {k} \right\| _ {L ^ {2} (\Omega)} ^ {2} + 2 a _ {k} (f - \rho_ {k}, v _ {k}) + o (\tau^ {- 1}) \\ \end{array}
676
+ $$
677
+
678
+ where $a_{k}$ is the bilinear form in Eqn. (27). In the discrete case, all functions are in $V_{h}$ , and we denote
679
+
680
+ $$
681
+ \| u _ {h} \| _ {\Phi} ^ {2} := \| u _ {h} \| _ {L ^ {2} (\Omega)} ^ {2} = u _ {h} ^ {T} \Phi u _ {h}, \quad \| u _ {h} \| ^ {2} := u _ {h} ^ {T} u _ {h}, \quad \| u _ {h} \| _ {A} ^ {2} := u _ {h} ^ {T} A u _ {h},
682
+ $$
683
+
684
+ by the inequalities Eqn. (34) and Eqn. (35),
685
+
686
+ $$
687
+ \frac {1}{C _ {4}} \| u _ {h} \| ^ {2} \leq \| u _ {h} \| _ {\Phi} ^ {2} \leq C _ {4} \| u _ {h} \| ^ {2}, \quad \frac {1}{C _ {3} C _ {4}} \| u _ {h} \| ^ {2} \leq \| u _ {h} \| _ {A} ^ {2} \leq C _ {3} C _ {4} \| u _ {h} \| ^ {2}.
688
+ $$
689
+
690
+ Therefore
691
+
692
+ $$
693
+ \left\| f _ {h} - \rho_ {h, k + 1} \right\| _ {\Phi} ^ {2} = \left\| f _ {h} - \rho_ {h, k} \right\| _ {\Phi} ^ {2} - 2 \tau^ {- 1} \left(f _ {h} - \rho_ {h, k}\right) ^ {T} A _ {k} \bar {A} _ {k} ^ {- 1} \left(f _ {h} - \rho_ {h, k}\right) + o \left(\tau^ {- 1}\right), \tag {37}
694
+ $$
695
+
696
+ where $A_{k}$ is the stiffness matrix in Eqn. (30), and $\bar{A}_k$ is the mean stiffness matrix. (By the uniform ellipticity Eqn. (22), the eigenvalues of the adjoint matrix $(u^{pq})(x,t)$ are uniformly bounded away from zero on the space $\mathcal{H} := \{\xi \in \mathbb{R}^n | \sum_i \xi_i = 0\}$ , so the eigenvalues of the mean adjoint matrix $\bar{u}^{pq}(t)$ are bounded away from zero on $\mathcal{H}$ . After discretization, the eigenvalues of $\bar{A}_k$ are strictly positive on $\mathcal{H}$ , hence $\bar{A}_k$ is invertible on $\mathcal{H}$ . In the following discussion, the term $o(\tau^{-1})$ will be ignored.) Remark that the following displayed quantity is a scalar:
697
+
698
+ $$
699
+ \left(f _ {h} - \rho_ {h, k}\right) ^ {T} A _ {k} \bar {A} _ {k} ^ {- 1} (f _ {h} - \rho_ {h, k}) = \mathrm {t r} \left(\left(f _ {h} - \rho_ {h, k}\right) ^ {T} A _ {k} \bar {A} _ {k} ^ {- 1} (f _ {h} - \rho_ {h, k})\right)
700
+ $$
701
+
702
+ Since $A_{k}$ and $\bar{A}_{k}$ are symmetric and positive definite on the space $\sum_{i}\xi_{i} = 0$ , i.e., the space orthogonal to $(1,1,\ldots ,1)^T$ , we have $\| A_k\| _2\leq C_3C_4$ and $\| \bar{A}_k\| _2\leq C_3C_4$ , and the same bound holds for their inverses. Hence, by Eqn. (35) and $\| A_k\bar{A}_k^{-1}\| \leq \| A_k\| \| \bar{A}_k^{-1}\|$ , we have
703
+
704
+ $$
705
+ \frac {(n - 1)}{C _ {3} ^ {2} C _ {4} ^ {3}} \| f _ {h} - \rho_ {h, k} \| _ {\Phi} ^ {2} \leq \left(f _ {h} - \rho_ {h, k}\right) ^ {T} A _ {k} \bar {A} _ {k} ^ {- 1} (f _ {h} - \rho_ {h, k}).
706
+ $$
707
+
708
+ Plugging this into Eqn. (37), we have
709
+
710
+ $$
711
+ \left\| f _ {h} - \rho_ {h, k + 1} \right\| _ {\Phi} ^ {2} \leq \left(1 - \frac {1}{\tau} \frac {(n - 1)}{C _ {3} ^ {2} C _ {4} ^ {3}}\right) \left\| f _ {h} - \rho_ {h, k} \right\| _ {\Phi} ^ {2} \leq \left(1 - \frac {1}{\tau} \frac {(n - 1)}{C _ {3} ^ {2} C _ {4} ^ {3}}\right) ^ {k} \left\| f _ {h} - \rho_ {h, 0} \right\| _ {\Phi} ^ {2}. \tag {38}
712
+ $$
713
+
714
+ We can choose the step length parameter $\tau$ , such that $\gamma \in (0, 1)$ , where
715
+
716
+ $$
717
+ \gamma = 1 - \frac {(n - 1)}{\tau C _ {3} ^ {2} C _ {4} ^ {3}}.
718
+ $$
719
+
720
+ Therefore
721
+
722
+ $$
723
+ \left\| f _ {h} - \rho_ {h, k + 1} \right\| _ {\Phi} ^ {2} \leq \gamma^ {k} \left\| f _ {h} - \rho_ {h, 0} \right\| _ {\Phi} ^ {2} \leq C _ {4} \gamma^ {k} \left\| f _ {h} - \rho_ {h, 0} \right\| ^ {2}. \tag {39}
724
+ $$
725
+
726
+ ![](images/9d32cf41dd420f054c911100857edaa1d69208dd0df5a90d593bd716b8f123a5.jpg)
727
+
728
+ # B.4 DIFFERENTIAL OPERATOR USING FFT
729
+
730
+ By using the Discrete Fourier Transform, differential operators can be converted to algebraic operators in the frequency domain.
731
+
732
+ Lemma 9. Suppose the discrete function is $u_{i,j,k}$ , with the discrete Fourier transform
733
+
734
+ $$
735
+ u _ {i, j, k} = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} e ^ {\sqrt {- 1} \frac {2 \pi m i}{M}} e ^ {\sqrt {- 1} \frac {2 \pi n j}{N}} e ^ {\sqrt {- 1} \frac {2 \pi l k}{L}}
736
+ $$
737
+
738
+ then the second-order central difference operator $\partial_i\partial_i u_{i,j,k}$ is given by
739
+
740
+ $$
741
+ \begin{array}{l} \partial_ {i} \partial_ {i} u _ {i, j, k} = \frac {1}{h _ {x} ^ {2}} \left(u _ {i + 1, j, k} + u _ {i - 1, j, k} - 2 u _ {i, j, k}\right) \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {2 \left(\cos \frac {2 \pi m}{M} - 1\right)}{h _ {x} ^ {2}} e ^ {\iota \frac {2 \pi m i}{M}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ \end{array}
742
+ $$
743
+
744
+ where $\iota = \sqrt{-1}$ , and $\partial_i\partial_ju_{i,j,k}$ is given by,
745
+
746
+ $$
747
+ \begin{array}{l} \partial_ {i} \partial_ {j} u _ {i, j, k} = \frac {1}{4 h _ {x} h _ {y}} \left(u _ {i + 1, j + 1, k} + u _ {i - 1, j - 1, k} - u _ {i + 1, j - 1, k} - u _ {i - 1, j + 1, k}\right) \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {- \sin \frac {2 \pi m}{M} \sin \frac {2 \pi n}{N}}{h _ {x} h _ {y}} e ^ {\iota \frac {2 \pi m i}{M}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ \end{array}
748
+ $$
749
+
750
+ Proof. By equations
751
+
752
+ $$
753
+ \begin{array}{l} \cos (A + \alpha) + \cos (A - \alpha) - 2 \cos (A) \\ = (\cos A \cos \alpha - \sin A \sin \alpha) + (\cos A \cos \alpha + \sin A \sin \alpha) - 2 \cos A \\ = 2 (\cos \alpha - 1) \cos A \\ \end{array}
754
+ $$
755
+
756
+ and
757
+
758
+ $$
759
+ \begin{array}{l} \sin (A + \alpha) + \sin (A - \alpha) - 2 \sin (A) \\ = (\sin A \cos \alpha + \cos A \sin \alpha) + (\sin A \cos \alpha - \cos A \sin \alpha) - 2 \sin A \\ = 2 (\cos \alpha - 1) \sin A \\ \end{array}
760
+ $$
761
+
762
+ we obtain
763
+
764
+ $$
765
+ \frac {1}{h _ {x} ^ {2}} \left[ e ^ {\iota \frac {2 \pi m (i + 1)}{M}} + e ^ {\iota \frac {2 \pi m (i - 1)}{M}} - 2 e ^ {\iota \frac {2 \pi m i}{M}} \right] = \frac {2 \left(\cos \frac {2 \pi m}{M} - 1\right)}{h _ {x} ^ {2}} e ^ {\iota \frac {2 \pi m i}{M}}
766
+ $$
767
+
768
+ by direct computation, we have
769
+
770
+ $$
771
+ \begin{array}{l} \partial_ {i} \partial_ {i} u _ {i, j, k} = \frac {1}{h _ {x} ^ {2}} (u _ {i + 1, j, k} + u _ {i - 1, j, k} - 2 u _ {i, j, k}) \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {e ^ {\iota \frac {2 \pi m (i + 1)}{M}} + e ^ {\iota \frac {2 \pi m (i - 1)}{M}} - 2 e ^ {\iota \frac {2 \pi m i}{M}}}{h _ {x} ^ {2}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {2 (\cos \frac {2 \pi m}{M} - 1)}{h _ {x} ^ {2}} e ^ {\iota \frac {2 \pi m i}{M}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ \end{array}
772
+ $$
773
+
774
+ Similarly, by equations
775
+
776
+ $$
777
+ \begin{array}{l} \cos (A + \alpha + B + \beta) + \cos (A - \alpha + B - \beta) - \cos (A + \alpha + B - \beta) - \cos (A - \alpha + B + \beta) \\ = \cos (A + B + \alpha + \beta) + \cos (A + B - \alpha - \beta) - \cos (A + B + \alpha - \beta) - \cos (A + B - \alpha + \beta) \\ = 2 \cos (A + B) \cos (\alpha + \beta) - 2 \cos (A + B) \cos (\alpha - \beta) \\ = 2 \cos (A + B) (\cos (\alpha + \beta) - \cos (\alpha - \beta)) \\ = 2 \cos (A + B) (\cos \alpha \cos \beta - \sin \alpha \sin \beta - \cos \alpha \cos \beta - \sin \alpha \sin \beta) \\ = - 4 \cos (A + B) \sin \alpha \sin \beta \\ \end{array}
778
+ $$
779
+
780
+ and
781
+
782
+ $$
783
+ \begin{array}{l} \sin (A + \alpha + B + \beta) + \sin (A - \alpha + B - \beta) - \sin (A + \alpha + B - \beta) - \sin (A - \alpha + B + \beta) \\ = \sin (A + B + \alpha + \beta) + \sin (A + B - \alpha - \beta) - \sin (A + B + \alpha - \beta) - \sin (A + B - \alpha + \beta) \\ = 2 \sin (A + B) \cos (\alpha + \beta) - 2 \sin (A + B) \cos (\alpha - \beta) \\ = 2 \sin (A + B) (\cos (\alpha + \beta) - \cos (\alpha - \beta)) \\ = 2 \sin (A + B) (\cos \alpha \cos \beta - \sin \alpha \sin \beta - \cos \alpha \cos \beta - \sin \alpha \sin \beta) \\ = - 4 \sin (A + B) \sin \alpha \sin \beta \\ \end{array}
784
+ $$
785
+
786
+ we deduce the following equation,
787
+
788
+ $$
789
+ \begin{array}{l} \partial_ {i} \partial_ {j} u _ {i, j, k} = \frac {1}{4 h _ {x} h _ {y}} \left(u _ {i + 1, j + 1, k} + u _ {i - 1, j - 1, k} - u _ {i + 1, j - 1, k} - u _ {i - 1, j + 1, k}\right) \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {- \sin \frac {2 \pi m}{M} \sin \frac {2 \pi n}{N}}{h _ {x} h _ {y}} e ^ {\iota \frac {2 \pi m i}{M}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ \end{array}
790
+ $$
791
+
792
+ ![](images/128785ac32bd0a8f19798902b6a1d6a700bd904a841a4eb14140d17f13a3b664.jpg)
793
+
794
+ Similarly, we have the representations of other differential operators in the frequency domain,
795
+
796
+ $$
797
+ \begin{array}{l} \partial_ {j} \partial_ {j} u _ {i, j, k} = \frac {1}{h _ {y} ^ {2}} \left(u _ {i, j + 1, k} + u _ {i, j - 1, k} - 2 u _ {i, j, k}\right) \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {2 (\cos \frac {2 \pi n}{N} - 1)}{h _ {y} ^ {2}} e ^ {\iota \frac {2 \pi m i}{M}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ \end{array}
798
+ $$
799
+
800
+ $$
801
+ \begin{array}{l} \partial_ {k} \partial_ {k} u _ {i, j, k} = \frac {1}{h _ {z} ^ {2}} \left(u _ {i, j, k + 1} + u _ {i, j, k - 1} - 2 u _ {i, j, k}\right) \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {2 \left(\cos \frac {2 \pi l}{L} - 1\right)}{h _ {z} ^ {2}} e ^ {\iota \frac {2 \pi m i}{M}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ \end{array}
802
+ $$
803
+
804
+ $$
805
+ \begin{array}{l} \partial_ {j} \partial_ {k} u _ {i, j, k} = \frac {1}{4 h _ {y} h _ {z}} \left(u _ {i, j + 1, k + 1} + u _ {i, j - 1, k - 1} - u _ {i, j + 1, k - 1} - u _ {i, j - 1, k + 1}\right) \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {- \sin \frac {2 \pi n}{N} \sin \frac {2 \pi l}{L}}{h _ {y} h _ {z}} e ^ {\iota \frac {2 \pi m i}{M}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ \end{array}
806
+ $$
807
+
808
+ $$
809
+ \begin{array}{l} \partial_ {k} \partial_ {i} u _ {i, j, k} = \frac {1}{4 h _ {z} h _ {x}} \left(u _ {i + 1, j, k + 1} + u _ {i - 1, j, k - 1} - u _ {i + 1, j, k - 1} - u _ {i - 1, j, k + 1}\right) \\ = \frac {1}{M N L} \sum_ {m = 0} ^ {M - 1} \sum_ {n = 0} ^ {N - 1} \sum_ {l = 0} ^ {L - 1} \hat {u} _ {m, n, l} \frac {- \sin \frac {2 \pi l}{L} \sin \frac {2 \pi m}{M}}{h _ {z} h _ {x}} e ^ {\iota \frac {2 \pi m i}{M}} e ^ {\iota \frac {2 \pi n j}{N}} e ^ {\iota \frac {2 \pi l k}{L}} \\ \end{array}
810
+ $$
811
+
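+ The multipliers in Lemma 9 can be checked numerically: applying the central difference directly and multiplying by the symbol in the frequency domain agree to machine precision. A 1D NumPy sketch (the grid size and test function are our own choices):
+
+ ```python
+ import numpy as np
+
+ M = 64
+ h = 1.0 / M
+ x = np.arange(M) * h
+ u = np.sin(2 * np.pi * x) + 0.3 * np.cos(4 * np.pi * x)  # periodic test function
+
+ # Central second difference with periodic (torus) indexing
+ d2u_fd = (np.roll(u, -1) + np.roll(u, 1) - 2 * u) / h**2
+
+ # The same operator applied as a diagonal multiplier in the frequency domain
+ m = np.arange(M)
+ symbol = 2.0 * (np.cos(2 * np.pi * m / M) - 1.0) / h**2
+ d2u_fft = np.real(np.fft.ifft(symbol * np.fft.fft(u)))
+
+ print(np.max(np.abs(d2u_fd - d2u_fft)))  # ~1e-12, i.e., machine precision
+ ```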
812
+ # C ALGORITHM PIPELINES
813
+
814
+ In this section, we give the algorithm pipeline of FFT-OT in Alg. 1 and the details of solving the constant coefficient elliptic PDE through FFT in Alg. 2.
815
+
816
+ # Algorithm 1: FFT-OT
817
+
818
+ Input: Domain $\Omega = [-1, 1]^3$ , the source density function $f > 0$ , the target density $g = 1$ , step length $\tau$ , approximation error threshold $\varepsilon$
819
+
820
+ Output: Solution $\frac{1}{2}\| x\|^2 + u_n$ to the Monge-Ampère Eqn. (2) with the corresponding boundary condition.
821
+
822
+ Initialize $u_0(x) = 0$
823
+
824
+ while true do
825
+
826
+ Compute the Hessian matrix $D^2 u_n(x)$
827
+
828
+ Compute the density function $\rho_{n}(x)\gets \operatorname *{det}(I + D^{2}u_{n}(x))$
829
+
830
+ if $\| f - \rho_n\|_{L_2(\Omega)} < \varepsilon$ then
831
+
832
+ Break;
833
+
834
+ Compute the adjoint matrix $[H_n^{pq}(x)]\gets \mathrm{Adj}(I + D^2 u_n(x))$
835
+
836
+ Compute the mean adjoint matrix $[H_n^{pq}]$ using Eqn. (11);
837
+
838
+ Solve the constant coefficient elliptic PDE (12) using the FFT Solver Alg. 2;
839
+
840
+ Update the Brenier potential $u_{n + 1}(x) \gets u_n + \tau v_n$ ;
841
+
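+ For concreteness, the loop of Alg. 1 can be sketched in NumPy as follows. This only illustrates the control flow: `hessian_fd` (central-difference Hessian), `adjugate3` (per-cell $3\times 3$ adjoint), and `fft_solver` (Alg. 2, sketched below) are assumed helper names, not part of any released code.
+
+ ```python
+ import numpy as np
+
+ def fft_ot(f, h, tau, eps, max_iter=1000):
+     """Sketch of Alg. 1 on an M x N x L grid; f is the sampled source density."""
+     u = np.zeros_like(f)
+     for _ in range(max_iter):
+         H = hessian_fd(u, h)                # D^2 u_n, shape (M, N, L, 3, 3)
+         rho = np.linalg.det(np.eye(3) + H)  # det(I + D^2 u_n), per grid cell
+         if np.sqrt(np.mean((f - rho) ** 2)) < eps:
+             break
+         # Mean adjoint matrix over the grid, Eqn. (11)
+         Hbar = adjugate3(np.eye(3) + H).mean(axis=(0, 1, 2))
+         v = fft_solver(f - rho, Hbar, h)    # mean linearized step, Eqn. (12)
+         u = u + tau * v                     # update the Brenier potential
+     return u
+ ```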
842
+ ![](images/d61c13d938ba3e3e011a9da39cfe736b7b910a39b97e25db82c6206119535e6e.jpg)
843
+
844
+ ![](images/44e79679a69958eabe685acf804b31adb3d9599663bed582cdc5120bd078deb4.jpg)
845
+
846
+ ![](images/1e925c9b7f9647892a7ea3fbfddb61e80987d81ba2f6646e4581b685a2fb9402.jpg)
847
+
848
+ ![](images/fbb2d06d3357e0337f05a41ceb38dde94cda28eb36ddfaed59169e9c28b735c1.jpg)
849
+ (a) Density
850
+
851
+ ![](images/3ae8a214333cab2e75560cf2faed28321156b390f144dbbdac2346938375b192.jpg)
852
+
853
+ ![](images/7a32bc28f59db7be75eb28365f52fb5978fc247d00945738d29d8b27caf7778c.jpg)
854
+
855
+ ![](images/e494bc42a1aee6690ee63f3e1884e1f5b88d7d1c5347206571a5bd629f439cae.jpg)
856
+
857
+ ![](images/4126d843e81326161452112c5f97ad88a01e118bcc4743555b0a982acb664266.jpg)
858
+ (b) Rejection
859
+
860
+ ![](images/1f5a7553399bb362b5bd8eed40380cd780f118852192b67f50de4b54c3f6c612.jpg)
861
+
862
+ ![](images/b637a1f2ee6ff502245d3ce9ab33b5a84d0d1e67c0550b102ee5658949ea98c9.jpg)
863
+
864
+ ![](images/8ee81eb856ffcc90253543aece5f629617a0c4b424a18fdddd4d4cb6a1953052.jpg)
865
+
866
+ ![](images/58ae9947b1a4e94f5726fd435e640a4184c32f9790027899c0b277b8abe3ab76.jpg)
867
+ (c) MH
868
+
869
+ ![](images/2d347f904fa60d1b03ebaa080f4fd81a76d3650db8736de9e8879d7c30cb0ce7.jpg)
870
+
871
+ ![](images/ec24c9178f566cf4b8f34ca5fa4af2218552888e0a66d54fccc9a561be2d7ab5.jpg)
872
+
873
+ ![](images/5cb225d82aa21de75571e1da0d0cd4719d28deccd58fbbb9d338b0e80644afd2.jpg)
874
+
875
+ ![](images/02063de3e6ff0a0c1fc406d3985036a189531ee95712e25fed72960b5493b688.jpg)
876
+ (d) Slice
877
+
878
+ ![](images/af1aa4f0612f357b809fff64e0055dcd3eb08a278a3526ead2603686e412890b.jpg)
879
+
880
+ ![](images/f7e41c15e16491b309e865d95b76e88783dc5c13fc07702f1ef3b8f9db2cd246.jpg)
881
+
882
+ ![](images/80125737bb277afa86ba721eeb59fe6a247694e77ec0c822bc2ff888c8b3b27c.jpg)
883
+
884
+ ![](images/fa1bf5ab48c383d9d7520f197517751a02857b01e395036a260f7ca4df30098e.jpg)
885
+ (e) Ours-rand
886
+
887
+ ![](images/ed40012dcc547db48f95acbf7f256edfeb9868e75cb741cda923f9ab383df9ea.jpg)
888
+
889
+ ![](images/56efa3f542a9c1d5eeaacc4e0771ff63903e8ad30a5981e2d2b9b9ff6a17c45d.jpg)
890
+
891
+ ![](images/2aa7363eaa0da9ad29c7a1e8a9519b144066355e9cbf1199d988c6a33c228c87.jpg)
892
+
893
+ ![](images/cff095d6b8df797d938236d52afdc9be601480f956c2306b2b11b66a54c0d80b.jpg)
894
+ (f) Ours-grid
895
+ Figure 5: 3D density function sampling. (a) The density functions in different slices of the same model, namely the 40th, 56th, 72nd and 80th. (b)-(f) The samples obtained by different sampling methods. (b) Rejection sampling. (c) Metropolis-Hastings (MH) algorithm Bishop (2006). (d) Slice sampling Neal (2003). (e) The sampling results obtained by mapping random samples from the uniform distribution back to the desired distribution with $T^{-1}$ . (f) The sampling results obtained by mapping the grid centers back with $T^{-1}$ . The scores at the top right give the results of the Chi-square goodness-of-fit test. Smaller is better. Zoom in for better visualization.
896
+
897
+ # Algorithm 2: FFT Solver for the Constant Coefficient Elliptic PDE
898
+
899
+ Input: Domain $\Omega = [-1,1]^3$ , $M,N,L$ , $\{a^{pq}\}$ , $b^r$ , $c$ , function $f$ with the periodic boundary condition
900
+
901
+ Output: Solution $u$ to the elliptic PDE Eqn. (18)
902
+
903
+ Discretize the domain $\Omega$ to a $M\times N\times L$ grid;
904
+
905
+ Sample the function $f$ to $f_{i,j,k}$
906
+
907
+ Compute FFT using Eqn. (16), $\{\hat{f}_{m,n,l}\} \gets \mathrm{FFT}(\{f_{i,j,k}\})$ ;
908
+
909
+ for $(m,n,l)\in [0,M - 1]\times [0,N - 1]\times [0,L - 1]$ do
910
+
911
+ Compute the factor $\lambda_{m,n,l}$ using Eqn. (19);
912
+
913
+ if $\lambda_{m,n,l}$ is 0 then
914
+
915
+ $\hat{u}_{m,n,l}\gets 0;$
916
+
917
+ else
918
+
919
+ $\hat{u}_{m,n,l}\gets \hat{f}_{m,n,l} / \lambda_{m,n,l};$
920
+
921
+ Compute the Inverse FFT using Eqn. (17), $\{u_{i,j,k}\} \gets \mathrm{IFFT}(\{\hat{u}_{m,n,l}\})$ ;
922
+
923
+ Return $\{u_{i,j,k}\}$
924
+
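+ A minimal NumPy sketch of Alg. 2 for the pure second-order case ($b^r = c = 0$ ; the general factor in Eqn. (19) would also include the first- and zeroth-order symbols) is given below. It uses exactly the central-difference multipliers of Lemma 9 and handles the zero frequency $\lambda_{0,0,0} = 0$ as in the pseudocode above.
+
+ ```python
+ import numpy as np
+
+ def fft_solver(f, a, h):
+     """f: (M, N, L) right-hand side; a: constant symmetric 3x3 coefficients
+     {a^{pq}}; h: grid spacings (h_x, h_y, h_z). Periodic boundary condition."""
+     M, N, L = f.shape
+     m, n, l = np.meshgrid(np.arange(M), np.arange(N), np.arange(L), indexing='ij')
+     s = [np.sin(2 * np.pi * m / M), np.sin(2 * np.pi * n / N), np.sin(2 * np.pi * l / L)]
+     c = [np.cos(2 * np.pi * m / M), np.cos(2 * np.pi * n / N), np.cos(2 * np.pi * l / L)]
+     # Diagonal (second difference) and off-diagonal (mixed difference) symbols
+     lam = sum(a[p, p] * 2 * (c[p] - 1) / h[p] ** 2 for p in range(3))
+     lam += sum(-2 * a[p, q] * s[p] * s[q] / (h[p] * h[q])
+                for p in range(3) for q in range(p + 1, 3))
+     f_hat = np.fft.fftn(f)
+     u_hat = np.where(lam != 0, f_hat / np.where(lam != 0, lam, 1), 0)  # lam = 0: set 0
+     return np.real(np.fft.ifftn(u_hat))
+ ```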
925
+ # D APPENDIX EXPERIMENTS
926
+
927
+ In this section, as a compensation of the experiments in the main paper, we give more results on the 3D adaptive sampling and volumetric magnifier.
928
+
929
+ # D.1 MORE RESULTS ON 3D ADAPTIVE SAMPLING
930
+
931
+ In the experiments, we set the density function $f(x) = \sum_{i=1}^{30} p_i \mathcal{N}(\mu_i, \Sigma_i)$ , where $\mathcal{N}(\mu_i, \Sigma_i)$ represents the Gaussian distribution with mean $\mu_i$ and covariance $\Sigma_i = \mathrm{diag}(\sigma_{i0}^2, \sigma_{i1}^2, \sigma_{i2}^2)$ . $\mu_i \in \mathbb{R}^3$ is uniformly sampled from $[0,1]^3$ , $\sigma_{ij}$ is uniformly sampled from $[0,0.5]$ , and $p_i \in \mathbb{R}$ is uniformly sampled from $[0.2,1]$ and normalized such that $\int_{\Omega} f(x) dx = 1$ . Thus the source distribution $\mu$ is a complicated Gaussian mixture distribution restricted to $\Omega = [0,1]^3$ . After computing the OT map
932
+
933
+ ![](images/b632fd714fbfdb632a3fadf61432a89b4935cba5a58458f57721ec541b864b8a.jpg)
934
+
935
+ ![](images/ce168d8abe6ef7856881a01379eafc45ff484d3f482f28bbd9121ee6f360fcd8.jpg)
936
+
937
+ ![](images/1d0ed36963385a1cdeb5ba7d7491cde70599c9d615925ec1eb0eb33d253da61a.jpg)
938
+
939
+ ![](images/eb9ce5bc43fe4ea1e02f33564d44774cad8052be6a1d9599c507358664a72adc.jpg)
940
+ (a) Density
941
+
942
+ ![](images/3a3dafa3734f7c62672195aae43f7933770947304af30b77cb539a98c00c056c.jpg)
943
+
944
+ ![](images/8a81d3b1245a6f48c68ece122d2b992520da7cf082d951fe4aa566e2d2b3fd5d.jpg)
945
+
946
+ ![](images/251020e575b7b5f86fb186ef23a404656f39073fc82867552527c84baf964ab9.jpg)
947
+
948
+ ![](images/3ccc7c884bf260d3d7a589e79b9b4fa1d982905da5ec2d50108ac999beea27b7.jpg)
949
+ (b) Rejection
950
+
951
+ ![](images/eaf96e2003f1ea59584af9725a79164f6b33ab7b581efab41acb30dee13b0bf8.jpg)
952
+
953
+ ![](images/fd227b3612f179fff0a571758e1efca8a888148821fc29a10a544659f4bd65e7.jpg)
954
+
955
+ ![](images/8e0e7a79899d3ecec4623468e3128ae43c89a6526bdb894bf3beecf22dd3eddd.jpg)
956
+
957
+ ![](images/80942969ab76543868ba09a0801227a57eb9b33db9897b7f213aa7c1f5bc3163.jpg)
958
+ (c) MH
959
+
960
+ ![](images/28c4c3972de99557357c14c6b87619ff7a34e1166bc578df3c195d38f4ea8a05.jpg)
961
+
962
+ ![](images/cf31df0e7b0570b35ed2fca7512ae315ab5ca51d3b38856e83b45d9d5e4b9902.jpg)
963
+
964
+ ![](images/0ecdda00548d3aec475ef7e2e5395252966aa131b202cd34fc6a6ce13f498fb3.jpg)
965
+
966
+ ![](images/40d3fab3b9d3f33210e869c2778b9abe4e6549645280d60ee8b9fbaa682466a3.jpg)
967
+ (d) Slice
968
+ Figure 6: 3D density function sampling. (a) The density functions in different slices of the same model, namely the 56th, 64th, 80th and 88th. (b)-(f) The samples obtained by different sampling methods. (b) Rejection sampling. (c) Metropolis-Hastings (MH) algorithm Bishop (2006). (d) Slice sampling Neal (2003). (e) The sampling results obtained by mapping random samples from the uniform distribution back to the desired distribution with $T^{-1}$ . (f) The sampling results obtained by mapping the grid centers back with $T^{-1}$ . The scores at the top right give the results of the Chi-square goodness-of-fit test. Smaller is better. Zoom in for better visualization.
969
+
970
+ ![](images/a44e65cf4720fb3a9025753064e926dfdb8bde3b86e5bc3e5cf471a43408922e.jpg)
971
+
972
+ ![](images/15e043fbbd4253c3678c4a6c8987915513b324058b4a6013c68462e6053f8c4b.jpg)
973
+
974
+ ![](images/569538f217b565d540365f94cdc03f275139cbdb91aa6587279a13dbd86cbd41.jpg)
975
+
976
+ ![](images/2dbd3cd34ac41eb59e083df05340237d5623a52dac6d916612bb68fbf4b503eb.jpg)
977
+ (e) Ours-rand
978
+
979
+ ![](images/8edc0a8c659ae779ef7604f2c644f9c4736a8e2a02872c6fb5ab0ca4a2a395f6.jpg)
980
+
981
+ ![](images/89522a5ec4fc0c4966fa43f854ea69059752ee3f5a866a35ba53e8a54ef7f3ca.jpg)
982
+
983
+ ![](images/0ebd275f7cefa65894cf1568baccba33a3509c5fff52d6a3c85a0338be5d4ad8.jpg)
984
+
985
+ ![](images/f85149e701cb473dacd334bb3fafafc12d8d82164ca006fc6af47bd045e3392c.jpg)
986
+ (f) Ours-grid
987
+
988
+ $T$ from $\mu$ to the uniform distribution $\nu$ defined on $[-1,1]^3$ , we conduct two groups of experiments: (i) we map the cell centers $\{y_k\}$ of the grid on $[-1,1]^3$ back to the source domain through the inverse OT map $T^{-1}(y_k)$ defined by Eqn. (20); (ii) we randomly draw $100k$ samples $\{y_k\}$ from the uniform distribution on $[-1,1]^3$ , and then map them back to the source domain through the inverse OT map $T^{-1}(y_k)$ . In order to keep consistency with the mirror reflection process in the FFT-OT algorithm, we also reflect the generated samples back to $\Omega$ . To visualize the results of the $k$ -th slice, we plot the samples whose $z$ coordinates satisfy the inequality,
989
+
990
+ $$
991
+ k / 128 - 1 / 256 \leq z \leq k / 128 + 1 / 256.
992
+ $$
993
+
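+ The density construction and slice selection described above can be reproduced with a short script; this is a sketch in which the Monte-Carlo normalization and all variable names are our own choices:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ K = 30
+ mu = rng.uniform(0.0, 1.0, size=(K, 3))     # means in [0, 1]^3
+ sigma = rng.uniform(0.0, 0.5, size=(K, 3))  # diagonal standard deviations
+ p = rng.uniform(0.2, 1.0, size=K)           # mixture weights, normalized below
+
+ def f(x):
+     """Gaussian mixture density at points x of shape (..., 3)."""
+     d = (x[..., None, :] - mu) / sigma
+     g = np.exp(-0.5 * np.sum(d * d, axis=-1)) / ((2 * np.pi) ** 1.5 * sigma.prod(-1))
+     return g @ p
+
+ # Normalize the weights so that f integrates to 1 on [0, 1]^3 (f is linear in p)
+ p = p / np.mean(f(rng.uniform(0.0, 1.0, size=(100_000, 3))))
+
+ def slice_mask(samples, k):
+     """Samples whose z coordinate falls in the k-th of 128 slices (see above)."""
+     return np.abs(samples[:, 2] - k / 128) <= 1 / 256
+ ```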
994
+ In Fig. 5 and Fig. 6, we give more sampling results on different slices corresponding to the two models used in Fig. 2 of the main paper. Fig. 5 visualizes the density function restricted to the 40th, 56th, 72nd and 80th slices for different methods of the model displayed in the first row of Fig. 2. Fig. 6 visualizes the density function restricted to the 56th, 64th, 80th and 88th slices for different methods of the model displayed in the second row of Fig. 2. Compared with the classical methods, both sampling strategies of our method give decent sampling results that fit the prescribed density function well. Moreover, the number of generated samples in different slices of the same 3D model fits the density functions restricted to the corresponding slices well, namely more samples are generated in the brighter regions of each slice.
995
+
996
+ # D.2 MORE RESULTS ON VOLUMETRIC MAGNIFIER
997
+
998
+ In this experiment, we magnify the volumetric MRI image of an aneurysm by different amplification factors. In Fig. 7, the first column shows the original aneurysm viewed from different angles. The last three columns give the magnified results with different amplification factors from the same viewpoints as the first column. We can see that the aneurysm region is successfully magnified by the different factors while the rest of the volume stays nearly the same.
999
+
1000
+ ![](images/fb7d09d5709a40cfb69539473eaed5d763a441ad2075023fee75460f50533828.jpg)
1001
+
1002
+ ![](images/11a8bfcfe495005c1b746312291aa2cded56fd424078f9ac0e844becb48e7b4e.jpg)
1003
+
1004
+ ![](images/bc21db47976ef52cdf7e67a303c072525938597b4d3b10adab258e8ae0eff02c.jpg)
1005
+ (a) Original
1006
+
1007
+ ![](images/e66ecdc0147b920220d6ad7885d69226c479c799bcf72243d8bbd04c7128c3f1.jpg)
1008
+
1009
+ ![](images/56de3e5a0cf32f1b7bfb23652890c5c3d8b7521e6d37d17261226aba14fa84b6.jpg)
1010
+
1011
+ ![](images/856064b9ce45d2bc94f593509ee413a02c6a616819f4afd839216a8291a836f0.jpg)
1012
+ (b) Magnifying ratio 1
1013
+
1014
+ ![](images/612523e4a7d50159c05d7ceb09aa87af4ce236800deb9243b95993711e37bcb6.jpg)
1015
+
1016
+ ![](images/891f404debf9f17eba4150df5625178a5feb7d425472183b96528929f320b7ad.jpg)
1017
+
1018
+ ![](images/7ade992cc0161eba250766e4f278b57ec9d994c8ed45392476a9053a21254b72.jpg)
1019
+ (c) Magnifying ratio 2
1020
+
1021
+ ![](images/3710da88f2a7dd69ed1eea4fd8ab45ed766d513a4ed1084de222e94bdf4cc8e4.jpg)
1022
+
1023
+ ![](images/1152a6ab31b4aec89407cf6eba702c2129f587ffc0f15c6a290b6cd99606045c.jpg)
1024
+
1025
+ ![](images/bf5e003805ef42d6c857e8ec10508c0cd76198e6f141390526699155a9ca25dc.jpg)
1026
+ (d) Magnifying ratio 3
1027
+ Figure 7: The volume magnifier of an aneurysm. The first column shows the original volumetric data from different viewpoints, and the last three columns give the magnified data from the same viewpoints as the first column with different magnifying ratios. The yellow circles denote the aneurysm or the ROIs.
2023/Volumetric Optimal Transportation by Fast Fourier Transform/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5d3e6784f1d665df18582b18bc3300730af36b23294ceb6a13c60d0c1aad97a5
3
+ size 1658623
2023/Volumetric Optimal Transportation by Fast Fourier Transform/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/7aa139d3-a427-412b-84b8-883489a7c318_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/7aa139d3-a427-412b-84b8-883489a7c318_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/7aa139d3-a427-412b-84b8-883489a7c318_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:79884101850741402a86f73e8f487ee7c838e6b12fb9157e654d96bed21eb5e1
3
+ size 2107861
2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8ec55c06c108285561a9e039b958d98781e5b76c4618265dfc81952c665445a9
3
+ size 1103958
2023/Wasserstein Auto-encoded MDPs_ Formal Verification of Efficiently Distilled RL Policies with Many-sided Guarantees/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/d5f92e4c-b0b4-48f2-acb6-1c3d35000445_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/d5f92e4c-b0b4-48f2-acb6-1c3d35000445_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/d5f92e4c-b0b4-48f2-acb6-1c3d35000445_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3139871c47fdb51b4114d680c0f4b39e9f7a9bcb406dab81325d2946d0dbf240
3
+ size 1383458
2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/full.md ADDED
@@ -0,0 +1,464 @@
1
+ # WEAKLY SUPERVISED EXPLAINABLE PHRASAL REASONING WITH NEURAL FUZZY LOGIC
2
+
3
+ Zijun Wu*1, Zi Xuan Zhang*, Atharva Naik†2, Zhijian Mei1, Mauajama Firdaus1, Lili Mou
4
+
5
+ $^{1}$ Dept. Computing Science & Alberta Machine Intelligence Institute (Amii), University of Alberta
6
+
7
+ 2Carnegie Mellon University
8
+
9
+ {zijun4, zixuan7, zimei1}@ualberta.ca, arnaik@cs.cmu.edu,
10
+
11
+ {mauzama.03, doublepower.mou}@gmail.com
12
+
13
+ *Equal contribution, †Work done during the internship at UofA/Amii
14
+
15
+ # ABSTRACT
16
+
17
+ Natural language inference (NLI) aims to determine the logical relationship between two sentences, such as Entailment, Contradiction, and Neutral. In recent years, deep learning models have become a prevailing approach to NLI, but they lack interpretability and explainability. In this work, we address the explainability of NLI by weakly supervised logical reasoning, and propose an Explainable Phrasal Reasoning (EPR) approach. Our model first detects phrases as the semantic unit and aligns corresponding phrases in the two sentences. Then, the model predicts the NLI label for the aligned phrases, and induces the sentence label by fuzzy logic formulas. Our EPR is almost everywhere differentiable and thus the system can be trained end to end. In this way, we are able to provide explicit explanations of phrasal logical relationships in a weakly supervised manner. We further show that such reasoning results help textual explanation generation.<sup>1</sup>
18
+
19
+ # 1 INTRODUCTION
20
+
21
+ Natural language inference (NLI) aims to determine the logical relationship between two sentences (called a premise and a hypothesis), and target labels include Entailment, Contradiction, and Neutral (Bowman et al., 2015; MacCartney & Manning, 2008). Figure 1 gives an example, where the hypothesis contradicts the premise. NLI is important to natural language processing, because it involves logical reasoning and is a key problem in artificial intelligence. Previous work shows that NLI can be used in various downstream tasks, such as information retrieval (Karpukhin et al., 2020) and text summarization (Liu & Lapata, 2019).
22
+
23
+ In recent years, deep learning has become a prevailing approach to NLI (Bowman et al., 2015; Mou et al., 2016; Wang & Jiang, 2016; Yoon et al., 2018). Especially, pretrained language models with the Transformer architecture (Vaswani et al., 2017) achieve state-of-the-art performance for the NLI task (Radford et al., 2018; Zhang et al., 2020). However, such deep learning models are black-box machinery and lack interpretability. In real applications, it is important to understand how these models make decisions (Rudin, 2019).
24
+
25
+ Several studies have addressed the explainability of NLI models. Camburu et al. (2018) generate a textual explanation by sequence-to-sequence supervised learning, in addition to NLI classification; such an approach is multi-task learning of text classification and generation, which does not perform reasoning itself. MacCartney et al. (2008) propose a scoring model to align related phrases; Parikh et al. (2016) and Jiang et al. (2021) propose to obtain alignment by attention mechanisms. However, they only provide correlation information, instead of logical reasoning. Other work incorporates upward and downward monotonicity entailment reasoning for NLI (Hu et al., 2020; Chen et al., 2021), but these approaches are based on hand-crafted rules (e.g., every downward entailing some) and are restricted to Entailment only; they cannot handle Contradiction or Neutral.
26
+
27
+ In this work, we address the explainability for NLI by weakly supervised phrasal logical reasoning. Our goal is to explain NLI predictions with phrasal logical relationships between the premise and
28
+
29
+ hypothesis. Intuitively, an NLI system with an explainable reasoning mechanism should be equipped with the following functionalities:
30
+
31
+ 1. The system should be able to detect corresponding phrases and tell their logical relationship, e.g., several men contradicting one man, but pull in a fishing net entailing holding the net (Figure 1).
32
+ 2. The system should be able to induce sentence labels from phrasal reasoning. In the example, the two sentences are contradictory because there exists one contradictory phrase pair.
33
+ 3. More importantly, such reasoning should be trained in a weakly supervised manner, i.e., the phrase-level predictions are trained from sentence labels only. Otherwise, the reasoning mechanism degrades to multi-task learning, which requires massive fine-grained human annotations.
34
+
35
+ To this end, we propose an Explainable Phrasal Reasoning (EPR) approach to the NLI task. Our model obtains phrases as semantic units, and aligns corresponding phrases by embedding similarity. Then, we predict the NLI labels (namely, Entailment, Contradiction, and Neutral) for the aligned phrases. Finally, we propose to induce the sentence-level label from phrasal labels in a fuzzy logic manner (Zadeh, 1988; 1996). Our model is differentiable, and the phrasal reasoning component can be trained
36
+
37
+ ![](images/36d8599dc69495cec1040aad3f195d48f60ea647a67099348cbb3ffd1e91bf76.jpg)
38
+ Figure 1: The natural language inference (NLI) task and desired phrasal reasoning.
39
+
40
+ with the weak supervision of sentence NLI labels. In this way, our EPR approach satisfies all the desired properties mentioned above.
41
+
42
+ In our experiments, we developed a comprehensive methodology (data annotation and evaluation metrics) to quantitatively evaluate phrasal reasoning performance, which has not been accomplished in previous work. We extend previous studies to obtain plausible baseline models. Results show that our EPR yields much more meaningful explanations in terms of $F$ scores against human annotation.
43
+
44
+ To further demonstrate the quality of extracted phrasal relationships, we feed them to a textual explanation model. Results show that our EPR reasoning leads to an improvement of 2 points in BLEU scores, achieving a new state of the art on the e-SNLI dataset (Camburu et al., 2018).
45
+
46
+ Our contributions are summarized as follows:
47
+
48
+ 1. We formulate a phrasal reasoning task for natural language inference (NLI), addressing the interpretability of neural models.
49
+ 2. We propose an EPR model that induces sentence-level NLI labels from explicit phrasal logical labels by neural fuzzy logic. EPR is able to perform reasoning in a weakly supervised way.
50
+ 3. We annotated phrasal logical labels and designed a set of metrics to evaluate phrasal reasoning. We further use our reasoning results to improve textual explanation generation. Our code and annotated data are released for future studies.
51
+
52
+ To the best of our knowledge, we are the first to develop a weakly supervised phrasal reasoning model for the NLI task.
53
+
54
+ # 2 RELATED WORK
55
+
56
+ Natural Language Inference. MacCartney & Manning (2009) propose seven natural logic relations in addition to Entailment, Contradiction, and Neutral. MacCartney & Manning (2007) also distinguish upward entailment (every mammal upward entailing some mammal) and downward entailment (every mammal downward entailing every dog) as different categories. Manually designed lexicons and rules are used to interpret Entailment in a finer-grained manner, such as downward and upward entailment (Hu et al., 2020; Chen et al., 2021). Feng et al. (2020) apply such natural logic to NLI reasoning at the word level; however, our experiments will show that their word-level treatment is not an appropriate granularity, and they fail to achieve meaningful reasoning performance.
57
+
58
+ The above reasoning schema focuses more on the quantifiers of first-order logic (Beltagy et al., 2016). However, the SNLI dataset (Bowman et al., 2015) we use only contains less than $5\%$ samples with explicit quantifiers, and the seven-category schema complicates reasoning in the weakly supervised
59
+
60
+ setting. Instead, we adopt three-category NLI labels following the SNLI dataset. Our focus is entity-based reasoning, and the treatment of quantifiers is absorbed into phrases.
61
+
62
+ We also notice that previous work lacks explicit evaluation on the reasoning performance for NLI. For example, the SNLI dataset only provides sentence-level labels. The HELP (Yanaka et al., 2019a) and MED (Yanaka et al., 2019b) datasets concern monotonicity inference problems, where the label is also at the sentence level; they only consider Entailment, ignoring Contradiction and Neutral. Thus, we propose a comprehensive framework for the evaluation of NLI reasoning.
63
+
64
+ e-SNLI. Camburu et al. (2018) propose the e-SNLI task of textual explanation generation and use LSTM as a baseline. Kumar & Talukdar (2020) propose the NILE approach, using multiple decoders to generate explanations for all E, C, and N labels, and then predicting which to be selected. Zhao & Vydiswaran (2021) propose the LIREx approach, using additionally annotated rationales for explanation generation. Narang et al. (2020) finetune T5 with multiple explanation generation tasks. Although these systems can generate explanations, the nature of such finetuning approaches renders the explanation generator per se unexplainable. By contrast, we design a textual explanation generation model that utilizes our EPR's phrasal reasoning, obtained in a weakly supervised manner.
65
+
66
+ Neuro-Symbolic Approaches. In recent years, neuro-symbolic approaches have attracted increasing interest in the AI and NLP communities for interpreting deep learning models. Typically, these approaches are trained by reinforcement learning or its relaxation, such as attention and Gumbel-softmax (Jang et al., 2017), to reason about certain latent structures in a downstream task.
67
+
68
+ For example, Lei et al. (2016) and Liu et al. (2018) extract key phrases or sentences for a text classification task. Lu et al. (2018) extract entities and relations for document understanding. Liang et al. (2017) and Mou et al. (2017) perform SQL-like execution based on input text for semantic parsing. Xiong et al. (2017) hop over a knowledge graph for reasoning the relationships between entities. Li et al. (2019) and Deshmukh et al. (2021) model symbolic actions for unsupervised syntactic structure induction. In the vision domain, Mao et al. (2019) propose a neuro-symbolic approach to learn visual concepts. Our work addresses logical reasoning for the NLI task, which is not tackled in previous neuro-symbolic studies.
69
+
70
+ Fuzzy Logic. Fuzzy logic (Zadeh, 1988; 1996) models an assertion and performs logic calculation with probability. For example, a quantifier (e.g., "most") and assertion (e.g., "ill") are modeled by a score in $(0,1)$ ; the score of a conjunction $s(x_{1} \wedge x_{2})$ is the product of $s(x_{1})$ and $s(x_{2})$ . In old-school fuzzy logic studies, the mapping from language to the score is usually given by human-defined heuristics (Zadeh, 1988; Nozaki et al., 1997), and may not be suited to the task of interest. By contrast, we train neural networks to predict the probability of phrasal logical relations, and induce the sentence NLI label by fuzzy logic formulas. Thus, our approach takes advantage of both worlds of symbolism and connectionism. Mahabadi et al. (2020) apply fuzzy logic formulas to replace multi-layer perceptrons for NLI. But they are unable to provide expressive reasoning because their fuzzy logic works on sentence features. Our work is inspired by Mahabadi et al. (2020). However, we propose to apply fuzzy logic to the detected and aligned phrases, enabling our approach to provide reasoning in a symbolic (i.e., expressive) way. We develop our own fuzzy logic formulas, which are also different from Mahabadi et al. (2020).
71
+
72
+ # 3 OUR EPR APPROACH
73
+
74
+ In this section, we describe our EPR approach in detail, also shown in Figure 2. It has three main components: phrase detection and alignment, phrasal NLI prediction, and sentence label induction.
75
+
76
+ Phrase Detection and Alignment. In NLI, a data point consists of two sentences, a premise and a hypothesis. We first extract content phrases from both input sentences by rules and heuristics. For example, $\left[\mathrm{AUX}\right] + \left[\mathrm{NOT}\right] + \mathrm{VERB} + \left[\mathrm{RP}\right]$ is treated as a verb phrase. Full details are presented in Appendix A.1. Compared with the word level (Parikh et al., 2016; Feng et al., 2020), a phrase is a more meaningful semantic unit for logical reasoning.
77
+
78
+ We then align corresponding phrases in the two sentences based on cosine similarity. Let $\mathrm{P} = (\mathrm{p}_1,\dots ,\mathrm{p}_M)$ and $\mathrm{H} = (\mathrm{h}_1,\dots ,\mathrm{h}_N)$ be the premise and hypothesis, respectively, where $\mathrm{p}_m$ and $\mathrm{h}_n$ are extracted phrases. We apply Sentence-BERT (Reimers & Gurevych, 2019) to each individual phrase and obtain the local phrase embeddings by $\pmb {p}_m^{(L)} = \mathrm{SBERT}(\mathrm{p}_m),\pmb {h}_n^{(L)} = \mathrm{SBERT}(\mathrm{h}_n)$ . We
79
+
80
+ ![](images/07d7923cf7122154318a4e7621f0c71d8c910a16064898df435c9314cf0f5e25.jpg)
81
+ Figure 2: An overview of our Explainable Phrasal Reasoning (EPR) model.
82
+
83
+ Table 1: An example showing the importance of handling unaligned phrases (in highlight).
84
+
85
+ | | Example 1 | Example 2 |
+ | --- | --- | --- |
+ | Premise | People are shopping for fruit. | People are shopping for fruit in the market. |
+ | Hypothesis | People are shopping for fruit in the market. | People are shopping for fruit. |
+ | Sentence NLI | [ ] Entailment [ ] Contradiction [√] Neutral | [√] Entailment [ ] Contradiction [ ] Neutral |
89
+
90
+ also apply Sentence-BERT to the entire premise and hypothesis sentences to obtain the global phrase embeddings $\pmb{p}_m^{(G)}$ and $\pmb{h}_n^{(G)}$ by mean-pooling the features of the words in the phrase. The phrase similarity is given by
91
+
92
+ $$
93
+ \mathrm{sim} \left(\mathrm {p} _ {m}, \mathrm {h} _ {n}\right) = \gamma \cos \left(\boldsymbol {p} _ {m} ^ {(G)}, \boldsymbol {h} _ {n} ^ {(G)}\right) + (1 - \gamma) \cos \left(\boldsymbol {p} _ {m} ^ {(L)}, \boldsymbol {h} _ {n} ^ {(L)}\right) \tag {1}
94
+ $$
95
+
96
+ where $\gamma$ is a hyperparameter balancing the lexical and contextual representations of a phrase (Hewitt & Manning, 2019). It is noted that Sentence-BERT is finetuned on paraphrase datasets, and thus is more suitable for phrasal similarity matching than pretrained language models (Devlin et al., 2019).
97
+
98
+ We obtain phrase alignment between the premise and hypothesis in a heuristic way. For every phrase $\mathrm{p}_m$ in the premise, we look for the most similar phrase $\mathrm{h}_n$ from the hypothesis by
99
+
100
+ $$
101
+ n = \operatorname{argmax} _ {n ^ {\prime}} \mathrm{sim} \left(\mathrm {p} _ {m}, \mathrm {h} _ {n ^ {\prime}}\right) \tag {2}
102
+ $$
103
+
104
+ Likewise, for every phrase $\mathrm{h}_n$ in the hypothesis, we look for the most similar phrase $\mathrm{p}_m$ from the premise. A phrase pair $(\mathrm{p}_m, \mathrm{h}_n)$ is considered to be aligned if $\mathrm{h}_n$ is selected as the closest phrase to $\mathrm{p}_m$ , and $\mathrm{p}_m$ is the closest to $\mathrm{h}_n$ . Such hard alignment differs from commonly used soft attention-based approaches (Parikh et al., 2016). Our alignment method can ensure the quality of phrase alignment, and more importantly, leave other phrases unaligned (e.g., helping each other in Figure 1), which are common in the NLI task. The process is illustrated in Figure 2a.
105
+
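+ The mutual-nearest-neighbour alignment is compact to state in code; below is a sketch, where the similarity matrix $S$ is assumed to be computed by Eq. (1) and the function name is our own:
+
+ ```python
+ import numpy as np
+
+ def align_phrases(S):
+     """S: (M, N) similarity matrix between premise and hypothesis phrases.
+     Returns mutually-closest index pairs; all other phrases stay unaligned."""
+     best_h = S.argmax(axis=1)  # closest hypothesis phrase for each p_m
+     best_p = S.argmax(axis=0)  # closest premise phrase for each h_n
+     return [(m, best_h[m]) for m in range(S.shape[0])
+             if best_p[best_h[m]] == m]
+ ```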
106
+ Phrasal NLI Prediction. Our model then predicts the logical relationship of an aligned phrase pair $(p, h)$ among three target labels: Entailment, Contradiction, and Neutral. While previous work (Feng et al., 2020) identifies finer-grained labels for NLI, we do not follow their categorization, because it complicates the reasoning process and makes weakly supervised training more difficult. Instead, we adopt a three-way phrasal classification, which is consistent with sentence NLI labels.
107
+
108
+ We represent a phrase, say, $p$ in the premise, by a vector embedding, and we consider two types of features: a local feature $\pmb{p}^{(L)}$ and a global feature $\pmb{p}^{(G)}$ , re-used from the phrase alignment component. They are concatenated as the phrase representation $\pmb{p} = [p^{(L)}; p^{(G)}]$ . Likewise, the phrase representation for a hypothesis phrase $h$ is obtained in a similar way. Intuitively, local features force the model to perform reasoning in a serious manner, but global features are important to sentence-level prediction. Such intuition is also verified in an ablation study (§ 4.2).
109
+
110
+ Then, we use a neural network to predict the phrasal NLI label (Entailment, Contradiction, and Neutral). This is given by the standard heuristic matching (Mou et al., 2016) based on phrase embeddings, followed by a multi-layer perceptron (MLP) and a three-way softmax layer:
111
+
112
+ $$
+ \left[ P_{\text{phrase}}(\mathsf{E}|\mathrm{p}, \mathrm{h});\, P_{\text{phrase}}(\mathsf{C}|\mathrm{p}, \mathrm{h});\, P_{\text{phrase}}(\mathsf{N}|\mathrm{p}, \mathrm{h}) \right] = \operatorname{softmax}\left(\operatorname{MLP}\left(\left[ \boldsymbol{p}; \boldsymbol{h}; \left|\boldsymbol{p} - \boldsymbol{h}\right|; \boldsymbol{p} \circ \boldsymbol{h} \right]\right)\right) \tag{3}
+ $$
115
+
116
+ where $\circ$ is the element-wise product, and the semicolon refers to column vector concatenation. E, C, and N refer to the Entailment, Contradiction, and Neutral labels, respectively.
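+
+ The following PyTorch sketch shows one way to realize Eq. (3); the class name, hidden size, and depth of the MLP are our illustrative assumptions, not the authors' exact settings.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class PhrasalNLI(nn.Module):
+     """Heuristic matching features + MLP + 3-way softmax (Eq. 3), a sketch."""
+     def __init__(self, dim, hidden=768):  # hidden size is an assumption
+         super().__init__()
+         self.mlp = nn.Sequential(
+             nn.Linear(4 * dim, hidden), nn.ReLU(),
+             nn.Linear(hidden, 3),  # logits for E, C, N
+         )
+
+     def forward(self, p, h):
+         # [p; h; |p - h|; p ∘ h], then softmax over the three labels.
+         feats = torch.cat([p, h, (p - h).abs(), p * h], dim=-1)
+         return self.mlp(feats).softmax(dim=-1)
+ ```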
117
+
118
+ It should be mentioned that a phrase may be unaligned but still play an important role in sentence-level NLI prediction, as shown in Table 1. Thus, we predict phrasal NLI labels for unaligned phrases as well, pairing each of them with a special token ($\mathrm{p}_{\langle \mathrm{EMPTY}\rangle}$ or $\mathrm{h}_{\langle \mathrm{EMPTY}\rangle}$), whose embedding is randomly initialized and learned by back-propagation.
121
+
122
+ Sentence Label Induction. We observe that the sentence NLI label can be logically induced from phrasal NLI labels. Based on the definition of the NLI task, we develop the following induction rules.
123
+
124
+ Entailment Rule: According to Bowman et al. (2015), a premise entailing a hypothesis means that, if the premise is true, then the hypothesis must be true. We find that this can often be decomposed into phrasal relationships: the premise entails the hypothesis if all paired phrases have the label Entailment.
125
+
126
+ Let $\{(\mathrm{p}_k,\mathrm{h}_k)\}_{k = 1}^K\bigcup \{(\mathrm{p}_k,\mathrm{h}_k)\}_{k = K + 1}^{K'}$ be all phrase pairs. For $k = 1,\dots ,K$ , they are aligned phrases; for $k = K + 1,\dots ,K'$ , they are unaligned phrases paired with the special token, i.e., $\mathrm{p}_k = \mathrm{p}_{\langle \mathrm{EMPTY}\rangle}$ or $\mathrm{h}_k = \mathrm{h}_{\langle \mathrm{EMPTY}\rangle}$ . Then, we induce a sentence-level Entailment score by
127
+
128
+ $$
+ S_{\text{sentence}}(\mathsf{E}|\mathrm{P}, \mathrm{H}) = \left[ \prod_{k=1}^{K'} P_{\text{phrase}}(\mathsf{E}|\mathrm{p}_k, \mathrm{h}_k) \right]^{\frac{1}{K'}} \tag{4}
+ $$
131
+
132
+ This works in a fuzzy logic fashion (Zadeh, 1988; 1996), deciding whether the sentence-level label should be Entailment by averaging the phrasal predictions. Here, we use the geometric mean because it is biased towards low scores: if one phrase pair has a low Entailment score, the chance of the sentence label being Entailment is also low (e.g., phrasal scores of 0.9 and 0.1 give a geometric mean of 0.3, versus an arithmetic mean of 0.5). Unaligned pairs should be considered in Eq. (4), because an unaligned phrase may still indicate Entailment, as shown in the second example of Table 1. Notice that the resulting value $S_{\text{sentence}}(\mathsf{E}|\mathrm{P}, \mathrm{H})$ is not normalized with respect to Contradiction and Neutral; thus, we call it a score (instead of a probability), and it will be normalized afterwards.
133
+
134
+ Contradiction Rule: Two sentences are contradictory if there exists (at least) one paired phrase labeled as Contradiction. The fuzzy logic version of this induction rule is given by
135
+
136
+ $$
+ S_{\text{sentence}}(\mathsf{C}|\mathrm{P}, \mathrm{H}) = \max_{k=1,\dots,K} P_{\text{phrase}}(\mathsf{C}|\mathrm{p}_k, \mathrm{h}_k) \tag{5}
+ $$
139
+
140
+ Here, the max operator is used in the induction because the contradiction rule is an existential statement, i.e., it asserts the existence of at least one contradictory pair. Also, unaligned phrases are excluded when calculating the sentence-level Contradiction score, because an unaligned phrase indicates that the corresponding information is missing in the other sentence, which cannot constitute a Contradiction (recall the examples in Table 1).
141
+
142
+ Rule for Neutral: Two sentences are neutral if there exists (at least) one neutral phrase pair, but there does not exist any contradictory phrase pair. The fuzzy logic formula is
143
+
144
+ $$
+ S_{\text{sentence}}(\mathsf{N}|\mathrm{P}, \mathrm{H}) = \left[ \max_{k=1,\dots,K'} P_{\text{phrase}}(\mathsf{N}|\mathrm{p}_k, \mathrm{h}_k) \right] \cdot \left[ 1 - S_{\text{sentence}}(\mathsf{C}|\mathrm{P}, \mathrm{H}) \right] \tag{6}
+ $$
147
+
148
+ The first factor determines whether there exists a Neutral phrase pair (including unaligned phrases, illustrated in the first example in Table 1). The second factor evaluates the negation of "at least one contradictory phrase pair," as suggested in the second clause of the Rule for Neutral.
149
+
150
+ Finally, we normalize the scores into probabilities by dividing each by their sum, which is valid since all the scores are already positive. This is given by
151
+
152
+ $$
+ P_{\text{sentence}}(\mathsf{L}|\cdot) = \frac{1}{Z} S_{\text{sentence}}(\mathsf{L}|\cdot) \tag{7}
+ $$
155
+
156
+ where $\mathsf{L}\in \{\mathsf{E},\mathsf{C},\mathsf{N}\}$ , and $Z = S_{\text{sentence}}(\mathsf{E}|\cdot) + S_{\text{sentence}}(\mathsf{C}|\cdot) + S_{\text{sentence}}(\mathsf{N}|\cdot)$ is the normalizing factor.
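+
+ For clarity, here is a small PyTorch sketch of the induction rules, Eqs. (4)-(7), over a tensor of phrasal predictions; the function name and tensor layout are our assumptions.
+
+ ```python
+ import torch
+
+ def induce_sentence_probs(phrase_probs, num_aligned):
+     # phrase_probs: (K', 3) phrasal [E, C, N] probabilities, aligned pairs
+     # first; num_aligned = K (assumed >= 1). Illustrative sketch only.
+     E, C, N = phrase_probs[:, 0], phrase_probs[:, 1], phrase_probs[:, 2]
+     s_e = torch.exp(torch.log(E).mean())  # Eq. (4): geometric mean over K'
+     s_c = C[:num_aligned].max()           # Eq. (5): max over aligned pairs
+     s_n = N.max() * (1 - s_c)             # Eq. (6): Neutral, no Contradiction
+     scores = torch.stack([s_e, s_c, s_n])
+     return scores / scores.sum()          # Eq. (7): normalize by Z
+
+ # Weakly supervised training (next paragraph) then minimizes
+ # -torch.log(induce_sentence_probs(phrase_probs, K)[t]) for gold label t.
+ ```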
157
+
158
+ Training and Inference. We use cross-entropy loss to train our EPR model by minimizing $-\log P_{\text{sentence}}(\mathsf{t}|\cdot)$ , where $\mathsf{t} \in \{\mathsf{E}, \mathsf{C}, \mathsf{N}\}$ is the groundtruth sentence-level label.
159
+
160
+ Our underlying logical reasoning component can be trained end-to-end by back-propagation in a weakly supervised manner, because the fuzzy logic rules are differentiable almost everywhere. Although the max operators in Eqs. (5) and (6) are not differentiable at certain points, such operators are also common in max-margin learning and rectified linear unit (ReLU) activations, and do not cause trouble in back-propagation.
161
+
162
+ Once our EPR model is trained, we can obtain both phrasal and sentence-level labels. This is accomplished by taking the argmax of the predicted probabilities in Eqs. (3) and (7), respectively.
163
+
164
+ Improving Textual Explanation. Camburu et al. (2018) annotated a dataset to address NLI interpretability by generating an explanation sentence. For the example in Figure 1, the reference explanation is "There cannot be one man and several men at same time."
165
+
166
+ In this part, we apply the predicted phrasal logical relationships to textual explanation generation and examine whether our EPR's output can help a downstream task. Figure 3 shows the overview of our textual explanation generator. We concatenate the premise and hypothesis in the form of “Premise: … Hypothesis: …”, and feed it to a standard Transformer encoder (Vaswani et al., 2017).
167
+
168
+ We utilize the phrase pairs and our predicted phrasal labels as factual knowledge to enhance the decoder. Specifically, our EPR model yields a set of tuples $\{(\mathrm{p}_k,\mathrm{h}_k,\mathrm{l}_k)\}_{k = 1}^K$ for a sample, where $\mathrm{l}_k\in \{\mathsf{E},\mathsf{C},\mathsf{N}\}$ is the predicted phrasal label for the aligned phrases $\mathrm{p}_k$ and $\mathrm{h}_k$ . We embed the phrases by Sentence-BERT as $\pmb{p}_k^{(L)}$ and $\pmb{h}_k^{(L)}$ ; the phrasal label is represented by a one-hot vector $\pmb{l}_k = \mathrm{onehot}(\mathrm{l}_k)$ . They are concatenated as a vector $\pmb{m}_k = [\pmb{p}_k^{(L)};\pmb{h}_k^{(L)};\pmb{l}_k]$ . We compose these vectors into a factual memory matrix $\mathbf{M} = [\pmb{m}_1^\top ;\dots ;\pmb{m}_K^\top ]\in \mathbb{R}^{K\times d}$ , where $d$ is the dimension of $\pmb{m}_k$ .
169
+
170
+ Our decoder follows a standard Transformer architecture (Vaswani et al., 2017), but is equipped with an additional attention mechanism over the factual memory. Consider the $i$ th decoding step. We feed the factual memory to an MLP as $\tilde{\mathbf{M}} = \mathrm{MLP}(\mathbf{M})$ . We compute attention $\pmb{a}$ over $\tilde{\mathbf{M}}$ with the embedding of the input $\pmb{y}_{i-1}$ , and aggregate factual information $\pmb{c}$ from the rows $\tilde{\pmb{m}}_k$ of $\tilde{\mathbf{M}}$ :
171
+
172
+ $$
+ \boldsymbol{a} = \operatorname{softmax}(\tilde{\mathbf{M}} \boldsymbol{y}_{i-1}), \quad \boldsymbol{c} = \sum_{k=1}^{K} a_k \tilde{\boldsymbol{m}}_k^{\top}
+ $$
175
+
176
+ where $a_k$ is the $k$th element of the vector $\pmb{a}$ , and $\tilde{\pmb{m}}_k$ is the $k$th row of the matrix $\tilde{\mathbf{M}}$ . The factual information $\pmb{c}$ is fed to another layer: $\pmb{g}_i = \mathrm{MLP}([\pmb{c}; \pmb{y}_{i-1}]) + \pmb{c}$ .
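+
+ A compact PyTorch sketch of this factual-memory attention step is given below; the module name and dimension handling are our assumptions.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class FactualMemoryAttention(nn.Module):
+     """Attention over the factual memory at one decoding step (sketch)."""
+     def __init__(self, mem_dim, dec_dim):
+         super().__init__()
+         self.proj = nn.Sequential(nn.Linear(mem_dim, dec_dim), nn.ReLU())
+         self.gate = nn.Linear(2 * dec_dim, dec_dim)
+
+     def forward(self, M, y_prev):
+         # M: (K, mem_dim) factual memory; y_prev: (dec_dim,) input embedding.
+         M_tilde = self.proj(M)                     # M~ = MLP(M)
+         a = (M_tilde @ y_prev).softmax(dim=0)      # a = softmax(M~ y_{i-1})
+         c = (a.unsqueeze(1) * M_tilde).sum(dim=0)  # c = sum_k a_k m~_k
+         g = self.gate(torch.cat([c, y_prev])) + c  # g_i = MLP([c; y_{i-1}]) + c
+         return g
+ ```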
177
+
178
+ ![](images/59828ff8e82cdca43543a438f05e4fee8d4571b8d4ce9ba761fa4e8c3e224c45.jpg)
179
+ Figure 3: Overview of the model for textual explanation generation.
180
+
181
+ Our Transformer decoder layer starts with self-attention $\tilde{q}_i = \mathrm{SelfAttn}(g_i)$ . Then, residual connection and layer normalization are applied as $q_{i} = \mathrm{LayerNorm}(\tilde{q}_{i} + g_{i})$ . A cross-attention mechanism obtains input information by $v_{i} = \mathrm{CrossAttn}(q_{i},\mathbf{H})$ , where $\mathbf{H}$ is the representation given by the encoder. $v_{i}$ is fed to the Transformer's residual connection and layer normalization sub-layer. Multiple Transformer layers as mentioned above are stacked to form a deep architecture. The model is trained by standard cross-entropy loss against the reference explanation as in previous work (Kumar & Talukdar, 2020; Zhao & Vydiswaran, 2021; Narang et al., 2020).
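+
+ The decoder layer itself can be sketched as follows, again as a simplification under our own assumptions (causal masking and the feed-forward sub-layer are omitted for brevity):
+
+ ```python
+ import torch.nn as nn
+
+ class MemoryAugmentedDecoderLayer(nn.Module):
+     """Self-attention, then cross-attention to encoder states H (sketch)."""
+     def __init__(self, d, nhead=8):  # head count is an assumption
+         super().__init__()
+         self.self_attn = nn.MultiheadAttention(d, nhead, batch_first=True)
+         self.cross_attn = nn.MultiheadAttention(d, nhead, batch_first=True)
+         self.norm1 = nn.LayerNorm(d)
+         self.norm2 = nn.LayerNorm(d)
+
+     def forward(self, g, H):
+         # g: (B, T, d) memory-enhanced inputs; H: (B, S, d) encoder output.
+         q_tilde, _ = self.self_attn(g, g, g)  # q~ = SelfAttn(g)
+         q = self.norm1(q_tilde + g)           # residual + LayerNorm
+         v, _ = self.cross_attn(q, H, H)       # v = CrossAttn(q, H)
+         return self.norm2(v + q)              # residual + LayerNorm
+ ```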
182
+
183
+ In this way, the model is enhanced with factual information given by our EPR weakly supervised reasoning. Experiments show that this improves the BLEU score by roughly 2 points (§ 4.2), setting a new state of the art. This further verifies that our EPR indeed yields meaningful phrasal explanations.
184
+
185
+ # 4 EXPERIMENTS
186
+
187
+ # 4.1 DATASETS AND EVALUATION METRICS
188
+
189
+ The main dataset we used in our experiments is the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015), which consists of 550K training samples, 10K validation samples, and another 10K test samples. Each data sample consists of two sentences (premise and hypothesis) and a sentence-level groundtruth label. For sentence-level NLI prediction, we use accuracy as the evaluation metric, following previous work (Parikh et al., 2016; Chen et al., 2017; Radford et al., 2018).
190
+
191
+ To evaluate the phrasal reasoning performance, we need additional human annotation and evaluation metrics, because most previous work only considers sentence-level performance (Feng et al., 2020) and has not performed quantitative phrasal reasoning evaluation. Although Camburu et al. (2018) annotated phrase highlights in their e-SNLI dataset, the highlights are incomplete and do not provide logical relationships. Our annotators selected relevant phrases from the two sentences and tagged them with phrasal NLI labels; they also selected and tagged unaligned phrases.
192
+
193
+ Table 2: Main results on the SNLI dataset. †Quoted from respective papers. ‡Obtained from the checkpoint sent by the authors. Other results are obtained by our experiments. GM and AM are the geometric and arithmetic means of the $F$ scores.
194
+
195
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Sent Acc</td><td colspan="7">Reasoning Performance</td></tr><tr><td>FE</td><td>FC</td><td>FN</td><td>FUP</td><td>FUH</td><td>GM</td><td>AM</td></tr><tr><td>Human</td><td>-</td><td>84.71</td><td>71.01</td><td>55.12</td><td>82.46</td><td>61.80</td><td>70.07</td><td>71.02</td></tr><tr><td colspan="9">Non-reasoning</td></tr><tr><td>Mahabadi et al. (2020)†</td><td>85.1</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LSTM (Wang &amp; Jiang, 2016)†</td><td>86.1</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Transformer (Radford et al., 2018)</td><td>89.9</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>SBERT (Reimers &amp; Gurevych, 2019)</td><td>91.4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="9">Baselines</td></tr><tr><td>NNL (Feng et al., 2020)‡</td><td>79.91</td><td>62.72</td><td>17.49</td><td>1.50</td><td>66.22</td><td>0.00</td><td>0.00</td><td>29.59</td></tr><tr><td>STP</td><td>85.76</td><td>62.40</td><td>34.76</td><td>37.04</td><td>76.61</td><td>51.80</td><td>50.20</td><td>52.52</td></tr><tr><td>GPT-3-Davinci (Brown et al., 2020)</td><td>-</td><td>53.75</td><td>58.00</td><td>16.12</td><td>52.24</td><td>31.08</td><td>38.23</td><td>42.24</td></tr><tr><td colspan="9">Ours</td></tr><tr><td>EPR (Local, LM unfinetuned)</td><td>76.33±0.48</td><td>83.11±0.29</td><td>38.73±0.85</td><td>44.63±0.88</td><td>76.61</td><td>51.80</td><td>56.39±0.43</td><td>58.98±0.34</td></tr><tr><td>EPR (Local, LM finetuned)</td><td>79.36±0.13</td><td>82.44±0.26</td><td>44.10±1.32</td><td>44.69±3.22</td><td>76.61</td><td>51.80</td><td>57.77±0.85</td><td>59.93±0.67</td></tr><tr><td>EPR (Concat, LM unfinetuned)</td><td>84.53±0.19</td><td>73.29±0.68</td><td>37.95±1.16</td><td>40.56±1.10</td><td>76.61</td><td>51.80</td><td>53.73±0.39</td><td>56.04±0.33</td></tr><tr><td>EPR (Concat, LM finetuned)</td><td>87.56±0.15</td><td>69.91±1.21</td><td>39.97±2.12</td><td>43.31±2.78</td><td>76.61</td><td>51.80</td><td>54.46±1.35</td><td>56.32±1.13</td></tr></table>
196
+
197
+ We further propose a set of $F$ -scores, which are a balanced measure of precision and recall between human annotation and model output for Entailment, Contradiction, Neutral, and Unaligned in terms of word indexes. Details of human annotation and evaluation metrics are shown in Appendix B.
198
+
199
+ The inter-annotator agreement is presented in Table 2 in comparison with model performance (detailed in the next part). Here, we compute the agreement by treating one annotator as the ground truth and another as the system output; the score is averaged among all annotator pairs. As seen, humans generally achieve high agreement with each other, whereas model performance is relatively low. This shows that our task and metrics are well-defined, yet phrasal logical reasoning is a challenging task for machine learning models.
200
+
201
+ Textual explanation generation was evaluated on the e-SNLI dataset (Camburu et al., 2018), which extends the SNLI dataset with one reference explanation for each training sample, and three reference explanations for each validation or test sample. Each reference explanation comes with highlighted rationales, a set of annotated words in the premise or hypothesis considered as the reason for the explanation annotation. We do not use these highlighted rationales, but enhance the neural model with EPR output for textual explanation generation. We follow previous work (Camburu et al., 2018; Narang et al., 2020), adopting BLEU (Papineni et al., 2002) and SacreBLEU (Post, 2018) scores as the evaluation metrics; they mainly differ in the tokenizer. Camburu et al. (2018) also report low consistency of the third annotated reference, and thus use only two references for evaluation. In our study, we consider both two-reference and three-reference BLEU/SacreBLEU. Appendix A.2 provides additional implementation details of textual explanation generation.
202
+
203
+ # 4.2 RESULTS
204
+
205
+ Phrasal Reasoning Performance. To the best of our knowledge, phrasal reasoning for NLI was not explicitly evaluated in previous literature. Therefore, we propose plausible extensions to previous studies as our baselines. We consider the study of Neural Natural Logic (NNL, Feng et al., 2020) as the first baseline. It applies an attention mechanism (Parikh et al., 2016), so that each word in the hypothesis is softly aligned with the words in the premise. Then, each word in the hypothesis is predicted with one of the seven natural logic relations proposed by MacCartney & Manning (2009). We consider the maximum attention score as the alignment, and map their seven natural logic relations to our three-category NLI labels: Equivalence, ForwardEntailment $\mapsto$ Entailment; Negation, Alternation $\mapsto$ Contradiction; and ReverseEntailment, Cover, Independence $\mapsto$ Neutral.
206
+
207
+ Table 2 shows that the word-level NNL approach cannot perform meaningful phrasal reasoning, even though our metrics do not explicitly evaluate phrase boundaries. The low performance arises because its soft attention leads to many misalignments, and its seven-category logical relations are too fine-grained, complicating weakly supervised reasoning. In addition, NNL does not allow unaligned words in the hypothesis, showing that such a model is inadequate for NLI reasoning. By contrast, our EPR model extracts phrases as meaningful semantic units, an appropriate granularity for logical reasoning. Moreover, we work with three-category NLI labels following the sentence-level NLI task formulation. This restricts the model's capacity, forcing it to perform serious phrasal reasoning.
208
+
209
+ Table 3: Results of ablation studies on SNLI.
210
+
211
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Features</td><td rowspan="2">Sent Acc</td><td colspan="7">Reasoning Performance</td></tr><tr><td>FE</td><td>FC</td><td>FN</td><td>FUP</td><td>FUH</td><td>GM</td><td>AM</td></tr><tr><td rowspan="3">Full model</td><td>Local</td><td>76.33±0.48</td><td>83.11±0.29</td><td>38.73±0.85</td><td>44.63±0.88</td><td>76.61</td><td>51.80</td><td>56.39±0.43</td><td>58.98±0.34</td></tr><tr><td>Global</td><td>84.03±0.12</td><td>70.84±0.60</td><td>35.12±0.90</td><td>36.37±1.52</td><td>76.61</td><td>51.80</td><td>51.41±0.62</td><td>54.15±0.41</td></tr><tr><td>Concat</td><td>84.53±0.19</td><td>73.29±0.68</td><td>37.95±1.16</td><td>40.56±1.10</td><td>76.61</td><td>51.80</td><td>53.73±0.39</td><td>56.04±0.33</td></tr><tr><td rowspan="3">Random chunker</td><td>Local</td><td>72.44</td><td>63.21</td><td>22.65</td><td>32.04</td><td>65.94</td><td>36.13</td><td>40.53</td><td>43.99</td></tr><tr><td>Global</td><td>82.81</td><td>58.09</td><td>30.64</td><td>27.49</td><td>65.94</td><td>36.13</td><td>41.05</td><td>43.66</td></tr><tr><td>Concat</td><td>83.09</td><td>58.75</td><td>32.41</td><td>31.14</td><td>65.94</td><td>36.13</td><td>42.66</td><td>44.87</td></tr><tr><td rowspan="3">Semantic role labeling</td><td>Local</td><td>71.10</td><td>73.79</td><td>29.39</td><td>28.99</td><td>70.19</td><td>43.11</td><td>45.27</td><td>49.09</td></tr><tr><td>Global</td><td>82.81</td><td>60.14</td><td>32.07</td><td>30.48</td><td>70.19</td><td>43.11</td><td>44.67</td><td>47.20</td></tr><tr><td>Concat</td><td>83.11</td><td>61.64</td><td>31.76</td><td>28.33</td><td>70.19</td><td>43.11</td><td>44.15</td><td>47.01</td></tr><tr><td rowspan="3">Random alignment</td><td>Local</td><td>68.52</td><td>59.32</td><td>21.79</td><td>26.20</td><td>51.43</td><td>16.50</td><td>31.02</td><td>35.05</td></tr><tr><td>Global</td><td>81.99</td><td>53.85</td><td>35.10</td><td>31.39</td><td>51.43</td><td>16.50</td><td>34.71</td><td>37.66</td></tr><tr><td>Concat</td><td>82.49</td><td>57.22</td><td>34.83</td><td>30.91</td><td>51.43</td><td>16.50</td><td>34.97</td><td>38.18</td></tr><tr><td rowspan="3">Mean induction</td><td>Local</td><td>79.61</td><td>77.38</td><td>37.14</td><td>36.13</td><td>76.61</td><td>51.80</td><td>52.84</td><td>55.81</td></tr><tr><td>Global</td><td>83.82</td><td>55.08</td><td>29.92</td><td>24.70</td><td>76.61</td><td>51.80</td><td>43.82</td><td>47.62</td></tr><tr><td>Concat</td><td>84.96</td><td>57.12</td><td>31.93</td><td>31.41</td><td>76.61</td><td>51.80</td><td>46.92</td><td>49.77</td></tr></table>
212
+
213
+ In addition, we include another intuitive SBERT-based competing model for comparison. We first apply our own heuristics of phrase detection and alignment (thus, the model will have the same $F_{\mathsf{UP}}$ and $F_{\mathsf{UH}}$ scores); then, we directly train the phrasal NLI predictor by sentence-level labels. We obtain the sentence NLI prediction by taking argmax over Eq. (7). We call this STP (Sentence label Training Phrases). As seen, STP provides some meaningful phrasal reasoning results, because the training can smooth out the noise of phrasal labels, which are directly set as the sentence-level labels. But still, its performance is significantly lower than our EPR model.
214
+
215
+ We also experimented with a baseline of few-shot prompting with GPT-3 (Brown et al., 2020); implementation details are shown in Appendix A.2. We see that GPT-3 is able to provide more or less meaningful reasoning, and, surprisingly, its Contradiction $F$-score is higher than that of all competing methods. However, its overall mean $F$ scores are much lower. The results show that phrasal reasoning is challenging for pretrained language models, highlighting the importance of our task formulation and the proposed EPR approach even in the prompting era.
216
+
217
+ Among our EPR variants, we see that EPR with local phrase embeddings achieves the highest reasoning performance, and that EPR with concatenated features achieves a good balance between sentence-level accuracy and reasoning. Our EPR variants were run 5 times with different initializations, and standard deviations are reported in Table 2. As seen, our improvement over the best baseline is around 9.1-10.7 times the standard deviation in mean $F$ scores, which is a large margin. Assuming the $F$ scores are Gaussian distributed,$^4$ the improvement is also statistically significant ($p$-value $< 4.5\mathrm{e}{-20}$, comparing our worse variant with the best competing model by a one-sided test).
218
+
219
+ We further compare our EPR with non-reasoning models (Wang & Jiang, 2016; Radford et al., 2018), which are unable to provide phrasal explanations but may or may not achieve high sentence accuracy. The results show that our phrasal EPR model hurts the sentence-level accuracy by 2-4 points, when the model architecture is controlled. This resonates with traditional symbolic AI approaches (MacCartney & Manning, 2008), where interpretable models may not outperform black-box neural networks. Nevertheless, our sentence-level accuracy is still decent, outperforming a few classic neural models, including fuzzy logic applied to sentence embeddings (Mahabadi et al., 2020).
220
+
221
+ Analysis. We consider several ablated models to verify the effect of every component in our EPR model: (1) Random chunker, which splits the sentence randomly based on the number of chunks detected by our system; (2) Random aligner, which randomly aligns phrases but keeps the number of aligned phrases unchanged; (3) Semantic role labeling, which uses the semantic roles detected by AllenNLP (Gardner et al., 2018) as the reasoning unit; and (4) Mean induction, which induces the sentence NLI label by the geometric mean of phrasal NLI predictions. In addition, we consider local phrase embedding features, global features, and their concatenation for the above model variants. Due to the large number of settings, each variant was run only once; we do not view this as a concern because Table 2 shows the low variance of our approach. Also, the underlying language model is un-finetuned in our ablation study, as this yields slightly lower performance but is much more efficient.
222
+
223
+ As seen in Table 3, the random chunker and aligner yield poor phrasal reasoning performance, showing that working with meaningful semantic units and their alignments is important to logical reasoning. This also verifies that our word index-based metrics are able to evaluate phrase detection and alignment in an implicit manner. We further applied semantic role labeling as our reasoning unit. We find its performance is higher than the random chunker but lower than our method. This is because semantic role labeling is verb-centric, and the extracted spans may be incomplete.
226
+
227
+ Interestingly, local features yield higher reasoning performance, but global and concatenated features yield higher sentence accuracy. This is because global features provide aggregated information of the entire sentence and allow the model to bypass meaningful reasoning. In the variant of the mean induction, for example, the phrasal predictor can simply learn to predict the sentence-level label with global sentence information; then, the mean induction is an ensemble of multiple predictors. In this way, it achieves the highest sentence accuracy (0.43 points higher than our full model with concatenated features), but is 6 points lower in reasoning performance.
228
+
229
+ This reminds us of the long-standing debate between the old schools of AI (Chandrasekaran et al., 1988; Boucher & Dienes, 2003; Goel, 2022). Recent deep learning models take the connectionist view, and generally outperform symbolic approaches in terms of the ultimate prediction, but they lack expressible explanations. Combining neural and symbolic methods has become an active direction in recent AI research (Liang et al., 2017; Dong et al., 2018; Yi et al., 2018). In general, our EPR model with global features achieves high performance in both reasoning and ultimate prediction for the NLI task.
230
+
231
+ Results of Textual Explanation Generation. In this part, we apply EPR's predicted output—phrasal logical relationships—as factual knowledge to textual explanation generation. Most previous studies use the groundtruth sentence-level NLI label and/or highlighted rationales. This requires human annotations, which are resource-consuming to obtain. By contrast, we require no extra human-annotated resources; our factual knowledge is based on our weakly supervised reasoning approach.
232
+
233
+ Table 4: Textual explanation results on e-SNLI. Previous work uses auxiliary information (L: the groundtruth NLI label; H: human-annotated highlights), but we use neither. †Quoted from respective papers. ‡Evaluated by checkpoints. ∥Our replication with provided code.
234
+
235
+ <table><tr><td rowspan="2">Model</td><td colspan="2">Info</td><td colspan="2">BLEU</td><td colspan="2">SacreBLEU</td></tr><tr><td>L</td><td>H</td><td>2 refs</td><td>3 refs</td><td>2 refs</td><td>3 refs</td></tr><tr><td>Camburu et al. (2018)†</td><td>-</td><td>-</td><td>27.58</td><td>-</td><td>-</td><td>-</td></tr><tr><td>NILE (Kumar &amp; Talukdar, 2020)∥</td><td>✓</td><td>-</td><td>28.57</td><td>37.73</td><td>32.51</td><td>41.78</td></tr><tr><td>NILE (Kumar &amp; Talukdar, 2020)‡</td><td>✓</td><td>-</td><td>28.67</td><td>37.84</td><td>32.74</td><td>42.06</td></tr><tr><td>FinetunedWT5220M (Narang et al., 2020)†</td><td>✓</td><td>-</td><td>-</td><td>-</td><td>32.40</td><td>-</td></tr><tr><td>FinetunedWT511B (Narang et al., 2020)†</td><td>✓</td><td>-</td><td>-</td><td>-</td><td>33.70</td><td>-</td></tr><tr><td>LIREx (Zhao &amp; Vydiswaran, 2021)∥</td><td>✓</td><td>✓</td><td>17.22</td><td>22.40</td><td>21.24</td><td>26.68</td></tr><tr><td>Finetune T560M</td><td>-</td><td>-</td><td>27.75</td><td>36.78</td><td>31.74</td><td>40.89</td></tr><tr><td>+ Annotated Highlights64M</td><td>✓</td><td>✓</td><td>27.91</td><td>36.90</td><td>32.20</td><td>41.21</td></tr><tr><td>+ EPR Outputs64M (ours)</td><td>-</td><td>-</td><td>29.91</td><td>38.30</td><td>33.96</td><td>42.63</td></tr></table>
236
+
237
+ Table 4 shows our explanation generation performance on e-SNLI. Since evaluation metrics are not consistently used for explanation generation in previous studies, we replicate the approaches when the code or a checkpoint is available. For large pretrained models, we quote results from the previous paper (Narang et al., 2020). Their model, WT5, has 220M or 11B parameters depending on the underlying T5 model. Notably, we achieve higher performance with a 60M-parameter T5-small, which is $3.3\mathrm{x}$ and $170\mathrm{x}$ smaller in model size than the two WT5 variants, respectively.
238
+
239
+ In addition, we conducted a controlled experiment using the rationale highlights annotated by Camburu et al. (2018) for e-SNLI. It achieves a relatively small increase of 0.2-0.5 BLEU points, whereas our EPR's outputs yield a 2-point improvement. The difference in the performance gains shows that our EPR's phrasal logical relationships provide more valuable information than human-annotated highlights. In general, we achieve a new state of the art on e-SNLI with a small language model, demonstrating the importance of phrasal reasoning in textual explanations.
240
+
241
+ Additional Results. We show additional results as appendices. § C.1: Reasoning performance on the MNLI dataset; § C.2: Error analysis; § C.3: Case studies of our EPR model; and § C.4: Case studies of textual explanation generation.
242
+
243
+ Conclusion. The paper proposes an explainable phrasal reasoning (EPR) model for NLI with neural fuzzy logic, trained in a weakly supervised manner. We further propose an experimental design, including data annotation, evaluation metrics, and plausible baselines. Results show that phrasal reasoning for NLI is a meaningfully defined task, as humans can achieve high agreement. Our EPR achieves decent sentence-level accuracy, but much higher reasoning performance than all competing models. We also achieve a new state-of-the-art performance on e-SNLI textual explanation generation by applying EPR's phrasal logical relationships.
244
+
245
+ # REFERENCES
246
+
247
+ Islam Beltagy, Stephen Roller, Pengxiang Cheng, Katrin Erk, and Raymond J Mooney. Representing meaning with a combination of logical and distributional models. Computational Linguistics, pp. 763-808, 2016. URL https://aclanthology.org/J16-4007/.
248
+ Luke Boucher and Zoltán Dienes. Two ways of learning associations. Cognitive Science, 27(6):807-842, 2003. URL https://www.sciencedirect.com/science/article/pii/S0364021303000715.
249
+ Samuel Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. In EMNLP, pp. 632-642, 2015. URL https://aclanthology.org/D15-1075.
250
+ Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS, pp. 1877-1901, 2020. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
251
+ Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. e-SNLI: Natural language inference with natural language explanations. In NeurIPS, pp. 9539-9549, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/4c7a167bb329bd92580a99ce422d6fa6-Abstract.html.
252
+ Balakrishnan Chandrasekaran, Ashok Goel, and Dean Allemang. Connectionism and information processing abstractions. AI Magazine, 9(4):24-24, 1988. URL https://ojs.aaai.org/index.php/aimagazine/article/view/951.
253
+ Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. Enhanced LSTM for natural language inference. In ACL, pp. 1657-1668, 2017. URL https://aclanthology.org/P17-1152/.
254
+ Zeming Chen, Qiyue Gao, and Lawrence S Moss. NeuralLog: Natural language inference with joint neural and logical reasoning. arXiv preprint arXiv:2105.14167, 2021. URL https://arxiv.org/abs/2105.14167.
255
+ Anup Anand Deshmukh, Qianqiu Zhang, Ming Li, Jimmy Lin, and Lili Mou. Unsupervised chunking as syntactic structure induction with a knowledge-transfer approach. In Findings of EMNLP, pp. 3626-3634, 2021. URL https://aclanthology.org/2021.findings-emnlp.307.
256
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*, pp. 4171–4186, 2019. URL https://aclanthology.org/N19-1423.
257
+ Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305, 2020. URL https://arxiv.org/abs/2002.06305.
258
+ Honghua Dong, Jiayuan Mao, Tian Lin, Chong Wang, Lihong Li, and Denny Zhou. Neural logic machines. In ICLR, 2018. URL https://openreview.net/forum?id=B1xY-hRctX.
259
+ Yufei Feng, Quan Liu, Michael Greenspan, Xiaodan Zhu, et al. Exploring end-to-end differentiable natural logic modeling. In COLING, pp. 1172-1185, 2020. URL https://aclanthology.org/2020.coling-main.101.
260
+ Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. AllenNLP: A deep semantic natural language processing platform. In Proc. Workshop for NLP Open Source Software (NLP-OSS), pp. 1-6, 2018. URL https://aclanthology.org/W18-2501.
261
+
262
+ Ashok Goel. Looking back, looking ahead: Symbolic versus connectionist AI. AI Magazine, 42(4):83-85, 2022. URL https://ojs.aaai.org/index.php/aimagazine/article/view/15111.
263
+ John Hewitt and Christopher D Manning. A structural probe for finding syntax in word representations. In NAACL-HLT, pp. 4129-4138, 2019. URL https://aclanthology.org/N19-1419.
264
+ Hai Hu, Qi Chen, Kyle Richardson, Atreyee Mukherjee, Lawrence S Moss, and Sandra Kübler. MonaLog: A lightweight system for natural language inference based on monotonicity. In Proc. Society for Computation in Linguistics, pp. 284-293, 2020. URL https://aclanthology.org/2020.scil-1.40/.
265
+ Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-softmax. In ICLR, 2017. URL https://openreview.net/forum?id=rkE3y85ee.
266
+ Zhongtao Jiang, Yanzhe Zhang, Zhao Yang, Jun Zhao, and Kang Liu. Alignment rationale for natural language inference. In ACL-IJCNLP, pp. 5372-5387, 2021. URL https://aclanthology.org/2021.acl-long.417/.
267
+ Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. In EMNLP, pp. 6769-6781, 2020. URL https://aclanthology.org/2020.emnlp-main.550/.
268
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. URL https://arxiv.org/abs/1412.6980.
269
+ Sawan Kumar and Partha Talukdar. NILE: Natural language inference with faithful natural language explanations. In ACL, pp. 8730-8742, 2020. URL https://aclanthology.org/2020.acl-main.771.
270
+ Tao Lei, Regina Barzilay, and Tommi Jaakkola. Rationalizing neural predictions. In EMNLP, pp. 107-117, 2016. URL https://aclanthology.org/D16-1011/.
271
+ Bowen Li, Lili Mou, and Frank Keller. An imitation learning approach to unsupervised parsing. In ACL, pp. 3485-3492, 2019. URL https://aclanthology.org/P19-1338.
272
+ Chen Liang, Jonathan Berant, Quoc Le, Kenneth Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. In ACL, pp. 23-33, 2017. URL https://aclanthology.org/P17-1003/.
273
+ Xianggen Liu, Lili Mou, Haotian Cui, Zhengdong Lu, and Sen Song. Jumper: Learning when to make classification decisions in reading. In *IJCAI*, pp. 4237-4243, 2018. URL https://www.ijcai.org/proceedings/2018/0589.pdf.
274
+ Yang Liu and Mirella Lapata. Text summarization with pretrained encoders. In EMNLP-IJCNLP, pp. 3730-3740, 2019. URL https://aclanthology.org/D19-1387/.
275
+ Zhengdong Lu, Xianggen Liu, Haotian Cui, Yukun Yan, and Daqi Zheng. Object-oriented neural programming (OONP) for document understanding. In ACL, pp. 2717-2726, 2018. URL https://aclanthology.org/P18-1253.
276
+ Bill MacCartney and Christopher D Manning. Natural logic for textual inference. In Proc. ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 193-200, 2007. URL https://aclanthology.org/W07-1431/.
277
+ Bill MacCartney and Christopher D. Manning. Modeling semantic containment and exclusion in natural language inference. In *COLING*, pp. 521-528, 2008. URL https://aclanthology.org/C08-1066.
278
+ Bill MacCartney and Christopher D Manning. An extended model of natural logic. In Proc. International Conference on Computational Semantics, pp. 140-156, 2009. URL https://aclanthology.org/W09-3714.
279
+ Bill MacCartney, Michel Galley, and Christopher D Manning. A phrase-based alignment model for natural language inference. In EMNLP, pp. 802-811, 2008. URL https://aclanthology.org/D08-1084.
280
+
281
+ Rabeeh Karimi Mahabadi, Florian Mai, and James Henderson. Learning entailment-based sentence embeddings from natural language inference. Online Manuscript, 2020. URL https://openreview.net/forum?id=BkxackSKvH.
282
+ Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In ICLR, 2019. URL https://openreview.net/forum?id=rJgM1hRctm.
283
+ Lili Mou, Rui Men, Ge Li, Yan Xu, Lu Zhang, Rui Yan, and Zhi Jin. Natural language inference by tree-based convolution and heuristic matching. In ACL, pp. 130-136, 2016. URL https://aclanthology.org/P16-2022.
284
+ Lili Mou, Zhengdong Lu, Hang Li, and Zhi Jin. Coupling distributed and symbolic execution for natural language queries. In ICML, pp. 2518-2526, 2017. URL https://proceedings.mlr.press/v70/mou17a.html.
285
+ Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, and Karishma Malkan. WT5?! Training text-to-text models to explain their predictions. arXiv preprint arXiv:2004.14546, 2020. URL https://arxiv.org/abs/2004.14546.
286
+ Ken Nozaki, Hisao Ishibuchi, and Hideo Tanaka. A simple but powerful heuristic method for generating fuzzy rules from numerical data. Fuzzy Sets and Systems, 86(3):251-270, 1997. URL https://www.sciencedirect.com/science/article/abs/pii/0165011495004130.
287
+ Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In ACL, pp. 311-318, 2002. URL https://aclanthology.org/P02-1040.
288
+ Ankur Parikh, Oscar Täckström, Dipanjan Das, and Jakob Uszkoreit. A decomposable attention model for natural language inference. In EMNLP, pp. 2249-2255, 2016. URL https://aclanthology.org/D16-1244/.
289
+ Matt Post. A call for clarity in reporting BLEU scores. In Proc. Conference on Machine Translation: Research Papers, pp. 186-191, 2018. URL https://aclanthology.org/W18-6319.
290
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. OpenAI Blog, 2018. URL https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf.
291
+ Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In EMNLP, 2019. URL https://aclanthology.org/D19-1410.
292
+ Cynthia Rudin. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5):206-215, 2019. URL https://www.nature.com/articles/s42256-019-0048-x.
293
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, pp. 5998-6008, 2017. URL https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
294
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, 2019. URL https://openreview.net/forum?id=rJ4km2R5t7.
295
+ Shuohang Wang and Jing Jiang. Learning natural language inference with LSTM. In NAACL-HLT, pp. 1442-1451, 2016. URL https://aclanthology.org/N16-1170/.
296
+ Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In NeurIPS, 2022. URL https://openreview.net/forum?id=_VjQlMeSB_J.
297
+
298
+ Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In *NAACL-HLT*, pp. 1112–1122, 2018. URL https://aclanthology.org/N18-1101.
299
+ Wenhan Xiong, Thien Hoang, and William Yang Wang. DeepPath: A reinforcement learning method for knowledge graph reasoning. In EMNLP, pp. 564-573, 2017. URL https://aclanthology.org/D17-1060/.
300
+ Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. HELP: A dataset for identifying shortcomings of neural models in monotonicity reasoning. In Proc. Conference on Lexical and Computational Semantics, pp. 250-255, 2019a. URL https://aclanthology.org/S19-1027.
301
+ Hitomi Yanaka, Koji Mineshima, Daisuke Bekki, Kentaro Inui, Satoshi Sekine, Lasha Abzianidze, and Johan Bos. Can neural networks understand monotonicity reasoning? In ACL BlackboxNLP Workshop, pp. 31-40, 2019b. URL https://aclanthology.org/W19-4804.
302
+ Kexin Yi, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Josh Tenenbaum. Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. In NeurIPS, 2018. URL https://proceedings.neurips.cc/paper/2018/file/5e388103a391daabe3de1d76a6739ccd-Paper.pdf.
303
+ Deunsol Yoon, Dongbok Lee, and SangKeun Lee. Dynamic self-attention: Computing attention over words dynamically for sentence embedding. arXiv preprint arXiv:1808.07383, 2018. URL https://arxiv.org/abs/1808.07383.
304
+ Lotfi A Zadeh. Fuzzy logic. Computer, 21(4):83-93, 1988. URL https://ieeexplore.ieee.org/abstract/document/53.
305
+ Lotfi A Zadeh. Fuzzy sets. In *Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems*, pp. 394-432. World Scientific, 1996. URL https://www.worldscientific.com/doi/abs/10.1142/9789814261302_0021.
306
+ Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou. Semantics-aware BERT for language understanding. In AAAI, pp. 9628-9635, 2020. URL https://ojs.aaai.org/index.php/AAAI/article/view/6510.
307
+ Xinyan Zhao and V. G. Vinod Vydiswaran. LIREx: Augmenting language inference with relevant explanations. In AAAI, pp. 14532-14539, 2021. URL https://ojs.aaai.org/index.php/AAAI/article/view/17708.
308
+
309
+ # A IMPLEMENTATION DETAILS
310
+
311
+ # A.1 PHRASE DETECTION
312
+
313
+ We present more details about our phrase detection. We use spaCy$^5$ to obtain the part-of-speech (POS) tag$^6$ of every word. spaCy also tags noun phrases. However, if a noun phrase follows a preposition (whose fine-grained POS tag is IN), we remove it from the noun phrases and instead tag the combined span as a prepositional phrase.
314
+
315
+ In addition, we extract verbs by the POS tag VERB. A verb may be followed by a particle whose fine-grained POS tag is RP (e.g., show off); the two are treated as one verb phrase. To handle negation, we allow an optional AUX and NOT before a verb (e.g., could not help). Such cases, however, account for less than $1\%$ of the dataset and do not affect our model much.
316
+
317
+ To capture other potential semantic units, we treat the remaining open class words<sup>7</sup> as individual phrases. Finally, the remaining non-content words (in the categories of closed words and others) are discarded (e.g., "there is"). This is appropriate because they do not represent meaningful semantics or play a role in reasoning.
318
+
319
+ Table 5: Our rules for phrase detection. "[ ]" indicates an optional item.
320
+
321
+ <table><tr><td colspan="4">Example: The woman is showing off her blue dog at the playground.</td></tr><tr><td>Number</td><td>Phrase type</td><td>Rule</td><td>Extracted phrase(s)</td></tr><tr><td>1</td><td>Prepositional phrase</td><td>IN + NP</td><td>at the playground</td></tr><tr><td>2</td><td>Noun phrase</td><td>NP</td><td>The woman|her blue dog</td></tr><tr><td>3</td><td>Verb phrase</td><td>[AUX] + [NOT] + VERB + [RP]</td><td>is showing off</td></tr><tr><td>4</td><td>Others</td><td>Other open class words</td><td>-</td></tr></table>
322
+
323
+ ![](images/8248aa4102f7171ad75b057337f1e3f4e19a75b3822c2d4ea449260d0811d919.jpg)
324
+ Figure 4: Results of tuning the coefficient of global features.
325
+
326
+ ![](images/0103103fde1e521ef699f5f4acd134e4a8292249cc3d31cc07a1f2ae43aa5200.jpg)
327
+
328
+ Table 5 summarizes all the rules used in our approach. They are executed in order, and extracted phrases are exclusive. For example, the playground in the phrase at the playground will not be treated as a standalone noun phrase, as it is already part of a prepositional phrase.
329
+
330
+ Empirically, our rule-based approach works well for the NLI dataset, and our logical reasoning is at the granularity of the extracted phrases.
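+
+ To make the procedure concrete, below is a simplified spaCy sketch of the Table 5 rules; the real system may differ in details (e.g., the optional AUX/NOT handling is omitted here, and the function name is ours).
+
+ ```python
+ import spacy
+
+ nlp = spacy.load("en_core_web_sm")
+
+ def detect_phrases(text):
+     """Apply the Table 5 rules in order; extracted spans are kept exclusive."""
+     doc = nlp(text)
+     phrases, used = [], set()
+     for np_ in doc.noun_chunks:                     # Rules 1-2
+         start = np_.start
+         if start > 0 and doc[start - 1].tag_ == "IN":
+             start -= 1                              # IN + NP -> prep. phrase
+         phrases.append(doc[start:np_.end])
+         used.update(range(start, np_.end))
+     for tok in doc:                                 # Rule 3 (simplified)
+         if tok.i in used or tok.pos_ != "VERB":
+             continue
+         end = tok.i + 1
+         if end < len(doc) and doc[end].tag_ == "RP":
+             end += 1                                # verb + particle
+         phrases.append(doc[tok.i:end])
+         used.update(range(tok.i, end))
+     open_class = {"NOUN", "PROPN", "ADJ", "ADV", "INTJ"}
+     for tok in doc:                                 # Rule 4
+         if tok.i not in used and tok.pos_ in open_class:
+             phrases.append(doc[tok.i:tok.i + 1])
+     return sorted(phrases, key=lambda s: s.start)
+ ```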
331
+
332
+ # A.2 SETTINGS
333
+
334
+ Details of the EPR Model. We chose the pretrained model all-mpnet-base-v2$^8$ from the Sentence-BERT study (Reimers & Gurevych, 2019) and obtained 768-dimensional local and global phrase embeddings. Our MLP had the same dimension as the embeddings, i.e., 768D for the local and global variants, or 1536D for the concatenation variant. We chose the coefficient $\gamma$ for the global feature in Eq. (1) from a candidate set of $\{0.0, 0.2, 0.4, 0.6, 0.8, 1.0\}$ . Figure 4 shows the hyperparameter tuning results on SNLI (mentioned in § 4.2) and MNLI (to be discussed in § C.1). We find that 0.4 yields the best sentence accuracy on SNLI, and that 1.0 is the best for MNLI. As our focus is on reasoning, we set the coefficient to 0.6, because it yields the highest phrasal reasoning performance and decent sentence-level performance in both experiments, in terms of both the geometric and arithmetic means of $F$ scores. The pretrained language model (LM) was either finetuned or un-finetuned during training: finetuning yields higher performance (Table 2), whereas an un-finetuned LM is more efficient for in-depth analyses (Table 3). We trained the model with a batch size of 256. We used Adam (Kingma & Ba, 2015) with a learning rate of 5e-5, $\beta_1 = 0.9$ , $\beta_2 = 0.999$ , learning rate warm-up over the first 10% of the total steps, and linear decay of the learning rate. The model was trained for up to 3 epochs, following common practice (Dodge et al., 2020). Our main model variants were trained 5 times with different parameter initializations, and we report the mean and standard deviation.
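+
+ As one way to realize the described schedule (Adam with warm-up over the first 10% of steps, then linear decay), here is a hedged sketch using HuggingFace's scheduler helper; the function name and defaults are our assumptions.
+
+ ```python
+ import torch
+ from transformers import get_linear_schedule_with_warmup
+
+ def make_optimizer(model, total_steps, lr=5e-5):
+     # Illustrative sketch; not the authors' exact training loop.
+     opt = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))
+     sched = get_linear_schedule_with_warmup(
+         opt,
+         num_warmup_steps=int(0.1 * total_steps),  # warm up over first 10%
+         num_training_steps=total_steps,           # then decay linearly
+     )
+     return opt, sched
+ ```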
335
+
336
+ Details of Textual Explanation Generation. We used the pretrained T5-small model for finetuning with a batch size of 32. The optimizer was Adam with an initial learning rate of 3e-4, $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , learning rate warm-up for the first 2 epochs, and linear decay of the learning rate up to 10 epochs; then we decreased the learning rate to 3e-6 and trained the model until the validation BLEU score did not increase for 2 epochs.
337
+
338
+ ![](images/1084e0ae9a63b49c6777536940cc338080d3b2b18180e7db18d99603a3053c0d.jpg)
339
+ Figure 5: The prompt for phrasal reasoning.
340
+
341
342
+
343
+ Details of the Prompting Baseline. We adopted GPT-3 (the text-davinci-003 version with 175B parameters; Brown et al., 2020) as a prompting baseline to assess large language models' (LLMs') phrasal reasoning ability.
344
+
345
+ We consider exemplar-based prompting, because it is unlikely for an LLM to output structured reasoning results in a zero-shot manner. Moreover, our examples are chosen to cover all reasoning cases. We also set the temperature of decoding to 0 to obtain deterministic reasoning, following CoT prompting (Wei et al., 2022). Rule-based post-processing was applied to extract slot values. Figure 5 presents the prompt used for phrasal reasoning.
346
+
347
+ # B DATA ANNOTATION AND REASONING EVALUATION METRICS
348
+
349
+ Previous studies have not explicitly evaluated reasoning performance. Typically, they resort to sentence-level classification accuracy (Wang & Jiang, 2016; Mahabadi et al., 2020) or case studies (Parikh et al., 2016; Feng et al., 2020) to demonstrate the effectiveness of their purportedly interpretable models, which we believe is inadequate.
350
+
351
+ Therefore, we annotated a model-agnostic corpus about phrasal logical relationships and developed a set of metrics to evaluate the phrasal reasoning performance quantitatively. The resources are released on our website (Footnote 1) to facilitate future research.
352
+
353
+ # B.1 DATA ANNOTATION
354
+
355
+ We annotated the phrases and their logical relationships in each data sample. The annotators were asked to select corresponding phrases from both the premise and hypothesis, and label them as Entailment, Contradiction, or Neutral, with the sentence-level NLI label given. Annotators could also select a phrase from either the premise or the hypothesis and label it as Unaligned. The process is repeated until all phrases in a data sample are labeled. Figure 6 shows a screenshot of our annotation page. In the left panel, the annotator selects phrases in the two sentences and marks them with NLI labels. In the right panel, the annotator can view a sample's annotated phrases and navigate through different samples.
356
+
357
+ The annotation was performed by three in-lab researchers who are familiar with the NLI task. Our preliminary study showed low agreement when annotators were unfamiliar with the task; thus, it is inappropriate to recruit crowdworkers (e.g., on Amazon Mechanical Turk) for annotation. We randomly selected 100 samples for annotation, following previous work on textual explanation for SNLI (Camburu et al., 2018), which is adequate to show statistical significance.
358
+
359
+ ![](images/b4fe9d3be55d6fd2a0483619e745b2fe18ac33cf34df0d009798b3d99827c502.jpg)
360
+ Figure 6: A screenshot of the annotation page.
361
+
362
+ ![](images/6b790ebae135bb16bcb093f146fe34e93ccaf5d0c14c254e08af31be87a70402.jpg)
363
+
364
+ Table 6: Examples illustrating the proposed metrics, where we consider the Entailment category. "|" denotes a phrase boundary.
365
+
366
+ <table><tr><td colspan="10">Example annotation of entailment (in highlight): Premise: A kid in red is playing in a garden. Hypothesis: A child in red is watching TV in the bedroom.</td></tr><tr><td>#</td><td>Example Output</td><td>PE(P)</td><td>PE(H)</td><td>PE</td><td>RE(P)</td><td>RE(H)</td><td>RE</td><td>FE</td><td>Explanation</td></tr><tr><td>1</td><td>PH in a garden</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>Although in occurs in the annotation, the word indexes are different. The reasoning is wrong.</td></tr><tr><td>2</td><td>PH watching TV</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>Mis-matched phrases in hypothesis. The reasoning is wrong.</td></tr><tr><td>3</td><td>PH a kid | in red</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>All word indexes match the annotation. The reasoning is correct.</td></tr></table>
367
+
368
+ Since our annotation only concerns data samples, it is agnostic to any machine learning model.
369
+
370
+ # B.2 EVALUATION METRICS FOR PHRASAL REASONING
371
+
372
+ We propose a set of $F$ -scores in Entailment, Contradiction, Neutral, and Unaligned to quantitatively evaluate the phrasal reasoning performance. We first introduce our metric for one data sample and then explain the extension to a corpus.
373
+
374
+ Consider the Entailment category as an example. We first count the number of "hits" (true positives) between the word indexes of model output and annotation. Using word indexes (instead of words) rules out hitting the words in misaligned phrases (Example 1, Table 6). Then, we calculate precision scores for the premise and hypothesis, denoted by $P_{\mathsf{E}}^{(P)}$ and $P_{\mathsf{E}}^{(H)}$ , respectively. Their geometric mean $P_{\mathsf{E}} = (P_{\mathsf{E}}^{(P)}P_{\mathsf{E}}^{(H)})^{1 / 2}$ is considered as the precision for Entailment. Here, the geometric mean rules out incorrect reasoning that hits either the premise or hypothesis, but not both (Example 2, Table 6). Further, we compute the recall score $R_{\mathsf{E}}$ in a similar way, and finally obtain the $F$ -score by $F_{\mathsf{E}} = \frac{2P_{\mathsf{E}}R_{\mathsf{E}}}{P_{\mathsf{E}} + R_{\mathsf{E}}}$ . Likewise, $F_{\mathsf{C}}$ and $F_{\mathsf{N}}$ are calculated for Contradiction and Neutral. In addition, we compute the $F$ -score for unaligned phrases in premise and hypothesis, denoted by $F_{\mathsf{UP}}$ and $F_{\mathsf{UH}}$ , respectively.
375
+
376
+ When calculating our $F$ -scores for a corpus, we use micro-average, i.e., the precision and recall ratios are calculated in the corpus level. This is more stable, especially considering the varying lengths of sentences. Moreover, we compare model output against three annotators and perform an arithmetic average, further reducing the variance caused by ambiguity.
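+
+ As an illustration, here is a minimal Python sketch of the per-category $F$-score (Entailment shown) computed from word-index sets; in practice, the hit counts are accumulated over the whole corpus (micro-average). All names are our assumptions.
+
+ ```python
+ def f_entailment(pred_p, gold_p, pred_h, gold_h):
+     """pred_*/gold_*: sets of word indexes tagged Entailment on each side."""
+     def ratio(hit, total):
+         return hit / total if total else 0.0
+     # Precision per side, combined by geometric mean (both sides must hit).
+     prec = (ratio(len(pred_p & gold_p), len(pred_p)) *
+             ratio(len(pred_h & gold_h), len(pred_h))) ** 0.5
+     # Recall per side, combined the same way.
+     rec = (ratio(len(pred_p & gold_p), len(gold_p)) *
+            ratio(len(pred_h & gold_h), len(gold_h))) ** 0.5
+     return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
+ ```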
377
+
378
+ Table 7: Results on MNLI. †Quoted from respective papers. ‡Our replication.
379
+
380
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Sent Acc</td><td colspan="6">Reasoning Performance</td></tr><tr><td>FE</td><td>FC</td><td>FUP</td><td>FUH</td><td>GM</td><td>AM</td></tr><tr><td>Human</td><td>-</td><td>85.15</td><td>73.44</td><td>73.18</td><td>46.31</td><td>67.85</td><td>69.52</td></tr><tr><td colspan="8">Non-reasoning methods</td></tr><tr><td>Mahabadi et al. (2020)†</td><td>73.8</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>LSTM (Wang et al., 2019)†</td><td>72.2</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Transformer (Radford et al., 2018)</td><td>82.1</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="8">Reasoning methods</td></tr><tr><td>NNL (Feng et al., 2020)‡</td><td>61.28</td><td>50.33</td><td>32.00</td><td>49.78</td><td>0.00</td><td>0.00</td><td>33.03</td></tr><tr><td>STP</td><td>75.15</td><td>55.47</td><td>51.72</td><td>64.32</td><td>37.57</td><td>51.31</td><td>52.27</td></tr><tr><td>EPR (Concat, LM finetuned)</td><td>79.65±0.19</td><td>61.76±0.32</td><td>52.09±0.41</td><td>64.32</td><td>37.57</td><td>52.80±0.07</td><td>53.93±0.07</td></tr></table>
381
+
382
+ It should be emphasized that our metrics evaluate phrase detection and alignment in an implicit manner. A poor phrase detector and aligner will result in a low reasoning score (shown in our ablation study), but we do not explicitly calculate phrase detection and alignment accuracy. This helps us cope with the ambiguity of the phrase granularity (Example 3, Table 6).
383
+
384
+ To summarize, we propose an evaluation framework including data annotation (§ B.1) and evaluation metrics (§ B.2). These are our contributions in formulating the phrasal reasoning task for NLI.
385
+
386
+ # C ADDITIONAL RESULTS
387
+
388
+ # C.1 RESULTS ON MNLI
389
+
390
+ In this appendix, we provide additional results on the matched section of the MNLI dataset (Williams et al., 2018), which consists of 393K training samples, 10K validation samples, and another 10K test samples. It has the same format as the SNLI dataset, but samples come from multiple domains and are more diverse. We follow § 4.1 and use the same protocol to create the phrasal reasoning annotation for the MNLI dataset based on 100 randomly selected samples. However, we found that MNLI is much noisier than SNLI; particularly, the sentences labeled as Neutral in MNLI share few related phrases. For example, the two sentences do not have much in common in the sample "Premise: If you still want to join, it might be worked." and "Hypothesis: Your membership is the only way that this could work". Moreover, the inter-human agreement is low in the Neutral category. Therefore, we believe the corpus quality is less satisfactory for Neutral. To ensure meaningful evaluation, we ignored the evaluation of Neutral in this experiment, although our reasoning approach is not changed. The remaining 60 samples containing Entailment and Contradiction serve as the MNLI phrasal reasoning corpus.
391
+
392
+ We consider the EPR variant with concatenated local and global features, since the SNLI experiment shows it achieves a good balance between sentence-level accuracy and reasoning. Our models were run 5 times with different initializations.
393
+
394
+ As seen in Table 7, our EPR approach is again worse than humans, but largely improves the reasoning performance compared with the NNL and STP baselines. Its sentence-level prediction is comparable to (although slightly lower than) finetuning Transformers. The results are highly consistent with the SNLI experiments, showing the robustness of our approach.
395
+
396
+ It is important to note that the EPR model here is trained on MNLI sentence labels and is not transferred from the SNLI dataset. In our preliminary experiments, we tried transfer learning from SNLI to MNLI and failed to obtain satisfactory performance. We found that our EPR is more prone to the out-of-vocabulary issue (i.e., it does not predict well for phrases in the new domain), whereas a black-box neural network may learn biased sentence patterns and achieve higher performance in transfer learning.
397
+
398
+ # C.2 ERROR ANALYSIS
399
+
400
+ To show how phrasal reasoning affects sentence-level prediction, we perform an error analysis in Table 8. Specifically, we examine the reasoning performance (arithmetic mean of $F$-scores) when the sentence label is correctly and incorrectly predicted on the SNLI dataset. As shown, EPR models
401
+
402
+ Table 8: Sentence-level prediction count and arithmetic average reasoning performance ($F$-score) when the sentence label is correctly and incorrectly predicted on the SNLI dataset.
403
+
404
+ <table><tr><td rowspan="2">Sentence-level prediction</td><td colspan="2">Count (in percentage)</td><td colspan="2">Reasoning performance (AMF)</td></tr><tr><td>Local finetuned</td><td>Concat finetuned</td><td>Local finetuned</td><td>Concat finetuned</td></tr><tr><td>Correct</td><td>75.4±1.36</td><td>87.8±0.75</td><td>65.71±0.83</td><td>58.68±0.67</td></tr><tr><td>Wrong</td><td>24.6±1.36</td><td>12.2±0.75</td><td>40.74±2.01</td><td>37.58±3.28</td></tr><tr><td>Overall</td><td>100.0±0.00</td><td>100.0±0.00</td><td>59.93±0.67</td><td>56.32±1.13</td></tr></table>
405
+
406
+ <table><tr><td>Groundtruth: Entailment Prediction: Entailment<br>Three young boys enjoying a day at the beach.<br>(a)<br>The boys are in the beach.</td><td>Groundtruth: Contradiction Prediction: Contradiction<br>A man playing fetch with two brown dogs.<br>(b)<br>The dogs are asleep.</td><td>Entailment<br>Contradiction<br>Neutral<br>Unaligned</td></tr><tr><td>Groundtruth: Neutral Prediction: Neutral<br>Walkers on a concrete boardwalk under a blue sky.<br>(c)<br>Walkers under a blue sky near the beach.</td><td colspan="2">Groundtruth: Entailment Prediction: Neutral<br>An elderly couple in heavy coats are looking at black and white photos displayed on a wall.<br>(d)<br>Octogenarians admiring the old photographs that decorated the wall.</td></tr></table>
422
+
423
+ Figure 7: Examples of explainable phrasal reasoning predicted by our EPR model. Words in one color block are detected phrases, a dotted line shows the alignment of two phrases, and the color represents the predicted phrasal NLI label. In Example (d), EPR's prediction suggests the provided label in SNLI is incorrect.
424
+
425
+ with both local and concatenated features have much higher reasoning performance when sentence labels are correctly predicted than when they are incorrectly predicted. The positive correlation between phrasal reasoning performance and sentence-level accuracy shows that our fuzzy logic induction rules indeed make sense.
426
+
427
+ We also find that the model with local features has higher reasoning performance than the one with concatenated features, even when the sentence-level prediction is wrong. This is because the local model is unaware of the context of the sentences. Thus, it must perform strict phrasal reasoning based on the induction rules, even when the reasoning process is imperfect and leads to sentence-level errors.
428
+
429
+ # C.3 CASE STUDY OF EPR
430
+
431
+ We present case studies of EPR in Figure 7. Our EPR performs impressive reasoning for the NLI task, despite being learned in a weakly supervised manner with only sentence-level labels.
432
+
433
+ In Example (a), the two sentences are predicted Entailment because three young boys entails the boys and at the beach entails in the beach, whereas unaligned phrases enjoying and a day are allowed in the premise for Entailment. In Example (b), playing contradicts asleep, so the two sentences are predicted Contradiction. Likewise, Example (c) is predicted Neutral because the aligned phrases on a concrete boardwalk and near the beach are neutral.
434
+
435
+ In our study, we also find several interesting examples where EPR's reasoning provides clues suggesting that the target labels may be incorrect in the SNLI dataset. In Example (d), our model predicts Neutral for looking and admiring, as well as for at black and white photos and the old photographs. Thus, the two sentences are predicted Neutral instead of the provided label Entailment. We believe our model's reasoning and prediction are correct, because people looking at something may or may not admire it; a black-and-white photo may or may not be an old photo (as it could be a black-and-white artistic photo).
436
+
437
+ # C.4 CASE STUDY OF THE TEXTUAL EXPLANATION GENERATION
438
+
439
+ We conduct another case study to show how EPR's reasoning is used in the textual explanation generation task. As seen in Figure 8, our EPR reasoning yields structured factual tuples: on a deserted beach entailing at the beach, Some dogs contradicting only one dog, and running unaligned (matched with a special token [EMPTY]). Our explanation generation model attends to these factual tuples, and the heat map shows that our model assigns the highest attention weight (with an average of
440
+
441
+ <table><tr><td colspan="4">Input Premise: Some dogs are running on a deserted beach.<br>Hypothesis: There is only one dog at the beach.</td></tr><tr><td colspan="4">Label: Contradiction (not used during our explanation generation)</td></tr><tr><td colspan="4">EPR's Reasoning Output</td></tr><tr><td>Premise phrase</td><td>Hypothesis phrase</td><td>EPR label</td><td>Attention score</td></tr><tr><td>on a deserted beach</td><td>at the beach</td><td>E</td><td>23.16</td></tr><tr><td>Some dogs</td><td>only one dog</td><td>C</td><td>61.22</td></tr><tr><td>running</td><td>[EMPTY]</td><td>E</td><td>15.62</td></tr><tr><td colspan="4">Output explanation: Some dogs is more than one dog.</td></tr><tr><td colspan="4">Reference explanations:<br>(1) Some is more than one, therefore there can't be only one dog.<br>(2) Some indicates more than one dog. One dog is not some dogs.<br>(3) Some dogs are not one dog.</td></tr></table>
446
+
447
+ ![](images/7aad61fa82aba6c0d7d230334d58e9d5a183d8bb903f0c5c9b91377fcfef4dd7.jpg)
448
+ Figure 8: Case study of the textual explanation generation. The heat map shows the step-by-step and average attention weights to the factual tuples (vertical axis).
449
+
450
+ 0.61) to the tuple, Some dogs contradicting only one dog, to generate the explanation "Some dogs is more than one dog." This example illustrates that the factual tuples given by our EPR model provide meaningful information and can improve textual explanation generation.
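+
+ As a minimal illustration of this attention mechanism (ours, with toy dimensions; no claim about the exact generator architecture), one decoding step attending over the encoded factual tuples can be written as:
+
+ ```python
+ import numpy as np
+
+ def attend_to_tuples(tuple_embs: np.ndarray, query: np.ndarray):
+     """Scaled dot-product attention of one decoding step over the rows
+     of tuple_embs (one embedding per factual tuple)."""
+     scores = tuple_embs @ query / np.sqrt(query.shape[-1])
+     weights = np.exp(scores - scores.max())
+     weights /= weights.sum()            # one weight per factual tuple
+     context = weights @ tuple_embs      # fed to the decoder at this step
+     return weights, context
+
+ # Three tuples (E, C, E in Figure 8) encoded as toy 4-d vectors:
+ tuples = np.random.randn(3, 4)
+ step_weights, _ = attend_to_tuples(tuples, np.random.randn(4))
+ print(step_weights)  # averaging over steps yields the heat-map column
+ ```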
451
+
452
+ # D LIMITATION AND FUTURE WORK
453
+
454
+ This paper performs phrase detection and alignment with heuristics. These heuristics work well empirically in our experiments, although further improvement is possible (for example, by considering syntactic structures). However, our main focus is neural fuzzy logic for weakly supervised reasoning, which largely differs from previous work based on manually designed lexicons and rules (Hu et al., 2020; Chen et al., 2021).
455
+
456
+ Our long-term goal is to develop a weakly supervised, end-to-end trained neuro-symbolic system that can extract semantic units and perform reasoning for a given downstream NLP task. This paper is an important milestone toward the long-term goal.
457
+
458
+ # E ETHICAL STATEMENTS
459
+
460
+ Our work involves human annotation of the phrasal logical relationships. Since the research subject here is logic (rather than humans), there are minimal ethical concerns. We nevertheless followed a standard protocol of human evaluation (involving identity protection and proper compensation), approved by our institutional ethics board.
461
+
462
+ # ACKNOWLEDGMENTS
463
+
464
+ We thank all reviewers and chairs for their valuable comments. The research is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under Grant No. RGPIN2020-04465, the Amii Fellow Program, the Canada CIFAR AI Chair Program, a UAHJIC project, a donation from DeepMind, and the Digital Research Alliance of Canada (alliancecan.ca). Atharva Naik contributed to the research as an intern at the University of Alberta through the Mitacs Globalink program.
2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f31dd8ed05c77954ccd22f7f9551b42d0afda22c9ee115265e0729d592a496a5
3
+ size 876410
2023/Weakly Supervised Explainable Phrasal Reasoning with Neural Fuzzy Logic/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/95efb798-c3e3-43db-b4b3-c866d3d1db85_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/95efb798-c3e3-43db-b4b3-c866d3d1db85_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/95efb798-c3e3-43db-b4b3-c866d3d1db85_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:606d8ae42d37831de18fddb18faaa24d4b465e91b2b091b1d7070ef6eb6b9d6d
3
+ size 1521830
2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/full.md ADDED
@@ -0,0 +1,633 @@
 
 
 
 
1
+ # WEAKLY SUPERVISED KNOWLEDGE TRANSFER WITH PROBABILISTIC LOGICAL REASONING FOR OBJECT DETECTION
2
+
3
+ Martijn Oldenhof
4
+
5
+ ESAT-STADIUS
6
+
7
+ KU Leuven, Belgium
8
+
9
+ martijn.oldenhof@kuleuven.be
10
+
11
+ Adam Arany
12
+
13
+ ESAT-STADIUS
14
+
15
+ KU Leuven, Belgium
16
+
17
+ adam.arany@esat.kuleuven.be
18
+
19
+ Yves Moreau
20
+
21
+ ESAT-STADIUS
22
+
23
+ KU Leuven, Belgium
24
+
25
+ yves.moreau@esat.kuleuven.be
26
+
27
+ Edward De Brouwer
28
+
29
+ ESAT-STADIUS
30
+
31
+ KU Leuven, Belgium
32
+
33
+ edward.debrouwer@gmail.com
34
+
35
+ # ABSTRACT
36
+
37
+ Training object detection models usually requires instance-level annotations, such as the positions and labels of all objects present in each image. Such supervision is unfortunately not always available and, more often, only image-level information is provided, also known as weak supervision. Recent works have addressed this limitation by leveraging knowledge from a richly annotated domain. However, the scope of weak supervision supported by these approaches has been very restrictive, preventing them from using all available information. In this work, we propose ProbKT, a framework based on probabilistic logical reasoning that allows training object detection models with arbitrary types of weak supervision. We empirically show on different datasets that using all available information is beneficial, as ProbKT leads to significant improvement on the target domain and better generalization compared to existing baselines. We also showcase the ability of our approach to handle complex logic statements as supervision signal. Our code is available at https://github.com/molden/ProbKT
38
+
39
+ # 1 INTRODUCTION
40
+
41
+ Object detection is a fundamental ability of numerous high-level machine learning pipelines such as autonomous driving [4; 16], augmented reality [42] or image retrieval [17]. However, training state-of-the-art object detection models generally requires detailed image annotations such as the box-coordinates location and the labels of each object present in each image. While several large benchmark datasets with detailed annotations are available [26; 15], providing such detailed annotations for new, specific datasets comes with a significant cost that is often not affordable for many applications.
42
+
43
+ More frequently, datasets come with only limited annotation, also referred to as weak supervision. This has sparked research in weakly-supervised object detection approaches [25; 6; 40], using techniques such as multiple instance learning [40] or variations of class activation maps [3]. However, these approaches have been shown to significantly underperform their fully-supervised counterparts in terms of robustness and accurate localization of the objects [39].
44
+
45
+ An appealing and intuitive approach to improve the performance of weakly supervised object detection is to perform transfer learning from an existing object detection model pre-trained on a fully annotated dataset [14; 46; 43]. This approach, also referred to as transfer learning or domain adaptation, consists in leveraging transferable knowledge from the pre-trained model (such as bounding box prediction capabilities) in the new weakly supervised domain. This transfer has been embodied in different ways in the literature. Examples include a simple fine-tuning of the classifier of bounding box proposals of the pre-trained model [43], or an iterative relabeling of the weakly supervised dataset for retraining a new full object detection model on the re-labeled data [46].
46
+
47
+ ![](images/243aadad3b25bc99b81b9d9924300b2d9c50d1ddde480b20145264765a247667.jpg)
48
+ Figure 1: ProbKT: Weakly supervised knowledge transfer with probabilistic logical reasoning. (Left) A model can be trained on the source domain using full supervision (labels, positions) but only on a limited set of shapes (cylinders and spheres). (Middle) The pre-trained model does not recognize the cubes from the target domain correctly. (Right) The model can adapt to the target domain after applying ProbKT and can recognize the cubes.
49
+
50
+ However, existing approaches are very restrictive in the type of weak supervision they are able to harness. Indeed, some do not support new object classes in the new domain [20], while others can only use a label indicating the presence of an object class [46]. In practice, however, the supervision on the new domain can come in very different forms. For instance, the count of each object class can be given, such as in atom detection from molecule images where only the chemical formula might be given. Or, when many objects are present in an image, a range can be provided instead of an exact class count (e.g., "there are at least 4 cats on this image"). Crucially, this variety of potential supervisory signals on the target domain cannot be fully utilized by existing domain adaptation approaches.
51
+
52
+ To address this limitation, we introduce ProbKT, a novel framework that generalizes knowledge transfer in object detection to arbitrary types of weak supervision using neural probabilistic logical reasoning [27]. This paradigm connects the probabilistic outputs of neural networks with logical rules and infers the resulting probability of particular queries. One can then evaluate the probability of a query such as "the image contains at least two animals" and differentiate through the probabilistic engine to train the underlying neural network. Our approach allows for arbitrarily complex logical statements and therefore supports weak supervision such as class counts or ranges, among others. To our knowledge, this is the first approach to allow for such versatility in utilizing the available information on the new domain.
53
+
54
+ To assess the capabilities of this framework, we provide an extensive empirical analysis on multiple object detection datasets. Our approach also supports any type of object detection backbone architecture. We thus use two popular backbone architectures, DETR [7] and RCNN [34], and evaluate their performance in terms of accuracy and convergence, as well as generalization on out-of-distribution data. Our experiments show that, due to its ability to use the complete supervisory signal, our approach outperforms previous works in a wide range of setups.
55
+
56
+ Key contributions: (1) We propose a novel knowledge transfer framework for object detection relying on probabilistic programming that uniquely allows using arbitrary types of weak supervision on the target domain. (2) We make our approach amenable to different levels of computational capabilities by proposing different approximations of ProbKT. (3) We provide an extensive experimental setup to study the capabilities of our framework for knowledge transfer and out-of-distribution generalization.
57
+
58
+ # 2 RELATED WORKS
59
+
60
+ A comparative summary of related works is given in Table 1. We distinguish three main categories: (1) pure weakly supervised object detection methods (WSOD) that do not leverage a richly annotated source domain, (2) unsupervised object detection methods with knowledge transfer (DA or domain adaptation methods) that do not use supervision on the target domain and (3) weakly supervised
61
+
62
+ object detection methods with knowledge transfer (WSOD w/transfer) that are restrictive in the type of weak supervision they support. To our knowledge, our work is the first to allow for arbitrary supervision on the target domain (and supporting new classes in the target domain) while also leveraging knowledge from richly annotated domains. ProbKT supports arbitrary weak supervision thanks to the inherited expressiveness of Prolog [41], which is based on Horn clauses, a subset of first-order predicate logic, and is Turing-complete.
63
+
64
+ Weakly supervised object detection (WSOD) This class of methods allows training object detection models with only weak supervision. One can thus train these approaches directly on the target domain. However, they do not allow leveraging potentially available richly annotated datasets, which has been shown to lead to worse performance [39]. Different flavors of WSOD architectures have been proposed, relying on a variety of implementations such as multiple instance learning (MIL)-based [25; 40] or class activation map (CAM)-based [47; 3] methods. In contrast to WSOD methods, our approach is designed to exploit existing richly annotated datasets and thus provides increased performance on the target domain. For a comprehensive review of WSOD methods, we refer the reader to Shao et al. [39].
65
+
66
+ Domain adaptation methods (DA) In contrast to WSOD methods, domain adaptation methods do rely on fully supervised source domain dataset. However, they do not assume any supervision on the target domain and are therefore not equipped to exploit such signal when available [37; 8; 22; 48].
67
+
68
+ WSOD with knowledge transfer Our approach belongs to the class of weakly supervised object detection models with knowledge transfer. These methods aim to transfer knowledge from a source domain, where full supervision is available, to a target domain where only weak labels are available. Existing work in this class of models only allows for limited types of supervision on the target domain. Most architectures only support a label indicating the presence or absence of a class of objects in the image [14; 46; 43]. Inoue et al. [20] allows for class counts as weak supervision but unfortunately does not allow for new classes in the target domain. In contrast, ProbKT natively allows for class counts and new classes, as well as other types of weak supervision.
69
+
70
+ Neural probabilistic logical reasoning Probabilistic logical reasoning combines logic and probability theory. Favored for its high-level reasoning abilities, it was introduced as an alternative to deep learning in the quest for artificial intelligence [10]. Statistical artificial intelligence [32; 23] and probabilistic logic programming [11] are examples of areas relying on these premises. In a unification effort, researchers have proposed hybrid architectures embedding both deep learning and logical reasoning components [38; 35]. Our work builds upon recent advances in the field, where combinations of deep learning, logical, and probabilistic approaches were introduced [27], allowing high-level reasoning with uncertainty using differentiable neural network architectures.
71
+
72
+ <table><tr><td>Method</td><td>Type</td><td>Annotated source dom.</td><td>Weak supervision</td><td>New classes</td><td>Implementation</td></tr><tr><td>Li et al. [25]</td><td>WSOD</td><td>X</td><td>presence/absence</td><td>✓</td><td>MIL-based</td></tr><tr><td>Bilen and Vedaldi [6]</td><td>WSOD</td><td>X</td><td>presence/absence</td><td>✓</td><td>spatial pyramid pooling layer</td></tr><tr><td>Song et al. [40]</td><td>WSOD</td><td>X</td><td>presence/absence</td><td>✓</td><td>MIL based</td></tr><tr><td>Zhou et al. [47]</td><td>WSOD</td><td>X</td><td>mix</td><td>✓</td><td>CAM-based</td></tr><tr><td>Bae et al. [3]</td><td>WSOD</td><td>X</td><td>mix</td><td>✓</td><td>CAM based</td></tr><tr><td>Kundu et al. [24]</td><td>DA</td><td>✓</td><td>one-shot</td><td>✓</td><td>Class-Incremental DA</td></tr><tr><td>Saito et al. [37]</td><td>DA</td><td>✓</td><td>X</td><td>X</td><td>Strong-Weak Distribution Alignment</td></tr><tr><td>Chen et al. [8]</td><td>DA</td><td>✓</td><td>X</td><td>X</td><td>Adversarial training</td></tr><tr><td>Kim et al. [22]</td><td>DA</td><td>✓</td><td>X</td><td>X</td><td>Adversarial training and Domain Diversification</td></tr><tr><td>Zhu et al. [48]</td><td>DA</td><td>✓</td><td>X</td><td>X</td><td>selective region adaptation framework</td></tr><tr><td>Deselaers et al. [14]</td><td>WSOD w/transfer</td><td>✓</td><td>presence/absence</td><td>✓</td><td>CRF-based, iteratively</td></tr><tr><td>Zhong et al. [46]</td><td>WSOD w/transfer</td><td>✓</td><td>presence/absence</td><td>✓</td><td>MIL based, iteratively</td></tr><tr><td>Uijlings et al. [43]</td><td>WSOD w/transfer</td><td>✓</td><td>presence/absence</td><td>✓</td><td>MIL based, non iteratively</td></tr><tr><td>Inoue et al. [20]</td><td>WSOD w/transfer</td><td>✓</td><td>class counts</td><td>X</td><td>DA + pseudolabeling, iteratively</td></tr><tr><td>ProbKT (ours)</td><td>WSOD w/transfer</td><td>✓</td><td>arbitrary</td><td>✓</td><td>Probabilistic logical reasoning, iteratively</td></tr></table>
73
+
74
+ Table 1: Summary of related works on weakly supervised object detection (WSOD), domain adaptation (DA), and weakly supervised knowledge transfer (WSOD w/ transfer).
75
+
76
+ # 3 METHODOLOGY
77
+
78
+ # 3.1 PROBLEM STATEMENT
79
+
80
+ We consider the problem of weakly supervised knowledge transfer for object detection. Using a model trained on a richly annotated source domain, we aim at improving its performance on a less richly annotated target domain.
81
+
82
+ Let $\mathcal{D}_s = \{(I_s^i, b_s^i, y_s^i) : i = 1, \dots, N_s\}$ be a dataset issued from the source domain and consisting of $N_s$ images $I_s$ along with their annotations. We write $b_s^i \in \mathbb{R}^{n_i \times 4}$ and $y_s^i \in \{1, \dots, K_s\}^{n_i}$ for the box coordinates and class labels of objects in image $I_s^i$, where $n_i$ is the number of objects present in image $I_s^i$ and $K_s$ is the total number of object classes in the source domain. This represents the typical dataset required to train classical fully-supervised object detection architectures. The target dataset $\mathcal{D}_t = \{(I_t^i, q_t^i) : i = 1, \dots, N_t\}$ contains $N_t$ images from the target domain along with image-level annotations $q_t^i$. These annotations are logical statements about the content of the image in terms of object classes and their location. Examples include the presence of different classes in each image (i.e., the classical assumption in weakly supervised object detection) but also extend to the counts of classes or a complex combination of counts of object attributes (e.g., "two red objects and at least two bicycles"). What is more, the logical statements $q_t^i$ can include classes not already present in the source domain. This type of logical annotation is thus strictly broader than the restrictive supervision usually assumed.
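+
+ For illustration, the two supervision regimes can be summarized with the following container types (a sketch with invented field names, not the ProbKT codebase):
+
+ ```python
+ from dataclasses import dataclass
+ from typing import Any, List, Tuple
+
+ @dataclass
+ class SourceSample:
+     """Richly annotated: box coordinates and a class label per object."""
+     image: Any                                      # I_s^i
+     boxes: List[Tuple[float, float, float, float]]  # b_s^i, n_i x 4
+     labels: List[int]                               # y_s^i, values in 1..K_s
+
+ @dataclass
+ class TargetSample:
+     """Weakly annotated: a logical statement about the image content."""
+     image: Any                                      # I_t^i
+     query: str  # q_t^i, e.g. "count(red) == 2 and count(bicycle) >= 2"
+ ```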
83
+
84
+ Based on the availability of a source dataset and a target dataset as described above, our goal is then to harness the available detailed information from the source domain to perform accurate object detection on the target domain. A graphical illustration of this process is given in Figure 1.
85
+
86
+ # 3.2 BACKGROUND
87
+
88
+ # 3.2.1 OBJECT DETECTION
89
+
90
+ Object detection aims at predicting the locations and labels of objects in images. One then wishes to learn a parametric function $f_{\theta}:\mathcal{I}\rightarrow \{\mathcal{B}\times \mathbb{R}^{K}\}^{\mathbb{Z}}$ with $f_{\theta}(I) = \{(\hat{b},\hat{p}_y)\}^{\hat{n}} = \{(\hat{b}_i,\hat{p}_{y,i}):i = 1,\dots,\hat{n}\}$ such that the distance between predicted and true boxes and labels, $d(\{(\hat{b},\hat{p}_y)\}^{\hat{n}},\{(b,y)\}^{n})$, is minimal. Object detection architectures usually output box feature proposals $\{h_i:i = 1,\dots,\hat{n}\}$, conditioned on which they predict the probability vector of class labels $\hat{p}_{y,i} = g_p(h_i)$ and the box location predictions $\hat{b}_i = g_b(h_i)$ using shared parametric functions $g_{p}(\cdot)$ and $g_{b}(\cdot)$. For an object $n$, we write the predicted probability of the object belonging to class $k$ as $\hat{p}_{y,n}^{k}$.
91
+
92
+ # 3.2.2 PROBABILISTIC LOGICAL REASONING
93
+
94
+ Probabilistic logical reasoning uses knowledge representations relying on probabilities, which allow encoding uncertainty in knowledge. Such knowledge is encoded in a probabilistic logical program $\mathcal{P}$ as a set of $N$ probabilistic facts $U = \{U_{1},\dots,U_{N}\}$ and $M$ logical rules $F = \{f_{1},\dots,f_{M}\}$ connecting them. A simple example of a probabilistic fact is "Alice and Bob will each pass their exam with probability 0.5", and an example of a logical rule is "if both Alice and Bob pass their exam, they will host a party". Combining probabilistic facts and logical rules, one can then construct complex probabilistic knowledge representations, which can also be depicted as probabilistic graphical models.
95
+
96
+ Probabilistic logical programming allows performing inference by computing the probability of a particular statement or query. For instance, one could query the probability that "Alice and Bob will host a party". This query is executed by summing over the probabilities of occurrence of the different worlds $w = \{u_1, \dots, u_N\}$ (i.e., individual realizations of the set of probabilistic facts) that are compatible with the query $q$. The probability of a query $q$ in a program $\mathcal{P}$ can then be inferred as $P_{\mathcal{P}}(q) = \sum_{w} P(w) \cdot \mathbb{I}[F(w) \equiv q]$, where $F(w) \equiv q$ stands for the fact that propagating the realization $w$ across the knowledge graph according to the logical rules $F$ leads to $q$ being true.
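+
+ A toy version of this inference for the Alice-and-Bob example (ours, enumerating the worlds explicitly rather than using a probabilistic-programming engine):
+
+ ```python
+ from itertools import product
+
+ p_pass = {"alice": 0.5, "bob": 0.5}            # probabilistic facts
+
+ def prob_world(world):                          # P(w) of one realization
+     p = 1.0
+     for name, passed in world.items():
+         p *= p_pass[name] if passed else 1.0 - p_pass[name]
+     return p
+
+ def party(world):                               # the logical rule F
+     return world["alice"] and world["bob"]
+
+ worlds = [dict(zip(p_pass, vals)) for vals in product([True, False], repeat=2)]
+ print(sum(prob_world(w) for w in worlds if party(w)))  # P(party) = 0.25
+ ```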
97
+
98
+ Remarkably, recent advances in probabilistic programming have led to learnable probabilistic facts [27]. In particular, the probability of a fact can be generated by a neural network with learnable weights. Such a learnable probabilistic fact is then referred to as a neural predicate $U^{\theta}$, where we make the dependence on the weights $\theta$ explicit. One can then train these weights to minimize a loss that depends on the probability of a query $q$: $\hat{\theta} = \arg \min_{\theta} \mathcal{L}(P(q \mid \theta))$.
99
+
100
+ Our approach builds upon this ability to learn neural predicates and uses DeepProbLog [27] as the probabilistic reasoning backbone. DeepProbLog is a neural probabilistic logic programming language that makes it convenient to perform inference and differentiation with neural predicates. We refer the reader to the excellent introduction of Manhaeve et al. [28] for further details about this framework.
101
+
102
+ # 3.3 PROBKT: WEAKLY SUPERVISED KNOWLEDGE TRANSFER WITH PROBABILISTIC LOGICAL REASONING
103
+
104
+ A graphical description of our approach is presented in Figure 2. Our framework starts from a pre-trained object detection model $f_{\theta}$ on the source domain. The backbone of this model is extracted and inserted into a new object detection model $f_{\theta}^{*}$ with new target box position predictors and box label classifiers. This new model is then used to predict box proposals along with the corresponding box features on target domain images $I_{t}$ . These box features are then fed to a new target box position predictor and box label classifier. The predictions of this classifier are considered neural predicates and are given to a probabilistic logical module. This module evaluates the probability of queries $q_{t}$ , the loss, and the corresponding gradient that can be backpropagated to the classifier and the backbone. As we want to maximize the probability of the queries being true, we use the following loss function:
105
+
106
+ $$
107
+ \mathcal{L}_{\theta} = \sum_{(I_t, q_t) \in \mathcal{D}_t} -\log P_{\mathcal{P}}\left(q_t \mid f_{\theta}^{*}(I_t)\right) \tag{1}
108
+ $$
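+
+ In code, Equation 1 is simply a negative log-likelihood over the query probabilities returned by the reasoning layer; a minimal PyTorch sketch (the clamp is our own numerical safeguard):
+
+ ```python
+ import torch
+
+ def probkt_loss(query_probs: torch.Tensor) -> torch.Tensor:
+     """query_probs[i] = P_P(q_t^i | f*_theta(I_t^i)) for each target image,
+     computed differentiably by the probabilistic reasoning layer."""
+     return -torch.log(query_probs.clamp_min(1e-12)).sum()
+
+ loss = probkt_loss(torch.tensor([0.9, 0.4], requires_grad=True))
+ loss.backward()  # gradients flow back into the classifiers and backbone
+ ```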
109
+
110
+ In theory, the backbone can be trained end-to-end with this procedure. However, our experiments showed that updating only the box feature classifiers resulted in more stability, as also observed in previous works [46]. We thus adopt the same iterative relabeling strategy, described next.
111
+
112
+ ![](images/dffc40d15684072de167b6480bb4e4eca8afe6c9bdae887f549c4e44330c8f86.jpg)
113
+ Figure 2: ProbKT. The pre-trained object detection backbone outputs the box features $h$ for the detected objects. Box classifiers (red) and box position predictors (blue) then predict corresponding label predictions $\hat{p}_y$ and box position predictions $\hat{b}$ that are fed to the probabilistic reasoning layer. This layer computes the probability of the query along with the gradients with respect to $\hat{p}_y$ and $\hat{b}$ that can be backpropagated through the entire network.
114
+
115
+ # 3.3.1 ITERATIVE RELABELING
116
+
117
+ The approach described above allows fine-tuning our model $f_{\theta}^{*}$ to the target domain. To further improve the performance, we propose an iterative relabeling strategy that consists of multiple steps: fine-tuning, re-labeling, and re-training. A similar strategy has also been proposed by Zhong et al. [46].
118
+
119
+ Fine-tuning. This step corresponds to training ProbKT on the weakly supervised labels, by minimizing the loss of Equation 1.
120
+
121
+ Re-labeling. Once ProbKT has been trained, we can use its predictions to annotate images in the target domain. In practice, we only relabel images for which the model predictions comply with the available query labels, in order to avoid overly noisy labels.
122
+
123
+ Re-training. The re-labeled target domain can be used to re-train the object detection backbone of ProbKT in a fully-supervised fashion.
124
+
125
+ This procedure can be repeated multiple times to improve the quality of the relabeling and the quantity of relabeled images in the target domain dataset. A graphical representation of the relabeling pipeline is presented in Figure 3.
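+
+ The compliance check of the re-labeling step can be sketched as follows (illustrative names; here the weak label is taken to be exact class counts, but the same idea applies to other query types):
+
+ ```python
+ from collections import Counter
+ from typing import Dict, List
+
+ def accept_relabel(pred_labels: List[str], query_counts: Dict[str, int]) -> bool:
+     """Keep an image for re-training only if the predicted class counts
+     comply with its weak label."""
+     return Counter(pred_labels) == Counter(query_counts)
+
+ print(accept_relabel(["cube", "cube", "sphere"], {"cube": 2, "sphere": 1}))  # True
+ print(accept_relabel(["cube", "sphere"], {"cube": 2, "sphere": 1}))          # False
+ ```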
126
+
127
+ ![](images/0d147b1754e149afa89c63b8de727c6f155d030239363796f62a085bc8d10f91.jpg)
128
+ Figure 3: Iterative relabeling. A full cycle is composed of a fine-tuning, a re-labeling and a re-training step. After one cycle, the fine-tuning step and/or re-labeling step can be iteratively repeated.
129
+
130
+ # 3.3.2 COMPUTATIONAL COMPLEXITY AND APPROXIMATIONS
131
+
132
+ The computational complexity of inference in probabilistic programming depends on the specific query $q$, and several approximations have been proposed to improve the computation time [44]. We propose two approaches for reducing the computational cost, adapted to object detection: (1) filtering the data samples before applying ProbKT (see Appendix Section C.1), or (2), when the supervision consists of class label counts, considering only the most probable world (ProbKT*) instead of all possible worlds.
133
+
134
+ # 3.3.3 PROBKT*: THE MOST PROBABLE WORLD AND CONNECTION TO HUNGARIAN MATCHING
135
+
136
+ The probabilistic inference step requires a smart aggregation of all worlds compatible with the query $q$. Yet, in certain cases, one can reduce the computational cost by considering only the most probable world. Indeed, consider the case where the query consists of the list of different class labels in the image. For a number of boxes $\hat{n}$ proposed by the object detection model, the query can be written as the set of labels $q = \{y^i : i = 1, \dots, \hat{n}\}$. If we further write $\hat{p}_{y,n}^k$ for the probability of the label of box $n$ belonging to class $k$ given by the model (as introduced in Section 3.2.1), we have:
137
+
138
+ $$
139
+ P_{\mathcal{P}}(q) = \sum_{j = 1}^{\hat{n}!} \hat{p}_{y,1}^{\sigma_j(1)} \cdot \hat{p}_{y,2}^{\sigma_j(2)} \cdots \hat{p}_{y,\hat{n}}^{\sigma_j(\hat{n})} = \sum_{j = 1}^{\hat{n}!} \prod_{n} \hat{p}_{y,n}^{\sigma_j(n)}
140
+ $$
141
+
142
+ where $\sigma_{j}$ corresponds to the $j^{th}$ permutation of the query vector $q$. To avoid computing the contribution of each possible world, one can keep only the configuration with the largest contribution to $P_{\mathcal{P}}(q)$ and discard the others.
143
+
144
+ This possible world corresponds to the permutation $\sigma^{*}$ that satisfies:
145
+
146
+ $$
147
+ \sigma^{*} = \underset{\sigma}{\arg\max}\, \log\Big(\prod_{n} \hat{p}_{y,n}^{\sigma(n)}\Big) = \underset{\sigma}{\arg\max} \sum_{n} \hat{p}_{y,n}^{\sigma(n)} = \underset{\sigma}{\arg\min} \sum_{n} \big(1 - \hat{p}_{y,n}^{\sigma(n)}\big).
148
+ $$
149
+
150
+ Remarkably, this corresponds to the solution of the best alignment found by the Hungarian matching algorithm with cost $c(n) = 1 - \hat{p}_{y,n}^{\sigma(n)}$, as used, among others, in DETR [7]. Thus, when the query is the set of class labels, the most probable world can be inferred with the Hungarian matching algorithm. In Appendix C.2, we also show that the gradient of ProbKT can be interpreted as a probability-weighted extension of the gradient resulting from the Hungarian matching.
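+
+ The two computations can be contrasted in a few lines (a sketch with our own array layout: p[n, k] is the predicted probability that box n has class k, and the query is a list of class ids, one per box):
+
+ ```python
+ from itertools import permutations
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+
+ def query_prob_exact(p: np.ndarray, query: list) -> float:
+     """Sum over all worlds: each distinct assignment of the label multiset
+     to the boxes contributes the product of its label probabilities."""
+     n = p.shape[0]
+     return float(sum(np.prod([p[i, perm[i]] for i in range(n)])
+                      for perm in set(permutations(query))))
+
+ def most_probable_world(p: np.ndarray, query: list) -> list:
+     """ProbKT*: keep one world, found by Hungarian matching with cost 1 - p."""
+     cost = np.stack([1.0 - p[:, k] for k in query], axis=1)  # boxes x slots
+     _, cols = linear_sum_assignment(cost)
+     return [query[j] for j in cols]      # label assigned to each box
+ ```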
151
+
152
+ # 4 EXPERIMENTS
153
+
154
+ # 4.1 DATASETS
155
+
156
+ We evaluate our approach on three different datasets: (1) a CLEVR-mini dataset, (2) a Molecules dataset with images of chemical compounds, and (3) an MNIST-based object detection dataset. For each dataset, three subsets, corresponding to different domains, are used: (1) a source domain, (2) a target domain, and (3) an out-of-distribution (OOD) domain. The source domain is the richly annotated domain that was used to pre-train the object detection model. The target domain is
157
+
158
+ the domain of interest but with image-level annotations only. Lastly, the OOD domain contains images from a different distribution than the source and target domains and is used to study the generalizability of the models. Source and target domains are split into 5 folds of train and validation sets and an independent test set. We focused our experiments on the small-sample regime (1k-2k samples) for both the source and the target domain. More details on each dataset can be found in Appendix B.
159
+
160
+ # 4.2 MODELS
161
+
162
+ In the experiments, we apply our method ProbKT to two different pre-trained object detection backbone models: (1) DETR [7] and (2) FasterRCNN [34]. Both are pre-trained on the COCO dataset [26]. We also evaluate a Hungarian-algorithm approximation (ProbKT*) of our method when the weak supervision allows it. For the sake of conciseness, we omit the results of ProbKT* here, but they can be found in Appendix D. The details of the training procedures, as well as the hyper-parameters used for the different models and datasets, are summarized in Table 4 in Appendix A.
163
+
164
+ # 4.2.1 BASELINE MODELS
165
+
166
+ As shown in Section 2, all available approaches for weakly supervised object detection are very restrictive in terms of the supervision signal they support. Our main comparison partner is the state-of-the-art WSOD-transfer method [46].
167
+
168
+ Additionally, we compare our approach against a Resnet50 [18] backbone pre-trained on ImageNet [12]. Fine-tuning is performed by adding an extra multitask regression layer that is trained to predict the individual counts of the objects in the image, as in Xue et al. [45]. This architecture naturally relies only on label counts in the target images for fine-tuning. We then obtain box predictions using class activation maps, as in Bae et al. [3], to compare its performance on object localization. We call this approach Resnet50-CAM.
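+
+ A sketch of this counting baseline (ours; the head, loss, and weights flag are illustrative choices, not the exact configuration of Xue et al. [45]):
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from torchvision.models import resnet50
+
+ num_classes = 3                              # e.g. cube / cylinder / sphere
+ model = resnet50(weights="IMAGENET1K_V1")    # ImageNet pre-training
+ model.fc = nn.Linear(model.fc.in_features, num_classes)  # count-regression head
+
+ images = torch.randn(4, 3, 224, 224)
+ target_counts = torch.tensor([[2.0, 1.0, 3.0]] * 4)      # per-class counts
+ loss = nn.functional.mse_loss(model(images), target_counts)
+ loss.backward()
+ ```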
169
+
170
+ When the supervision signal allows it, we also compare with a DETR model trained end-to-end jointly on target and source domains, masking the box costs in the matching cost of the Hungarian algorithm for image-level annotated samples. We call this approach DETR-joint.
171
+
172
+ <table><tr><td>Model</td><td>Data Domain</td><td>CLEVR count acc.</td><td>CLEVR mAP (mAP@IoU=0.5)</td><td>Mol. count. acc</td><td>Mol. mAP (mAP@IoU=0.5)</td></tr><tr><td>Resnet50-CAM</td><td>target domain</td><td>0.97 ± 0.005</td><td>0.036 ± 0.014 (0.200 ± 0.071)</td><td>0.978 ± 0.004</td><td>0.0 ± 0.0 (0 ± 0)</td></tr><tr><td>Resnet50-CAM</td><td>OOD</td><td>0.831 ± 0.016</td><td>0.029 ± 0.010 (0.153 ± 0.044)</td><td>0.0 ± 0.0</td><td>n/a*</td></tr><tr><td>Resnet50-CAM</td><td>source domain</td><td>0.993 ± 0.003</td><td>0.035 ± 0.019 (0.178 ± 0.084)</td><td>0.828 ± 0.021</td><td>0.0 ± 0.0 (0 ± 0)</td></tr><tr><td>WSOD-transfer</td><td>target domain</td><td>0.944 ± 0.004</td><td>0.844 ± 0.005 (0.988 ± 0.001)</td><td>0.001 ± 0.0</td><td>0.018 ± 0.004 (0.061 ± 0.011)</td></tr><tr><td>WSOD-transfer</td><td>OOD</td><td>0.73 ± 0.011</td><td>0.79 ± 0.005 (0.969 ± 0.001)</td><td>0.003 ± 0.002</td><td>n/a*</td></tr><tr><td>WSOD-transfer</td><td>source domain</td><td>0.989 ± 0.001</td><td>0.926 ± 0.001 (0.995 ± 0.0)</td><td>0.0 ± 0.0</td><td>0.021 ± 0.003 (0.069 ± 0.009)</td></tr><tr><td>DETR-joint</td><td>target domain</td><td>0.159 ± 0.133</td><td>0.579 ± 0.012 (0.684 ± 0.019)</td><td>0.357 ± 0.196</td><td>0.197 ± 0.055 (0.481 ± 0.071)</td></tr><tr><td>DETR-joint</td><td>OOD</td><td>0.084 ± 0.039</td><td>0.534 ± 0.012 (0.66 ± 0.012)</td><td>0.024 ± 0.021</td><td>n/a*</td></tr><tr><td>DETR-joint</td><td>source dom.</td><td>0.923 ± 0.049</td><td>0.908 ± 0.017 (0.992 ± 0.001)</td><td>0.232 ± 0.127</td><td>0.23 ± 0.063 (0.565 ± 0.08)</td></tr><tr><td>RCNN (pre-trained)</td><td>target domain</td><td>0.0 ± 0.0</td><td>0.586 ± 0.014 (0.598 ± 0.013)</td><td>0.592 ± 0.007</td><td>0.568 ± 0.005 (0.785 ± 0.004)</td></tr><tr><td>RCNN (pre-trained)</td><td>OOD</td><td>0.0 ± 0.0</td><td>0.582 ± 0.012 (0.603 ± 0.011)</td><td>0.348 ± 0.036</td><td>n/a*</td></tr><tr><td>RCNN (pre-trained)</td><td>source domain</td><td>0.988 ± 0.002</td><td>0.984 ± 0.01 (0.996 ± 0.0)</td><td>0.948 ± 0.004</td><td>0.737 ± 0.005 (0.979 ± 0.0)</td></tr><tr><td>DETR (pre-trained)</td><td>target domain</td><td>0.0 ± 0.0</td><td>0.498 ± 0.019 (0.533 ± 0.024)</td><td>0.464 ± 0.033</td><td>0.314 ± 0.006 (0.542 ± 0.006)</td></tr><tr><td>DETR (pre-trained)</td><td>OOD</td><td>0.0 ± 0.0</td><td>0.477 ± 0.013 (0.531 ± 0.021)</td><td>0.002 ± 0.001</td><td>n/a*</td></tr><tr><td>DETR (pre-trained)</td><td>source domain</td><td>0.97 ± 0.009</td><td>0.945 ± 0.009 (0.992 ± 0.001)</td><td>0.581 ± 0.022</td><td>0.409 ± 0.005 (0.722 ± 0.004)</td></tr><tr><td>ProbKT (DETR)</td><td>target domain</td><td>0.946 ± 0.014</td><td>0.803 ± 0.011 (0.989 ± 0.006)</td><td>0.508 ± 0.027</td><td>0.204 ± 0.02 (0.507 ± 0.014)</td></tr><tr><td>ProbKT (DETR)</td><td>OOD</td><td>0.726 ± 0.035</td><td>0.715 ± 0.006 (0.974 ± 0.006)</td><td>0.004 ± 0.003</td><td>n/a*</td></tr><tr><td>ProbKT (DETR)</td><td>source domain</td><td>0.987 ± 0.003</td><td>0.948 ± 0.005 (0.995 ± 0.001)</td><td>0.549 ± 0.026</td><td>0.38 ± 0.013 (0.713 ± 0.006)</td></tr><tr><td>ProbKT (RCNN)</td><td>target domain</td><td>0.975 ± 0.003</td><td>0.856 ± 0.039 (0.993 ± 0.001)</td><td>0.942 ± 0.009</td><td>0.289 ± 0.041 (0.829 ± 0.054)</td></tr><tr><td>ProbKT (RCNN)</td><td>OOD</td><td>0.89 ± 0.022</td><td>0.833 ± 0.042 (0.991 ± 0.001)</td><td>0.603 ± 0.037</td><td>n/a*</td></tr><tr><td>ProbKT (RCNN)</td><td>source domain</td><td>0.995 ± 0.002</td><td>0.941 ± 0.041 (0.998 ± 0.001)</td><td>0.96 ± 0.002</td><td>0.666 ± 0.005 (0.978 ± 0.002)</td></tr></table>
173
+
174
+ Table 2: Results of the experiments for the datasets: CLEVR-mini and Molecules. Reported test accuracies over the 5 folds. Best method is in bold for each metric and data distribution. *: OOD test set of Molecules dataset has no bounding box labels.
175
+
176
+ # 4.3 EVALUATION METRICS
177
+
178
+ We evaluate the performance of the models on the different datasets based on two criteria: count accuracy and object localization performance. The count accuracy measures the ratio of images for which the individual counts of all (detected) objects are correct. To evaluate how well the
179
+
180
+ model is performing in localizing the different objects in the image, we report the mean average precision (mAP), a widely used metric for evaluating object detection models.
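+
+ The count accuracy can be computed as follows (a sketch with our own naming):
+
+ ```python
+ from collections import Counter
+
+ def count_accuracy(pred_labels_per_image, true_labels_per_image):
+     """Ratio of images whose predicted multiset of class labels matches
+     the ground truth exactly."""
+     correct = sum(Counter(p) == Counter(t)
+                   for p, t in zip(pred_labels_per_image, true_labels_per_image))
+     return correct / len(true_labels_per_image)
+ ```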
181
+
182
+ # 4.4 WEAKLY SUPERVISED KNOWLEDGE TRANSFER WITH CLASS COUNTS
183
+
184
+ We first investigate the performance of ProbKT when the weak supervision consists of class counts only. The query $q$ for each image then consists of the number of objects from each class in the image. We evaluate the models on the CLEVR-mini and Molecules datasets. For the Molecules dataset, an image containing 6 carbon atoms (C), 6 oxygen atoms (O) and 12 hydrogen atoms (H) would result in the following query: $q = ([C,O,H],[6,6,12])$. In the case of the Molecules dataset, these weak labels are widely and easily available in the form of the chemical formula of the molecule in the image (e.g., $C_6H_{12}O_6$). The recognition of atomic-level entities in images of molecules is a challenge in the field of Optical Chemical Structure Recognition (OCSR) [9; 33; 29; 19]. For the CLEVR-mini dataset, the query for an example image containing 2 spheres, 1 cylinder and 3 cubes would be $q = ([\mathrm{Cube},\mathrm{Cylinder},\mathrm{Sphere}],[3,1,2])$. Formal descriptions of the queries for each task are presented in Appendix E.
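+
+ Constructing such a query from weak labels is straightforward; a small sketch (ours; the tuple layout mirrors the text):
+
+ ```python
+ from collections import Counter
+
+ def counts_query(labels):
+     """Build q = (classes, counts) from a list of per-object labels."""
+     counts = Counter(labels)
+     classes = sorted(counts)
+     return (classes, [counts[c] for c in classes])
+
+ print(counts_query(["C"] * 6 + ["O"] * 6 + ["H"] * 12))
+ # (['C', 'H', 'O'], [6, 12, 6])
+ ```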
185
+
186
+ Results of the experiments are summarized in Table 2. On both datasets, we observe that ProbKT is able to transfer knowledge from the source domain to the target domain and improves count accuracy on the target domain and, in most cases, also on the source domain. The count accuracy increases both on the target domain and on OOD data, suggesting better generalization performance. This is in contrast with Resnet50-CAM, which performs well on the target domain of the Molecules dataset but fails on OOD data. We also note a significant improvement in object localization (mAP) for ProbKT on the CLEVR-mini dataset. However, fine-tuning seems detrimental for mAP on the Molecules dataset. This can be explained by the very small bounding boxes in the Molecules dataset. We therefore also report the mAP@IoU=0.5, where we observe some increase in performance after fine-tuning. Lastly, we observe that our approach outperforms WSOD-transfer on all metrics for both datasets. WSOD-transfer performs well on CLEVR-mini but fails on the Molecules dataset. This can be explained by the fact that this method only supports class indicators (whether a class is present in the image), which is particularly detrimental for molecule images containing many objects.
187
+
188
+ # 4.5 OTHER TYPES OF WEAK SUPERVISION
189
+
190
+ # 4.5.1 CLASS RANGES
191
+
192
+ The annotation of images is a tedious task, which limits the availability of fully annotated datasets. When the number of objects in an image is large, counting the exact number of objects of a particular class becomes too time-consuming. A typical annotation in this case consists of class ranges where, instead of exact class counts, an interval is given for each count. For example, an image from the CLEVR-mini dataset with at least 4 cubes, exactly 4 cylinders, and fewer than 4 spheres would result in the following query: $q = ([\text{cube}, \text{cylinder}, \text{sphere}], [[4, \infty[, [4, 5[, [0, 4[])$. We evaluate this experimental setup and report results in Table 3. We observe that ProbKT performs significantly better on count accuracy than WSOD-transfer, which still uses only presence/absence labels. We note that Resnet50-CAM is unable to use this type of supervision and is thus reported as n/a.
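+
+ Checking a prediction against such half-open count ranges is equally simple (a sketch with our own naming):
+
+ ```python
+ import math
+ from collections import Counter
+
+ def satisfies_ranges(pred_labels, classes, ranges):
+     """ranges[i] = (lo, hi) encodes the half-open interval [lo, hi)."""
+     counts = Counter(pred_labels)
+     return all(lo <= counts[c] < hi for c, (lo, hi) in zip(classes, ranges))
+
+ print(satisfies_ranges(["cube"] * 5 + ["cylinder"] * 4 + ["sphere"] * 2,
+                        ["cube", "cylinder", "sphere"],
+                        [(4, math.inf), (4, 5), (0, 4)]))  # True
+ ```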
193
+
194
+ <table><tr><td>Model</td><td>Data Domain</td><td>MNIST count acc.</td><td>MNIST sum acc.</td><td>MNIST mAP (mAP@IoU=0.5)</td><td>CLEVR* count acc.</td><td>CLEVR* mAP (mAP@IoU=0.5)</td></tr><tr><td>Resnet50-CAM</td><td>target domain</td><td>0.044 ± 0.041</td><td>0.506 ± 0.063</td><td>0.003 ± 0.003(0.014 ± 0.011)</td><td>n/a</td><td>n/a</td></tr><tr><td>Resnet50-CAM</td><td>OOD</td><td>0.01 ± 0.009</td><td>0.015 ± 0.004</td><td>0.003 ± 0.002(0.011 ± 0.007)</td><td>n/a</td><td>n/a</td></tr><tr><td>Resnet50-CAM</td><td>source domain</td><td>0.127 ± 0.132</td><td>0.649 ± 0.108</td><td>0.005 ± 0.004(0.028 ± 0.018)</td><td>n/a</td><td>n/a</td></tr><tr><td>WSOD-transfer</td><td>target domain</td><td>n/a</td><td>n/a</td><td>n/a</td><td>0.944 ± 0.004</td><td>0.844 ± 0.005 (0.988 ± 0.001)</td></tr><tr><td>WSOD-transfer</td><td>OOD</td><td>n/a</td><td>n/a</td><td>n/a</td><td>0.73 ± 0.011</td><td>0.79 ± 0.005 (0.969 ± 0.001)</td></tr><tr><td>WSOD-transfer</td><td>source domain</td><td>n/a</td><td>n/a</td><td>n/a</td><td>0.989 ± 0.001</td><td>0.926 ± 0.001 (0.995 ± 0.0)</td></tr><tr><td>RCNN (pre-trained)</td><td>target domain</td><td>0.292 ± 0.005</td><td>0.298 ± 0.005</td><td>0.632 ± 0.014 (0.685 ± 0.002)</td><td>0.0 ± 0.0</td><td>0.586 ± 0.014 (0.598 ± 0.013)</td></tr><tr><td>RCNN (pre-trained)</td><td>OOD</td><td>0.205 ± 0.004</td><td>0.212 ± 0.004</td><td>0.631 ± 0.013 (0.683 ± 0.002)</td><td>0.0 ± 0.0</td><td>0.582 ± 0.012 (0.603 ± 0.011)</td></tr><tr><td>RCNN (pre-trained)</td><td>source domain</td><td>0.961 ± 0.008</td><td>0.961 ± 0.008</td><td>0.917 ± 0.021 (0.988 ± 0.002)</td><td>0.988 ± 0.002</td><td>0.984 ± 0.01 (0.996 ± 0.0)</td></tr><tr><td>ProbKT (RCNN)</td><td>target domain</td><td>0.902 ± 0.005</td><td>0.903 ± 0.005</td><td>0.786 ± 0.021 (0.974 ± 0.001)</td><td>0.971 ± 0.006</td><td>0.838 ± 0.034 (0.993 ± 0.001)</td></tr><tr><td>ProbKT (RCNN)</td><td>OOD</td><td>0.863 ± 0.008</td><td>0.865 ± 0.008</td><td>0.778 ± 0.021 (0.97 ± 0.001)</td><td>0.884 ± 0.01</td><td>0.812 ± 0.036 (0.991 ± 0.001)</td></tr><tr><td>ProbKT (RCNN)</td><td>source domain</td><td>0.967 ± 0.004</td><td>0.967 ± 0.004</td><td>0.873 ± 0.016 (0.989 ± 0.001)</td><td>0.994 ± 0.001</td><td>0.922 ± 0.035 (0.998 ± 0.001)</td></tr></table>
195
+
196
+ Table 3: Results of the experiments on the MNIST object detection dataset and on CLEVR* dataset (*CLEVR uses ranges of class counts as labels instead of exact class counts). Reported test accuracies over the 5 folds. Best method is in bold for each metric and data distribution.
197
+
198
+ # 4.5.2 COMPLEX QUERIES
199
+
200
+ More complex types of weak supervision than the ones considered above are also possible. To illustrate the capabilities of our approach, we build an MNIST object detection dataset where images show multiple digits as objects. Example images are available in Appendix B. The weak supervision here is the sum of all digits in the image: $q = \mathrm{SUM}(\mathrm{digits})$. Our ProbKT can seamlessly integrate this type of supervision, as shown in Table 3. As all other baselines are unable to process this type of supervision, we compare against a pre-trained RCNN and a variation of Resnet50-CAM where we add an extra neural network layer that sums the individual counts. We report count accuracy, mAP, and sum accuracy. The sum accuracy measures the ratio of images for which the predicted sum (instead of the labels of the digits) is correct. Details about extra experiments with DETR as backbone using complex types of weak supervision can be found in Appendix D.
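+
+ For the sum query, the probability that the detected digits sum to a target value can be computed exactly by dynamic programming over the per-box digit distributions, instead of enumerating all $10^{\hat{n}}$ worlds (a sketch, ours, not the DeepProbLog implementation):
+
+ ```python
+ import numpy as np
+
+ def prob_sum(p: np.ndarray, target: int) -> float:
+     """p[n, d] = probability that box n shows digit d (d = 0..9);
+     returns P(sum of all digits == target)."""
+     max_sum = 9 * p.shape[0]
+     dp = np.zeros(max_sum + 1)
+     dp[0] = 1.0                          # empty prefix sums to 0
+     for box_probs in p:                  # fold in one box at a time
+         new = np.zeros_like(dp)
+         for d in range(10):
+             new[d:] += box_probs[d] * dp[: max_sum + 1 - d]
+         dp = new
+     return float(dp[target]) if 0 <= target <= max_sum else 0.0
+ ```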
201
+
202
+ # 4.6 ABLATION STUDIES
203
+
204
+ ![](images/0e773b45cbf7640f37656d4e8b4687dcfb7363cfd1d1ec58084cdbf33c00298d.jpg)
+ (a) CLEVR iterative relabeling
+
+ ![](images/d17235340db1e49a74095c49b3c4b80909d704f18f417672a46e1577bd8f7bad.jpg)
+ (b) Molecules iterative relabeling
+
+ ![](images/01b3ce7dde365f828fc3b5f7694aacef5d650d5f027b39ff3eafe00a70f51d7e.jpg)
+ (c) MNIST iterative relabeling
+
+ Figure 4: Iterative relabeling performance for the different datasets. Iteration 0: pre-trained on the source domain. Iteration 1: fine-tuned. Iteration 2: re-labeled and re-trained. Iteration 3: re-labeled and re-trained. Iteration 4: re-labeled and re-trained.
213
+
214
+ Iterative relabeling. In Figure 4, we plot the evolution of the performance on the test sets after multiple rounds of fine-tuning and re-labeling, as detailed in Section 3.3.1. The final performance reported in the results tables is selected based on the best relabeling iteration on the validation dataset. We observe that iterative relabeling after fine-tuning can improve performance significantly. Nevertheless, the benefit of iterative relabeling is less pronounced for DETR on the Molecules dataset. We attribute this to the fact that the fine-tuned DETR model is less accurate on this dataset.
215
+
216
+ Object detection backbone. Our method can seamlessly accommodate different object detection backbones. In Table 2, we present the results for our method with a DETR [7] and a FasterRCNN [34] backbone. We observe that FasterRCNN typically performs better. In particular, the DETR backbone performs poorly on the Molecules dataset. This could be due to the small objects in the Molecules dataset; indeed, Carion et al. [7] recommend using DETR-DC5 or DETR-DC5-R101 for small objects instead.
219
+
220
+ # 5 CONCLUSIONS AND DISCUSSION
221
+
222
+ Object detection models are a key component of machine learning deployment in the real world. However, training such models usually requires large amounts of richly annotated images, which is often prohibitive for many applications. In this work, we proposed a novel approach to train object detection models by leveraging richly annotated datasets from other domains while allowing arbitrary types of weak supervision on the target domain. Our architecture relies on a probabilistic logical programming engine that efficiently blends the power of symbolic reasoning with deep learning architectures. As such, our model also inherits the current limitations of probabilistic reasoning implementations, such as higher computational complexity. We proposed several approaches to speed up the inference process significantly, and our work will directly benefit from further advances in this field. Lastly, the versatility of probabilistic programming could help support other related tasks in the future, such as image-to-graph translation.
223
+
224
+ Reproducibility Statement Details for reproducing all experiments shown in this work are available in Appendix E. More details on the datasets used in the experiments can be found in Appendix B.
225
+
226
+ # ACKNOWLEDGMENTS
227
+
228
+ AA, MO and YM are funded by (1) Research Council KU Leuven: Symbiosis 4 (C14/22/125), Symbiosis3 (C14/18/092); (2) Federated cloud-based Artificial Intelligence-driven platform for liquid biopsy analyses (C3/20/100); (3) CELSA - Active Learning (CELSA/21/019); (4) European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 956832; (5) Flemish Government (FWO: SBO (S003422N), Elixir Belgium (I002819N), SB and Postdoctoral grants: S003422N, 1SB2721N, 1S98819N, 12Y5623N) and (6) VLAIO PM: Augmenting Therapeutic Effectiveness through Novel Analytics (HBC.2019.2528); (7) YM, AA, EDB, and MO are affiliated to Leuven.AI and received funding from the Flemish Government (AI Research Program). EDB is funded by a FWO-SB grant (S98819N). Computational resources and services used in this work were partly provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government - department EWI.
229
+
230
+ # REFERENCES
231
+
232
+ [1] MNIST object detection dataset. URL https://github.com/hukkelas/MNIST-ObjectDetection. Accessed on 01.02.2022.
233
+ [2] RDKit: Open-source cheminformatics. URL https://www.rdkit.org. Accessed on 01.02.2022.
234
+ [3] Wonho Bae, Junhyug Noh, and Gunhee Kim. Rethinking class activation mapping for weakly supervised object localization. In European Conference on Computer Vision, pages 618-634. Springer, 2020.
235
+ [4] Aseem Behl, Omid Hosseini Jafari, Siva Karthik Mustikovela, Hassan Abu Alhaija, Carsten Rother, and Andreas Geiger. Bounding boxes, segmentations and object coordinates: How important is recognition for 3d scene flow estimation in autonomous driving scenarios? In Proceedings of the IEEE International Conference on Computer Vision, pages 2574-2583, 2017.
236
+ [5] Lukas Biewald. Experiment tracking with weights and biases, 2020. URL https://www.wandb.com/. Software available from wandb.com.
237
+ [6] Hakan Bilen and Andrea Vedaldi. Weakly supervised deep detection networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2846-2854, 2016.
238
+ [7] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European conference on computer vision, pages 213-229. Springer, 2020.
239
+ [8] Yuhua Chen, Wen Li, Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Domain adaptive faster r-cnn for object detection in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3339-3348, 2018.
240
+ [9] Djork-Arné Clevert, Tuan Le, Robin Winter, and Floriane Montanari. Img2Mol - accurate SMILES recognition from molecular graphical depictions. Chemical science, 12(42):14174-14181, 2021.
241
+ [10] Luc De Raedt and Kristian Kersting. Probabilistic logic learning. ACM SIGKDD Explorations Newsletter, 5(1):31-48, 2003.
242
+ [11] Luc De Raedt and Angelika Kimmig. Probabilistic (logic) programming concepts. Machine Learning, 100(1):5-47, 2015.
243
+ [12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
244
+
245
+ [13] Li Deng. The mnist database of handwritten digit images for machine learning research [best of the web]. IEEE signal processing magazine, 29(6):141-142, 2012.
246
+ [14] Thomas Deselaers, Bogdan Alexe, and Vittorio Ferrari. Weakly supervised localization and learning with generic knowledge. International journal of computer vision, 100(3):275-293, 2012.
247
+ [15] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision, 88(2):303-338, June 2010.
248
+ [16] Eleonora Giunchiglia, Mihaela Cătălina Stoian, Salman Khan, Fabio Cuzzolin, and Thomas Lukasiewicz. Road-r: The autonomous driving dataset with logical requirements. arXiv preprint arXiv:2210.01597, 2022.
249
+ [17] Ibtihaal M Hameed, Sadiq H Abdulhussain, and Basheera M Mahmmod. Content-based image retrieval: A review of recent trends. Cogent Engineering, 8(1):1927469, 2021.
250
+ [18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
251
+ [19] Rodrigo Hormazabal, Changyoung Park, Soonyoung Lee, Sehui Han, Yeonsik Jo, Jaewan Lee, Ahra Jo, Seung Hwan Kim, Jaegul Choo, Moontae Lee, et al. Cede: A collection of expert-curated datasets with atom-level entity annotations for optical chemical structure recognition.
252
+ [20] Naoto Inoue, Ryosuke Furuta, Toshihiko Yamasaki, and Kiyoharu Aizawa. Cross-domain weakly-supervised object detection through progressive domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5001-5009, 2018.
253
+ [21] Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901–2910, 2017.
254
+ [22] Taekyung Kim, Minki Jeong, Seunghyeon Kim, Seokeon Choi, and Changick Kim. Diversify and match: A domain adaptive representation learning paradigm for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12456-12465, 2019.
255
+ [23] Daphne Koller, Nir Friedman, Sašo Džeroski, Charles Sutton, Andrew McCallum, Avi Pfeffer, Pieter Abbeel, Ming-Fai Wong, Chris Meek, Jennifer Neville, et al. Introduction to statistical relational learning. MIT press, 2007.
256
+ [24] Jogendra Nath Kundu, Rahul Mysore Venkatesh, Naveen Venkat, Ambareesh Revanur, and R Venkatesh Babu. Class-incremental domain adaptation. In European Conference on Computer Vision, pages 53-69. Springer, 2020.
257
+ [25] Dong Li, Jia-Bin Huang, Yali Li, Shengjin Wang, and Ming-Hsuan Yang. Weakly supervised object localization with progressive domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3512-3520, 2016.
258
+ [26] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014.
259
+ [27] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. Deepproblog: Neural probabilistic logic programming. Advances in Neural Information Processing Systems, 31, 2018.
260
+ [28] Robin Manhaeve, Sebastijan Dumančić, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. Neural probabilistic logic programming in deepproblog. Artificial Intelligence, 298: 103504, 2021.
261
+
262
+ [29] Martijn Oldenhof, Adam Arany, Yves Moreau, and Jaak Simm. Chemgrapher: optical graph recognition of chemical compounds by deep learning. Journal of chemical information and modeling, 60(10):4506-4517, 2020.
263
+ [30] Martijn Oldenhof, Adam Arany, Yves Moreau, and Jaak Simm. Self-labeling of fully mediating representations by graph alignment. In Benelux Conference on Artificial Intelligence, pages 46-65. Springer, 2021.
264
+ [31] Martijn Oldenhof, Ádám Arany, Yves Moreau, and Edward De Brouwer. Updating object detection models with probabilistic programming. In ICML Workshop on Updatable Machine Learning (UpML), 2022.
265
+ [32] Luc De Raedt, Kristian Kersting, Sriraam Natarajan, and David Poole. Statistical relational artificial intelligence: Logic, probability, and computation. Synthesis lectures on artificial intelligence and machine learning, 10(2):1-189, 2016.
266
+ [33] Kohulan Rajan, Achim Zielesny, and Christoph Steinbeck. Decimer: towards deep learning for chemical image recognition. Journal of Cheminformatics, 12(1):1-9, 2020.
267
+ [34] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems, 28, 2015.
268
+ [35] Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. Advances in neural information processing systems, 30, 2017.
269
+ [36] Noureddin M Sadawi, Alan P Sexton, and Volker Sorge. Chemical structure recognition: a rule-based approach. In Document Recognition and Retrieval XIX, volume 8297, page 82970E. International Society for Optics and Photonics, 2012.
270
+ [37] Kuniaki Saito, Yoshitaka Ushiku, Tatsuya Harada, and Kate Saenko. Strong-weak distribution alignment for adaptive object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6956–6965, 2019.
271
+ [38] Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. Advances in neural information processing systems, 30, 2017.
272
+ [39] Feifei Shao, Long Chen, Jian Shao, Wei Ji, Shaoning Xiao, Lu Ye, Yueting Zhuang, and Jun Xiao. Deep learning for weakly-supervised object detection and localization: A survey. Neurocomputing, 2022.
273
+ [40] Hyun Oh Song, Ross Girshick, Stefanie Jegelka, Julien Mairal, Zaid Harchaoui, and Trevor Darrell. On learning to localize objects with minimal supervision. In International Conference on Machine Learning, pages 1611-1619. PMLR, 2014.
274
+ [41] Leon Sterling and Ehud Y Shapiro. The art of Prolog: advanced programming techniques. MIT press, 1994.
275
+ [42] Matteo Tomei, Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. Art2real: Unfolding the reality of artworks via semantically-aware image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5849-5859, 2019.
276
+ [43] Jasper Uijlings, Stefan Popov, and Vittorio Ferrari. Revisiting knowledge transfer for training object class detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1101-1110, 2018.
277
+ [44] Thomas Winters, Giuseppe Marra, Robin Manhaeve, and Luc De Raedt. Deepstochlog: Neural stochastic logic programming. arXiv preprint arXiv:2106.12574, 2021.
278
+ [45] Yao Xue, Nilanjan Ray, Judith Hugh, and Gilbert Bigras. Cell counting by regression using convolutional neural network. In European Conference on Computer Vision, pages 274-290. Springer, 2016.
279
+
280
+ [46] Yuanyi Zhong, Jianfeng Wang, Jian Peng, and Lei Zhang. Boosting weakly supervised object detection with progressive knowledge transfer. In European conference on computer vision, pages 615-631. Springer, 2020.
281
+ [47] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2921–2929, 2016.
282
+ [48] Xinge Zhu, Jiangmiao Pang, Ceyuan Yang, Jianping Shi, and Dahua Lin. Adapting object detectors via selective cross-domain alignment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 687-696, 2019.
283
+
284
+ # A TRAINING DETAILS
285
+
286
+ For the hyperparameters, the idea was to stay as close as possible to the defaults of the pre-trained standard models, although some lightweight tuning was done. Table 4 summarizes the hyperparameters used for the different models.
287
+
288
+ <table><tr><td>Model</td><td>dataset</td><td>epochs</td><td>lr</td><td>lr_step_size</td><td>lr_gamma</td><td>momentum</td><td>batch size</td><td>weight decay</td><td>optimizer</td></tr><tr><td>DETR pre-train (retrain)</td><td>CLEVR</td><td>max 100</td><td>0.0001</td><td>7 (7-8)</td><td>0.1</td><td></td><td>8</td><td>0.0001</td><td>AdamW</td></tr><tr><td>DETR pre-train (retrain)</td><td>Mols.</td><td>max 100</td><td>0.0001</td><td>20 (20)</td><td>0.1</td><td></td><td>8</td><td>0.0001</td><td>AdamW</td></tr><tr><td>DETR pre-train (retrain)</td><td>MNIST</td><td>max 100</td><td>0.0001</td><td>15-20 (20)</td><td>0.1</td><td></td><td>8</td><td>0.0001</td><td>AdamW</td></tr><tr><td>RCNN pre-train (retrain)</td><td>all datasets</td><td>max 30</td><td>0.005</td><td>5 (5)</td><td>0.1</td><td>0.9</td><td>1</td><td>0.0005</td><td>SGD</td></tr><tr><td>RCNN Finetune</td><td>all datasets</td><td>max 20</td><td>0.001</td><td></td><td></td><td></td><td>16</td><td></td><td>Adam</td></tr><tr><td>DETR Finetune</td><td>CLEVR/Mols</td><td>max 20</td><td>0.001</td><td></td><td></td><td></td><td>16</td><td></td><td>Adam</td></tr><tr><td>DETR Finetune</td><td>MNIST</td><td>max 20</td><td>0.01</td><td></td><td></td><td></td><td>16</td><td></td><td>Adam</td></tr><tr><td>DETR Finetune*</td><td>CLEVR/Mols</td><td>max 100</td><td>0.002</td><td>20</td><td>0.1</td><td></td><td>8</td><td>0.0001</td><td>AdamW</td></tr><tr><td>RCNN Finetune*</td><td>CLEVR</td><td>max 20</td><td>0.001</td><td></td><td></td><td></td><td>15</td><td></td><td>Adam</td></tr><tr><td>RCNN Finetune*</td><td>Mols</td><td>max 20</td><td>0.00001</td><td></td><td></td><td></td><td>15</td><td></td><td>Adam</td></tr><tr><td>DETR masked box loss</td><td>CLEVR/Mols</td><td>max 100</td><td>0.0001</td><td>7</td><td>0.1</td><td></td><td>8</td><td>0.0001</td><td>AdamW</td></tr><tr><td>Resnet50-CAM models</td><td>all datasets</td><td>max 500</td><td>0.001</td><td></td><td></td><td></td><td>32</td><td></td><td>Adam</td></tr></table>
289
+
290
+ Table 4: Overview of the hyperparameters for the different models; most hyperparameters are left at the defaults of the standard models. Tuning was mostly done on the learning rate and the learning rate schedule. For every fold/dataset, the best epoch/lr/lr_step_size model is selected based on validation data.
291
+
292
+ # B DATASETS
293
+
294
+ We evaluate our approach on three different datasets: (1) a CLEVR-mini dataset, (2) a Molecules dataset with images of chemical compounds, and (3) an MNIST-based object detection dataset. For each dataset, three subsets, corresponding to different domains, are used: (1) a source domain, (2) a target domain, and (3) an out-of-distribution (OOD) domain. Source and target domains are split into 5 folds of train and validation sets and an independent test set. The sizes of the different splits per dataset are summarized in Table 5.
295
+
296
+ <table><tr><td>Dataset</td><td>Type</td><td>Split</td><td>Size (number of samples)</td></tr><tr><td>MNIST object detection</td><td>Source</td><td>train</td><td>700</td></tr><tr><td>MNIST object detection</td><td>Source</td><td>validation</td><td>300</td></tr><tr><td>MNIST object detection</td><td>Source</td><td>test</td><td>1000</td></tr><tr><td>MNIST object detection</td><td>Target</td><td>train</td><td>700</td></tr><tr><td>MNIST object detection</td><td>Target</td><td>validation</td><td>300</td></tr><tr><td>MNIST object detection</td><td>Target</td><td>test</td><td>1000</td></tr><tr><td>MNIST object detection</td><td>OOD</td><td>test</td><td>1000</td></tr><tr><td>Molecules</td><td>Source</td><td>train</td><td>1400</td></tr><tr><td>Molecules</td><td>Source</td><td>validation</td><td>600</td></tr><tr><td>Molecules</td><td>Source</td><td>test</td><td>1000</td></tr><tr><td>Molecules</td><td>Target</td><td>train</td><td>1400</td></tr><tr><td>Molecules</td><td>Target</td><td>validation</td><td>600</td></tr><tr><td>Molecules</td><td>Target</td><td>test</td><td>1000</td></tr><tr><td>Molecules</td><td>OOD</td><td>test</td><td>1000</td></tr></table>
297
+
298
+ Table 5: Dataset sizes for the different splits. For the train and validation splits, 5 folds are used.
299
+
300
+ # B.0.1 CLEVR-MINI DATASET
301
+
302
+ The CLEVR-mini dataset for our experiments is a selection of samples from the CLEVR dataset [21]. The different types available in the CLEVR dataset are combinations of shapes (cube, sphere, and cylinder), materials (metal and rubber), and sizes (large and small). Colors are ignored, as the images are first converted to grayscale before feeding them to the models. For the richly annotated source domain, we randomly select images with only sphere- or cylinder-shaped objects (no cubes) and with a minimum of three and a maximum of four objects per image. For the weakly annotated target domain, we experiment with two types of annotations. First, we use the per-class counts of the objects in the image. Second, instead of exact class counts, the annotations only specify whether each object class occurs exactly once in the image or multiple times. The advantage of this kind of labeling is that the annotator does not need to count the objects and only has to distinguish between a single and multiple occurrences of an object class. The images in the target domain can contain all combinations of object types (including cube-shaped objects) and have a minimum of five and a maximum of six objects per image. For the OOD dataset, we also select images with all possible combinations of object types, always with 10 objects per image. Some example images from the CLEVR-mini dataset can be found in Figure 1.
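+
+ As an illustration of the second annotation scheme, the following sketch (a hypothetical helper, not part of the released code) converts exact per-class counts into the weaker one-vs-multiple labels:
+
+ ```python
+ # Hypothetical illustration: converting exact class counts into the weaker
+ # "exactly one vs. multiple" annotation used for the CLEVR-mini target domain.
+ def weaken_annotation(class_counts: dict) -> dict:
+     """Map each present object class to 0 (exactly one instance) or 1 (multiple)."""
+     weak = {}
+     for cls, count in class_counts.items():
+         if count < 1:
+             continue  # absent classes are not annotated
+         weak[cls] = 0 if count == 1 else 1
+     return weak
+
+ # Example: one small metal cube and three large rubber cylinders.
+ print(weaken_annotation({"small_metal_cube": 1, "large_rubber_cylinder": 3}))
+ # {'small_metal_cube': 0, 'large_rubber_cylinder': 1}
+ ```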
303
+
304
+ # B.0.2 MOLECULES DATASET
305
+
306
+ The Molecules dataset contains images depicting chemical compounds. For the richly annotated source domain, a procedure similar to the one described in Oldenhof et al. [29, 30] was executed, using an RDKit [2] fork to generate the bounding box labels for the individual atoms present in the images. In the source domain, we allow the following atom types: carbon (C), hydrogen (H), oxygen (O), and nitrogen (N). In the weakly annotated target domain, we only have the counts of the atoms present, which translates to the chemical formula of the molecule in the image (e.g., $C_6H_{12}O_6$). The same classes from the source domain (C, H, O, and N) are also present in the target domain, as well as an extra atom type: sulfur (S). The OOD test dataset consists of 1000 images from the external UoB dataset [36] of chemical compounds containing only the atom types present in the target domain (C, H, O, N, and S). Some example images from the Molecules dataset are visualized in Figure 5.
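+
+ For illustration, a molecule depiction of this kind can be rendered from a SMILES string with the standard RDKit API as sketched below; the atom-level bounding box labels additionally require the RDKit fork mentioned above and are not reproduced here:
+
+ ```python
+ # Minimal sketch: rendering a molecule image with standard RDKit.
+ # Atom-level bounding boxes require the RDKit fork used in [29, 30].
+ from rdkit import Chem
+ from rdkit.Chem import Draw
+
+ mol = Chem.MolFromSmiles("OCC1OC(O)C(O)C(O)C1O")  # glucose, C6H12O6
+ img = Draw.MolToImage(mol, size=(300, 300))
+ img.save("molecule.png")
+ ```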
307
+
308
+ # B.0.3 MNIST OBJECT DETECTION DATASET
309
+
310
+ The MNIST object detection dataset is generated [1] using the original MNIST dataset [13]. Each image consists of three MNIST digits randomly positioned in the image. The MNIST object detection dataset allows experimenting with a more arbitrary type of weak supervision: each object in this dataset represents a digit, and digits can be aggregated. This makes it possible to label an image with only the sum of all digits in the image instead of the class counts of the objects. For the richly annotated source domain, digits 7, 8, and 9 are left out. The weakly annotated target domain has all possible digit classes (0-9). The labels of the target domain only contain the sum of all digits. For the OOD test dataset, images are used that contain a maximum of four MNIST digits, instead of three digits as in the other domains. Some example images from the MNIST object detection dataset are visualized in Figure 6.
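+
+ A simplified sketch of this generation process (hypothetical placement logic; the actual generator is the one referenced in [1]) is shown below:
+
+ ```python
+ # Simplified sketch: paste three MNIST digits onto a blank canvas and record
+ # both the rich labels (bounding boxes) and the weak label (sum of digits).
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ def make_sample(labels, digit_images, canvas_size=128):
+     canvas = np.zeros((canvas_size, canvas_size), dtype=np.uint8)
+     boxes, weak_sum = [], 0
+     for label, img in zip(labels, digit_images):  # img: 28x28 uint8 digit
+         x, y = rng.integers(0, canvas_size - 28, size=2)
+         canvas[y:y + 28, x:x + 28] = np.maximum(canvas[y:y + 28, x:x + 28], img)
+         boxes.append((int(x), int(y), int(x) + 28, int(y) + 28, label))
+         weak_sum += label
+     return canvas, boxes, weak_sum
+
+ stand_ins = [np.full((28, 28), 255, dtype=np.uint8) for _ in range(3)]
+ canvas, boxes, weak_sum = make_sample([3, 4, 7], stand_ins)
+ print(weak_sum)  # 14 -- the only label kept in the target domain
+ ```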
311
+
312
+ ![](images/8322a53663ffea603503373d41cb97feee570947a49aa14c27fc4c4d869989ed.jpg)
313
+ Figure 5: Weakly supervised knowledge transfer with probabilistic logical reasoning (ProbKT). On the left, we have the source domain, where a model can be trained using bounding box information (labels, positions), but only on a limited set of atom types (C, H, O, N). In the middle, we see that the pre-trained model is not able to correctly recognize the sulfur (S) from the target domain. On the right, we see that the model is able to adapt to the target domain after probabilistic reasoning using weak labels (e.g., counts of objects in the image) and is able to recognize the sulfur (S).
314
+
315
+ ![](images/a0b861faf5e883198d7f1afee3d04067d4e837bfd41f6b562175b9ecc4cec360.jpg)
316
+ Figure 6: Weakly supervised knowledge transfer with probabilistic logical reasoning (ProbKT). On the left, we have the source domain, where a model can be trained using bounding box information (labels, positions), but only on a limited set of digits (0, 1, 2, 3, 4, 5, 6). In the middle, we see that the pre-trained model is not able to correctly recognize the digit eight (8) from the target domain. On the right, we see that the model is able to adapt to the target domain after probabilistic reasoning using weak labels (e.g., sum of digits in the image) and is able to recognize the digit eight (8).
317
+
318
+ # C PROBKT AND PROBKT* SUPPLEMENTARY DETAILS
319
+
320
+ # C.1 FILTERING SAMPLES
321
+
322
+ The computational complexity of inference in the probabilistic programming module grows with the number of possible worlds. In turn, the number of possible worlds grows with the number of probabilistic facts $\hat{n}$.
323
+
324
+ One avenue to reduce the computational cost of the inference step is then to artificially reduce the number of probabilistic facts in each image. Let $\{\hat{p}_{y,n} : n = 1, \dots, \hat{n}\}$ be the probabilistic facts of an image and $q$ the corresponding inference query. We compute the filtered set of probabilistic facts $\bar{p}_{y,n}$ by setting
325
+
326
+ $$
327
+ \bar{p}_{y,n}^{k} = \begin{cases} 1 & \text{if } \hat{p}_{y,n}^{k} \geq \delta \\ 0 & \text{if } \exists k' \text{ s.t. } \hat{p}_{y,n}^{k'} \geq \delta \text{ and } \hat{p}_{y,n}^{k} < \delta \\ \hat{p}_{y,n}^{k} & \text{otherwise} \end{cases} \tag{2}
328
+ $$
329
+
330
+ The parameter $\delta \in [0,1]$ is a threshold at which we consider a probabilistic fact as certain. A probability of 1 or 0 effectively discards the probabilistic fact $\bar{p}_{y,n}$ from the inference procedure. However, we also have to update the inference query $q$ to reflect this filtration. We write $\bar{q}$ for the filtered query.
331
+
332
+ Example To illustrate this filtration strategy, let us consider an MNIST image with 3 digits: $\{3,4,7\}$. The query $q$ corresponds to the class labels in the image, that is, $q = \{3,4,7\}$. The object detection backbone outputs 3 box features with corresponding probabilities $\{\hat{p}_{y,0},\hat{p}_{y,1},\hat{p}_{y,2}\}$. Now let, e.g., $\hat{p}_{y,1}^3 = 0.99$. We can then filter out $\hat{p}_{y,1}$ (i.e., the prediction of a digit 3 is certain) and compute the filtered query $\bar{q} = \{4,7\}$.
333
+
334
+ Remark Equation 2 suggests a filtering based on the output probabilities only. However, one can also use information about the query for the filtration. For instance, one would only filter out a probabilistic fact if it is consistent with the query $q$. In the example above, it would be wiser not to filter out, e.g., $\hat{p}_{y,1}^{9} = 0.99$, as no nines are supposedly present in the image. One should then ideally propagate this probabilistic fact to the inference module so as to update the weights of the backbone and learn from this error.
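+
+ A minimal sketch of this query-aware filtering (illustrative helper names, operating on a single image) could look as follows:
+
+ ```python
+ # Sketch of the filtering of Eq. 2 with the query-aware refinement from the
+ # remark: a near-certain prediction is only discarded when it is consistent
+ # with the query q. Helper names are illustrative.
+ import numpy as np
+
+ def filter_facts(probs, query, delta=0.99):
+     """probs: (n_boxes, n_classes) class probabilities; query: list of labels."""
+     remaining = list(query)
+     kept = []
+     for p in probs:
+         k = int(np.argmax(p))
+         if p[k] >= delta and k in remaining:
+             remaining.remove(k)  # certain and consistent: drop fact, shrink query
+         else:
+             kept.append(p)       # keep as a probabilistic fact for inference
+     return np.array(kept), remaining
+
+ probs = np.array([[0.0, 0.0, 0.0, 0.99, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0],
+                   [0.1, 0.0, 0.0, 0.0, 0.4, 0.0, 0.0, 0.5, 0.0, 0.0],
+                   [0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.8, 0.0, 0.0]])
+ kept, q_bar = filter_facts(probs, query=[3, 4, 7])
+ print(q_bar)  # [4, 7] -- the certain digit 3 was filtered out
+ ```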
335
+
336
+ # C.2 GRADIENT OF THE LIKELIHOOD
337
+
338
+ The ProbKT likelihood has the following form:
339
+
340
+ $$
341
+ P _ {\mathcal {P}} (q) = \sum_ {\alpha \in E _ {q}} \prod_ {i} \prod_ {j} \hat {p} _ {i j} ^ {\alpha_ {i j}},
342
+ $$
343
+
344
+ where $\alpha$ is a "possible world" matrix of indicator variables:
345
+
346
+ $$
347
+ \alpha_{ij} = \begin{cases} 1 & \text{if object } i \text{ is of class } j \\ 0 & \text{otherwise,} \end{cases}
348
+ $$
349
+
350
+ and $E_{q}$ is the set of all possible $\alpha$ worlds compatible with the logical annotation $q$ .
351
+
352
+ Lemma 1. The gradient of the likelihood has the following form:
353
+
354
+ $$
355
+ \frac{\partial P_{\mathcal{P}}(q)}{\partial \theta} = \sum_{i} \sum_{j} \frac{\partial \hat{p}_{ij}}{\partial \theta} C_{ij},
356
+ $$
357
+
358
+ where the weight has the form:
359
+
360
+ $$
361
+ C_{ij} = P(E_q \mid O_i = j) = \sum_{\alpha \in E_q \mid O_i = j} \; \prod_{(i', j') \neq (i, j)} \hat{p}_{i'j'}^{\alpha_{i'j'}}
362
+ $$
363
+
364
+ In the case of Hungarian matching, the most probable possible world is selected, which corresponds to setting the conditional probability $P(E_q \mid O_{i} = j)$ to 1 if object $i$ is paired with label $j$ and 0 otherwise. The ProbKT gradient can thus be interpreted as a probability-weighted extension of the gradient resulting from the Hungarian matching.
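+
+ For completeness, the sketch below enumerates the possible worlds by brute force to compute both the likelihood $P_{\mathcal{P}}(q)$ and the weights $C_{ij}$ of Lemma 1; it is illustrative only, as the actual implementation relies on ProbLog's inference:
+
+ ```python
+ # Brute-force sketch: enumerate all class assignments (possible worlds),
+ # keep those whose class counts match the query, and accumulate the products
+ # of probabilities (assumed strictly positive here) for P(q) and C_ij.
+ from collections import Counter
+ from itertools import product
+
+ import numpy as np
+
+ def likelihood_and_weights(p, query_counts):
+     """p: (n_objects, n_classes) probabilities; query_counts: {class: count}."""
+     n, k = p.shape
+     lik, C = 0.0, np.zeros_like(p)
+     for world in product(range(k), repeat=n):  # one class per object
+         if Counter(world) != Counter(query_counts):
+             continue                           # world inconsistent with q
+         w = np.prod([p[i, world[i]] for i in range(n)])
+         lik += w
+         for i in range(n):                     # C_ij: worlds with O_i = j,
+             C[i, world[i]] += w / p[i, world[i]]  # excluding the (i, j) factor
+     return lik, C
+
+ p = np.array([[0.7, 0.3], [0.4, 0.6]])
+ lik, C = likelihood_and_weights(p, {0: 1, 1: 1})
+ print(lik)  # 0.7*0.6 + 0.3*0.4 = 0.54
+ ```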
365
+
366
+ # D FULL RESULTS
367
+
368
+ In Table 6, we present the full results for the MNIST experiment. We report the count accuracy (i.e., correct identification of the digits in the image), the sum accuracy (i.e., correct estimation of the sum of the digits in the image) and the mean average precision (mAP), a common object detection metric that reflects the ability to predict the positions and labels of the objects. We observe that the Resnet baseline performs poorly, lacking the necessary logic to process this dataset. We used both DETR and RCNN as object detection backbones in our experiments, showing high test accuracies when fine-tuned with our approach. As the results suggest, RCNN backbones lead to better performance than the DETR backbone.
369
+
370
+ <table><tr><td>Model</td><td>Type</td><td>MNIST count acc.</td><td>MNIST sum acc.</td><td>MNIST mAP (mAP@IoU=0.5)</td></tr><tr><td>Resnet50-CAM (baseline)</td><td>in-distribution</td><td>0.044 ± 0.041</td><td>0.506 ± 0.063</td><td>0.003 ± 0.003 (0.014 ± 0.011)</td></tr><tr><td>Resnet50-CAM (baseline)</td><td>OOD</td><td>0.01 ± 0.009</td><td>0.015 ± 0.004</td><td>0.003 ± 0.002 (0.011 ± 0.007)</td></tr><tr><td>Resnet50-CAM (baseline)</td><td>source domain</td><td>0.127 ± 0.132</td><td>0.649 ± 0.108</td><td>0.005 ± 0.004 (0.028 ± 0.018)</td></tr><tr><td>DETR (Pre-trained)</td><td>in-distribution</td><td>0.26 ± 0.012</td><td>0.262 ± 0.01</td><td>0.518 ± 0.014 (0.637 ± 0.017)</td></tr><tr><td>DETR (Pre-trained)</td><td>OOD</td><td>0.173 ± 0.01</td><td>0.177 ± 0.009</td><td>0.51 ± 0.012 (0.632 ± 0.015)</td></tr><tr><td>DETR (Pre-trained)</td><td>source domain</td><td>0.859 ± 0.031</td><td>0.86 ± 0.031</td><td>0.781 ± 0.009 (0.957 ± 0.008)</td></tr><tr><td>DETR (ProbKT)</td><td>in-distribution</td><td>0.662 ± 0.064</td><td>0.664 ± 0.065</td><td>0.615 ± 0.025 (0.856 ± 0.037)</td></tr><tr><td>DETR (ProbKT)</td><td>OOD</td><td>0.532 ± 0.083</td><td>0.533 ± 0.082</td><td>0.591 ± 0.03 (0.845 ± 0.038)</td></tr><tr><td>DETR (ProbKT)</td><td>source domain</td><td>0.878 ± 0.023</td><td>0.879 ± 0.023</td><td>0.737 ± 0.014 (0.952 ± 0.009)</td></tr><tr><td>RCNN (Pre-trained)</td><td>in-distribution</td><td>0.292 ± 0.005</td><td>0.298 ± 0.005</td><td>0.632 ± 0.014 (0.685 ± 0.002)</td></tr><tr><td>RCNN (Pre-trained)</td><td>OOD</td><td>0.205 ± 0.004</td><td>0.212 ± 0.004</td><td>0.631 ± 0.013 (0.683 ± 0.002)</td></tr><tr><td>RCNN (Pre-trained)</td><td>source domain</td><td>0.961 ± 0.008</td><td>0.961 ± 0.008</td><td>0.917 ± 0.021 (0.988 ± 0.002)</td></tr><tr><td>RCNN (ProbKT)</td><td>in-distribution</td><td>0.902 ± 0.005</td><td>0.903 ± 0.005</td><td>0.786 ± 0.021 (0.974 ± 0.001)</td></tr><tr><td>RCNN (ProbKT)</td><td>OOD</td><td>0.863 ± 0.008</td><td>0.865 ± 0.008</td><td>0.778 ± 0.021 (0.97 ± 0.001)</td></tr><tr><td>RCNN (ProbKT)</td><td>source domain</td><td>0.967 ± 0.004</td><td>0.967 ± 0.004</td><td>0.873 ± 0.016 (0.989 ± 0.001)</td></tr></table>
371
+
372
+ Table 6: Results of the SUM experiments on the MNIST object detection dataset. Reported test accuracies over the 5 folds.
373
+
374
375
+
376
+ <table><tr><td>Model</td><td>Data Domain</td><td>CLEVR count acc.</td><td>CLEVR mAP (mAP@IoU=0.5)</td><td>Mol. count acc.</td><td>Mol. mAP (mAP@IoU=0.5)</td></tr><tr><td>Resnet50-CAM</td><td>target domain</td><td>0.97 ± 0.005</td><td>0.036 ± 0.014 (0.200 ± 0.071)</td><td>0.978 ± 0.004</td><td>0.0 ± 0.0 (0 ± 0)</td></tr><tr><td>Resnet50-CAM</td><td>OOD</td><td>0.831 ± 0.016</td><td>0.029 ± 0.010 (0.153 ± 0.044)</td><td>0.0 ± 0.0</td><td>n/a1</td></tr><tr><td>Resnet50-CAM</td><td>source domain</td><td>0.993 ± 0.003</td><td>0.035 ± 0.019 (0.178 ± 0.084)</td><td>0.828 ± 0.021</td><td>0.0 ± 0.0 (0 ± 0)</td></tr><tr><td>WSOD-transfer</td><td>target domain</td><td>0.944 ± 0.004</td><td>0.844 ± 0.005 (0.988 ± 0.001)</td><td>0.001 ± 0.0</td><td>0.018 ± 0.004 (0.061 ± 0.011)</td></tr><tr><td>WSOD-transfer</td><td>OOD</td><td>0.73 ± 0.011</td><td>0.79 ± 0.005 (0.969 ± 0.001)</td><td>0.003 ± 0.002</td><td>n/a1</td></tr><tr><td>WSOD-transfer</td><td>source domain</td><td>0.989 ± 0.001</td><td>0.926 ± 0.001 (0.995 ± 0.0)</td><td>0.0 ± 0.0</td><td>0.021 ± 0.003 (0.069 ± 0.009)</td></tr><tr><td>DETR-joint</td><td>target domain</td><td>0.159 ± 0.133</td><td>0.579 ± 0.012 (0.684 ± 0.019)</td><td>0.357 ± 0.196</td><td>0.197 ± 0.055 (0.481 ± 0.071)</td></tr><tr><td>DETR-joint</td><td>OOD</td><td>0.084 ± 0.039</td><td>0.534 ± 0.012 (0.66 ± 0.012)</td><td>0.024 ± 0.021</td><td>n/a1</td></tr><tr><td>DETR-joint</td><td>source domain</td><td>0.923 ± 0.049</td><td>0.908 ± 0.017 (0.992 ± 0.001)</td><td>0.232 ± 0.127</td><td>0.23 ± 0.063 (0.565 ± 0.08)</td></tr><tr><td>DETR (pre-trained)</td><td>target domain</td><td>0.0 ± 0.0</td><td>0.498 ± 0.019 (0.533 ± 0.024)</td><td>0.464 ± 0.033</td><td>0.314 ± 0.006 (0.542 ± 0.006)</td></tr><tr><td>DETR (pre-trained)</td><td>OOD</td><td>0.0 ± 0.0</td><td>0.477 ± 0.013 (0.531 ± 0.021)</td><td>0.002 ± 0.001</td><td>n/a1</td></tr><tr><td>DETR (pre-trained)</td><td>source domain</td><td>0.97 ± 0.009</td><td>0.945 ± 0.009 (0.992 ± 0.001)</td><td>0.581 ± 0.022</td><td>0.409 ± 0.005 (0.722 ± 0.004)</td></tr><tr><td>ProbKT*(DETR)</td><td>target domain</td><td>0.949 ± 0.005</td><td>0.728 ± 0.014 (0.99 ± 0.003)</td><td>0.589 ± 0.042</td><td>0.373 ± 0.02 (0.669 ± 0.045)</td></tr><tr><td>ProbKT*(DETR)</td><td>OOD</td><td>0.741 ± 0.038</td><td>0.606 ± 0.017 (0.977 ± 0.004)</td><td>0.008 ± 0.008</td><td>n/a1</td></tr><tr><td>ProbKT*(DETR)</td><td>source domain</td><td>0.985 ± 0.004</td><td>0.937 ± 0.006 (0.995 ± 0.001)</td><td>0.275 ± 0.066</td><td>0.371 ± 0.021 (0.649 ± 0.041)</td></tr><tr><td>ProbKT(DETR)</td><td>target domain</td><td>0.946 ± 0.014</td><td>0.803 ± 0.011 (0.989 ± 0.006)</td><td>0.508 ± 0.027</td><td>0.204 ± 0.02 (0.507 ± 0.014)</td></tr><tr><td>ProbKT(DETR)</td><td>OOD</td><td>0.726 ± 0.035</td><td>0.715 ± 0.006 (0.974 ± 0.006)</td><td>0.004 ± 0.003</td><td>n/a1</td></tr><tr><td>ProbKT(DETR)</td><td>source domain</td><td>0.987 ± 0.003</td><td>0.948 ± 0.005 (0.995 ± 0.001)</td><td>0.549 ± 0.026</td><td>0.38 ± 0.013 (0.713 ± 0.006)</td></tr><tr><td>RCNN (pre-trained)</td><td>target domain</td><td>0.0 ± 0.0</td><td>0.586 ± 0.014 (0.598 ± 0.013)</td><td>0.592 ± 0.007</td><td>0.568 ± 0.005 (0.785 ± 0.004)</td></tr><tr><td>RCNN (pre-trained)</td><td>OOD</td><td>0.0 ± 0.0</td><td>0.582 ± 0.012 (0.603 ± 0.011)</td><td>0.348 ± 0.036</td><td>n/a1</td></tr><tr><td>RCNN (pre-trained)</td><td>source domain</td><td>0.988 ± 0.002</td><td>0.984 ± 0.01 (0.996 ± 0.0)</td><td>0.948 ± 0.004</td><td>0.737 ± 0.005 (0.979 ± 0.0)</td></tr><tr><td>ProbKT*(RCNN)</td><td>target domain</td><td>0.974 ± 0.004</td><td>0.855 ± 0.025 (0.994 ± 0.001)</td><td>0.945 ± 0.006</td><td>0.24 ± 0.042 (0.788 ± 0.073)</td></tr><tr><td>ProbKT*(RCNN)</td><td>OOD</td><td>0.901 ± 0.017</td><td>0.827 ± 0.022 (0.991 ± 0.001)</td><td>0.592 ± 0.032</td><td>n/a1</td></tr><tr><td>ProbKT*(RCNN)</td><td>source domain</td><td>0.993 ± 0.002</td><td>0.95 ± 0.021 (0.998 ± 0.0)</td><td>0.96 ± 0.003</td><td>0.655 ± 0.01 (0.974 ± 0.004)</td></tr><tr><td>ProbKT(RCNN)</td><td>target domain</td><td>0.975 ± 0.003</td><td>0.856 ± 0.039 (0.993 ± 0.001)</td><td>0.942 ± 0.009</td><td>0.289 ± 0.041 (0.829 ± 0.054)</td></tr><tr><td>ProbKT(RCNN)</td><td>OOD</td><td>0.89 ± 0.022</td><td>0.833 ± 0.042 (0.991 ± 0.001)</td><td>0.603 ± 0.037</td><td>n/a1</td></tr><tr><td>ProbKT(RCNN)</td><td>source domain</td><td>0.995 ± 0.002</td><td>0.941 ± 0.041 (0.998 ± 0.001)</td><td>0.96 ± 0.002</td><td>0.666 ± 0.005 (0.978 ± 0.002)</td></tr></table>
377
+
378
+ Table 7: Results of the experiments for the datasets: CLEVR-mini and Molecules. Reported test accuracies over the 5 folds. Best method is in bold for each metric and data distribution.
379
+
380
+ # E SOURCE CODE AND DATASETS
381
+
382
+ The source code and basic instructions are available at https://github.com/molden/ProbKT. The source code integrates features from the Weights & Biases (WandB) platform [5]. Basic features are supported without the need for an account on WandB, but to make full use of all features we recommend creating an account.
383
+
384
+ Datasets can be downloaded here:
385
+
386
+ - CLEVR-mini dataset https://figshare.com/s/db012765e5a38e14ef9c
387
+ - Molecules dataset https://figshare.com/s/3dc3508d39bf4cff8c7f
388
+ - MNIST object detection dataset https://figshare.com/s/c760de026f000524db5a
389
+
390
+ ProbLog script used in the ProbKT probabilistic logical reasoning framework for counting objects in an image (as in the CLEVR-mini dataset):
391
+
392
+ ```prolog
393
+ :- use_module(library(lists)).
394
+ nn(mnist_net,[X],Y,[0,1,2,3,4,5,6,7,8,9,10,11]) :: digit(X,Y).
395
+ count([],_,0).
396
+ count([X|T],X,Y) :- count(T,X,Z), Y is 1+Z.
397
+ count([X1|T],X,Z) :- X1 \= X, count(T,X,Z).
398
+ countall(List,X,C) :- sort(List,List1), member(X,List1), count(List,X,C).
399
+ roll([],L,L).
400
+ roll([H|T],A,L) :- roll(T,[Y|A],L), digit(H,Y).
401
+ countpart(_,[],[]).
402
+ countpart(List,[H|T],[F|L]) :- countall(List,H,F), countpart(List,T,L).
403
+ count_objects(X,L,C) :- roll(X,[],Result), countpart(Result,L,C).
404
+ ```
405
+
406
+ The query $q$ in the case of class counts would be count_objects(X, L, C). For example, an image $X$ with 1 small metal cube and 3 large rubber cylinders would result in the following query: count_objects(X, [small_metal_cube, large_rubber_cylinder], [1, 3]).
407
+
408
+ ProbLog script used in the ProbKT probabilistic logical reasoning framework for aggregating the digits in an image:
409
+
410
+ ```prolog
411
+ :- use_module(library(lists)).
412
+ nn(mnist_net,[X],Y,[0,1,2,3,4,5,6,7,8,9]) :: digit(X,Y).
413
+ sum([],0).
414
+ sum([X|T],Y) :- sum(T,Z), Y is X+Z.
415
+ roll([],L,L).
416
+ roll([H|T],A,L) :- roll(T,[Y|A],L), digit(H,Y).
417
+ sum_digits(X,Y) :- roll(X,[],Result), sum(Result,Y).
418
+ ```
419
+
420
+ The query $q$ in the case of the sum of digits would be sum_digits(X, Y). For example, an image $X$ whose digits sum to 12 would result in the following query: sum_digits(X, 12).
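+
+ As a sanity check, this program can be evaluated with the plain ProbLog Python package by replacing the neural predicate with fixed probabilistic facts (a sketch, assuming the `problog` package is installed; the full system uses DeepProbLog instead):
+
+ ```python
+ # Sketch: evaluating the sum program with plain ProbLog, replacing the
+ # neural predicate nn(...)::digit/2 by fixed annotated disjunctions.
+ from problog import get_evaluatable
+ from problog.program import PrologString
+
+ model = PrologString("""
+ 0.8::digit(x1,3); 0.2::digit(x1,5).
+ 0.6::digit(x2,5); 0.4::digit(x2,4).
+ sum([],0).
+ sum([X|T],Y) :- sum(T,Z), Y is X+Z.
+ roll([],L,L).
+ roll([H|T],A,L) :- roll(T,[Y|A],L), digit(H,Y).
+ sum_digits(X,Y) :- roll(X,[],Result), sum(Result,Y).
+ query(sum_digits([x1,x2],8)).
+ """)
+ print(get_evaluatable().create_from(model).evaluate())
+ # only the world (3, 5) sums to 8: probability 0.8 * 0.6 = 0.48
+ ```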
421
+
422
+ ProbLog script used in the ProbKT probabilistic logical reasoning framework for taking into account non-exact counts on images. The listing below is a minimal sketch of such a script: it reuses the counting predicates of the first script and assumes that each flag in S marks whether the corresponding count in C is exact (0) or a strict lower bound, i.e., the class occurs more than C times (1):
+
+ ```prolog
+ :- use_module(library(lists)).
+ nn(mnist_net,[X],Y,[0,1,2,3,4,5,6,7,8,9,10,11]) :: digit(X,Y).
+ count([],_,0).
+ count([X|T],X,Y) :- count(T,X,Z), Y is 1+Z.
+ count([X1|T],X,Z) :- X1 \= X, count(T,X,Z).
+ % flag 0: exact count C; flag 1: strictly more than C occurrences
+ range_count(List,X,C,0) :- count(List,X,C).
+ range_count(List,X,C,1) :- count(List,X,Z), Z > C.
+ roll([],L,L).
+ roll([H|T],A,L) :- roll(T,[Y|A],L), digit(H,Y).
+ rangepart(_,[],[],[]).
+ rangepart(List,[H|T],[C|Cs],[S|Ss]) :- range_count(List,H,C,S), rangepart(List,T,Cs,Ss).
+ range_countobjects(X,L,C,S) :- roll(X,[],Result), rangepart(Result,L,C,S).
+ ```
604
+
605
+ The query $q$ in the case of non-exact counts of objects would be range_countobjects(X, L, C, S). For example, an image $X$ with exactly one small metal cube and multiple large rubber spheres would result in the following query: range_countobjects(X, [small_metal_cube, large_rubber_sphere], [1, 1], [0, 1]).
606
+
607
+ # E.1 INFERENCE EXAMPLE FOR MNIST DATASET
608
+
609
+ To illustrate the inference process, let us follow the evaluation of the clause sum([X1, X2], 8), which can result from the query sum_digits(X, 8) in the case of two visible digits in the image X.
610
+
611
+ This clause is true if and only if $X_{1} + X_{2} = 8$ .
612
+
613
+ In the case of MNIST digits $\{0, 1, \dots, 9\}$, enumerating the possible worlds gives the following set:
614
+
615
+ $$
616
+ \{(0, 8), (1, 7), (2, 6), \dots , (8, 0) \} \tag {3}
617
+ $$
618
+
619
+ Summing the probabilities of all possible worlds, we get:
620
+
621
+ $$
622
+ p_{1}(0)\, p_{2}(8) + p_{1}(1)\, p_{2}(7) + \dots + p_{1}(8)\, p_{2}(0), \tag{4}
623
+ $$
624
+
625
+ where $p_1$ and $p_2$ are the distributions of the random variables $X_1$ and $X_2$, respectively.
626
+
627
+ Or in a general form:
628
+
629
+ $$
630
+ p _ {Y} (Y) = \sum_ {X _ {1}} p _ {1} \left(X _ {1}\right) p _ {2} \left(Y - X _ {1}\right). \tag {5}
631
+ $$
632
+
633
+ As expected, the distribution of the sum is the convolution of the distributions of the two terms. This observation generalizes trivially to more than two terms. The cost function corresponding to maximum likelihood estimation is the negative log-likelihood $-\log (p_{Y}(Y))$.
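+
+ In code, this convolutional view gives a cheap way to evaluate the loss for sum labels (a NumPy sketch with illustrative distributions):
+
+ ```python
+ # Sketch: the distribution of a sum of digits is the convolution of the
+ # per-digit class distributions (Eq. 5), from which the negative
+ # log-likelihood of the weak label follows directly.
+ import numpy as np
+
+ p1 = np.full(10, 0.1)        # distribution of X1 over digits 0..9
+ p2 = np.full(10, 0.1)        # distribution of X2
+ p_sum = np.convolve(p1, p2)  # distribution of X1 + X2 over 0..18
+
+ y = 8                        # observed weak label: sum of the digits
+ nll = -np.log(p_sum[y])
+ print(p_sum[y], nll)         # 9 worlds x 0.01 = 0.09
+ ```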
2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5643b4bdd88ab7818ad8e1bde25002306f826786a6f8f1613d639dc786c5fe89
3
+ size 1001298
2023/Weakly Supervised Knowledge Transfer with Probabilistic Logical Reasoning for Object Detection/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/b2c89086-3efa-4d35-8fb8-fa570d2c2733_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/b2c89086-3efa-4d35-8fb8-fa570d2c2733_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/b2c89086-3efa-4d35-8fb8-fa570d2c2733_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8ac80ab06bfd4465efd7b7d6d5fd4aa6ece75f756e605c06841275ac879ec820
3
+ size 13311161
2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/full.md ADDED
@@ -0,0 +1,385 @@
 
 
 
 
1
+ # WEAKLY-SUPERVISED HOI DETECTION VIA PRIOR-GUIDED BI-LEVEL REPRESENTATION LEARNING
2
+
3
+ Bo Wan $^{1,*}$, Yongfei Liu $^{2,*}$, Desen Zhou $^{2}$, Tinne Tuytelaars $^{1}$, Xuming He $^{2,3}$
4
+
5
+ $^{1}$ KU Leuven, Leuven, Belgium; $^{2}$ ShanghaiTech University, Shanghai, China
6
+ $^{3}$ Shanghai Engineering Research Center of Intelligent Vision and Imaging
+ {bwan, tinne.tuytelaars}@esat.kuleuven.be, {liuyf3,zhouds,hexm}@shanghaitech.edu.cn
7
+
8
+ # ABSTRACT
9
+
10
+ Human object interaction (HOI) detection plays a crucial role in human-centric scene understanding and serves as a fundamental building-block for many vision tasks. One generalizable and scalable strategy for HOI detection is to use weak supervision, learning from image-level annotations only. This is inherently challenging due to ambiguous human-object associations, large search space of detecting HOIs and highly noisy training signal. A promising strategy to address those challenges is to exploit knowledge from large-scale pretrained models (e.g., CLIP), but a direct knowledge distillation strategy (Liao et al., 2022) does not perform well on the weakly-supervised setting. In contrast, we develop a CLIP-guided HOI representation capable of incorporating the prior knowledge at both image level and HOI instance level, and adopt a self-taught mechanism to prune incorrect human-object associations. Experimental results on HICO-DET and V-COCO show that our method outperforms the previous works by a sizable margin, showing the efficacy of our HOI representation.
11
+
12
+ # 1 INTRODUCTION
13
+
14
+ Human object interaction detection aims to simultaneously localize the human-object regions in an image and to classify their interactions, which serves as a fundamental building-block in a wide range of tasks in human-centric artificial intelligence, such as human activity recognition (Heilbron et al., 2015; Tina et al., 2021), human motion tracking (Wafae et al., 2019; Nishimura et al., 2021) and anomalous behavior detection (Liu et al., 2018; Pang et al., 2020).
15
+
16
+ Usually, HOI detection adopts a supervised learning paradigm (Gupta & Malik, 2015; Chao et al., 2018; Wan et al., 2019; Gao et al., 2020; Zhang et al., 2021c). This requires detailed annotations (i.e. human and object bounding boxes and their interaction types) in the training stage. However, such HOI annotations are expensive to collect and prone to labeling errors. In contrast, it is much easier to acquire image-level descriptions of target scenes. Consequently, a more scalable strategy for HOI detection is to learn from weak annotations at the image level, known as weakly-supervised HOI detection (Zhang et al., 2017). Learning under such weak supervision is particularly challenging mainly due to the lack of accurate visual-semantic associations, large search space of detecting HOIs and highly noisy training signal from only image level supervision.
17
+
18
+ Most existing works (Zhang et al., 2017; Baldassarre et al., 2020; Kumaraswamy et al., 2021) attempt to tackle the weakly-supervised HOI detection in a Multiple Instance Learning (MIL) framework (Ilse et al., 2018). They first utilize an object detector to generate human-object proposals and then train an interaction classifier with image-level labels as supervision. Despite promising results, these methods suffer from several weaknesses when coping with diverse and fine-grained HOIs. Firstly, they usually rely on visual representations derived from the external object detector, which mainly focus on the semantic concepts of the objects in the scene and hence are insufficient for capturing the concept of fine-grained interactions. Secondly, as the image-level supervision tends to ignore the imbalance in HOI classes, their representation learning is more susceptible to the dataset bias and dominated by frequent interaction classes. Finally, these methods learn the HOI concepts from a candidate set generated by pairing up all the human and object proposals, which is highly noisy and often leads to erroneous human-object associations for many interaction classes.
19
+
20
+ To address the aforementioned limitations, we introduce a new weakly-supervised HOI detection strategy. It aims to incorporate the prior knowledge from pretrained foundation models to facilitate the HOI learning. In particular, we propose to integrate CLIP (Radford et al., 2021b), a large-scale vision-language pretrained model. This allows us to exploit the strong generalization capability of the CLIP representation for learning a better HOI representation under weak supervision. Compared to the representations learned by the object detector, the CLIP representations are inherently less object-centric, hence more likely to also incorporate aspects of the human-object interaction, as evidenced by Appendix A. Although a few works have successfully exploited CLIP for supervised HOI detection in the past, experimentally we find they do not perform well in the more challenging weakly-supervised setting (cf. Appendix B). We hypothesize this is because they only transfer knowledge at image level, and fail without supervision at the level of human-object pairs.
21
+
22
+ To this end, we develop a CLIP-guided HOI representation capable of incorporating the prior knowledge of HOIs at two different levels. First, at the image level, we utilize the visual and linguistic embeddings of the CLIP model to build a global HOI knowledge bank and generate image-level HOI predictions. In addition, for each human-object pair, we enrich the region-based HOI features by the HOI representations in the knowledge bank via a novel attention mechanism. Such a bi-level framework enables us to exploit the image-level supervision more effectively through the shared HOI knowledge bank, and to enhance the interaction feature learning by introducing the visual and text representations of the CLIP model.
23
+
24
+ We instantiate our bi-level knowledge integration strategy as a modular deep neural network with a global and local branch. Given the human-object proposals generated by an off-the-shelf object detector, the global branch starts with a backbone network to compute image feature maps, which are used by a subsequent HOI recognition network to predict the image-wise HOI scores. The local branch builds a knowledge transfer network to extract the human-object features and augment them with the CLIP-guided knowledge bank, followed by a pairwise classification network to compute their relatedness and interaction scores<sup>1</sup>. The relatedness scores are used to prune incorrect human-object associations, which mitigates the issue of noisy proposals. Finally, the outputs of the two branches are fused to generate the final HOI scores.
25
+
26
+ To train our HOI detection network with image-level annotations, we first initialize the backbone network and the HOI knowledge bank from the CLIP encoders, and then train the entire model in an end-to-end manner. In particular, we devise a novel multi-task weak supervision loss consisting of three terms: 1) an image-level HOI classification loss for the global branch; 2) an MIL-like loss for the interaction scores predicted by the local branch, which is defined on the aggregate of all the human-object pair predictions; 3) a self-taught classification loss for the relatedness of each human-object pair, which uses the interaction scores from the model itself as supervision.
27
+
28
+ We validate our methods on two public benchmarks: HICO-DET (Chao et al., 2018) and V-COCO (Gupta & Malik, 2015). The empirical results and ablative studies show our method consistently achieves state-of-the-art performance on all benchmarks. In summary, our contributions are three-fold: (i) We exploit the CLIP knowledge to build a prior-enriched HOI representation, which is more robust for detecting fine-grained interaction types and under imbalanced data distributions. (ii) We develop a self-taught relatedness classification loss to alleviate the problem of mis-association between human-object pairs. (iii) Our approach achieves state-of-the-art performance on the weakly-supervised HOI detection task on both benchmarks.
29
+
30
+ # 2 RELATED WORKS
31
+
32
+ HOI detection: Most works on supervised HOI detection can be categorized in two groups: two-stage and one-stage HOI detection. Two-stage methods first generate a set of human-object proposals with an external object detector, then classify their interactions. They mainly focus on exploring additional human pose information (Wan et al., 2019; Li et al., 2020a; Gupta et al., 2019), pairwise relatedness (Li et al., 2019a; Zhou et al., 2020) or modeling relations between object and human (Gao et al., 2020; Zhang et al., 2021c; Ulutan et al., 2020; Zhou & Chi, 2019), to enhance the HOI representations. One-stage methods predict human & object locations and their interaction types simultaneously in an end-to-end manner, which are currently dominated by transformer-based architectures (Carion et al., 2020; Kim et al., 2022; Dong et al., 2022; Zhang et al., 2021a;b).
33
+
34
+ ![](images/e651bacab4fc1cd655ae1937f2758bab90cffc62a4962ef321be81b8fb18d4d7.jpg)
35
+ Figure 1: Model overview. There are four modules in our network: a backbone network, an HOI recognition network, a knowledge transfer network and a pairwise classification network.
36
+
37
+ Supervised methods show superior performance, but require labor-intensive HOI annotations that are infeasible to obtain in many scenarios. Thus, in this work we focus on HOI detection under weak supervision.
38
+
39
+ Weakly-supervised HOI detection: Weakly-supervised HOI detection aims to learn instance-level HOIs with only image-level annotations. Prest et al. (2011) learn a set of binary action classifiers based on detected human-object pairs, where the human proposal is obtained from a part-based human detector and the object is derived from its relative position with respect to the human. PPR-FCN (Zhang et al., 2017) employs a parallel FCN to perform pair selection and classification. Explainable-HOI (Baldassarre et al., 2020) adopts graph nets to capture relations for better image-level HOI recognition, and uses backward explanation for instance-level HOI detection. MX-HOI (Kumaraswamy et al., 2021) proposes a momentum-independent learning strategy to utilize strong & weak labels simultaneously. AlignFormer (Kilickaya & Smeulders, 2021) proposes an align layer in a transformer framework, which utilizes geometric & visual priors to generate pseudo alignments for training. Those methods focus on learning HOIs with advanced network structures or better pseudo alignments. However, they still suffer from noisy human-object associations and ambiguous interaction types. To address those challenges, we exploit prior knowledge from CLIP to build discriminative HOI representations.
40
+
41
+ Knowledge exploitation of pretrained V&L models: Recently, CLIP (Radford et al., 2021a) model has demonstrated strong generalization to various downstream tasks (Ghiasi et al., 2021; Du et al., 2022; Gu et al., 2021). Some works also explore CLIP knowledge in supervised HOI detection, e.g., CATN (Dong et al., 2022) initializes the object query with category-aware semantic information from CLIP text encoder, and GEN-VLTK (Liao et al., 2022) employs image feature distillation and classifier initialization with HOI prompts. However, they only exploit CLIP knowledge at a coarse level and require detailed annotations of human-object pairs. It is non-trivial to extend such strategies to the weak supervision paradigm due to highly noisy training signals. In our work, we build a deep connection between CLIP and HOI representation by incorporating the prior knowledge of HOIs at both image and HOI instance levels.
42
+
43
+ # 3 METHOD
44
+
45
+ # 3.1 PROBLEM SETUP AND METHOD OVERVIEW
46
+
47
+ Problem setup Given an input image $I$ , the task of weakly-supervised HOI detection aims to localize and recognize the human-object interactions, while only the corresponding image-level HOI categories are available for training. Formally, we aim to learn a HOI detector $\mathcal{M}$ , which takes an image $I$ as input and generates a set of tuples $\mathcal{O} = \{(\mathbf{x}_h,\mathbf{x}_o,c_o,a_{h,o},R_{h,o}^a)\}$ , i.e., $\mathcal{O} = \mathcal{M}(I)$ . Here each tuple indicates a HOI instance, in which $\mathbf{x}_h,\mathbf{x}_o\in \mathbb{R}^4$ represent human and object bounding boxes, $c_{o}\in \{1,\dots,C\}$ is the object category, $a_{h,o}\in \{1,\dots,A\}$ denotes the interaction class associated with $\mathbf{x}_h$ and $\mathbf{x}_o$ , and $R_{h,o}^{a}\in \mathbb{R}$ is the HOI score. For the weakly-supervised setting,
48
+
49
+ each training image is annotated with a set of HOI categories $\mathcal{R} = \{r^{*}\}$ at the image level only, where $r^{*} \in \{1, \dots, N\}$ is an index to a combination of ground-truth object category $c^{*}$ and interaction category $a^{*}$ , and $N$ denotes the number of all possible HOI combinations defined on the dataset.
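+
+ Concretely, one detected HOI instance in $\mathcal{O}$ could be represented by a record such as the following (a hypothetical sketch of the tuple, not the authors' code):
+
+ ```python
+ # Hypothetical sketch of one HOI instance (x_h, x_o, c_o, a_{h,o}, R^a_{h,o}).
+ from dataclasses import dataclass
+ from typing import Tuple
+
+ @dataclass
+ class HOIInstance:
+     box_human: Tuple[float, float, float, float]   # x_h
+     box_object: Tuple[float, float, float, float]  # x_o
+     object_class: int                              # c_o in {1, ..., C}
+     interaction: int                               # a_{h,o} in {1, ..., A}
+     score: float                                   # HOI score R^a_{h,o}
+
+ inst = HOIInstance((10, 20, 80, 200), (60, 150, 140, 220), 3, 17, 0.91)
+ ```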
50
+
51
+ Method Overview As we lack supervision for the HOI locations, we adopt a typical hypothesize-and-recognize strategy (Zhang et al., 2017; Baldassarre et al., 2020; Kumaraswamy et al., 2021) for HOI detection: first we generate a set of human and object proposals with an off-the-shelf object detector (Ren et al., 2015) and then predict the interaction class for all human-object combinations.
52
+
53
+ Unlike other methods, we do not re-use the feature maps of the object or human detector - we only keep the bounding boxes. Instead, we learn a new representation optimized for the HOI task. This is challenging under the weak setting as the model learning is noisy, but feasible by leveraging the rich semantic knowledge from a pretrained large-scale multimodal model, like CLIP. However, the naive knowledge integration strategies for the supervised setting fail when directly applied in the weak setting, as evidenced by our experiments in Appendix B.
54
+
55
+ Our framework adopts two philosophies to address the challenges in the weakly-supervised HOI task: the first is to integrate the prior knowledge into discriminative representation learning, and the second is to suppress noise in learning. For the first philosophy, we utilize the prior knowledge from CLIP to guide the representation learning in both global image-level and fine-grained human-object pairs, which is instantiated by a bi-level knowledge integration strategy. For the second philosophy, we adopt an effective self-taught learning mechanism to suppress the irrelevant pairs.
56
+
57
+ We instantiate the bi-level knowledge integration strategy with a two-branch deep network. Our detection pipeline starts with a set of human proposals with detection scores $\{(\mathbf{x}_h, s_h)\}$ , and object proposals with their categories and detection scores $\{(\mathbf{x}_o, c_o, s_o)\}$ . Then, the global branch performs image-level HOI recognition by utilizing a CLIP-initialized HOI knowledge bank as a classifier. This allows us to exploit both visual and text encoders from CLIP to generate better HOI representations. In parallel, for each human-object pair $(\mathbf{x}_h, \mathbf{x}_o)$ , the local branch explicitly augments the pairwise HOI features with the HOI knowledge bank to then identify their relatedness and interaction classes.
58
+
59
+ To train our model, we use a multi-task loss, which incorporates an HOI recognition loss defined on image-wise HOIs for finetuning the visual encoder and the knowledge bank, and a self-taught relatedness classification loss for suppressing the background human-object associations, on top of the standard MIL-based loss. We first present the model details in Sec. 3.2, followed by the training strategy in Sec. 3.3.
60
+
61
+ # 3.2 MODEL DESIGN
62
+
63
+ Now we introduce our bi-level knowledge integration strategy, where the aim is to exploit CLIP textual embeddings of HOI labels as a HOI knowledge bank for HOI representation learning, and to transfer such knowledge both at the image level and at the level of human-object pairs for interaction prediction. Specifically, as shown in Fig. 1, our network consists of a global branch and a local branch. The global branch includes a backbone network (Sec. 3.2.1) that extracts image features, and a HOI recognition network (Sec. 3.2.2) that uses a CLIP-based HOI knowledge bank to predict image-level HOI scores. For each human-object proposal generated by an off-the-shelf object detector, the local branch employs a knowledge transfer network (Sec. 3.2.3) to compute its feature representation with enhancement from the HOI knowledge bank, and a pairwise classification network (Sec. 3.2.4) to compute the relatedness and interaction scores. Finally, we generate the final HOI detection scores by combining the global HOI scores with the local predictions (Sec. 3.2.5).
64
+
65
+ HOI Knowledge Bank Generation CLIP builds a powerful vision-language model by pretraining on large-scale image-text pairs. It consists of a visual encoder $\mathcal{F}_V$ and a textual encoder $\mathcal{F}_T$, mapping visual and textual inputs to a shared latent space. Here, we exploit CLIP to generate a HOI knowledge bank. We adopt a prompt strategy similar to CLIP's, using a common template 'a person {verb} a/an {object}' to convert HOI labels into text prompts (e.g., converting 'drive car' to 'a person driving a car'). We then feed these sentences into the CLIP textual encoder $\mathcal{F}_T$ to initialize the HOI knowledge bank $\mathcal{W}_T \in \mathbb{R}^{N \times D}$, with $D$ denoting the feature dimension. One can think of $\mathcal{W}_T$ as a set of 'prototypes' in feature space, one for each HOI in the dataset.
66
+
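+ As a minimal sketch of this initialization step, the snippet below uses the open-source `clip` package; the three-label list and the un-inflected 'a/an' handling are illustrative placeholders, not the authors' exact pipeline, and the real bank enumerates all $N$ dataset HOIs.
+
+ ```python
+ import torch
+ import clip  # pip install git+https://github.com/openai/CLIP.git
+
+ device = "cuda" if torch.cuda.is_available() else "cpu"
+ model, _ = clip.load("RN101", device=device)  # CLIP visual + text encoders
+
+ # Hypothetical HOI label list: (verb_phrase, object) pairs from the dataset.
+ hoi_labels = [("driving", "car"), ("riding", "horse"), ("holding", "kite")]
+ prompts = [f"a person {verb} a {obj}" for verb, obj in hoi_labels]
+
+ with torch.no_grad():
+     tokens = clip.tokenize(prompts).to(device)   # (N, 77) token ids
+     W_T = model.encode_text(tokens).float()      # (N, D) HOI knowledge bank
+
+ print(W_T.shape)  # N prototypes, one per HOI category
+ ```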
67
+ # 3.2.1 GLOBAL BRANCH: BACKBONE NETWORK
68
+
69
+ To incorporate CLIP for feature extraction, we initialize the backbone network (e.g., a ResNet-101 (He et al., 2016)) with CLIP's visual encoder $\mathcal{F}_V$ to generate a feature map $\Gamma$ for the input image $I$ . We further compute a global feature vector $v_{g} \in \mathbb{R}^{D}$ with a self-attention operation (Radford et al., 2021b).
70
+
71
+ ![](images/f3349fb8251160198ddf86acc80f35bdf49add88f16e617e0264756fb346c105.jpg)
72
+ (a) knowledge transfer network
73
+
74
+ ![](images/43ccc2a33ded6bf714fc9c749ae5107721836e3e4420f610ab7a8b7c1b4c5370.jpg)
75
+ (b) pseudo relatedness label generation
76
+ Figure 2: The knowledge transfer network explicitly transfers the discriminative relation-level semantic knowledge derived from CLIP to the pairwise HOI representations. Pseudo relatedness label generation uses the pairwise interaction scores to generate the pseudo association labels for self-taught relatedness classification.
77
+
78
+ # 3.2.2 GLOBAL BRANCH: HOI RECOGNITION NETWORK
79
+
80
+ We perform an image-wise HOI recognition task with the HOI knowledge bank $\mathcal{W}_T$ . We obtain global HOI scores $s_g \in \mathbb{R}^N$ by computing the inner product between the image feature $v_g$ and the knowledge bank $\mathcal{W}_T$ : $s_g = \mathcal{W}_T \times v_g$ , where $\times$ is matrix multiplication. This has the effect of adapting the visual encoder and knowledge bank parameters to the HOI recognition task, fully taking advantage of the knowledge from CLIP.
81
+
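+ Concretely, the global scoring step reduces to a single matrix-vector product; a toy sketch with random tensors standing in for the learned quantities:
+
+ ```python
+ import torch
+
+ N, D = 600, 512                 # e.g., HOI classes and feature dim (illustrative values)
+ W_T = torch.randn(N, D)         # knowledge bank (CLIP text embeddings in practice)
+ v_g = torch.randn(D)            # attention-pooled global image feature
+
+ s_g = W_T @ v_g                 # (N,) image-level HOI scores
+ probs = torch.sigmoid(s_g)      # multi-label probabilities
+ ```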
82
+ # 3.2.3 LOCAL BRANCH: KNOWLEDGE TRANSFER NETWORK
83
+
84
+ Given the CLIP-initialized visual encoder, a standard HOI representation can be formed by concatenating the human and object appearance features along with their spatial encoding. However, even after the finetuning described above, such a representation still mainly focuses on object-level semantic cues rather than relation-level concepts. In this module, we explicitly exploit the HOI knowledge bank $\mathcal{W}_T$ to learn a local relation-specific HOI representation. To achieve this, we propose an attention-based architecture as shown in Fig. 2(a).
85
+
86
+ Specifically, for each human proposal $\mathbf{x}_h$ and object proposal $\mathbf{x}_o$ , we use RoI-Align (He et al., 2017) to crop the feature maps from $\Gamma$ , followed by a self-attention operation, to compute their appearance features $v_h, v_o \in \mathbb{R}^D$ . Then we compute a spatial feature $v_{sp}$ by encoding the relative positions of their bounding boxes $(\mathbf{x}_h, \mathbf{x}_o)$<sup>2</sup>. The holistic HOI representation $v_p \in \mathbb{R}^D$ is an embedding of the human and object appearance features and their spatial feature, i.e., $v_p = \mathcal{F}_E([v_h; v_o; v_{sp}])$ , where $[\cdot]$ is the concatenation operation and $\mathcal{F}_E$ is a multi-layer perceptron (MLP).
87
+
88
+ To enhance relation-level concepts, we further compute the union region $\mathbf{x}_u\in \mathbb{R}^4$ of the pair (see Fig. 2a) and extract the corresponding appearance feature $v_{u}\in \mathbb{R}^{D}$ via RoI-Align over the feature map $\Gamma$ . The union region is important as it encodes relational context cues, but it potentially also contains a large amount of background that is noisy for model learning. We thus devise an attention module that is similar in design to the HOI recognition network, but uses the union feature $v_{u}$ as a query to extract a meta-embedding $v_{meta}\in \mathbb{R}^{D}$ from the HOI knowledge bank $\mathcal{W}_T$ . The final HOI representation $\hat{v}_p\in \mathbb{R}^D$ is built by fusing the holistic representation $v_{p}$ and $v_{meta}$ with an MLP $\mathcal{F}_K$ :
89
+
90
+ $$
91
+ \alpha = \operatorname{Softmax}\left(\mathcal{W}_T \times v_u\right); \quad v_{meta} = \alpha^{\intercal} \times \mathcal{W}_T; \quad \hat{v}_p = \mathcal{F}_K\left(v_p + v_{meta}\right). \tag{1}
92
+ $$
93
+
94
+ Here $\alpha \in \mathbb{R}^N$ is the normalized attention weight and $\intercal$ denotes transposition. $v_{meta}$ encodes a discriminative representation from CLIP and facilitates feature sharing between HOI classes.
95
+
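+ A minimal PyTorch sketch of Eq. (1) follows; the depth and activation of the fusion MLP $\mathcal{F}_K$ are our assumptions, as the paper specifies it only as an MLP.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class KnowledgeTransfer(nn.Module):
+     """Sketch of Eq. (1): attend over the HOI knowledge bank with the union feature."""
+     def __init__(self, dim: int):
+         super().__init__()
+         # F_K: fusion MLP (architecture assumed, not specified in the paper).
+         self.f_k = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
+
+     def forward(self, W_T, v_u, v_p):
+         # W_T: (N, D) knowledge bank; v_u: (D,) union feature; v_p: (D,) holistic pair feature.
+         alpha = torch.softmax(W_T @ v_u, dim=0)  # (N,) attention over HOI prototypes
+         v_meta = alpha @ W_T                     # (D,) meta-embedding from the bank
+         return self.f_k(v_p + v_meta)            # enhanced representation \hat{v}_p
+ ```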
96
+ # 3.2.4 LOCAL BRANCH: PAIRWISE CLASSIFICATION NETWORK
97
+
98
+ Given the relation-aware HOI representation $\hat{v}_p$ , our final module performs a coarse-level classification on human-object association and a fine-level classification for interaction recognition. Specifically, we use two MLPs $\mathcal{F}_P$ and $\mathcal{F}_B$ to predict the interaction scores $s_p \in \mathbb{R}^A$ and the relatedness score $s_b \in \mathbb{R}$ for each human-object pair:
99
+
100
+ $$
101
+ s_p = \mathcal{F}_P(\hat{v}_p); \quad s_b = \mathcal{F}_B(\hat{v}_p) \tag{2}
102
+ $$
103
+
104
+ To train the model under weak supervision (see Sec. 3.3), we further aggregate the pairwise interaction scores into image-level interaction scores. Assume we have $M$ pairs of human-object proposals for a given image, and denote the interaction scores for the $m$ -th pair as $s_p^m$ . We first concatenate all the interaction scores to compose a bag $S = [s_p^1; \ldots; s_p^M] \in \mathbb{R}^{M \times A}$ , then we maximize over all pairs to obtain the image-wise interaction scores: $\tilde{s}_p = \max_m S$ .
105
+
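+ In code, the bag construction and aggregation amount to stacking and a per-class max (toy shapes below):
+
+ ```python
+ import torch
+
+ M, A = 8, 117                        # pairs in the image, interaction classes (illustrative)
+ S = torch.randn(M, A)                # bag of pairwise interaction scores s_p^1, ..., s_p^M
+ s_tilde = S.max(dim=0).values        # (A,) image-wise interaction scores for the MIL loss
+ ```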
106
+ # 3.2.5 MODEL INFERENCE
107
+
108
+ During model inference, we do not use the local interaction scores $s_p$ directly. Instead, we normalize $S$ with a Softmax operation defined on all pairs: $\bar{S} = \text{Softmax}(S)$ , and then compute the normalized
109
+
110
+ pairwise interaction scores $e_p = \sigma(\tilde{s}_p) \cdot \bar{s}_p$ , where $\bar{s}_p$ is a row of $\bar{S}$ and $\sigma$ is the Sigmoid function. This has the effect of measuring the contribution of a given pair in case multiple pairs in an image share the same interaction.
111
+
112
+ The final interaction score $s_{h,o}^{a}$ for human-object pair $(\mathbf{x}_h,\mathbf{x}_o)$ combines multiple scores, including the global HOI scores $s_g$ , the normalized pairwise interaction scores $e_p$ , and the relatedness score $s_b$ . The overall HOI score $R_{h,o}^{a}$ is a combination of the interaction score and the object detection scores.
113
+
114
+ $$
115
+ s_{h,o}^{a} = \sigma\left(s_g^{a, c_o}\right) \cdot e_p^{a} \cdot \sigma\left(s_b\right); \quad R_{h,o}^{a} = \left(s_h \cdot s_o\right)^{\gamma} \cdot s_{h,o}^{a} \tag{3}
116
+ $$
117
+
118
+ where $s_g^{a,c_o}$ is the HOI score corresponding to $a$ -th interaction and $c_o$ -th object category in $s_g$ , $e_p^a$ is the score of $a$ -th interaction in $e_p$ , and $\gamma$ is a hyper-parameter to balance the scores (Zhang et al., 2021c; Li et al., 2019b).
119
+
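+ A sketch of this fusion, assuming scalar tensors for all inputs and the default $\gamma = 2.8$ from Sec. 4.2:
+
+ ```python
+ import torch
+
+ def fuse_scores(s_g_ac, e_p_a, s_b, s_h, s_o, gamma=2.8):
+     """Eq. (3): combine global, pairwise, relatedness, and detection scores."""
+     s_hoa = torch.sigmoid(s_g_ac) * e_p_a * torch.sigmoid(s_b)  # interaction score
+     return (s_h * s_o) ** gamma * s_hoa                         # overall HOI score
+
+ # Toy example with made-up values:
+ R = fuse_scores(torch.tensor(1.2), torch.tensor(0.4),
+                 torch.tensor(0.8), torch.tensor(0.9), torch.tensor(0.7))
+ ```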
120
+ # 3.3 LEARNING WITH WEAK SUPERVISION
121
+
122
+ To train our deep network in a weakly supervised setting, we use a multi-task loss defined on three different levels. Specifically, our overall loss function $\mathcal{L}$ consists of three terms: i) an image-wise HOI recognition loss $\mathcal{L}_g$ to adapt CLIP features to the task of human-object interaction detection; ii) a pairwise interaction classification loss $\mathcal{L}_p$ to guide the knowledge transfer towards fine-grained relation-aware representations; and iii) a self-taught relatedness classification loss $\mathcal{L}_b$ to prune non-interacting human-object combinations. Formally, the overall loss is written as:
123
+
124
+ $$
125
+ \mathcal{L} = \mathcal{L}_g + \mathcal{L}_p + \mathcal{L}_b \tag{4}
126
+ $$
127
+
128
+ Image-wise HOI recognition loss $\mathcal{L}_g$ : Given the HOI scores $s_g$ and ground-truth HOI categories $\mathcal{R}$ , $\mathcal{L}_g$ is a standard binary cross-entropy loss for multi-label classification: $\mathcal{L}_g = L_{BCE}(s_g, \mathcal{R})$ .
129
+
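+ For instance, with a multi-hot target built from $\mathcal{R}$ (the class indices below are hypothetical), the loss is the standard multi-label BCE:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ N = 600
+ s_g = torch.randn(N)            # global HOI scores from the recognition network
+ target = torch.zeros(N)
+ target[[12, 345]] = 1.0         # multi-hot vector from R = {r*} (hypothetical indices)
+ L_g = F.binary_cross_entropy_with_logits(s_g, target)
+ ```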
130
+ Pairwise interaction classification loss $\mathcal{L}_p$ : We adopt a MIL strategy that first aggregates the pairwise interaction scores and then supervises them with the image-level interaction labels $\mathcal{A} = \{a^*\}$ . Given the image-wise interaction scores $\tilde{s}_p$ , $\mathcal{L}_p$ is a standard binary cross-entropy loss for multi-label classification: $\mathcal{L}_p = L_{BCE}(\tilde{s}_p, \mathcal{A})$ .
131
+
132
+ Self-taught relatedness classification loss $\mathcal{L}_b$ : As human-object associations are not annotated, we devise a novel pseudo relatedness label generation mechanism for training a self-taught binary classifier that identifies valid human-object associations. Specifically, we observe that, after a short period of initial training without the self-taught classification loss, human-object pairs with confident interaction scores often correspond to genuine associations. Motivated by this, we use the interaction scores $s_p$ from the model under training to supervise the relatedness classification.
133
+
134
+ Concretely, we generate pseudo labels $\mathcal{B} = \{b_1,\dots,b_M\}$ for all human-object pairs in an image, where $b_{m}\in \{0,1\}$ indicates the relatedness of the $m$ -th combination. To this end, as illustrated in Fig. 2(b), we first construct a binary mask $Z\in \{0,1\}^{M \times A}$ for all interaction scores $S$ with respect to the ground-truth object categories $\mathcal{C} = \{c^*\}$ . For each human-object pair whose object label $c_{o}$ is included in $\mathcal{C}$ , we consider it a potential interactive combination and set the corresponding row of $Z$ to 1, and all other rows to 0. For the pairs in the latter rows, we also immediately set $b_{m} = 0$ . Then we generate pairwise scores $t^a\in \mathbb{R}^M$ for each ground-truth interaction $a^*$ by selecting the corresponding column of $S\odot Z$ . The pseudo label of the pair with the highest score is set to 1, i.e., $m_a = \arg \max_{m}t^a$ and $b_{m_a} = 1$ . We select only one positive pair<sup>3</sup> for each $a^*$ . Finally, $\mathcal{L}_b$ is defined as a binary cross-entropy loss: $\mathcal{L}_b = \sum_m L_{BCE}(s_b^m,b_m)$ , where $s_b^m$ is the relatedness score for the $m$ -th pair.
135
+
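+ A minimal sketch of this pseudo-label generation, with toy placeholders for the ground-truth sets:
+
+ ```python
+ import torch
+
+ def pseudo_relatedness_labels(S, pair_obj_labels, gt_objects, gt_interactions):
+     """Fig. 2(b) sketch: mask scores by GT object categories, then pick one positive per GT interaction.
+
+     S: (M, A) pairwise interaction scores; pair_obj_labels: length-M object categories.
+     """
+     M, A = S.shape
+     Z = torch.zeros(M, A)
+     valid = torch.tensor([c in gt_objects for c in pair_obj_labels])
+     Z[valid] = 1.0                           # keep rows whose object is a GT category
+
+     b = torch.zeros(M)                       # pseudo relatedness labels (masked rows stay 0)
+     masked = S * Z
+     for a in gt_interactions:                # one positive pair per GT interaction a*
+         b[masked[:, a].argmax()] = 1.0
+     return b
+ ```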
136
+ <sup>3</sup>We also explore top-K selection in Appendix F.
137
+
138
+ Table 1: mAP comparison on HICO-DET and V-COCO test sets. '-' denotes that results are not available. * stands for a method we re-evaluate with the correct evaluation protocol (see Appendix I for details), and † means our re-implementation. For V-COCO, all object detectors are pretrained on the MSCOCO dataset by default; for details about the evaluation metrics AP$_{role}^{S1}$ and AP$_{role}^{S2}$, c.f. Appendix H. IN-1K denotes ImageNet with 1000 classes.
139
+
140
+ <table><tr><td rowspan="2">Methods</td><td rowspan="2">Backbone</td><td rowspan="2">Detector</td><td colspan="3">HICO-DET (%)</td><td colspan="2">V-COCO (%)</td></tr><tr><td>Full</td><td>Rare</td><td>Non-Rare</td><td>AP<sub>role</sub><sup>S1</sup></td><td>AP<sub>role</sub><sup>S2</sup></td></tr><tr><td colspan="8">supervised</td></tr><tr><td>iCAN (Gao et al., 2018)</td><td>RN50 (IN-1K&amp;COCO)</td><td>FRCNN (COCO)</td><td>14.84</td><td>10.45</td><td>16.15</td><td>45.30</td><td>52.40</td></tr><tr><td>PMFNet (Wan et al., 2019)</td><td>RN50-FPN (IN-1K&amp;COCO)</td><td>FRCNN (COCO)</td><td>17.46</td><td>15.56</td><td>18.00</td><td>52.00</td><td>-</td></tr><tr><td>TIN (Li et al., 2019b)</td><td>RN50-FPN (IN-1K&amp;COCO)</td><td>FRCNN (COCO)</td><td>17.22</td><td>13.51</td><td>18.32</td><td>47.80</td><td>54.20</td></tr><tr><td>DJ-RN (Li et al., 2020a)</td><td>RN50 (IN-1K&amp;COCO)</td><td>FRCNN (COCO)</td><td>21.34</td><td>18.53</td><td>21.18</td><td>53.30</td><td>60.30</td></tr><tr><td>IDN (Li et al., 2020b)</td><td>RN50 (IN-1K&amp;COCO)</td><td>FRCNN (HICO-DET)</td><td>26.29</td><td>22.61</td><td>27.39</td><td>53.30</td><td>60.30</td></tr><tr><td>SCG (Zhang et al., 2021c)</td><td>RN50-FPN (IN-1K&amp;HICO-DET)</td><td>FRCNN (HICO-DET)</td><td>31.33</td><td>24.72</td><td>33.31</td><td>54.20</td><td>60.90</td></tr><tr><td>HOTR (Kim et al., 2021)</td><td>RN50+Transformer (IN-1K&amp;COCO)</td><td>DETR (HICO-DET)</td><td>25.10</td><td>17.34</td><td>27.42</td><td>55.20</td><td>64.40</td></tr><tr><td>QPIC (Tamura et al., 2021)</td><td>RN101+Transformer (IN-1K&amp;COCO)</td><td>DETR (COCO)</td><td>29.90</td><td>23.92</td><td>31.69</td><td>58.30</td><td>60.70</td></tr><tr><td>CATN (Dong et al., 2022)</td><td>RN50+Transformer (IN-1K&amp;HICO-DET&amp;COCO)</td><td>DETR (HICO-DET)</td><td>31.86</td><td>25.15</td><td>33.84</td><td>60.10</td><td>-</td></tr><tr><td>MSTR (Kim et al., 2022)</td><td>RN50+Transformer (IN-1K&amp;COCO)</td><td>DETR (HICO-DET)</td><td>31.17</td><td>25.31</td><td>33.92</td><td>62.00</td><td>65.20</td></tr><tr><td>DisTr (Zhou et al., 2022)</td><td>RN50+Transformer (IN-1K&amp;COCO)</td><td>DETR (HICO-DET)</td><td>31.75</td><td>27.45</td><td>33.03</td><td>66.20</td><td>68.50</td></tr><tr><td>SSRT (Iftekhar et al., 2022)</td><td>R101+Transformer (IN-1K&amp;COCO)</td><td>DETR (COCO)</td><td>31.34</td><td>24.31</td><td>33.32</td><td>65.00</td><td>67.10</td></tr><tr><td>GEN-VLKT (Liao et al., 2022)</td><td>RN101+Transformer (IN-1K&amp;HICO-DET)</td><td>DETR (HICO-DET)</td><td>34.95</td><td>31.18</td><td>36.08</td><td>63.58</td><td>65.93</td></tr><tr><td colspan="8">between supervised &amp; weakly-supervised setting, learning with image-level HOIs and box annotations</td></tr><tr><td>AlignFormer (Kilickaya &amp; Smeulders, 2021)</td><td>RN101+Transformer (IN-1K&amp;HICO-DET)</td><td>DETR (HICO-DET)</td><td>20.85</td><td>18.23</td><td>21.64</td><td>15.82</td><td>16.34</td></tr><tr><td colspan="8">weakly-supervised</td></tr><tr><td>Explanation-HOI* (Baldassarre et al., 2020)</td><td>ResNeXt101 (IN-1K&amp;COCO)</td><td>FRCNN (COCO)</td><td>10.63</td><td>8.71</td><td>11.20</td><td>-</td><td>-</td></tr><tr><td>MX-HOI (Kumaraswamy et al., 2021)</td><td>RN101 (IN-1K&amp;COCO)</td><td>FRCNN (COCO)</td><td>16.14</td><td>12.06</td><td>17.50</td><td>-</td><td>-</td></tr><tr><td>PPR-FCN† (Zhang et al., 2017)</td><td>RN50 (CLIP dataset)</td><td>FRCNN (COCO)</td><td>17.55</td><td>15.69</td><td>18.41</td><td>-</td><td>-</td></tr><tr><td>ours</td><td>RN50 (CLIP dataset)</td><td>FRCNN (COCO)</td><td>22.89</td><td>22.41</td><td>23.03</td><td>42.97</td><td>48.06</td></tr><tr><td>ours</td><td>RN101 (CLIP dataset)</td><td>FRCNN (COCO)</td><td>25.70</td><td>24.52</td><td>26.05</td><td>44.74</td><td>49.97</td></tr></table>
141
+
142
+ # 4 EXPERIMENTS
143
+
144
+ # 4.1 EXPERIMENTAL SETUP
145
+
146
+ Datasets: We benchmark our model on two public datasets: HICO-DET and V-COCO. HICO-DET consists of 47776 images (38118 for training and 9658 for test). It has $N = 600$ HOI categories, which are composed of $C = 80$ common objects (the same as MSCOCO (Lin et al., 2014)) and $A = 117$ unique interaction categories. V-COCO is a subset of MSCOCO, consisting of 2533 images for training, 2867 for validation and 4946 for test. It has 16199 human instances, each annotated with binary labels for $A = 26$ interaction categories.
147
+
148
+ Evaluation Metric: Following (Chao et al., 2015), we use mean average precision (mAP) to evaluate HOI detection performance. A human-object pair is considered positive when both the predicted human and object boxes have at least 0.5 IoU with their ground-truth boxes and the HOI class is classified correctly.
149
+
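+ For clarity, a small sketch of this matching criterion (boxes in (x1, y1, x2, y2) format; the dictionary keys are our own naming):
+
+ ```python
+ def iou(b1, b2):
+     """Intersection over union of two (x1, y1, x2, y2) boxes."""
+     x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
+     x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
+     inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
+     a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
+     a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
+     return inter / (a1 + a2 - inter + 1e-9)
+
+ def is_true_positive(pred, gt):
+     """Both boxes must overlap the GT by >= 0.5 IoU and the HOI class must match."""
+     return (iou(pred["human"], gt["human"]) >= 0.5
+             and iou(pred["object"], gt["object"]) >= 0.5
+             and pred["hoi"] == gt["hoi"])
+ ```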
150
+ # 4.2 IMPLEMENTATION DETAILS
151
+
152
+ We use an off-the-shelf Faster R-CNN (Ren et al., 2015) pretrained on MSCOCO to generate at most 100 object candidates for each image. For V-COCO, it is worth noting that we retrain the object detector after removing the MSCOCO images that overlap with V-COCO, to prevent information leakage. The backbone network is initialized with the visual encoder from the CLIP-RN101 model, and the feature dimension is $D = 1024$ .
153
+
154
+ For model learning, we set the detection score weight $\gamma = 2.8$ by default, following previous works (Zhang et al., 2021c; Li et al., 2019b), and optimize the entire network with AdamW using an initial learning rate of 1e-5 for the backbone parameters and 1e-4 for the others. We detach the parameters of the knowledge bank on the local branch for better model learning. We train for up to 60K iterations with a batch size of 24 on 4 NVIDIA 2080Ti GPUs, and decay the learning rate by a factor of 10 at the 12K and 24K iterations.
155
+
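+ A rough sketch of this optimization setup (the two modules are stand-ins, not the actual architecture):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ backbone = nn.Linear(8, 8)   # stand-in for the CLIP-initialized visual encoder
+ heads = nn.Linear(8, 8)      # stand-in for the remaining modules
+
+ optimizer = torch.optim.AdamW([
+     {"params": backbone.parameters(), "lr": 1e-5},
+     {"params": heads.parameters(), "lr": 1e-4},
+ ])
+ # Step the scheduler once per iteration to decay at the 12K and 24K iterations.
+ scheduler = torch.optim.lr_scheduler.MultiStepLR(
+     optimizer, milestones=[12000, 24000], gamma=0.1)
+ ```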
156
+ # 4.3 QUANTITATIVE RESULTS
157
+
158
+ For HICO-DET (Tab. 1), our approach outperforms the previous state of the art in the weakly-supervised setting by a clear margin, achieving 22.89 mAP with ResNet-50 and 25.70 mAP with ResNet-101 as backbone. For a fair comparison, we also re-implement PPR-FCN with the CLIP visual encoder. The results show that we still outperform PPR-FCN by a sizeable margin, which validates the superiority of our framework. Besides, we even perform comparably with HOTR and IDN despite an inferior experimental setting: HOTR adopts a more advanced transformer encoder-decoder architecture, and both methods are trained with strong supervision. Furthermore, the mAP gap between Rare (fewer than 10 training annotations) and Non-Rare HOI classes in our results is much smaller than in other methods, demonstrating the superior generalization capability of our HOI representation in addressing the long-tailed distribution issue. In detail, we achieve a 0.62 mAP gap with ResNet-50
159
+
160
+ Table 2: Ablation study on the HICO-DET dataset. "RN50-FPN (COCO)" denotes the backbone initialized with Faster R-CNN parameters pretrained on the MSCOCO dataset, while "CLIP RN50" stands for the backbone initialized with the CLIP visual encoder. Besides, we construct the knowledge bank $\mathcal{W}_T$ either with random initialization or by encoding HOI prompts with RoBERTa or the CLIP text transformer.
161
+
162
+ <table><tr><td rowspan="2">Methods</td><td colspan="2">Parameter initialization</td><td colspan="4">CLIP Knowledge</td><td colspan="3">mAP (%)</td></tr><tr><td>Backbone</td><td>knowledge bank</td><td>HOI recognition</td><td>KTN</td><td>score fusion</td><td>SRC</td><td>Full</td><td>Rare</td><td>Non-Rare</td></tr><tr><td>baseline</td><td>CLIP RN50</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>19.52</td><td>16.58</td><td>20.40</td></tr><tr><td>Exp 1</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>-</td><td>-</td><td>-</td><td>20.31</td><td>18.34</td><td>20.90</td></tr><tr><td>Exp 2</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓ (freeze WT)</td><td>-</td><td>-</td><td>-</td><td>20.09</td><td>18.23</td><td>20.64</td></tr><tr><td>Exp 3</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>✓</td><td>-</td><td>-</td><td>20.86</td><td>18.40</td><td>21.60</td></tr><tr><td>Exp 4</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>✓</td><td>✓</td><td>-</td><td>22.40</td><td>20.70</td><td>22.90</td></tr><tr><td>Exp 5</td><td>CLIP RN50</td><td>-</td><td>-</td><td>-</td><td>-</td><td>✓</td><td>19.88</td><td>17.45</td><td>20.61</td></tr><tr><td>Exp 6</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>-</td><td>-</td><td>✓</td><td>20.75</td><td>19.38</td><td>21.16</td></tr><tr><td>Exp 7</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>✓</td><td>-</td><td>✓</td><td>21.53</td><td>20.05</td><td>21.97</td></tr><tr><td>ours</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>22.89</td><td>22.41</td><td>23.03</td></tr><tr><td>Exp 8</td><td>RN50-FPN (COCO)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>19.44</td><td>16.20</td><td>20.41</td></tr><tr><td>Exp 9</td><td>RN50-FPN (COCO)</td><td>random</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>19.61</td><td>15.57</td><td>20.82</td></tr><tr><td>Exp 10</td><td>RN50-FPN (COCO)</td><td>RoBERTa</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>20.45</td><td>16.46</td><td>21.65</td></tr></table>
163
+
164
+ and 1.53 with ResNet-101 backbone, which is much smaller than AlignFormer (3.14) and PPR-FCN (2.64), and supervised methods SSRT (9.01) and GEN-VLKT (4.9).
165
+
166
+ For the V-COCO dataset, we report $\mathrm{AP}_{role}$ in both scenario 1 and scenario 2 for a complete comparison: 42.97 / 48.06 $\mathrm{AP}_{role}$ with ResNet-50 and 44.74 / 49.97 $\mathrm{AP}_{role}$ with ResNet-101 as backbone. As shown in Tab. 1, our model achieves a significant improvement over AlignFormer, and is even comparable with the supervised methods TIN and iCAN.
167
+
168
+ # 4.4 ABLATION STUDY
169
+
170
+ In this section, we validate the effectiveness of each component with detailed ablation studies on the HICO-DET dataset. We use ResNet-50 as the backbone network to reduce experimental costs.
171
+
172
+ **Baseline:** The baseline adopts the visual encoder from CLIP-RN50 to generate the vanilla HOI representation $v_{p}$ , which is directly used to predict the interaction scores $s_{p}$ . Only pairwise interaction classification loss $\mathcal{L}_{p}$ is used for model learning.
173
+
174
+ HOI recognition: We augment the baseline with a HOI recognition network and observe that the Full mAP improves from 19.52 to 20.31, as reported in Exp 1 of Tab. 2. This suggests that the learnable knowledge bank $\mathcal{W}_T$ serves as a powerful classifier for image-level HOI recognition and updates the visual encoder towards a better HOI representation. We visualize the learned parameters of the knowledge bank in Appendix D to demonstrate its effectiveness. Furthermore, as in Exp 2, the performance slightly decreases from 20.31 to 20.09 when we freeze the knowledge bank during training, indicating that jointly learning the visual features and the knowledge bank is more appropriate for HOI detection.
175
+
176
+ Knowledge Transfer Network (KTN): KTN explicitly transfers the CLIP meta-knowledge to pairwise HOI features. As a result, it contributes a 0.55 Full mAP improvement (Exp 3 vs. Exp 1), and most of the performance gains come from Non-Rare classes. This result shows that KTN is capable of injecting discriminative features from the relational knowledge bank into our HOI representation. We also study the effectiveness of the attention mechanism of KTN in Appendix E.
177
+
178
+ Score fusion: In Tab. 2, we largely improve the Full mAP from 20.86 (Exp 3) to 22.40 (Exp 4) by fusing the global HOI scores $s_g$ into the pairwise interaction scores $s_p$ . As the HOI recognition network seamlessly inherits the visual-linguistic features from CLIP and directly adopts image labels as supervision, the global interaction scores are quite accurate and largely enhance the pairwise scores, demonstrating strong capabilities in coping with long-tailed and fine-grained HOI recognition.
179
+
180
+ Self-taught Relatedness Classification (SRC): Self-taught classification aims to identify the relatedness between humans and objects. The improvement from Exp 4 to ours shows the effectiveness of our self-taught strategy, which is capable of identifying irrelevant human-object pairs and suppressing their interaction scores during inference.
181
+
182
+ Combining KTN & SRC: The ablation results of Exp 5-7 in Tab. 2 show the KTN and SRC are able to facilitate each other. In detail, the SRC obtains 0.49 Full mAP improvement when the KTN is introduced (ours v.s. Exp 4), which is only 0.36 without KTN (Exp 5 v.s. baseline). Similarly,
183
+
184
+ ![](images/3c4dc5eb76fc4028aefbf775852ee762e247e291481c548a837a8639ba17ecdc.jpg)
185
+ (a)
186
+ wash_motorcycle
187
+ ours: 0.18, 0.355
188
+ baseline: 0.0189
189
+
190
+ ![](images/64d18385f4e7e147826a9c9ded9896f1574f4ad51300b8e50043ecca7c12edd0.jpg)
191
+ hold_horse: 0.062, 0.397, 0.998  ride_horse: 0.405, 0.966, 0.998
192
+
193
+ ![](images/9df9bcc5f1461c0846c4218ef4173e3faefd7d6edad70696dd7e72628870ebc0.jpg)
194
+ (c)
195
+ sit_on_motorcycle: 0.515, 0.033, 0.950
196
+
197
+ ![](images/3d058fc7e928eaacd2f473320fb34d4c09870479628f97693de6ed7388ecbfea.jpg)
198
+ (d)
199
+ sit_at_dining_table: 0.006, 0.993, 0.079
200
+ sit_at_dining_table: 0.232, 0.993, 0.994
201
+
202
+ ![](images/31a3d121ff52f3113f31974f177930970ca7ec73ecd4e03889b2f475115f9c2c.jpg)
203
+ paint_fire_hydrant:
204
+ ours: 0.203, 0.505, 0.955
205
+ baseline: 0.0027
206
+ Figure 3: Visualization of HOI detection results on HICO-DET test set. Red scores denote the negative HOI predictions. We mainly demonstrate the model's capabilities on four aspects: (a) coping with imbalanced HOI distribution; (b) distinguishing subtle differences among interaction types; (c) suppressing background HOI classes, and (d) pruning irrelevant human-object associations. The numbers reported are normalized pairwise interaction score, global HOI score and relatedness score.
207
+
208
+ ![](images/192708158f64b8c7105ece6aeedbf1e1b24fd0ec539a1403e13410348bb7f329.jpg)
209
+ repair truck: 0.23, 0.055, 0.979
210
+ inspect truck: 0.48, 0.138, 0.979
211
+
212
+ ![](images/5579212847887188404267a0686fd5fb59b35064bcc9bf3ca9fa886fb6aa1cfd.jpg)
213
+ stand_on_skateboard: 0.009, 0.001, 0.98
214
+
215
+ ![](images/acddb4ab24f2368acd17d4d546335723573c1d65e5ad4a799246606a28382d6a.jpg)
216
+ hold_kite: 0.039, 0.892, 0.238  hold_kite: 0.478, 0.892, 0.995
217
+
218
+ KTN contributes a 0.78 Full mAP improvement with SRC (Exp 7 vs. Exp 6), but only 0.55 without SRC (Exp 3 vs. Exp 1).
219
+
220
+ Parameter initialization: Our visual encoder and knowledge bank are both initialized from CLIP. We also explore different parameter initialization strategies in Exp 8-10. Specifically, we initialize the visual encoder with a ResNet50-FPN pretrained on the COCO detection task for the baseline (Exp 8), and the knowledge bank with random parameters (Exp 9) or with embeddings of HOI labels from a RoBERTa model (Exp 10) for the final model. We observe severe drops with all these initialization methods compared with ours, demonstrating the effectiveness and generalization ability of the CLIP model. It is worth noting that the mAP on Rare classes decreases from 16.20 in Exp 8 to 15.57 in Exp 9, which suggests that a randomly initialized knowledge bank even aggravates the imbalance issue in the final model.
221
+
222
+ # 4.5 QUALITATIVE RESULTS
223
+
224
+ We show some qualitative results of our method in Fig. 3. For each HOI prediction, we report (i) the normalized pairwise interaction score, (ii) the global HOI score and (iii) the relatedness score for ours, and only the pairwise interaction score for the baseline. In Fig. 3(a), our interaction scores are more confident than the baseline's on Rare HOI classes, demonstrating the generalization ability of our CLIP-guided HOI representation. Besides, when incorporating the relational knowledge bank into the pairwise HOI representation, our method is capable of distinguishing the subtle differences among similar HOIs in Fig. 3(b) (e.g., repair_truck: 0.23 vs. inspect_truck: 0.48 in the bottom figure). Moreover, in Fig. 3(c), the global branch suppresses background HOIs by predicting low global scores for them (e.g., the global HOI score is 0.033 for sit_on_motorcycle while the ground truth is sit_on_bicycle). Finally, in Fig. 3(d), our self-taught relatedness classification strategy shows a strong capability of recognizing ambiguous human-object associations (e.g., 0.079 vs. 0.994 in the upper figure).
225
+
226
+ # 5 CONCLUSION
227
+
228
+ In this paper, we propose a bi-level knowledge integration strategy that incorporates prior knowledge from CLIP for weakly-supervised HOI detection. Specifically, we exploit CLIP textual embeddings of HOI labels as a relational knowledge bank, which is adopted to enhance the HOI representation with an image-wise HOI recognition network and a pairwise knowledge transfer network. We further propose a self-taught binary pairwise relatedness classification loss to overcome ambiguous human-object associations. Finally, our approach achieves a new state of the art on both the HICO-DET and V-COCO benchmarks under the weakly-supervised setting.
229
+
230
+ # ACKNOWLEDGEMENT
231
+
232
+ We acknowledge funding from the Flemish Government under the Onderzoeksprogramma Artificiele Intelligentie (AI) Vlaanderen programme, the Shanghai Science and Technology Program 21010502700, and the Shanghai Frontiers Science Center of Human-centered Artificial Intelligence.
233
+
234
+ # ETHICS STATEMENT
235
+
236
+ Hereby, we assure that our study is original work that has not been previously published and is not currently under consideration for publication elsewhere. Our study does not involve the ethics risks mentioned in the author guidelines.
237
+
238
+ # REPRODUCIBILITY STATEMENT
239
+
240
+ We use publicly available benchmarks, HICO-DET and V-COCO, to validate our method. Code is available at https://github.com/bobwan1995/Weakly-HOI.
241
+
242
+ # REFERENCES
243
+
244
+ Federico Baldassarre, Kevin Smith, Josephine Sullivan, and Hossein Azizpour. Explanation-based weakly-supervised learning of visual relations with graph networks. In ECCV, 2020.
245
+ Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020.
246
+ Yu-Wei Chao, Zhan Wang, Yugeng He, Jiaxuan Wang, and Jia Deng. HICO: A benchmark for recognizing human-object interactions in images. In ICCV, 2015.
247
+ Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. Learning to detect human-object interactions. In WACV, 2018.
248
+ Leizhen Dong, Zhimin Li, Kunlun Xu, Zhijun Zhang, Luxin Yan, Sheng Zhong, and Xu Zou. Category-aware transformer network for better human-object interaction detection. arXiv preprint arXiv:2204.04911, 2022.
249
+ Yu Du, Fangyun Wei, Zihe Zhang, Miaojing Shi, Yue Gao, and Guoqi Li. Learning to prompt for open-vocabulary object detection with vision-language model. arXiv preprint arXiv:2203.14940, 2022.
250
+ Chen Gao, Yuliang Zou, and Jia-Bin Huang. ican: Instance-centric attention network for human-object interaction detection. In BMVC, 2018.
251
+ Chen Gao, Jiarui Xu, Yuliang Zou, and Jia-Bin Huang. Drg: Dual relation graph for human-object interaction detection. In ECCV, 2020.
252
+ Golnaz Ghiasi, Xiuye Gu, Yin Cui, and Tsung-Yi Lin. Open-vocabulary image segmentation. arXiv preprint arXiv:2112.12143, 2021.
253
+ Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. Open-vocabulary object detection via vision and language knowledge distillation. In ICLR, 2021.
254
+ Saurabh Gupta and Jitendra Malik. Visual semantic role labeling. arXiv preprint arXiv:1505.04474, 2015.
255
+ Tanmay Gupta, Alexander Schwing, and Derek Hoiem. No-frills human-object interaction detection: Factorization, layout encodings, and training techniques. In ICCV, 2019.
256
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
257
+ Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.
258
+
259
+ Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 961-970, 2015.
260
+ ASM Iftekhar, Hao Chen, Kaustav Kundu, Xinyu Li, Joseph Tighe, and Davide Modolo. What to look at and where: Semantic and spatial refined transformer for detecting human-object interactions. arXiv preprint arXiv:2204.00746, 2022.
261
+ Maximilian Ilse, Jakub Tomczak, and Max Welling. Attention-based deep multiple instance learning. In ICML, pp. 2127-2136, 2018.
262
+ Mert Kilickaya and Arnold Smeulders. Human-object interaction detection via weak supervision. arXiv preprint arXiv:2112.00492, 2021.
263
+ Bumsoo Kim, Junhyun Lee, Jaewoo Kang, Eun-Sol Kim, and Hyunwoo J. Kim. Hotr: End-to-end human-object interaction detection with transformers. In CVPR, 2021.
264
+ Bumsoo Kim, Jonghwan Mun, Kyoung-Woon On, Minchul Shin, Junhyun Lee, and Eun-Sol Kim. Mstr: Multi-scale transformer for end-to-end human-object interaction detection. arXiv preprint arXiv:2203.14709, 2022.
265
+ Suresh Kirthi Kumaraswamy, Miaojing Shi, and Ewa Kijak. Detecting human-object interaction with mixed supervision. In WACV, 2021.
266
+ Yong-Lu Li, Siyuan Zhou, Xijie Huang, Liang Xu, Ze Ma, Hao-Shu Fang, Yan-Feng Wang, and Cewu Lu. Transferable interactiveness prior for human-object interaction detection. In CVPR, 2019a.
267
+ Yong-Lu Li, Siyuan Zhou, Xijie Huang, Liang Xu, Ze Ma, Hao-Shu Fang, Yanfeng Wang, and Cewu Lu. Transferable interactiveness knowledge for human-object interaction detection. In CVPR, 2019b.
268
+ Yong-Lu Li, Xinpeng Liu, Han Lu, Shiyi Wang, Junqi Liu, Jiefeng Li, and Cewu Lu. Detailed 2d-3d joint representation for human-object interaction. In CVPR, 2020a.
269
+ Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, and Cewu Lu. Hoi analysis: Integrating and decomposing human-object interaction. In NeurIPS, 2020b.
270
+ Yue Liao, Aixi Zhang, Miao Lu, Yongliang Wang, Xiaobo Li, and Si Liu. Gen-vlkt: Simplify association and enhance interaction understanding for hoi detection. arXiv preprint arXiv:2203.13954, 2022.
271
+ Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
272
+ Wen Liu, Weixin Luo, Dongze Lian, and Shenghua Gao. Future frame prediction for anomaly detection - a new baseline. In CVPR, 2018.
273
+ Hitoshi Nishimura, Satoshi Komorita, Yasutomo Kawanishi, and Hiroshi Murase. Sdof-tracker: Fast and accurate multiple human tracking by skipped-detection and optical-flow. arXiv preprint arXiv:2106.14259, 2021.
274
+ Guansong Pang, Cheng Yan, Chunhua Shen, Anton van den Hengel, and Xiao Bai. Self-trained deep ordinal regression for end-to-end video anomaly detection. In CVPR, 2020.
275
+ Alessandro Prest, Cordelia Schmid, and Vittorio Ferrari. Weakly supervised learning of interactions between humans and objects. IEEE TPAMI, 2011.
276
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, 2021a.
277
+
278
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In ICML, 2021b.
279
+ Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015.
280
+ Masato Tamura, Hiroki Ohashi, and Tomoaki Yoshinaga. Qpic: Query-based pairwise human-object interaction detection with image-wide contextual information. In CVPR, 2021.
281
+ Tina, Anmol Kumar Sharma, Siddharth Tomar, and Kapil Gupta. Various approaches of human activity recognition: A review. In International Conference on Computing Methodologies and Communication (ICCMC), 2021.
282
+ Oytun Ulutan, A S M Iftekhar, and B. S. Manjunath. Vsgnet: Spatial attention network for detecting human object interactions using graph convolutions. In CVPR, 2020.
283
+ Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. JMLR, 2008. URL http://jmlr.org/papers/v9/vandermaaten08a.html.
284
+ Mrabti Wafae, Baibai Kaoutar, Bellach Benaissa, Oulad Haj Thami Rachid, and Tairi Hamid. Human motion tracking: A comparative study. Procedia Computer Science, 148:145-153, 2019.
285
+ Bo Wan, Desen Zhou, Yongfei Liu, Rongjie Li, and Xuming He. Pose-aware multi-level feature network for human object interaction detection. In ICCV, 2019.
286
+ Aixi Zhang, Yue Liao, Si Liu, Miao Lu, Yongliang Wang, Chen Gao, and Xiaobo Li. Mining the benefits of two-stage and one-stage hoi detection. In NeurIPS, 2021a.
287
+ Frederic Z Zhang, Dylan Campbell, and Stephen Gould. Efficient two-stage detection of human-object interactions with a novel unary-pairwise transformer. arXiv preprint arXiv:2112.01838, 2021b.
288
+ Frederic Z Zhang, Dylan Campbell, and Stephen Gould. Spatially conditioned graphs for detecting human-object interactions. In ICCV, 2021c.
289
+ Hanwang Zhang, Zawlin Kyaw, Jinyang Yu, and Shih-Fu Chang. Ppr-fcn: Weakly supervised visual relation detection via parallel pairwise r-fcn. In ICCV, 2017.
290
+ Desen Zhou, Zhichao Liu, Jian Wang, Leshan Wang, Tao Hu, Errui Ding, and Jingdong Wang. Human-object interaction detection via disentangled transformer. arXiv preprint arXiv:2204.09290, 2022.
291
+ Penghao Zhou and Mingmin Chi. Relation parsing neural network for human-object interaction detection. In ICCV, 2019.
292
+ Tianfei Zhou, Wenguan Wang, Siyuan Qi, Haibin Ling, and Jianbing Shen. Cascaded human-object interaction recognition. In CVPR, 2020.
293
+
294
+ # APPENDIX
295
+
296
+ In this appendix, we first describe the spatial feature generation and then provide additional experimental results on different CLIP knowledge integration strategies for weakly-supervised HOI detection. For Explanation-HOI (Baldassarre et al., 2020), we further clarify the difference between their mAP evaluation protocol and the standard one. Finally, we discuss the limitations and potential negative societal impacts of our method, as well as the error bars of our results.
297
+
298
+ # A THE ADVANTAGE OF OUR HOI REPRESENTATION
299
+
300
+ To verify the improvement obtained with our CLIP-based HOI representation, we visualize the HOI representation $\hat{v}_p$ in feature space with t-SNE (van der Maaten & Hinton, 2008). For clarity, we randomly sample 80 HOI categories and collect 50 samples for each category. For comparison, we also show the object-based HOI representation derived from 'Exp 9' in Tab. 2 (i.e., the model without CLIP knowledge, using a randomly initialized knowledge bank). As shown in Fig. 4, we observe that the CLIP-based HOI representations of different HOI categories are diverse and well separated in feature space, which benefits HOI detection. In contrast, the object-based representations are not well separated (see the red box region in Fig. 4(b)). Besides, the experimental results in the ablation study (ours vs. 'Exp 9') also validate the advantage of the CLIP-based HOI representation, improving Full mAP from 19.61 to 22.89.
301
+
302
+ # B ABLATION ON CLIP KNOWLEDGE INTEGRATION
303
+
304
+ To further demonstrate the superiority of our CLIP knowledge integration strategy, we study several proven techniques for CLIP knowledge transfer in Tab. 3. In Abl 1, for each human-object pair, we directly infer the HOI scores with CLIP by computing the cross-modal similarities between the visual feature of their union region and the HOI prompts. Without introducing any HOI priors, the promising results indicate the powerful generalization ability of CLIP and motivate our design of incorporating CLIP knowledge into weakly-supervised HOI detection. In Abl 2, we duplicate the experimental setting and results from Exp 8 in Tab. 2 of the main paper. It is a simplified baseline model whose visual encoder is initialized with a ResNet50-FPN pretrained on the COCO detection task. We then introduce three different CLIP knowledge transfer strategies (Abl 3-4 and ours) based on Abl 2.
305
+
306
+ In Abl 3, we directly enhance the baseline scores of Abl 2 with the CLIP similarity scores of Abl 1 at the inference stage. Without bells and whistles, we obtain a 1.12 gain in Full mAP.
307
+
308
+ Furthermore, in Abl 4, we adopt a knowledge transfer strategy similar to GEN-VLKT (Liao et al., 2022), where we initialize the HOI classifier $\mathcal{F}_P$ with HOI prompt embeddings and regularize the global HOI representation with the CLIP image feature $v_{g}$ . In detail, we first compute the global HOI representation $v_{mean}$ by mean pooling over all pairwise HOI representations, i.e., $v_{mean} = MeanPool(\{v_p^m\}_{m=1}^M)$ . Here $v_p^m$ is the holistic HOI representation (c.f. Sec. 3.2.3 in the main paper) for the $m$ -th human-object pair. We then add an $L_2$ loss $\mathcal{L}_{reg}$ to transfer the knowledge from CLIP to the HOI representations: $\mathcal{L}_{reg} = L_2(v_{mean}, v_g)$ . The performance even decreases slightly from 19.44 to 19.39, which might be caused by the incompatibility of parameters between the backbone network (ResNet50-FPN pretrained on COCO) and $\mathcal{F}_P$ (HOI prompt embeddings from CLIP). When directly applying the knowledge transfer strategy of GEN-VLKT to a weakly-supervised setting, it is difficult to map the unmatched HOI representation and classification weights to a joint space, as the supervisory signals are noisy.
309
+
310
+ Finally, our approach achieves the best performance compared with other strategies, demonstrating the effectiveness of our bi-level knowledge integration strategy.
311
+
312
+ # C SPATIAL FEATURE GENERATION
313
+
314
+ Following (Zhang et al., 2021c), we generate the spatial feature $v_{sp} \in \mathbb{R}^{D}$ for each pair of human-object proposals $(\mathbf{x}_h, \mathbf{x}_o)$ . Specifically, we first compute the bounding boxes information for $\mathbf{x}_h$ and $\mathbf{x}_o$ separately, including their center coordinates, widths, heights, aspect ratios and areas, all
315
+
316
+ ![](images/ec0d2877c25887ff4609d620312019b28f059c4f6a7a3f591043ace767320b71.jpg)
317
+ Figure 4: The t-SNE visualization of CLIP-based HOI representation and object-based HOI representation.
318
+
319
+ ![](images/9c33688555a1473b9211dc245403d73ceed296cf44e1d2b567f377ee0add3094.jpg)
320
+
321
+ Table 3: Ablation of different CLIP knowledge integration strategies on HICO-DET dataset.
322
+
323
+ <table><tr><td rowspan="2">Methods</td><td rowspan="2">Experimental setting</td><td colspan="3">mAP (%)</td></tr><tr><td>Full</td><td>Rare</td><td>Non-Rare</td></tr><tr><td>Abl 1</td><td>CLIP inference score</td><td>11.84</td><td>13.72</td><td>11.27</td></tr><tr><td>Abl 2</td><td>RN50-FPN (COCO) + FP random init.</td><td>19.44</td><td>16.20</td><td>20.41</td></tr><tr><td>Abl 3</td><td>RN50-FPN (COCO) + FP random init. + CLIP inference score</td><td>20.56</td><td>18.19</td><td>21.27</td></tr><tr><td>Abl 4</td><td>RN50-FPN (COCO) + FP HOI prompt init. + CLIP visual regularization</td><td>19.39</td><td>15.12</td><td>20.66</td></tr><tr><td>ours</td><td>CLIP RN50 + HOI recognition + KTN + self-taught relatedness cls.</td><td>22.89</td><td>22.41</td><td>23.03</td></tr></table>
324
+
325
+ normalized by the corresponding dimension of the image. We also encode their relative spatial relations by estimating the intersection over union (IoU), the ratio of the areas of $\mathbf{x}_h$ and $\mathbf{x}_o$ , a directional encoding, and the distance between the center coordinates of $\mathbf{x}_h$ and $\mathbf{x}_o$ . We concatenate all the above-mentioned preliminary spatial cues and obtain a spatial encoding $\mathbf{p} \in \mathbb{R}_{+}^{18}$ . To encode the second- and higher-order combinations of different terms, the spatial encoding is concatenated with its logarithm and then embedded to $v_{sp}$ : $v_{sp} = \mathcal{F}_{sp}([p; \log(p + \epsilon)])$ , where $\epsilon > 0$ is a small constant to guarantee numerical stability and $\mathcal{F}_{sp}$ is a multi-layer fully connected network.
326
+
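+ A rough, partial sketch of this encoding under the stated normalization and log-concatenation; the exact composition of the 18 terms follows the text above, and the IoU and directional terms are omitted here for brevity:
+
+ ```python
+ import math
+
+ def spatial_feature(bh, bo, W, H, eps=1e-6):
+     """Illustrative sketch only: per-box stats, two relative terms, log-concatenation."""
+     def box_stats(b):
+         w, h = (b[2] - b[0]) / W, (b[3] - b[1]) / H
+         cx, cy = (b[0] + b[2]) / (2 * W), (b[1] + b[3]) / (2 * H)
+         return [cx, cy, w, h, w / (h + eps), w * h]
+
+     ph, po = box_stats(bh), box_stats(bo)
+     dist = math.hypot(ph[0] - po[0], ph[1] - po[1])   # center distance
+     ratio = ph[5] / (po[5] + eps)                     # area ratio
+     p = ph + po + [dist, ratio]                       # plus IoU / direction in the full version
+     return p + [math.log(x + eps) for x in p]         # concatenate logarithm, then MLP-embed
+ ```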
327
+ # D VISUALIZATION OF HOI KNOWLEDGE BANK $\mathcal{W}_T$
328
+
329
+ To further understand $\mathcal{W}_T$ , we visualize the knowledge bank features initialized by CLIP (Fig. 5(a)) and learned from scratch (Fig. 5(b)) in feature space with t-SNE. It is worth noting that the knowledge bank learned from scratch is derived from 'Exp 9' in Tab. 2. As shown in Fig. 5, we observe that the knowledge features of HOI classes initialized with CLIP are more discriminative than those learned from random initialization, and show better clustering (e.g., the HOI classes in the red box regions).
330
+
331
+ # E DIFFERENT DESIGNS OF KTN
332
+
333
+ To further validate the effectiveness of our attention mechanism in KTN, we compare our design with some variants in Tab. 4. First, we enhance the pairwise representation directly with features encoded from the union region rather than with the external knowledge bank. As a result, the mAP slightly decreases from 20.75 (Exp 6) to 20.69 (Exp 11). The potential reason is that the union region contains more ambiguous visual relations and background clutter, which are difficult to learn in the weak setting. Besides, we also explore different normalization strategies in KTN. The results in Tab. 4 show that the Softmax operation (ours) performs better than uniform attention (Exp 12) or the Sigmoid operation (Exp 13), indicating that our attention mechanism is non-trivial and more effective at aggregating relational cues from the HOI knowledge bank.
334
+
335
+ ![](images/2e8f35e8efe169ddf8e8f37e0ef00a2cb7d7406a4aefb85cf71fe3778f5da5f5.jpg)
336
+ Figure 5: The t-SNE visualization of knowledge bank $\mathcal{W}_T$ . (a) is the knowledge bank distribution in feature space based on our CLIP-based HOI representation while (b) is the knowledge bank learned from scratch (the model in Tab.2-Exp 9) based on object-based HOI representation.
337
+
338
+ Table 4: Different network design of Knowledge Transfer Network (KTN).
339
+
340
+ <table><tr><td rowspan="2">Methods</td><td colspan="2">Parameter initialization</td><td colspan="4">CLIP Knowledge</td><td colspan="3">mAP (%)</td></tr><tr><td>Backbone</td><td>knowledge bank</td><td>HOI recognition</td><td>KTN</td><td>score fusion</td><td>SRC</td><td>Full</td><td>Rare</td><td>Non-Rare</td></tr><tr><td>Exp 11</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>✓ (union)</td><td>-</td><td>✓</td><td>20.69</td><td>19.55</td><td>21.04</td></tr><tr><td>Exp 12</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>✓ (uniform)</td><td>-</td><td>✓</td><td>21.14</td><td>19.82</td><td>21.53</td></tr><tr><td>Exp 13</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>✓ (sigmoid)</td><td>-</td><td>✓</td><td>21.28</td><td>19.27</td><td>21.88</td></tr><tr><td>ours</td><td>CLIP RN50</td><td>CLIP Text</td><td>✓</td><td>✓</td><td>-</td><td>✓</td><td>21.53</td><td>20.05</td><td>21.97</td></tr></table>
341
+
342
+ # F TOP-K POSITIVE PAIR SELECTION FOR SRC
343
+
344
+ In this section we show the results of selecting the top-2 and top-5 pairs as positive in Tab. 5. We notice a small performance drop, likely caused by mislabeling more negative pairs as positive, which introduces more noise into model learning.
345
+
346
+ # G THE PROMPT GENERATION FOR V-COCO
347
+
348
+ For the V-COCO dataset, each action has two different semantic roles ('instrument' and 'object') for different objects, like 'cut cake' and 'cut with knife'. We use two different prompt templates to convert a HOI label into a language sentence. For the former, we take the template 'a person {verb} a/an {object}', and for the latter we use 'a person {verb} with a/an {object}'.
349
+
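+ A tiny sketch of this role-dependent template choice (verb inflection is illustrative):
+
+ ```python
+ def vcoco_prompt(verb: str, obj: str, role: str) -> str:
+     """Pick one of the two templates depending on the object's semantic role."""
+     if role == "instrument":                  # e.g., 'cut with knife'
+         return f"a person {verb} with a {obj}"
+     return f"a person {verb} a {obj}"         # e.g., 'cut cake'
+
+ print(vcoco_prompt("cutting", "knife", "instrument"))
+ print(vcoco_prompt("cutting", "cake", "object"))
+ ```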
350
+ # H EVALUATION METRIC FOR V-COCO
351
+
352
+ The V-COCO dataset has two scenarios for role AP evaluation. In Tab. 1, $\mathrm{AP}_{role}^{S1}$ and $\mathrm{AP}_{role}^{S2}$ refer to 'Average Precision in scenarios 1 and 2'. V-COCO has two different annotation types for HOIs: the first is a
353
+
354
+ Table 5: Ablation of top-K positive pair selection for SRC on HICO-DET dataset.
355
+
356
+ <table><tr><td rowspan="2">Methods</td><td colspan="3">mAP (%)</td></tr><tr><td>Full</td><td>Rare</td><td>Non-Rare</td></tr><tr><td>Top-5</td><td>22.45</td><td>21.61</td><td>22.70</td></tr><tr><td>Top-2</td><td>22.49</td><td>21.83</td><td>22.69</td></tr><tr><td>ours (Top-1)</td><td>22.89</td><td>22.41</td><td>23.03</td></tr></table>
357
+
358
+ ![](images/5ea9aca00229807cfcbed7f05dd2e53f142325eae021f722c0c643c640bf02ee.jpg)
359
+ (a) Evaluation protocol in Explanation-HOI
360
+
361
+ ![](images/b1ffa02461492b359ad767aae489aa3222ac27a003e81f6fd5b1f59f8392a3fe.jpg)
362
+ (b) The correct evaluation protocol
363
+ Figure 6: The screenshot of the evaluation code in Explanation-HOI. (a) is the original code while (b) is the correct one based on the standard evaluation code. We use red rectangle boxes to highlight the most important differences
364
+
365
+ full label of (human location, interaction type, object location, object type), and the second misses the target-object (also denoted as 'role' in the original paper (Gupta & Malik, 2015)) annotations, so the label only includes (human location, interaction type). For the second case, there are two different evaluation protocols (scenarios) for counting a prediction as correct<sup>4</sup>: in scenario 1, the interaction must be correct, the overlap between the human boxes must be $> 0.5$ , and the corresponding role prediction must be empty, which is more restrictive; in scenario 2, only a correct interaction and a human-box overlap $> 0.5$ are required.
366
+
367
+ # I EVALUATION OF EXPLANATION-HOI
368
+
369
+ Explanation-HOI (Baldassarre et al., 2020) misinterprets the mAP evaluation protocol. As shown in Fig. 6(a) L200-L205, Explanation-HOI only admits into the evaluation those predicted HOIs that share labels with the ground-truth HOIs. Thus, many false-positive HOI predictions are ignored when calculating mAP, leading to an untrustworthy, inflated mAP score (as reported in their original paper). In Fig. 6(b) L204-L208, we evaluate all predicted HOIs, consistent with the standard evaluation protocol proposed in HICO-DET (Chao et al., 2015). The corrected results are reported in Tab. 1 of the main paper.
370
+
371
+ # J LIMITATIONS
372
+
373
+ As described in Sec. 3.1, we adopt an external object detector to generate human-object proposals and then recognize their interactions. Consequently, our method faces two limitations arising from erroneous object detection results. First, positive human-object pairs cannot be recalled if the corresponding human or object proposals are not detected. Second, the proposals are kept fixed during learning, which can lead to inaccurate localizations and object categories.
374
+
375
+ # K RISK OF USING CLIP
376
+
377
+ For all methods that adopt CLIP in their model design, there is a potential risk of data leakage, as CLIP has seen a large amount of data during pretraining. For the HOI detection task, we cannot access the CLIP dataset and do not know its exact overlap with the HOI benchmarks (i.e., HICO-DET and V-COCO). We therefore carefully read Sec. 5 (Data Overlap Analysis) of the CLIP paper (Radford et al., 2021b), which analyzes the overlap between the CLIP dataset and 35 popular datasets (HICO-DET and V-COCO are not included). It shows that the overlap is small (median $2.2\%$ , average $3.2\%$ ) and the influence is limited ("overall accuracy is rarely shifted by more than $0.1\%$ with only 7 datasets above this threshold"). Besides, the training text accompanying an image in the CLIP dataset is often not related to the HOI annotations. Thus, we consider the risk limited.
378
+
379
+ # L LICENSE
380
+
381
+ The licenses of the assets used in our work are listed below, including the open-sourced CLIP model, the HICO-DET dataset, and the V-COCO dataset. As for HICO-DET, we cannot find its license in the paper or on the official project page, so we provide the official project page here for clarity.
382
+
383
+ 1. CLIP: https://github.com/openai/CLIP MIT License
384
+ 2. V-COCO: https://github.com/s-gupta/v-coco MIT License
385
+ 3. HICO-DET: http://www-personal.umich.edu/~ywchao/hico/ No license
2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8743b72b5330c10a748f4d39b6b85416151fdcd6e6429b35955fd049a880da53
3
+ size 769759
2023/Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weighted Clock Logic Point Process/3eef33de-4305-442c-87ae-f007ec3ea0e2_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weighted Clock Logic Point Process/3eef33de-4305-442c-87ae-f007ec3ea0e2_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weighted Clock Logic Point Process/3eef33de-4305-442c-87ae-f007ec3ea0e2_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c36453a5123425f0cff87356182e77a2ed246cb3d6c688bb81de41a770e04a97
3
+ size 1093684
2023/Weighted Clock Logic Point Process/full.md ADDED
@@ -0,0 +1,774 @@
1
+ # WEIGHTED CLOCK LOGIC POINT PROCESS
2
+
3
+ Ruixuan Yan $^{1}$ , Yunshi Wen $^{1}$ , Debarun Bhattacharjya $^{2}$ , Ronny Luss $^{2}$ , Tengfei Ma $^{2}$ , Achille Fokoue $^{2}$ , and Agung Julius $^{1}$
4
+
5
+ $^{1}$ Rensselaer Polytechnic Institute
6
+ $^{2}$ IBM T.J. Watson Research Center
7
+
8
+ # ABSTRACT
9
+
10
+ Datasets involving multivariate event streams are prevalent in numerous applications. We present a novel framework for modeling temporal point processes called clock logic neural networks (CLNN) which learn weighted clock logic (wCL) formulas as interpretable temporal rules by which some events promote or inhibit other events. Specifically, CLNN models temporal relations between events using conditional intensity rates informed by a set of wCL formulas, which are more expressive than related prior work. Unlike conventional approaches of searching for generative rules through expensive combinatorial optimization, we design smooth activation functions for components of wCL formulas that enable a continuous relaxation of the discrete search space and efficient learning of wCL formulas using gradient-based methods. Experiments on synthetic datasets demonstrate our model's ability to recover the ground-truth rules with improved computational efficiency. In addition, experiments on real-world datasets show that our models perform competitively when compared with state-of-the-art models.
+
+ # 1 INTRODUCTION AND RELATED WORK
+
+ Multivariate event streams are an emerging type of data involving occurrences of different types of events in continuous time. Event streams are observed in a wide range of applications, including but not limited to finance (Bacry et al., 2015), politics (O'Brien, 2010), system maintenance (Gunawardana et al., 2011), healthcare (Weiss & Page, 2013), and social networks (Farajtabar et al., 2015). As opposed to time series data, which typically comprise continuous-valued variables evolving at regular, discrete time stamps, event streams involve events occurring irregularly and asynchronously in continuous time. Modeling the dynamics in event streams is important for a wide range of scientific and industrial processes, such as predicting the occurrence of events of interest or understanding why some deleterious events occur so as to possibly prevent their occurrence. A (multivariate) temporal point process (TPP) provides a formal mathematical framework for representing event streams, where a conditional intensity rate for each event measures its occurrence rate at any time given the historical events in the stream (Daley & Vere-Jones, 2003; Aalen et al., 2008).
+
+ There has been a proliferation of research around TPPs in recent years, particularly around the use of neural networks for modeling conditional intensity rates as a function of historical occurrences (Du et al., 2016; Mei & Eisner, 2017; Xiao et al., 2017; Xu et al., 2017; Gao et al., 2020; Zhang et al., 2020; Zuo et al., 2020). One stream of research studies graphical event models (GEMs) as a compact and interpretable graphical representation for TPPs, where the conditional intensity rate for any particular event depends only on the history of a subset of the events (Didelez, 2008; Gunawardana & Meek, 2016). While any TPP can be represented as a GEM, various models make assumptions about the parametric form of conditional intensity rates for the sake of learnability, for instance that rates are piece-wise constant with respect to occurrences within historical windows (Gunawardana et al., 2011; Bhattacharjya et al., 2018). Ordinal GEMs (OGEMs) (Bhattacharjya et al., 2020; 2021) are a recent model from this family where a conditional intensity rate depends on the order in which parent events occur within the most recent historical time period.
+
+ A temporal logic point process (TLPP) framework was proposed as an alternate way to lend some interpretability to TPPs by modeling intensity rates using temporal logic rules (Li et al., 2020). Although the initial work pre-specified temporal logic rules, recent work has introduced a temporal logic rule learner (TELLER) for automatically discovering rules (Li et al., 2021). There is, however, an issue of scalability, since TELLER exploits an expensive branch-and-price algorithm to search for temporal logic rules in a discrete space. Another important limitation of this work is that TELLER's rules are not informative enough to explain how the interval length between ordered events impacts the conditional intensity rate. For instance, while predicting the occurrence of diabetes, the rule "insulin injection happens 20 minutes before eating a meal" is more informative and accurate in predicting "blood glucose remains normal" than the rule "insulin injection happens before eating a meal", as the latter cannot expose the interval between 'insulin injection' and 'eating meal'. To tackle the above limitations, we propose novel atomic predicates enriching the expressiveness of temporal logic rules as well as a differentiable framework to learn rules in an end-to-end manner.
+
+ This work introduces a differentiable neuro-symbolic framework, the clock logic neural network (CLNN), to model TPPs by learning weighted clock logic (wCL) formulas as explanations. First, event streams are converted into continuous-time clock signals representing the time interval between the last occurrence of an event and the current time. Next, we propose the novel wCL formalism to describe the underlying temporal relations with relative interval lengths, enabling the design of a CLNN to learn the generative mechanisms. Instead of searching for temporal logic rules in a vast discrete space, CLNN associates every neuron with an order representation or a logical operator and assigns weights to edges to reflect the importance of various inputs, which relaxes the search space to be continuous. Moreover, architecture weights are introduced into CLNN to make the formula structure search differentiable. The wCL formula-informed intensity rates are carefully designed so that the parameters appearing in the rules can be learned through maximum likelihood estimation using gradient-based approaches. CLNN is tested on synthetic datasets to show that it can recover ground-truth rules, as well as on real-world datasets to demonstrate its model-fitting performance.
+
+ # 2 PRELIMINARIES
+
+ # 2.1 NOTATION & BACKGROUND
+
+ Let $\mathcal{L}$ denote the set of event labels, and $M = |\mathcal{L}|$ denote the number of event labels. An event stream is a sequence of events including time stamps, denoted as $\mathcal{D} = \{(l_1,t_1),(l_2,t_2),\dots,(l_N,t_N)\}$, where $t_i \in \mathbb{R}^+$ denotes a time stamp between the beginning time $t_0 = 0$ and end time $t_{N+1} = T$, and $l_i \in \mathcal{L}$ is the event label that happens at $t_i$. We refer to 'event label' and 'label' interchangeably. Every event label $l \in \mathcal{L}$ has an associated conditional intensity rate describing the occurrence rate of label $l$ at $t$ given the history up to $t$. In multivariate temporal point processes, conditional intensity rates describe the dynamics of events. Let $\mathcal{H}_t = \{(l_i,t_i) : t_i < t\}$ denote the historical events up to time $t$. The conditional intensity rate of event label $l$ is denoted as $\lambda_l(t|\mathcal{H}_t)$. Specifically, $\lambda_l(t|\mathcal{H}_t)$ describes the expected number of occurrences of event label $l$ in an infinitesimal interval $[t, t+\Delta t]$ given the history $\mathcal{H}_t$, i.e., $\lambda_l(t|\mathcal{H}_t) = \lim_{\Delta t \to 0}(E[N_l(t+\Delta t) - N_l(t)|\mathcal{H}_t] / \Delta t)$, where $N_l(t)$ denotes the number of event label $l$'s occurrences up to $t$.
+
+ Example 1 A running example of an event stream with 11 events of 4 labels is shown in Figure 1(a).
+
+ ![](images/557ee827b58081cb6292fa7bca2267a38e9181cbaf851f056a773b70a640ed44.jpg)
+ Figure 1: (a): An event stream example with $N = 11$ events of $M = 4$ event labels over $T = 30$ days. (Integer-valued time stamps are used for easy interpretation; the proposed approach also works for $t_i \in \mathbb{R}$.) (b): The overall workflow of the proposed method (POC: paired order cell, SOC: singleton order cell, AC: architecture cell; details presented in Sections 2.2 to 3.3).
+
+ # 2.2 ORDER REPRESENTATIONS FOR EVENT STREAMS
+
+ The overall workflow of the proposed framework is visualized in Figure 1(b). The raw event streams first go through a masking function to generate the masked event streams, which are then transformed into event clocks using a clocking function. The event clocks are given as inputs to the clock logic neural network (CLNN) to learn interpretable wCL formulas and the intensity rate of event occurrences. The following sections provide a detailed explanation of each module in Figure 1(b).
+
+ We are interested in exploring the effect of temporal ordering between event labels, and of the occurrences of causal event labels in a historical window, on the occurrence rate of a particular event label, where the generative mechanism is expressed as interpretable formulas. An event stream up to $t$ may include multiple occurrences of the same event label, thus a masking function is required to mask out duplicated event labels in the history for accessing the ordering information at any $t$. Here we adopt a technique similar to Bhattacharjya et al. (2020) for extracting distinct event labels from $\mathcal{H}_t$.
+
+ Definition 1 (Masking Function) A masking function $\Gamma(\cdot)$ is a function that takes an event stream as input and returns a new event stream that is a subset of the input stream and contains no duplicated event labels. Mathematically, $\Gamma(\cdot)$ is applied to $\mathcal{H}_t = \{(l_i, t_i)\}$ and converts it into a new stream $\mathcal{H}_t' = \{(l_j, t_j) \in \mathcal{H}_t : l_j \neq l_{j'} \text{ if } j \neq j'\}$.
+
+ We consider the following two masking functions, as per Bhattacharjya et al. (2020), for simplicity: 'first' masking and 'last' masking. The 'first' (resp. 'last') masking function keeps the first (resp. last) occurrence of each event label in an event stream.
+
+ Example 1 (cont.) Let $\mathcal{H}_{13} = \{(A,1),(B,3),(A,6),(D,8),(C,10),(D,12)\}$. The 'first' masking function converts it to $\mathcal{H}_{13}' = \{(A,1),(B,3),(D,8),(C,10)\}$, and the 'last' masking function converts it to $\mathcal{H}_{13}' = \{(B,3),(A,6),(C,10),(D,12)\}$.
+
+ With the masked event history $\mathcal{H}_t'$, we define two order representations: one for the order relationship between any two event labels, and one for the occurrence of an event within a historical window of $t$.
+
+ Definition 2 (Paired Order Representation (POR)) A paired order representation is defined as $[l_i, l_j] \in [\mathcal{L}]^2$, where $[\mathcal{L}]^2$ denotes the set of two-element permutations of a subset of $\mathcal{L}$. A paired order representation for $\mathcal{H}_t'$ can be obtained by arranging any two distinct labels in $\mathcal{H}_t'$ in a sequential order.
+
+ Definition 3 (Singleton Order Representation (SOR)) A singleton order representation is denoted as $[l_j, \underline{u}_{l_j}] \in \mathcal{L} \times \mathbb{R}_+$, representing that event label $l_j \in \mathcal{L}$ occurred within the past $\underline{u}_{l_j}$ time units, where $\underline{u}_{l_j}$ is a variable to learn through a process that will be explained in Section 3.3.
+
+ Example 1 (cont.) With first masking, an example of a paired order representation for $\mathcal{H}_{13}'$ is $[A,B]$, representing "A happens before $B$", or $[B,C]$, representing "B happens before $C$". The overall order representation for $\mathcal{H}_{13}'$ is expressed as $[A,B,D,C]$, which can be derived from the paired order representations $[A,B],[B,D],[D,C]$. A singleton order representation example for $\mathcal{H}_{13}'$ is $[B,10.5]$, meaning $B$ happened in the past 10.5 days.
+ # 2.3 WEIGHTED CLOCK LOGIC FORMULA
+
+ To adapt $\mathcal{H}_t'$ to continuous-time signals that can be described by logical statements, we extract clock signals from $\mathcal{H}_t'$ to describe the time passed since the last occurrence of a label. A clocking function is introduced to convert $t_j$ into a clock signal $c_j$ denoting the time interval length between $t_j$ and $t$.
+
+ Definition 4 (Clocking Function) A clocking function $\Xi(\cdot)$ converts $\mathcal{H}_t'$ into a vector of clock signals $\mathcal{C}'(t) = [c_1(t), c_2(t), \dots, c_M(t)]^T \in \mathbb{R}_+^M$, with $c_i(t)$ denoting the clock signal for event label $i \in \mathcal{L}$, where $c_i(t)$ is computed as $c_i(t) = t - t_j$ if $(l_j, t_j) \in \mathcal{H}_t'$ and $l_j = i$, and $c_i(t) = \bar{Z}$ otherwise. Note that $\bar{Z}$ is a user-defined, large positive number indicating that event label $i$ did not happen in $\mathcal{H}_t'$.
+
+ Example 1 (cont.) Taking the 'first' masked event stream $\mathcal{H}_{13}' = \{(A,1),(B,3),(D,8),(C,10)\}$ as an example, the event clocks are extracted as $\mathcal{C}'(13) = [12,10,3,5]^T$.
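+
+ To make Definitions 1 and 4 concrete, the following is a minimal sketch in Python (not the authors' released code; the helper names `mask_stream` and `clock_signals` are ours) that reproduces the running example under 'first' masking, assuming label order $[A,B,C,D]$ and $\bar{Z} = 45$ (i.e., $1.5T$ for $T = 30$):
+
+ ```python
+ # Minimal sketch of the masking (Definition 1) and clocking (Definition 4) functions.
+ def mask_stream(stream, mode="first"):
+     """Keep the first or last occurrence of each label; return {label: time stamp}."""
+     masked = {}
+     for label, t in stream:
+         if mode == "first":
+             masked.setdefault(label, t)
+         else:  # mode == "last"
+             masked[label] = t
+     return masked
+
+ def clock_signals(masked, labels, t, z_bar):
+     """c_i(t) = t - t_i if label i is in the masked history, else the sentinel Z-bar."""
+     return [t - masked[l] if l in masked else z_bar for l in labels]
+
+ H13 = [("A", 1), ("B", 3), ("A", 6), ("D", 8), ("C", 10), ("D", 12)]
+ masked = mask_stream(H13, mode="first")  # {'A': 1, 'B': 3, 'D': 8, 'C': 10}
+ print(clock_signals(masked, ["A", "B", "C", "D"], t=13, z_bar=45))  # [12, 10, 3, 5]
+ ```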
+
+ The event clocks essentially provide the ordering between any two event labels, in that the difference between any two event labels' clock signals reflects which event label happens first. As shown in the diabetes prediction example in the Introduction, the time interval between ordered events is notably important in explaining and predicting an event label's occurrence. In contrast to Li et al. (2020; 2021), which only learn the temporal ordering relation between event labels, we define a paired order predicate (POP) with a learnable parameter $\underline{u}_{l_i l_j}$ to describe the time interval between two ordered event labels $l_i$ and $l_j$, and a singleton order predicate (SOP) with a learnable parameter $\underline{u}_{l_j}$ to describe the occurrence of label $l_j$ within a historical window $\underline{u}_{l_j}$, as follows.
+
+ Definition 5 (Paired Order Predicate) A POP describes the order between two labels $l_i, l_j \in \mathcal{L}, l_i \neq l_j$, denoted as $\pi_{pop}^{l_i l_j} := g(c_{l_i}, c_{l_j}) = c_{l_i} - c_{l_j} > \underline{u}_{l_i l_j}$, where $\underline{u}_{l_i l_j} \in \mathbb{R}$ is a parameter to learn. A positive $\underline{u}_{l_i l_j}$ means $l_i$ happened before $l_j$ by at least $\underline{u}_{l_i l_j}$ time units, and a negative $\underline{u}_{l_i l_j}$ means $l_j$ happened before $l_i$ by at most $-\underline{u}_{l_i l_j}$ time units. A POP is used in the POC of Figure 1(b).
+
+ Definition 6 (Singleton Order Predicate) An SOP describes a causal label $l_j \in \mathcal{L}$ occurring within the past $\underline{u}_{l_j}$ time units, defined as $\pi_{sop}^{l_j} := c_{l_j} - \underline{u}_{l_j} < 0$, where $\underline{u}_{l_j} \in \mathbb{R}_+$ is a learnable parameter.
+
+ Instead of taking a heuristic approach to the underlying combinatorial search problem over a given set of temporal predicates (Bhattacharjya et al., 2020; 2021; Li et al., 2021) to uncover the effective order relations, this work proposes a differentiable learning model that learns suitable singleton and paired order predicates among all possible choices of order predicates through a gradient-based approach. The scheme of weighted signal temporal logic (wSTL) in Yan et al. (2021; 2022) is exploited to build weighted clock logic (wCL) formulas that are logical compositions of singleton and paired order predicates. The syntax of wCL is recursively defined as (Mehdipour et al., 2021):
+
+ $$
+ \phi := \pi_{pop}^{l_i l_j} \mid \pi_{sop}^{l_j} \mid \neg\phi \mid \phi_1^{w_1} \wedge \phi_2^{w_2} \dots \wedge \phi_k^{w_k} \mid \phi_1^{w_1} \vee \phi_2^{w_2} \dots \vee \phi_k^{w_k}, \tag{1}
+ $$
+
+ where $\phi_1, \dots, \phi_k$ are wCL formulas, $\neg$ denotes negation, $\wedge$ denotes logical conjunction, $\vee$ denotes logical disjunction, and $w_j \geq 0, j = 1, \dots, k$ are non-negative weights assigned to $\phi_1, \dots, \phi_k$ in the conjunction and disjunction operations. A wCL formula can describe the characteristics of $\mathcal{H}_t$, thus the conditional intensity rate of event $l$ given $\mathcal{H}_t$ can equivalently be denoted as $\lambda_{l|\phi}(t)$.
+
+ Remark 7 The syntax above means each wCL formula can be built from predicates $\pi_{pop}^{l_i l_j}$ or $\pi_{sop}^{l_j}$ by recursively applying the $\neg$, $\wedge$, or $\vee$ operations.
+
+ Example 1 (cont.) A wCL formula example is $\phi = (c_A - c_B > 1)^1 \wedge (c_C < 5)^{0.05}$. The first and second clauses read "A happened before $B$ for at least one day" and "C happened less than 5 days ago", respectively. Note that $\phi$ is satisfied by the event stream up to $t = 13$ in Figure 1(a), since $c_A - c_B = 2 > 1$ and $c_C = 3 < 5$. The two clauses have weights of 1 and 0.05, reflecting that the first clause is more important than the second one.
+
+ # 3 WEIGHTED CLOCK LOGIC POINT PROCESSES
+
+ # 3.1 TRUTH DEGREE OF WEIGHTED CLOCK LOGIC
+
+ To quantitatively measure the satisfaction degree of a wCL formula $\phi$ over the event clocks $\mathcal{C}'(t)$, i.e., how well $\phi$ describes the underlying patterns of $\mathcal{C}'(t)$, we propose smooth activation functions (AFs) to compute the truth degree, denoted $p(\mathcal{C}',\phi,t) \in [0,1]$, defined as (Riegel et al., 2020):
+
+ $$
+ p\left(\mathcal{C}', \pi_{pop}^{l_i l_j}, t\right) = \mathrm{sigmoid}\left(c_{l_i}(t) - c_{l_j}(t) - \underline{u}_{l_i l_j}\right), \tag{2}
+ $$
+
+ $$
+ p\left(\mathcal{C}', \pi_{sop}^{l_j}, t\right) = \mathrm{sigmoid}\left(\underline{u}_{l_j} - c_{l_j}(t)\right), \tag{3}
+ $$
+
+ $$
+ p\left(\mathcal{C}', \neg\phi, t\right) = 1 - p\left(\mathcal{C}', \phi, t\right). \tag{4}
+ $$
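+
+ As a small illustration of (2) - (4), the predicate truth degrees are ordinary sigmoids over clock differences. A minimal sketch (our helper names, using PyTorch) follows:
+
+ ```python
+ import torch
+
+ # Truth degrees of the atomic predicates and negation, following (2)-(4).
+ def p_pop(c_i, c_j, u_ij):
+     """Truth degree of the paired order predicate c_i - c_j > u_ij, eq. (2)."""
+     return torch.sigmoid(c_i - c_j - u_ij)
+
+ def p_sop(c_j, u_j):
+     """Truth degree of the singleton order predicate c_j < u_j, eq. (3)."""
+     return torch.sigmoid(u_j - c_j)
+
+ def p_neg(p):
+     """Truth degree of negation, eq. (4)."""
+     return 1.0 - p
+ ```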
+
+ In contrast to the combinatorial search for temporal logic predicates in Li et al. (2021), the smooth design of the AFs in (2) - (4) benefits the maximum likelihood estimation problem shown later in Section 3.6 by allowing the parameters in the POPs and SOPs to be learned through gradient-based methods. Next, we present the design of the activation function (AF) for the $\wedge$ operator, using a 2-ary conjunction to motivate the design. Let $p^{\wedge} = p(\mathcal{C}',\phi_1^{w_1}\wedge \phi_2^{w_2},t) \in [0,1]$. Intuitively, $p^{\wedge}$ is low when either input is low, and $p^{\wedge}$ is high when both inputs are high. Here we adopt an idea similar to Sen et al. (2022) for capturing low and high: a user-defined hyperparameter $\alpha \in [\frac{1}{2},1]$ is introduced to aid the interpretability of low and high, such that $p^{\wedge}$ represents high if $p^{\wedge} \in [\alpha,1]$ and low if $p^{\wedge} \in [0,1-\alpha]$. Considering the importance weights, a low input with a zero weight should not impact the output, which implies $p^{\wedge}$ should be low when both inputs are low. With these considerations, the AF for the $\wedge$ operator is defined as follows (see Appendix A for more details):
+
+ $$
+ p\left(\mathcal{C}', \phi_1^{w_1} \wedge \phi_2^{w_2} \dots \wedge \phi_k^{w_k}, t\right) = f\left(\beta - \sum_{j=1}^{k} w_j\left(1 - p\left(\mathcal{C}', \phi_j, t\right)\right)\right), \tag{5}
+ $$
+
+ $$
+ \text{subject to} \quad \beta - \sum_{j=1}^{k} w_j (1 - \alpha) \geq \alpha, \quad \beta - \sum_{j=1}^{k} w_j \alpha \leq 1 - \alpha,
+ $$
+
+ where $f(z) = \max\{0, \min\{z, 1\}\}$ clamps the truth degree into $[0,1]$, and $w_j \geq 0$ and $\beta \geq 0$ are parameters to learn. By De Morgan's law (Hurley, 2014), the AF for the $\vee$ operator is defined as
+
+ $$
+ p\left(\mathcal{C}', \phi_1^{w_1} \vee \phi_2^{w_2} \dots \vee \phi_k^{w_k}, t\right) = f\left(1 - \beta + \sum_{j=1}^{k} w_j\, p\left(\mathcal{C}', \phi_j, t\right)\right), \tag{6}
+ $$
+
+ $$
+ \text{subject to} \quad 1 - \beta + \sum_{j=1}^{k} w_j \alpha \geq \alpha, \quad 1 - \beta + \sum_{j=1}^{k} w_j (1 - \alpha) \leq 1 - \alpha.
+ $$
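+
+ A minimal sketch of the activation functions (5) and (6) is given below, together with an evaluation of the example formula $\phi = (c_A - c_B > 1)^1 \wedge (c_C < 5)^{0.05}$ on the clocks $\mathcal{C}'(13) = [12,10,3,5]^T$ from Example 1. Setting $\beta = (1 + \sum_j w_j)/2$ is our choice here; it satisfies the $\alpha = 0.5$ constraints with equality (see Appendix A):
+
+ ```python
+ import torch
+
+ def clamp01(z):
+     """f(z) = max{0, min{z, 1}} from (5)-(6)."""
+     return torch.clamp(z, 0.0, 1.0)
+
+ def p_and(ps, ws, beta):
+     """Weighted conjunction, eq. (5)."""
+     return clamp01(beta - torch.sum(ws * (1.0 - ps)))
+
+ def p_or(ps, ws, beta):
+     """Weighted disjunction, eq. (6)."""
+     return clamp01(1.0 - beta + torch.sum(ws * ps))
+
+ # phi = (c_A - c_B > 1)^1 AND (c_C < 5)^0.05 evaluated at t = 13.
+ c = torch.tensor([12.0, 10.0, 3.0, 5.0])             # clocks for [A, B, C, D]
+ ps = torch.stack([torch.sigmoid(c[0] - c[1] - 1.0),  # POP: A before B by > 1 day
+                   torch.sigmoid(5.0 - c[2])])        # SOP: C within the past 5 days
+ ws = torch.tensor([1.0, 0.05])
+ beta = (1.0 + ws.sum()) / 2.0                        # equality case of the constraints
+ print(p_and(ps, ws, beta))                           # ~0.75, a high truth degree
+ ```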
+
+ An event stream with $M$ event labels generates $\mathrm{P}_M^2 = \frac{M!}{(M-2)!}$ paired order predicates and $M$ singleton order predicates; for example, $M = 4$ labels yield 12 POPs. If a conjunction or disjunction operator takes these predicates as inputs, how it recognizes the effective order predicates in describing the event dynamics becomes a critical issue. By carefully designing the AFs in (5) - (6), the logical operators exhibit the following properties that allow them to recognize effective inputs. This is a critical advantage over Bhattacharjya et al. (2020; 2021); Li et al. (2021) in that it allows a differentiable search for suitable predicates among all possible choices of order predicates in an end-to-end manner. Here we illustrate the properties for $\wedge$ with two inputs, which can be generalized to $k$-ary inputs (see Appendix B for more details).
+
+ Theorem 8 The AF for the $\wedge$ operator with two inputs exhibits the following properties.
+
+ 1) Nonimpact for zero weights: If $w_j = 0$ for $j \in \{1,2\}$, then $p(\mathcal{C}',\phi_j,t)$ has no impact on $p(\mathcal{C}',\phi_1\wedge \phi_2,t)$.
+ 2) Impact ordering: If $p(\mathcal{C}', \phi_1, t) = p(\mathcal{C}', \phi_2, t)$ and $w_1 \geq w_2$, then $\frac{\partial p(\mathcal{C}', \phi_1 \wedge \phi_2, t)}{\partial p(\mathcal{C}', \phi_1, t)} \geq \frac{\partial p(\mathcal{C}', \phi_1 \wedge \phi_2, t)}{\partial p(\mathcal{C}', \phi_2, t)}$.
+ 3) Monotonicity: $f(\beta - \sum_{j=1}^{2} w_j (1 - p(\mathcal{C}', \phi_j, t))) \leq f(\beta - \sum_{j=1}^{2} w_j (1 - (p(\mathcal{C}', \phi_j, t) + d)))$ for $d \geq 0$.
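+
+ Property 1) is easy to check numerically with the sketch of (5) above (our toy values):
+
+ ```python
+ import torch
+
+ # Property 1: with w_2 = 0, changing p_2 leaves the conjunction output unchanged.
+ ws, beta = torch.tensor([1.0, 0.0]), torch.tensor(1.0)  # beta = (1 + sum w)/2
+ for p2 in (0.1, 0.9):
+     ps = torch.tensor([0.8, p2])
+     out = torch.clamp(beta - torch.sum(ws * (1.0 - ps)), 0.0, 1.0)
+     print(out)  # 0.8 in both cases, so p_2 has no impact
+ ```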
+
+ ![](images/9e0cac7601d681cedd8dc0d19c63518c0191e9bee81a8bb2a62181f80436ece0.jpg)
+ (a)
+
+ ![](images/c640ccb66e96e9424797e3c3f14486a4b78a2595964840319ae75556b8aea159.jpg)
+ (b)
+
+ Figure 2: CLNN structure. (a): Continuous relaxation of the search space using weights. (b): The learned discrete model structure for $\phi = (\pi_{pop}^{A,B} \wedge \pi_{pop}^{B,C}) \vee (\pi_{sop}^{A})$.
+
+ # 3.2 LEARNING OF PAIRED ORDER REPRESENTATION
+
+ With the smooth AFs designed in (2) - (6), a neuro-symbolic model called the clock logic neural network (CLNN) can be designed for any given wCL formula $\phi$, in which every neuron has a corresponding symbolic representation. A typical CLNN for $\phi = (\pi_{pop}^{A,B}\wedge \pi_{pop}^{B,C})\vee (\pi_{sop}^{A})$ is visualized in Figure 2(b), which can be considered as the discrete structure obtained by learning the parameters of the model in Figure 2(a) and keeping the dominant components. Here $\phi$ can be interpreted as "(A happens before $B$ for at least $\underline{u}_{AB}$ time units and $B$ happens before $C$ for at least $\underline{u}_{BC}$ time units) or $A$ happens within the past $\underline{u}_A$ time units." This part describes the continuous relaxation of the search space by designing a paired order cell, a singleton order cell, and an architecture cell for learning the paired order representation, the singleton order representation, and the formula structure, respectively.
+
+ Paired Order Cell (POC). A POC is a directed acyclic graph (DAG) comprising two paired order predicate (POP) nodes and one logical node for the $\wedge$ operator, shown as an orange block in Figure 2(a). The two POP nodes represent $\pi_{pop}^{l_i,l_j}$ and $\pi_{pop}^{l_j,l_i}$ sharing the same parameter $\underline{u}_{l_i,l_j}$, where $\pi_{pop}^{l_i,l_j}$ denotes "$l_i$ happened before $l_j$ for at least $\underline{u}_{l_i,l_j}$ time units" and $\pi_{pop}^{l_j,l_i}$ denotes "$l_j$ happened before $l_i$ for at least $\underline{u}_{l_i,l_j}$ time units". Each POP has an associated weight $w_{pop}^{l_i,l_j}$ or $w_{pop}^{l_j,l_i}$ to be learned, and the $\wedge$ operator forces one of the two weight parameters to dominate the other so that the learned POR is consistent with the event stream. For example, the POC in Figure 2(a) aims to learn the POR between $A$ and $B$, whose discretized version would be either $\pi_{pop}^{A,B}$ or $\pi_{pop}^{B,A}$. An event stream with $M$ event labels can generate $\mathrm{P}_M^2 = \frac{M!}{(M-2)!}$ PORs between any two event labels, resulting in $(\mathrm{P}_M^2 / 2)$ POCs. Similar to learning the POR between any two events, the discrete order representations for the entire history $\mathcal{H}_t$ can be learned using a POP selection node (shown in Figure 2(a)) that takes the outputs of all the POCs as input and identifies the important PORs. The learning of the POCs thus reduces to learning the $w$, $\beta$ in (5) for the POCs and the POP selection node, as well as $\underline{u}_{l_i l_j}$ in (2) for the POPs, through back propagation. The discrete PORs can be acquired by keeping the top-$k$ strongest POCs and the dominant POPs, as sketched below.
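+
+ The discretization is a simple thresholding of the learned weights; a sketch (helper name ours, with the weights taken from the Syn-1 result in Table 1):
+
+ ```python
+ import torch
+
+ def discretize(weights, names, k):
+     """Keep the top-k predicates by learned importance weight."""
+     topk = torch.topk(weights, k).indices.tolist()
+     return [names[i] for i in topk]
+
+ # Learned POP weights from the Syn-1 experiment (Table 1).
+ w_pop = torch.tensor([1.52, 1.41, 0.33, 0.0, 0.0, 0.16])
+ names = ["A before B", "A before C", "A before D",
+          "B before C", "B before D", "D before C"]
+ print(discretize(w_pop, names, k=2))  # ['A before B', 'A before C']
+ ```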
+
+ # 3.3 LEARNING OF SINGLETON ORDER REPRESENTATION
+
+ Singleton Order Cell (SOC). The learning of SORs is accomplished by an SOC, displayed as a green block in Figure 2(a). An SOC is a DAG comprising $M$ singleton order predicate (SOP) nodes and one SOP selection node for the $\wedge$ operator. An SOP node represents $\pi_{sop}^{l_j}$; it takes $c_{l_j}(t)$ as input and returns the truth degree of $\pi_{sop}^{l_j}$ over $c_{l_j}(t)$. The SOP selection node has the same functionality as the POP selection node. The $\wedge$ operator in the SOP selection node assigns a nonnegative weight to every SOP node and learns the importance weights $w$ and $\beta$ to extract the dominant SORs affecting the conditional intensity rate the most. The learning of the SOC is thus learning the $w, \beta$ in (5) for the SOP selection node and $\underline{u}_{l_j}$ in (3) for the SOPs through back propagation. The discrete SORs can be determined by keeping the top-$k$ strongest SOPs.
+
+ # 3.4 LEARNING OF FORMULA STRUCTURE
+
+ Architecture Cell (AC). For a given set of PORs or SORs, their conjunction and their disjunction behave differently and have distinct meanings. For instance, given two causal formulas $\phi_1 = (c_A - c_B > 1)^1 \wedge (c_C < 5)^1$ and $\phi_2 = (c_A - c_B > 1)^1 \vee (c_C < 5)^1$ for the occurrence of event label $D$, $\phi_1$ means "(A happens before $B$ for at least 1 time unit) and (C happens within the past 5 time units) simultaneously will cause $D$ to happen", whereas $\phi_2$ means "(A happens before $B$ for at least 1 time unit) or (C happens within the past 5 time units) alternatively will cause $D$ to happen." The aforementioned cells can learn the order representations; nevertheless, whether their outputs should be connected by the $\wedge$ or the $\vee$ operator needs to be determined. Here we consider the outputs of the POCs and the SOCs as having two choices of connection, by a $\wedge$ or a $\vee$ operator, each of which is associated with an architecture weight $\alpha_{arc}^{\wedge}$ or $\alpha_{arc}^{\vee}$ that enables continuous learning over the two choices; this is also called differentiable architecture search (Liu et al., 2019). An architecture cell is introduced for learning the model architecture; it comprises two logical nodes representing a $\wedge$ operator and a $\vee$ operator, as well as a logical selection node (LSN), shown as the blue block in Figure 2(a). Let $\pmb{p} = \{p_1,\dots,p_k\}$ denote the set of inputs for each logical operator. The conjunction operator takes $\pmb{p}$ as input and returns $p^{\wedge} = f(\beta^{\wedge} - \sum_{j=1}^{k} w_j^{\wedge}(1 - p_j))$, and the disjunction operator takes $\pmb{p}$ as input and returns $p^{\vee} = f(1 - \beta^{\vee} + \sum_{j=1}^{k} w_j^{\vee} p_j)$. The LSN, represented by $\ominus$, takes $p^{\wedge}$ and $p^{\vee}$ as inputs and returns their weighted sum, where the weights are computed using the softmax of the architecture weights as shown below:
+
+ $$
+ p_{\ominus} = p\left(\mathcal{C}', \phi, t\right) = \sum_{m \in \{\wedge, \vee\}} \frac{e^{\alpha_{arc}^{m}}}{\sum_{m' \in \{\wedge, \vee\}} e^{\alpha_{arc}^{m'}}}\, p^{m}. \tag{7}
+ $$
+
+ The task of architecture search then reduces to learning the architecture weights $\alpha_{arc}^{\wedge}$, $\alpha_{arc}^{\vee}$ and the $w, \beta$ in (5) - (6) for the two logical operators, which can be done simultaneously while learning the parameters in the POCs and SOCs. The outcome of the architecture search process is a discrete architecture obtained by retaining the logical operator with the strongest architecture weight.
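+
+ A sketch of the logical selection node (7) in code (our variable names):
+
+ ```python
+ import torch
+
+ def lsn(p_and_val, p_or_val, alpha_arc):
+     """Logical selection node, eq. (7): softmax-weighted mix of the two operators."""
+     weights = torch.softmax(alpha_arc, dim=0)  # [weight of AND, weight of OR]
+     return weights[0] * p_and_val + weights[1] * p_or_val
+
+ # After training, the discrete architecture keeps the operator with argmax(alpha_arc).
+ alpha_arc = torch.tensor([0.3, 1.2], requires_grad=True)
+ print(lsn(torch.tensor(0.75), torch.tensor(0.92), alpha_arc))
+ ```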
+
+ # 3.5 WCL-INFORMED INTENSITY FUNCTION
+
+ The output of a CLNN is the truth degree of $\phi$ over $\mathcal{C}'$ at $t$, which is incorporated into modeling the conditional intensity rates. The modeling process aims to discover the generative mechanism as wCL formulas for every $l \in \mathcal{L}$. In other words, a larger value of $p(\mathcal{C}', \phi, t)$ should reflect that $\phi$ has a greater impact on the occurrence of a particular label. For example, if the wCL formula affecting the occurrence of event label $D$ is given as $\phi = ((\pi_{pop}^{A,B})^{w_1} \wedge (\pi_{sop}^{C})^{w_2})$, it means that if $\phi$ is satisfied, or the truth degree of $\phi$ is high, then it has a strong impact on the occurrence of $D$, where the impact can be promoting or inhibiting the occurrence of $D$. In terms of the relation between the truth degree and the conditional intensity rate, the higher the truth degree $p(\mathcal{C}',\phi,t)$, the greater its impact on $\lambda_{D|\phi}$.
+
+ ![](images/5d5b49c9b0aa3673d8a056f465562a03c01bddae53c13e45a86456e37664c6a3.jpg)
+ Figure 3: The overall learning framework for $n$ wCL formulas.
+
+ Note that the occurrence of one event label may depend on multiple wCL formulas. This work follows the assumption that the impacts of multiple formulas are additive in predicting the intensity rate, similar to Li et al. (2020). To incorporate a set of wCL formulas $\Phi = \{\phi_1,\phi_2,\dots,\phi_n\}$ into the modeling of the conditional intensity rate, we define a wCL formula-informed conditional intensity rate as:
+
+ $$
+ \lambda_{l \mid \Phi}(t) = \exp\left(\sum_{i=1}^{n} w_{\phi_i}\, p\left(\mathcal{C}', \phi_i, t\right) + \rho\right), \tag{8}
+ $$
+
+ where $w_{\phi_i}$ is the weight of $\phi_i$, and $\rho$ is a bias term that allows for spontaneous occurrences without the influence of $\Phi$.
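+
+ A sketch of (8), assuming the truth degrees of the $n$ formulas at time $t$ have already been computed by the CLNNs:
+
+ ```python
+ import torch
+
+ def intensity(p_phis, w_phis, rho):
+     """wCL formula-informed conditional intensity rate, eq. (8).
+     A positive (negative) w_phi promotes (inhibits) the occurrence of the label;
+     rho allows spontaneous occurrences when all truth degrees are low."""
+     return torch.exp(torch.sum(w_phis * p_phis) + rho)
+
+ # e.g., one strongly promoting formula with the synthetic setting w_phi = 3, rho = -5:
+ print(intensity(torch.tensor([0.75]), torch.tensor([3.0]), torch.tensor(-5.0)))
+ ```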
+
+ # 3.6 MAXIMUM LIKELIHOOD ESTIMATION
+
+ Suppose event stream $\mathcal{D}$ contains $n_l$ occurrences of event $l$, for which the occurrence time stamps are denoted as $t_{l_1}, t_{l_2}, \ldots, t_{l_{n_l}}$. Let $t_{l_0} = 0$ and $t_{l_{n_l + 1}} = T$. Based on the conditional intensity function in (8), the likelihood for label $l$ over the event stream is calculated as (Daley & Vere-Jones, 2003):
+
+ $$
+ L_l = \prod_{i=0}^{n_l - 1}\left(\exp\left(-\int_{t_{l_i}}^{t_{l_{i+1}}} \lambda_{l|\Phi}(s)\, ds\right) \lambda_{l|\Phi}\left(t_{l_{i+1}}\right)\right) \exp\left(-\int_{t_{l_{n_l}}}^{T} \lambda_{l|\Phi}(s)\, ds\right). \tag{9}
+ $$
+
+ The corresponding log-likelihood for event label $l$ is expressed as $LL_l = -\int_0^T \lambda_{l|\Phi}(s)\, ds + \sum_{i=1}^{n_l} \log\left(\lambda_{l|\Phi}(t_{l_i})\right)$. The total log-likelihood of all the events in $\mathcal{D}$ is thus $LL_{\mathcal{D}} = \sum_{l \in \mathcal{L}} LL_l$. During the training process, we train the model parameters for each event label separately. Specifically, the maximum likelihood estimation problem for event label $l$ can be formulated as follows:
+
+ $$
+ \min\; -LL_l \tag{10}
+ $$
+
+ $$
+ \text{s.t.} \quad \forall \phi \in \Phi, \forall 1 \leq k \leq K_\phi^\wedge, \quad \beta_k - \sum_{i \in I_k} w_{i,k}(1 - \alpha) \geq \alpha, \quad \beta_k - \sum_{i \in I_k} w_{i,k}\alpha \leq 1 - \alpha, \tag{11}
+ $$
+
+ $$
+ \forall \phi \in \Phi, \forall 1 \leq k' \leq K_\phi^\vee, \quad 1 - \beta_{k'} + \sum_{i \in I_{k'}} w_{i,k'}\alpha \geq \alpha, \quad 1 - \beta_{k'} + \sum_{i \in I_{k'}} w_{i,k'}(1 - \alpha) \leq 1 - \alpha, \tag{12}
+ $$
+
+ $$
+ w_{i,k} \geq 0, \quad \beta_k \geq 0, \quad w_{i,k'} \geq 0, \quad \beta_{k'} \geq 0, \quad \underline{u}_{l_j} \geq 0,
+ $$
+
+ where $K_\phi^\wedge$ (resp. $K_\phi^\vee$) is the number of $\wedge$ (resp. $\vee$) operators in $\phi$, and $I_k$ (resp. $I_{k'}$) denotes the set of inputs to the $k$-th $\wedge$ (resp. $k'$-th $\vee$) operator. Please see Appendix A for more details about the above formulation. The overall learning framework is shown in Figure 3: the forward propagation computes $LL_l$ using $n$ CLNNs, each of which learns a wCL formula $\phi_i$, and the backward propagation updates the parameters of the $n$ CLNNs using projected gradient descent.
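+
+ Because the clocks change at every event, the integral in $LL_l$ has no simple closed form; in practice it can be approximated numerically. A minimal sketch of the per-label negative log-likelihood, assuming a callable `lam` that evaluates (8) at a batch of times (the grid-based quadrature is our simplification):
+
+ ```python
+ import torch
+
+ def neg_log_likelihood(lam, event_times, T, n_grid=1000):
+     """-LL_l = integral_0^T lambda(s) ds - sum_i log lambda(t_i), with the integral
+     approximated by a left Riemann sum on a uniform grid (our simplification)."""
+     grid = torch.linspace(0.0, T, n_grid + 1)[:-1]
+     integral = torch.sum(lam(grid)) * (T / n_grid)
+     log_terms = torch.sum(torch.log(lam(event_times)))
+     return integral - log_terms
+
+ # Sanity check with a constant rate lambda = 2 on [0, 10] and 3 events:
+ lam = lambda t: 2.0 * torch.ones_like(t)
+ print(neg_log_likelihood(lam, torch.tensor([1.0, 4.0, 7.5]), T=10.0))  # 20 - 3*log(2)
+ ```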
+
+ # 4 EXPERIMENTS
+
+ We conduct several experiments on synthetic and real-world datasets to demonstrate the efficacy of our proposed model, and we compare with state-of-the-art (SOTA) models. The experiments are run using the AdamW optimizer in PyTorch (1.10.2) on a Windows 10 desktop with a 16-core CPU (i7, 3.60 GHz) and 32 GB RAM. Our code is available at https://ICLR-CLNN.
+
+ # 4.1 MODELS
+
+ Multivariate Hawkes Process (MHP) (Bacry et al., 2017): A conventional multivariate Hawkes process utilizing an exponential kernel function to describe the conditional intensity rate, which involves a decay rate and an infectivity matrix characterizing the inter-dependence among events. This model is implemented in the tick$^{1}$ library, where the learning problem is posed as a convex quadratic programming problem with a fixed decay rate.
+
+ Proximal Graphical Event Model (PGEM) (Bhattacharjya et al., 2018): A type of GEM that models event data by considering whether a parent in some underlying graph happens in a proximal (recent) window.
+
+ Ordinal Graphical Event Model (OGEM) (Bhattacharjya et al., 2020; 2021): An ordinal GEM that models the impact of the order of events on the conditional intensity rate. OGEM-tab (resp. OGEM-tree) refers to an OGEM that adopts a tabular (resp. tree) representation of orders.
+
+ Temporal Logic Rule Learner (TELLER)$^{2}$ (Li et al., 2021): A method to learn first-order temporal logic rules explaining the generative mechanism of TPPs. The rule discovery process is formulated as a maximum likelihood estimation problem solved by a branch-and-price algorithm.
+
+ # 4.2 SYNTHETIC DATASETS
+
+ The first part of this experiment demonstrates CLNN's capability to recover ground-truth rules using three synthetic datasets generated by CLNN with pre-specified formula structures and parameters, including $\underline{u}_{l_i l_j}$ in $\pi_{pop}^{l_i, l_j}$, the importance weights $w$ and bias $\beta$ in (5) for logical operators, and the $w_\phi$ and $\rho$ in (8) for the conditional intensity rate.
+
+ Experimental Setting. Each synthetic dataset contains 1,000 event streams partitioned into three sets: training (70%), validation (15%), and test (15%). Every dataset is generated using a wCL formula with $w_\phi = 3$ and $\rho = -5$. The truth value threshold is set as $\alpha = 0.5$, and the clock signal representing an event not occurring in $\mathcal{H}_t'$ is set as $\bar{Z} = 1.5 T_{\max}$, where $T_{\max}$ is the maximal ending time among all the event streams. During the training process, we initialize the parameters using four approaches (see Appendix C.5 for more details) and report the best one; CLNN aims to recover the manually set parameters.
+
+ Results. The ground-truth rule $\hat{\phi}_1$ for generating the first synthetic dataset (Syn-1) with $\mathcal{L} = \{A, B, C, D\}$ and the rules discovered by CLNN, TELLER, and OGEM-tab are summarized in Table 1. Results for the other synthetic datasets are presented in Appendix C. The rules are learned using the 'last' masking method, which was also used for data generation. The experimental results show an accurate recovery performance of CLNN in terms of order representation recovery and parameter identification. The unweighted version of the ground-truth rule reads: "If $A$ happens before $B$ for at least 1 time unit and $A$ happens before $C$ for at least 3 time units, then $D$ will happen". The rule of TELLER only reflects the temporal relation between events $A$, $B$, $C$ and $D$ but is unable to capture the temporal relation between $A$ and $B$ or $A$ and $C$, so it does not match the ground-truth rule. In OGEM-tab's rule, $[l]$ denotes a single parent. We show the top 3 excitation and inhibitory rules from OGEM-tab, where excitation (resp. inhibitory) means $\lambda_{l|\Phi}$ is higher (resp. lower) than the $\lambda_{l|\Phi}$ with all $w_{\phi_i} = 0$. The excitation rules of OGEM-tab do not match the ground-truth rule. In contrast, the rule discovered by CLNN ($\phi_1$) assigns larger weights to the paired order predicates $\pi_{pop}^{A,B} = (c_A - c_B > 1.21)$ and $\pi_{pop}^{A,C} = (c_A - c_C > 3.00)$ and small weights to the other predicates, where the interval values of 1.21 and 3.00 are both learned. By ignoring the small weights, $\phi_1$ can be interpreted as "If $A$ happens before $B$ for at least 1.21 time units and $A$ happens before $C$ for at least 3.00 time units, then $D$ will happen", meaning the paired order representations discovered by CLNN match well with the ground truth. Moreover, CLNN's rules are more expressive than those of TELLER and OGEM, as they provide a detailed interval length between two ordered labels.
+
+ <table><tr><td>Ground truth</td><td>φ1=(cA-cB&gt;1)1∧(cA-cC&gt;3)1</td></tr><tr><td>CLNN&#x27;s rule</td><td>(cA-cB&gt;1.21)1.52 ∧ (cA-cC&gt;3.00)1.41 ∧ (cA-cD&gt;0.82)0.33 ∧ (cB-cC&gt;4.33)0 ∧ (cB-cD&gt;10.69)0 ∧ (cD-cC&gt;-6.57)0.16</td></tr><tr><td>TELLER&#x27;s rule</td><td>A before D, B before D, C before D, A before D and C before D</td></tr><tr><td>OGEM-tab&#x27;s rule</td><td>Excitation: [B], [C,B], [B,C]; Inhibitory: [A], [C,A], [A,C]</td></tr></table>
+
+ Table 1: Comparison of rule discovery for CLNN, TELLER, and OGEM-tab on the Syn-1 dataset.
+
+ To show the computational efficiency of our gradient-based learning, we compare the runtimes of CLNN and TELLER on the synthetic datasets in Table 2. Notably, CLNN not only recovers the correct order representations but also runs two orders of magnitude faster on average (5.62 s vs. 635.99 s). In addition, CLNN can learn more expressive order representations that describe both the order relation between two events and their interval length.
+
+ <table><tr><td>wCL formula</td><td>φ1</td><td>φ2</td><td>φ3,1</td><td>φ3,2</td><td>Average</td></tr><tr><td>CLNN</td><td>5.20</td><td>4.60</td><td>4.95</td><td>7.73</td><td>5.62</td></tr><tr><td>TELLER</td><td>252.91</td><td>286.83</td><td>925.58</td><td>1078.66</td><td>635.99</td></tr></table>
+
+ Table 2: Runtime (s) for CLNN and TELLER on synthetic datasets.
+
+ # 4.3 REAL-WORLD DATASETS
+
+ LinkedIn (Xu et al., 2017): An event dataset related to job hopping records of 3,000 LinkedIn users in 82 IT companies. Each event stream records a user's check-in time stamps for different companies or the time stamps of role changes within the same company. We filter the dataset to popular companies as per Bhattacharjya et al. (2020), resulting in 1,000 users.
+
+ MIMIC II (Saeed et al., 2011): An event dataset concerning health records of patients' Intensive Care Unit (ICU) visits over 7 years. A patient's event stream records each visit's time stamp and the corresponding diagnosis. We filter out sequences with few visits, resulting in 650 patients.
+
+ Stack Overflow (Grant & Betts, 2013): An event dataset related to the badges awarded to users of the question-answering website Stack Overflow. Each user's event stream records the badges that they receive at various time stamps. We keep the event streams with one or more of 20 types of badges and sample 1,000 users from the dataset used in Du et al. (2016).
+
+ Experimental Setup. Each dataset is partitioned into three sets: training (70%), validation (15%), and test (15%). For simplicity, the $\underline{u}_{l_i l_j}$ are set to 0 to study the ordering representations. The truth value threshold is $\alpha = 0.5$ and $\bar{Z} = 1.5 T_{\max}$, the same as the setting for the synthetic datasets; the number of subformulas is $n = 5$, and the parameters are initialized as random numbers from a uniform distribution on [0, 1). CLNN is trained on the training set, and the validation set is utilized for model selection during training. Model fit is evaluated using log-likelihood on the test set.
+
+ Results. Following Bhattacharjya et al. (2018; 2020; 2021), we use the log-likelihood to evaluate the model's performance. The log-likelihood on the real-world datasets is reported in Table 3, where $DR$ denotes the difference ratio: the difference between CLNN and the best SOTA divided by the absolute value of the best SOTA. CLNN's result is chosen as the better one between the 'first' and 'last' maskings. Notably, CLNN outperforms the baseline models on the LinkedIn dataset (a 13.40% advantage) and achieves a competitive result on the MIMIC II dataset (a loss of only 1.63%). PGEM achieves a better result on the Stack Overflow dataset. In Stack Overflow, one type of badge can be awarded only when a user receives a particular badge multiple times; for example, the 'Epic' badge is awarded only after earning 200 daily reputation 50 times, which depends on the 'Mortarboard' badge acquired while answering or asking questions. CLNN and OGEMs apply masking methods to the data, which may not capture the above dependence. In contrast, PGEM models data without masking, making it more suitable for this dataset.
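+
+ As a worked instance of the $DR$ computation: on LinkedIn the best SOTA is OGEM-tree with a test log-likelihood of -1418, so
+
+ $$
+ DR = \frac{-1228 - (-1418)}{|{-1418}|} = \frac{190}{1418} \approx 13.40\%.
+ $$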
+
+ <table><tr><td>Dataset</td><td>N (# events)</td><td>M (# labels)</td><td>MHP</td><td>PGEM</td><td>OGEM-tab</td><td>OGEM-tree</td><td>TELLER</td><td>CLNN</td><td>DR</td></tr><tr><td>LinkedIn</td><td>2932</td><td>10</td><td>-1593</td><td>-1462</td><td>-1478</td><td>-1418</td><td>-1548</td><td>-1228</td><td>13.40%</td></tr><tr><td>MIMIC II</td><td>2419</td><td>15</td><td>-567</td><td>-500</td><td>-474</td><td>-429</td><td>-645</td><td>-436</td><td>-1.63%</td></tr><tr><td>Stack Overflow</td><td>71254</td><td>20</td><td>-52543</td><td>-48323</td><td>-49344</td><td>-49192</td><td>-71101</td><td>-50981</td><td>-5.50%</td></tr></table>
+
+ Table 3: Dataset information and log-likelihood for all models on the real-world datasets.
+
+ Case Study. The primary strength of CLNN over the SOTA models is that it can describe the generative mechanism as wCL formulas, which are more expressive and potentially provide more detailed information. CLNN can thus be deployed as a valuable tool for assisting domain specialists in knowledge discovery from event data. Here we showcase this strength using an illustrative example. We select the experimental result on company $F$ of the LinkedIn dataset to demonstrate the expressivity of CLNN's rules, which are shown in Table 4. Here we specify the model to learn five formulas, four of which are inhibitory, and one exhibits excitation. One inhibitory formula has a weight of 0.05 and is thus not reported in Table 4. Each formula shows the dominant singleton or paired order predicates. Notably, CLNN learns expressive wCL formulas that describe how the logical composition of paired order predicates and/or singleton order predicates affects a role change in company $F$. CLNN's rules are more expressive than TELLER's and as expressive as OGEM-tab's for describing the occurrence of a causal event within a specific historical window.
+
+ <table><tr><td></td><td>Rules</td><td>Effect</td></tr><tr><td rowspan="4">CLNN</td><td>φ1=(cD&gt;cH)0.90 ∧ (cI&gt;cJ)0.72</td><td>Inhibitory</td></tr><tr><td>φ2=(cB&lt;0.45)0.58 ∧ (cD&lt;0.05)0.66</td><td>Excitation</td></tr><tr><td>φ3=(cB&gt;cF)0.50 ∧ (cI&gt;cJ&gt;cD)0.47</td><td>Inhibitory</td></tr><tr><td>φ4=(cA&lt;0.84)0.76 ∧ (cH&lt;1.09)0.50</td><td>Inhibitory</td></tr><tr><td>TELLER</td><td>[A,F],[C,F],[E,F],[B,F],[D,F]</td><td>Excitation</td></tr><tr><td rowspan="2">OGEM-tab</td><td>[F],[F,A]</td><td>Excitation</td></tr><tr><td>[A]</td><td>Inhibitory</td></tr></table>
+
+ Table 4: Formulas and their effect as learned by CLNN, TELLER and OGEM-tab on company $F$ of LinkedIn.
+
+ # 5 CONCLUSION
+
+ In this paper, we proposed a novel neuro-symbolic model, CLNN, to learn interpretable wCL formulas from multivariate event data. Experimental results on synthetic and real-world datasets demonstrate CLNN's expressiveness and its ability to recover ground-truth rules in multivariate temporal point processes. Further, CLNN can be trained using gradient-based methods, which improves the learning speed compared to the SOTA.
+
+ # 6 ACKNOWLEDGEMENT
+
+ This research is sponsored by the Rensselaer-IBM AI Research Collaboration (http://airc.rpi.edu), part of the IBM AI Horizons Network; the National Science Foundation under Grant CMMI-1936578; and the Defense Advanced Research Projects Agency (DARPA) through Cooperative Agreement D20AC00004 awarded by the U.S. Department of the Interior (DOI), Interior Business Center. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
+
+ # REFERENCES
+
+ Odd Aalen, Ornulf Borgan, and Hakon Gjessing. Survival and Event History Analysis: A Process Point of View. Springer Science & Business Media, 2008.
+ Emmanuel Bacry, Iacopo Mastromatteo, and Jean-François Muzy. Hawkes processes in finance. Market Microstructure and Liquidity, 1(01):1550005, 2015.
+ Emmanuel Bacry, Martin Bompaire, Philip Deegan, Stéphane Gaiffas, and Søren V Poulsen. tick: A Python library for statistical learning, with an emphasis on Hawkes processes and time-dependent models. The Journal of Machine Learning Research, 18(1):7937-7941, 2017.
+ Debarun Bhattacharjya, Dharmashankar Subramanian, and Tian Gao. Proximal graphical event models. Advances in Neural Information Processing Systems (NeurIPS), 31:8147-8156, 2018.
+ Debarun Bhattacharjya, Tian Gao, and Dharmashankar Subramanian. Order-dependent event models for agent interactions. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pp. 1977-1983, 2020.
+ Debarun Bhattacharjya, Tian Gao, and Dharmashankar Subramanian. Ordinal historical dependence in graphical event models with tree representations. In Proceedings of the Conference on Artificial Intelligence (AAAI), pp. 6759-6767, 2021.
+ Yuanda Chen. Thinning algorithms for simulating point processes. Florida State University, Tallahassee, FL, 2016.
+ Daryl J Daley and David Vere-Jones. An Introduction to the Theory of Point Processes, Volume I: Elementary Theory and Methods. Springer, 2003.
+ Vanessa Didelez. Graphical models for marked point processes based on local independence. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):245-264, 2008.
+ Nan Du, Hanjun Dai, Rakshit Trivedi, Utkarsh Upadhyay, Manuel Gomez-Rodriguez, and Le Song. Recurrent marked temporal point processes: embedding event history to vector. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1555-1564, 2016.
+ Mehrdad Farajtabar, Yichen Wang, Manuel Gomez Rodriguez, Shuang Li, Hongyuan Zha, and Le Song. COEVOLVE: A joint point process model for information diffusion and network coevolution. In Advances in Neural Information Processing Systems (NeurIPS), volume 28, pp. 1954-1962, 2015.
+ Tian Gao, Dharmashankar Subramanian, Karthikeyan Shanmugam, Debarun Bhattacharjya, and Nicholas Mattei. A multi-channel neural graphical event model with negative evidence. In Proceedings of the Conference on Artificial Intelligence (AAAI), pp. 3946-3953, 2020.
+ Scott Grant and Buddy Betts. Encouraging user behaviour with achievements: An empirical study. In Proceedings of the 10th Working Conference on Mining Software Repositories, MSR '13, pp. 65-68. IEEE Press, 2013.
+ Asela Gunawardana and Chris Meek. Universal models of multivariate temporal point processes. In Artificial Intelligence and Statistics, pp. 556-563. PMLR, 2016.
+ Asela Gunawardana, Christopher Meek, and Puyang Xu. A model for temporal dependencies in event streams. Advances in Neural Information Processing Systems (NeurIPS), 24, 2011.
+ Patrick J Hurley. A Concise Introduction to Logic. Cengage Learning, 2014.
+ Shuang Li, Lu Wang, Ruizhi Zhang, Xiaofu Chang, Xuqin Liu, Yao Xie, Yuan Qi, and Le Song. Temporal logic point processes. In International Conference on Machine Learning, pp. 5990-6000. PMLR, 2020.
+ Shuang Li, Mingquan Feng, Lu Wang, Abdelmajid Essofi, Yufeng Cao, Junchi Yan, and Le Song. Explaining point processes by learning interpretable temporal logic rules. In International Conference on Learning Representations, 2021.
+ Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019.
+ Noushin Mehdipour, Cristian-Ioan Vasile, and Calin Belta. Specifying user preferences using weighted signal temporal logic. IEEE Control Systems Letters, 5(6):2006-2011, 2021.
+ Hongyuan Mei and Jason M Eisner. The neural Hawkes process: A neurally self-modulating multivariate point process. Advances in Neural Information Processing Systems (NeurIPS), 30:6757-6767, 2017.
+ Sean P O'Brien. Crisis early warning and decision support: Contemporary approaches and thoughts on future research. International Studies Review, 12(1):87-104, 2010.
+ Ryan Riegel, Alexander Gray, Francois Luus, Naweed Khan, Ndivhuwo Makondo, Ismail Yunus Akhalwaya, Haifeng Qian, Ronald Fagin, Francisco Barahona, Udit Sharma, et al. Logical neural networks. arXiv preprint arXiv:2006.13155, 2020.
+ Mohammed Saeed, Mauricio Villarroel, Andrew T Reisner, Gari Clifford, Li-Wei Lehman, George Moody, Thomas Heldt, Tin H Kyaw, Benjamin Moody, and Roger G Mark. Multiparameter intelligent monitoring in intensive care II (MIMIC-II): A public-access intensive care unit database. Critical Care Medicine, 39(5):952, 2011.
+ Prithviraj Sen, Bruno WSR de Carvalho, Ryan Riegel, and Alexander Gray. Neuro-symbolic inductive logic programming with logical neural networks. In Proceedings of the Conference on Artificial Intelligence (AAAI), volume 36, pp. 8212-8219, 2022.
+ Jeremy C. Weiss and David Page. Forest-based point process for event prediction from electronic health records. In Machine Learning and Knowledge Discovery in Databases, pp. 547-562, 2013.
+ Shuai Xiao, Junchi Yan, Xiaokang Yang, Hongyuan Zha, and Stephen Chu. Modeling the intensity function of point process via recurrent neural networks. In Proceedings of the Conference on Artificial Intelligence (AAAI), volume 31, pp. 1597-1603, 2017.
+ Hongteng Xu, Dixin Luo, and Hongyuan Zha. Learning Hawkes processes from short doubly-censored event sequences. In International Conference on Machine Learning, pp. 3831-3840. PMLR, 2017.
+ Ruixuan Yan, Agung Julius, Maria Chang, Achille Fokoue, Tengfei Ma, and Rosario Uceda-Sosa. STONE: Signal temporal logic neural network for time series classification. In 2021 International Conference on Data Mining Workshops (ICDMW), pp. 778-787. IEEE, 2021.
+ Ruixuan Yan, Tengfei Ma, Achille Fokoue, Maria Chang, and Agung Julius. Neuro-symbolic models for interpretable time series classification using temporal logic description. In 2022 IEEE International Conference on Data Mining (ICDM), pp. 618-627, 2022. doi: 10.1109/ICDM54844.2022.00072.
+ Qiang Zhang, Aldo Lipani, Omer Kirnap, and Emine Yilmaz. Self-attentive Hawkes process. In International Conference on Machine Learning, pp. 11183-11193. PMLR, 2020.
+ Simiao Zuo, Haoming Jiang, Zichong Li, Tuo Zhao, and Hongyuan Zha. Transformer Hawkes process. In International Conference on Machine Learning, pp. 11692-11702. PMLR, 2020.
+
+ # A FORMULATION OF LOGICAL CONSTRAINTS & OBJECTIVE FUNCTION
+
+ The optimization problem in (10) is formulated by maximizing the log-likelihood subject to the logical constraints for the $\wedge$ and $\vee$ operators. This section discusses the details of the formulation of the two logical constraints and how to formulate the optimization problem while considering them. Without loss of generality, we illustrate the formulation of the constraints for the $\wedge$ operator; the constraints for the $\vee$ operator can be derived from those for the $\wedge$ operator using De Morgan's law.
+
+ # - Logical constraints for $\wedge$ operator.
+
+ Let $x, y \in [0,1]$ denote the inputs of the $\wedge$ operator, and $f(x,y)$ denote the quantitative satisfaction of $\wedge$. The conventional characteristics of the $\wedge$ operator are as follows: 1) $f(x,y)$ is low when either input is low, and 2) $f(x,y)$ is high when both inputs are high. However, we associate each input with a nonnegative weight, implying that an input with a zero weight should not affect the output. In other words, if a low input has a zero weight, it should not affect the output of $f(x,y)$. Therefore, we require the $\wedge$ operator to exhibit the following characteristics: 1) $f(x,y)$ is low when both inputs are low, and 2) $f(x,y)$ is high when both inputs are high. Here we introduce a user-defined hyperparameter $\alpha \in [\frac{1}{2},1]$ to capture low vs. high: $x \in [0,1-\alpha)$ represents low and $x \in [\alpha,1]$ represents high. According to the above characteristics, we have (Sen et al., 2022)
+
+ $$
+ f(x,y) \leq 1 - \alpha, \quad \forall x, y \in [0, 1 - \alpha), \qquad f(x,y) \geq \alpha, \quad \forall x, y \in [\alpha, 1]. \tag{13}
+ $$
+
+ Here we follow a specific choice of $f$ by using a triangular norm ($t$-norm) and define the quantitative satisfaction function of $\wedge$ as (Riegel et al., 2020)
+
+ $$
+ p\left(\mathcal{C}', \phi_1^{w_1} \wedge \phi_2^{w_2}, t\right) = f\left(\beta - \sum_{j=1}^{2} w_j\left(1 - p\left(\mathcal{C}', \phi_j, t\right)\right)\right), \tag{14}
+ $$
+
+ $$
+ \text{subject to} \quad \beta - \sum_{j=1}^{2} w_j (1 - \alpha) \geq \alpha, \quad \beta - \sum_{j=1}^{2} w_j \alpha \leq 1 - \alpha, \tag{15}
+ $$
+
+ where $f(z) = \max\{0, \min\{z, 1\}\}$ is introduced to clamp the truth value into the range $[0, 1]$.
+
+ # - Logical constraints for $\vee$ operator.
+
+ Using De Morgan's law, we can derive the quantitative satisfaction function and the logical constraints for the $\vee$ operator with 2 inputs as follows:
+
+ $$
+ p\left(\mathcal{C}', \phi_1^{w_1} \vee \phi_2^{w_2}, t\right) = f\left(1 - \beta + \sum_{j=1}^{2} w_j\, p\left(\mathcal{C}', \phi_j, t\right)\right), \tag{16}
+ $$
+
+ $$
+ \text{subject to} \quad 1 - \beta + \sum_{j=1}^{2} w_j \alpha \geq \alpha, \quad 1 - \beta + \sum_{j=1}^{2} w_j (1 - \alpha) \leq 1 - \alpha. \tag{17}
+ $$
346
+
347
+ Here we show the characteristics of the activation functions for the $\wedge$ and $\vee$ operators using Figure 4. Figure 4(a) shows the truth value of the $\wedge$ operator with $\alpha = 0.7$ . Figure 4(b) shows the truth value of the $\wedge$ operator with $\alpha = 0.9$ . It can be distinctly observed that $f(x,y)$ is close to 0 when both $x$ and $y$ are low, and $f(x,y)$ is close to 1 when both $x$ and $y$ are high. In addition, the unconstrained region for $\alpha = 0.9$ is larger than the unconstrained region for $\alpha = 0.7$ . Figure 4(c) shows the truth value of the $\vee$ operator with $\alpha = 0.7$ . It is obvious that $f(x,y)$ is close to 0 when both $x$ and $y$ are low, and $f(x,y)$ is close to 1 when both $x$ and $y$ are high.
348
+
349
+ In general, we could extend the quantitative satisfaction for the $\wedge$ and $\vee$ operators in (14) - (17) to $k$ -ary conjunction and $k$ -ary disjunction. The $k$ -ary conjunction formulation is expressed as follows.
350
+
351
+ $$
352
+ p \left(\mathcal {C} ^ {\prime}, \phi_ {1} ^ {w _ {1}} \wedge \phi_ {2} ^ {w _ {2}} \dots \wedge \phi_ {k} ^ {w _ {k}}, t\right) = f \left(\beta - \sum_ {j = 1} ^ {k} w _ {j} \left(1 - p \left(\mathcal {C} ^ {\prime}, \phi_ {j}, t\right)\right)\right), \tag {18}
353
+ $$
354
+
355
+ ![](images/d11ca93f13aa2d576cabacd06a21df8f552d4545ee7577c50b2397da4f4f2f16.jpg)
+ (a)
+ 
+ ![](images/44228cc4cb20f4bb5ebcf91b59b11946430cf8359c5cdc1bfbbcccf7866d5bec.jpg)
+ (b)
+ 
+ ![](images/1110140bea7cd3027c436d8e7f557f9ff7327030c05282067db9188a4a31941c.jpg)
+ (c)
+ 
+ Figure 4: Plot of truth degree for (a) CLNN-$\wedge$ with $\alpha = 0.7$, (b) CLNN-$\wedge$ with $\alpha = 0.9$, (c) CLNN-$\vee$ with $\alpha = 0.7$.
364
+
365
+ $$
366
+ \text{subject to} \quad \beta - \sum_{j = 1}^{k} w_{j} (1 - \alpha) \geq \alpha, \quad \beta - \sum_{j = 1}^{k} w_{j} \alpha \leq 1 - \alpha. \tag{19}
367
+ $$
368
+
369
+ The $k$ -ary disjunction formulation is expressed as follows.
370
+
371
+ $$
372
+ p \left(\mathcal{C}^{\prime}, \phi_{1}^{w_{1}} \vee \phi_{2}^{w_{2}} \vee \dots \vee \phi_{k}^{w_{k}}, t\right) = f \left(1 - \beta + \sum_{j = 1}^{k} w_{j} \, p \left(\mathcal{C}^{\prime}, \phi_{j}, t\right)\right), \tag{20}
373
+ $$
374
+
375
+ $$
376
+ \text{subject to} \quad 1 - \beta + \sum_{j = 1}^{k} w_{j} \alpha \geq \alpha, \quad 1 - \beta + \sum_{j = 1}^{k} w_{j} (1 - \alpha) \leq 1 - \alpha. \tag{21}
377
+ $$
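+ 
+ As a concrete reference, the following is a minimal NumPy sketch of the $k$-ary activations in (18) and (20); the function and variable names are ours, not the paper's.
+ 
+ ```python
+ import numpy as np
+ 
+ def f_clamp(z):
+     # f(z) = max{0, min{z, 1}}
+     return np.clip(z, 0.0, 1.0)
+ 
+ def weighted_and(p, w, beta):
+     # Eq. (18): k-ary weighted conjunction
+     return f_clamp(beta - np.sum(w * (1.0 - p)))
+ 
+ def weighted_or(p, w, beta):
+     # Eq. (20): k-ary weighted disjunction
+     return f_clamp(1.0 - beta + np.sum(w * p))
+ 
+ # With unit weights and beta = 1.5 (feasible for alpha = 0.5; see (26) below),
+ # two high inputs yield a high output for both operators:
+ p, w = np.array([0.9, 0.8]), np.ones(2)
+ print(weighted_and(p, w, 1.5), weighted_or(p, w, 1.5))  # 1.0 1.0
+ ```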
378
+
379
+ With the above constraints, we can formulate the maximum likelihood estimation problem as
380
+
381
+ $$
382
+ \min \; -LL_{l} \tag{22}
383
+ $$
384
+
385
+ $$
386
+ s. t. \quad \forall \phi \in \Phi , \forall 1 \leq k \leq K _ {\phi} ^ {\wedge}, \beta_ {k} - \sum_ {i \in I _ {k}} w _ {i, k} (1 - \alpha) \geq \alpha , \beta_ {k} - \sum_ {i \in I _ {k}} w _ {i, k} \alpha \leq 1 - \alpha , \tag {23}
387
+ $$
388
+
389
+ $$
390
+ \forall \phi \in \Phi , \forall 1 \leq k ^ {\prime} \leq K _ {\phi} ^ {\vee}, 1 - \beta_ {k ^ {\prime}} + \sum_ {i \in I _ {k ^ {\prime}}} w _ {i, k ^ {\prime}} \alpha \geq \alpha , 1 - \beta_ {k ^ {\prime}} + \sum_ {i \in I _ {k ^ {\prime}}} w _ {i, k ^ {\prime}} (1 - \alpha) \leq 1 - \alpha . \tag {24}
391
+ $$
392
+
393
+ In this paper, we set $\alpha = 0.5$; the constraints in (19) thus become
394
+
395
+ $$
+ \sum_{i = 1}^{k} w_{i} \geq 2 \beta - 1, \quad \sum_{i = 1}^{k} w_{i} \leq 2 \beta - 1, \quad 2 \beta - 1 \geq 0, \quad w_{i} \geq 0. \tag{25}
+ $$
410
+
411
+ Reformulating the above constraints, we have
412
+
413
+ $$
+ \sum_{i = 1}^{k} w_{i} = 2 \beta - 1, \tag{26}
+ $$
+ 
+ $$
+ \beta \geq 0.5, \quad w_{i} \geq 0. \tag{27}
+ $$
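+ 
+ For concreteness, with $k = 2$ and $w_1 = w_2 = 1$, (26) forces $\beta = \frac{3}{2}$, and the conjunction activation in (14) reduces to $f(p_1 + p_2 - \frac{1}{2})$ with $p_j = p(\mathcal{C}', \phi_j, t)$: the output is below $\frac{1}{2}$ whenever both truth values are below $\frac{1}{2}$, and at least $\frac{1}{2}$ whenever both are at least $\frac{1}{2}$, as required.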
424
+
425
+ The above constraints hold for each conjunction operator in $\phi$ . Therefore, we can incorporate the constraints in (26) into the objective function, which becomes
426
+
427
+ $$
428
+ \min - L L _ {l} + \sum_ {k = 1} ^ {K _ {\phi} ^ {\wedge}} \left(\sum_ {i \in I _ {k}} w _ {i, k} - 2 \beta_ {k} + 1\right) ^ {2}, \tag {28}
429
+ $$
430
+
431
+ $$
432
+ \text{subject to} \quad w_{i, k} \geq 0, \; \beta_{k} \geq 0.5, \quad \forall i \in I_{k}, \; \forall 1 \leq k \leq K_{\phi}^{\wedge}, \; \forall \phi \in \Phi. \tag{29}
433
+ $$
434
+
435
+ Similarly, the logical constraints for the $\lor$ operator are given in (21). Setting $\alpha = 0.5$, the constraints in (21) become
436
+
437
+ $$
+ \sum_{i = 1}^{k} w_{i} \geq 2 \beta - 1, \quad \sum_{i = 1}^{k} w_{i} \leq 2 \beta - 1, \quad 2 \beta - 1 \geq 0, \quad w_{i} \geq 0. \tag{30}
+ $$
452
+
453
+ Reformulating the above constraints, we have
454
+
455
+ $$
+ \sum_{i = 1}^{k} w_{i} = 2 \beta - 1, \tag{31}
+ $$
+ 
+ $$
+ \beta \geq 0.5, \quad w_{i} \geq 0. \tag{32}
+ $$
466
+
467
+ The above constraints hold for each disjunction operator in $\phi$ . Therefore, we can incorporate the constraints in (31) into the objective function. The maximum likelihood estimation problem then becomes
468
+
469
+ $$
470
+ \min - L L _ {l} + \sum_ {k = 1} ^ {K _ {\phi} ^ {\wedge}} \left(\sum_ {i \in I _ {k}} w _ {i, k} - 2 \beta_ {k} + 1\right) ^ {2} + \sum_ {k ^ {\prime} = 1} ^ {K _ {\phi} ^ {\vee}} \left(\sum_ {i \in I _ {k ^ {\prime}}} w _ {i, k ^ {\prime}} - 2 \beta_ {k ^ {\prime}} + 1\right) ^ {2}, \tag {33}
471
+ $$
472
+
473
+ $$
+ \text{subject to} \quad w_{i, k} \geq 0, \; \beta_{k} \geq 0.5, \quad \forall i \in I_{k}, \; \forall 1 \leq k \leq K_{\phi}^{\wedge}, \; \forall \phi \in \Phi,
+ $$
+ 
+ $$
+ w_{i, k^{\prime}} \geq 0, \; \beta_{k^{\prime}} \geq 0.5, \quad \forall i \in I_{k^{\prime}}, \; \forall 1 \leq k^{\prime} \leq K_{\phi}^{\vee}, \; \forall \phi \in \Phi.
+ $$
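+ 
+ To make the optimization concrete, below is a minimal PyTorch-style sketch of the penalized objective (33). The container names, the penalty coefficient rho, and the projection step are our assumptions; the paper does not specify how the remaining hard constraints are enforced during training.
+ 
+ ```python
+ import torch
+ 
+ def penalized_objective(nll, and_cells, or_cells, rho=1.0):
+     # Eq. (33): -LL plus a squared penalty (sum_i w_i - 2*beta + 1)^2 for
+     # every conjunction and disjunction cell. Each cell is an assumed pair
+     # (w, beta) of a weight vector and a scalar bias; the paper's objective
+     # corresponds to rho = 1.
+     penalty = torch.zeros((), dtype=nll.dtype)
+     for w, beta in list(and_cells) + list(or_cells):
+         penalty = penalty + (w.sum() - 2.0 * beta + 1.0) ** 2
+     return nll + rho * penalty
+ 
+ def project_constraints(and_cells, or_cells):
+     # One common way to keep w >= 0 and beta >= 0.5 after each gradient
+     # step: project back onto the feasible set by clamping in place.
+     with torch.no_grad():
+         for w, beta in list(and_cells) + list(or_cells):
+             w.clamp_(min=0.0)
+             beta.clamp_(min=0.5)
+ ```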
478
+
479
+ # B PROOF OF THEOREM 8
480
+
481
+ The activation function designed for the $\wedge$ operator satisfies the properties of nonimpact for zero weights, impact ordering, and monotonicity. Without loss of generality, we present the proof for the $\wedge$ operator connecting two clauses; it generalizes directly to the $\wedge$ operator connecting $k$ clauses.
482
+
483
+ Proof 1 Here we prove that the activation function for the $\wedge$ operator satisfies each of the properties above.
484
+
485
+ # - Nonimpact for zero weights.
486
+
487
+ This means that if $w_{j} = 0$ for some $j \in \{1,2\}$, then $p(\mathcal{C}',\phi_j,t)$ should have no impact on $p(\mathcal{C}',\phi_1^{w_1}\wedge \phi_2^{w_2},t)$. Without loss of generality, suppose $w_{1} = 0$; then we have
488
+
489
+ $$
490
+ \begin{aligned} p \left(\mathcal{C}^{\prime}, \phi_{1}^{w_{1}} \wedge \phi_{2}^{w_{2}}, t\right) &= f \left(\beta - 0 \cdot \left(1 - p \left(\mathcal{C}^{\prime}, \phi_{1}, t\right)\right) - w_{2} \cdot \left(1 - p \left(\mathcal{C}^{\prime}, \phi_{2}, t\right)\right)\right) \\ &= f \left(\beta - w_{2} \cdot \left(1 - p \left(\mathcal{C}^{\prime}, \phi_{2}, t\right)\right)\right), \end{aligned} \tag{34}
491
+ $$
492
+
493
+ meaning $p(\mathcal{C}',\phi_1,t)$ has no impact on $p(\mathcal{C}',\phi_1^{w_1}\wedge \phi_2^{w_2},t)$.
494
+
495
+ # - Impact ordering.
496
+
497
+ This means that the truth degree of the subformula with the higher weight has a greater impact on $p(\mathcal{C}', \phi_1^{w_1} \wedge \phi_2^{w_2}, t)$. Mathematically, we need to prove that if $p(\mathcal{C}', \phi_1, t) = p(\mathcal{C}', \phi_2, t)$ and $w_1 \geq w_2$, then
498
+
499
+ $$
500
+ \frac {\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} ^ {w _ {1}} \wedge \phi_ {2} ^ {w _ {2}} , t\right)}{\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} , t\right)} \geq \frac {\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} ^ {w _ {1}} \wedge \phi_ {2} ^ {w _ {2}} , t\right)}{\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {2} , t\right)}. \tag {35}
501
+ $$
502
+
503
+ As $f(x) = \max \{0, \min \{x, 1\}\}$ , we have
504
+
505
+ $$
506
+ \frac{d f}{d x} = \begin{cases} 0, & \text{if } x < 0, \\ 1, & \text{if } 0 < x < 1, \\ 0, & \text{if } x > 1. \end{cases} \tag{36}
507
+ $$
508
+
509
+ If $\beta -\sum_{j = 1}^{2}w_{j}(1 - p(\mathcal{C}^{\prime},\phi_{j},t)) < 0$ or $\beta -\sum_{j = 1}^{2}w_{j}(1 - p(\mathcal{C}^{\prime},\phi_{j},t)) > 1$, then we have
510
+
511
+ $$
512
+ \frac {\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} ^ {w _ {1}} \wedge \phi_ {2} ^ {w _ {2}} , t\right)}{\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} , t\right)} = \frac {\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} ^ {w _ {1}} \wedge \phi_ {2} ^ {w _ {2}} , t\right)}{\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {2} , t\right)} = 0. \tag {37}
513
+ $$
514
+
515
+ Also, if $0 < \beta -\sum_{j = 1}^{2}w_{j}(1 - p(\mathcal{C}^{\prime},\phi_{j},t)) < 1$ , then we have
516
+
517
+ $$
518
+ \frac{\partial \left(\beta - \sum_{j = 1}^{2} w_{j} \left(1 - p \left(\mathcal{C}^{\prime}, \phi_{j}, t\right)\right)\right)}{\partial p \left(\mathcal{C}^{\prime}, \phi_{1}, t\right)} = w_{1}, \tag{38}
519
+ $$
520
+
521
+ and
522
+
523
+ $$
524
+ \frac{\partial \left(\beta - \sum_{j = 1}^{2} w_{j} \left(1 - p \left(\mathcal{C}^{\prime}, \phi_{j}, t\right)\right)\right)}{\partial p \left(\mathcal{C}^{\prime}, \phi_{2}, t\right)} = w_{2}. \tag{39}
525
+ $$
526
+
527
+ As $w_{1}\geq w_{2}$ , the following holds:
528
+
529
+ $$
530
+ \frac {\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} ^ {w _ {1}} \wedge \phi_ {2} ^ {w _ {2}} , t\right)}{\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} , t\right)} \geq \frac {\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {1} ^ {w _ {1}} \wedge \phi_ {2} ^ {w _ {2}} , t\right)}{\partial p \left(\mathcal {C} ^ {\prime} , \phi_ {2} , t\right)}, \tag {40}
531
+ $$
532
+
533
+ which proves the impact ordering property holds.
534
+
535
+ # - Monotonicity.
536
+
537
+ This means that $p(\mathcal{C}', \phi_1^{w_1} \wedge \phi_2^{w_2}, t)$ is monotonically nondecreasing in each $p(\mathcal{C}', \phi_j, t)$, i.e.,
538
+
539
+ $$
540
+ f \left(\beta - \sum_{j = 1}^{2} w_{j} \left(1 - p \left(\mathcal{C}^{\prime}, \phi_{j}, t\right)\right)\right) \leq f \left(\beta - \sum_{j = 1}^{2} w_{j} \left(1 - p \left(\mathcal{C}^{\prime}, \phi_{j}, t\right) - d\right)\right) \quad \text{for } d \geq 0. \tag{41}
541
+ $$
542
+
543
+ First, note that $\beta -\sum_{j = 1}^{2}w_{j}(1 - p(\mathcal{C}^{\prime},\phi_{j},t))$ can be rewritten as
544
+
545
+ $$
546
+ \beta - \sum_ {j = 1} ^ {2} w _ {j} \left(1 - p \left(\mathcal {C} ^ {\prime}, \phi_ {j}, t\right)\right) = \beta - w _ {1} - w _ {2} + w _ {1} p \left(\mathcal {C} ^ {\prime}, \phi_ {1}, t\right) + w _ {2} p \left(\mathcal {C} ^ {\prime}, \phi_ {2}, t\right). \tag {42}
547
+ $$
548
+
549
+ Since $w_1, w_2 \geq 0$, the right-hand side of (42) is nondecreasing in $p(\mathcal{C}', \phi_1, t)$ and $p(\mathcal{C}', \phi_2, t)$. Moreover, from (36) we know that $f(x) = \max \{0, \min \{x, 1\}\}$ is monotonically nondecreasing, so
550
+
551
+ $$
552
+ f \left(\beta - \sum_ {j = 1} ^ {2} w _ {j} \left(1 - p \left(\mathcal {C} ^ {\prime}, \phi_ {j}, t\right)\right)\right) \leq f \left(\beta - \sum_ {j = 1} ^ {2} w _ {j} \left(1 - p \left(\mathcal {C} ^ {\prime}, \phi_ {j}, t\right) - d\right)\right), d \geq 0. \tag {43}
553
+ $$
554
+
555
+ Thus the property of monotonicity is satisfied.
556
+
557
+ # C EXPERIMENT RESULTS OF SYNTHETIC DATASETS
558
+
559
+ Dataset Generation. In the experiments on synthetic datasets, we manually generate three synthetic datasets under different settings; the details and results for the first synthetic dataset are reported in Section 4.2. Each setting considers a different order representation, a different number of event labels, or a different intensity for the causal event labels.
560
+
561
+ ![](images/6615481611b2355c10d747efa0377f49ce26dac2c3075e65e67373d9e97fbe23.jpg)
562
+ Figure 5: Model structure of $\hat{\phi}_1$ for generating the first synthetic dataset.
563
+
564
+ # C.1 SYNTHETIC DATASET-1 (SYN-1).
565
+
566
+ Generation process. The first synthetic dataset contains 4 event labels: $A, B, C$ , and $D$ , where $D$ is the event for prediction, and $A, B, C$ are causal events. The wCL formula used to generate event $D$ in the first synthetic dataset is set as
567
+
568
+ $$
569
+ \hat {\phi} _ {1} = \left(c _ {A} - c _ {B} > 1\right) ^ {1} \wedge \left(c _ {A} - c _ {C} > 3\right) ^ {1}, \tag {44}
570
+ $$
571
+
572
+ whose unweighted version reads as "If $A$ happens before $B$ for at least 1 time unit and $A$ happens before $C$ for at least 3 time units, then $D$ will happen."
573
+
574
+ Here we consider event labels $A, B, C$ as free predicates, whose occurrences are generated by a homogeneous Poisson process. The homogeneous intensity rates for $A, B, C$ are set to $\lambda_A = 0.2$, $\lambda_B = 0.2$, and $\lambda_C = 0.2$. The algorithm used to generate instances of $A, B, C$ is described as Algorithm 1 (Chen, 2016); a runnable sketch follows the pseudocode.
575
+
576
+ Algorithm 1 Simulation of a homogeneous Poisson process with intensity rate $\lambda$ .
577
+ Input: Intensity rate $\lambda$ , simulation horizon $T$
578
+ Output: Occurrence time stamps $\mathcal{T} = \{t_k\}$
579
+ 1: Initialize $n = 0,t_0 = 0$ .
580
+ 2: while True do
581
+ 3: Generate $u\sim$ uniform(0, 1);
582
+ 4: Let $w = -\ln(u) / \lambda$.
583
+ 5: Set $t_{n + 1} = t_n + w$ .
584
+ 6: if $t_{n + 1} > T$ then
585
+ 7: return $\mathcal{T} = \{t_k\}_{k = 1,2,\dots,n}$ .
586
+ 8: else
587
+ 9: Set $n = n + 1$ .
588
+ 10: end if
589
+ 11: end while
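+ 
+ For reference, here is a direct Python transcription of Algorithm 1 (a sketch; the function and variable names are ours):
+ 
+ ```python
+ import math
+ import random
+ 
+ def sample_homogeneous_poisson(lam, T, seed=0):
+     # Algorithm 1: exponential inter-arrival times with rate lam on [0, T]
+     rng = random.Random(seed)
+     t, times = 0.0, []
+     while True:
+         u = rng.random()                # u ~ uniform(0, 1)
+         t += -math.log(1.0 - u) / lam   # w = -ln(u)/lam; 1 - u avoids log(0)
+         if t > T:
+             return times
+         times.append(t)
+ 
+ # e.g., occurrences of label A with lambda_A = 0.2 over a horizon of 100
+ print(sample_homogeneous_poisson(0.2, 100.0)[:3])
+ ```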
590
+
591
+ With the above algorithm, we can generate the occurrences of event labels $A, B$, and $C$. Next, we build a CLNN for $\hat{\phi}_1 = (c_A - c_B > 1)^1 \wedge (c_A - c_C > 3)^1$ to calculate the conditional intensity rate $\lambda_{D|\hat{\phi}_1}$, whose model structure is shown in Figure 5. After obtaining $\lambda_{D|\hat{\phi}_1}(t)$, we can use Algorithm 2 (Chen, 2016) to generate the occurrences of $D$; a runnable sketch of this generation pipeline follows Algorithm 2.
592
+
593
+ Results. The rules learned by CLNN, TELLER, and OGEM-tab on the first synthetic dataset are presented in Table 5, where the paired order predicate with the highest weight among the two candidates is presented.
594
+
595
+ Algorithm 2 Simulation of an inhomogeneous Poisson process with intensity rate $\lambda(t)$ by thinning.
+ Input: Intensity rate $\lambda(t)$, simulation horizon $T$
+ Output: Occurrence time stamps $\mathcal{T} = \{t_k\}$
+ 1: Initialize $n = m = 0$, $t_0 = s_0 = 0$, $\bar{\lambda} = \sup_{0 \leq t \leq T} \lambda(t)$.
+ 2: while $s_m < T$ do
+ 3: Generate $u \sim$ uniform(0, 1).
+ 4: Let $w = -\ln(u) / \bar{\lambda}$.
+ 5: Set $s_{m+1} = s_m + w$.
+ 6: Generate $D \sim$ uniform(0, 1).
+ 7: if $D \leq \lambda(s_{m+1}) / \bar{\lambda}$ then
+ 8: Set $t_{n+1} = s_{m+1}$.
+ 9: Set $n = n + 1$.
+ 10: end if
+ 11: Set $m = m + 1$.
+ 12: end while
+ 13: if $t_n \leq T$ then
+ 14: return $\mathcal{T} = \{t_k\}_{k = 1,2,\dots,n}$.
+ 15: else
+ 16: return $\mathcal{T} = \{t_k\}_{k = 1,2,\dots,n-1}$.
+ 17: end if
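+ 
+ The following sketch combines Algorithm 2 with a hard-threshold surrogate of $\lambda_{D|\hat{\phi}_1}(t)$. The true intensity is produced by the CLNN of Figure 5; here we approximate it by the Boolean version of $\hat{\phi}_1$ with assumed base and triggered rates lam0 and lam1, and we assume the clock $c_X(t)$ is the time since the most recent occurrence of $X$.
+ 
+ ```python
+ import math
+ import random
+ 
+ def clock(times, t):
+     # c_X(t): time since the most recent occurrence of X at or before t
+     past = [s for s in times if s <= t]
+     return t - past[-1] if past else None  # None: X has not occurred yet
+ 
+ def intensity_D(t, tA, tB, tC, lam0=0.05, lam1=1.0):
+     # Surrogate for lambda_{D|phi_1}(t): the rate jumps from lam0 to lam1
+     # whenever (c_A - c_B > 1) and (c_A - c_C > 3) both hold.
+     cA, cB, cC = clock(tA, t), clock(tB, t), clock(tC, t)
+     if None in (cA, cB, cC):
+         return lam0
+     return lam1 if (cA - cB > 1.0 and cA - cC > 3.0) else lam0
+ 
+ def sample_by_thinning(lam, lam_bar, T, rng):
+     # Algorithm 2: propose at rate lam_bar >= sup lam(t), then accept a
+     # candidate point s with probability lam(s) / lam_bar
+     s, times = 0.0, []
+     while s < T:
+         s += -math.log(1.0 - rng.random()) / lam_bar
+         if s < T and rng.random() <= lam(s) / lam_bar:
+             times.append(s)
+     return times
+ 
+ rng = random.Random(1)
+ tA, tB, tC = (sample_by_thinning(lambda s: 0.2, 0.2, 100.0, rng) for _ in range(3))
+ tD = sample_by_thinning(lambda s: intensity_D(s, tA, tB, tC), 1.0, 100.0, rng)
+ print(len(tA), len(tB), len(tC), len(tD))
+ ```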
606
+
607
+ <table><tr><td>Dataset</td><td>Syn-1</td></tr><tr><td>N (# events)</td><td>N = 4, L = {A, B, C, D}</td></tr><tr><td>Ground truth</td><td>φ1 = (cA - cB > 1)1 ∧ (cA - cC > 3)1</td></tr><tr><td>CLNN's rule</td><td>(cA - cB > 1.21)1.52 ∧ (cA - cC > 3.00)1.41 ∧ (cA - cD > 0.82)0.33 ∧ (cB - cC > 4.33)0 ∧ (cB - cD > 10.69)0 ∧ (cD - cC > -6.57)0.16</td></tr><tr><td>TELLER's rule</td><td>A before D, B before D, C before D, A before D and C before D</td></tr><tr><td>OGEM-tab's rule</td><td>Excitation: [B], [C], [C, B], [B, C], [A, C, B], [A, B, C]; Inhibitory: [A], [B, A], [B, A, C], [C, B, A], [A, B], [A, C], [B, C, A], [C, A, B], [C, A]</td></tr></table>
608
+
609
+ Table 5: Comparison of rule discovery for CLNN, TELLER, and OGEM-tab on the Syn-1 dataset.
610
+
611
+ It can be clearly observed that by truncating the predicates with small weights, we obtain the formula
612
+
613
+ $$
614
+ \phi_ {1} = \left(c _ {A} - c _ {B} > 1. 2 1\right) ^ {1. 5 2} \wedge \left(c _ {A} - c _ {C} > 3. 0 0\right) ^ {1. 4 1}, \tag {45}
615
+ $$
616
+
617
+ which matches well with the ground-truth rule. However, TELLER cannot capture the paired order representation between $A$ and $B$ or between $A$ and $C$. OGEM-tab captures the order representations $[A, B]$ and $[A, C]$ as inhibitory causes, which contradicts the ground-truth rule.
618
+
619
+ # C.2 SYNTHETIC DATASET-2 (SYN-2).
620
+
621
+ Generation Process. The second synthetic dataset contains 5 event labels: $A, B, C, D$ and $E$ , where $E$ is the event for prediction, and $A, B, C, D$ are causal events. The wCL formula used to generate the occurrence of event $E$ in the second synthetic dataset is set as
622
+
623
+ $$
624
+ \hat {\phi} _ {2} = \left(c _ {A} - c _ {B} > 0. 5\right) ^ {1} \wedge \left(c _ {A} - c _ {C} > 1. 5\right) ^ {1} \wedge \left(c _ {C} - c _ {D} > 2\right) ^ {1}, \tag {46}
625
+ $$
626
+
627
+ whose unweighted version reads as "If $A$ happens before $B$ for at least 0.5 time units, $A$ happens before $C$ for at least 1.5 time units, and $C$ happens before $D$ for at least 2 time units, then $E$ will happen."
628
+
629
+ ![](images/5c9eb772a4b858637daaf47666b26dd2ef522b35246e1a44e155613c66b870c4.jpg)
630
+ Figure 6: Model structure of $\hat{\phi}_2$ for generating the second synthetic dataset.
631
+
632
+ The occurrences of events $A, B, C$ and $D$ are generated using Algorithm 1, in which $\lambda_A = \lambda_B = \lambda_C = \lambda_D = 0.2$. After obtaining the occurrences of $A, B, C$ and $D$, we simulate the generation of event label $E$ using Algorithm 2, in which the intensity rate $\lambda_{E|\hat{\phi}_2}(t)$ is computed using the model shown in Figure 6.
633
+
634
+ Results. The rules learned by CLNN, TELLER, and OGEM-tab on the second synthetic dataset are presented in Table 6, where the paired order predicate with the highest weight is presented. It can be clearly observed that by truncating the predicates with small weights, CLNN learns the wCL formula
635
+
636
+ $$
637
+ \phi_ {2} = \left(c _ {A} - c _ {B} > 0. 7 7\right) ^ {1. 2 7} \wedge \left(c _ {A} - c _ {C} > 2. 0 9\right) ^ {1. 1 5} \wedge \left(c _ {C} - c _ {D} > 2. 6 0\right) ^ {1. 0 6}, \tag {47}
638
+ $$
639
+
640
+ whose order representation matches well with the ground-truth rule. Nevertheless, TELLER's rule only captures orderings relative to $E$; the orderings between $A$ and $B$, $A$ and $C$, and $C$ and $D$ are not learned. OGEM-tab's rules only capture that event labels $D$ and $E$ can excite the occurrence of event label $E$, and fail to capture the dependence of event label $E$'s occurrence on the order relations between $A$ and $B$, $A$ and $C$, or $C$ and $D$.
641
+
642
+ <table><tr><td>Dataset</td><td>Syn-2</td></tr><tr><td>N (# events)</td><td>N=5, L={A,B,C,D,E}</td></tr><tr><td>Ground truth</td><td>φ2=(cA-cB>0.5)1∧(cA-cC>1.5)1∧(cC-cD>2)1</td></tr><tr><td>CLNN's rule</td><td>(cA-cB>0.77)1.27∧(cA-cC>2.09)1.15∧((cA-cD)>−5.00)0.25∧((cA-cE)>−2.74)0.09∧(cB-cC>−9.31)0.02∧(cB-cD>−8.54)0.08∧(cB-cE>2.07)0∧((cC-cD)>2.60)1.06∧((cC-cE)>−4.27)0.03∧((cD-cE)>1.17)0.07</td></tr><tr><td>TELLER's rule</td><td>A before E, B before E, A and B before E, A and C before E</td></tr><tr><td>OGEM-tab's rule</td><td>Excitation: [D], [D,E], [E], [E,D]; Inhibitory: [D,A], [A], [A,D], [A,D,E], [E,D,A], [D,A,E], [A,E], [E,A], [D,E,A], [A,E,D], [E,A,D]</td></tr></table>
643
+
644
+ Table 6: Comparison of rule discovery for CLNN, TELLER, and OGEM-tab on the Syn-2 dataset.
645
+
646
+ # C.3 SYNTHETIC DATASET-3 (SYN-3).
647
+
648
+ The third synthetic dataset is generated using a richer scheme that combines the generation schemes of the first and second synthetic datasets. The third synthetic dataset
649
+
650
+ ![](images/940514745458e4b682db12a6e7789c21a874a28f6e3a43dcfc1c2c79cb00a3da.jpg)
651
+ Figure 7: Model structure of $\hat{\phi}_{3,1}$ for generating the occurrence of $D$ in the Syn-3 dataset.
652
+
653
+ ![](images/b081e37e363fe8a10159d63a78d5d0118d701b9c610e183152ab23a0ad8a4a74.jpg)
654
+ Figure 8: Model structure of $\hat{\phi}_{3,2}$ for generating the occurrence of $E$ in the Syn-3 dataset.
655
+
656
+ includes five event labels: $A, B, C, D$ and $E$. Here we consider $A, B$, and $C$ as the causal events for the occurrence of $D$, and $A, B, C$, and $D$ as the causal events for the occurrence of $E$. The occurrences of events $A, B, C$ are generated using Algorithm 1, in which $\lambda_{A} = 0.2$, $\lambda_{B} = 0.2$, and $\lambda_{C} = 0.2$. The wCL formula used to generate the occurrence of event $D$ is set as
657
+
658
+ $$
659
+ \hat {\phi} _ {3, 1} = \left(c _ {B} - c _ {A} > - 2\right) ^ {1} \wedge \left(c _ {C} - c _ {A} > - 5\right) ^ {1}, \tag {48}
660
+ $$
661
+
662
+ whose unweighted version reads as "If $A$ happens before $B$ for less than 2 time units, and $A$ happens before $C$ for less than 5 time units, then $D$ will happen." The generation of $D$'s occurrences follows Algorithm 2, where $\lambda_{D|\hat{\phi}_{3,1}}(t)$ is computed using the model shown in Figure 7. We refer to the third synthetic dataset at this step as Syn-3.1.
663
+
664
+ After obtaining the occurrences of events $A, B, C$ , and $D$ , we could simulate the occurrence of $E$ using the following formula:
665
+
666
+ $$
667
+ \hat {\phi} _ {3, 2} = \left(c _ {B} - c _ {A} > - 5\right) ^ {1} \wedge \left(c _ {C} - c _ {B} > - 4\right) ^ {1} \wedge \left(c _ {D} - c _ {C} > - 3\right) ^ {1}. \tag {49}
668
+ $$
669
+
670
+ Similarly, the generation of $E$'s occurrences follows Algorithm 2, where the intensity rate $\lambda_{E|\hat{\phi}_{3,2}}(t)$ is computed using the model shown in Figure 8. We refer to the third synthetic dataset at this step as Syn-3.2.
671
+
672
+ Results. The rules learned by CLNN, TELLER, and OGEM-tab on the cause of event $D$ in the third synthetic dataset are presented in Table 7, where the paired order predicate with the highest weight among the two candidates is reported. It can be clearly observed that by truncating the predicates with small weights, CLNN learns a wCL formula as
677
+
678
+ $$
679
+ \phi_ {3, 1} = \left(c _ {B} - c _ {A} > - 1. 8 5\right) ^ {1. 7 2} \wedge \left(c _ {C} - c _ {A} > - 3. 9 0\right) ^ {1. 5 9}, \tag {50}
680
+ $$
681
+
682
+ whose order representation matches well with the ground-truth rule. On the other hand, TELLER's rule only reveals the temporal relations between event labels $A$, $B$, $C$ and $D$; it does not capture the temporal relation between event labels $A$ and $B$ or $A$ and $C$. In addition, we observe that OGEM-tab does not capture that $C$ is a parent event of $D$.
683
+
684
+ <table><tr><td>Dataset</td><td>Syn-3.1</td></tr><tr><td>N (# events)</td><td>N = 5, L = {A, B, C, D, E}</td></tr><tr><td>Ground truth</td><td>\(\hat{\phi}_{3,1} = (c_B - c_A > -2)^1 \wedge (c_C - c_A > -5)^1\)</td></tr><tr><td>CLNN's rule</td><td>\((c_B - c_A > -1.85)^{1.72} \wedge (c_C - c_A > -3.90)^{1.59} \wedge ((c_D - c_A) > -16.25)^{0.33} \wedge ((c_C - c_B) > -3.01)^0 \wedge (c_D - c_B > -7.37)^{0.02} \wedge (c_D - c_C > -7.55)^0\)</td></tr><tr><td>TELLER's rule</td><td>A before D, B before D, C before D</td></tr><tr><td>OGEM-tab's rule</td><td>Excitation: [A], [A, B, D], [B, D, A], [D, A], [D, A, B], [B, A], [A, D], [D], [B, A, D], [D, B, A]; Inhibitory: [A, B], [B, D], [B], [A, D, B], [D, B]</td></tr></table>
+ 
+ Table 7: Comparison of rule discovery of $\phi_{3,1}$ for CLNN, TELLER, and OGEM-tab on the Syn-3.1 dataset.
685
+
686
+ The rules learned by CLNN, TELLER, and OGEM-tab on the cause of event $E$ in the third synthetic dataset are presented in Table 8, in which the truncated wCL formula learned by CLNN is
687
+
688
+ $$
689
+ \phi_ {3, 2} = \left(c _ {B} - c _ {A} > - 3. 9 4\right) ^ {1. 4 9} \wedge \left(c _ {C} - c _ {B} > - 3. 0 2\right) ^ {2. 0 3} \wedge \left(\left(c _ {D} - c _ {C}\right) > - 2. 0 0\right) ^ {1. 9 2}. \tag {51}
690
+ $$
691
+
692
+ It is obvious that $\phi_{3,2}$ is able to learn the temporal relations between $A$ and $B$, $B$ and $C$, and $C$ and $D$. However, TELLER's rules only reflect the temporal relations between $A$, $B$, $C$ and $E$, and give no information about the temporal relations between $A$ and $B$, $B$ and $C$, or $C$ and $D$. OGEM-tab's rule indicates that it considers event labels $A$, $D$, $E$ as the parent events of $E$, which does not match the ground-truth parent set.
693
+
694
695
+
696
+ <table><tr><td>Dataset</td><td>Syn-3.2</td></tr><tr><td>N (# events)</td><td>N=5, L={A,B,C,D,E}</td></tr><tr><td>Ground truth</td><td>\(\hat{\phi}_{3,2}=(c_{B}-c_{A}> -5)^{1}\wedge(c_{C}-c_{B}> -4)^{1}\wedge(c_{D}-c_{C}> -3)^{1}\)</td></tr><tr><td>CLNN's rule</td><td>\((c_{B}-c_{A}> -3.94)^{1.49}\wedge(c_{C}-c_{A}> -9.12)^{0.25}\wedge((c_{D}-c_{A})> -1.42)^{0.13}\wedge((c_{E}-c_{A})> -3.88)^{0.15}\wedge(c_{C}-c_{B}> -3.02)^{2.03}\wedge(c_{D}-c_{B}> -6.27)^{0.02}\wedge(c_{E}-c_{B}> -7.30)^{0.04}\wedge((c_{D}-c_{C})> -2.00)^{1.92}\wedge((c_{E}-c_{C})> -5.30)^{0.09}\wedge((c_{E}-c_{D})> -1.57)^{0.01}\)</td></tr><tr><td>TELLER's rule</td><td>A before E, B before E, C before E</td></tr><tr><td>OGEM-tab's rule</td><td>Excitation: [A,D], [D,A], [D,E], [E], [A,D,E], [D,E,A], [E,A], [A,E], [E,A,D], [A,E,D], [D,A,E], [E,D,A]; Inhibitory: [A], [D], [E,D]</td></tr></table>
697
+
698
+ Table 8: Comparison of rule discovery of $\phi_{3,2}$ for CLNN, TELLER, and OGEM-tab on the Syn-3.2 dataset.
699
+
700
+ # C.4 QUANTITATIVE COMPARISON OF CLNN'S RULES WITH GROUND TRUTH
701
+
702
+ To quantitatively evaluate the difference between the ground-truth rules and the rules learned by CLNN, we adopt the Jaccard similarity score to assess the learned formulas against the ground truth. Let $\mathcal{G}$ denote the set of paired ordering representations in the ground-truth rule and $\mathcal{C}$ the set of paired ordering representations in the learned rules; the Jaccard similarity score is then calculated as $J = \frac{|\mathcal{C} \cap \mathcal{G}|}{|\mathcal{C} \cup \mathcal{G}|}$. For TELLER and OGEM-tab, the ordering representations are extracted
703
+
704
+ ![](images/e778c73031d538bd262a2304c138c0361986948ec160d092a3b6c088d78183f6.jpg)
705
+ (a)
706
+
707
+ ![](images/96858fce8cd5a11bbd1adfbb7d71a973c286ee8decdbab1100967a9142becc93.jpg)
708
+ (b)
709
+
710
+ ![](images/a6c6bd22c64fc1f1fc65224884fde0d5cd2e30493bdc294bb501e98d1361ff80.jpg)
711
+ (c)
712
+
713
+ ![](images/1511e755f98306b72c0769194b5a539ac0f64fffcc66182d9520ef4267f0858f.jpg)
714
+ (d)
715
+ Figure 9: Comparison of ground-truth rules with CLNN's rules in terms of Jaccard similarity score for a) Syn-1, b) Syn-2, c) Syn-3.1, d) Syn-3.2.
716
+
717
+ from the excitation rules. The comparison of Jaccard similarity scores on the synthetic datasets is shown in Figure 9, where a Jaccard similarity score of 0 is plotted at a floor of 0.05 for visual clarity. It is clearly observed that the Jaccard similarity scores for CLNN are higher than those for TELLER or OGEM-tab, implying that the rules discovered by CLNN are more consistent with the ground truth. A minimal sketch of this metric is given below.
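+ 
+ A minimal sketch of this score over sets of paired ordering representations (the pair encoding is our assumption):
+ 
+ ```python
+ def jaccard(learned, ground_truth):
+     # J = |C ∩ G| / |C ∪ G|
+     C, G = set(learned), set(ground_truth)
+     return len(C & G) / len(C | G) if C | G else 0.0
+ 
+ # hypothetical example: one spurious pair in the learned rule
+ print(jaccard({("A", "B"), ("A", "C"), ("A", "D")},
+               {("A", "B"), ("A", "C")}))  # 2/3 ~ 0.667
+ ```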
718
+
719
+ # C.5 STABILITY ANALYSIS OF CLNN'S RULES WITH RESPECT TO INITIALIZATION
720
+
721
+ To further validate the model's stability in learning wCL rules, we repeat the learning with different parameter initialization methods (a sketch of these initializers appears after the list), including:
722
+
723
+ 1. rand - parameter initialization as random numbers from a uniform distribution on the interval [0, 1);
724
+ 2. randn - random numbers from a normal distribution with mean 0 and variance 1;
725
+ 3. ones - constant values of 1;
726
+ 4. xavier - random numbers from a uniform distribution on the interval $[-1/\sqrt{n}, 1/\sqrt{n}]$ , where $n$ is the dimension of the parameter.
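+ 
+ As an illustration, here is a sketch of the four schemes. PyTorch is an assumption (the paper does not name its framework), and we take $n$ to be the last dimension of the parameter.
+ 
+ ```python
+ import math
+ import torch
+ 
+ def init_param(shape, scheme):
+     if scheme == "rand":    # uniform on [0, 1)
+         return torch.rand(shape)
+     if scheme == "randn":   # normal with mean 0 and variance 1
+         return torch.randn(shape)
+     if scheme == "ones":    # constant 1
+         return torch.ones(shape)
+     if scheme == "xavier":  # uniform on [-1/sqrt(n), 1/sqrt(n)]
+         bound = 1.0 / math.sqrt(shape[-1])
+         return torch.empty(shape).uniform_(-bound, bound)
+     raise ValueError(f"unknown scheme: {scheme}")
+ 
+ w = init_param((6,), "xavier")  # e.g., weights of a 6-input conjunction cell
+ ```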
727
+
728
+ The rules learned by CLNN under the above parameter initializations are summarized in Table 9. Inspecting the rules for the different initialization methods, it is clear that CLNN still recovers the correct paired order representations even when the learning process is initialized from different positions. Moreover, the learned logic formulas are stable, as the variance of the learned parameters across initializations is relatively small.
729
+
730
+ <table><tr><td>Dataset</td><td>Initialization</td><td>Rules</td></tr><tr><td rowspan="5">Syn - 1</td><td>Ground truth</td><td>\(\hat{\phi}=(c_{A}-c_{B}&gt;1)^{1}\wedge(c_{A}-c_{C}&gt;3)^{1}\)</td></tr><tr><td>rand</td><td>\(\phi=(c_{A}-c_{B}&gt;1.21)^{1.52}\wedge(c_{A}-c_{C}&gt;3.00)^{1.41}\)</td></tr><tr><td>randn</td><td>\(\phi=(c_{A}-c_{B}&gt;1.21)^{1.58}\wedge(c_{A}-c_{C}&gt;3.32)^{1.56}\)</td></tr><tr><td>ones</td><td>\(\phi=(c_{A}-c_{B}&gt;1.17)^{1.59}\wedge(c_{A}-c_{C}&gt;3.14)^{1.32}\)</td></tr><tr><td>xavier</td><td>\(\phi=(c_{A}-c_{B}&gt;1.12)^{1.45}\wedge(c_{A}-c_{C}&gt;3.20)^{1.33}\)</td></tr><tr><td rowspan="5">Syn - 2</td><td>Ground truth</td><td>\(\hat{\phi}=(c_{A}-c_{B}&gt;0.5)^{1}\wedge(c_{A}-c_{C}&gt;1.5)^{1}\wedge(c_{C}-c_{D}&gt;2)^{1}\)</td></tr><tr><td>rand</td><td>\(\phi=(c_{A}-c_{B}&gt;0.77)^{1.27}\wedge(c_{A}-c_{C}&gt;2.09)^{1.15}\wedge((c_{C}-c_{D})&gt;2.60)^{1.06}\)</td></tr><tr><td>randn</td><td>\(\phi=(c_{A}-c_{B}&gt;0.80)^{1.97}\wedge(c_{A}-c_{C}&gt;1.92)^{1.62}\wedge((c_{C}-c_{D})&gt;1.74)^{1.45}\)</td></tr><tr><td>ones</td><td>\(\phi=(c_{A}-c_{B}&gt;1.03)^{1.63}\wedge(c_{A}-c_{C}&gt;1.92)^{1.50}\wedge((c_{C}-c_{D})&gt;2.03)^{1.44}\)</td></tr><tr><td>xavier</td><td>\(\phi=(c_{A}-c_{B}&gt;0.97)^{1.92}\wedge(c_{A}-c_{C}&gt;2.07)^{1.63}\wedge((c_{C}-c_{D})&gt;1.97)^{1.62}\)</td></tr><tr><td rowspan="5">Syn - 3.1</td><td>Ground truth</td><td>\(\hat{\phi}=(c_{B}-c_{A}&gt;-2)^{1}\wedge(c_{C}-c_{A}&gt;-5)^{1}\)</td></tr><tr><td>rand</td><td>\(\phi=(c_{B}-c_{A}&gt;-1.85)^{1.72}\wedge(c_{C}-c_{A}&gt;-3.90)^{1.59}\)</td></tr><tr><td>randn</td><td>\(\phi=(c_{B}-c_{A}&gt;-1.98)^{1.51}\wedge(c_{C}-c_{A}&gt;-3.89)^{1.68}\)</td></tr><tr><td>ones</td><td>\(\phi_{3,1}=(c_{B}-c_{A}&gt;-1.94)^{1.84}\wedge(c_{C}-c_{A}&gt;-3.68)^{2.33}\)</td></tr><tr><td>xavier</td><td>\(\phi_{3,1}=(c_{B}-c_{A}&gt;-1.89)^{1.54}\wedge(c_{C}-c_{A}&gt;-3.92)^{1.62}\)</td></tr><tr><td rowspan="5">Syn - 3.2</td><td>Ground truth</td><td>\(\hat{\phi}=(c_{B}-c_{A}&gt;-5)^{1}\wedge(c_{C}-c_{B}&gt;-4)^{1}\wedge(c_{D}-c_{C}&gt;-3)^{1}\)</td></tr><tr><td>rand</td><td>\(\phi=(c_{B}-c_{A}&gt;-3.94)^{1.49}\wedge(c_{C}-c_{B}&gt;-3.02)^{2.03}\wedge((c_{D}-c_{C})&gt; -2.00)^{1.92}\)</td></tr><tr><td>randn</td><td>\(\phi=(c_{B}-c_{A}&gt;-3.79)^{1.71}\wedge(c_{C}-c_{B}&gt;-3.04)^{1.89}\wedge((c_{D}-c_{C})&gt; -1.68)^{1.65}\)</td></tr><tr><td>ones</td><td>\(\phi=(c_{B}-c_{A}&gt;-3.53)^{1.66}\wedge(c_{C}-c_{B}&gt;-3.09)^{1.88}\wedge((c_{D}-c_{C})&gt; -1.25)^{1.81}\)</td></tr><tr><td>xavier</td><td>\(\phi=(c_{B}-c_{A}&gt;-3.71)^{1.53}\wedge(c_{C}-c_{B}&gt;-3.09)^{2.04}\wedge((c_{D}-c_{C})&gt; -1.86)^{1.73}\)</td></tr></table>
731
+ 
+ Table 9: Comparison of rules learned by CLNN for different parameter initialization methods.
+ 
732
+ # C.6 ANALYSIS OF LOGICAL CONSTRAINTS ON THE LL
733
+
734
+ In this part, we investigate the effect of interpretability on the model's performance through an experiment on the impact of the logical constraints. The log-likelihood on the synthetic datasets for CLNN with and without logical constraints is summarized in Table 10, which shows that the log-likelihood for CLNN with logical constraints is higher than that for CLNN without constraints, implying that interpretability (in the form of logical constraints) helps improve performance.
735
+
736
737
+
738
+ <table><tr><td>Dataset</td><td>CLNN with constraints</td><td>CLNN without constraints</td></tr><tr><td>Syn - 1</td><td>-7821</td><td>-8716</td></tr><tr><td>Syn - 2</td><td>-6075</td><td>-6942</td></tr><tr><td>Syn - 3.1</td><td>-10898</td><td>-11583</td></tr><tr><td>Syn - 3.2</td><td>-10919</td><td>-11230</td></tr></table>
739
+
740
+ Table 10: Comparison of LL for CLNN with and without logical constraints.
741
+
742
+ # D EXPERIMENT RESULTS OF REAL-WORLD DATASETS
743
+
744
+ # D.1 LINKEDIN DATASET
745
+
746
+ The LinkedIn dataset is a collection of job-hopping records of 3,000 LinkedIn users across 82 IT companies. Each event stream represents a user's check-in time stamps for different companies or role changes within the same company. We select 1,000 users' event streams to compose the dataset by filtering out the event streams with uncommon companies, resulting in 10 event labels: $\mathcal{L} = \{A,B,C,D,E,F,G,H,I,J\}$. We set the number of formulas to 5, i.e., $\Phi = \{\phi_1,\phi_2,\phi_3,\phi_4,\phi_5\}$, each of which follows the model structure shown in Figure 2(a); CLNN aims to learn the parameters of each formula. The weight parameters in the paired order cells and the singleton order cells are initialized as Gaussian random variables, and the bias terms of the conjunction and disjunction operators are initialized to 1. The architecture weights, the formula impact weights, and the bias are initialized as Gaussian random variables. The detailed log-likelihood for each event label is summarized in Table 11.
747
+
748
+ <table><tr><td>Event Label</td><td>Log-likelihood</td></tr><tr><td>A</td><td>-180.59</td></tr><tr><td>B</td><td>-177.80</td></tr><tr><td>C</td><td>-89.49</td></tr><tr><td>D</td><td>-140.31</td></tr><tr><td>E</td><td>-132.83</td></tr><tr><td>F</td><td>-76.63</td></tr><tr><td>G</td><td>-106.23</td></tr><tr><td>H</td><td>-103.33</td></tr><tr><td>I</td><td>-95.51</td></tr><tr><td>J</td><td>-125.45</td></tr></table>
749
+ 
+ Table 11: Log likelihood for each event label in the LinkedIn dataset.
+ 
750
+ # D.2 MIMIC II DATASET
751
+
752
+ MIMIC II dataset is obtained from the intensive care unit research database that consists of 25,328 intensity care unit stays. The records include laboratory data, therapeutic intervention profiles such as nursing progress notes, discharge summaries and others. Here we restrict the event types to the diagnosis of patients and filter out the shorter event sequences with few visits, ending up with 650 patients and 15 event labels: $\mathcal{L} = \{1,2,8,9,11,12,14,20,21,22,23,26,27,42,47\}$ . Similar to the setting for the LinkedIn dataset, where the initialization of parameters follow the same setting as the LinkedIn dataset. The detailed log-likelihood for each event label is presented in Table 12.
753
+
754
755
+
756
+ <table><tr><td>Event Label</td><td>Log-likelihood</td></tr><tr><td>1</td><td>-72.14</td></tr><tr><td>2</td><td>-62.33</td></tr><tr><td>8</td><td>-5.98</td></tr><tr><td>9</td><td>-51.34</td></tr><tr><td>11</td><td>-43.64</td></tr><tr><td>12</td><td>-25.81</td></tr><tr><td>14</td><td>-69.73</td></tr><tr><td>20</td><td>-5.96</td></tr><tr><td>21</td><td>-6.08</td></tr><tr><td>22</td><td>-10.47</td></tr><tr><td>23</td><td>-10.64</td></tr><tr><td>26</td><td>-27.08</td></tr><tr><td>27</td><td>-27.42</td></tr><tr><td>42</td><td>-5.95</td></tr><tr><td>47</td><td>-10.54</td></tr></table>
757
+
758
+ Table 12: Log likelihood for each event label in the MIMIC II dataset.
759
+
760
+ # D.3 STACK OVERFLOW DATASET
761
+
762
+ Stack Overflow is a question-and-answer website spanning a wide range of domains. A badge reward scheme is used to encourage users to participate in questioning and answering activities. The badge system of Stack Overflow comprises 81 types of non-topical badges, including badges that can be awarded only once and badges that can be awarded several times. The dataset in (Du et al., 2016) was obtained by first filtering out the badges that can be awarded only once, then restricting to the users who acquired at least 40 badges from 2012-01-01 to 2014-01-01, from which the badges awarded more than 100 times were selected to form the final dataset. Our dataset was acquired by retaining the event streams with one or more of the 20 types of specified badges and then randomly sampling 1,000 users to obtain 1,000 event streams. The detailed log-likelihood for each event label in the Stack Overflow dataset is summarized in Table 13.
763
+
764
+ <table><tr><td>Event Label</td><td>Log-likelihood</td></tr><tr><td>1</td><td>-3791</td></tr><tr><td>2</td><td>-1451</td></tr><tr><td>3</td><td>-538</td></tr><tr><td>4</td><td>-17656</td></tr><tr><td>5</td><td>-3574</td></tr><tr><td>6</td><td>-3559</td></tr><tr><td>7</td><td>-1381</td></tr><tr><td>8</td><td>-1330</td></tr><tr><td>9</td><td>-10961</td></tr><tr><td>10</td><td>-1105</td></tr><tr><td>11</td><td>-189</td></tr><tr><td>12</td><td>-2012</td></tr><tr><td>13</td><td>-673</td></tr><tr><td>14</td><td>-1340</td></tr><tr><td>15</td><td>-406</td></tr><tr><td>16</td><td>-117</td></tr><tr><td>17</td><td>-186</td></tr><tr><td>18</td><td>-330</td></tr><tr><td>19</td><td>-282</td></tr><tr><td>20</td><td>-100</td></tr></table>
765
+
766
+ Table 13: Log likelihood for each event label in the Stack Overflow dataset.
767
+
768
+ <table><tr><td>Dataset</td><td>CLNN with SOP</td><td>CLNN without SOP</td></tr><tr><td>LinkedIn</td><td>-1228</td><td>-1344</td></tr><tr><td>MIMIC II</td><td>-436</td><td>-480</td></tr><tr><td>Stack Overflow</td><td>-50981</td><td>-51195</td></tr></table>
769
+
770
+ Table 14: Comparison of log-likelihood for CLNN with and without SOP on the real-world datasets.
771
+
772
+ # D.4 ANALYSIS OF EXPRESSIVENESS ON MODEL'S PERFORMANCE
773
+
774
+ In this part, we conduct an experiment by training CLNN without the singleton order predicates (SOP) on the real-world datasets to show their effectiveness. The comparison of log-likelihood for CLNN with and without SOP is summarized in Table 14. As evidenced by Table 14, the log-likelihood of CLNN with SOP is higher than that of CLNN without SOP, meaning that enriching the expressiveness of wCL formulas helps better explain the generative mechanism of events.
2023/Weighted Clock Logic Point Process/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:128a625c4cda5aebcae955ed4f8baad439f863448f8b3688974e7c6247994986
3
+ size 1279444
2023/Weighted Clock Logic Point Process/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weighted Ensemble Self-Supervised Learning/0c863f59-c784-4516-9026-d5e5e7ae916e_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weighted Ensemble Self-Supervised Learning/0c863f59-c784-4516-9026-d5e5e7ae916e_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weighted Ensemble Self-Supervised Learning/0c863f59-c784-4516-9026-d5e5e7ae916e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:15ea395d8c0e0c58246b2ead9ac69d84ab9597662eb36d3e3fae8e29d55b7e94
3
+ size 777640
2023/Weighted Ensemble Self-Supervised Learning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/Weighted Ensemble Self-Supervised Learning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:435f89ba8b76bfc1760754fb122b03fbda95751a5f8873aae4e7d778f1b46491
3
+ size 1241569
2023/Weighted Ensemble Self-Supervised Learning/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/What Can we Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers_/9da122df-288c-42c9-8090-73c7e3adccf9_content_list.json ADDED
The diff for this file is too large to render. See raw diff