mishig HF Staff commited on
Commit
f0b8a85
·
verified ·
1 Parent(s): 31b7b17

Add 1 files

Browse files
Files changed (1) hide show
  1. 2310/2310.13841.md +4927 -0
2310/2310.13841.md ADDED
@@ -0,0 +1,4927 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ Title: Fast Hyperboloid Decision Tree Algorithms
2
+
3
+ URL Source: https://arxiv.org/html/2310.13841
4
+
5
+ Markdown Content:
6
+ Back to arXiv
7
+
8
+ This is experimental HTML to improve accessibility. We invite you to report rendering errors.
9
+ Use Alt+Y to toggle on accessible reporting links and Alt+Shift+Y to toggle off.
10
+ Learn more about this project and help improve conversions.
11
+
12
+ Why HTML?
13
+ Report Issue
14
+ Back to Abstract
15
+ Download PDF
16
+ 1Introduction
17
+ 2Preliminary
18
+ 3Hyperboloid Decision Tree Algorithms
19
+ 4Classification Experiments
20
+ 5Conclusion
21
+
22
+ HTML conversions sometimes display errors due to content that did not convert correctly from the source. This paper uses the following packages that are not yet supported by the HTML conversion tool. Feedback on these issues are not necessary; they are known and are being worked on.
23
+
24
+ failed: outlines
25
+
26
+ Authors: achieve the best HTML results from your LaTeX submissions by following these best practices.
27
+
28
+ License: arXiv.org perpetual non-exclusive license
29
+ arXiv:2310.13841v2 [cs.LG] 04 Mar 2024
30
+ Fast Hyperboloid Decision Tree Algorithms
31
+ Philippe Chlenski,
32
+ 1
33
+ Ethan Turok,
34
+ 1
35
+ Antonio Moretti,
36
+ 2
37
+ Itsik Pe’er
38
+ 1
39
+
40
+
41
+ 1
42
+ Columbia University
43
+ 2
44
+ Barnard College
45
+
46
+ Abstract
47
+
48
+ Hyperbolic geometry is gaining traction in machine learning due to its capacity to effectively capture hierarchical structures in real-world data. Hyperbolic spaces, where neighborhoods grow exponentially, offer substantial advantages and have consistently delivered state-of-the-art results across diverse applications. However, hyperbolic classifiers often grapple with computational challenges. Methods reliant on Riemannian optimization frequently exhibit sluggishness, stemming from the increased computational demands of operations on Riemannian manifolds. In response to these challenges, we present HyperDT, a novel extension of decision tree algorithms into hyperbolic space. Crucially, HyperDTeliminates the need for computationally intensive Riemannian optimization, numerically unstable exponential and logarithmic maps, or pairwise comparisons between points by leveraging inner products to adapt Euclidean decision tree algorithms to hyperbolic space. Our approach is conceptually straightforward and maintains constant-time decision complexity while mitigating the scalability issues inherent in high-dimensional Euclidean spaces. Building upon HyperDT we introduce HyperRF, a hyperbolic random forest model. Extensive benchmarking across diverse datasets underscores the superior performance of these models, providing a swift, precise, accurate, and user-friendly toolkit for hyperbolic data analysis. Our code can be found at https://github.com/pchlenski/hyperdt.
49
+
50
+ 1Introduction
51
+ 1.1Background: Hyperbolic Embeddings
52
+
53
+ The adoption of hyperbolic geometry for graph embeddings has sparked a vibrant and rapidly growing body of machine learning research (Sarkar, 2012; Chamberlain et al., 2017; Gu et al., 2019; Chami et al., 2020; 2021). This surge in interest is driven by the compelling advantages offered by hyperbolic spaces, particularly in capturing hierarchical and tree-like structures inherent in various real-world datasets. In hyperbolic space, neighborhoods grow exponentially rather than polynomially, allowing for embeddings of exponentially growing systems such as phylogenetic trees or concept hierarchies. Hyperbolic embeddings have proven to be highly effective, showcasing state-of-the-art results across various applications including question answering (Tay et al., 2018), node classification (Chami et al., 2020), and word embeddings (Tifrea et al., 2018). These achievements underscore the growing interest in classifiers that operate natively within hyperbolic spaces (Gulcehre et al., 2018; Marconi et al., 2020; Doorenbos et al., 2023).
54
+
55
+ Figure 1:Geodesic partitions in the hyperboloid model
56
+
57
+ 2
58
+ ,
59
+ 1
60
+ (left) and poincare model
61
+
62
+ 2
63
+ ,
64
+ 1
65
+ (right) into two halves (purple/yellow). In
66
+
67
+ 2
68
+ ,
69
+ 1
70
+ , a geodesic can be expressed as the intersection of the hyperboloid with an angled plane through the origin of the ambient space (transparent white). While these two representations are equivalent these partitions can be expressed more compactly in
71
+
72
+ 2
73
+ ,
74
+ 1
75
+ .
76
+
77
+ While hyperbolic classifiers leverage the curvature properties of hyperbolic geometry to make more nuanced and accurate predictions of hierarchical data, such techniques often face a dilemma. Methods employing Riemannian optimization often exhibit sluggishness due to the increased computational complexity associated with operations on Riemannian manifolds, which require intricate geometric calculations. The sensitivity of Riemannian optimization to initialization and the presence of complex geometric constraints can further contribute to slower convergence. Other methods are consistent with hyperbolic geometry but incur time-complexity penalties associated with horosphere calculations (see, for example,  Fan et al. (2023); Doorenbos et al. (2023)). Others apply Euclidean methods to hyperbolic data transformed with logarithmic maps (Chami et al., 2019; Chen et al., 2022), whitening techniques (Chami et al., 2021), or directly on hyperbolic coordinates (Jiang et al., 2022). While effective, such methods can ignore the geometric structure of the data or introduce additional complexity to the inference process, adversely affecting both speed and interpretability.
78
+
79
+ 1.2The Need for Hyperbolid Decision Trees and Paper Contributions
80
+
81
+ Decision trees are workhorses of machine learning, favored for a variety of reasons, including:
82
+
83
+
84
+
85
+ Speed. Decision trees are relatively fast to train, and efficient prediction makes them suitable for real-time or near-real-time applications.
86
+
87
+
88
+
89
+ Interpretability. Decision trees provide a highly interpretable model, unlike random forest methods that can be opaque regarding how features combine to form a predictor.
90
+
91
+
92
+
93
+ Simplicity. Decision trees involve straightforward logic, making them easier to implement and explain than multilayered, highly parameterized network models.
94
+
95
+ The additional complexities introduced by hyperbolic geometry, which often necessitate intricate optimizations over Riemannian manifolds, introduce a disparity between the existing availability of efficient decision tree algorithms tailored to such spaces and their potential transformative impact.
96
+
97
+ This paper introduces a novel approach to extend traditional Euclidean decision tree algorithms to hyperbolic space. This approach, called HyperDT, is conceptually straightforward and maintains constant-time decision complexity while mitigating the scalability issues inherent in high-dimensional Euclidean spaces. Building upon HyperDT, we introduce an ensemble model called HyperRF, which is an extension of random forests tailored for hyperbolic space. Our contributions in this work are summarized as follows:
98
+
99
+ 1.
100
+
101
+ We develop HyperDT, a novel extension of decision trees to data in hyperbolic space, e.g. learned hyperbolic embeddings. HyperDT avoids computationally expensive Riemannian optimization, numerically unstable exponential or logarithmic maps, and quadratically scaling pairwise comparisons between all data points. Instead, we reframe decision trees in Euclidean space in terms of inner products, yielding a natural extension to hyperbolic spaces of arbitrary negative curvature. Like other decision tree algorithms, HyperDT repeatedly partitions the input space into decision areas, i.e. subspaces labeled with the majority class for the points in the training set for that region. However, because it uses geodesic submanifolds to partition the space, HyperDT is the first decision tree predictor whose decision areas maintain convexity and topological continuity for arbitrary partitions.
102
+
103
+ 2.
104
+
105
+ We generalize HyperDT to random forests in a second algorithm which we refer to as HyperRF. In select cases, HyperRF demonstrates enhanced accuracy and reduced susceptibility to overfitting when contrasted with HyperDT.
106
+
107
+ 3.
108
+
109
+ We demonstrate state-of-the-art accuracy and speed of HyperDT and HyperRF on classification problems compared to existing counterparts on various datasets.
110
+
111
+ 4.
112
+
113
+ We provide a Python implementation of HyperDT and HyperRF for classification and regression following scikit-learn API conventions (Pedregosa et al., 2011).
114
+
115
+ 1.3Related Work
116
+
117
+ Several graph embedding methods have been proposed in the Poincaré disk (Nickel & Kiela, 2017; Chamberlain et al., 2017), hyperboloid model (Nickel & Kiela, 2018), and even in mixed-curvature products of manifolds (Gu et al., 2019). De Sa et al. (2018) provides a thorough overview and comparison of graph embedding methods in terms of their metric distortions.
118
+
119
+ Hyperbolic embeddings have found diverse applications across domains, particularly in computational biology and concept ontologies, both of which are structured by latent branching relationships. In biology, inheritance patterns are tree-like: at evolutionary scales, hyperbolic embeddings successfully model well-known phylogenetic trees (Hughes et al., 2004; Chami et al., 2020; Jiang et al., 2022). Furthermore, Corso et al. (2021) learn faithful hyperbolic species embeddings directly from nucleotide sequences, bypassing phylogenetic tree construction. On shorter timescales, such as single-cell RNA sequencing data, where cell types evolve from progenitors, a tree-like structure emerges once more. Ding & Regev (2021) showcased that variational autoencoders with a hyperbolic latent space effectively capture the branching patterns of cells’ developmental trajectories.
120
+
121
+ For concept ontologies like WordNet, hyperbolic embeddings effectively capture subclass relationships among nouns, improving link prediction accuracy (Ganea et al., 2018). Additionally, Tifrea et al. (2018) showcase the utility of hyperbolic embeddings for unsupervised learning of latent hierarchies when explicit ontologies are absent, a capability possibly underpinned by the hierarchical nature of concepts within WordNet. Moreover, a recent study by Desai et al. (2023) unveils that text-image representations in hyperbolic space exhibit enhanced interpretability and structural organization while maintaining performance excellence in standard multimodal benchmarks.
122
+
123
+ Machine learning on hyperbolic embeddings is an emerging and evolving research area. Cho et al. (2018) and (Fan et al., 2023) have proposed approaches for adapting support vector machines to hyperbolic space, which has proven useful in biological contexts (Agibetov et al., 2019). In the realm of neural methods, there have been developments such as hyperbolic attention networks (Gulcehre et al., 2018), fully hyperbolic neural networks Chen et al. (2022), and hyperbolic graph convolutional networks Chami et al. (2021). A recent contribution by Doorenbos et al. (2023) introduced HoroRF, a variant of the random forest method that employs decision trees with horospherical node criteria. Our approach is most similar to Doorenbos et al. (2023); however, it differs in that it utilizes geodesics as opposed to horospheres and does not require pairwise comparisons between data points, leading to an algorithm that maintains constant time decision complexity.
124
+
125
+ Improving decision tree and random forest algorithms has also attracted considerable research attention, although except for HoroRF, none of these methods are designed for use with hyperbolic embeddings. In particular, improvements like gradient boosting Chen & Guestrin (2016) and optimization based on branch-and-bound methods Lin et al. (2022); McTavish et al. (2022); Mazumder et al. (2022) have been proposed to circumvent some of the suboptimal qualities of classic CART decision trees Breiman (2017), which uses a greedy heuristic to select each split in the tree.
126
+
127
+ 2Preliminary
128
+ 2.1Hyperbolic Spaces
129
+
130
+ Hyperbolic geometry, characterized by its constant negative curvature, can be represented by various models, including the hyperboloid (also known as the Lorentz or Minkowski model), the Poincaré disk model, the Poincaré half-plane model, and the (Beltrami-)Klein model. We use the hyperboloid model due to its simplicity in expressing geodesics using plane geometry, which we exploit to define our decision boundaries. While prior research in this field has predominantly centered on the Poincaré disk model, the straightforward conversion between Poincaré disk coordinates and hyperboloid coordinates (See Section A.1 for details) allows for seamless integration of techniques across different hyperbolic representations, a flexibility we leverage in our work.
131
+
132
+ The
133
+ 𝐷
134
+ -dimensional hyperboloid model is embedded inside an ambient
135
+ (
136
+ 𝐷
137
+ +
138
+ 1
139
+ )
140
+ -dimensional Minkowski space, a metric space equipped with the Minkowski inner product:
141
+
142
+
143
+
144
+ 𝐱
145
+ ,
146
+ 𝐱
147
+
148
+
149
+
150
+ =
151
+
152
+ 𝑥
153
+ 0
154
+
155
+ 𝑥
156
+ 0
157
+
158
+ +
159
+
160
+ 𝑖
161
+ =
162
+ 1
163
+ 𝐷
164
+ 𝑥
165
+ 𝑖
166
+
167
+ 𝑥
168
+ 𝑖
169
+
170
+ .
171
+
172
+ (1)
173
+
174
+ The above is equivalent to an adaptation of the Euclidean inner product, where the first term is negated. This distinguished first dimension, often termed the “timelike” dimension, earns its name due to its significance in the context of special relativity theory.
175
+
176
+ Let the hyperboloid have constant negative scalar curvature
177
+
178
+ 𝐾
179
+ . Inside the Minkowski space, points are assumed to lie on
180
+
181
+ 𝐷
182
+ ,
183
+ 𝐾
184
+ , the hyperboloid of dimension
185
+ 𝐷
186
+ and curvature
187
+ 𝐾
188
+ :
189
+
190
+
191
+
192
+ 𝐷
193
+ ,
194
+ 𝐾
195
+ =
196
+ {
197
+ 𝐱
198
+
199
+
200
+ 𝐷
201
+ +
202
+ 1
203
+ :
204
+
205
+ 𝐱
206
+ ,
207
+ 𝐱
208
+
209
+
210
+ =
211
+
212
+ 1
213
+ /
214
+ 𝐾
215
+ ,
216
+ 𝑥
217
+ 0
218
+ >
219
+ 0
220
+ }
221
+ .
222
+
223
+ (2)
224
+
225
+ That is, the hyperboloid model assumes points lie on the surface of the upper sheet of a two-sheeted hyperboloid embedded in Minkowski space (see Figure 1). The distance between two points on
226
+
227
+ 𝐷
228
+ ,
229
+ 𝐾
230
+ is
231
+
232
+
233
+ 𝛿
234
+
235
+ (
236
+ 𝐱
237
+ ,
238
+ 𝐱
239
+
240
+ )
241
+ =
242
+ cosh
243
+
244
+ 1
245
+
246
+ (
247
+
248
+ 𝐾
249
+
250
+
251
+ 𝐱
252
+ ,
253
+ 𝐱
254
+
255
+
256
+
257
+ )
258
+ /
259
+ 𝐾
260
+ .
261
+
262
+ (3)
263
+
264
+ This distance can be interpreted as the length of the geodesic, the shortest path on the manifold connecting
265
+ 𝑥
266
+ and
267
+ 𝑥
268
+
269
+ . In the hyperboloid model, all geodesics are intersections of
270
+
271
+ 𝐷
272
+ ,
273
+ 𝐾
274
+ with respective 2D planes that pass through the origin of the Minkowski space.
275
+
276
+ 2.2Decision Tree Algorithms
277
+
278
+ The Classification and Regression Trees (CART) algorithm is a mainstay of machine learning, alongside extensions such as random forests (Breiman, 2001) and XGBoost (Chen & Guestrin, 2016). CART recursively partitions the feature space into increasingly homogenous subspaces by maximizing the information gain at each split
279
+ 𝑆
280
+ ,
281
+ 𝐼
282
+
283
+ 𝐺
284
+
285
+ (
286
+ 𝑆
287
+ )
288
+ . It measures improved homogeneity due to splitting a dataset
289
+ 𝐗
290
+ into subsets
291
+ 𝐗
292
+ 0
293
+ ,
294
+ 𝐗
295
+ 1
296
+ of respective fractions
297
+ 𝑓
298
+ 𝑖
299
+ =
300
+ |
301
+ 𝐗
302
+ 𝑖
303
+ |
304
+ /
305
+ 𝐗
306
+ :
307
+
308
+
309
+ 𝐼
310
+
311
+ 𝐺
312
+
313
+ (
314
+ 𝑆
315
+ )
316
+ =
317
+ 𝐶
318
+
319
+ (
320
+ 𝐗
321
+ )
322
+
323
+ 𝑓
324
+ 0
325
+
326
+ 𝐶
327
+
328
+ (
329
+ 𝐗
330
+ 0
331
+ )
332
+
333
+ 𝑓
334
+ 1
335
+
336
+ 𝐶
337
+
338
+ (
339
+ 𝐗
340
+ 1
341
+ )
342
+ ,
343
+
344
+ (4)
345
+
346
+ where
347
+ 𝐶
348
+
349
+ (
350
+
351
+ )
352
+ is the impurity or cost function of each set. Objective functions like Gini impurity, mean squared error (MSE), and entropy are popular choices for
353
+ 𝐶
354
+
355
+ (
356
+
357
+ )
358
+ . The tree is iteratively constructed until further splits will constitute overfitting. A decision tree is often used as is to make predictions. The Random Forest algorithm integrates predictions across ensembles of decision trees trained on randomly subsampled subsets of the training data using a majority voting procedure.
359
+
360
+ 3Hyperboloid Decision Tree Algorithms
361
+
362
+ Extending CART to hyperbolic space involves several essential steps. First, as outlined in Section 3.1, we express Euclidean decision boundaries in terms of inner products, providing the requisite geometric intuition for hyperbolic decision trees. In Section 3.2, we utilize these inner products to establish a streamlined decision process based on geodesic submanifolds in hyperbolic space. Section 3.3 discusses the selection of candidate hyperplanes, Section  3.4 presents closed-form equations for decision boundaries, and Section  3.5 describes the Hyperbolic Random Forest extension HyperRF.
363
+
364
+ 3.1Formulating Decision Trees with Inner Products
365
+
366
+ Traditionally, a split is perceived as a means to ascertain whether the value of a point
367
+ 𝐱
368
+ in a given dimension
369
+ 𝑑
370
+ , is greater or lesser than a designated threshold,
371
+ 𝜃
372
+ , namely,
373
+
374
+
375
+ 𝑆
376
+
377
+ (
378
+ 𝐱
379
+ )
380
+ =
381
+ 𝕀
382
+
383
+ {
384
+ 𝑥
385
+ 𝑑
386
+ >
387
+ 𝜃
388
+ }
389
+ .
390
+
391
+ (5)
392
+
393
+ This decision boundary can also be thought of as the axis-parallel hyperplane
394
+ 𝑥
395
+ 𝑑
396
+ =
397
+ 𝜃
398
+ , and thus we can rewrite the same split as follows
399
+
400
+
401
+ 𝑆
402
+
403
+ (
404
+ 𝑥
405
+ )
406
+ =
407
+ max
408
+
409
+ (
410
+ 0
411
+ ,
412
+ sign
413
+
414
+ (
415
+ 𝐱
416
+
417
+ 𝐧
418
+
419
+ (
420
+ 𝑑
421
+ )
422
+
423
+ 𝜃
424
+ )
425
+ )
426
+ ,
427
+
428
+ (6)
429
+
430
+ where
431
+ 𝐧
432
+
433
+ (
434
+ 𝑑
435
+ )
436
+ is the one-hot base vector along dimension
437
+ 𝑑
438
+ , i.e. the normal vector of our decision hyperplane. Of course, Eq. 5 is the practical,
439
+ 𝑂
440
+
441
+ (
442
+ 1
443
+ )
444
+ condition. The slower,
445
+ 𝑂
446
+
447
+ (
448
+ 𝐷
449
+ )
450
+ Eq. 6 is instructive towards the hyperboloid generalization, and can still be computed in
451
+ 𝑂
452
+
453
+ (
454
+ 1
455
+ )
456
+ due to sparsity of
457
+ 𝐧
458
+
459
+ (
460
+ 𝑑
461
+ )
462
+ .
463
+
464
+ 3.2Extension to the Hyperboloid Model
465
+
466
+ We will now modify decision tree splits for hyperbolic space. Splitting a decision tree along standard axis-aligned hyperplanes is inappropriate when all the datapoints lie on the hyperboloid: the intersection of an axis-aligned
467
+ 𝐷
468
+ +
469
+ 1
470
+ dimensional hyperplane with the hyperboloid
471
+
472
+ 𝐷
473
+ ,
474
+ 𝐾
475
+ in
476
+ (
477
+ 𝐷
478
+ +
479
+ 1
480
+ )
481
+ -dimensional Minkowski space results in a
482
+ 𝐷
483
+ -dimensional hyperbola, which lacks any meaningful interpretation within the hyperboloid model. Euclidean CART generates such decision boundaries, but they are likely ill-suited to capture the geometry of hyperbolic space.
484
+
485
+ On the other hand,
486
+ 𝐷
487
+ -dimensional homogeneous hyperplanes, i.e. hyperplanes that contain the origin, intersect
488
+
489
+ 𝐷
490
+ ,
491
+ 𝐾
492
+ as geodesic submanifolds. In 3D Minkowski space, geodesics between any
493
+ {
494
+ 𝐱
495
+ ,
496
+ 𝐱
497
+
498
+ }
499
+
500
+
501
+ 2
502
+ ,
503
+ 1
504
+ lie on the intersection of
505
+
506
+ 2
507
+ ,
508
+ 1
509
+ with some homogeneous 2D plane, as in Figure 1. In higher dimensions, homogeneous hyperplanes intersect
510
+
511
+ 𝐷
512
+ ,
513
+ 𝐾
514
+ as
515
+ (
516
+ 𝐷
517
+
518
+ 1
519
+ )
520
+ -dimensional geodesic submanifolds, which likewise contain all geodesics between their elements. Partitions by homogeneous hyperplanes maintain convexity and topological continuity: all pairs of points in a subspace are reachable by shortest paths that stay completely within their own subspace.
521
+
522
+ Building upon the inner product formulation of splits Euclidean CART (Eq. 6), we can substitute a set of geometrically appropriate decision boundaries without altering the rest of the CART framework. Specifically, we replace axis-parallel hyperplanes with homogeneous ones.
523
+
524
+ To maintain the dimension-by-dimension character of Euclidean CART and enforce sparse normal vectors for efficient inner product calculations, we further restrict the number of decision boundary candidates to
525
+ 𝑂
526
+
527
+ (
528
+ 𝐷
529
+
530
+ |
531
+ 𝐗
532
+ |
533
+ )
534
+ by only considering rotations of the plane
535
+ 𝑥
536
+ 0
537
+ =
538
+ 0
539
+ along a single other axis
540
+ 𝑑
541
+ . These hyperplanes are fully parameterized by
542
+ 𝑑
543
+ and the rotation angle
544
+ 𝜃
545
+ , yielding corresponding normal vectors
546
+
547
+
548
+ 𝐧
549
+
550
+ (
551
+ 𝑑
552
+ ,
553
+ 𝜃
554
+ )
555
+ :=
556
+
557
+ 𝑛
558
+ 0
559
+ =
560
+
561
+ cos
562
+
563
+ (
564
+ 𝜃
565
+ )
566
+ ,
567
+ 0
568
+ ,
569
+
570
+ ,
571
+ 0
572
+ ,
573
+ 𝑛
574
+ 𝑑
575
+ =
576
+ sin
577
+
578
+ (
579
+ 𝜃
580
+ )
581
+ ,
582
+ 0
583
+ ,
584
+
585
+ ,
586
+ 0
587
+
588
+ .
589
+
590
+ (7)
591
+
592
+ They define hyperplanes that satisfy
593
+
594
+
595
+ 𝑥
596
+ 0
597
+
598
+ cos
599
+
600
+ (
601
+ 𝜃
602
+ )
603
+
604
+ 𝑥
605
+ 𝑑
606
+
607
+ sin
608
+
609
+ (
610
+ 𝜃
611
+ )
612
+ =
613
+ 0
614
+
615
+ (8)
616
+
617
+ The sparsity of
618
+ 𝐧
619
+
620
+ (
621
+ 𝑑
622
+ ,
623
+ 𝜃
624
+ )
625
+ yields a compact
626
+ 𝑂
627
+
628
+ (
629
+ 1
630
+ )
631
+ decision procedure:
632
+
633
+
634
+ 𝑆
635
+
636
+ (
637
+ 𝑥
638
+ )
639
+ =
640
+ sign
641
+
642
+ (
643
+ max
644
+
645
+ (
646
+ 0
647
+ ,
648
+ sin
649
+
650
+ (
651
+ 𝜃
652
+ )
653
+
654
+ 𝑥
655
+ 𝑑
656
+
657
+ cos
658
+
659
+ (
660
+ 𝜃
661
+ )
662
+
663
+ 𝑥
664
+ 0
665
+ )
666
+ )
667
+
668
+ (9)
669
+
670
+ Notably, this procedure determines points’ position relative to a geodesic decision boundary without computing the actual location of the geodesic on
671
+
672
+ 𝐷
673
+ ,
674
+ 𝐾
675
+ . Because of this, it is also curvature-agnostic. Hyperbolic decision trees compose splits analogously to Euclidean CART: the same objective functions are applicable, and so is the consideration of a single candidate decision boundary per point per (space-like) dimension, resulting in identical asymptotic complexity.
676
+
677
+ 3.3Choosing Candidate Hyperplanes
678
+
679
+ In Euclidean CART, candidate thresholds are classically chosen among midpoints between successive observed
680
+ 𝑥
681
+ 𝑑
682
+ values in the data. The hyperbolic case is slightly more nuanced.
683
+
684
+ Each decision boundary for dimension
685
+ 𝑑
686
+ is parameterized by an angle
687
+ 𝜃
688
+ , instead of a coordinate value. Each point
689
+ 𝐱
690
+ lies on a plane of angle
691
+ 𝜃
692
+ =
693
+ tan
694
+
695
+ 1
696
+
697
+ (
698
+ 𝑥
699
+ 0
700
+ /
701
+ 𝑥
702
+ 𝑑
703
+ )
704
+ . The midpoint angle
705
+ 𝜃
706
+ 𝑚
707
+ between two angles
708
+ 𝜃
709
+ 1
710
+ <
711
+ 𝜃
712
+ 2
713
+ is defined in terms of points lying on the intersection of these hyperplanes with
714
+
715
+ 𝐷
716
+ ,
717
+ 𝐾
718
+ . By setting all dimensions besides 0 and
719
+ 𝑑
720
+ to zero, we can solve for the angle corresponding to the point on
721
+
722
+ 𝐷
723
+ ,
724
+ 𝐾
725
+ that is exactly equidistant to the points corresponding to angles
726
+ 𝜃
727
+ 1
728
+ ,
729
+ 𝜃
730
+ 2
731
+ :
732
+
733
+
734
+ 𝜃
735
+ 𝑚
736
+ =
737
+ {
738
+ cot
739
+
740
+ 1
741
+
742
+ (
743
+ 𝑉
744
+
745
+ 𝑉
746
+ 2
747
+
748
+ 1
749
+ )
750
+
751
+  if 
752
+
753
+ 𝜃
754
+ 1
755
+ <
756
+ 𝜋
757
+
758
+ 𝜃
759
+ 2
760
+
761
+
762
+ cot
763
+
764
+ 1
765
+
766
+ (
767
+ 𝑉
768
+ +
769
+ 𝑉
770
+ 2
771
+
772
+ 1
773
+ )
774
+
775
+  if 
776
+
777
+ 𝜃
778
+ 1
779
+ >
780
+ 𝜋
781
+
782
+ 𝜃
783
+ 2
784
+
785
+ (10)
786
+
787
+ where
788
+ 𝑉
789
+ :=
790
+ sin
791
+
792
+ (
793
+ 2
794
+
795
+ 𝜃
796
+ 2
797
+
798
+ 1
799
+
800
+ 2
801
+
802
+ 𝜃
803
+ 2
804
+ )
805
+ 2
806
+
807
+ sin
808
+
809
+ (
810
+ 𝜃
811
+ 1
812
+ +
813
+ 𝜃
814
+ 2
815
+ )
816
+
817
+ sin
818
+
819
+ (
820
+ 𝜃
821
+ 2
822
+
823
+ 𝜃
824
+ 1
825
+ )
826
+ . See Appendix Section A.3 for a full derivation.
827
+
828
+ 3.4Parameterizing Decision Boundaries
829
+
830
+ Let
831
+ 𝐏
832
+
833
+ (
834
+ 𝜃
835
+ )
836
+ be a decision hyperplane learned by HyperDT (without loss of generality, assume
837
+ 𝑑
838
+ =
839
+ 1
840
+ ). We derive closed-form equations for the geodesic submanifold where
841
+ 𝐏
842
+
843
+ (
844
+ 𝜃
845
+ )
846
+ intersects
847
+
848
+ 𝐷
849
+ ,
850
+ 𝐾
851
+ . When all other dimensions are
852
+ 0
853
+ , this occurs when
854
+
855
+
856
+ 𝑥
857
+ 0
858
+ =
859
+ 𝛼
860
+
861
+ (
862
+ 𝜃
863
+ ,
864
+ 𝐾
865
+ )
866
+
867
+ sin
868
+
869
+ (
870
+ 𝜃
871
+ )
872
+ ;
873
+ 𝑥
874
+ 1
875
+ =
876
+ 𝛼
877
+
878
+ (
879
+ 𝜃
880
+ ,
881
+ 𝐾
882
+ )
883
+
884
+ cos
885
+
886
+ (
887
+ 𝜃
888
+ )
889
+ ,
890
+
891
+ (11)
892
+
893
+ where
894
+ 𝛼
895
+
896
+ (
897
+ 𝜃
898
+ ,
899
+ 𝐾
900
+ )
901
+ :=
902
+
903
+ sec
904
+
905
+ (
906
+ 2
907
+
908
+ 𝜃
909
+ )
910
+ /
911
+ 𝐾
912
+ . Note that this is also the point on the intersection of
913
+ 𝐏
914
+
915
+ (
916
+ 𝜃
917
+ )
918
+ and
919
+
920
+ 𝐷
921
+ ,
922
+ 𝐾
923
+ that is closest to the origin. We use this to parameterize the entire geodesic submanifold
924
+ 𝐆
925
+ 𝐝
926
+ resulting from intersecting the plane with
927
+
928
+ 𝐷
929
+ ,
930
+ 𝐾
931
+ :
932
+
933
+
934
+ 𝐯
935
+ 𝟎
936
+
937
+ =
938
+
939
+ sin
940
+
941
+ (
942
+ 𝜃
943
+ )
944
+ ,
945
+ cos
946
+
947
+ (
948
+ 𝜃
949
+ )
950
+ ,
951
+ 0
952
+ ,
953
+
954
+
955
+
956
+ (12)
957
+
958
+
959
+ 𝐮
960
+ 𝐝
961
+
962
+ =
963
+
964
+ 0
965
+ ,
966
+
967
+ ,
968
+ 𝑢
969
+ 𝑑
970
+ 𝑑
971
+ =
972
+ 1
973
+ ,
974
+
975
+ ,
976
+ 0
977
+
978
+ ,
979
+ 2
980
+
981
+ 𝑑
982
+
983
+ 𝐷
984
+
985
+ (13)
986
+
987
+
988
+ 𝐆
989
+ 𝟏
990
+
991
+ (
992
+ 𝜃
993
+ ,
994
+ 𝐾
995
+ )
996
+
997
+ =
998
+ {
999
+ cosh
1000
+
1001
+ (
1002
+ 𝑡
1003
+ )
1004
+
1005
+ 𝛼
1006
+
1007
+ (
1008
+ 𝜃
1009
+ ,
1010
+ 𝐾
1011
+ )
1012
+
1013
+ 𝐯
1014
+ 𝟎
1015
+ +
1016
+ sinh
1017
+
1018
+ (
1019
+ 𝑡
1020
+ )
1021
+
1022
+ 𝐮
1023
+ 𝟐
1024
+ /
1025
+ 𝐾
1026
+ :
1027
+ 𝑡
1028
+
1029
+
1030
+ }
1031
+
1032
+ (14)
1033
+
1034
+
1035
+ 𝐆
1036
+ 𝐝
1037
+
1038
+ (
1039
+ 𝜃
1040
+ ,
1041
+ 𝐾
1042
+ )
1043
+
1044
+ =
1045
+ {
1046
+ cosh
1047
+
1048
+ (
1049
+ 𝑡
1050
+ )
1051
+
1052
+ 𝐯
1053
+ 𝐝
1054
+
1055
+ 𝟏
1056
+ +
1057
+ sinh
1058
+
1059
+ (
1060
+ 𝑡
1061
+ )
1062
+
1063
+ 𝐮
1064
+ 𝐝
1065
+ +
1066
+ 𝟏
1067
+ /
1068
+ 𝐾
1069
+ :
1070
+ 𝐯
1071
+ 𝐝
1072
+
1073
+ 𝟏
1074
+
1075
+ 𝐆
1076
+ 𝐝
1077
+
1078
+ 𝟏
1079
+
1080
+ (
1081
+ 𝜃
1082
+ ,
1083
+ 𝐾
1084
+ )
1085
+ ,
1086
+ 𝑡
1087
+
1088
+
1089
+ }
1090
+
1091
+ (15)
1092
+
1093
+ For visualization, we use
1094
+
1095
+ 2
1096
+ ,
1097
+ 1
1098
+ projected to the Poincaré disk
1099
+
1100
+ 2
1101
+ ,
1102
+ 1
1103
+ . We recursively partition the space, plotting decision boundaries at each level and coloring the partitioned space by the majority class. Figure 2 shows an example of such a plot. See Appendix Section A.2 for a full derivation.
1104
+
1105
+ Figure 2: Learned HyperDT decision boundaries for 2, 3, 4, and 5-class mixtures of wrapped normal distributions visualized on the Poincaré disk. All trees have a maximum depth of 3 and forgo post-training pruning. In the visualization, regions are colored according to their predicted class labels while data points are colored according to their true class labels.
1106
+ 3.5Hyperboloid Random Forests
1107
+
1108
+ Analogously to the Euclidean case, creating a hyperboloid random forest is possible by training an ensemble of hyperboloid decision trees on randomly resampled versions of the training data. For speed, we implement HyperRF, a multithreaded version of hyperboloid random forests wherein each tree is trained as a separate process.
1109
+
1110
+ 4Classification Experiments
1111
+ 4.1Performance Benchmark Baselines
1112
+
1113
+ For decision trees, we compare our method to standard Euclidean decision trees as implemented in scikit-learn (Pedregosa et al., 2011). Since we use the same parameters as the scikit-learn decision tree and random forest classes, we can ensure that implementation details like maximum depth and number of points in a leaf node can be standardized. We additionally compare our random forest method to HoroRF (Doorenbos et al., 2023), another ensemble classifier for hyperbolic data.
1114
+
1115
+ Since HoroRF does not implement a single decision tree method, we modified its code to avoid resampling the training data when training a single tree. We call this version HoroDT.
1116
+
1117
+ For all predictors, we use trees with depth
1118
+
1119
+ 3
1120
+ and
1121
+
1122
+ 1
1123
+ sample per leaf. For random forests, all methods use an ensemble of 12 trees. We explore performance in
1124
+ 𝐷
1125
+ =
1126
+ 2
1127
+ , 4, 8, and 16 dimensions.
1128
+
1129
+ 4.2 Datasets
1130
+ Synthetic Datasets.
1131
+
1132
+ We create a hyperbolic mixture of Gaussians, the canonical synthetic dataset for classification benchmarks, following Cho et al. (2018). We use the wrapped normal distribution on the hyperboloid for each Gaussian as described in Nagano et al. (2019). We draw the means of each Gaussian component from a normal distribution in the tangent plane at the origin and project it onto the hyperboloid directly using an exponential map. The timelike components of random Gaussians may grow quite large in high dimensions, creating both numerical instability and trivially separable clusters. We thus shrink the covariance matrix
1133
+ 𝐷
1134
+ -fold. See Section A.4 for details.
1135
+
1136
+ NeuroSEED RNA Embeddings.
1137
+
1138
+ NeuroSEED (Corso et al., 2021) is a method for embedding DNA sequences into a (potentially hyperbolic) latent space by using a Siamese neural network with a distance-preserving loss function. This method encodes DNA sequences directly into latent space without first constructing a phylogenetic tree—or, equivalently, it embeds a dense graph of pairwise edit distances. We trained 2, 4, 8, and 16-dimensional Poincaré embeddings of the 1,262,987 16S ribosomal RNA sequences from the Greengenes database (McDonald et al., 2023), then filtered them to the 37,215 that have been identified in the American Gut Project McDonald et al. (2018). This downsampling yields a clinically relevant subset of all Prokaryote species. For the purposes of these benchmarks, however, we restricted ourselves to predicting the six most abundant phyla: Firmicutes, Proteobacteria, Bacteroidetes, Actinobacteria, Acidobacteria, and Plantomycetes.
1139
+
1140
+ Polblogs Graph Embeddings.
1141
+
1142
+ We use Polblogs (Adamic & Glance, 2005), a canonical dataset in the hyperbolic embeddings literature for graph embeddings. In the Polblogs dataset, nodes represent political blogs during the 2004 United States presidential election, and edges represent hyperlinks between blogs. Each blog is labeled according to its political affiliation, “liberal” or “conservative.” We use the hypll (van Spengler et al., 2023) Python implementation of the Nickel & Kiela (2017) method to compute 10 randomly initialized Poincaré disk embeddings in 2, 4, 8, and 16 dimensions.
1143
+
1144
+ 4.3 Benchmarking Procedure
1145
+
1146
+ We benchmarked our method against Euclidean random forests as implemented in scikit-learn, and against HoroRF. Each predictor was run with the same settings, including dataset and random seed. HoroRF and HoroDT used Poincaré disk coordinates and all other modelsused hyperboloid model coordinates. Each dataset was converted to the appropriate model of hyperbolic space for its predictor before training.
1147
+
1148
+ For Gaussian and NeuroSEED datasets, we drew 100, 200, 400, and 800 samples using the same five seeds. We recorded micro- and macro-averaged F1 scores, AUPRs, and run times under 5-fold cross-validation. Cross-validation instances were seeded identically across predictors. We could not produce per-fold timing for HoroRF, so instead we recorded the time it took to run the full 5-fold cross-validation. For fairness, we timed the data loading scripts separately and subtracted these times from the reported HoroRF times.
1149
+
1150
+ We conducted paired, two-tailed t-tests comparing each pair of predictors and marked significant differences in Table 1. A full table of
1151
+ 𝑝
1152
+ -values is available in Section A.7 in the Appendix.
1153
+
1154
+ Benchmarks were conducted on an Ubuntu 22.04 machine equipped with an Intel Core i7-8700 CPU (6 cores, 3.20 GHz), an NVIDIA GeForce GTX 1080 GPU with 11 GiB of VRAM, and 15 GiB of RAM. Storage was handled by a 2TB HDD and a 219GB SSD. Experiments were implemented using Python 3.11.4, accelerated by CUDA 11.4 with driver version 470.199.02.
1155
+
1156
+ 4.4Results
1157
+ Decision Trees Random Forests
1158
+ Data
1159
+ 𝐷
1160
+
1161
+ 𝑛
1162
+ HyperDT Sklearn HoroDT HyperRF Sklearn HoroRF
1163
+
1164
+ Gaussian
1165
+ 2 100 89.10
1166
+
1167
+ 87.90 84.60 90.70
1168
+
1169
+
1170
+ 87.50 86.30
1171
+ 200 90.05
1172
+
1173
+ 89.55 84.60 90.60 89.15 89.10
1174
+ 400 90.97
1175
+
1176
+
1177
+ 89.53 85.55 91.32
1178
+
1179
+
1180
+ 89.00 88.88
1181
+ 800 91.88
1182
+
1183
+
1184
+ 90.14 85.75 91.99
1185
+
1186
+
1187
+ 89.33 89.45
1188
+ 4 100 98.70
1189
+
1190
+ 97.70 93.60 98.40 97.90 97.90
1191
+ 200 98.75
1192
+
1193
+
1194
+ 98.10 95.80 98.85
1195
+
1196
+
1197
+ 97.90 98.05
1198
+ 400 99.25
1199
+
1200
+
1201
+ 98.25 96.92 99.30
1202
+
1203
+
1204
+ 98.22 98.50
1205
+ 800 99.30
1206
+
1207
+
1208
+ 98.36 97.27 99.36
1209
+
1210
+
1211
+ 98.21 98.76
1212
+ 8 100 99.70
1213
+
1214
+ 99.60 97.70 99.70 99.50 99.10
1215
+ 200 99.65
1216
+
1217
+ 99.60 98.20 99.75 99.70 99.75
1218
+ 400 99.90
1219
+
1220
+ 99.88 99.10 99.88 99.93 99.88
1221
+ 800 99.96
1222
+
1223
+ 99.90 99.38 99.96 99.91 99.94
1224
+ 16 100 99.80
1225
+
1226
+ 99.50 98.80 99.80 99.60 99.60
1227
+ 200 99.95 100.00
1228
+
1229
+ 99.50 99.90 99.95 99.80
1230
+ 400 100.00
1231
+
1232
+ 99.97 99.90 100.00 100.00 99.95
1233
+ 800 100.00 99.99 99.90 100.00 99.99 99.92
1234
+
1235
+ NeuroSEED
1236
+ 2 100 56.60
1237
+
1238
+ 55.60 49.70 57.20 55.70 56.80
1239
+ 200 59.60
1240
+
1241
+
1242
+ 58.45 50.35 60.10 58.20 60.25
1243
+ 400 61.78
1244
+
1245
+
1246
+ 61.00 50.62 61.58
1247
+
1248
+
1249
+ 59.47 59.33
1250
+ 800 61.69
1251
+
1252
+ 61.68 54.11 62.05
1253
+
1254
+
1255
+ 59.75 59.94
1256
+ 4 100 80.40
1257
+
1258
+ 80.30 53.40 80.90
1259
+
1260
+ 79.20 71.50
1261
+ 200 83.60 83.70
1262
+
1263
+ 52.70 84.45
1264
+
1265
+
1266
+ 82.00 70.40
1267
+ 400 83.88
1268
+
1269
+ 83.83 54.08 84.65
1270
+
1271
+
1272
+ 82.33 65.20
1273
+ 800 84.49 84.50
1274
+
1275
+ 55.69 84.96
1276
+
1277
+
1278
+ 82.03 65.70
1279
+ 8 100 73.80 74.00
1280
+
1281
+ 52.60 79.50 82.80*
1282
+
1283
+ 70.80
1284
+ 200 78.30 78.40
1285
+
1286
+ 55.35 81.40 84.35*
1287
+
1288
+ 65.55
1289
+ 400 79.45 79.57
1290
+
1291
+ 52.30 82.42 86.40*
1292
+
1293
+ 62.88
1294
+ 800 80.76
1295
+
1296
+ 80.76
1297
+
1298
+ 50.55 82.03 86.34*
1299
+
1300
+ 57.69
1301
+ 16 100 74.10
1302
+
1303
+ 73.50 61.80 80.70 79.90 83.30
1304
+ 200 75.90
1305
+
1306
+ 75.55 66.75 82.70 82.55 83.85
1307
+ 400 77.05
1308
+
1309
+ 77.03 68.80 82.30 84.73 85.90*
1310
+ 800 79.21 79.22
1311
+
1312
+ 69.59 82.44 84.44 85.49*
1313
+
1314
+ Polblogs
1315
+ 2 979 71.04
1316
+
1317
+ 70.35 64.73 71.40 71.65
1318
+
1319
+ 66.33
1320
+ 4 979 71.53
1321
+
1322
+ 70.83 62.38 72.31
1323
+
1324
+ 72.10 68.53
1325
+ 8 979 74.02 74.85*
1326
+
1327
+ 61.58 74.87 75.36
1328
+
1329
+ 63.60
1330
+ 16 979 75.06
1331
+
1332
+ 75.05
1333
+
1334
+ 63.70 76.36 76.80
1335
+
1336
+ 69.12
1337
+ Table 1:Mean micro-F1 scores for classification benchmarks over 10 seeds and 5 folds. The highest-scoring decision tree and random forests are bolded separately.
1338
+ *
1339
+ means a predictor beat HyperRF,
1340
+
1341
+ means a predictor beat HoroRF, and
1342
+
1343
+ means a predictor beat scikit-learn, with
1344
+ 𝑝
1345
+ <
1346
+ 0.05
1347
+ .
1348
+ Classification Scores.
1349
+
1350
+ The results of the classification benchmark are summarized in Table 1. Out of 36 distinct dataset, dimension, and sample size combinations, HyperDT had the highest score 28 times (one of which was a tie with scikit-learn). We demonstrated a statistically significant advantage over scikit-learn decision trees in 7 cases, and over HoroRF in 27 cases. scikit-learn statistically outperformed HyperDT once, on the 8-dimensional Polblogs dataset.
1351
+
1352
+ Similarly, HyperRF won 22 times, tying once each with scikit-learn and HoroRF, and statistically outperforming scikit-learn in 11 cases and HoroRF in 13 cases. scikit-learn statistically outperformed HyperRF in 4 cases, all on 8-dimensional NeuroSEED data, and HoroRF statistically outperformed HyperRF in 2 cases, both on 16-dimensional NeuroSEED data.
1353
+
1354
+ Overall, both HyperDT and HyperRF showed substantial advantages over comparable methods on the datasets and hyperparameters tested. In high dimensions, classifiers tended to converge to uniformly high performance. The best model for NeuroSEED embeddings varied by dimensionality, a phenomenon that warrants further investigation.
1355
+
1356
+ Runtime Analysis.
1357
+
1358
+ In addition to accuracies, we report runtimes for each classifier on each task. In particular, we are interested in the asymptotic behavior of our predictors as a function of the number of samples being considered. These runtimes are plotted by dataset in Figure 3. We demonstrate that our method, while slower than the scikit-learn implementation by a constant factor, is always faster than HoroRF and grows linearly in runtime with the number of samples. In contrast, HoroRF grows quadratically in runtime as the number of samples increases. This is likely due to the additional complexity of learning horospherical decision boundaries using HoroSVM.
1359
+
1360
+ Because HoroRF is optimized for GPU, whereas HyperRF is optimized for parallelism on CPU, exact runtime ratios can be misleading and highly machine-dependent. Similarly, HyperRF may lack some optimizations found in scikit-learn. Therefore, we emphasize the asymptotic aspect of this benchmark, which is agnostic to hardware details and constant-time optimizations.
1361
+
1362
+ Figure 3:Time to run 5-fold cross-validation, averaged over 10 seeds for each classifier as a function of the number of points. Shaded regions are 95% confidence intervals. Split by dataset: (a) wrapped normal mixture, (b) NeuroSEED OTU embeddings, and (c) Polblogs embeddings.
1363
+ Additional experiments
1364
+
1365
+ The results of additional experiments can be found in the Appendix. Section A.6.1 contains additional scaling benchmarks. Sections A.6.2 and A.6.3 extend benchmarks to other hyperbolic classifiers and other models of hyperbolic geometry, respectively. Sections A.6.4 and A.6.5 test performance on image and text embeddings, respectively. Finally, Section A.6.6 tests the impact of ablating the midpoint computation described in Equation 10.
1366
+
1367
+ 5Conclusion
1368
+
1369
+ We have introduced HyperDT, a novel formulation of decision tree algorithms tailored for hyperbolic spaces. This approach leverages inner products to establish a streamlined decision procedure via geodesic submanifolds. HyperDT exhibits constant-time evaluation at each split and does not rely on Riemannian optimization nor pairwise point comparisons in training or prediction. We extended this technique to random forests with HyperRF, providing versatile tools for classification and regression tasks. HyperDT is more accurate than analogous methods in both Euclidean and hyperbolic spaces, while maintaining asymptotic complexity on par with Euclidean decision trees.
1370
+
1371
+ The methodological innovation centers on selecting the appropriate decision boundary element to substitute for axis-parallel hyperplanes in the hyperbolic space context. Remarkably, homogeneous hyperplanes serve as highly effective building blocks, preserving continuity and convexity of subspaces at each partition and deviating from traditional approaches by offering straightforward expressions that avoid Riemannian optimization. The trick of using single-axis rotations of the base plane further simplifies and speeds up computation.
1372
+
1373
+ HyperDT and HyperRF stand out for their speed, simplicity, and stability. The hyperboloid model uses a small, constant number of simple trigonometric expressions at each decision tree node, thereby minimizing numerical instability concerns arising from floating-point issues. Finally, the alignment of hyperboloid geometry to the tree-like structural features of hierarchy-oriented data manifests in single trees performing extraordinarily well, reducing the need for ensemble methods. Such single trees are faster to learn and use. More importantly, they offer full interpretability.
1374
+
1375
+ We offer an implementation adhering to standard scikit-learn API conventions, ensuring ease of use. Future research avenues can expand HyperRF with popular features such as gradient boosting, optimization enhancements (e.g., pruning), and additional applications for classification and regression within hyperbolic space. Additional optimizations can greatly improve both performance and usability e.g. through optimizing performance per the standards of scikit-learn. Furthermore, new advanced decision tree methods such as Lin et al. (2022), McTavish et al. (2022), and Mazumder et al. (2022), which are based on axis-parallel hyperplanes but apply non-greedy optimizations over multiple splits, can be reformulated in terms of dot-products and applied to homogeneous hyperplanes instead.
1376
+
1377
+ Acknowledgments
1378
+
1379
+ We acknowledge the support of the NSF Graduate Research Fellowship under grant no. DGE-2036197 to Philippe Chlenski. We also thank Quentin Chu and Swati Negi for their early use and feedback on our software, which has been instrumental in its development.
1380
+
1381
+ Ethics statement
1382
+
1383
+ This paper aims to advance the field of Machine Learning, conscious of its potential societal impacts and committed to adhering to the ICLR Code of Ethics. Although our research does not directly tackle sensitive ethical issues and we identify no specific societal consequences requiring individual emphasis, we recognize our work within the evolving landscape of machine learning ethics. Acknowledging that the ethical dimensions of machine learning are an area of ongoing exploration and debate, we commit to engaging responsibly with these broader considerations and adhering to ICLR guidelines to address any emerging concerns.
1384
+
1385
+ Reproducibility statement
1386
+
1387
+ To ensure the reproducibility of our work, we have taken comprehensive steps detailed across our paper, its appendices, and the supplemental GitHub repository. The main text delineates the methodologies and experimental setups, with proofs and derivations of nontrivial mathematical insights in the Appendix. Our GitHub repository houses all of the code, data processing steps, and additional documentation that support the empirical results presented. The README.md file for our GitHub repo contains up-to-date, detailed information on which files and notebooks reproduce which parts of the paper and links to data that is not publically available.
1388
+
1389
+ References
1390
+ Adamic & Glance (2005)
1391
+
1392
+ Lada A. Adamic and Natalie Glance.The political blogosphere and the 2004 U.S. election: divided they blog.In Proceedings of the 3rd international workshop on Link discovery, LinkKDD ’05, pp.  36–43, New York, NY, USA, August 2005. Association for Computing Machinery.ISBN 978-1-59593-215-0.doi: 10.1145/1134271.1134277.URL https://doi.org/10.1145/1134271.1134277.
1393
+ Agibetov et al. (2019)
1394
+
1395
+ Asan Agibetov, Georg Dorffner, and Matthias Samwald.Using hyperbolic large-margin classifiers for biological link prediction.In Proceedings of the 5th Workshop on Semantic Deep Learning (SemDeep-5), pp.  26–30, Macau, China, August 2019. Association for Computational Linguistics.URL https://aclanthology.org/W19-5805.
1396
+ Bdeir et al. (2023)
1397
+
1398
+ Ahmad Bdeir, Kristian Schwethelm, and Niels Landwehr.Hyperbolic Geometry in Computer Vision: A Novel Framework for Convolutional Neural Networks, March 2023.URL https://arxiv.org/abs/2303.15919v2.
1399
+ Breiman (2001)
1400
+
1401
+ Leo Breiman.Random forests.Machine Learning, 45(1):5–32, October 2001.ISSN 1573-0565.doi: 10.1023/A:1010933404324.URL https://doi.org/10.1023/A:1010933404324.
1402
+ Breiman (2017)
1403
+
1404
+ Leo Breiman.Classification and Regression Trees.Routledge, New York, October 2017.ISBN 978-1-315-13947-0.doi: 10.1201/9781315139470.
1405
+ Chamberlain et al. (2017)
1406
+
1407
+ Benjamin Paul Chamberlain, James Clough, and Marc Peter Deisenroth.Neural Embeddings of Graphs in Hyperbolic Space, May 2017.URL http://arxiv.org/abs/1705.10359.arXiv:1705.10359 [cs, stat].
1408
+ Chami et al. (2019)
1409
+
1410
+ Ines Chami, Rex Ying, Christopher Ré, and Jure Leskovec.Hyperbolic Graph Convolutional Neural Networks, October 2019.URL http://arxiv.org/abs/1910.12933.arXiv:1910.12933 [cs, stat].
1411
+ Chami et al. (2020)
1412
+
1413
+ Ines Chami, Albert Gu, Vaggos Chatziafratis, and Christopher Ré.From Trees to Continuous Embeddings and Back: Hyperbolic Hierarchical Clustering, October 2020.URL https://arxiv.org/abs/2010.00402v1.
1414
+ Chami et al. (2021)
1415
+
1416
+ Ines Chami, Albert Gu, Dat Nguyen, and Christopher Ré.HoroPCA: Hyperbolic Dimensionality Reduction via Horospherical Projections, June 2021.URL http://arxiv.org/abs/2106.03306.arXiv:2106.03306 [cs].
1417
+ Chen & Guestrin (2016)
1418
+
1419
+ Tianqi Chen and Carlos Guestrin.XGBoost: A Scalable Tree Boosting System.In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp.  785–794, August 2016.doi: 10.1145/2939672.2939785.URL http://arxiv.org/abs/1603.02754.arXiv:1603.02754 [cs].
1420
+ Chen et al. (2022)
1421
+
1422
+ Weize Chen, Xu Han, Yankai Lin, Hexu Zhao, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou.Fully Hyperbolic Neural Networks, March 2022.URL http://arxiv.org/abs/2105.14686.arXiv:2105.14686 [cs].
1423
+ Cho et al. (2018)
1424
+
1425
+ Hyunghoon Cho, Benjamin DeMeo, Jian Peng, and Bonnie Berger.Large-Margin Classification in Hyperbolic Space, June 2018.URL http://arxiv.org/abs/1806.00437.arXiv:1806.00437 [cs, stat].
1426
+ Corso et al. (2021)
1427
+
1428
+ Gabriele Corso, Rex Ying, Michal Pándy, Petar Veličković, Jure Leskovec, and Pietro Liò.Neural Distance Embeddings for Biological Sequences, October 2021.URL http://arxiv.org/abs/2109.09740.arXiv:2109.09740 [cs, q-bio].
1429
+ De Sa et al. (2018)
1430
+
1431
+ Christopher De Sa, Albert Gu, Christopher Ré, and Frederic Sala.Representation Tradeoffs for Hyperbolic Embeddings, April 2018.URL http://arxiv.org/abs/1804.03329.arXiv:1804.03329 [cs, stat].
1432
+ Desai et al. (2023)
1433
+
1434
+ Karan Desai, Maximilian Nickel, Tanmay Rajpurohit, Justin Johnson, and Ramakrishna Vedantam.Hyperbolic Image-Text Representations, June 2023.URL http://arxiv.org/abs/2304.09172.arXiv:2304.09172 [cs].
1435
+ Ding & Regev (2021)
1436
+
1437
+ Jiarui Ding and Aviv Regev.Deep generative model embedding of single-cell RNA-Seq profiles on hyperspheres and hyperbolic spaces.Nature Communications, 12(1):2554, May 2021.ISSN 2041-1723.doi: 10.1038/s41467-021-22851-4.URL https://www.nature.com/articles/s41467-021-22851-4.Number: 1 Publisher: Nature Publishing Group.
1438
+ Doorenbos et al. (2023)
1439
+
1440
+ Lars Doorenbos, Pablo Márquez-Neila, Raphael Sznitman, and Pascal Mettes.Hyperbolic Random Forests, August 2023.URL http://arxiv.org/abs/2308.13279.arXiv:2308.13279 [cs].
1441
+ Fan et al. (2023)
1442
+
1443
+ Xiran Fan, Chun-Hao Yang, and Baba C. Vemuri.Horospherical Decision Boundaries for Large Margin Classification in Hyperbolic Space, June 2023.URL http://arxiv.org/abs/2302.06807.arXiv:2302.06807 [cs, stat].
1444
+ Fellbaum (2010)
1445
+
1446
+ Christiane Fellbaum.WordNet.In Roberto Poli, Michael Healy, and Achilles Kameas (eds.), Theory and Applications of Ontology: Computer Applications, pp.  231–243. Springer Netherlands, Dordrecht, 2010.ISBN 978-90-481-8847-5.doi: 10.1007/978-90-481-8847-5˙10.URL https://doi.org/10.1007/978-90-481-8847-5_10.
1447
+ Ganea et al. (2018)
1448
+
1449
+ Octavian-Eugen Ganea, Gary Bécigneul, and Thomas Hofmann.Hyperbolic Entailment Cones for Learning Hierarchical Embeddings, June 2018.URL http://arxiv.org/abs/1804.01882.arXiv:1804.01882 [cs, stat].
1450
+ Gu et al. (2019)
1451
+
1452
+ Albert Gu, Frederic Sala, Beliz Gunel, and Christopher Re.Learning mixed-curvature representations in products of model spaces.2019.
1453
+ Gulcehre et al. (2018)
1454
+
1455
+ Caglar Gulcehre, Misha Denil, Mateusz Malinowski, Ali Razavi, Razvan Pascanu, Karl Moritz Hermann, Peter Battaglia, Victor Bapst, David Raposo, Adam Santoro, and Nando de Freitas.Hyperbolic Attention Networks, May 2018.URL http://arxiv.org/abs/1805.09786.arXiv:1805.09786 [cs].
1456
+ Hughes et al. (2004)
1457
+
1458
+ Timothy Hughes, Young Hyun, and David A. Liberles.Visualising very large phylogenetic trees in three dimensional hyperbolic space.BMC Bioinformatics, 5(1):48, April 2004.ISSN 1471-2105.doi: 10.1186/1471-2105-5-48.URL https://doi.org/10.1186/1471-2105-5-48.
1459
+ Jiang et al. (2022)
1460
+
1461
+ Yueyu Jiang, Puoya Tabaghi, and Siavash Mirarab.Learning Hyperbolic Embedding for Phylogenetic Tree Placement and Updates.Biology, 11(9):1256, September 2022.ISSN 2079-7737.doi: 10.3390/biology11091256.URL https://www.mdpi.com/2079-7737/11/9/1256.Number: 9 Publisher: Multidisciplinary Digital Publishing Institute.
1462
+ Lin et al. (2022)
1463
+
1464
+ Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, and Margo Seltzer.Generalized and Scalable Optimal Sparse Decision Trees, November 2022.URL http://arxiv.org/abs/2006.08690.arXiv:2006.08690 [cs, stat].
1465
+ Marconi et al. (2020)
1466
+
1467
+ Gian Maria Marconi, Lorenzo Rosasco, and Carlo Ciliberto.Hyperbolic Manifold Regression, May 2020.URL http://arxiv.org/abs/2005.13885.arXiv:2005.13885 [cs, stat].
1468
+ Mazumder et al. (2022)
1469
+
1470
+ Rahul Mazumder, Xiang Meng, and Haoyue Wang.Quant-BnB: A Scalable Branch-and-Bound Method for Optimal Decision Trees with Continuous Features, June 2022.URL http://arxiv.org/abs/2206.11844.arXiv:2206.11844 [cs].
1471
+ McDonald et al. (2018)
1472
+
1473
+ Daniel McDonald, Embriette Hyde, Justine W. Debelius, James T. Morton, Antonio Gonzalez, Gail Ackermann, Alexander A. Aksenov, Bahar Behsaz, Caitriona Brennan, Yingfeng Chen, Lindsay DeRight Goldasich, Pieter C. Dorrestein, Robert R. Dunn, Ashkaan K. Fahimipour, James Gaffney, Jack A. Gilbert, Grant Gogul, Jessica L. Green, Philip Hugenholtz, Greg Humphrey, Curtis Huttenhower, Matthew A. Jackson, Stefan Janssen, Dilip V. Jeste, Lingjing Jiang, Scott T. Kelley, Dan Knights, Tomasz Kosciolek, Joshua Ladau, Jeff Leach, Clarisse Marotz, Dmitry Meleshko, Alexey V. Melnik, Jessica L. Metcalf, Hosein Mohimani, Emmanuel Montassier, Jose Navas-Molina, Tanya T. Nguyen, Shyamal Peddada, Pavel Pevzner, Katherine S. Pollard, Gholamali Rahnavard, Adam Robbins-Pianka, Naseer Sangwan, Joshua Shorenstein, Larry Smarr, Se Jin Song, Timothy Spector, Austin D. Swafford, Varykina G. Thackray, Luke R. Thompson, Anupriya Tripathi, Yoshiki Vázquez-Baeza, Alison Vrbanac, Paul Wischmeyer, Elaine Wolfe, Qiyun Zhu, American Gut Consortium, and Rob Knight.American Gut: an Open Platform for Citizen Science Microbiome Research.mSystems, 3(3):e00031–18, 2018.ISSN 2379-5077.doi: 10.1128/mSystems.00031-18.
1474
+ McDonald et al. (2023)
1475
+
1476
+ Daniel McDonald, Yueyu Jiang, Metin Balaban, Kalen Cantrell, Qiyun Zhu, Antonio Gonzalez, James T. Morton, Giorgia Nicolaou, Donovan H. Parks, Søren M. Karst, Mads Albertsen, Philip Hugenholtz, Todd DeSantis, Se Jin Song, Andrew Bartko, Aki S. Havulinna, Pekka Jousilahti, Susan Cheng, Michael Inouye, Teemu Niiranen, Mohit Jain, Veikko Salomaa, Leo Lahti, Siavash Mirarab, and Rob Knight.Greengenes2 unifies microbial data in a single reference tree.Nature Biotechnology, pp.  1–4, July 2023.ISSN 1546-1696.doi: 10.1038/s41587-023-01845-1.URL https://www.nature.com/articles/s41587-023-01845-1.Publisher: Nature Publishing Group.
1477
+ McTavish et al. (2022)
1478
+
1479
+ Hayden McTavish, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, and Margo Seltzer.Fast Sparse Decision Tree Optimization via Reference Ensembles, July 2022.URL http://arxiv.org/abs/2112.00798.arXiv:2112.00798 [cs].
1480
+ Miolane et al. (2018)
1481
+
1482
+ Nina Miolane, Johan Mathe, Claire Donnat, Mikael Jorda, and Xavier Pennec.geomstats: a Python Package for Riemannian Geometry in Machine Learning, May 2018.URL https://arxiv.org/abs/1805.08308v2.
1483
+ Nagano et al. (2019)
1484
+
1485
+ Yoshihiro Nagano, Shoichiro Yamaguchi, Yasuhiro Fujita, and Masanori Koyama.A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based Learning, May 2019.URL http://arxiv.org/abs/1902.02992.arXiv:1902.02992 [cs, stat].
1486
+ Nickel & Kiela (2017)
1487
+
1488
+ Maximilian Nickel and Douwe Kiela.Poincar\’e Embeddings for Learning Hierarchical Representations, May 2017.URL http://arxiv.org/abs/1705.08039.arXiv:1705.08039 [cs, stat].
1489
+ Nickel & Kiela (2018)
1490
+
1491
+ Maximilian Nickel and Douwe Kiela.Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic Geometry, July 2018.URL http://arxiv.org/abs/1806.03417.arXiv:1806.03417 [cs, stat].
1492
+ Pedregosa et al. (2011)
1493
+
1494
+ Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay.Scikit-learn: Machine Learning in Python.Journal of Machine Learning Research, 12(85):2825–2830, 2011.ISSN 1533-7928.URL http://jmlr.org/papers/v12/pedregosa11a.html.
1495
+ Radford et al. (2021)
1496
+
1497
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.Learning Transferable Visual Models From Natural Language Supervision, February 2021.URL http://arxiv.org/abs/2103.00020.arXiv:2103.00020 [cs].
1498
+ Sarkar (2012)
1499
+
1500
+ Rik Sarkar.Low Distortion Delaunay Embedding of Trees in Hyperbolic Plane.In Marc Van Kreveld and Bettina Speckmann (eds.), Graph Drawing, volume 7034, pp.  355–366. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.ISBN 978-3-642-25877-0 978-3-642-25878-7.doi: 10.1007/978-3-642-25878-7˙34.URL http://link.springer.com/10.1007/978-3-642-25878-7_34.Series Title: Lecture Notes in Computer Science.
1501
+ Tay et al. (2018)
1502
+
1503
+ Yi Tay, Luu Anh Tuan, and Siu Cheung Hui.Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering.In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM ’18, pp. 583–591, New York, NY, USA, February 2018. Association for Computing Machinery.ISBN 978-1-4503-5581-0.doi: 10.1145/3159652.3159664.URL https://doi.org/10.1145/3159652.3159664.
1504
+ Tifrea et al. (2018)
1505
+
1506
+ Alexandru Tifrea, Gary Bécigneul, and Octavian-Eugen Ganea.Poincar\’e GloVe: Hyperbolic Word Embeddings, November 2018.URL http://arxiv.org/abs/1810.06546.arXiv:1810.06546 [cs].
1507
+ van Spengler et al. (2023)
1508
+
1509
+ Max van Spengler, Philipp Wirth, and Pascal Mettes.HypLL: The Hyperbolic Learning Library, August 2023.URL http://arxiv.org/abs/2306.06154.arXiv:2306.06154 [cs].
1510
+ Appendix AAppendix
1511
+ A.1Conversion between Hyperboloid and Poincaré models
1512
+
1513
+ Letting
1514
+ 𝐱
1515
+ 𝐏
1516
+ be a point in
1517
+
1518
+ 𝐷
1519
+ ,
1520
+ 𝐾
1521
+ and
1522
+ 𝐱
1523
+ 𝐇
1524
+ be its equivalent in
1525
+
1526
+ 𝐷
1527
+ ,
1528
+ 𝐾
1529
+ ,
1530
+
1531
+
1532
+ 𝑥
1533
+ 𝑃
1534
+ ,
1535
+ 𝑖
1536
+
1537
+ =
1538
+ 𝑥
1539
+ 𝐻
1540
+ ,
1541
+ 𝑖
1542
+ 𝐾
1543
+ +
1544
+ 𝑥
1545
+ 𝐻
1546
+ ,
1547
+ 0
1548
+
1549
+ (16)
1550
+
1551
+
1552
+ 𝑥
1553
+ 𝐻
1554
+ ,
1555
+ 0
1556
+
1557
+ =
1558
+ 𝐾
1559
+
1560
+ 1
1561
+ +
1562
+
1563
+ 𝑥
1564
+ 𝑃
1565
+
1566
+ 2
1567
+ 2
1568
+ 1
1569
+
1570
+
1571
+ 𝑥
1572
+ 𝑃
1573
+
1574
+ 2
1575
+ 2
1576
+
1577
+ (17)
1578
+
1579
+
1580
+ 𝑥
1581
+ 𝐻
1582
+ ,
1583
+ 𝑖
1584
+
1585
+ =
1586
+ 𝐾
1587
+
1588
+ 2
1589
+
1590
+ 𝑥
1591
+ 𝑃
1592
+ ,
1593
+ 𝑖
1594
+ 1
1595
+
1596
+
1597
+ 𝑥
1598
+ 𝑃
1599
+
1600
+ 2
1601
+ 2
1602
+ .
1603
+
1604
+ (18)
1605
+ A.2Geodesic submanifold details
1606
+
1607
+ In this section, we will give a full parameterization of the geodesic submanifolds created by intersecting the decision hyperplanes learned by HyperDT and HyperRF with
1608
+
1609
+ 𝐷
1610
+ ,
1611
+ 𝐾
1612
+ . We enumerate the basis vectors of our decision hyperplanes using a formalism conducive to the following steps, calculate a scaling factor that ensures a basis vector reaches the surface of the manifold, construct a 1-dimensional geodesic arc from the basis vectors of a decision hyperplane, and finally recursively extend this arc to higher dimensions, culminating in a full
1613
+ (
1614
+ 𝐷
1615
+
1616
+ 1
1617
+ )
1618
+ -dimensional geodesic submanifold.
1619
+
1620
+ To reduce the complexity of notation necessitated by indexing over dimensions, we will assume without loss of generality that the decision hyperplane’s normal vector is nonzero in dimensions 0 and 1. We extend this convention to geodesic submanifolds by putting nonzero dimensions first: therefore, we first parameterize an arc along dimension 2, then turn it into a 2-dimensional submanifold along dimension 3, and so on. In general, a
1621
+ (
1622
+ 𝑑
1623
+
1624
+ 𝐷
1625
+ )
1626
+ -dimensional submanifold will be nonzero in the first
1627
+ 𝑑
1628
+ +
1629
+ 1
1630
+ ambient dimensions.
1631
+
1632
+ A.2.1Basis vectors of decision hyperplanes
1633
+
1634
+ Let
1635
+ 𝐏
1636
+
1637
+ (
1638
+ 𝜃
1639
+ )
1640
+ be a
1641
+ 𝐷
1642
+ -dimensional decision hyperplane learned by HyperDT. By our assumption above and Equation 7, the normal vector
1643
+ 𝐧
1644
+
1645
+ (
1646
+ 1
1647
+ ,
1648
+ 𝜃
1649
+ )
1650
+ of
1651
+ 𝐏
1652
+
1653
+ (
1654
+ 𝜃
1655
+ )
1656
+ is nonzero only in dimensions 0 and 1. Therefore
1657
+ 𝐏
1658
+
1659
+ (
1660
+ 𝜃
1661
+ )
1662
+ has
1663
+ 𝐷
1664
+ basis vectors:
1665
+
1666
+
1667
+ 𝐯
1668
+ 𝟎
1669
+
1670
+ =
1671
+
1672
+ sin
1673
+
1674
+ (
1675
+ 𝜃
1676
+ )
1677
+ ,
1678
+ cos
1679
+
1680
+ (
1681
+ 𝜃
1682
+ )
1683
+ ,
1684
+ 0
1685
+ ,
1686
+
1687
+
1688
+
1689
+ (19)
1690
+
1691
+
1692
+ 𝐮
1693
+ 𝐝
1694
+
1695
+ =
1696
+
1697
+ 0
1698
+ ,
1699
+
1700
+ ,
1701
+ 𝑢
1702
+ 𝑑
1703
+ 𝑑
1704
+ =
1705
+ 1
1706
+ ,
1707
+
1708
+ ,
1709
+ 0
1710
+
1711
+ ,
1712
+ 2
1713
+
1714
+ 𝑑
1715
+
1716
+ 𝐷
1717
+
1718
+ (20)
1719
+
1720
+ The
1721
+ 𝐮
1722
+ 𝐝
1723
+ vectors are standard basis vectors which are 1 in dimension
1724
+ 𝑑
1725
+ and 0 elsewhere.
1726
+
1727
+ A.2.2Computing vector scale
1728
+
1729
+ For what scaling factor
1730
+ 𝛼
1731
+ does
1732
+ 𝛼
1733
+
1734
+ 𝐯
1735
+ 𝟎
1736
+ lie on the manifold? We know that
1737
+ 𝛼
1738
+
1739
+ 𝐯
1740
+ 𝟎
1741
+ will always be zero in dimensions 2 through
1742
+ 𝐷
1743
+ , effectively reducing this to a simple 2-dimensional problem on
1744
+
1745
+ 2
1746
+ ,
1747
+ 𝐾
1748
+ :
1749
+
1750
+
1751
+ 𝑥
1752
+ 1
1753
+ 2
1754
+
1755
+ 𝑥
1756
+ 0
1757
+ 2
1758
+
1759
+ =
1760
+
1761
+ 1
1762
+ /
1763
+ 𝐾
1764
+
1765
+ (21)
1766
+
1767
+
1768
+ 𝑥
1769
+ 1
1770
+ 2
1771
+
1772
+ =
1773
+
1774
+ 1
1775
+ /
1776
+ 𝐾
1777
+ +
1778
+ 𝑥
1779
+ 0
1780
+ 2
1781
+
1782
+ (22)
1783
+
1784
+
1785
+ 𝛼
1786
+ 2
1787
+
1788
+ cos
1789
+ 2
1790
+
1791
+ (
1792
+ 𝜃
1793
+ )
1794
+
1795
+ =
1796
+
1797
+ 1
1798
+ /
1799
+ 𝐾
1800
+ +
1801
+ 𝛼
1802
+ 2
1803
+
1804
+ sin
1805
+ 2
1806
+
1807
+ (
1808
+ 𝜃
1809
+ )
1810
+
1811
+ (23)
1812
+
1813
+
1814
+ 𝛼
1815
+ 2
1816
+
1817
+ (
1818
+ cos
1819
+ 2
1820
+
1821
+ (
1822
+ 𝜃
1823
+ )
1824
+
1825
+ sin
1826
+ 2
1827
+
1828
+ (
1829
+ 𝜃
1830
+ )
1831
+ )
1832
+
1833
+ =
1834
+
1835
+ 1
1836
+ /
1837
+ 𝐾
1838
+
1839
+ (24)
1840
+
1841
+
1842
+ 𝐾
1843
+
1844
+ 𝛼
1845
+ 2
1846
+
1847
+ =
1848
+
1849
+ 1
1850
+ cos
1851
+ 2
1852
+
1853
+ (
1854
+ 𝜃
1855
+ )
1856
+
1857
+ sin
1858
+ 2
1859
+
1860
+ (
1861
+ 𝜃
1862
+ )
1863
+
1864
+ (25)
1865
+
1866
+
1867
+ 𝐾
1868
+
1869
+ 𝛼
1870
+ 2
1871
+
1872
+ =
1873
+
1874
+ 1
1875
+ cos
1876
+
1877
+ (
1878
+ 2
1879
+
1880
+ 𝜃
1881
+ )
1882
+
1883
+ (26)
1884
+
1885
+
1886
+ 𝛼
1887
+
1888
+ =
1889
+
1890
+ sec
1891
+
1892
+ (
1893
+ 2
1894
+
1895
+ 𝜃
1896
+ )
1897
+ 𝐾
1898
+
1899
+ (27)
1900
+
1901
+ The transition from Equation 25 to Equation 26 is due to the double-angle formula. See Figure 4 for a visual demonstration that rescaling
1902
+ 𝐯
1903
+ 𝟎
1904
+ by
1905
+ 𝛼
1906
+ works for the full range of
1907
+ 𝜃
1908
+ values in one dimension. To extend this insight to arbitrary angles and curvatures, we define an
1909
+ 𝛼
1910
+
1911
+ (
1912
+ 𝜃
1913
+ ,
1914
+ 𝐾
1915
+ )
1916
+ function
1917
+
1918
+
1919
+ 𝛼
1920
+
1921
+ (
1922
+ 𝜃
1923
+ ,
1924
+ 𝐾
1925
+ )
1926
+ =
1927
+
1928
+ sec
1929
+
1930
+ (
1931
+ 2
1932
+
1933
+ 𝜃
1934
+ )
1935
+ 𝐾
1936
+ =
1937
+
1938
+ sec
1939
+
1940
+ (
1941
+ 2
1942
+
1943
+ 𝜃
1944
+ )
1945
+ 𝐾
1946
+ .
1947
+
1948
+ (28)
1949
+ Figure 4:Rescaling basis vector
1950
+ 𝐯
1951
+ 𝟎
1952
+ =
1953
+
1954
+ sin
1955
+
1956
+ (
1957
+ 𝜃
1958
+ )
1959
+ ,
1960
+ cos
1961
+
1962
+ (
1963
+ 𝜃
1964
+ )
1965
+
1966
+ by
1967
+ 𝛼
1968
+
1969
+ (
1970
+ 𝜃
1971
+ ,
1972
+ 1
1973
+ )
1974
+ =
1975
+
1976
+ sec
1977
+
1978
+ (
1979
+ 2
1980
+
1981
+ 𝜃
1982
+ )
1983
+ produces a point on
1984
+
1985
+ 1
1986
+ ,
1987
+ 1
1988
+ for all
1989
+ 𝜃
1990
+ values between
1991
+ 𝜋
1992
+ /
1993
+ 4
1994
+ and
1995
+ 3
1996
+
1997
+ 𝜋
1998
+ /
1999
+ 4
2000
+ .
2001
+ A.2.3Geodesic arcs
2002
+
2003
+ As shown in Chami et al. (2019), a geodesic arc in
2004
+
2005
+ 𝐷
2006
+ ,
2007
+ 𝐾
2008
+ can be characterized as
2009
+ cosh
2010
+
2011
+ (
2012
+ 𝑡
2013
+ )
2014
+
2015
+ 𝐯
2016
+ *
2017
+ +
2018
+ sinh
2019
+
2020
+ (
2021
+ 𝑡
2022
+ )
2023
+
2024
+ 𝐮
2025
+ *
2026
+ for any pair of vectors
2027
+ (
2028
+ 𝐮
2029
+ *
2030
+ ,
2031
+ 𝐯
2032
+ *
2033
+ )
2034
+ where
2035
+
2036
+
2037
+
2038
+ 𝐮
2039
+ *
2040
+ ,
2041
+ 𝐮
2042
+ *
2043
+
2044
+
2045
+
2046
+ =
2047
+ 1
2048
+ /
2049
+ 𝐾
2050
+
2051
+ (29)
2052
+
2053
+
2054
+
2055
+ 𝐯
2056
+ *
2057
+ ,
2058
+ 𝐯
2059
+ *
2060
+
2061
+
2062
+
2063
+ =
2064
+
2065
+ 1
2066
+ /
2067
+ 𝐾
2068
+
2069
+ (30)
2070
+
2071
+
2072
+
2073
+ 𝐮
2074
+ *
2075
+ ,
2076
+ 𝐯
2077
+ *
2078
+
2079
+
2080
+
2081
+ =
2082
+ 0
2083
+ .
2084
+
2085
+ (31)
2086
+
2087
+ All
2088
+ 𝐮
2089
+ 𝐝
2090
+ vectors described in Equation 20, being purely spacelike, satisfy Equation 29 if rescaled by a factor of
2091
+ 𝐾
2092
+ so that their norms are
2093
+ 1
2094
+ /
2095
+ 𝐾
2096
+ . We arbitrarily choose to use
2097
+ 𝐮
2098
+ 𝟐
2099
+ /
2100
+ 𝐾
2101
+ in our paramterization. Since
2102
+ 𝛼
2103
+
2104
+ (
2105
+ 𝜃
2106
+ ,
2107
+ 𝐾
2108
+ )
2109
+
2110
+ 𝐯
2111
+ 𝟎
2112
+ lies on
2113
+
2114
+ 𝐷
2115
+ ,
2116
+ 𝐾
2117
+ , it satisfies Equation 30. Since
2118
+ 𝐯
2119
+ 𝟎
2120
+ and
2121
+ 𝐮
2122
+ 𝐝
2123
+ are disjoint in their nonzero dimensions, they trivially satisfy Equation 31 for any scaling factors. Letting
2124
+ 𝑡
2125
+ vary freely, the geodesic is given by
2126
+
2127
+
2128
+ 𝐠
2129
+ 𝟏
2130
+
2131
+ (
2132
+ 𝜃
2133
+ ,
2134
+ 𝐾
2135
+ ,
2136
+ 𝑡
2137
+ )
2138
+
2139
+ =
2140
+ cosh
2141
+
2142
+ (
2143
+ 𝑡
2144
+ )
2145
+
2146
+ 𝛼
2147
+
2148
+ (
2149
+ 𝜃
2150
+ ,
2151
+ 𝐾
2152
+ )
2153
+
2154
+ 𝐯
2155
+ 𝟎
2156
+ +
2157
+ sinh
2158
+
2159
+ (
2160
+ 𝑡
2161
+ )
2162
+
2163
+ 𝐮
2164
+ 𝟐
2165
+ /
2166
+ 𝐾
2167
+
2168
+ (32)
2169
+
2170
+
2171
+ =
2172
+
2173
+ cosh
2174
+
2175
+ (
2176
+ 𝑡
2177
+ )
2178
+
2179
+ 𝛼
2180
+
2181
+ (
2182
+ 𝜃
2183
+ ,
2184
+ 𝐾
2185
+ )
2186
+
2187
+ sin
2188
+
2189
+ (
2190
+ 𝜃
2191
+ )
2192
+ ,
2193
+ cosh
2194
+
2195
+ (
2196
+ 𝑡
2197
+ )
2198
+
2199
+ 𝛼
2200
+
2201
+ (
2202
+ 𝜃
2203
+ ,
2204
+ 𝐾
2205
+ )
2206
+
2207
+ cos
2208
+
2209
+ (
2210
+ 𝜃
2211
+ )
2212
+ ,
2213
+ sinh
2214
+
2215
+ (
2216
+ 𝑡
2217
+ )
2218
+ /
2219
+ 𝐾
2220
+ ,
2221
+ 0
2222
+ ,
2223
+
2224
+
2225
+ .
2226
+
2227
+ (33)
2228
+
2229
+ The full geodesic arc over all possible values of
2230
+ 𝑡
2231
+ is given by
2232
+
2233
+
2234
+ 𝐆
2235
+ 𝟏
2236
+
2237
+ (
2238
+ 𝜃
2239
+ ,
2240
+ 𝐾
2241
+ )
2242
+ =
2243
+ {
2244
+ 𝐠
2245
+ 𝟏
2246
+
2247
+ (
2248
+ 𝜃
2249
+ ,
2250
+ 𝐾
2251
+ ,
2252
+ 𝑡
2253
+ )
2254
+ :
2255
+ 𝑡
2256
+
2257
+
2258
+ }
2259
+ .
2260
+
2261
+ (34)
2262
+ A.2.4Geodesic submanifolds
2263
+
2264
+ For didactic purposes, we first extend the geodesic arc
2265
+ 𝐆
2266
+ 𝟏
2267
+ to a 2-dimensional submanifold
2268
+ 𝐆
2269
+ 𝟐
2270
+ .
2271
+
2272
+ Since any point in
2273
+ 𝐆
2274
+ 𝟏
2275
+
2276
+ (
2277
+ 𝜃
2278
+ ,
2279
+ 𝐾
2280
+ )
2281
+ is on
2282
+
2283
+ 𝐷
2284
+ ,
2285
+ 𝐾
2286
+ , it has Minkowski norm
2287
+
2288
+ 1
2289
+ /
2290
+ 𝐾
2291
+ and therefore satisfies Condition 30.
2292
+
2293
+ By our convention, we use
2294
+ 𝐮
2295
+ 𝐝
2296
+ vectors sequentially to construct geodesics. Therefore, if
2297
+ 𝐆
2298
+ 𝐝
2299
+ is
2300
+ 𝑑
2301
+ -dimensional, then all
2302
+ 𝐮
2303
+ 𝐝
2304
+
2305
+ for
2306
+ 𝑑
2307
+ +
2308
+ 2
2309
+
2310
+ 𝑑
2311
+
2312
+
2313
+ 𝐷
2314
+ , being unused in the construction of the geodesic, remain orthogonal to all
2315
+ 𝐯
2316
+ 𝐝
2317
+
2318
+ 𝐆
2319
+ 𝐝
2320
+ and continue to satisfy Condition 31. In particular,
2321
+ 𝐮
2322
+ 𝟑
2323
+ is the smallest unused
2324
+ 𝐮
2325
+ 𝐝
2326
+ vector. Being spacelike, all
2327
+ 𝐮
2328
+ 𝐝
2329
+
2330
+ continue to satisfy Condition 29.
2331
+
2332
+ For our next geodesic, we apply Equation 32 recursively to any
2333
+ 𝐯
2334
+ 𝟏
2335
+
2336
+ 𝐆
2337
+ 𝟏
2338
+
2339
+ (
2340
+ 𝜃
2341
+ ,
2342
+ 𝐾
2343
+ )
2344
+ and
2345
+ 𝐮
2346
+ 𝟑
2347
+ /
2348
+ 𝐾
2349
+ :
2350
+
2351
+
2352
+ 𝐠
2353
+ 𝟐
2354
+
2355
+ (
2356
+ 𝜃
2357
+ ,
2358
+ 𝐾
2359
+ ,
2360
+ 𝑡
2361
+ ,
2362
+ 𝑡
2363
+
2364
+ )
2365
+
2366
+ =
2367
+ cosh
2368
+
2369
+ (
2370
+ 𝑡
2371
+
2372
+ )
2373
+
2374
+ 𝐯
2375
+ 𝟏
2376
+ +
2377
+ sinh
2378
+
2379
+ (
2380
+ 𝑡
2381
+
2382
+ )
2383
+
2384
+ 𝐮
2385
+ 𝟑
2386
+ /
2387
+ 𝐾
2388
+
2389
+ (35)
2390
+
2391
+
2392
+ =
2393
+ cosh
2394
+
2395
+ (
2396
+ 𝑡
2397
+
2398
+ )
2399
+
2400
+ 𝐠
2401
+ 𝟏
2402
+
2403
+ (
2404
+ 𝜃
2405
+ ,
2406
+ 𝐾
2407
+ ,
2408
+ 𝑡
2409
+ )
2410
+ +
2411
+ sinh
2412
+
2413
+ (
2414
+ 𝑡
2415
+
2416
+ )
2417
+
2418
+ 𝐮
2419
+ 𝟑
2420
+ /
2421
+ 𝐾
2422
+
2423
+ (36)
2424
+
2425
+
2426
+ =
2427
+ cosh
2428
+
2429
+ (
2430
+ 𝑡
2431
+
2432
+ )
2433
+
2434
+ (
2435
+ cosh
2436
+
2437
+ (
2438
+ 𝑡
2439
+ )
2440
+
2441
+ 𝐯
2442
+ 𝟎
2443
+ +
2444
+ sinh
2445
+
2446
+ (
2447
+ 𝑡
2448
+ )
2449
+
2450
+ 𝐮
2451
+ 𝟐
2452
+ )
2453
+ +
2454
+ sinh
2455
+
2456
+ (
2457
+ 𝑡
2458
+
2459
+ )
2460
+
2461
+ 𝐮
2462
+ 𝟑
2463
+ /
2464
+ 𝐾
2465
+
2466
+ (37)
2467
+
2468
+ The geodesic submanifold created by intersecting the 3-plane with basis vectors
2469
+ {
2470
+ 𝐯
2471
+ 𝟎
2472
+ ,
2473
+ 𝐮
2474
+ 𝟐
2475
+ ,
2476
+ 𝐮
2477
+ 𝟑
2478
+ }
2479
+ with
2480
+
2481
+ 𝐷
2482
+ ,
2483
+ 𝐾
2484
+ corresponds to the set of all values of
2485
+ 𝐠
2486
+ 𝟐
2487
+ for
2488
+ (
2489
+ 𝑡
2490
+ ,
2491
+ 𝑡
2492
+
2493
+ )
2494
+
2495
+
2496
+ 2
2497
+ :
2498
+
2499
+
2500
+ 𝐆
2501
+ 𝟐
2502
+
2503
+ (
2504
+ 𝜃
2505
+ ,
2506
+ 𝐾
2507
+ )
2508
+ =
2509
+ {
2510
+ 𝐠
2511
+ 𝟐
2512
+
2513
+ (
2514
+ 𝜃
2515
+ ,
2516
+ 𝐾
2517
+ ,
2518
+ 𝑡
2519
+ ,
2520
+ 𝑡
2521
+
2522
+ )
2523
+ :
2524
+ (
2525
+ 𝑡
2526
+ ,
2527
+ 𝑡
2528
+
2529
+ )
2530
+
2531
+
2532
+ 2
2533
+ }
2534
+
2535
+ (38)
2536
+
2537
+ Using the remaining
2538
+ 𝐮
2539
+ 𝐝
2540
+ vectors in ascending order from
2541
+ 𝑑
2542
+ =
2543
+ 3
2544
+ to
2545
+ 𝐷
2546
+ , we can recursively parameterize the full geodesic submanifold resulting from intersecting
2547
+ 𝐏
2548
+
2549
+ (
2550
+ 𝜃
2551
+ )
2552
+ with
2553
+
2554
+ 𝐷
2555
+ ,
2556
+ 𝐾
2557
+ :
2558
+
2559
+
2560
+ 𝐆
2561
+ 𝐝
2562
+
2563
+ (
2564
+ 𝜃
2565
+ ,
2566
+ 𝐾
2567
+ )
2568
+ =
2569
+ {
2570
+ sinh
2571
+
2572
+ (
2573
+ 𝑡
2574
+ )
2575
+
2576
+ 𝐮
2577
+ 𝐝
2578
+ +
2579
+ 𝟏
2580
+ /
2581
+ 𝐾
2582
+ +
2583
+ cosh
2584
+
2585
+ (
2586
+ 𝑡
2587
+ )
2588
+
2589
+ 𝐯
2590
+ 𝐝
2591
+
2592
+ 𝟏
2593
+ :
2594
+ 𝐯
2595
+ 𝐝
2596
+
2597
+ 𝟏
2598
+
2599
+ 𝐆
2600
+ 𝐝
2601
+
2602
+ 𝟏
2603
+
2604
+ (
2605
+ 𝜃
2606
+ ,
2607
+ 𝐾
2608
+ )
2609
+ ,
2610
+ 𝑡
2611
+
2612
+
2613
+ }
2614
+
2615
+ (39)
2616
+ A.2.5Visualization-specific details
2617
+ Visualization assumptions.
2618
+
2619
+ The first step to plotting a learned decision boundary is to find closed form equations for the intersection between the hyperboloid and the plane. To this end, we make a number of simplifying assumptions. First of all, we restrict ourselves to the hyperboloid
2620
+
2621
+ 2
2622
+ ,
2623
+ 1
2624
+ . For decision tree visualization, hyperplanes can be inclined along dimensions 1 or 2; therefore, we cannot assume that our first dimension contains the split. Instead, we parameterize our plane as
2625
+ 𝐏
2626
+
2627
+ (
2628
+ 𝜃
2629
+ ,
2630
+ 𝑑
2631
+ )
2632
+ . The details of geodesics are the same as Equation 34, but dimensions 1 and 2 may be exchanged when
2633
+ 𝑑
2634
+ =
2635
+ 2
2636
+ . We also assume that the plane actually intersects
2637
+
2638
+ 2
2639
+ ,
2640
+ 1
2641
+ , meaning
2642
+ 𝜋
2643
+ /
2644
+ 4
2645
+ <
2646
+ 𝜃
2647
+ <
2648
+ 3
2649
+
2650
+ 𝜋
2651
+ /
2652
+ 4
2653
+ .
2654
+
2655
+ Poincaré disk projection.
2656
+
2657
+ Since
2658
+
2659
+ 2
2660
+ ,
2661
+ 1
2662
+ is actually a 3-dimensional object for visualization purposes, it is easier to visualize as a point on the Poincaré disk
2663
+
2664
+ 2
2665
+ ,
2666
+ 1
2667
+ . Thus, we convert coordinates in
2668
+
2669
+ 2
2670
+ ,
2671
+ 1
2672
+ to
2673
+
2674
+ 2
2675
+ ,
2676
+ 1
2677
+ using Equation 16. For geodesics, we sample 1,000 points uniformly from
2678
+ (
2679
+
2680
+ 10
2681
+ ,
2682
+ 10
2683
+ )
2684
+ and convert these: this is sufficient to draw a smooth arc on
2685
+
2686
+ 2
2687
+ ,
2688
+ 1
2689
+ .
2690
+
2691
+ Subspace coloring.
2692
+
2693
+ For better visualizations, it is also necessary to partition
2694
+
2695
+ 2
2696
+ ,
2697
+ 1
2698
+ so that:
2699
+
2700
+ 1.
2701
+
2702
+ Decision boundaries are only rendered in the correct subtrees. For instance, if a boundary operates in the left subtree of a higher split, then it should only be drawn in the half of
2703
+
2704
+ 2
2705
+ ,
2706
+ 1
2707
+ where the left subtree is actually active.
2708
+
2709
+ 2.
2710
+
2711
+ The space is partitioned fully by the leaves of the decision tree, and can therefore be colored according to the majority class at each leaf node.
2712
+
2713
+ To do this, we recursively feed in a mask at each plotting iteration. This mask turns off plotting for inactive regions of the Poincaré disk. At the leaf level, every point on the Poincaré disk is active in only one mask, and therefore can be used to plot majority classes.
2714
+
2715
+ A.3Midpoint angles
2716
+
2717
+ Now we consider how to find
2718
+ 𝜃
2719
+ 𝑚
2720
+ , the midpoint between two angles
2721
+ 𝜃
2722
+ 1
2723
+ and
2724
+ 𝜃
2725
+ 2
2726
+ . One option is to take the midpoint naively by taking the average of two angles:
2727
+
2728
+
2729
+ 𝜃
2730
+ 𝑚
2731
+ ,
2732
+ naive
2733
+ =
2734
+ 𝜃
2735
+ 1
2736
+ +
2737
+ 𝜃
2738
+ 2
2739
+ 2
2740
+ .
2741
+
2742
+ (40)
2743
+
2744
+ If we assume without loss of generality that
2745
+ sin
2746
+
2747
+ (
2748
+ 𝜃
2749
+ 1
2750
+ )
2751
+ <
2752
+ sin
2753
+
2754
+ (
2755
+ 𝜃
2756
+ 2
2757
+ )
2758
+ <
2759
+ 𝜋
2760
+ /
2761
+ 2
2762
+ (i.e.
2763
+ 𝜃
2764
+ 2
2765
+ hits higher on the hyperboloid than
2766
+ 𝜃
2767
+ 1
2768
+ ), then
2769
+ 𝜃
2770
+ 𝑚
2771
+ ,
2772
+ naive
2773
+ will hit closer to
2774
+ 𝜃
2775
+ 1
2776
+ . Instead, we want some function
2777
+ 𝐹
2778
+
2779
+ (
2780
+ 𝜃
2781
+ 1
2782
+ ,
2783
+ 𝜃
2784
+ 2
2785
+ )
2786
+ =
2787
+ 𝜃
2788
+ 𝑚
2789
+
2790
+ [
2791
+ 𝜃
2792
+ 1
2793
+ ,
2794
+ 𝜃
2795
+ 2
2796
+ ]
2797
+ such that
2798
+ 𝛿
2799
+
2800
+ (
2801
+ 𝜃
2802
+ 1
2803
+ ,
2804
+ 𝜃
2805
+ 𝑚
2806
+ )
2807
+ =
2808
+ 𝛿
2809
+
2810
+ (
2811
+ 𝜃
2812
+ 2
2813
+ ,
2814
+ 𝜃
2815
+ 𝑚
2816
+ )
2817
+ . To do this, we need to compute hyperbolic distances, so we use the pseudo-Euclidean metric in our ambient Minkowski space. In particular, we have the distance between two points defined as:
2818
+
2819
+
2820
+ 𝛿
2821
+
2822
+ (
2823
+ 𝑢
2824
+ ,
2825
+ 𝑣
2826
+ )
2827
+
2828
+ =
2829
+ cosh
2830
+
2831
+ 1
2832
+
2833
+ (
2834
+
2835
+
2836
+ 𝑢
2837
+ ,
2838
+ 𝑣
2839
+
2840
+
2841
+ )
2842
+
2843
+ (41)
2844
+
2845
+
2846
+ =
2847
+ cosh
2848
+
2849
+ 1
2850
+
2851
+ (
2852
+ 𝑥
2853
+ 0
2854
+
2855
+ 𝑦
2856
+ 0
2857
+
2858
+ 𝑥
2859
+ 𝑑
2860
+
2861
+ 𝑦
2862
+ 𝑑
2863
+ )
2864
+
2865
+ (42)
2866
+
2867
+
2868
+ =
2869
+ ln
2870
+
2871
+ (
2872
+ 𝑥
2873
+ 0
2874
+
2875
+ 𝑦
2876
+ 0
2877
+
2878
+ 𝑥
2879
+ 𝑑
2880
+
2881
+ 𝑦
2882
+ 𝑑
2883
+ +
2884
+ (
2885
+ 𝑥
2886
+ 0
2887
+
2888
+ 𝑦
2889
+ 0
2890
+
2891
+ 𝑥
2892
+ 𝑑
2893
+
2894
+ 𝑦
2895
+ 𝑑
2896
+ )
2897
+ 2
2898
+
2899
+ 1
2900
+ )
2901
+
2902
+ (43)
2903
+
2904
+ This assumes that all dimensions besides
2905
+ 0
2906
+ and
2907
+ 𝑑
2908
+ are 0, and fixes the point on the intersection between
2909
+
2910
+ 𝐷
2911
+ ,
2912
+ 𝐾
2913
+ and the decision hyperplane as the frame of reference for all distances. Using the definition of
2914
+ 𝛼
2915
+
2916
+ (
2917
+ 𝜃
2918
+ ,
2919
+ 𝐾
2920
+ )
2921
+ in Equation 28, we can simplify the conditions under which
2922
+ 𝜃
2923
+ 𝑚
2924
+ is an equidistant midpoint of
2925
+ 𝜃
2926
+ 1
2927
+ and
2928
+ 𝜃
2929
+ 2
2930
+ :
2931
+
2932
+
2933
+ 𝛿
2934
+
2935
+ (
2936
+ 𝜃
2937
+ 𝑎
2938
+ ,
2939
+ 𝜃
2940
+ 𝑏
2941
+ )
2942
+ :=
2943
+ cosh
2944
+
2945
+ 1
2946
+
2947
+ (
2948
+ 𝛼
2949
+
2950
+ (
2951
+ 𝜃
2952
+ 𝑎
2953
+ ,
2954
+ 𝐾
2955
+ )
2956
+
2957
+ 𝛼
2958
+
2959
+ (
2960
+ 𝜃
2961
+ 𝑏
2962
+ ,
2963
+ 𝐾
2964
+ )
2965
+
2966
+ cos
2967
+
2968
+ (
2969
+ 𝜃
2970
+ 𝑎
2971
+ +
2972
+ 𝜃
2973
+ 𝑏
2974
+ )
2975
+ )
2976
+
2977
+ (44)
2978
+
2979
+ This distance function is quite nonlinear, as seen in Figure 5, which corroborates the inappropriateness of simply taking the mean between two angles as a midpoint. Instead, we set the distances
2980
+ 𝛿
2981
+
2982
+ (
2983
+ 𝜃
2984
+ 1
2985
+ ,
2986
+ 𝜃
2987
+ 𝑚
2988
+ )
2989
+ and
2990
+ 𝛿
2991
+
2992
+ (
2993
+ 𝜃
2994
+ 𝑚
2995
+ ,
2996
+ 𝜃
2997
+ 2
2998
+ )
2999
+ equal and simplify. For conciseness, we define the shorthand
3000
+ 𝛼
3001
+ 𝑛
3002
+ :=
3003
+ 𝛼
3004
+
3005
+ (
3006
+ 𝜃
3007
+ 𝑛
3008
+ ,
3009
+ 𝐾
3010
+ )
3011
+ :
3012
+
3013
+
3014
+ cosh
3015
+
3016
+ 1
3017
+
3018
+ (
3019
+ 𝛼
3020
+ 1
3021
+
3022
+ 𝛼
3023
+ 𝑚
3024
+
3025
+ cos
3026
+
3027
+ (
3028
+ 𝜃
3029
+ 1
3030
+ +
3031
+ 𝜃
3032
+ 𝑚
3033
+ )
3034
+ )
3035
+
3036
+ =
3037
+ cosh
3038
+
3039
+ 1
3040
+
3041
+ (
3042
+ 𝛼
3043
+ 𝑚
3044
+
3045
+ 𝛼
3046
+ 2
3047
+
3048
+ cos
3049
+
3050
+ (
3051
+ 𝜃
3052
+ 𝑚
3053
+ +
3054
+ 𝜃
3055
+ 2
3056
+ )
3057
+ )
3058
+
3059
+ (45)
3060
+
3061
+
3062
+ 𝛼
3063
+ 1
3064
+
3065
+ 𝛼
3066
+ 𝑚
3067
+
3068
+ cos
3069
+
3070
+ (
3071
+ 𝜃
3072
+ 1
3073
+ +
3074
+ 𝜃
3075
+ 𝑚
3076
+ )
3077
+
3078
+ =
3079
+ 𝛼
3080
+ 𝑚
3081
+
3082
+ 𝛼
3083
+ 2
3084
+
3085
+ cos
3086
+
3087
+ (
3088
+ 𝜃
3089
+ 𝑚
3090
+ +
3091
+ 𝜃
3092
+ 2
3093
+ )
3094
+
3095
+ (46)
3096
+
3097
+
3098
+ 𝛼
3099
+ 1
3100
+
3101
+ cos
3102
+
3103
+ (
3104
+ 𝜃
3105
+ 1
3106
+ +
3107
+ 𝜃
3108
+ 𝑚
3109
+ )
3110
+
3111
+ =
3112
+ 𝛼
3113
+ 2
3114
+
3115
+ cos
3116
+
3117
+ (
3118
+ 𝜃
3119
+ 𝑚
3120
+ +
3121
+ 𝜃
3122
+ 2
3123
+ )
3124
+
3125
+ (47)
3126
+
3127
+
3128
+
3129
+ sec
3130
+
3131
+ (
3132
+ 2
3133
+
3134
+ 𝜃
3135
+ 1
3136
+ )
3137
+ 𝐾
3138
+
3139
+ cos
3140
+
3141
+ (
3142
+ 𝜃
3143
+ 1
3144
+ +
3145
+ 𝜃
3146
+ 𝑚
3147
+ )
3148
+
3149
+ =
3150
+
3151
+ sec
3152
+
3153
+ (
3154
+ 2
3155
+
3156
+ 𝜃
3157
+ 2
3158
+ )
3159
+ 𝐾
3160
+
3161
+ cos
3162
+
3163
+ (
3164
+ 𝜃
3165
+ 𝑚
3166
+ +
3167
+ 𝜃
3168
+ 2
3169
+ )
3170
+
3171
+ (48)
3172
+
3173
+
3174
+ sec
3175
+
3176
+ (
3177
+ 2
3178
+
3179
+ 𝜃
3180
+ 1
3181
+ )
3182
+
3183
+ cos
3184
+ 2
3185
+
3186
+ (
3187
+ 𝜃
3188
+ 1
3189
+ +
3190
+ 𝜃
3191
+ 𝑚
3192
+ )
3193
+
3194
+ =
3195
+ sec
3196
+
3197
+ (
3198
+ 2
3199
+
3200
+ 𝜃
3201
+ 2
3202
+ )
3203
+
3204
+ cos
3205
+ 2
3206
+
3207
+ (
3208
+ 𝜃
3209
+ 𝑚
3210
+ +
3211
+ 𝜃
3212
+ 2
3213
+ )
3214
+
3215
+ (49)
3216
+
3217
+
3218
+ cos
3219
+ 2
3220
+
3221
+ (
3222
+ 𝜃
3223
+ 1
3224
+ +
3225
+ 𝜃
3226
+ 𝑚
3227
+ )
3228
+ cos
3229
+
3230
+ (
3231
+ 2
3232
+
3233
+ 𝜃
3234
+ 1
3235
+ )
3236
+
3237
+ =
3238
+ cos
3239
+ 2
3240
+
3241
+ (
3242
+ 𝜃
3243
+ 𝑚
3244
+ +
3245
+ 𝜃
3246
+ 2
3247
+ )
3248
+ cos
3249
+
3250
+ (
3251
+ 2
3252
+
3253
+ 𝜃
3254
+ 2
3255
+ )
3256
+
3257
+ (50)
3258
+
3259
+
3260
+ cos
3261
+
3262
+ (
3263
+ 2
3264
+
3265
+ 𝜃
3266
+ 2
3267
+ )
3268
+
3269
+ cos
3270
+ 2
3271
+
3272
+ (
3273
+ 𝜃
3274
+ 1
3275
+ +
3276
+ 𝜃
3277
+ 𝑚
3278
+ )
3279
+
3280
+ =
3281
+ cos
3282
+
3283
+ (
3284
+ 2
3285
+
3286
+ 𝜃
3287
+ 1
3288
+ )
3289
+
3290
+ cos
3291
+ 2
3292
+
3293
+ (
3294
+ 𝜃
3295
+ 2
3296
+ +
3297
+ 𝜃
3298
+ 𝑚
3299
+ )
3300
+
3301
+ (51)
3302
+
3303
+
3304
+ cos
3305
+
3306
+ (
3307
+ 2
3308
+
3309
+ 𝜃
3310
+ 2
3311
+ )
3312
+
3313
+ (
3314
+ cos
3315
+
3316
+ (
3317
+ 𝜃
3318
+ 1
3319
+ )
3320
+
3321
+ cos
3322
+
3323
+ (
3324
+ 𝜃
3325
+ 𝑚
3326
+ )
3327
+
3328
+ sin
3329
+
3330
+ (
3331
+ 𝜃
3332
+ 1
3333
+ )
3334
+
3335
+ sin
3336
+
3337
+ (
3338
+ 𝜃
3339
+ 𝑚
3340
+ )
3341
+ )
3342
+ 2
3343
+
3344
+ =
3345
+ cos
3346
+
3347
+ (
3348
+ 2
3349
+
3350
+ 𝜃
3351
+ 1
3352
+ )
3353
+
3354
+ (
3355
+ cos
3356
+
3357
+ (
3358
+ 𝜃
3359
+ 2
3360
+ )
3361
+
3362
+ cos
3363
+
3364
+ (
3365
+ 𝜃
3366
+ 𝑚
3367
+ )
3368
+
3369
+ sin
3370
+
3371
+ (
3372
+ 𝜃
3373
+ 2
3374
+ )
3375
+
3376
+ sin
3377
+
3378
+ (
3379
+ 𝜃
3380
+ 𝑚
3381
+ )
3382
+ )
3383
+ 2
3384
+
3385
+ (52)
3386
+
3387
+
3388
+ cos
3389
+
3390
+ (
3391
+ 2
3392
+
3393
+ 𝜃
3394
+ 2
3395
+ )
3396
+
3397
+ (
3398
+ cos
3399
+
3400
+ (
3401
+ 𝜃
3402
+ 1
3403
+ )
3404
+
3405
+ cot
3406
+
3407
+ (
3408
+ 𝜃
3409
+ 𝑚
3410
+ )
3411
+
3412
+ sin
3413
+
3414
+ (
3415
+ 𝜃
3416
+ 1
3417
+ )
3418
+ )
3419
+ 2
3420
+
3421
+ =
3422
+ cos
3423
+
3424
+ (
3425
+ 2
3426
+
3427
+ 𝜃
3428
+ 1
3429
+ )
3430
+
3431
+ (
3432
+ cos
3433
+
3434
+ (
3435
+ 𝜃
3436
+ 2
3437
+ )
3438
+
3439
+ cot
3440
+
3441
+ (
3442
+ 𝜃
3443
+ 𝑚
3444
+ )
3445
+
3446
+ sin
3447
+
3448
+ (
3449
+ 𝜃
3450
+ 2
3451
+ )
3452
+ )
3453
+ 2
3454
+ .
3455
+
3456
+ (53)
3457
+
3458
+ This is a quadratic equation in
3459
+ cot
3460
+
3461
+ (
3462
+ 𝜃
3463
+ 𝑚
3464
+ )
3465
+ expressed as
3466
+ 𝑈
3467
+ cot
3468
+ (
3469
+ 𝜃
3470
+ 𝑚
3471
+ )
3472
+ 2
3473
+ +
3474
+ 𝑊
3475
+ cot
3476
+ (
3477
+ 𝜃
3478
+ 𝑚
3479
+ )
3480
+ +
3481
+ 𝑈
3482
+
3483
+ =
3484
+ 0
3485
+ , where:
3486
+
3487
+
3488
+ 𝑈
3489
+
3490
+ :=
3491
+ cos
3492
+
3493
+ (
3494
+ 2
3495
+
3496
+ 𝜃
3497
+ 2
3498
+ )
3499
+
3500
+ cos
3501
+ 2
3502
+
3503
+ (
3504
+ 𝜃
3505
+ 1
3506
+ )
3507
+
3508
+ cos
3509
+
3510
+ (
3511
+ 2
3512
+
3513
+ 𝜃
3514
+ 1
3515
+ )
3516
+
3517
+ cos
3518
+ 2
3519
+
3520
+ (
3521
+ 𝜃
3522
+ 2
3523
+ )
3524
+
3525
+
3526
+ =
3527
+ (
3528
+ 2
3529
+
3530
+ cos
3531
+ 2
3532
+
3533
+ (
3534
+ 𝜃
3535
+ 2
3536
+ )
3537
+
3538
+ 1
3539
+ )
3540
+
3541
+ cos
3542
+ 2
3543
+
3544
+ (
3545
+ 𝜃
3546
+ 1
3547
+ )
3548
+
3549
+ (
3550
+ 2
3551
+
3552
+ cos
3553
+ 2
3554
+
3555
+ (
3556
+ 𝜃
3557
+ 1
3558
+ )
3559
+
3560
+ 1
3561
+ )
3562
+
3563
+ cos
3564
+ 2
3565
+
3566
+ (
3567
+ 𝜃
3568
+ 2
3569
+ )
3570
+
3571
+
3572
+ =
3573
+ cos
3574
+ 2
3575
+
3576
+ (
3577
+ 𝜃
3578
+ 2
3579
+ )
3580
+
3581
+ cos
3582
+ 2
3583
+
3584
+ (
3585
+ 𝜃
3586
+ 1
3587
+ )
3588
+
3589
+ (54)
3590
+
3591
+
3592
+ 𝑊
3593
+
3594
+ :=
3595
+ cos
3596
+
3597
+ (
3598
+ 2
3599
+
3600
+ 𝜃
3601
+ 2
3602
+ )
3603
+
3604
+ 2
3605
+
3606
+ cos
3607
+
3608
+ (
3609
+ 𝜃
3610
+ 1
3611
+ )
3612
+
3613
+ sin
3614
+
3615
+ (
3616
+ 𝜃
3617
+ 1
3618
+ )
3619
+
3620
+ cos
3621
+
3622
+ (
3623
+ 2
3624
+
3625
+ 𝜃
3626
+ 1
3627
+ )
3628
+
3629
+ 2
3630
+
3631
+ cos
3632
+
3633
+ (
3634
+ 𝜃
3635
+ 2
3636
+ )
3637
+
3638
+ sin
3639
+
3640
+ (
3641
+ 𝜃
3642
+ 2
3643
+ )
3644
+
3645
+
3646
+ =
3647
+ cos
3648
+
3649
+ (
3650
+ 2
3651
+
3652
+ 𝜃
3653
+ 2
3654
+ )
3655
+
3656
+ sin
3657
+
3658
+ (
3659
+ 2
3660
+
3661
+ 𝜃
3662
+ 1
3663
+ )
3664
+
3665
+ cos
3666
+
3667
+ (
3668
+ 2
3669
+
3670
+ 𝜃
3671
+ 1
3672
+ )
3673
+
3674
+ sin
3675
+
3676
+ (
3677
+ 2
3678
+
3679
+ 𝜃
3680
+ 2
3681
+ )
3682
+
3683
+
3684
+ =
3685
+ sin
3686
+
3687
+ (
3688
+ 2
3689
+
3690
+ 𝜃
3691
+ 1
3692
+
3693
+ 2
3694
+
3695
+ 𝜃
3696
+ 2
3697
+ )
3698
+
3699
+ (55)
3700
+
3701
+
3702
+ 𝑈
3703
+
3704
+
3705
+ :=
3706
+ cos
3707
+
3708
+ (
3709
+ 2
3710
+
3711
+ 𝜃
3712
+ 2
3713
+ )
3714
+
3715
+ sin
3716
+ 2
3717
+
3718
+ (
3719
+ 𝜃
3720
+ 1
3721
+ )
3722
+
3723
+ cos
3724
+
3725
+ (
3726
+ 2
3727
+
3728
+ 𝜃
3729
+ 1
3730
+ )
3731
+
3732
+ sin
3733
+ 2
3734
+
3735
+ (
3736
+ 𝜃
3737
+ 2
3738
+ )
3739
+
3740
+
3741
+ =
3742
+ (
3743
+ 1
3744
+
3745
+ 2
3746
+
3747
+ sin
3748
+ 2
3749
+
3750
+ (
3751
+ 𝜃
3752
+ 2
3753
+ )
3754
+ )
3755
+
3756
+ sin
3757
+ 2
3758
+
3759
+ (
3760
+ 𝜃
3761
+ 1
3762
+ )
3763
+
3764
+ (
3765
+ 1
3766
+
3767
+ 2
3768
+
3769
+ sin
3770
+ 2
3771
+
3772
+ (
3773
+ 𝜃
3774
+ 1
3775
+ )
3776
+ )
3777
+
3778
+ sin
3779
+ 2
3780
+
3781
+ (
3782
+ 𝜃
3783
+ 2
3784
+ )
3785
+
3786
+
3787
+ =
3788
+ sin
3789
+ 2
3790
+
3791
+ (
3792
+ 𝜃
3793
+ 1
3794
+ )
3795
+
3796
+ sin
3797
+ 2
3798
+
3799
+ (
3800
+ 𝜃
3801
+ 2
3802
+ )
3803
+
3804
+
3805
+ =
3806
+ cos
3807
+ 2
3808
+
3809
+ (
3810
+ 𝜃
3811
+ 2
3812
+ )
3813
+
3814
+ cos
3815
+ 2
3816
+
3817
+ (
3818
+ 𝜃
3819
+ 1
3820
+ )
3821
+ ,
3822
+
3823
+ (56)
3824
+
3825
+ Since
3826
+ 𝑈
3827
+ =
3828
+ 𝑈
3829
+
3830
+ we simplify further and solve
3831
+ cot
3832
+ 2
3833
+
3834
+ (
3835
+ 𝜃
3836
+ 𝑚
3837
+ )
3838
+
3839
+ 2
3840
+
3841
+ 𝑉
3842
+
3843
+ cot
3844
+
3845
+ (
3846
+ 𝜃
3847
+ 𝑚
3848
+ )
3849
+ +
3850
+ 1
3851
+ =
3852
+ 0
3853
+ where:
3854
+
3855
+
3856
+ 𝑉
3857
+ :=
3858
+
3859
+ 𝑊
3860
+ /
3861
+ 2
3862
+
3863
+ 𝑈
3864
+
3865
+ =
3866
+
3867
+ sin
3868
+
3869
+ (
3870
+ 2
3871
+
3872
+ 𝜃
3873
+ 1
3874
+
3875
+ 2
3876
+
3877
+ 𝜃
3878
+ 2
3879
+ )
3880
+ 2
3881
+ (
3882
+ cos
3883
+ 2
3884
+ (
3885
+ 𝜃
3886
+ 2
3887
+ )
3888
+
3889
+ cos
3890
+ 2
3891
+ (
3892
+ 𝜃
3893
+ 1
3894
+ )
3895
+ )
3896
+
3897
+
3898
+ =
3899
+
3900
+ sin
3901
+
3902
+ (
3903
+ 2
3904
+
3905
+ 𝜃
3906
+ 1
3907
+
3908
+ 2
3909
+
3910
+ 𝜃
3911
+ 2
3912
+ )
3913
+ 2
3914
+
3915
+ (
3916
+ cos
3917
+
3918
+ (
3919
+ 𝜃
3920
+ 2
3921
+ )
3922
+ +
3923
+ cos
3924
+
3925
+ (
3926
+ 𝜃
3927
+ 1
3928
+ )
3929
+ )
3930
+
3931
+ (
3932
+ cos
3933
+
3934
+ (
3935
+ 𝜃
3936
+ 2
3937
+ )
3938
+
3939
+ cos
3940
+
3941
+ (
3942
+ 𝜃
3943
+ 1
3944
+ )
3945
+ )
3946
+
3947
+
3948
+ =
3949
+
3950
+ sin
3951
+
3952
+ (
3953
+ 2
3954
+
3955
+ 𝜃
3956
+ 1
3957
+
3958
+ 2
3959
+
3960
+ 𝜃
3961
+ 2
3962
+ )
3963
+
3964
+ 8
3965
+
3966
+ cos
3967
+
3968
+ (
3969
+ 𝜃
3970
+ 1
3971
+ +
3972
+ 𝜃
3973
+ 2
3974
+ 2
3975
+ )
3976
+
3977
+ cos
3978
+
3979
+ (
3980
+ 𝜃
3981
+ 2
3982
+
3983
+ 𝜃
3984
+ 1
3985
+ 2
3986
+ )
3987
+
3988
+ sin
3989
+
3990
+ (
3991
+ 𝜃
3992
+ 1
3993
+ +
3994
+ 𝜃
3995
+ 2
3996
+ 2
3997
+ )
3998
+
3999
+ sin
4000
+
4001
+ (
4002
+ 𝜃
4003
+ 2
4004
+
4005
+ 𝜃
4006
+ 1
4007
+ 2
4008
+ )
4009
+
4010
+
4011
+ =
4012
+ sin
4013
+
4014
+ (
4015
+ 2
4016
+
4017
+ 𝜃
4018
+ 1
4019
+
4020
+ 2
4021
+
4022
+ 𝜃
4023
+ 2
4024
+ )
4025
+ 2
4026
+
4027
+ sin
4028
+
4029
+ (
4030
+ 𝜃
4031
+ 2
4032
+ +
4033
+ 𝜃
4034
+ 1
4035
+ )
4036
+
4037
+ sin
4038
+
4039
+ (
4040
+ 𝜃
4041
+ 2
4042
+
4043
+ 𝜃
4044
+ 1
4045
+ )
4046
+
4047
+ (57)
4048
+
4049
+ The solutions for the quadratic equation are
4050
+ cot
4051
+
4052
+ (
4053
+ 𝜃
4054
+ 𝑚
4055
+ )
4056
+ =
4057
+ 𝑉
4058
+ ±
4059
+ 𝑉
4060
+ 2
4061
+
4062
+ 1
4063
+ ; specifically, we have
4064
+
4065
+
4066
+ 𝜃
4067
+ 𝑚
4068
+ =
4069
+ {
4070
+ 𝜃
4071
+ 1
4072
+
4073
+  if 
4074
+
4075
+ 𝜃
4076
+ 1
4077
+ =
4078
+ 𝜃
4079
+ 2
4080
+
4081
+
4082
+ cot
4083
+
4084
+ 1
4085
+
4086
+ (
4087
+ 𝑉
4088
+
4089
+ 𝑉
4090
+ 2
4091
+
4092
+ 1
4093
+ )
4094
+
4095
+  if 
4096
+
4097
+ 𝜃
4098
+ 1
4099
+ <
4100
+ 𝜋
4101
+
4102
+ 𝜃
4103
+ 2
4104
+
4105
+
4106
+ cot
4107
+
4108
+ 1
4109
+
4110
+ (
4111
+ 𝑉
4112
+ +
4113
+ 𝑉
4114
+ 2
4115
+
4116
+ 1
4117
+ )
4118
+
4119
+  if 
4120
+
4121
+ 𝜃
4122
+ 1
4123
+ >
4124
+ 𝜋
4125
+
4126
+ 𝜃
4127
+ 2
4128
+
4129
+ (58)
4130
+ Figure 5:A plot of function
4131
+ 𝛿
4132
+
4133
+ (
4134
+ 𝜋
4135
+ /
4136
+ 4
4137
+ +
4138
+ .01
4139
+ ,
4140
+ 𝜃
4141
+ )
4142
+ as
4143
+ 𝜃
4144
+ varies from
4145
+ 𝜋
4146
+ /
4147
+ 4
4148
+ to
4149
+ 3
4150
+
4151
+ 𝜋
4152
+ /
4153
+ 4
4154
+ . This plot reveals the nonlinearity of the angle distance function.
4155
+ A.4Mixture of Gaussians on hyperbolic manifolds
4156
+
4157
+ We modify the method put forward in Nagano et al. (2019) to sample Gaussians in hyperbolic space. We briefly reiterate their method to sample a single Gaussian in
4158
+
4159
+ 𝐷
4160
+ ,
4161
+ 𝐾
4162
+ :
4163
+
4164
+ 1.
4165
+
4166
+ Choose a point
4167
+ 𝜇
4168
+ in
4169
+
4170
+ 𝐷
4171
+ ,
4172
+ 𝐾
4173
+ to be the mean of your Gaussian.
4174
+
4175
+ 2.
4176
+
4177
+ Sample
4178
+ 𝐗
4179
+ as
4180
+ 𝑛
4181
+ samples from a Euclidean multivariate Gaussian with mean
4182
+ 0
4183
+ and any covariance
4184
+ 𝚺
4185
+ in
4186
+ 𝐷
4187
+ dimensions.
4188
+
4189
+ 3.
4190
+
4191
+ Transform
4192
+ 𝐗
4193
+ into
4194
+ 𝐗
4195
+
4196
+ , a set of vectors in
4197
+ 𝑇
4198
+ 0
4199
+
4200
+
4201
+ 𝐷
4202
+ ,
4203
+ 𝐾
4204
+ (the tangent plane of
4205
+
4206
+ 𝐷
4207
+ ,
4208
+ 𝐾
4209
+ at the origin), by appending 0 in the timelike dimension.
4210
+
4211
+ 4.
4212
+
4213
+ Use parallel transport from the origin to
4214
+ 𝜇
4215
+ , turning
4216
+ 𝐗
4217
+
4218
+ into
4219
+ 𝐗
4220
+ ′′
4221
+ , a set of vectors in
4222
+ 𝑇
4223
+ 𝜇
4224
+
4225
+
4226
+ 𝐷
4227
+ ,
4228
+ 𝐾
4229
+ .
4230
+
4231
+ 5.
4232
+
4233
+ Use the exponential map at
4234
+ 𝜇
4235
+ to map
4236
+ 𝐗
4237
+ ′′
4238
+ to
4239
+ 𝐗
4240
+ ′′′
4241
+ , a set of points on the surface of
4242
+
4243
+ 𝐷
4244
+ ,
4245
+ 𝐾
4246
+ .
4247
+
4248
+ For each Gaussian in our mixture, we perform this exact procedure. We choose our set of
4249
+ 𝑛
4250
+ Gaussian means by sampling
4251
+ 𝑛
4252
+ vectors from
4253
+ 𝒩
4254
+
4255
+ (
4256
+ 0
4257
+ ,
4258
+ 𝐈
4259
+ )
4260
+ in the tangent plane and then exponentially mapping them directly to
4261
+
4262
+ 𝐷
4263
+ ,
4264
+ 𝐾
4265
+ ; in other words, we follow the above procedure but skip step (4) because
4266
+ 𝜇
4267
+ is not defined yet (or, equivalently, because
4268
+ 𝜇
4269
+ is the origin).
4270
+
4271
+ Additionally, each covariance matrix is generated by drawing
4272
+ 𝐷
4273
+
4274
+ 𝐷
4275
+ -dimensional samples,
4276
+ 𝐂
4277
+
4278
+ 𝒩
4279
+
4280
+ (
4281
+ 0
4282
+ ,
4283
+ 𝐈
4284
+ )
4285
+ , and then letting
4286
+ 𝚺
4287
+ =
4288
+ 𝐂𝐂
4289
+ 𝑇
4290
+ . The entire covariance matrix is optionally rescaled by a user-set noise scalar
4291
+ 𝑎
4292
+ and divided by
4293
+ 𝐷
4294
+ . That is,
4295
+
4296
+
4297
+ 𝚺
4298
+ =
4299
+ 𝑎
4300
+
4301
+ 𝐂𝐂
4302
+ 𝑇
4303
+ 𝐷
4304
+ .
4305
+
4306
+ (59)
4307
+
4308
+ This procedure is repeated
4309
+ 𝑛
4310
+ times to yield
4311
+ 𝑛
4312
+ distinct covariance matrices.
4313
+
4314
+ Finally, class probabilities are determined by drawing
4315
+ 𝑛
4316
+ values from
4317
+ 𝑈
4318
+
4319
+ (
4320
+ 0
4321
+ ,
4322
+ 1
4323
+ )
4324
+ and normalizing them by their sum. Each point in a sample is assigned a class that determines its
4325
+ 𝜇
4326
+ and
4327
+ 𝚺
4328
+ . We implement this method using the geomstats package in Python (Miolane et al., 2018), which supports vectorized versions of parallel transport and exponential maps with differing destinations.
4329
+
4330
+ A.5Equivalence of Minkowski and Euclidean dot-products
4331
+
4332
+ Most papers on hyperbolic geometry use Minkowski products. For instance, implementing the support vector machine objective in Cho et al. (2018) relies on Minkowski products, which are crucial for the optimization procedure they specify.
4333
+
4334
+ In our case, it is sufficient to use the more intuitive Euclidean formulation, even though, in practice, Minkowski space is not equipped with Euclidean products. Intuitively, Euclidean inner products (dot-products) accurately capture whether a point is to one side of a plane or another, which is all that is needed for a decision tree classifier. However, we show further that Euclidean dot-products for HyperDT decision boundaries have an interpretation in terms of Minkowski products:
4335
+
4336
+ The sparse Euclidean dot-product we determined in Equation 9 is:
4337
+
4338
+
4339
+ 𝑆
4340
+
4341
+ (
4342
+ 𝑥
4343
+ )
4344
+ =
4345
+ sign
4346
+
4347
+ (
4348
+ max
4349
+
4350
+ (
4351
+ 0
4352
+ ,
4353
+ (
4354
+ sin
4355
+
4356
+ (
4357
+ 𝜃
4358
+ )
4359
+
4360
+ 𝑥
4361
+ 𝑑
4362
+
4363
+ cos
4364
+
4365
+ (
4366
+ 𝜃
4367
+ )
4368
+
4369
+ 𝑥
4370
+ 0
4371
+ )
4372
+ )
4373
+ )
4374
+
4375
+ (60)
4376
+
4377
+ And, since the Minkowski inner product is simply the Euclidean dot-product with the sign of the timelike dimension flipped, we can equivalently say
4378
+
4379
+
4380
+ 𝑆
4381
+
4382
+ (
4383
+ 𝑥
4384
+ )
4385
+ Minkowski
4386
+ =
4387
+ sign
4388
+
4389
+ (
4390
+ max
4391
+
4392
+ (
4393
+ 0
4394
+ ,
4395
+ (
4396
+ sin
4397
+
4398
+ (
4399
+ 𝜃
4400
+ )
4401
+
4402
+ 𝑥
4403
+ 𝑑
4404
+ +
4405
+ cos
4406
+
4407
+ (
4408
+ 𝜃
4409
+ )
4410
+
4411
+ 𝑥
4412
+ 0
4413
+ )
4414
+ )
4415
+ )
4416
+
4417
+ (61)
4418
+
4419
+ Any
4420
+ 𝜃
4421
+ in the Euclidean case, if substituted for
4422
+
4423
+ 𝜃
4424
+ in the Minkowski case, will yield the same
4425
+ cos
4426
+
4427
+ (
4428
+ 𝜃
4429
+ )
4430
+ and a negated
4431
+ sin
4432
+
4433
+ (
4434
+ 𝜃
4435
+ )
4436
+ . That is,
4437
+
4438
+
4439
+ sin
4440
+
4441
+ (
4442
+ 𝜃
4443
+ )
4444
+
4445
+ 𝑥
4446
+ 𝑑
4447
+
4448
+ cos
4449
+
4450
+ (
4451
+ 𝜃
4452
+ )
4453
+
4454
+ 𝑥
4455
+ 0
4456
+
4457
+ =
4458
+ 𝑎
4459
+
4460
+ (62)
4461
+
4462
+
4463
+ sin
4464
+
4465
+ (
4466
+
4467
+ 𝜃
4468
+ )
4469
+
4470
+ 𝑥
4471
+ 𝑑
4472
+ +
4473
+ cos
4474
+
4475
+ (
4476
+
4477
+ 𝜃
4478
+ )
4479
+
4480
+ 𝑥
4481
+ 0
4482
+
4483
+ =
4484
+ 𝑏
4485
+
4486
+ (63)
4487
+
4488
+
4489
+
4490
+ sin
4491
+
4492
+ (
4493
+ 𝜃
4494
+ )
4495
+
4496
+ 𝑥
4497
+ 𝑑
4498
+ +
4499
+ cos
4500
+
4501
+ (
4502
+ 𝜃
4503
+ )
4504
+
4505
+ 𝑥
4506
+ 0
4507
+
4508
+ =
4509
+ 𝑏
4510
+
4511
+ (64)
4512
+
4513
+
4514
+ sin
4515
+
4516
+ (
4517
+ 𝜃
4518
+ )
4519
+
4520
+ 𝑥
4521
+ 𝑑
4522
+
4523
+ cos
4524
+
4525
+ (
4526
+ 𝜃
4527
+ )
4528
+
4529
+ 𝑥
4530
+ 0
4531
+
4532
+ =
4533
+
4534
+ 𝑏
4535
+
4536
+ (65)
4537
+
4538
+
4539
+ 𝑎
4540
+
4541
+ =
4542
+
4543
+ 𝑏
4544
+
4545
+ (66)
4546
+
4547
+ That is, for any angle
4548
+ 𝜃
4549
+ yielding a particular split
4550
+ 𝑆
4551
+ over a dataset
4552
+ 𝐗
4553
+ , evaluating the split using Minkowski inner products with the angle
4554
+
4555
+ 𝜃
4556
+ produces an equivalent split. The sets are exactly the same, but the sign of the dot-product is flipped.
4557
+
4558
+ A.6Additional experiments
4559
+ A.6.1Scaling
4560
+
4561
+ In Figure 3, we showed that HyperDT runtime scales linearly with sample size—an improvement over the exponential scaling of HoroRF. In this section, we show how runtime scales with the number of data points, number of dimensions, maximum depth of decision trees, and total number of estimators. Since we are not comparing to slower methods, we can extend our analysis to substantially more than the upper limit of 800 samples we use in the main section of the paper.
4562
+
4563
+ Procedure.
4564
+
4565
+ Unless noted otherwise, we test the runtime and F1-micro accuracy of HyperRF with a maximum depth of 3 and 12 trees, consistent with the HyperRF results in Table 1. We restrict ourselves to Gaussian mixtures of five classes rather than two, since this more challenging classification task has a greater range of F1-micro scores. Unless noted otherwise, we generated 1,000 points for 20 distinct trials, without cross-validation and using a test set size of 200 points.
4566
+
4567
+ We carried out four distinct scaling experiments, recording runtime and accuracy as we varied:
4568
+
4569
+ 1.
4570
+
4571
+ The number of points generated from 100 to 3,000
4572
+
4573
+ 2.
4574
+
4575
+ The number of dimensions from 2 to 64
4576
+
4577
+ 3.
4578
+
4579
+ The total number of decision trees in a forest from 1 to 30
4580
+
4581
+ 4.
4582
+
4583
+ The max depth of each decision tree from 1 to 20
4584
+
4585
+ Additionally, for the final max depth experiment, we also tested scikit-learn  Euclidean random forests and HoroRF, also using 12 predictors. For this portion of the experiment, we restrict ourselves to 800 samples from a 2-dimensional, 2-class mixture of Gaussians.
4586
+
4587
+ Results.
4588
+
4589
+ Times and F1-micro scores for the four HyperRF scaling experiments are shown in Figure 6. This figure shows that runtime scales linearly with the number of samples, dimensions, and trees, with little effect on overall prediction accuracy. Interestingly, the runtime levels off rather than growing exponentially for the maximum depth experiment as one might expect given the
4590
+ 2
4591
+ 𝑚
4592
+
4593
+ 𝑎
4594
+
4595
+ 𝑥
4596
+
4597
+ _
4598
+
4599
+ 𝑑
4600
+
4601
+ 𝑒
4602
+
4603
+ 𝑝
4604
+
4605
+ 𝑡
4606
+
4607
+
4608
+ splits the predictor is allowed to make. This is because the actually achieved depth tops out when the training set is perfectly divided into homogeneous subregions of the decision space, and further splits are not made. Additionally, F1-micro scores decline slightly with increasing tree depth, likely due to overfitting.
4609
+
4610
+ Since maximum depth is the only parameter with a particularly interesting relationship to prediction accuracy, we explored it further in the context of the other predictors evaluated in the paper. In Figure 7, we compare the F1-micro scores of HyperRF, HoroRF, and Euclidean random forests, and find that HyperRF has a consistent advantage over other predictors at the same maximum depth; however, as maximum depth increases, this advantage becomes less prominent. This result speaks both to the general ability of elaborate random forests to model data from arbitrary probability distributions, and to the parsimony of HoroDT-based methods in modeling hierarchical data.
4611
+
4612
+ Figure 6:Observed runtimes when varying: (a) number of samples, (b) dimensionality, (c) number of trees, and (d) maximum depth in a simulated Gaussian mixture classification problem. We observe linear scaling for (a), (b), and (c), and possibly sublinear scaling with maximum tree depth. Shaded regions represent 95% confidence intervals.
4613
+ Figure 7:F1-micro scores on a Gaussian mixture classification problem with 2 classes, 2 dimensions, and 800 samples. Shaded regions represent 95% confidence intervals.
4614
+ A.6.2Comparison to other hyperbolic classifiers
4615
+
4616
+ In the main body of the paper, we restricted ourselves to classifiers based on decision trees and random forests. However, there are a number of other capable and powerful classifiers for use on hyperbolic data that warrant evaluation. In this section, we evaluate hyperbolic support vector machines and logistic regression classifiers against some of our benchmarks.
4617
+
4618
+ Procedure.
4619
+
4620
+ We used the implementation of hyperbolic support vector machines provided by Agibetov et al. (2019)1, and the implementation of hyperbolic logistic regression provided by Bdeir et al. (2023)2. For each trial, we generated 800 points from a gaussian mixture with 2 classes and evaluated it using the F1-micro score under 5-fold cross validation. We did this for 10 trials total, in 2, 4, 8, and 16 dimensions.
4621
+
4622
+ In addition to the hyperbolic support vector machine and logistic regression classifiers, we evaluated their Euclidean counterparts, which are implemented in scikit-learn Pedregosa et al. (2011). We only benchmark against HyperDT, which is simpler and slightly less accurate than HyperRF, for fairness.
4623
+
4624
+ Results.
4625
+
4626
+ We report average F1-micro scores for each classifier in each dimension in Table 2. Following the conventions of the paper, we additionally mark statistically significant differences from HyperDT with an asteristk. In total, HyperDT was a statistically significant improvement over each classifier in at least one dimension, and was the best classifier in half the cases. This shows a consistent advantage over the other classifiers, which we can expect to be further improved by use of HyperRF.
4627
+
4628
+ Logistic Hyperbolic Logistic Hyperbolic Support Support Vector HyperDT
4629
+
4630
+ 𝐷
4631
+ Regression Regression Vector Classifier Classifier
4632
+ 2 90.11
4633
+ *
4634
+ 88.65
4635
+ *
4636
+ 81.50
4637
+ *
4638
+ 80.96
4639
+ *
4640
+ 91.88
4641
+ 4 99.20 99.41 96.94
4642
+ *
4643
+ 85.56
4644
+ *
4645
+ 99.30
4646
+ 8 99.97 99.96 100.00 79.06
4647
+ *
4648
+ 99.96
4649
+ 16 99.99 100.00 98.12
4650
+ *
4651
+ 86.99
4652
+ *
4653
+ 100.00
4654
+ Table 2:Micro-averaged F1 scores under 5-fold cross validation averaged over 10 seeds for each classifier and dimension. Bold indicates the best score for that dimension; asterisks indicate a statistically significant difference from HyperDT as determined by a paired
4655
+ 𝑡
4656
+ -test.
4657
+ A.6.3Random forests in other geometries
4658
+
4659
+ While no representation of hyperbolic space is explicitly compatible with axis-aligned splits, it is worth exploring the possibility that other representations nonetheless lend themselves better to treatment by decision tree or random forest classifiers than the hyperboloid model does; more specifically, it is worth testing HyperDT and HyperRF against a greater range of embeddings to ensure we are making a fair comparison.
4660
+
4661
+ Procedure.
4662
+
4663
+ Analogously to other sections, we restrict ourselves to 800 samples from 2-class mixtures of Gaussians in 2, 4, 8, and 16 dimensions. We record F1-micro scores under 5-fold cross-validation, averaged over 10 seeds. In this case, we evaluated HyperDT, HyperRF, and scikit-learn implementations of Euclidean decision trees and random forests for each sample.
4664
+
4665
+ We converted each sample to Euclidean, Hyperboloid, Klein disk, and Poincaré disk coordinates. To get Euclidean coordinates, we applied the logarithmic map to project points from
4666
+
4667
+ 𝐷
4668
+ ,
4669
+ 𝐾
4670
+ to the tangent plane at the origin.
4671
+
4672
+ Results.
4673
+
4674
+ Table 3 shows a comparison of HyperDT and HyperRF to each of these geometries. Both HyperDT and HyperRF show substantial, statistically significant advantages over their Euclidean counterparts when applied to the hyperboloid model, and this is the only consistent trend in the data. Only the Poincaré disk in two dimensions beat HyperRF with statistical significance.
4675
+
4676
+ Interestingly, the Klein disk embeddings performed well (without statistical significance) in two and three dimensionalities for decision trees and random forests, respectively. This is likely because geodesics in the Klein model are represented with straight lines, so the axis-parallel splits used by Euclidean decision tree algorithms are also geodesic decision boundaries. This is yet another point in support of geodesic decision boundaries yielding improved classification performance.
4677
+
4678
+ Dimensions
4679
+ Model Geometry 2 4 8 16
4680
+ Decision Tree Euclidean 91.86 99.15 99.94 99.97
4681
+ Hyperboloid 90.16
4682
+ *
4683
+ 98.34
4684
+ *
4685
+ 99.91 99.99
4686
+ Klein 91.89 99.29 99.96 99.99
4687
+ Poincaré 91.85 99.30 99.96 100.00
4688
+ HyperDT Hyperboloid 91.88 99.30 99.96 100.00
4689
+ HyperRF Hyperboloid 91.91 99.42 99.96 100.00
4690
+ Random Forest Euclidean 92.09 99.28 99.97 100.00
4691
+ Hyperboloid 89.80
4692
+ *
4693
+ 98.40
4694
+ *
4695
+ 99.95 100.00
4696
+ Klein 92.05 99.46 100.00 100.00
4697
+ Poincaré 92.26
4698
+ *
4699
+ 99.45 99.97 100.00
4700
+ Table 3:F1-micro scores for Euclidean and hyperbolic random forests and decision trees when applied to a variety of hyperbolic coordinate systems. Bolded scores are the best for that dimension; asterisks represent statistical significance.
4701
+ A.6.4Hyperbolic image embeddings
4702
+
4703
+ CLIP is a contrastive deep learning model with a joint text-image embedding space (Radford et al., 2021). Recently, Desai et al. (2023) developed MERU, a modified version of CLIP which encodes images and text into
4704
+
4705
+ 512
4706
+ ,
4707
+ 0.1
4708
+ . They impose a hierarchical structure on image an text embeddings: since images are more specific than the sentences which describe them, they encourage image embeddings to have larger values in dimension
4709
+ 0
4710
+ than their corresponding text embeddings. In this case, the root of the hyperboloid represents the most general possible concept. Learning this hierarchy, MERU matches or outperforms CLIP on zero-shot text-to-image and image-to-text retrieval tasks. However, it matches or underperforms CLIP on zero-shot image classification tasks.
4711
+
4712
+ Procedure.
4713
+
4714
+ We hypothesized that MERU classification performance suffers because Desai et al. (2023) used a logistic regression classifier, designed for Euclidean, not hyperbolic, space. To test this, we perform zero-shot image classification on CIFAR-10 on pretrained ViT S/16 MERU and CLIP image embeddings using Euclidean and hyperbolic random forests.3 All forests used 10 estimators with a maximum depth of 5.
4715
+
4716
+ We also experiment with partial and full image embeddings. CLIP passes an image throught an image-encoder, a linear projection layer, and then an
4717
+ 𝐿
4718
+ 2
4719
+ normalization layer (project to the unit hypersphere). Similarly, MERU passes an image through an image-encoder, a linear projection layer, and then an exponential map (project to the hyperboloid). In their experiments, Desai et al. (2023) performed classification only on the image-encoder outputs, which lie in Euclidean space for both CLIP and MERU. However, this approach ignores (1) the information provided by the projection layer and cannot leverage (2) the hierarchical structure gained from projecting to the hyperboloid. We thus exeriment with embeddings with different combinations of layers.
4720
+
4721
+ Results.
4722
+
4723
+ The results of this experiment are summarized in Table 4, alongside the reported accuracies from the original paper. We do not report standard deviation because we use the same train/test split as Desai et al. (2023).
4724
+
4725
+ First, we find that Euclidean random forests perform better on MERU-encoded data than CLIP-encoded data. This suggests that, in a linear probing context, MERU representations actually are more separable with respect to CIFAR-10 classes.
4726
+
4727
+ Additionally, we show that hyperboloid random forests on MERU encodings with linear projection and exponential mapping to the hyperboloid outperform all other combinations of classifiers and embeddings. These findings both demonstrate that hyperbolic embeddings for image-text joint embeddings enhance zero-shot classification performance and that hyperbolic random forests outperform their Euclidean counterparts in this embedding space.
4728
+
4729
+ HyperRF outperforms all Euclidean random forests, suggesting it is the best forest-type classifier for this task. However, we fail to beat the benchmark value reported in Desai et al. (2023) for logistic regression. We reproduce similar accuracies on our own implementation of logistic regression.
4730
+
4731
+ Embedding Predictor CLIP Accuracy MERU Accuracy
4732
+ Encoder Baseline 89.60 89.70
4733
+ Encoder Logistic regression 87.60 90.15
4734
+ Encoder Random forest 82.00 86.05
4735
+ Encoder + LP Random forest 84.20 84.85
4736
+ Encoder + LP + map Random forest 84.00 85.10
4737
+ Encoder + LP + map HyperRF — 86.20
4738
+ Table 4:Per-class average accuracies on zero-shot CIFAR-10 classification benchmarks. HyperRF and scikit-learn benchmarks are new, with baseline accuracies taken from Table 7 of Desai et al. (2023). LP stands for linear projection. Note that map means
4739
+ 𝐿
4740
+ 2
4741
+ normalization for CLIP and exponential map to the hyperboloid for MERU. HyperRF can only be evaluated on MERU with linear projection and exponential map applied, since all other representations are Euclidean.
4742
+ A.6.5WordNet embeddings
4743
+
4744
+ Hyperbolic embeddings of Wordnet Fellbaum (2010) are another popular benchmark for hyperbolic classifiers. We extend our analysis to WordNet classification tasks.
4745
+
4746
+ Procedure.
4747
+
4748
+ We use the WordNet embeddings and labels provided in the Github repository for Doorenbos et al. (2023)4. These are split into binary classification tasks, where embeddings are labeled according to whether or not they belong to a certain class of things (animal, group, mammal, and so on), and multiclass classification tasks. For speed, we downsample the WordNet embeddings to 1,000 randomly-sampled points without rebalancing the classes. The exact same sample is seen by all classifiers. Each classifier is evaluated across 10 seeds using 5-fold cross-validation.
4749
+
4750
+ Results.
4751
+
4752
+ The results for the WordNet experiment are shown in Table 5. These results continue to be very favorable for HyperDT and HyperRF: HyperDT beats Sklearn and HoroRF 8 times each, and HyperRFbeats Sklearn 6 times and HoroRF 8 times, all with statistical significance. No other models achieved a statistically significant advantage against any other models.
4753
+
4754
+ Decision Trees Random Forests
4755
+ scikit- scikit-
4756
+ Data HyperDT learn HoroRF HyperRF learn HoroRF
4757
+
4758
+ Binary
4759
+ Animal 98.88
4760
+
4761
+ 98.69 96.02 98.97
4762
+
4763
+
4764
+ 98.16 96.22
4765
+ Group 94.65
4766
+
4767
+
4768
+ 94.06 91.77 94.64
4769
+
4770
+ 94.23 92.34
4771
+ Mammal 99.86
4772
+
4773
+
4774
+ 99.33 98.92 99.87
4775
+
4776
+
4777
+ 99.19 98.92
4778
+ Occupation 99.58 99.49 99.64 99.61 99.66 99.69
4779
+ Rodent 99.83 99.78 99.79 99.81 99.85 99.85
4780
+ Solid 99.11
4781
+
4782
+
4783
+ 98.72 98.55 99.13
4784
+
4785
+
4786
+ 98.50 98.45
4787
+ Tree 98.90
4788
+
4789
+
4790
+ 98.59 98.46 99.01
4791
+
4792
+
4793
+ 98.68 98.63
4794
+ Worker 98.69
4795
+
4796
+ 98.36 98.57 98.73 98.58 98.57
4797
+
4798
+ Multi
4799
+ Same level 98.10
4800
+
4801
+
4802
+ 97.33 96.98 98.31
4803
+
4804
+
4805
+ 96.91 96.71
4806
+ Nested 89.44
4807
+
4808
+
4809
+ 87.74 77.19 89.72
4810
+
4811
+ 89.22 86.36
4812
+ Both 96.38
4813
+
4814
+
4815
+ 95.60 91.22 96.67
4816
+
4817
+
4818
+ 94.33 91.13
4819
+ Table 5:Mean micro-F1 scores for classification benchmarks over 10 seeds and 5 folds. The highest-scoring decision tree and random forests are bolded separately.
4820
+ *
4821
+ means a predictor beat HyperRF,
4822
+
4823
+ means a predictor beat HoroRF, and
4824
+
4825
+ means a predictor beat scikit-learn, with
4826
+ 𝑝
4827
+ <
4828
+ 0.05
4829
+ .
4830
+ A.6.6Midpoint ablation
4831
+
4832
+ Because the use of the midpoint formula laid out in Equation 10 was guided by intuition rather than actual performance, a worthwhile experiment is to check the effect that substituting this operation with the naive midpoint calculation has on the accuracy and efficiency of our algorithm.
4833
+
4834
+ Procedure.
4835
+
4836
+ We test the effect of substituting the geodesic midpoint calculations with naive midpoint calculations across 10 seeds and a single train-test split. We check this for 100, 200, 400, and 800 points and 2, 4, 8, and 16 dimensions, recording total runtime and F1-micro score on the held-out test set (20% of the sample).
4837
+
4838
+ Results.
4839
+
4840
+ Figure 8 shows the results of the midpoint ablation study. Across almost all sample sizes and dimensions, substituting the naive midpoint computation resulted in a marked reduction in accuracy—although with the benefit of a slight reduction in runtime. We believe that not only is this accuracy-performance tradeoff favorable to the more complicated midpoint computation, it is also theoretically more justified (all things being equal, you should prefer to place the decision boundary exactly in between two differing classes). Thus, we elect to keep the midpoint computation as it is.
4841
+
4842
+ Figure 8:Ablation results for midpoint angle computations.
4843
+ A.7Statistical testing
4844
+
4845
+ For the benchmarks reported in Table 1, we provide statistical significance annotations. To test statistical significance, we used a two-tailed paired
4846
+ 𝑡
4847
+ -test comparing
4848
+ 𝐹
4849
+ 1
4850
+ scores over 10 seeds and 5 folds. Each combination of dataset, dimension, and sample size was tested separately. We used a threshold of
4851
+ 𝑝
4852
+ =
4853
+ 0.05
4854
+ on this test to determine statistical significance, and assigned the significance annotation to the predictor with the higher mean. We report full
4855
+ 𝑝
4856
+ -values in Table 6.
4857
+
4858
+ DT vs DT vs HoroRF vs RF vs RF vs HoroRF vs
4859
+ Dataset
4860
+ 𝐷
4861
+
4862
+ 𝑛
4863
+ HyperDT HoroRF HyperDT HyperRF HoroRF HyperRF
4864
+
4865
+ Gaussian
4866
+ 2 100 .204 4.56e-03 .027 .368 3.92e-03 1.16e-03
4867
+ 200 .545 .053 5.88e-04 .960 1.39e-04 .092
4868
+ 400 4.75e-03 4.86e-03 6.10e-05 .867 6.71e-07 4.41e-05
4869
+ 800 1.52e-03 3.02e-04 1.05e-06 .860 1.14e-10 2.55e-07
4870
+ 4 100 .067 .358 8.29e-05 1.00 9.91e-06 .417
4871
+ 200 .036 6.50e-03 3.98e-05 .705 1.58e-07 .025
4872
+ 400 7.02e-03 5.43e-03 5.95e-03 .514 7.62e-07 4.90e-03
4873
+ 800 4.29e-03 6.03e-04 .034 .062 5.19e-05 2.80e-03
4874
+ 8 100 .709 .420 2.17e-03 .252 .233 .083
4875
+ 200 .821 .766 1.14e-03 .766 9.18e-04 1.00
4876
+ 400 .785 .569 .012 .533 .011 1.00
4877
+ 800 .133 .103 2.41e-03 .598 3.83e-04 .598
4878
+ 16 100 .261 .420 .146 1.00 .032 .420
4879
+ 200 .322 .569 .011 .261 .028 .322
4880
+ 400 .322 — .182 .159 .044 .159
4881
+ 800 — — .033 .096 — —
4882
+
4883
+ NeuroSEED
4884
+ 2 100 .236 .160 .014 .577 2.37e-03 .824
4885
+ 200 .037 8.32e-03 1.14e-05 .138 1.54e-06 .912
4886
+ 400 .010 8.82e-06 6.84e-10 .894 9.78e-11 .048
4887
+ 800 .878 1.25e-08 1.77e-10 .807 2.86e-10 4.56e-03
4888
+ 4 100 .709 .055 3.45e-17 3.34e-05 3.30e-17 2.02e-07
4889
+ 200 .322 7.99e-05 2.57e-24 1.61e-11 3.05e-24 9.19e-15
4890
+ 400 .159 2.75e-05 2.53e-24 6.10e-21 2.86e-24 6.94e-23
4891
+ 800 .322 2.00e-10 1.12e-26 5.07e-20 1.22e-26 2.71e-24
4892
+ 8 100 .533 7.92e-03 2.65e-14 3.44e-10 2.47e-14 1.60e-05
4893
+ 200 .622 6.27e-04 3.62e-20 8.01e-20 1.82e-19 4.50e-15
4894
+ 400 .229 5.28e-08 4.62e-26 6.56e-32 8.02e-26 4.90e-27
4895
+ 800 1.00 6.99e-13 7.88e-37 9.02e-36 6.65e-37 1.65e-33
4896
+ 16 100 .322 .595 2.98e-07 .061 3.40e-08 .123
4897
+ 200 .279 .880 4.98e-07 .213 2.02e-07 .180
4898
+ 400 .569 2.00e-03 6.56e-08 .120 4.82e-08 1.47e-04
4899
+ 800 .659 1.26e-03 1.40e-14 .114 1.62e-14 2.57e-07
4900
+
4901
+ Polblogs
4902
+ 2 979 .118 .571 1.23e-05 2.38e-05 9.15e-07 4.50e-06
4903
+ 4 979 .160 .696 2.05e-07 1.68e-03 6.26e-08 1.63e-03
4904
+ 8 979 .047 .301 1.11e-12 2.48e-10 1.87e-11 6.43e-10
4905
+ 16 979 .987 .302 2.34e-10 3.64e-09 8.29e-11 1.65e-07
4906
+ Table 6:Paired
4907
+ 𝑡
4908
+ -test values for benchmarks over 5 folds and 10 seeds. Missing values indicate that two sets of
4909
+ 𝐹
4910
+ 1
4911
+ scores were identical. Statistically significant values are used to generate cell annotations in Table 1.
4912
+ Generated by L A T E xml
4913
+ Instructions for reporting errors
4914
+
4915
+ We are continuing to improve HTML versions of papers, and your feedback helps enhance accessibility and mobile support. To report errors in the HTML that will help us improve conversion and rendering, choose any of the methods listed below:
4916
+
4917
+ Click the "Report Issue" button.
4918
+ Open a report feedback form via keyboard, use "Ctrl + ?".
4919
+ Make a text selection and click the "Report Issue for Selection" button near your cursor.
4920
+ You can use Alt+Y to toggle on and Alt+Shift+Y to toggle off accessible reporting links at each section.
4921
+
4922
+ Our team has already identified the following issues. We appreciate your time reviewing and reporting rendering errors we may not have found yet. Your efforts will help us improve the HTML versions for all readers, because disability should not be a barrier to accessing research. Thank you for your continued support in championing open access for all.
4923
+
4924
+ Have a free development cycle? Help support accessibility at arXiv! Our collaborators at LaTeXML maintain a list of packages that need conversion, and welcome developer contributions.
4925
+
4926
+ Report Issue
4927
+ Report Issue for Selection