Title: Dimensionless Anomaly Detection on Multivariate Streams with Variance Norm and Path Signature

URL Source: https://arxiv.org/html/2006.03487
License: arXiv.org perpetual non-exclusive license

arXiv:2006.03487v2 [cs.LG] 06 Dec 2023
Dimensionless Anomaly Detection on Multivariate Streams with Variance Norm and Path Signature

Zhen Shao, Ryan Sze-Yin Chan, Thomas Cochrane, Peter Foster, Terry Lyons

Zhen Shao: Mathematical Institute, University of Oxford (shaoz@maths.ox.ac.uk). Ryan Sze-Yin Chan: The Alan Turing Institute, London (rchan@turing.ac.uk). Thomas Cochrane: The Alan Turing Institute, London (thomasc@turing.ac.uk). Peter Foster: The Alan Turing Institute, London (pfoster@turing.ac.uk). Terry Lyons: Mathematical Institute, University of Oxford, and The Alan Turing Institute, London (terry.lyons@maths.ox.ac.uk).

Abstract
In this paper, we propose a dimensionless anomaly detection method for multivariate streams. Our method is independent of the unit of measurement for the different stream channels, and is therefore dimensionless. We first propose the variance norm, a generalisation of the Mahalanobis distance that rigorously handles infinite-dimensional feature spaces and singular empirical covariance matrices. We then combine the variance norm with the path signature, an infinite collection of iterated integrals that provide global features of streams, to propose SigMahaKNN, a method for anomaly detection on (multivariate) streams. We show that SigMahaKNN is invariant to stream reparametrisation and stream concatenation, and has a graded discrimination power depending on the truncation level of the path signature. We implement SigMahaKNN as open-source software and perform extensive numerical experiments, showing significantly improved anomaly detection on streams compared to isolation forest and local outlier factor in applications ranging from language analysis and handwriting analysis to ship-movement-path analysis and univariate time-series analysis.
1 Introduction

Anomaly detection is the semi-supervised learning problem in which one is given a corpus of normal objects, with the aim of learning a map that decides whether a given object belongs to the corpus or not. Unlike in binary classification problems, where both types of objects are given in the training sample, here only the normal objects are available for training, and a detected anomaly does not necessarily belong to any single class: it is simply different from the objects in the corpus.
Streams are a ubiquitous data form found in many real-world situations. They are maps $[a, b] \to \mathbb{R}^d$. Examples include financial time series, the movement paths of objects/humans, and various measurement signals, e.g. ECG (electrocardiogram) signals in medical devices or radio-astronomical signals from telescopes. This work focuses on detecting anomalous streams given a corpus of normal streams. Unlike the often so-called time-series anomaly detection problem, where the aim is to detect a point $t \in [a, b]$ such that the value of the stream at $t$ is anomalous, here we are interested in the question: is the stream/time series as a whole an anomalous object compared with a corpus of normal streams?
Our approach for anomaly detection of streams combines two techniques – the path signature and the variance norm. The path signature of a (multi-dimensional) path/time series is a graded, infinite collection of iterated integrals, where the signature of a path $X$ up to degree $N$ is the collection

$$\mathrm{Sig}^N(\mathbf{x}) := \left( \int_{0 < t_1 < \cdots < t_k < 1} \frac{\mathrm{d}X_{i_1}}{\mathrm{d}t}(t_1)\, \frac{\mathrm{d}X_{i_2}}{\mathrm{d}t}(t_2) \cdots \frac{\mathrm{d}X_{i_k}}{\mathrm{d}t}(t_k)\; \mathrm{d}t_1 \cdots \mathrm{d}t_k \right)_{1 \le i_1, \ldots, i_k \le d}, \quad k = 0, 1, 2, \ldots, N. \tag{1.1}$$
The signature describes paths in a global, geometric, and interacting way. For example, the degree-$1$ signatures are the increments in the different dimensions, and the degree-$2$ signatures describe the area formed between the graph of a pair of features and the $45$-degree line. See [21, 6, 9] for more information about the signature method. The variance norm is a data-driven metric that coincides with the Mahalanobis distance [23] when the sample covariance matrix is finite-dimensional and has full column rank. Real data often exhibit rank-deficiency in the sample covariance matrix, e.g. due to multi-collinearity; while a few techniques have been proposed for using the Mahalanobis distance in rank-deficient settings, the variance norm offers a mathematically rigorous way to handle rank-deficiency, and exposes certain weaknesses in commonly used variants of the Mahalanobis distance. For more details, see Section 3. Overall, our approach consists of transforming each stream to its path signature, fitting the data-driven variance norm to the corpus of path signatures, and using the nearest-neighbour algorithm as a downstream metric-based anomaly detector.
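As an illustration of the truncated signature in equation (1.1), the level-1 and level-2 terms can be computed for a piecewise-linear path with a short NumPy sketch. This is our own illustration, not the paper's released implementation (libraries such as `esig` or `iisignature` compute higher truncation levels); the function name `sig_level2` is ours.

```python
import numpy as np

def sig_level2(path):
    """Truncated path signature (levels 1 and 2) of a piecewise-linear
    path given as an (n_points, d) array.  Level 1 collects the total
    increments; level 2 collects the iterated integrals
    S[i, j] = int int_{t1 < t2} dX_i(t1) dX_j(t2)."""
    path = np.asarray(path, dtype=float)
    inc = np.diff(path, axis=0)            # increments of each linear segment
    level1 = inc.sum(axis=0)               # X_T - X_0
    # prefix[k] = X_k - X_0: the increment accumulated before segment k.
    prefix = np.vstack([np.zeros(path.shape[1]),
                        np.cumsum(inc, axis=0)[:-1]])
    # Cross terms from earlier segments plus the half-square term from
    # integrating along each segment itself.
    level2 = prefix.T @ inc + 0.5 * np.einsum("ki,kj->ij", inc, inc)
    return level1, level2
```

Because (1.1) integrates against $\mathrm{d}X$, inserting extra sample points along a straight segment (a time reparametrisation of the same path) leaves the output unchanged, which is the invariance the method relies on.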
Summary of contributions

Our contributions are four-fold. Firstly, to the best of our knowledge, this is the first work to propose using path signatures in an unsupervised learning setting, namely anomaly detection, with an emphasis on being dimensionless: our anomaly detector is independent of the unit of measurement of the streams. Secondly, we propose the data-driven variance norm, which rigorously generalises the widely used Mahalanobis distance to rank-deficient and infinite-dimensional settings. Thirdly, using the synergy between the path signature and the variance norm, we show that our anomaly detector is invariant to (1) time-reparametrisation and (2) concatenation of streams before or after; moreover, it has a naturally graded discrimination power inherited from the graded structure of the path signature. Finally, we have implemented the proposed stream anomaly detector, publicly available at https://github.com/datasig-ac-uk/signature_mahalanobis_knn, and empirically compared it with well-known stream anomaly detection methods such as the shapelet method, isolation forest and local outlier factor, with improved performance.
Overview of the paper

In Section 2, we give a detailed survey of related work on Mahalanobis distances and anomaly detection on streams, and how it relates to this paper. In Section 3, we give a careful mathematical treatment of the variance norm and show how it reduces to the Mahalanobis distance in certain cases, while exposing certain weaknesses of the Mahalanobis distance. In Section 4, we give a more detailed exposition of path signatures and show that our anomaly detector is invariant to both time-reparametrisation and stream concatenation, with a graded discrimination power. In Section 5, we give the details of the implementation of our stream anomaly detector and present numerical results on a range of real-world stream data, showing competitive performance over baselines.
2 Related work

2.1 Mahalanobis distance

The Mahalanobis distance, originally proposed in [23], has been widely used for anomaly detection in the literature. [15, 30] consider using the minimal Mahalanobis distance to class means in the intermediate layer of a neural-network classifier for out-of-distribution and adversarial example detection. [18] similarly used the Mahalanobis distance to the mean in unmanned vehicle data to detect anomalous operation of the vehicle at single time points. [20] uses the Mahalanobis distance for wind turbine operation anomaly point detection. [28] uses the Mahalanobis distance to the mean to detect anomalous operation of insulated gate bipolar transistors. [12] used the Mahalanobis distance to cluster different periods of a time series with a k-means-like algorithm for pattern detection. [10] compared the Mahalanobis distance with MSE for reconstruction differences of variational auto-encoders when detecting water-system cyber-physical attacks, and found that the Mahalanobis distance gives a higher overall score (quantified by both accuracy and speed). [29] used the Mahalanobis distance to detect abnormal users/transactions in the Bitcoin transaction network.
Relationship to our work

Despite its widespread success, a unified approach to using the Mahalanobis distance is lacking. Many authors use the Mahalanobis distance to the mean to quantify an anomaly, which we will show suffers from serious drawbacks. Moreover, it is not clear what one should do when the (empirical) covariance matrix is singular, and a variety of approaches have been suggested. [4] proposed using the sum of squares of the standardised scores of all non-zero principal components to represent the Mahalanobis distance when the sample covariance matrix is rank-deficient, and further suggested a reduced Mahalanobis distance in which only a certain number of PCs are retained. However, the justification for using the sum of squares of non-zero principal components is not entirely satisfactory, and we show in our approach that it is only correct in certain cases. [27] used the Mahalanobis distance, resolving multi-collinearity with factor analysis, for anomaly detection in space telemetry series. [14] discussed using the pseudo-inverse and feature selection when the covariance matrix is singular; it also discussed using the Gamma distribution, the Weibull distribution and the Box-Cox transformation to set the anomaly threshold. However, there is no clear theoretical foundation for these techniques.

Our discussion of the variance norm generalises the Mahalanobis distance not only to provide a rigorous foundation for handling rank-deficient covariance matrices, exposing certain weaknesses of current approaches such as simply taking the pseudo-inverse; the variance norm is also capable of handling infinite-dimensional feature spaces. Moreover, we firmly note that, in general, it is the nearest-neighbour distance (which we call the conformance distance when used with the variance norm) that should be used in anomaly detection, rather than the distance to the mean. The distance-to-the-mean approach is often justified by appeal to the Gaussian mixture model; however, our discussion of the variance norm shows that the Mahalanobis distance has a deeper mathematical foundation as a purely data-driven norm, and there is no special reason to treat the mean as a special point. We will give an example of why the distance to the mean can be an unhelpful quantifier of anomalies.
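The contrast between distance-to-the-mean and nearest-neighbour (conformance) distance can be made concrete with a toy corpus. The sketch below is our own illustration (function names are ours), using the finite-dimensional pseudo-inverse form of the variance norm from Section 3: a corpus of two tight clusters has its mean in an empty region between them, so a point at the mean looks perfectly normal by distance-to-the-mean yet is far from every corpus point.

```python
import numpy as np

def variance_norm_sq(v, X):
    """p(v, v) = v^T (X^T X)^+ v: the finite-dimensional variance norm
    of v with respect to a mean-centred corpus X (rows are samples)."""
    return float(v @ np.linalg.pinv(X.T @ X) @ v)

def conformance_sq(v, X):
    """Squared conformance distance: nearest-neighbour distance from v
    to the corpus, measured in the variance norm."""
    return min(variance_norm_sq(v - x, X) for x in X)

rng = np.random.default_rng(0)
# Corpus drawn from two tight clusters; its mean sits between them,
# in a region containing no corpus points at all.
corpus = np.vstack([rng.normal(-5.0, 0.1, (50, 2)),
                    rng.normal(5.0, 0.1, (50, 2))])
corpus -= corpus.mean(axis=0)            # mean-centre, as Section 3 assumes

origin = np.zeros(2)                     # coincides with the corpus mean
print(variance_norm_sq(origin, corpus))  # 0.0: "perfectly normal" to the mean
print(conformance_sq(origin, corpus))    # strictly positive: flagged as anomalous
```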
2.2 Anomaly detection on multivariate time series/streamed data

To the best of our knowledge, there has not been extensive work on detecting anomalous time series (i.e. given a corpus of time series, deciding whether a given time series is an anomaly or not). [2] used shapelet learning on the UEA & UCR time series repository (originally intended for classification). Their method is based on the shapelet feature originally proposed in [31] and used for time series classification. However, shapelet features are for univariate time series only, and the generalisation to interacting multivariate time series is unclear; nevertheless, we compare our approach to this method in one of our numerical experiments, restricting to univariate streams. [13] proposes reducing the dimensionality by extracting representative statistical features from each time series (e.g. the mean and the first-order autocorrelation) and then applying PCA. Outlier time series are detected by their deviation from the highest-density region in the PCA space, which is defined by the first two principal components. [3] uses dynamic time warping to define a similarity function between each pair of time series; the time series can then be clustered into groups using this similarity function, and an outlier score is computed for each time series based on its distance to its closest centroid. In our numerical experiments, we compare SigMahaKNN with well-known anomaly detection methods, isolation forest and local outlier factor, using signature- or non-signature-based feature extraction techniques for streams.
Several related fields, such as time-series classification, also rely on feature extraction from time series as a key component. [17] considers graph-embedding time series using SDNE (structured deep network embedding); [16] uses a GAN based on LSTM-RNNs for both the generator and the discriminator for multivariate time series. [32] used self-supervised learning to construct representations for time-series classification. [26] considers signature features for time-series classification. [25] recently benchmarked the performance of multivariate time-series classification algorithms and found that Hydra+MultiROCKET [7] and HIVE-COTEv2 [24] perform significantly better than other approaches on both the current and new TSC problems. These representations could arguably be tried in our unsupervised anomaly detection context. However, in this paper we chose to focus on signature features with the Mahalanobis distance, partly due to its strong theoretical properties, such as concatenation invariance and reparametrisation invariance.
3 Variance norm, a data-driven metric

In this section, we give a careful discussion of the variance norm. The data $x_1, \ldots, x_n$ are always assumed to be mean-centred. New data such as $v$ are centred around the mean of $x_1, \ldots, x_n$.
3.1 Variance norm as a data-driven quadratic form

Definition 3.1.

Given data $x_i$, $i = 1, \ldots, n$, $x_i \in V$, a possibly infinite-dimensional vector space, let

$$p(v, v) = \sup_{q(u, u) \le 1} u(v)^2,$$

where $u \in U := V^*$, the continuous dual space of $V$ consisting of continuous linear transformations $V \to \mathbb{R}$, and we define $q(u, u) = \sum_{i=1}^{n} u^2(x_i)$. We say $p(v, v)$ is the variance norm of $v$.
In order to understand the quadratic forms $p(\cdot, \cdot)$ and $q(\cdot, \cdot)$, we will consider the special case where $V$ is a finite-dimensional vector space, and we show that in this case the quadratic form $p$ is equivalent to the Mahalanobis distance when the data has no multi-collinearity. Our generic, mathematical definition allows us to handle cases where $V$ is infinite-dimensional, as well as cases where multi-collinearity of the $x_i$ makes the covariance matrix non-invertible. The following theorem is our main result on the variance norm.
Theorem 3.1.

Let $V$ be a finite-dimensional (real) vector space. Given $x_1, x_2, \ldots, x_n \in V$ and $v \in V$, we have

1. If $v \in \mathrm{span}\{x_i\}$, then $p(v, v) = v^T (X^T X)^{\dagger} v$, where $\dagger$ denotes the Moore–Penrose pseudo-inverse, and $X \in \mathbb{R}^{n \times d}$ is the matrix whose rows are the $x_i$ with respect to some basis of $V$.

2. If $v \notin \mathrm{span}\{x_i\}$, then $p(v, v) = \infty$.
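The theorem can be checked numerically in a small rank-deficient example. The sketch below is our own illustration (not code from the paper's repository): the corpus spans only a 2-D subspace of $\mathbb{R}^3$, so $X^T X$ is singular. Inside the span, the pseudo-inverse formula returns the finite variance norm; outside the span, Theorem 3.1 says $p(v, v) = \infty$, yet the raw pseudo-inverse formula silently returns (near) zero, which is the weakness of naive pseudo-inversion mentioned in Section 2.

```python
import numpy as np

# Mean-centred corpus whose rows span only a 2-D subspace of R^3
# (third coordinate identically zero), so X^T X is singular.
X = np.array([[ 1.0,  0.0, 0.0],
              [ 0.0,  1.0, 0.0],
              [-1.0, -1.0, 0.0]])
G_pinv = np.linalg.pinv(X.T @ X)        # Moore-Penrose pseudo-inverse

def p(v):
    """p(v, v) = v^T (X^T X)^+ v, the pseudo-inverse formula."""
    return float(v @ G_pinv @ v)

v_in = np.array([1.0, 1.0, 0.0])        # lies in span{x_i}: p is finite
v_out = np.array([0.0, 0.0, 1.0])       # outside span{x_i}: Theorem 3.1 says
                                        # p(v, v) = +inf, but the raw formula
                                        # silently returns (near) zero
print(p(v_in))                          # finite and positive
print(p(v_out))                         # ~0.0, although the true value is infinite
```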
Proof.

Let $U_0 \subset U$, $U_0 = (\mathrm{span}\{x_i\})^o$, the annihilator space of the linear span of the data, with a basis $\{u_i^0\}$, and extend it to a basis of the whole space $U$, so that $U = U_0 + U_1$ with $\{u_j^1\}$ a $q$-orthonormal basis of $U_1$; that is, we have $q(u_j^1, u_j^1) = 1$ and $q(u_i^1, u_j^1) = 0$ for $i \neq j$, where we define $q(u_1, u_2) = \sum_{i=1}^{n} u_1(x_i)\, u_2(x_i)$. Thus $\{u_i^0, u_j^1\}$ is a $q$-orthogonal basis of $U$; let its dual basis in $V$ be $\{v_i^0, v_j^1\}$. Naturally, any vector $v \in V$ can be written as $v = \sum \mu_i v_i^1 + v_0$, where $v_0 \in \mathrm{span}\{v_i^0\}$. We make the following claims:

1. $v \in \mathrm{span}\{x_i\}$ iff $v_0 = 0$.

2. If $v \in \mathrm{span}\{x_i\}$, then $p(v, v) = \sum \mu_i^2$.

3. If $v \notin \mathrm{span}\{x_i\}$, then $p(v, v) = \infty$.

Consider the first statement. If $v \in \mathrm{span}\{x_i\}$ and $v_0 \neq 0$, then $u_i^0(v) = 0$ for every $i$; but because $v_0 \in \mathrm{span}\{v_i^0\}$, there exists $u^0$ such that $u^0(v_0) = 1$, while at the same time $u^0(v) = 0$ and $u^0(v_i^1) = 0$; then $u^0(v) = u^0(\sum \mu_i v_i^1 + v_0)$ gives a contradiction. Therefore $v \in \mathrm{span}\{x_i\}$ implies $v_0 = 0$. Conversely, if $v_0 = 0$, then we have $u_i^0(v) = 0$ for every $i$, hence $v \in ((\mathrm{span}\{x_i\})^o)^o$, the annihilator of the annihilator. As we work in a finite-dimensional space, we have $((\mathrm{span}\{x_i\})^o)^o = \mathrm{span}\{x_i\}$. The first statement then follows.

For the second and third statements, we note that $q(a u_1 + b u_2, a u_1 + b u_2) = a^2 q(u_1, u_1) + b^2 q(u_2, u_2) + 2ab\, q(u_1, u_2)$. Therefore, for $u = \sum a_i u_i^1 + u^0$, where $u^0 \in U_0$, we have $q(u, u) = \sum a_i^2$, because $q(u, u^0) = 0$. In particular, $u^0$ can be arbitrary under the constraint $q(u, u) \le 1$. Next, expanding $p(v, v)$ with $v = \sum \mu_i v_i^1 + v_0$, we have

$$p(v, v) = \sup_{q(u, u) \le 1} \left[ \sum_{i,j} \mu_i \mu_j\, u(v_i^1)\, u(v_j^1) + 2 \sum_i \mu_i\, u(v_i^1)\, u(v_0) + u(v_0)^2 \right].$$

If $v \in \mathrm{span}\{x_i\}$, then necessarily $v_0 = 0$, and it follows that $p(v, v) = \sup_{\sum a_i^2 \le 1} \sum_{i,j} \mu_i \mu_j a_i a_j = \sup_{\sum a_i^2 \le 1} \big( \sum_i \mu_i a_i \big)^2 = \sum_i \mu_i^2$ by the Cauchy–Schwarz inequality. On the other hand, if $v \notin \mathrm{span}\{x_i\}$, so that $v_0 \neq 0$, we may set $a_i = 0$ and obtain the unconstrained expression $\sup_{u^0 \in U_0} (u^0(v_0))^2$, which gives infinity.
1133
In order to progress further, we need to fix a basis. Let us suppose our data vectors $x_i \in V$ come in a measurement basis $B_0$ of the vector space $V$. We note that our definition of $p(\cdot, \cdot)$ has not involved a basis of $V$; thus the quantities derived are independent of the measurement basis. Under the measurement basis and the Euclidean inner product induced by this basis, we may assemble the $x_i \in V$ as the rows of $X \in \mathbb{R}^{n \times d}$, and compute the singular value decomposition $X = W D Y^T$, where $D_{ii} = 0$ for $i > r$.

Let $B_1$ be the basis for $V$ under the change of basis matrix $Y$ from $B_0$; that is, suppose $B_0 = \{b_1^0, b_2^0, \dots, b_d^0\}$ and $B_1 = \{b_1^1, b_2^1, \dots, b_d^1\}$. Then we have

$$b_1^0 = y_{11}\, b_1^1 + y_{21}\, b_2^1 + y_{31}\, b_3^1 + \cdots,$$
$$b_2^0 = y_{12}\, b_1^1 + y_{22}\, b_2^1 + y_{32}\, b_3^1 + \cdots,$$
$$\vdots$$

Under the $B_1$ basis, the data matrix $X$ becomes $XY$, which equals $WD$. Let $B_1^*$ be the dual basis for $U = V^*$. Let $\bar{B_1^*} = \{b_{1,1}^*, b_{1,2}^*, \dots, b_{1,r}^*\}$ be the set of the first $r$ vectors in $B_1^*$, and $\hat{B_1^*}$ be the last $d - r$ vectors of $B_1^*$. Because in the $B_1$ basis the data matrix $X$ becomes $WD$, with the last $d - r$ columns of the diagonal matrix $D$ being zero, we see that $\hat{B_1^*}$ is a basis for $U_0$, the annihilator of the span of the rows of $X$. Therefore $\bar{B_1^*}$ forms a basis of $U_1$, the space that, when added to $U_0$, gives us the whole space $U$. Furthermore, if $x_k = \sum_l a_l^k\, b_l^1$, we can compute

$$q(b_{1,i}^*, b_{1,j}^*) = \sum_{k=1}^n b_{1,i}^*(x_k)\, b_{1,j}^*(x_k) = \sum_k a_i^k a_j^k = \sum_k (XY)_{ki}\, (XY)_{kj} = (D^T W^T W D)_{ij} = D_{ii}^2\, \delta_{ij}.$$

Thus we see that $\bar{B_1^*}$ is an orthogonal basis for $U_1$, and $\bar{B_2^*} = \{D_{ii}^{-1}\, b_{1,i}^*\}$ is a $q$-orthonormal basis for $U_1$. An arbitrary (row) vector $v \in V$, if $v \in \operatorname{span}\{x_i\}$, given in the measurement basis becomes $v \bar{Y} \bar{D}^{-1}$ in the dual basis of $\bar{B_2^*}$, where $\bar{Y}$ is the first $r$ columns of $Y$, and thus $p(v, v) = v \bar{Y} \bar{D}^{-1} (v \bar{Y} \bar{D}^{-1})^T = v \bar{Y} \bar{D}^{-2} \bar{Y}^T v^T$; if we view $v$ as a column vector, we arrive at $p(v, v) = v^T (\bar{Y} \bar{D}^{-2} \bar{Y}^T)\, v = v^T (X^T X)^{\dagger}\, v$. If $v$ is not in $\operatorname{span}\{x_i\}$, as discussed before, we have $p(v, v) = \infty$. ∎

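As a quick numerical illustration of the closed form $p(v,v) = v^T (X^T X)^{\dagger} v$, with $p(v,v) = \infty$ off the row span, here is a minimal numpy sketch; the function name and tolerance are ours, not the paper's codebase:

```python
import numpy as np

def variance_norm_sq(v, X, tol=1e-10):
    """p(v, v) = v^T (X^T X)^+ v, with +inf when v leaves the row span of X."""
    # SVD of the corpus matrix X (rows are the data vectors x_i).
    _, s, Yt = np.linalg.svd(X, full_matrices=True)
    r = int(np.sum(s > tol))               # numerical rank
    coeffs = Yt @ v                        # coordinates of v in the right-singular basis
    # Any component of v outside span{x_i} makes p(v, v) infinite.
    if np.linalg.norm(coeffs[r:]) > tol * max(1.0, np.linalg.norm(v)):
        return np.inf
    return float(np.sum((coeffs[:r] / s[:r]) ** 2))

X = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])            # rank 2; row span is the x-y plane
print(variance_norm_sq(np.array([1.0, 0.0, 0.0]), X))  # finite: v lies in the row span
print(variance_norm_sq(np.array([0.0, 0.0, 1.0]), X))  # inf: v leaves the row span
```

The first call evaluates $v^T \operatorname{diag}(1, 1/4, 0)\, v = 1$; the second triggers the infinity branch.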
The variance norm will be used as part of the conformance distance for anomaly detection.

Definition 3.2. Let $V$ be a finite dimensional vector space, $\mathcal{C} = \{x_1, \dots, x_n\} \subset V$, and $y \in V$. The conformance distance is

$$d(y, \mathcal{C}) = \min_{x_i \in \mathcal{C}}\; p(y - x_i,\, y - x_i).$$

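Definition 3.2 can be sketched directly on top of the same SVD machinery (illustrative code with an ad-hoc tolerance; candidates whose difference leaves the row span of the corpus contribute $p = \infty$ and are skipped in the minimum):

```python
import numpy as np

def conformance(y, corpus, tol=1e-10):
    """d(y, C) = min_i p(y - x_i, y - x_i), as in Definition 3.2."""
    X = np.asarray(corpus, dtype=float)
    _, s, Yt = np.linalg.svd(X)
    r = int(np.sum(s > tol))
    best = np.inf
    for x in X:
        c = Yt @ (y - x)
        if np.linalg.norm(c[r:]) > tol:    # y - x_i outside the span of the corpus rows
            continue                       # this candidate has p = inf
        best = min(best, float(np.sum((c[:r] / s[:r]) ** 2)))
    return best

corpus = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
print(conformance(np.array([1.0, 1.0]), corpus))
```

For this toy corpus, $X^T X = \begin{pmatrix}5 & 4\\ 4 & 5\end{pmatrix}$ and the minimum is attained at $x_3$, giving $2/9$.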
3.2 Discussion
3.2.1 Comparison to Mahalanobis distance

The Mahalanobis distance, commonly defined between a point $y$ and a distribution $D$, is

$$d_M(y, D) = (y - \mu)^T S^{-1} (y - \mu), \tag{3.1}$$

where $\mu$ is the mean of $D$ and $S$ is the covariance matrix of $D$, assumed to be positive-definite. Replacing $D$ with the empirical distribution defined by the corpus, we have

$$d_M(y, X) = (y - \mu)^T \left( (X - \mu)^T (X - \mu) \right)^{-1} (y - \mu), \tag{3.2}$$

where $\mu = \frac{1}{n} \sum_i x_i$. (3.2) has two issues:

1. Computing a form of distance to the mean for anomaly detection may not be suitable in certain situations. For example, suppose that the corpus consists of points on the unit circle, centred at the origin, where the points are uniformly distributed with an angle $\theta \in [0, \pi]$. Suppose the point of interest $y$ lies on the circle but at the angle $\frac{3}{2}\pi$. Then $y$ is clearly an anomaly, but it has the same distance to the mean as any point in the corpus.

2. It is not clear how one should proceed when the empirical covariance matrix $(X - \mu)^T (X - \mu)$ is rank-deficient/singular. One might think of replacing the inverse with the pseudo-inverse, which coincides with a special case of our conformance distance.

In our approach, $d(y, \mathcal{C})$ equals $\min_{x_i \in \mathcal{C}} p(y - x_i,\, y - x_i) = \min_{x_i \in \mathcal{C}} (y - x_i)^T (X^T X)^{\dagger} (y - x_i)$, or $\infty$. In the special case where $X$ has full column rank and $(y - x_i)$ belongs to the span of the rows of $X$, the conformance distance is equal to the nearest neighbour Mahalanobis distance. In general, they differ. We have already seen why it makes sense to use the nearest neighbour distance, and why the inverse needs to be modified in some cases; but the most striking feature of our conformance distance is that it equals infinity in some cases. As we see in Section 3.2.2, this is a key and beneficial feature.

3.2.2 Why the variance norm should be infinity in some cases

Taking the pseudo-inverse computationally makes $\tilde{d}(y, X) := \min_{x_i \in X} (y - x_i)^T (X^T X)^{\dagger} (y - x_i)$ insensitive to certain modifications of $y$. To see this, note that by the definition of the pseudo-inverse, if the SVD of $X$ is $X = W D Y^T$, then $(X^T X)^{\dagger} = Y (D^T D)^{\dagger} Y^T$, where $(D^T D)^{\dagger}$ is diagonal with the first $r$ diagonal elements being the inverses of the squares of the singular values of $X$, and the other diagonal elements being $0$. Let $z$ be in the span of the last $d - r$ columns of $Y$. Now it is clear that we have

$$\tilde{d}(y, X) = \tilde{d}(y + z, X), \tag{3.3}$$

as $Y^T z$ has non-zero entries only in the last $d - r$ positions.

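A small numerical sketch of (3.3), under the assumption of a rank-deficient corpus matrix; the perturbation $z$ lives in the orthogonal complement of the row space of $X$ and is invisible to the pseudo-inverse distance (names are illustrative):

```python
import numpy as np

# Rank-deficient corpus: both rows lie in the x-y plane of R^3.
X = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])
P = np.linalg.pinv(X.T @ X)            # (X^T X)^+

def d_tilde(y):
    """min_i (y - x_i)^T (X^T X)^+ (y - x_i), the pseudo-inverse variant."""
    return min(float((y - x) @ P @ (y - x)) for x in X)

y = np.array([0.5, 0.5, 0.0])          # lies in the row space of X
z = np.array([0.0, 0.0, 3.0])          # spans the last d - r right-singular directions
print(d_tilde(y), d_tilde(y + z))      # the two values agree: z is invisible
```

Yet $y + z$ is outside the row space of $X$, which is exactly the situation where the quadratic form $p$ would instead report infinity.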
From an anomaly detection point of view, the above results in, e.g., $0 = \tilde{d}(X_i, X) = \tilde{d}(X_i + z, X)$; however, as $X_i + z$ is outside the row space of $X$, it might plausibly be classified as an anomaly, in fact a different type of anomaly. This coincides with our quadratic form $p$ and is the method we propose.

4 Signatures for unsupervised anomaly detection on streams

Building on the variance norm and conformance distance introduced in the last section, here we formally define a stream as a mathematical object and study anomalous streams with path signatures and the conformance distance. The combination of the two yields many theoretical properties: stream reparametrisation invariance, stream concatenation invariance and graded discrimination power.

4.1 Streams of data and signature features

Below we give a formal definition of a stream of data.

Definition 4.1 (Stream of data). The space of streams of data in a set $\mathcal{X}$ is defined as

$$\mathcal{S}(\mathcal{X}) := \{\mathbf{x} = (x_1, \dots, x_n) : x_i \in \mathcal{X},\; n \in \mathbb{N}\}.$$

Example 1. When a person writes a character by hand, the stroke of the pen naturally determines a path. If we record the trajectory, we obtain a two-dimensional stream of data $\mathbf{x} = ((x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)) \in \mathcal{S}(\mathbb{R}^2)$. If we record the stroke of a different writer, the associated stream of data could have a different number of points. The distance between successive points may also vary.

We give a formal definition of path signatures.

Definition 4.2 (Signature). Let $\mathbf{x} = (x_1, \dots, x_n) \in \mathcal{S}(\mathbb{R}^d)$ be a stream of data in $d$ dimensions. Let $X = (X^1, \dots, X^d) : [0, 1] \to \mathbb{R}^d$ be such that $X\left(\frac{i}{n-1}\right) = x_{i+1}$ for $i = 0, 1, \dots, n - 1$, with linear interpolation in between. Then we define the signature of $\mathbf{x}$ of order $N \in \mathbb{N}$ as

$$\operatorname{Sig}^N(\mathbf{x}) := \left( \int_{0 < t_1 < \cdots < t_k < 1} \frac{\mathrm{d}X^{i_1}}{\mathrm{d}t}(t_1)\, \frac{\mathrm{d}X^{i_2}}{\mathrm{d}t}(t_2) \cdots \frac{\mathrm{d}X^{i_k}}{\mathrm{d}t}(t_k)\; \mathrm{d}t_1 \cdots \mathrm{d}t_k \right)_{\substack{1 \le i_1, \dots, i_k \le d \\ k = 0, 1, 2, \dots, N}}. \tag{4.1}$$

+
2397
+ The signature of a stream of data is a vector of scalars. The dimension of this vector is
2398
+
2399
+
2400
+ 𝑑
2401
+ 𝑁
2402
+ :=
2403
+ 1
2404
+ +
2405
+ 𝑑
2406
+ +
2407
+ 𝑑
2408
+ 2
2409
+ +
2410
+
2411
+ +
2412
+ 𝑑
2413
+ 𝑁
2414
+ =
2415
+ 𝑑
2416
+ 𝑁
2417
+ +
2418
+ 1
2419
+
2420
+ 1
2421
+ 𝑑
2422
+
2423
+ 1
2424
+ .
2425
+
2426
+
2427
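For the piecewise-linear interpolation of Definition 4.2, the iterated integrals in (4.1) can be evaluated exactly: the signature of a single linear segment with increment $\Delta$ has level $k$ equal to $\Delta^{\otimes k}/k!$, and segment signatures combine via the truncated tensor product (this is Chen's identity, stated in Section 4.3.2). A minimal sketch, with levels stored as numpy arrays of shapes $(),\, d,\, d \times d, \dots$ (the helper names are ours, not the paper's implementation):

```python
import math
import numpy as np

def tensor_mul(a, b, N):
    """Truncated tensor-algebra product: c_k = sum_{i+j=k} a_i (x) b_j."""
    return [sum(np.multiply.outer(a[i], b[k - i]) for i in range(k + 1))
            for k in range(N + 1)]

def seg_exp(delta, N):
    """Signature of one linear segment: level k equals delta^{(x)k} / k!."""
    levels, power = [np.array(1.0)], np.array(1.0)
    for k in range(1, N + 1):
        power = np.multiply.outer(power, delta)
        levels.append(power / math.factorial(k))
    return levels

def signature(stream, N):
    """Exact order-N signature of a piecewise-linear stream of data."""
    stream = np.asarray(stream, dtype=float)
    sig = [np.array(1.0)] + [np.zeros((stream.shape[1],) * k) for k in range(1, N + 1)]
    for a, b in zip(stream[:-1], stream[1:]):
        sig = tensor_mul(sig, seg_exp(b - a, N), N)
    return sig

s = signature([[0.0, 0.0], [1.0, 2.0]], N=2)
# Single segment with increment (1, 2): level 1 is (1, 2), level 2 is the outer product / 2.
print(s[1], s[2])
```

Inserting the collinear midpoint $(0.5, 1)$ leaves the signature unchanged, a finite check of reparametrisation invariance; and for $d = 2$, $N = 2$ the flattened signature has $1 + 2 + 4 = 7$ entries, matching $d_N$.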
The signatures are unique and have universal non-linearity.

Uniqueness: Hambly and Lyons [11] show that under mild assumptions, the full collection of features $\operatorname{Sig}(\mathbf{x})$ uniquely determines $\mathbf{x}$ up to translations and reparametrisations.

Universal non-linearity: Linear functionals on the signature are dense in the set of functions on $\mathbf{x}$. Suppose we wish to learn the function $f$ that maps data $x$ to labels $y$. The universal non-linearity property states that, under some assumptions, for any $\epsilon > 0$ there exists a linear function $L$ such that $\| f(x) - L(\operatorname{Sig}(x)) \| \le \epsilon$. See [22].

4.2 SigMahaKNN, detecting anomalies in streamed data

Finally, we define anomalous streams using path signatures and the conformance distance. Let $\mathcal{C} \subset \mathcal{S}(\mathbb{R}^d)$ be a finite corpus (or empirical measure) of streams of data. Let $\operatorname{Sig}^N$ be the signature of order $N \in \mathbb{N}$. Then $\|\cdot\|_{\operatorname{Sig}^N(\mathcal{C})}$ is the conformance distance associated with the empirical measure of $\operatorname{Sig}^N(\mathcal{C})$. We let $\|\cdot\|_{\operatorname{Sig}^N(\mathcal{C})}$ be our anomaly score; that is, streams with a conformance distance higher than a user-determined threshold are anomalies. We call our anomaly detection algorithm SigMahaKNN, a combination of the signature, a generalisation of the Mahalanobis distance, and k-nearest neighbours.

4.3 Properties of SigMahaKNN
4.3.1 Reparametrisation invariance

A reparametrisation of a path $X : [a, b] \to \mathbb{R}^d$ is a path $\tilde{X} : [a, b] \to \mathbb{R}^d$ with $\tilde{X}_t = X_{\psi_t}$, where $\psi : [a, b] \to [a, b]$ is a surjective, continuous, non-decreasing function. The path signature is invariant under reparametrisation [6]. Therefore SigMahaKNN computes an anomaly score that is invariant to path reparametrisations. We give a concrete example of reparametrisation. Consider writing a digit on a sheet of paper. The handwriting forms a path $[0, 1] \to \mathbb{R}^2$. A reparametrisation corresponds to a change in the speed of writing while keeping the shape of the written digit exactly the same. This transformation should not change whether the written shape is an anomaly, as reflected by the reparametrisation invariance of SigMahaKNN.

Theorem 4.1. Let $\tilde{X}$ be a reparametrisation of $X$. Then we have $\|\tilde{X}\|_{\operatorname{Sig}^N(\mathcal{C})} = \|X\|_{\operatorname{Sig}^N(\mathcal{C})}$.

4.3.2 Concatenation invariance

Let $X : [a, b] \to \mathbb{R}^d$ and $Y : [b, c] \to \mathbb{R}^d$ be two paths. We define their concatenation as the path $X * Y : [a, c] \to \mathbb{R}^d$ for which $(X * Y)_t = X_t$ for $t \in [a, b]$ and $(X * Y)_t = X_b + (Y_t - Y_b)$ for $t \in [b, c]$ [6]. We then have (Chen's identity)

$$S(X * Y)_{a,c} = S(X)_{a,b} \otimes S(Y)_{b,c}, \tag{4.2}$$

where we have represented the signature by a formal power series

$$S(X)_{a,b} = \sum_{k=0}^{\infty}\; \sum_{i_1, \dots, i_k \in \{1, \dots, d\}} S(X)_{a,b}^{i_1, \dots, i_k}\; e_{i_1} \otimes \cdots \otimes e_{i_k}. \tag{4.3}$$

The tensor product is defined as

$$(e_{i_1} \otimes \cdots \otimes e_{i_k}) \otimes (e_{j_1} \otimes \cdots \otimes e_{j_m}) = e_{i_1} \otimes \cdots \otimes e_{i_k} \otimes e_{j_1} \otimes \cdots \otimes e_{j_m}. \tag{4.4}$$

The product $\otimes$ then extends uniquely and linearly to all power series. We demonstrate the first few terms of the product in the following expression:

$$\left( \sum_{k=0}^{\infty}\; \sum_{i_1, \dots, i_k \in \{1, \dots, d\}} \lambda^{i_1, \dots, i_k}\, e_{i_1} \otimes \cdots \otimes e_{i_k} \right) \otimes \left( \sum_{k=0}^{\infty}\; \sum_{i_1, \dots, i_k \in \{1, \dots, d\}} \mu^{i_1, \dots, i_k}\, e_{i_1} \otimes \cdots \otimes e_{i_k} \right)$$
$$= \lambda^{0} \mu^{0} + \sum_{i=1}^{d} \left( \lambda^{0} \mu^{i} + \lambda^{i} \mu^{0} \right) e_i + \sum_{i,j=1}^{d} \left( \lambda^{0} \mu^{i,j} + \lambda^{i} \mu^{j} + \lambda^{i,j} \mu^{0} \right) e_i \otimes e_j + \cdots.$$

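The displayed low-order terms are easy to check numerically. The sketch below (our own helper with random coefficients, not the paper's code) implements the generic truncated product $c_k = \sum_{i+j=k} \lambda_i \otimes \mu_j$ and compares it against the explicit terms above:

```python
import numpy as np

def tensor_mul(a, b):
    """Truncated product of tensor series: c_k = sum_{i+j=k} a_i (x) b_j."""
    N = len(a) - 1
    return [sum(np.multiply.outer(a[i], b[k - i]) for i in range(k + 1))
            for k in range(N + 1)]

rng = np.random.default_rng(0)
d = 3
# Series truncated at level 2: a scalar, a d-vector and a d x d matrix of coefficients.
lam = [rng.standard_normal(), rng.standard_normal(d), rng.standard_normal((d, d))]
mu = [rng.standard_normal(), rng.standard_normal(d), rng.standard_normal((d, d))]

c = tensor_mul(lam, mu)
# The generic product reproduces the explicit first terms of the expansion:
assert np.isclose(c[0], lam[0] * mu[0])
assert np.allclose(c[1], lam[0] * mu[1] + lam[1] * mu[0])
assert np.allclose(c[2], lam[0] * mu[2] + np.multiply.outer(lam[1], mu[1]) + lam[2] * mu[0])
print("first-terms expansion verified")
```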
We see that concatenation with $X$ results in a linear transformation of the signature of $Y$. Because a linear transformation is equivalent to a change of basis, and the variance norm is independent of the basis (it is defined without using any basis), it follows that the variance norm with path signatures is invariant under concatenation by paths on the left. Similarly, the variance norm with path signatures is invariant under concatenation by paths on the right. As the conformance distance is computed by taking minimal variance norms, it is also invariant under concatenations of paths. That is:

Theorem 4.2. Let $\mathcal{C}$ be a corpus of paths and $X$, $Y$ be paths. Define $\mathcal{C} + X$ to be the set of paths consisting of $Z * X$ for $Z \in \mathcal{C}$. Then we have $\|Y\|_{\operatorname{Sig}^N(\mathcal{C})} = \|Y * X\|_{\operatorname{Sig}^N(\mathcal{C} + X)}$.

4.3.3 Naturally graded discriminating powers

The graded structure of the signature given by its order gives naturally graded discriminating powers. In particular, the variance norm with signature features is monotonically increasing with respect to the signature order and will reach infinity for a finite order of the signature.

Proposition 4.1. Let $\mathcal{C} \subset \mathcal{S}(\mathbb{R}^d)$ be a finite corpus. Let $W$ be a path. Then $\|W\|_{\operatorname{Sig}^N(\mathcal{C})}$ is non-decreasing as a function of $N$.

Proof. Let $M \ge N$. Let $y_N$, $y_M$ be the $N$- and $M$-order signatures of the stream $W$, and let $X_N$, $X_M$ be the $N$- and $M$-order signatures of the corpus (represented in some basis, with the basis for $X_N$ being a subset of the basis for $X_M$), respectively. Note that the first $d_N$ entries of $y_M$, and of each row of $X_M$, are identical to $y_N$ and the rows of $X_N$ respectively.

We discuss two cases. If $y_N$ is not in the span of the rows of $X_N$, then $y_M$ is not in the span of the rows of $X_M$; hence both conformance distances are infinite.

In the case that $y_N$ is in the span of the rows of $X_N$: if $y_N$, $y_M$ are both in the span of the rows of $X_N$, $X_M$ respectively, then the variance norm using the signature of level $M$ equals the variance norm using the signature of level $N$ plus non-negative terms. This is because of the formula $p(v, v) = v^T (X^T X)^{\dagger} v$. If $y_N$ is in the span of $X_N$ but $y_M$ is not in the span of $X_M$, then in the latter case the variance norm is infinite, so it is clearly increasing. ∎

Moreover, for a sufficiently high resolution, any stream of data not belonging to the corpus has infinite variance norm:

Proposition 4.2. Let $\mathcal{C} \subset \mathcal{S}(\mathbb{R}^d)$ be a finite corpus. Let $\mathbf{Y} \in \mathcal{S}(\mathbb{R}^d)$ be a stream of data that does not belong to the corpus, $\mathbf{Y} \notin \mathcal{C}$. Then there exists $N$ large enough such that

$$\|\mathbf{Y}\|_{\operatorname{Sig}^n(\mathcal{C})} = \infty \qquad \forall\, n \ge N.$$

Proof. If $\mathbf{Y} \notin \mathcal{C}$, there exists $N$ large enough such that $\operatorname{Sig}^N(\mathbf{Y})$ is linearly independent of $\operatorname{Sig}^N(\mathcal{C})$ [22]. Therefore the variance norm, and hence the conformance distance, is infinite. Going beyond $N$, $\operatorname{Sig}^n(\mathbf{Y})$ is still linearly independent of $\operatorname{Sig}^n(\mathcal{C})$, as the first $d_N$ entries are the same as the level-$N$ vectors. ∎

4.3.4 A dimensionless anomaly detector

Changing the measurement units of the stream channels induces a linear transformation on the signatures of the streams, by the formula in (4.1). Because the variance norm is independent of linear transformations, we have:

Theorem 4.3. Let $\mathcal{C}$ be a corpus of paths and $Y$ be a path. Suppose each path in $\mathcal{C}$ has $d$ dimensions. Let $A \in \mathbb{R}^{d \times d}$ be non-singular, and define $AX$ to be the path where $A$ is applied to each point in the path. Then we have $\|Y\|_{\operatorname{Sig}^N(\mathcal{C})} = \|AY\|_{\operatorname{Sig}^N(A\mathcal{C})}$.

5 Software implementation and numerical experiments

The software for anomaly detection on streamed data, and the code for the reproduction of all our experiments, is available at https://github.com/datasig-ac-uk/signature_mahalanobis_knn.

5.1 Software contribution
5.1.1 Implementation of SigMahaKNN

Algorithm 1 (SigMahaKNN). Given a corpus of $n$ streams, train a function mapping a stream to an anomaly score in $[0, \infty]$. Parameters: (1) path-signature related: signature augmentations, scaling, signature windowing, truncation depths; (2) variance-norm related: subspace threshold, SVD threshold.

+ 1.
3362
+
3363
+ Compute signatures of the streams, forming a corpus matrix
3364
+ 𝑋
3365
+ . Centre the rows of
3366
+ 𝑋
3367
+ on its mean,
3368
+ 𝜇
3369
+ .
3370
+
3371
+ 2.
3372
+
3373
+ Compute a truncated SVD factorisation of
3374
+ 𝑋
3375
+ of the form,
3376
+ 𝑋
3377
+ =
3378
+ 𝑈
3379
+
3380
+ Σ
3381
+
3382
+ 𝑉
3383
+ 𝑇
3384
+ , where
3385
+ 𝑈
3386
+
3387
+
3388
+ 𝑛
3389
+ ×
3390
+ 𝑘
3391
+ ,
3392
+ Σ
3393
+
3394
+
3395
+ 𝑘
3396
+ ×
3397
+ 𝑘
3398
+ ,
3399
+ 𝑉
3400
+ 𝑇
3401
+
3402
+
3403
+ 𝑘
3404
+ ×
3405
+ 𝑚
3406
+ , where
3407
+ 𝑘
3408
+ is the numerical rank of
3409
+ 𝑋
3410
+ such that the
3411
+ 𝑘
3412
+ -th largest singular value of
3413
+ 𝑋
3414
+ is the smallest singular value bigger or equal to the svd threshold.
3415
+
3416
+ 3.
3417
+
3418
+ Compute the signature of the input stream, and centre it by subtracting
3419
+ 𝜇
3420
+ from it, denote the result by
3421
+ 𝑌
3422
+ . For each pair
3423
+ (
3424
+ 𝑋
3425
+ 𝑖
3426
+ ,
3427
+ 𝑌
3428
+ )
3429
+ , where
3430
+ 𝑋
3431
+ 𝑖
3432
+ is the
3433
+ 𝑖
3434
+ -th row of
3435
+ 𝑋
3436
+ , compute the variance norm of
3437
+ 𝑧
3438
+ :=
3439
+ 𝑋
3440
+ 𝑖
3441
+
3442
+ 𝑌
3443
+ by (
3444
+ 𝑧
3445
+ viewed as a column vector):
3446
+
3447
+ (a)
3448
+
3449
+ If
3450
+
3451
+ 𝑧
3452
+
3453
+ 2
3454
+ <
3455
+ 10
3456
+
3457
+ 15
3458
+ , return
3459
+ 0
3460
+ .
3461
+
3462
+ (b)
3463
+
3464
+ If
3465
+
3466
+ 𝑉
3467
+
3468
+ 𝑉
3469
+ 𝑇
3470
+
3471
+ 𝑧
3472
+
3473
+ 𝑧
3474
+
3475
+ 2
3476
+
3477
+ 𝑧
3478
+
3479
+ 2
3480
+ >
3481
+ subspace threshold
3482
+ , return
3483
+
3484
+ .
3485
+
3486
+ (c)
3487
+
3488
+ Otherwise, return
3489
+ 𝑧
3490
+ 𝑇
3491
+
3492
+ 𝑉
3493
+
3494
+ Σ
3495
+
3496
+ 2
3497
+
3498
+ 𝑉
3499
+ 𝑇
3500
+
3501
+ 𝑧
3502
+ .
3503
+
3504
+ 4.
3505
+
3506
+ Return
3507
+ min
3508
+ 𝑖
3509
+
3510
+
3511
+ 𝑌
3512
+
3513
+ 𝑋
3514
+ 𝑖
3515
+
3516
+ 𝑋
3517
+ by using the variance norm metric and the sklearn nearest neighbour.
3518
+
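The steps above can be condensed into a minimal NumPy sketch. This is our own illustrative code, not the released package: the function names and the default thresholds in the signatures are ours, and a plain loop stands in for sklearn's nearest-neighbour search.

```python
import numpy as np

def fit_variance_norm(X, svd_threshold=1e-12):
    """Steps 1-2: centre the corpus and keep the numerically
    significant part of its SVD."""
    mu = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    k = int(np.sum(s >= svd_threshold))  # numerical rank of X
    return mu, s[:k], Vt[:k]

def variance_norm(z, s, Vt, subspace_threshold=1e-6):
    """Steps 3(a)-3(c): squared variance norm of a difference vector z."""
    nz = np.linalg.norm(z)
    if nz < 1e-15:
        return 0.0
    if np.linalg.norm(Vt.T @ (Vt @ z) - z) / nz > subspace_threshold:
        return float("inf")  # z leaves the span of the corpus rows
    w = Vt @ z
    return float(w @ (w / s**2))  # z^T V Sigma^{-2} V^T z

def anomaly_score(y, X, s, Vt):
    """Step 4: nearest-neighbour distance to the corpus under the
    variance norm; the centring by mu cancels in X_i - Y."""
    return min(variance_norm(x - y, s, Vt) for x in X)
```

On a toy rank-deficient corpus (points on a line), queries off the line score $\infty$, while on-line queries get a finite score that grows with their nearest-neighbour distance.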
5.1.2 Discussion

Threshold parameters in the variance norm

In practice, $X$ is often of full algebraic rank due to rounding errors but numerically rank-deficient, that is, its trailing singular values are near zero. We set a threshold to compute an approximate SVD of $X$, so as to handle numerical rank-deficiency. Due to this approximation, the basis-independence property of the variance norm only holds approximately in practice. Moreover, due to numerical errors, we need a subspace threshold to determine whether a given vector belongs to the span of the rows of $X$.
Setting a threshold given anomaly scores

The nearest neighbour distance gives us a number that a user often needs to compare against a threshold. One way to set the threshold in practice is to use the following procedure. We split the corpus into two equal-sized parts and compute the empirical CDF of the nearest neighbour distance for one part, using the other part as the corpus. One can then set a threshold distance by choosing an appropriate tail quantile in the empirical CDF.
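This procedure admits a short sketch. The helper below is ours (the 50/50 split and the 0.99 tail quantile are illustrative defaults, not choices prescribed by the text), and it accepts any anomaly-scoring function `score(stream, reference)`:

```python
import math

def calibrate_threshold(corpus, score, tail=0.99):
    """Split the corpus in two, score one half against the other, and
    return the `tail` quantile of the empirical CDF of those scores."""
    half = len(corpus) // 2
    reference, held_out = corpus[:half], corpus[half:]
    scores = sorted(score(s, reference) for s in held_out)
    idx = max(math.ceil(tail * len(scores)) - 1, 0)  # empirical quantile
    return scores[idx]
```

At prediction time, a stream whose nearest-neighbour distance exceeds the returned value would be flagged as anomalous.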
Signature transformations

The practical performance of signatures is often improved by considering various stream augmentations [26]. We will use the time-augmentation, invisibility-reset and lead-lag transformations in some of our numerical experiments. For more details of these transformations see [26].
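Two of these transformations can be sketched directly (our own minimal versions, for illustration only; the invisibility-reset transform and the richer options of [26] are omitted):

```python
def time_augment(stream):
    """Append a uniformly increasing time coordinate to each point."""
    denom = max(len(stream) - 1, 1)
    return [tuple(p) + (i / denom,) for i, p in enumerate(stream)]

def lead_lag(values):
    """Lead-lag transform of a 1-D stream: interleave the series with a
    one-step-delayed copy of itself, turning n points into 2n - 1 pairs."""
    out = [(values[0], values[0])]
    for i in range(1, len(values)):
        out.append((values[i], values[i - 1]))  # lead component moves first
        out.append((values[i], values[i]))      # then the lag catches up
    return out
```

The lead-lag transform is useful because the signature of the resulting 2-D path captures quadratic-variation-like information about the original series.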
5.2 Experimental results

5.2.1 Dataset and methods compared

We test our approach on four datasets: handwritten digits, marine vessel traffic data, language data, and a selection of univariate time series from the UCR repository. For the first three datasets, which consist of multivariate streams, we compare our method with the isolation forest [19] and the local outlier factor method [5]. These two methods are well-known anomaly detection methods; in order to use them on streams, we combine them with either moment features (mean and covariance of the different dimensions of the stream) or signature features. We thus have four baselines: IF-M (Isolation Forest with Moment features), IF-S (Isolation Forest with Signature features), LOF-M (Local Outlier Factor with Moment features), and LOF-S (Local Outlier Factor with Signature features). For univariate time series we compare our method with a specialised shapelet-based anomaly detection technique by Beggel et al. [2]. We mostly report AUC as a measure of accuracy for anomaly detection. The full experimental results, including run-times, are included in the supplementary materials.
5.2.2 Handwritten digits

We evaluate our proposed method using the PenDigits-orig data set [8]. This data set consists of 10 992 instances of hand-written digits captured from 44 subjects using a digital tablet and stylus, with each digit represented approximately equally frequently. Each instance is represented as a 2-dimensional stream, based on sampling the stylus position at 10 Hz.
We apply the PenDigits data to unsupervised anomaly detection by defining $\mathrm{normal}$ as the set of instances representing digit $m$. We define $\mathcal{C}$ as the subset of $\mathrm{normal}$ labelled as 'training' by the annotators. Furthermore, we define $\mathcal{Y}$ as the set of instances labelled as 'testing' by the annotators ($|\mathcal{Y}| = 3498$). Finally, we define $\mathrm{anomaly}$ as the subset of $\mathcal{Y}$ not representing digit $m$. Considering all possible digits, we obtain on average $|\mathcal{C}| = 749.4$ and $|\mathrm{anomaly}| = 3148.2$. Assuming that the digit class is invariant to translation and scaling, we apply Min-Max normalisation to each individual stream.
Table 1 displays results based on taking signatures of order $N \in [1..5]$, without any stream transformations applied. The results are based on aggregating conformance values across the set of possible digits before computing the ROC AUC. As we observe, the performance of SigMahaKNN increases monotonically from 0.870 ($N = 1$) to 0.956 ($N = 5$).

           N=1   N=2   N=3   N=4   N=5
SigMahaKNN 0.870 0.942 0.948 0.954 0.956
IF-M       -     -     -     -     0.618
IF-S       0.888 0.931 0.916 0.875 0.834
LOF-M      -     -     -     -     0.514
LOF-S      0.563 0.584 0.582 0.582 0.582

Table 1: Handwritten digits data: performance quantified using ROC AUC in response to signature order $N$. Bootstrapped standard errors based on $10^4$ samples are around 0.003.
5.2.3 Marine vessel traffic data

Next, we consider a sample of marine vessel traffic data, based on the automatic identification system (AIS), which reports a ship's geographical position alongside other vessel information. The AIS data that we consider were collected by the US Coast Guard in January 2017, with a total of 31 884 021 geographical positions recorded for 6 282 distinct vessel identifiers. We consider the stream of timestamped latitude/longitude position data associated with each vessel a representation of the vessel's path.
We prepare the marine vessel data by retaining only those data points with a valid associated vessel identifier. In addition, we discard vessels with any missing or invalid vessel length information. Next, to help constrain computation time, we compress each stream by retaining a given position only if its distance relative to the previously retained position exceeds a threshold of 10 m. Finally, to help ensure that streams are faithful representations of ship movement, we retain only those vessels whose distance between initial and final positions exceeds 5 km. To evaluate the effect of stream length on performance, we disintegrate streams so that the length $D$ between initial and final points in each sub-stream remains constant, with $D \in \{4\,\mathrm{km}, 8\,\mathrm{km}, 16\,\mathrm{km}, 32\,\mathrm{km}\}$. After disintegrating streams, we retain only those sub-streams whose maximum distance between successive points is less than 1 km.
We partition the data by deeming a sub-stream normal if it belongs to a vessel with a reported vessel length greater than 100 m. Conversely, we deem sub-streams anomalous if they belong to vessels with a reported length less than or equal to 50 m. We obtain the corpus $\mathcal{C}$ from 607 vessels, whose sub-streams total between 10 111 ($D = 32$ km) and 104 369 ($D = 4$ km); we obtain the subset of normal instances used for testing, $\mathrm{normal} \setminus \mathcal{C}$, from 607 vessels, whose sub-streams total between 11 254 ($D = 32$ km) and 114 071 ($D = 4$ km); lastly, we obtain the set of anomalous instances, $\mathrm{anomaly}$, from 997 vessels, whose sub-streams total between 8 890 ($D = 32$ km) and 123 237 ($D = 4$ km). To account for any imbalance in the number of sub-streams associated with vessels, we use for each of the aforementioned three subsets a weighted sample of 5 000 instances.
After computing sub-streams and transforming them as described, we apply Min-Max normalisation with respect to the corpus $\mathcal{C}$. To account for velocity, we incorporate the difference between successive timestamps as an additional dimension (time augmentation).
We report results based on taking signatures of order $N = 3$. For comparison, as a baseline approach, we summarise each sub-stream by estimating its component-wise mean and covariance, retaining the upper triangular part of the covariance matrix. This results in feature vectors of dimensionality $\frac{1}{2}(n^2 + 3n)$, which we provide as the input to an isolation forest [19]. We train the isolation forest using 100 trees, each fitted on 256 samples represented by a single random feature.
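The moment-feature construction can be sketched as follows (a hypothetical helper of ours, not the paper's code); for an $n$-dimensional stream it returns exactly $\frac{1}{2}(n^2 + 3n)$ numbers:

```python
import numpy as np

def moment_features(stream):
    """Component-wise mean plus the upper-triangular part of the
    covariance matrix: n mean entries + n(n+1)/2 covariance entries,
    i.e. (n**2 + 3*n) / 2 features in total."""
    X = np.asarray(stream, dtype=float)
    mean = X.mean(axis=0)
    cov = np.atleast_2d(np.cov(X, rowvar=False))
    iu = np.triu_indices(cov.shape[0])  # keep upper triangle only
    return np.concatenate([mean, cov[iu]])
```

The resulting fixed-length vectors are what the IF-M and LOF-M baselines would consume in place of signature features.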
Table 2 to Table 6 display results for our proposed approach in comparison to the baselines, for combinations of stream transformations and values of the sub-stream length $D$. Signature conformance yields higher ROC AUC scores than the baselines for the majority of parameter combinations. The maximum ROC AUC score of 0.891 is achieved by the signature conformance for the combination of lead-lag, time-difference, and invisibility-reset transformations with $D = 32$ km. Compared to the best-performing baseline parameter combination for $D = 32$ km (IF-S), this represents a performance gain of 10 percentage points.
Conformance $\mathrm{dist}(\cdot\,; \mathrm{Sig}^3(\mathcal{C}))$ by sub-stream length $D$

Lead-lag Time-Diff Inv. Reset | 4 km | 8 km | 16 km | 32 km
No No No 0.723 0.706 0.705 0.740
No No Yes 0.776 0.789 0.785 0.805
No Yes No 0.810 0.813 0.818 0.848
No Yes Yes 0.839 0.860 0.863 0.879
Yes No No 0.811 0.835 0.824 0.837
Yes No Yes 0.812 0.835 0.833 0.855
Yes Yes No 0.845 0.861 0.862 0.877
Yes Yes Yes 0.848 0.863 0.870 0.891

Table 2: SigMahaKNN on Marine vessel traffic data: performance quantified using ROC AUC for combinations of stream transformations and sub-stream length $D$. Best across all transformations are in bold.
3755
+ dist
3756
+
3757
+ (
3758
+
3759
+ ;
3760
+ Sig
3761
+ 3
3762
+
3763
+ (
3764
+ 𝒞
3765
+ )
3766
+ )
3767
+
3768
+ Sub-stream length
3769
+ 𝐷
3770
+
3771
+ Lead-lag Time-Diff Inv. Reset
3772
+ 4
3773
+ km
3774
+ 8
3775
+ km
3776
+ 16
3777
+ km
3778
+ 32
3779
+ km
3780
+ No No No 0.714 0.712 0.727 0.727
3781
+ No No Yes 0.781 0.785 0.776 0.790
3782
+ No Yes No 0.767 0.772 0.786 0.804
3783
+ No Yes Yes 0.830 0.823 0.831 0.828
3784
+ Yes No No 0.696 0.704 0.711 0.724
3785
+ Yes No Yes 0.758 0.759 0.767 0.773
3786
+ Yes Yes No 0.747 0.763 0.780 0.785
3787
+ Yes Yes Yes 0.811 0.813 0.809 0.823
3788
+ Table 3:IF-M on Marine vessel traffic data: performance quantified using ROC AUC for combinations of stream transformations and sub-stream length
3789
+ 𝐷
3790
+ . Best across all transformations are in bold.
3791
Lead-lag Time-Diff Inv. Reset | 4 km | 8 km | 16 km | 32 km
No No No 0.627 0.623 0.645 0.660
No No Yes 0.686 0.701 0.715 0.718
No Yes No 0.731 0.714 0.737 0.771
No Yes Yes 0.777 0.784 0.789 0.808
Yes No No 0.617 0.596 0.634 0.668
Yes No Yes 0.692 0.701 0.691 0.713
Yes Yes No 0.725 0.692 0.716 0.757
Yes Yes Yes 0.779 0.782 0.801 0.823

Table 4: IF-S on Marine vessel traffic data: performance quantified using ROC AUC for combinations of stream transformations and sub-stream length $D$. Best across all transformations are in bold.
Lead-lag Time-Diff Inv. Reset | 4 km | 8 km | 16 km | 32 km
No No No 0.543 0.542 0.522 0.513
No No Yes 0.555 0.585 0.564 0.535
No Yes No 0.547 0.543 0.526 0.520
No Yes Yes 0.559 0.589 0.572 0.545
Yes No No 0.543 0.541 0.522 0.513
Yes No Yes 0.566 0.556 0.542 0.533
Yes Yes No 0.547 0.543 0.526 0.520
Yes Yes Yes 0.572 0.563 0.554 0.544

Table 5: LOF-M on Marine vessel traffic data: performance quantified using ROC AUC for combinations of stream transformations and sub-stream length $D$. Best across all transformations are in bold.
Lead-lag Time-Diff Inv. Reset | 4 km | 8 km | 16 km | 32 km
No No No 0.484 0.500 0.491 0.492
No No Yes 0.565 0.572 0.562 0.520
No Yes No 0.511 0.505 0.493 0.494
No Yes Yes 0.565 0.572 0.561 0.520
Yes No No 0.484 0.500 0.491 0.493
Yes No Yes 0.564 0.569 0.558 0.518
Yes Yes No 0.530 0.533 0.553 0.600
Yes Yes Yes 0.574 0.588 0.589 0.584

Table 6: LOF-S on Marine vessel traffic data: performance quantified using ROC AUC for combinations of stream transformations and sub-stream length $D$. Best across all transformations are in bold.
5.2.4 Anomalous language detection

Next, we consider a sample of words. The corpus consists of English words, and the test set consists of words in one of six languages: English, German, French, Italian, Polish, and Swedish. Words are coded into multivariate streams by taking the one-hot encoding of the letters and computing cumulative sums. We obtained a corpus of 70,000 English words and a test set with 10,000 English words and 10,000 words from the other five languages (2,000 each). We use a signature of order $N = 2$ in this test, due to the size of the dataset, and the results are reported in Table 7.
               SigMahaKNN IF-M  IF-S  LOF-M LOF-S
AUC            0.878      0.713 0.723 0.769 0.787
Standard Error 0.002      0.004 0.004 0.003 0.003

Table 7: AUC and standard error values for the different methods on the language dataset. Standard errors are based on bootstrapping with $10^4$ samples.
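The word-to-stream encoding described above admits a compact sketch (our own simplification, assuming a fixed lowercase alphabet):

```python
def word_to_stream(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """One-hot encode each letter and take cumulative sums, so the k-th
    coordinate of the stream counts occurrences of the k-th letter so far."""
    index = {c: i for i, c in enumerate(alphabet)}
    counts = [0] * len(alphabet)
    stream = []
    for ch in word:
        counts[index[ch]] += 1
        stream.append(tuple(counts))
    return stream
```

Each word thus becomes a monotone lattice path in $\mathbb{R}^{26}$, whose signature can be fed to SigMahaKNN like any other stream.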
5.2.5 Univariate time series

For the specific case of detecting anomalous univariate time series, we benchmark our method against the ADSL shapelet method of Beggel et al. [2], using their set of 28 data sets from the UEA & UCR time series repository [1], adapted in exactly the same manner. Each data set comprises a set of time series of equal length, together with class labels. One class (the same as in ADSL) is designated as the normal class, with all other classes designated as anomalies. To prepare the data for our method, we convert each time series into a 2-dimensional stream by incorporating a uniformly increasing time dimension. We apply no other transformations to the data and take signatures of order $N = 5$.
We create training and test sets exactly as in ADSL. The training corpus $\mathcal{C}$ consists of 80% of the normal time series, contaminated by a proportion of anomalies (we compute results for anomaly rates of 0.1% and 5%). Across these data sets, $|\mathcal{C}|$ ranges from 10 (Beef) to 840 (ChlorineConcentration at 5%), $|\mathrm{normal}|$ ranges from 2 (Beef) to 200 (ChlorineConcentration), and $|\mathrm{anomaly}|$ ranges from 19 (BeetleFly and BirdChicken at 0.1%) to 6401 (Wafer at 5%). We run experiments with ten random train-test splits and take the median result. The performance measure used by ADSL is the balanced accuracy, which requires a threshold to be set for detecting anomalies. We report the best achievable balanced accuracy across all possible thresholds, and compare against the best value reported for ADSL. Figure 1 plots our results.
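The best achievable balanced accuracy across all possible thresholds can be computed by a simple sweep. This is our own sketch, assuming label 1 marks an anomaly and that higher scores are more anomalous:

```python
def best_balanced_accuracy(scores, labels):
    """Sweep every candidate threshold and return the best balanced
    accuracy, i.e. the mean of true-positive and true-negative rates."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    best = 0.0
    for t in sorted(set(scores)):
        tpr = sum(s >= t for s in pos) / len(pos)  # anomalies flagged
        tnr = sum(s < t for s in neg) / len(neg)   # normals passed
        best = max(best, (tpr + tnr) / 2)
    return best
```

Unlike AUC, this measure rewards the existence of a single threshold that separates the two classes well, matching the evaluation protocol of ADSL.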
Figure 1: Comparison of our method against ADSL [2].
Our method performs competitively with ADSL, both when the proportion of anomalies in the training corpus is low and when it is high. It is able to detect anomalies in four of the six data sets where ADSL struggles because the anomalies are less visually distinguishable (ChlorineConcentration, ECG200, Wafer, Wine). However, there are data sets where ADSL performs better (BeetleFly, BirdChicken, FaceFour, ToeSegmentation1 and ToeSegmentation2): these data sets largely originate from research into shapelet methods, and they appear to contain features that are detected well by shapelets. Applying transformations to the data sets before input may improve our method's results.
6 Conclusion

We proposed a generalisation of the Mahalanobis distance, the variance norm, and showed that, when combined with path signatures and nearest neighbours, SigMahaKNN has attractive theoretical properties such as reparametrisation invariance, concatenation invariance and graded discrimination power. We compared SigMahaKNN with isolation-forest and shapelet-based methods for detecting anomalous streams, with encouraging results.
Acknowledgement and funding disclosure

The authors are grateful for the contribution of Imanol Perez Arribas and Jonathan H. Z. S. was supported by the EPSRC [EP/S026347/1]. R. C. and P. F. were supported by The Alan Turing Institute. T. L. was funded in part by the EPSRC [EP/S026347/1], in part by The Alan Turing Institute under the EPSRC grant [EP/N510129/1], the Data Centric Engineering Programme (under the Lloyd's Register Foundation grant G0095), the Defence and Security Programme (funded by the UK Government), and in part by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA).
References

[1] A. Bagnall, J. Lines, W. Vickers, and E. Keogh. The UEA & UCR time series classification repository, 2020. www.timeseriesclassification.com, accessed May 2020.
[2] L. Beggel, B. X. Kausler, M. Schiegg, M. Pfeiffer, and B. Bischl. Time series anomaly detection based on shapelet learning. Computational Statistics, 34:945–976, 2019.
[3] S.-E. Benkabou, K. Benabdeslem, and B. Canitia. Unsupervised outlier detection for time series by entropy and dynamic time warping. Knowledge and Information Systems, 54(2):463–486, 2018.
[4] R. G. Brereton and G. R. Lloyd. Re-evaluating the role of the Mahalanobis distance measure. Journal of Chemometrics, 30(4):134–143, 2016.
[5] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander. LOF: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, SIGMOD '00, pages 93–104, New York, NY, USA, 2000. Association for Computing Machinery.
[6] I. Chevyrev and A. Kormilitzin. A primer on the signature method in machine learning. arXiv preprint arXiv:1603.03788, 2016.
[7] A. Dempster, D. F. Schmidt, and G. I. Webb. Hydra: competing convolutional kernels for fast and accurate time series classification. Data Mining and Knowledge Discovery, pages 1–27, 2023.
[8] D. Dua and C. Graff. UCI machine learning repository, 2017.
[9] A. Fermanian, T. Lyons, J. Morrill, and C. Salvi. New directions in the applications of rough path theory. IEEE BITS the Information Theory Magazine, pages 1–18, 2023.
[10] L. Gjorgiev and S. Gievska. Time series anomaly detection with variational autoencoder using Mahalanobis distance. In ICT Innovations 2020. Machine Learning and Applications: 12th International Conference, ICT Innovations 2020, Skopje, North Macedonia, September 24–26, 2020, Proceedings 12, pages 42–55. Springer, 2020.
[11] B. Hambly and T. Lyons. Uniqueness for the signature of a path of bounded variation and the reduced path group. Annals of Mathematics, pages 109–167, 2010.
[12] G. W. Howell and M. Baboulin. Iterative solution of sparse linear least squares using LU factorization. In Proceedings of the International Conference on High Performance Computing in Asia-Pacific Region, HPC Asia 2018, pages 47–53, New York, NY, USA, 2018. Association for Computing Machinery.
[13] R. J. Hyndman, E. Wang, and N. Laptev. Large-scale unusual time series detection. In 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pages 1616–1619. IEEE, 2015.
[14] X. Jin, Y. Wang, T. W. Chow, and Y. Sun. MD-based approaches for system health monitoring: a review. IET Science, Measurement & Technology, 11(4):371–379, 2017.
[15] R. Kamoi and K. Kobayashi. Why is the Mahalanobis distance effective for anomaly detection? arXiv preprint arXiv:2003.00402, 2020.
[16] D. Li, D. Chen, B. Jin, L. Shi, J. Goh, and S.-K. Ng. MAD-GAN: multivariate anomaly detection for time series data with generative adversarial networks. In I. V. Tetko, V. Kůrková, P. Karpov, and F. Theis, editors, Artificial Neural Networks and Machine Learning – ICANN 2019: Text and Time Series, pages 703–716, Cham, 2019. Springer International Publishing.
[17] G. Li and J. J. Jung. Dynamic relationship identification for abnormality detection on financial time series. Pattern Recognition Letters, 145:194–199, 2021.
[18] R. Lin, E. Khalastchi, and G. A. Kaminka. Detecting anomalies in unmanned vehicles using the Mahalanobis distance. In 2010 IEEE International Conference on Robotics and Automation, pages 3038–3044, 2010.
[19] F. T. Liu, K. M. Ting, and Z.-H. Zhou. Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining, pages 413–422, 2008.
[20] J.-H. Liu, N. T. Corbita, R.-M. Lee, and C.-C. Wang. Wind turbine anomaly detection using Mahalanobis distance and SCADA alarm data. Applied Sciences, 12(17), 2022.
[21] T. Lyons. Rough paths, signatures and the modelling of functions on streams. In Proceedings of the International Congress of Mathematicians—Seoul 2014, Vol. IV, pages 163–184. Kyung Moon Sa, Seoul, 2014.
[22] T. J. Lyons, M. Caruana, and T. Lévy. Differential equations driven by rough paths. Springer, 2007.
[23] P. C. Mahalanobis. On the generalized distance in statistics. Sankhyā: The Indian Journal of Statistics, Series A, 80:S1–S7, 1936.
[24] M. Middlehurst, J. Large, M. Flynn, J. Lines, A. Bostrom, and A. Bagnall. HIVE-COTE 2.0: a new meta ensemble for time series classification. Machine Learning, 110(11-12):3211–3243, 2021.
[25] M. Middlehurst, P. Schäfer, and A. Bagnall. Bake off redux: a review and experimental evaluation of recent time series classification algorithms. arXiv preprint arXiv:2304.13029, 2023.
[26] J. Morrill, A. Fermanian, P. Kidger, and T. Lyons. A generalised signature method for multivariate time series feature extraction. arXiv preprint arXiv:2006.00873, 2020.
[27] J. Pang, D. Liu, Y. Peng, and X. Peng. Temporal dependence Mahalanobis distance for anomaly detection in multivariate spacecraft telemetry series. ISA Transactions, 2023.
[28] N. Patil, D. Das, and M. Pecht. Anomaly detection for IGBTs using Mahalanobis distance. Microelectronics Reliability, 55(7):1054–1059, 2015.
[29] T. Pham and S. Lee. Anomaly detection in bitcoin network using unsupervised learning methods, 2017.
[30] G. Vareldzhan, K. Yurkov, and K. Ushenin. Anomaly detection in image datasets using convolutional neural networks, center loss, and Mahalanobis distance. In 2021 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT), pages 0387–0390, 2021.
[31] L. Ye and E. Keogh. Time series shapelets: a new primitive for data mining. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 947–956, 2009.
[32] X. Zhang, Z. Zhao, T. Tsiligkaridis, and M. Zitnik. Self-supervised contrastive pre-training for time series via time-frequency consistency. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 3988–4003. Curran Associates, Inc., 2022.
Appendix A Plots of conformance distances for PenDigits data set

Here we provide the full distribution of conformance distances for the PenDigits dataset.

Figure 2: Empirical cumulative distributions of the conformance distance $\mathrm{dist}(\cdot\,; \mathrm{Sig}^N(\mathcal{C}))$, obtained for normal and anomalous testing data and based on computing signatures of order $N$. Panels (a) to (e) correspond to $N = 1, \dots, 5$.
Appendix B Table of Results for Univariate Time Series Data

Here we provide complete AUC results for our univariate time series experiment.

B.1 Table of Results for 0.1% Anomaly Rate

Dataset SigMahaKNN ADSL
Adiac 1.00 (0.00) 0.99 (0.10)
ArrowHead 0.812 (0.071) 0.65 (0.03)
Beef 0.979 (0.184) 0.57 (0.15)
BeetleFly 0.90 (0.063) 0.90 (0.08)
BirdChicken 0.925 (0.106) 0.85 (0.15)
CBF 0.968 (0.016) 0.8 (0.04)
ChlorineConcentration 0.900 (0.008) 0.5 (0.0)
Coffee 0.713 (0.109) 0.84 (0.04)
ECG200 0.803 (0.068) 0.5 (0.03)
ECGFiveDays 0.955 (0.015) 0.94 (0.11)
FaceFour 0.732 (0.062) 0.94 (0.10)
GunPoint 0.865 (0.061) 0.75 (0.03)
Ham 0.521 (0.035) 0.5 (0.02)
Herring 0.527 (0.062) 0.52 (0.02)
Lightning2 0.735 (0.076) 0.63 (0.07)
Lightning7 0.944 (0.091) 0.73 (0.11)
Meat 0.988 (0.049) 1.00 (0.04)
MedicalImages 0.970 (0.039) 0.9 (0.03)
MoteStrain 0.891 (0.012) 0.74 (0.01)
Plane 1.00 (0.00) 1.00 (0.04)
Strawberry 0.899 (0.008) 0.77 (0.03)
Symbols 1.00 (0.007) 0.96 (0.02)
ToeSegmentation1 0.749 (0.039) 0.95 (0.01)
ToeSegmentation2 0.789 (0.052) 0.88 (0.02)
Trace 1.00 (0.052) 1.00 (0.04)
TwoLeadECG 0.904 (0.015) 0.89 (0.01)
Wafer 0.964 (0.012) 0.56 (0.02)
Wine 0.835 (0.094) 0.53 (0.02)

Table 8: Comparison of SigMahaKNN and ADSL performances at 0.1% anomaly rate.
B.2 Table of Results for 5% Anomaly Rate

Dataset SigMahaKNN ADSL
Adiac 0.998 (0.133) 0.99 (0.10)
ArrowHead 0.766 (0.081) 0.65 (0.03)
Beef 0.979 (0.184) 0.57 (0.15)
BeetleFly 0.816 (0.202) 0.9 (0.08)
BirdChicken 0.842 (0.106) 0.85 (0.15)
CBF 0.858 (0.034) 0.8 (0.04)
ChlorineConcentration 0.878 (0.013) 0.5 (0.0)
Coffee 0.679 (0.130) 0.84 (0.04)
ECG200 0.752 (0.074) 0.5 (0.03)
ECGFiveDays 0.808 (0.018) 0.94 (0.11)
FaceFour 0.772 (0.087) 0.94 (0.10)
GunPoint 0.808 (0.070) 0.75 (0.03)
Ham 0.526 (0.032) 0.5 (0.02)
Herring 0.527 (0.065) 0.52 (0.02)
Lightning2 0.741 (0.065) 0.63 (0.07)
Lightning7 0.837 (0.091) 0.73 (0.11)
Meat 0.897 (0.073) 1.00 (0.04)
MedicalImages 0.964 (0.027) 0.9 (0.03)
MoteStrain 0.853 (0.020) 0.74 (0.01)
Plane 1.00 (0.036) 1.00 (0.04)
Strawberry 0.876 (0.026) 0.77 (0.03)
Symbols 1.00 (0.021) 0.96 (0.02)
ToeSegmentation1 0.744 (0.037) 0.95 (0.01)
ToeSegmentation2 0.785 (0.055) 0.88 (0.02)
Trace 1.00 (0.045) 1.00 (0.04)
TwoLeadECG 0.817 (0.021) 0.89 (0.01)
Wafer 0.820 (0.029) 0.56 (0.02)
Wine 0.784 (0.101) 0.53 (0.02)

Table 9: Comparison of SigMahaKNN and ADSL performances at 5% anomaly rate.