Title: Fair and Efficient Contribution Valuation for Vertical Federated Learning

URL Source: https://arxiv.org/html/2201.02658

Markdown Content:
License: CC BY-NC-ND 4.0

arXiv:2201.02658v2 [cs.LG] 21 Aug 2025

Fair and Efficient Contribution Valuation for Vertical Federated Learning

Zhenan Fan†∗, Huang Fang†, Xinglu Wang‡∗, Zirui Zhou∗, Jian Pei‡, Michael P. Friedlander†, Yong Zhang∗

†University of British Columbia, {zhenanf, hgfang, mpf}@cs.ubc.ca
‡Simon Fraser University, {xinglu_wang_2, jpei}@cs.sfu.ca
∗Huawei Technologies Canada, {zirui.zhou, yong.zhang3}@huawei.com
Abstract

Federated learning is an emerging technology for training machine learning models across decentralized data sources without sharing data. Vertical federated learning, also known as feature-based federated learning, applies to scenarios where data sources have the same sample IDs but different feature sets. To ensure fairness among data owners, it is critical to objectively assess the contributions from different data sources and compensate the corresponding data owners accordingly. The Shapley value is a provably fair contribution valuation metric originating from cooperative game theory. However, its straightforward computation requires extensively retraining a model on each potential combination of data sources, leading to prohibitively high communication and computation overheads due to multiple rounds of federated learning. To tackle this challenge, we propose a contribution valuation metric called vertical federated Shapley value (VerFedSV) based on the classic Shapley value. We show that VerFedSV not only satisfies many desirable properties of fairness but is also efficient to compute. Moreover, VerFedSV can be adapted to both synchronous and asynchronous vertical federated learning algorithms. Both theoretical analysis and extensive experimental results demonstrate the fairness, efficiency, adaptability, and effectiveness of VerFedSV.

1 Introduction

Creating powerful and robust machine learning models requires collecting enormous amounts of training data. However, in many industrial scenarios, training data is siloed across multiple companies, and data sharing is often impossible due to regulatory limits on data protection (Li et al., 2020). Federated learning (FL for short) is an emerging machine learning framework in which a central server and multiple data owners, i.e., clients, collaboratively train a machine learning model without sharing their data (McMahan et al., 2017; Yang et al., 2019; Kairouz et al., 2019).

FL can further be classified into two main categories: horizontal federated learning (HFL) and vertical federated learning (VFL) (Yang et al., 2019). In HFL, the clients' data share the same feature space but have separate sample IDs. In VFL, by contrast, the data owned by different clients share identical sample IDs but have distinct features.

The success of FL depends on the active participation of motivated clients. It is therefore vital to fairly evaluate the contributions from different clients, as fair cooperation and rewards motivate active participation.

The Shapley value (Shapley, 1952) is a classical metric derived from cooperative game theory that is used to appropriately assess players' contributions. The Shapley value of a player is defined as the expected marginal contribution of the player over all possible subsets of the other players. The Shapley value is the only metric that satisfies all four requirements of Shapley's fairness criteria: balance, symmetry, zero element, and additivity (Dubey, 1975) (see Section B.1). Although the Shapley value has many desirable characteristics, its evaluation in the FL context requires repeatedly training and evaluating models learned on all possible subsets of clients. The corresponding communication and computation costs are exponential and often prohibitive in practice (Song et al., 2019; Wang et al., 2019; Fan et al., 2022).
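To make the combinatorial cost concrete, here is a minimal sketch of the exact Shapley value computation. The `utility` function is a hypothetical stand-in for retraining and evaluating a model on a subset of players; the toy additive game below is illustrative only and not from the paper.

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, utility):
    """Exact Shapley value: the weighted average marginal contribution of
    each player over all subsets of the remaining players."""
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (utility(set(subset) | {p}) - utility(set(subset)))
        values[p] = total
    return values

# Toy additive game: each client's utility contribution is its own weight,
# so the Shapley value of each client recovers exactly that weight.
weights = {"A": 3.0, "B": 1.0, "C": 2.0}
shapley = exact_shapley(list(weights), lambda S: sum(weights[c] for c in S))
```

In FL, evaluating `utility(S)` would mean retraining and testing a model on the clients in `S`, and the inner loops visit every one of the exponentially many subsets — exactly the overhead the text describes.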

Variants of the Shapley value make equitable data-owner contribution assessment feasible in FL. Taking its application in HFL as an example, Wang et al. (2020) proposed a contribution valuation metric called the federated Shapley value (FedSV). The key idea is to calculate a Shapley value for each client in every iteration of training and then aggregate these values over all iterations to determine the client's final contribution. Its computation does not require model retraining and retains some, but not all, of Shapley's fairness criteria. Fan et al. (2022) enhanced the fairness of this method by considering all data owners, including those not selected in certain training iterations; this improvement is accomplished through low-rank matrix factorization techniques. While Fan et al. (2022) focus on HFL, this paper serves as an extension towards VFL and completes the picture of contribution valuation in FL.

Compared to HFL, adapting the Shapley value to VFL faces an additional challenge because of the stronger model dependency in the vertical context. More precisely, the Shapley value computation requires us to form the model produced by every subset of clients. This requirement is easy to satisfy in HFL because the global model is defined as the additive aggregation of the local models, and thus we only need to aggregate local models from different subsets of clients (Wang et al., 2020). In VFL, however, the global model is the concatenation of local models that are not shared with the server. Thus, simply concatenating local models from different subsets of clients is not applicable.

In this paper, we concentrate on developing efficient and equitable methods for evaluating clients' contributions in VFL. We propose a contribution valuation metric called the vertical federated Shapley value (VerFedSV), where the clients' contributions are computed at multiple time stamps during the training process. We resolve the model concatenation problem by carefully utilizing clients' embeddings at different time stamps. We demonstrate that our design retains many desirable fairness properties and can be efficiently implemented without model retraining.

VFL algorithms can be divided into two categories: synchronous methods (Gong et al., 2016; Zhang et al., 2018; Liu et al., 2019), where periodic synchronizations among clients are required, and asynchronous ones (Hu et al., 2019; Gu et al., 2020; Chen et al., 2020), where clients are allowed to conduct local computation asynchronously. We show that VerFedSV applies to both synchronous and asynchronous VFL settings. Under the synchronous setting, we show that VerFedSV can be computed by leveraging matrix-completion techniques. Although there are many similarities between synchronous and asynchronous VFL, contribution valuation in an asynchronous VFL setting is more complicated because the contribution of a client depends not only on the relevance of the client's data to the training task but also on the client's local computational resources. We show that the design of VerFedSV can reflect the strength of clients' local computational resources under the asynchronous setting. To the best of our knowledge, we are the first to consider contributions with respect to local computational resources.

Our contributions can be summarized as follows.

- We propose the vertical federated Shapley value (VerFedSV) for VFL (Equation 1 and Definition 2), and show that VerFedSV satisfies many desirable properties of fairness (Theorem 1).

- Under the synchronous VFL setting, we show that VerFedSV can be computed by solving low-rank matrix completion problems for embedding matrices, which are proven to be approximately low-rank (Proposition 1). We also give an approximation guarantee for VerFedSV given the tolerance for matrix completion (Proposition 2).

- Under the asynchronous VFL setting, we show that VerFedSV can be directly computed and can reflect the strength of clients' local computational resources (Proposition 3).

- VerFedSV does not incur extra communication cost. Meanwhile, its computational cost can be further reduced by applying Monte Carlo sampling methods (Section D.1).

2 Related work

The Shapley value (Shapley, 1952) has broad applications (Gul, 1989). Dubey (1975) showed that the Shapley value is the unique measure satisfying the four fundamental requirements of fairness proposed by Shapley (1952). With a rich history and broad applications, it has been extended to contribution valuation in machine learning (ML) contexts (Ghorbani & Zou, 2019; Jia et al., 2019; Kwon et al., 2021; Sim et al., 2022). For example, Beta Shapley (Kwon & Zou, 2022) is a data valuation method tailored for ML applications, where the efficiency axiom can be relaxed and marginal contributions are affected by noisy data. The Banzhaf value (Wang & Jia, 2023) is another data valuation method that similarly relaxes the efficiency axiom; it introduces the concept of a safety margin and addresses the randomness introduced by stochastic gradient descent. As privacy is an important aspect of ML, Wang et al. (2023) identify privacy leaks through the KNN-Shapley value score and propose DP-TKNN-Shapley, which offers a superior privacy-utility tradeoff. In the following, our discussion focuses on its application in FL.

The concept of the Shapley value was introduced into HFL to evaluate the contribution of federated participants (Song et al., 2019). The straightforward computation of the Shapley value requires retraining the model an exponential number of times. To address this challenge, various methods have been proposed. Song et al. (2019) presented two gradient-based techniques to approximate the Shapley value. Wang et al. (2020) introduced the federated Shapley value, which can be determined from local model updates in each training iteration; it does not need model retraining and preserves some, but not all, of the favorable qualities of the traditional Shapley value. Fan et al. (2022) enhanced the fairness of this method by considering all data owners, including those not selected in certain training iterations, by leveraging low-rank matrix factorization techniques. Liu et al. (2022) proposed the Guided Truncation Gradient Shapley (GTG-Shapley) approach, which reconstructs FL models from gradient updates for Shapley value calculations rather than retraining with varying combinations of FL participants, thereby enhancing both computational efficiency and approximation accuracy. The Cosine Gradient Shapley Value (CGSV) (Xu et al., 2021) also avoids model retraining and validation by exploiting information available during training, namely the alignment between the model updates uploaded by an individual agent and the aggregated model derived from all participating agents. Meanwhile, Xu et al. (2021) propose an incentive mechanism that rewards agents with higher contributions through a higher-quality model; however, this realization of incentives may lead to non-convergent behavior in the model. To address this, Lin et al. (2023) further explore the trade-off between asymptotic performance and fairness. Our method shares a similar idea in the sense of leveraging training-time information, but differs in that we cater to synchronous and asynchronous VFL scenarios. Beyond the computational challenge, Yang et al. (2022) find that the Shapley value considers only the marginal contribution to model performance and propose to enhance it with other influencing factors, such as data quality, level of cooperation, and risk, to assess contributions. Apart from its use in contribution valuation, the Shapley value has also inspired an adaptive weighting mechanism (Sun et al., 2023) that enhances the robustness of FL.

Wang et al. (2019) and more recently Han et al. (2021) extended the notion of the data Shapley value to VFL. As discussed before, the need to retrain the model for different subsets of clients is a bottleneck of Shapley value computation in federated learning. Wang et al. (2019) and Han et al. (2021) solved this problem by introducing model-independent utility functions, where the contribution of a client does not depend on the performance of the final model and thus does not require retraining. In particular, Wang et al. (2019) suggested using the situational importance (SI) (Achen, 1982), which computes the difference between the embeddings with true features and expected features. However, computing SI requires knowing the expectation of each feature, which can be impractical under VFL. Han et al. (2021) suggested using the conditional mutual information (CMI) (Brown et al., 2012), which measures the tightness between the label and the features. However, computing CMI requires every client to access the labels, which may also be impractical under VFL and can leak privacy. Beyond these shortcomings, a model-independent utility function itself may cause fairness issues when VFL is conducted asynchronously: under the asynchronous setting, the contribution of a client is related not only to the quality of the local dataset but also to the power of local computational resources, which model-independent utility functions cannot fully reflect. (See Proposition 3 and Section 7.2 for details.) In this work, we instead use a model-dependent utility function and resolve the requirement of retraining the model by periodically evaluating the contribution metric during the training process.

3 Vertical federated learning

In this section, we review the standard VFL framework. In this framework, $M$ clients and one server collaborate to train a machine learning model on $N$ data samples $\{(x_i \in \mathbb{R}^d,\, y_i \in \{\pm 1\})\}_{i=1}^{N}$, where $x_i$ is the $i$-th feature vector and $y_i$ is its associated label. We assume that the index $i$ is globally shared between the server and the $M$ clients. This assumption is reasonable, as data alignment methods in VFL protocols (Cheng et al., 2019) are routinely employed to ensure synchronization across clients. This method ensures that only samples shared by all clients are retained for VFL, each identified by a unique sample ID. For simplicity, we posit that these sample IDs are the same as the global sample indices $[N] = \{1, \dots, N\}$. Here, the notation $[\cdot]$ means the set of consecutive integers.

Under the VFL framework, every feature vector $x_i$ is distributed across the $M$ clients. Represent the portion of the feature vector held by client $m$ as $x_i^{(m)} \in \mathbb{R}^{d^{(m)}}$, where $d^{(m)}$ specifies the feature dimension associated with client $m$. In other words, if we collect the feature vectors from all clients and concatenate them, $x_i$ is reconstructed, i.e., $x_i = [x_i^{(1)}, \dots, x_i^{(M)}]$ and $d = \sum_{m=1}^{M} d^{(m)}$. Here, $[\cdot, \dots, \cdot]$ is the concatenation operation.
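Concretely, this column-wise split can be sketched as follows; the client count and per-client feature dimensions below are made up for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dims = 6, [2, 3, 1]                    # hypothetical: M = 3 clients holding 2, 3, and 1 features
X = rng.standard_normal((N, sum(dims)))   # the full data matrix (never materialized in real VFL)

# Client m holds only the column block X[:, offsets[m] : offsets[m + 1]].
offsets = np.concatenate(([0], np.cumsum(dims)))
client_blocks = [X[:, offsets[m]:offsets[m + 1]] for m in range(len(dims))]

# Concatenating the per-client pieces of sample i recovers the full x_i.
i = 4
x_i = np.concatenate([blk[i] for blk in client_blocks])
assert np.array_equal(x_i, X[i])
```

The assertion mirrors the identity $x_i = [x_i^{(1)}, \dots, x_i^{(M)}]$: the pieces share a sample index but never leave their owners in an actual deployment.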
213
+ The local data set for client
214
+ π‘š
215
+ is
216
+ π’Ÿ
217
+ (
218
+ π‘š
219
+ )
220
+ =
221
+ {
222
+ π‘₯
223
+ 𝑖
224
+ (
225
+ π‘š
226
+ )
227
+ :
228
+ 𝑖
229
+ ∈
230
+ [
231
+ 𝑁
232
+ ]
233
+ }
234
+ . The server maintains all the labels
235
+ π’Ÿ
236
+ 𝑠
237
+ =
238
+ {
239
+ 𝑦
240
+ 𝑖
241
+ :
242
+ 𝑖
243
+ ∈
244
+ [
245
+ 𝑁
246
+ ]
247
+ }
248
+ . The collaborative training problem can be formulated as
249
+ min
250
+ πœƒ
251
+ 1
252
+ ,
253
+ …
254
+ ,
255
+ πœƒ
256
+ 𝑀
257
+ ⁑
258
+ 1
259
+ 𝑁
260
+ ​
261
+ βˆ‘
262
+ 𝑖
263
+ =
264
+ 1
265
+ 𝑁
266
+ β„“
267
+ ​
268
+ (
269
+ πœƒ
270
+ 1
271
+ ,
272
+ …
273
+ ,
274
+ πœƒ
275
+ 𝑀
276
+ ;
277
+ {
278
+ π‘₯
279
+ 𝑖
280
+ ,
281
+ 𝑦
282
+ 𝑖
283
+ }
284
+ )
285
+ ,
286
+ where
287
+ πœƒ
288
+ π‘š
289
+ ∈
290
+ ℝ
291
+ 𝑑
292
+ (
293
+ π‘š
294
+ )
295
+ denotes the training parameters of the
296
+ π‘š
297
+ -th client and
298
+ β„“
299
+ ​
300
+ (
301
+ β‹…
302
+ )
303
+ denotes the loss function. For a wide range of models, such as linear and logistic regression, and support vector machines, the loss function has the form
304
+ β„“
305
+ ​
306
+ (
307
+ πœƒ
308
+ 1
309
+ ,
310
+ …
311
+ ,
312
+ πœƒ
313
+ 𝑀
314
+ ;
315
+ {
316
+ π‘₯
317
+ 𝑖
318
+ ,
319
+ 𝑦
320
+ 𝑖
321
+ }
322
+ )
323
+ :=
324
+ 𝑓
325
+ ​
326
+ (
327
+ β„Ž
328
+ 𝑖
329
+ ;
330
+ 𝑦
331
+ 𝑖
332
+ )
333
+ , where
334
+ β„Ž
335
+ 𝑖
336
+ =
337
+ βˆ‘
338
+ π‘š
339
+ =
340
+ 1
341
+ 𝑀
342
+ β„Ž
343
+ 𝑖
344
+ (
345
+ π‘š
346
+ )
347
+ ,
348
+ β„Ž
349
+ 𝑖
350
+ (
351
+ π‘š
352
+ )
353
+ =
354
+ ⟨
355
+ πœƒ
356
+ π‘š
357
+ ,
358
+ π‘₯
359
+ 𝑖
360
+ (
361
+ π‘š
362
+ )
363
+ ⟩
364
+ ,
365
+ and
366
+ 𝑓
367
+ ​
368
+ (
369
+ β‹…
370
+ ;
371
+ 𝑦
372
+ )
373
+ is a differentiable function for any
374
+ 𝑦
375
+ . For each client
376
+ π‘š
377
+ , the term
378
+ β„Ž
379
+ 𝑖
380
+ (
381
+ π‘š
382
+ )
383
+ can be viewed as the client’s embedding of the local data point
384
+ π‘₯
385
+ 𝑖
386
+ (
387
+ π‘š
388
+ )
389
+ and the local model
390
+ πœƒ
391
+ π‘š
392
+ . To preserve privacy, clients are not allowed to share their local data set
393
+ π’Ÿ
394
+ (
395
+ π‘š
396
+ )
397
+ or local model
398
+ πœƒ
399
+ (
400
+ π‘š
401
+ )
402
+ with other clients or with the server. Instead, clients can share only their local embeddings
403
+ {
404
+ β„Ž
405
+ 𝑖
406
+ (
407
+ π‘š
408
+ )
409
+ :
410
+ 𝑖
411
+ ∈
412
+ [
413
+ 𝑁
414
+ ]
415
+ }
416
+ to the server for training.
For simplicity, here we consider a binary classification problem. All of our theoretical results can be easily extended to a multi-class classification problem with $l > 2$ classes, where $\theta_m \in \mathbb{R}^{d^{(m)} \times l}$ and $h_i^{(m)} = \theta_m^\intercal x_i^{(m)} \in \mathbb{R}^{l}$. Moreover, in practice, our method also works for more general local embedding functions, i.e., $h_i^{(m)} = g(\theta_m; x_i^{(m)})$, where $g$ can be a neural network with weights $\theta_m$. We empirically verify this in Section 7.
Asynchronous VFL is favorable when the strength of computational resources varies greatly across clients. This strength concerns two aspects: 1) the batch size of embeddings that a client can process, influenced by its hardware (e.g., CPU/GPU) capability; 2) the upload speed, influenced by its transmission power. It is reasonable to assume (Dinh et al., 2020) that download times are negligible compared to upload times, given that downlinks tend to have higher bandwidths and the server's transmission power far exceeds that of the clients.
Formally, we define the strength of clients' local computational resources, reflecting the two factors above, as follows.
Definition 1.

Let $\Delta t > 0$ be the unit time for one iteration, including the time for communication. The strength of client $m$'s local computational resource is represented by a positive integer $\tau_m \in [1, N]$, such that client $m$ can compute and upload at most $\tau_m$ local embeddings during each time interval $\Delta t$.
4 Vertical federated Shapley value
In the following, we briefly introduce why the Shapley value is a fair valuation metric. The Shapley value is built upon a black-box utility function. Under the FL setting, this utility function outputs the performance of the collaboratively trained model. Since each evaluation of the utility involves retraining a model, we denote this utility function as $U_{\text{retrain}} : 2^{[M]} \to \mathbb{R}$. It provides a utility score for the model trained by any client subset $S \subseteq [M]$. Given the utility function $U_{\text{retrain}}$, the Shapley value is said to be fair because it satisfies four fundamental requirements: symmetry, zero element, additivity, and balance (see Section B.1 for details). It has been shown that the only valuation metric satisfying these four requirements is the Shapley value (Dubey, 1975; Ghorbani & Zou, 2019). Formally, the Shapley value assigned to client $m$, denoted by $s_m$, is computed by

$$s_m = \frac{1}{M} \sum_{S \subseteq [M] \setminus \{m\}} \binom{M-1}{|S|}^{-1} \left[ U_{\text{retrain}}(S \cup \{m\}) - U_{\text{retrain}}(S) \right].$$
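As a concrete illustration of this formula, the following standalone sketch (not from the paper; the additive utility and client weights are invented for the example) enumerates all subsets to compute exact Shapley values for a black-box utility functionβ€”exactly the exponential enumeration that makes the direct formula expensive for large $M$:

```python
from itertools import combinations
from math import comb

def shapley_values(M, utility):
    """Exact Shapley values for M clients via full subset enumeration.

    `utility` maps a frozenset of client indices to a real-valued score,
    mirroring the black-box utility U_retrain : 2^[M] -> R above."""
    values = []
    for m in range(M):
        others = [c for c in range(M) if c != m]
        s_m = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                S = frozenset(subset)
                marginal = utility(S | {m}) - utility(S)
                s_m += marginal / comb(M - 1, k)
        values.append(s_m / M)
    return values

# Toy additive utility: each client contributes a fixed weight, so the
# Shapley value of each client equals its own weight.
weights = [1.0, 2.0, 3.0]
U = lambda S: sum(weights[c] for c in S)
print(shapley_values(3, U))  # [1.0, 2.0, 3.0]
```

For an additive game like this toy one, each client's Shapley value reduces to its own marginal weight, which makes the output easy to verify by hand.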
However, computing $s_m$ is impractical in the FL context because the evaluation of the utility function $U_{\text{retrain}}$ requires extensive model retraining (Ghorbani & Zou, 2019; Wang et al., 2020). To make it computationally practical in the FL context, as discussed before, Wang et al. (2020) recently proposed the federated Shapley value (FedSV) for HFL, which computes the Shapley values for clients periodically during training and then reports the summation over all periods as the final result. We extend this idea to the VFL context and define a new utility function. Suppose we pre-determine $T$ time stamps for contribution valuation. Denote by $[T] = \{1, \ldots, T\}$ the set of all time stamps. At each time $t \in [T]$, we define the utility function $U_t : 2^{[M]} \to \mathbb{R}$ such that for any subset of clients $S \subseteq [M]$, the utility $U_t(S)$ denotes the decrease in loss made by the clients in $S$ during the time period $[t-1, t]$, i.e.,
$$U_t(S) = \frac{1}{N} \sum_{i=1}^{N} f\Big(\sum_{m=1}^{M} (h_i^{(m)})^{(t-1)}; y_i\Big) - \frac{1}{N} \sum_{i=1}^{N} f\Big(\sum_{m \in S} (h_i^{(m)})^{(t)} + \sum_{m \notin S} (h_i^{(m)})^{(t-1)}; y_i\Big), \tag{1}$$
where $(h_i^{(m)})^{(t)}$ is the local embedding of data point $x_i^{(m)}$ by client $m$ at time $t$. The idea behind this definition is that if client $m$ participates in the training during the period $[t-1, t]$, then we use the client's latest embedding $(h_i^{(m)})^{(t)}$ for evaluation; otherwise, we use the previous embedding $(h_i^{(m)})^{(t-1)}$. Then, we formally define the vertical federated Shapley value (VerFedSV).
Definition 2.

Given $T$ predetermined contribution valuation time stamps, the VerFedSV for any client $m \in [M]$ is

$$s_m = \frac{1}{MT} \sum_{t=1}^{T} \sum_{S \subseteq [M] \setminus \{m\}} \binom{M-1}{|S|}^{-1} \left[ U_t(S \cup \{m\}) - U_t(S) \right],$$

where

$$U_t(S) = \frac{1}{N} \sum_{i=1}^{N} f\Big(\sum_{m=1}^{M} (h_i^{(m)})^{(t-1)}; y_i\Big) - \frac{1}{N} \sum_{i=1}^{N} f\Big(\sum_{m \in S} (h_i^{(m)})^{(t)} + \sum_{m \notin S} (h_i^{(m)})^{(t-1)}; y_i\Big).$$
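To make the round utility concrete, here is a small self-contained sketch of Equation 1 under simplifying assumptions (scalar embeddings stored as $M \times N$ arrays, and a squared loss standing in for $f$; the random data are illustrative only):

```python
import numpy as np

def round_utility(S, h_prev, h_curr, y, f):
    """U_t(S) as in Equation (1): clients in S contribute their time-t
    embeddings, while all other clients keep their time-(t-1) embeddings.
    h_prev, h_curr: arrays of shape (M, N) holding scalar embeddings."""
    M, N = h_prev.shape
    base = h_prev.sum(axis=0)  # sum over all clients at time t-1
    mixed = sum(h_curr[m] if m in S else h_prev[m] for m in range(M))
    loss_before = np.mean([f(base[i], y[i]) for i in range(N)])
    loss_after = np.mean([f(mixed[i], y[i]) for i in range(N)])
    return loss_before - loss_after

# Illustrative data with a squared loss f(h; y) = (h - y)^2.
rng = np.random.default_rng(0)
M, N = 3, 5
h_prev = rng.normal(size=(M, N))
h_curr = h_prev + 0.1
y = rng.normal(size=N)
f = lambda h, yi: (h - yi) ** 2
print(round_utility({0, 2}, h_prev, h_curr, y, f))
# The empty coalition changes nothing, so its utility is exactly zero:
print(round_utility(set(), h_prev, h_curr, y, f))  # 0.0
```

Note that $U_t(\varnothing) = 0$ by construction, which is the "zero element" behavior the fairness requirements rely on.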
Note that, owing to Equation 1, computing VerFedSV does not require retraining the model. Next, Theorem 1 justifies the fairness of VerFedSV. Please refer to Section B.1 for the proof.
Theorem 1 (Fairness Guarantee).

Define $U : 2^{[M]} \to \mathbb{R}$ as the averaged utility function spanning the entire training process, formulated as $U(S) = \frac{1}{T} \sum_{t=1}^{T} U_t(S)$. The VerFedSV (Definition 2) satisfies the four fairness requirements (see Section B.1 for details).
5 Computation of VerFedSV under synchronous setting
Almost all synchronous VFL algorithms (Gong et al., 2016; Zhang et al., 2018; Liu et al., 2019) share the same framework: in each iteration, every client uploads local embeddings for the same batch of data points to the server, downloads gradient information from the server, and then conducts local updates. The main difference among those algorithms is the scheme of local updates. For clarity, we consider the vanilla stochastic gradient descent algorithm for VFL, i.e., FedSGD (Liu et al., 2019). It is worth mentioning that the VerFedSV computation relies solely on the clients' embeddings and is independent of the local update rule of the underlying VFL algorithm. Consequently, VerFedSV can be easily extended to other synchronous VFL algorithms.
Refer to Section A.1 for details on one training iteration in FedSGD. Here, we describe how the batch size is determined. The time interval between training iterations is first negotiated and fixed. Then, Definition 1 gives the maximum number of local embeddings $\tau_m$ that can be processed by client $m$ within a single iteration. The server collects this information from the clients and calculates $\tau := \min\{\tau_m \mid m \in [M]\}$. The batch size is set to be no larger than $\tau$. In the following, we denote the mini-batch of the $t$-th iteration as $B^{(t)}$, and the embedding of the $i$-th sample for the $m$-th client as $(h_i^{(m)})^{(t)}$.
Now we show how to compute VerFedSV with the FedSGD algorithm. Under the synchronous setting, we set the time stamps for contribution valuation to be the ends of training iterations. The key challenge in computing VerFedSV is that, in each iteration, we only have access to the local embeddings for the current mini-batch $B^{(t)}$, i.e., $H^{(t)} = \{(h_i^{(m)})^{(t)} : m \in [M], i \in B^{(t)}\}$. However, computing VerFedSV (Definition 2) requires all the local embeddings in each iteration, i.e., $\hat{H}^{(t)} = \{(h_i^{(m)})^{(t)} : m \in [M], i \in [N]\}$. Therefore, we want to obtain reasonable approximations for the missing local embeddings.
Embedding matrix  For each client $m$, we define the embedding matrix $\mathcal{H}^{(m)} \in \mathbb{R}^{T \times N}$, where each element $(t, i)$ is defined as $\mathcal{H}^{(m)}_{t,i} = (h_i^{(m)})^{(t)}$. Due to the mini-batch setting, we can only make partial observations $\{\mathcal{H}^{(m)}_{t,i} : t \in [T], i \in B^{(t)}\}$. We notice that the embedding matrices can be decomposed as $\mathcal{H}^{(m)} = \Theta_m X_m$, where $\Theta_m = [\theta_m^{(1)} \cdots \theta_m^{(T)}]^\intercal$ and $X_m = [x_1^{(m)} \ldots x_N^{(m)}]$. Note that when $d^{(m)} < \min\{T, N\}$, the embedding matrix $\mathcal{H}^{(m)}$ is low-rank because $\operatorname{rank}(\mathcal{H}^{(m)}) \le \min\{\operatorname{rank}(\Theta_m), \operatorname{rank}(X_m)\} \le d^{(m)}$. When $d^{(m)} \ge \min\{T, N\}$, we observe that the model matrix $\Theta_m$ is approximately low-rank due to the similarity of local models between successive training iterations. The data matrix $X_m$ can also be approximately low-rank due to the similarity between local data points. The following proposition theoretically formalizes this observation. Before stating the result, we first give a formal definition of approximate low-rankness, proposed by Udell & Townsend (2019).
Definition 3 ($\epsilon$-rank, (Udell & Townsend, 2019, Def. 2.1)).

Let $X \in \mathbb{R}^{m \times n}$ be a matrix and $\epsilon > 0$ a tolerance. The $\epsilon$-rank of $X$ is defined as $\operatorname{rank}_\epsilon(X) := \min\{\operatorname{rank}(Z) : Z \in \mathbb{R}^{m \times n}, \|Z - X\|_{\max} \le \epsilon\}$, where $\|\cdot\|_{\max}$ is the absolute maximum matrix entry. Consequently, $k = \operatorname{rank}_\epsilon(X)$ is the minimal integer such that $X$ can be approximated by a rank-$k$ matrix within an $\epsilon$-tolerance.

The following proposition characterizes the approximate rank of the embedding matrices $\mathcal{H}^{(m)}$.
Proposition 1 ($\epsilon$-rank of the embedding matrix).

Assume that the function $f(\cdot; y)$ is $L$-smooth for any label $y$, that the local data sets are normalized, i.e., $\|x_i^{(m)}\| = 1$ for all $m \in [M]$ and $i \in [N]$, and that the learning rate is defined as $\eta^{(t)} := 1/t$. Then for any $\epsilon > 0$ and any $m \in [M]$,

$$\operatorname{rank}_\epsilon(\mathcal{H}^{(m)}) \le \min\left\{ d^{(m)}, \left\lceil \frac{L \log(T)}{\epsilon} \right\rceil, \mathcal{N}\left(\mathcal{D}^{(m)}, \frac{\epsilon}{\gamma^{(m)}}\right) \right\},$$

where $\mathcal{N}(\cdot, \cdot)$ is the covering number and $T$ is the number of total iterations.
Low-rank matrix completion  Proposition 1 shows that the embedding matrix is approximately low-rank. For any client $m \in [M]$, we formulate the following factorization-based low-rank matrix completion problem to complete the embedding matrix $\mathcal{H}^{(m)}$:
$$\min_{W^{(m)} \in \mathbb{R}^{T \times r},\; H^{(m)} \in \mathbb{R}^{N \times r}} \;\; \sum_{t=1}^{T} \sum_{i \in B^{(t)}} \left( \mathcal{H}^{(m)}_{t,i} - (w_t^{(m)})^\intercal h_i^{(m)} \right)^2 + \lambda \left( \|W^{(m)}\|_F^2 + \|H^{(m)}\|_F^2 \right), \tag{2}$$
where $r$ is a user-specified rank parameter, $\lambda$ is a positive regularization parameter, $\|\cdot\|_F$ is the Frobenius norm, and $w_t^{(m)}$ and $h_i^{(m)}$, respectively, are the $t$-th and the $i$-th row vectors of the matrices $W^{(m)}$ and $H^{(m)}$. The rank parameter $r$ can be determined via Proposition 1.
The low-rank matrix completion model (Equation 2) was first used to complete missing information in recommender systems (Koren et al., 2009). Its effectiveness has been extensively studied both theoretically and empirically (Keshavan, 2012; Sun & Luo, 2016). We can therefore adopt well-established matrix-completion methods (Yu et al., 2014; Chin et al., 2016) for solving the problem in Equation 2.
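As a minimal illustration of how a factorization objective like Equation 2 can be solved, the sketch below completes a toy low-rank matrix from partial observations using alternating ridge regressions (one standard solver family for this kind of objective; the data, rank, and hyperparameters are invented for the example and are not the paper's settings):

```python
import numpy as np

def complete_matrix(obs, T, N, r, lam=1e-3, iters=100, seed=0):
    """Factorization-based completion: fit W (T x r) and H (N x r) to the
    observed entries obs = {(t, i): value} by alternating least squares on
    a regularized squared-error objective, then return W @ H.T."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(T, r))
    H = rng.normal(scale=0.1, size=(N, r))
    by_row = {t: [] for t in range(T)}
    by_col = {i: [] for i in range(N)}
    for (t, i), v in obs.items():
        by_row[t].append((i, v))
        by_col[i].append((t, v))
    for _ in range(iters):
        for t, entries in by_row.items():
            if entries:  # ridge regression for row factor w_t
                Hs = np.array([H[i] for i, _ in entries])
                vs = np.array([v for _, v in entries])
                W[t] = np.linalg.solve(Hs.T @ Hs + lam * np.eye(r), Hs.T @ vs)
        for i, entries in by_col.items():
            if entries:  # ridge regression for column factor h_i
                Ws = np.array([W[t] for t, _ in entries])
                vs = np.array([v for _, v in entries])
                H[i] = np.linalg.solve(Ws.T @ Ws + lam * np.eye(r), Ws.T @ vs)
    return W @ H.T

# Toy check: recover a rank-2 matrix from 60% of its entries.
rng = np.random.default_rng(1)
T, N = 20, 30
truth = rng.normal(size=(T, 2)) @ rng.normal(size=(2, N))
mask = rng.random((T, N)) < 0.6
obs = {(t, i): truth[t, i] for t in range(T) for i in range(N) if mask[t, i]}
est = complete_matrix(obs, T, N, r=2)
print(np.linalg.norm(est - truth) / np.linalg.norm(truth))  # small relative error
```

In the paper's setting each client would run such a completion locally on its own partially observed embedding matrix before the server computes VerFedSV.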
Note that the matrix completion problem (Equation 2) can be solved on the client side in parallel, i.e., clients independently complete their embedding matrices. It is known that if we solve the matrix completion problem in Equation 2 using the proximal gradient method, then the computational complexity of each iteration is $\Omega(r(T + N + \tau T N))$ (Udell et al., 2016). Here, the number of steps $\tau$ required by the proximal gradient method is typically considered to be a small constant. We numerically show the time required for solving the matrix completion problem in Equation 2 in Section D.1.
Approximation guarantee  The following proposition shows that if we can obtain a factorization with a small error, i.e., $\mathcal{H}^{(m)} \approx W^{(m)} (H^{(m)})^\intercal$, via solving Equation 2, then we can also guarantee a good approximation to the true VerFedSV.
Proposition 2 (Approximation guarantee).

Define the factorization error as $\epsilon := \frac{1}{M} \sum_{m=1}^{M} \|\mathcal{H}^{(m)} - W^{(m)} (H^{(m)})^\intercal\|_{\max}$. For any client $m \in [M]$, let $\hat{s}_m$ denote the VerFedSV computed with $\{W^{(m)}, H^{(m)} : m \in [M]\}$, i.e., $\hat{s}_m = \frac{1}{MT} \sum_{t=1}^{T} \sum_{S \subseteq [M] \setminus \{m\}} \binom{M-1}{|S|}^{-1} [\hat{U}_t(S \cup \{m\}) - \hat{U}_t(S)]$, where

$$\hat{U}_t(S) = \frac{1}{N} \sum_{i=1}^{N} f\Big(\sum_{m=1}^{M} (w_{t-1}^{(m)})^\intercal h_i^{(m)}; y_i\Big) - \frac{1}{N} \sum_{i=1}^{N} f\Big(\sum_{m \in S} (w_t^{(m)})^\intercal h_i^{(m)} + \sum_{m \notin S} (w_{t-1}^{(m)})^\intercal h_i^{(m)}; y_i\Big).$$

If the function $f(\cdot; y)$ is $G$-Lipschitz for any label $y$, then $|\hat{s}_m - s_m| \le 2 G \epsilon$ for all clients $m \in [M]$.
6 Computation of VerFedSV under asynchronous setting
Algorithms using synchronous computation are inefficient when applied to real-world VFL tasks, especially when clients' computational resources are unbalanced. In this section, we show how to equip VerFedSV with asynchronous VFL algorithms.
We briefly introduce the vertical asynchronous federated learning (VAFL) algorithm (Chen et al., 2020). This algorithm allows each client to run stochastic gradient algorithms without coordinating with other clients. The server maintains the latest embeddings for each sample. At any time, a client can update an embedding or query a gradient from the server. Please refer to Section A.2 for the detailed training process of the VAFL algorithm.
Next, we show how to compute VerFedSV with the VAFL algorithm. Under the asynchronous setting, there is no notion of training iterations from the perspective of the server. Following Definition 2, we pre-determine $T$ time stamps for contribution valuation. At any contribution valuation time $t \in [T]$, the server keeps the set of embeddings $H^{(t-1)}$ from time $t-1$ and $H^{(t)}$ from time $t$. Here, $H^{(t)} = \{(h_i^{(m)})^{(t)} \mid m \in [M], i \in [N]\}$. At $t = 0$, we initialize the server's embeddings with all zeros, i.e., $(h_i^{(m)})^{(0)} = 0$ for all $m \in [M]$ and $i \in [N]$. With these embeddings, VerFedSV is computed according to Equation 1 and Definition 2.
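A minimal sketch of this server-side bookkeeping (the class and method names are hypothetical, and embeddings are scalars for brevity): clients push updates at their own pace, and each valuation time stamp snapshots the table so that both $H^{(t-1)}$ and $H^{(t)}$ are available for Equation 1.

```python
import numpy as np

class EmbeddingServer:
    """Server-side state for asynchronous contribution valuation."""

    def __init__(self, M, N):
        # (h_i^{(m)})^{(0)} = 0 initialization for all clients and samples.
        self.table = np.zeros((M, N))
        self.snapshots = [self.table.copy()]

    def push(self, m, indices, values):
        # Client m asynchronously overwrites the embeddings of a mini-batch.
        self.table[m, indices] = values

    def valuation_tick(self):
        # Called at each contribution valuation time stamp t; returns the
        # embedding tables at times t-1 and t.
        self.snapshots.append(self.table.copy())
        return self.snapshots[-2], self.snapshots[-1]

server = EmbeddingServer(M=2, N=4)
server.push(0, [0, 1], [1.0, 2.0])  # faster client updates two embeddings
server.push(1, [0], [0.5])          # slower client updates only one
h_prev, h_curr = server.valuation_tick()
print(h_prev.sum(), h_curr.sum())   # prints: 0.0 3.5
```

The snapshot pair is precisely what the discussion below exploits: between two ticks, only the embeddings a client actually pushed differ across the snapshots.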
A careful reader may ask why there is no need for matrix completion under the asynchronous setting. A short answer is that, under this setting, the contribution of a client is related to both the quality of the local dataset and the power of the local computational resources, which is reflected by the parameter $\tau_m$ (Definition 1). More precisely, at any contribution valuation time point $t \in [T]$ and for any client $m \in [M]$, there exists $B \subset [N]$ such that only the embeddings corresponding to $B$ are updated, i.e., $(h_i^{(m)})^{(t)} \neq (h_i^{(m)})^{(t-1)}$ for all $i \in B$, and $(h_i^{(m)})^{(t)} = (h_i^{(m)})^{(t-1)}$ for all $i \notin B$, where the size of $B$ is proportional to $\tau_m$. Then, according to Equation 1, for any $S \subset [M] \setminus \{m\}$, we have

$$U_t(S \cup \{m\}) - U_t(S) = \frac{1}{N} \sum_{i \in B} \left[ f\big((h_i^{(m)})^{(t-1)} + (h_i^{(-m)})^{(t)}; y_i\big) - f\big((h_i^{(m)})^{(t)} + (h_i^{(-m)})^{(t)}; y_i\big) \right],$$

where $(h_i^{(-m)})^{(t)} := \sum_{k \in S} (h_i^{(k)})^{(t)} + \sum_{k \notin S \cup \{m\}} (h_i^{(k)})^{(t-1)}$.
Therefore, we can see that the contribution of client $m$ is proportional to $\tau_m$, which is indeed an important feature of asynchronous VFL algorithms. Thus, if we conducted embedding matrix completion as in the synchronous setting, we would lose this important feature, which is unfair to the clients with more powerful local computational resources.
The above discussion also suggests that VerFedSV can motivate clients to communicate more with the server in the asynchronous setting, i.e., to dedicate more of their local computational resources. The following proposition demonstrates that two clients with identical local datasets but different communication frequencies receive different valuations; more specifically, the one that communicates more with the server receives a higher valuation.
Proposition 3 (More work leads to more rewards).

Consider a simple case: clients 1 and 2 have identical local datasets, i.e., $x_i^{1} = x_i^{2} = x_i$ for all $i \in [N]$. However, their communication frequencies differ: while client 1 always sends local embeddings to the server, client 2 does so only with probability $\rho \in [0, 1]$ each time client 1 communicates.

Suppose that the loss function $g(\theta) = \frac{1}{N} \sum_{i=1}^{N} f(\langle \theta, x_i \rangle, y_i)$ is $\mu$-strongly convex for some $\mu > 0$, that $\theta^*$ is the global minimum point, that $\theta_1$ and $\theta_2$ are initialized at $0$, and that we stop training when we reach the optimum. Then it follows that

$$\mathbb{E}[\theta_1^*] = \frac{1}{1+\rho}\,\theta^* \quad \text{and} \quad \mathbb{E}[\theta_2^*] = \frac{\rho}{1+\rho}\,\theta^*.$$

Furthermore, if we perform contribution valuation only once, when training ends, then $\mathbb{E}[s_1] \ge \mathbb{E}[s_2] + \mu \left( \frac{1-\rho}{1+\rho} \right)^2 \|\theta^*\|^2$, where $s_1$ and $s_2$ are the VerFedSV for clients 1 and 2, respectively.
Figure 1: Approximated $\epsilon$-rank of embedding matrices. (a) Adult dataset. (b) Web dataset. (c) Covtype dataset. (d) RCV1 dataset.

7 Experiments
We conduct extensive experiments on real-world datasets, including Adult (Zeng et al., 2008), Web (Platt, 1998), Covtype (Blackard & Dean, 1999), and RCV1 (Lewis et al., 2004). The detailed setup is elaborated in Section C. Our code is submitted in the supplementary material.
7.1 Synchronous Vertical Federated Learning
In this section, we first empirically verify that the embedding matrices are indeed approximately low-rank (Proposition 1). Next, we demonstrate that VerFedSV satisfies the fairness property under the synchronous setting (Theorem 1). More precisely, we verify that clients with similar features get similar valuations and that clients with randomly generated features get low valuations.
Rank of the embedding matrix  Verifying low-rankness requires access to the full embedding matrix. Thus, we set the batch size equal to the number of data points for all clients. Since this experiment does not focus on scalability w.r.t. the total number of clients $M$, we set $M$ to a moderate value: for the Adult, Web, Covtype, and RCV1 datasets, respectively, $M = 3, 15, 9, 14$. Since computing the $\epsilon$-rank (Definition 3) is NP-hard (Udell & Townsend, 2019), we instead compute the rank of a truncated singular value decomposition as an approximation. More precisely, given an embedding matrix $\mathcal{H}^{(m)} \in \mathbb{R}^{T \times N}$ for client $m$, let $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p$ be its ordered singular values, where $p = \min\{T, N\}$. Then, given $\epsilon \in (0, 1)$, we define its approximated $\epsilon$-rank as $\widehat{\operatorname{rank}}_\epsilon(\mathcal{H}^{(m)}) = \max\{r \in [1, p] \mid \sigma_r \ge \epsilon \cdot \sigma_1\}$.
2527
This yields $M$ rank values, one for each client's embedding matrix. The histogram of these approximated $\epsilon$-rank values is shown in Figure 1, where $\epsilon = 10^{-3}$; the x-axis represents the approximated $\epsilon$-rank and the y-axis the number of clients with the corresponding value. The plot shows that the embedding matrices are low-rank on all the datasets.
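The approximated $\epsilon$-rank above follows directly from a singular value decomposition. The sketch below (our own illustration in NumPy, with a synthetic low-rank-plus-noise matrix standing in for an embedding matrix; function and variable names are assumptions, not the paper's code) shows the computation:

```python
import numpy as np

def approx_eps_rank(H, eps=1e-3):
    """Approximated eps-rank: number of singular values sigma_r >= eps * sigma_1."""
    sigma = np.linalg.svd(H, compute_uv=False)  # returned in decreasing order
    return int(np.sum(sigma >= eps * sigma[0]))

# Synthetic "embedding matrix": exact rank 5, plus tiny perturbation.
rng = np.random.default_rng(0)
H = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 200))
H += 1e-8 * rng.standard_normal(H.shape)

print(approx_eps_rank(H, eps=1e-3))  # prints 5, far below min(500, 200)
```

The threshold $\epsilon \cdot \sigma_1$ makes the measure scale-invariant: multiplying the matrix by a constant does not change its approximated $\epsilon$-rank.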
Figure 2: Relative VerFedSV difference vs. feature heterogeneity. (a) Adult dataset. (b) Web dataset. (c) Covtype dataset. (d) RCV1 dataset.
VerFedSV for similar features. Besides the original clients, for each dataset we add 5 more clients whose features are identical to client 1's but carry different levels of perturbation. More precisely, for new client $i \in \{1, \ldots, 5\}$, we add white Gaussian noise to $(i-1)\cdot 10\%$ of the local features; we call this fraction the feature heterogeneity. We then measure the relative difference between the original client 1 and the new clients: for each new client $i \in \{1, \ldots, 5\}$, let $\mathrm{diff}_i := |s - s_i|/s$, where $s$ is the VerFedSV of the original client 1 and $s_i$ is the VerFedSV of new client $i$. Figure 2 plots the relative VerFedSV difference versus the feature heterogeneity, where the numbers of clients for the Adult, Web, Covtype, and RCV1 datasets are $M = 8, 20, 14, 19$, respectively. The relative VerFedSV difference is proportional to the feature heterogeneity. Moreover, when the feature heterogeneity equals $0$, i.e., two clients have identical features, the relative VerFedSV difference is exactly $0$ for the Adult dataset and nearly $0$ for the Web, Covtype, and RCV1 datasets; the inexactness is due to Monte Carlo sampling.
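The perturbation scheme can be sketched as follows. This is our own reading of the setup (the noise scale, the choice of which features to perturb, and the helper name are assumptions, since the text does not specify them):

```python
import numpy as np

def perturb_features(x, heterogeneity, rng):
    """Copy client 1's feature matrix and add white Gaussian noise to a
    `heterogeneity` fraction of its columns (features)."""
    x_new = x.copy()
    n_features = x.shape[1]
    k = int(round(heterogeneity * n_features))
    cols = rng.choice(n_features, size=k, replace=False)
    x_new[:, cols] += rng.standard_normal((x.shape[0], k))
    return x_new

rng = np.random.default_rng(0)
x1 = rng.standard_normal((100, 20))  # client 1's local features
# New client i perturbs (i-1)*10% of the features; client 1's copy stays identical.
clients = [perturb_features(x1, (i - 1) * 0.10, rng) for i in range(1, 6)]
print(np.allclose(clients[0], x1))  # prints True
```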
VerFedSV for random features. Besides the original clients, for each dataset we add 5 more clients whose features are randomly generated from different distributions. Specifically, for new client $i \in \{1, \ldots, 5\}$, the features are drawn from a Gaussian distribution with mean $i$ and variance $i^2$. Table 1 shows the percentage of each client's VerFedSV relative to the sum of all VerFedSVs, where the numbers of clients for the Adult, Web, Covtype, and RCV1 datasets are $M = 8, 20, 14, 19$, respectively. As the table shows, regardless of the distribution, clients with randomly generated features receive much lower valuations than the regular clients on all the datasets.
Table 1: Percentage of clients' VerFedSVs in the total sum of VerFedSVs, under the synchronous setting (left four columns) and the asynchronous setting (right four columns).

| Client | Adult (sync) | Web (sync) | Covtype (sync) | RCV1 (sync) | Adult (async) | Web (async) | Covtype (async) | RCV1 (async) |
|---|---|---|---|---|---|---|---|---|
| all regular clients | 99.88 | 99.73 | 97.77 | 93.84 | 97.91 | 98.39 | 98.57 | 91.90 |
| artificial client 1 | 0.01 | 0.02 | 0.39 | 1.86 | 0.93 | 0.26 | 0.35 | 1.66 |
| artificial client 2 | 0.05 | 0.06 | 0.57 | 0.43 | 0.06 | 0.31 | 0.21 | 1.75 |
| artificial client 3 | 0.03 | 0.06 | 0.91 | 2.37 | 0.33 | 0.36 | 0.22 | 1.64 |
| artificial client 4 | 0.01 | 0.07 | 0.36 | 0.31 | 0.43 | 0.40 | 0.25 | 1.50 |
| artificial client 5 | 0.02 | 0.06 | 0.01 | 1.18 | 0.35 | 0.28 | 0.39 | 1.54 |
7.2 Asynchronous Vertical Federated Learning
In this section, we show that VerFedSV not only satisfies the fairness property under the asynchronous setting (Theorem 1), but can also reflect how frequently clients report (Proposition 3). Specifically, during training we let clients communicate with the server at different frequencies. For all the datasets, we asynchronously train the model for $20$ seconds and perform valuation every $0.04$ seconds, i.e., there are $T = 500$ contribution valuation time points (Definition 2).
Figure 3: VerFedSVs for clients with different communication frequencies. (a) Adult dataset. (b) Web dataset. (c) Covtype dataset. (d) RCV1 dataset.
Impact of communication frequency. Besides the original clients, for each dataset we add 5 more clients whose features are identical to client 1's but who communicate at different frequencies. More precisely, new client $i \in \{1, \ldots, 5\}$ communicates with the server every $0.01i$ seconds, i.e., new client 1 has the highest communication frequency and new client 5 the lowest. Figure 3 plots the percentage of the new clients' VerFedSVs in the total sum of VerFedSVs, where the numbers of clients for the Adult, Web, Covtype, and RCV1 datasets are $M = 8, 20, 14, 19$, respectively. The percentage of VerFedSV is proportional to the communication frequency.
VerFedSV for random features. Besides the regular clients, for each dataset we add 5 more clients whose features are randomly generated from the standard Gaussian distribution. Their communication frequencies are set so that new client $i \in \{1, \ldots, 5\}$ communicates with the server every $0.01i$ seconds. Table 1 shows the percentage of the clients' VerFedSVs in the total sum of VerFedSVs, where the numbers of clients for the Adult, Web, Covtype, and RCV1 datasets are $M = 8, 20, 14, 19$, respectively. Regardless of the communication frequencies, the clients with randomly generated features receive much lower valuations than the regular clients on all the datasets.
8 Conclusion and insights
In this paper, we propose a contribution valuation metric for vertical federated learning, the vertical federated Shapley value (VerFedSV). We demonstrate theoretically and empirically that VerFedSV satisfies desirable fairness properties and is adaptable to both synchronous and asynchronous VFL settings. There are a few interesting future directions. We notice that when we keep adding clients with identical features, the total sum of VerFedSVs increases. This suggests that some clients may "cheat" by constructing new clients with identical features to receive unjustifiable rewards. One potential solution is to implement secure statistical testing before FL training and exclude clients with extremely similar datasets. It is also interesting to explore whether VerFedSV can be integrated into differentially private VFL algorithms.
References

Achen (1982) Christopher H Achen. Interpreting and using regression, volume 29. Sage, 1982.
Bezanson et al. (2017) Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B Shah. Julia: A fresh approach to numerical computing. SIAM Review, 59(1):65–98, 2017.
Blackard & Dean (1999) Jock A Blackard and Denis J Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131–151, 1999.
Brown et al. (2012) Gavin Brown, Adam Pocock, Ming-Jie Zhao, and Mikel LujΓ‘n. Conditional likelihood maximisation: a unifying framework for information theoretic feature selection. JMLR, 13:27–66, 2012.
Chang & Lin (2011) Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM TIST, 2:27:1–27:27, 2011.
Chen et al. (2020) Tianyi Chen, Xiao Jin, Yuejiao Sun, and Wotao Yin. VAFL: a method of vertical asynchronous federated learning. arXiv:2007.06081, 2020.
Cheng et al. (2019) K. Cheng, T. Fan, Y. Jin, Y. Liu, T. Chen, and Q. Yang. SecureBoost: A lossless federated learning framework. arXiv:1901.08755, 2019.
Chin et al. (2016) Wei-Sheng Chin, Bo-Wen Yuan, Meng-Yuan Yang, Yong Zhuang, Yu-Chin Juan, and Chih-Jen Lin. LIBMF: A library for parallel matrix factorization in shared-memory systems. J. Mach. Learn. Res., 17(1):2971–2975, 2016.
Dinh et al. (2020) Canh T Dinh, Nguyen H Tran, Minh NH Nguyen, Choong Seon Hong, Wei Bao, Albert Y Zomaya, and Vincent Gramoli. Federated learning over wireless networks: Convergence analysis and resource allocation. IEEE/ACM Transactions on Networking, 29(1):398–409, 2020.
Dubey (1975) Pradeep Dubey. On the uniqueness of the Shapley value. International Journal of Game Theory, 4(3):131–139, 1975.
Fan et al. (2022) Zhenan Fan, Huang Fang, Zirui Zhou, Jian Pei, Michael P Friedlander, Changxin Liu, and Yong Zhang. Improving fairness for data valuation in horizontal federated learning. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pp. 2440–2453. IEEE, 2022.
Ghorbani & Zou (2019) Amirata Ghorbani and James Zou. Data Shapley: Equitable valuation of data for machine learning. In ICML, pp. 2242–2251. PMLR, 2019.
Gong et al. (2016) Yanmin Gong, Yuguang Fang, and Yuanxiong Guo. Private data analytics on biomedical sensing data via distributed computation. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 13:431–444, 2016.
Gu et al. (2020) Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, and Heng Huang. Privacy-preserving asynchronous federated learning algorithms for multi-party vertically collaborative learning. arXiv:2008.06233, 2020.
Gul (1989) Faruk Gul. Bargaining foundations of Shapley value. Econometrica: Journal of the Econometric Society, pp. 81–95, 1989.
Han et al. (2021) Xiao Han, Leye Wang, and Junjie Wu. Data valuation for vertical federated learning: An information-theoretic approach. arXiv:2112.08364, 2021.
Hu et al. (2019) Yaochen Hu, Di Niu, Jianming Yang, and Shengping Zhou. FDML: A collaborative machine learning framework for distributed features. KDD, 2019.
Jia et al. (2019) Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nick Hynes, Nezihe Merve GΓΌrel, Bo Li, Ce Zhang, Dawn Song, and Costas J Spanos. Towards efficient data valuation based on the Shapley value. In AISTATS, pp. 1167–1176. PMLR, 2019.
Kairouz et al. (2019) Peter Kairouz, H Brendan McMahan, Brendan Avent, AurΓ©lien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. arXiv:1912.04977, 2019.
Kendall (1938) Maurice G Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81–93, 1938.
Keshavan (2012) Raghunandan Hulikal Keshavan. Efficient algorithms for collaborative filtering. Stanford University, 2012.
Koren et al. (2009) Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
Kwon & Zou (2022) Yongchan Kwon and James Zou. Beta Shapley: a unified and noise-reduced data valuation framework for machine learning. In AISTATS, volume 151 of Proceedings of Machine Learning Research, pp. 8780–8802. PMLR, 2022.
Kwon et al. (2021) Yongchan Kwon, Manuel A Rivas, and James Zou. Efficient computation and analysis of distributional Shapley values. In AISTATS, pp. 793–801. PMLR, 2021.
Lewis et al. (2004) David D Lewis, Yiming Yang, Tony Russell-Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. JMLR, 5(Apr):361–397, 2004.
Li et al. (2020) Li Li, Yuxi Fan, Mike Tse, and Kuo-Yi Lin. A review of applications in federated learning. Computers & Industrial Engineering, pp. 106854, 2020.
Lin et al. (2023) Xiaoqiang Lin, Xinyi Xu, See-Kiong Ng, Chuan-Sheng Foo, and Bryan Kian Hsiang Low. Fair yet asymptotically equal collaborative learning. In ICML, volume 202 of Proceedings of Machine Learning Research, pp. 21223–21259. PMLR, 2023.
Liu et al. (2019) Yang Liu, Yan Kang, Xinwei Zhang, Liping Li, Yong Cheng, Tianjian Chen, Mingyi Hong, and Qiang Yang. A communication efficient collaborative learning framework for distributed features. arXiv:1912.11187, 2019.
Liu et al. (2022) Zelei Liu, Yuanyuan Chen, Han Yu, Yang Liu, and Li-zhen Cui. GTG-Shapley: Efficient and accurate participant contribution evaluation in federated learning. ACM TIST, 13(4):1–21, 2022.
Lundberg & Lee (2017) Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In NeurIPS, pp. 4768–4777, 2017.
McMahan et al. (2017) Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In AISTATS, pp. 1273–1282. PMLR, 2017.
Metropolis & Ulam (1949) Nicholas Metropolis and Stanislaw Ulam. The Monte Carlo method. Journal of the American Statistical Association, 44(247):335–341, 1949.
Nagaraj et al. (2019) Dheeraj Nagaraj, Prateek Jain, and Praneeth Netrapalli. SGD without replacement: Sharper rates for general smooth convex functions. In ICML, pp. 4703–4711. PMLR, 2019.
Platt (1998) John Platt. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods - Support Vector Learning. MIT Press, January 1998.
Rockafellar (1997) R Tyrrell Rockafellar. Convex Analysis, volume 11. Princeton University Press, 1997.
Shapley (1952) L S Shapley. A value for n-person games. Technical report, RAND Corporation, Santa Monica, CA, 1952.
Sim et al. (2022) Rachael Hwee Ling Sim, Xinyi Xu, and Bryan Kian Hsiang Low. Data valuation in machine learning: "ingredients", strategies, and open challenges. In IJCAI, pp. 5607–5614. ijcai.org, 2022.
Song et al. (2019) Tianshu Song, Yongxin Tong, and Shuyue Wei. Profit allocation for federated learning. In IEEE TBD, pp. 2577–2586. IEEE, 2019.
Sun et al. (2023) Qiheng Sun, Xiang Li, Jiayao Zhang, Li Xiong, Weiran Liu, Jinfei Liu, Zhan Qin, and Kui Ren. ShapleyFL: Robust federated learning based on Shapley value. In KDD, pp. 2096–2108. ACM, 2023.
Sun & Luo (2016) Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via non-convex factorization. IEEE Transactions on Information Theory, 62(11):6535–6579, 2016.
Udell & Townsend (2019) Madeleine Udell and Alex Townsend. Why are big data matrices approximately low rank? SIAM Journal on Mathematics of Data Science, 1(1):144–160, 2019.
Udell et al. (2016) Madeleine Udell, Corinne Horn, Reza Zadeh, and Stephen Boyd. Generalized low rank models. Foundations and Trends in Machine Learning, 9(1), 2016.
Wang et al. (2019) Guan Wang, Charlie Xiaoqian Dang, and Ziye Zhou. Measure contribution of participants in federated learning. In IEEE TBD, pp. 2597–2604. IEEE, 2019.
Wang & Jia (2023) Jiachen T. Wang and Ruoxi Jia. Data Banzhaf: A robust data valuation framework for machine learning. In AISTATS, volume 206 of Proceedings of Machine Learning Research, pp. 6388–6421. PMLR, 2023.
Wang et al. (2023) Jiachen T. Wang, Yuqing Zhu, Yu-Xiang Wang, Ruoxi Jia, and Prateek Mittal. Threshold KNN-Shapley: A linear-time and privacy-friendly approach to data valuation. In NeurIPS Workshop on Attributing Model Behavior at Scale, 2023.
Wang et al. (2020) Tianhao Wang, Johannes Rausch, Ce Zhang, Ruoxi Jia, and Dawn Song. A principled approach to data valuation for federated learning. In Federated Learning, pp. 153–167. Springer, 2020.
Xu et al. (2021) Xinyi Xu, Lingjuan Lyu, Xingjun Ma, Chenglin Miao, Chuan Sheng Foo, and Bryan Kian Hsiang Low. Gradient driven rewards to guarantee fairness in collaborative machine learning. In NeurIPS, pp. 16104–16117, 2021.
Yang et al. (2019) Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. ACM TIST, 10(2):1–19, 2019.
Yang et al. (2022) Xun Yang, Weijie Tan, Changgen Peng, S. Xiang, and Kun Niu. Federated learning incentive mechanism design via enhanced Shapley value method. Wireless Communications and Mobile Computing, 2022.
Yu et al. (2014) Hsiang-Fu Yu, Cho-Jui Hsieh, Si Si, and Inderjit S Dhillon. Parallel matrix factorization for recommender systems. Knowledge and Information Systems, 41(3):793–819, 2014.
Zeng et al. (2008) Zhi-Qiang Zeng, Hong-Bin Yu, Hua-Rong Xu, Yan-Qi Xie, and Ji Gao. Fast training support vector machines using parallel sequential minimal optimization. In 2008 3rd International Conference on Intelligent System and Knowledge Engineering, volume 1, pp. 997–1001. IEEE, 2008.
Zhang et al. (2018) Gong-Duo Zhang, Shen-Yi Zhao, Hao Gao, and Wu-Jun Li. Feature-distributed SVRG for high-dimensional linear classification. arXiv:1802.03604, 2018.
Appendix A Details on Adopted Federated Learning Algorithm

A.1 Synchronous Setting
We adopt the FedSGD algorithm (Liu et al., 2019) for synchronous federated learning. In the following, we sketch one training iteration of FedSGD. In each iteration $t$, the FedSGD algorithm executes the following steps:

1. The server selects a mini-batch $B^{(t)} \subset [N]$ containing the global indices of samples;

2. Each client $m \in [M]$ computes local embeddings $\big\{ (h_i^{(m)})^{(t)} = \langle x_i^{(m)}, \theta_m^{(t)} \rangle \mid i \in B^{(t)} \big\}$, where $\theta_m^{(t)}$ is the client's current model, and sends them to the server;

3. The server computes gradient information $\Big\{ g_i^{(t)} := \frac{\partial f(h_i^{(t)};\, y_i)}{\partial h_i^{(t)}} \;\Big|\; i \in B^{(t)},\; h_i^{(t)} = \sum_{m=1}^M (h_i^{(m)})^{(t)} \Big\}$ and sends it to every client $m \in [M]$;

4. Each client $m \in [M]$ updates the local model via $\theta_m^{(t+1)} \leftarrow \theta_m^{(t)} - \frac{\eta^{(t)}}{|B^{(t)}|} \sum_{i \in B^{(t)}} g_i^{(t)} x_i^{(m)}$, where $\eta^{(t)}$ is the learning rate at the $t$-th iteration.
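The four steps above can be simulated in a few lines for a vertically partitioned linear model. The sketch below is a minimal illustration under our own assumptions (two clients, a least-squares loss $f(h; y) = (h-y)^2/2$, and invented variable names), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d1, d2 = 50, 3, 4                        # samples; feature dims of clients 1, 2
X = [rng.standard_normal((N, d1)), rng.standard_normal((N, d2))]
w = [rng.standard_normal(d1), rng.standard_normal(d2)]
y = X[0] @ w[0] + X[1] @ w[1] + 0.01 * rng.standard_normal(N)
theta = [np.zeros(d1), np.zeros(d2)]        # each client's local model
eta = 0.1

def fedsgd_iteration(batch):
    # Step 2: each client m computes embeddings h_i^(m) = <x_i^(m), theta_m>.
    H = [X[m][batch] @ theta[m] for m in range(2)]
    # Step 3: server aggregates h_i = sum_m h_i^(m); for the least-squares
    # loss f(h; y) = (h - y)^2 / 2, the gradient is g_i = h_i - y_i.
    g = sum(H) - y[batch]
    # Step 4: each client updates its model using only its local features.
    for m in range(2):
        theta[m] -= eta / len(batch) * X[m][batch].T @ g

# Step 1: the server draws a mini-batch of global sample indices per iteration.
for t in range(200):
    fedsgd_iteration(rng.choice(N, size=16, replace=False))
```

Note that raw features never leave a client: only the scalar embeddings and the per-sample gradients $g_i$ cross the network.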
A.2 Asynchronous Setting
We follow the vertical asynchronous federated learning (VAFL) algorithm proposed by Chen et al. (2020), which allows each client to run stochastic gradient updates without coordinating with the other clients. We sketch the training process of the VAFL algorithm.

- The server maintains the latest embeddings $\{ h_i^{(m)} \mid i \in [N],\, m \in [M] \}$ and waits for a message from an active client $m$. The message contains either an embedding update or a gradient query.

  1. Update: Client $m$ sends the embeddings $\{ \hat{h}_i^{(m)} \mid i \in B_m \}$ to the server. The server then updates its latest embeddings by $h_i^{(m)} = \hat{h}_i^{(m)}$ for all $i \in B_m$.

  2. Query: Client $m$ requests the partial gradient with respect to the batch $B_m$. The server then sends back to client $m$ the partial gradient

  $$\left\{ g_i := \frac{\partial f(h_i;\, y_i)}{\partial h_i} \;\middle|\; i \in B_m,\; h_i = \sum_{m=1}^M h_i^{(m)} \right\}. \qquad (3)$$

- A client $m$ keeps executing the following steps:

  1. Randomly select a batch $B_m \subset [N]$ such that $|B_m| = \tau_m$ (Definition 1) and compute the local embeddings $\{ \hat{h}_i^{(m)} := \langle \theta^{(m)}, x_i^{(m)} \rangle \mid i \in B_m \}$.

  2. Upload the embeddings $\{ \hat{h}_i^{(m)} \mid i \in B_m \}$ to the server.

  3. Query the gradient from the server and update the local model as $\theta_m \leftarrow \theta_m - \frac{\eta_m}{|B_m|} \sum_{i \in B_m} g_i x_i^{(m)}$, where $\eta_m$ is the local learning rate and $g_i$ is defined in Equation 3.
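The server's two message types can be sketched as a handler over a table of latest embeddings. The class, message names, and least-squares gradient below are our own assumptions for illustration; in particular, the gradient of a generic loss would replace `h - y`:

```python
import numpy as np

class VAFLServer:
    """Keeps the latest embedding h_i^(m) for every sample i and client m."""
    def __init__(self, y, num_clients):
        self.y = y
        self.H = np.zeros((len(y), num_clients))  # latest embeddings

    def update(self, m, batch, embeddings):
        # Update message: overwrite client m's stale embeddings on `batch`.
        self.H[batch, m] = embeddings

    def query(self, batch):
        # Query message: g_i = df(h_i; y_i)/dh_i with h_i = sum_m h_i^(m);
        # for least squares f(h; y) = (h - y)^2 / 2, this is h_i - y_i.
        h = self.H[batch].sum(axis=1)
        return h - self.y[batch]

# One asynchronous round for client 0; client 1's embeddings may be stale.
rng = np.random.default_rng(0)
y = rng.standard_normal(10)
server = VAFLServer(y, num_clients=2)
x0, theta0, eta0 = rng.standard_normal((10, 3)), np.zeros(3), 0.1
batch = np.array([1, 4, 7])
server.update(0, batch, x0[batch] @ theta0)   # step 1-2: embed and upload
g = server.query(batch)                        # step 3: query gradient
theta0 -= eta0 / len(batch) * x0[batch].T @ g  # step 3: local update
```

The key asynchrony is that `query` uses whatever embeddings the other clients uploaded last, so no client ever blocks on another.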
Appendix B Proofs of Theoretical Results
We first introduce some notation. We consider the following special classes of functions (Nagaraj et al., 2019).

Definition 4. Consider a differentiable function $f : \mathbb{R}^n \to \mathbb{R}$ and any points $x, y \in \mathbb{R}^n$. We have the following definitions.

- $f$ is convex if $f$ satisfies $f(y) \ge f(x) + \langle \nabla f(x),\, y - x \rangle$;
- $f$ is $\mu$-strongly convex for some $\mu > 0$ if $f$ satisfies $f(y) \ge f(x) + \langle \nabla f(x),\, y - x \rangle + \frac{\mu}{2}\,\|x - y\|_2^2$;
- $f$ is $G$-Lipschitz for some $G > 0$ if $f$ satisfies $|f(x) - f(y)| \le G\,\|x - y\|_2$;
- $f$ is $L$-smooth for some $L > 0$ if $f$ satisfies $\|\nabla f(x) - \nabla f(y)\|_2 \le L\,\|x - y\|_2$.
Meanwhile, we use the notions of $\epsilon$-net and covering number (Udell & Townsend, 2019), which are widely used tools in high-dimensional probability.

Definition 5. Let $K$ be a compact subset of $\mathbb{R}^n$. A subset $N \subseteq K$ is called an $\epsilon$-net for $K$ if, for every $x \in K$, there exists $y \in N$ such that $\|x - y\|_2 \le \epsilon$. The minimum cardinality of an $\epsilon$-net for $K$ is called the covering number of $K$ and is represented by $\mathcal{N}(K, \epsilon)$.
B.1 Proof of Theorem 1: Fairness Guarantee
Formally, given a utility function $U$, the corresponding Shapley value should satisfy four fundamental requirements.

- Symmetry. For any two clients $i, j \in [M]$, if for any subset of clients $S \subseteq [M] \setminus \{i, j\}$, $U(S \cup \{i\}) = U(S \cup \{j\})$, then $s_i = s_j$.
- Zero element. For any client $i \in [M]$, if for any subset of clients $S \subseteq [M] \setminus \{i\}$, $U(S \cup \{i\}) = U(S)$, then $s_i = 0$.
- Additivity. If the utility function $U$ can be decomposed into the sum of separate utility functions, i.e., $U = U^1 + U^2$ for some $U^1, U^2 : 2^{[M]} \to \mathbb{R}$, then for any client $i \in [M]$, $s_i = s_i^1 + s_i^2$, where $s^1$ and $s^2$ are the evaluation metrics associated with the utility functions $U^1$ and $U^2$, respectively.
- Balance. $U([M]) = \sum_{i \in [M]} s_i$.
It has been shown that the Shapley value, computed by

$$s_m = \frac{1}{M} \sum_{S \subseteq [M] \setminus \{m\}} \binom{M-1}{|S|}^{-1} \big[\, U(S \cup \{m\}) - U(S) \,\big], \qquad (4)$$

is the only metric that satisfies the above requirements (Dubey, 1975; Ghorbani & Zou, 2019).
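For small $M$, Equation 4 can be evaluated exactly by enumerating subsets. The sketch below (our own helper, applied to a toy additive utility function, not a utility from the paper) illustrates the formula and checks the balance axiom:

```python
from itertools import combinations
from math import comb

def shapley_values(M, U):
    """Exact Shapley values via Equation 4 for clients 0..M-1 and utility U(frozenset)."""
    s = []
    for m in range(M):
        others = [j for j in range(M) if j != m]
        val = 0.0
        for k in range(M):  # subset sizes |S| = 0, ..., M-1
            for S in combinations(others, k):
                S = frozenset(S)
                # Marginal contribution of m, weighted by 1 / C(M-1, |S|).
                val += (U(S | {m}) - U(S)) / comb(M - 1, k)
        s.append(val / M)
    return s

# Toy additive utility: client i contributes i + 1 regardless of coalition.
U = lambda S: sum(i + 1 for i in S)
s = shapley_values(3, U)
print(s)  # [1.0, 2.0, 3.0]: each client is credited exactly its contribution
print(abs(sum(s) - U(frozenset({0, 1, 2}))) < 1e-12)  # balance axiom holds
```

The cost grows as $2^{M-1}$ per client, which is why the paper relies on Monte Carlo sampling for larger $M$.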
+
3780
+ To prove Theorem 1, we only need to show that the VerFedSV (Definition 2) matches the expression of the classical Shapley value, given the utility function specified by Equation 1 and
3781
+ π‘ˆ
3782
+ ​
3783
+ (
3784
+ 𝑆
3785
+ )
3786
+ =
3787
+ 1
3788
+ 𝑇
3789
+ ​
3790
+ βˆ‘
3791
+ 𝑑
3792
+ =
3793
+ 1
3794
+ 𝑇
3795
+ π‘ˆ
3796
+ 𝑑
3797
+ ​
3798
+ (
3799
+ 𝑆
3800
+ )
3801
+ .
3802
+
3803
Proof. We notice that the VerFedSV can be expressed as

$$
\begin{aligned}
s_m &= \frac{1}{MT} \sum_{t=1}^T \sum_{S \subseteq [M] \setminus \{m\}} \binom{M-1}{|S|}^{-1} \big[\, U_t(S \cup \{m\}) - U_t(S) \,\big] \\
&= \frac{1}{M} \sum_{S \subseteq [M] \setminus \{m\}} \binom{M-1}{|S|}^{-1} \left[ \frac{1}{T} \sum_{t=1}^T U_t(S \cup \{m\}) - \frac{1}{T} \sum_{t=1}^T U_t(S) \right] \\
&= \frac{1}{M} \sum_{S \subseteq [M] \setminus \{m\}} \binom{M-1}{|S|}^{-1} \big[\, U(S \cup \{m\}) - U(S) \,\big],
\end{aligned}
$$

which matches the expression of the classical Shapley value. The result then follows. ∎
+
3974
Another interesting property of VerFedSV is periodic additivity. Formally, if in each iteration the utility function $U_t$ can be expressed as the sum of separate utility functions, i.e., $U_t = U_t^1 + U_t^2$ for some $U_t^1, U_t^2 : 2^{[M]} \to \mathbb{R}$, then for any client $m \in [M]$, $s_m = s_m^1 + s_m^2$, where $s_m^1$ and $s_m^2$ denote, respectively, the VerFedSV computed with respect to the utility functions $U_t^1$ and $U_t^2$.

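The periodic additivity above follows from the linearity of the Shapley formula in the utility function, and it can be checked numerically on toy utilities. The following self-contained sketch uses arbitrary example set functions (not from the paper):

```python
from itertools import combinations
from math import comb

def shapley(M, utility):
    # Exact Shapley value of each client under a set-function `utility`.
    vals = []
    for m in range(M):
        others = [c for c in range(M) if c != m]
        total = 0.0
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = frozenset(S)
                total += (utility(S | {m}) - utility(S)) / comb(M - 1, len(S))
        vals.append(total / M)
    return vals

# Two arbitrary utilities U1, U2 and their sum U = U1 + U2.
U1 = lambda S: float(len(S) ** 2)
U2 = lambda S: 3.0 if 0 in S else 0.0
U  = lambda S: U1(S) + U2(S)

s, s1, s2 = shapley(3, U), shapley(3, U1), shapley(3, U2)
# Additivity: s_m = s_m^1 + s_m^2 for every client m.
assert all(abs(a - (b + c)) < 1e-9 for a, b, c in zip(s, s1, s2))
```

Because the Shapley formula is a fixed linear combination of utility evaluations, the additivity holds exactly, not just approximately.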
B.2 Proof of Proposition 1: bounds of the embedding matrix's $\epsilon$-rank

Proof.

It is evident that $\operatorname{rank}_\epsilon(\mathcal{H}^{(m)}) \le \operatorname{rank}(\mathcal{H}^{(m)}) \le d^{(m)}$ for all $m \in [M]$. So we only need to prove the remaining two upper bounds. First, we consider the difference between successive rows of the embedding matrix $\mathcal{H}^{(m)}$. For any $t \in [T-1]$ and $i \in [N]$,

$$
\begin{aligned}
\left| \mathcal{H}^{(m)}_{t,i} - \mathcal{H}^{(m)}_{t+1,i} \right|
&= \left| \big(h_i^{(m)}\big)^{(t)} - \big(h_i^{(m)}\big)^{(t+1)} \right| \\
&= \left| \big\langle \theta_m^{(t)}, x_i^{(m)} \big\rangle - \big\langle \theta_m^{(t+1)}, x_i^{(m)} \big\rangle \right| \\
&= \left| \big\langle \theta_m^{(t)} - \theta_m^{(t+1)}, x_i^{(m)} \big\rangle \right| \\
&= \left| \Big\langle \frac{\eta^{(t)}}{|B^{(t)}|} \sum_{j \in B^{(t)}} g_j^{(t)} x_j^{(m)},\; x_i^{(m)} \Big\rangle \right| \\
&\le \eta^{(t)} L.
\end{aligned}
$$

Thus, we obtain an upper bound on $\operatorname{rank}_\epsilon(\mathcal{H}^{(m)})$ by

$$
\begin{aligned}
\operatorname{rank}_\epsilon(\mathcal{H}^{(m)})
&\le \left\lceil \frac{1}{\epsilon} \sum_{t=1}^{T-1} \big\| \mathcal{H}^{(m)}[t,:] - \mathcal{H}^{(m)}[t+1,:] \big\|_{\max} \right\rceil \\
&\le \left\lceil \frac{L}{\epsilon} \sum_{t=1}^{T-1} \eta^{(t)} \right\rceil \\
&\le \left\lceil \frac{L \log(T)}{\epsilon} \right\rceil.
\end{aligned}
$$

Next, we consider the difference between any two columns of the embedding matrix $\mathcal{H}^{(m)}$. For any $t \in [T]$ and $i, j \in [N]$, we have

$$
\begin{aligned}
\left| \mathcal{H}^{(m)}_{t,i} - \mathcal{H}^{(m)}_{t,j} \right|
&= \left| \big(h_i^{(m)}\big)^{(t)} - \big(h_j^{(m)}\big)^{(t)} \right| \\
&= \left| \big\langle \theta_m^{(t)}, x_i^{(m)} \big\rangle - \big\langle \theta_m^{(t)}, x_j^{(m)} \big\rangle \right| \\
&\le \big\| \theta_m^{(t)} \big\| \cdot \big\| x_i^{(m)} - x_j^{(m)} \big\|.
\end{aligned}
$$

It follows that

$$
\big\| \mathcal{H}^{(m)}[:,i] - \mathcal{H}^{(m)}[:,j] \big\|_{\max} \le \max_{t \in [T]} \big\| \theta_m^{(t)} \big\| \cdot \big\| x_i^{(m)} - x_j^{(m)} \big\|.
$$

Let $\gamma^{(m)} = \max_{t \in [T]} \big\| \theta_m^{(t)} \big\|$ and let $\mathcal{N}$ be an $\frac{\epsilon}{\gamma^{(m)}}$-net for $\{ x_i^{(m)} : i \in [N] \}$. We can thus conclude that $\operatorname{rank}_\epsilon(\mathcal{H}^{(m)}) \le |\mathcal{N}|$. By definition of the covering number, it follows that

$$
\operatorname{rank}_\epsilon(\mathcal{H}^{(m)}) \le \mathcal{N}\Big( \{ x_i^{(m)} : i \in [N] \},\; \frac{\epsilon}{\gamma^{(m)}} \Big).
$$

∎

B.3 Proof of Proposition 2: Approximation Guarantee

Proof.

Define $U, \hat{U} : 2^{[M]} \to \mathbb{R}$ by $U(S) := \frac{1}{T} \sum_{t=1}^{T} U_t(S)$ and $\hat{U}(S) := \frac{1}{T} \sum_{t=1}^{T} \hat{U}_t(S)$. For any $S \subset [M]$, we know that

$$
\begin{aligned}
\big| U(S) - \hat{U}(S) \big|
&\le \frac{1}{NT} \sum_{t=1}^{T} \sum_{i=1}^{N} \left| f\Big( \sum_{m=1}^{M} \mathcal{H}^{(m)}_{t-1,i};\; y_i \Big) - f\Big( \sum_{m=1}^{M} \big(w_{t-1}^{(m)}\big)^{\intercal} h_i^{(m)};\; y_i \Big) \right| \\
&\quad + \frac{1}{NT} \sum_{t=1}^{T} \sum_{i=1}^{N} \left| f\Big( \sum_{m \in S} \mathcal{H}^{(m)}_{t,i} + \sum_{m \notin S} \mathcal{H}^{(m)}_{t-1,i};\; y_i \Big) - f\Big( \sum_{m \in S} \big(w_t^{(m)}\big)^{\intercal} h_i^{(m)} + \sum_{m \notin S} \big(w_{t-1}^{(m)}\big)^{\intercal} h_i^{(m)};\; y_i \Big) \right| \\
&\le \frac{G}{NT} \sum_{t=1}^{T} \sum_{i=1}^{N} \left| \sum_{m=1}^{M} \mathcal{H}^{(m)}_{t-1,i} - \sum_{m=1}^{M} \big(w_{t-1}^{(m)}\big)^{\intercal} h_i^{(m)} \right| \\
&\quad + \frac{G}{NT} \sum_{t=1}^{T} \sum_{i=1}^{N} \left| \Big( \sum_{m \in S} \mathcal{H}^{(m)}_{t,i} + \sum_{m \notin S} \mathcal{H}^{(m)}_{t-1,i} \Big) - \Big( \sum_{m \in S} \big(w_t^{(m)}\big)^{\intercal} h_i^{(m)} + \sum_{m \notin S} \big(w_{t-1}^{(m)}\big)^{\intercal} h_i^{(m)} \Big) \right| \\
&\le G M \epsilon.
\end{aligned}
$$

Then we can obtain a bound on $|s_m - \hat{s}_m|$ by

$$
\begin{aligned}
|s_m - \hat{s}_m|
&\le \frac{1}{M} \sum_{S \subseteq [M] \setminus \{m\}} \binom{M-1}{|S|}^{-1} \Big[ \big| U(S \cup \{m\}) - \hat{U}(S \cup \{m\}) \big| + \big| U(S) - \hat{U}(S) \big| \Big] \\
&\le 2 G \epsilon.
\end{aligned}
$$

∎

B.4 Proof of Proposition 3: More work leads to more rewards

Proof.

We know that $\theta_1^*$ and $\theta_2^*$ are the optimal variables for the following convex optimization problem

$$
\min_{\theta_1, \theta_2} \frac{1}{N} \sum_{i=1}^{N} f\big( \langle \theta_1, x_i \rangle + \langle \theta_2, x_i \rangle;\; y_i \big) = g(\theta_1 + \theta_2).
$$

Since $\theta^*$ is the unique minimizer for $g(\theta)$, it follows that $\theta_1^*$ and $\theta_2^*$ must satisfy $\theta_1^* + \theta_2^* = \theta^*$. Denote $d_1(t)$ as the update for $\theta_1$ at the $t$-th iteration and $d_2(t)$ as the update for $\theta_2$ at the $t$-th iteration. By the construction, we know that

$$
\mathbb{E}\left[ \sum_{t=1}^{\infty} \big( d_1(t) + d_2(t) \big) \right] = \theta^* \quad \text{and} \quad \mathbb{E}\left[ \sum_{t=1}^{\infty} d_2(t) \right] = \rho\, \mathbb{E}\left[ \sum_{t=1}^{\infty} d_1(t) \right].
$$

Therefore, it follows that

$$
\mathbb{E}[\theta_1^*] = \mathbb{E}\left[ \sum_{t=1}^{\infty} d_1(t) \right] = \frac{1}{1+\rho}\, \theta^* \quad \text{and} \quad \mathbb{E}[\theta_2^*] = \mathbb{E}\left[ \sum_{t=1}^{\infty} d_2(t) \right] = \frac{\rho}{1+\rho}\, \theta^*.
$$

Now we consider VerFedSV. By definition, we have

$$
\mathbb{E}[s_1 - s_2] = 2 \left[ g\Big( \frac{\rho}{1+\rho}\, \theta^* \Big) - g\Big( \frac{1}{1+\rho}\, \theta^* \Big) \right].
$$

Define $h : [0,1] \to \mathbb{R}$ as $h(\lambda) = g(\lambda \theta^*)$. Since $g$ is $\mu$-strongly convex and $\theta^*$ is the unique minimizer, it follows that $h$ is $\mu \|\theta^*\|^2$-strongly convex and is monotonically non-increasing on $[0,1]$. Thus, by the properties of strong convexity (Rockafellar, 1997), we conclude that

$$
\begin{aligned}
\mathbb{E}[s_1]
&\ge \mathbb{E}[s_2] + 2 \left[ h\Big( \frac{\rho}{1+\rho} \Big) - h\Big( \frac{1}{1+\rho} \Big) \right] \\
&\ge \mathbb{E}[s_2] + \mu \Big( \frac{1-\rho}{1+\rho} \Big)^{2} \|\theta^*\|^2.
\end{aligned}
$$

∎

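The final inequality can be sanity-checked numerically by instantiating a concrete strongly convex $g$. The sketch below assumes $g(\theta) = \frac{\mu}{2}(\theta - \theta^*)^2$ in one dimension, an illustrative choice that satisfies the hypotheses (unique minimizer $\theta^*$, $\mu$-strong convexity), not the paper's actual loss:

```python
mu = 1.0
theta_star = 2.0  # scalar theta* for simplicity

def g(theta):
    # mu-strongly convex with unique minimizer theta_star
    return 0.5 * mu * (theta - theta_star) ** 2

def h(lam):
    # h(lambda) = g(lambda * theta*), non-increasing on [0, 1]
    return g(lam * theta_star)

norm2 = theta_star ** 2
for rho in [0.1, 0.3, 0.5, 0.9]:
    gap = 2.0 * (h(rho / (1.0 + rho)) - h(1.0 / (1.0 + rho)))
    bound = mu * ((1.0 - rho) / (1.0 + rho)) ** 2 * norm2
    # E[s_1] - E[s_2] = gap must dominate the strong-convexity bound
    assert gap >= bound - 1e-12
```

For this choice of $g$ one can also verify by hand that the gap equals $\mu\|\theta^*\|^2 \frac{1-\rho}{1+\rho}$, which dominates the bound $\mu\|\theta^*\|^2 \big(\frac{1-\rho}{1+\rho}\big)^2$ whenever $\rho \le 1$.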
Appendix C Details of Experimental Setup

Table 2: Metadata of all data sets.

| | Adult | Web | Covtype | RCV1 |
| --- | --- | --- | --- | --- |
| number of training data $N$ | 48842 | 119742 | 581012 | 15564 |
| number of features $d$ | 123 | 300 | 54 | 47236 |
| number of classes $l$ | 2 | 2 | 7 | 53 |
| number of clients $M$ | 8 | 20 | 14 | 19 |

Table 3: Hyperparameter setting under the synchronous setting.

| | Adult | Web | Covtype | RCV1 |
| --- | --- | --- | --- | --- |
| learning rate $\eta$ | 0.2 | 0.2 | 0.8 | 1.0 |
| batch size $\tau$ | 2837 | 6173 | 2000 | 500 |
| rank parameter $r$ | 3 | 3 | 5 | 10 |
| regularization parameter $\lambda$ | 0.1 | 0.1 | 0.1 | 0.1 |

Data sets

We use four real-world data sets: Adult (Zeng et al., 2008), Web (Platt, 1998), Covtype (Blackard & Dean, 1999), and RCV1 (Lewis et al., 2004). On each data set, by default, we separate the features across the largest number of clients reported in the literature (Achen, 1982; Wang et al., 2019; Han et al., 2021; Brown et al., 2012). That is, we compare with the baselines under the most challenging settings. These data sets cover both binary and multiclass classification problems. We summarize the metadata as well as the number of clients for all the datasets in Table 2.

Models

For the Adult, Web, and Covtype data sets, we train multinomial logistic regression models, and the local embeddings are computed via a linear model. For the RCV1 data set, we still use the negative log-likelihood loss, but every client locally has a 2-layer perceptron model with 32 hidden neurons and a ReLU activation function for embedding. Under both the synchronous and asynchronous settings, we achieve a test accuracy of 85% on the Adult dataset, 94% on the Web dataset, 72% on the Covtype dataset, and 83% on the RCV1 dataset.

Monte-Carlo sampling

In Section D.1, we describe the adopted sampling method. The VerFedSV is computed exactly when the number of clients $M \le 10$, and approximately using sampling when the number of clients $M > 10$. The number of randomly sampled permutations $K$ is set to $\lceil 100 M \log(M) \rceil$, where $M$ is the number of clients.

Implementation

We implement the VFL algorithms and the corresponding VerFedSV computation schemes described in Sections 5 and 6 in the Julia language (Bezanson et al., 2017). The implementations of both synchronous and asynchronous VFL algorithms, as well as the VerFedSV computation schemes, are attached in the supplementary material. The matrix completion problem in Equation 2 is solved by the Julia package LowRankModels.jl (Udell et al., 2016). All the experiments are conducted on a Linux server with 32 CPUs and 64 GB memory.

Hyperparameter setting (synchronous)

We summarize the hyperparameters under the synchronous setting in Table 3, where $\eta$ and $\tau$ are the learning rate and the batch size used in the FedSGD algorithm, and $r$ and $\lambda$ are the rank parameter and the regularization parameter used in the matrix completion problem in Equation 2.

Hyperparameter setting (asynchronous)

We summarize the hyperparameters under the asynchronous setting in Table 4, where $\eta$ and $\tau$ are the learning rate and the batch size used in the VAFL algorithm, and $\Delta t$ is the communication frequency for clients.

Table 4: Hyperparameter setting under the asynchronous setting.

| | Adult | Web | Covtype | RCV1 |
| --- | --- | --- | --- | --- |
| learning rate $\eta$ | 0.2 | 0.2 | 0.8 | 1.0 |
| batch size $\tau$ | 2837 | 6137 | 2000 | 500 |
| communication frequency $\Delta t$ | 0.01 | 0.01 | 0.01 | 0.01 |

Appendix D Additional Experiments and Results

D.1 Computational efficiency of VerFedSV

We first introduce the method to efficiently estimate the VerFedSV. From Definition 2, we can see that the computational complexity for VerFedSV is exponential in the number of clients $M$. The same situation appears in the computation of the classical Shapley value. How to efficiently estimate the Shapley value has been studied extensively (Ghorbani & Zou, 2019; Jia et al., 2019). Many existing methods can be adopted for the computation of VerFedSV. Here we describe the well-known Monte-Carlo sampling method (Metropolis & Ulam, 1949; Ghorbani & Zou, 2019). We can rewrite the definition of VerFedSV into an equivalent formulation using expectation,

$$
s_m = \mathbb{E}_{\pi \sim \Pi([M])} \left[ \frac{1}{T} \sum_{t=1}^{T} \Big( U_t(\pi(m) \cup \{m\}) - U_t(\pi(m)) \Big) \right], \tag{5}
$$

where $\Pi([M])$ is the uniform distribution over all $M!$ permutations of the set of clients $[M]$ and $\pi(m)$ is the set of clients preceding client $m$ in permutation $\pi$. With this formulation, we can use the Monte Carlo method to obtain an approximation of VerFedSV. More precisely, we can randomly sample $K$ permutations $\pi_1, \ldots, \pi_K$ and approximate the VerFedSV $s_m$ by

$$
\hat{s}_m = \frac{1}{KT} \sum_{k=1}^{K} \sum_{t=1}^{T} \Big[ U_t(\pi_k(m) \cup \{m\}) - U_t(\pi_k(m)) \Big]. \tag{6}
$$

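Equations 5 and 6 translate directly into a permutation-sampling routine. Below is a hedged Python sketch; the paper's actual implementation is in Julia, and the toy round-utility used here is illustrative only.

```python
import math
import random

def verfedsv_monte_carlo(M, T, U_t, K=None, seed=0):
    """Permutation-sampling estimate of Equation 6.
    `U_t(t, S)` is the round-t utility of coalition S (a frozenset).
    K defaults to ceil(100 * M * log(M)), as in the experiments."""
    if K is None:
        K = math.ceil(100 * M * math.log(M))
    rng = random.Random(seed)
    s_hat = [0.0] * M
    clients = list(range(M))
    for _ in range(K):
        rng.shuffle(clients)          # draw a random permutation pi_k
        preceding = set()             # pi_k(m): clients preceding m
        for m in clients:
            for t in range(1, T + 1):
                s_hat[m] += U_t(t, frozenset(preceding | {m})) \
                            - U_t(t, frozenset(preceding))
            preceding.add(m)
    return [v / (K * T) for v in s_hat]

# Toy utility: additive and constant over rounds.
weights = [1.0, 2.0, 3.0]
U_t = lambda t, S: sum(weights[i] for i in S)
print(verfedsv_monte_carlo(3, T=2, U_t=U_t, K=50))  # -> [1.0, 2.0, 3.0]
```

For an additive utility every marginal contribution is exact, so the estimate matches the true values regardless of $K$; for general utilities the error shrinks as $K$ grows, per the Hoeffding argument below.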
By applying Hoeffding's inequality, it can be shown (Jia et al., 2019) that if $K \ge \frac{2 R^2 M}{\epsilon^2} \log\big( \frac{2M}{\delta} \big)$ for some $\epsilon > 0$ and $\delta \in (0, 1)$, where

$$
R := \max_{S \subseteq [M]} \left[ \frac{1}{T} \sum_{t=1}^{T} U_t(S) \right] - \min_{S \subseteq [M]} \left[ \frac{1}{T} \sum_{t=1}^{T} U_t(S) \right],
$$

then we have $\mathbb{P}\big( |\hat{s}_m - s_m| \le \epsilon \big) \ge 1 - \delta$.

In summary, the computation of the estimated VerFedSV $\hat{s}_m$ requires $\mathcal{O}(T M \log(M))$ calls of the utility function $U_t$ for both synchronous and asynchronous federated learning algorithms, and the computational complexity for evaluating $U_t$ is $\mathcal{O}(MN)$. Moreover, according to Equation 6, the computation of the estimated VerFedSVs can be parallelized. We numerically demonstrate this computational complexity bound in the following.

Figure 4: The efficiency of matrix completion and VerFedSV estimation. The left, middle, and right figures show the run time with varying numbers of clients, time-stamps, and data points, respectively.

In this experiment, we test the efficiency of computing VerFedSV. As we can see from Section D.1, the time complexity of estimating VerFedSV is the same for both synchronous and asynchronous VFL algorithms (they only differ in the training time), so we experiment under the synchronous VFL setting. We test the impact of the number of clients $M$, the number of time-stamps $T$, and the number of training data $N$ on the time needed for computing VerFedSV. More precisely, we measure the time, respectively, for solving the matrix completion problem and for estimating VerFedSVs. For consistency, the VerFedSV is computed approximately using Monte Carlo regardless of the number of clients in this experiment. Since the time-varying trends are similar for all datasets, we only present the results on the Adult dataset. The result is shown in Figure 4. First, we focus on the time needed for solving the matrix completion problem in Equation 2. As we can see from Figure 4, the runtime remains stable as the number of clients $M$ changes and scales approximately linearly with the number of time-stamps $T$ and the number of training data $N$, which agrees with our discussion in Section 5. Next, we focus on the time needed for approximating VerFedSV. As illustrated in Figure 4, the runtime exhibits an almost quadratic-logarithmic relationship with the number of clients $M$, and nearly linear relationships with the number of time-stamps $T$ and the volume of training data $N$. This observation aligns with our earlier discussion.

Table 5: Kendall's rank correlation between the results of SHAP and VerFedSV's.

| | Adult | Web | Covtype | RCV1 |
| --- | --- | --- | --- | --- |
| $\operatorname{corr}(\mathrm{SHAP}, \mathrm{VerFedSV_s})$ | 1.0 | 0.73 | 0.94 | 0.65 |
| $\operatorname{corr}(\mathrm{SHAP}, \mathrm{VerFedSV_a})$ | 1.0 | 0.69 | 0.89 | 0.63 |
| $\operatorname{corr}(\mathrm{VerFedSV_s}, \mathrm{VerFedSV_a})$ | 1.0 | 0.80 | 0.94 | 0.80 |

D.2 Effectiveness of VerFedSV

We evaluate the effectiveness of VerFedSV in both the synchronous and asynchronous VFL settings. In other words, we test whether VerFedSV can reflect the importance of clients' local features. We set all clients to have the same communication frequency to eliminate the impact of unbalanced local computational resources. We choose SHAP (Lundberg & Lee, 2017) as the baseline, which is a widely used metric for measuring feature importance in machine learning. More specifically, we first use SHAP to compute the importance scores of all the features, and then, for each client, we ensemble the scores for all the local features. Note that SHAP cannot be directly used in the VFL task, as it requires access to all the local datasets and models and thus violates the VFL settings; here we just use it as a reference. We use Kendall's rank correlation (KRC) (Kendall, 1938) to measure the similarity between the orderings of the scores of SHAP and VerFedSV. As a by-product, we also show the similarity between the scores of VerFedSV in the synchronous and asynchronous settings. We show in Table 5 the KRCs between the results of SHAP and VerFedSV, where $\operatorname{corr}$ denotes the KRC, SHAP denotes the results from SHAP, $\mathrm{VerFedSV_s}$ and $\mathrm{VerFedSV_a}$, respectively, denote the results from VerFedSV in the synchronous and asynchronous settings, and the numbers of clients for the Adult, Web, Covtype, and RCV1 datasets are, respectively, set to $M = 3, 15, 9, 14$. Note that KRC returns a value in $[-1, 1]$, where "1" means the two input score lists have identical rankings and "-1" means the rankings are exactly reversed. The KRCs between SHAP and VerFedSV are all greater than 0.6. The results indicate that VerFedSV can indeed capture the feature importance well. Moreover, the KRCs between $\mathrm{VerFedSV_s}$ and $\mathrm{VerFedSV_a}$ are all greater than 0.8. The results indicate that VerFedSV is consistent under both synchronous and asynchronous settings.

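The KRC comparison above can be reproduced with a compact implementation of Kendall's tau. The following is a stdlib sketch of tau-a (no tie correction); in practice `scipy.stats.kendalltau` would be the usual library choice:

```python
from itertools import combinations

def kendall_rank_correlation(a, b):
    """Kendall's tau-a between two equal-length score lists:
    (#concordant - #discordant) / (n choose 2)."""
    assert len(a) == len(b) and len(a) >= 2
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n_pairs = len(a) * (len(a) - 1) // 2
    return (concordant - discordant) / n_pairs

print(kendall_rank_correlation([1, 2, 3, 4], [1, 2, 3, 4]))  # identical ranking -> 1.0
print(kendall_rank_correlation([1, 2, 3, 4], [4, 3, 2, 1]))  # reversed ranking -> -1.0
```

This matches the interpretation in the text: 1 for identical rankings, -1 for exactly reversed rankings.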
D.3 Ablation Study on Hyper-parameters

Figure 5: Ablation study on the hyper-parameters rank $r$ and regularization $\lambda$.

In this section, we examine the sensitivity of VerFedSV with respect to the hyper-parameters $r$ and $\lambda$ used in the matrix completion algorithm, and offer practical guidance for setting these hyper-parameters.

We first recap the notation and motivation of matrix completion. As mentioned in Section 5, the key challenge in computing VerFedSV is that, in each iteration, for client $m$, we only have access to a mini-batch embedding for the current mini-batch $B$, i.e., $\{ h_i^{(m)} \mid i \in B \}$. However, computing VerFedSV requires full embeddings for all training points in each iteration, i.e., $\{ h_i^{(m)} \mid i \in [N] \}$, where $N$ is the number of training points. Therefore, matrix completion is employed to recover the missing embeddings.

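The completion step itself can be sketched with a plain alternating-least-squares solver. This is an illustrative stand-in for the quadratically regularized problem that LowRankModels.jl solves in the paper; `r` and `lam` mirror the rank and regularization parameters in Table 3:

```python
import numpy as np

def complete_matrix(H_obs, mask, r=3, lam=0.1, iters=50, seed=0):
    """Alternating least squares: find W (T x r) and V (N x r) minimizing
    the squared error on observed entries plus lam * (||W||^2 + ||V||^2)."""
    T, N = H_obs.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((T, r)) * 0.1
    V = rng.standard_normal((N, r)) * 0.1
    I = lam * np.eye(r)
    for _ in range(iters):
        for t in range(T):  # update each row factor on its observed columns
            idx = np.nonzero(mask[t])[0]
            if idx.size:
                Vi = V[idx]
                W[t] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ H_obs[t, idx])
        for i in range(N):  # update each column factor on its observed rows
            idx = np.nonzero(mask[:, i])[0]
            if idx.size:
                Wt = W[idx]
                V[i] = np.linalg.solve(Wt.T @ Wt + I, Wt.T @ H_obs[idx, i])
    return W @ V.T

# Toy check: fit a rank-2 matrix from 60% observed entries.
rng = np.random.default_rng(1)
truth = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(truth.shape) < 0.6
H_hat = complete_matrix(np.where(mask, truth, 0.0), mask, r=2, lam=1e-3)
```

With a small regularizer and a rank matching the ground truth, the solver fits the observed entries closely, which is the role the completed embedding matrix plays in the VerFedSV computation.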
+ Then, we define a deviation metric for the upcoming experiment. For client
6102
+ 𝑖
6103
+ , denote its percentage of VerFedSV relative to the total VerFedSV from all clients as
6104
+ 𝑠
6105
+ 𝑖
6106
+ . The computation process for VerFedSVs involves utilizing matrix completion to estimate full embeddings. Meanwhile, we compute a ground-truth Shapley value, wherein full embeddings are computed in each training round. The percentage of this ground-truth Shapley value for client
6107
+ 𝑖
6108
+ relative to the sum is denoted as
6109
+ 𝑠
6110
+ 𝑖
6111
+ 𝑔
6112
+ ​
6113
+ 𝑑
6114
+ . With
6115
+ 𝑀
6116
+ representing the number of clients, the deviation metric is defined as
6117
+ 1
6118
+ 𝑀
6119
+ ​
6120
+ βˆ‘
6121
+ π‘š
6122
+ =
6123
+ 1
6124
+ 𝑀
6125
+ |
6126
+ 𝑠
6127
+ π‘š
6128
+ βˆ’
6129
+ 𝑠
6130
+ π‘š
6131
+ 𝑔
6132
+ ​
6133
+ 𝑑
6134
+ |
6135
+ 𝑠
6136
+ π‘š
6137
+ 𝑔
6138
+ ​
6139
+ 𝑑
6140
+ . This metric necessitates that
6141
+ 𝑠
6142
+ π‘š
6143
+ 𝑔
6144
+ ​
6145
+ 𝑑
6146
+ be positive to ensure it is meaningful.
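As a concrete illustration of the metric (with made-up estimates, not results from the paper), the deviation can be computed as:

```python
def deviation(s, s_gt):
    """Mean relative deviation of estimated shares s from ground-truth
    shares s_gt; all ground-truth shares must be positive."""
    assert len(s) == len(s_gt) and all(g > 0 for g in s_gt)
    M = len(s)
    return sum(abs(sm - gm) / gm for sm, gm in zip(s, s_gt)) / M

# Hypothetical shares (fractions of the total) for M = 3 clients.
s_gt = [0.4695, 0.1743, 0.3562]   # ground truth (cf. Adult column of Table 6)
s    = [0.46,   0.18,   0.36]     # VerFedSV estimates from one run
print(deviation(s, s_gt))
```

A deviation of 0 means the matrix-completion-based VerFedSV reproduces the ground-truth shares exactly.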

Table 6: The ground-truth percentage of each client's Shapley value relative to the total, denoted by $s_i^{gt}$ (in %).

|          | Adult | Web   | Covtype | RCV1  |
|----------|-------|-------|---------|-------|
| Client 1 | 46.95 | 57.36 | 55.41   | 81.14 |
| Client 2 | 17.43 | 23.21 | 34.54   | 5.10  |
| Client 3 | 35.62 | 19.43 | 10.55   | 13.86 |

For the experimental setup here, the number of clients $M$ is set to 3. Since the clients partition the features of each dataset equally, every client contributes to the FL model; consequently, the ground-truth Shapley values $s_m^{gt}$ are positive for all clients (see Table 6). Two further considerations motivate setting $M = 3$: 1) this experiment does not focus on scalability w.r.t. the number of clients $M$; and 2) we want to avoid Monte Carlo sampling, which could interfere with the results, since VerFedSV is computed exactly when $M \le 10$ under our default setup. The other settings remain consistent with those detailed in Table 2 and Table 3. For each pair of hyper-parameters, the ground-truth Shapley value is computed once, while VerFedSV is computed 5 times. In each independent run, each client has the same set of features but a different random seed, so the matrix completion algorithm may give different estimates of the full embeddings.
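Since $M \le 10$, the Shapley value is computed exactly by coalition enumeration rather than by Monte Carlo sampling. A generic sketch of such an exact computation (the additive utility below is a toy stand-in, not the paper's federated utility):

```python
from itertools import combinations
from math import factorial

def exact_shapley(M, utility):
    """Exact Shapley values by enumerating all coalitions of M players.
    `utility` maps a frozenset of player indices to a real value."""
    phi = [0.0] * M
    for m in range(M):
        others = [p for p in range(M) if p != m]
        for k in range(len(others) + 1):
            # Weight of coalitions of size k that exclude player m.
            w = factorial(k) * factorial(M - k - 1) / factorial(M)
            for S in combinations(others, k):
                S = frozenset(S)
                phi[m] += w * (utility(S | {m}) - utility(S))
    return phi

# Toy additive utility: each player's marginal contribution is constant,
# so the Shapley values recover the individual weights (up to rounding).
weights = {0: 3.0, 1: 1.0, 2: 2.0}
u = lambda S: sum(weights[p] for p in S)
print(exact_shapley(3, u))  # additive game: values ~ [3.0, 1.0, 2.0]
```

The enumeration costs $O(2^M)$ utility evaluations per player, which is why exact computation is only practical for small $M$ such as $M = 3$.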

The left part of Figure 5 illustrates the deviation w.r.t. the rank $r$. The solid line depicts the median deviation over runs, while the shaded area represents the interquartile range (25% to 75% quantiles) of the deviations. Take the Adult dataset as an example: deviations are large for $r < 5$, since an inadequate rank cannot accurately recover the full embeddings, but the deviation decreases rapidly as $r$ increases and stabilizes at a low value once $r \ge 5$.

The default ranks in the previous experiments (see Table 3) were set based on the approximated $\epsilon$-rank (see Figure 1). In practice, however, obtaining the full embeddings is resource-intensive due to communication costs or limited resources on the client side, so approximating the $\epsilon$-rank may be difficult. Instead, the server, which has sufficient computational resources, can test several increasing values of $r$ after collecting the mini-batch embeddings; as our ablation study shows, the resulting Shapley values converge to the ground-truth values.
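One way the server could operationalize this search is to increase the rank until the estimated shares stop changing. A hypothetical sketch (`compute_shares` stands in for rerunning matrix completion and the Shapley computation at a given rank; it is our name, not the paper's):

```python
def select_rank(compute_shares, ranks, tol=0.01):
    """Increase the completion rank until consecutive per-client share
    estimates agree to within `tol`; returns that rank and its shares."""
    prev = None
    for r in ranks:
        shares = compute_shares(r)
        if prev is not None and max(abs(a - b) for a, b in zip(shares, prev)) < tol:
            return r, shares
        prev = shares
    return ranks[-1], prev  # fall back to the largest rank tried

# Toy stand-in: the estimates stabilize once the rank reaches 5.
fake = lambda r: [0.5, 0.3, 0.2] if r >= 5 else [0.5 + 1.0 / r, 0.3, 0.2 - 1.0 / r]
r_star, shares = select_rank(fake, ranks=range(1, 11))
```

This mirrors the ablation's observation that the deviation flattens out once $r$ exceeds the effective rank of the embedding matrix.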

The right part of Figure 5 illustrates the deviation w.r.t. the hyper-parameter $\lambda$. It demonstrates that the performance of VerFedSV remains relatively insensitive across a wide range of $\lambda$, and we can safely set $\lambda = 0.1$ for various datasets.