Eric03 committed
Commit 747cd9d · verified · 1 Parent(s): f054a80

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.

Files changed (50)
  1. .gitattributes +400 -0
  2. 2001.10331/paper.pdf +3 -0
  3. 2002.06478/paper.pdf +3 -0
  4. 2003.05162/paper.pdf +3 -0
  5. 2003.06709/paper.pdf +3 -0
  6. 2004.06660/paper.pdf +3 -0
  7. 2004.12169/main_diagram/main_diagram.drawio +1 -0
  8. 2004.12169/main_diagram/main_diagram.pdf +0 -0
  9. 2004.12169/paper_text/intro_method.md +113 -0
  10. 2004.13313/paper.pdf +3 -0
  11. 2006.00900/paper.pdf +3 -0
  12. 2006.03204/paper.pdf +3 -0
  13. 2006.10350/main_diagram/main_diagram.drawio +1 -0
  14. 2006.10350/main_diagram/main_diagram.pdf +0 -0
  15. 2006.10350/paper_text/intro_method.md +141 -0
  16. 2006.15057/paper.pdf +3 -0
  17. 2008.02676/main_diagram/main_diagram.drawio +1 -0
  18. 2008.02676/main_diagram/main_diagram.pdf +0 -0
  19. 2008.02676/paper_text/intro_method.md +78 -0
  20. 2009.07806/paper.pdf +3 -0
  21. 2010.01625/main_diagram/main_diagram.drawio +1 -0
  22. 2010.01625/main_diagram/main_diagram.pdf +0 -0
  23. 2010.01625/paper_text/intro_method.md +135 -0
  24. 2010.01666/paper.pdf +3 -0
  25. 2010.05324/paper.pdf +3 -0
  26. 2101.00604/paper.pdf +3 -0
  27. 2101.09465/paper.pdf +3 -0
  28. 2101.09868/paper.pdf +3 -0
  29. 2101.11224/main_diagram/main_diagram.drawio +0 -0
  30. 2101.11224/paper_text/intro_method.md +106 -0
  31. 2102.00436/paper.pdf +3 -0
  32. 2102.07936/main_diagram/main_diagram.drawio +1 -0
  33. 2102.07936/main_diagram/main_diagram.pdf +0 -0
  34. 2102.07936/paper_text/intro_method.md +111 -0
  35. 2102.09337/paper.pdf +3 -0
  36. 2102.13045/paper.pdf +3 -0
  37. 2103.01937/paper.pdf +3 -0
  38. 2103.06818/paper.pdf +3 -0
  39. 2103.07969/paper.pdf +3 -0
  40. 2103.08733/paper_text/intro_method.md +155 -0
  41. 2103.13558/paper.pdf +3 -0
  42. 2104.00764/paper.pdf +3 -0
  43. 2104.03945/main_diagram/main_diagram.drawio +1 -0
  44. 2104.03945/main_diagram/main_diagram.pdf +0 -0
  45. 2104.03945/paper_text/intro_method.md +57 -0
  46. 2104.07644/main_diagram/main_diagram.drawio +1 -0
  47. 2104.07644/main_diagram/main_diagram.pdf +0 -0
  48. 2104.07644/paper_text/intro_method.md +21 -0
  49. 2105.00043/main_diagram/main_diagram.drawio +0 -0
  50. 2105.00043/paper_text/intro_method.md +84 -0
.gitattributes CHANGED
@@ -3098,3 +3098,403 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  2402.15911/paper.pdf filter=lfs diff=lfs merge=lfs -text
  2503.03704/paper.pdf filter=lfs diff=lfs merge=lfs -text
  2211.01522/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.08365/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2206.00719/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2203.16995/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.13649/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2201.11460/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.08138/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2201.05610/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2503.13858/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.19961/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.07370/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.02665/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2109.06860/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2109.08075/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2204.11312/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2412.09013/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2408.09191/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.13179/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2208.02012/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2302.08220/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.01211/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.05290/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.05037/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.17300/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.18169/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2003.06709/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.11262/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2509.00123/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2504.21775/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2503.06534/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2207.12824/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.13012/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2010.05324/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.05427/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2411.18936/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2112.06196/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2508.04350/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.08120/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.04546/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2309.03483/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.16393/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.07024/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.06363/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2301.11956/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2303.03323/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2101.09465/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.04547/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.14307/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2105.09043/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2309.17189/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2102.13045/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2502.14394/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.11768/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2404.00681/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2203.03809/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.01869/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2307.05772/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.11530/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2202.08836/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.15943/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.02097/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2309.08963/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2206.10369/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.15683/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.07513/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.09299/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.15763/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.11376/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.04830/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2112.07658/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2502.02454/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.18913/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2307.14928/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.02025/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2409.04053/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2507.05302/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2203.01577/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2504.01603/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2412.08794/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.06380/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2302.01961/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2102.00436/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.12919/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.15340/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2208.08112/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2111.01177/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.04673/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.00746/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2111.14658/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.00441/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.15892/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.17376/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2502.06858/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2411.03561/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2209.12806/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2109.14960/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.12560/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2101.00604/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2112.11542/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.01173/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.08229/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2303.09998/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.11796/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.12741/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2201.10986/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.05183/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.09827/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.09489/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2302.09227/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.13999/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.14547/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.22806/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2506.18463/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.18433/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2408.15676/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.00266/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.04137/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2003.05162/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2006.03204/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.05435/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2505.11740/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2110.03262/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2409.04318/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.15341/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.03722/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.03013/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.00835/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2109.08232/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2107.02306/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.09231/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.14275/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2006.00900/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2411.00786/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.00093/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2209.03798/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2004.06660/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.17152/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.09927/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2107.13077/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.03616/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.07955/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2304.06911/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.02293/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.07277/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2202.07919/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2303.13845/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.10768/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.01434/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.01611/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2110.07310/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2307.05260/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.05726/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.11881/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2009.07806/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2207.14266/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2206.08222/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.16582/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.03301/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2204.13861/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.04245/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.11929/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.03881/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.12525/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.08230/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.10989/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.19000/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2105.05912/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2412.04929/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2304.04514/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2204.07615/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.04656/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.05612/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.01937/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.00124/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2207.02518/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2006.15057/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2206.08330/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.15461/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.16156/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.15391/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2102.09337/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.12590/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2110.15538/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.11860/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.05295/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.11840/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.06867/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2206.14969/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.03893/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.18512/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2510.18118/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2108.02388/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.09869/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2309.17388/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2309.10687/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.03082/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2110.13578/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.18633/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2503.20960/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.06167/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2109.04853/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.06259/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.02813/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.01095/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.17826/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2004.13313/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.06989/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.14280/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.00915/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2108.13393/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.07700/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2201.06001/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2103.06818/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.02406/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.07263/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2503.05977/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.07136/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.00301/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.09708/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2404.16399/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.12532/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2502.00063/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.18889/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2507.07147/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.13103/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.19140/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2002.06478/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.17258/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2502.07456/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2302.01385/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.02772/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.17809/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.14947/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.11076/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2404.01647/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2108.10949/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2101.09868/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2508.02905/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2104.00764/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.07311/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2307.15063/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.01528/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2107.04419/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.00371/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.01786/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.04474/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.03743/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.11710/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.04085/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.09489/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2502.16911/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2508.14101/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2112.12484/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.06227/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.04098/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.08083/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2506.09316/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2302.10186/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.17710/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.20788/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.09235/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2303.17583/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2208.14407/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.02952/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.05457/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.12204/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.11929/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2103.01937/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2201.12360/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.15259/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.12423/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.03885/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.09833/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2309.03750/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2404.17768/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2505.14318/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.07752/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2209.14046/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.04689/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.03801/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.13575/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2010.01666/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.07499/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2408.08671/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.13802/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2303.11316/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2203.15312/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.04356/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2507.07153/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.09793/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2304.03307/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.13245/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2108.03702/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2501.03264/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.16078/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2510.03417/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.10310/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.05588/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2403.07362/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2404.12725/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.08393/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2001.10331/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2408.00989/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.08411/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2411.00173/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.05938/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2106.04427/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2404.03632/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2505.13006/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.01397/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.15873/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2209.00507/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.18806/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.06221/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.19163/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.01889/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2209.03943/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.03202/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2504.14119/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.14078/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2412.15598/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.02412/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.15657/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2202.08625/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.08195/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.06891/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2206.03126/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2309.03989/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.20768/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2206.05897/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2301.11378/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.13039/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.10570/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.09500/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2505.05335/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.15301/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.15632/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2412.20082/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2112.12728/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.07609/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2112.08181/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2505.15065/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.15284/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2108.11618/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2404.10740/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.00295/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2207.02204/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2107.12214/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2309.17382/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2108.13587/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.07612/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2107.00101/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.10472/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.13792/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2401.04679/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2312.02702/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.16708/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.15117/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2412.12583/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2103.07969/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.17650/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.19136/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2103.13558/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2305.15080/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.11981/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2503.18541/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.12192/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.13851/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2306.01733/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.07232/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2206.01078/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2507.17717/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2203.04251/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2503.22537/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2303.10431/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.08753/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2404.13923/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2112.07374/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.13080/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.11818/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2301.07300/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.08567/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2405.20291/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2402.05863/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2205.12374/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2212.00767/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2407.10827/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2106.00162/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2106.02658/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2211.13775/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2310.13297/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2311.17950/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2209.10091/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2412.10570/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2308.07921/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.07754/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2210.15777/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2208.11640/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2105.12245/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2406.04566/paper.pdf filter=lfs diff=lfs merge=lfs -text
+ 2410.06716/paper.pdf filter=lfs diff=lfs merge=lfs -text
2001.10331/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:133e0ae98d797801f7adcd1c27be0c5f35555c47d910fcc31ae895390298e306
+ size 3598448
2002.06478/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:609cb21aa98a3c3339086659741b4b801d2318373756cd1696e8b3f46f733b3c
+ size 3145985
2003.05162/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef00cbaa4b14c9ed9026dd5fac48886b813933de8182a99bf645a520f9c47b4a
+ size 4895585
2003.06709/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8335bad2cf76bef115ba9aab8c285e1c3c54ff8707ca24e5f0b85bcb5f3db892
+ size 5691635
2004.06660/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2fda2cb63a613b40777179481d4ff2dece722c90e09666c82b7e2b38f3750fbf
+ size 638046
2004.12169/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2020-04-04T14:27:47.582Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36" version="12.9.7" etag="Q4Fri3M3hIWQbyRW2uZo" type="google"><diagram id="7TDAxBxdIJS9AnB9Bzbo">7VtLc6M4EP41rp05jIuHeR0Tx7OztZWdqclh9yqjNqYGIw/IcbK/fiVAPAXYDpBkvIfEuCW1RH9fq5sWnunL3dPvEdpv7wmGYKYp+Gmm3800TVN1h31wyXMqMR0zFXiRj1ORWgge/H8hEyqZ9OBjiCsdKSEB9fdVoUvCEFxakaEoIsdqtw0JqrPukQcNwYOLgqb0bx/TbSq1DaWQfwHf24qZVSVr2SHRORPEW4TJsSTSVzN9GRFC06vd0xICbjxhl3Tc55bWfGERhPSUAVo64BEFh+zeZpoZsKG3a3bh8YvvEKHwB0SigSnL27KboM/CMhE5hBi4coU1H7c+hYc9cnnrkXGBybZ0F7BvKrvc+EGwJAGJkrH6ZgOm6zJ5TCPyA0ot2HLWipK3CJtrXAcJ6We08wNOpS8QPAL1XZQv7REiCk+txlFzkzOuAtkBjZ5ZF0FUzU6HZDTVLdNIBccCdVXPoNyWEBcylBHNy3UXWLCLDA45NHorNPyeK2Y3fx6IaPgUJ+5ywzqo2v6paCwwqwEsBPEehUK2/K0Ed7lB1jmXHXKdIRzLCg6NyU5Ti3acMuE65h/prXaNrlJTplHB5LBmVmMbxpb/j0hiczgEnOAKCr2k0Q95V/AigHjeP2kKSItLMP7RKu+r/A5JCDVnyEQo8L2QfXUZX9ny9FvOZsbu4CZr2PkY82mkjlZ1xTH9RDX1qp/YqtPwE80cyU8WU/nJUsJzHjlaeK5U+bo+j4Tz6egVwIa+XXLZtT1Yd5Qmt1RjHG4ZU3HrXsItwD7tIVd1f+zYBYdZrnR3Tlr+BNjnknyN8o1zz8jvu2U38IB+J/SfDx/bbql74lWI63Nn5o1ODDsNJ+sUrwPi/uAmAqlJdyjyWABJ7KmU/hYKt62S2mZNIsz3dN4p8UfRsEcY+6GXDe9CYxhMx2bGH2HM3L+BTxtzZehkS3TzhLBYha4ojsOtW1/YPcu055TcpTH8Qw+inWmH5HZaGTcIv7gDlyj2Jsl33lZQtejua+Sz3R9Rn4Tz1PeTa/kGcKqzd+SE79eYr+Q9H1/fX94xaP0ekM41s2QxfEy+D5S1yMLqVSepmqP3Z6m6LEt1jJdnqeb4WapEBcf0UwYP15E9pTa0FKWEE3Nb2VP/mYlu+1b6NcDcq25hQyJodayqFlEWOJn9b2qfGjJNbLfrX3A8267SAo0owYwQ1cpVnZcFuMtyvXcc0gYN41cWHFRLnRvV8GDqsvCwaIYHY4AihiUJDzUIPGaN/UVmyI5Y0FqoUs42j1Kr8BiKxDi2pHqoKgMET7v/BERW+OssztyBS/AYJyaNkxFbW+umOZOejFRPV7ABNl68mOpWN5YCfnGwN+F5idMP5NlVtlU4FZDW6sZctRxx1YBEYG/ckYHUFlUg1aZLjgakmOtcl+yqxU8H5K1tLIyTgNzYLrhjA2m/Hoxqf9yBEN/w8/gipJePh7vNArhxSN9rlNJ9G5LbFrIIAkT9x6p6mS2yGb4RP8lGBUh61epaLUzF5BC5kA0qH87X9dhKpx7K0kKgDT0JMPlNn4aV7D2AdqzcAMWx7/4qcPWY+VK4zPHgkr0bcB2u5Vi1XPpStDTd6dE0IF6yM+qrcS/HGsa9anpGdC/Zse9LkgSM4m3SV+3NC8TD7hgpwSTPU6qsGjkh11seNqfhul5/W2FhaZexvanJqWkakO8nVAh+WchUS69XZxaWehloMl2OOhpsstLFtcDWOHExRRnp7ESgoUmzR4NMVqS4Hsi0qqFt5cLNsalJ006CjBkWPZe67XmH+JwlZxMVFEhVXkoITVbruFpCmJY1ECFMp6ZpOB/WZHWNWnlqlRQVlQf4eYDQ5Ucx31AUT/AOPajYAEtWl3JMS0ctleJJ36EXTj9FDUp40//ulZhet+eXPi03dBkNXRe7GPta/OIl7V78bkhf/Qc=</diagram></mxfile>
2004.12169/main_diagram/main_diagram.pdf ADDED
Binary file (25.4 kB).
2004.12169/paper_text/intro_method.md ADDED
@@ -0,0 +1,113 @@
# Introduction

We design a system that examines source code changes and how they relate to the existing comment in order to produce an updated comment that reflects the code modifications. Since C~old~ and C~new~ are closely related, training a model to directly generate C~new~ risks having it learn to just copy C~old~. To explicitly inform the model of edits, we define the target output as a *sequence of edit actions*, C~edit~, to indicate how the existing comment should be revised (e.g., for C~old~=`ABC`, C~edit~=$\texttt{\small<Delete>A<DeleteEnd>}$ implies that `A` should be deleted to produce C~new~=`BC`). Furthermore, in order to better correlate these edits with changes in the code, we unify M~old~ and M~new~ into a single *diff* sequence that explicitly identifies code edits, M~edit~. We discuss in more detail how M~edit~ and the training C~edit~ are constructed in §[4](#sec:edits){reference-type="ref" reference="sec:edits"}.

<figure id="fig:architecture" data-latex-placement="t">
<img src="images/revised_architecture_2.PNG" />
<figcaption>High-level overview of our system.</figcaption>
</figure>

Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} shows a high-level overview of our system. We design an encoder-decoder architecture consisting of three components: a two-layer, bi-directional GRU [@ChoGRU] that encodes the code changes (M~edit~), another two-layer, bi-directional GRU that encodes the existing comment (C~old~), and a GRU that is trained to decode a sequence of edit actions (C~edit~).[^2] We concatenate the final states of the two encoders to form a vector that summarizes the content in M~edit~ and C~old~, and use this vector as the initial state of the decoder. The decoder essentially has three subtasks: (1) identify edit locations in C~old~; (2) determine parts of M~edit~ that pertain to making these edits; and (3) apply updates in the given locations based on the relevant code changes. We rely on an attention mechanism [@Luong2015Attention] over the hidden states of the two encoders to accomplish the first two goals. At every decoding step, rather than aligning the current decoder state with all the encoder hidden states jointly, we align it with the hidden states of the two encoders separately. We concatenate the two resulting context vectors to form a unified context vector that is used in the final step of computing attention, ensuring that we incorporate pertinent content from both input sequences. Consequently, the resulting attention vector carries information relating to the current decoder state as well as knowledge aggregated from relevant portions of C~old~ and M~edit~.

Using this information, the decoder performs the third subtask, which requires reasoning across language representations. Specifically, it must determine how the source code changes that are relevant to the current decoding step should manifest as natural language updates to the relevant portions of C~old~. At each step, it decides whether it should begin a new edit action by generating an edit start keyword, continue the present action by generating a comment token, or terminate the present action by generating an end-edit keyword. Because actions relating to deletions will include tokens in C~old~, and actions relating to insertions are likely to include tokens in M~edit~, we equip the decoder with a pointer network [@VinyalsPointer] to accommodate copying tokens from C~old~ and M~edit~. The decoder generates a sequence of edit actions, which will have to be parsed into a comment (§[4.4](#sec:post-processing){reference-type="ref" reference="sec:post-processing"}).

Here we define the edit lexicon that is used to construct the input code edit sequence, M~edit~, and the target comment edit sequence, C~edit~.

We use difflib[^3] to extract code edits and target comment edits. Both the input code edit sequence and the target comment edit sequence consist of a series of edit actions; each edit action is structured as `<Action> [span of tokens] <ActionEnd>`.[^4]

We define four types of edit actions: `Insert`, `Delete`, `Replace`, and `Keep`. Because the `Replace` action must simultaneously incorporate distinct content from two versions (i.e., tokens in the old version that will be replaced, and tokens in the new version that will take their place), it follows a slightly different structure:

$$\begin{array}{l}
\texttt{\small <ReplaceOld> [span of old tokens]}\\
\texttt{\small <ReplaceNew> [span of new tokens]}\\
\texttt{\small <ReplaceEnd>}
\end{array}$$
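
To make the lexicon concrete, here is a minimal sketch of edit extraction with Python's difflib (which the paper uses); the helper is our own illustrative reconstruction, not the released implementation:

```python
import difflib

def extract_edits(old_tokens, new_tokens):
    """Illustrative: map difflib opcodes onto the edit lexicon above."""
    edits = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(
            a=old_tokens, b=new_tokens).get_opcodes():
        if tag == "equal":
            edits += ["<Keep>", *old_tokens[i1:i2], "<KeepEnd>"]
        elif tag == "insert":
            edits += ["<Insert>", *new_tokens[j1:j2], "<InsertEnd>"]
        elif tag == "delete":
            edits += ["<Delete>", *old_tokens[i1:i2], "<DeleteEnd>"]
        else:  # "replace"
            edits += ["<ReplaceOld>", *old_tokens[i1:i2],
                      "<ReplaceNew>", *new_tokens[j1:j2], "<ReplaceEnd>"]
    return edits

# extract_edits(list("ABC"), list("BC"))
# -> ['<Delete>', 'A', '<DeleteEnd>', '<Keep>', 'B', 'C', '<KeepEnd>']
```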

We extract the edits between M~old~ and M~new~ using the edit lexicon to construct M~edit~, the code edit sequence used as input in one of the encoders. Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (top right) shows the M~edit~ corresponding to code changes in Figure [1](#fig:rajawali){reference-type="ref" reference="fig:rajawali"}.

In contrast to line-level code *diffs* that are commonly used for commit message generation [@LoyolaETAL17GeneratingDescriptionFromSourceCodeChanges; @JiangCommit; @XuCommit], this representation allows us to explicitly capture more fine-grained edits. While we could exploit the abstract syntax tree (AST) structure of source code and represent the changes between the ASTs corresponding to the two versions of code, prior work suggests that such techniques do not always lead to improved performance [@yin19iclr]. We leave it to future work to investigate how the AST structure can be leveraged for this task.

We identify the changes between C~old~ and C~new~ to construct C~edit~, the target comment edit sequence. During inference, the output comment is produced by parsing the predicted edit sequence (§[4.4](#sec:post-processing){reference-type="ref" reference="sec:post-processing"}). We introduce a slightly modified set of specifications that disregards the `Keep` type when constructing the sequence of edit actions, referred to as the *condensed edit sequence*.

The intuition for disregarding `Keep` and the span of tokens to which it applies is that we can simply copy the content that is retained between C~old~ and C~new~, instead of generating it anew. By doing post-hoc copying, we simplify learning for the model since it only has to learn *what to change* rather than also having to learn *what to keep*.

We design a method to deterministically place edits in their correct positions in the absence of `Keep` spans. For the example in Figure [1](#fig:rajawali){reference-type="ref" reference="fig:rajawali"}, the raw sequence $\texttt{\small<Insert>in degrees<InsertEnd>}$ does not encode information as to where "in degrees" should be inserted. To address this, we bind an insert sequence with the minimum number of words (aka "anchors") such that the place of insertion can be uniquely identified. This results in the structure that is shown for C~edit~ in Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}. Here "angle" serves as the anchor point, identifying the insert location. Following the structure of `Replace`, this sequence indicates that "angle" should be replaced with "angle in degrees," effectively inserting "in degrees" and keeping "angle" from C~old~, which appears immediately before the insert location. See Appendix [14](#appendix:comment-edit-lexicon){reference-type="ref" reference="appendix:comment-edit-lexicon"} for details on this procedure.

Since the decoder is trained to predict a sequence of edit actions, we must align it with C~old~ and copy unchanged tokens in order to produce the edited comment. We denote the predicted edit sequence as C'~edit~ and the corresponding parsed output as C'~new~. This procedure entails simultaneously following pointers, left-to-right, on C~old~ and C'~edit~, which we refer to as P~old~ and P~edit~ respectively. P~old~ is advanced, copying the current token into C'~new~ at each point, until an edit location is reached. The edit action corresponding to the current position of P~edit~ is then applied, and the tokens from its relevant span are copied into C'~new~ if applicable. Finally, P~edit~ is advanced to the next action, and P~old~ is also advanced to the appropriate position in cases involving deletions and replacements. This process repeats until both pointers reach the end of their respective sequences.
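
As an illustration of this parsing step, here is a simplified sketch for the *uncondensed* lexicon, where `Keep` spans are explicit (the tuple encoding and function are our own; the condensed variant additionally has to resolve anchor words):

```python
def parse_edit_sequence(old_tokens, edit_actions):
    """Replay (action, old_span, new_span) tuples against C_old to get C'_new."""
    out, p_old = [], 0
    for action, old_span, new_span in edit_actions:
        if action == "Keep":
            out += old_tokens[p_old:p_old + len(old_span)]
            p_old += len(old_span)
        elif action == "Insert":
            out += new_span
        elif action == "Delete":
            p_old += len(old_span)   # skip the deleted span of C_old
        elif action == "Replace":
            out += new_span          # emit the replacement tokens
            p_old += len(old_span)   # and advance past the old span
    return out
```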

We extract linguistic and lexical features for tokens in M~edit~ and C~edit~, many of which were shown to improve learning associations between `@return` comments and source code entities in our prior work [@panthaplackel2020associating]. We incorporate these features into the network as one-hot vectors that are concatenated to M~edit~ and C~edit~ embeddings and then passed through a linear layer. These vectors are provided as inputs to the two encoders. All sequences are subtokenized, e.g., `camelCase` $\rightarrow$ `camel`, `case`.
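
A minimal sketch of such subtokenization (our own illustration, covering camelCase and snake_case):

```python
import re

def subtokenize(token):
    """Split camelCase and snake_case identifiers into lowercased subtokens."""
    pieces = []
    for part in token.split("_"):  # snake_case
        pieces += re.findall(r"[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+", part)  # camelCase
    return [p.lower() for p in pieces]

# subtokenize("camelCase") -> ["camel", "case"]
```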

**Features specific to M~edit~:** We aim to take advantage of common patterns among different types of code tokens by incorporating features that identify certain categories: edit keywords, Java keywords, and operators. If a token is not an edit keyword, we have indicator features for whether it is part of an `Insert`, `Delete`, `ReplaceNew`, `ReplaceOld`, or `Keep` span. We believe this will be particularly helpful for longer spans since edit keywords only appear at either the beginning or end of a span. Finally, we include a feature to indicate whether the token matches a token in C~old~. This is intended to help the model identify locations in M~edit~ that may be relevant to editing C~old~.

**Features specific to C~old~:** We include whether a token matches a code token that is inserted, deleted, or replaced in M~edit~. These help align parts of C~old~ with code edits, assisting the model in determining where edits should be made. In order to exploit common patterns for different types of tokens, we incorporate features that identify whether the token appears more than once in C~old~ or is a stop word, and its part-of-speech.

**Shared features:** We include whether the token is a subtoken that was originally part of a larger token and its index if so (e.g., split from `camelCase`, `camel` and `case` are subtokens with indices 0 and 1 respectively). These features aim to encode important relationships between adjacent tokens that are lost once the bodies of the code and comment are transformed into single, subtokenized sequences. Additionally, because we focus on `@return` comments, we introduce features intended to guide the model in identifying relevant tokens in M~edit~ and C~old~. Namely, we include whether a given token matches a token in a `return` statement that is unique to M~old~, unique to M~new~, or present in both. Similarly, we indicate whether the token matches a token in the subtokenized `return` type that is unique to M~old~, unique to M~new~, or present in both.

Reranking allows the incorporation of additional priors that are difficult to back-propagate, by re-scoring candidate sequences during beam search [@neubig-etal-2015-neural; @ko-etal-2019-linguistically; @kriz-etal-2019-complexity]. We incorporate two heuristics to re-score the candidates: 1) generation likelihood and 2) similarity to C~old~. These heuristics are computed after parsing the candidate edit sequences (§[4.4](#sec:post-processing){reference-type="ref" reference="sec:post-processing"}).

**Generation likelihood.** Since the edit model is trained on edit actions only, it does not globally score the resulting comment in terms of aspects such as fluency and overall suitability for the updated method. To this end, we make use of a pre-trained comment generation model (§[8.2](#sec:generation-model){reference-type="ref" reference="sec:generation-model"}) that is trained on a substantial amount of data for generating C~new~ given only M~new~. We compute the length-normalized probability of this model generating the parsed candidate comment, C'~new~, (i.e., $P(C'\textsubscript{new}{}|M\textsubscript{new}{})^{1/N}$ where $N$ is the number of tokens in C'~new~). This model gives preference to comments that are more likely for M~new~ and are more consistent with the general style of comments.[^5]
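
Computed in log space for numerical stability, this length-normalized score is simply (our own one-liner):

```python
import math

def length_normalized_likelihood(token_log_probs):
    """P(C'_new | M_new)^(1/N) for a candidate with N tokens."""
    return math.exp(sum(token_log_probs) / len(token_log_probs))
```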

**Similarity to C~old~.** []{#sec:sim-old-comment label="sec:sim-old-comment"} So far, our model is mainly trained to produce accurate edits; however, we also follow intuitions that edits should be minimal (as an analogy, the use of Levenshtein distance in spelling correction). To give preference to predictions that accurately update the comment with minimal modifications, we use similarity to C~old~ as a heuristic for reranking. We measure similarity between the parsed candidate prediction and C~old~ using METEOR [@BanerjeeEtAL2005].

**Reranking score.** The reranking score for each candidate is a linear combination of the original beam score, the generation likelihood, and the similarity to C~old~ with coefficients 0.5, 0.3, and 0.2 respectively (tuned on validation data).
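
Written out, the score assigned to a parsed candidate C'~new~ is:

$$\mathit{score}(C'_{\text{new}}) = 0.5\, s_{\text{beam}} \;+\; 0.3\, P(C'_{\text{new}} \mid M_{\text{new}})^{1/N} \;+\; 0.2\, \mathrm{METEOR}(C'_{\text{new}},\, C_{\text{old}})$$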

:::: center
::: {#table:partition-stats}
|           |                      | **Train** | **Valid** | **Test** |
|:----------|:---------------------|----------:|----------:|---------:|
|           | Examples             |     5,791 |       712 |      736 |
|           | Projects             |       526 |       274 |      281 |
|           | Edit Actions         |     8,350 |     1,038 |    1,046 |
|           | Sim (M~old~, M~new~) |     0.773 |     0.778 |    0.759 |
|           | Sim (C~old~, C~new~) |     0.623 |     0.645 |    0.635 |
| **Code**  | Unique               |     7,271 |     2,473 |    2,690 |
|           | Mean                 |      86.4 |      87.4 |     97.4 |
|           | Median               |        46 |        49 |       50 |
| **Comm.** | Unique               |     4,823 |     1,695 |    1,737 |
|           | Mean                 |      10.8 |      11.2 |     11.1 |
|           | Median               |         8 |         9 |        9 |

: Number of examples, projects, and edit actions; average similarity between M~old~ and M~new~ as the ratio of overlap; average similarity between C~old~ and C~new~ as the ratio of overlap; number of unique code tokens and mean and median number of tokens in a method; and number of unique comment tokens and mean and median number of tokens in a comment.
:::
::::

We extracted *examples* from popular, open-source Java projects using GitHub's commit history. We extract pairs of the form (method, comment) for the same method across two consecutive commits where there is a simultaneous change to both the code and comment. This creates somewhat noisy data for the task of comment update; Appendix [15](#appendix:filtering){reference-type="ref" reference="appendix:filtering"} describes filtering techniques to reduce this noise. We first tokenize M~old~ and M~new~ using the javalang[^6] library. We subtokenize based on camelCase and snake_case, as in previous work [@allamanis2016convolutional; @Alon2019Code2Seq; @Fernandes2019StructuredNeural]. We then form M~edit~ from the subtokenized forms of M~old~ and M~new~. We tokenize C~old~ and C~new~ by splitting by space and punctuation. We remove HTML tags and the "\@return" that precedes all comments, and also subtokenize tokens since code tokens may appear in comments as well. The gold edit action sequence, C~edit~, is computed from these processed forms of C~old~ and C~new~.

To avoid having examples that closely resemble one another in training and test, the projects in the training, test, and validation sets are disjoint, similar to prior work. Table [1](#table:partition-stats){reference-type="ref" reference="table:partition-stats"} gives dataset statistics. Of the 7,239 examples in our final dataset, 833 were extracted from the diffs used in prior work. Including code and comment tokens that appear at least twice in the training data as well as the predefined edit keywords, the code and comment vocabulary sizes are 5,945 and 3,642 respectively.

# Method

We evaluate our approach against multiple rule-based baselines and comment generation models.

**Copy:** Since much of the content of C~old~ is typically retained in the update, we include a baseline that merely copies C~old~ as the prediction for C~new~.

**Return type substitution:** The return type of a method often appears in its `@return` comment. If the return type of M~old~ appears in C~old~ and the return type is updated in the code, we substitute the new return type while copying all other parts of C~old~. Otherwise, C~old~ is copied as the prediction.

**Return type substitution w/ null handling:** Extending the previous method, we also check whether the token `null` is added to either a `return` statement or an `if` statement in the code. If so, we copy C~old~ and append the string *or null if null*; otherwise, we simply copy C~old~. This baseline addresses a pattern we observed in the data in which ways to handle `null` input, or cases that could result in `null` output, were added.
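
A sketch of the return type substitution baseline (the function and argument names are our own):

```python
def return_type_substitution(old_comment, old_type, new_type):
    """Swap the return type if it changed and appears verbatim in the old
    comment; otherwise fall back to copying the old comment."""
    if old_type != new_type and old_type in old_comment:
        return old_comment.replace(old_type, new_type)
    return old_comment

# return_type_substitution("the angle as a float", "float", "double")
# -> "the angle as a double"
```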

One of our main hypotheses is that modeling edit sequences is better suited for this task than generating comments from scratch. However, a counterargument could be that a comment generation model could be trained on substantially more data, since it is much easier to obtain parallel data of the form (method, comment) without the constraint of simultaneous code/comment edits. Hence the power of large-scale training could outweigh edit modeling. To this end, we compare with a generation model trained on 103,473 method/`@return` comment pairs collected from GitHub.

We use the same underlying neural architecture as our edit model to ensure that any difference in results stems from the amount of training data and the use of edit representations alone: a two-layer, bi-directional GRU that encodes the sequence of tokens in the method, and an attention-based GRU decoder with a copy mechanism that decodes a sequence of comment tokens. We expect that more complicated architectures, e.g., tree-based [@Alon2019Code2Seq] and graph-based [@Fernandes2019StructuredNeural] encoders which exploit AST structure, can be applied to both an edit model and a generation model; we leave this for future work.

Evaluation is based on the 736 (M~new~, C~new~) pairs in the test set described in §[7](#sec:data){reference-type="ref" reference="sec:data"}. We ensure that the projects from which training examples are extracted are disjoint from those in the test set.

In order to allow the generation model to exploit the old comment, this system uses similarity to C~old~ (cf. §[\[sec:sim-old-comment\]](#sec:sim-old-comment){reference-type="ref" reference="sec:sim-old-comment"}) as a heuristic for reranking the top candidates from the previous model. The reranking score is a linear combination of the original beam score and the METEOR score between the candidate prediction and C~old~, both with coefficient 0.5 (tuned on validation data).

Model parameters are identical across the edit model and generation model, tuned on validation data. Encoders have hidden dimension 64, the decoder has hidden dimension 128, and the dimension for code and comment embeddings is 64. The embeddings used in the edit model are initialized using the pre-trained embedding vectors from the generation model. We use a dropout rate of 0.6, a batch size of 100, an initial learning rate of 0.001, and the Adam optimizer. Models are trained to minimize negative log likelihood, and we terminate training if the validation loss does not decrease for ten consecutive epochs. During inference, we use beam search with beam width 20.
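
For concreteness, a minimal PyTorch skeleton matching the stated dimensions might look as follows; this is our own sketch, omitting the attention, pointer network, and feature inputs that the full model uses:

```python
import torch.nn as nn

class EditModelSkeleton(nn.Module):
    """Dimensions from the paragraph above; attention, copy mechanism, and
    linguistic feature inputs are omitted for brevity."""
    def __init__(self, code_vocab_size, comment_vocab_size,
                 emb_dim=64, enc_hidden=64, dec_hidden=128, dropout=0.6):
        super().__init__()
        self.code_emb = nn.Embedding(code_vocab_size, emb_dim)
        self.comment_emb = nn.Embedding(comment_vocab_size, emb_dim)
        # Two-layer, bi-directional GRU encoders for M_edit and C_old.
        self.code_encoder = nn.GRU(emb_dim, enc_hidden, num_layers=2,
                                   bidirectional=True, batch_first=True,
                                   dropout=dropout)
        self.comment_encoder = nn.GRU(emb_dim, enc_hidden, num_layers=2,
                                      bidirectional=True, batch_first=True,
                                      dropout=dropout)
        # Decoder state (128) initialized from the two concatenated
        # encoder summaries (64 + 64).
        self.decoder = nn.GRUCell(emb_dim, dec_hidden)
        self.vocab_proj = nn.Linear(dec_hidden, comment_vocab_size)

# Training setup per the text: Adam, lr=0.001, batch size 100, NLL loss,
# early stopping after 10 stale epochs, beam width 20 at inference.
```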
2004.13313/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:028be97f0b86b6e150d11bbbe8115e3e482620e8c8cb63287a009a8ae3449444
+ size 889795
2006.00900/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67fcfa9319af3943199b82187f61d389680880b81489080b3fb870ba8f13c0e8
+ size 1033973
2006.03204/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:906543b9aaf14d63791ab4d5611596f003e0e08be5430e3569e6d416820f246a
+ size 13393396
2006.10350/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2020-05-31T16:32:27.947Z" agent="5.0 (X11)" version="13.1.9" etag="yTYLOybSN849eGyEBSIa" type="device"><diagram id="HrAtwnRbaX7C9BIMAuJX">7VtLj6M4EP41SDuHtADbPI6dzOzuZaWR+jBzGzHBnaAlOEOcTnp//ZpgHn4w0NN2WtMhUhQoirL56quyXTgOWO3Of5XJfvsPSXHu+G56dsBHx/dBHMfsp5I81xI/BGEt2ZRZWsu8TvCQ/Ye50OXSY5big6BICclptheFa1IUeE0FWVKW5CSqPZJcbHWfbLAieFgnuSr9kqV0W0sj5Hbyv3G22TYtey6/sksaZW7isE1ScqpFFx3wyQGrkhBaH+3OK5xX6DW41Ib+HLjadqzEBZ1yg1/f8JTkR/5svF/0uXnY0zaj+GGfrKvzE/OoA5ZbusvZmccOD7Qk/7YwVJJHUlDuM+S3GiuSk5KJClLgSinL80bk+ODx8mFy3h9cUnwefCavRYpxDJMdpuUzU+E3LPzGEZxfi4Y3p85Zngu4cNvzFIJcmHCGbFrrHYjsgOOoxxSMY4qL9L7iYYdHD9GSHIsUV8bcQTQbvEELGU4Vzo4C1sMD6dDgshLnCc2eRPM6hHgLn0nGGu784aHwzu19IBLcA0F8F/c+CIktHMixXGNutM/lF7bDmpEs06TcYKpYZq5Jnntq+0rh8LMHDELxiaA70lPphrZrHcXqTnSEa902iYNw5qCAODNkhXOyXWsc84H0ACMUk/WNMwxpGBbkzGvLNHsSmBb8OFbD2YVFi8OFRvdMAfn784U5zXV2tLn8otUf7EuzPMVOuPzshB/Z6YfGPOvYpQWuLNOajRtUN0L9ZPzhoiTPNgU7XTPKYiZfVqNQxkb9e35hl6Vp1cxSNyT+WsS8bqATXey7sTLMRZqQCgyMccGcX4RwAzCykl8Uu5LvDGUXAKRmRrKLoi+ovzq3hDO7RLhDYIddsl1L7IqkZsbYJeubZVeksqsecb5jmtQjze2OKaC55Q0GlXgOeyEOYCDGgSeDPDXOZUOW4hwifX8Hu4Wsxnmz6p/5xPFG0oL4l/kkG7LEJ4T0/R3sFtJ2yxifPKN8SpPD9nKiVtF+BzLBAApgI9fMnGQgKdjOVcgzTBZ/Jks/YQRWyDIQ8bYTkXGyzCVjEfBALqgaWvAodu3wJUAvqwcr+obpNVeDRbxD6QWRKXopdu3QKwykZkbopegbppfdUvBuLv7q33JKNTjdSj2INDEUD4fL1JW6p6v/mnO5N7tcW5uBYm1G81obapKmb6A042lKssxRX7854fK7468qpzavaW7XQTAQp7mh6iBbtTNPU9U0GJE/5ojUOtzVrz9GIrLdD/Qqj+vKpeY8ns4e13o8jAWPL5rZT3/mClWXhwY83iRyJQnvbj3xBoHolWu+tfB1ZcF5+nvdvQ+Lpt52hclvs3qeHX5Nh/tSvfmayx3tls7Z45Y9jqRtAFf1uK5kNk+ubA/jkf92kytdFUtyxW+3FR764ryo3U42Mi+KAgOATtgj+CJAwVRATeQeACTg1JWdrkAdm2DilO1veZ7tD0Oh24MwOezrf8A8ZucqmCdgKvGx3ktvEWsIoztpszIKFLRjDdgGsNaVTW4L6wjcqVnWEtpTdni9a7ShpmZrB2ugKRa8Z6wRct8si4ApO4PeN9ZXzCJAt/6+KbStZRF22v33tH4V3P2FF3z6Hw==</diagram></mxfile>
2006.10350/main_diagram/main_diagram.pdf ADDED
Binary file (11.5 kB).
2006.10350/paper_text/intro_method.md ADDED
@@ -0,0 +1,141 @@
1
+ # Introduction
2
+
3
+ Kernel methods provide non-linear/non-parametric extensions of many classical linear models in machine learning and statistics [44, 48]. The data are embedded via a non-linear map into a high dimensional feature space, so that linear models in such a space effectively define non-linear models in the original space. This approach is appealing, since it naturally extends to models with infinitely many features, as long as the inner product in the feature space can
+
+ <sup>1</sup> <https://github.com/FalkonML/falkon>
+
+ ![](_page_1_Figure_0.jpeg)
+
+ Figure 1: Benchmarks of kernel solvers on large scale datasets with millions and billions of points (see Section 4). Our approach (red and yellow lines) consistently achieves state of the art accuracy in minutes.
+
+ be computed. In this case, the inner product is replaced by a positive definite kernel, and infinite dimensional models are reduced to finite dimensional problems. The mathematics of kernel methods has its foundation in the rich theory of reproducing kernel Hilbert spaces [46], and the connection to linear models provides a gateway to deriving sharp statistical results [52, 10, 53, 6, 4, 55]. Further, kernel methods are tightly connected to Gaussian processes [39], and have recently been used to understand the properties of deep learning models [22, 28]. It is not a surprise that kernel methods are among the most theoretically studied models. From a numerical point of view, they reduce to convex optimization problems that can be solved with strong guarantees. The corresponding algorithms provide excellent results on a variety of datasets, but most implementations are limited to problems of small/medium size, see discussion in [51], Chapter 11. Most methods require handling a kernel matrix quadratic in the sample size. Hence, dealing with datasets of size 10<sup>4</sup> to 10<sup>5</sup> is challenging, while larger datasets are typically out of reach. A number of approaches have been considered to alleviate these computational bottlenecks. Among others, random features [37, 38, 65, 25, 12, 11] and the Nyström method are often used [60, 49], see also [14, 24, 17, 3, 66, 9]. While different, both these approaches consider random projections to reduce the problem size and hence computational costs. Renewed interest in approximate kernel methods was also spurred by recent theoretical results proving that computational gains can possibly be achieved with no loss of accuracy, see e.g. [26, 54, 40, 4, 41, 5].
+
+ In this paper, we investigate the practical consequences of this line of work, developing and testing large scale kernel methods that can run efficiently on billions of points. Following [42], we use a Nyström approach to reduce the problem size and also to derive a preconditioned gradient solver for kernel methods. Indeed, we focus on smooth loss functions where such approaches are natural. Making these algorithmic ideas practical and capable of exploiting the GPU requires developing a number of computational solutions, borrowing ideas not only from optimization and numerical analysis but also from scientific and high performance computing [27, 2, 7]. Indeed, we design preconditioned conjugate gradient solvers that take full advantage of both GPU acceleration and parallelization with multiple GPUs, implementing out-of-core variants of common linear algebra operations to guarantee optimal hardware utilization. We further optimize the numerical precision of different operations and investigate ways to perform matrix-vector multiplications most efficiently. The corresponding implementation is then tested extensively on a number of datasets ranging from millions to billions of points. For comparison, we focused on other available large scale kernel implementations that do not require data splitting or multiple machines. In particular, we consider Eigenpro [29], which is an approach similar to the one we propose, and GPyTorch [15] and GPflow [57], which come from the Gaussian process literature. While these latter solutions also allow for uncertainty quantification, we limit the comparison to prediction. We perform a systematic empirical evaluation running an extensive series of tests. Empirical results show that indeed our approach can process huge datasets in minutes and obtain state-of-the-art performance, comparing favorably to other solutions, both in terms of efficiency and accuracy. More broadly, these results confirm and extend the observations made in [28, 29], that kernel methods can now be seamlessly and effectively deployed on large scale problems. To make these new solutions readily available, the corresponding code is distributed as an easy-to-use library developed on top of PyTorch [35].
+
+ The rest of the paper is organized as follows. In Section 2, we provide some background on the considered approaches. In Section 3, we detail the main algorithmic solutions in our implementation, whereas the last section is devoted to assessing the practical advantages.
+
+ Supervised learning is the problem of inferring an input-output function, given finitely many input-output pairs. In statistical learning theory, the data $(x_i, y_i)_{i=1}^n$ are assumed to be sampled independently from a probability distribution $\rho$, and a loss function $\ell(y, f(x))$ is fixed, measuring the cost of predicting $f(x)$ in place of $y$. The examples we consider are the squared loss $(y - f(x))^2$ and the logistic loss $\log(1 + e^{-yf(x)})$. Then, a good function $f$ should minimize the expected loss
+
+ $$L(f) = \int \ell(f(x), y) d\rho(x, y). \tag{1}$$
+
+ A basic approach to solve the problem is empirical risk minimization, based on the idea of replacing the above expectation with an empirical average. Further, the search for a solution needs to be restricted to a suitable hypothesis space, a simple example being linear functions $f(x) = w^{\top}x$. Kernel methods extend this idea by considering a non-linear feature map
+
+ $x \mapsto \Phi(x) \in \mathcal{F}$ and functions of the form $f(x) = w^{\top} \Phi(x)$ . Here $\Phi(x) \in \mathcal{F}$ can be seen as a feature representation in some space of features. The function space $\mathcal{H}$ thus defined is called reproducing kernel Hilbert space [45]. If we denote by $||f||_{\mathcal{H}}$ its norm then regularized empirical risk minimization is given by
+
+ $$\hat{f}_{\lambda} = \underset{f \in \mathcal{H}}{\operatorname{arg\,min}} \frac{1}{n} \sum_{i=1}^{n} \ell(f(x_i), y_i) + \lambda ||f||_{\mathcal{H}}^{2}, \tag{2}$$
+
+ where the penalty term $||f||_{\mathcal{H}}$ is meant to prevent possible instabilities and $\lambda \geq 0$ is a hyperparameter. From a statistical point of view the properties of the estimator $\hat{f}_{\lambda}$ are well studied, see e.g. [52, 6, 47]. Under basic assumptions, for $\lambda = \mathcal{O}(1/\sqrt{n})$ , it holds with high probability that
+
+ $$L(\hat{f}_{\lambda}) - \inf_{f \in \mathcal{H}} L(f) = \mathcal{O}\left(n^{-1/2}\right). \tag{3}$$
+
+ This bound is sharp, but can be improved under further assumptions [6, 52]. Here, we use it for reference. From a computational point of view, the key fact is that it is possible to compute a solution also if $\Phi(x)$ is an infinite feature vector, as long as the kernel $k(x, x') = \Phi(x)^{\top} \Phi(x')$ can be computed [44]. The Gaussian kernel $\exp(-\|x - x'\|^2/2\sigma^2)$ is a basic example. Indeed, by the representer theorem [23, 45], $\hat{f}_{\lambda}(x) = \sum_{i=1}^{n} \alpha_i k(x, x_i)$ , so Problem (2) can be replaced with a finite dimensional problem on the coefficients. Its solution depends on the considered loss, but typically involves handling the kernel matrix $K_{nn} \in \mathbb{R}^{n \times n}$ with entries $k(x_i, x_j)$ , which becomes prohibitive as soon as $n \sim 10^5$ (although multi-GPU approaches [58] have been recently shown to scale to $10^6$ points). In the following, we focus on Nyström approximation, considering functions of the form
+
+ $$f(x) = \sum_{i=1}^{m} \alpha_i k(x, \tilde{x}_i), \tag{4}$$
+
+ where $\{\tilde{x}_1,\ldots,\tilde{x}_m\}\subset\{x_1,\ldots,x_n\}$ are inducing points sampled uniformly at random. As we discuss next, this approach immediately yields computational gains. Moreover, recent theoretical results show that the basic bound in (3) still holds taking as few as $m=\mathcal{O}(\sqrt{n})$ inducing points [41, 30]. With these observations in mind, we next illustrate how these algorithmic ideas can be developed, considering first the squared loss and then the logistic loss.
+
+ **Squared loss.** This choice corresponds to kernel ridge regression (KRR). Since both the loss and penalty are quadratic, solving KRR reduces to solving a linear system. In particular, letting $\mathbf{y}=(y_1,\ldots,y_n)$, we obtain $(K_{nn}+\lambda nI)\boldsymbol{\alpha}=\mathbf{y}$ for the coefficients $\boldsymbol{\alpha}=(\alpha_1,\ldots,\alpha_n)\in\mathbb{R}^n$ in the solution of the problem in Eq. (2), while using the Nyström approximation (4) we get
+
+ $$(K_{nm}^{\top}K_{nm} + \lambda nK_{mm})\boldsymbol{\alpha} = K_{nm}^{\top}\boldsymbol{y}, \tag{5}$$
+
+ for $\alpha = (\alpha_1, \dots, \alpha_m) \in \mathbb{R}^m$. The first linear system can be solved directly in $\mathcal{O}(n^3)$ time and $\mathcal{O}(n^2)$ space. In turn, Eq. (5) can be solved directly in $\mathcal{O}(nm^2 + m^3)$ time and $\mathcal{O}(m^2)$ space (if the $K_{nm}$ matrix is computed in blocks). It is well known that for large linear
+
+ # Method
+
+ ```
+ Algorithm 1
+  1: function Falkon(X \in \mathbb{R}^{n \times d}, \boldsymbol{y} \in \mathbb{R}^n, \lambda, m, t)
+  2:     X_m \leftarrow \text{RandomSubsample}(X, m)
+  3:     T, A \leftarrow \text{Preconditioner}(X_m, \lambda)
+  4:     function LinOp(\boldsymbol{\beta})
+  5:         \boldsymbol{v} \leftarrow A^{-1}\boldsymbol{\beta}
+  6:         \boldsymbol{c} \leftarrow k(X_m, X) k(X, X_m) T^{-1} \boldsymbol{v}
+  7:         return A^{-\top} T^{-\top} \boldsymbol{c} + \lambda n \boldsymbol{v}
+  8:     end function
+  9:     R \leftarrow A^{-\top} T^{-\top} k(X_m, X) \boldsymbol{y}
+ 10:     \boldsymbol{\beta} \leftarrow \text{ConjugateGradient}(\text{LinOp}, R, t)
+ 11:     return T^{-1} A^{-1} \boldsymbol{\beta}
+ 12: end function
+
+ 13: function Preconditioner(X_m \in \mathbb{R}^{m \times d}, \lambda)
+ 14:     K_{mm} \leftarrow k(X_m, X_m)
+ 15:     T \leftarrow \operatorname{chol}(K_{mm})
+ 16:     K_{mm} \leftarrow 1/m \, T T^{\top} + \lambda \boldsymbol{I}
+ 17:     A \leftarrow \operatorname{chol}(K_{mm})
+ 18:     return T, A
+ 19: end function
+
+ Note: LinOp performs the multiplication \tilde{P}^{\top} H \tilde{P} \boldsymbol{\beta}
+ as in Eq. (8), via matrix-vector products.
+ ```
+
+ systems iterative solvers are preferable [43]. Further, the convergence of the latter can be greatly improved by considering preconditioning. The naïve preconditioner $P$ for problem (5) is such that $PP^{\top} = (K_{nm}^{\top}K_{nm} + \lambda nK_{mm})^{-1}$, and is as costly to compute as the original problem. Following [42], it can be approximated using once again the Nyström method to obtain
+
+ $$\tilde{P}\tilde{P}^{\top} = \left(\frac{n}{m}K_{mm}^2 + \lambda nK_{mm}\right)^{-1} \tag{6}$$
+
+ since $\frac{n}{m}K_{mm}^2 \approx K_{nm}^{\top} K_{nm}$. Next, we follow again [42] and combine the above preconditioning with conjugate gradient (CG). The pseudocode of the full procedure is given in Algorithm 1. Indeed, as shown in [42], $\mathcal{O}(\log n)$ CG steps are sufficient to achieve the bound in (3). Then, with this approach, the total computational cost to achieve optimal statistical bounds is $\mathcal{O}(n\sqrt{n}\log n)$ in time and $\mathcal{O}(n)$ in memory, making it ideal for large scale scenarios. The bulk of our paper is devoted to developing solutions to efficiently implement and deploy Algorithm 1.
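+
+ For concreteness, the following single-machine NumPy/SciPy sketch mirrors Algorithm 1 and Eqs. (5)-(9); the Gaussian kernel, the synthetic data, and all sizes are illustrative stand-ins, and none of the out-of-core or GPU machinery discussed below is included.
+
+ ```python
+ import numpy as np
+ from scipy.linalg import cholesky, solve_triangular
+ from scipy.sparse.linalg import LinearOperator, cg
+
+ def gaussian_kernel(A, B, sigma=1.0):
+     d2 = (A**2).sum(1)[:, None] - 2 * A @ B.T + (B**2).sum(1)[None, :]
+     return np.exp(-d2 / (2 * sigma**2))
+
+ rng = np.random.default_rng(0)
+ n, d, m, lam, t = 2000, 5, 100, 1e-6, 20
+ X = rng.standard_normal((n, d))
+ y = rng.standard_normal(n)
+ Xm = X[rng.choice(n, size=m, replace=False)]        # inducing points
+
+ # Preconditioner of Eq. (7): K_mm = T^T T, then A = chol(T T^T / m + lam I)
+ Kmm = gaussian_kernel(Xm, Xm) + 1e-8 * np.eye(m)    # small jitter for stability
+ T = cholesky(Kmm)                                   # upper triangular
+ A = cholesky(T @ T.T / m + lam * np.eye(m))         # upper triangular
+ Knm = gaussian_kernel(X, Xm)
+
+ def linop(beta):
+     # Eq. (9): A^{-T} [ T^{-T} K_nm^T K_nm T^{-1} + lam n I ] A^{-1} beta
+     v = solve_triangular(A, beta)                   # A^{-1} beta
+     w = solve_triangular(T, v)                      # T^{-1} v
+     c = solve_triangular(T.T, Knm.T @ (Knm @ w), lower=True)
+     return solve_triangular(A.T, c + lam * n * v, lower=True)
+
+ rhs = solve_triangular(A.T, solve_triangular(T.T, Knm.T @ y, lower=True), lower=True)
+ beta, _ = cg(LinearOperator((m, m), matvec=linop, dtype=np.float64), rhs, maxiter=t)
+ alpha = solve_triangular(T, solve_triangular(A, beta))  # T^{-1} A^{-1} beta, as in line 11
+ predictions = Knm @ alpha                               # f(x) of Eq. (4) on the training points
+ ```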
+
+ **Logistic loss.** The above ideas extend to the logistic loss and more generally to self-concordant loss functions, including the softmax loss [31]. For reasons of space, we detail this case in Appendix B and sketch here the main ideas. In this case, iterative solvers are the default option since there is no closed form solution. The Nyström method can be used a first time to reduce the size of the problem, and then a second time to derive an approximate Newton step [30]. More precisely, at every step preconditioned conjugate gradient descent is run for a limited number of iterations with a decreasing value of $\lambda$, down to the desired regularization level. In practice, this requires running Algorithm 1 multiple times with a small number of iterations $t$ and with decreasing $\lambda$. Making these ideas practical requires efficiently implementing and deploying Algorithm 1, making full use of the available computational architectures. This is the core of our contribution, which we detail in the next section.
+
+ GPU machines have a peculiar architecture with rather different properties from the standard von Neumann computer: in particular, they are characterized by highly parallel computational power, relatively small local accelerator memory, and slow memory transfer to and from the accelerator compared to their computational speed [63]. In their standard definition, kernel methods require large amounts of memory with a low density of operations per byte of memory used. This opens the question of how to adapt methods with low operation density to platforms designed to be extremely efficient with very high density of operations per byte. With this in mind, we started from the state-of-the-art kernel solver with minimal computational requirements for optimal guarantees (described at a high level in Algorithm 1), with the goal of reformulating its computational structure to dramatically increase the density of operations per byte and to reduce as much as possible the required memory use and transfers. To achieve this goal, we use a number of carefully designed computational solutions which systematically reduce the impact of the inherent bottlenecks of multi-core/multi-GPU architectures, while leveraging their intrinsic potential. In particular, in the rest of this section we will focus on (a) minimizing the memory footprint of the solver, which has long been the main bottleneck for kernel methods, and is the main limitation encountered by current kernel solvers, (b) dealing with limited memory on the GPU, (c) reaching the highest possible accelerator utilization, parallelizing memory transfers and computation, and (d) using the enhanced capabilities of GPUs with reduced-precision floating point data.
+
+ Kernel solvers that use the Nyström method need the matrices $K_{mm}$ and $K_{nm}$ . Since $K_{nm}$ is used only in matrix-vector products, we can avoid constructing it explicitly (as we shall see in the following paragraphs) which leaves us to deal with the $K_{mm}$ matrix. When m is large, it is crucial to carefully manage the memory needed for this task: in our implementation we only ever allocate one $m \times m$ matrix, and overwrite it in different steps to calculate the preconditioner. Indeed, choosing an appropriate form of the preconditioner, the matrix $K_{mm}$ itself is not needed in the conjugate gradient iteration. Figure 2 shows the total memory usage, which consists of the preconditioner occupying approximately 90% of the memory (see last paragraph of Sect. 3.1), the weight vector $\beta$ and two buffers holding (part of) the m inducing points and a data batch needed to compute $K_{nm}$ .
+
+ ![](_page_5_Figure_4.jpeg)
+
+ Figure 2: Structure of RAM allocation. Figure 3: Overlapping memory transfers and computation.
+
+ ![](_page_6_Figure_0.jpeg)
+
+ Figure 4: Evolution of the preconditioner matrix in memory.
+
+ In-place computation and storage of the preconditioner. The preconditioner $\tilde{P}$ of Eq. (6) is used to solve a linear system of the form $\tilde{P}^{\top}H\tilde{P}\boldsymbol{\beta} = \tilde{P}^{\top}K_{mn}\boldsymbol{y}$ with $H = K_{mn}K_{nm} + \lambda n K_{mm}$ and $\boldsymbol{\beta} = \tilde{P}^{-1} \boldsymbol{\alpha}$. $\tilde{P}$ can be decomposed into two triangular matrices obtained via Cholesky decomposition of $K_{mm}$,
+
+ $$\tilde{P} = \frac{1}{\sqrt{n}} T^{-1} A^{-1}, \qquad T = \text{chol}(K_{mm}), \qquad A = \text{chol}(\frac{1}{m} T T^{\top} + \lambda \mathbf{I}_m). \tag{7}$$
+
+ All operations are performed in-place allocating a single $m \times m$ matrix as shown in Figure 4 and as described next: (a) a matrix of dimension $m \times m$ is allocated in memory; (b) the $K_{mm}$ kernel is computed in blocks on the GPU and copied to the matrix; (c) in-place Cholesky decomposition of the upper triangle of $K_{mm}$ is performed on the GPU (if the kernel does not fit GPU memory an out-of-core algorithm is used, see later sections); (d) the product $TT^{\top}$ is computed in blocks via GPU and stored in the lower part; (e) out-of-core in-place Cholesky decomposition is performed on the lower triangle to get $A^{\top}$ . Additional care is needed to take into account the matrix diagonal, not described here for brevity.
+
+ Elimination of the storage of $K_{mm}$. Considering more carefully the matrix $\tilde{P}^{\top}(K_{nm}^{\top}K_{nm} + \lambda nK_{mm})\tilde{P}$ with $\tilde{P}$ as in Eq. (7), we observe that the occurrences of $K_{mm}$ cancel out. Indeed $(T^{-1})^{\top}K_{mm}T^{-1} = \mathbf{I}$ since $K_{mm} = T^{\top}T$ by Eq. (7). Then, the following characterization allows us to overwrite $K_{mm}$ when calculating the preconditioner.
+
+ $$\tilde{P}^{\top} H \tilde{P} \boldsymbol{\beta} = (A^{-1})^{\top} (T^{-1})^{\top} (K_{nm}^{\top} K_{nm} + \lambda n K_{mm}) T^{-1} A^{-1} \boldsymbol{\beta} \tag{8}$$
+
+ $$= (A^{-1})^{\top} [(T^{-1})^{\top} K_{nm}^{\top} K_{nm} T^{-1} + \lambda n I] A^{-1} \boldsymbol{\beta}. \tag{9}$$
+
+ Blockwise $K_{nm}$ -vector product on GPU. The conjugate gradient algorithm will repeatedly execute Eq. (9) for different $\beta$ . The most expensive operations are the matrix-vector products $K_{nm}^{\top}(K_{nm}\boldsymbol{v})$ for an arbitrary vector $\boldsymbol{v} \in \mathbb{R}^{m \times 1}$ which – if computed explicitly – would require $n \times m$ memory. However, it is possible to split the input data $X \in \mathbb{R}^{n \times d}$ in B batches of q rows each $\{X_{b,:} \in \mathbb{R}^{q \times d}\}_{b=1}^{B}$ , so that matrix-vector products can be accumulated between batches using the formula $\sum_{b=1}^{B} k(X_{b,:}, X_m)^{\top}(k(X_{b,:}, X_m)\boldsymbol{v})$ . The matrix blocks to be held in memory are summarized in Figure 2 for a total size of $m \times (m+d+1) + q \times d$ where q can be small under memory pressure, or large for greater performance. It is important to note that $k(X_{b,:}, X_m)$ is never stored in main memory, as all operations on it are done on the GPU.
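+
+ A minimal NumPy sketch of this accumulation (the kernel function and the batch size q are placeholders; in the actual solver each block is computed and consumed on the GPU):
+
+ ```python
+ import numpy as np
+
+ def knm_matvec(X, Xm, v, kernel, q=512):
+     """Accumulate sum_b k(X_b, X_m)^T (k(X_b, X_m) v) over batches of q rows."""
+     out = np.zeros(Xm.shape[0])
+     for s in range(0, X.shape[0], q):
+         Kb = kernel(X[s:s + q], Xm)    # q x m block; exists only inside the loop
+         out += Kb.T @ (Kb @ v)
+     return out
+ ```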
+
+ ![](_page_7_Figure_0.jpeg)
+
+ Figure 5: Three phases of the block Cholesky decomposition for updating the first column. Arrows indicate inter-GPU memory transfers between accelerators G-1 and G-2.
+
+ While the main RAM might be a bottleneck, GPUs have an even smaller amount of memory, and another level of splitting is needed to exploit their speed. For example, a typical architecture has 256GB of RAM and 4 GPUs with 16GB ram each; a preconditioner with $m=2\times10^5$ occupies 150 GB and $K_{nm}$ with $n=10^7$ would need 2000 GB of memory if stored. So we need to deal with both efficient computation of $K_{nm}$ -vector product in chunks that fit a GPU, and with the computation of the preconditioner that usually does not fit in GPU memory. Operations based on a large storage layer (main RAM) and a small but fast layer (GPU) are called out-of-core (OOC) operations. However, common machine learning libraries such as Tensorflow [1] or PyTorch [35] do not implement OOC versions of the required matrix operations, leaving potentially complex implementations to the users. Hence, in our library, we provide these implementations in easily reusable form. It is important to note that splitting our workload to fit in GPU also provides an easy path to parallelization in a multi-GPU system: new chunks of computation are assigned to the first free GPU, effectively redistributing the workload between multiple accelerators when available.
+
+ Optimized block decomposition for out-of-core $K_{nm}$ -vector multiplication. As seen in the previous section, matrix-vector products can be split along the dimension n, resulting in independent chunks of work that need to be summed up at the end. The OOC product between a kernel matrix and a vector proceeds by: (a) transferring a block of data onto the device, (b) computing the kernel on device and multiplying it by the vector, (c) copying the result back to the host. This sequence of operations minimizes expensive data-transfers between host and device since the kernel matrix is never moved. In particular, the computation is also split along dimensions m and d, to maximize the ratio between computational complexity and transfer time: i.e., maximizing $\frac{qrs}{qs+ds}$ subject to $qs+ds \leq G$ , where q, r and s are the batch dimensions along n, m and d respectively, and G is the available GPU memory.
+
+ Out-of-core multi-GPU Cholesky decomposition. Other operations, such as Cholesky decomposition and triangular matrix multiplication (lines 15, 16, 17 of Algorithm 1), can also benefit from GPU execution. Here we describe, at a high level, our algorithm for multi-GPU OOC Cholesky decomposition inspired by [27, 64]. We leave further details to Appendix C.
+
+ Consider a symmetric matrix A, split into $B \times B$ tiles $A_{ij} \in \mathbb{R}^{t \times t}, i \in [B], j \in [B]$ , assumed of equal size for brevity. We want a factorization $A = LL^{\top}$ , where L is lower triangular, with the formula $A_{i,j} = \sum_{k=1}^{j} L_{i,k} L_{j,k}^{\top}$ . The algorithm runs in-place, updating one column of A at a time. Each column update proceeds in three steps, illustrated in Figure 5. Clearly $A_{1,1} = L_{1,1}L_{1,1}^{\mathsf{T}}$ so we compute $L_{1,1}$ by a Cholesky decomposition on tile $A_{1,1}$ which is small and can be done entirely on the GPU (e.g. with cuSOLVER [33]). Then we consider the other tiles of the first block column of L for which $A_{j,1} = L_{j,1}L_{1,1}^{\top}$ with j > 1. Since we know $L_{1,1}$ from the first step, we obtain $L_{j,1} = A_{j,1}L_{1,1}^{-\top}$ for all j > 1 by solving a triangular system (on the GPU). Finally the first block column of L is used to update the trailing submatrix of A. Note that $A_{i,j} = \sum_{k=1}^{j} L_{i,k} L_{j,k}^{\top} = L_{i,1} L_{j,1}^{\top} + \sum_{k=2}^{j} L_{i,k} L_{j,k}^{\top}$ for $2 \leq j \leq i$ , so we can update the trailing submatrix as $A_{i,j} = A_{i,j} - L_{i,1}L_{j,1}^{\top}$ . We implemented a parallel version of the above algorithm which distributes block-rows between the available processors in a 1D block-cyclic way (e.g. Figure 5 (left): rows 1 and 3 are assigned to GPU-1, rows 2 and 4 are assigned to GPU-2). For each column update, one processor executes the first step and transfers the result to the others (the arrows in Figure 5), which can then execute step 2 in parallel. To update the trailing matrix, further data transfer between devices may be necessary. The tile-size is chosen as a function of GPU memory: each device needs to hold one block column plus a single block at any given time. An analysis of the scalability of our implementation is in Appendix C.
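+
+ A serial NumPy sketch of this tiled, in-place update (a single "device" and no inter-GPU transfers; `numpy` routines stand in for the cuSOLVER/cuBLAS kernels):
+
+ ```python
+ import numpy as np
+
+ def tiled_cholesky(A, t):
+     """In-place blocked Cholesky of symmetric A; assumes A.shape[0] % t == 0."""
+     n = A.shape[0]
+     for j in range(0, n, t):
+         # step 1: factor the diagonal tile
+         A[j:j+t, j:j+t] = np.linalg.cholesky(A[j:j+t, j:j+t])
+         L = A[j:j+t, j:j+t]
+         # step 2: triangular solves for the tiles below the diagonal
+         for i in range(j+t, n, t):
+             A[i:i+t, j:j+t] = np.linalg.solve(L, A[i:i+t, j:j+t].T).T
+         # step 3: trailing update A_ik -= L_ij L_kj^T on the lower triangle
+         for i in range(j+t, n, t):
+             for k in range(j+t, i+t, t):
+                 A[i:i+t, k:k+t] -= A[i:i+t, j:j+t] @ A[k:k+t, j:j+t].T
+     return np.tril(A)
+ ```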
+
+ The speed of computations on GPUs is such that data transfers to and from the devices become significant bottlenecks. We have described earlier how, for matrix-vector products, the computed blocks of $K_{nm}$ never leave the device. Further optimization is possible by parallelizing computations and data transfers. Indeed, modern GPUs have an independent and parallel control on the following activities: loading from RAM, saving to RAM, performing computations. By running three parallel threads for the same GPU and assuming equal duration of each piece of work, we can run $t$ GPU computations in $t+2$ time units instead of $3t$ time units for a serial implementation (see Figure 3, where $t=3$). This guarantees near optimal usage of the GPU and in practice corresponds to a considerable speed up of matrix-vector products.
+
+ Leveraging the trade-off numerical precision / computational power. GPUs are designed to achieve peak performance with low precision floating point numbers, so much that going from 64 to 32-bit floats can correspond (depending on the exact architecture) to $\approx 10 \times$ throughput improvement. However, changing precision can lead to unexpected problems. For example, computing the Gaussian kernel is commonly done by expanding the norm $\|\boldsymbol{x} - \boldsymbol{x}'\|^2 = \boldsymbol{x}^\top \boldsymbol{x} - 2 \boldsymbol{x}^\top \boldsymbol{x}' + \boldsymbol{x}'^\top \boldsymbol{x}'$ , but in high dimensions $\|\boldsymbol{x}\|, \|\boldsymbol{x}'\|$ can be very large and the cross-term very negative, so their sum has fewer significant digits. Loss of precision can lead to non positive-definite kernels causing Cholesky decomposition to fail. To avoid this, we compute $K_{mm}$ in blocks, converting each block to 64-bit precision for the sum, and then back to 32-bits.
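+
+ A NumPy sketch of this blockwise mixed-precision computation (the block size and kernel are illustrative):
+
+ ```python
+ import numpy as np
+
+ def gaussian_kernel_mixed(A32, B32, sigma=1.0, block=1024):
+     """float32 inputs/outputs; the norm expansion is accumulated in float64."""
+     K = np.empty((A32.shape[0], B32.shape[0]), dtype=np.float32)
+     b = B32.astype(np.float64)
+     for i in range(0, A32.shape[0], block):
+         a = A32[i:i+block].astype(np.float64)        # promote one block
+         d2 = (a**2).sum(1)[:, None] - 2 * a @ b.T + (b**2).sum(1)[None, :]
+         K[i:i+block] = np.exp(-np.maximum(d2, 0) / (2 * sigma**2)).astype(np.float32)
+     return K
+ ```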
+
+ **Dealing with thin submatrices.** As a result of our block division strategies, it may happen that blocks become thin (i.e. one dimension is small). In this case, matrix operations, e.g. using cuBLAS [32], cannot leverage the full computational power. In turn this can reduce performance, breaking the inherent computational symmetry among GPUs which is crucial for the effectiveness of a parallel system like the one proposed in this paper. To guarantee good performance for this case, instead of using standard GPU operations, we perform matrix-vector products using KeOps [8]: a specialized library to compute kernel matrices very efficiently when one dimension is small, see Table 1.
+
+ Dealing with sparse datasets. On the other side of the spectrum, sparse datasets with high dimensionality are common in some areas of machine learning. While the kernel computed on such datasets will be dense, and thus can be handled normally, it is inefficient and in some cases impossible (e.g. with d ∼ 10<sup>6</sup> as is the case for the YELP dataset we used) to convert the inputs to a dense representation. We therefore wrapped specialized sparse linear algebra routines to perform sparse matrix multiplication [34], and adapted other operations such as the row-wise norm to sparse matrices. Thus our library handles sparse matrices with no special configuration, both on the GPU and – if a GPU is not available – on the CPU.
2006.15057/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78a908460a56e76779d0d1b5bbd47e744f05a4da09094bb595deb640b297ffff
+ size 6318924
2008.02676/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2020-06-04T20:48:09.130Z" agent="5.0 (Macintosh; Intel Mac OS X 10_14_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36" version="13.1.14" etag="EUQcjw4gblGDZv0v8DwK" type="google"><diagram id="YFq4-Vj2xXXRNmBzipJ7">5Ztbc6M2FIB/DTPtQ3d0AQGPWSfbPnTbzOSh3aeOArKtKUauLG/s/fUVIGwLlIV1gGzYPCTo6AJ8R9K5iHh4sTn8Kul2/VGkLPMQSA8evvUQiiKkfxeCYyUIQr8SrCRPKxE8Cx74F2Y6GuGep2xntVNCZIpvbWEi8pwlypJRKcWT3WwpMvumW7piLcFDQrO29C+eqrV5OBSe5b8xvlrXd4Ykrmo2tG5shtitaSqeKhEoRPjOwwsphKquNocFywp0NZayEfrwTO3pwSTLVZ8ORg+fabY372aeSx3rl5Vin6esaA88/P5pzRV72NKkqH3SytWytdpkugT15ZJn2UJkQpZ9cUpZtEy0fKek+Jdd1JAkYo/LU02NsRjDPBKTih2efS14gqXnGBMbpuRRNzEdYt8AN/OLVKWns7L8igtYX+oJmFlIzfxYnQY+I9QXhqKbKJ4p0SC0iEJ/OqT+TJGSwEJav9YUSIOZIg2RhRSD6ZCSkZEGLEp9F9IIPWJCxkIKIQw7mUZtpgF5OdLQgZRkqoAj9NNfsiX/7UVd8cuudBhudAOItodzpb5aFX/vDn/8eXtXj6WfohquqvyK1mC31to6aOhxuWQkcS6NNIwfARhIa3otWFoL2vt1ANtaw+jlWotaWvtItRLAvXbRRobLoF4moQtuTEJMyUBwfYy64LqWxBBw4xbcD7oP+J0emRwbbvnjgosJjnE6ENzAuKyvABeCFt17yVKeKC7yedAlIOqiS/yR6MJuC8ny9KYI0HQpyehuxxMbY9WBpa34rBPA5cbncAFqmWQZVfyzPbzrrc0d7gUvLVHNF9R96sgjfhcG9ig7sZcJMx0vY7PWWH7QNZaicsVUa6xSFaeX76edHqHg29dOZBMNwZWqgSD+6kAD6qVHQLlb021xqfS2w76Iouv7LZNc34bJS/n9WfjtG1bKJat2QnyrQRXyV9rEQIAt+jhqbWLIsYmRAbx8OHYw+mpuPgq6mI7l5sOxo9FXY4rteeo78lBjMZ1tOOrbBranezgIU1c8OjcTCWM73L/aRKKGJzSiiWxHnPPTCxrKdcHTuS7tYHV+evGjaBi9BLUHPrpeUDvMnZ9egpAMoxcCwFR6cQXIVY5yt6W51yffSVz5Tq/whn0vWGzX3Fyfc5/V0M/kPrWhVq64oLb3uchZwzkwIprxVREpJHo+lPFGYfZ5QrMbU7HhaVrcxul42K6JN0TM0Aym+x0NIPD8zOvrN9TL61KvBBz+gV79EHMkHjcCCghQi7grShuEuCNoLojn8ybePKPFExLvERd32RT9mvL4t+FRFj5dFm4PVulogft+LVHs20EgbOLua4niwHbcIG7ENANaIldE/m0nb5HTEOnlV1qhn/Tu93O5GPudwc1mjUIQ29Mhatuh0daoKycwsFbzH1OrzUwPhBOqdYC0xDy3XgihHQRAcG0UABtHvqeRR9h8e6QzXpK66/EFwjipO9w42Z3wSxLHuflcmNrphwnT9rhHGuFtMsX2rjFh2h73OCN/m0x932I6Ydq+3rlnne7yG1kqEr+79rMDH/kdQ11t63Tx/G171fz8/wH47n8=</diagram></mxfile>
2008.02676/main_diagram/main_diagram.pdf ADDED
Binary file (13.6 kB). View file
 
2008.02676/paper_text/intro_method.md ADDED
@@ -0,0 +1,78 @@
+ # Introduction
+
+ The vast majority of machine learning models operate on an independent and identically distributed (i.i.d.) vector, $x \in \mathbb{R}^d$. In some cases, however, the inputs may contain a set of instances, $\mathbf{x}=\{x_i\}_{i=1}^n$, which jointly determine the target. We note that instances within a set may interact with each other. For instance, the points inside a point cloud jointly determine the global structure. In this work, we build both discriminative and generative models on sets, which explore the intradependencies within a set to capture both global and local structures.
+
+ A set is a collection of data that does not possess any inherent ordering of its elements. In statistics, a set is described as an exchangeable sequence of random variables whose joint probability distribution does not change under any permutation $\pi$, i.e., $$\begin{equation}
+ p(x_1,\ldots,x_n) = p(x_{\pi_1},\ldots,x_{\pi_n}).
+ \end{equation}$$
+
+ Discriminative models that operate on a set must predict a target $y$ that is invariant to all permutations. Applications for such models include population statistics estimation, point cloud classification, etc. A naive approach where training data are augmented with random permutations and treated as sequences has been empirically proven insufficient [@vinyals2015order]. Previous works [@zaheer2017deep; @edwards2016towards] developed simple permutation invariant operations by processing each element independently and then aggregating them using a pooling operation (max, mean, etc.). However, such an operation largely ignores the intradependencies between elements within the set. In this work, we introduce an inductive bias into the model to exploit said intradependencies across elements. Specifically, we introduce a permutation equivariant module to explicitly model the dependencies among set elements.
+
+ Set generative models with tractable, exchangeable likelihoods (that is, likelihoods that are invariant to permutations) have recently been investigated [@edwards2016towards; @bruno2018; @bender2020]. Simple approaches that estimate the likelihood of each instance independently are insufficient, since global structures cannot be inferred. To overcome this shortcoming, we construct a flow-based generative model for tractable likelihood estimation on sets.
+
+ The key for both discriminative and generative set models is a powerful equivariant transformation that captures set intradependencies. In order to compute the likelihood for a flow-based generative model, the transformation additionally needs to be invertible. In this work, we propose an exchangeable, invertible flow transformation, ExNODE, based on Neural Ordinary Differential Equations (NODE) [@chen2018neural]. Invertibility is guaranteed via the NODE framework since integration backward in time is always possible. We implement ExNODE by parametrizing a differential equation with a permutation equivariant architecture.
+
+ In addition to modeling the sets in spatial dimensions, we extend ExNODE to the temporal dimension and propose a temporal set modeling task. Such a set model has many potential applications, including modeling the evolution of galaxies, pedestrian tracking, etc. Here, we utilize a VAE-based framework with our proposed set generative model as the decoder. The temporal evolution is captured by another ODE in the latent space. After training, our model can interpolate and extrapolate to generate sets at unseen (potentially fractional) time steps.
+
+ Our contributions are as follows: 1) We propose ExNODE, an exchangeable module for set modeling, which explicitly captures the intradependencies among set elements. 2) ExNODE represents a type of invertible flow transformation with which an invariant set likelihood can be achieved. 3) We propose a temporal set modeling task and a VAE-based model for time-variant set modeling. The temporal VAE utilizes differential equations to transition hidden states in time. To the best of our knowledge, our model is the first one designed for temporal sets. 4) We achieve state-of-the-art performance for both point cloud classification and likelihood estimation.
+
+ # Method
+
+ In this section, we introduce the permutation equivariant module, ExNODE. We discuss how to apply ExNODE to different set modeling tasks. We consider both discriminative (set classification) and generative (set generation with flow models) tasks. Finally, we explore temporal set modeling.
+
+ Our permutation equivariant module for exchangeable sets is based on differential equations. Specifically, we can prove the following theorem. The detailed proof is provided in Appendix [6](#sec:proof){reference-type="ref" reference="sec:proof"}.
+
+ ::: theorem
+ **Theorem 1**. *(Permutation Equivariant ODE) Given an ODE $\dot{\mathbf{z}}(t) = f(\mathbf{z}(t), t), \mathbf{z}(t)\in \mathcal{X}^n$ defined on an interval $[t_1, t_2]$, if the function $f(\mathbf{z}(t), t)$ is permutation equivariant w.r.t. $\mathbf{z}(t)$, then the solution of the ODE, i.e., $\mathbf{z}^\star(t), t\in[t_1, t_2]$, is permutation equivariant w.r.t. the initial value $\mathbf{z}(t_1)$. We call an ODE with permutation equivariant properties ExODE.*
+ :::
+
+ Following Neural ODE [@chen2018neural], we parametrize $\dot{\mathbf{z}}(t)$ with a neural network. To ensure the integrated function $\mathbf{z}^*(t)$ is permutation equivariant, we build $\dot{\mathbf{z}}(t)$ in a permutation equivariant form. Specifically, $f$ is implemented as a permutation equivariant neural network, such as the DeepSets equivariant layer or the attention-based Set Transformer layer.
+
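+ As an illustration, here is a minimal PyTorch sketch of such a permutation equivariant dynamics function in the style of a DeepSets equivariant layer (layer sizes are arbitrary assumptions; the module follows the `func(t, z)` convention of `torchdiffeq.odeint`):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class EquivariantDynamics(nn.Module):
+     """DeepSets-style equivariant parametrization of dz/dt (a sketch)."""
+     def __init__(self, dim, hidden=64):
+         super().__init__()
+         self.elem = nn.Linear(dim + 1, hidden)   # per-element term (element plus time)
+         self.pool = nn.Linear(dim, hidden)       # term based on the mean over the set
+         self.out = nn.Linear(hidden, dim)
+
+     def forward(self, t, z):                     # z: (n, dim) holds one set
+         tcol = torch.full((z.size(0), 1), float(t))
+         h = torch.tanh(self.elem(torch.cat([z, tcol], dim=-1))
+                        + self.pool(z.mean(dim=0, keepdim=True)))
+         return self.out(h)                       # permuting rows of z permutes the output rows
+ ```
+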
+ An additional benefit of our ExNODE is its invertibility. Since we can always integrate from $t_2$ to $t_1$, it does not require any special design as in typical flow models to guarantee invertibility. Therefore, our ExNODE can be easily plugged into flow models as a transformation. According to Eq. [\[eq:chg_diffeq\]](#eq:chg_diffeq){reference-type="eqref" reference="eq:chg_diffeq"}, the likelihood can be similarly evaluated.
+
+ <figure id="fig:set_classification" data-latex-placement="ht">
+ <img src="set_classification.png" style="width:98.0%" />
+ <figcaption>Illustration of the architecture of our set classification model. The function <span class="math inline"><em>ϕ</em>(⋅)</span> refers to independent operations that expand the dimension. The <em>ExNODE</em> may contain multiple ODE blocks. The max pooling is applied across set elements.</figcaption>
+ </figure>
+
+ For the set classification task, a model must guarantee that the order of set elements does not affect the prediction results. Hence, given a set $\mathbf{x} = \{x_1, \ldots, x_n\} \in \mathcal{X}^n$, our purpose is to learn a permutation invariant function $f(\cdot)$ that maps $\mathbf{x}$ to its corresponding label $y$.
+
+ Since multiple equivariant layers stacked together are overall equivariant, we employ a permutation invariant architecture by stacking multiple equivariant layers and a pooling aggregating operation. Figure [1](#fig:set_classification){reference-type="ref" reference="fig:set_classification"} illustrates the architecture of our set classification model. First, we use a linear mapping $\phi$ to expand the feature dimensions independently for each set element. Then, permutation equivariant ODEs serve as a dimension-preserving nonlinear mapping to capture the dependencies among set elements and learn the feature representations for $\mathbf{x}$. When feature representations are available, we use a max pooling to aggregate the information across $x_i$. After max pooling, we get a permutation invariant vector representation that summarizes the set $\mathbf{x}$. We denote the embedding vector as $v$, $$\begin{equation}
+ v = \mathrm{MaxPool}(\mathrm{ExNODE~Solve}(\phi(\mathbf{x}))).
+ \end{equation}$$ Finally, we use fully connected (FC) layers and a softmax layer to predict labels $y$.
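+
+ A compact sketch of this pipeline, reusing the `EquivariantDynamics` module sketched above and `torchdiffeq` for the ODE solve (all sizes and the integration interval are placeholder assumptions):
+
+ ```python
+ import torch
+ import torch.nn as nn
+ from torchdiffeq import odeint
+
+ class SetClassifier(nn.Module):
+     def __init__(self, in_dim=3, feat=64, n_classes=40):
+         super().__init__()
+         self.phi = nn.Linear(in_dim, feat)                     # expand feature dimension
+         self.dynamics = EquivariantDynamics(feat)              # ExNODE block (see sketch above)
+         self.head = nn.Sequential(nn.Linear(feat, feat), nn.ReLU(),
+                                   nn.Linear(feat, n_classes))  # FC layers; softmax via the loss
+
+     def forward(self, x):                                      # x: (n, in_dim) is one set
+         z = odeint(self.dynamics, self.phi(x), torch.tensor([0.0, 1.0]))[-1]
+         v = z.max(dim=0).values                                # max pool across set elements
+         return self.head(v)                                    # logits for the set label
+ ```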
+
+ We extend the continuous normalizing flow proposed in [@chen2018neural; @grathwohl2018scalable] to model exchangeable sets $\mathbf{x} \in \mathcal{X}^n$. Specifically, we have the following proposition from [@bender2020], repeated here for convenience:
+
+ ::: proposition
+ **Proposition 1**. *For a flow model with transformation $q(\cdot)$ and base likelihood $p_{\mathcal{Z}}(\cdot)$, the input likelihood $p_{\mathcal{X}}(\mathbf{x}) = p_{\mathcal{Z}}(q(\mathbf{x})) \left|\det \frac{dq}{d\mathbf{x}}\right|$ is exchangeable iff the transformation is permutation equivariant and the base likelihood is invariant.*
+ :::
+
+ Similar to Eq. [\[eq:implicit_trans\]](#eq:implicit_trans){reference-type="ref" reference="eq:implicit_trans"}, we parametrize transformation $q$ implicitly as a differential equation, i.e., $$\begin{equation}
+ \label{eq:int}
+ \dot{\mathbf{z}}(t) = f_\theta(\mathbf{z}(t), t), \quad \mathbf{z}(t_0) = \mathbf{x},
+ \end{equation}$$ where $f_\theta$ is a permutation equivariant neural network w.r.t. $\mathbf{z}(t)$. Using the instantaneous change of variables formula, the log likelihoods of $\mathbf{z}(t_1)$ and $\mathbf{z}(t_0)$ satisfy the following equation: $$\begin{equation}
+ \label{eq:chg}
+ \log p(\mathbf{z}(t_1)) = \log p(\mathbf{z}(t_0)) - \int_{t_{0}}^{t_{1}} \mathrm{Tr}\left(\frac{\partial f_\theta}{\partial \mathbf{z}(t)}\right) dt,
+ \end{equation}$$ where $\mathbf{z}(t_0)$ and $\mathbf{z}(t_1)$ correspond to $x$ and $z$ in Eq. [\[eq:chg-dis\]](#eq:chg-dis){reference-type="eqref" reference="eq:chg-dis"}, respectively. Since the trace operator $\mathrm{Tr}(\cdot)$ in Eq. [\[eq:chg\]](#eq:chg){reference-type="eqref" reference="eq:chg"} preserves permutation invariance, the exchangeability of $\log p(\mathbf{z}(t))$ is maintained along the integral trajectory.
+
+ After transformation, we apply a permutation invariant base likelihood to the transformed sets $\mathbf{z}(t)$. For simplicity, we use an i.i.d. base likelihood $$\begin{equation}
+ \label{eq:base_likel}
+ p_{\mathcal{Z}}(\mathbf{z}(t)) = \prod_{z_i \in \mathbf{z}(t)} p_{\mathcal{Z}}(z_i).
+ \end{equation}$$ The generation process consists of the following steps: 1) Sampling $n$ i.i.d. instances from the base distribution; 2) Inverting the transformations by integrating backwards in time. Although samples from the base distribution are independent, the transformations will induce dependencies and transform them to encode global and local structures.
+
+ Like other normalizing flow-based models, we train our model by maximizing the log likelihood $\log p_{\mathcal{X}}(\mathbf{x})$ using Eqs. [\[eq:chg\]](#eq:chg){reference-type="eqref" reference="eq:chg"} and [\[eq:base_likel\]](#eq:base_likel){reference-type="eqref" reference="eq:base_likel"}. We choose $p_{\mathcal{Z}}(\cdot)$ as $\mathcal{N}(0, I)$ in all our experiments. To reduce memory usage, the adjoint method is used to compute the gradient of a black-box ODE solver [@chen2018neural]. As in FFJORD [@grathwohl2018scalable], the trace of the Jacobian matrix is estimated using Hutchinson's estimator [@hutchinson1990stochastic].
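+
+ A minimal sketch of Hutchinson's estimator for the trace term (a single probe vector; in practice the estimate is evaluated inside the ODE integrand):
+
+ ```python
+ import torch
+
+ def hutchinson_trace(f_out, z, eps):
+     # eps^T (df/dz) eps is unbiased for Tr(df/dz) when E[eps eps^T] = I
+     vjp = torch.autograd.grad(f_out, z, grad_outputs=eps, create_graph=True)[0]
+     return (vjp * eps).sum()
+
+ z = torch.randn(16, 3, requires_grad=True)   # a set of 16 elements (stand-in)
+ f_out = torch.tanh(z @ torch.randn(3, 3))    # stand-in for f_theta(z(t), t)
+ trace_est = hutchinson_trace(f_out, z, torch.randn_like(z))
+ ```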
+
+ In this section, we present a continuous-time VAE model for temporal set modeling. Assume $X = [\mathbf{x}_{t_0}, \mathbf{x}_{t_1}, \ldots, \mathbf{x}_{t_N}]$ is a time-variant set, where each $\mathbf{x}_{t_i} \in \mathcal{X}^n$ is a set. Let $Z = [z_{t_0}, z_{t_1}, \ldots, z_{t_N}]$ be the corresponding latent variables of $X$. We assume that the evolution of latent states can be modeled by an ODE. In other words, given an initial state $z_{t_0}$, other latent states can be inferred following the dynamics $\dot{z}(t)$. Unlike other methods, such as recurrent neural networks (RNNs), where the evaluations can only be performed at prefixed time points, the ODE-based model can obtain both the latent states and observations at any time $t$.
+
+ Given the latent states $z_{t_i}, i=0,1,\ldots,T$, we propose to model the conditional distribution $p(\mathbf{x}_{t_i}\mid z_{t_i})$ using a conditional set CNF. Specifically, the set $\mathbf{x}_{t_i}$ is transformed to a simple base distribution using ExNODE transformations conditioned on the corresponding latent state $z_{t_i}$: $$\mathbf{x}_{t_i}(s_1) = \mathbf{x}_{t_i}(s_0) + \int_{s_0}^{s_1} g_{\theta_d}(\mathbf{x}_{t_i}(s), z_{t_i}, s) ds, \quad \mathbf{x}_{t_i}(s_0) = \mathbf{x}_{t_i},$$ where $g_{\theta_d}(\cdot)$ defines the transformation dynamics of the CNF in $[s_0, s_1]$. $g_{\theta_d}(\cdot)$ is permutation equivariant w.r.t. $\mathbf{x}_{t_i}(s)$. The log likelihood of $\mathbf{x}_{t_i}$ can be formulated similarly to Eq. [\[eq:chg\]](#eq:chg){reference-type="eqref" reference="eq:chg"}.
+
+ Since computing the posterior distribution $p(z_{t_i} \mid \mathbf{x}_{t_i})$ is intractable, we cannot directly maximize the marginal log likelihood $\log p_\theta(X)$. Therefore, we resort to variational inference [@VAE2014; @rezende2014stochastic] and optimize a lower bound. Following previous work [@chen2018neural; @rubanova2019latent] for temporal VAEs, we utilize a recurrent encoder that produces an amortized proposal distribution $\hat{p}_{\psi}(z_{t_0} \mid X)$ conditioned on the entire time series $X$. The encoder first encodes each set into a permutation invariant representation independently and then uses a recurrent network to accumulate information from each time step. For our models, the encoder processes the time series backwards in time. We assume the prior for $z_{t_0}$ comes from an isotropic Gaussian, $p(z_{t_0}) \sim \mathcal{N}(0, I)$. Latent codes for other time steps are constructed following the dynamics $\dot{z}(t)$. The final encoder-decoder model is illustrated in Fig. [2](#fig:setvae){reference-type="ref" reference="fig:setvae"}. We train the encoder and decoder jointly by maximizing the evidence lower bound (ELBO): $$\begin{equation}
+ \mathrm{ELBO}(\theta, \psi) = \mathbb{E}_{z_{t_0}\sim \hat{p}_\psi(z_{t_0}|X)} \left[ \sum_{i=0}^T \log p_\theta(\mathbf{x}_{t_i}|z_{t_i}) \right ] - \mathrm{KL}(\hat{p}_\psi(z_{t_0}|X) || p(z_{t_0})).
+ \end{equation}$$
+
+ After the model is trained, we can sample a set at any time $t$ by first inferring the corresponding latent state $z_t$ and then transforming a set of base samples $\mathbf{y}_t$ conditioned on $z_t$: $$\begin{gather}
+ z_{t_{0}} \sim p\left(z_{t_{0}}\right), \quad z_{t} = \mathrm{ODESolve} (z_{t_{0}}, \theta_t, t) \\
+ \mathbf{y}_t = \{\mathbf{y}_t^j \}_{j=1}^{n}, \quad \mathbf{y}_t^j \sim \mathcal{N}(0, I), \quad \hat{\mathbf{x}}_t = \mathrm{ODESolve}(\mathbf{y}_t, z_t, \theta_d, t),
+ \end{gather}$$ where $\theta_t$ parametrizes the dynamics in the latent-states transmission model and $\theta_d$ parametrizes the dynamics of the decoder. Due to the continuous latent space, our model can learn the evolution of sets in time. We can sample sets at unseen time steps by interpolating or extrapolating the latent states.
+
+ ![The illustration of the encoder and decoder used in the temporal set modeling task. The set encoder $\phi(\cdot)$ learns a fixed-dimensional permutation invariant representation of a set. The decoder contains two independent ODEs to decode latent states $\mathbf{z}_{t_i}$ and to reconstruct the observations $\hat{\mathbf{x}}_{t_i}, i=0,\ldots,T$.](vae.png){#fig:setvae width="80%"}
2009.07806/paper.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9570c05dce80bc482206b1c6678dd138dd8d66f7ce0211cf3949493d09c6e481
+ size 3065021
2010.01625/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
+ <mxfile host="app.diagrams.net" modified="2020-09-02T14:44:14.656Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36" version="13.6.2" etag="5fgiA4Wvk_b7fuW1QQsw" type="google"><diagram id="VIpIMBBMe-e8BNsF7uXn">3ZrLkto4FIafhipm0SnJutgsE7p7kkWqUsVikqWwBLhGWJQQAebpR8Yy2JIh0G1oihXW0QXpO7+O5AM9NJxv/tZsMfuuuJC9CPBNDz33oigiJLIfhWVbWiAFtLRMdcad7WAYZf8JZwTOusq4WDYaGqWkyRZNY6ryXKSmYWNaq3Wz2UTJ5rcu2FQEhlHKZGj9J+NmVlqTKD7Yv4psOqu+GdJBWTNnVWO3kuWMcbWumdBLDw21UqZ8mm+GQhb0Ki5lv9cjtfuJaZGbczo4T/xmcuXWNlTzedE5AiJPreu0m6jZVqvXapVzUQwAeujLepYZMVqwtKhdW4db28zMpS1B+zjJpBwqqfSuL+JMJJPU2pdGq39FrYamiRhPbI2bkdBGbI6uCu5ZWZUJNRdGb22TqgNyeJ3ABg7/+uAsCJxtVnNU1Y05fUz3Ix8Q2gdHsZ0oaiHKxZVwCsiJiNtwDmiMGL0OTkhuyBMHPL+v7E5/mgnGi6GNsavIVB5wtcPYeCAulygRCcdtTJNojGhHTCMKPhGPKg6oJjSEijuASgKoIyEnT93DnEwETVv3O48HYwC6goma+/12JGnLds/tSsyKSXtu8d0pdM2IOgYCCdpGGIgEJEk3hFHsEb5hBIgDxK+2D5Bs+16UWhm2kzt6fkoGLepNUtGu3nFCMOlIvRh74RUnIdxq2DrcqAO4SQA33Ps5/1xcmmwplWy5zNImRbtMvf3piO8Kv4qCDW+u+LypVz5vq9ImMz+rMexzrZctHToVhapPOTnBvfvZUq10KhpXGsP0VJhGuDvDJTXkBITEK5sW0grnd3MSbW5w3/BDZbsAUMUr70CNvRHK5bhO9dubP07SHMefSckgGGenif2iz5LJ4OFkgkKZ4HuTCfXcW0XYS3WCADw9UHdCqV7R3qUU67eRa660mampypl8OVj3mjjo4FdNIV2GjnvTBKTUvzdGb1VFtd79BdQ7T46owvqObWvNFkWD5YkpJ96EMTg9L4BPtbcP5QzeLFH4ccHsDBGSUIT0zkSIQVM5/i3vXAVi0hznyb/RdBiXooucnqtceB5/wzFUD2RCjtW6HsPOkMK9BR9EmucIrF60Lg49sRd6IL6a48OEzDt2+ydMaHPHg+RP95ei9EPozE5a6Hcoox4R4jtTBkaeQ98aEghovmjC+DqHUjBhl0Lo7JAJ81YXy+6IDu7G6QS2M7zc6ZieHqjDcBCmvr7lqcqX2dLsqL4GbiqyOU3HNNMB7qyo5w6ciclsWiQXUjtysfW/FHmCLGXys6uYZ5zLY1m1ZirDTfs9iQaCvYtVdfzW82QtcukizwDDRNlITcycbQLenaZwxixNOGpL4UQIY8I7SuHQD0zhwDBB1od/Pa6MIcXeK1BMAtr4WjoOE2b96IFh7xPnFWp0O9Rh0qmPHhh1qGuIwjByLdpRmLjp4wemvf8JY8/6dkEkCjMQffLArDGIPy5iVxfvOmz6yLAH0YehDt+1+/EDoybe784QxddibYuHfwWV7z2HP1ehl/8B</diagram></mxfile>
2010.01625/main_diagram/main_diagram.pdf ADDED
Binary file (14.1 kB). View file
 
2010.01625/paper_text/intro_method.md ADDED
@@ -0,0 +1,135 @@
+ # Method
+
+ We aim to determine whether $C$ is inconsistent by understanding its semantics and how it relates to $M$ (or changes between $M_{old}$ and $M$). We show an overview of our approach in Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}. First, the comment encoder, a BiGRU [@ChoGRU], encodes the sequence of tokens in $C$ (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (1)). When learning a representation for a given token, the forward and backward BiGRU passes, in principle, provide context of other tokens in $C$. However, this information can get diluted, especially when there are long-range dependencies, and the relevant context can also vary across tokens. To address this, we update these representations from the comment encoder with more context about how they relate to the other tokens through multi-head self-attention [@transformer] (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (2)). Next, we learn code representations with a code encoder (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (3)), which can be a sequence encoder (cf. §[3.1](#sec:sequence-code-encoder){reference-type="ref" reference="sec:sequence-code-encoder"}) or an abstract syntax tree (AST) encoder (cf. §[3.2](#sec:ast-code-encoder){reference-type="ref" reference="sec:ast-code-encoder"}).
+
+ Since the essence of the task comes down to whether $C$ accurately reflects $M$, we must capture the relationship between $C$ and $M$ (or changes between $M_{old}$ and $M$). Prior work does this by computing comment/code similarity through lexical overlap rules [@ratol2017fragile; @SaduThesis], which do not work well when different terms have similar meanings, and cosine similarity between vector representations, which have been found to perform poorly on their own [@LiuOutdatedLine; @Cimasa19]. Furthermore, this notion of similarity is only appropriate for the summary comment which provides an overview of the corresponding method as a whole. More specialized comment types like $\mathtt{@return}$ and $\mathtt{@param}$ describe only specific parts of the method. Therefore, their representations may not be very similar to the representation of the full method. In contrast, we learn the relationship between comments and code by computing multi-head attention between each hidden state of the comment encoder and the hidden states of the code encoder (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (4)).
+
+ ![Sequence-based code edit representation ($M_{edit}$) corresponding to Figure [\[fig:alluxio\]](#fig:alluxio){reference-type="ref" reference="fig:alluxio"}, with removed tokens in red and added tokens in green.](images/diff_sequence.png){#fig:diff_sequence width="\\columnwidth"}
+
+ ![AST-based code edit representation ($M_{edit}$) corresponding to Figure [\[fig:alluxio\]](#fig:alluxio){reference-type="ref" reference="fig:alluxio"}, with removed nodes in red and added nodes in green.](images/diff_ast.png){#fig:diff_ast width="\\columnwidth"}
+
+ We combine the context vectors resulting from both attention modules to form enhanced representations of the tokens in $C$, which carry context from other parts of $C$ as well as the code. These are then passed through another BiGRU encoder (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (5)). We take the final state of this encoder to be the vector representation of the full comment, and we feed it through fully-connected and softmax layers (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (6)). This leads to the final prediction (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (7)).
+
+ In the just-in-time setting, we represent the changes between $M_{old}$ and $M$ with an edit action sequence, $M_{edit}$. We have previously shown that explicitly defining edits in such a way outperforms having the model implicitly learn them [@panthaplackel2020update]. Each action consists of an action type ($\mathtt{Insert}$, $\mathtt{Delete}$, $\mathtt{Keep}$, $\mathtt{ReplaceOld}$, $\mathtt{ReplaceNew}$) that applies to a span of tokens, as shown in Figure [3](#fig:diff_sequence){reference-type="ref" reference="fig:diff_sequence"}. We encode $M_{edit}$ with a BiGRU. Because $M_{old}$ is unavailable in the post hoc setting, we cannot construct an edit action sequence. So, we encode the sequence of tokens in $M$.
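+
+ As a rough illustration (not necessarily the exact span alignment used in our implementation), such an edit action sequence can be derived from the two token sequences with `difflib`:
+
+ ```python
+ import difflib
+
+ def edit_actions(old_tokens, new_tokens):
+     """Map difflib opcodes onto the five edit action types."""
+     actions = []
+     sm = difflib.SequenceMatcher(a=old_tokens, b=new_tokens)
+     for op, i1, i2, j1, j2 in sm.get_opcodes():
+         if op == "equal":
+             actions.append(("Keep", old_tokens[i1:i2]))
+         elif op == "delete":
+             actions.append(("Delete", old_tokens[i1:i2]))
+         elif op == "insert":
+             actions.append(("Insert", new_tokens[j1:j2]))
+         else:  # "replace"
+             actions.append(("ReplaceOld", old_tokens[i1:i2]))
+             actions.append(("ReplaceNew", new_tokens[j1:j2]))
+     return actions
+ ```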
+
+ To better exploit the syntactic structure of code, we leverage its abstract syntax tree (AST). Following prior work in other tasks [@FernandesSummarization; @yin19iclr], we encode ASTs and AST edits using gated graph neural networks (GGNNs) [@Li2016GatedGS]. For the post hoc setting, we encode $T$, an AST-based representation corresponding to $M$. In the just-in-time setting, we instead encode $T_{edit}$, an AST-based edit representation. We use GumTree [@GumTree] to compute AST node edits between $T_{old}$ (corresponding to $M_{old}$) and $T$, identifying inserted, deleted, kept, replaced, and moved nodes. We merge the two, forming a unified representation, by consolidating identical nodes, as shown in Figure [4](#fig:diff_ast){reference-type="ref" reference="fig:diff_ast"}.
+
+ GGNN encoders for $T$ and $T_{edit}$ use *parent* ($\mathtt{public}$ $\rightarrow$ $\mathtt{MethodDeclaration}$) and *child* ($\mathtt{MethodDeclaration}$ $\rightarrow$ $\mathtt{public}$) edges. Like prior work [@FernandesSummarization], we add "subtoken nodes\" for identifier leaf nodes to better handle previously unseen identifier names. To integrate these new nodes, we add *subnode* ($\mathtt{toString}$ $\rightarrow$ $\mathtt{to}$), *supernode* ($\mathtt{to}$ $\rightarrow$ $\mathtt{toString}$), *next subnode* ($\mathtt{to}$ $\rightarrow$ $\mathtt{string}$), and *previous subnode* ($\mathtt{string}$ $\rightarrow$ $\mathtt{to}$) edges. When encoding $T_{edit}$, we also include an *aligned* edge type between nodes in the two trees that correspond to an update ($\mathtt{String}$ and $\mathtt{PropertyKey}$). Additionally, we learn *edit* embeddings for each action type. To identify how a node is edited (or not edited), we concatenate the corresponding edit embedding to its initial representation that is fed to the GGNN.
+
+ By detecting inconsistencies at the time of code change, we can extract automatic supervision from commit histories of open-source Java projects. Namely, we compare consecutive commits, collecting instances in which a method is modified. We extract the comment/method pairs from each version: ($C_{1}$, $M_{1}$), ($C_{2}$, $M_{2}$). In prior work, we isolate comment updates made based on code changes through cases in which $C_{1}$$\neq$$C_{2}$ [@panthaplackel2020update]. By assuming that the developer updated the comment because it would have otherwise become inconsistent as a result of code changes, we take $C_{1}$ to be inconsistent with $M_{2}$, consequently leading to a *positive example*, with $C$=$C_{1}$, $M_{old}$=$M_{1}$, and $M$=$M_{2}$. For *negative examples*, we additionally examine cases in which $C_{1}$=$C_{2}$ and assume that if the existing comment would have become inconsistent, the developer would have updated it. Following this process, we collect $\mathtt{@return}$, $\mathtt{@param}$, and summary comment examples. We additionally incorporate 7,239 positive $\mathtt{@return}$ examples from our prior work [@panthaplackel2020update] which studies $\mathtt{@return}$ comment updates.
+
+ ::: {#table:data-splits}
+                 **Train**   **Valid**   **Test**   **Total**
+   ---------- ------------ ----------- ---------- -----------
+   \@return        15,950       1,790      1,840      19,580
+   \@param          8,640         932      1,038      10,610
+   Summary          8,398       1,034      1,066      10,498
+   Full            32,988       3,756      3,944      40,688
+   Projects           829         332        357       1,518
+
+   : Data partitions.
+ :::
+
+ While convenient for data collection, the assumptions we make do not always hold in practice. For instance, if $C_{1}$ is refactored without altering its meaning, we would assign a positive label because $C_{1}$$\neq$$C_{2}$, despite it actually being consistent. Because such cases of *comment improvement* are not within the scope of our work, we adopt previously proposed heuristics [@panthaplackel2020update] to reduce the number of instances in which the comment and code changes are unrelated. The negative label is also noisy since $C_{1}$=$C_{2}$ when a developer fails to update comments in accordance with code changes, pointing to the problem we are addressing in this paper. We minimize such cases by limiting to popular, well-maintained projects [@ProjectQuality]. For more reliable evaluation, we curate a clean sample of 300 examples (corresponding to 101 projects) from the test set, consisting of 50 positive and 50 negative examples of each comment type.
+
+ In line with prior work [@RenCrossProject; @Movshovitz-AttiasCohen13PredictingProgrammingComments], we consider a cross-project setting with no overlap between the projects from which examples are extracted in training/validation/test sets. From our data collection procedure, we obtain substantially more negative examples than positive ones, which is not surprising because many changes do not require comment updates [@WenLargeStudy]. We downsample negative examples, for each partition and comment type, to construct a balanced dataset. Statistics of our final dataset are shown in Table [1](#table:data-splits){reference-type="ref" reference="table:data-splits"}.
+
+ Comments are tokenized based on space and punctuation. We parse methods into sequences using javalang [@javalang]. Comment and code sequences are subtokenized (e.g., camelCase $\rightarrow$ camel, case; snake_case $\rightarrow$ snake, case), as done in prior work [@Alon2019Code2Seq; @FernandesSummarization], to capitalize on composability and better address the open vocabulary problem in learning from source code [@CvitovicOpenVocab]. Details on data statistics, filtering, and annotation procedures are given in Appendix [10](#appendix:data){reference-type="ref" reference="appendix:data"}.
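+
+ A small sketch of the subtokenization step (the regular expression is one common way to split camelCase; the exact rules used in our pipeline may differ):
+
+ ```python
+ import re
+
+ CAMEL = re.compile(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])")
+
+ def subtokenize(token):
+     parts = token.split("_")                  # snake_case -> snake, case
+     return [p.lower() for part in parts for p in CAMEL.findall(part)]
+
+ assert subtokenize("camelCase") == ["camel", "case"]
+ assert subtokenize("snake_case") == ["snake", "case"]
+ ```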
38
+
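+ A minimal sketch of the subtokenization step (regex-based splitting is our assumption; the exact tokenizer used in the paper may differ):
+
+ ```python
+ import re
+
+ def subtokenize(token):
+     """Split camelCase/snake_case identifiers into lowercase subtokens."""
+     parts = []
+     for piece in token.split('_'):
+         # split on lowercase-to-uppercase boundaries and digit runs
+         parts.extend(re.findall(r'[A-Z]+(?![a-z])|[A-Z]?[a-z]+|\d+', piece))
+     return [p.lower() for p in parts if p]
+
+ assert subtokenize('camelCase') == ['camel', 'case']
+ assert subtokenize('snake_case') == ['snake', 'case']
+ ```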
39
+ We outline baseline, post hoc, and just-in-time inconsistency detection models.
40
+
41
+ **Lexical overlap:** A comment often has lexical overlap with the corresponding method. We include a rule-based just-in-time baseline, [Overlap]{.smallcaps}($C$, deleted), which classifies $C$ as inconsistent if at least one of its tokens matches a code token belonging to a $\mathtt{Delete}$ or $\mathtt{ReplaceOld}$ span in $M_{edit}$.
42
+
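+ A sketch of this rule, where the `(action, tokens)` encoding of the spans in $M_{edit}$ is our illustrative assumption:
+
+ ```python
+ def overlap_inconsistent(comment_tokens, edit_spans):
+     """Flag the comment as inconsistent if any of its tokens appears in a
+     Delete or ReplaceOld span of the method edit."""
+     deleted = {tok for action, toks in edit_spans
+                if action in ('Delete', 'ReplaceOld') for tok in toks}
+     return any(tok in deleted for tok in comment_tokens)
+ ```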
43
+ **@Corazza18:** This post hoc bag-of-words approach classifies whether a comment is coherent with the method it accompanies, using an SVM over TF-IDF vectors of the comment and method. We simplify the original data pre-processing, but validate that the performance matches the reported numbers.
44
+
45
+ **CodeBERT BOW:** We develop a more sophisticated bag-of-words baseline that leverages CodeBERT [@Feng2020CodeBERTAP] embeddings. These embeddings were pretrained on a large corpus of natural language/code pairs. In the post hoc setting, we consider CodeBERT BOW($C$, $M$), which computes the average embedding vectors of $C$ and $M$. These vectors are concatenated and fed through a feedforward network. In the just-in-time setting, we compute the average embedding vector of $M_{edit}$ rather than $M$, and we refer to this baseline as CodeBERT BOW($C$, $M_{edit}$).
46
+
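+ A minimal PyTorch sketch of such a bag-of-words classifier; dimensions and module names are illustrative, not the authors' code:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class BOWClassifier(nn.Module):
+     """Average precomputed (e.g., CodeBERT) token embeddings of the comment
+     and of M (or M_edit in the just-in-time setting), concatenate them,
+     and classify with a feedforward network."""
+     def __init__(self, emb_dim=768, hidden=128):
+         super().__init__()
+         self.ff = nn.Sequential(
+             nn.Linear(2 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))
+
+     def forward(self, comment_embs, code_embs):
+         # comment_embs: (batch, n_tokens, emb_dim); likewise for code_embs
+         c = comment_embs.mean(dim=1)
+         m = code_embs.mean(dim=1)
+         return self.ff(torch.cat([c, m], dim=-1))
+ ```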
47
+ **@LiuOutdatedLine:** This is a just-in-time approach for detecting whether a block/line comment becomes inconsistent upon changes to the corresponding code snippet. Their task is slightly different, as block/line comments describe low-level implementation details and generally pertain to fewer lines of code than API comments do. However, we consider it as a baseline since it is closely related. They propose a random forest classifier that leverages features capturing aspects of the code changes (e.g., whether there is a change to a $\mathtt{while}$ statement), the comment (e.g., number of tokens), and the relationship between the comment and code (e.g., cosine similarity between representations in a shared vector space). We re-implemented this approach based on specifications in the paper, as their code was not publicly available. We disregard 9 (of 64) features that are not applicable in our setting. Details about our re-implementation are given in Appendix [11](#liu-et-al-reimplementation){reference-type="ref" reference="liu-et-al-reimplementation"}.
48
+
49
+ ::: table*
50
+ +-------------------------+--------------------------------------------------------------+-------------------------------------------+---+-------------------------------------------+
51
+ | | | **Cleaned Test Sample** | | **Full Test Set** |
52
+ +:=======================:+:=============================================================+:========:+:========:+:========:+:========:+:=:+:========:+:========:+:========:+:========:+
53
+ | 3-6 | **Model** | **P** | **R** | **F1** | **Acc** | | **P** | **R** | **F1** | **Acc** |
54
+ +-------------------------+--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
55
+ | Baselines | [Overlap]{.smallcaps}($C$, deleted) | 77.7 | 72.0 | 74.7 | 75.7 | | 74.1 | 62.8 | 68.0 | 70.4 |
56
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
57
+ | | @Corazza18  | 65.1 | 46.0 | 53.9 | 60.7 | | 63.7 | 47.8 | 54.6 | 60.3 |
58
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
59
+ | | CodeBERT BOW($C$, $M$) | 66.2 | 70.4 | 67.9 | 66.9 | | 68.9 | 73.2 | 70.7 | 69.8 |
60
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
61
+ | | CodeBERT BOW($C$, $M_{edit}$) | 65.5 | 80.9 | 72.3 | 69.0 | | 67.4 | 76.8 | 71.6 | 69.6 |
62
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
63
+ | | @LiuOutdatedLine  | 77.6 | 74.0 | 75.8 | 76.3 | | 77.5 | 63.8 | 70.0 | 72.6 |
64
+ +-------------------------+--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
65
+ | Post hoc | [Seq]{.smallcaps}($C$, $M$) | 58.9 | 68.0 | 63.0 | 60.3 | | 60.6 | 73.4 | 66.3 | 62.8 |
66
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
67
+ | | [Graph]{.smallcaps}($C$, $T$) | 60.6 | 70.2 | 65.0 | 62.2 | | 62.6 | 72.6 | 67.2 | 64.6 |
68
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
69
+ | | [Hybrid]{.smallcaps}($C$, $M$, $T$) | 53.7 | 77.3 | 63.3 | 55.2 | | 56.3 | 80.8 | 66.3 | 58.9 |
70
+ +-------------------------+--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
71
+ | Just-In-Time | [Seq]{.smallcaps}($C$, $M_{edit}$) | 83.8 | 79.3 | 81.5 | 82.0 | | 80.7 | 73.8 | 77.1 | 78.0 |
72
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
73
+ | | [Graph]{.smallcaps}($C$, $T_{edit}$) | 84.7 | 78.4 | 81.4 | 82.0 | | 79.8 | 74.4 | 76.9 | 77.6 |
74
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
75
+ | | [Hybrid]{.smallcaps}($C$, $M_{edit}$, $T_{edit}$) | 87.1 | 79.6 | 83.1 | 83.8 | | 80.9 | 74.7 | 77.7 | 78.5 |
76
+ +-------------------------+--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
77
+ | Just-In-Time + features | [Seq]{.smallcaps}($C$, $M_{edit}$) + features | 91.3 | 82.0 | 86.4 | 87.1 | | 88.4 | 73.2 | 80.0 | **81.8** |
78
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
79
+ | | [Graph]{.smallcaps}($C$, $T_{edit}$) + features | 85.8 | **87.1** | 86.4 | 86.3 | | 83.8 | **78.3** | **80.9** | 81.5 |
80
+ | +--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
81
+ | | [Hybrid]{.smallcaps}($C$, $M_{edit}$, $T_{edit}$) + features | **92.3** | 82.4 | **87.1** | **87.8** | | **88.6** | 72.4 | 79.6 | 81.5 |
82
+ +-------------------------+--------------------------------------------------------------+----------+----------+----------+----------+---+----------+----------+----------+----------+
83
+ :::
84
+
85
+ **Post hoc:** We consider three models with different ways of encoding the method: [Seq]{.smallcaps}($C$, $M$) encodes $M$ with a GRU, [Graph]{.smallcaps}($C$, $T$) encodes $T$ with a GGNN, and [Hybrid]{.smallcaps}($C$, $M$, $T$) uses both. In [Hybrid]{.smallcaps}($C$, $M$, $T$), multi-head attention is computed over the hidden states of each encoder separately, and the results are then combined.
86
+
87
+ **Just-In-Time:** To allow fair comparison with the post hoc setting, these models are identical in structure to the models described above except that $M_{edit}$ is used instead of $M$.
88
+
89
+ **Just-In-Time + features:** Because injecting explicit knowledge can boost the performance of neural models [@ChenExplicitFeatures; @XuanExternalFeatures], we investigate adding comment and code features to our approach. These are computed at the token/node-level and concatenated with embeddings before being passed to encoders. Features are derived from prior work on comments and code [@panthaplackel2020associating; @panthaplackel2020update], including linguistic (e.g., POS tags) and lexical (e.g., comment/code overlap) features.
90
+
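+ A sketch of this feature concatenation (illustrative; the exact feature set follows the cited prior work):
+
+ ```python
+ import torch
+
+ def augment_inputs(token_embs, token_feats):
+     """Concatenate token-level feature vectors (e.g., POS tags, comment/code
+     overlap indicators) with token embeddings before encoding."""
+     return torch.cat([token_embs, token_feats.float()], dim=-1)
+ ```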
91
+ ::: table*
+ +--------------------+--------------------------------------------------------------+-----------------------------------------------------------------------------------+---+-----------------------------------------------------------------------------------+
+ |                    |                                                              | **Cleaned Test Sample**                                                           |   | **Full Test Set**                                                                 |
+ +:==================:+:=============================================================+:==================:+:==================:+:==================:+:==================:+:=:+:==================:+:==================:+:==================:+:==================:+
+ | 3-6                | **Model**                                                    | **P**              | **R**              | **F1**             | **Acc**            |   | **P**              | **R**              | **F1**             | **Acc**            |
+ +--------------------+--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ | $\mathtt{@return}$ | [Seq]{.smallcaps}($C$, $M_{edit}$) + features                | 88.5$^*$           | 72.0$^*$           | **79.4$^*$**       | **81.3$^*$**       |   | **87.6$^*$**       | 73.3$^*$           | 79.8$^*$           | **81.4$^*$**       |
+ |                    +--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ |                    | [Graph]{.smallcaps}($C$, $T_{edit}$) + features              | 81.2               | **77.3**           | 79.1$^*$           | 79.7               |   | 82.2               | **79.3**           | **80.6**           | 80.9$^*$           |
+ |                    +--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ |                    | [Hybrid]{.smallcaps}($C$, $M_{edit}$, $T_{edit}$) + features | **88.7$^*$**       | 72.0$^*$           | **79.4$^*$**       | **81.3$^*$**       |   | 87.3$^*$           | 73.7$^*$           | 79.8$^*$           | **81.4$^*$**       |
+ +--------------------+--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ | $\mathtt{@param}$  | [Seq]{.smallcaps}($C$, $M_{edit}$) + features                | 90.0               | **95.3**           | 92.5               | 92.3$^\dagger$     |   | 92.2               | 88.3$^\dagger$     | 90.2               | 90.4               |
+ |                    +--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ |                    | [Graph]{.smallcaps}($C$, $T_{edit}$) + features              | **96.5**           | 92.0               | **94.2**           | **94.3**           |   | **94.5**           | **89.0$^\dagger$** | **91.7**           | **91.9**           |
+ |                    +--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ |                    | [Hybrid]{.smallcaps}($C$, $M_{edit}$, $T_{edit}$) + features | 94.6               | 89.3               | 91.8               | 92.0$^\dagger$     |   | 93.3               | 85.9               | 89.4               | 89.9               |
+ +--------------------+--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ | Summary            | [Seq]{.smallcaps}($C$, $M_{edit}$) + features                | **96.0**           | 78.7               | 86.5$^\S$          | 87.7               |   | 84.7$^\S$          | 58.3               | 69.0               | **73.9$^\S$**      |
+ |                    +--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ |                    | [Graph]{.smallcaps}($C$, $T_{edit}$) + features              | 80.8               | **92.0**           | 86.0$^\S$          | 85.0               |   | 76.0               | **66.4**           | **70.6**           | 72.5               |
+ |                    +--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+ |                    | [Hybrid]{.smallcaps}($C$, $M_{edit}$, $T_{edit}$) + features | 93.7               | 86.0               | **89.5**           | **90.0**           |   | **85.0$^\S$**      | 57.0               | 68.1               | 73.5$^\S$          |
+ +--------------------+--------------------------------------------------------------+--------------------+--------------------+--------------------+--------------------+---+--------------------+--------------------+--------------------+--------------------+
+
+ (\*, $\dagger$, and $\S$ are the footnote markers attached to these scores in the source table, for the $\mathtt{@return}$, $\mathtt{@param}$, and summary rows, respectively.)
+ :::
134
+
135
+ Models are trained to minimize negative log likelihood. We use 2-layer BiGRU encoders with hidden dimension 64. GGNN encoders, also with hidden dimension 64, are rolled out for 8 message-passing steps. We initialize comment and code embeddings, of dimension 64, with pretrained ones [@panthaplackel2020update]. Edit embeddings have dimension 8. Attention modules use 4 attention heads. We use a dropout rate of 0.6. Training ends if the validation F1 does not improve for 10 epochs.
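+ For concreteness, the hyperparameters above can be gathered as in the following sketch (the config dict and the encoder instantiation are ours, not the released training code):
+
+ ```python
+ import torch.nn as nn
+
+ CONFIG = dict(hidden_dim=64, embed_dim=64, edit_embed_dim=8, gru_layers=2,
+               ggnn_steps=8, attn_heads=4, dropout=0.6, patience=10)
+
+ # 2-layer BiGRU sequence encoder with the reported dimensions.
+ seq_encoder = nn.GRU(input_size=CONFIG['embed_dim'],
+                      hidden_size=CONFIG['hidden_dim'],
+                      num_layers=CONFIG['gru_layers'],
+                      bidirectional=True,
+                      dropout=CONFIG['dropout'],
+                      batch_first=True)
+ ```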
2010.01666/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4524da9736c1e7f6226b1306a22906072809b1c67770b555127244613ded3ac8
3
+ size 8810245
2010.05324/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1e6c5f00bba63ea2d389da1d3e3315f731a3fa41a6dc80caee07dab0e7342d62
3
+ size 248947
2101.00604/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:51eed33fda004d3bffba11006112bc2b9c0eb255f2e48de4983ddeeadf156500
3
+ size 3238358
2101.09465/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c0689c6d89585b2ef78bb787897441d174c61a0646b2d178bbb96efae0f85ebb
3
+ size 1693596
2101.09868/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8df7179458cebbaf3556d5401d51577f2deb007b9f6ce8acc437bd1829bea2d0
3
+ size 5233937
2101.11224/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2101.11224/paper_text/intro_method.md ADDED
@@ -0,0 +1,106 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Data scarcity and lack of annotation are general problems for developing machine learning models in medical imaging. Among various medical imaging modalities, ultrasound (US) is the most frequently used given its widespread availability, lower cost, and safety, since it does not involve ionizing radiation. Specifically, US imaging, in the form of echocardiography (echo), is the standard of care in cardiac imaging for the detection of heart disease. Echo examinations are performed across up to 14 standard views from several acoustic windows on the chest. In this paper, we focus on the parasternal long axis (PLAX), one of the most common views acquired in point-of-care US for rapid examination of cardiac function (Fig. [1](#fig:0){reference-type="ref" reference="fig:0"}). Several measurements from PLAX require the localization of anatomical landmarks at discrete points in the cardiac cycle. Our work specifically investigates automatic localization of the left ventricle (LV) internal dimension (LVID), which is routinely used to estimate the ejection fraction (EF), a strong indicator of cardiac function abnormality. In clinics, LVID landmarks are determined in two frames of the cardiac cycle, i.e., end-diastolic and end-systolic. However, such annotation is challenging, especially for general physicians at the point of care who do not have the experience of cardiologists. As such, automating landmark localization is highly desirable. However, developing a machine learning model for such automation has been hindered by the availability of only a sparse set of labeled frames in cardiac cines. Manually labeling all cardiac frames for a large set of cardiac cines is impractical, given limited expert time.
4
+
5
+ ![Example of PLAX images, one of the most common standard views acquired in point-of-care echocardiography. Landmarks identified on the left ventricle are used to measure the EF, a strong indicator of cardiac disease. The two landmarks on the inferolateral and anteroseptal walls (IW, AW) are shown in yellow, while the LVID is the red line. The LVID can be localized from IW and AW.](examples.png){#fig:0 width="\\linewidth"}
6
+
7
+ Instead of manually labeling every frame, we propose a new Reciprocal landmark Detection and Tracking (RDT) model that enables automated measurements across the entire cardiac cycle. The model only uses prior knowledge from sparsely labeled key frames that are temporally distant in a cardiac cycle. Meanwhile, we take advantage of the temporal coherence of cardiac cine series to impose cycle consistency when tracking landmarks across the unannotated frames that lie between the two annotated frames. To impose consistent detection and tracking of the landmarks, we propose reciprocal training as a self-supervision process.
8
+
9
+ <figure id="fig:1" data-latex-placement="!ht">
10
+ <img src="ReciprocalNet.png" />
11
+ <figcaption>The general flowchart of the proposed detection and tracking model. Gold-standard labels are only available for the end-diastolic and end-systolic frames. Propagation starts from the end-diastolic frame and ends at the end-systolic frame, and tracking is completed in a cyclic manner. The two annotated frames serve as weak supervision for the model. The detection and tracking results on the unannotated frames reciprocally provide another source of self-supervision.</figcaption>
12
+ </figure>
13
+
14
+ In summary, we propose the RDT model, which is weakly supervised by only two annotated keyframes per image sequence during training. At test time, the model runs end-to-end: it detects the landmarks in the first frame and then tracks them through the sequence. Our contributions are:
15
+
16
+ - A novel Reciprocal landmark Detection and Tracking (RDT) model, in which the spatial constraints used for detection and the temporal coherence of cardiac cine series used for tracking work reciprocally to produce accurate landmark localization;
17
+
18
+ - The sparse nature of echocardiography labels is handled by the proposed model, which is only weakly supervised by two annotated image frames that are temporally distant from each other. The effect of annotation sparsity is also analyzed in our experiments;
19
+
20
+ - A novel adversarial training approach (Ad-T) for optimizing the proposed RDT. Such training is made possible by introducing four complementary losses, as in Fig. [2](#fig:1){reference-type="ref" reference="fig:1"}: the reciprocal loss, motion loss, focal loss, and cycle loss. Compared with conventional training approaches, Ad-T indirectly achieves feature augmentation, which is extremely important given the extremely few annotations. The advantage of Ad-T is highlighted in our ablation study.
21
+
22
+ # Method
23
+
24
+ Our general RDT framework is shown in Figure [2](#fig:1){reference-type="ref" reference="fig:1"}. The model can be divided into three parts: the *feature encoder* (blue), the *detection head* (orange), and the *tracking head* (green). The feature encoder and detection head together can be viewed as a U-Net-like model. In the training phase, the input of the RDT model is an echo sequence starting from the end-diastolic frame and ending at the end-systolic frame. For the detection branch, the input is the whole frame, while for the tracking branch, the inputs are patches from two neighboring frames. The output of the network is the two predicted landmark pair locations defining the LVID.
25
+
26
+ Suppose the frames in the cardiac cine series are represented by $\{I_1, I_2, I_3,..., I_k\}$. For model training, we take the end-diastolic frame to be the $1^{st}$ frame and the end-systolic frame to be the $k^{th}$ frame. The $1^{st}$ and $k^{th}$ frames are annotated, while the in-between frames are unannotated. The landmark pairs are represented by $\{i_t, a_t\}$ ($i_t = \{x^i_t, y^i_t\}, a_t = \{x^a_t, y^a_t\}$), corresponding to the landmarks on the inferolateral and anteroseptal walls of the LV in the $t^{th}$ frame, respectively. We use $\phi$ to represent the *feature encoder*, and the feature generated for $I_t$ is represented by $\phi_{I_t}$. $\phi_{I_{t}}$ alone is input to the *detection head* $D$ to obtain the predicted landmark locations $\{i_t^D, a_t^D\}$. For the *tracking head*, the input is the cropped features of two consecutive frames: one serves as the template frame while the other serves as the search frame. For landmark tracking, the predicted locations start from the $2^{nd}$ frame. After a cycle of forward and backward propagation, the predicted location ends back at the $1^{st}$ frame.
27
+
28
+ The feature encoder consists of six 3$\times$3 convolution layers, each followed by a rectified linear unit (ReLU). The third convolution layer has a stride of 2. Since a single feature encoder is sufficient for the tracking head, we share this encoder between the tracking and detection branches. Because the shared encoder is optimized by losses generated from the different heads, the encoded feature should be robust: its optimization considers both the spatial information exploited by the detection branch and the temporal information explored by the tracking branch.
29
+
30
+ The detection head combined with the feature encoder can be viewed as a U-Net-like structure, which consists of a contracting path and an expansive path. The contracting path follows the typical architecture of a convolutional network. The beginning of the detection head is another six layers for feature generation, with two downsampling steps similar to those in the shared feature encoder; however, we also double the number of feature channels in these two steps. Every step in the expansive path consists of an upsampling of the feature map followed by a 2$\times$2 convolution ("up-convolution"). The first two upsampling layers halve the number of feature channels. We also concatenate the output of each upsampling layer with the correspondingly cropped feature map from the contracting path. Each 3$\times$3 convolution is followed by a ReLU. As padding is applied, there is no cropping anywhere in the network. Of the final two layers used for classification, the first is a 3$\times$3 convolution layer and the second is a 1$\times$1 layer, which maps each 48-component feature vector to the desired number of landmarks (here, two). The last layer's output is a two-channel heatmap, where each location represents the probability of a target landmark.
31
+
32
+ The focal loss is computed on annotated frames. For each landmark, there is one ground-truth positive location in each channel of the heatmap (the two landmarks correspond to the two channels), and all other locations are negative. Under such ground truth, penalizing negative locations equally with the positive ones is not appropriate, so we apply a focal loss. During training, we reduce the penalty given to negative locations within a radius of the positive location; we empirically set the radius to 10 pixels. The amount of penalty reduction is given by an unnormalized 2D Gaussian $e^{-(x^2+y^2)/2\sigma^2}$, whose center is at the positive location and whose $\sigma$ is 1/3 of the radius. Let $p_{c_{i,j}}$ be the score at location $(i,j)$ for landmark $c$ in the predicted heatmap, and let $y_{c_{i,j}}$ be the ground-truth heatmap augmented with the unnormalized Gaussians. We create a variant of the focal loss [@lin2017focal]: $$\begin{equation}
+ \scriptsize
+ {\mathcal{L}_{\det }} = -\sum\limits_{c = 1}^2 {\sum\limits_{i = 1}^H {\sum\limits_{j = 1}^W {\left\{ {\begin{array}{*{20}{l}}
+ {{{(1 - {p_{c_{i,j}}})}^\alpha }\log ({p_{c_{i,j}}})\quad if\quad {y_{c_{i,j}}} = 1}\\
+ {{{(1 - {y_{c_{i,j}}})}^\beta }{{({p_{c_{i,j}}})}^\alpha }\log (1 - {p_{c_{i,j}}})\quad otherwise,}
+ \end{array}} \right.} } } \label{eq:0}
+ \end{equation}$$ where $\alpha$ and $\beta$ are hyperparameters that control the contribution of each point (we empirically set $\alpha$ to 2 and $\beta$ to 4 in all experiments). With the Gaussian distribution encoded in $y_{c_{i,j}}$, the term $1 - {y_{c_{i,j}}}$ reduces the penalty around the ground-truth locations.
39
+
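+ A minimal PyTorch sketch of this loss; the tensor shapes are our assumption:
+
+ ```python
+ import torch
+
+ def detection_focal_loss(pred, gt, alpha=2, beta=4, eps=1e-6):
+     """Penalty-reduced focal loss over per-landmark heatmaps.
+     pred, gt: (2, H, W); gt equals 1 only at the true landmark locations
+     and holds the unnormalized Gaussians elsewhere."""
+     pred = pred.clamp(eps, 1 - eps)
+     pos = gt.eq(1)
+     pos_loss = ((1 - pred) ** alpha * torch.log(pred))[pos].sum()
+     neg_loss = (((1 - gt) ** beta) * (pred ** alpha)
+                 * torch.log(1 - pred))[~pos].sum()
+     return -(pos_loss + neg_loss)
+ ```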
40
+ For the tracking head, once we obtain $\phi_{I_{t}}$ and $\phi_{I_{t-1}}$, we first crop template and search patches centered at the landmark pairs in the two consecutive frames. The two template patches for the inferolateral/anteroseptal landmarks are concatenated and represented by $P_{t-1}$, while the two search patches for the inferolateral/anteroseptal landmarks are concatenated and represented by $N_t$.
41
+
42
+ The input to the tracking branch is the template patch $P_{t-1}$ with size $25\times25$ and the search patch $N_t$ with size $29\times29$, both centered at the landmark pair $i_{t-1}, a_{t-1}$. The sizes of $P_{t-1}$ and $N_t$ are labeled in Fig. [2](#fig:1){reference-type="ref" reference="fig:1"} and are set empirically. We formulate the *tracking head* $T$ as $\delta_{i_t}, \delta_{a_t} = T(\phi_{P_{t-1}}, \phi_{N_{t}})$.
43
+
44
+ For the tracking head, we first define a convolutional operation between $\phi_{P_{t-1}}$ and $\phi_{N_t}$ in order to compute the affinity (similarity) between each sub-patch of $\phi_{N_t}$ and $\phi_{P_{t-1}}$. To be more specific, $\phi_{P_{t-1}}$ and $\phi_{N_t}$ are combined by using a cross-correlation layer
45
+
46
+ $$\begin{equation}
47
+ f(\phi_{N_t}, \phi_{P_{t-1}}) = \phi_{P_{t-1}} * \phi_{N_t} .
48
+ \end{equation}$$ Note that the output of this function is a feature map indicating the *affinity score*. For a hands-on implementation, it is simple to take $\phi_{P_{t-1}}$ as a kernel matrix and compute a dense convolution over $\phi_{N_t}$ within the framework of existing conv-net libraries. The output feature map is followed by another three fully connected layers (represented by $m$ in Eq. [\[eq:3-1\]](#eq:3-1){reference-type="ref" reference="eq:3-1"}) to predict the landmark motion. This regression operation is formulated as $$\begin{equation}
+ T(\phi_{P_{t-1}}, \phi_{N_{t}}) = \delta_{i_t}, \delta_{a_t} = m(f(\phi_{N_t}, \phi_{P_{t-1}});\theta_f) , \label{eq:3-1}
+ \end{equation}$$ where $\theta_f$ represents the parameters of the fully connected network. $\delta_{i_t}$ and $\delta_{a_t}$ are both two-dimensional moves (along the x-axis and y-axis, respectively). The new landmark location is calculated by adding the predicted motion to the previous location. Such motion prediction is broadly similar to optical flow, with a new three-layer regression incorporated; this regression makes the learning process adaptive.
51
+
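+ The cross-correlation above can be sketched with a standard dense convolution (a minimal sketch; shapes are illustrative):
+
+ ```python
+ import torch.nn.functional as F
+
+ def affinity_map(template_feat, search_feat):
+     """Slide the template feature map over the search feature map by using
+     the template as a convolution kernel.
+     template_feat: (C, h, w); search_feat: (C, H, W) with H >= h, W >= w."""
+     return F.conv2d(search_feat.unsqueeze(0),    # input:  (1, C, H, W)
+                     template_feat.unsqueeze(0))  # kernel: (1, C, h, w)
+ ```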
52
+ ![Optimization of the proposed reciprocal training.](ReciprocalTraining.png){#fig:3 width="\\linewidth"}
53
+
54
+ As the tracking process is only supervised by end-diastolic and end-systolic frames, we introduce the cycle loss and motion loss to supervise the tracking branch. To model the cycle process, we iteratively apply the tracking head $T$ in a forward manner: $$\begin{equation}
55
+ \begin{array}{l}
56
+ {L_t}^* = T({\phi _{{P_{t - 1}}}},{\phi _{{N_t}}}) + {L_{t-1}}^*\\
57
+ = T({\phi _{{P_{t - 1}}}},{\phi _{{N_t}}}) + T({\phi _{{P_{t - 2}}}},{\phi _{{N_{t - 1}}}}) + {L_{t-2}}^*\\
58
+ = T({\phi _{{P_{t - 1}}}},{\phi _{{N_t}}}) + ... T({\phi _{{P_1}}},{\phi _{{N_2}}}) + {L_1}^* , \label{eq:3}
59
+ \end{array}
60
+ \end{equation}$$ in which $L_t^*=\{i_t,a_t\}$ represents the predicted location of the landmark pair in the $t^{th}$ frame, while $L_1^*=\{i_1,a_1\}$ represents the ground truth location of the landmark pair in the first annotated frame. Here "+" represents the element-wise addition between the location of the landmarks in the current frame and the motion calculated in Eq. [\[eq:3-1\]](#eq:3-1){reference-type="ref" reference="eq:3-1"}. Also, we use the same formulation in a backward manner as: $$\begin{equation}
61
+ \begin{array}{l}
62
+ {L_1}^* = T({\phi _{{P_2}}},{\phi _{{N_1}}}) + {L_2}^*\\
63
+ = T({\phi _{{P_2}}},{\phi _{{N_1}}}) + T({\phi _{{P_3}}},{\phi _{{N_2}}}) + {L_3}^*\\
64
+ = T({\phi _{{P_t}}},{\phi _{{N_{t - 1}}}}) + ...T({\phi _{{P_2}}},{\phi _{{N_1}}}) + {L_t}^* . \label{eq:4}
65
+ \end{array}
66
+ \end{equation}$$
67
+
68
+ We use the labeled end-diastolic frame as the beginning of the echo cine series and the end-systolic frame as its end. The motion loss is defined by the deviation between the predicted landmark pair locations in the end-systolic frame and their ground truth locations. Suppose the labeled end-systolic frame is the $k^{th}$ frame; after forward propagation, the motion loss $\mathcal{L}_{motion}^k$ is defined as $$\begin{equation}
69
+ \begin{array}{l}
70
+ \mathcal{L}_{motion}^k = \mathcal{L}_{1\rightarrow k} = \|{L_k} - {L_k}^*\|^2\\
71
+ = \|{L_k} - (T({\phi _{{P_{k - 1}}}},{\phi _{{N_k}}}) + ...T({\phi _{{P_1}}},{\phi _{{N_2}}}) + {L_1})\|^2 .\label{eq:5}
72
+ \end{array}
73
+ \end{equation}$$ The forward propagation is followed by a backward propagation that ends at the end-diastolic frame. By combining Eq. [\[eq:3\]](#eq:3){reference-type="ref" reference="eq:3"} and Eq. [\[eq:4\]](#eq:4){reference-type="ref" reference="eq:4"}, the current predicted landmark pair location in the end-diastolic frame, $L_1^*$, can be expressed through its ground truth location $L_1$, and we use the deviation between these two terms as the cycle loss, as follows: $$\begin{equation}
74
+ \begin{array}{l}
75
+ \mathcal{L}_{cycle}^k = \mathcal{L}_{1\rightarrow k\rightarrow 1} = \|{L_1} - {L_1}^*\|^2\\
76
+ = \|{L_1} - {L_k}^* + {L_k}^* - {L_1}^*\|^2\\
77
+ = \|(T({\phi _{{P_{k - 1}}}},{\phi _{{N_k}}}) + ...T({\phi _{{P_1}}},{\phi _{{N_2}}})) +\\
78
+ (T({\phi _{{P_k}}},{\phi _{{N_{k - 1}}}}) + ...T({\phi _{{P_2}}},{\phi _{{N_1}}}))\|^2.
79
+ \end{array}\label{eq:6}
80
+ \end{equation}$$ Finally, the cycle loss can be simplified as $$\begin{equation}
81
+ \mathcal{L}_{cycle}^k = - (\mathcal{L}_{motion}^k + \mathcal{L}_{motion}^1).\label{eq:7}
82
+ \end{equation}$$
83
+
84
+ The motion loss, cycle loss, and focal loss described above are applied to the annotated frames, whereas the reciprocal loss is proposed only for the unannotated frames and can be viewed as self-supervision. In the training phase, only the end-diastolic and end-systolic frames are annotated, while the in-between frames are unannotated. For these unannotated frames, we can generate both $i_t^D, a_t^D = \arg\max(D(\phi_{I_t}))$ and $i_t^T, a_t^T = T(\phi_{P_{t-1}}, \phi_{N_{t}}) + i_{t-1}^T, a_{t-1}^T$. Although no annotation is available, the two predicted landmark pair locations are assumed to be the same, and the discrepancy between them forms the reciprocal loss. The frame rate for the reciprocal loss is set to 3, meaning the loss is generated every third frame. As $D(\phi_{I_t})$ is a heatmap in which each location indicates the probability of the target location, we define the reciprocal loss similarly to the focal loss: we take $i_t^T$ and $a_t^T$ to be the only positive locations of frame $t$, augmented as a 2D Gaussian distribution centered at each positive location, and view the heatmap predicted by the detection branch as the predicted locations. The resulting reciprocal loss ${\mathcal{L}_{rec}(D, T)}$ has the same form as Eq. [\[eq:0\]](#eq:0){reference-type="ref" reference="eq:0"}.
85
+
86
+ The basic idea of the proposed RDT model is to create reciprocal learning between the detection task and the tracking task: the detection task mainly focuses on the spatial information of a single frame, while the tracking task considers the temporal correlation between consecutive frames. Since the detected and tracked landmark pair locations are assumed to be the same, we use the discrepancy between the two branches to optimize both the feature encoder $\phi$ and the detection/tracking heads.
87
+
88
+ We propose a novel adversarial optimization mechanism. The motivation is feature augmentation, as the amount of training data is very limited; trained on the augmented features, both the detection head D and the tracking head T in Fig. [3](#fig:3){reference-type="ref" reference="fig:3"} can be made more robust. In Fig. [3](#fig:3){reference-type="ref" reference="fig:3"}, blue represents the feature distribution of the target landmark pair, and orange represents the background. To generate feature distributions different from those of the annotated frames, we propose to utilize the disagreement between D and T on the predictions for unannotated frames. We assume D and T can predict the locations in annotated frames correctly. The key intuition is that feature distributions of unannotated data lying outside the support of the annotated ones are likely to be predicted differently by D and T; black lines denote this region in Fig. [3](#fig:3){reference-type="ref" reference="fig:3"} (Discrepancy Region). Therefore, if we measure the disagreement between D and T and train $\phi$ to maximize it, the encoder will generate more unknown feature distributions outside the support of the annotated ones. The disagreement here is our previously formulated reciprocal loss ${\mathcal{L}_{rec}(D, T)}$. This goal is achieved by the iterative steps in Fig. [4](#fig:4){reference-type="ref" reference="fig:4"}: we first update the feature encoder to maximize ${\mathcal{L}_{rec}(D, T)}$; we then freeze the encoder and update D and T to minimize ${\mathcal{L}_{rec}(D, T)}$, in order to obtain uniform predictions for the newly generated unknown features. Detailed optimization steps are described as follows.
89
+
90
+ We need to train D and T, which take inputs from $\phi$. Both D and T must predict the annotated landmark pair locations correctly. We solve this problem in three steps, as can be found in Fig. [4](#fig:4){reference-type="ref" reference="fig:4"}.
91
+
92
+ **Step A.** First, we train D, T, and $\phi$ to predict the landmark pairs of annotated frames correctly. We train the networks to minimize three losses applied to annotated frames. The objective is as follows: $$\begin{equation}
93
+ \mathop {\min }\limits_{\phi ,D,T} ({\mathcal{L}_{\det }} + \mathcal{L}_{motion}^k + \mathcal{L}_{cycle}^k) ;
94
+ \end{equation}$$
95
+
96
+ **Step B.** In this step, we train the feature encoder $\phi$ for fixed D and T. By training the encoder to increase the discrepancy, more unknown feature distributions different from the annotated data can be generated. Note that this step only uses the unannotated data. The objective can be formulated as: $$\begin{equation}
97
+ \mathop {\max }\limits_\phi ({\mathcal{L}_{rec}}({\rm{D}},T)) ;
98
+ \end{equation}$$
99
+
100
+ **Step C.** We train D and T to minimize the discrepancy with a fixed $\phi$. As this step is to get the uniformed and correct detection/tracking results, the step is repeated for three times for the same mini-batch empirically. This setting achieves a trade-off between the encoder and the heads (detection, tracking). This step is applied on both annotated and unannotated frames, to get the best model weights of detection/tracking heads for all the existing features. The objective is as follows: $$\begin{equation}
101
+ \mathop {\min }\limits_{D,T} ({\mathcal{L}_{\det }} + \mathcal{L}_{motion}^k + \mathcal{L}_{cycle}^k + {\mathcal{L}_{rec}}({\rm{D}},T)) .
102
+ \end{equation}$$
103
+
104
+ These three steps are repeated until convergence. Weights for the different losses are empirically set to 1 in both Step A and Step C. In our experience, the order of the three steps is not essential; the primary concern is to train D, T, and $\phi$ in an adversarial manner, as sketched below.
105
+
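+ A minimal sketch of this three-step loop, assuming PyTorch-style optimizers and loss callables; all names here are ours, not the authors' released code:
+
+ ```python
+ def adversarial_round(batch, L, opt_all, opt_phi, opt_heads, n_head_iters=3):
+     """One round of Steps A-C. `L` maps loss names ('det', 'motion',
+     'cycle', 'rec') to callables returning scalar losses for `batch`;
+     opt_all covers phi, D, and T; opt_phi only phi; opt_heads only D and T."""
+     # Step A: fit the encoder and both heads on the annotated frames.
+     loss_a = L['det'](batch) + L['motion'](batch) + L['cycle'](batch)
+     opt_all.zero_grad(); loss_a.backward(); opt_all.step()
+
+     # Step B: update only the encoder to *maximize* the D/T discrepancy
+     # (reciprocal loss) on unannotated frames, via a negated loss.
+     loss_b = -L['rec'](batch)
+     opt_phi.zero_grad(); loss_b.backward(); opt_phi.step()
+
+     # Step C: with the encoder fixed, update D and T to minimize all
+     # losses; repeated three times per mini-batch, as noted above.
+     for _ in range(n_head_iters):
+         loss_c = (L['det'](batch) + L['motion'](batch)
+                   + L['cycle'](batch) + L['rec'](batch))
+         opt_heads.zero_grad(); loss_c.backward(); opt_heads.step()
+ ```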
106
+ ![Stepwise model training process.](ModelTraining.pdf){#fig:4 width="107%"}
2102.00436/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4d1b3c332313af887d1b027103736b3436f42e5e5b5b606fe8d9d5faba56ad56
3
+ size 507507
2102.07936/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="Electron" modified="2021-06-11T14:15:12.441Z" agent="5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) draw.io/14.6.13 Chrome/89.0.4389.128 Electron/12.0.7 Safari/537.36" version="14.6.13" etag="JOa5qMWCxtNGzzlWz3na" type="device"><diagram id="2Vygsz_PsGUIp1a-fJaA">7V1tc9s2Ev41mnE/REOA7x9jJW5vmt6kl/bumi8eWqIl1rKoo+jG7q8/kCIoElgQfAFIyY7baSUQIqXdZ1+w2F3MzMXj849JsN/8Eq/C7Qwbq+eZ+WGGMXIt8t9s4KUYcFx0HFkn0aoYOw18if4Oi0GjGH2KVuGhNjGN420a7euDy3i3C5dpbSxIkvhbfdp9vK0/dR+siycap4Evy2AbctP+E63SzXHUsyuzfwqj9YY+GRnFlceATi5ucdgEq/hb5Vnmx5m5SOI4Pb56fF6E24x4lC7HG90IrpZfLAl3aZsP4OMH/gq2T8VvK75X+kJ/bLhbvc9oRt7t4h0ZvF4Fh02Y3QCRN5v0cVu8DJ+j9L/ktVG8/qPy+sNz9c1L8eY+2m4X8TZO8keZKzv0VhYZP6RJ/BBWrnj4znSc7OO7NHnJHoLmhmHSgfxJc8MvB06Py9+9VN99DpPoMUzDpBg8/uJwta5z9xA/JctiyKEoC5J1WBDW5mmNSg4S6IcxeUjyQqZ8O2GEQmRTgQcdS8JtkEZ/1b9FUEB1Xd6ufMLnOCIPxgYVK4OitRArbDlzbFT+7Potjz+wuEsVI8yNPZe5L31Pb3SkCncj8qJChtNQDkEYjqZuOCJdcDTmnmNV4ZjhE71hOPoUbgVqLI9BTWv4eQz82Bupg58lh986iZ/2xbQwScPnGnkKgxPcbVn7IeUK/bWU/BUulbepssm0DDFLajRo+MG2/AeT37tbleL1bROl4Zd9kCPxG7HudZGrClAhnWLZEVKxSi0LphYFVUticSqrD7EcXhcRIf1SvI2TdBOv412w/XgavT5RLxPt05xPcbwvaPZnmKYvhY8TPKUxQ9F4l94Ej9E2+8W/BZv4MShGi48gr5fiEmgZjvR1xUPHqornCCKeRa31R1v6uy2Mw5tjiMMzxBTIjHKGeOfBkJFIbRoA9scitT8FZU/OTd3Pdm2JXyOXEat4X2GPYfi+0c8L68lQe0JlRpeyE3EU1Tn6OhiKAYaazkCG5h8lK47gpTJhnzmbh8qdGZ/VrrusyLer2Og4nbw4Pr+vW4vQpFirwOuP6rXeWBvTwJrAUuvoiYyNKcQsw03DaASVZP5wVE1i7fVpMI/XYKbp+zc3Y6IN8q+9SdBmMjpJhjZ2vqVYh5ln7OxPjBrqiNfs3lAd1dqRsaZgTG/jMpYguwBL/LFYYp/TaqGNZi5uip2JvcVjAHp0XWuhmu7EMl3LzEeeq1bXThLpGuIvToIfE8DP0NVGaxE/j2DYuathDKhhPFQNC4SS3aZg4/PqtimQN4kvNBLPLKyFP55fY0/LTaSuqpzxgr1mRQ5/J2VqfJKgYXeY/Pn0uKdfKkiW5wYc5NS55IyCHEfiAsBfShV08LTRyUvxAHwNHmRbDwBPG9TrGn6ZcN01WC+0ZskkPpjS2Py9twyXS4gxd55t2Q35KTLlDOyejccYKK/I2abZ79wHO3JluQ0Oxwc4/3uK8wvp6XWNp3Qwo+i7Q07i92QCdvbP+dTTh5x1/v+PeOZdzzybPpJ82+NTi+scZrbbaH/IsCHJtqhDZxMn0d/kOwVVYFTV4ja4C7fXwfJhnX+QydM4Xo2TVZgwV2oAW8S7Q7zNmJGRbhnt1r/lOH1nzoZldJSGDc/riUOlY11J8qARjmqOB7IVAIXPAPrl02eOQ71zYtqpPqGsZgwtvoWhgNYOs2JxeEI7QDKNqSCXBsw8OopH9ivbCBzCAoH78V+/V0TteDuBqI3AyBPD6koZNzBaPW8hIYJ4yzq3vXjrnIsQsbTXRGtkMcQ2RxQkKBBkZf/ai2V8KF6fpQLjnRQNvBhVqUEJUQUvssz/u7uZe/1x5n54c1xx5QJi6WKKL2bKfhO9OVZwAoLGExB6D44XV/Htw8z+mFK7fZecTPbiKb9GRCd9h3LhYef8IGWi0Y2JYOpyMaiABUy6j4l5DngAB7AKDiCeA4ubmceHCrUQr7aUIIRJ1lHAwB5pgr0jJzqYOa6C6Jgn+mlRKFz9dVjeWaqWdz+F27/CNFoG3PouU1sZB8i17ftttN6RwcdotcqDGMXU4qsW7z6F92nxXZUwcMqFoTlJ+kkzSw9pkKRMLVQ+dhNlv6OIwpXVUnmII1oeB4spDI6AZJbFwnHGzcoFUqDMoSlQrdkMVQBltuUrb3VuH642DVYr879XcXpptglZrJj5vJTZgJQpUZRQVCAj3wYmMyG/AurCloonuQrqsr6XzdHW1GWE+FX5ibaZczXD15l7dek0Rd6INBUtvn99ncqB8aEAzQCtHJRQWrS0/iqh9JPo2qi0F/pYKvBvellFDseJUo8r54VoRS1D/XnwQqEcjAB8S7RkJiok3B+ibbwbU1/X1wRYDTmZvgUI82jWtTSzgPVwTk4hVi8Mqthn3DkAu1ibqrCAlW8ZfItauBrdo2/3dvYPtDBx8j9+iXP8axF9GwP8gE3Vhn0t3Tbo687NX+7vQwdOPVi5/p1R5pQcE4QMB89qGSiG4c6ac1D6dtug7YqqK1Lf5/k8bfcXhy0iNHD9Hm37bSDErgC52K/CjhstWm706PgyN5BdxeHcJ06LJizWMqGkHV/IQ3tAkMKtCkEKy/OBoMEBx+/ZcohHs8egWSEEJ6mWaQ7tcdlbqrK0ailZTmsADUWG5TOBD6OdRuleSUMXJ61LaQTfTFUmrSUKu1wdjv7lgm4O38/c66dsf6uy16VkYdTftWxuVYQswyxXnlRObZfzX1xd/os7heD2zYA9I4HvaUV0F8ExoouKiKUySWzRrkZ3dy9kAHtWpuPPTe/Upg/x4mLaKuSFj9vM53MNugVY3jYmPFRKqetLLNQ+QUuwQ3TyJ+pa3uXVlK4gsc1HcAaTvQXZ1MYYOpKX7niPQV5RBOcqvkUNoUh0cTklzTww6ywwgRQrXUkltijK8+stutpIuXAZAeFm6vtyAdDlBtnDUxP4CrxpqrcsfXWmtGRW1kUVT+L7MMV/2MFVBHSdP9hVskVpEL/ektUKEZOUaM6fid4k0l0faBD1NhMvTu7RlII/PHDxRgUfCOjagii9XsFnq35lgi+ZP1zwRdGKr2/GkmM++g2ItLYdM3t4UOONCjVkzadp/siEIG3FMipOhbkw45xvtGR/WuWZ7csyrjiLcmXI6rQ/c8Zct47DJAsMgZW5+v7c5wM2upazDh+woUTl2GMvDtEjmfv71cy+Jp9bEDOoJmo/qhdr4zm2eZpbhm1ZPvJ8H0iqhMk/N7Bv2bbhIc
JFyzUVsIMP8Lxfh7nC5VtgnEXdlgqBYOOVvNJCUERNRZVW4VNO1/zF9Kyay0Ew1etgn9F8DxvwPZyhuyit2TVNZ9TR/Dqgcchg2vaL0jCeBC0hE0ZpDLdp/mBH0OGjNFQx/vx6FaPRRjO6kD+nRDd+j7kojLlM0tnaRA67wpc0W5N+YrgoT91x1UE1k2vMfd+SQGlikwtkxDlDY3ittcAkmSfTmtxJ4qNdTS5boaTc5EKxF0kjsX98un/696pbMzFk7Z+BTmJFpX67NmKXV8AvWaRaLG/dbIXKHW85Qua8I4zqXEBSn4jMBVnpbtUIG1guFGipShNFdq0x2LJUdCf5yM9cuL/nReYmWKZZ4wSio7JCKeOfYfotTh6gQJtohOsh1lX8ejjcQejdg+l+ztIL7+45x9q8DoqWDEvC58waKxE5y2VbLtCigmocCHC1qZ4chI7hDU4Vu9qzTgmf4lRNYTWopGmnqBdDmR3a195D1QXGJL45AzfFKaB0rXimqXiic2RUiLLr1CkLqHldGXouVGdWFAF+Psg7cPVOQ22WJvCYHoVBdZdTnhavPLGmfSeXjw31Na3LZa51ONP6ZRPsQ95sSi2swJ6CE7VZ2BYFXqNYWGwidudlVBsrariSiWaL5niaRBM0dCq3Gx2e6mMK59TRHrf/gYzMmq+HsyKXvJ5ujAv0fncnCVuYiLG2viRswWQRMvOH+z18n5p3PAQFjedEyQBNDelyT0h5QzpZ+7lKpzpZQ7pi5DpO0/gxd9QUqRbT8NiauLLYcYQ+de4kRwP1XSJdimaxgc0Ld5JDYLkEUFlAlMVi1/mG2qPlXL7R6NhHZHTHWe/NttHwCRwAqgafUtOGMZPio+o4JPY5pnSPrmn+YODSjYjziUZlHuwr9OI8IJdmsBfX1nzSh393lLQ7StyhCbTDxwheEn3UGQnzq5RlWhhalWVvmtwt1m9Csr4gXtP84ebk7Io2X6nnDrS8mAaBTFWILCSAkN6QgMeH9aZWgVNGpVrsuCrUgdNsrnXWgQg1zR8OwYmPqZ9n2ZzjIY4Xl1YZdPqACQU1JgFm16CG2zh9OC75Itgz3iUu8aXCKcc2F70cc6fYa86rLAig0wp1aubYvHeoNMODOkg9ZR1sYTrWScVem4LVxzDYZTWr1cS5TVNZJJBh92pqVj2LlUJACB2wyEGJGPJB2tGWHp2McB/JakipE1jevjIHZFGPJnO0q2WjzP2Zfpe4cr3vzIG28/pkzD+T7MbmDFOVNmyoPPkmIE9D/dXW7MIt5OmQZ0R9F6lCShDXHRayYie5Uy5iUN5h03HrFx7ON2cNAfxZpQvt5/gQ5fnwlWQ2+oxPzITyWVrS35DLdtsBjqfQtQvgK+093706CzwxoYiCCRvViwK2rixi21PrQm11RLVgo510UEOMiXD9Fm2bzGMkuZG6HvM+lGc5OdRwP6jhtws1w5oziqc32PhbKYQbH8G6Is5JQsBESJfT6fCDOneiQ7SqbAygxHqwcUOgSRAywFjV3FDQmcaftgF9VwugrH5YViuMDICX02z7cgXBklJ/ZLFVpl0/QI8qVBWN9s+hbb1ls9qLFzQfEjRXhR8PVdsKdVfHWhBQd/EbzxJF5cualkipB58fyNmIPvQrVSBQ5XHd//zMdwOOe7Q/3JIVL7m8iHZ0DXxH+9tBbe3OrqOaiONUC9htJUSBgJT6Fqj2aqrsyuI0y2XHzgQY6kzwOUiC4yamqB7s0Fxy3ffBwqIyVfqBkClExFS6kJLwHdcMaPsj0cmBWJESYWtjbT5ciaClsgoVXNakiRHWt6cFLTf7FDzerYJqNC0N0jBXBguip04zHuTao9qssT0yNBWzZVK2WMD6Rwcu/DJmLUOGGt0jqu79CvCS9se4kAinhPIW4xRBhHe00X14AteY6efNQVGCjyRloh/52E2UUaC4XxkfyZvsRMcUmlUxhYnEerwgijYmWuj3mTyi4QMRjdIw11dHQzcBey2FaHSlbK3QvK5pnj54VYOMqZO/FKKXtfZtDICyrFcfqP1BaBKIIZtreYAbFsPg54mbgD3bPf7XrN0Ms4egCWJvveA4cUl2PZSj7CxBtVoQjiI3B5F7a05gtxcdA/RDYS3NtabBaR04mzhkOLfrem9uSKP5bbJU1aqyGhDUqDJI19Q1lWGUu9XqA/LIuKi6bC3mTqObReVXrQ1s74pPUtSs3ZfpYDzanJdens1eO5ldi/FAQLqrNuNhwzkuGowHGl6DrPyk8nNfuVVg59SAN88ahjZuf5M3rNnr7c1A68Cj1tLnpDc0pmdKb1s61tyNfDYAwtxHoQGltUjT9UzyyzMojvjxfffMPSddCs+Ze8KzepA/902//POwqw8SUEz6e37fqPl9kggpsnwmImAB+2Ka8vsKD7AOkI945l3PoNJDAUya4IAtHXDgmzTIADITdW/Qw0MiedAWgzY2Qnmaxa7R4elRunUwSnfIlg6MEoaY9aUrkFVtATvNKo4/KVdaUCoBYcddmAa3D+Jdnsn3/UfZF3K4MCjiWaRtYwiJz3B9s6UKEobR6g5Jx9Ue7CFvkzjb8D45OeTnbn6JV2E24/8=</diagram></mxfile>
2102.07936/main_diagram/main_diagram.pdf ADDED
Binary file (90.9 kB). View file
 
2102.07936/paper_text/intro_method.md ADDED
@@ -0,0 +1,111 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ In multi-agent reinforcement learning (MARL), one popular research direction is to enhance the training procedure of fully cooperative and decentralized agents. Examples of such agents include a fleet of unmanned aerial vehicles (UAVs), a group of autonomous cars, etc. This research direction aims to develop a decentralized, cooperative behavior policy for each agent, and is especially difficult in MARL settings without an explicit communication channel. The most straightforward approach is independent Q-learning (IQL) (Tan, 1993), where each agent is trained independently, with its behavior policy aimed to optimize the overall rewards in each episode. Nevertheless, each agent's policy may not converge owing to two main difficulties: (1) non-stationary environments caused by the changing behaviors of the agents, and (2) spurious reward signals originating from the actions of the other agents. The agent's partial observability of the environment further exacerbates these issues. Therefore, in the past few years, a number of MARL researchers turned their attention to centralized training with decentralized execution (CTDE) approaches, with the objective of stabilizing the training procedure while maintaining the agents' ability to execute in a decentralized manner (Oliehoek et al., 2016). Among these CTDE approaches, value function factorization methods (Sunehag et al., 2018; Rashid et al., 2018; Son et al., 2019) are especially promising in terms of their superior performance and data efficiency (Samvelyan et al., 2019).
8
+
9
+ Value function factorization methods introduce the individual-global-max (IGM) assumption (Son et al., 2019), which posits that each agent's optimal action yields the optimal joint action of the entire group. Based on IGM, the total return of a group of agents can be factorized into separate utility functions (Guestrin et al., 2001) (or simply 'utilities' hereafter) for each agent. The utilities allow the agents to independently derive their own optimal actions during execution, and deliver promising performance in the StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019). Unfortunately, current value function factorization methods concentrate only on estimating the expectations of the utilities, overlooking the additional information contained in the full return distributions. Such information, nevertheless, has been demonstrated to be beneficial for policy learning in the recent literature (Lyle et al., 2019).
10
+
11
+ In the past few years, distributional RL has been empirically shown to enhance value function estimation in various single-agent RL (SARL) domains (Bellemare et al., 2017; Dabney et al., 2018b;a; Rowland et al., 2019; Yang et al., 2019). Instead of estimating a single scalar Q-value, it approximates the probability distribution of the return by either a categorical distribution (Bellemare et al., 2017) or a quantile function (Dabney et al., 2018b;a). Even though the above methods may be beneficial in the MARL domain due to their ability to capture uncertainty, they are inherently incompatible with expected value function factorization methods (e.g., value decomposition network (VDN) (Sunehag et al., 2018) and QMIX (Rashid et al., 2018)). The incompatibility arises from two aspects: (1) maintaining IGM in a distributional form, and (2) factorizing the probability distribution of the total return into individual utilities. As a result, an effective and efficient approach that resolves this incompatibility is crucial for bridging the gap between value function factorization methods and distributional RL.
16
+
17
+ In this paper, we propose a **D**istributional Value Function **Fac**torization (DFAC) framework to efficiently integrate value function factorization methods with distributional RL. DFAC resolves the incompatibility through two techniques: (1) Mean-Shape Decomposition and (2) Quantile Mixture. The former allows the generalization of expected value function factorization methods (e.g., VDN and QMIX) to their DFAC variants without violating IGM. The latter allows the total return distribution to be factorized into individual utility distributions in a computationally efficient manner. To validate the effectiveness of DFAC, we first demonstrate its ability to factorize return distributions on a two-step matrix game with stochastic rewards. Then, we perform experiments on all Super Hard maps in SMAC. The experimental results show that DFAC improves the baseline methods on all Super Hard maps. In summary, our primary contribution is the introduction of DFAC, which efficiently bridges the gap between distributional RL and value function factorization methods by means of mean-shape decomposition and quantile mixture.
18
+
19
+ # Method
20
+
21
+ In this section, we walk through the proposed DFAC framework and its derivation procedure. We first discuss a naive distributional factorization and its limitation in Section 3.1. Then, we introduce the DFAC framework to address the limitation, and show that DFAC is able to generalize distributional RL to all factorizable tasks in Section 3.2. After that, a practical implementation of DFAC based on quantile mixture is presented in Section 3.3. Finally, *DDN* and *DMIX* are introduced as the DFAC variants of VDN and QMIX, respectively, in Section 3.4. All proofs of the theorems in this section are provided in the supplementary material.
22
+
23
+ Since IGM is necessary for value function factorization, a distributional factorization that satisfies IGM is required for factorizing stochastic value functions. We first discuss a naive distributional factorization that simply replaces the deterministic utilities $Q$ with stochastic utilities $Z$. Then, we provide a theorem to show that this naive distributional factorization is insufficient to guarantee the IGM condition.
24
+
25
+ **Definition 1** (Distributional IGM). A finite number of individual stochastic utilities $[Z_k(h_k, u_k)]_{k \in \mathbb{K}}$ are said to satisfy Distributional IGM (DIGM) for a stochastic joint action-value function $Z_{\mathrm{jt}}(\mathbf{h}, \mathbf{u})$ under $\mathbf{h}$, if $[\mathbb{E}[Z_k(h_k, u_k)]]_{k \in \mathbb{K}}$ satisfy IGM for $\mathbb{E}[Z_{\mathrm{jt}}(\mathbf{h}, \mathbf{u})]$ under $\mathbf{h}$, represented as:
26
+
27
+ $$\arg \max_{\mathbf{u}} \mathbb{E}[Z_{\mathrm{jt}}(\mathbf{h}, \mathbf{u})] = \begin{pmatrix} \arg \max_{u_1} \mathbb{E}[Z_1(h_1, u_1)] \\ \vdots \\ \arg \max_{u_K} \mathbb{E}[Z_K(h_K, u_K)] \end{pmatrix}.$$
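+
+ Operationally, DIGM licenses the usual decentralized greedy execution on expected utilities. A minimal sketch (the array shapes are our own assumption, not from the paper):
+
+ ```python
+ import numpy as np
+
+ def decentralized_greedy(z_k: np.ndarray) -> np.ndarray:
+     """z_k: (K, A, N) quantile samples of Z_k(h_k, u) for K agents,
+     A actions per agent, and N quantile levels."""
+     q_k = z_k.mean(axis=-1)      # E[Z_k(h_k, u)] for every agent/action
+     return q_k.argmax(axis=-1)   # each agent's own arg max, as in DIGM
+ ```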
28
+
29
+ **Theorem 1.** Given a deterministic joint action-value function $Q_{\mathrm{jt}}$, a stochastic joint action-value function $Z_{\mathrm{jt}}$, and a factorization function $\Psi$ for deterministic utilities:
30
+
31
+ $$Q_{\mathrm{jt}}(\mathbf{h}, \mathbf{u}) = \Psi(Q_1(h_1, u_1), ..., Q_K(h_K, u_K)|s),$$
32
+
33
+ such that $[Q_k]_{k\in\mathbb{K}}$ satisfy IGM for $Q_{\mathrm{jt}}$ under $\mathbf{h}$, the following distributional factorization:
34
+
35
+ $$Z_{\mathrm{jt}}(\mathbf{h}, \mathbf{u}) = \Psi(Z_1(h_1, u_1), ..., Z_K(h_K, u_K)|s)$$
36
+
37
+ is insufficient to guarantee that $[Z_k]_{k\in\mathbb{K}}$ satisfy DIGM for $Z_{\mathrm{jt}}$ under $\mathbf{h}$.
38
+
39
+ In order to satisfy *DIGM* for stochastic utilities, an alternative factorization strategy is necessary.
40
+
41
+ We propose Mean-Shape Decomposition and the DFAC framework to ensure that *DIGM* is satisfied for stochastic utilities.
42
+
43
+ <span id="page-4-0"></span>**Definition 2** (Mean-Shape Decomposition). A given random variable $Z$ can be decomposed as follows:
44
+
45
+ <span id="page-4-0"></span>dom variable Z can be decomposed as follows:
46
+
47
+ $$Z = \mathbb{E}[Z] + (Z - \mathbb{E}[Z]) = Z_{\text{mean}} + Z_{\text{shape}},$$
50
+
51
+ where $\mathrm{Var}(Z_{\text{mean}}) = 0$ and $\mathbb{E}[Z_{\text{shape}}] = 0$.
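+
+ As a small numerical illustration of this decomposition (a sketch on return samples, in our own notation rather than the paper's code):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ z = rng.normal(loc=3.0, scale=2.0, size=10_000)  # samples of a return Z
+
+ z_mean = z.mean()       # deterministic part: E[Z]
+ z_shape = z - z_mean    # stochastic part: Z - E[Z]
+
+ # Var(Z_mean) = 0 holds by construction (z_mean is a constant),
+ # and E[Z_shape] = 0 up to floating-point error.
+ assert abs(z_shape.mean()) < 1e-9
+ ```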
52
+
53
+ We propose DFAC to decompose a joint return distribution $Z_{\rm jt}$ into its deterministic part $Z_{\rm mean}$ (i.e., the expected value) and its stochastic part $Z_{\rm shape}$ (i.e., the higher moments), which are approximated by two different functions $\Psi$ and $\Phi$, respectively. The factorization function $\Psi$ is required to precisely factorize the expectation of $Z_{\rm jt}$ in order to satisfy DIGM. On the other hand, the shape function $\Phi$ is allowed to only roughly factorize the shape of $Z_{\rm jt}$, since the main objective of modeling the return distribution is to assist the non-linear approximation of the expectation of $Z_{\rm jt}$ (Lyle et al., 2019), rather than to accurately model the shape of $Z_{\rm jt}$.
54
+
55
+ **Theorem 2** (DFAC Theorem). Given a deterministic joint action-value function $Q_{\mathrm{jt}}$, a stochastic joint action-value function $Z_{\mathrm{jt}}$, and a factorization function $\Psi$ for deterministic utilities:
56
+
57
+ $$Q_{\mathrm{jt}}(\mathbf{h}, \mathbf{u}) = \Psi(Q_1(h_1, u_1), ..., Q_K(h_K, u_K)|s),$$
58
+
59
+ such that $[Q_k]_{k \in \mathbb{K}}$ satisfy IGM for $Q_{\mathrm{jt}}$ under $\mathbf{h}$, by Mean-Shape Decomposition, the following distributional factorization:
60
+
61
+ $$\begin{split} Z_{\mathrm{jt}}(\mathbf{h}, \mathbf{u}) &= \mathbb{E}[Z_{\mathrm{jt}}(\mathbf{h}, \mathbf{u})] + (Z_{\mathrm{jt}}(\mathbf{h}, \mathbf{u}) - \mathbb{E}[Z_{\mathrm{jt}}(\mathbf{h}, \mathbf{u})]) \\ &= Z_{\mathrm{mean}}(\mathbf{h}, \mathbf{u}) + Z_{\mathrm{shape}}(\mathbf{h}, \mathbf{u}) \\ &= \Psi(Q_{1}(h_{1}, u_{1}), ..., Q_{K}(h_{K}, u_{K})|s) \\ &\quad + \Phi(Z_{1}(h_{1}, u_{1}), ..., Z_{K}(h_{K}, u_{K})|s) \end{split}$$
62
+
63
+ is sufficient to guarantee that $[Z_k]_{k\in\mathbb{K}}$ satisfy DIGM for $Z_{\mathrm{jt}}$ under $\mathbf{h}$, where $\mathrm{Var}(\Psi)=0$ and $\mathbb{E}[\Phi]=0$.
64
+
65
+ Theorem 2 reveals that the choice of $\Psi$ determines whether IGM holds, regardless of the choice of $\Phi$, as long as $\mathbb{E}[\Phi]=0$. Under this setting, any differentiable factorization function of deterministic variables can be extended to a factorization function of random variables. Such a decomposition enables the approximation of joint distributions for all factorizable tasks under appropriate choices of $\Psi$ and $\Phi$.
66
+
67
+ Next, we provide a practical implementation of the shape function $\Phi$ in DFAC, effectively extending any differentiable factorization function $\Psi$ (e.g., the additive function of VDN, the monotonic mixing network of QMIX, etc.) that satisfies the IGM condition into its DFAC variant.
68
+
69
+ Theoretically, the sums of random variables that appear in the DFAC variants *DDN* and *DMIX* (Section 3.4) can be described precisely by a joint CDF. However, the exact derivation of this joint CDF is usually computationally expensive and impractical (Lin et al., 2019).
70
+
71
+ As a result, DFAC utilizes the property of the quantile mixture to approximate the shape function $\Phi$ in $O(KN)$ time.
72
+
73
+ **Theorem 3.** *Given a quantile mixture:*
74
+
75
+ $$F^{-1}(\omega) = \sum_{k=1}^{K} \beta_k F_k^{-1}(\omega)$$
76
+
77
+ with $K$ components $[F_k^{-1}]_{k\in\mathbb{K}}$ and non-negative model parameters $[\beta_k]_{k\in\mathbb{K}}$, there exists a set of random variables $Z$ and $[Z_k]_{k\in\mathbb{K}}$ corresponding to the quantile functions $F^{-1}$ and $[F_k^{-1}]_{k\in\mathbb{K}}$, respectively, with the following relationship:
78
+
79
+ $$Z = \sum_{k \in \mathbb{K}} \beta_k Z_k.$$
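+
+ A sketch of this property with toy Gaussian components of our own choosing: evaluating every component quantile function at a shared quantile level realizes samples of $Z = \sum_{k} \beta_k Z_k$:
+
+ ```python
+ import numpy as np
+ from scipy import stats
+
+ # Component quantile functions F_k^{-1} and non-negative weights beta_k.
+ quantile_fns = [stats.norm(0.0, 1.0).ppf, stats.norm(2.0, 3.0).ppf]
+ betas = [0.5, 1.0]
+
+ rng = np.random.default_rng(0)
+ omega = rng.uniform(size=100_000)  # shared quantile levels in (0, 1)
+
+ # Quantile mixture: F^{-1}(omega) = sum_k beta_k * F_k^{-1}(omega);
+ # z holds samples of Z = sum_k beta_k * Z_k for comonotonic Z_k.
+ z = sum(b * f(omega) for b, f in zip(betas, quantile_fns))
+ ```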
80
+
81
+ Based on Theorem 3, the quantile function $F_{\text{shape}}^{-1}$ of $Z_{\text{shape}}$ in DFAC can be approximated by the following:
82
+
83
+ $$F_{\text{shape}}^{-1}(\mathbf{h}, \mathbf{u}|\omega) = F_{\text{state}}^{-1}(s|\omega) + \sum_{k \in \mathbb{K}} \beta_k(s) (F_k^{-1}(h_k, u_k|\omega) - Q_k(h_k, u_k)), \quad (17)$$
84
+
85
+ where $F_{\mathrm{state}}^{-1}(s|\omega)$ and $[\beta_k(s)]_{k\in\mathbb{K}}$ are respectively generated by function approximators $\Lambda_{\mathrm{state}}(s|\omega)$ and $[\Lambda_k(s)]_{k\in\mathbb{K}}$ , satisfying constraints $\beta_k(s)\geq 0, \forall k\in\mathbb{K}$ and $\int_0^1 F_{\mathrm{state}}^{-1}(s|\omega)\,\mathrm{d}\omega=0$ . The term $F_{\mathrm{state}}^{-1}$ models the shape of an additional state-dependent utility (introduced by QMIX at the last layer of the mixing network), which extends the state-dependent bias in QMIX to a full distribution. The full network architecture of DFAC is illustrated in Fig. 1.
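+
+ A minimal sketch of Eq. (17) on $N$ sampled quantile levels, assuming the per-agent quantile values, the expectations $Q_k$, the state quantiles, and the weights $\beta_k(s)$ have already been produced by the corresponding networks (the array names below are ours):
+
+ ```python
+ import numpy as np
+
+ def shape_quantiles(f_state, f_agents, q_agents, betas):
+     """Eq. (17): quantile values of Z_shape at N sampled levels omega.
+
+     f_state : (N,)   F_state^{-1}(s | omega), zero-mean over omega
+     f_agents: (K, N) F_k^{-1}(h_k, u_k | omega) for each agent k
+     q_agents: (K,)   Q_k(h_k, u_k), i.e., E[Z_k]
+     betas   : (K,)   non-negative state-dependent weights beta_k(s)
+     """
+     centered = f_agents - q_agents[:, None]   # F_k^{-1} - Q_k
+     return f_state + (betas[:, None] * centered).sum(axis=0)
+ ```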
86
+
87
+ This transformation enables DFAC to decompose the quantile representation of a joint distribution into the quantile representations of individual utilities. In this work, $\Phi$ is implemented by a large IQN composed of multiple IQNs, optimized through the loss function defined in Eq. (12).
88
+
89
+ In order to validate the proposed DFAC framework, we next discuss the DFAC variants of two representative factorization methods: VDN and QMIX. *DDN* extends VDN to its DFAC variant, expressed as:
90
+
91
+ $$Z_{\rm jt} = \sum_{k \in \mathbb{K}} Q_k + \sum_{k \in \mathbb{K}} (Z_k - Q_k), \text{ given} \quad (18)$$
+
+ $Z_{\text{mean}} = \sum_{k \in \mathbb{K}} Q_k$, $Z_{\text{shape}} = \sum_{k \in \mathbb{K}} (Z_k - Q_k)$; while *DMIX* extends QMIX to its DFAC variant, expressed as:
95
+
96
+ $$Z_{\rm jt} = M(Q_1, ..., Q_K|s) + \sum_{k \in \mathbb{K}} (Z_k - Q_k), \text{ given} \quad (19)$$
+
+ $$Z_{\text{mean}} = M(Q_1, ..., Q_K | s), \quad Z_{\text{shape}} = \sum_{k \in \mathbb{K}} (Z_k - Q_k).$$
100
+
101
+ Both *DDN* and *DMIX* choose $F_{\text{state}}^{-1} = 0$ and $[\beta_k = 1]_{k \in \mathbb{K}}$ for simplicity. Automatically learning the values of $F_{\text{state}}^{-1}$ and $[\beta_k]_{k \in \mathbb{K}}$ is left as future work.
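+
+ For concreteness, a rough sketch of the resulting joint quantile computations (our array names; `mix` stands in for QMIX's monotonic mixing network):
+
+ ```python
+ import numpy as np
+
+ def ddn_joint_quantiles(f_agents: np.ndarray) -> np.ndarray:
+     # Eq. (18) with beta_k = 1 and F_state^{-1} = 0: the Q_k terms
+     # cancel, so the joint quantiles are simply sum_k F_k^{-1}(omega).
+     return f_agents.sum(axis=0)
+
+ def dmix_joint_quantiles(f_agents, q_agents, mix):
+     # Eq. (19): only the mean passes through the monotonic mixing
+     # network M(. | s); the shape is an unweighted sum of the centered
+     # per-agent quantiles.
+     return mix(q_agents) + (f_agents - q_agents[:, None]).sum(axis=0)
+ ```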
102
+
103
+ <span id="page-5-0"></span>![](_page_5_Figure_1.jpeg)
104
+
105
+ Figure 1: The DFAC framework consists of a factorization network $\Psi$ and a shape network $\Phi$ for decomposing the deterministic part $Z_{\text{mean}}$ (i.e., $Q_{\text{jt}}$) and the stochastic part $Z_{\text{shape}}$ of the total return distribution $Z_{\text{jt}}$, as described in Theorem [2](#page-4-0). The shape network contains parameter networks $\Lambda_{\text{state}}(s;\omega)$ and $[\Lambda_k(s)]_{k\in\mathbb{K}}$ for generating $Z_{\text{state}}(s)$ and $\beta_k(s)$.
106
+
107
+ In the previous expected value function factorization methods (e.g., VDN, QMIX, etc.), the factorization is achieved by modeling $Q_{\text{jt}}$ and $[Q_k]_{k\in\mathbb{K}}$ as deterministic variables, overlooking the information of higher moments in the full return distributions $Z_{\text{jt}}$ and $[Z_k]_{k\in\mathbb{K}}$. In order to demonstrate DFAC's ability of factorization, we begin with a toy example modified from [(Rashid et al., 2018)](#page-9-0) to show that DFAC is able to approximate the true return distributions, and to factorize the mean and variance of the approximated total return $Z_{\text{jt}}$ into utilities $[Z_k]_{k\in\mathbb{K}}$. Table [1](#page-6-0) illustrates the flow of a two-step game consisting of two agents and three states 1, 2A, and 2B, where State 1 serves as the initial state, and each agent is able to perform an action from $\{A, B\}$ in each step. In the first step (i.e., State 1), the action of agent 1 (i.e., action $A_1$ or $B_1$) determines which of the two matrix games (State 2A or State 2B) is played in the next step, regardless of the action performed by agent 2 (i.e., action $A_2$ or $B_2$). For all joint actions performed in the first step, no reward is provided to the agents. In the second step, both agents choose an action and receive a global reward according to the payoff matrices depicted in Table [1](#page-6-0), where the global rewards are sampled from a normal distribution $\mathcal{N}(\mu, \sigma^2)$ with mean $\mu$ and standard deviation $\sigma$. The hyperparameters of the two-step game are provided in detail in the supplementary material.
108
+
109
+ Table [2](#page-6-0) presents the learned factorization of *DMIX* for each state after convergence, where the first rows and the first columns of the tables correspond to the factorized distributions of the individual utilities (i.e., $Z_1$ and $Z_2$), and the main content cells correspond to the joint return distributions (i.e., $Z_{\text{jt}}$). From Tables [2](#page-6-0)(b) and [2](#page-6-0)(c), it can be observed that no matter whether the true returns are deterministic (i.e., State 2A) or stochastic (i.e., State 2B), *DMIX* is able to approximate the true returns in Table [1](#page-6-0) properly, which is not achievable by expected value function factorization methods. The results demonstrate DFAC's ability to factorize the joint return distribution rather than only the expected return. *DMIX*'s ability to recover the optimal joint policy in the two-step game further shows that *DMIX* can represent the same set of tasks as QMIX.
110
+
111
+ To further illustrate DFAC's capability of factorization, Figs. [2](#page-6-0)(a) and [2](#page-6-0)(b) visualize the factorization of the joint action $\langle B_1, B_2 \rangle$ in State 2A and in State 2B, respectively. As IQN approximates the utilities $Z_1$ and $Z_2$ implicitly, $Z_1$, $Z_2$, and $Z_{\text{jt}}$ can only be plotted in terms of samples. $Z_{\text{jt}}$ in Fig. [2](#page-6-0)(a) shows that *DMIX* degenerates to QMIX when approximating deterministic returns (i.e., $\mathcal{N}(7, 0)$), while $Z_{\text{jt}}$ in Fig. [2](#page-6-0)(b) exhibits *DMIX*'s ability to capture the uncertainty in stochastic returns (i.e., $\mathcal{N}(8, 29)$).
2102.09337/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:02b5187417b2cb4d7bacf7e667e236bb8e31c6c39ecbc7002f5593387987c791
3
+ size 626765
2102.13045/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7defe644d1caca8bde54e6cc477dad318edc87c5044cc15f7868838bcc431fcd
3
+ size 883853
2103.01937/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7808a5b1883faf0d41f3ffacc506060ac039d875350974216409cf6b73a75b58
3
+ size 2044931
2103.06818/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d5efca947260f1a6edb00b11a06f70b3dc2b87036e5bbb3c44aebb0a0b1ece7a
3
+ size 9907234
2103.07969/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:356d607e8da78ea995bb90899cae5b46eeec8cc5862f6c4ae30551a681de717b
3
+ size 10087052
2103.08733/paper_text/intro_method.md ADDED
@@ -0,0 +1,155 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Method
2
+
3
+ The presented experiments aim to position our model's item recommendation performance relative to similar literature and state-of-the-art approaches. Therefore, we use **ReDial** [@ReDial] and **KBRD** [@KBRD], the current state of the art on this dataset. In contrast to these baselines, our model has explainable properties, even though these are not evaluated through online user studies. Furthermore, we test three models based on our approach. The first one, **Ours**, is our proposed approach as described in Section [3](#sec:methodology){reference-type="ref" reference="sec:methodology"}. The second one, **Ours E2E**, shares the same architecture, but its two modules are trained end-to-end, directly predicting the target item from the conversation without taking advantage of the categorical information of the target items. Finally, as an oracle setting, **Ours + GT**, we assume the category preference prediction to be ideal, providing the true category vector of the target item to the second step of our approach, which is the only trained module in this setting.
4
+
5
+ ::: {#tab:results}
6
+ Model Rec@1 % Rec@10 %
7
+ ------------------ ---------- -----------
8
+ ReDial [@ReDial] 1.50 10.49
9
+ KBRD [@KBRD] 2.15 **16.42**
10
+ Ours E2E 0.95 5.98
11
+ Ours **2.37** 13.21
12
+ Ours + GT 31.85 62.73
13
+
14
+ : Performance of the compared models on the Conversational Item Recommendation task on the ReDial dataset.
15
+ :::
16
+
17
+ The performances of the compared approaches are summarized in Table [1](#tab:results){reference-type="ref" reference="tab:results"}. Compared to the baselines, our proposed model **Ours** outperforms ReDial and achieves results comparable to KBRD. As [@rec_1_importance] suggests, Rec@1 is the most important metric in conversational recommendation, since usually only one item is suggested at a time in such a setting. We recall, however, that the two baselines are not explainable. Additionally, when training our model end-to-end (Ours E2E), the model is not explainable and its recommendation performance drops to less than half. This emphasizes the added value of utilizing categorical information in terms of recommendation performance. Finally, in the oracle setting (Ours + GT), where we assume a perfect category preference prediction, the performance is greatly improved, which indicates that there is room for improvement in the category preference modelling of our approach that would lead to improved recommendation results.
18
+
19
+ As [@viewpoint_regr] points out, it is common to present examples for assessing the explainability of a model. To that end, we demonstrate our two-step approach on two conversations of ReDial's test set in Figures [\[fig:example\]](#fig:example){reference-type="ref" reference="fig:example"} and [\[fig:example2\]](#fig:example2){reference-type="ref" reference="fig:example2"}.
20
+
21
+ On the example presented in Figure [\[fig:example\]](#fig:example){reference-type="ref" reference="fig:example"}, even though no category is explicitly mentioned, our proposed category preference modelling approach is able to translate "Disney Classics" and "to show my niece" into a preference for the categories 'Children', 'Animation', 'Adventure' and 'Comedy', in order of importance. Moreover, our item recommender is able to use this category preference prediction to recommend two out of the three movies that the Recommender is about to suggest. Finally, the predicted preferred categories overlap with the ground-truth categories of the target items to a high degree.
22
+
23
+ In the second example, shown in Figure [\[fig:example2\]](#fig:example2){reference-type="ref" reference="fig:example2"}, our model correctly predicts the movie that the Recommender is about to suggest. Additionally, the correct recommendation follows from the three predicted preferred categories {Thriller, Crime, Drama}, which include the category the user explicitly mentions (\"crime movies\"). This demonstrates the potential ability of our model to correlate items with categories when this information is unavailable. We recall that 25% of the items used in our experiments lack categorical information, which makes this ability very useful.
24
+
25
+ <figure data-latex-placement="h">
26
+ <table>
27
+ <thead>
28
+ <tr>
29
+ <th style="text-align: center;"><strong>Sender</strong></th>
30
+ <th style="text-align: center;"><strong>Message</strong></th>
31
+ </tr>
32
+ </thead>
33
+ <tbody>
34
+ <tr>
35
+ <td style="text-align: center;"><strong><span class="math inline">…</span></strong></td>
36
+ <td style="text-align: center;"><strong><span class="math inline">…</span></strong></td>
37
+ </tr>
38
+ <tr>
39
+ <td style="text-align: left;">Recommender</td>
40
+ <td style="text-align: left;">Great, what kinds of movies are you looking for ?</td>
41
+ </tr>
42
+ <tr>
43
+ <td rowspan="2" style="text-align: left;">Seeker</td>
44
+ <td style="text-align: left;">I ’m looking for Disney classics, like <strong>Masked_Item</strong></td>
45
+ </tr>
46
+ <tr>
47
+ <td style="text-align: left;">to show my niece. Can you recommend any ?</td>
48
+ </tr>
49
+ <tr>
50
+ <td style="text-align: left;"><strong>Ours</strong></td>
51
+ <td style="text-align: left;"><strong>Applying Category Preference Modelling:</strong></td>
52
+ </tr>
53
+ </tbody>
54
+ </table>
55
+ <figure>
56
+ <img src="Media/rsz_21802_selected_long.png" />
57
+ </figure>
58
+ <table>
59
+ <tbody>
60
+ <tr>
61
+ <td style="text-align: left;"><strong>Our top 2 Rec.</strong></td>
62
+ <td style="text-align: left;"><strong>Corresponding Items’ Ground Truth Categories:</strong></td>
63
+ </tr>
64
+ <tr>
65
+ <td style="text-align: left;">1. Moana (2016)</td>
66
+ <td style="text-align: left;">{Animation, Children, Adventure, Comedy &amp; Fantasy }</td>
67
+ </tr>
68
+ <tr>
69
+ <td style="text-align: left;">2. Coco (2017)</td>
70
+ <td style="text-align: left;">{Animation, Children &amp; Adventure }</td>
71
+ </tr>
72
+ <tr>
73
+ <td style="text-align: left;"><strong>Our Explanation</strong></td>
74
+ <td style="text-align: left;"><em>"Because you are looking for something that combines :"</em></td>
75
+ </tr>
76
+ <tr>
77
+ <td style="text-align: left;">(Over 50%)</td>
78
+ <td style="text-align: left;">Children(80%), Animation(74%), Adventure(61%) &amp; Comedy(60%)</td>
79
+ </tr>
80
+ <tr>
81
+ <td rowspan="2" style="text-align: left;">Recommender</td>
82
+ <td style="text-align: left;">Recent films like <u>Moana (2016)</u> and <u>Zootopia</u> are</td>
83
+ </tr>
84
+ <tr>
85
+ <td style="text-align: left;">great. I also enjoyed Pixar’s <u>Coco (2017)</u></td>
86
+ </tr>
87
+ <tr>
88
+ <td rowspan="2" style="text-align: left;">Seeker</td>
89
+ <td style="text-align: left;">Hmm. I have not seen any of these. I am writing</td>
90
+ </tr>
91
+ <tr>
92
+ <td style="text-align: left;">them down now! Thanks a lot!!</td>
93
+ </tr>
94
+ </tbody>
95
+ </table>
96
+ </figure>
97
+
98
+ <figure data-latex-placement="h">
99
+ <table>
100
+ <thead>
101
+ <tr>
102
+ <th style="text-align: center;"><strong>Sender</strong></th>
103
+ <th style="text-align: center;"><strong>Message</strong></th>
104
+ </tr>
105
+ </thead>
106
+ <tbody>
107
+ <tr>
108
+ <td style="text-align: center;"><strong><span class="math inline">…</span></strong></td>
109
+ <td style="text-align: center;"><strong><span class="math inline">…</span></strong></td>
110
+ </tr>
111
+ <tr>
112
+ <td rowspan="2" style="text-align: left;">Seeker</td>
113
+ <td style="text-align: left;">Do you know any good crime movies</td>
114
+ </tr>
115
+ <tr>
116
+ <td style="text-align: left;">I really like crime movies</td>
117
+ </tr>
118
+ <tr>
119
+ <td style="text-align: left;"><strong>Ours</strong></td>
120
+ <td style="text-align: left;"><strong>Applying Category Preference Modelling:</strong></td>
121
+ </tr>
122
+ </tbody>
123
+ </table>
124
+ <figure>
125
+ <img src="Media/rsz_21424_selected_long.png" />
126
+ </figure>
127
+ <table>
128
+ <tbody>
129
+ <tr>
130
+ <td style="text-align: left;"><strong>Our top 2 Rec.</strong></td>
131
+ <td style="text-align: left;"><strong>Corresponding Items’ Ground Truth Categories:</strong></td>
132
+ </tr>
133
+ <tr>
134
+ <td style="text-align: left;">1. Seven (1995)</td>
135
+ <td style="text-align: left;"><strong>No Categorical Information Available</strong></td>
136
+ </tr>
137
+ <tr>
138
+ <td style="text-align: left;">2. Zodiac (2007)</td>
139
+ <td style="text-align: left;">{Thriller, Drama &amp; Crime }</td>
140
+ </tr>
141
+ <tr>
142
+ <td style="text-align: left;"><strong>Our Explanation</strong></td>
143
+ <td style="text-align: left;"><em>"Because you are looking for something that combines :"</em></td>
144
+ </tr>
145
+ <tr>
146
+ <td style="text-align: left;">(Over 50%)</td>
147
+ <td style="text-align: left;">Thriller(64%), Crime(62%) &amp; Drama(56%)</td>
148
+ </tr>
149
+ <tr>
150
+ <td style="text-align: left;">Recommender</td>
151
+ <td style="text-align: left;">Yes, I love <u>Seven (1995)</u> and <u>Godfather (1991)</u></td>
152
+ </tr>
153
+ </tbody>
154
+ </table>
155
+ </figure>
2103.13558/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:97628fe4e7c2b5d32ef6bf9e444d0255d53266c8aad0f1afaaa42326b4fcafa6
3
+ size 2795991
2104.00764/paper.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:daf6c4ab51e24a02ff06ff1118075ab83b0526d913b5e370e0dd3f718182308f
3
+ size 1101415
2104.03945/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-04-01T15:28:19.472Z" agent="5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" etag="dEpeQ37_8TgZCgJxvNgv" version="14.5.4"><diagram id="kp9NOVqEjyB-o2rSa300" name="Page-1">7Z3dU+NGFsX/Gj+Gsr6lR2wgSVVSU7Vkazf7ptiKrcRYlBGByV+/MqgNUpuMdMvWuaM5L8lYGGTrd/rjnnu7e+LN756/36X365+LZbaZuNPl88S7mriu4/hJ9b/9lc+vV6Jp9HphtcuX9ZveLtzmf2f1xWl99TFfZg+NN5ZFsSnz++bFRbHdZouycS3d7Yqn5tt+LzbNu96nq8y6cLtIN/bV/+TLcv16NXajt+s/ZPlqbe7shPUXvkvNm+tv8rBOl8XTu0ve9cSb74qifP3X3fM82+wfnnkur79388FPDx9sl23LLr/w579/vrzZPP3wv+T+X39/Wi/yhf/Hd/Vf+SvdPNZfuP6w5WfzBHbF43aZ7f/IdOLNntZ5md3ep4v9T58q5tW1dXm3qV451T9/L7blTXqXb/a458Xd/WOZ7aq/txfGblu/4bZ43L38/rosK4xu4F1W/6k++P4/+zc8XKyKYrXJ0vv84WJR3L38YPHw8tab31//fPXPdzcI3FnzFq9C2j/lWf0Vs12ZPX/47JwDkUrKWXGXlbvqHtP6F9ykhlir2Anr109vmvDrS+t3cjDX0lqFq8NffgNV/aNm1YObS25duHmeMm4+uUnamzsFcwvITdLe4NxCcuvU3pwWNx/MLSK3TtwCZdxichP1k2huCbmJ+skYzM18HoLr2VHCwTHyFrU4z0WDY+gtanF4cB7BdQKXaANH00Q0qcSDo2siG+PQLqVD20Q2xsHB0TeRjXFwcDRORF2ljzaYHTonoq4SDs5MjgiuZ4tDW5UunRNZi4ODo3MimpzgwdE5EQXgeHB0TkRjXIAOB1w6J6IxDg+OzolojMODo3MiGuPw4GznZFndpM2u+sZlE9BDuSv+zObFpthVV7bFNts/sXyzaV1KN/mqephXi+oJVQ/Ym+2fX75IN5f1D+7y5XJ/m6OKaGqmDUSDSE4gCieMGqJwom6icM8mCtuV2Uzm3mQWTqrPYT4K5TGQPAKvIY/DdAslD9OHvZPHw2JNaQCk4cdNaQRoadjuUsWAmhhSE1FzinGYK8I0YRtXd/vvTVEMOcVojiGHeShMFLYpVmGgKIYVhdsUBXre6R0x3F6mnVTG0MrwG8rw4fMK29FbccKJiEWCpjDgsYjtGF5/uqUohu0tmjPOCD65oBspK05AL+vwWMcly7iha5U91nHJEjdocOYPE1zPxA0cHOu4uoFr2WH4PUxsO4zgOoCDb4bhs45LBg5dx+WzjksGDh0O+CwHEoGDV076DMBF4PCzSgbgncD5jrZZJQNwETi45WUyKwTXDxy8qwzsADy1yDHvcNZKl1aWEl3pEtih/ZKSQEoiQKeiAts0+CXdrbKyuvbpsayewlcgEF+JQF5uUT8o5wRq8adNtcRhYKnlMLMbRi62VVE/U3f645Zq6aOWXVGmZV7sv4kbTc8gH7c1lTxSJDOweuwqmYuvQDCjGn+aBdlh4oLHH1pooox6iM42BLTQRGtY8eBooYlKIfDgaKF1mzW3Whzc+zT3J7h+4ODbtoWsYRGBg3ufZuEIwX0BnLaDr0LbjiK4DuDgxUcha1hk4NClECH3IpKBQydmQzonInDwfdNDOicycPDJCZ0TETj4ZsAhnRMZOPTkxJQfEtw/gwu0HYQb0TkRgYPHcRGdExk4eFdJ50QEDj6rjOiciMDBZ5URnRMZOHhXSeekGzhtS5MjOicicPhZJZ0TETi4VxnRORGBg49xJjtBcD3BodM6MZ2TTuBCbStcYzonInDwyUlM50QGDj7G0TkRgYPPKmM6JyJw+FklnRMZOPisks6JCBy8kjmJLU7ZcpWZZaLFrlwXq2Kbbq7frraWMr6956eiuK/5/ZGV5ef64aWPZdGkmz3n5X/3v34R1K9+ffeTq+f6L7+8+Fy/eP2c+w/3z8+++i5GFh99aXMiR/m6Ev1Lb7Rp7rJNWuZ/NT/JGU4DPPWUv7nSdeJ6QRyG0aWW9vZuyW37s515uSM8unNHWfxl6S2cx9ezG5t1PL905+55WLctmAjd57qjLDvqzHoWXQU3N+dh3d6mGF5F7Y4y/a6CdbuaEH+W6CizGSpYh4G2Pjz+psfrq9n11VDtGr4w3pueeh5O1h9sggAfrz2H+TGZk4EefT2HHpRoxxh47Os5YxxLB3At4KnNQ6aH5PrVqOJ7SwOK5HpujIbOkXku6wlkO6PhyY3RCxogKsCTO3JcN8l16S3h8dyxM7VJrkNvCSfnj3FuqcJ1aWdJFLBmKxX1r3Af3PNZayfqXxWQG6NfpiJb1c5gKGDNzORAYyme9ZGDmb4h1kNmqxSwHmNcqoK1lYWGz5GPHGVxOaluZ+7IM04GOuMkaJ2xZUo3UWeceEfOykh3v1EbAG34rfNvnG69xhm1YedWs6/hFK0xicLsVmdEceSgrGFFwf3rZWWH+FIJbmAvI4cvleAO9kJy8OQft7CXkYOvA/e4h72MHHwhuMdN7IXk4DbFKBcyDkAOvhTcG+XezCrMRGtkhMcR3M5Z2Erx5DgydiOn7ew5j7uUysjhY/dRLuwdghw8juA+pTJy+NidG5XKyOFjd+5UKiQHj925VamMHD52H+UGfDpid22nDXqj3HpkiFYKjwCTMc5GVbZSfLleMsb5q0rW8DJc36B9x/qquksbN6usPtLVrijTMi/23+S7ZHoambimEN7IxLg4qLIr/8jK/c1k7k1mIWs1NQim3a/E4MJe/8iGAQ+LdbalUKBCsdb84IVilyg95TvKBCuT9sbD+AHIroeqwFAmUJn47TXdeJnYZtIq47ADF0pr4VFkjkvBCcV2opZ5RplAZRJ4LZlE8NmJnT5P2ZmAO5OkrRIXrJIje2xdUCRQkVhOW4iemhzZzouLoDVIxTplBR4Te3ZMzDXRGqTS3jo8Muc14qRix8VZzlXSYJm0d8wyR+nAZOK7lkxmn24pE6xrP41avQlYJtHJDyLgsX0fbKIHrwyPeHSB8KAXdJVqdPLNuNlKP8h7wFe7RUfSY2R9luQFvPo8OpLjIusT5R+U1atHRxJVZH2WA2vgh/BFR3JNX/9MS8UBydZxG/h2Pcb1lipYW94bunY2Mh0NWZ/74HN8u/a+6Zr4Idu1AtbuCFmfg5y2g3ciHnYlcyPhq0AjHnYlI4ePZI8kq0muQ6yigBz3mBQtmlBAboxr5gfw8xWQG6MXNAA5+Ar4yBujszNAXgVPbpTHcQ6QJVFAboyuywAzFHzO40jBGcl1IQfPQvv0UEQzFAVtjpG4aIaigBwjcdEMRQE
5RuKy3hI/zjESl5GDeyijPBJ6AHL4ignTfZNczxkKvLcc5dHc3wY5RuIycvBxbpRHpA9ATsE4x2oGUTyngBw9FFkkju8t6aHIyOFnKPRQOm58pI4cPRTRmiQF4xw9FNlqMvg4Z5LyJNeTHLy3DOmhyFb/4MnRQ5Gt5cGTo4ciW8uDJ0cPRXZ+EJ4cPRQZOXg1Q0gPRdZb4snRQ5HNUPDk6KHIogI8OXooonEOv6uakRLJ9cus4netjOihiLI8CsjRQ5G1OXg8F9FDkbU5PDl6KLI2B8/yRPRQZG0OT44eishDUUCOHooonlNAjh6KyENRQI4eisi3xJMzH4Dk+rU5vPsV00MRtTkF5OihiNqcD3e/YnooojangBw9FNnZOfAa55geisj9UkCOHopsT1l8b0kPRbanLJ4cPRTZnrJ4cvRQZCe+wcmZ7pvk+q18xO/jnNBDEa3CUkCOHoqozeHPCEnooYjanAJy9FBEvqUCcvRQZOTguYKEHoosJw6v2kvooYjGOXxOPKGHIiKHd5wTeiiieks4udiYOCTXz3GGVzPEU3oosvwcnhw9FFFvCXec4yk9FBk5dFQQT+mhiNYVKGhz9FA6kXOmfhOdgx/oaKJ0RBerQ0cXRdbq4EvF4yltFFmrU4COPkondEnYIgefX5ogk+S+1Oim6tAxHJc1OnTCIHYYjgsbHR4d43FRo8NnDBzG47JGpwAd43FZUKcAHYM6WX8Jd1IcxnTC/hKOzrR6ouvbX+LRMTsu6i/x+R4DiuR69pcK0DEelzU6uAfmMhwXNjo8OsbjskmKAnSMx2WZOgXomB8XDXXwlT2xSydFNtQpQEcrRTbU4dGZ8nmi6znUKUBnWynL6i5tetV3LpuIHspd8Wc2LzbFrrqyLbbZ/pnlm03rUrrJV9XjvFpUz6h6xN5s/wTzRbq5rH9wly+X+9sc1URTNW0kGmRyimHUiRqqcKJuqnDPpwrbptlM5t5kFk6qD2I+C/UxkD7MygST4Hfg+rC9oIfFmtoYXhuxWVdrtBHAtWG7TRUEimLIDsNtTjMOeyHiRGH7WHf7L05VDDnNaA4jh1WvOFXYFlnFgaoYVhVuUxX4yecR9+1l7klpDC0NvyENHz+3sO29FWediIgkaCoDH5HY7uH1p1uqYtj+ojntjOATDHM3GpM9axbgSwh8lnjJUnDwYmbf9g5JrlMeB4+OJV7CPA4eHWu8uqFzWt4YfjcGnzVeMnT4Jf0+a7yE6OA1Xj5rvITo8LEBi7xk6PCVlT6LvGTo8DNMY/4S3ZfQhdpmmAHNFBk6vA8W0E2RoVPQYdpuSmqxY0LirHUwrQQmvA4msG2aJTWB1EQAT1IFtv/zS7pbZWV17dNjWT2Gr0AhvhKFvNyiflDOCeTiOEFTL3EYWHo5jD0DCcZ2neqn6k5/3FIvffSyK8q0zIv9N3Gj6RkEFLXPxD1SRDO0fmzr6+IrkMyoBqFm1XZoTt7EDUL01ET59hCegwhoqcmWvOLRGS0RXc9SCQXoaKl1Q+e2Wh3eDQ1pqcnQ4Td+C1nl0hGduuNiQla5yNDhq1xCVrkI0cHz7SGrXITo4Jm/kBG5DB1+P++QIbkQHXyaYtxaouuJDr9BbcSQXIgOPk2JGJJ3RKfuLMmIa4Zk6PBxXUQ3RYgO32HSTZGhUzDDpJsiQ6dghkk3RYgO32HSTemGzlO3PjaimyJDh59hmm6A6Hqiw3uYMd0UGTr8WBfTTRGigyd9YropHdGpW2QZ002RoVMwTaGbIkSHH+vopsjQKZhh0k2RoVMww6SbIkSHn2HSTZGhw28oYFavvyOVLVeZWa1Y7Mp1sSq26eb67WprPd3be34qivua4B9ZWX6uH1/6WBZNvtlzXv53/+sXQf3q13c/uXqu//LLi8/1i9fPuf9w//z0q+9ihPHhtzZHR5Sva6K/+E6b6C7bpGX+V/OznAHPqR2T5oLLiesFcRhGl1ra3LuVn+3PduZFd/hgz6zXHFUvauktnMfXsxubdTy/dOfumfrddu4If0ZeMkZTpjPsWXQV3NycCXZ7I1182XUyxoCyM+yr2fXV2WC315MpmFGNMQTVAdtq2fCgNWHQKox88CMwg1bZHgfwmXJiPhDR9Ytx4FmRZDrGGGcAciF6pDucF0xyfbeEwaNjFlm4JQweHbPIsv4SHfMn0zHG/IP0l3h0jODPZte0elh4GJhMGcGLetgIbbQlUwbwsh4Wj84c6joqdDqyH+1UF3xbw8Nh8N8m7LMOp76+lj1Gm0AH7PbcSUHLto2Fy0l1O3NHbv090Nbf09b5E2afL9TW34lj+xbp7jdqY3htHFKUptdwuvUaZ9SG7YxkX8PxEiMSRWJ+xYjiyAESA4tijJ7LEFM+fDbWoYMiQ4dPxzq0UITo4Jkh0xUTXU908EVqh8GX6Hqigy9SSwwpouuLDp7Vc8foTA2BDl5SfThec1TodJiK1unU8Dmpy5IXYTuFR4Iua146omtZ+fgg3qX/IkOHD+Jd+i9CdPhwgv6LDB0+iDdHvBNdT3T4IN6j/yJEBw/iPfovMnT4IN4bo/+iI4hvL4JX0MWO0bEZpJ3CI0HPjgSvqru06bE+4yOZ7IoyLfNi/02+S6Yn0sl02qzYiMwO9rCKDc8OOzeTuTeZhSzzUqGYVs8SxeiiQM+Odh8W62xLpYCVEmtTim8H10/5jjrB6sRpbW2PH4N8O5KvyFAnYJ20lwridWLbBquMIw9cKW5r4UJktl7HKcV2KZZ5Rp2AdRK3dBLBZyi2wZGyO0HLxG/LxEXLxC5muaBKsCppW25RCJ+e2HUzXEepQivt07wUhMa2Pct1lTq00g57zBFQOK3YFm2Wc6klevxpb9oQ+Wid2Mbs7NMtdQLVSRJEre4ELZPAdmUtjfAwElGf0N6iCV9VGtiWGWGfx0f34Dn/4NS1OYT94QIbeG1OMMZVGkrOGVJXdReMcXGADtjttDm+Jj0Y43ICHbCtEkt4KXTIKnbZ3BrfKYennlt/K+gUtLoxVrGfBZ26tSMha9JlExsF6LgngMxaUIBujNHmEOjwK0FC7gkg83gUoBtjJDjEWIffVDviwnIhOrizGjEkl411ClodQ3LZWKcAHUNymYepAB1DcmGHiR/rGJIL0cFD8oghuXDBIzxzEDEkF05T8B3mGDPx3wg6uilCdPCxznwAouu7aBs+1sV0U2RxnQJ0dFOEITm+w6SbIkQHn6bEdFNkZboK0NFN6boThLqxjm6KDJ2CsY5uihAdvsOkmyLbwQCPLmFILlsorgAdQ3LZHjMK0DEkF6KDZ8kThuTCDhOPjiG5bPMcBegY1wnPZoJXPyeM62RJH/y2IwnjOpn9rAAds+TCVocODt7W2ZJd32angB2DcmG7QxvQFTtG5cJ2p4Adw3JZWK6BHeNyWXCngR1z5bLUjwZ2NFVkfpgGdnRVZO0ObohV7GiryNqdBnb0VWTtzkdbYs7Uoa8ia3ca2NFXEe5nja7JrNjRV5F5YhrY0VcRbrGooM+kryLcY1EBO/oqwk0WFbCjryLrMxX4Kg59FdmyLfjmph
U7+iqyFSQK2JlenOx6soNvoF+xY2wu88Q0sGNsLmSH96JdxubC3Cu+TsxlbC4b7xTECC5jcxk7BX6my9hcVuOngR1jc+FheArmKozNhTkgBexY8yDrMxV40R5jc1ldtAZ2p47NeRD0h5kHfETonTqqIO0Pt91R0LZPHYeQ9oe7Fyho26eOXEj7w21G8P6Qd+pYh7Q/WgEF33Gyos3oSMZOwQzb/Gmy69nLKnAlfEa2wj3U8COkf+rIdqzs2nGqgvHOZ9ZZxk5Du2PWWZZB0cCOWWcZOw19JrPOwsMgFLQ7Zp1l7DS0O2adhQ65gnZHX0XGTkG7C+irCHMV+HYX0FeRsTtju6te7oqifPez76vHvN5/vf07/g8=</diagram></mxfile>
2104.03945/main_diagram/main_diagram.pdf ADDED
Binary file (20.5 kB). View file
 
2104.03945/paper_text/intro_method.md ADDED
@@ -0,0 +1,57 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Many sequence-to-sequence tasks in natural language processing are roughly monotonic in the alignment between source and target sequence, and previous work has focused on learning monotonic attention behavior either through specialized attention functions [@aharoni-goldberg-2017-morphological; @raffel2017online; @wu-cotterell-2019-exact] or pretraining [@aji-etal-2020-neural]. However, it is non-trivial to port specialized attention functions to different models, and recently, @Yolchuyeva2019 [@wu2020applying] found that a transformer model [@NIPS2017_3f5ee243] outperforms previous work on monotone tasks such as grapheme-to-phoneme conversion, despite having no mechanism that biases the model towards monotonicity.
4
+
5
+ In the transformer, it is less clear to what extent individual encoder states, especially in deeper layers, still represent distinct source inputs after passing through several self-attention layers. Consequently, it is unclear whether enforcing monotonicity in the transformer is as beneficial as it is for recurrent neural networks (RNNs).
6
+
7
+ In this paper, we investigate the following research questions:
8
+
9
+ 1. How can we incorporate a monotonicity bias into attentional sequence-to-sequence models such as the transformer?
10
+
11
+ 2. To what extent does a transformer model benefit from such a bias?
12
+
13
+ Specifically, we want to incorporate a monotonicity bias in a way that is agnostic of the task and model architecture, allowing for its application to different sequence-to-sequence models and tasks. To this end, we introduce a loss function that measures and rewards monotonic behavior of the attention mechanism.[^1]
14
+
15
+ We perform experiments and analysis on a variety of sequence-to-sequence tasks where we expect the alignment between source and target to be highly monotonic, such as grapheme-to-phoneme conversion, transliteration, morphological inflection, and dialect normalization and compare our results to previous work that successfully applied hard monotonic attention to recurrent sequence-to-sequence models for these tasks [@wu-etal-2018-attention; @wu-cotterell-2019-exact].
16
+
17
+ Our results show that a monotonicity bias learned through a loss function is capable of making the soft attention between source and target highly monotonic both in RNNs and the transformer. We find that this leads to a similar improvement to previous works on hard monotonic attention for RNNs, whereas for transformer models, the results are mixed: Biasing all attention heads towards monotonicity may limit the representation power of multihead attention in a way that is harmful even for monotonic sequence-to-sequence tasks. However, for some tasks, we see small improvements when limiting monotonicity to only a subset of heads.
18
+
19
+ # Method
20
+
21
+ We now introduce our monotonicity loss function. The loss function is differentiable and compatible with standard soft attention mechanisms and is thus easy to integrate into popular encoder-decoder architectures such as the transformer. On a high level, we compare the attention distribution between decoder time steps in a pairwise fashion and measure whether the mean attended position increases for each pair.
22
+
23
+ <figure id="fig:paths" data-latex-placement="ht">
24
+ <embed src="attention_paths.pdf" style="width:80.0%" />
25
+ <figcaption>Average attention positions between target output characters and source input characters and the corresponding monotonicity loss for different attention distributions, and with different margins <span class="math inline"><em>δ</em></span>. The average attention positions were rounded to integers for visualization purposes.</figcaption>
26
+ </figure>
27
+
28
+ Let us denote the input sequence as $X=(x_1,...,x_{|X|})$, and the output sequence as $Y=(y_1,...,y_{|Y|})$. The interface between the encoder and decoder is one or several attention mechanisms. In its general form, the attention mechanism computes some energy $e_{ij}$ between a decoder state at time step $i$ and an encoder state $j$. While this energy function varies, with popular choices being a feedforward network [@DBLP:journals/corr/BahdanauCB14] or (scaled) dot-product [@luong-etal-2015-effective; @NIPS2017_3f5ee243], the energies are typically normalized to a vector of attention weights $\alpha$ using the softmax function:
29
+
30
+ $$\begin{equation}
31
+ \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{|X|}\exp(e_{ik})}
32
+ \end{equation}$$
33
+
34
+ These attention weights are then applied to obtain a weighted average $c_i$ of a vector of value states $V$:
35
+
36
+ $$\begin{equation}
37
+ c_i = \sum_{j=1}^{|X|} \alpha_{ij} \cdot v_j
38
+ \end{equation}$$
39
+
40
+ For our monotonicity loss, we also compute the mean attended position $\bar{a}_i$:
41
+
42
+ $$\begin{equation}
43
+ \bar{a}_i = \sum_{j=1}^{|X|} \alpha_{ij} \cdot j
44
+ \end{equation}$$
45
+
46
+ We can then define the monotonicity loss in a pairwise fashion, comparing the mean attended position at time steps $i$ and $i+1$:
47
+
48
+ $$\begin{equation}
49
+ L_\textrm{mono} = \sum_{i=1}^{|Y|-1} \max(\frac{\bar{a}_i - \bar{a}_{i+1} + \delta \frac{|X|}{|Y|}}{|X|}, 0)
50
+ \label{loss}
51
+ \end{equation}$$
52
+
53
+ $\delta$ is a hyperparameter that controls how deviations from the main diagonal are penalized. Let us first consider the case with $\delta=0$: if $\bar{a}_{i+1} \geq \bar{a}_{i}$ for all positions $i$, i.e. if the mean attended position is weakly increasing[^2], then the loss is 0. Any decrease in the mean attended position will incur a cost that is proportional to the amount of decrease, relative to the source sequence length;[^3] this allows differentiation of the loss, and will also serve as a measure of the degree of monotonicity in the analysis.
54
+
55
+ We might want to bias the model towards strictly monotonic behavior, penalizing it if $\bar{a}$ remains unchanged over several time steps. We can achieve this by incurring a loss if $\bar{a}$ does not increase by some margin, controlled by $\delta$. At the most extreme, with $\delta=1$, the loss is minimized if the mean attended position follows the main diagonal of the alignment matrix, increasing by $\frac{|X|}{|Y|}$ at each time step. Figure [1](#fig:paths){reference-type="ref" reference="fig:paths"} shows how the margin $\delta$ can influence the monotonicity loss with some examples.
56
+
57
+ In equation [\[loss\]](#loss){reference-type="ref" reference="loss"}, costs are summed over the target sequence. In practice, we normalize the cost by the number of tokens in a batch for training stability, as is typically done for the cross-entropy loss. If a model has multiple attention mechanisms, e.g. attention in multiple layers or multihead attention, we compute the loss separately for each attention mechanism and then average the losses. We can also apply the loss to only a subset of attention mechanisms, allowing different attention heads to learn specialized behavior [@voita-etal-2019-analyzing].
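+
+ A minimal PyTorch sketch of this computation for a single attention matrix (batching and the masking of padded positions are omitted; `attn` is our name for the softmax-normalized weights):
+
+ ```python
+ import torch
+
+ def monotonicity_loss(attn: torch.Tensor, delta: float = 0.0) -> torch.Tensor:
+     """attn: (|Y|, |X|) soft attention weights; each row sums to 1."""
+     tgt_len, src_len = attn.shape
+     positions = torch.arange(1, src_len + 1, dtype=attn.dtype)
+     a_bar = attn @ positions                    # mean attended positions
+     margin = delta * src_len / tgt_len          # delta * |X| / |Y|
+     costs = (a_bar[:-1] - a_bar[1:] + margin) / src_len
+     return costs.clamp(min=0).sum()
+ ```
+
+ With `delta=0`, the returned loss is zero for any weakly increasing sequence of mean attended positions, matching the behavior illustrated in Figure 1.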
2104.07644/main_diagram/main_diagram.drawio ADDED
@@ -0,0 +1 @@
 
 
1
+ <mxfile host="app.diagrams.net" modified="2021-04-14T22:48:24.814Z" agent="5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" version="14.6.0" etag="IvLTkZ8UD_z319JFrOzI" type="google"><diagram id="le8lvxZvX-QIgpyTT73X">7V1Xf+I60/805/Lk514uXTAGjE2xjeHOveCCG7b59K+UhN2EcHZJQnLK+2RLQJZlWZryn9Fo9AcuZP24sg/RvPD89A8M8fo/cPEPDKNpFPwPC4anAgKhnwrCKvaeipCfBev45D8VoufSNvb8+rnsqagpirSJD68L3SLPfbd5VWZXVdG9rhYUqfeq4GCH/qtuwIK1a6f+m2qb2Guip1KGfFFb9uMwOj8ZRZ6vZPa58nNBHdle0b0owkd/4EJVFM3Tp6wX/BSO3etxkf7i6o+OVX7e3HID9nTD0U7b53d77lcznF82rIr28FzNrxq/vzbEtpNejtjPLqA/XgwQhF9kflMNoEp/nnz26ZZnYkDx56a7n0NLME9F0YtRxfEHFKcwksZZkkJQ5jy/z3Mb/njQz9cHH55H4Ppo4FdGg0rB8/igAC/zcliosi3OF/6sHymUAxVQ9ND/vAg+hfC3Xtlx/qd8bgx046m9p6tvRhwMeO75sE8IuNxFceOvD7YLr3aAn0BZ1GTgHUQUfKybqtj/IEPsR4lQpEUFSvIi92E/4zQ9F/2B4SI5YkTix8OvTOzLCcSuT+B5wqgHkmJQlqFJHEFplLpp+p7uQkiGIjCSISgS//z8EV86f4v/3AT+DVNEfn6KkGtTpBaP4jUpqrgZYG+gIHpq2Kl+VPLbprLTW6fRzz0Oaoufk/CX0wZLQHUphi/+OOt1Y1fN+XY3tes6ds/Fz9XQt9MaBD7lum9oAFzxaNZBYMN+HzcWfMgD+fxte+4A+Cz2z89//DKcv+Rgml7cBL9uX177edvjt/N9T0Pie2d1+ExaddFW7nPReUbBe4X+WTSz71UCyAPKUugrPXBFDZwV7EsiPZdVfmo38fF1R69R6nMfFkX8SG/PHcAZ8vXTWThUrxp5euvn+17q1N80RRBvmnoarDdNAXKxhxfVDrBCff1h5zpFENT+q1YeWfDHEN/EldTvYQAkgvXz16JqoiIscjsd/Szlf0o9SIw/6yhFcXguTPymGZ7xnN02xe9Y6g0XUAIz4qXbKJO+QpjEzYR5Mz3dKvnoN2O8buzGz/xnGriuQND3KpArQsWzfSa4KlQol/Gd4FYt8hscR72ieoZ+OIuGFxxME1eAHPL5wWV+T8B3Is9rxHhFFzwLafSFiP4psP9CSP+Ww36K8VdC/KdM/7gYx7G33HK7FP+08H1NO2cg92Fxef0xGP3a0qCxX8rxy+oo8qo++PDUgY8KXfaaQMjBhGCI6VdxELtgTIv8i4XDDYjjDsIBY5kH9o2AeGsp0NTXyIez5+A17rQzOFq5U8NfP8HjX8PAnzjuRiT4OYj3LunxQYj3e6lzg/Rg3wqP2zXtM4n8iTwgFEO8IYmvBH0A5F1QJYqSZ7J8L+4jMAq0hvz4wS5aRvAHFHnxg90k5L4I8aHoP01j/gX9op+m36vc9EteukVfvqX479OXJP7aVPoh3t5NsvRrUwWjkZuI8r2al3hWtT944dnd8Nf9wi76RfyyPnrBaxf1P62qUeyfxi3/Onx5xU3wffgSQR8IlmAYnMYQFnx7RSwU+UCAKiSLUwiNI9QHxT/GXsBF+rNG/42w9Hlob4ex+H154w5e81tcesiTV+/jPrxvBm//EP/cFWh2tsvfhc0Agn+tK9Dv9c+drYCzhMc+qPLeNPRplffVSO3aqsa9dc9VNnnNCfewJO9p2HweGN6itq6wD05/m95i8Ne0Sl2swNyM887Gxtm7jdymmz5Crl+1wvNTE9wo+cG8Krbjp68p3U7jMIdkDubOB/TJQ79G7Nop93whiz3vyZ/tgy49L2xDgnrmeNA4yf9BirBG0Ty5aF5K448ukhO/k8A4Tb1GuX+idxGtf+IPFPnSMEWIV48hHhiaffnz+gH3EXI3rEDcAWC/WJJ7XtF7uR6H3EsQ2h7rkdcEoe0TKI59M0i4iyBEr6ymvCPW45MkilKvlTbKMjcJwvdia5S6wMrMr7E1iuG/qv95bP120Uh/mgAMOTuLf8kln/YO/56U7cp95jjyTs5i9Lyc8yPABL22lsRgb9EmSj9cml0fche/XU/6rsW6v2XEicsBR64NOEV/3YC/XQy5D2oQiqqCIYDvjty5h47567CPt6Ell9EjvyETB/W84MOQ+h2eIEgI71A2b6j2GTLcFkuCvFUyOPNeYoYLCT+i0D6qeO6KbrC3q0+8n8Z+8AcG7kW4Kmz/q5KFfu1rQ1n6jVhBz3jztVz5vFDBrq1x3EOobKoiD88eqcsgs8zu46zNIBlXsV9/LGbwTuj2Opr9lBH+ShhA2f+F8uDM+5+OLUOYS8uJ/px4OPtWL3ApTjEPGE3jKMJiBE5dmko3u5KZCx8VeVtAxFeJL+wKH/1rFOcP0kVfk+7vKPdeRt2vlxrv46W6wifot3mpIIddBE9iHwydxBDy4XJFEn9Avs5bdY+Qf+q1ikDcH6TxsxAnPPjnrSpZVP6fMjdXRvqtigLo4uaa5v9FvPhz0e2ur2tQ47WC+rFx5U444QfJnBfl3q4uoOQV++NyGeBDOOEO+wY+RQRnAkBWcIxvhgz/VUp4bYxi5BVSIK7Eid2FFK55ry+GHbzwAX6Ms8dtYj/G7tHZvCjq+Nkl7BRNU2SgQgov8La7Dx9H7tWSCfy5Mv4N1Jq8XR+etq8FcQ/Hm398JHcuRc4l4LNnNzYgxaevmHSA+FSITV5bdchsHBYc+FHXRjQyQvBpAf8TXYHbgt+8R4+7Cnyw/8D4UTpamiuC8zEvIufGanUcH2lmOphjYT0nhLAwlxM+lcLDdLQXlkUx4ffptBtJa93ntu5EHo32sSlMIkGdJLknay3NUtne20yCBt+RKIvbFBhiPisr7+jQLd4EDY2rLK7bAZGL0poTkrjqFqCKW6321SrcFCHPd1o4Ml0p5LmW6lxzj872IkctVaMbjyRBScQ+s8wSvMqhY5egH+ZE47Qx6ZlFNxnlIsnxq34TTjjakfpNsiyW7qLgJIvfckulrWVpp64PXKftRXPOR/wyJjpml6I2GCXbXRjcWJbWYihGrWXac4GfdqB7S85wzULgNKwHv5HZSBYpjje7DTcS6IrvN7pRLufigUN1brs0Ju18Md55GzAR41xEOT7qjZDgCCdFNwkXrt3FnpNkyeZCsaktyVaFiIePCbkveu+ZP/Xn4AH+cTVRAAsA1uUV8dGigR9nKuBCvs1ypEQdFcvtZpYbpSSJw8E9KuCSNUt3qmXluTSsgjJKyrgqwxOVkD2x13f1eCxaSjqsVgSUYocuLekEr7s1I45Re5
MWpdgTzH6kyWKdrw9Gk7JaskKqBnew3KoOCGKrPAdoup7tjNI0p8u2IwglpYoZeHY5bwH54Ch5mG8qYyZlmDPdu4u83wphrfC7oj/MCI4zWK9FxEARiDbdt5ZS5xjtNWbcYcUMXReEvFJkZykTlVWHMr4+jLJ1Svq5n+wX8omYH0RDP6ICF4LXlZaTcnEEL0RM5qAl0t0oZlmk4VGjFUwIW7709wygWmlkVXg1TddoxvqVYUj8ao+K23aH0NzACNSKYPzUz/UD0jpqhu03lVnqDWnt4JDHHIXbTYZUcyzQUm0hJ6TPH7wTLYDZWu7aeAOXHqFakPeSSpdetuVCIU8xSuX5pUWDkTcj74Rtu17ptnKxO+A0Rk8zikRsXBZQXBqJc5PHVzvDU5KWoFbNnmTGTh3tFe7kGLOUnyXkKlp2JxRH7YMpBBEUmIuJSGhxS5AkQzKUZ6QrQ5qHG8Mg552o7gNjkPRd2BFBhpjUzm6GJI2JMX460bQuClm3ETKOG+EIsyckp1xsZjHvb8h5ZMtovJqNMpuuMuyZ7NbNpppyuU5SXYnRYbrU3eVsGa6Nw2zd6iVqN+XIOEpxZExBdRl0r8hqn96MxaZT434LKXOcrGoBSI/1DBG5ESipdY138OjUgNEABG8VuxO+SMC91ZSEZL4dnKYjVNP396SW0A3bi0/jbNtsizm7mZNR7YYSSLsZJeOE3yenXYHO+zJfjplJdSJZ+JC4tZyGUECv83OvN3RwjNETWfcrPFAJZQcZbM6IXmnOc3WJI8Pw9KBhYVUo7PsBoTw+aJRZM0uSZnO0SCBwuNO0QuMhDf00nCrVAfY5WKSnBqfTaMfSk1rlpEXNHjqslcUIOYpRiKl0hfRDSMg92ZPSDMAOyRgHhrwqQ8jjGlycmDv0EQXDlUwHchShoRGfwKXdSUkKIcdsfTsJHE17mppCX+l9rkIix4KkJ1grGzGDszptQjp0F3Kq42hOssQRDUp021BedjqQhZ5Qh+xwNDJitpuNbUe047Y+LqZY5JonRE0WNc+NOWS3Evt4aylRV5gjxarSYefn29pW7TW/QvGdYuYElxtDFC5E22zGlbluUTwJzR3bE6KdjgkSIbIJZbNoyQZOtyTYw1TcYqSwtu3SFUzHLTLhsBTDduoieQBeYjwTcxHvDqEnofhke2Jl9RDTkxkHCEty3aYVp+qyNLg9mWtpODoMFhQEvC5JUlEPk9xfYnOIJ/YTGtcBvaXzrHfB98gbR94oSXOH7/hgGCHNJDGnjUQ3LXf0RkckKcCbxL6kcTM/zCAZBpMNd6Aojz7UI2vmzeMDozP8WhqhY3xGpDbthUTT8pm2dzV3zEa8IY6GSlazU6A00MnAz4jaUlC7Kmhu5gROjW0KdNwQcrAZ4wYJaGo70NleG4tczowRWdmNGsrmR4prKzjCA0CDFviJ2Z0Af5sOTZMrYBFuYgNIfsqAHTziQrebWWZKSHzOzVcm+qgpJJLd6d3W9YE22Byilee3lmew837IgFAv+0Z14t2wcIOtC3vZeTg77NerxQFXstMmHssGiuO01+aBZgG+bDYOVk5FjetQlWL93jKy0UKy47gOXFuQtH7J1ESskrg/qXVsORtO0gYw9aqGilKFXWK4vUtue4fjT9FxHCzkQj4CfeHyfe9VnpbrTMv4wcJQ4qXbTMBtGyzR6tMxYXCZ4gzaiAIOakWkih5liXGi5jZoJl8PDVrpoQrK4JuQHmRBy7IwIG34tPR6f6PNdqUnE8esFsUdzvm4sjogrC6ONV5XQzPnXC/iJpyTMKso7lFVU1XUMGwlKQmaP1pdz28c8XDoRrlQ2ycSmBSreZ70O8c7InQJXk7fWWI2ohCFD0V5oZcYJ//Q134wA4YCzydqqIMJCkzWhwqmonjOFTsxIkbuoVjuAKrFYb1osWjm7mYcntLanmce1/f61oucRMNGR83NUG0/lh2PtsVkANJGWkjRsenJTpQnGNnF82lV0fg4TY4q31PZGHSSIdQjEXTMuKdyktdDbUYSCLVT7ZLpqmqNaZGoyD7l83in2EMe2o/0k1YpqKr1ahpTMWco0wMfMhwmy/V063MJ6z+9HTvt5rJ4WPFh50yJ+RiKZ2u65djkEV4EgQn76G7myEQmuGMnePOA80WnCq0ICktJ00NJk7Ye4fLiqFMjhFvpMh61YyD7ZbHBdAHdxwtlZ48lDB0NxslrLU5tkslqxXA6GWlxSG/r5cRbw7GOnVk6A9YGYAU4/FAZlLGBcD0+HckTahIxfeJ2Y38KJrQcx5Q4h+vVUrrlJFzfygnBWV00bMcOTjdoSnslWuyc9cwVVwvDs4Zu1cmTU5HbuGomR2ORFlm0nS+IaQ9VBLfKtqOeGbQjmxp7HrJntzqAmSkrEzBNsRsvA29GDAJWTcOJYdFMVvPSXNCWoGY4CubzEbOoUIpqzf2jJgDSTNdCi0Bmkpgsg209L9SWWMcQEAr8/OgOMU4sF8V43IWrrlLmWzXrt+X4mAMRBplE3/aiOmVYz7QWM4JAZ9Qud6eRPHYoVkvHQbFcsR3zNJMCR/O6xZi2y/XsWp75wpbIhJA7jgCm1Qx9gUwWUZARgIKWJniredgbE4kQag7XAQ4/EQY5WnDdhvAAFisEHtikdQzFueE44aCbcdwuZ6OFuB9nRsov5UXcnqCqW4e8SKVTwYom4kbd1cuIWHPouBup5XjezlaALuyEE+RUETfStuYich32crfnq3GdTVbMlLOVeSRIfbwKl9XIGLgJIZ2K0R60zYsyNt5ttUzoV13Laf7a3VpqINKcfBJ7KGUIUzTFidYOoqPs+vma0/lFW5AaUJ9srRbyItL3RLObqHCIZLNKcYfSSsfxg5BmGyjXXHFDcnt9Wqx3Dj6aufxAh0wwH/E0TTgyHyUhVCR+ezwOTrizTU7fU5q2TxYapft87EVjH6PHcg55fzmnGgx3zNQB1qhy1NSNAMGjoaMFIs6BTTJayFFTk8fcOk33J9Kbpesh4010F7hh1UUdEwxHp8FYi49PZomOeStfwvkdJuGCmIuanKDUDJKWCOdL4wwiEkmcPnT1xpmOFiaVAWFfjvVxJ9rIfCDjHR42TJkCeF5NCWYzIeGMkkymppudNdfc6aDFy0AkzKFRWZKATJ8lITtuikbN+zb1UNaq9q02nSbRVNpC7d7mTpNFFnxhfS9rAuloPKKT6JBMXZdE8XDGL0c5Pkv9WONoXzpoJ5Jgd74sRHZpCXAiThM9nGqTlbyGOFrKoYKZRdP9sJviNMkeLIDZstI/SmB8antRjnJzFoSsPzDFZjYer9Y9SjWrlWLZTyrUcBm5V5TJk+oEf3sdoq2t6s60+SlU2vGSW0TMogClh22zYVeHzAA6eypD3aiA56qJcqw9/7iRdJjkII5ZWt1yqg6lEsVMoEgKgrIwN8VpVw/LymzxspGHWmQLZLEgMWjJaBxBz4rEkrmO6Ig0NUxhLUVbs9Rq3CA4IFwlYdEwo8Q5xoPRLbaIqvshDzvFQTuRPzE8TozmWLQPOblmOk10dqgmbqP6EDbdlG+iGe3NV
tiOGy3qeLMM1VpVkSLgZcFqoonuThbShFu7odoC03WUE4pDOSU/1J2xmE4DNvZSqYFKdrzcAKt8ohWkwdvAAuZGwPINt5w8mrfSgdNdft3aKDHmMuyYrkaFQAMNl6wbvptgqRvPI84KxKS00YkmZOskWyuNxqWkvSn3yLG2j7kuBfx2ZZgWsLj7TWFKwXSm+gK9mWTSUenNYCTPoDuF3xc9sC23dprXiVJCOli6e15xFtPJaskIeVnIm6BkdU59gvOPyEizTkQtVeBz0mquawgcZR4weiKKmyx0J1Xa4sCgUVfj5SGatrhTpBoetOa0TgFc6eAA0F7itnyzwGmWDADdBUoFeISXs3SU8pgzY0eWyeU0Z3KygaOhvA2AuRFtUmCnzcNVawVHq+VjIA25fZOzkdHxyWE7c7UgI+uqQOyFKVbAXtsULE4rJb1GNVYNnJO3Loz1XOKYA0XW3a5Z07Ypmxy/5nsSdGg+x2x1p5WmXY/B8GCYP+NC0FuAN6iaMUtg+AaTuVtJfEm0w4lEaJPdpFtXGxMtnq2pFe/wA2a3M8TSTlJUOkspAuO3YrVuhLDAVkGypcIgtiq7R70wVntg3YeCUJBl225ZKykID0KAjd5glQntp2CaQpY3jzhNNND75dT7Nem1ZdxAoSgP4nGyE5OGs3JQJbL2DtCXuyaz1U1jyKZgTSBfHPSubtydLGx8XU4iRLfHiC0o8cEoqP3Q66EtbnJ9uo+ByY4aNc1YnSyz3chiq0VyFHPd1ZSRsDShTRBUKII7Wz5QhGjZkFlnG+ONKcveLOJtQW5SgERDFBmAfaqvwZgohzln0ekQWQfUbq1COrYOIQGullast7N0y6RI17Z7+LI59AEwGEmWkLTgv560dVy3kG3pRJEBuhcPBO2IUF7xEDciqmPzw36FLH4+zSSXxkzUwlAz6iU1dxIbaklRIj3/tDVWyIqUkUBZgbJ282ht5tBCrwPoIGlL6ClkpBKQYGqsgI1e6uhQwBcfuLqZBxNMXIenIaIdv9SqBLCgPRkhiEBh8gJSygkYV2hp8+G0LkkjpnRuppkxYLtg5Fteu95HswR9pP+WtcIUTCK/mCxaA9EtLOif+EtB466bhfOZcFz3dZ0MM1Q4yjg+ViLxiRsc6ETc8wl3INxVS2RwONV2nkTeYs49NXKMEtWCOEfz+n0FMWa2nDMzU0kHhw1slllrtSw8geycw0UDCVpvJ0MlmjsMwAwdiY8iHsAUDpgkG8Ukh0GRkp487FajpSbrxZ5Z0W4oUbLtCSgxkluClDciult22gJNJQ5H1RGyFKGO12yu38BJ2AuR0s3DCTNI86I2uAyPeU8DmP+oRIelBCSxtNH2nCyb4E0RsVloVkgITmkEnOAkG6OLCTM6VJMwJDjPpWadbK0zVeWhrJUO+dTnbdsephONG1laC8BhVyW6seKnA2eTmBmO27Re20LIq13lzNfuDEHQ/XLpcrPWM7dCdbDGWRgKwOhFT/uKa7zpdoTIj8/eLLP/Pft/z/7fs/8Zz95xSv/o4XSGOIZ6ifEX8vh0lAhXkw8boLFhaTq3jpijzZkmg3XAv0bBjfWOs/sKmCrJjFZdwTPmnr7UENiayyfLeibsh9MeKGIeKiQjOrELGloqpAs9qluyP2IRxGjUNCdV+BiXo4Gg9yrMUdI1QyCuxnOapexjVaoAZjAN6N31PTIVGkuvS0sZ2BPJEKiy3XKu3Mk4gTUDuz4YBD4LlUns+D1+tCahw2rcPNYWGABrThkn5YpeKtXW9SEYK811LTgZNwJagO/UFnPWtTlDHSZF1U2V9hD+lLtgXVoVTh1pQtFmB6NcK+DJDD7X8mAdKAPTrHBO3tfyPJEbvp+M4RB5pLRB7dTatZaWu3q6JPelOTsiiqutp9qh3HBsJaXhwm6yIwAtnLOZjLlDn3cl6h+FtHShDQSHpT5s0bbbjAPzuGVVkfBahLLVYBF4XreFM6Ifp912Lh8MTRZjFmU3ixMaYBhLQ+NTcaqNQ+NoGvNHhOYx2qxMrN75zOJE+hXUW0+Gd7Ow4EZQPvCsNGrsLDRRxKAMd70NaGWKQo+KziyQPdRA/qMpqUgtpqvk0+3gL0ZYGwgI8Bo2uaKaXYFs1bGuZrogdKcDwc5YLdN3anbalQc31ul8o/czLhiyREoi2D9cDDtutFtYJWJXYx2a7fbgTwRN6DaI5ZBM6w20B0kMGSudrKLObL/Vez5nETEBjxPEZJOwJLNftpZT7EeDp6UqIJuhlYFdAm4z9wkO3yB0bR3N+WZoNrS78ZS0ZwA4711Lco8VYvu5VcEXe1zjKqsTxuzCw1Ykl5IxaAvPgwNlQk83TSyJchbvB2ViBS0+HXkabhdmdPLcVk9RaHet+Erq1xyjMh5GeX5dOrbcI5S3SNyaHzNOlvkRu59Lc6Kb7mcbad9C+MBzPNNtJ0LvGwcFNmP1BOMNRL2cbpxVsbf0FpfyEp3t9ZLfSPMnjz0vLdVpjp+G2KnyoyUnq1iQg6NTZ0txeDIyB9ILgjhvgC1tcbhrztJl5+Xrfb/cZFvG12YQgCF9Her+rh4v5QS+a26i4aoA7LdS8MAcFXIK5s1rfQfYK3aBRI6zHEulwzfJEq5YSDurm7ZlVk4QPPW86Xwj2mg4mnnjxpNnxbqlbW+ekTtdP0CT1Frv5tnYwTKInmvMUfeJWaFhHC7r2jsom1m6wj1MZZ/IDO7Ohr+1kawaCd6vvGA8NfidYdridAbm09MsR2xxmgky2up9cukp+r5Vy1Cnkh3JZHri7gW639bW8GSVwC64gEt5P+KK3Sidxn41nXILhxr2o8xGOimWFGG9cML68QbocCSb02ZHALlICxmwjjf8JAgcaUJmTHig5gF0vOolNBsQv9ylnHQIq/UytoiTFZ1KWyo1X4Wb96QlHC5IZiLPwYH2j06zEghiBd0lwkmynXSQbGrg4Lqmq0kCGqBzPgliLkikuA9SaAdhzkQPTp7Ew9VUPhCS/ASp14J+ljE31kr96G6q/XoSHonhYADlISd9Nh104rDCyOdx3QN7k6abuEESg2CdBqMOK+h/8LTcnAlJxHD6XNpM9GRdhDXBg3mS+FMbJfN1Y88aLFkt9whJBNSiWxPSVtTmtrtyFu2Aa3jHQzCtqeNTYJvufnTYIjUxjldzRyktCk/VExREfTdZKHzFT3i5Hfc5ALYCNtTQcWtSR58o8YZlaxuKxHVHZrJVpShmqkKYTkeSLM2bTKRmVrBoVDrBF/ShbcVApIQqSyPQliYapamWPU/qwdrGt4c475GZj+NPQB/fErPxYTnUBZ84uQOH5GiW820hu+UMCmFenHcUua0NRaXoSQpoQBJaRdjwjg0sSWq3kWTngNTGxCk7tpML6dETy3Sksdy4XK8SR0+XkiSxyd2kClfRKV0vJ7RpiXtSCxTLwtneFLGRNBahw3YONAU9TYceMlhwZEjzNPiDkkc9w3nN1kDR02g0IafKjOv6ZIslwOyXogNWrQ8jX+DkCcWZG2kr4Hh2mu47LNHXO3fSoNNJuu6WUB9nXIVxw3E34ROW
P9GnYlHPHY+nRqGYpKaPLaot9A4dOGJJDQJhMlpbwrVDqiaNDvKN5UTdcgN5cav6LK12/qbWxpkW+A20BIPtGBehxxjdcoXDr6oa0Y+IjsvLQxxACEGQCGdv6rI5UBVj2K64kpyGZCjHPi5EiqIYRyP0XFGjeZhb9kZ4ItLtfCRsHLNMTSBFgJA261w9mdapOoVuCN1oiB3yTaGPNzg2YTl0m3ah3fG6I/mLLbQMvTZcLjXU4XcnMibREy7tYlEP8dxYsSwQeeFapw/DgTiJ3oyQohlyVBfSo5vgRLTjmBhzMRWv0q7XCsp01s6JyKxDtBiak4Tst+N2fhx6r03XgsyTEUJsuHDSFvp6Df3gzHK3A8ZfJOMyLzSn7cTkN2yy3VCjCYZ5Q4gwmNKFI+gxBv/QuYMJuy2SrIphuSoQuwRCuupqiy+8/dibnrqtpM49HEhpSXxcdUgSVg9nAL7ZsRjCFpKNRiURBizOeUa7mmIwkq1VvFCz4NnSig2Nsp/0U8uBAmghczwSbL14FBIipWx9phPTnNsug6I77OswzBSGnrjxmh1nfCDvvd22KAZxgkkV2gGRhKlRPt8NTLuaegaUz6S9J/rVoqvnrT8GhGplHJMx4/FGsWd7rOxWESrriBGNFtzIrG3SOoWZoM6L8XTFsx2hETQuUCpXI0QjzrBRDug1b8aCNVZsnkjF+YwzhLwW7RYa4PJ26wGhRpk0/cTRVojPiAmUYBIiM5FwYBS9qSNtA5SSr5njYNZtne1qqky3VLhuJWVThDERBmuCyhYS9uxdMIf1tNPiPZJrfMexuA6N60xSJiExZ6QDuTywO0rBN+PdQfTZkGsKHqqHY3CsUBsj/Dg9Wke8PKhE0ewGHyPVTkRSEUIHwXA4c+T7xhbKIGIvSGIqqdEeWP6SAWR3TcqdOF9oynpMNpy/nc83NbfXGISvZHNTbFFsF879sgs8oxNhtE+U8JmJliOh5sIlY4bwHVJ2kvfBDJlM2Zk2m3CcFYz2Wlb27Rxi1P1hRe/j1b4OAM0r5E4a8dyBlBh3gSFYzMb4bAyxhzwOA2XvSPQ2roPhMBOhe6SDbo6FS/fCijXyJQwg46crgxxV+2kYhjCADf69T8gf+kBhrzeKEOy1LWg49nDeovkq9I+9Q+jfPzUJ6kf2a/x1LqGXO7sI9nVIPINhvwmKf/y28KsYDC8MHL01C8m3BaOjF+k9UIR6wBiSRomn//EPZlBASZie7kV2t4uNvzj6wDLvTu727v3JlztjX6ek/H19/FX9Pz67P/lKUtt/L898NPvV3ZMUY38bs6DYfZiFvtwR8tjyy6a/hj3eMP9v2OOiPn1n7rghK/FL7ngOzv/0TsC7EfK79u5ey5Rz+97dT+9Butg0QFyQ7h23DL3dqv58wM0vt8R9dveuY7uMh1+bJQwnCNK7DxK7yGH342VfJvn6ot26+NtN0m+55c7Z3L4mUcst3HJlp/s78iF+T841FL9gKvq2rCtXYAh6oQ+IG7f0/T7r2gc4+Jw99X/Jhu6UIuLeOQvRK/tZvzHZ0GXCTJZ4k+PkZspHLwA48zXI55wL5+Ixt+ctum++3DOEvZLw4kuV5PecdIGSr2Hj2XXxcoMiyj6wV5KmvXJaoA84iTMMCuTq4+875NHBr+1nvr9o+0ha7/dh0RukxPkwl78lpTB2kZoRRS/SBt8PcJ7f82UyqvZwABMXP2Yh+ab0Md/GWw8ERYLhxGgWoVj6tfxksQfiKU3zU7rm65xH4j9zPZ8Z4Ms57+3OYQGMP/SS/fem6DXpY9gDSlBA1dHY4++3NsPfNinXfLr3OBFQ9I/fdBjgJdxjXP9Kkt1ngHmH2X1zHCeFP5AURhEkxhAsRZ4PAfwbzg7Er3kb73K+o19/IEPcXaaThH+e+/mi/OnnO6eZgUfwISyJAqOB/AHgfj/LDMIiBNwXTNDUGTh/apLfOs2EIoMv9sXC83scLBhykdrpioeFumLS38XD8v/rxCn2MpUQ+jb3HH2FqLE7DDVxgzPrfweG/ArdE1d8AAR2MxV8Et7TyLecSEde+Ft/cyLdZfV7n0hH3OAY+3uW8z7oa/orEX/pY3vtlruFOrG31El9m4eKpokHlvzpW3izKPxAvnQ9XLR/J9qlCeLisfc9V+Y8xC+IcQyUTQSKhMr/7ygqCn9zCB08GvHKSXEvvf7Uw7Wsqfc4KpH4x7mQ/m2qi7xy1tX3qS4Ke89hVx/MUEqhF5roiw67eqPxcPyX/XqrIe8slP46VWB9sPObTNC/ON2kcSPAUT+t0Kf2viVP+RcuT/1UuZ868eJ9PHqXpKok9ZaLydvj2F6c24KQn0xGfta4+JdkG6bJr8s2/F5+p84jfO4L8usDOKgL7HNR//P8fkM+yL81wO097PQxNPwCezP0q4DR357A9cFgUeIK530fuGaQSxK8LZTo3aqNeq2Zz+s8f8mmF2fTXNT/PKm/9ZWf8fbKD+L8Ww4/+RbETWLEZXgi+0xeL11D+LnSK4R9h1UJ4p8UNfu3L+NedfRQ38XtBAHzi78xtX74Yx4omvhpSX8RyEUvwlB/g3EvqmPIrw9TJi9Sa1/U/7zgeOux/w8Fe5AXoTJXgj2unYv0uJp5B1Fx7Vykv9UY/0KzGb0iCL7PbL7c0fF1AR3kLZGu/2/k/9Vp/0b5/1riowj+gKIvjsP8GufppQy/8FP842X+edJekPB/KsaFxF6TBYw7Il7AhCuH411bSbyPEiCx/8mL33hXv1Ne/AIvojj2wGDkGzr5aunxO6/oP0163LDE8I8l6Bdnnt2yBHl99QL79fLFDWxwzWz6PicJdd7Bd6G/3r+QcAm76K/xttAX3pZnzH43ir62LvCvoej3HfF5naLpT1P0NcFOfptgJy9X0cmLJm6laPJy+wj1NRRNXSJX/M5C+q3r+1uOnP57AN5TEDP7JqbilZn/cFZcXwHy3rpf/0txjxRGXUY5oBT1doy/KPSR/KrQYSkG8hlcfXSVf/Cw13cHEf/2mDcweSI5YkTiTpN3KRuZt9xBXrF/7sIXN2yi/wdFnP1eyX9ZTBp1ZU/xl+nPzy7snjeR3E1bXXOUUnYGeSl3avgrs5OiipvhKjB7PLzvNVHcfkhi5QMxYTuP7cHJeh4M0DjJ/0H+MkmSW+S57zbPN/9xdZvrEwv8JXf+iTwQ6MXoPn37JCT68zXG/xO9iC67y+HC1Ftn6HnZ8T8Xk06jb0L9UPyKK+nrgvuot367q+zwd0b4fswaOX++yb7+Mhl8LWjo+5xTFH4RunDeff7ukCDqIqyGuqC9+y2GnNfTPoHM2KsBdXH9I6YOgDS3qCp4auqNAO2frhKo30adofjZk/JZLfBaYP15QVL3UQJ3OIab+cwJzMCyDf0/0f/fJy9fOBSu7AH8siO4qb+Oq/1OAsD+nxMA/TdSwC2Rlj8Dkp9H8x1mMpx8Ef4BVzy7jn7gl79Q6/+YNEzEhfFNfFCpEwzy64buqNRvyaX6/3Myz7uxzixGf3AyL9rBbvQyf2QybwhX/DdaYxR5AXJJ7OGtXwujH1iSxH9
mrsTe0gb7ervWZczIh8ThNbfXPRyWj9deO0ZA3bneVnvwe934h3+XDrwHITDoA/E2OemPrb9viYKhHl4QBHNNJ16Gj32ICG6I/vtbrPW72s/vmCjiwr49A5R3mMiXTVwmCr154Zphft3Q/QQwfS1e8OtkwfuWMv47UoD6C+J6wfdn590nwS/4WhVwJn4SBBzzpzSy+Oj/AA==</diagram></mxfile>
2104.07644/main_diagram/main_diagram.pdf ADDED
Binary file (40.7 kB). View file
 
2104.07644/paper_text/intro_method.md ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Introduction
2
+
3
+ Current state-of-the-art commonsense reasoning (CSR) [@davis2015commonsense] models are typically trained and evaluated on *discriminative* tasks, in which a model answers a multiple-choice question for a certain context [@zellers2018swag; @sap2019socialiqa; @bisk2020piqa]. While pre-trained language models perform well on these tasks [@lourie2021unicorn], this setup limits the exploration and evaluation of a model's ability to reason and explain its predictions with relevant commonsense knowledge, thereby allowing models to solve tasks by using shortcuts, statistical biases or annotation artifacts [@gururangan2018annotation; @mccoy2019right]. Thus, we emphasize the importance of *generative* CSR capability, in which a model has to compose and reveal the plausible commonsense knowledge required to solve a reasoning task. Moreover, structured (e.g., graph-based) commonsense explanations, unlike unstructured natural language explanations, can more explicitly explain and evaluate the reasoning structures of the model by visually laying out the relevant context and commonsense knowledge edges, chains, and subgraphs.
4
+
5
+ We propose [ExplaGraphs]{.smallcaps}, a new *generative* and *structured* CSR task (in English) of explanation graph generation for stance prediction on debate topics. Specifically, our task requires a model to predict whether a certain argument supports or counters a belief, but correspondingly, also generate a commonsense explanation graph that explicitly lays out the reasoning process involved in inferring the predicted stance. Consider Fig. [1](#fig:examples_main){reference-type="ref" reference="fig:examples_main"} showing two examples with belief, argument, and stance (support or counter) from our benchmarking dataset collected for this task. Each example requires understanding social, cultural, or taxonomic commonsense knowledge about debate topics in order to infer the correct stance. The example on the left requires the knowledge that "children" are "still developing" and hence not capable of making an "important decision" like "cosmetic surgery" which has "consequences". Given this knowledge, one can understand that the argument is counter to the belief. We represent this knowledge in the form of a commonsense explanation graph.
6
+
7
+ Graphs are efficient for representing explanations due to multiple reasons: (1) unlike a chain of facts [@khot2020qasc; @jhamtani2020learning; @inoue2020r4c; @geva2021strategyqa], they can capture complex dependencies between facts, while also avoiding redundancy (e.g., "Factory farming causes food and millions desire food" forms a "V-structure"), (2) unlike natural language explanations [@camburu2018snli; @rajani2019explain; @narang2020wt5; @brahman2020learning; @zhang2020winowhy], it is easier to impose task-specific constraints on graphs (e.g., connectivity, acyclicity), that eventually help in better quality control during data collection (Sec. [4](#sec:dataset){reference-type="ref" reference="sec:dataset"}) and designing structural validity metrics for model-evaluation (Sec. [6](#sec:metrics){reference-type="ref" reference="sec:metrics"}), and (3) unlike semi-structured templates [@ye2020teaching; @mostafazadeh2020glucose] or extractive rationales [@zaidan2007using; @lei2016rationalizing; @yu2019rethinking; @deyoung2020eraser], they allow for more flexibility and expressiveness. Graphs can encode any reasoning structure and the nodes are not limited to just phrases from the context. As shown in Fig. [1](#fig:examples_main){reference-type="ref" reference="fig:examples_main"}, our explanations are connected directed acyclic graphs (DAGs), in which the nodes are either internal concepts (short phrases from the belief or argument), or external commonsense concepts (dashed-red), essential for connecting the internal concepts in a way that the stance is inferred. The edges are labeled with commonsense relations chosen from a pre-defined set. While some edges might not necessarily be factual (e.g., "Factory farming; has context; necessary"), note that such edges are essential in the context for composing an explanation that is indicative of the stance. Semantically, our graphs are extended structured arguments, augmented with commonsense knowledge.
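+
+ For illustration (our own sketch, not the paper's evaluation code; the relation labels are made up for this toy graph), such structural constraints are straightforward to check on a triple-list representation of an explanation graph:
+
+ ```python
+ import networkx as nx
+
+ # A toy explanation graph as (head, relation, tail) triples.
+ triples = [
+     ("children", "has context", "still developing"),
+     ("still developing", "not capable of", "important decision"),
+     ("cosmetic surgery", "is a", "important decision"),
+ ]
+
+ g = nx.DiGraph()
+ for head, relation, tail in triples:
+     g.add_edge(head, tail, relation=relation)
+
+ # Structural validity: the explanation must be a connected DAG.
+ print(nx.is_directed_acyclic_graph(g) and nx.is_weakly_connected(g))
+ ```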
+
+ We construct a benchmarking dataset for our task through a novel *Create-Verify-And-Refine* graph collection framework. These graphs serve as non-trivial (not merely paraphrasing the belief as an edge), complete (explicitly connecting the argument to the belief), and unambiguous (inferring the target stance) explanations for the task (Sec. [3](#sec:task_definition){reference-type="ref" reference="sec:task_definition"}). The graph quality is iteratively improved (up to 90%) through multiple verification and refinement rounds. 79% of our graphs contain external commonsense nodes, indicating that commonsense is a critical component of our task. Explanation graph generation poses several syntactic and semantic challenges, such as predicting the internal nodes, generating the external concepts, and predicting and labeling the edges so that the result is a connected DAG. Finally, the graph should unambiguously infer the target stance.
+
+ We next present a multi-level evaluation framework for our task (Sec. [6](#sec:metrics){reference-type="ref" reference="sec:metrics"}, Fig. [4](#fig:metrics){reference-type="ref" reference="fig:metrics"}), consisting of diverse automatic metrics and human evaluation. The framework checks for stance and graph consistency, along with the structural and semantic correctness of explanation graphs: locally, by evaluating the importance of each edge, and globally, by the graph's ability to reveal the target stance. Furthermore, we propose graph-matching metrics like Graph Edit Distance [@abu2015exact], as well as metrics that extend text-generation metrics to graphs (based on multiple test graphs in our dataset). Lastly, as strong initial baselines for this new task, we propose a commonsense-augmented structured prediction model that predicts nodes and edges jointly and enforces global graph constraints (e.g., connectivity) through an Integer Linear Program (ILP). We also experiment with BART [@lewis2019bart] and T5 [@2020t5] based models, and show that all these models struggle to generate meaningful graph explanations for our challenging task, leaving a large gap between model and human performance. Overall, our main contributions are:
+
+ - We propose [ExplaGraphs]{.smallcaps}, a *generative* and *structured* commonsense-reasoning task of explanation graph generation for stance prediction.
+
+ - We construct a benchmarking dataset for our task and propose a novel *Create-Verify-And-Refine* graph collection framework for collecting graphs that serve as explanations for the task. Our framework is generalizable to any crowdsourced collection of graph-structured data.
+
+ - We propose a multi-level evaluation framework with automatic metrics and human evaluation, which assesses the structural and semantic correctness of graphs and their match with human-written graphs (a small illustration follows this list).
+
+ - We propose a commonsense-augmented structured model and BART/T5 based models for this task, and find that they are relatively weak at generating reasoning graphs, obtaining 20% accuracy (compared to human performance of 84%).
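+
+ As a small illustration of the graph-matching side of such an evaluation, a generic graph edit distance can be computed with off-the-shelf tooling (a sketch; the paper's exact metric definitions live in Sec. 6, and the toy graphs below are our own):
+
+ ```python
+ import networkx as nx
+
+ gold = nx.DiGraph([("a", "b"), ("b", "c")])  # gold explanation skeleton
+ pred = nx.DiGraph([("a", "b"), ("a", "c")])  # model prediction
+
+ # Number of node/edge edits needed to turn pred into gold (smaller is better).
+ print(nx.graph_edit_distance(pred, gold))
+ ```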
+
+ We encourage researchers to use our benchmark as a way to improve and explore structured commonsense reasoning capabilities of models.
2105.00043/main_diagram/main_diagram.drawio ADDED
The diff for this file is too large to render. See raw diff
 
2105.00043/paper_text/intro_method.md ADDED
@@ -0,0 +1,84 @@
+ # Method
+
+ **Submodular Functions:** We let $\Vcal$ denote the *ground-set* of $n$ data points $\Vcal = \{1, 2, 3,...,n \}$ and consider a set function $f: 2^{\Vcal} \rightarrow \Re$. The function $f$ is submodular [@fujishige2005submodular] if it satisfies the diminishing-marginal-returns property, namely $f(j | \Xcal) \geq f(j | \Ycal)$ for all $\Xcal \subseteq \Ycal \subseteq \Vcal, j \notin \Ycal$, where $f(j | \Xcal) = f(\Xcal \cup \{j\}) - f(\Xcal)$ is the marginal gain of adding $j$ to $\Xcal$. Facility location, set cover, log determinants, *etc.* are some examples [@iyer2015submodular]. Due to close connections between submodularity and entropy, submodular functions can also be viewed as *information functions* [@zhang1998characterization]. Submodularity also ensures that greedy maximization achieves a bounded approximation factor [@nemhauser1978analysis].
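+
+ As a concrete reference point, here is a minimal sketch of such a greedy maximizer for a generic set function under a cardinality constraint (the function and variable names are our own illustration, not the paper's code):
+
+ ```python
+ def greedy_maximize(f, ground_set, k):
+     """Naive greedy: repeatedly add the element with the largest marginal
+     gain f(j | A) = f(A + [j]) - f(A). For monotone submodular f, this
+     achieves the classic (1 - 1/e) approximation factor."""
+     A = []
+     for _ in range(k):
+         best = max((j for j in ground_set if j not in A),
+                    key=lambda j: f(A + [j]) - f(A))
+         A.append(best)
+     return A
+ ```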
5
+
6
+ **Submodular Mutual Information (MI):** Given a set of items $\Acal, \Bcal \subseteq \Vcal$, the submodular mutual information (MI) [@levin2020online; @iyer2020submodular] is defined as $I_f(\Acal; \Bcal) = f(\Acal) + f(\Bcal) - f(\Acal \cup \Bcal)$. Intuitively, this measures the similarity between $\Bcal$ and $\Acal$ and we refer to $\Bcal$ as the query set.
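+
+ For intuition, $I_f(\Acal; \Bcal)$ can be computed directly from this definition; below is a small self-contained example with a facility-location $f$ (the toy similarity matrix and names are our own construction):
+
+ ```python
+ import numpy as np
+
+ def facility_location(S, A):
+     """f(A) = sum_i max_{j in A} s_ij for a pairwise similarity matrix S."""
+     return S[:, A].max(axis=1).sum() if A else 0.0
+
+ def submodular_mi(f, A, B):
+     """I_f(A; B) = f(A) + f(B) - f(A union B)."""
+     return f(A) + f(B) - f(sorted(set(A) | set(B)))
+
+ S = np.array([[1.0, 0.8, 0.1, 0.0],
+               [0.8, 1.0, 0.2, 0.1],
+               [0.1, 0.2, 1.0, 0.7],
+               [0.0, 0.1, 0.7, 1.0]])
+ f = lambda A: facility_location(S, list(A))
+ print(submodular_mi(f, [0], [1]))  # ~1.7: items 0 and 1 are similar
+ print(submodular_mi(f, [0], [3]))  # ~0.2: items 0 and 3 are dissimilar
+ ```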
7
+
8
+ [@kaushal2020unified] extend MI to handle the case when the *target* can come from an auxiliary set $\Vcal^{\prime}$ different from the ground set $\Vcal$. For targeted data subset selection, $\Vcal$ is the source set of data instances and the target is a subset of data points (a validation set or the specific set of examples of interest). Let $\Omega = \Vcal \cup \Vcal^{\prime}$. We define a set function $f: 2^{\Omega} \rightarrow \Re$. Although $f$ is defined on $\Omega$, the discrete optimization problem will only be defined on subsets $\Acal \subseteq \Vcal$. To find an optimal subset given a query set $\Qcal \subseteq \Vcal^{\prime}$, we can define $g_{\Qcal}(\Acal) = I_f(\Acal; \Qcal)$, $\Acal \subseteq \Vcal$, and maximize it.
9
+
10
+ We use the MI functions recently introduced in [@iyer2020submodular; @levin2020online] and their extensions introduced in [@kaushal2020unified]. For any two data points $i \in \Vcal$ and $j \in \Qcal$, let $s_{ij}$ denote the similarity between them.
11
+
12
+ **Graph Cut MI:** The submodular mutual information (SMI) instantiation of graph cut (GCMI) is defined as: $I_{GC}(\Acal;\Qcal)=2\sum_{i \in \Acal} \sum_{j \in \Qcal} s_{ij}$. Since maximizing GCMI maximizes the joint pairwise sum with the query set, it leads to a summary similar to the query set $\Qcal$. In fact, specific instantiations of GCMI have been intuitively used for query-focused summarization for videos [@vasudevan2017query] and documents [@lin2012submodularity; @li2012multi].
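+
+ With a cross-similarity kernel between the ground set and the query set, this definition is a one-liner; a sketch with our own naming (`S_VQ` holds $s_{ij}$ for $i \in \Vcal$, $j \in \Qcal$):
+
+ ```python
+ import numpy as np
+
+ def gcmi(S_VQ, A):
+     """I_GC(A; Q) = 2 * sum_{i in A} sum_{j in Q} s_ij."""
+     return 2.0 * S_VQ[A, :].sum()
+ ```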
13
+
14
+ **Facility Location MI - V1:** In the first variant of facility location, the domain $D$ over which representation is measured is set to the ground set $\Vcal$. The SMI instantiation of FL1MI can be defined as: $I_{FL1}(\Acal;\Qcal)=\sum_{i \in \Vcal}\min(\max_{j \in \Acal}s_{ij}, \eta \max_{j \in \Qcal}s_{ij})$. The first term in the min(.) of FL1MI models diversity, and the second term models query relevance. Increasing $\eta$ makes the resulting summary more relevant to the query.
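+
+ A direct transcription of this definition (a sketch; `S_VV` is the within-ground-set kernel and `S_VQ` the ground-set-to-query kernel, names ours):
+
+ ```python
+ import numpy as np
+
+ def fl1mi(S_VV, S_VQ, A, eta=1.0):
+     """Per ground-set point: min(its coverage by A, eta * its coverage by Q)."""
+     rel_A = S_VV[:, A].max(axis=1) if A else np.zeros(S_VV.shape[0])
+     rel_Q = S_VQ.max(axis=1)
+     return np.minimum(rel_A, eta * rel_Q).sum()
+ ```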
15
+
16
+ **Facility Location MI - V2:** In the second variant, $D$ is set to $\Vcal \cup \Qcal$. The SMI instantiation of FL2MI can be defined as: $I_{FL2}(\Acal;\Qcal)=\sum_{i \in \Qcal} \max_{j \in \Acal} s_{ij} + \eta\sum_{i \in \Acal} \max_{j \in \Qcal} s_{ij}$. FL2MI is also very intuitive for query relevance: it measures how well the data points most relevant to the query set are represented, and vice versa, and can thus be thought of as a bidirectional representation score.
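+
+ The bidirectional reading maps directly to code (again a sketch with our own naming):
+
+ ```python
+ def fl2mi(S_VQ, A, eta=1.0):
+     S_AQ = S_VQ[A, :]                  # |A| x |Q| cross-similarities
+     q_side = S_AQ.max(axis=0).sum()    # each query's best match in A
+     a_side = S_AQ.max(axis=1).sum()    # each selected point's best match in Q
+     return q_side + eta * a_side
+ ```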
17
+
18
+ **Log Determinant MI:** The SMI instantiation of LogDetMI can be defined as: $I_{LogDet}(\Acal;\Qcal)=\log\det(S_{\Acal}) -\log\det(S_{\Acal} - \eta^2 S_{\Acal,\Qcal}S_{\Qcal}^{-1}S_{\Acal,\Qcal}^T)$, where $S_{\Acal}$ denotes the similarity matrix among the items in $\Acal$ and $S_{\Acal, \Qcal}$ denotes the cross-similarity matrix between the items in sets $\Acal$ and $\Qcal$. The similarity matrix is constructed in such a way that the cross-similarity between $\Acal$ and $\Qcal$ is multiplied by $\eta$ to control the trade-off between query relevance and diversity.
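+
+ A numerically direct transcription (a sketch; we use `np.linalg.slogdet` for stability and solve against $S_{\Qcal}$ rather than inverting it, with kernel names our own):
+
+ ```python
+ import numpy as np
+
+ def logdet_mi(S_VV, S_QQ, S_VQ, A, eta=1.0):
+     S_A = S_VV[np.ix_(A, A)]
+     S_AQ = S_VQ[A, :]
+     inner = S_A - eta**2 * S_AQ @ np.linalg.solve(S_QQ, S_AQ.T)
+     return np.linalg.slogdet(S_A)[1] - np.linalg.slogdet(inner)[1]
+ ```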
19
+
20
+ We apply SMI functions to the setting of targeted data subset selection for improving a model's accuracy on some target classes/instances at a given additional labeling cost ($k$ instances), without compromising on the overall accuracy. Let $\Ecal$ be an initial training set of labeled instances and $\Tcal$ be the set of examples that the user cares about and desires better performance on. Let $\Ucal$ be a large unlabeled dataset. Using an appropriate feature representation of the instances, we compute kernels of similarities of elements within $\Ucal$, within $\Tcal$, and between $\Ucal$ and $\Tcal$ to instantiate a MI function $I_f(\Acal; \Tcal)$, and maximize it to compute an optimal subset $\hat{\Acal} \subseteq \Ucal$ of size $k$ given $\Tcal$ as the target (query) set. We then augment $\Ecal$ with labeled $\hat{\Acal}$ (i.e., $L(\hat{\Acal})$) and re-train the model to achieve the desired improvement. By instantiating a rich class of MI functions including GCMI, FL1MI, FL2MI, COM, and LogDetMI, [TSS]{.smallcaps} offers a rich treatment of targeted subset selection. Our framework also allows adding an explicit diversity term $\gamma g(\Acal)$, where $\gamma$ is the weight and $g$ is a set function modeling diversity (e.g., total pairwise distance). This is helpful in cases when $I_f$ itself does not model diversity (e.g., GCMI). The algorithm is summarized in Algorithm [\[algo:tss\]](#algo:tss){reference-type="ref" reference="algo:tss"}, with a schematic code sketch after the algorithm block. Following [@ash2020deep; @killamsetty2020glister], we use gradients as the feature representation to compute the similarity kernels; the gradients are computed from the model's inference on $\Ucal$ and $\Tcal$, and similarity is computed using cosine similarity.
21
+
22
+ :::: algorithm
+ ::: algorithmic
+ **Require:** initial labeled set of examples $\Ecal$; large unlabeled dataset $\Ucal$; target subset/slice $\Tcal$ where we want to improve accuracy; loss function $\mathcal L$ for learning.
+ 1. Train the model with loss $\mathcal L$ on the labeled set $\Ecal$ and obtain parameters $\theta_E$.
+ 2. Compute the gradients $\{\nabla_{\theta_E} \mathcal L(x_i, y_i), i \in \Ucal\}$ and $\{\nabla_{\theta_E} \mathcal L(x_i, y_i), i \in \Tcal\}$.
+ 3. Using the gradients, compute the similarity kernels and define a submodular function $f$ and a diversity function $g$.
+ 4. $\hat{\Acal} \gets \operatorname{argmax}_{\Acal \subseteq \Ucal, |\Acal|\leq k} \; I_f(\Acal;\Tcal) + \gamma g(\Acal)$.
+ 5. Obtain the labels of the elements in $\hat{\Acal}$: $L(\hat{\Acal})$.
+ 6. Train a model on the combined labeled set $\Ecal \cup L(\hat{\Acal})$.
+ :::
+ ::::
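+
+ Putting the steps together, a deliberately naive rendering of the selection step (reusing the `fl2mi` sketch above; the gradient features are assumed precomputed, and all names are illustrative):
+
+ ```python
+ import numpy as np
+
+ def tss_select(grads_U, grads_T, k, eta=1.0, gamma=0.0):
+     """Greedily pick k unlabeled points maximizing I_f(A; T) + gamma * g(A)."""
+     # Cosine-similarity kernel from per-example gradient features.
+     U = grads_U / np.linalg.norm(grads_U, axis=1, keepdims=True)
+     T = grads_T / np.linalg.norm(grads_T, axis=1, keepdims=True)
+     S_VQ = U @ T.T
+
+     def objective(A):
+         score = fl2mi(S_VQ, A, eta)
+         if gamma > 0.0 and len(A) > 1:
+             S_A = U[A] @ U[A].T        # optional disparity-sum-style diversity
+             score += gamma * (len(A) ** 2 - S_A.sum())
+         return score
+
+     A = []
+     for _ in range(k):
+         best = max((j for j in range(len(U)) if j not in A),
+                    key=lambda j: objective(A + [j]))
+         A.append(best)
+     return A  # indices into the unlabeled pool; label these and re-train
+ ```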
+
+ **Dataset, Baselines and Implementation details:** We demonstrate the effectiveness of [TSS]{.smallcaps} in obtaining a targeted subset for improving image classification accuracy for some target classes on the CIFAR-10 and MNIST datasets. To simulate a real-world setting, we split the available train set into a train set, a validation set, and a data lake such that (i) the train set has few labeled instances and poorly represents two randomly picked classes (the target), and (ii) the data lake is a large set whose labels we do not use (resembling a large pool of unlabeled data in the real world). The poorly represented classes do not perform well on the validation set and thus provide the clue for picking up the target of interest. Performance is measured on the test set from the respective datasets. We then apply [TSS]{.smallcaps} (Algorithm [\[algo:tss\]](#algo:tss){reference-type="ref" reference="algo:tss"}), comparing MI functions with other existing approaches. Specifically, for MI functions we use LogDetMI, GCMI, FL1MI, FL2MI, and GCMI + Diversity (equivalent to an intuitive approach of minimizing the average gradient difference with the target). For existing approaches, we compare with three active learning baselines (uncertainty sampling (US), [Badge]{.smallcaps}, and [Glister-Active]{.smallcaps} (GLISTER)), running them only once as per our setting (i.e., we select the unlabeled subset only once). Since these active learning baselines do not explicitly have information about the target set, to further strengthen them we also compare against two target-aware variants. The first is 'targeted uncertainty sampling' (TUS), where a product of the uncertainty and the similarity with the target is used to identify the subset; the second is [Glister-TSS]{.smallcaps}, where the target set is used in the bi-level optimization. Finally, we also compare with pure diversity/representation functions (Facility Location (FL), Graph Cut (GC), Log Determinant (LogDet), Disparity-Sum (DSUM)) and random sampling. We train the model (ResNet-18 [@he2016deep] for CIFAR-10, LeNet [@lecun1989backpropagation] for MNIST) using the cross-entropy loss and the SGD optimizer until training accuracy exceeds 99% (Base model). After augmenting the train set with the labeled version of the selected subset and re-training the model, we report the average gain in accuracy for the target classes and the overall gain in accuracy across all classes on the test set, averaged across 10 runs of randomly picking any two classes as the target. We run [TSS]{.smallcaps} for different budgets and also study the effect of budget on the performance. Wherever applicable, we keep the internal parameters at their default value of 1.
+
+ **Results:** In Table [1](#tab:cifar-mnist-results){reference-type="ref" reference="tab:cifar-mnist-results"}, we report the results for a budget of 400 for CIFAR-10 and 70 for MNIST. To keep the setting as realistic as possible, we set the target set to be much smaller than the budget (around 10% of the budget: 10 for CIFAR-10 and 6 for MNIST). We report the effect of budget on the gain in accuracy of the target classes in Fig. [1](#fig:gain-size){reference-type="ref" reference="fig:gain-size"}. On both datasets, MI functions yield the best improvement in accuracy on the target classes ($\approx$ 20-30% gain over the model's performance before re-training with the added targeted subset; $\approx$ 12% more than other methods) while also simultaneously increasing the overall accuracy by $\approx$ 2-6%. They consistently outperform [Badge]{.smallcaps}, [Glister-TSS]{.smallcaps}, [US]{.smallcaps} and [TUS]{.smallcaps} across all budgets. Since the SMI functions (LogDetMI, FL2MI and GCMI+DIV) model both query relevance and diversity, they perform better than both a) functions which tend to prefer relevance (GCMI, TUS) and b) functions which tend to prefer diversity/representation ([Badge]{.smallcaps}, FL, GC, DSUM, LogDet). Also, we observe that across different budgets, the MI functions outperform other methods by greater margins on the target class accuracy (Fig. [1](#fig:gain-size){reference-type="ref" reference="fig:gain-size"}). This is expected, as the other methods do not effectively take the target into account.
+
+ ![Comparison of different methods for targeted subset selection for different budgets on CIFAR-10 and MNIST. X-axis: budgets, Y-axis: gain in model accuracy for target classes on test set. MI based approaches (lines in [red]{style="color: red"}) significantly outperform others across all subset sizes. (Section [4](#subsec:exp-tss){reference-type="ref" reference="subsec:exp-tss"}).](images/UAI_2021-budget_effect_corrGrad.pdf){#fig:gain-size width="100%"}
+
+ :::: center
35
+ ::: {#tab:cifar-mnist-results}
36
+ +-----------------------------------+---------------------------------------------------------------+-----------------------------------------------------------------+
37
+ | Method | CIFAR-10 | MNIST |
38
+ +:==================================+==============================:+==============================:+================================:+==============================:+
39
+ | | Target | Overall | Target | Overall |
40
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
41
+ | 2-5 Base | 11.2 | 42.2 | 52.76 | 86.8 |
42
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
43
+ | Random | +2.75 | +1.43 | +1.08 | -0.032 |
44
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
45
+ | BADGE [@ash2020deep] | +7.245 | +2.38 | +6.7 | +1.659 |
46
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
47
+ | GLISTER [@killamsetty2020glister] | +12.1 | +2.27 | +14.56 | +2.27 |
48
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
49
+ | GLISTER-TSS | +16.5 | +1.78 | +22.895 | +4.05 |
50
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
51
+ | US [@settles2009active] | +3.95 | +2.03 | +7.56 | +1.182 |
52
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
53
+ | TUS | +10.45 | [+2.99]{style="color: red"} | +6.21 | +1.611 |
54
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
55
+ | LogDet | +11.85 | +1.2 | +13.29 | +1.89 |
56
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
57
+ | FL | +15.3 | [+2.63]{style="color: green"} | +15.025 | +2.41 |
58
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
59
+ | GC | +15.9 | +1.79 | +10.935 | +1.16 |
60
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
61
+ | DSUM | +10.65 | +1.9 | +20.515 | +3.92 |
62
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
63
+ | LogDetMI | [+26.5]{style="color: blue"} | +2.21 | +28.035 | [+5.26]{style="color: blue"} |
64
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
65
+ | FL2MI | [+20.2]{style="color: red"} | +1.7 | [+34.36]{style="color: blue"} | [+5.14]{style="color: green"} |
66
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
67
+ | FL1MI | +17.1 | +2.28 | +21.21 | +3.83 |
68
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
69
+ | GCMI | +17.6 | +1.48 | [+29.375]{style="color: green"} | [+5.21]{style="color: red"} |
70
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
71
+ | GCMI+DIV | [+18.2]{style="color: green"} | [+3.74]{style="color: blue"} | [+31.28]{style="color: red"} | +4.21 |
72
+ +-----------------------------------+-------------------------------+-------------------------------+---------------------------------+-------------------------------+
+
+ : Comparison of [TSS]{.smallcaps} (MI functions) with other methods for a budget of 400 (CIFAR-10) and 70 (MNIST). The numbers are the gain in % accuracy of the target classes (Target) and all classes (Overall) over the Base model after re-training the model (see text). Highest in [blue]{style="color: blue"}, $2^{nd}$ and $3^{rd}$ highest in [red]{style="color: red"} and [green]{style="color: green"} respectively.
+ :::
+ ::::
+
+ We demonstrate the effectiveness of SMI functions for improving a model's performance by augmenting the training data with samples that match a target distribution (targeted data subset selection). Through experiments on the CIFAR-10 and MNIST datasets, we empirically verify the superiority of SMI functions over existing methods.
+
+ [^1]: Department of Computer Science, University of Texas at Dallas
+
+ [^2]: Department of Computer Science and Engineering, Indian Institute of Technology Bombay
+
+ [^3]: Department of Electrical & Computer Engineering, University of Washington, Seattle