SlowGuess committed on
Commit 2e760c8 · verified · 1 Parent(s): e57215a

Add Batch fa21f1cc-cec0-4abf-9970-afcf2b2eb0f5

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. .gitattributes +64 -0
  2. 2202.06xxx/2202.06689/98a87ad6-ee0f-4b93-840e-3c52dc80b6e9_content_list.json +0 -0
  3. 2202.06xxx/2202.06689/98a87ad6-ee0f-4b93-840e-3c52dc80b6e9_model.json +0 -0
  4. 2202.06xxx/2202.06689/98a87ad6-ee0f-4b93-840e-3c52dc80b6e9_origin.pdf +3 -0
  5. 2202.06xxx/2202.06689/full.md +477 -0
  6. 2202.06xxx/2202.06689/images.zip +3 -0
  7. 2202.06xxx/2202.06689/layout.json +0 -0
  8. 2202.06xxx/2202.06709/0a656b52-3523-4411-8242-ec5876763c94_content_list.json +0 -0
  9. 2202.06xxx/2202.06709/0a656b52-3523-4411-8242-ec5876763c94_model.json +0 -0
  10. 2202.06xxx/2202.06709/0a656b52-3523-4411-8242-ec5876763c94_origin.pdf +3 -0
  11. 2202.06xxx/2202.06709/full.md +585 -0
  12. 2202.06xxx/2202.06709/images.zip +3 -0
  13. 2202.06xxx/2202.06709/layout.json +0 -0
  14. 2202.06xxx/2202.06767/efa06db7-44a6-489d-836f-b237416be61e_content_list.json +0 -0
  15. 2202.06xxx/2202.06767/efa06db7-44a6-489d-836f-b237416be61e_model.json +0 -0
  16. 2202.06xxx/2202.06767/efa06db7-44a6-489d-836f-b237416be61e_origin.pdf +3 -0
  17. 2202.06xxx/2202.06767/full.md +377 -0
  18. 2202.06xxx/2202.06767/images.zip +3 -0
  19. 2202.06xxx/2202.06767/layout.json +0 -0
  20. 2202.06xxx/2202.06804/db9c788c-9318-46d1-9180-61cb7efe0259_content_list.json +0 -0
  21. 2202.06xxx/2202.06804/db9c788c-9318-46d1-9180-61cb7efe0259_model.json +0 -0
  22. 2202.06xxx/2202.06804/db9c788c-9318-46d1-9180-61cb7efe0259_origin.pdf +3 -0
  23. 2202.06xxx/2202.06804/full.md +449 -0
  24. 2202.06xxx/2202.06804/images.zip +3 -0
  25. 2202.06xxx/2202.06804/layout.json +0 -0
  26. 2202.06xxx/2202.06817/04b8261e-67f2-4cd9-9432-44c31d240cd9_content_list.json +0 -0
  27. 2202.06xxx/2202.06817/04b8261e-67f2-4cd9-9432-44c31d240cd9_model.json +0 -0
  28. 2202.06xxx/2202.06817/04b8261e-67f2-4cd9-9432-44c31d240cd9_origin.pdf +3 -0
  29. 2202.06xxx/2202.06817/full.md +0 -0
  30. 2202.06xxx/2202.06817/images.zip +3 -0
  31. 2202.06xxx/2202.06817/layout.json +0 -0
  32. 2202.06xxx/2202.06840/0bd9eaad-6f77-437e-b60a-aaa13ea7bfb5_content_list.json +0 -0
  33. 2202.06xxx/2202.06840/0bd9eaad-6f77-437e-b60a-aaa13ea7bfb5_model.json +0 -0
  34. 2202.06xxx/2202.06840/0bd9eaad-6f77-437e-b60a-aaa13ea7bfb5_origin.pdf +3 -0
  35. 2202.06xxx/2202.06840/full.md +511 -0
  36. 2202.06xxx/2202.06840/images.zip +3 -0
  37. 2202.06xxx/2202.06840/layout.json +0 -0
  38. 2202.06xxx/2202.06856/1d6db6ce-c1db-4f5a-bdc1-04e8ab208327_content_list.json +0 -0
  39. 2202.06xxx/2202.06856/1d6db6ce-c1db-4f5a-bdc1-04e8ab208327_model.json +0 -0
  40. 2202.06xxx/2202.06856/1d6db6ce-c1db-4f5a-bdc1-04e8ab208327_origin.pdf +3 -0
  41. 2202.06xxx/2202.06856/full.md +0 -0
  42. 2202.06xxx/2202.06856/images.zip +3 -0
  43. 2202.06xxx/2202.06856/layout.json +0 -0
  44. 2202.06xxx/2202.06861/dce406ec-fdca-4c53-83c1-98a2ac664d0a_content_list.json +1038 -0
  45. 2202.06xxx/2202.06861/dce406ec-fdca-4c53-83c1-98a2ac664d0a_model.json +1740 -0
  46. 2202.06xxx/2202.06861/dce406ec-fdca-4c53-83c1-98a2ac664d0a_origin.pdf +3 -0
  47. 2202.06xxx/2202.06861/full.md +179 -0
  48. 2202.06xxx/2202.06861/images.zip +3 -0
  49. 2202.06xxx/2202.06861/layout.json +0 -0
  50. 2202.06xxx/2202.06875/5b7d71fa-430f-4922-ab60-5c0553268191_content_list.json +0 -0
.gitattributes CHANGED
@@ -7927,3 +7927,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
7927
  2202.10xxx/2202.10580/19a83009-933d-445d-8ce2-e95a5b08fab5_origin.pdf filter=lfs diff=lfs merge=lfs -text
7928
  2202.12xxx/2202.12264/7e5ca843-a45f-4e73-b52f-48fbbc9d2b56_origin.pdf filter=lfs diff=lfs merge=lfs -text
7929
  2203.03xxx/2203.03550/ce08bb8f-8429-42b0-ac06-b5dd554a8add_origin.pdf filter=lfs diff=lfs merge=lfs -text
7930
+ 2202.06xxx/2202.06689/98a87ad6-ee0f-4b93-840e-3c52dc80b6e9_origin.pdf filter=lfs diff=lfs merge=lfs -text
7931
+ 2202.06xxx/2202.06709/0a656b52-3523-4411-8242-ec5876763c94_origin.pdf filter=lfs diff=lfs merge=lfs -text
7932
+ 2202.06xxx/2202.06767/efa06db7-44a6-489d-836f-b237416be61e_origin.pdf filter=lfs diff=lfs merge=lfs -text
7933
+ 2202.06xxx/2202.06804/db9c788c-9318-46d1-9180-61cb7efe0259_origin.pdf filter=lfs diff=lfs merge=lfs -text
7934
+ 2202.06xxx/2202.06817/04b8261e-67f2-4cd9-9432-44c31d240cd9_origin.pdf filter=lfs diff=lfs merge=lfs -text
7935
+ 2202.06xxx/2202.06840/0bd9eaad-6f77-437e-b60a-aaa13ea7bfb5_origin.pdf filter=lfs diff=lfs merge=lfs -text
7936
+ 2202.06xxx/2202.06856/1d6db6ce-c1db-4f5a-bdc1-04e8ab208327_origin.pdf filter=lfs diff=lfs merge=lfs -text
7937
+ 2202.06xxx/2202.06861/dce406ec-fdca-4c53-83c1-98a2ac664d0a_origin.pdf filter=lfs diff=lfs merge=lfs -text
7938
+ 2202.06xxx/2202.06875/5b7d71fa-430f-4922-ab60-5c0553268191_origin.pdf filter=lfs diff=lfs merge=lfs -text
7939
+ 2202.06xxx/2202.06877/aa2af52f-a7e4-444c-80e6-12ddc159c18d_origin.pdf filter=lfs diff=lfs merge=lfs -text
7940
+ 2202.06xxx/2202.06924/0c33e7ff-14a3-4fd6-9e80-e34872c18457_origin.pdf filter=lfs diff=lfs merge=lfs -text
7941
+ 2202.06xxx/2202.06934/beffa62e-3ceb-4eec-8295-f72dd79d43d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
7942
+ 2202.06xxx/2202.06935/03071b6a-9ae1-48b3-aaaf-158655df1fd3_origin.pdf filter=lfs diff=lfs merge=lfs -text
7943
+ 2202.06xxx/2202.06985/a2010d68-4d16-4d58-8207-0aeef2774c24_origin.pdf filter=lfs diff=lfs merge=lfs -text
7944
+ 2202.06xxx/2202.06988/4fe66a9f-7abe-4a11-ad00-412276931830_origin.pdf filter=lfs diff=lfs merge=lfs -text
7945
+ 2202.06xxx/2202.06991/ced529df-a551-491e-98cb-bd15ced6be75_origin.pdf filter=lfs diff=lfs merge=lfs -text
7946
+ 2202.07xxx/2202.07008/9bf1615b-765d-444a-b258-edad0e5abd09_origin.pdf filter=lfs diff=lfs merge=lfs -text
7947
+ 2202.07xxx/2202.07054/74d57832-b14d-4c06-b848-432dcf28c14f_origin.pdf filter=lfs diff=lfs merge=lfs -text
7948
+ 2202.07xxx/2202.07082/de2893bd-3d2e-4c65-bf6a-2b14e210fdf4_origin.pdf filter=lfs diff=lfs merge=lfs -text
7949
+ 2202.07xxx/2202.07105/24d6c108-f39b-40ef-87e5-5e0501bdc840_origin.pdf filter=lfs diff=lfs merge=lfs -text
7950
+ 2202.07xxx/2202.07123/54819b94-0642-4c25-9e31-6d8be8bf804e_origin.pdf filter=lfs diff=lfs merge=lfs -text
7951
+ 2202.07xxx/2202.07125/8df90bb2-4afc-477f-8170-fa43952772fd_origin.pdf filter=lfs diff=lfs merge=lfs -text
7952
+ 2202.07xxx/2202.07136/91512c1b-d4a4-457c-ba63-0e3ad3add3aa_origin.pdf filter=lfs diff=lfs merge=lfs -text
7953
+ 2202.07xxx/2202.07141/4b572bde-1d40-4708-b4ab-11926d6fe4f7_origin.pdf filter=lfs diff=lfs merge=lfs -text
7954
+ 2202.07xxx/2202.07145/f9798ddc-3e9e-492d-b06c-03f3cb528338_origin.pdf filter=lfs diff=lfs merge=lfs -text
7955
+ 2202.07xxx/2202.07176/38745ece-8c74-4bd4-bfbd-e1f1093d9cb6_origin.pdf filter=lfs diff=lfs merge=lfs -text
7956
+ 2202.07xxx/2202.07178/eda361ec-0636-4e73-9d04-7f015b69fca0_origin.pdf filter=lfs diff=lfs merge=lfs -text
7957
+ 2202.07xxx/2202.07179/e7859ddc-1ecc-493f-bab0-7a9cd3b78e74_origin.pdf filter=lfs diff=lfs merge=lfs -text
7958
+ 2202.07xxx/2202.07190/d9169632-5a92-43cb-b0cc-5fda2a199f79_origin.pdf filter=lfs diff=lfs merge=lfs -text
7959
+ 2202.07xxx/2202.07206/c7ae8a84-8b73-4fdf-bd25-c4fcf7ac44b1_origin.pdf filter=lfs diff=lfs merge=lfs -text
7960
+ 2202.07xxx/2202.07230/ec28f447-6aa2-4390-9971-df996bd0fbd8_origin.pdf filter=lfs diff=lfs merge=lfs -text
7961
+ 2202.07xxx/2202.07241/8058a2f3-e2cd-478b-84ac-da43d9246ea9_origin.pdf filter=lfs diff=lfs merge=lfs -text
7962
+ 2202.07xxx/2202.07256/9f56a0dc-2f8a-481f-8433-185393bee9c9_origin.pdf filter=lfs diff=lfs merge=lfs -text
7963
+ 2202.07xxx/2202.07262/15291611-decd-47b6-802b-e30df3ad1780_origin.pdf filter=lfs diff=lfs merge=lfs -text
7964
+ 2202.07xxx/2202.07282/68fe142b-0abc-4232-b3c4-678ae1ac1e64_origin.pdf filter=lfs diff=lfs merge=lfs -text
7965
+ 2202.07xxx/2202.07304/5d47e4f8-0e89-4b77-86f1-79f37e524daa_origin.pdf filter=lfs diff=lfs merge=lfs -text
7966
+ 2202.07xxx/2202.07371/c0c6bcc0-ccd5-4ed0-ae1c-994f2c99aaf8_origin.pdf filter=lfs diff=lfs merge=lfs -text
7967
+ 2202.07xxx/2202.07391/aa7bdc1a-c1b6-4530-9fdb-89def4a7b57f_origin.pdf filter=lfs diff=lfs merge=lfs -text
7968
+ 2202.07xxx/2202.07476/4e77fbe0-929b-48b1-b33d-64f458d2df27_origin.pdf filter=lfs diff=lfs merge=lfs -text
7969
+ 2202.07xxx/2202.07477/80ee62f0-8229-4826-b449-d587449c9213_origin.pdf filter=lfs diff=lfs merge=lfs -text
7970
+ 2202.07xxx/2202.07481/48a3b1e2-6765-4bec-abaa-b070321dc6db_origin.pdf filter=lfs diff=lfs merge=lfs -text
7971
+ 2202.07xxx/2202.07508/df7aa4f7-2e0a-4c09-9d52-3761f1a66d81_origin.pdf filter=lfs diff=lfs merge=lfs -text
7972
+ 2202.07xxx/2202.07516/bd3f8809-fd47-47c7-89d3-280e5dbf4e88_origin.pdf filter=lfs diff=lfs merge=lfs -text
7973
+ 2202.07xxx/2202.07551/5926b390-032d-4582-a42f-02f9c7cec546_origin.pdf filter=lfs diff=lfs merge=lfs -text
7974
+ 2202.07xxx/2202.07559/5ac49f08-32e4-47cc-abdd-05b783d2d31f_origin.pdf filter=lfs diff=lfs merge=lfs -text
7975
+ 2202.07xxx/2202.07562/e2e34741-99ff-4ca3-b015-3bc73376c85d_origin.pdf filter=lfs diff=lfs merge=lfs -text
7976
+ 2202.07xxx/2202.07569/52dc7bd0-b1df-45ab-b502-6b22c4232a7f_origin.pdf filter=lfs diff=lfs merge=lfs -text
7977
+ 2202.07xxx/2202.07575/53f79983-aab0-45c9-98b5-2cddcefa823f_origin.pdf filter=lfs diff=lfs merge=lfs -text
7978
+ 2202.07xxx/2202.07643/12814012-47fb-48b8-ba7d-54664fde1180_origin.pdf filter=lfs diff=lfs merge=lfs -text
7979
+ 2202.07xxx/2202.07646/1307a3c0-e72a-46aa-8082-f8b83a7c876f_origin.pdf filter=lfs diff=lfs merge=lfs -text
7980
+ 2202.07xxx/2202.07648/3eb5eb2f-9b69-47b8-a17f-381842362b55_origin.pdf filter=lfs diff=lfs merge=lfs -text
7981
+ 2202.07xxx/2202.07654/17d45375-a510-4f2f-b688-110f43bd759c_origin.pdf filter=lfs diff=lfs merge=lfs -text
7982
+ 2202.07xxx/2202.07682/048ef658-2875-48bd-9ee0-c7e68c6ea8f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
7983
+ 2202.07xxx/2202.07757/9e1ac16b-9fb3-4592-82ef-7fc262bd5561_origin.pdf filter=lfs diff=lfs merge=lfs -text
7984
+ 2202.07xxx/2202.07765/1ecd38ff-766a-4c6d-b7f4-88b63219cdb8_origin.pdf filter=lfs diff=lfs merge=lfs -text
7985
+ 2202.07xxx/2202.07785/7b1565dd-f033-4b3f-8eb3-edb5dab433b9_origin.pdf filter=lfs diff=lfs merge=lfs -text
7986
+ 2202.07xxx/2202.07789/9edcd26a-67c3-445c-9e7e-6c56b74a4120_origin.pdf filter=lfs diff=lfs merge=lfs -text
7987
+ 2202.07xxx/2202.07790/4753da2d-ed1c-4053-9f19-b03ba4ed3574_origin.pdf filter=lfs diff=lfs merge=lfs -text
7988
+ 2202.07xxx/2202.07800/fd2935cf-8a1c-486d-a2a3-00487a6cd2e5_origin.pdf filter=lfs diff=lfs merge=lfs -text
7989
+ 2202.07xxx/2202.07816/a9492dc0-fcbe-4b3e-94ad-0773b6c628a4_origin.pdf filter=lfs diff=lfs merge=lfs -text
7990
+ 2202.07xxx/2202.07824/c3b8c006-330e-4ab8-b227-4117fd663bf0_origin.pdf filter=lfs diff=lfs merge=lfs -text
7991
+ 2202.08xxx/2202.08210/818842da-3e3a-49a8-a493-48367e76fd1c_origin.pdf filter=lfs diff=lfs merge=lfs -text
7992
+ 2202.08xxx/2202.08974/fbff9862-9ff3-4a9e-be18-597046f173f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
7993
+ 2202.10xxx/2202.10336/2f2a79cb-5c7e-4b57-bb1c-a1a7a26ff0e0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2202.06xxx/2202.06689/98a87ad6-ee0f-4b93-840e-3c52dc80b6e9_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06689/98a87ad6-ee0f-4b93-840e-3c52dc80b6e9_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06689/98a87ad6-ee0f-4b93-840e-3c52dc80b6e9_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ddd1d16f1fcc5af54357a937fd805e507b16d1997198e071cb487e94f176f29d
3
+ size 1000843
2202.06xxx/2202.06689/full.md ADDED
@@ -0,0 +1,477 @@
1
+ # CodeFill: Multi-token Code Completion by Jointly Learning from Structure and Naming Sequences
2
+
3
+ Maliheh Izadi
4
+ M.Izadi@tudelft.nl
5
+ Delft University of Technology
6
+ Delft, Netherlands
7
+
8
+ Roberta Gismondi
9
+ R.Gismondi@student.tudelft.nl
10
+ Delft University of Technology
11
+ Delft, Netherlands
12
+
13
+ Georgios Gousios
14
+ G.Gousios@tudelft.nl
15
+ Delft University of Technology
16
+ Delft, Netherlands
17
+
18
+ # ABSTRACT
19
+
20
+ Code completion is an essential feature of IDEs, yet current auto-completers are restricted to either grammar-based or NLP-based single token completions. Both approaches have significant drawbacks: grammar-based autocompletion is restricted in dynamically-typed language environments, whereas NLP-based autocompleters struggle to understand the semantics of the programming language and the developer's code context.
21
+
22
+ In this work, we present CodeFill, a language model for autocompletion that combines learned structure and naming information. Using a parallel Transformer architecture and multi-task learning, CodeFill consumes sequences of source code token names and their equivalent AST token types. Uniquely, CodeFill is trained both for single-token and multi-token (statement) prediction, which enables it to learn long-range dependencies among grammatical and naming elements. We train CodeFill on two datasets, consisting of 29M and 425M lines of code, respectively. To make the evaluation more realistic, we develop a method to automatically infer points in the source code at which completion matters. We compare CodeFill against four baselines and two state-of-the-art models, GPT-C and TravTrans+. CodeFill surpasses all baselines in single token prediction (MRR: $70.9\%$ vs. $66.2\%$ and $67.8\%$ ) and outperforms the state of the art for multi-token prediction (ROUGE-L: $63.7\%$ vs. $52.4\%$ and $59.2\%$ , for $n = 4$ tokens). We publicly release our source code and datasets.
23
+
24
+ # CCS CONCEPTS
25
+
26
+ - Software and its engineering $\rightarrow$ Software notations and tools.
27
+
28
+ # KEYWORDS
29
+
30
+ Automatic Code Completion, Transformers, Multi-Task Learning, Types, Dynamically-typed Languages
31
+
32
+ # ACM Reference Format:
33
+
34
+ Maliheh Izadi, Roberta Gismondi, and Georgios Gousios. 2022. CodeFill: Multi-token Code Completion by Jointly Learning from Structure and Naming Sequences. In 44th International Conference on Software Engineering (ICSE '22), May 21-29, 2022, Pittsburgh, PA, USA. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3510003.3510172
35
+
36
+ Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
37
+
38
+ ICSE '22, May 21-29, 2022, Pittsburgh, PA, USA
39
+
40
+ © 2022 Copyright held by the owner/author(s).
41
+
42
+ ACM ISBN 978-1-4503-9221-1/22/05.
43
+
44
+ https://doi.org/10.1145/3510003.3510172
45
+
46
+ # 1 INTRODUCTION
47
+
48
+ Automatic code completion (also called autocompletion) is the task of completing source code statements by predicting what the developer would write given the current context. It helps developers finish their programming tasks faster by decreasing the typing effort and saving keystrokes, correcting typographical errors, and enabling them to explore APIs in a context-sensitive manner [5]. Autocompletion has therefore emerged as one of the most prominent features in Integrated Development Environments (IDEs).
49
+
50
+ To support autocompletion, current IDEs exploit the regular structure of programming languages. For example, an IDE knows that an opening parenthesis character ('(') at a function-call position must be followed by enough arguments to match the function's arity. It can therefore propose argument names for variables that are in scope. The availability of types in the host programming language helps increase the precision of suggestions; continuing with the example above, the IDE will only propose variable names for variables whose types match the function argument. Recent autocompletion systems also take into account past completions [43] and analyze large code bases [9] to rank suggestions according to their past popularity. Despite the best efforts of researchers and IDE developers, developers find rule-based code completion mechanisms lacking. Ranking suggestions alphabetically or by usage frequency (or even by suggestion list length [23]) neglects the current context, leading to unrelated recommendations [3]. These problems are exacerbated in dynamically typed language settings, as the IDE lacks significant information to provide accurate suggestions.
51
+
52
+ To mitigate rule-based autocompletion issues, researchers have proposed statistical [17, 37] and learning-based [6, 17, 29, 31, 32, 51] autocompletion models. Motivated by the naturalness hypothesis [19], learning-based models treat source code as natural language text, hence code completion becomes an instance of the well-studied text completion problem. However, treating source code as text deprives learning-based models of important code structure and semantic information [18]. Moreover, the open-ended nature of code leads to extremely large prediction spaces due to developers constantly inventing identifier names [24].
53
+
54
+ In an illuminating study, Hellendoorn et al. [18] identified a set of issues with current research in code completion. Initially, the current approach of evaluating accuracy as masked token prediction does not reflect how autocompletion is used; developers only trigger autocompletion after specific, and certainly not arbitrary, points in a program's syntax (e.g., after an opening parenthesis). Thus, treating all tokens equally masks the fact that some tokens (e.g., punctuation) are much easier to predict than others (e.g., identifiers). Moreover, most approaches (especially learning-based ones) do not
55
+
56
+ account for names coming from dependencies, which deprives them of important context.
57
+
58
+ In this work, we propose CodeFill, a novel learning-based approach that aims to address the problems identified above. CodeFill borrows from the bimodality hypothesis [12] to model source code inputs. Specifically, CodeFill exploits the fact that source code conveys information through two channels: the natural language channel (variable names, functions, etc.) and the code structure channel (inheritance, containment, etc.). Inputs are fed into the model simultaneously as both sequences of token values, which enable it to learn relationships among token values, and, uniquely, sequences of token types, which enable it to learn associations between syntactic elements. CodeFill is then asked to predict either the value or the type of the next $n$ tokens. To enable CodeFill to learn name dependencies across longer ranges, we also train it with an additional task, multi-token statement completion at the value level. The input token names to CodeFill are encoded with Byte-Pair Encoding (BPE), which enables CodeFill to both compress the input name space and generate names that are not in the input vocabulary. To present suggestions relevant to the developer's context, CodeFill includes a post-processing step that re-ranks the predictions based on the context visible to the model at the completion point. CodeFill is instantiated as a set of three Transformers (GPT2-based) trained in a soft parameter sharing Multi-Task Learning (MTL) setting. Each transformer models one of the three tasks, namely token value, token type, and multi-token prediction; a joint loss function across all three tasks updates the weights of all three model components. During each epoch, the model is trained on one task according to a configurable task-picking policy. Our target language is Python, to both demonstrate the efficiency of the model when type information is missing and also make our work comparable with the state of the art.
59
+
60
+ We pit CodeFill against four baseline models and two state-of-the-art models, namely GPT-C [49] and TravTrans+ [25]. We use two deduplicated datasets: the ETH150K dataset (deduplicated: PY117K) and a manually collected dataset consisting of practically all non-forked Python repositories on GitHub (PY1690K). We evaluate all models on two tasks: Token-Level and Statement-Level Predictions (TLP and SLP). For TLP, we evaluate for i) next token prediction (TLP-A), ii) next token type prediction (TLP-B), iii) next token value prediction (TLP-C). To ensure that the evaluation setting reflects real-world use of autocompletion, we also evaluate completions after specific syntactic elements, e.g., a dot (.) or an AWAIT keyword (TLP-D). We devise an algorithm to identify those syntactic elements (cardinal points) automatically given a corpus. We use top-1 Accuracy and the Mean Reciprocal Rank (MRR) as evaluation metrics. For the SLP task, we assess the models on statement completion with $n$ tokens and we compare them using METEOR and ROUGE-L measures. To show that each component in the CodeFill model is necessary, we perform an ablation study.
61
+
62
+ The results demonstrate that CodeFill outperforms all the competing approaches in all tasks. Indicatively, for each of the TLP-A, TLP-B, TLP-C, and TLP-D evaluation tasks, CodeFill achieves state-of-the-art MRRs of $81.7\%$ , $87.2\%$ , $69.5\%$ , and $70.2\%$ , while TravTrans+, a current state of the art, scores $79.4\%$ , $83.6\%$ , $63.8\%$ , and $66.2\%$ respectively. In the SLP evaluation task, for completing statements with four tokens (the average completion length in our datasets)
63
+
64
+ CodeFill obtains $70.2\%$ and $63.8\%$ for the METEOR and ROUGE-L metrics respectively, and thus significantly surpasses TravTrans+ ($64.5\%$ and $52.4\%$).
65
+
66
+ The main contributions of this work are:
67
+
68
+ - CodeFill, a model that unifies learning of structural and name-based information for the autocompletion task.
69
+ - An implementation of CodeFill, including training procedures, for the Python programming language. We make our code and datasets available.
70
+ - An extensive evaluation of CodeFill against four baseline models and two state-of-the-art approaches, demonstrating its superior performance.
71
+
72
+ # 2 BACKGROUND AND RELATED WORK
73
+
74
+ In this section, we briefly review the background work relating to our approach. Then, we present the main approaches to autocompletion, including the baselines we used for comparison.
75
+
76
+ # 2.1 Language Models and Transformers
77
+
78
+ Statistical Language Modeling (LM) is the task of developing a probabilistic model for predicting the next tokens in a sequence given its preceding tokens, i.e., the context [14]. This context for simpler LMs is a short sequence of words, while it can be sentences or paragraphs for larger models [46]. LMs are either used without modification, e.g., in a text generation task, or used inside a downstream task which requires language understanding. Programming languages also contain predictable statistical properties which can be learned using LMs [19].
79
+
80
+ Recently, Neural LMs have gained popularity due to their superior performance and generalization capabilities [14, 35]. Neural LMs address the n-gram data sparsity problem through parameterization of words as vectors [26]. A real-valued vector (word embedding) is used to represent each word in a vector space. This representation of words is learned based on their usage. This allows words with a similar meaning to have a similar representation. Note that traditional statistical LMs were not able to achieve this level of generalization [47]. Moreover, the distributed representation approach makes it easier for the embedding representation to scale with the vocabulary size. This is specifically useful with source code, where the vocabulary size can be unlimited due to the use of arbitrary identifiers. Initially, feed-forward neural network models, then Recurrent Neural Networks (RNNs) and next, networks with long-term memory, such as Long Short Term Memory (LSTM) networks were used.
81
+
82
+ Most recently, there have been significant improvements with the introduction of self-attention architectures in the Transformer, which is a sequence-to-sequence architecture for transforming a given sequence of elements to another form [53]. Attention enables Transformers to focus on selective parts of an input, thus generating more relevant outputs [34]. Transformers outperform previous deep models such as RNNs and LSTMs on multiple NLP tasks [53]. A Transformer consists of two main components: an encoder and a decoder. GPT-2, introduced by OpenAI, is a large generative Transformer-based LM trained on a dataset of 8M web pages [39].
83
+
84
+ GPT-2 has been successfully exploited for various NLP and source code analysis tasks [10, 16, 28, 49].
85
+
86
+ # 2.2 Multi-Task Learning
87
+
88
+ Multi-Task Learning (MTL) is a model training technique that combines multiple tasks and a joint loss function, with the goal of maximizing performance on one or all of the tasks. MTL enables knowledge transfer across related tasks and improves generalization by leveraging the domain-specific information contained in the training signals of related tasks [11]. An MTL model captures the common features among all the tasks through sharing hidden layers among them. MTL has been applied successfully in both NLP [13] and source code analysis [32, 33]. There are two approaches to jointly train models using MTL, hard-parameter and soft-parameter sharing. In the former, the hidden layers are shared between all tasks while keeping several task-specific output layers. For the latter, each task has its own model with its own parameters. However, the distance between them is regularized to encourage the parameters to be similar. In the soft-parameter sharing case, training can happen either sequentially (one task per training round) or alternatively (one task per epoch).
89
+
90
+ # 2.3 Related Work
91
+
92
+ Autocompletion is an active research area for both practitioners and researchers. Below, we review the latest approaches to autocompletion.
93
+
94
+ 2.3.1 Conventional Autocompletion. Traditionally, autocompleters used heuristic rules, static type information [20], similar code examples [9], and program history data [42] for suggesting completions. For instance, IDEs conventionally return a list of type-checked names, either in alphabetical order or ranked by usage frequency.
95
+
96
+ 2.3.2 Statistical LMs and Grammar-based Models. Several studies use statistical LMs for modeling source code [17, 19, 37, 52]. Tu et al. [52] built upon an n-gram model using a cache mechanism to capture locality in source code. Hellendoorn and Devanbu [17] improved the n-gram model by exploiting various techniques including nested scopes, locality, and unlimited vocabulary. Raychev et al. [40] proposed a probabilistic model based on decision trees and domain-specific grammars. Researchers also studied the use of syntactic structure through exploiting probabilistic graphical models. Allamanis et al. [4] employ probabilistic context-free grammars, while Raychev et al. [8, 40, 41] use probabilistic higher order grammars to this end.
97
+
98
+ 2.3.3 Deep Learning for Autocompletion. Recently, deep neural networks such as RNNs, LSTMs and Transformers are being effectively used for modeling source code [6, 17, 24, 25, 29]. In 2018, Li et al. [29] proposed a pointer mixture model to mitigate the Out-Of-Vocabulary (OOV) problem. They trained two LSTM models on types and tokens. Karampatsis et al. [24] presented a large-scale open-vocabulary neural LM. They incorporated BPE, beam search, and cache mechanism to address the OOV problem. Most recently, Kim et al. [25], incorporated the syntactic structure of trees into their Transformer-based model to better learn from source code.
99
+
100
+ 2.3.4 Multi-token Autocompletion. Although most research on code completion is focused on single-token prediction, several studies aimed to complete entire statements or blocks of code [36, 49, 54, 55]. Yang et al. [55] proposed $PCC$ and introduced an intermediate representation for source code, to put tokens into groups using lexeme and variable relative order. Nguyen et al. [36] proposed AUTOSC to combine program analysis and software naturalness and fill in a partially completed statement with frequent and valid recommendations. Svyatkovskiy et al. [49] recently proposed a GPT-2 based multi-lingual model, $GPT-C$ , for completing lines. Wen et al. [54] introduced FeaRS which recommends the next method given the current code in an IDE using implementation patterns learned through mining open source projects.
101
+
102
+ 2.3.5 MTL for Autocompletion. MTL has been used in various NLP-related tasks [45, 48, 56]. Recently, it has also been employed for programming language processing tasks. Liu et al. [32, 33] proposed two approaches based on MTL for autocompletion. In the first study, the authors used a Transformer-XL and an RNN for predicting next token type and value [32]. They develop a partial AST encoder and a path2root encoder and use them in their MTL framework. In their second study, Liu et al. [33] pre-train their model with hybrid objective functions for code understanding and code generation tasks. Next, they fine-tune it on code completion. The pre-training tasks are masked bidirectional LM, next code segment prediction, and unidirectional LM. The fine-tuning tasks are unidirectional masked LM, and unidirectional LM.
103
+
104
+ 2.3.6 Practical Aspects of Autocompletion. Hellendoorn et al. [18] claim the accuracy of autocompleters evaluated on synthetic data can drop on real-world data. Aye et al. [7], trained models on real-world code completion examples of an internal dataset (Facebook). They showed that models trained on data distributions that are closer to those of where the model will be deployed can outperform models trained on committed source code in public repositories. Svyatkovskiy et al. [51] integrated Pythia, an LSTM model, to IntelliCode, an extension to Microsoft VS Code IDE. In a follow-up study [49], they introduced IntelliCodeCompose as a general-purpose multilingual autocompletion using Transformers. The improved model predicts sequences of code tokens, generating up to entire statements. IntelliCodeCompose is integrated into the Microsoft VS Code IDE. Finally, Svyatkovskoy et al. [50] implemented and evaluated several neural code completion models, which offer varying trade-offs in terms of memory, speed, and accuracy. Commercial autocompletion tools, such as TabNine and GitHub Copilot also exist, but very little technical information has been shared about them.
105
+
106
+ # 2.4 Baselines
107
+
108
+ We include six recent models as baselines to provide a comprehensive evaluation. For all baselines, we use the replication packages provided by the authors and set the parameters as defined in each respective study. For the statement level prediction task, we modified the output layer of the baselines to predict up until the end of a statement.
109
+
110
+ N-gram + LSTM (FSE, 2017): Hellendoorn et al. [17] claim that a well-engineered and simple approach (n-gram based language
111
+
112
+ models) can provide better performance than more complex models (deep neural networks). The authors show that the combination of an n-gram and LSTM-based model outperforms the rest of their models.
113
+
114
+ Pointer Mixture (IJCAI, 2018): Li et al. [29], propose a pointer mixture model to address the OOV problem. They also try to incorporate structural information in their models by training two models (token types and values) separately.
115
+
116
+ T-XL + Bi-LSTM (ICPC, 2020): Liu et al. [32, 33], propose two models based on the MTL technique. The first study uses Transformer-XL and a Bi-LSTM to train two models for tokens and AST paths for dynamically-typed languages such as Python. The second study by the same group presents a pre-trained language model which is fine-tuned for code completion. The authors use static analysis and type annotations for their type prediction task, for Java. We compare against the first model only, as it most closely matches our setup.
117
+
118
+ OpenVocab (ICSE, 2020): To address the OOV problem, Karampatsis et al. [24] present a BPE-based language model. We include it here for completeness, even though their model is not tuned for autocompletion.
119
+
120
+ IntelliCode Compose (FSE, 2020): Svyatkovskiy et al. [49] propose a general-purpose, multi-lingual autocompletion supporting multi-token statement completion. They train a GPT-2 model on 1.2B LOC written in Python, C#, TypeScript, and JavaScript. This tool is deployed as a cloud-based web service and uses client-side caching and parallel implementation to speed up the predictions. As the source code is not publicly available, we trained a GPT-2 model for source code and did our best to adhere to the settings reported in the study. As the focus of our study is mono-lingual, we only train this model on Python code.
121
+
122
+ TravTrans+ (ICSE, 2021): Kim et al. [25] propose a transformer-based approach which exploits AST paths. We use their best model, TravTrans+, as the state of the art in our evaluation.
123
+
124
+ # 3 APPROACH
125
+
126
+ The CodeFill pipeline comprises two main phases: pre-processing and model training. Figure 1 presents the overall workflow. Initially, CodeFill pre-processes, tokenizes, and converts the input source code to equivalent syntax token sequences. Training consists of two main stages: pre-training with three tasks (token type sequence completion, token name sequence completion, and statement completion) and fine-tuning on two tasks (name and statement completion). For both stages, CodeFill uses soft-parameter sharing MTL to learn from different representations of source code. At evaluation time, CodeFill also re-orders recommendations based on their type and the visible context.
127
+
128
+ In the following section, we present how the proposed approach works in detail.
129
+
130
+ # 3.1 Pre-processing
131
+
132
+ During pre-processing, CodeFill converts the input program files to an equivalent format where keywords and identifiers are swapped with their AST equivalents. The algorithm starts by removing comment sections, blank spaces, and blank lines. It then extracts the list of modules, libraries, and their aliases using the Python AST library. Those are stored in a dictionary and, using it, CodeFill replaces all
133
+
134
+ ![](images/b1a13ea9d02cdeb35f3f754721eab371206569626ec98e317a3a9f0e362ba3f5.jpg)
135
+ Figure 1: CodeFill Workflow
136
+ Figure 2: Sample code snippet and the extracted information
137
+
138
+ <table><tr><td colspan="4">def transform(node, ctx):
139
+ node = qual_names.resolve(node)
140
+ node = CallTreeTransformer(ctx).visit(node)
141
+ return node</td></tr><tr><td>Type</td><td>Value</td><td>#Line</td><td>Position</td></tr><tr><td>RETURN</td><td>return</td><td>4</td><td>1</td></tr><tr><td>NAME</td><td>node</td><td>4</td><td>2</td></tr></table>
142
+
143
+ their occurrences in code with their respective types (i.e., MODULE, LIBRARY, and ALIAS).
144
+
145
+ CodeFill also pre-processes and tokenizes the input source code. For each line, it reads the tokenized information and stores four types of information about each token namely (1) its value, (2) its type, (3) its line number, and (4) its position in the line. For instance, for the statement return node in Figure 2, it stores two tokens as shown in the table following the code example. Moreover, variable visibility information (e.g., global vs. local variables), is maintained, to differentiate between different name usages in the same context.
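
As an illustration, this bookkeeping can be reproduced with Python's standard tokenize module; the tuple layout and the keyword-to-type mapping below are assumptions made for the sketch, not the paper's exact data structures (the sketch records a column offset, whereas Figure 2 counts the token's position within its line).

```python
import io
import keyword
import token
import tokenize

def token_info(source: str):
    """Record (type, value, line, column) for each token; keywords are mapped
    to their own upper-case type names as in Figure 2 (e.g. 'return' -> RETURN)."""
    rows = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        kind = token.tok_name[tok.exact_type]
        if kind == "NAME" and keyword.iskeyword(tok.string):
            kind = tok.string.upper()          # RETURN, IF, DEF, ...
        rows.append((kind, tok.string, tok.start[0], tok.start[1]))
    return rows

print(token_info("return node\n")[:2])
# [('RETURN', 'return', 1, 0), ('NAME', 'node', 1, 7)]
```
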
146
+
147
+ To address the OOV problem, CodeFill uses a BPE-encoded name representation. Exploiting word segmentation, BPE iteratively merges the most frequently occurring character sequences. Prior to applying BPE encoding, and similarly to other studies [21, 22, 49], CodeFill normalizes the input strings by replacing string, and numeric literals with respective special tokens, i.e., STRING and NUMBER.
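
One plausible realization of this step, for illustration only: normalize literals with the same tokenizer and then learn a BPE vocabulary over the normalized token streams with the HuggingFace tokenizers library. The vocabulary size and the corpus file name are assumptions, not values taken from the paper.

```python
import io
import tokenize

def normalize_literals(source: str) -> str:
    """Replace string and numeric literals with STRING / NUMBER placeholders."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.STRING:
            out.append("STRING")
        elif tok.type == tokenize.NUMBER:
            out.append("NUMBER")
        elif tok.string.strip():
            out.append(tok.string)
    return " ".join(out)

# Learn a BPE vocabulary over normalized token sequences (hypothetical corpus file).
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

bpe = Tokenizer(BPE(unk_token="<UNK>"))
bpe.pre_tokenizer = Whitespace()
trainer = BpeTrainer(vocab_size=32000, special_tokens=["<UNK>", "STRING", "NUMBER", "<EOS>"])
bpe.train(["normalized_corpus.txt"], trainer)
```
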
148
+
149
+ A unique characteristic of the Python language is that indentation defines code blocks; it is therefore important for source code models to learn to encode indentation as part of their learned representation. To do so, CodeFill stores the positioning of indentation markers. For the first line with an indentation, it adds a special token $\langle INDENT\rangle$ at the beginning of the given line. It passes through the following lines with the same indentation, to reach the next indentation or a dedentation position, at which point it adds a respective $\langle INDENT\rangle$ or $\langle DEDENT\rangle$ token.
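
Python's tokenizer already emits INDENT and DEDENT tokens at exactly these positions, so one plausible way to realize this step is to map them onto the special markers; this is a sketch under that assumption, not the paper's implementation.

```python
import io
import tokenize

def with_indent_markers(source: str):
    """Emit token values with <INDENT> / <DEDENT> markers at block boundaries."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.INDENT:
            out.append("<INDENT>")
        elif tok.type == tokenize.DEDENT:
            out.append("<DEDENT>")
        elif tok.string.strip():               # skip pure-whitespace and end markers
            out.append(tok.string)
    return out
```
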
150
+
151
+ The pre-processing step results in two files for each input source code file; (1) one containing sequences of token names minus the comments and extra blank lines, and (2) one containing sequences of token types. Both are fed into CodeFill as two different but corresponding representations of source code. Figure 3 shows a
152
+
153
+ ```python
154
+ 1 # Raises an error when the required variable is missing
155
+ 2 def required_env(var):
156
+ 3 value = os.environ.get(var)
157
+ 4 if value is None:
158
+ 5 raise ValueError("Var is required to start the service.")
159
+ 6 return value
160
+ 7
161
+ 1 def required_env(var):
162
+ 2 value = os.environ.get(var)
163
+ 3 if value is None:
164
+ 4 raise ValueError("STRING")
165
+ 5 return value
166
+ 1
167
+ 2 DEF FUNCTION_NAME(local_variable): EOS
168
+ 3 INDENT LOCAL_variable = LIB MODULE.FUNCTION_NAME(local_variable)
169
+ 4 EOS
170
+ 5 IF LOCAL_variable IS NONE: EOS
171
+ 6 INDENT RAISE ERRORTOKEN("STRING") EOS
172
+ 7 DEDENT RETURN LOCAL_variable EOS
173
+ ```
174
+
175
+ Figure 3: An example code snippet and its converted version
176
+
177
+ sample function and its corresponding type information with the correct indentation.
178
+
179
+ # 3.2 Model Training
180
+
181
+ In this phase, CodeFill learns from two granularity levels: token- and statement-level completions, with three simultaneous tasks, namely (1) next Token Value Prediction (TVP), (2) next Token Type Prediction (TTP), and (3) Statement Completion (SC). Model training follows a two-stage process: first, a generic language modeling objective is used on the unlabeled data to learn the initial parameters; then, these parameters are adapted to the target tasks using the corresponding objective. Thus, while pre-training, CodeFill learns from all three tasks, while fine-tuning is restricted to the TVP and SC tasks. The reason for excluding the TTP task is that the number of types across all program files is limited. Hence, the model quickly learns how to properly predict these type sequences (i.e., learns an effective representation of the Python grammar), eliminating the need for further fine-tuning.
182
+
183
+ The main neural network architecture for all tasks is based on the GPT-2 Transformer with $L$ layers. CodeFill uses three distinct GPT-2 transformers, each with its own input and training objective. The models are initialized with random weights. Transformer blocks include self-attention layer, feed-forward neural nets, and a normalization layer. Self-attention blocks identify which tokens to focus on. Feed-forward neural nets consist of an input layer to accept information, hidden layers to capture the hidden correlations between each data point, and finally, an output layer to transmit information. The parameters are transferred to the next decoder in the stack after being regularised (with $l2$ norm) to be similar to the respective decoder's parameters. CodeFill uses softmax activation function in the output layer to generate probability distributions over the vocabulary.
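
A minimal PyTorch sketch of the soft parameter sharing described above; the regularization strength is an assumed hyperparameter, and value_model / type_model stand in for two of the three task-specific GPT-2 decoders.

```python
import torch

def soft_sharing_penalty(value_model, type_model, strength=1e-4):
    """L2 penalty pulling the parameters of two task-specific decoders towards each other."""
    penalty = torch.zeros(())
    for p_a, p_b in zip(value_model.parameters(), type_model.parameters()):
        penalty = penalty + (p_a - p_b).pow(2).sum()
    return strength * penalty

# Assumed usage inside a training step on the value-prediction task:
# loss = value_task_loss + soft_sharing_penalty(value_model, type_model)
```
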
184
+
185
+ To train the model to predict a sequence of tokens, $\{v_t\} \subset D, t \in [1, \dots, N]$ , with $D$ as the vocabulary, and $C$ as the existing code context, CodeFill estimates the following conditional probability
186
+
187
+ distribution, $P$
188
+
189
+ $$
190
+ P(v_0, \dots, v_N \mid c_0, \dots, c_T) = \prod_{i=1}^{N} P(v_i \mid c_0, \dots, c_T, v_0, \dots, v_{i-1}). \tag{1}
191
+ $$
192
+
193
+ We use a standard language modeling objective, predicting the next token given a context, and maximize the following likelihood based on our unsupervised corpus of tokens. In Equation 2, $m$ is the length of the predicted sequence of code token values and $\theta$ is the set of parameters that is learned through stochastic gradient descent optimization to model $P$ [44].
194
+
195
+ $$
196
+ L(V) = \sum_{i} \log P(v_i \mid c_0, \dots, c_T, v_{i-m}, \dots, v_{i-1}; \theta). \tag{2}
197
+ $$
198
+
199
+ In each layer, multi-attention heads are used to aggregate the output of the previous layer for each transformer block. Multi-headed self-attention is applied over the input context tokens followed by position-wise feed-forward layers to produce the output distribution.
200
+
201
+ $$
202
+ h_0 = C W_e + W_p, \tag{3}
203
+ $$
204
+
205
+ $$
206
+ h_l = \mathrm{transformer\_block}(h_{l-1}), \quad l \in [1, \dots, L], \tag{4}
207
+ $$
208
+
209
+ $$
210
+ P(v_t) = \mathrm{softmax}(h_L W_e^T), \quad t \in [0, \dots, N] \tag{5}
211
+ $$
212
+
213
+ where $C$ is the context vector of tokens, $L$ is the number of layers, $W_{e}$ is the token embedding matrix, and $W_{p}$ is the position embedding matrix.
214
+
215
+ For training with MTL, CodeFill uses the alternative training strategy, which aims to prevent catastrophic forgetting (as opposed to the sequential strategy). With a probability of $20\%$ , $40\%$ , and $40\%$ for each of the TTP, TVP, and SC tasks, respectively, CodeFill picks a random task for each epoch. TTP requires fewer epochs as its vocabulary is fairly limited. Further on, for TVP and SC tasks, CodeFill uses beam search to identify the most likely (sub-)token sequences.
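
One way to realize this alternative training policy is to sample the task at the start of every epoch with the stated probabilities; the names below are illustrative, not taken from the paper's code.

```python
import random

TASKS = ("TTP", "TVP", "SC")
TASK_PROBS = (0.2, 0.4, 0.4)   # probabilities reported for the type, value, and statement tasks

def pick_task_for_epoch(rng: random.Random) -> str:
    """Sample the single task trained during the next epoch."""
    return rng.choices(TASKS, weights=TASK_PROBS, k=1)[0]

rng = random.Random(0)
schedule = [pick_task_for_epoch(rng) for _ in range(100)]   # e.g. a 100-epoch schedule
```
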
216
+
217
+ Loss is shared among all tasks. During pre-training, the parameters are tuned to minimize the absolute minimum of the cross entropy losses among the three pre-training tasks, namely, TVP, TTP, and SC (Equation 6). When fine-tuning, only TVP and SC losses are used.
218
+
219
+ $$
220
+ \mathrm{Loss}_{\mathrm{final}} = \left| \min\left(\mathrm{Loss}_{\mathrm{TVP}}, \mathrm{Loss}_{\mathrm{TTP}}, \mathrm{Loss}_{\mathrm{SC}}\right) \right| \tag{6}
221
+ $$
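
A direct reading of Equation 6 in PyTorch, assuming the three per-task cross-entropy losses are already computed (TTP is dropped when fine-tuning):

```python
import torch

def joint_loss(loss_tvp, loss_ttp, loss_sc, pretraining=True):
    """Equation 6: absolute value of the minimum loss among the active tasks."""
    active = [loss_tvp, loss_ttp, loss_sc] if pretraining else [loss_tvp, loss_sc]
    return torch.min(torch.stack(active)).abs()
```
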
222
+
223
+ 3.2.1 Token Value Prediction Task (TVP). CodeFill uses different representations of programs for each task within the soft-parameter sharing MTL framework. CodeFill treats the TVP task as masked unidirectional prediction; left-side context is used to predict the next token. The inputs to the task are sequences of token values, represented as real-valued vectors of $[v_{1}, v_{2}, \ldots, v_{n}]$ .
224
+
225
+ 3.2.2 Token Type Prediction Task (TTP). Similarly to TVP, TTP is also treated as left-to-right masked unidirectional prediction. The inputs are the corresponding token type representations, real-valued vectors of $[t_1, t_2, \ldots, t_n]$. As both the TTP and TVP models are trained jointly, CodeFill is capable of exploiting token types when the ultimate goal is to predict token values.
226
+
227
+ ![](images/8a7f64239bac028dc01de52d7f5a04449e85e1221f23f4a2215fa3f3e71577e8.jpg)
228
+ Figure 4: Model training
229
+
230
+ 3.2.3 Statement Completion Task (SC). As useful as next-token prediction may be, developers can also benefit from getting longer suggestions to complete code statements [6, 36, 49]. Correspondingly, CodeFill can also benefit from training to predict longer sequences, as training will enable it to better prioritize context use. Thus, we add a third task to train CodeFill to provide completion suggestions up and until the end of a statement. To predict a whole statement given the existing code context $C$ , and the vocabulary $D$ , CodeFill attempts to generate token values $\{v_t\} \subset D$ , conditioned on the sequence of preceding token values $\{c_t\} \subset D$ . For this task, the pre-processing steps introduce a special token $(\langle EOS\rangle)$ to demarcate the end of a statement. CodeFill is trained to keep predicting sequences of token names until it produces an $\langle EOS\rangle$ token.
231
+ 3.2.4 Beam search. CodeFill uses beam search to identify the most probable sequences given a sequence of probabilistic predictions. Specifically, the top $|B|$ (the beam width) partial probabilities are recorded at every step. This heuristic algorithm does not guarantee an optimal result; however, its computational complexity is $O(|B| \times |V|)$, which is much faster than exhaustively scoring all candidate sequences. As $|B|$ increases, the quality of the generated sequences improves, but the learning time increases as well. We experimented with several beam widths (3, 5, and 10) and settled on 5, as it provided a good balance of accuracy and speed.
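
A self-contained sketch of this search; next_probs is a stand-in for the model's per-step distribution over the vocabulary, and the default width of 5 mirrors the value settled on above.

```python
import math

def beam_search(next_probs, prefix, beam_width=5, max_steps=13, eos="<EOS>"):
    """Keep the |B| highest-scoring partial sequences at every step.

    next_probs(seq) is assumed to return a dict {token: probability} for the
    next position given the sequence generated so far.
    """
    beams = [(0.0, list(prefix))]                 # (cumulative log-probability, tokens)
    for _ in range(max_steps):
        candidates = []
        for score, seq in beams:
            if seq and seq[-1] == eos:            # finished hypotheses are kept as-is
                candidates.append((score, seq))
                continue
            for tok, p in next_probs(seq).items():
                candidates.append((score + math.log(p), seq + [tok]))
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
        if all(seq[-1] == eos for _, seq in beams):
            break
    return beams
```
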
232
+
233
+ # 3.3 Post-processing
234
+
235
+ Re-ranking Recommendations. For a recommendation system to be useful, predictions should be ranked similarly to user expectations. To optimize ranking, CodeFill includes a post-processing
236
+
237
+ layer to re-rank the leaf nodes in the final recommendation list based on the visible scope (i.e., the current file). This is based on the observation that most completions should be local to the edited file, as naming visibility rules should force names to cluster.
238
+
239
+ To re-rank the suggestions, CodeFill hierarchically divides the visible scope into file, class, and closest function. The intuition here is that, when the model is predicting the next token and its type is expected to be a variable name, candidates in the closest scope have a higher probability of being correct. However, when the next token is predicted to be a function name, candidates from the same class (functions defined in the same class) should probably be at the top of the list. The re-ranking process consists of multiplying the prediction probabilities of the top-10 predictions with a corresponding weight coefficient. The weights are selected based on the type of the predicted token and the scope of the declaration of the identifier. Each prediction is a <token, type, probability> triplet available at the prediction point. We generate the list of all visible names and their hierarchical scope (function, class, file). Each prediction is then cross-checked against this list; if the predicted identifier is already declared in the file (and thus in the list), its prediction probability is multiplied by a weight depending on the type of the predicted token and the scope associated with the item in the list. As the weights impact the quality of predictions, we first defined a range of ratios for different categories based on our programming intuition. Then, we experimented with this range and selected the best performing weights. Table 1 presents the weights used in this process.
240
+
241
+ Table 1: Weights in the post-processing layer for re-ranking
242
+
243
+ <table><tr><td>Leaf node type</td><td>Function</td><td>Class</td><td>File</td></tr><tr><td>Attribute Access</td><td>1.625</td><td>1.250</td><td>1.125</td></tr><tr><td>Variable names</td><td>1.625</td><td>1.125</td><td>1.500</td></tr><tr><td>Function names</td><td>1.125</td><td>1.625</td><td>1.500</td></tr></table>
244
+
245
+ Algorithm 1 Re-ranking final recommendations
246
+ 1: input Predictions, WeightsList
247
+ 2: output Predictions $\triangleright$ List of updated predictions
248
+ 3: Names $\leftarrow$ getSignificantNames() $\triangleright$ Get the list of important names in left context from the file
249
+ 4: for pred in Predictions do
250
+ 5: while true do
251
+ 6: significantName $\leftarrow$ Names.pop()
252
+ 7: if significantName.token = pred.token then
253
+ 8: typeCategory $\leftarrow$ getTypeCategory()
254
+ 9: weight $\leftarrow$ weights[typeCategory][scope]
255
+ 10: pred.probability $\leftarrow$ pred.probability $\times$ weight
256
+ 11: break
257
+ 12: end if
258
+ 13: end while
259
+ 14: end for
260
+
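
A compact Python rendering of Algorithm 1 together with the Table 1 weights; the category and scope labels are simplified stand-ins for the paper's leaf node types and visibility levels.

```python
# weights[token category][scope of the matching declaration], from Table 1
WEIGHTS = {
    "attribute": {"function": 1.625, "class": 1.250, "file": 1.125},
    "variable":  {"function": 1.625, "class": 1.125, "file": 1.500},
    "function":  {"function": 1.125, "class": 1.625, "file": 1.500},
}

def rerank(predictions, visible_names):
    """predictions: list of (token, category, probability) triplets;
    visible_names: {name: closest declaring scope} collected from the edited file."""
    reranked = []
    for tok, category, prob in predictions:
        scope = visible_names.get(tok)
        if scope is not None:                     # boost names already declared in the file
            prob *= WEIGHTS[category][scope]
        reranked.append((tok, category, prob))
    return sorted(reranked, key=lambda p: p[2], reverse=True)
```
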
261
+ Although the current weights improve the predictions, this only sets the minimum bar. Future work can exploit automatic learning of these weights.
262
+
263
+ # 4 EXPERIMENTAL SETUP
264
+
265
+ To train and evaluate CodeFill, we use two Python datasets. We evaluate the models based on different evaluation scenarios, to achieve a more realistic and comprehensive outlook on the performance of code completion models to benefit developers in real-world cases.
266
+
267
+ # 4.1 Evaluation Tasks
268
+
269
+ We evaluate CodeFill on two tasks, namely Token-Level and Statement-Level Predictions (TLP and SLP).
270
+
271
+ 4.1.1 Token-Level Prediction. We use TLP to assess the ability of the model to predict a single next token. We split this part of the evaluation into four subtasks presented below.
272
+
273
+ Any token prediction. Our first sub-task is to evaluate the predictions of any token irrespective of its type (TLP-A). This is the baseline evaluation task employed in the literature, but as research has shown [18], it is not representative of real-world autocompletion use. For this reason, we resort to more detailed evaluations, as presented below.
274
+
275
+ Token Type Prediction. To assess the model's ability to learn grammatical sequences, we evaluate how well a model can predict a correct AST token given a context (TLP-B). We group together AST tokens in the following categories: Identifiers, Keywords, Operators, Punctuation, and finally numerals and string Literals.
276
+
277
+ ![](images/04a2752be4b1ffaa920b2b74080c17907ceabadec3472bf31ad232a8a87b47fd.jpg)
278
+ Figure 5: Length of statements in the PY117K dataset
279
+
280
+ Leaf Node Prediction. Inspired by the evaluation setup of the state-of-the-art study by Kim et al. [25], we investigate the ability of models when predicting AST leaf nodes (TLP-C), including Attribute access, Names, Function parameters, and Constants.
281
+
282
+ Cardinal Point Prediction. The three tasks presented up to now give a comprehensive view of the prediction ability of a model. However, in practical settings, autocompletion is only triggered at specific points (e.g., after a dot, or after specific keywords such as for) while the developer is editing source code. To ensure that predictions translate to practical benefits for the developers, we evaluate completions on cardinal points (TLP-D). To obtain a list of keywords after which autocompletion is likely to be triggered, we first select the list of punctuation and keywords tokens that can be completed. We then compute the frequency of all bi-grams with any of these tokens as their first token in our dataset. Then, we remove three sets of bi-grams; (1) those that are mostly written together with occurrence frequency above $95\%$ (e.g., async def), (2) those that are normally not predictable (e.g., class NAME or def FUNCTION-NAME), and finally (3) those that are usually not practical completions (e.g., TRUE :). The resulting list of tokens after which it is most beneficial for autocompletion to be triggered is as follows.
283
+
284
+ DOT, AWAIT, ASSERT, RAISE, DEL, LAMBDA, YIELD, RETURN, EXCEPT, WHILE, FOR, IF, ELIF, ELSE, GLOBAL, IN, AND, NOT, OR, IS, BINOP, WITH, :, [, {, ~
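
A sketch of the bi-gram filtering that produces this list; it captures only the frequency-based rule (the 95% threshold from the text), while the other two exclusion rules remain manual, and the candidate set and data layout (one token-type sequence per file) are assumptions.

```python
from collections import Counter

def cardinal_points(type_sequences, candidates, fixed_follow_threshold=0.95):
    """Keep candidate tokens whose following token is not (almost) always the same."""
    bigrams, starts = Counter(), Counter()
    for seq in type_sequences:
        for first, second in zip(seq, seq[1:]):
            if first in candidates:
                bigrams[(first, second)] += 1
                starts[first] += 1
    points = set()
    for tok in candidates:
        if starts[tok] == 0:
            continue
        most_common_follow = max(c for (a, _), c in bigrams.items() if a == tok)
        if most_common_follow / starts[tok] < fixed_follow_threshold:
            points.add(tok)                       # next token varies, so completion helps here
    return points
```
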
285
+
286
+ Evaluation Metrics. As the model only predicts a single token in the TLP task, we include two evaluation metrics, namely the Accuracy of the top prediction and the Mean Reciprocal Rank (MRR) for the top-10 recommendations.
287
+
288
+ Accuracy measures the proportion of samples for which the suggested completion token exactly matches the single target label.
289
+
290
+ MRR assesses the whole top $N$ recommended completions and takes into account the first position the target is matched [38]. For a single query, the reciprocal rank is $\frac{1}{\text{rank}}$ where $\text{rank}$ is the position of the highest-ranked answer $(1,2,3,\dots,N$ for $N$ answers). If no correct answer exists in top- $N$ , then the reciprocal rank is 0. For multiple queries $Q$ , the MRR is the mean of the $Q$ reciprocal ranks.
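
For concreteness, MRR over the top-10 recommendations can be computed as follows; this is a straightforward sketch, not tied to any particular model output format.

```python
def mean_reciprocal_rank(ranked_suggestions, targets, top_n=10):
    """ranked_suggestions[i] is the ordered candidate list for query i."""
    total = 0.0
    for suggestions, target in zip(ranked_suggestions, targets):
        for rank, candidate in enumerate(suggestions[:top_n], start=1):
            if candidate == target:
                total += 1.0 / rank
                break                      # reciprocal rank stays 0 if the target is absent
    return total / len(targets)
```
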
291
+
292
+ 4.1.2 Statement Level Prediction (SLP). The SLP task assesses a model's ability to complete statements with up to $n$ tokens. The boxplot in Figure 5 shows the distribution of number of tokens for completions in the evaluation dataset (PY117K). In our datasets, statements are 4.2 tokens long on average (median: 4, maximum: 13). To provide a comprehensive view, we evaluate the performance of the models when predicting next- $n$ tokens with $n \in [2, 3, \ldots, 8]$ .
293
+
294
+ Evaluation Metrics: In the absence of code-specific metrics, we use two metrics commonly used for automatic evaluation of text generation, namely Metric for Evaluation of Translation with Explicit
295
+
296
+ Table 2: Datasets used for training and evaluation
297
+
298
+ <table><tr><td></td><td>PY1690K</td><td>PY117K</td></tr><tr><td>#Repositories</td><td>32.7K</td><td>24.9K</td></tr><tr><td>#Files</td><td>1.7M</td><td>117K</td></tr><tr><td>#LOC</td><td>425M</td><td>29M</td></tr><tr><td>#Tokens (unique)</td><td>5.7M</td><td>766K</td></tr><tr><td>#Types (unique)</td><td>103</td><td>103</td></tr></table>
299
+
300
+ Table 3: TLP-A results: Any token prediction
301
+
302
+ <table><tr><td>Approach</td><td>Venue</td><td>Accuracy</td><td>MRR</td></tr><tr><td>n-gram + LSTM [17]</td><td>(FSE, 2017)</td><td>65.1</td><td>67.9</td></tr><tr><td>Pointer Mixture [29]</td><td>(IJCAI, 2018)</td><td>65.8</td><td>70.0</td></tr><tr><td>OpenVocab [24]</td><td>(ICSE, 2020)</td><td>67.2</td><td>69.8</td></tr><tr><td>T-XL + Bi-LSTM [32]</td><td>(ICPC, 2020)</td><td>75.0</td><td>76.4</td></tr><tr><td>GPT-C [49]</td><td>(FSE, 2020)</td><td>79.8</td><td>80.0</td></tr><tr><td>TravTrans+ [25]</td><td>(ICSE, 2021)</td><td>78.9</td><td>79.4</td></tr><tr><td>CodeFill</td><td>Proposed</td><td>80.6</td><td>81.7</td></tr></table>
303
+
304
+ Ordering (METEOR) [27] and Recall-Oriented Understudy for Gisting Evaluation (ROUGE-L) [30].
305
+
306
+ ROUGE: ROUGE-N refers to overlapping n-grams. ROUGE-L, one of the variations of the ROUGE metric, counts the longest matching sequence of words using the Longest Common Subsequence (LCS) algorithm. It considers sentence-level structural similarity and automatically identifies the longest co-occurring in-sequence chain of n-grams. Thus, it does not require consecutive matches but in-sequence matches that reflect sentence-level word order.
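
The core of ROUGE-L is this longest common subsequence; a minimal sketch over token lists follows, where the F-score weighting beta is a conventional choice rather than a value specified here.

```python
def lcs_length(a, b):
    """Dynamic-programming longest common subsequence length."""
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            table[i][j] = table[i - 1][j - 1] + 1 if x == y else max(table[i - 1][j], table[i][j - 1])
    return table[len(a)][len(b)]

def rouge_l(candidate_tokens, reference_tokens, beta=1.2):
    lcs = lcs_length(candidate_tokens, reference_tokens)
    if lcs == 0:
        return 0.0
    precision = lcs / len(candidate_tokens)
    recall = lcs / len(reference_tokens)
    return (1 + beta ** 2) * precision * recall / (recall + beta ** 2 * precision)
```
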
307
+
308
+ METEOR is based on the term-to-term mapping of the generated code with its corresponding reference code. It focuses mainly on recall. Lavie et al. [27] showed metrics based on recall consistently achieve higher correlation with user preferences than those based on precision alone.
309
+
310
+ # 4.2 Datasets
311
+
312
+ We use two Python datasets for training and evaluation:
313
+
314
+ - The ETH 150K Python dataset [40] for compatibility with previous work. The authors collected Python programs from GitHub repositories and removed duplicate files, project forks, files that do not parse and have more than $30\mathrm{K}$ nodes in their ASTs. They also removed obfuscated files and only used repositories with permissive licenses including MIT, BSD, and Apache.
315
+ - The CodeFill dataset, which was collected by querying GHTorrent [15] for all non-forked Python repositories with more than 20 stars (58k repositories).
316
+
317
+ After deduplication, using the method proposed by Allamanis [2], we ended up with two versions of the original datasets, $PY117K$ and $PY1690K$ for the ETH and CodeFill datasets, respectively. Note that $PY1690K$ and $PY117K$ do not have any common files. Table 2 presents an overview of the contents of the datasets.
318
+
319
+ We use $PY1690K$ exclusively for pre-training our LM. We then use $90\%$ of $PY117K$ for fine-tuning the model on the tasks presented in Section 4.1, and finally the last $10\%$ of $PY117K$ for evaluation. For the baselines, we concatenate $PY1690K$ with the same $90\%$ portion of $PY117K$ as above for training, and evaluate on the remaining $10\%$ of $PY117K$ .
320
+
321
+ # 4.3 Implementation and Configuration
322
+
323
+ We use Python's AST$^3$, Tokenize$^4$, and DIS$^5$ libraries in our conversion tool. Moreover, we use the HuggingFace$^6$ library for the implementation of our GPT-2 and MTL models. We set the learning rate to 0.00001, the maximum sequence length to 2048, and train our model for 100 epochs. We set the remaining parameters to their default values. Our experiments are conducted on a machine equipped with two GeForce GTX 1080 Ti GPUs, an Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz with 14 cores, and 128G RAM.
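As a rough illustration of this setup, the sketch below configures a GPT-2 model and the training arguments with the hyperparameters stated above; the output path, batch size, and dataset handling are illustrative assumptions rather than our exact training script.

```python
from transformers import GPT2Config, GPT2LMHeadModel, TrainingArguments

# Hyperparameters stated above: learning rate 1e-5, max sequence length 2048, 100 epochs;
# every other setting is left at its default value.
config = GPT2Config(n_positions=2048)
model = GPT2LMHeadModel(config)

training_args = TrainingArguments(
    output_dir="codefill-pretrain",        # illustrative output path
    learning_rate=1e-5,
    num_train_epochs=100,
    per_device_train_batch_size=4,         # batch size is an illustrative choice, not stated above
)

# A tokenized corpus of the preprocessed Python files would then be passed to
# transformers.Trainer(model=model, args=training_args, train_dataset=...).train()
```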
324
+
325
+ # 5 RESULTS AND DISCUSSION
326
+
327
+ In this section, we present the results for each evaluation task, along with an ablation study and a characterization of the models' performance.
328
+
329
+ # 5.1 Token-level Prediction (TLP)
330
+
331
+ 5.1.1 Any token prediction. The most basic form of evaluation for an autocompletion model is to gauge its ability to predict the next token given some context as input. TLP-A provides an overview of a model's predictive ability; however, it does not account for the prior probabilities of different types of tokens. We present this task for compatibility with existing work, and further elaborate on CodeFill's performance in the following tasks. The results can be seen in Table 3; our model outperforms all the baselines across all metrics.
332
+
333
+ 5.1.2 Token Type Prediction. We investigate the performance of the models when predicting different types of tokens, i.e., their ability to assimilate how developers use grammar to express concepts. Models generally struggle more with specific token types. For instance, it is known that predicting identifiers is harder than predicting keywords [18]. Table 4 presents the Accuracy and MRR results based on all token types. As demonstrated, CodeFill outperforms the baselines for all token types based on both metrics (except for MRR on keywords and punctuation, where its performance is on par). Transformer-based approaches are highly capable of predicting specific types of tokens, namely keywords and punctuation; effectively, this means that given enough training examples, they can efficiently learn syntactical patterns. Predicting identifiers and literals is more challenging for all models.
334
+
335
+ Table 4: TLP-B results: Token type prediction
336
+
337
+ <table><tr><td>Metric</td><td>Approach</td><td>Identifier</td><td>Keyword</td><td>Punctuation</td><td>Literals</td><td>Operators</td><td>All</td></tr><tr><td></td><td>Token Percentage</td><td>21%</td><td>28%</td><td>33%</td><td>5%</td><td>13%</td><td>-</td></tr><tr><td rowspan="6">Accuracy</td><td>N-gram+LSTM [17]</td><td>40.2</td><td>74.2</td><td>81.4</td><td>46.2</td><td>62.7</td><td>66.6</td></tr><tr><td>Pointer Mixture [29]</td><td>37.0</td><td>85.3</td><td>80.0</td><td>43.9</td><td>62.8</td><td>68.4</td></tr><tr><td>OpenVocab [24]</td><td>42.3</td><td>89.8</td><td>93.4</td><td>54.4</td><td>65.0</td><td>76.0</td></tr><tr><td>T-XL + Bi-LSTM [32]</td><td>47.4</td><td>93.1</td><td>92.4</td><td>59.4</td><td>68.7</td><td>78.4</td></tr><tr><td>GPT-C [49]</td><td>50.0</td><td>96.5</td><td>95.1</td><td>62.0</td><td>71.0</td><td>81.2</td></tr><tr><td>TravTrans+ [25]</td><td>51.1</td><td>95.9</td><td>97.0</td><td>59.3</td><td>71.3</td><td>81.8</td></tr><tr><td></td><td>CodeFill</td><td>54.4</td><td>97.3</td><td>98.0</td><td>65.8</td><td>71.4</td><td>83.8</td></tr><tr><td rowspan="6">MRR</td><td>N-gram+LSTM [17]</td><td>40.6</td><td>76.8</td><td>84.6</td><td>49.8</td><td>64.2</td><td>68.8</td></tr><tr><td>Pointer Mixture [29]</td><td>38.5</td><td>85.9</td><td>85.2</td><td>46.7</td><td>64.5</td><td>71.0</td></tr><tr><td>OpenVocab [24]</td><td>43.2</td><td>90.3</td><td>96.0</td><td>57.0</td><td>67.1</td><td>77.6</td></tr><tr><td>T-XL + Bi-LSTM [32]</td><td>49.8</td><td>96.1</td><td>96.6</td><td>61.3</td><td>70.0</td><td>81.4</td></tr><tr><td>GPT-C [49]</td><td>52.3</td><td>98.8</td><td>98.8</td><td>64.0</td><td>73.3</td><td>83.9</td></tr><tr><td>TravTrans+ [25]</td><td>53.7</td><td>97.1</td><td>98.6</td><td>62.2</td><td>73.0</td><td>83.6</td></tr><tr><td></td><td>CodeFill</td><td>56.0</td><td>98.1</td><td>98.0</td><td>66.1</td><td>74.4</td><td>87.2</td></tr></table>
338
+
339
+ For identifiers, all models' results range from $37\%$ to $56\%$ accuracy. In both cases, CodeFill maintains a non-trivial edge over the baselines, which we attribute to the statement completion task. We believe it helps CodeFill learn syntactical patterns over longer ranges.
340
+
341
+ 5.1.3 Leaf Node Prediction. We compare each model's performance in predicting different types of leaf nodes in an AST, e.g., function calls, variables, and attribute names. Table 5 presents the Accuracy and MRR results for this task. CodeFill is the best model in both accuracy and, especially, MRR. This means that its name predictions, arguably the most important feature of an autocompletion system, are correct 2 out of 3 times and have a high probability $(>70\%)$ of being included in the top suggestions.
342
+
343
+ 5.1.4 Cardinal Point Completion. In Table 6, we report the performance of models when predicting at cardinal points (described in Section 4.1). As indicated, CodeFill outperforms all the baselines. Consequently, it is more capable of presenting correct recommendations at points where autocompletion is more likely to be triggered.
344
+
345
+ # 5.2 Statement-Level Prediction (SLP)
346
+
347
+ We report the results for autocompleting code statements by predicting the remaining $n$ tokens at a given statement position (with $n$ ranging between 2 and 8). Figure 6 presents the results of this experiment based on the achieved METEOR and ROUGE-L scores. All Transformer-based models [25, 32, 49] are consistently more capable than the three baseline approaches. CodeFill improves over all competitors. The margin grows wider as the number of tokens required to complete statements increases (especially in the ROUGE-L case). This result highlights the merits of our statement completion task. In turn, this can help developers code faster by reducing the number of required keystrokes; the experience of using statement completion should be reminiscent of text line completion in popular online email or document editors.
348
+
349
+ Table 5: TLP-C results: Leaf node prediction
350
+
351
+ <table><tr><td>Metric</td><td>Approach</td><td>Attribute Access</td><td>Names</td><td>Function names</td><td>Numeric constant</td><td>All</td></tr><tr><td></td><td>Token Percentage</td><td>32%</td><td>13%</td><td>33%</td><td>22%</td><td></td></tr><tr><td rowspan="6">Accuracy</td><td>N-gram + LSTM [17]</td><td>56.3</td><td>61.8</td><td>63.5</td><td>45.1</td><td>56.9</td></tr><tr><td>Pointer Mixture [29]</td><td>53.5</td><td>62.0</td><td>59.8</td><td>42.0</td><td>54.2</td></tr><tr><td>OpenVocab [24]</td><td>59.8</td><td>63.7</td><td>66.2</td><td>51.7</td><td>60.6</td></tr><tr><td>T-XL + Bi-LSTM [32]</td><td>59.9</td><td>58.1</td><td>62.8</td><td>54.8</td><td>59.5</td></tr><tr><td>GPT-C [49]</td><td>60.0</td><td>59.9</td><td>64.0</td><td>56.0</td><td>60.4</td></tr><tr><td>TravTrans+ [25]</td><td>60.2</td><td>65.4</td><td>68.3</td><td>52.7</td><td>61.7</td></tr><tr><td></td><td>CodeFill</td><td>64.0</td><td>67.3</td><td>72.2</td><td>53.1</td><td>66.3</td></tr><tr><td rowspan="6">MRR</td><td>N-gram + LSTM [17]</td><td>57.9</td><td>64.7</td><td>65.2</td><td>47.5</td><td>58.9</td></tr><tr><td>Pointer Mixture [29]</td><td>57.1</td><td>59.0</td><td>60.2</td><td>43.1</td><td>55.3</td></tr><tr><td>OpenVocab [24]</td><td>61.2</td><td>64.8</td><td>70.1</td><td>51.7</td><td>62.5</td></tr><tr><td>T-XL + Bi-LSTM [32]</td><td>61.9</td><td>65.3</td><td>69.9</td><td>55.3</td><td>63.5</td></tr><tr><td>GPT-C [49]</td><td>63.4</td><td>62.9</td><td>66.5</td><td>57.2</td><td>63.0</td></tr><tr><td>TravTrans+ [25]</td><td>62.8</td><td>65.4</td><td>70.0</td><td>55.2</td><td>63.8</td></tr><tr><td></td><td>CodeFill</td><td>72.0</td><td>69.7</td><td>76.9</td><td>56.0</td><td>69.5</td></tr></table>
352
+
353
+ Table 6: TLP-D results: Cardinal point completion
354
+
355
+ <table><tr><td>Approach</td><td>Accuracy</td><td>MRR</td></tr><tr><td>N-gram + LSTM [17]</td><td>49.0</td><td>52.3</td></tr><tr><td>Pointer Mixture [29]</td><td>51.3</td><td>52.4</td></tr><tr><td>OpenVocab [24]</td><td>52.2</td><td>53.5</td></tr><tr><td>T-XL + Bi-LSTM [32]</td><td>64.0</td><td>64.7</td></tr><tr><td>GPT-C [49]</td><td>66.1</td><td>67.8</td></tr><tr><td>TravTrans+ [25]</td><td>65.0</td><td>66.2</td></tr><tr><td>CodeFill</td><td>70.0</td><td>70.9</td></tr></table>
356
+
357
+ Statistically, more than 2 out of 3 statement completions of 4 or fewer tokens will be correct.
358
+
359
+ # 5.3 Ablation Study
360
+
361
+ We perform an ablation study to examine the impact of different components of CodeFill. Table 7 presents the results of this study. We include the performance of a vanilla GPT-2 model to show the importance of employing the MTL approach to jointly train models on different representations of source code. The results show that employing the MTL technique to train the models jointly on multiple tasks indeed helps the model learn better. Next, we conduct experiments to compare hard-parameter and soft-parameter models with the two-task MTL model. It is worth mentioning that for the hard-parameter sharing variation, we need to input a unified representation to the models. Thus, we concatenate the type and value of each token as $x_{i} = [t_{i}, v_{i}]$ and then feed the vectors of this concatenated representation to the MTL model. The results indicate that soft-parameter sharing works better in our case. This is probably because this setting allows each task to have its own model and parameters and then regularizes the distance between them to encourage the parameters to be similar.
362
+
363
+ ![](images/7e4bc5cd6173288374b3a11b251e606eb4b8e1d962f505e88f793a7b57cb5660.jpg)
364
+ Figure 6: Results for the SLP task
365
+
366
+ ![](images/4668df92fbfd3e918db3723311e6611d02e795d4ac404d62198114991427beba.jpg)
367
+
368
+ Table 7: Effectiveness of Different Components of the Model
369
+
370
+ <table><tr><td>Approach</td><td>Tasks</td><td>Train Time</td><td>Accuracy</td><td>MRR</td></tr><tr><td>GPT-2</td><td>Value</td><td>12h</td><td>77.7</td><td>78.2</td></tr><tr><td>MTL HP</td><td>Value, Type</td><td>17h</td><td>78.3</td><td>79.6</td></tr><tr><td>MTL SP</td><td>Value, Type</td><td>19h</td><td>78.9</td><td>79.5</td></tr><tr><td>MTL SP</td><td>Value, Type, Statement</td><td>24h</td><td>80.6</td><td>81.7</td></tr></table>
371
+
372
+ Finally, to verify whether adding information regarding statements helps, we investigate the effect of adding the third task, statement completion. The results demonstrate that training at two different granularities (single-token and statement) also helps the model learn better. To conclude, each component of the proposed model adds value. Although the training time increases, training is a one-time cost and can be significantly reduced with parallel training on multiple GPUs.
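The sketch below illustrates the two sharing schemes discussed above: hard sharing feeds the concatenated $x_{i} = [t_{i}, v_{i}]$ representation to a single shared model, while soft sharing keeps a separate model per task and penalizes the distance between their parameters. Module names, dimensions, and the regularization weight are illustrative placeholders, not taken from our implementation.

```python
import torch
import torch.nn as nn

class HardSharedInput(nn.Module):
    """Hard sharing: one shared model consumes concatenated [type; value] embeddings."""
    def __init__(self, n_types, n_values, dim):
        super().__init__()
        self.type_emb = nn.Embedding(n_types, dim)
        self.value_emb = nn.Embedding(n_values, dim)
        self.proj = nn.Linear(2 * dim, dim)   # x_i = [t_i, v_i] projected to the model width

    def forward(self, type_ids, value_ids):
        x = torch.cat([self.type_emb(type_ids), self.value_emb(value_ids)], dim=-1)
        return self.proj(x)

def soft_sharing_penalty(model_a, model_b, lam=0.01):
    """Soft sharing: both task models must have the same architecture; their
    parameters are pulled towards each other by an L2 distance penalty."""
    dist = sum((pa - pb).pow(2).sum()
               for pa, pb in zip(model_a.parameters(), model_b.parameters()))
    return lam * dist

# During training one would minimize, e.g.:
# total_loss = loss_value + loss_type + soft_sharing_penalty(value_model, type_model)
```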
373
+
374
+ # 5.4 Runtime Characteristics
375
+
376
+ An important aspect of ML-based autocompletion tools is their prediction latency. A very accurate model that takes 1 second per prediction will not be very useful in practice, as it will interfere with the developer's workflow. As Table 8 shows, all models feature an average latency of less than 100 milliseconds, which is considered the gold standard in industry.
377
+
378
+ Moreover, the model size and number of parameters are important practical aspects that affect a model's deployment: if the model is too big, it needs to be deployed centrally, and clients must connect to the model server over a network connection (which may affect latency negatively); otherwise, it can be distributed to the clients. As Table 8 shows, CodeFill has more parameters than the other baselines due to our architecture specification.
379
+
380
+ Table 8: Runtime Characteristics
381
+
382
+ <table><tr><td>Approach</td><td>Train Time (hr)</td><td>Latency (ms)</td><td>#Params</td></tr><tr><td>n-gram + LSTM [17]</td><td>23</td><td>75</td><td>168M</td></tr><tr><td>Pointer Mixture [29]</td><td>18</td><td>62</td><td>177M</td></tr><tr><td>OpenVocab [24]</td><td>21</td><td>61</td><td>145M</td></tr><tr><td>T-XL + Bi-LSTM [32]</td><td>24</td><td>79</td><td>173M</td></tr><tr><td>GPT-C [49]</td><td>23</td><td>74</td><td>125M</td></tr><tr><td>TravTrans+ [25]</td><td>15</td><td>53</td><td>119M</td></tr><tr><td>CodeFill</td><td>24</td><td>73</td><td>258M</td></tr></table>
383
+
384
+ However, the size of all Transformer-based models makes them impractical for distribution to clients, necessitating centralized deployments.
385
+
386
+ # 6 CONTRIBUTIONS AND IMPLICATIONS
387
+
388
+ Autocompletion is a popular research area; however, existing challenges leave a substantial margin for improvement, particularly for recommending identifiers or completing longer sequences [18]. In this study, CodeFill learns from sequences of both token types and token names simultaneously using MTL. The contribution of this work is twofold:
389
+
390
+ Technical novelty: Similar to the state-of-the-art [25, 49], we use Transformers for learning a name-based sequencing model, and similar to the studies by Liu et al. [32, 33], we use the MTL technique to condition our models on different tasks. However, IntelliCodeCompose [49] treats code as natural text, neglecting the rich structure inherent in programs; moreover, it focuses on multilingual LMs. TravTrans+ [25] uses serialized ASTs in an attempt to learn from structure; however, we show that our novel transformation, which we designed to be closer to how developers treat source code structure, outperforms TravTrans+.
391
+
392
+ CodeFill also learns from our novel statement completion task to consider longer contexts. Both Figure 6 and Table 7 show that this technique improves the model, probably by helping it better utilize the completion context. The combination of the above demonstrably results in higher evaluation scores and better recommendations.
393
+
394
+ Evaluation: We propose two novel evaluation tasks, cardinal point completion and statement completion, to address deficiencies in current autocompletion evaluation setups. We also collect, pre-process, deduplicate, and share a large Python dataset, consisting of practically all Python code on GitHub.
395
+
396
+ # 7 THREATS TO VALIDITY
397
+
398
+ Threats to internal validity: These include threats pertaining to the parameters that affect the performance of the models. Another threat relates to errors in the implementation of the baselines; for all of these approaches, we used the replication packages provided by the respective studies.
399
+
400
+ Threats to external validity: These threats relate to the quality of the datasets we used and the generalizability of the results. We used two Python datasets; PY117K is a benchmark dataset [40] frequently used in the literature [24, 25, 29, 32]. PY1690K, our second dataset, is ten times larger, with approximately $1.7M$ program files. More data can lead to more generalizable results. Furthermore, as Allamanis [2] suggests, we have de-duplicated both datasets to avoid biasing the models. All of the programs in both datasets are collected from open-source GitHub repositories. However, further studies are needed to validate and generalize our findings to other programming languages.
401
+
402
+ Threats to construct validity: These relate to the suitability of the evaluation setting and metrics. In this work, we have tried to incorporate diverse evaluation measures. For the TLP task, we use the standard evaluation metrics, namely Accuracy and MRR for the top-one and top-ten recommendations, both of which are frequently used in the literature [24, 25, 29, 32]. Furthermore, we use ROUGE-L and METEOR scores for evaluation in the SLP task, as used in previous studies on source code generation, summarization, and translation [1, 49].
403
+
404
+ # 8 CONCLUSION AND FUTURE WORK
405
+
406
+ Unlike natural language text, source code is more structured: its grammar is better defined, but its vocabulary is orders of magnitude larger. Consequently, NLP-based models and the corresponding evaluation methods need to be adapted to the particular case of source code.
407
+
408
+ In this work, we proposed CodeFill, a Transformer-based generative LM for source code pre-trained on three tasks closely relevant to programming. Given a context of tokens (and their types), CodeFill is trained to predict (1) the type of the next token, (2) its value, and (3) the values of up to $n$ next tokens. We employ the MTL approach to jointly train CodeFill on the above tasks. We also propose two novel evaluation tasks, cardinal point prediction and statement-level multi-token prediction, which we argue better represent how autocompletion systems are used in practice. We extensively evaluate CodeFill against six baselines on both tasks. Our results indicate that CodeFill outperforms all the baselines in all scenarios, achieving state-of-the-art scores on both accuracy $(80.6\%)$ and MRR $(81.7\%)$ in the basic token-level prediction task.
409
+
410
+ Moreover, we show that CodeFill also learns to autocomplete statements of up to 4 tokens with over $70\%$ accuracy, a significant improvement over the baselines, making it practical to offer statement completions as an IDE feature.
411
+
412
+ In the future, we plan to incorporate more domain-specific knowledge into the training and evaluation of ML models. For instance, one can limit the context fed to the model based on the programming language, to better incorporate related information from functions and nested scopes in a piece of code. We also plan to further investigate statement completion, including better metrics for its evaluation.
413
+
414
+ # ACKNOWLEDGMENTS
415
+
416
+ This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant number 825328 (FASTEN project), and also the NWO MIPL project, grant number 628.008.003.
417
+
418
+ # REFERENCES
419
+
420
+ [1] Alireza Aghamohammadi, Maliheh Izadi, and Abbas Heydarnoori. 2020. Generating summaries for methods of event-driven programs: An Android case study. Journal of Systems and Software 170 (2020), 110800. https://doi.org/10.1016/j.jss.2020.110800
421
+ [2] Miltiadis Allamanis. 2019. The adverse effects of code duplication in machine learning models of code. In Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software. 143-153.
422
+ [3] Miltiadis Allamanis, Earl T Barr, Premkumar Devanbu, and Charles Sutton. 2018. A survey of machine learning for big code and naturalness. ACM Computing Surveys (CSUR) 51, 4 (2018), 1-37.
423
+ [4] Miltiadis Allamanis and Charles Sutton. 2014. Mining idioms from source code. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. 472-483.
424
+ [5] Sven Amann, Sebastian Proksch, Sarah Nadi, and Mira Mezini. 2016. A study of visual studio usage in practice. In 2016 IEEE 23rd International Conference on Software Analysis, Evolution, and Reengineering (SANER), Vol. 1. IEEE, 124-134.
425
+ [6] Gareth Ari Aye and Gail E Kaiser. 2020. Sequence model design for code completion in the modern IDE. arXiv preprint arXiv:2004.05249 (2020).
426
+ [7] Gareth Ari Aye, Seohyun Kim, and Hongyu Li. 2021. Learning autocompletion from real-world datasets. In 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP). IEEE, 131-139.
427
+ [8] Pavol Bielik, Veselin Raychev, and Martin Vechev. 2016. PHOG: probabilistic model for code. In International Conference on Machine Learning. 2933-2942.
428
+ [9] Marcel Bruch, Martin Monperrus, and Mira Mezini. 2009. Learning from examples to improve code completion systems. In Proceedings of the 7th joint meeting of the European software engineering conference and the ACM SIGSOFT symposium on the foundations of software engineering. 213-222.
429
+ [10] Paweł Budzianowski and Ivan Vulić. 2019. Hello, It's GPT-2-How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems. In Proceedings of the 3rd Workshop on Neural Generation and Translation. 15–22.
430
+ [11] Rich Caruana. 1997. Multitask learning. Machine learning 28, 1 (1997), 41-75.
431
+ [12] Santanu Kumar Dash, Miltiadis Allamanis, and Earl T. Barr. 2018. RefiNym: Using Names to Refine Types. In Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (Lake Buena Vista, FL, USA) (ESEC/FSE 2018). Association for Computing Machinery, New York, NY, USA, 107-117. https://doi.org/10.1145/3236024.3236042
432
+ [13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
433
+ [14] Yoav Goldberg. 2017. Neural network methods for natural language processing. Synthesis lectures on human language technologies 10, 1 (2017), 1-309.
434
+ [15] Georgios Gousios and Diomidis Spinellis. 2012. GHTorrent: GitHub's Data from a Firehose. In MSR '12: Proceedings of the 9th Working Conference on Mining Software Repositories (Zurich, Switzerland), Michael W. Godfrey and Jim Whitehead (Eds.). IEEE, 12-21. https://doi.org/10.1109/MSR.2012.6224294
435
+
436
+ [16] Donghoon Ham, Jeong-Gwan Lee, Youngsoo Jang, and Kee-Eung Kim. 2020. End-to-end neural pipeline for goal-oriented dialogue systems using GPT-2. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 583–592.
437
+ [17] Vincent J Hellendoorn and Premkumar Devanbu. 2017. Are deep neural networks the best choice for modeling source code?. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. 763-773.
438
+ [18] Vincent J Hellendoorn, Sebastian Proksch, Harald C Gall, and Alberto Bacchelli. 2019. When code completion fails: A case study on real-world completions. In 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). IEEE, 960-970.
439
+ [19] Abram Hindle, Earl T Barr, Zhendong Su, Mark Gabel, and Premkumar Devanbu. 2012. On the naturalness of software. In 2012 34th International Conference on Software Engineering (ICSE). IEEE, 837-847.
440
+ [20] Daqing Hou and David M Pletcher. 2010. Towards a better code completion system by API grouping, filtering, and popularity-based ranking. In Proceedings of the 2nd International Workshop on Recommendation Systems for Software Engineering. 26-30.
441
+ [21] Maliheh Izadi, Kiana Akbari, and Abbas Heydarnoori. 2022. Predicting the objective and priority of issue reports in software repositories. Empirical Software Engineering 27, 2 (2022), 1-37. https://doi.org/10.1007/s10664-021-10085-3
442
+ [22] Maliheh Izadi, Abbas Heydarnoori, and Georgios Gousios. 2021. Topic recommendation for software repositories using multi-label classification algorithms. Empirical Software Engineering 26, 5 (2021), 1-33. https://doi.org/10.1007/s10664-021-09976-2
443
+ [23] Xianhao Jin and Francisco Servant. 2018. The Hidden Cost of Code Completion: Understanding the Impact of the Recommendation-List Length on Its Efficiency. In Proceedings of the 15th International Conference on Mining Software Repositories (Gothenburg, Sweden) (MSR '18). Association for Computing Machinery, New York, NY, USA, 70–73. https://doi.org/10.1145/3196398.3196474
444
+ [24] Rafael-Michael Karampatsis, Hlib Babii, Romain Robbes, Charles Sutton, and Andrea Janes. 2020. Big code != big vocabulary: Open-vocabulary models for source code. In 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE). IEEE, 1073-1085.
445
+ [25] Seohyun Kim, Jinman Zhao, Yuchi Tian, and Satish Chandra. 2021. Code Prediction by Feeding Trees to Transformers. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). 150-162. https://doi.org/10.1109/ICSE43902.2021.00026
446
+ [26] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Thirtieth AAAI conference on artificial intelligence.
447
+ [27] Alon Lavie, Kenji Sagae, and Shyamsundar Jayaraman. 2004. The significance of recall in automatic metrics for MT evaluation. In Conference of the Association for Machine Translation in the Americas. Springer, 134-143.
448
+ [28] Jieh-Sheng Lee and Jieh Hsiang. 2020. Patent claim generation by fine-tuning OpenAI GPT-2. World Patent Information 62 (2020), 101983.
449
+ [29] Jian Li, Yue Wang, Michael R Lyu, and Irwin King. 2017. Code completion with neural attention and pointer networks. arXiv preprint arXiv:1711.09573 (2017).
450
+ [30] Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out. 74-81.
451
+ [31] Chang Liu, Xin Wang, Richard Shin, Joseph E Gonzalez, and Dawn Song. 2016. Neural code completion. (2016).
452
+ [32] Fang Liu, Ge Li, Bolin Wei, Xin Xia, Zhiyi Fu, and Zhi Jin. 2020. A Self-Attention Neural Architecture for Code Completion with Multi-Task Learning. In Proceedings of the 28th International Conference on Program Comprehension. 37-47.
453
+ [33] Fang Liu, Ge Li, Yunfei Zhao, and Zhi Jin. 2020. Multi-task Learning based Pre-trained Language Model for Code Completion. In 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 473-485.
454
+ [34] Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025 (2015).
455
+ [35] Tomás Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association.
456
+ [36] Son Nguyen, Tien Nguyen, Yi Li, and Shaohua Wang. 2019. Combining program analysis and statistical language model for code statement completion. In 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 710-721.
457
+ [37] Tung Thanh Nguyen, Anh Tuan Nguyen, Hoan Anh Nguyen, and Tien N. Nguyen. 2013. A statistical semantic language model for source code. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering. 532-542.
458
+ [38] Dragomir R Radev, Hong Qi, Harris Wu, and Weiguo Fan. 2002. Evaluating Web-based Question Answering Systems.. In LREC. CiteSeer.
459
+ [39] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. (2019).
460
+ [40] Veselin Raychev, Pavol Bielik, and Martin Vechev. 2016. Probabilistic model for code with decision trees. ACM SIGPLAN Notices 51, 10 (2016), 731-747.
461
+
462
+ [41] Veselin Raychev, Pavol Bielik, Martin Vechev, and Andreas Krause. 2016. Learning programs from noisy data. ACM Sigplan Notices 51, 1 (2016), 761-774.
463
+ [42] Romain Robbes and Michele Lanza. 2008. How program history can improve code completion. In 2008 23rd IEEE/ACM International Conference on Automated Software Engineering. IEEE, 317-326.
464
+ [43] Romain Robbes and Michele Lanza. 2010. Improving code completion with program history. Automated Software Engineering 17, 2 (2010), 181-212.
465
+ [44] Herbert Robbins and Sutton Monro. 1951. A stochastic approximation method. The annals of mathematical statistics (1951), 400-407.
466
+ [45] Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098 (2017).
467
+ [46] Hinrich Schütze, Christopher D Manning, and Prabhakar Raghavan. 2008. Introduction to information retrieval. Vol. 39. Cambridge University Press Cambridge.
468
+ [47] Holger Schwenk and Jean-Luc Gauvain. 2002. Connectionist language modeling for large vocabulary continuous speech recognition. In 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 1. IEEE, I-765.
469
+ [48] Ozan Sener and Vladlen Koltun. 2018. Multi-task learning as multi-objective optimization. arXiv preprint arXiv:1810.04650 (2018).
470
+ [49] Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. Intellicode compose: Code generation using transformer. In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 1433-1443.
471
+ [50] Alexey Svyatkovskiy, Sebastian Lee, Anna Hadjitofi, Maik Riechert, Juliana Vicente Franco, and Miltiadis Allamanis. 2021. Fast and memory-efficient neural code completion. In 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR). IEEE, 329-340.
472
+ [51] Alexey Svyatkovskiy, Ying Zhao, Shengyu Fu, and Neel Sundaresan. 2019. Pythia: Ai-assisted code completion system. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2727-2735.
473
+ [52] Zhaopeng Tu, Zhendong Su, and Premkumar Devanbu. 2014. On the localness of software. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. 269-280.
474
+ [53] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762 (2017).
475
+ [54] Fengcai Wen, Emad Aghajani, Csaba Nagy, Michele Lanza, and Gabriele Bavota. 2021. Siri, Write the Next Method. In 2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE). IEEE, 138-149.
476
+ [55] Yixiao Yang, Yu Jiang, Ming Gu, Jiaguang Sun, Jian Gao, and Han Liu. 2017. A language model for statements of software code. In 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 682-687.
477
+ [56] Yu Zhang and Qiang Yang. 2021. A survey on multi-task learning. IEEE Transactions on Knowledge and Data Engineering (2021).
2202.06xxx/2202.06689/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dc1ace5aac278e5673d5af9bc9c39911ce9b7850a3370cd823fb0a64d0937286
3
+ size 506781
2202.06xxx/2202.06689/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06709/0a656b52-3523-4411-8242-ec5876763c94_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06709/0a656b52-3523-4411-8242-ec5876763c94_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06709/0a656b52-3523-4411-8242-ec5876763c94_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c7c410d3c511e7f4b93ba46be56b76f2cc47507f95e42b04b1dc62a89d92c8e1
3
+ size 1370873
2202.06xxx/2202.06709/full.md ADDED
@@ -0,0 +1,585 @@
 
 
 
 
1
+ # HOW DO VISION TRANSFORMERS WORK?
2
+
3
+ Namuk Park<sup>1,2</sup>, Songkuk Kim<sup>1</sup>
4
+
5
+ $^{1}$ Yonsei University, $^{2}$ NAVER AI Lab
6
+
7
+ {namuk.park, songkuk}@yonsei.ac.kr
8
+
9
+ # ABSTRACT
10
+
11
+ The success of multi-head self-attention (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): 1 MSAs improve not only accuracy but also generalization by flattening the loss landscapes. Such improvement is primarily attributable to their data specificity, not long-range dependency. On the other hand, ViTs suffer from non-convex losses. Large datasets and loss landscape smoothing methods alleviate this problem; 2 MSAs and Convs exhibit opposite behaviors. For example, MSAs are low-pass filters, but Convs are high-pass filters. Therefore, MSAs and Convs are complementary; 3 Multi-stage neural networks behave like a series connection of small individual models. In addition, MSAs at the end of a stage play a key role in prediction. Based on these insights, we propose AlterNet, a model in which Conv blocks at the end of a stage are replaced with MSA blocks. AlterNet outperforms CNNs not only in large data regimes but also in small data regimes.
12
+
13
+ # 1 INTRODUCTION
14
+
15
+ There is limited understanding of multi-head self-attention (MSAs), although they are now ubiquitous in computer vision. The most widely accepted explanation for the success of MSAs is their weak inductive bias and capture of long-range dependencies (See, e.g., (Dosovitskiy et al., 2021; Naseer et al., 2021; Tuli et al., 2021; Yu et al., 2021; Mao et al., 2021; Chu et al., 2021)). Yet because of their over-flexibility, Vision Transformers (ViTs)—neural networks (NNs) consisting of MSAs—have been known to have a tendency to overfit training datasets, consequently leading to poor predictive performance in small data regimes, e.g., image classification on CIFAR. However, we show that the explanation is poorly supported.
16
+
17
+ # 1.1 RELATED WORK
18
+
19
+ Self-attention (Vaswani et al., 2017; Dosovitskiy et al., 2021) aggregates (spatial) tokens with normalized importances:
20
+
21
+ $$
22
+ z_{j} = \sum_{i} \operatorname{Softmax}\left(\frac{\boldsymbol{Q}\boldsymbol{K}^{\top}}{\sqrt{d}}\right)_{i} \boldsymbol{V}_{i,j} \tag{1}
23
+ $$
24
+
25
+ where $Q$ , $K$ , and $V$ are query, key, and value, respectively. $d$ is the dimension of query and key, and $z_{j}$ is the $j$ -th output token. From the perspective of convolutional neural networks (CNNs), MSAs are a transformation of all feature map points with large-sized and data-specific kernels. Therefore, MSAs are at least as expressive as convolutional layers (Convs) (Cordonnier et al., 2020), although this does not guarantee that MSAs will behave like Convs.
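For concreteness, a minimal single-head NumPy sketch of Eq. (1) is given below; the random projections and token count are placeholders, and real MSAs add multiple heads and an output projection.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, w_q, w_k, w_v):
    """Single-head self-attention: each output token is a convex combination of values."""
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    d = q.shape[-1]
    attn = softmax(q @ k.T / np.sqrt(d))   # (n_tokens, n_tokens), rows sum to 1
    return attn @ v

rng = np.random.default_rng(0)
n_tokens, dim = 16, 8                      # e.g. a 4x4 feature map flattened into tokens
tokens = rng.normal(size=(n_tokens, dim))
w_q, w_k, w_v = (rng.normal(size=(dim, dim)) for _ in range(3))
print(self_attention(tokens, w_q, w_k, w_v).shape)   # (16, 8)
```

Because the softmax weights are positive and sum to one, each output is a data-dependent weighted average of the value vectors.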
26
+
27
+ Is the weak inductive bias of MSA, such as modeling long-range dependencies, beneficial for the predictive performance? To the contrary, appropriate constraints may actually help a model learn strong representations. For example, local MSAs (Yang et al., 2019; Liu et al., 2021; Chu et al., 2021), which calculate self-attention only within small windows, achieve better performance than global MSAs not only on small datasets but also on large datasets, e.g., ImageNet-21K.
28
+
29
+ In addition, prior works observed that MSAs have the following intriguing properties: ① MSAs improve the predictive performance of CNNs (Wang et al., 2018; Bello et al., 2019; Dai et al., 2021; Guo et al., 2021; Srinivas et al., 2021), and ViTs predict well-calibrated uncertainty (Minderer et al., 2021).
30
+
31
+ ![](images/e69622a998709bf0b40b1cd742948773a62d249386ddb240a70d2b73c3819aa0.jpg)
32
+
33
+ ![](images/b648d7bb6a95abe2670b36993b0617dfcf4c95e1b78c9562b03ba94ed9f44bcd.jpg)
34
+ (a) Loss landscape visualizations
35
+
36
+ ![](images/9c190c26b2f70258f317d059233abfbcaf1dab0b7eb700384c0b617b6d969595.jpg)
37
+
38
+ ![](images/40732ce73ae1dc8e863127752f8ea1fb8de300d0988d12140b673d68e46ca0ef.jpg)
39
+
40
+ ![](images/03c4740f95119cdf0acac8c6a79dc05cdccc7330f070e577c198fd2fdf3b1700.jpg)
41
+ (b) Hessian max eigenvalue spectra
42
+
43
+ ![](images/6ff254e1253748e6814d7455c09688e0247bcd7dc13b75c3d08ab742c454a945.jpg)
44
+
45
+ ![](images/81cfaca75df826dc7b58d852a441c9593439fccf2e0c6bd19f2826de2601f6a0.jpg)
46
+ Figure 1. Global and local aspects consistently show that MSAs flatten loss landscapes. Left: Loss landscape visualizations show that ViT has a flatter loss ($\mathrm{NLL} + \ell_{2}$ regularization) than ResNet. Right: The magnitude of the Hessian eigenvalues of ViT is smaller than that of ResNet during the training phases. Since the Hessian represents local curvature, this also suggests that the loss landscape of ViT is flatter than that of ResNet. To demonstrate this point, we present the Hessian max eigenvalue spectra at the end of the warmup phase and at the $100^{\mathrm{th}}$ , $200^{\mathrm{th}}$ , and $300^{\mathrm{th}}$ epochs. See Fig. 4 for a more detailed analysis.
47
+
48
+ $②$ ViTs are robust against data corruptions, image occlusions (Naseer et al., 2021), and adversarial attacks (Shao et al., 2021; Bhojanapalli et al., 2021; Paul & Chen, 2022; Mao et al., 2021). They are particularly robust against high-frequency noises (Shao et al., 2021). $③$ MSAs closer to the last layer significantly improve predictive performance (Graham et al., 2021; Dai et al., 2021).
49
+
50
+ These empirical observations raise immediate questions: 1 What properties of MSAs do we need to better optimize NNs? Do the long-range dependencies of MSAs help NNs learn? 2 Do MSAs act like Convs? If not, how are they different? 3 How can we harmonize MSAs with Convs? Can we just leverage their advantages?
51
+
52
+ We provide an explanation of how MSAs work by addressing them as a trainable spatial smoothing of feature maps, because Eq. (1) also suggests that MSAs average feature map values with the positive importance-weights. Even non-trainable spatial smoothings, such as a small $2 \times 2$ box blur, help CNNs see better (Zhang, 2019; Park & Kim, 2022). These simple spatial smoothings not only improve accuracy but also robustness by spatially ensembling feature map points and flattening the loss landscapes (Park & Kim, 2022). Remarkably, spatial smoothings have the properties of MSAs $(1) - (3)$ . See Appendix B for detailed explanations of MSAs as a spatial smoothing.
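As a minimal illustration of such a non-trainable smoothing, the sketch below applies a $2 \times 2$ box blur to a feature map; the zero padding and the random input are arbitrary choices for the example.

```python
import numpy as np

def box_blur_2x2(feature_map):
    """Average each point with its right/bottom neighbors (a simple 2x2 box blur).

    feature_map: array of shape (H, W, C); the border is handled by zero padding.
    """
    h, w, c = feature_map.shape
    padded = np.zeros((h + 1, w + 1, c), dtype=feature_map.dtype)
    padded[:h, :w] = feature_map
    return (padded[:h, :w] + padded[1:, :w] + padded[:h, 1:] + padded[1:, 1:]) / 4.0

fmap = np.random.rand(8, 8, 64)
smoothed = box_blur_2x2(fmap)
print(smoothed.shape, smoothed.var() < fmap.var())   # smoothing reduces point-wise variance
```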
53
+
54
+ # 1.2 CONTRIBUTION
55
+
56
+ We address the three key questions:
57
+
58
+ What properties of MSAs do we need to improve optimization? We present various evidence to support the view that MSA is a generalized spatial smoothing. This means that MSAs improve performance because their formulation—Eq. (1)—is an appropriate inductive bias. Their weak inductive bias disrupts NN training. In particular, a key feature of MSAs is their data specificity, not long-range dependency. As an extreme example, local MSAs with a $3 \times 3$ receptive field outperform global MSAs because they reduce unnecessary degrees of freedom.
59
+
60
+ How do MSAs improve performance? MSAs have their advantages and disadvantages. On the one hand, they flatten loss landscapes as shown in Fig. 1. The flatter the loss landscape, the better the performance and generalization (Li et al., 2018; Keskar et al., 2017; Santurkar et al., 2018; Foret et al., 2021; Chen et al., 2022).
61
+
62
+ ![](images/8b3aa642b680d15b1bdffb02112f743c85915195523457b5ed62bda7fe79c672.jpg)
63
+ (a) Relative log amplitudes of Fourier transformed feature maps.
64
+
65
+ ![](images/c7815bb3386b5a9b8e48a2024a38b651bda81f595df203f37f6e09863e4b0723.jpg)
66
+ (b) Robustness for noise frequency
67
+
68
+ ![](images/fc700d466cc5b2d9a71f405de5d7be20f27870257b1aaf407f23bbd7edaa062a.jpg)
69
+ Figure 2. The Fourier analysis shows that MSAs do not act like Convs. Left: Relative log amplitudes of Fourier transformed feature map show that ViT tends to reduce high-frequency signals, while ResNet amplifies them. $\Delta$ Log amplitude of high-frequency signals is the difference between the log amplitude at normalized frequency $0.0\pi$ (center) and at $1.0\pi$ (boundary). For better visualization, we only provide the half-diagonal components of two-dimensional Fourier transformed feature maps. See Fig. 8 for a more detailed analysis. Right: We measure the decrease in accuracy against frequency-based random noise. ResNet is vulnerable to high-frequency noise, while ViT is robust against them. We use a frequency window size of $0.1\pi$ for frequency-based noise.
70
+
71
+ Thus, they improve not only accuracy but also robustness in large data regimes. On the other hand, MSAs allow negative Hessian eigenvalues in small data regimes. This means that the loss landscapes of MSAs are non-convex, and this non-convexity disturbs NN optimization (Dauphin et al., 2014). Large amounts of training data suppress negative eigenvalues and convexify losses.
72
+
73
+ 2 Do MSAs act like Convs? We show that MSAs and Convs exhibit opposite behaviors. MSAs aggregate feature maps, but Convs diversify them. Moreover, as shown in Fig. 2a, the Fourier analysis of feature maps shows that MSAs reduce high-frequency signals, while Convs, conversely, amplify high-frequency components. In other words, MSAs are low-pass filters, but Convs are high-pass filters. In addition, Fig. 2b indicates that Convs are vulnerable to high-frequency noise but that MSAs are not. Therefore, MSAs and Convs are complementary.
74
+ 3 How can we harmonize MSAs with Convs? We reveal that multi-stage NNs behave like a series connection of small individual models. Thus, applying spatial smoothing at the end of a stage improves accuracy by ensembling transformed feature map outputs from each stage (Park & Kim, 2022), as shown in Fig. 3a. Based on this finding, we propose an alternating pattern of Convs and MSAs. NN stages using this design pattern consist of a number of CNN blocks and one (or a few) MSA blocks at the end of a stage, as shown in Fig. 3c. The design pattern naturally derives the structure of the canonical Transformer, which has one MSA block per MLP block, as shown in Fig. 3b. It also provides an explanation of how adding Convs to Transformer's MLP block improves accuracy and robustness (Yuan et al., 2021; Guo et al., 2021; Mao et al., 2021).
75
+
76
+ Surprisingly, models using this alternating pattern of Convs and MSAs outperform CNNs not only on large datasets but also on small datasets, such as CIFAR. This contrasts with canonical ViTs, models that perform poorly on small amounts of data. This implies that MSAs are generalized spatial smoothings that complement Convs, not simply generalized Convs that replace conventional Convs.
77
+
78
+ # 2 WHAT PROPERTIES OF MSAS DO WE NEED TO IMPROVE OPTIMIZATION?
79
+
80
+ To understand the underlying nature of MSAs, we investigate the properties of the ViT family: e.g., vanilla ViT (Dosovitskiy et al., 2021); PiT (Heo et al., 2021), which is "ViT + multi-stage"; and Swin (Liu et al., 2021), which is "ViT + multi-stage + local MSA". This section shows that these additional inductive biases enable ViTs to learn strong representations. We also use ResNet (He et al., 2016a) for comparison. NNs are trained from scratch with DeiT-style data augmentation (Touvron et al., 2021) for 300 epochs.
81
+
82
+ ![](images/bcc86f974c139ca986b72f950e9dfa2ed35118c3c87257f668f266906070524a.jpg)
83
+ (a) Spatial smoothing
84
+
85
+ ![](images/9d572b1b687cd2559b5e9af20f515764b7c6e95ac0e60ced1a9391c0c12bc0e0.jpg)
86
+ (b) Canonical Transformer
87
+ (c) Alternating pattern (ours)
88
+
89
+ ![](images/43c31eb8ba3d52b4d67b37b1400490bc6b87d9a9ed050ad66a5968e02eb6087b.jpg)
90
+ Figure 3. Comparison of three different repeating patterns. Left: Spatial smoothings are located at the end of CNN stages. Middle: The stages of ViTs consist of repetitions of canonical Transformers. "D" is the hidden dimension and "H" is the number of heads. Right: The stages using the alternating pattern consist of a number of CNN blocks and an MSA block. For more details, see Fig. 11.
91
+
92
+ The NN training begins with a gradual warmup (Goyal et al., 2017) for 5 epochs. Appendix A provides more detailed configurations and background information for the experiments.
93
+
94
+ The stronger the inductive biases, the stronger the representations (not regularizations). Do models with weak inductive biases overfit training datasets? To address this question, we provide two criteria on CIFAR-100: the error on the test dataset and the cross-entropy, or negative log-likelihood, on the training dataset ($\mathrm{NLL}_{\mathrm{train}}$, the lower the better). See Fig. 5a for the results.
95
+
96
+ Contrary to our expectations, experimental results show that the stronger the inductive bias, the lower both the test error and the training NLL. This indicates that ViT does not overfit training datasets. In addition, appropriate inductive biases, such as locality constraints for MSAs, help NNs learn strong representations. We also observe these phenomena on CIFAR-10 and ImageNet, as shown in Fig. C.1. Figure C.2 also supports that weak inductive biases disrupt NN training. In this experiment, extremely small patch sizes for the embedding hurt the predictive performance of ViT.
97
+
98
+ ViT does not overfit small training datasets. We observe that ViT does not overfit even on smaller datasets. Figure 5b shows the test error and the training NLL of ViT on subsampled datasets.
99
+
100
+ In this experiment, as the size of the dataset decreases, the error increases as expected, but surprisingly, $\mathrm{NLL}_{\mathrm{train}}$ also increases. Thanks to the strong data augmentation, ViT does not overfit even on a dataset size of $2\%$ . This suggests that ViT's poor performance in small data regimes is not due to overfitting.
101
+
102
+ ![](images/4cbb0a2e761d4c3fe9e5a178d8be511baf91394738e43ef1b72f99555d9b0cc5.jpg)
103
+ Figure 4. Hessian max eigenvalue spectra show that MSAs have their advantages and disadvantages. The dotted line is the spectrum of ViT using $6\%$ dataset for training. Left: ViT has a number of negative Hessian eigenvalues, while ResNet only has a few. Right: The magnitude of ViT's positive Hessian eigenvalues is small. See also Fig. 1b for more results.
104
+
105
+ ViT's non-convex losses lead to poor performance.
106
+
107
+ How do weak inductive biases of MSAs disturb the optimization? A loss landscape perspective provides an explanation: the loss function of ViT is non-convex, while that of ResNet is strongly (near-)convex. This poor loss disrupts NN training (Dauphin et al., 2014), especially in the early phase of training (Jastrzebski et al., 2020; 2021). Figure 1b and Fig. 4 provide top-5 largest Hessian eigenvalue densities (Park & Kim, 2022) with a batch size of 16. The figures show that ViT has a number of negative Hessian eigenvalues, while ResNet only has a few.
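The sketch below shows one simple way to estimate the dominant Hessian eigenvalue of a mini-batch loss, using power iteration on Hessian-vector products; the toy model, data, and iteration count are placeholders, and the spectra reported here are computed with more careful estimators.

```python
import torch

def top_hessian_eigenvalue(loss, params, n_iter=20):
    """Estimate the largest-magnitude Hessian eigenvalue via power iteration
    on Hessian-vector products (double backpropagation)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eigenvalue = 0.0
    for _ in range(n_iter):
        norm = torch.sqrt(sum((x ** 2).sum() for x in v))
        v = [x / norm for x in v]
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        eigenvalue = sum((h * x).sum() for h, x in zip(hv, v)).item()   # Rayleigh quotient
        v = [h.detach() for h in hv]
    return eigenvalue

# Toy example: a tiny model and a single mini-batch (both placeholders).
model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.GELU(), torch.nn.Linear(16, 2))
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))
loss = torch.nn.functional.cross_entropy(model(x), y)
print(top_hessian_eigenvalue(loss, list(model.parameters())))
```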
108
+
109
+ ![](images/1e084ca14e3228b88a472b3c669f311a974d968de2d36c254debc29772a8a696.jpg)
110
+ (a) Error and $\mathrm{NLL}_{\mathrm{train}}$ for each model.
111
+
112
+ ![](images/f7a9f0c913403d285b544fd3869fd44894dd60e8209f9bb26313b5db9ec3f7e7.jpg)
113
+ (b) Performance of ViT for dataset size.
114
+ Figure 5. ViT does not overfit training datasets. "R" is ResNet and "RX" is ResNeXt. Left: Weak inductive bias disturbs NN optimization. The lower the $\mathrm{NLL}_{\mathrm{train}}$ , the lower the error. Right: The lack of dataset also disturbs NN optimization.
115
+
116
+ Loss landscape smoothing methods aid in ViT training. Loss landscape smoothing methods can also help ViT learn strong representations. In classification tasks, global average pooling (GAP) smoothes the loss landscape by strongly ensembling feature map points (Park & Kim, 2022). We demonstrate how this loss smoothing method can help ViT improve performance by analyzing ViT with a GAP classifier instead of a CLS token on CIFAR-100.
117
+
118
+ Figure 6 shows the Hessian max eigenvalue spectrum of the ViT with GAP. As expected, the result shows that the GAP classifier suppresses negative Hessian max eigenvalues, suggesting that GAP convexifies the loss. Since negative eigenvalues disturb NN optimization, the GAP classifier improves the accuracy by $+2.7$ percentage points.
119
+
120
+ ![](images/4925680148c2f5c9021a901c2ab1f33a2ac5c618b209f715704f8664d1a5a173.jpg)
121
+ Figure 4 also shows that large datasets suppress negative Hessian eigenvalues in the early phase of training. Therefore, large datasets tend to help ViT learn strong representations by convexifying the loss. ResNet enjoys little benefit from large datasets because its loss is convex even on small datasets.
122
+ Figure 6. GAP classifier suppresses negative Hessian max eigenvalues in an early phase of training. We present Hessian max eigenvalue spectrum of ViT with GAP classifier instead of CLS token.
123
+
124
+ Likewise, Sharpness-Aware Minimization (SAM) (Foret et al., 2021), an optimizer that relies on the local smoothness of the loss function, also helps NNs seek out smooth minima. Chen et al. (2022) showed that SAM improves the predictive performance of ViT.
125
+
126
+ MSAs flatten the loss landscape. Another property of MSAs is that they reduce the magnitude of Hessian eigenvalues. Figure 1b and Fig. 4 show that the eigenvalues of ViT are significantly smaller than those of CNNs. Since the Hessian represents the local curvature of a loss function, this suggests that MSAs flatten the loss landscapes. While large eigenvalues impede NN training (Ghorbani et al., 2019), MSAs can help NNs learn better representations by suppressing large Hessian eigenvalues. Global aspects also support this claim. In Fig. 1a, we visualize the loss landscapes by using filter normalization (Li et al., 2018), and the result shows that the loss landscape of ViT is flatter than that of ResNet. In large data regimes, the negative Hessian eigenvalues—the disadvantage of MSAs—disappear, and only the advantages remain. As a result, ViTs outperform CNNs on large datasets, such as ImageNet and JFT (Sun et al., 2017). PiT and Swin also flatten the loss landscapes. For more details, see Fig. C.4.
127
+
128
+ A key feature of MSAs is data specificity (not long-range dependency). The two distinguishing features of MSAs are long-range dependency and data specificity, also known as data dependency, as discussed in Section 1.1. Contrary to popular belief, the long-range dependency hinders NN optimization. To demonstrate this, we analyze convolutional ViT (Yang et al., 2019), which consists of convolutional MSAs instead of global MSAs. Convolutional MSAs for vision tasks calculate self-attention only between feature map points in convolutional receptive fields after unfolding the feature maps in the same way as two-dimensional convolutions.
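A minimal single-head sketch of such a convolutional (local) MSA is given below: the feature map is unfolded into $k \times k$ neighborhoods and self-attention is computed only within each window. Learned query/key/value projections and multiple heads are omitted for brevity, so this is only an illustration of the windowing, not a full implementation.

```python
import torch
import torch.nn.functional as F

def local_self_attention(x, kernel_size=3):
    """Toy single-head local MSA: each point attends only to its k x k neighborhood.

    x: feature map of shape (B, C, H, W); the feature map itself serves as
    query, key, and value (projections omitted).
    """
    b, c, h, w = x.shape
    pad = kernel_size // 2
    neighbors = F.unfold(x, kernel_size, padding=pad)                        # (B, C*k*k, H*W)
    neighbors = neighbors.view(b, c, kernel_size ** 2, h * w).permute(0, 3, 2, 1)  # (B, HW, k*k, C)
    queries = x.flatten(2).transpose(1, 2).unsqueeze(2)                      # (B, HW, 1, C)
    attn = torch.softmax((queries * neighbors).sum(-1) / c ** 0.5, dim=-1)   # (B, HW, k*k)
    out = (attn.unsqueeze(-1) * neighbors).sum(2)                            # (B, HW, C)
    return out.transpose(1, 2).reshape(b, c, h, w)

print(local_self_attention(torch.randn(1, 8, 16, 16)).shape)                 # torch.Size([1, 8, 16, 16])
```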
129
+
130
+ ![](images/795759ed86ad64c38d1af72d685a5ec5cfc20a89d7cbba0a3de331989b3ea850.jpg)
131
+ (a) Error and NLLtrain of ViT with local MSA for kernel size
132
+
133
+ ![](images/57b5a39061169861fa4cede1878938de97fe3ebde39c7f6588e88d9159f45c67.jpg)
134
+ (b) Hessian negative and positive max eigenvalue spectra in early phase of training
135
+ Figure 7. Locality constraint improves the performance of ViT. We analyze the ViT with convolutional MSAs. Convolutional MSA with $8 \times 8$ kernel is global MSA. Left: Local MSAs learn stronger representations than global MSA. Right: Locality inductive bias suppresses the negative Hessian eigenvalues, i.e., local MSAs have convex losses.
136
+
137
+ Figure 7a shows the error and $\mathrm{NLL}_{\mathrm{train}}$ of convolutional ViTs with kernel sizes of $3\times 3,5\times 5,$ and $8\times 8$ (global MSA) on CIFAR-100. In this experiment, $5\times 5$ kernel outperforms $8\times 8$ kernel on both the training and the test datasets. $\mathrm{NLL}_{\mathrm{train}}$ of $3\times 3$ kernel is worse than that of $5\times 5$ kernel, but better than that of global MSA. Although the test errors of $3\times 3$ and $5\times 5$ kernels are comparable, the robustness of $5\times 5$ kernel is significantly better than that of $3\times 3$ kernel on CIFAR-100-C (Hendrycks & Dietterich, 2019).
138
+
139
+ Figure 7b shows that the strong locality inductive bias not only reduce computational complexity as originally proposed (Liu et al., 2021), but also aid in optimization by convexifying the loss landscape. $5 \times 5$ kernel has fewer negative eigenvalues than global MSA because it restricts unnecessary degrees of freedom. $5 \times 5$ kernel also has fewer negative eigenvalues than $3 \times 3$ kernel because it ensembles a larger number of feature map points (See also Fig. 6). The amount of negative eigenvalues is minimized when these two effects are balanced.
140
+
141
+ It is clear that data specificity improves NNs. MLP-Mixer (Tolstikhin et al., 2021; Yu et al., 2021), a model with an MLP kernel that does not depend on input data, underperforms compared to ViTs. Data specificity without self-attention (Bello, 2021) improves performance.
142
+
143
+ # 3 Do MSAs ACT LIKE CONVS?
144
+
145
+ Convs are data-agnostic and channel-specific in that they mix channel information without exploiting data information. MSAs, on the contrary, are data-specific and channel-agnostic. These differences lead to large behavioral differences, which in turn suggest that MSAs and Convs are complementary.
146
+
147
+ MSAs are low-pass filters, but Convs are high-pass filters. As explained in Section 1.1, MSAs spatially smoothen feature maps with self-attention importances. Therefore, we expect that MSAs will tend to reduce high-frequency signals. See Appendix B for a more detailed discussion.
148
+
149
+ Figure 8 shows the relative log amplitude $(\Delta \log \mathrm{amplitude})$ of ViT's Fourier-transformed feature maps at high frequency $(1.0\pi)$ on ImageNet. In this figure, MSAs almost always decrease the high-frequency amplitude, and MLPs—corresponding to Convs—increase it. The only exception is in the early stages of the model. In these stages, MSAs behave like Convs, i.e., they increase the amplitude. This could serve as evidence for a hybrid model that uses Convs in early stages and MSAs in late stages (Guo et al., 2021; Graham et al., 2021; Dai et al., 2021; Xiao et al., 2021; Srinivas et al., 2021).
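The sketch below illustrates the kind of measurement used here: the log amplitude of the two-dimensional Fourier-transformed feature map, and the difference between its value at the lowest and the highest frequency. The exact frequency sampling and averaging behind the figures may differ; this is only an illustration.

```python
import numpy as np

def delta_log_amplitude(feature_map):
    """Log-amplitude difference between the highest and lowest frequency of a
    2D Fourier-transformed feature map (averaged over channels).

    feature_map: array of shape (H, W, C).
    """
    fft = np.fft.fftshift(np.fft.fft2(feature_map, axes=(0, 1)), axes=(0, 1))
    log_amp = np.log(np.abs(fft) + 1e-12).mean(axis=-1)   # (H, W)
    h, w = log_amp.shape
    center = log_amp[h // 2, w // 2]                       # normalized frequency 0.0*pi
    corner = log_amp[0, 0]                                 # highest-frequency corner
    return corner - center

fmap = np.random.rand(14, 14, 64)
print(delta_log_amplitude(fmap))   # strongly negative when high frequencies are suppressed
```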
150
+
152
+
153
+ ![](images/5373e8397e6ebeb04660fd5fba86eaf59caa141e8da2de1b5f20d675ef1a297f.jpg)
154
+ Figure 8. MSAs (gray area) generally reduce the high-frequency components of feature maps, and MLPs (white area) amplify them. This figure provides the $\Delta$ log amplitude of ViT at high frequency $(1.0\pi)$ . See also Fig. 2a and Fig. D.2 for more results.
155
+
156
+ ![](images/c90a767760cb4ab122099cf68a08034b36ff958ae412e6f108bce90687bbb495.jpg)
157
+ Figure 9. MSAs (gray area) reduce the variance of feature map points, but Convs (white area) increase the variance. The blue area is subsampling layer. This result implies that MSAs ensemble feature maps, but Convs do not.
158
+
159
+ ![](images/c277e35c60c003fabb7938dece2a1234af80d88a7acdbfc6c2748c0c9947ed19.jpg)
160
+
161
+ Based on this, we can infer that low-frequency signals and high-frequency signals are informative to MSAs and Convs, respectively. In support of this argument, we report the robustness of ViT and ResNet against frequency-based random noise. Following Shao et al. (2021) and Park & Kim (2022), we measure the decrease in accuracy with respect to data with frequency-based random noise $\pmb{x}_{\mathrm{noise}} = \pmb{x}_0 + \mathcal{F}^{-1}\left(\mathcal{F}(\delta)\odot \mathbf{M}_f\right)$ , where $\pmb{x}_0$ is clean data, $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ are the Fourier transform and inverse Fourier transform, $\delta$ is random noise, and $\mathbf{M}_f$ is a frequency mask.
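A small sketch of this frequency-based noise is given below; the band edges, noise scale, and single-channel input are illustrative choices.

```python
import numpy as np

def add_frequency_noise(image, f_low, f_high, scale=0.1):
    """x_noise = x_0 + IFFT(FFT(delta) * M_f) for a band-limited frequency mask M_f.

    image: array of shape (H, W); f_low/f_high are normalized frequencies in [0, 1],
    where 1.0 corresponds to pi.
    """
    h, w = image.shape
    delta = np.random.randn(h, w) * scale
    fy = np.fft.fftshift(np.fft.fftfreq(h)) * 2            # normalized to [-1, 1)
    fx = np.fft.fftshift(np.fft.fftfreq(w)) * 2
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= f_low) & (radius < f_high)            # frequency window M_f
    spectrum = np.fft.fftshift(np.fft.fft2(delta)) * mask
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    return image + filtered

noisy = add_frequency_noise(np.random.rand(32, 32), f_low=0.9, f_high=1.0)  # a 0.1*pi-wide band
print(noisy.shape)
```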
162
+
163
+ As expected, the result in Fig. 2b reveals that ViT and ResNet are vulnerable to low-frequency noise and high-frequency noise, respectively. Low-frequency signals and high-frequency signals correspond to the shape and the texture of images, respectively. The results thus suggest that MSAs are shape-biased (Naseer et al., 2021), whereas Convs are texture-biased (Geirhos et al., 2019).
164
+
165
+ MSAs aggregate feature maps, but Convs do not. Since MSAs average feature maps, they reduce the variance of feature map points. This suggests that MSAs ensemble feature maps (Park & Kim, 2022). To demonstrate this claim, we measure the variance of feature maps from NN layers.
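+
+ As a concrete sketch of this measurement (an illustrative assumption, not the paper's exact procedure), the variance can be taken over batch and spatial positions and then averaged over channels:
+
+ ```python
+ import torch
+
+ def feature_map_variance(feat: torch.Tensor) -> float:
+     """feat: (B, C, H, W) for CNN layers or (B, N, C) for ViT layers."""
+     if feat.dim() == 4:
+         feat = feat.permute(0, 2, 3, 1).flatten(0, 2)   # (B*H*W, C)
+     else:
+         feat = feat.flatten(0, 1)                       # (B*N, C)
+     return feat.var(dim=0).mean().item()                # variance over points, averaged over channels
+
+ # e.g., collected per layer via forward hooks:
+ # module.register_forward_hook(lambda m, i, o: records.append(feature_map_variance(o.detach())))
+ ```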
166
+
167
+ Figure 9 shows the experimental results of ResNet and ViT. This figure indicates that MSAs in ViT tend to reduce the variance; conversely, Convs in ResNet and MLPs in ViT increase it. In conclusion, MSAs ensemble feature map predictions, but Convs do not. As Park & Kim (2022) figured out, reducing the feature map uncertainty helps optimization by ensembling and stabilizing the transformed feature maps. See Fig. D.1 for more results on PiT and Swin.
168
+
169
+ We observe two additional patterns for feature map variance. First, the variance accumulates in every NN layer and tends to increase with depth. Second, the feature map variance in ResNet peaks at the end of each stage. Therefore, we can improve the predictive performance of ResNet by inserting MSAs at the end of each stage. We can further improve the performance by using MSAs with a large number of heads in the late stages.
170
+
171
+ # 4 HOW CAN WE HARMONIZE MSAS WITH CONVS?
172
+
173
+ Since MSAs and Convs are complementary, this section seeks to design a model that leverages only the advantages of the two modules. To this end, we propose the design rules described in Fig. 3c, and demonstrate that models following these rules outperform CNNs, not only in large data regimes but also in small data regimes, such as CIFAR.
174
+
175
+ # 4.1 DESIGNING ARCHITECTURE
176
+
177
+ We first investigate the properties of multi-stage NN architectures. Based on this investigation, we propose an alternating pattern, i.e., a principle for stacking MSAs on top of a CNN baseline.
178
+
179
+ Multi-stage NNs behave like individual models. In Fig. 9, we observe that the pattern of feature map variance repeats itself at every stage. This behavior is also observed in feature map similarities and lesion studies.
180
+
181
+ ![](images/e2ea2478d7ad90c6fe600389b02d96d8790d8a943699ac48c79fac5689847c19.jpg)
182
+
183
+ ![](images/d12a8df5595e165b5bd3b119f25e622a4cb05f3bf1d0921940dc5f2392307421.jpg)
184
+ Figure 10. Multi-stage CNNs and ViTs behave like a series connection of small individual models. Left: The feature map similarities show the block structure of ResNet and Swin. "E" stands for the stem/embedding layer and "P" for the pooling (subsampling) layer. Right: We measure the decrease in accuracy after removing one unit from the trained model. Accuracy changes periodically, and the period is one stage. White, gray, and blue areas are Conv/MLP, MSA, and subsampling layers, respectively.
185
+
186
+ ![](images/ab09d73f5ebe680a0248418751bd4ba4a13a540b447f5b242e4fbfb905018976.jpg)
187
+ (a) Feature map similarity
188
+
189
+ ![](images/ba2141077c51f273bb6354e84439d87d305d33c70ade2a4efda245b3f3ebb8a7.jpg)
190
+ (b) Accuracy of one-unit-removed model.
191
+ Figure 10a shows the representational similarities of ResNet and Swin on CIFAR-100. In this experiment, we use mini-batch CKA (Nguyen et al., 2021) to measure the similarities. As Nguyen et al. (2021) figured out, the feature map similarities of CNNs have a block structure. Likewise, we observe that the feature map similarities of multi-stage ViTs, such as PiT and Swin, also have a block structure. Since vanilla ViT does not have this structure (Bhojanapalli et al., 2021; Raghu et al., 2021), the structure is an intrinsic characteristic of multi-stage architectures. See Fig. D.3 for more detailed results of ViT and PiT.
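+
+ As a simplified sketch of the similarity measure, full-batch linear CKA between two layers' activations can be written as below; the paper uses the mini-batch CKA of Nguyen et al. (2021), which averages an unbiased HSIC estimator over mini-batches, but the linear full-batch form conveys the idea.
+
+ ```python
+ import torch
+
+ def linear_cka(x: torch.Tensor, y: torch.Tensor) -> float:
+     """x: (n, d1), y: (n, d2) activations of two layers for the same n examples."""
+     x = x - x.mean(dim=0, keepdim=True)          # center features
+     y = y - y.mean(dim=0, keepdim=True)
+     hsic = (y.T @ x).norm() ** 2                 # ||Y^T X||_F^2
+     return (hsic / ((x.T @ x).norm() * (y.T @ y).norm())).item()
+ ```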
192
+
193
+ Figure 10b shows the results of a lesion study (Bhojanapalli et al., 2021), in which one NN unit is removed from an already trained ResNet or Swin during the testing phase. In this experiment, we remove one $3 \times 3$ Conv layer from the bottleneck block of ResNet, and one MSA or MLP block from Swin. In ResNet, removing an early-stage layer hurts accuracy more than removing a late-stage layer. More importantly, removing a layer at the beginning of a stage impairs accuracy more than removing a layer at the end of a stage. The case of Swin is even more interesting: at the beginning of a stage, removing an MLP hurts accuracy, and at the end of a stage, removing an MSA seriously impairs it. These results are consistent with Fig. 8. See Fig. D.4 for the results on ViT and PiT.
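+
+ A minimal sketch of such a lesion is given below; it assumes the removed block preserves input and output shapes (so it can be swapped for an identity mapping), and the helper names and the example unit path are hypothetical, not the authors' protocol.
+
+ ```python
+ import copy
+ import torch.nn as nn
+
+ def remove_unit(model: nn.Module, unit_name: str) -> nn.Module:
+     """Return a copy of a trained model with the submodule `unit_name` replaced by Identity."""
+     lesioned = copy.deepcopy(model)
+     parent_name, _, child_name = unit_name.rpartition(".")
+     parent = lesioned.get_submodule(parent_name) if parent_name else lesioned
+     setattr(parent, child_name, nn.Identity())    # only valid when the unit preserves tensor shape
+     return lesioned
+
+ # e.g., drop = accuracy(model, loader) - accuracy(remove_unit(model, "layer3.2.conv2"), loader)
+ ```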
194
+
195
+ Based on these findings, we expect MSAs closer to the end of a stage to significantly improve the performance. This is contrary to the popular belief that MSAs closer to the end of a model improve the performance (Srinivas et al., 2021; d'Ascoli et al., 2021; Graham et al., 2021; Dai et al., 2021).
196
+
197
+ Build-up rule. Considering all the insights, we propose the following design rules:
198
+
199
+ - Alternately replace Conv blocks with MSA blocks from the end of a baseline CNN model.
200
+ - If the added MSA block does not improve predictive performance, replace a Conv block located at the end of an earlier stage with an MSA block.
201
+ - Use more heads and higher hidden dimensions for MSA blocks in late stages.
202
+
203
+ We call the model that follows these rules AlterNet. AlterNet unifies ViTs and CNNs by adjusting the ratio of MSAs and Convs as shown in Fig. 3. Figure 11 shows AlterNet based on pre-activation ResNet-50 (He et al., 2016b) for CIFAR-100 as an example. Figure D.5 shows AlterNet for ImageNet.
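+
+ A small illustrative sketch of the first rule is given below. The per-stage block counts and MSA counts are hypothetical values for a CIFAR-style Alter-ResNet-50, chosen here only to show how the alternating replacement proceeds from the end of each stage; the actual counts are determined empirically (rule 2).
+
+ ```python
+ def alternate_stage(num_blocks: int, num_msa: int) -> list:
+     """Replace Conv blocks with MSA blocks alternately, starting from the end of the stage."""
+     layout = ["conv"] * num_blocks
+     pos = num_blocks - 1
+     for _ in range(num_msa):
+         layout[pos] = "msa"
+         pos -= 2                     # skip one Conv block so Convs and MSAs alternate
+     return layout
+
+ # Hypothetical example: ResNet-50 stages [3, 4, 6, 3] with one extra block in the last stage,
+ # per-stage MSA counts chosen empirically (rule 2), and heads per stage following Swin (rule 3).
+ stages = [alternate_stage(n, m) for n, m in zip([3, 4, 6, 4], [0, 1, 1, 2])]
+ # stages[-1] == ['conv', 'msa', 'conv', 'msa']  (the last stage ends with an MSA block)
+ ```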
204
+
205
+ Figure 12a reports the accuracy of Alter-ResNet-50, which replaces the Conv blocks in ResNet-50 with local MSAs (Liu et al., 2021) according to the aforementioned rules, on CIFAR-100. As expected, MSAs in the last stage (c4) significantly improve the accuracy. Surprisingly, an MSA in the $2^{\text{nd}}$ stage (c2) improves the accuracy, while two or more MSAs in the $3^{\text{rd}}$ stage (c3) reduce it. In conclusion, MSAs at the end of a stage play an important role in prediction, as demonstrated previously.
206
+
207
+ ![](images/e9f94bfbb280c8cf15498aa905f7307ccc9827aa46ef720bc711e63c010f1030.jpg)
208
+ Figure 11. Detailed architecture of Alter-ResNet-50 for CIFAR-100. White, gray, and blue blocks denote Conv, MSA, and subsampling blocks, respectively. All stages (except stage 1) end with MSA blocks. This model is based on pre-activation ResNet-50. Following Swin, MSAs in stages 1 to 4 have 3, 6, 12, and 24 heads, respectively.
209
+
210
+ Figure 12. AlterNet outperforms CNNs and ViTs. Left: MSAs at the end of a stage improve accuracy. We replace Convs of ResNet with MSAs one by one according to the build-up rules. c1 to c4 stand for the stages. Several MSAs in c3 harm the accuracy, but the MSA at the end of c2 improves it. Center: AlterNet outperforms CNNs even in a small data regime. Robustness is mean accuracy on CIFAR-100-C. "RX" is ResNeXt. Right: MSAs in AlterNet suppress the large eigenvalues; i.e., AlterNet has a flatter loss landscape than ResNet in the early phase of training.
211
+ ![](images/7cf5a971a50211a454ee2edbd734fa5b79e9ebe2755823b2bc7fab8002bd1853.jpg)
212
+ (a) Accuracy of AlterNet for MSA number
213
+
214
+ Figure 12c demonstrates that MSAs suppress large eigenvalues while allowing only a few negative eigenvalues. As explained in Fig. 4, large datasets compensate for the shortcomings of MSAs. Therefore, more data allows more MSAs for a model.
215
+ ![](images/9f3a227bcf5762a136158f3f4784a388dff51a20d55ccb5a115b5180f889a148.jpg)
216
+ (b) Accuracy and robustness in a small data regime (CIFAR-100)
217
+
218
+ ![](images/8c06afb4768388f135bd572e61c2cc6be8756d56832f73086a7518fc30dd3fb6.jpg)
219
+ (c) Hessian max eigenvalue spectra in an early phase of training
220
+
221
+ # 4.2 PERFORMANCE
222
+
223
+ Figure 12b shows the accuracy and corruption robustness of Alter-ResNet-50 and other baselines on CIFAR-100 and CIFAR-100-C. Since CIFAR is a small dataset, CNNs outperform canonical ViTs. Surprisingly, Alter-ResNet—a model with MSAs following the appropriate build-up rules—outperforms CNNs even in this small data regime. This suggests that MSAs complement Convs. In the same manner, this simple modification shows competitive performance on larger datasets, such as ImageNet. See Fig. E.1 for more details.
224
+
225
+ # 5 DISCUSSION
226
+
227
+ Our present work demonstrates that MSAs are not merely generalized Convs, but rather generalized spatial smoothings that complement Convs. MSAs help NNs learn strong representations by ensembling feature map points and flattening the loss landscape. Since the main objective of this work is to investigate the nature of MSAs for computer vision, we preserve the architectures of Conv and MSA blocks in AlterNet. Thus, AlterNet has a strong potential for future improvements. In addition, AlterNet can conveniently replace the backbone for other vision tasks such as dense prediction (Carion et al., 2020). As Park & Kim (2022) pointed out, global average pooling (GAP) for simple classification tasks has a strong tendency to ensemble feature maps, but NNs for dense prediction do not use GAP. Therefore, we believe that MSAs can significantly improve the results in dense prediction tasks by ensembling feature maps. Lastly, strong data augmentation for MSA training harms uncertainty calibration, as shown in Fig. F.1a. We leave a detailed investigation for future work.
228
+
229
+ # ACKNOWLEDGEMENT
230
+
231
+ We thank the reviewers, Taeoh Kim, and Pilhyeon Lee for valuable feedback. This work was supported by the Samsung Science and Technology Foundation under Project Number SSTF-BA1501-52.
232
+
233
+ # REPRODUCIBILITY STATEMENT
234
+
235
+ To ensure reproducibility, we provide comprehensive resources, such as code and experimental details. The code is available at https://github.com/xxxnell/how-do-vits-work. Appendix A.1 provides the specifications of all models used in this work. Detailed experimental setups, including hyperparameters and the structure of AlterNet, are also available in Appendix A.1 and Appendix E. De facto standard image datasets are used for all experiments, as described in Appendix A.1.
236
+
237
+ # REFERENCES
238
+
239
+ Irwan Bello. Lambda networks: Modeling long-range interactions without attention. In International Conference on Learning Representations, 2021.
240
+ Irwan Bello, Barret Zoph, Ashish Vaswani, Jonathon Shlens, and Quoc V Le. Attention augmented convolutional networks. In International Conference on Computer Vision, 2019.
241
+ Irwan Bello, William Fedus, Xianzhi Du, Ekin Dogus Cubuk, Aravind Srinivas, Tsung-Yi Lin, Jonathon Shlens, and Barret Zoph. Revisiting resnets: Improved training and scaling strategies. In Advances in Neural Information Processing Systems, 2021.
242
+ Srinadh Bhojanapalli, Ayan Chakrabarti, Daniel Glasner, Daliang Li, Thomas Unterthiner, and Andreas Veit. Understanding robustness of transformers for image classification. In International Conference on Computer Vision, 2021.
243
+ Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In European Conference on Computer Vision, 2020.
244
+ Xiangning Chen, Cho-Jui Hsieh, and Boqing Gong. When vision transformers outperform resnets without pretraining or strong data augmentations. In International Conference on Learning Representations, 2022.
245
+ Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. In Advances in Neural Information Processing Systems, 2021.
246
+ Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. On the relationship between self-attention and convolutional layers. In International Conference on Learning Representations, 2020.
247
+ Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Advances in Neural Information Processing Systems, 2020.
248
+ Zihang Dai, Hanxiao Liu, Quoc V Le, and Mingxing Tan. Coatnet: Marrying convolution and attention for all data sizes. In Advances in Neural Information Processing Systems, 2021.
249
+ Stéphane d'Ascoli, Hugo Touvron, Matthew Leavitt, Ari Morcos, Giulio Biroli, and Levent Sagun. Convit: Improving vision transformers with soft convolutional inductive biases. In International Conference on Machine Learning, 2021.
250
+ Yann N Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems, 2014.
251
+
252
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
253
+ Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization. In International Conference on Learning Representations, 2021.
254
+ Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations, 2019.
255
+ Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian eigenvalue density. In International Conference on Machine Learning, 2019.
256
+ Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
257
+ Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, and Matthijs Douze. Levit: a vision transformer in convnet's clothing for faster inference. In International Conference on Computer Vision, 2021.
258
+ Jianyuan Guo, Kai Han, Han Wu, Chang Xu, Yehui Tang, Chunjing Xu, and Yunhe Wang. Cmt: Convolutional neural networks meet vision transformers. arXiv preprint arXiv:2107.06263, 2021.
259
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016a.
260
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, 2016b.
261
+ Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In International Conference on Learning Representations, 2019.
262
+ Byeongho Heo, Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Junsuk Choe, and Seong Joon Oh. Rethinking spatial dimensions of vision transformers. In International Conference on Computer Vision, 2021.
263
+ Jiri Hron, Yasaman Bahri, Jascha Sohl-Dickstein, and Roman Novak. Infinite attention: Nngp and ntk for deep attention networks. In International Conference on Machine Learning, 2020.
264
+ Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European Conference on Computer Vision, 2016.
265
+ Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, 2018.
266
+ Stanislaw Jastrzebski, Maciej Szymczak, Stanislav Fort, Devansh Arpit, Jacek Tabor, Kyunghyun Cho, and Krzysztof Geras. The break-even point on optimization trajectories of deep neural networks. In International Conference on Learning Representations, 2020.
267
+ Stanislaw Jastrzebski, Devansh Arpit, Oliver Astrand, Giancarlo B Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, and Krzysztof J Geras. Catastrophic fisher explosion: Early phase fisher matrix impacts generalization. In International Conference on Machine Learning, 2021.
268
+ Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations, 2017.
269
+ Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
270
+
271
+ Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. Visualizing the loss landscape of neural nets. In Advances in Neural Information Processing Systems, 2018.
272
+ Chaoyue Liu, Libin Zhu, and Mikhail Belkin. On the linearity of large non-linear models: when and why the tangent kernel is constant. In Advances in Neural Information Processing Systems, 2020.
273
+ Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In International Conference on Computer Vision, 2021.
274
+ Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2017.
275
+ Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
276
+ Xue. Towards robust vision transformer. arXiv preprint arXiv:2105.07926, 2021.
277
+ Paul Michel, Omer Levy, and Graham Neubig. Are sixteen heads really better than one? In Advances in Neural Information Processing Systems, 2019.
278
+ Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. In Advances in Neural Information Processing Systems, 2021.
279
+ Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Intriguing properties of vision transformers. In Advances in Neural Information Processing Systems, 2021.
280
+ Thao Nguyen, Maithra Raghu, and Simon Kornblith. Do wide and deep networks learn the same things? uncovering how neural network representations vary with width and depth. In International Conference on Learning Representations, 2021.
281
+ Namuk Park and Songkuk Kim. Blurs behave like ensembles: Spatial smoothings to improve accuracy, uncertainty, and robustness. In International Conference on Machine Learning, 2022.
282
+ Namuk Park, Taekyu Lee, and Songkuk Kim. Vector quantized bayesian neural network inference for data streams. In Proceedings of the AAAI Conference on Artificial Intelligence, 2021.
283
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, 2019.
284
+ Sayak Paul and Pin-Yu Chen. Vision transformers are robust learners. In Proceedings of the AAAI Conference on Artificial Intelligence, 2022.
285
+ Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. Do vision transformers see like convolutional neural networks? In Advances in Neural Information Processing Systems, 2021.
286
+ Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015.
287
+ Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Mądry. How does batch normalization help optimization? In Advances in Neural Information Processing Systems, 2018.
288
+ Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. On the adversarial robustness of visual transformers. arXiv preprint arXiv:2103.15670, 2021.
289
+
290
+ Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, and Ashish Vaswani. Bottleneck transformers for visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021.
291
+ Andreas Steiner, Alexander Kolesnikov, Xiaohua Zhai, Ross Wightman, Jakob Uszkoreit, and Lucas Beyer. How to train your vit? data, augmentation, and regularization in vision transformers. arXiv preprint arXiv:2106.10270, 2021.
292
+ Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In International Conference on Computer Vision, 2017.
293
+ Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
294
+ Ilya Tolstikhin, Neil Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Thomas Unterthiner, Jessica Yung, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, et al. Mlp-mixer: An all-mlp architecture for vision. In Advances in Neural Information Processing Systems, 2021.
295
+ Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, 2021.
296
+ Shikhar Tuli, Ishita Dasgupta, Erin Grant, and Thomas L Griffiths. Are convolutional neural networks or transformers more like human vision? In Proceedings of the Annual Meeting of the Cognitive Science Society, 2021.
297
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, 2017.
298
+ Peihao Wang, Wenqing Zheng, Tianlong Chen, and Zhangyang Wang. Scaling the depth of vision transformers via the fourier domain analysis. In International Conference on Learning Representations, 2022.
299
+ Phil Wang. Implementation of vision transformer. https://github.com/lucidrains/vit-pytorch, 2021.
300
+ Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
301
+ Yeming Wen, Ghassen Jerfel, Rafael Muller, Michael W Dusenberry, Jasper Snoek, Balaji Lakshminarayanan, and Dustin Tran. Combining ensembles and data augmentation can harm your calibration. In International Conference on Learning Representations, 2021.
302
+ Ross Wightman. Pytorch image models. https://github.com/rwightman/pytorch-image-models, 2019.
303
+ Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollár, and Ross Girshick. Early convolutions help transformers see better. In Advances in Neural Information Processing Systems, 2021.
304
+ Saining Xie, Ross Girshick, Piotr Dollar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
305
+ Baosong Yang, Longyue Wang, Derek F Wong, Lidia S Chao, and Zhaopeng Tu. Convolutional self-attention networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.
306
+ Zhewei Yao, Amir Gholami, Kurt Keutzer, and Michael W Mahoney. Pyhessian: Neural networks through the lens of the hessian. In 2020 IEEE International Conference on Big Data (Big Data), 2020.
307
+
308
+ Dong Yin, Raphael Gontijo Lopes, Jon Shlens, Ekin Dogus Cubuk, and Justin Gilmer. A fourier perspective on model robustness in computer vision. In Advances in Neural Information Processing Systems, 2019.
309
+ Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, and Ping Li. Rethinking token-mixing mlp for mlp-based vision backbone. In BMVC, 2021.
310
+ Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang, Jiashi Feng, and Shuicheng Yan. Metaformer is actually what you need for vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022.
311
+ Kun Yuan, Shaopeng Guo, Ziwei Liu, Aojun Zhou, Fengwei Yu, and Wei Wu. Incorporating convolution designs into visual transformers. In International Conference on Computer Vision, 2021.
312
+ Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In International Conference on Computer Vision, 2019.
313
+ Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018.
314
+ Richard Zhang. Making convolutional networks shift-invariant again. In International Conference on Machine Learning, 2019.
315
+ Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
316
+
317
+ # A EXPERIMENTAL DETAILS
318
+
319
+ This section provides experimental details, e.g., setups and background information.
320
+
321
+ # A.1 SETUPS
322
+
323
+ We obtain the main experimental results from two sets of machines for CIFAR (Krizhevsky et al., 2009). The first set consists of an Intel Xeon W-2123 processor, 32GB memory, and a single GeForce RTX 2080 Ti; the other consists of four Intel Broadwell CPUs, 15GB memory, and a single NVIDIA T4. For ImageNet (Russakovsky et al., 2015), we use an AMD Ryzen Threadripper 3960X 24-core processor, 256GB memory, and four GeForce RTX 2080 Ti. NN models are implemented in PyTorch (Paszke et al., 2019).
324
+
325
+ We train NNs using categorical cross-entropy (NLL) loss and the AdamW optimizer (Loshchilov & Hutter, 2019) with an initial learning rate of $1.25 \times 10^{-4}$ and a weight decay of $5 \times 10^{-2}$. We also use a cosine annealing scheduler (Loshchilov & Hutter, 2017). NNs are trained for 300 epochs with a batch size of 96 on CIFAR, and a batch size of 128 on ImageNet. The learning rate is gradually increased (Goyal et al., 2017) for 5 epochs. Following Touvron et al. (2021), strong data augmentations—such as RandAugment (Cubuk et al., 2020), Random Erasing (Zhong et al., 2020), label smoothing (Szegedy et al., 2016), mixup (Zhang et al., 2018), and CutMix (Yun et al., 2019)—are used for training. Stochastic depth (Huang et al., 2016) is also used to regularize NNs. This DeiT-style configuration, which significantly improves performance (Steiner et al., 2021; Bello et al., 2021), is the de facto standard in ViT training (see, e.g., Heo et al. (2021); Liu et al. (2021)). Therefore, we believe the insights presented in this paper can be used widely. See the source code (https://github.com/xxxnell/how-do-vits-work) for detailed configurations.
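+
+ A condensed sketch of this recipe is given below. The hyperparameters are taken from the text, while the helper structure (the `configure_training` function, the warmup handling, and the label-smoothing value) is an assumption for illustration and may differ from the released code.
+
+ ```python
+ import torch
+ from torch.optim import AdamW
+ from torch.optim.lr_scheduler import CosineAnnealingLR
+
+ def configure_training(model, epochs=300, warmup_epochs=5, base_lr=1.25e-4, weight_decay=5e-2):
+     optimizer = AdamW(model.parameters(), lr=base_lr, weight_decay=weight_decay)
+     scheduler = CosineAnnealingLR(optimizer, T_max=epochs - warmup_epochs)
+     criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1)   # smoothing value assumed
+
+     def set_lr(epoch):
+         if epoch < warmup_epochs:                                 # linear warmup (Goyal et al., 2017)
+             for group in optimizer.param_groups:
+                 group["lr"] = base_lr * (epoch + 1) / warmup_epochs
+         else:
+             scheduler.step()                                      # cosine annealing afterwards
+
+     return optimizer, scheduler, criterion, set_lr
+ ```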
326
+
327
+ We mainly report the performances of ResNet-50, ViT-Ti, PiT-Ti, and Swin-Ti. Their training throughputs on CIFAR-100 are 320, 434, 364, and 469 image/sec, respectively, which are comparable to each other. Figures 5a and C.1a report the predictive performance of ResNeXt-50 (Xie et al., 2017), Twins-S (Chu et al., 2021), and MLP-Mixer-Ti (Tolstikhin et al., 2021). Figure E.1 additionally reports the performance of ConViT-Ti (d'Ascoli et al., 2021) and LeViT-128S (Graham et al., 2021). We use a patch size of $2 \times 2$ for ViT and PiT on CIFAR; for Swin, a patch size of $1 \times 1$ and a window size of $4 \times 4$ . We use a patch size of $4 \times 4$ for ViT only in Fig. 7. We halve the depth of the ViT in Fig. C.5 and Fig. C.6 due to the memory limitation.
328
+
329
+ All models for CIFAR, and ResNet, ViT, and AlterNet for ImageNet, are trained from scratch. We use pretrained PiT and Swin from Wightman (2019) for ImageNet. The implementations of Vision Transformers are based on Wightman (2019) and Wang (2021).
330
+
331
+ For the Hessian max eigenvalue spectrum (Park & Kim, 2022), $10\%$ of the training dataset is used. We use power iteration with a batch size of 16 to produce the top-5 largest eigenvalues. To this end, we use the implementation of Yao et al. (2020). We modify the algorithm to calculate the eigenvalues with respect to the $\ell_2$ regularized NLL on augmented training datasets. In the strict sense, weight decay is not $\ell_2$ regularization, but we neglect the difference.
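+
+ A minimal sketch of mini-batch power iteration for the largest Hessian eigenvalue is shown below; the paper relies on the PyHessian implementation of Yao et al. (2020), and computing the top-5 eigenvalues additionally requires deflation, which is omitted here.
+
+ ```python
+ import torch
+
+ def top_hessian_eigenvalue(loss, params, iters=20):
+     """loss: scalar mini-batch loss (l2-regularized NLL); params: list of model parameters."""
+     grads = torch.autograd.grad(loss, params, create_graph=True)   # differentiable gradients
+     v = [torch.randn_like(p) for p in params]
+     eigenvalue = 0.0
+     for _ in range(iters):
+         norm = torch.sqrt(sum((vi ** 2).sum() for vi in v))
+         v = [vi / norm for vi in v]
+         gv = sum((g * vi).sum() for g, vi in zip(grads, v))        # g^T v
+         hv = torch.autograd.grad(gv, params, retain_graph=True)    # Hessian-vector product H v
+         eigenvalue = sum((h * vi).sum() for h, vi in zip(hv, v)).item()  # Rayleigh quotient v^T H v
+         v = [h.detach() for h in hv]
+     return eigenvalue
+ ```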
332
+
333
+ For the Fourier analysis and the feature map variance experiment, the entire test dataset is used. We report the amplitudes and the variances averaged over the channels.
334
+
335
+ # A.2 BACKGROUND INFORMATION
336
+
337
+ Below are the preliminaries and terms of our experiments.
338
+
339
+ Test error and training NLL. We report test errors on clean test datasets and training NLLs on augmented training datasets in experiments, e.g., Fig. 5 and Fig. C.1. NLL is an appropriate metric for evaluating convergence on a training dataset because an NN optimizes NLL. In addition, it is the most widely used proper scoring rule, indicating both accuracy and uncertainty. To represent predictive performance on a test dataset, we use a well-known metric: error. Although NLL can also serve the same purpose, the results are consistent even when NLL is employed.
340
+
341
+ If an additional inductive bias or a learning technique improves the performance of an NN, it is either a method that helps the NN learn "strong representations", or a method that "regularizes" it.
342
+
343
+ An improved—i.e., lower—training NLL suggests that this bias or technique helps the NN learn strong representations. Conversely, a compromised training NLL indicates that the bias or technique regularizes the NN. Likewise, we say that "an NN overfits a training dataset" when a test error is compromised as the training NLL is improved.
344
+
345
+ Hessian max eigenvalue spectrum. Park & Kim (2022) proposed the "Hessian max eigenvalue spectrum", a feasible method for visualizing Hessian eigenvalues of large-sized NNs for real-world problems. It calculates and gathers the top-$k$ Hessian eigenvalues by applying power iteration mini-batch-wise. Ghorbani et al. (2019) visualized the Hessian eigenvalue spectrum by using the Lanczos quadrature algorithm on the full batch. However, this is not feasible for practical NNs because the algorithm requires a lot of memory and computing resources.
346
+
347
+ A good loss landscape is a flat and convex loss landscape. Hessian eigenvalues indicate the flatness and convexity of losses. The magnitude of Hessian eigenvalues shows sharpness, and the presence of negative Hessian eigenvalues shows non-convexity. Based on these insights, we introduce a negative max eigenvalue proportion (NEP, the lower the better) and an average of positive max eigenvalues (APE, the lower the better) to quantitatively measure the non-convexity and the sharpness, respectively. For a Hessian max eigenvalue spectrum $p(\lambda)$ , NEP is the proportion of negative eigenvalues $\int_{-\infty}^{0} p(\lambda) d\lambda$ , and APE is the expected value of positive eigenvalues $\int_{0}^{\infty} \lambda p(\lambda) d\lambda / \int_{0}^{\infty} p(\lambda) d\lambda$ . We use these metrics in Fig. C.5 and Fig. C.6.
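+
+ For clarity, a direct sketch of these two metrics, computed from the maximum eigenvalues gathered over mini-batches (whose empirical distribution is the spectrum $p(\lambda)$), is:
+
+ ```python
+ import numpy as np
+
+ def nep_and_ape(max_eigenvalues):
+     """max_eigenvalues: 1-D array of top Hessian eigenvalues gathered over mini-batches."""
+     lam = np.asarray(max_eigenvalues)
+     nep = (lam < 0).mean()              # proportion of negative eigenvalues (non-convexity)
+     ape = lam[lam > 0].mean()           # average of positive eigenvalues (sharpness)
+     return nep, ape
+ ```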
348
+
349
+ Note that measuring loss landscapes and Hessian eigenvalues without considering a regularization on clean datasets would lead to incorrect results, since NN training optimizes $\ell_2$ regularized NLL on augmented training datasets—not NLL on clean training datasets. We visualize loss landscapes and Hessian eigenvalues with respect to “ $\ell_2$ regularized NLL loss” on “augmented training datasets”.
350
+
351
+ Fourier analysis of feature maps. Following Park & Kim (2022), we analyze feature maps in Fourier space to demonstrate that MSAs are low-pass filters, as shown in Fig. 2, Fig. 8, and Fig. D.2. The Fourier transform converts feature maps into the two-dimensional frequency domain. We represent the converted feature maps on a normalized frequency domain, so that the highest-frequency components are at $f = \{-\pi, +\pi\}$, and the lowest-frequency components are at $f = 0$. We only provide the half-diagonal components for better visualization. In Fig. 8, we report the amplitude ratio of high-frequency components to low-frequency components by using $\Delta \log$ amplitude, the difference in log amplitude at $f = \pi$ and $f = 0$. Yin et al. (2019) also analyzed the robustness of NNs from a Fourier perspective, but their research focused on input images—not feature maps—in Fourier space.
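+
+ A minimal sketch of this quantity for a single feature map is given below; reducing the spectrum to its center and a corner is a simplification of the half-diagonal analysis and is an assumption made for illustration.
+
+ ```python
+ import torch
+
+ def delta_log_amplitude(feat: torch.Tensor) -> float:
+     """feat: feature map of shape (B, C, H, W); amplitude averaged over batch and channels."""
+     f = torch.fft.fftshift(torch.fft.fft2(feat), dim=(-2, -1))
+     amp = f.abs().mean(dim=(0, 1))                 # (H, W) amplitude map, zero frequency at center
+     h, w = amp.shape
+     low = amp[h // 2, w // 2]                      # amplitude at f = 0
+     high = amp[0, 0]                               # amplitude near f = pi on the diagonal
+     return (torch.log(high) - torch.log(low)).item()
+ ```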
352
+
353
+ # B MSAS BEHAVE LIKE SPATIAL SMOOTHINGS
354
+
355
+ As mentioned in Section 1.1, spatial smoothings before subsampling layers help CNNs see better (Zhang, 2019; Park & Kim, 2022). Park & Kim (2022) showed that such improvement in performance is possible due to spatial ensembles of feature map points. To this end, they used the (Bayesian) ensemble average of predictions for proximate data points (Park et al., 2021), which exploits data uncertainty (i.e., a distribution of feature maps) as well as model uncertainty (i.e., a posterior probability distribution of NN weights):
356
+
357
+ $$
358
+ p\left(\boldsymbol{z}_{j} \mid \boldsymbol{x}_{j}, \mathcal{D}\right) \simeq \sum_{i} \pi\left(\boldsymbol{x}_{i} \mid \boldsymbol{x}_{j}\right) p\left(\boldsymbol{z}_{j} \mid \boldsymbol{x}_{i}, \boldsymbol{w}_{i}\right) \tag{2}
359
+ $$
360
+
361
+ where $\pi(\boldsymbol{x}_i|\boldsymbol{x}_j)$ is the normalized importance weight of a feature map point $\boldsymbol{x}_i$ with respect to another feature map point $\boldsymbol{x}_j$, i.e., $\sum_i \pi(\boldsymbol{x}_i|\boldsymbol{x}_j) = 1$. This importance is defined as the similarity between $\boldsymbol{x}_i$ and $\boldsymbol{x}_j$. $p(\boldsymbol{z}_j|\boldsymbol{x}_i, \boldsymbol{w}_i)$ and $p(\boldsymbol{z}_j|\boldsymbol{x}_j, \mathcal{D})$ stand for the NN prediction and the output predictive distribution, respectively. $\boldsymbol{w}_i$ is an NN weight sample from the posterior $p(\boldsymbol{w}|\mathcal{D})$ with respect to the training dataset $\mathcal{D}$. Put shortly, Eq. (2) spatially complements a prediction with other predictions based on similarities between data points. For instance, a $2 \times 2$ box blur spatially ensembles four neighboring feature map points, each with the same importance of $1/4$.
362
+
363
+ We note that the formulations of self-attention and of ensemble averaging for proximate data points are identical. The Softmax term and $\mathbf{V}$ in Eq. (1) exactly correspond to $\pi (\pmb {x}_i|\pmb {x}_j)$ and $p(z_{j}|\pmb{x}_{i},\pmb{w}_{i})$ in Eq. (2). The weight samples in Eq. (2) correspond to the multi-heads of MSAs (see also Hron et al. (2020)).
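+
+ A toy sketch of this correspondence is given below: a box blur is Eq. (2) with fixed, uniform importance weights, while a simplified, single-head, projection-free self-attention uses data-dependent Softmax similarities as the importance weights. The tensor shapes are illustrative assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ x = torch.randn(1, 8, 16, 16)                       # feature map (B, C, H, W)
+
+ # Spatial smoothing: uniform-importance ensemble of neighboring feature map points.
+ blurred = F.avg_pool2d(x, kernel_size=2, stride=1)  # each output is a 2x2 ensemble, weights 1/4
+
+ # Self-attention (simplified; real MSAs project queries, keys, and values per head).
+ tokens = x.flatten(2).transpose(1, 2)               # (B, N, C) with N = H * W
+ scores = tokens @ tokens.transpose(1, 2) / tokens.shape[-1] ** 0.5
+ pi = scores.softmax(dim=-1)                         # importance weights pi(x_i | x_j), rows sum to 1
+ attended = pi @ tokens                              # importance-weighted ensemble, as in Eq. (2)
+ ```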
364
+
365
+ ![](images/d794386f4ba84972745d480f146049d27bdf98a70d3318398f00804de1256db7.jpg)
366
+ (a) CIFAR-10
367
+
368
+ ![](images/5b12ca417583c7b37e539fbdd1c01a55e34f4386fea01daa56f094d1d529bb99.jpg)
369
+ (b) ImageNet
370
+ Figure C.1. The lower the training NLL, the lower the test error. "R" is ResNet and "RX" is ResNeXt. Left: In small data regimes, such as CIFAR-10 and CIFAR-100 (Fig. 5a), the cons of MSAs outweigh their pros; i.e., the non-convex losses disturb ViT optimization. Therefore, CNNs outperform ViTs. Right: Large datasets convexify the loss functions. Therefore, the pros of MSAs outweigh their cons in large data regimes; i.e., MSAs help NNs learn strong representations by flattening the loss landscapes. Therefore, some ViTs outperform CNNs.
371
+
372
+ Likewise, the properties of spatial smoothing (Park & Kim, 2022) are the same as those of MSAs: $①$ Spatial smoothing improves the accuracy of CNNs. In addition, spatial smoothing predicts well-calibrated uncertainty. $②$ Spatial smoothing is robust against MC dropout (which is equivalent to image occlusion), data corruption, and adversarial attacks, and is particularly robust against high-frequency noise. $③$ Spatial smoothing layers closer to the output layer significantly improve the predictive performance.
373
+
374
+ Taking all these observations together, we provide an explanation of how MSAs work by treating them as a general form of spatial smoothing or as an implementation of ensemble averaging for proximate data points. Spatial smoothing improves performance in the following ways (Park & Kim, 2022): ① Spatial smoothing helps NN optimization by flattening the loss landscapes. Even a small $2 \times 2$ box blur filter significantly improves performance. ② Spatial smoothing is a low-pass filter. CNNs are vulnerable to high-frequency noise, but spatial smoothing improves robustness against such noise by significantly attenuating it. ③ Spatial smoothing is effective when applied at the end of a stage because it aggregates all transformed feature maps. This paper empirically shows that these mechanisms also apply to MSAs.
375
+
376
+ Concurrent works also suggest that MSA blocks behave like spatial smoothing. Wang et al. (2022) proved that a Softmax-normalized matrix is a low-pass filter, although this does not guarantee that MSA blocks will behave like low-pass filters. Yu et al. (2022) demonstrated that the MSA layers of ViT can be replaced with non-trainable average pooling layers.
377
+
378
+ # C VITs FROM A LOSS LANDSCAPE PERSPECTIVE
379
+
380
+ This section provides further explanations of the analysis in Section 2.
381
+
382
+ The lower the NLL on the training dataset, the lower the error on the test dataset. Figure 5a demonstrates that low training NLLs result in low test errors on CIFAR-100. The same pattern can be observed on CIFAR-10 and ImageNet as shown in Fig. C.1.
383
+
384
+ In small data regimes, such as CIFAR-10 (Fig. C.1a) and CIFAR-100 (Fig. 5a), both the error and the $\mathrm{NLL}_{\mathrm{train}}$ of ViTs are inferior to those of CNNs. This suggests that the cons of MSAs outweigh their pros. As discussed in Fig. 4, ViTs suffer from non-convex losses, and these non-convex losses disturb ViT optimization.
385
+
386
+ In large data regimes, such as ImageNet (Fig. C.1b), both the error and the $\mathrm{NLL}_{\mathrm{train}}$ of ViTs with local MSAs are superior to those of CNNs. Since large datasets convexify the loss functions as discussed in Fig. 4, the pros of MSAs outweigh their cons. Therefore, MSAs help NNs learn strong representations by flattening the loss landscapes.
387
+
388
+ ![](images/924b77aeec6dc46c88df7ee457625ea46d7a674afb01d2b4bdd96bd178825e74.jpg)
389
+ (a) Error and $\mathrm{NLL}_{\mathrm{train}}$ of ViT for patch size
390
+
391
+ ![](images/4f27b9ec96151544cf03faf8a44df6d58fd3a25306df2678dfc53637ad2374d2.jpg)
392
+ Figure C.2. A small patch size does not guarantee better performance. We analyze ViTs with three embedded patch sizes: $2 \times 2$, $4 \times 4$, and $8 \times 8$. Note that every MSA has a global receptive field. Left: As expected, a large patch size harms the performance, but surprisingly, the same is observed for a small patch size. Right: A small patch size, or a weak inductive bias, produces negative eigenvalues. This is another piece of evidence that a weak inductive bias hinders NN optimization. On the other hand, MSAs with a small patch size reduce the magnitude of eigenvalues because they ensemble a large number of feature map points. Performance is optimized when these two effects are balanced.
393
+
394
+ ![](images/e423f80849c029e52cae1981827b1201c2576fe0355b446a130e08d41045db69.jpg)
395
+ (b) Negative and positive Hessian max eigenvalue spectra in early phase of training
396
+
397
+ Rigorous discussion on the regularization of CNN's inductive bias. In Fig. 5a, we compare models of similar sizes, such as ResNet-50 and ViT-Ti. Through such comparison, we show that a weak inductive bias hinders NN training, and that inductive biases of CNNs—inductive bias of Convs and multi-stage architecture—help NNs learn strong representations. However, inductive biases of CNNs produce better test accuracy for the same training NLL, i.e., Convs somewhat regularize NNs. We analyze two comparable models in terms of $\mathrm{NLL}_{\mathrm{train}}$ on CIFAR-100. The $\mathrm{NLL}_{\mathrm{train}}$ of ResNet-18, a model smaller than ResNet-50, is 2.31 with an error of $22.0\%$ . The $\mathrm{NLL}_{\mathrm{train}}$ of ViT-S, a model larger than ViT-Ti, is 2.17 with an error of $30.4\%$ . In summary, the inductive biases of CNNs improve accuracy for similar training NLLs.
398
+
399
+ Most of the improvements come from the multi-stage architecture, not the inductive bias of Convs. The $\mathrm{NLL}_{\mathrm{train}}$ of PiT-Ti, a multi-stage ViT-Ti, is 2.29 with an error of $24.1\%$. The accuracy of PiT is only 1.9 percentage points lower than that of ResNet. In addition, the small receptive field also regularizes ViT. See Fig. 7.
400
+
401
+ ViT does not overfit a small training dataset even with a large number of epochs. Figure 5b shows that ViT does not overfit small training datasets, such as CIFAR. The same phenomenon can be observed in ViT training with a large number of epochs.
402
+
403
+ In Fig. C.3, we train ViT and ResNet for 75, 150, 300, 600, and 1200 epochs. Results show that both $\mathrm{NLL}_{\mathrm{train}}$ and error decrease as the number of epochs increases. The predictive performances of ViT are inferior to those of ResNet across all ranges of epochs.
404
+
405
+ ![](images/ec2ba8a6ffcf4862b375aa1964b109449ee7347a7e4669bb8976ca6f5715269c.jpg)
406
+ Figure C.3. A large number of epochs does not make ViT overfit the training dataset of CIFAR. Solid line is the predictive performance of ViT and dashed line is that of ResNet.
407
+
408
+ A smaller patch size does not always imply better results. ViT splits an image into multiple patches. The smaller the patch size, the greater the flexibility of expression and the weaker the inductive bias. By analyzing ViTs with three patch sizes—$2 \times 2$, $4 \times 4$, and $8 \times 8$—we demonstrate once again that a weak inductive bias disturbs NN optimization.
409
+
410
+ Figure C.2a shows the error on the test dataset and the NLL on the training dataset of CIFAR-100. As expected, a large patch size harms the performance on both datasets. Surprisingly, however, a small patch size also shows the same result. As such, appropriate patch sizes help ViT learn strong representations and do not regularize ViT.
411
+
412
+ The Hessian max eigenvalue spectra in Fig. C.2b explain this observation. Results reveal that a small patch size reduces the magnitude of Hessian eigenvalues but produces negative Hessian eigenvalues.
413
+
414
+ ![](images/cdb91a6e92bada5b4fa8d033e674106ee5a23e1e6ea463307ab5305c6eef9539.jpg)
415
+
416
+ Figure C.4. A multi-stage architecture (in PiT) and a local MSA (in Swin) also flatten the loss landscapes. Top: PiT has a flatter loss landscape than ViT near the optimum. Swin has an almost perfectly smooth parabolic loss landscape, which leads to better NN optimization. Bottom: A multi-stage architecture in PiT suppresses negative Hessian eigenvalues. A local MSA in Swin produces negative eigenvalues, but significantly reduces the magnitude of eigenvalues.
417
+ ![](images/ff782fddfba1515763860bab86747500e706f56052b32aa251134be043f37a60.jpg)
418
+ (b) Negative and positive Hessian max eigenvalue spectra in early phase (left) and late phase (right) of training
419
+
420
+ In other words, this weak inductive bias makes loss landscapes flat yet non-convex. A large patch size suppresses negative eigenvalues; on the other hand, it not only limits the expressiveness of the model but also sharpens the loss landscapes. Performance is optimized when these two effects are balanced.
421
+
422
+ A multi-stage architecture in PiT and a local MSA in Swin also flatten loss landscapes. As explained in Fig. 1, an MSA smoothens loss landscapes. Similarly, the multi-stage architecture in PiT and the local MSA in Swin also help NNs learn strong representations by smoothing the loss landscapes.
423
+
424
+ Figure C.4 provides loss landscape visualizations and Hessian eigenvalue spectra of ResNet, ViT, PiT, and Swin. Figure C.4a visualizes the global geometry of the loss functions. The loss landscape of PiT is flatter than that of ViT near the optimum. Since Swin has more parameters than ViT and PiT, $\ell_2$ regularization determines its loss landscape. All the loss surfaces of ViTs are smoother than that of ResNet. Figure C.4b shows the local geometry of the loss functions by using Hessian eigenvalues. In the early phase of training, the multi-stage architecture in PiT helps training by suppressing negative Hessian eigenvalues. The local MSA in Swin produces negative eigenvalues, but significantly reduces the magnitude of the eigenvalues. Moreover, the magnitude of Swin's Hessian eigenvalues does not significantly increase in the late phase of training.
425
+
426
+ A lack of heads may lead to non-convex losses. The neural tangent kernel (NTK) (Jacot et al., 2018) theoretically implies that the loss landscape of a ViT is convex and flat when the number of heads or the number of embedding dimensions per head goes to infinity (Hron et al., 2020; Liu et al., 2020). In particular, Liu et al. (2020) suggest that $\|H\| \simeq \mathcal{O}(1 / \sqrt{m})$, where $\|H\|$ is the Hessian spectral norm and $m$ is the number of heads or the number of embedding dimensions per head. Therefore, in practical situations, insufficient heads may cause non-convex and sharp losses.
427
+
428
+ Fig. C.5 empirically shows that a large number of heads in MSAs convexifies and flattens the loss landscapes (cf. Michel et al. (2019)). In this experiment, we use NEP and APE to measure the non-convexity and the sharpness, as introduced in Appendix A.2. Results show that both NEP and APE decrease as the number of heads increases. Likewise, Fig. C.6 shows that high embedding dimensions per head also convexify and flatten the losses. The exponents of APE are $-0.562$ for the number of heads and $-0.796$ for the number of embedding dimensions, which are in close agreement with the theoretically predicted value of $-1/2$.
429
+
430
+ ![](images/ff237ef5abd692af0e530d8cda2b937c5d86a75e814fb3992814c8305abd5cd7.jpg)
431
+ (a) NEP (non-convexity) and APE (sharpness) of head number
432
+
433
+ ![](images/385bbfce1ef282a61c6161be015630342a17651c704d89d6c35082092182c903.jpg)
434
+ (b) Hessian negative and positive max eigenvalue spectra in early phase of training
435
+
436
+ ![](images/df6c20715cfa648105a709e8790203942e6d58b865203f7302b9d44522625d0b.jpg)
437
+ Figure C.5. Multi-heads convexify and flatten loss landscapes. Left: We use the negative max eigenvalue proportion (NEP) and the average of positive max eigenvalues (APE) to quantify, respectively, the non-convexity and sharpness of loss landscapes. As the number of heads increases, loss landscapes become more convex and flatter. Right: Hessian max eigenvalue spectra also show that multi-heads suppress negative eigenvalues and reduce the magnitude of the eigenvalues.
438
+
439
+ ![](images/a38d6c21aba2a4cdc88c4067cd1edbdc46137fa3b590e33e50deb928d6f1431e.jpg)
440
+ (a) NEP (non-convexity) and APE (sharpness) of embedding dim
441
+ (b) Hessian negative and positive max eigenvalue spectra in early phase of training
442
+ Figure C.6. High embedding dimensions per head convexify and flatten the loss landscape. Left: As the number of embedding dimensions per head increases, loss landscapes become more convex and flatter. Right: Hessian max eigenvalue spectra also show that high embedding dimensions suppress negative eigenvalues and reduce the magnitude of the eigenvalues, as shown in Fig. C.5.
443
+
444
+ Large models have a flat loss in the early phase of training. Figure C.7 analyzes the loss landscapes of large models, such as ResNet-101 and ViT-S. As shown in Fig. C.7a, large models explore low NLLs. This can be surprising because the loss landscapes of large models are globally sharp, as shown in Fig. C.7b.
445
+
446
+ The Hessian eigenvalue spectra in Fig. C.7c resolve this apparent contradiction: the Hessian eigenvalues of large models are smaller than those of small models in the early phase of training. This indicates that large models have locally flat loss functions.
447
+
448
+ # D VITs FROM A FEATURE MAP PERSPECTIVE
449
+
450
+ This section provides further explanations of the analysis in Section 3 and Section 4.1.
451
+
452
+ MSAs in PiT and Swin also ensemble feature maps. In Fig. 9, we show that MSAs in ViT reduce feature map variances. The same pattern can be observed in PiT and Swin. Figure D.1 demonstrates that MSAs in PiT and Swin also reduce the feature map variances, suggesting that they also ensemble feature maps. One exception is the $3^{\mathrm{rd}}$ stage of Swin: MSAs suppress the increase in variance at the beginning of the stage, but not at the end of the stage.
453
+
454
+ MSAs in PiT and Swin are also low-pass filters. As discussed in Fig. 8, MSAs in ViTs are low-pass filters, while MLPs in ViT and Convs in ResNet are high-pass filters. Likewise, we demonstrate that MSAs in PiT and Swin are also low-pass filters.
455
+
456
+ ![](images/fd3f4a4d6ea74cddb6c397a35ffdc611ecf6549501fb959b491d52a54b8c080e.jpg)
457
+ ResNet-50
458
+
459
+ ![](images/c82269df2324e20a2f2b6aeda0d669d41550175d94d4978be4214a99f93935fb.jpg)
460
+ ResNet-101
461
+
462
+ ![](images/4d5cd5a596a32ff1f704bb534d0b115ae26bbd306fed38d58c7f5c0474477d9a.jpg)
463
+ (a) NLL landscape visualizations
464
+
465
+ ![](images/5b56c326dbdf6b2ca49a45ec86e558d945eb55f4701e97a5482b72e8cc276fb0.jpg)
466
+ ViT-Ti
467
+ ViT-S
468
+
469
+ ![](images/f94a7e9559f56975d41fad1e8df205a6b478cbc05d93731347abf296dc384da2.jpg)
470
+ ResNet-50
471
+
472
+ ![](images/b51580596d810203406faab10547a6b2e53ec66e4ceeaec6d0ab85940f91aa85.jpg)
473
+ ResNet-101
474
+
475
+ ![](images/19eda21616be7ee88a8d23c2a1f18e7345bf7b0925cfadc3bcbcbbe534b5957f.jpg)
476
+ ViT-Ti
477
+
478
+ ![](images/88ff688ebf37c017e626db1a97d676a6a43cbbbe1edd9b823a700d8c8c3f1164.jpg)
479
+ ViT-S
480
+ (b) Loss $(\mathrm{NLL} + \ell_2)$ landscape visualizations
481
+
482
+ ![](images/69b5ccbb18b6573bbb61bee89baafa025508e218246459a1dd42ef71b98fee97.jpg)
483
+ (c) Negative and positive Hessian max eigenvalue spectra in early phase (left) and late phase (right) of training
484
+
485
+ ![](images/9fd6caef34e30eaf695c07ba57825a272abb05471ac250ad7c3e1b3ab3ebc174.jpg)
486
+ Figure C.7. Loss landscapes of large models. ResNet-50 and ResNet-101 are comparable to ViT-Ti and ViT-S, respectively. Top: Large models explore low NLLs. Middle: Loss landscape visualizations show that the global geometry of large models is sharp. Bottom: The Hessian eigenvalues of large models are smaller than those of small models. This suggests that large models have a flat local geometry in the early phase of training, and that this flat loss helps NNs learn strong representations. In the late phase of training, large ViTs have flat minima while large ResNet has a sharp minimum.
487
+
488
+ ![](images/86b9b0706aeb1173821fab045de6dedf4d745ffa9c7aff6bd6f5e314cac13fed.jpg)
489
+
490
+ ![](images/4acddadbce506aef9ddf82cdf990fb16fef3076d7dd8eae88a69000691fec387.jpg)
491
+
492
+ ![](images/77ef2a93e406b0a68d1a0c1870743a1abfa875c21a937e5972ca983f572a4b58.jpg)
493
+
494
+ ![](images/fffeeef0de000018aa8a0c2c36767ee3c6ba19e284c3b1fb1dbb63b2bd53adb9.jpg)
495
+
496
+ ![](images/11c475408b1ac610859ec48768fa40ffd06d1c2ee0ad11a1f12e612cc23fcfbe.jpg)
497
+ Figure D.1. MSAs in PiT and Swin also reduce feature map variance except in $3^{\mathrm{rd}}$ stage of Swin. White, gray, and blue areas are Conv/MLP, MSA, and subsampling layers, respectively.
498
+
499
+ ![](images/5971665d93231f3996f96cbfd6f498de4660d48c7b000f741efc41dc8723deab.jpg)
500
+
501
+ Figure D.2 shows the relative log amplitude of Fourier transformed feature maps. As in the case of ViT, MSAs in PiT and Swin generally decrease the amplitude of high-frequency signals; in contrast, MLPs increase the amplitude.
502
+
503
+ Multi-stage ViTs have block structures. Feature map similarities of CNNs show a block structure (Nguyen et al., 2021). As Raghu et al. (2021) pointed out, ViTs have uniform representations across all layers. By investigating multi-stage ViTs, we demonstrate that subsampling layers create the characteristic block structure of the representation. See Fig. D.3.
504
+
505
+ Convs at the beginning of a stage and MSAs at the end of a stage play an important role. Figure D.4 shows the results of a lesion study for ResNet and ViTs. In this experiment, we remove one $3 \times 3$ Conv layer from the bottleneck block of a ResNet, and one MSA or MLP block from ViTs. Consistent results can be observed for all models: Removing Convs at the beginning of a stage and MSAs at the end of a stage significantly harm accuracy. As a result, the accuracy varies periodically.
506
+
507
+ # E EXTENDED INFORMATION OF ALTERNET
508
+
509
+ This section provides further information on AlterNet.
510
+
511
+ Detailed architecture of AlterNet. Section 4 introduces AlterNet to harmonize Convs with MSAs. Since most MSAs take pre-activation arrangements, pre-activation ResNet is used as a baseline for consistency. We add one CNN block to the last stage of ResNet to make the number of blocks even. A local MSA with relative positional encoding from Swin is used for AlterNet. However, for simplicity of implementation, we do not implement detailed techniques, such as the cyclic shift and layer-specific initialization. For CIFAR, the patch size of the MSA is $1 \times 1$ and the window size is $4 \times 4$. If all Conv blocks are alternately replaced with MSAs, AlterNet becomes a Swin-like model.
512
+
513
+ In order to achieve better performance, NNs should strongly aggregate feature maps at the end of the model, as discussed in Section 3 and Section 4. To this end, AlterNet uses 3, 6, 12, and 24 heads for MSAs in stages 1 to 4, respectively.
514
+
515
+ ![](images/c843a3bb56fa4d3fe1fec4d092c3f763b693ad13d2b46ea6048080b02972b4fa.jpg)
516
+
517
+ ![](images/7b9a78f87ce31e68f03ce9111335ff93876dec9a2c52a73e305bf9cde0900920.jpg)
518
+
519
+ ![](images/09736957012b6add9ce204776780389ad4cf35cf03962eb800489304ba3c185a.jpg)
520
+
521
+ ![](images/aa1b23e0f3b8d98cedbc62a19e97c12cc9d7e3e844f13599be94fe85a891cdc6.jpg)
522
+
523
+ ![](images/bd1fc5a133d5422ece42c173acad68399a492a1fec77e11c5204f8bb340ea74b.jpg)
524
+
525
+ ![](images/2f49a6a2b0cf4d6b2985625926138092f902077762b440c238b0e383b75d5364.jpg)
526
+
527
+ ![](images/0a9934be7ed89fed5be47b505034910d0fa034ce8f608ee88c3685cde866314a.jpg)
528
+
529
+ ![](images/252a90d8bd7f79617c3e79dabf74ffc2d1d918d343421b3fe5d9f868d7cd2066.jpg)
530
+
531
+ ![](images/535ce0c00dcedbc76267ba5e5939c688aedfa9aed6eae49389d49541e328a5a6.jpg)
532
+ Figure D.2. MSAs in PiT and Swin also reduce high-frequency signals. Left: $\Delta$ log amplitude of Fourier transformed feature map. We only provide the diagonal components. Right: The high-frequency $(1.0\pi)$ $\Delta$ log amplitude. White, gray, and blue areas are Conv/MLP, MSA, and subsampling layers, respectively.
533
+
534
+ ![](images/d7c9d38c0fbca47c5d0a163d7e253d1016e14c588bc73912f252db278183bd6d.jpg)
535
+ Figure D.3. Multi-stage ViTs have block structures in representational similarities. Block structures can be observed in all multi-stage NNs, namely, ResNet, PiT, and Swin. "E" is the stem/embedding and "P" is the pooling (subsampling) layer.
536
+
537
+ ![](images/23373a744f648f1d463c095a0852c96683d231588adaa48ed04e98e4f7d1d15d.jpg)
538
+
539
+ ![](images/964fd0e32e20ddd693cec3cba8b79094c98c5f4b3e73d420cbf507990b10777a.jpg)
540
+
541
+ ![](images/74a8dc6cc282fd29de308e48d7f5a47efe37b65a0fd04828cc97bfcc7bb5207d.jpg)
542
+
543
+ ![](images/516cf02cfd81f7c742967aeea24da4e8ad97a96fa7eea95e291e5700813bad59.jpg)
544
+
545
+ ![](images/0febf0db23a585b6cbc1a50041712c01a2a40f2a4d0875ed1d579a9abf000418.jpg)
546
+
547
+ ![](images/bc112851174733f389a638f066316bc3bcc42591f6a2b4e4767c1adf42e76570.jpg)
548
+ Figure D.4. Lesion study shows that Convs at the beginning of a stage and MSAs at the end of a stage are important for prediction. We measure the decrease in accuracy after removing one unit from the trained model. In this experiment, we can observe that accuracy changes periodically. The white, gray, and blue areas are Convs/MLPs, MSAs, and subsampling layers, respectively.
549
+
550
+ ![](images/49831f84ac818eec35c69cac287467b1e2568e6e6fd92adffa550560eeaedf6b.jpg)
551
+
552
+ ![](images/22082322bb58ef3e371a010d63fa90941af2538d32ad092a43549a1ae109a55b.jpg)
553
+ Figure D.5. Detailed architecture of Alter-ResNet-50 for ImageNet-1K. The white, gray, and blue blocks represent Convs, MSAs, and subsampling blocks, respectively. This model alternately replaces Conv blocks with MSA blocks from the end of a stage. Following Swin, MSAs in stages 1 to 4 use 3, 6, 12, and 24 heads, respectively. We use 6 MSA blocks for ImageNet since large amounts of data alleviate the drawbacks of MSAs. See Fig. 11 for comparison with the model for CIFAR-100, which uses 4 MSA blocks.
554
+
555
+ ![](images/83717962fd8432691b4f4658ffdf851c15200e5228095000ccd1333607350c79.jpg)
556
+ (a) Reliability diagram
557
+
558
+ ![](images/6b78b16b731ae4a50479588de9fe4cde6b639520cc9d3141914c911bd7e35231.jpg)
559
+ Figure F.1. Distinctive properties of strong data augmentation. "Aug" stands for strong data augmentation. Left: Strong data augmentation makes predictions underconfident on CIFAR-100. The same phenomenon can be observed on ImageNet-1K. Right: Strong data augmentation significantly reduces the magnitude of Hessian max eigenvalues. This means that the data augmentation helps NNs converge to better optima by flattening the loss landscapes. On the other hand, strong data augmentation produces a lot of negative Hessian eigenvalues, i.e., it makes the losses non-convex.
560
+
561
+ ![](images/a4128a206b731c20d73c3b8ff75ea74b1165392e18988d7729f95fc352441fc0.jpg)
562
+ (b) Hessian max eigenvalue spectrum
563
+
564
+ The computational costs of Conv blocks and MSA blocks are almost identical. The training throughput of Alter-ResNet-50 is 473 images/sec on CIFAR-100, which is $20\%$ faster than that of pre-activation ResNet-50.
565
+
566
+ The optimal number of MSAs depends on the model and dataset, so we empirically determine the number of MSAs as shown in Fig. 12a. A large dataset allows a large number of MSAs. For ImageNet, we use 6 MSAs as shown in Fig. D.5, because a large dataset alleviates the shortcomings of MSAs.
567
+
568
+ ![](images/8750fb570a51e6b4b91524d1814646cc424e6e9186a8ab835770a4709f5fc6df.jpg)
569
+ Figure E.1. MSA with the appropriate build-up rules significantly improves ResNet on ImageNet. Robustness is mean accuracy on ImageNet-C. "RX" is ResNeXt.
570
+
571
+ MSAs improve the performance of CNNs on ImageNet. Since MSAs complement Convs, MSAs improve the predictive performance of CNNs when appropriate build-up rules are applied as shown in Section 4.1. Figure E.1 illustrates the accuracy and robustness—mean accuracy on ImageNet-C—of CNNs and ViTs on ImageNet-1K. Since ImageNet is a large dataset, a number of ViTs outperform CNNs. MSAs with the appropriate build-up rules
572
+
573
+ significantly improve ResNet, and the predictive performance of AlterNet is on par with that of Swin in terms of accuracy, without heavy modifications such as the shifted windowing scheme (Liu et al., 2021). AlterNet is easy to implement and has strong potential for future improvements. In addition, the build-up rules improve not only ResNet but also other NNs, e.g., vanilla post-activation ResNet and ResNeXt; we do not report this observation in order to keep the visualization simple.
574
+
575
+ # F DISTINCTIVE PROPERTIES OF DATA AUGMENTATION
576
+
577
+ This section empirically demonstrates that NN training with data augmentation is different from training on large datasets. We compare DeiT-style strong data augmentation with weak data augmentation, i.e., resize and crop. In this section, "a result without data augmentation" stands for "a result only with weak data augmentation".
578
+
579
+ # F.1 DATA AUGMENTATION CAN HARM UNCERTAINTY CALIBRATION
580
+
581
+ Figure F.1a shows a reliability diagram of NNs with and without strong augmentation on CIFAR-100. Here, both ResNet and ViT without data augmentation (i.e., only with weak data augmentation) predict overconfident results. We show that strong data augmentation makes the predictive results under-confident (cf. Wen et al. (2021)). These are unexpected results because the predictions without data augmentation on large datasets, such as ImageNet, are not under-confident. A detailed investigation remains for future work.
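A reliability diagram like Fig. F.1a can be reproduced by binning the maximum softmax confidence and comparing per-bin confidence with per-bin accuracy; the sketch below assumes logits and labels have already been collected.

```python
import numpy as np

def reliability_bins(logits: np.ndarray, labels: np.ndarray, n_bins: int = 15):
    """logits: (N, K), labels: (N,). Returns per-bin (confidence, accuracy, count).
    Bins with accuracy below confidence indicate overconfidence; accuracy above
    confidence indicates underconfidence."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            stats.append((conf[mask].mean(), correct[mask].mean(), int(mask.sum())))
    return stats
```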
582
+
583
+ # F.2 DATA AUGMENTATION REDUCES THE MAGNITUDE OF HESSIAN EIGENVALUES
584
+
585
+ How does data augmentation help an MSA avoid overfitting on a training dataset and achieve better accuracy on a test dataset? Figure F.1b shows the Hessian max eigenvalue spectrum of NNs with and without strong data augmentation. First of all, strong data augmentation reduces the magnitude of Hessian eigenvalues, i.e., data augmentation flattens the loss landscapes in the early phase of training. These flat losses lead to better generalization. On the other hand, strong data augmentation produces a lot of negative Hessian eigenvalues, i.e., data augmentation makes the losses non-convex. This prevents NNs from converging to low losses on training datasets. It is clearly different from the effects of large datasets discussed in Fig. 4—large datasets convexify the loss landscapes. A detailed investigation remains for future work.
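A common way to obtain such Hessian eigenvalue estimates is power iteration with Hessian-vector products; the sketch below estimates the top eigenvalue of a single-batch loss and is only an assumed approximation of the procedure behind Fig. F.1b (which reports a full spectrum over many batches).

```python
import torch

def top_hessian_eigenvalue(model, loss_fn, batch, n_iters: int = 20) -> float:
    """Estimate the largest-magnitude Hessian eigenvalue of the batch loss
    via power iteration with Hessian-vector products."""
    params = [p for p in model.parameters() if p.requires_grad]
    x, y = batch
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(n_iters):
        norm = torch.sqrt(sum((u * u).sum() for u in v))
        v = [u / norm for u in v]
        hv = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        eig = sum((h * u).sum() for h, u in zip(hv, v)).item()  # Rayleigh quotient v^T H v
        v = [h.detach() for h in hv]
    return eig
```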
2202.06xxx/2202.06709/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fd0255b5cf4ad837f162640a13e13fc4458bca2b482be07060cdba3ce78e76cd
3
+ size 1115308
2202.06xxx/2202.06709/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06767/efa06db7-44a6-489d-836f-b237416be61e_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06767/efa06db7-44a6-489d-836f-b237416be61e_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06767/efa06db7-44a6-489d-836f-b237416be61e_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f96a11f89f66cec593bb600be0b8a177def1f233e11cf317297b0562325f08f2
3
+ size 1696266
2202.06xxx/2202.06767/full.md ADDED
@@ -0,0 +1,377 @@
1
+ # Wukong: A 100 Million Large-scale Chinese Cross-modal Pre-training Benchmark
2
+
3
+ Jiaxi Gu $^{1*}$ , Xiaojun Meng $^{1*}$ , Guansong Lu $^{1}$ , Lu Hou $^{1}$ , Minzhe Niu $^{1}$ , Xiaodan Liang $^{2\dagger}$ , Lewei Yao $^{1}$ , Runhui Huang $^{2}$ , Wei Zhang $^{1}$ , Xin Jiang $^{1}$ , Chunjing Xu $^{1}$ , Hang Xu $^{1\dagger}$
4
+
5
+ # Abstract
6
+
7
+ Vision-Language Pre-training (VLP) models have shown remarkable performance on various downstream tasks. Their success heavily relies on the scale of pretrained cross-modal datasets. However, the lack of large-scale datasets and benchmarks in Chinese hinders the development of Chinese VLP models and broader multilingual applications. In this work, we release a large-scale Chinese cross-modal dataset named Wukong, which contains 100 million Chinese image-text pairs collected from the web. Wukong aims to benchmark different multi-modal pre-training methods to facilitate the VLP research and community development. Furthermore, we release a group of models pre-trained with various image encoders (ViT-B/ViT-L/SwinT) and also apply advanced pre-training techniques to VLP, such as locked-image text tuning, token-wise similarity in contrastive learning, and reduced-token interaction. Extensive experiments and benchmarking on different downstream tasks, including a new, currently largest human-verified image-text test dataset, are also provided. Experiments show that Wukong can serve as a promising Chinese pre-training dataset and benchmark for different cross-modal learning methods. For the zero-shot image classification task on 10 datasets, WukongViT-L achieves an average accuracy of $73.03\%$. For the image-text retrieval task, it achieves a mean recall of $71.6\%$ on AIC-ICC, which is $12.9\%$ higher than WenLan 2.0. Our Wukong models are also benchmarked on downstream tasks against other variants on multiple datasets, e.g., Flickr8K-CN, Flickr30K-CN, and COCO-CN. More information is available at https://wukong-dataset.github.io/wukong-dataset/.
8
+
9
+ # 1 Introduction
10
+
11
+ Pre-training large-scale models on big data, and fine-tuning them on downstream tasks, has become an emerging paradigm of artificial intelligence systems. Models such as BERT [7] and GPT [1] grow in popularity in the natural language processing community as they possess high transferability to a wide range of downstream tasks, yielding state-of-the-art performance. Recent works such as CLIP [35], ALIGN [14], and FILIP [53] further extend this paradigm to the joint Vision Language Pre-training (VLP) domain and show superior results over state-of-the-art methods on various downstream tasks. Meanwhile, VLP models can be easily adapted to multiple practical applications such as image search engines, multi-choice visual answering and image labelling. In general, this promising direction draws significant attention from both industry and academia to consider it as the path to the next-generation AI models.
12
+
13
+ Two reasons lead to the success of VLP models. On the one hand, more advanced model architectures such as ViT [8]/BERT [7] and training objectives like contrastive learning [12] are usually able
14
+
15
+ to lift the powerful generalization and robustness capabilities of learned representations. On the other hand, thanks to the concurrent advancement in hardware [45, 16] and distributed training frameworks [28, 37, 38], more and more data can be fed into a large-scale model to improve the generalization, transferability and zero-shot capability. In either vision or language tasks, pre-training on larger-scale data such as JFT-300M [46] in image classification [39], C4 dataset in T5 [36], has been proven useful and critical for improving downstream task performance via transfer or prompt learning. In addition, recent work [14] has already shown the potential of scaling up the VLP model by more than 100 million noisy image-text pairs from the web.
16
+
17
+ Table 1: An overview of VLP datasets.
18
+
19
+ <table><tr><td>Dataset</td><td>Language</td><td>Avail -ability</td><td>Image-text pairs</td></tr><tr><td>Flickr30k [55]</td><td>English</td><td>✓</td><td>31,783</td></tr><tr><td>CxC [32]</td><td>English</td><td>✓</td><td>247,315</td></tr><tr><td>SBU Captions [30]</td><td>English</td><td>✓</td><td>1,000,000</td></tr><tr><td>Product1M [58]</td><td>Chinese</td><td>✓</td><td>1,000,000</td></tr><tr><td>CC12M [2]</td><td>English</td><td>✓</td><td>12,000,000</td></tr><tr><td>RedCaps [6]</td><td>English</td><td>✓</td><td>12,011,111</td></tr><tr><td>YFCC100M [48]</td><td>English</td><td>✓</td><td>99,200,000</td></tr><tr><td>WIT [44]</td><td>multilingual</td><td>✓</td><td>11,500,000</td></tr><tr><td>LAION-400M [41]</td><td>English</td><td>✓</td><td>400,000,000</td></tr><tr><td>JFT-300M [46]</td><td>English</td><td>✗</td><td>300,000,000</td></tr><tr><td>JFT-3B [56]</td><td>English</td><td>✗</td><td>3,000,000,000</td></tr><tr><td>IG-3.5B-17k [27]</td><td>English</td><td>✗</td><td>3,500,000,000</td></tr><tr><td>M6-Corpus [22]</td><td>Chinese</td><td>✗</td><td>60,500,000</td></tr><tr><td>Wukong</td><td>Chinese</td><td>✓</td><td>101,483,885</td></tr></table>
20
+
21
+ Therefore, the success of VLP models pretrained on large-scale data urges people to continuously crawl and collect larger image-text datasets. Table 1 shows an overview of many popular datasets in the VLP domain. For English datasets, the publicly available Flickr30k [34], SBU Captions [31], and CC12M [42] are relatively small, while LAION-400M [41] is several orders of magnitude larger. Despite the availability of large-scale English datasets, directly translating them into Chinese and then training a Chinese VLP model can lead to a severe performance drop. We speculate this is due to the existence of many Chinese idioms and slang that simple translation cannot cover, which introduces errors that harm the performance. The current community lacks a large-scale publicly available dataset in Chinese, which results in two problems:
22
+
23
+ (a) the development of the community is stunted; (b) secret large datasets are used to achieve surprisingly good performance that other works cannot fairly compare with.
24
+
25
+ To bridge this gap, we release a large-scale Chinese cross-modal dataset named Wukong, which contains 100 million image-text pairs collected from the web. To guarantee the diversity and generalization, our Wukong dataset is collected according to a high-frequency Chinese word list with 200K queries. We also adopt image-based and text-based filtering strategies for further refinement. The resulting dataset is currently the largest Chinese vision-language dataset. We perform an analysis of this dataset and show that it covers a wide range of visual and textual concepts. Besides, we also build a test set called Wukong-Test, the quality of which has been verified by human experts. From the feedback, the image-text consistency
26
+
27
+ Table 2: Comparison of multimodal Chinese retrieval benchmarks.
28
+
29
+ <table><tr><td>Dataset</td><td>#Images</td><td>#Texts</td></tr><tr><td>Flickr8K-CNTest</td><td>1,000</td><td>5,000</td></tr><tr><td>Flickr30K-CNTest</td><td>1,000</td><td>5,000</td></tr><tr><td>COCO-CNTest</td><td>1,000</td><td>1,053</td></tr><tr><td>AIC-ICCTest-1</td><td>30,000</td><td>150,000</td></tr><tr><td>AIC-ICCTest-2</td><td>30,000</td><td>150,000</td></tr><tr><td>MUGETest</td><td>30,399</td><td>5,004</td></tr><tr><td>Wukong-Test</td><td>33,365</td><td>33,365</td></tr></table>
30
+
31
+ is guaranteed in general even though all the data are collected from the web and only some simple filtering strategies are applied. Specifically, only about $2\%$ of the image-text pairs are marked as weakly corresponding. Table 2 shows the comparison of available Chinese image-text testing datasets.
32
+
33
+ Training a large-scale VLP model is quite expensive. For example, the largest CLIP [35] model takes 18 days to train on 592 NVIDIA V100 GPUs and M6-10T [22] is trained on 512 NVIDIA V100 GPUs for around 10 days. Thus it is almost impossible for most researchers to pre-train a large-scale model due to the substantial financial costs and hardware requirements. There is therefore a strong demand for downloadable, reusable pre-trained large-scale Chinese VLP models. However, the choices of publicly available large VLP models are also very limited, which hinders the improvement of downstream-task performance of large-scale models.
34
+
35
+ To contribute to the community, we release a group of dual-stream VLP models pre-trained using different image encoders (ViT [8] and SwinT [24]) and different pretraining techniques (CLIP [35], FILIP [53], and LiT [57]). We further provide an extensive Chinese benchmarking on various downstream tasks and datasets with hand-crafted Chinese labels, such as zero-shot image classification and image-text retrieval. Interestingly, though the frozen image encoders are trained on English image-text pairs, directly aligning them with a trainable Chinese text encoder still achieves remarkable
36
+
37
+ performance on downstream tasks. This also indicates the strong cross-lingual generalization of these pre-trained image encoders. Besides, we also find that using the cross-modal token-wise similarity from FILIP maintains the fine-grained word-patch alignment for various image encoders, even when they are frozen during the contrastive learning. Moreover, compared with the Chinese word-grained tokenization, we find that using character-grained tokenization in our models achieves better performance. More findings can be found in Section 5.
38
+
39
+ Experiments show that Wukong can serve as a promising Chinese pre-training dataset for different cross-modal learning methods. The pre-trained models show prominent performance on various downstream tasks such as zero-shot image classification and image-text retrieval. Specifically, our model WukongViT-L, pre-trained using the Wukong dataset, achieves up to $73.03\%$ average top-1 accuracy on 10 datasets for zero-shot image classification. It also achieves $71.6\%$ mean recall on AIC-ICC for image-text retrieval, which is $12.9\%$ higher than that of WenLan 2.0, a Chinese image-text multimodal model pre-trained on its own large-scale dataset.
40
+
41
+ In summary, our main contributions are:
42
+
43
+ - We release a large-scale Chinese VLP dataset with 100 million image-text pairs, covering a wide range of concepts. We also provide various benchmarking datasets with human-verified image-text pairs and Chinese labels for benchmarking the performance.
44
+ - We release a group of large-scale VLP models pre-trained with various popular architectures and methods. An extensive study and benchmarking are also provided.
45
+ - Our pre-trained model shows state-of-the-art performance on Chinese benchmarks such as zero-shot image classification and image-text retrieval tasks.
46
+
47
+ # 2 Related Work
48
+
49
+ Vision-Language Pre-training (VLP) Models. There are two typical architectures of VLP models according to the modality interaction methods, i.e., single-stream and dual-stream. Single-stream models [15, 19] directly concatenate the visual and textual embeddings together and feed them to a single transformer-based model. This kind of model can be easily fit into text/image generation tasks to perform image captioning or text-to-image generation, which are usually hard to evaluate and benchmark. Dual-stream models such as ViLBERT [26], CLIP [35], and ALIGN [14] have separate models for each modality. This paradigm is more flexible and efficient when modeling each modality, e.g., CNN for images and Transformers for texts. Moreover, dual-stream models have the merit of efficient inference for downstream tasks such as image-text retrieval, since the two encoders can be decoupled and the image/text features can be pre-computed offline. In CLIP [35], the authors also evaluate the image encoder as a self-supervised pre-trained model and show promising results. This paper mainly follows and benchmarks the dual-stream approaches.
50
+
51
+ Vision-Language Datasets. The current success of VLP models greatly lies in the scale of pre-trained datasets. The publicly available pre-training datasets used by recent VLP models are mainly image caption data or image-text pair data. Many small-sized datasets (e.g., a few hundred thousand samples) such as COCO-Captions [23], Flickr30k [34], Visual Genome [17], and VQA2 [11] are hand-annotated data that have very limited domain and diversity. On the other hand, pre-training models on online collected data (such as alt-texts from HTML pages) has shown promising results. CC3M [42], CC12M [2] and YFCC100M [48] have millions of image-text pairs in English generated by an online data collection pipeline including image and text filters, as well as text transformations. VLP models on these datasets have been shown to be effective in multiple downstream tasks. Moreover, larger-scale datasets with more than 100M samples (e.g., CLIP [35]: 400M and ALIGN [14]: 1.8B) have even armed the recent VLP models with surprisingly good zero-shot recognition ability, but they are not publicly available. In terms of vision-language datasets specifically for Chinese, as shown in Table 1, existing datasets are either small-scale (Product1M [58]) or private (M6-Corpus [22]). Thus, the current community lacks a large-scale vision-language dataset in Chinese. We aim to contribute a Chinese dataset to benchmark various VLP methods.
52
+
53
+ # 3 Construction of Wukong Dataset
54
+
55
+ In this paper, we construct a dataset called Wukong containing 100 million image-text pairs collected from the web. To cover concepts as diverse as possible, a series of keywords is taken as the starting point. The original keyword list is taken from [43] and only the first 200,000 most frequently seen keywords are used. These keywords are then used to search for images and their corresponding
56
+
57
+ ![](images/efafb318309429ee9e592ab8e560179dd16278b11a97b30892c307d6dda86f59.jpg)
58
+ Figure 1: Overview of our released models. Our Chinese pre-trained models consist of an image encoder and a text encoder with visual tokens and textual tokens as inputs. We have three variations of pre-trained models: global similarity (CLIP-style), token-wise similarity (FILIP-style), and token-wise similarity with a token reduction layer (Wukong-style).
59
+
60
+ captions in Baidu, a commonly used search engine for Chinese. For data balance, at most 1000 image-text pairs are kept for each keyword. In this way, we collect a total of 166 million raw (image, text) pairs. Then, following common practices [42, 2, 14], we apply a series of filtering strategies described in the sections below to finalize Wukong dataset. Some examples in our dataset can be found in the appendix. We also provide various benchmarking datasets with human-verified image-text pairs and Chinese labels for model benchmarks. Wukong-Test dataset contains 33k human-verified image-text pairs, which is currently the largest multimodal Chinese retrieval benchmark.
61
+
62
+ Image-based Filtering. We first filter the data according to the size and aspect ratio of the image. Only images with both dimensions greater than 200 pixels and a large-to-small dimension ratio of at most 3 are kept. In this way, we filter out images that are too small, too tall, or too wide. Such images are usually of poor quality, especially after data augmentation processes such as upsampling or square cropping.
63
+
64
+ Text-based Filtering. Secondly, to select samples with high-quality Chinese descriptions of the corresponding image, we filter the data according to language, text length, and the frequency of text accompanying an image. Specifically, we first check the language and text length. We keep sentences
65
+
66
+ Table 3: Statistics of datasets.
67
+
68
+ <table><tr><td rowspan="2"></td><td rowspan="2">Image-text Pairs</td><td rowspan="2">Unique Tokens</td><td colspan="3">Tokens per Caption</td></tr><tr><td>mean</td><td>std</td><td>median</td></tr><tr><td>Wukong</td><td>101,483,885</td><td>20,442</td><td>22</td><td>7</td><td>24</td></tr><tr><td>Wukong-Test</td><td>33,365</td><td>5,155</td><td>22</td><td>7</td><td>24</td></tr></table>
69
+
70
+ that contain at least one but fewer than 32 Chinese characters. We also discard meaningless image descriptions like "000.jpg" from the text. Texts paired with too many images are usually irrelevant to the content of the images, like "查看源网页" (View source page), "展开全文" (Expand text), "摄影部落" (Photography community). In practice, we set this threshold as 10, i.e., we discard the image-text pairs whose text appears more than 10 times in the whole corpus collected. To protect the privacy of the individuals appearing in the text, we substitute person names with a special token "〈人名〉" ( $\langle Person name \rangle$ ). Besides, we also construct a list of Chinese sensitive words, and image-text pairs containing sensitive words are also discarded.
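A minimal sketch of these filtering rules is given below; the thresholds follow the text (sides above 200 pixels, aspect ratio at most 3, 1 to 31 Chinese characters, captions repeated at most 10 times), while the helper names and the empty sensitive-word list are placeholders.

```python
import re
from collections import Counter

CJK = re.compile(r"[\u4e00-\u9fff]")
SENSITIVE = set()          # placeholder: the actual sensitive-word list is not reproduced here

def keep_image(width: int, height: int) -> bool:
    if min(width, height) <= 200:
        return False
    return max(width, height) / min(width, height) <= 3

def keep_text(text: str, text_freq: Counter) -> bool:
    n_cjk = len(CJK.findall(text))
    if not (1 <= n_cjk < 32):
        return False
    if text_freq[text] > 10:                       # texts paired with too many images
        return False
    return not any(w in text for w in SENSITIVE)

def filter_pairs(pairs):
    """pairs: list of dicts with keys 'width', 'height', 'text'."""
    freq = Counter(p["text"] for p in pairs)
    return [p for p in pairs
            if keep_image(p["width"], p["height"]) and keep_text(p["text"], freq)]
```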
71
+
72
+ After applying the above filtering strategies, we finally get a dataset called Wukong for pre-training and a dataset called Wukong-Test for model testing. Table 3 shows the statistics of them.
73
+
74
+ # 4 Methodology
75
+
76
+ # 4.1 Text-Image Joint Alignment
77
+
78
+ Following the recent widely adopted contrastive pre-training architectures [35, 53], we use a dual-stream model with Transformer-based text and image encoders as shown in Figure 1. These two encoders convert textual and visual input tokens to embeddings of the same dimension. In this learned joint embedding space, we use a contrastive loss to encourage the paired image and text to have similar embeddings, while non-paired ones to have distinct embeddings.
79
+
80
+ # 4.2 Model Architectures
81
+
82
+ Visual Encoder. Two types of visual encoders, i.e., Vision Transformer [8] (ViT) and Swin Transformer [24] (SwinT), are used as backbones for training different model variants. For ViT, the input image is first rescaled into a standard size and then split into fixed-size patches. Each patch is linearly embedded via a trainable linear projection. The resulting sequence of patch vectors is fed to a standard transformer encoder. Different from ViT, SwinT uses a hierarchical transformer that computes representations with shifted windows, which restricts self-attention computation to non-overlapping local windows for efficiency while also allowing for cross-window connections.
83
+
84
+ Textual Encoder. The textual encoder is a standard decoder-only transformer as in [35]. We use WordPiece [52] with a vocabulary size of 21,128 for Chinese text tokenization. Similar to [33], we add spaces around Chinese characters before applying WordPiece so that Chinese is effectively character-tokenized. We add two special tokens (i.e., [CLS] and [SEP]) at the beginning and ending of each text sequence. The text encoder has 12 layers, each of which has 8 attention heads and a hidden state dimension of 512.
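To illustrate the character-grained tokenization described above, the snippet below inserts spaces around CJK characters before a WordPiece-style tokenizer is applied; the regex range is a simplification, and the actual 21,128-token vocabulary is not reproduced here.

```python
import re

CJK = re.compile(r"([\u4e00-\u9fff])")

def space_out_chinese(text: str) -> str:
    """Insert spaces around CJK characters so that a subsequent WordPiece
    tokenizer effectively tokenizes Chinese at the character level."""
    spaced = CJK.sub(r" \1 ", text)
    return re.sub(r"\s+", " ", spaced).strip()

print(space_out_chinese("蜂鸟在飞"))   # -> "蜂 鸟 在 飞"
```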
85
+
86
+ Linear Projection of the Encoders. On top of the visual and textual encoders, the global representations of the visual token sequence (e.g., the [CLS] token for ViT; the average-pooled representation of all patch tokens for Swin Transformer) and the textual token sequence (e.g., the textual [SEP] token) are linearly projected to the common multi-modal space, and each is then L2-normalized.
87
+
88
+ Token Reduction Layer. Instead of only computing the cross-modal similarity between global representations of sequences, we experiment with a late interaction method as introduced in FILIP [53]. We aim to take into account the fine-grained token-wise interaction between image patches and text tokens. It could potentially mine more detailed semantic word-patch alignment between two modalities. Meanwhile, as a large amount of computation is introduced by this token-wise interaction, we propose a token reduction layer inspired by [40]. It aims to learn a small set of tokens (e.g., 12 or 24) from the whole output tokens of the visual encoder (e.g., $16 \times 16$ in ViT-L/14), and use them for the reduced-token interaction. This token reduction layer is used in all the Wukong-style models.
89
+
90
+ # 4.3 Pre-training Objectives
91
+
92
+ Cross-modal contrastive learning, typically represented by CLIP [35], is one effective approach for training models using paired image-text data. It can learn representations of two modalities simultaneously by distinguishing the paired and unpaired samples. Given an image sample $\boldsymbol{x}^I \in \mathcal{I}$ and a text sample $\boldsymbol{x}^T \in \mathcal{T}$, the training objective is to make the learned image and text representations in the joint multi-modal space close if they are paired and far otherwise. For a training batch consisting of $b$ image-text pairs $\{\boldsymbol{x}_k^I, \boldsymbol{x}_k^T\}_{k=1}^b$, $\boldsymbol{x}_k^T$ (resp. $\boldsymbol{x}_k^I$) is positive to $\boldsymbol{x}_k^I$ (resp. $\boldsymbol{x}_k^T$) while negative to all other texts (resp. images) in the same batch. Therefore, the image-to-text and text-to-image contrastive losses for $(\boldsymbol{x}_k^I, \boldsymbol{x}_k^T)$ can be formulated as $\mathcal{L}_k^I(\boldsymbol{x}_k^I, \{\boldsymbol{x}_j^T\}_{j=1}^b) = -\frac{1}{b} \log \frac{\exp(s_{k,k}^I)}{\sum_{j=1}^b \exp(s_{k,j}^I)}$ and $\mathcal{L}_k^T(\boldsymbol{x}_k^T, \{\boldsymbol{x}_j^I\}_{j=1}^b) = -\frac{1}{b} \log \frac{\exp(s_{k,k}^T)}{\sum_{j=1}^b \exp(s_{k,j}^T)}$, where $s_{k,j}^I$ denotes the similarity of the $k$-th image to the $j$-th text, while $s_{k,j}^T$ denotes the similarity of the $k$-th text to the $j$-th image. The total loss $\mathcal{L}$ is then computed as $\mathcal{L} = \frac{1}{2} \sum_{k=1}^b (\mathcal{L}_k^I + \mathcal{L}_k^T)$. In this work, we explore two typical ways of measuring the similarity between an image and a text. The learned representations of the image and text are denoted as $z^I \in \mathbb{R}^{n_1 \times d}$ and $z^T \in \mathbb{R}^{n_2 \times d}$, respectively. Here $n_1$ and $n_2$ are the numbers of (non-padded) tokens in each image and text.
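Read concretely, the objective above is a symmetric cross-entropy over a $b \times b$ similarity matrix; the sketch below is a hedged rendering of it, with the temperature as an assumed detail (for global similarity the text-to-image matrix is just the transpose, while token-wise similarity computes it separately).

```python
import torch
import torch.nn.functional as F

def contrastive_loss(sim_i2t: torch.Tensor, sim_t2i: torch.Tensor = None,
                     temperature: float = 0.07) -> torch.Tensor:
    """sim_i2t[k, j]: similarity of the k-th image to the j-th text in the batch.
    Positives lie on the diagonal. For global similarity sim_t2i is simply the
    transpose; token-wise similarity supplies it separately."""
    if sim_t2i is None:
        sim_t2i = sim_i2t.t()
    b = sim_i2t.size(0)
    targets = torch.arange(b, device=sim_i2t.device)
    loss_i = F.cross_entropy(sim_i2t / temperature, targets)   # image-to-text term
    loss_t = F.cross_entropy(sim_t2i / temperature, targets)   # text-to-image term
    return 0.5 * (loss_i + loss_t)
```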
93
+
94
+ Global Similarity. In CLIP [35] and ALIGN [14], the similarity is computed via dot product of the global features of the entire image and text sequence. Specifically, the global similarity between the image and text is computed as $s_{i,j}^{I} = s_{i,j}^{T} = [z_{i}^{I}]_{\mathrm{[CLS]}}^{\top}[z_{j}^{T}]_{\mathrm{[SEP]}}$ , where $[z_i^I ]_{[\mathrm{CLS}]}$ denotes the feature vector of the [CLS] token of the $i$ -th image and $[z_j^T ]_{[\mathrm{SEP}]}$ denotes the feature vector of the [SEP] token of the $j$ -th text. Since Swin Transformer has no [CLS] token, we use the average pooling on the features of all patch tokens to represent it.
95
+
96
+ Token-wise Similarity. In FILIP [53], the similarity is computed based on a finer-grained interaction between the image patches and textual tokens, which also brings good alignment and learns meaningful fine-grained features with promising localization ability. For $i$ -th image, each visual token $\left[z_i^I\right]_k$ in it computes a similarity with all non-padded textual tokens of the $j$ -th text. Then the maximum one is used to represent the token-wise similarity between this visual token and the $j$ -th text. Finally, we regard the average token-wise maximum similarity of all non-padded tokens in this $i$ -th image as
97
+
98
+ the cross-modal similarity $s_{i,j}^{I} = \frac{1}{n_{1}}\sum_{k = 1}^{n_{1}}[z_{i}^{I}]_{k}^{\top}[z_{j}^{T}]_{m_{k}^{I}}$ , where $m_{k}^{I} = \arg \max_{0\leq r < n_{2}}[z_{i}^{I}]_{k}^{\top}[z_{j}^{T}]_{r}$ . The similarity of a text to an image can be computed in the same way, except that we exclude the [CLS], [SEP], and all padding tokens as in FILIP [53].
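A sketch of this token-wise (late-interaction) similarity for a single image-text pair is given below; padded text tokens are masked out, while the exact handling of the [CLS] and [SEP] tokens is left to the caller.

```python
import torch

def tokenwise_similarity(img_tokens: torch.Tensor, txt_tokens: torch.Tensor,
                         txt_mask: torch.Tensor):
    """img_tokens: (n1, d) and txt_tokens: (n2, d) L2-normalized token features;
    txt_mask: (n2,) bool, True for real (non-padded) text tokens.
    Returns (image-to-text, text-to-image) similarities for one pair."""
    sim = img_tokens @ txt_tokens.t()                        # (n1, n2) token-to-token scores
    sim = sim.masked_fill(~txt_mask[None, :], float("-inf"))
    s_i2t = sim.max(dim=1).values.mean()                     # each patch takes its best word
    s_t2i = sim.max(dim=0).values[txt_mask].mean()           # each real word takes its best patch
    return s_i2t, s_t2i
```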
99
+
100
+ Reduced-token Interaction. Using the token-wise similarity introduces a large amount of computation. The computation cost is about $2 \times n_{1} \times n_{2}$ times more than that of global similarity. The number of visual tokens $n_{1}$ is normally predefined while the number of textual tokens $n_{2}$ depends on the text input. To reduce the computation cost of token-wise similarity, an efficient way is to decrease the number of tokens involved in similarity calculation and we call this reduced-token interaction.
101
+
102
+ In this paper, we propose a learnable token reduction layer on top of the visual features output by the image encoder. The workflow of this layer is described in the right part of Figure 1. Since the number of visual tokens is usually much larger than that of textual tokens, e.g., there are $16 \times 16 + 1 = 257$ visual tokens and 32 textual tokens for $\mathrm{CLIP}_{\mathrm{ViT-L}}$, it is more important to reduce the number of visual tokens for efficiency. Denoting the visual tokens of an image sample as $z^{I} \in \mathbb{R}^{n_{1} \times d}$, we aim to get a new $Z^{I} = f(z^{I}) \in \mathbb{R}^{n' \times d}$, in which $f$ denotes the token reduction function and $n'$ denotes the reduced token number. Finally, $z^{I}$ is replaced by $Z^{I}$ to calculate the token-wise similarity. In general, given the output number of tokens $n'$, the $k$-th visual token $Z_{k}^{I} \in \mathbb{R}^{d}$ can be formulated as $Z_{k}^{I} = \text{AvgPool}(\text{Conv}_{k}(z^{I}) \odot z^{I}), k \in \{1, 2, \dots, n'\}$, where $\odot$ represents the Hadamard product. Firstly, $z^{I} \in \mathbb{R}^{n_{1} \times d}$ is reshaped to $z^{I} \in \mathbb{R}^{H \times W \times d}$, in which $H$ and $W$ respectively represent the vertical and horizontal numbers of visual tokens. Then, the $k$-th attention map is computed via $\text{Conv}_{k}: \mathbb{R}^{H \times W \times d} \to \mathbb{R}^{H \times W \times 1}$, which is implemented using two convolutional layers. The weights of $\text{Conv}_{k}$ are shared across all $n'$ tokens. Finally, a spatial global average pooling $\text{AvgPool}: \mathbb{R}^{H \times W \times d} \to \mathbb{R}^{d}$ is used to get the final $k$-th visual token.
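The token reduction layer might be implemented roughly as follows: a small convolutional head produces $n'$ spatial attention maps, each of which re-weights the $H \times W$ grid of visual tokens before a global average pool; the hidden width and kernel sizes below are assumptions.

```python
import torch
import torch.nn as nn

class TokenReduction(nn.Module):
    """Reduce n1 = H*W visual tokens of dimension d to n_out tokens (a sketch)."""
    def __init__(self, dim: int, n_out: int, hidden: int = 64):
        super().__init__()
        self.attn = nn.Sequential(                      # Conv_k for all k at once:
            nn.Conv2d(dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, n_out, kernel_size=1),    # one attention map per output token
        )

    def forward(self, z: torch.Tensor, hw: tuple) -> torch.Tensor:
        """z: (B, n1, d) visual tokens (without [CLS]); hw = (H, W) with H*W == n1."""
        B, n1, d = z.shape
        H, W = hw
        zmap = z.transpose(1, 2).reshape(B, d, H, W)    # put tokens back on the spatial grid
        a = self.attn(zmap)                             # (B, n_out, H, W) attention maps
        weighted = a.unsqueeze(2) * zmap.unsqueeze(1)   # (B, n_out, d, H, W), Hadamard-style
        return weighted.mean(dim=(-2, -1))              # spatial AvgPool -> (B, n_out, d)
```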
103
+
104
+ Locked-image Text tuning. LiT-tuning [57] proposes that a locked pre-trained image encoder with an unlocked text encoder works well in contrastive learning. We extend this idea to cross-lingual data sources and try to align a locked image encoder pre-trained on English data sources, e.g., CLIP [35] and FILIP [53], with a trainable Chinese text encoder. These existing pre-trained image encoders usually have a linear projection layer. In our method, we drop this linear layer and add a new trainable, randomly initialized linear projection layer, whose output dimension can be adjusted flexibly. Experimental results shown in Section 5.4 confirm its effectiveness.
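In code, this LiT-tuning setup amounts to freezing the pre-trained image tower, discarding its original projection, and training a fresh projection together with the Chinese text encoder; `image_encoder` and `text_encoder` below are placeholders for the actual modules.

```python
import torch.nn as nn

def build_lit_model(image_encoder: nn.Module, text_encoder: nn.Module,
                    img_feat_dim: int, txt_feat_dim: int, embed_dim: int = 256):
    """Freeze the (English-pretrained) image tower; train only the new image
    projection, the Chinese text tower, and the text projection."""
    for p in image_encoder.parameters():
        p.requires_grad = False                     # locked image encoder (LiT)
    image_encoder.eval()                            # also freeze BN/dropout behavior
    img_proj = nn.Linear(img_feat_dim, embed_dim)   # new, randomly initialized projection
    txt_proj = nn.Linear(txt_feat_dim, embed_dim)
    trainable = (list(text_encoder.parameters())
                 + list(img_proj.parameters())
                 + list(txt_proj.parameters()))
    return img_proj, txt_proj, trainable
```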
105
+
106
+ # 5 Wukong Chinese Benchmarks
107
+
108
+ # 5.1 Experimental Setup
109
+
110
+ Following the existing VLP models, e.g., CLIP [35] and ALIGN [14], we employ a dual-encoder architecture as illustrated in Figure 1. We have three variations of pretraining Chinese models: global similarity (CLIP-style); token-wise similarity (FILIP-style) and token-wise similarity with token reduction layer (Wukong-style). For different types of visual encoders, we have ViT-B, ViT-L[8], and Swin-L[24]. We use the token-wise similarity with our proposed reduced-token interaction for Wukong-style models. For the dimension of the common multi-modal space, all the FILIP-style and Wukong-style models are set to 256 and CLIP-style models are set following the original CLIP checkpoints. Models are trained using LiT-tuning [57], since they achieve relatively better results as shown in Section 5.4. In terms of pre-loaded visual encoders, CLIP and FILIP models with ViT-B/32 or ViT-L/14 are used. Swin-L pre-trained on ImageNet-22K with $224 \times 224$ image resolution is used for Swin Transformer based models, e.g., $\mathrm{CLIP}_{\mathrm{Swin - L}}$ . Detailed training settings are in the appendix.
111
+
112
+ # 5.2 Zero-shot Image Classification
113
+
114
+ We evaluate our models for the zero-shot classification task on 10 datasets whose class labels are translated from English. To make the evaluation results more reliable, the translation process is done with a machine translator and verified by human experts. The Chinese annotations of these datasets are released for future evaluation by the research community. Also, we evaluate BriVL [13], another multi-modal pre-training model for Chinese, on these datasets for zero-shot classification. The implementation code and pre-trained model weights of BriVL are both from its homepage.
115
+
116
+ Prompt Ensemble. Text prompts are often used as a class label augmentation to achieve a better performance in the zero-shot image classification task [35, 53]. For simplicity, instead of designing prompts manually, we provide a set of 80 text prompts which are originally used on ImageNet by CLIP and manually translate them into Chinese. We also release these Chinese prompts for future fair comparison in our community.
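Zero-shot classification with the prompt ensemble can be sketched as follows: each class name is inserted into every prompt template, the resulting text embeddings are averaged and re-normalized into one classifier weight per class, and images are classified by cosine similarity; the encoder callable and the example template string are placeholders.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(image_feats, class_names, templates, encode_text):
    """image_feats: (N, d) L2-normalized image embeddings.
    encode_text: callable mapping a list of strings to (M, d) embeddings (placeholder).
    Returns predicted class indices of shape (N,)."""
    weights = []
    for name in class_names:
        prompts = [t.format(name) for t in templates]         # e.g. "一张{}的照片"
        emb = F.normalize(encode_text(prompts), dim=-1)        # (len(templates), d)
        weights.append(F.normalize(emb.mean(dim=0), dim=-1))   # ensemble by averaging
    weight = torch.stack(weights, dim=1)                       # (d, num_classes)
    return (image_feats @ weight).argmax(dim=-1)
```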
117
+
118
+ Table 4: Top-1 accuracy (\%) of the zero-shot image classification benchmark. All the models are trained using 100-million Wukong dataset except for BriVL which is pre-trained using its own dataset. Results highlighted with bold mean the best within the same image encoder and those with underline represent the best among all methods.
119
+
120
+ <table><tr><td>Dataset (CN)
121
+ Model</td><td>CIFAR10</td><td>CIFAR100</td><td>Caltech101</td><td>Caltech256</td><td>DTD</td><td>Sports</td><td>Flowers</td><td>SUN397</td><td>EuroSAT</td><td>ImageNet</td><td>Average</td></tr><tr><td>BriVL [13]</td><td>72.3</td><td>35.9</td><td>72.0</td><td>58.0</td><td>18.8</td><td>83.6</td><td>18.4</td><td>28.4</td><td>25.5</td><td>24.3</td><td>43.72</td></tr><tr><td>CLIPViT-B [35]</td><td>89.4</td><td>62.5</td><td>89.2</td><td>82.7</td><td>36.2</td><td>93.1</td><td>52.6</td><td>55.8</td><td>25.7</td><td>47.7</td><td>63.49</td></tr><tr><td>FILIPViT-B [53]</td><td>87.0</td><td>53.3</td><td>83.1</td><td>71.0</td><td>28.9</td><td>91.2</td><td>48.8</td><td>50.0</td><td>29.5</td><td>38.1</td><td>58.09</td></tr><tr><td>WukongViT-B</td><td>87.1</td><td>62.6</td><td>89.1</td><td>82.3</td><td>37.3</td><td>95.6</td><td>64.8</td><td>56.0</td><td>32.6</td><td>49.1</td><td>65.65</td></tr><tr><td>CLIPViT-L [35]</td><td>94.1</td><td>71.3</td><td>91.9</td><td>89.0</td><td>45.4</td><td>98.7</td><td>72.3</td><td>62.6</td><td>42.8</td><td>57.9</td><td>72.60</td></tr><tr><td>FILIPViT-L [53]</td><td>90.6</td><td>66.3</td><td>89.9</td><td>86.2</td><td>46.4</td><td>97.8</td><td>69.4</td><td>60.2</td><td>25.5</td><td>54.0</td><td>68.63</td></tr><tr><td>WukongViT-L</td><td>95.4</td><td>77.1</td><td>92.4</td><td>89.2</td><td>40.9</td><td>99.1</td><td>68.9</td><td>62.0</td><td>50.3</td><td>55.0</td><td>73.03</td></tr><tr><td>CLIPSwin-L [35]</td><td>94.8</td><td>75.8</td><td>90.7</td><td>88.3</td><td>40.0</td><td>97.5</td><td>71.0</td><td>57.3</td><td>22.3</td><td>58.0</td><td>69.57</td></tr><tr><td>FILIPSwin-L [53]</td><td>95.5</td><td>77.2</td><td>91.6</td><td>88.4</td><td>39.8</td><td>99.1</td><td>75.1</td><td>56.5</td><td>21.0</td><td>58.5</td><td>70.27</td></tr><tr><td>WukongSwin-L</td><td>95.3</td><td>76.8</td><td>89.8</td><td>87.1</td><td>33.7</td><td>97.8</td><td>76.9</td><td>56.3</td><td>19.3</td><td>58.2</td><td>69.12</td></tr></table>
122
+
123
+ Performance. The evaluation of zero-shot image classification on different datasets is illustrated in Table 4. In addition to our proposed models, i.e., WukongViT-B, WukongViT-L, and WukongSwin-L, we also evaluate other model architectures, i.e., CLIP and FILIP, with different image encoders as comparisons. These models are all pre-trained using our Wukong dataset except for BriVL, which uses its own dataset. In comparison with models pre-trained using the Wukong dataset, BriVL shows significantly poorer performance. This can be considered as evidence that the Wukong dataset is effective for multi-modal pre-training. Besides, using the same ViT image encoder, either ViT-B or ViT-L, Wukong models perform quite well. In particular, WukongViT-L achieves the highest average accuracy of $73.03\%$ among all models. This indicates the superiority of our model architecture. However, our model trained with SwinT as the image encoder performs worse than the others. The reason might be that patch merging in SwinT already serves a similar purpose of selecting and merging the important visual patch tokens, so our reduced-token interaction brings a negative impact. In summary, the zero-shot classification performance on various tasks shows the effectiveness of our dataset and Wukong models.
124
+
125
+ # 5.3 Image-Text Retrieval
126
+
127
+ In this section, we evaluate our models on two sub-tasks, including image-to-text retrieval and text-to-image retrieval. In the image-to-text retrieval, the model retrieves a target text from a set of candidates given an image as query, or vice versa for the text-to-image retrieval. We benchmark our models on 6 different datasets, including Flickr8K-CN [20], Flickr30K-CN [18], COCO-CN [21], AIC-ICC [51], MUGE<sup>1</sup> and Wukong-Test.
128
+
129
+ Following common practices, we report Recall@K (recall of top K candidates) with $K = 1,5,10$ for both image-to-text and text-to-image retrieval on all datasets except for MUGE, which only has the text-to-image retrieval setting. The average Recall@K, i.e., Mean Recall (MR), is used for the final comparison. We report results on the test sets, except for MUGE and AIC-ICC where test sets are not released. For MUGE, we report results on the validation set, and for AIC-ICC, following the setting of WenLan 2.0 [9], we take the first 10K images along with their corresponding 50K pieces of texts from the validation set for testing.
130
+
131
+ Table 5 shows the benchmarks of zero-shot image-text retrieval using different models on multiple datasets. In general, models trained on Wukong dataset achieve a significantly better performance than BriVL [13], which demonstrates the effectiveness of our dataset. Besides, WukongViT-L shows a competitive performance in comparison to other models. Therefore, we believe Wukong dataset can serve as a pre-training benchmark dataset with a wide coverage of concepts.
132
+
133
+ Table 6 shows the results of image-text retrieval task. Generally, WukongViT-L achieves the best results among different model variants and datasets. Compared with baseline methods, on AIC-ICC, Wukong significantly outperforms WenLan 2.0 by around $12.9\%$ , which was pre-trained on a larger dataset
134
+
135
+ Table 5: Benchmarks of zero-shot image-text retrieval. The top-3 performance values are highlighted with bold, underline and italic respectively.
136
+
137
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Method</td><td colspan="3">Image-to-Text Retrieval</td><td colspan="3">Text-to-Image Retrieval</td><td rowspan="2">MR</td></tr><tr><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td></tr><tr><td rowspan="10">Flickr8K-CN</td><td>BriVL [13]</td><td>13.4</td><td>31.2</td><td>40.7</td><td>8.0</td><td>20.7</td><td>29.5</td><td>23.9</td></tr><tr><td>CLIPViT-B</td><td>59.5</td><td>86.2</td><td>93.4</td><td>44.2</td><td>71.2</td><td>82.0</td><td>72.7</td></tr><tr><td>CLIPViT-L [35]</td><td>65.4</td><td>89.2</td><td>95.4</td><td>50.5</td><td>77.0</td><td>85.7</td><td>77.2</td></tr><tr><td>CLIPSwin-L</td><td>56.0</td><td>83.2</td><td>92.4</td><td>38.6</td><td>67.0</td><td>78.2</td><td>69.2</td></tr><tr><td>FILIPViT-B</td><td>37.2</td><td>65.9</td><td>75.2</td><td>24.0</td><td>50.0</td><td>62.4</td><td>52.5</td></tr><tr><td>FILIPViT-L [53]</td><td>70.0</td><td>91.6</td><td>96.6</td><td>53.5</td><td>79.3</td><td>87.9</td><td>79.8</td></tr><tr><td>FILIPSwin-L</td><td>52.4</td><td>78.0</td><td>87.2</td><td>41.2</td><td>68.5</td><td>79.1</td><td>67.7</td></tr><tr><td>WukongViT-B</td><td>55.4</td><td>82.3</td><td>90.0</td><td>43.2</td><td>71.3</td><td>81.3</td><td>70.6</td></tr><tr><td>WukongViT-L</td><td>61.4</td><td>86.2</td><td>93.6</td><td>46.0</td><td>74.5</td><td>84.5</td><td>74.4</td></tr><tr><td>WukongSwin-L</td><td>47.2</td><td>78.8</td><td>87.6</td><td>36.6</td><td>64.8</td><td>76.2</td><td>65.2</td></tr><tr><td rowspan="10">Flickr30K-CN</td><td>BriVL [13]</td><td>17.7</td><td>42.3</td><td>54.3</td><td>10.3</td><td>27.5</td><td>37.9</td><td>31.7</td></tr><tr><td>CLIPViT-B</td><td>72.2</td><td>92.0</td><td>96.4</td><td>47.2</td><td>74.1</td><td>82.9</td><td>77.5</td></tr><tr><td>CLIPViT-L [35]</td><td>75.0</td><td>94.5</td><td>97.7</td><td>51.8</td><td>78.6</td><td>85.9</td><td>80.6</td></tr><tr><td>CLIPSwin-L</td><td>64.3</td><td>89.3</td><td>94.3</td><td>41.2</td><td>69.7</td><td>80.2</td><td>73.2</td></tr><tr><td>FILIPViT-B</td><td>44.2</td><td>73.7</td><td>83.3</td><td>28.7</td><td>55.9</td><td>67.1</td><td>58.8</td></tr><tr><td>FILIPViT-L [53]</td><td>78.9</td><td>96.2</td><td>98.1</td><td>55.7</td><td>81.2</td><td>87.9</td><td>83.0</td></tr><tr><td>FILIPSwin-L</td><td>65.8</td><td>89.2</td><td>95.0</td><td>44.6</td><td>72.2</td><td>81.2</td><td>74.7</td></tr><tr><td>WukongViT-B</td><td>66.2</td><td>88.7</td><td>94.3</td><td>45.7</td><td>73.8</td><td>82.2</td><td>75.1</td></tr><tr><td>WukongViT-L</td><td>76.1</td><td>94.8</td><td>97.5</td><td>51.7</td><td>78.9</td><td>86.3</td><td>80.9</td></tr><tr><td>WukongSwin-L</td><td>58.7</td><td>86.7</td><td>92.7</td><td>40.9</td><td>68.0</td><td>78.4</td><td>70.9</td></tr><tr><td rowspan="10">COCO-CN</td><td>BriVL [13]</td><td>17.1</td><td>41.7</td><td>57.5</td><td>14.8</td><td>39.0</td><td>54.2</td><td>37.4</td></tr><tr><td>CLIPViT-B</td><td>52.8</td><td>79.6</td><td>88.9</td><td>48.7</td><td>79.4</td><td>88.5</td><td>73.0</td></tr><tr><td>CLIPViT-L [35]</td><td>51.0</td><td>80.0</td><td>89.7</td><td>48.7</td><td>76.8</td><td>86.4</td><td>72.1</td></tr><tr><td>CLIPSwin-L</td><td>50.5</td><td>79.2</td><td>88.2</td><td>46.7</td><td>78.1</td><td>87.7</td><td>71.7</td></tr><tr><td>FILIPViT-B</td><td>37.8</td><td>66.4</td><td>77.9</td><td>37.5</td><td>68.1</td><td>83.0</td><td>61.8</td></tr><tr><td>FILIPViT-L 
[53]</td><td>56.9</td><td>82.4</td><td>90.9</td><td>52.7</td><td>79.9</td><td>88.6</td><td>75.2</td></tr><tr><td>FILIPSwin-L</td><td>48.6</td><td>77.3</td><td>88.3</td><td>50.5</td><td>79.2</td><td>88.6</td><td>72.1</td></tr><tr><td>WukongViT-B</td><td>48.3</td><td>77.8</td><td>88.8</td><td>49.2</td><td>79.4</td><td>87.9</td><td>71.9</td></tr><tr><td>WukongViT-L</td><td>55.2</td><td>81.0</td><td>90.6</td><td>53.4</td><td>80.2</td><td>90.1</td><td>75.1</td></tr><tr><td>WukongSwin-L</td><td>47.3</td><td>78.0</td><td>88.3</td><td>46.4</td><td>77.0</td><td>87.6</td><td>70.8</td></tr><tr><td rowspan="10">MUGE</td><td>BriVL [13]</td><td>-</td><td>-</td><td>-</td><td>12.7</td><td>30.9</td><td>41.8</td><td>28.5</td></tr><tr><td>CLIPViT-B</td><td>-</td><td>-</td><td>-</td><td>37.3</td><td>64.2</td><td>73.9</td><td>58.5</td></tr><tr><td>CLIPViT-L [35]</td><td>-</td><td>-</td><td>-</td><td>43.3</td><td>69.2</td><td>78.4</td><td>63.6</td></tr><tr><td>CLIPSwin-L</td><td>-</td><td>-</td><td>-</td><td>35.2</td><td>62.2</td><td>73.2</td><td>56.9</td></tr><tr><td>FILIPViT-B</td><td>-</td><td>-</td><td>-</td><td>22.4</td><td>46.6</td><td>58.5</td><td>42.5</td></tr><tr><td>FILIPViT-L [53]</td><td>-</td><td>-</td><td>-</td><td>37.6</td><td>63.4</td><td>73.6</td><td>58.2</td></tr><tr><td>FILIPSwin-L</td><td>-</td><td>-</td><td>-</td><td>36.2</td><td>61.1</td><td>71.5</td><td>56.3</td></tr><tr><td>WukongViT-B</td><td>-</td><td>-</td><td>-</td><td>33.4</td><td>59.3</td><td>69.7</td><td>54.1</td></tr><tr><td>WukongViT-L</td><td>-</td><td>-</td><td>-</td><td>42.7</td><td>69.0</td><td>78.0</td><td>63.2</td></tr><tr><td>WukongSwin-L</td><td>-</td><td>-</td><td>-</td><td>34.5</td><td>60.6</td><td>71.2</td><td>55.5</td></tr></table>
138
+
139
+ consisting of 650 million image-text pairs. For the COCO-CN dataset, our Wukong models also achieve performance comparable to state-of-the-art methods. For Wukong-Test, $\mathrm{CLIP}_{\mathrm{ViT-L}}$ achieves the best result (89.6%) so far. This shows that models with global similarity are particularly effective when massively trained on the in-domain Wukong training set. However, they generalize slightly worse when fine-tuned on out-of-domain datasets such as AIC-ICC and MUGE. Overall, experimental results demonstrate the capabilities of our pre-trained models.
140
+
141
+ # 5.4 Ablations and Findings
142
+
143
+ Locked-image Text Tuning. To evaluate the effectiveness of LiT-tuning, we take WukongViT-B as an example model for a detailed investigation. We train two models using the same experimental settings as mentioned above, except that one model is trained with a locked image encoder while the other is not. As shown in Figure 2, the model using the LiT-tuning method shows a slower decrease of the training loss. We believe the unlocked image encoder contributes to reducing the training loss and finding local optima efficiently. However, the validation accuracy of the LiT-tuning model remains higher than that of the other in almost every iteration, which demonstrates better generalization.
144
+
145
+ Visualization. In addition, we present the visualization of word-patch alignment in the appendix, which evidences the effectiveness of cross-modal token-wise similarity even in the LiT-tuning setting. We apply the same visualization method from FILIP [53], to align textual tokens and image patch tokens from $\mathrm{FILIP}_{\mathrm{ViT-L}}$ and $\mathrm{FILIP}_{\mathrm{Swin-L}}$ . We find that both models can predict image patches of the target object, and more details are shown in the appendix. Given this promising capability of aligning words and patches, our released models offer a potential solution for image object localization.
146
+
147
+ Table 6: Benchmarks of fine-tuned image-text retrieval on different datasets. The top-3 performance values are highlighted with bold, underline and italic respectively.
148
+
149
+ <table><tr><td rowspan="2">Dataset</td><td rowspan="2">Method</td><td colspan="3">Image-to-Text Retrieval</td><td colspan="3">Text-to-Image Retrieval</td><td rowspan="2">MR</td></tr><tr><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td></tr><tr><td rowspan="9">Flickr8K-CN</td><td>CLIPViT-B</td><td>77.7</td><td>94.7</td><td>98.1</td><td>61.2</td><td>86.8</td><td>93.2</td><td>85.3</td></tr><tr><td>CLIPViT-L [35]</td><td>81.4</td><td>96.9</td><td>99.0</td><td>67.4</td><td>91.0</td><td>95.7</td><td>88.6</td></tr><tr><td>CLIPSwin-L</td><td>77.3</td><td>94.9</td><td>98.2</td><td>59.3</td><td>86.0</td><td>92.9</td><td>84.8</td></tr><tr><td>FILIPViT-B</td><td>52.6</td><td>81.5</td><td>90.2</td><td>46.4</td><td>77.0</td><td>86.8</td><td>72.4</td></tr><tr><td>FILIPViT-L [53]</td><td>80.8</td><td>94.8</td><td>98.3</td><td>68.5</td><td>90.5</td><td>95.2</td><td>88.0</td></tr><tr><td>FILIPSwin-L</td><td>77.6</td><td>94.4</td><td>97.7</td><td>61.5</td><td>86.5</td><td>93.0</td><td>85.1</td></tr><tr><td>WukongViT-B</td><td>71.7</td><td>91.5</td><td>96.6</td><td>58.4</td><td>85.4</td><td>92.0</td><td>82.6</td></tr><tr><td>WukongViT-L</td><td>83.3</td><td>97.3</td><td>99.5</td><td>70.1</td><td>91.9</td><td>96.4</td><td>89.7</td></tr><tr><td>WukongSwin-L</td><td>74.9</td><td>93.6</td><td>97.8</td><td>57.9</td><td>85.1</td><td>92.6</td><td>83.6</td></tr><tr><td rowspan="9">Flickr30K-CN</td><td>CLIPViT-B</td><td>87.1</td><td>97.7</td><td>98.8</td><td>69.0</td><td>90.3</td><td>95.0</td><td>89.7</td></tr><tr><td>CLIPViT-L [35]</td><td>91.6</td><td>99.1</td><td>99.7</td><td>77.3</td><td>94.4</td><td>97.2</td><td>93.2</td></tr><tr><td>CLIPSwin-L</td><td>85.8</td><td>97.1</td><td>99.0</td><td>67.4</td><td>90.3</td><td>94.9</td><td>89.1</td></tr><tr><td>FILIPViT-B</td><td>72.1</td><td>91.3</td><td>95.8</td><td>57.5</td><td>84.3</td><td>90.6</td><td>81.9</td></tr><tr><td>FILIPViT-L [53]</td><td>90.6</td><td>98.8</td><td>99.6</td><td>76.9</td><td>94.9</td><td>97.4</td><td>93.0</td></tr><tr><td>FILIPSwin-L</td><td>86.0</td><td>97.5</td><td>99.1</td><td>70.9</td><td>91.3</td><td>95.3</td><td>90.0</td></tr><tr><td>WukongViT-B</td><td>83.9</td><td>97.6</td><td>99.0</td><td>67.6</td><td>89.6</td><td>94.2</td><td>88.7</td></tr><tr><td>WukongViT-L</td><td>92.7</td><td>99.1</td><td>99.6</td><td>77.4</td><td>94.5</td><td>97.0</td><td>93.4</td></tr><tr><td>WukongSwin-L</td><td>86.2</td><td>98.1</td><td>99.4</td><td>67.4</td><td>89.9</td><td>94.5</td><td>89.3</td></tr><tr><td rowspan="16">COCO-CN</td><td>EmbN [49]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>73.2</td></tr><tr><td>PARALLEL-EmbN [10]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>76.0</td></tr><tr><td>S-LIWE [50]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>73.6</td></tr><tr><td>M³P [29]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>86.2</td></tr><tr><td>UNITER [3]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>87.3</td></tr><tr><td>LightningDOT [47]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>88.4</td></tr><tr><td>UC² [59]</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>89.8</td></tr><tr><td>CLIPViT-B</td><td>68.7</td><td>93.6</td><td>97.5</td><td>68.9</td><td>93.3</td><td>97.3</td><td>86.6</td></tr><tr><td>CLIPViT-L 
[35]</td><td>68.3</td><td>93.0</td><td>97.3</td><td>70.1</td><td>92.2</td><td>96.4</td><td>86.2</td></tr><tr><td>CLIPSwin-L</td><td>68.0</td><td>92.8</td><td>97.3</td><td>66.7</td><td>91.5</td><td>96.3</td><td>85.4</td></tr><tr><td>FILIPViT-B</td><td>52.7</td><td>81.3</td><td>88.3</td><td>56.2</td><td>86.8</td><td>94.3</td><td>76.6</td></tr><tr><td>FILIPViT-L [53]</td><td>69.1</td><td>91.3</td><td>96.9</td><td>72.2</td><td>92.4</td><td>97.2</td><td>86.5</td></tr><tr><td>FILIPSwin-L</td><td>68.3</td><td>93.9</td><td>97.1</td><td>69.9</td><td>93.3</td><td>97.6</td><td>86.7</td></tr><tr><td>WukongViT-B</td><td>65.8</td><td>90.3</td><td>96.6</td><td>67.0</td><td>91.4</td><td>96.7</td><td>84.6</td></tr><tr><td>WukongViT-L</td><td>73.3</td><td>94.0</td><td>98.0</td><td>74.0</td><td>94.4</td><td>98.1</td><td>88.6</td></tr><tr><td>WukongSwin-L</td><td>67.4</td><td>92.4</td><td>97.5</td><td>66.0</td><td>92.6</td><td>97.1</td><td>85.5</td></tr><tr><td rowspan="10">AIC-ICC</td><td>WenLan 2.0 [9]</td><td>45.6</td><td>68.0</td><td>76.3</td><td>34.1</td><td>58.9</td><td>69.1</td><td>58.7</td></tr><tr><td>CLIPViT-B</td><td>50.5</td><td>73.0</td><td>80.2</td><td>38.1</td><td>63.7</td><td>73.3</td><td>63.1</td></tr><tr><td>CLIPViT-L [35]</td><td>59.1</td><td>79.5</td><td>85.2</td><td>46.2</td><td>70.7</td><td>78.6</td><td>69.9</td></tr><tr><td>CLIPSwin-L</td><td>50.5</td><td>73.5</td><td>81.2</td><td>37.3</td><td>62.8</td><td>72.7</td><td>63.0</td></tr><tr><td>FILIPViT-B</td><td>42.5</td><td>67.2</td><td>76.0</td><td>32.9</td><td>58.4</td><td>68.8</td><td>57.6</td></tr><tr><td>FILIPViT-L [53]</td><td>54.1</td><td>75.8</td><td>82.8</td><td>44.9</td><td>69.0</td><td>77.5</td><td>67.4</td></tr><tr><td>FILIPSwin-L</td><td>53.1</td><td>74.8</td><td>82.0</td><td>41.1</td><td>65.7</td><td>74.7</td><td>65.2</td></tr><tr><td>WukongViT-B</td><td>47.5</td><td>70.6</td><td>78.6</td><td>36.7</td><td>36.7</td><td>71.7</td><td>57.0</td></tr><tr><td>WukongViT-L</td><td>61.6</td><td>80.5</td><td>86.1</td><td>48.6</td><td>72.5</td><td>80.2</td><td>71.6</td></tr><tr><td>WukongSwin-L</td><td>50.9</td><td>73.6</td><td>81.5</td><td>38.6</td><td>64.1</td><td>73.6</td><td>63.7</td></tr><tr><td rowspan="9">MUGE</td><td>CLIPViT-B</td><td>-</td><td>-</td><td>-</td><td>43.5</td><td>71.7</td><td>80.6</td><td>65.3</td></tr><tr><td>CLIPViT-L [35]</td><td>-</td><td>-</td><td>-</td><td>50.1</td><td>76.9</td><td>84.9</td><td>70.6</td></tr><tr><td>CLIPSwin-L</td><td>-</td><td>-</td><td>-</td><td>45.3</td><td>72.1</td><td>81.1</td><td>66.2</td></tr><tr><td>FILIPViT-B</td><td>-</td><td>-</td><td>-</td><td>30.6</td><td>58.2</td><td>70.2</td><td>53.0</td></tr><tr><td>FILIPViT-L [53]</td><td>-</td><td>-</td><td>-</td><td>43.5</td><td>71.5</td><td>80.9</td><td>65.3</td></tr><tr><td>FILIPSwin-L</td><td>-</td><td>-</td><td>-</td><td>44.0</td><td>71.4</td><td>81.2</td><td>65.5</td></tr><tr><td>WukongViT-B</td><td>-</td><td>-</td><td>-</td><td>39.2</td><td>66.9</td><td>77.4</td><td>61.2</td></tr><tr><td>WukongViT-L</td><td>-</td><td>-</td><td>-</td><td>52.7</td><td>77.9</td><td>85.6</td><td>72.1</td></tr><tr><td>WukongSwin-L</td><td>-</td><td>-</td><td>-</td><td>43.8</td><td>71.9</td><td>81.7</td><td>65.8</td></tr><tr><td rowspan="9">Wukong-Test</td><td>CLIPViT-B</td><td>58.3</td><td>88.2</td><td>94.1</td><td>53.1</td><td>85.4</td><td>92.6</td><td>78.6</td></tr><tr><td>CLIPViT-L 
[35]</td><td>72.8</td><td>98.2</td><td>99.8</td><td>68.9</td><td>98.0</td><td>99.8</td><td>89.6</td></tr><tr><td>CLIPSwin-L</td><td>56.0</td><td>86.1</td><td>92.5</td><td>51.0</td><td>83.4</td><td>90.9</td><td>76.7</td></tr><tr><td>FILIPViT-B</td><td>30.3</td><td>57.6</td><td>66.9</td><td>20.2</td><td>47.5</td><td>60.3</td><td>47.1</td></tr><tr><td>FILIPViT-L [53]</td><td>53.0</td><td>85.3</td><td>92.7</td><td>50.4</td><td>84.1</td><td>92.0</td><td>76.3</td></tr><tr><td>FILIPSwin-L</td><td>51.0</td><td>81.6</td><td>88.9</td><td>45.2</td><td>77.9</td><td>87.0</td><td>71.9</td></tr><tr><td>WukongViT-B</td><td>50.5</td><td>82.7</td><td>90.5</td><td>47.1</td><td>80.1</td><td>88.9</td><td>73.3</td></tr><tr><td>WukongViT-L</td><td>68.0</td><td>94.4</td><td>98.0</td><td>63.8</td><td>93.0</td><td>97.3</td><td>85.8</td></tr><tr><td>WukongSwin-L</td><td>53.1</td><td>85.4</td><td>92.2</td><td>47.8</td><td>81.6</td><td>89.7</td><td>75.0</td></tr></table>
150
+
151
+ Tokenization for Chinese. We investigate the influence of the word segmentation technique on Chinese VLP models. In addition to the common character-grained tokenization, we also adopt word-grained tokenization with a larger vocabulary (65,328). Results show that the model using character-grained tokenization achieves better performance. The detailed comparison is shown in
152
+
153
+ ![](images/680fa3befc83819ce631d874dcb11967ad3e8bb4355f896dcabdb529d180977e.jpg)
154
+ (a) Training loss.
155
+
156
+ ![](images/cff849ce16f5e23e4a7089a667fede2b9bbe5c45b77322ba1b51b39a933a5f30.jpg)
157
+ (b) Validation accuracy.
158
+ Figure 2: Compared with the model trained with an unlocked image encoder, the loss decreases more slowly when the image encoder is locked, yet the validation accuracy remains at a higher level.
159
+
160
+ Since a Chinese word often contains more than one character, character-grained tokens are more fine-grained than word-grained ones. For example, the word "蜂鸟" (hummingbird) consists of two characters: "蜂" (bee) and "鸟" (bird). Therefore, we believe such fine-grained textual tokens make it more effective for our models to learn deep semantic token-wise similarity between an image patch and its paired text in a contrastive learning manner.
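To make the contrast concrete, below is a minimal sketch (not the authors' training code) of the two tokenization granularities, using the jieba module that the appendix also employs for word segmentation; the sample caption is hypothetical.

```python
# Minimal sketch of character- vs word-grained tokenization (illustrative only).
import jieba  # Chinese word segmentation module, also used in the appendix

text = "蜂鸟在花间飞舞"  # hypothetical caption containing "蜂鸟" (hummingbird)

# Character-grained: every Chinese character is its own token.
char_tokens = list(text)  # ['蜂', '鸟', '在', '花', '间', '飞', '舞']

# Word-grained: jieba groups characters into words such as "蜂鸟".
word_tokens = list(jieba.cut(text))

print(char_tokens)
print(word_tokens)
```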
161
+
162
+ # 6 Conclusion
163
+
164
+ In this work, we build a large-scale Chinese vision-language dataset called Wukong. To the best of our knowledge, it is the first hundred-million-level dataset designed for the Chinese language, and it paves the way for future research on Chinese cross-modal pre-training. Using this dataset, we propose three Chinese VLP models, i.e., WukongViT-B, WukongViT-L, and WukongSwin-L. Our pre-trained WukongViT-L achieves state-of-the-art performance on Chinese benchmarks such as zero-shot image classification and image-text retrieval. In the future, we plan to explore more solutions for training multilingual cross-modal models with the Wukong dataset. Downstream tasks beyond image classification and retrieval also deserve thorough evaluation. Wukong-based applications such as image search engines and visual question answering will be further explored in future work.
165
+
166
+ # References
167
+
168
+ [1] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. In Advances in neural information processing systems, 2020.
169
+ [2] S. Changpinyo, P. Sharma, N. Ding, and R. Soricut. Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
170
+ [3] Y.-C. Chen, L. Li, L. Yu, A. El Kholy, F. Ahmed, Z. Gan, Y. Cheng, and J. Liu. Uniter: Universal image-text representation learning. In European conference on computer vision, pages 104-120. Springer, 2020.
171
+ [4] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 113-123, 2019.
172
+ [5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009.
173
+ [6] K. Desai, G. Kaul, Z. Aysola, and J. Johnson. Redcaps: Web-curated image-text data created by the people, for the people. arXiv preprint arXiv:2111.11431, 2021.
174
+ [7] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2019.
175
+ [8] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020.
176
+ [9] N. Fei, Z. Lu, Y. Gao, G. Yang, Y. Huo, J. Wen, H. Lu, R. Song, X. Gao, T. Xiang, et al. Wenlan 2.0: Make ai imagine via a multimodal foundation model. arXiv preprint arXiv:2110.14378, 2021.
177
+ [10] S. Gella, R. Sennrich, F. Keller, and M. Lapata. Image pivoting for learning multilingual multimodal representations. arXiv preprint arXiv:1707.07601, 2017.
178
+ [11] Y. Goyal, T. Khot, D. Summers-Stay, D. Batra, and D. Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904-6913, 2017.
179
+ [12] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729-9738, 2020.
180
+ [13] Y. Huo, M. Zhang, G. Liu, H. Lu, Y. Gao, G. Yang, J. Wen, H. Zhang, B. Xu, W. Zheng, et al. Wenlan: Bridging vision and language by large-scale multi-modal pre-training. arXiv preprint arXiv:2103.06561, 2021.
181
+ [14] C. Jia, Y. Yang, Y. Xia, Y.-T. Chen, Z. Parekh, H. Pham, Q. V. Le, Y. Sung, Z. Li, and T. Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, 2021.
182
+ [15] W. Kim, B. Son, and I. Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, 2021.
183
+ [16] V. V. Kindratenko, J. J. Enos, G. Shi, M. T. Showerman, G. W. Arnold, J. E. Stone, J. C. Phillips, and W.-m. Hwu. Gpu clusters for high-performance computing. In 2009 IEEE International Conference on Cluster Computing and Workshops, pages 1-8. IEEE, 2009.
184
+ [17] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. arXiv preprint arXiv:1602.07332, 2016.
185
+ [18] W. Lan, X. Li, and J. Dong. Fluency-guided cross-lingual image captioning. In Proceedings of the 25th ACM international conference on Multimedia, pages 1549–1557, 2017.
186
+ [19] L. H. Li, M. Yatskar, D. Yin, C.-J. Hsieh, and K.-W. Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019.
187
+ [20] X. Li, W. Lan, J. Dong, and H. Liu. Adding chinese captions to images. In Proceedings of the 2016 ACM on international conference on multimedia retrieval, pages 271-275, 2016.
188
+ [21] X. Li, C. Xu, X. Wang, W. Lan, Z. Jia, G. Yang, and J. Xu. Coco-cn for cross-lingual image tagging, captioning, and retrieval. IEEE Transactions on Multimedia, 21(9):2347–2360, 2019.
189
+
190
+ [22] J. Lin, R. Men, A. Yang, C. Zhou, M. Ding, Y. Zhang, P. Wang, A. Wang, L. Jiang, X. Jia, et al. M6: A chinese multimodal pretrainer. arXiv preprint arXiv:2103.00823, 2021.
191
+ [23] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014.
192
+ [24] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012-10022, 2021.
193
+ [25] I. Loshchilov and F. Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
194
+ [26] J. Lu, D. Batra, D. Parikh, and S. Lee. Vilbert: pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In International Conference on Neural Information Processing Systems, pages 13–23, 2019.
195
+ [27] D. Mahajan, R. Girshick, V. Ramanathan, K. He, M. Paluri, Y. Li, A. Bharambe, and L. Van Der Maaten. Exploring the limits of weakly supervised pretraining. In Proceedings of the European conference on computer vision (ECCV), pages 181-196, 2018.
196
+ [28] D. Narayanan, M. Shoeybi, J. Casper, P. LeGresley, M. Patwary, V. A. Korthikanti, D. Vainbrand, P. Kashinkunti, J. Bernauer, B. Catanzaro, et al. Efficient large-scale language model training on GPU clusters. arXiv preprint arXiv:2104.04473, 2021.
197
+ [29] M. Ni, H. Huang, L. Su, E. Cui, T. Bharti, L. Wang, D. Zhang, and N. Duan. M3p: Learning universal representations via multitask multilingual multimodal pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3977-3986, 2021.
198
+ [30] V. Ordonez, G. Kulkarni, and T. Berg. Im2text: Describing images using 1 million captioned photographs. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc., 2011.
199
+ [31] V. Ordonez, G. Kulkarni, and T. Berg. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24, 2011.
200
+ [32] Z. Parekh, J. Baldridge, D. Cer, A. Waters, and Y. Yang. Crisscrossed captions: Extended intramodal and intermodal semantic similarity judgments for ms-coco. arXiv preprint arXiv:2004.15020, 2020.
201
+ [33] T. Pires, E. Schlinger, and D. Garrette. How multilingual is multilingual bert? arXiv preprint arXiv:1906.01502, 2019.
202
+ [34] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649, 2015.
203
+ [35] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021.
204
+ [36] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:1-67, 2020.
205
+ [37] S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He. Zero: Memory optimizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1-16. IEEE, 2020.
206
+ [38] J. Rasley, S. Rajbhandari, O. Ruwase, and Y. He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505-3506, 2020.
207
+ [39] C. Riquelme, J. Puigcerver, B. Mustafa, M. Neumann, R. Jenatton, A. Susano Pinto, D. Keysers, and N. Houlsby. Scaling vision with sparse mixture of experts. Advances in Neural Information Processing Systems, 34, 2021.
208
+ [40] M. Ryoo, A. Piergiovanni, A. Arnab, M. Dehghani, and A. Angelova. Tokenlearner: Adaptive space-time tokenization for videos. Advances in Neural Information Processing Systems, 34, 2021.
209
+ [41] C. Schuhmann, R. Vencu, R. Beaumont, R. Kaczmarczyk, C. Mullis, A. Katta, T. Coombes, J. Jitsev, and A. Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021.
210
+ [42] P. Sharma, N. Ding, S. Goodman, and R. Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556-2565, 2018.
211
+
212
+ [43] Y. Song, S. Shi, J. Li, and H. Zhang. Directional skip-gram: Explicitly distinguishing left and right context for word embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 175–180, 2018.
213
+ [44] K. Srinivasan, K. Raman, J. Chen, M. Bendersky, and M. Najork. Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2443-2449, 2021.
214
+ [45] J. A. Stuart and J. D. Owens. Multi-GPU MapReduce on GPU clusters. In 2011 IEEE International Parallel & Distributed Processing Symposium, pages 1068-1079. IEEE, 2011.
215
+ [46] C. Sun, A. Shrivastava, S. Singh, and A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE international conference on computer vision, pages 843–852, 2017.
216
+ [47] S. Sun, Y.-C. Chen, L. Li, S. Wang, Y. Fang, and J. Liu. Lightningdot: Pre-training visual-semantic embeddings for real-time image-text retrieval. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 982–997, 2021.
217
+ [48] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. Yfcc100m: The new data in multimedia research. Communications of the ACM, 59(2):64-73, 2016.
218
+ [49] L. Wang, Y. Li, J. Huang, and S. Lazebnik. Learning two-branch neural networks for image-text matching tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(2):394–407, 2018.
219
+ [50] J. Wehrmann, D. M. Souza, M. A. Lopes, and R. C. Barros. Language-agnostic visual-semantic embeddings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5804–5813, 2019.
220
+ [51] J. Wu, H. Zheng, B. Zhao, Y. Li, B. Yan, R. Liang, W. Wang, S. Zhou, G. Lin, Y. Fu, et al. Ai challenger: A large-scale dataset for going deeper in image understanding. arXiv preprint arXiv:1711.06475, 2017.
221
+ [52] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
222
+ [53] L. Yao, R. Huang, L. Hou, G. Lu, M. Niu, H. Xu, X. Liang, Z. Li, X. Jiang, and C. Xu. Filip: Fine-grained interactive language-image pre-training. In ICLR, 2022.
223
+ [54] Y. You, J. Li, S. Reddi, J. Hseu, S. Kumar, S. Bhojanapalli, X. Song, J. Demmel, K. Keutzer, and C.-J. Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. In International Conference on Learning Representations, 2020.
224
+ [55] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78, 2014.
225
+ [56] X. Zhai, A. Kolesnikov, N. Houlsby, and L. Beyer. Scaling vision transformers. arXiv preprint arXiv:2106.04560, 2021.
226
+ [57] X. Zhai, X. Wang, B. Mustafa, A. Steiner, D. Keysers, A. Kolesnikov, and L. Beyer. Lit: Zero-shot transfer with locked-image text tuning. arXiv preprint arXiv:2111.07991, 2021.
227
+ [58] X. Zhan, Y. Wu, X. Dong, Y. Wei, M. Lu, Y. Zhang, H. Xu, and X. Liang. Product1m: Towards weakly supervised instance-level product retrieval via cross-modal pretraining. In International Conference on Computer Vision, 2021.
228
+ [59] M. Zhou, L. Zhou, S. Wang, Y. Cheng, L. Li, Z. Yu, and J. Liu. Uc2: Universal cross-lingual cross-modal vision-and-language pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4155–4165, 2021.
229
+
230
+ # Appendix
231
+
232
+ # A Examples in Wukong Dataset
233
+
234
+ Figure 3 shows some examples in our dataset. These image-text pairs cover many types of content, e.g., social news, sporting events, and product introductions. Therefore, our dataset is suitable for general-purpose multi-modal pre-training. Additionally, in Figure 4, we visualize the distribution of words (consisting of one or more tokens) in our dataset. We use the Chinese text segmentation
235
+
236
+ ![](images/0b7479100a4a491bea28f200bd07e77011b8642ef00af3c8182b955097e33ceb.jpg)
237
+ 狗子示意来访人员要想进去,先过来扫码,狗子还特意下来用嘴巴对着(The dog signaled to the visitors to scan the code first before entrance, and the dog also deliberately came down and pointed his mouth at it.)
238
+
239
+ ![](images/3cfc670c8c0d9c3341beab0df9793a683f3d12dc12b24f68a68c26fd03086871.jpg)
240
+ 你好,我们是社区工作人员,是来做接种疫苗排查工作的(Hello, we are community workers and are here to do vaccination screening.)
241
+
242
+ ![](images/1f5aa5d47737fa77b4b7ed81844f602d082fcc0e48ad6b5dd10921ca32e944b2.jpg)
243
+ 13-14赛季 英超第5轮 曼城 vs 曼联 13.09.22 (13-14 Premier League Round 5 Manchester City vs Manchester United 13.09.22)
244
+
245
+ ![](images/73a0ee922cb4fa3276a8578a3db17451228bfdeb3ba3e9218cd982b05624054c.jpg)
246
+ 中国骄傲中国女排成功抵达东京不到6天就将在赛场上再展风采 (China pride, the Chinese women's volleyball team, will show its style on the field in less than 6 days right after its arrival in Tokyo)
247
+
248
+ ![](images/8f134b93860ea398a2acc7edd1415835c892939763a7f1dffbbc2260b1b17018.jpg)
249
+ 简欧三居室酒柜装修效果图 (Rendering of a wine cabinet decoration in a simple European-style three-bedroom home)
250
+
251
+ ![](images/b15c3252c4a2b0ff33869b040c1e58cb046662aa1a03bd7bdfb861a0e8770282.jpg)
252
+ 【互邦工厂旗舰店】上海互邦轮椅钢管轻便手动折叠轮椅
253
+ (【Hubang factory flagship store】 Shanghai Hubang lightweight steel-tube manual folding wheelchair)
254
+
255
+ ![](images/383e2235138c89ed6aa54192115454b6b16ff6d83c58d50af667f9e7c2146df7.jpg)
256
+ Figure 3: Examples of image-text pairs in Wukong dataset. A diverse range of concepts are included.
257
+ Figure 4: The word cloud generated with texts in Wukong dataset. For example, "月" means month; "日" is day; "做" is do and "一个" means one.
258
+
259
+ module jieba to generate words and build this word cloud of our dataset. Additionally, regarding the topics or themes of the samples, Figure 5 shows the word frequency of nouns in our dataset. The frequencies naturally follow a long-tail distribution, and a wide range of concepts is covered.
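Such noun statistics can be reproduced in spirit with a few lines of Python; the snippet below is a simplified sketch rather than the authors' pipeline, and uses jieba's part-of-speech tagger to keep nouns only (flags beginning with "n"). The sample captions are taken from Figure 3.

```python
# Simplified sketch of computing noun frequencies over captions with jieba.
from collections import Counter
import jieba.posseg as pseg  # segmentation with part-of-speech tags

def noun_frequencies(captions):
    counter = Counter()
    for text in captions:
        for pair in pseg.cut(text):
            if pair.flag.startswith("n"):  # keep nouns only
                counter[pair.word] += 1
    return counter

freqs = noun_frequencies(["中国骄傲中国女排成功抵达东京", "简欧三居室酒柜装修效果图"])
print(freqs.most_common(10))
```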
260
+
261
+ # B Experimental Setup
262
+
263
+ The experimental settings of our model variants are described in Table 7. For better generalization and data efficiency, we employ AutoAugment [4] for image data augmentation, which effectively builds more image-text pairs. All of our models are trained using Nvidia V100 GPUs and Ascend cards. Specifically, WukongViT-B is trained using 32 GPUs for 3 days, WukongViT-L is trained using 32 GPUs for 10 days, and WukongSwin-L is trained using 40 GPUs for 5 days. We use the LAMB optimizer [54] and a cosine learning rate schedule with a linear warmup [25].
264
+
265
+ ![](images/e3280044bb3f472987e979248c5a2efcf31354699ec65e4501025ea4edc9e94c.jpg)
266
+ Figure 5: The word frequency of nouns in our dataset. A wide range of concepts are covered.
267
+
268
+ Table 7: Detailed settings of our model variants. The resolution of image is $224 \times 224$ and the length of text is 32.
269
+
270
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Image encoder</td><td rowspan="2">Linear projected embeddings</td><td rowspan="2">Token reduction</td><td colspan="3">Text encoder</td><td rowspan="2">#Parameters</td></tr><tr><td>#layers</td><td>#heads</td><td>width</td></tr><tr><td>WukongViT-B</td><td>ViT-B/32</td><td>256</td><td>12</td><td>12</td><td>8</td><td>512</td><td>136M</td></tr><tr><td>WukongViT-L</td><td>ViT-L/14</td><td>256</td><td>24</td><td>12</td><td>12</td><td>768</td><td>404M</td></tr><tr><td>WukongSwin-L</td><td>Swin-L</td><td>256</td><td>12</td><td>12</td><td>12</td><td>768</td><td>297M</td></tr></table>
271
+
272
+ Table 8: Hyper-parameters used in model training.
273
+
274
+ <table><tr><td rowspan="2">Initial Temperature</td><td colspan="3">LAMB</td><td rowspan="2">Total Epochs</td></tr><tr><td>β1</td><td>β2</td><td>ε</td></tr><tr><td>0.07</td><td>0.9</td><td>0.999</td><td>10<sup>-2</sup></td><td>20</td></tr></table>
275
+
276
+ Weight decay regularization is applied to all parameters except for the bias, layer normalization, token embedding, positional embedding, and temperature in the contrastive loss. The detailed hyper-parameters are shown in Table 8. To pick out the optimal checkpoint, the ImageNet dataset [5] with translated class names is used for zero-shot validation.
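A minimal PyTorch-style sketch of these two training details is given below. It is illustrative only: AdamW stands in for LAMB [54] (which is assumed to come from a separate optimizer package in practice), and the learning rate, weight decay, and step counts are placeholder values rather than the ones in Table 8.

```python
# Sketch: parameter groups without weight decay + linear warmup / cosine decay.
import math
import torch

def build_optimizer_and_scheduler(model, base_lr=1e-3, weight_decay=0.05,
                                  warmup_steps=2000, total_steps=100_000):
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # Exclude bias, normalization and embedding-like parameters from weight decay.
        if param.ndim <= 1 or "bias" in name or "norm" in name or "embed" in name:
            no_decay.append(param)
        else:
            decay.append(param)
    optimizer = torch.optim.AdamW(  # stand-in for LAMB
        [{"params": decay, "weight_decay": weight_decay},
         {"params": no_decay, "weight_decay": 0.0}],
        lr=base_lr)

    def lr_lambda(step):
        if step < warmup_steps:  # linear warmup
            return step / max(1, warmup_steps)
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```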
277
+
278
+ # C Supplementary Experiments
279
+
280
+ # C.1 Tokenization for Chinese
281
+
282
+ Table 9 shows the comparison between character-grained and word-grained tokenization in our WukongViT-B model. We use the Python module jieba to perform Chinese word segmentation, splitting Chinese text into words. All experimental settings remain the same except for the tokenization. Results show that WukongViT-B achieves better performance than WukongViT-B-Word. We believe the main reason is that character-grained tokens are more fine-grained than word-grained ones, since a Chinese word often contains more than one character. Such a character-grained method helps models learn the deep semantic token-wise similarity between an image patch and its paired
283
+
284
+ Table 9: Comparison of character-grained and word-grained tokenization methods. The metric is top-1 accuracy (%) of zero-shot image classification. The better result is highlighted in bold.
285
+
286
+ <table><tr><td>Dataset Model</td><td>CIFAR10</td><td>CIFAR100</td><td>Caltech101</td><td>Caltech256</td><td>DTD</td><td>Sports</td><td>Flowers</td><td>SUN397</td><td>EuroSAT</td><td>ImageNet</td><td>Average</td></tr><tr><td>WukongViT-B-Word</td><td>89.1</td><td>62.1</td><td>88.7</td><td>80.8</td><td>29.1</td><td>93.7</td><td>53.3</td><td>49.6</td><td>36.2</td><td>43.9</td><td>62.65</td></tr><tr><td>WukongViT-B</td><td>87.1</td><td>62.6</td><td>89.1</td><td>82.3</td><td>37.3</td><td>95.6</td><td>64.8</td><td>56.0</td><td>32.6</td><td>49.1</td><td>65.65</td></tr></table>
287
+
288
+ fine-grained textual tokens. A typical example from the Chinese ImageNet dataset is that the word "蜂鸟"(hummingbird) consists of two characters: "蜂" (bee) and "鸟" (bird).
289
+
290
+ # C.2 Visualization of Word-patch Alignment
291
+
292
+ Since we follow the fine-grained interaction in FILIP [53], our trained models $\mathrm{FILIP}_{\mathrm{ViT-L}}$ and $\mathrm{FILIP}_{\mathrm{Swin-L}}$ likewise have the capability of capturing the correspondence between images and texts. Note that they are trained using the token-wise similarity. We exclude the models trained with the global similarity since they lack word-patch alignment capability, as evidenced in previous work [53].
293
+
294
+ ![](images/bbb1feb7a7648570e4aa62ecf47427410fe07617061f82aa3f3d65bfbb353adc.jpg)
295
+ (a)豆娘(damselfly:1,2)
296
+
297
+ ![](images/0d45377682c5796c159c138094323ba92eba43c947b5a9b3c805a4b0dd412984.jpg)
298
+
299
+ ![](images/ca888745f6f0878f593a2010db80eb29d8d79da316afc4c4f580d5a39934628d.jpg)
300
+
301
+ ![](images/a9e0f358b580a2ce75555258508ea002ce39a859bfc37b8fc916e854e90a4568.jpg)
302
+ (b)救生艇(lifeboat:1,2,3)
303
+
304
+ ![](images/7e36aae89a3590ac1c997c4b402167c6352105febd166402341a63a21012ed85.jpg)
305
+
306
+ ![](images/dc49e68a14d239288b61d03b56bcd9b8f6576c359bfe220b1a7760ce694d1bb6.jpg)
307
+
308
+ ![](images/95e2a10d167535f441b4ae9ab86c37aa9487168bcadeaf1195ae46fa0fc64815.jpg)
309
+ (c)蜂鸟 (hummingbird:1,2)
310
+
311
+ ![](images/d7be68bf8b97077813cdf345ef6aa78460fdcc64765c8148f3c1b8e936f619fe.jpg)
312
+
313
+ ![](images/2f40b621020e8655874de0fc2ce6e882b261eb623326583b37d1842af87b6783.jpg)
314
+
315
+ ![](images/1aeef2a3a27653d6dee7baa50fc82db3c3c6f7bc7a71669e674164cc50fe5392.jpg)
316
+ Raw Image
317
+
318
+ ![](images/9919507a005e88bfb77a5def08be731f5c4b1fc9c4c5f4ab691fdfa0ad5c3baa.jpg)
319
+ FILIP Swin-L
320
+ (d) iPod (iPod: 1)
321
+
322
+ ![](images/921d0db091eec71cbdac3a94b6c39e0cf6e24bd3c139e0d07cd2de42b66080db.jpg)
323
+ FILIPVIT-L
324
+
325
+ ![](images/2564c48d1cc02a366d9b1af5cd5fb596dffe7b8cac284769d363cc8e71381816.jpg)
326
+ (e) 教堂 (church: 1, 2)
327
+
328
+ ![](images/2fc20a9ec86be6e3f00bb246e55c6c757809f689293079706c6e8a09bb91cdc0.jpg)
329
+
330
+ ![](images/e1afba98056dc032c0d60852abc05d64c6da944eda226804eb6fb28e321b0690.jpg)
331
+
332
+ ![](images/7dee236de11ee0dced47e14ef48dd695b258709f2b5e329b08667ce6b534f3ab.jpg)
333
+ (f) 电风扇 (electric fan: 1, 2, 3)
334
+
335
+ ![](images/f8be7ec86e9762d6171adaef63a60884062686cc7d3bb5c9f5b464c83f52d851.jpg)
336
+
337
+ ![](images/aa6617695dafe2c69ce15040be0e74e8f8d67ee4259501377e9f90646c8991d7.jpg)
338
+
339
+ Figure 6: Visualization of word-patch alignment. We randomly choose six classes in the Chinese ImageNet dataset. Each Chinese label name is used as a prompt, with its English translation given in parentheses, followed by the location indices of this class label in the tokenized textual input. Taking (a) as an example, the number 0 always represents [CLS], the number 1 is the tokenized "豆", and the number 2 is "娘". Indices of the tokenized label name are highlighted in red.
340
+
341
+ As shown in Figure 6, we visualize images from six labels in the Chinese ImageNet. We apply the same visualization method as FILIP [53], to align textual tokens and image patch tokens. In particular, we calculate the token-wise similarity between each image patch token and all tokenized textual tokens from the text label, i.e., [CLS] {class label tokens} [SEP]. For each image patch, the position index of textual tokens with the maximum similarity is considered as its predicted text token. Note that the Chinese class label is often tokenized to more than one token. We highlight all the predicted position indices that correspond to the class label, and place them at the center of the corresponding patches.
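The computation described above can be summarized by the short sketch below; `patch_feats`, `token_feats`, and `label_idx` are hypothetical, pre-computed inputs (L2-normalized token-wise features from the two encoders and the positions of the tokenized class label), not the released visualization code.

```python
# Sketch of word-patch alignment: argmax of token-wise similarity per image patch.
import numpy as np

def align_patches(patch_feats, token_feats, label_idx):
    sim = patch_feats @ token_feats.T      # [n_patches, n_text_tokens] similarities
    pred = sim.argmax(axis=1)              # best-matching text token for each patch
    on_label = np.isin(pred, label_idx)    # patches predicted as the class label
    return pred, on_label

# Toy example: a 7x7 patch grid, 4 text tokens ([CLS], two label tokens, [SEP]).
rng = np.random.default_rng(0)
patch_feats = rng.normal(size=(49, 256))
patch_feats /= np.linalg.norm(patch_feats, axis=1, keepdims=True)
token_feats = rng.normal(size=(4, 256))
token_feats /= np.linalg.norm(token_feats, axis=1, keepdims=True)
pred, on_label = align_patches(patch_feats, token_feats, label_idx=[1, 2])
```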
342
+
343
+ From Figure 6, we find, to our surprise, that both models are able to locate the image patches of the target object. For $\mathrm{FILIP}_{\mathrm{ViT-L}}$, with each image patchified into a $16 \times 16$ grid, the word-patch alignment is more fine-grained than for $\mathrm{FILIP}_{\mathrm{Swin-L}}$, whose output resolution is $7 \times 7$ . Taking Figure 6 (e) as an example, $\mathrm{FILIP}_{\mathrm{ViT-L}}$ is even able to align the Chinese tokens "教" and "堂", which together form the word for church, to the smaller church in the bottom-right corner. $\mathrm{FILIP}_{\mathrm{ViT-L}}$ also outlines the hummingbird well in Figure 6 (c), while $\mathrm{FILIP}_{\mathrm{Swin-L}}$ often aligns only to the main body of the target object. Another interesting observation is that these Chinese pre-trained models are able to align image patches to English tokens, as shown in Figure 6 (d). The main reason is that the vocabulary inherited from BERT [7] also includes multilingual words such as "iPod".
344
+
345
+ Overall, this visualization confirms that our released models pre-trained on the Wukong dataset indeed learn the correspondence between images and Chinese texts and, at an even finer granularity, the alignment between image patches and words. This capability of aligning words and patches offers a potential solution for image object localization.
346
+
347
+ # D Downstream Datasets
348
+
349
+ # D.1 Prompt Template
350
+
351
+ As previously observed in GPT-3 [1], zero-shot performance can be significantly improved by customizing the prompt templates to each task. CLIP [35] also shows that specifying the category for each dataset contributes to the performance. However, since we only aim to provide a Chinese dataset with a general benchmarking of our released models, we leave such "prompt engineering" to future work. We simply use the 80 general English prompts reported in CLIP and translate them to Chinese manually. Note that $\{\}$ is replaced by the exact Chinese label name. We release these Chinese prompts for fair comparison in the community. Below are all the 80 Chinese prompts and the corresponding English prompts; a minimal usage sketch follows the prompt lists.
352
+
353
+ Chinese Prompts:“{}的照片。”,“许多{}的照片。”,“一张包含{}的照片。”,“质量差的{}的照片。”,“{}的雕塑。”,“难以看到{}的照片。”,“{}的低分辨率照片。”,“{}的渲染。”,“涂鸦{}。”,“{}的糟糕照片。”,“{}的裁剪照片。”,“{}的纹身。”,“{}的刺绣照片。”,“很难看到{}的照片。”,“{}的明亮照片。”,“一张干净的{}的照片。”,“{}的深色照片。”,“{}的手绘画。”“我的{}的照片。”“不自然的{}的照片。”,“一张酷的{}的照片。”,“{}的特写照片。”,“{}的黑白照片。”,“一幅{}的画。”,“一幅{}的绘画。”“一张{}的像素照片。”,“{}的雕像。”“一张{}的明亮照片。”,“{}的裁剪照片。”“人造的{}的照片。”“一张关于{}的照片。”“损坏的{}的jpeg照片。”,“{}的模糊照片。”,“{}的相片。”“一张{}的好照片。”,“{}的渲染照。”“视频游戏中的{}。”“一张{}的照片。”“{}的涂鸦。”“{}的近距离照片。”,“{}的折纸。”,“{}在视频游戏中。”,“{}的草图。”,“{}的涂鸦照。”,“{}的折纸形状。”“低分辨率的{}的照片。”“玩具{}。”,“{}的副本。”,“{}的干净的照片。”“一张大{}的照片。”,“{}的重现。”“一张漂亮的{}的照片。”“一张奇怪的{}的照片。”“模糊的{}的照片。”“卡通{}”。“{}的艺术作品。”“{}的素描。”“刺绣{}。”“{}的像素照。”“{}的拍照。”“{}的损坏的照片。”“高质量的{}的照片。”“毛绒玩具{}。”“漂亮的{}的照片。”“小{}的照片。”“照片是奇怪的{}。”“漫画{}。”“{}的艺术照。”“{}的图形。”“大{}的照片。”“黑白的{}的照片。”“{}毛绒玩具。”“一张{}的深色照片。”“{}的摄影图。”“{}的涂鸦照。”“玩具形状的{}。”“拍了{}的照片。”“酷酷的{}的照片。”“照片里的小{}”。“{}的刺青。
354
+
355
+ English Prompts: "a photo of a {}.","a bad photo of a {}.","a photo of many {}.","a sculpture of a {}.","a photo of the hard to see {}.","a low resolution photo of the {}.","a rendering of a {}.","graffiti of a {}.","a bad photo of the {}.","a cropped photo of the {}.","a tattoo of a {}.","the embroidered {}.","a photo of a hard to see {}.","a bright photo of a {}.","a photo of a clean {}.","a photo of a dirty {}.","a dark photo of the {}.","a drawing of a {}.","a photo of my {}.","the plastic {}.","a photo of the cool {}.","a close-up photo of a {}.","a black and white photo of the {}.","a painting of the {}.","a painting of a {}.","a pixelated photo of the {}.","a sculpture of the {}.","a bright photo of the {}.","a cropped photo of a {}.","a plastic {}.","a photo of the dirty {}.","aJPEG corrupted photo of a {}.","a blurry photo of the {}.","a photo of the {}.","a good photo of the {}.","a rendering of the {}.","a {} in a video game.",“a photo of one {}.","a doodle of a {}.","a close-up photo of the {}.","the origami {}.","the {} in a video game.",“a sketch of a {}.","a doodle of the {}.","a origami {}.","a low resolution photo of a {}.","the toy {}.","a rendition of the {}.","a photo of the clean {}.","a photo of a large {}.","a rendition of a {}.","a photo of a nice {}.","a
356
+
357
+ photo of a weird {}.”, “a blurry photo of a {}.”, “a cartoon {}.”, “art of a {}.”, “a sketch of the {}.”, “a embroidered {}.”, “a pixelated photo of a {}.”, “itap of the {}.”, “a jpeg corrupted photo of the {}.”, “a good photo of a {}.”, “a plushie {}.”, “a photo of the nice {}.”, “a photo of the small {}.”, “a photo of the weird {}.”, “the cartoon {}.”, “art of the {}.”, “a drawing of the {}.”, “a photo of the large {}.”, “a black and white photo of a {}.”, “the plushie {}.”, “a dark photo of a {}.”, “itap of a {}.”, “graffiti of the {}.”, “a toy {}.”, “itap of my {}.”, “a photo of a cool {}.”, “a photo of a small {}.”, “a tattoo of the {}.”
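As referenced above, the following is a minimal sketch of how these prompts are typically used for zero-shot classification; `encode_text` and `encode_image` are hypothetical stand-ins for the text and image encoders of a pre-trained model and are assumed to return L2-normalized embeddings.

```python
# Sketch of prompt-ensembled zero-shot classification (illustrative only).
import numpy as np

def build_classifier(class_names, prompts, encode_text):
    weights = []
    for name in class_names:
        # Fill "{}" with the Chinese label name and average over all prompts.
        embs = np.stack([encode_text(p.format(name)) for p in prompts])
        w = embs.mean(axis=0)
        weights.append(w / np.linalg.norm(w))
    return np.stack(weights)               # [n_classes, d]

def classify(image, classifier, encode_image):
    v = encode_image(image)                # [d], L2-normalized image embedding
    return int(np.argmax(classifier @ v))  # index of the predicted class
```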
358
+
359
+ # D.2 Datasets for Image-text Retrieval
360
+
361
+ The data scale of each dataset for image-text retrieval is described in Table 10. The texts in Flickr8K-CN, COCO-CN, and AIC-ICC are human-annotated; the texts in the Flickr30K-CN train/val sets are machine-translated, while those in the Flickr30K-CN test set are human-translated from their original English counterparts. In Flickr8K-CN, Flickr30K-CN, and AIC-ICC, each image is paired with 5 texts. In COCO-CN, each image is paired with 1 to 2 texts. In MUGE, each text is paired with 1 to 2 images in the train set, and with about 6 images in the val/test sets.
362
+
363
+ Table 10: Statistics of each image-text retrieval dataset.
364
+
365
+ <table><tr><td>Dataset</td><td>split</td><td>#Images</td><td>#Sentences</td></tr><tr><td rowspan="3">Flickr8K-CN [20]</td><td>train</td><td>6,000</td><td>30,000</td></tr><tr><td>val</td><td>1,000</td><td>5,000</td></tr><tr><td>test</td><td>1,000</td><td>5,000</td></tr><tr><td rowspan="3">Flickr30K-CN [18]</td><td>train</td><td>29,783</td><td>148,915</td></tr><tr><td>val</td><td>1,000</td><td>5,000</td></tr><tr><td>test</td><td>1,000</td><td>5,000</td></tr><tr><td rowspan="3">COCO-CN [21]</td><td>train</td><td>18,341</td><td>20,065</td></tr><tr><td>val</td><td>1,000</td><td>1,100</td></tr><tr><td>test</td><td>1,000</td><td>1,053</td></tr><tr><td rowspan="4">AIC-ICC [51]</td><td>train</td><td>210,000</td><td>1,050,000</td></tr><tr><td>val</td><td>30,000</td><td>150,000</td></tr><tr><td>test-1</td><td>30,000</td><td>150,000</td></tr><tr><td>test-2</td><td>30,000</td><td>150,000</td></tr><tr><td rowspan="3">MUGE [22]</td><td>train</td><td>129,380</td><td>248,786</td></tr><tr><td>val</td><td>29,806</td><td>5,008</td></tr><tr><td>test</td><td>30,399</td><td>5,004</td></tr><tr><td>Wukong-Test</td><td>val</td><td>33,365</td><td>33,365</td></tr></table>
366
+
367
+ # E Limitations and Societal Impacts
368
+
369
+ The Wukong dataset may only contain the concepts and language expressions current at the time of collection. Since language evolves with human activities, our dataset cannot cover concepts, words, and expressions that newly emerge in the future. The same holds for the image data, where new visual objects or designs cannot be covered. However, fine-tuning pre-trained models on up-to-date data can address this issue. In addition, our dataset is built on corpora from the Chinese Internet, which means the vocabulary and expressions more or less reflect Chinese culture. Also, the corpus contains more written than spoken language, which might introduce some bias. Another limitation is the absence of very long texts in our dataset; therefore, the ability of our released models to understand documents might be limited. Furthermore, in terms of societal impacts, our dataset is built for general purposes, with images and texts collected from unrestricted domains. Models trained on this dataset might express some undesirable and uncontrollable tendencies in terms of image-text correspondence. Therefore, although our released models are discriminative, special attention is still suggested in practical use.
370
+
371
+ # F Hosting and Maintenance Plan
372
+
373
+ Long-term maintenance of Wukong and Wukong-Test, as well as of the models proposed and evaluated in our paper, will be carried out by the authors. The dataset website, containing introductions, benchmarks, terms of use, and any future improvements, is hosted on GitHub Pages, a widely used website hosting service. In terms of content hosting, there are three parts: code, models, and datasets. All of them are hosted on open platforms from which each individual can download freely. For the evaluation code, the PyTorch version is hosted on GitHub and the MindSpore version is hosted on Gitee, an open-source code hosting platform specialized for Chinese users. The model checkpoints trained in our paper are hosted on Google Drive. The datasets, including Wukong and Wukong-Test, are hosted on Google Drive, with Baidu Cloud, a widely used cloud storage service in China, as a backup.
374
+
375
+ # G License
376
+
377
+ Unless specifically labeled otherwise, our released datasets are provided to You under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License ("CC BY-NC-SA 4.0"), with the additional terms included herein. The CC BY-NC-SA 4.0 may be accessed at https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode. When You download or use the datasets from our website or elsewhere, You are agreeing to comply with the terms of CC BY-NC-SA 4.0, and also agreeing to the dataset Terms. Where these dataset Terms conflict with the terms of CC BY-NC-SA 4.0, these dataset Terms shall prevail. We reiterate that this dataset is to be used only for non-commercial purposes such as academic research, teaching, or scientific publications. We prohibit You from using the dataset or any derivative works for commercial purposes, such as selling data or using it for commercial gain.
2202.06xxx/2202.06767/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a15e5dbb9e48711599f367ce68afbc484e1666c44557da4e5c7d5c978e591840
3
+ size 1276743
2202.06xxx/2202.06767/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06804/db9c788c-9318-46d1-9180-61cb7efe0259_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06804/db9c788c-9318-46d1-9180-61cb7efe0259_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06804/db9c788c-9318-46d1-9180-61cb7efe0259_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:58d0c210d827acb111ffa90862db75bdaba0a05c8b93a977fdffd4295fa0926b
3
+ size 1929901
2202.06xxx/2202.06804/full.md ADDED
@@ -0,0 +1,449 @@
1
+ # Flexible learning of quantum states with generative query neural networks
2
+
3
+ Yan Zhu $^{1}$ , Ya-Dong Wu $^{1,*}$ , Ge Bai $^{1}$ , Dong-Sheng Wang $^{2}$ , Yuexuan Wang $^{1,3}$ and Giulio Chiribella $^{1,4,5,\dagger}$
4
+
5
+ $^{1}$ QICI Quantum Information and Computation Initiative, Department of
6
+
7
+ Computer Science, The University of Hong Kong, Pokfulam Road, Hong Kong.
8
+
9
+ $^{2}$ CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics,
10
+
11
+ Chinese Academy of Sciences, Beijing 100190, People's Republic of China.
12
+
13
+ <sup>3</sup>College of Computer Science and Technology, Zhejiang University, Hangzhou, China.
14
+
15
+ $^{4}$ Department of Computer Science, Parks Road, Oxford, OX1 3QD, United Kingdom.
16
+
17
+ <sup>5</sup>Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada.
18
+
19
+ These authors contributed equally: Yan Zhu, Ya-Dong Wu.
20
+
21
+ Emails: *yadongwu@hku.hk, †giulio@cs.hku.hk
22
+
23
+ Deep neural networks are a powerful tool for the characterization of quantum states. Existing networks are typically trained with experimental data gathered from the specific quantum state that needs to be characterized. But is it possible to train a neural network offline and to make predictions about quantum states other than the ones used for the training? Here we introduce a model of network that can be trained with classically simulated data from a fiducial set of states and measurements, and can later be used to characterize quantum states that share structural similarities with the states in the fiducial set. With little guidance of quantum physics, the network builds its own data-driven representation of quantum states, and then uses it to predict the outcome statistics of quantum measurements that have not been performed yet. The state representation produced by the network can also be used for tasks beyond the prediction of outcome statistics, including clustering of quantum states and identification of different phases of matter. Our network model provides a flexible approach that can be applied to online learning scenarios, where predictions must be generated as soon as experimental data become available, and to blind learning scenarios where the learner has only access to an encrypted description of the quantum hardware.
24
+
25
+ # I. INTRODUCTION
26
+
27
+ Accurate characterization of quantum hardware is crucial for the development, certification, and benchmarking of new quantum technologies [1]. Accordingly, major efforts have been invested in developing suitable techniques for characterizing quantum states, including quantum state tomography [2-6], classical shadow estimation [7, 8], partial state characterization [9, 10] and quantum state learning [11-14]. Recently, the dramatic development of artificial intelligence has inspired new approaches based on machine learning methods [15]. In particular, a sequence of works has explored applications of neural networks to various state characterization tasks [16-26].
28
+
29
+ In the existing quantum applications, neural networks are typically trained using experimental data generated from the specific quantum state that needs to be characterized. As a consequence, the information learnt in the training phase cannot be directly transferred to other states: for a new quantum state, a new training procedure must be carried out. This structural limitation affects the learning efficiency in applications involving multiple quantum states, including important tasks such as quantum state clustering [27], quantum state classification [28], and quantum cross-platform verification [29].
30
+
31
+ In this paper, we develop a flexible model of neural network that can be trained offline using simulated data from a fiducial set of states and measurements, and is capable of learning multiple quantum states that share structural similarities with the fiducial states, such as
32
+
33
+ being ground states in the same phase of a quantum many-body system. Our model, called generative query network for quantum state learning (GQNQ), takes advantage of a technique originally developed in classical image processing for learning 3D scenes from 2D snapshots taken from different viewpoints [30]. The key idea is to use a representation network [31] to construct a data-driven representation of quantum states, and then to feed this representation into a generation network [32] that predicts the outcome statistics of quantum measurements that have not been performed yet. The state representations produced by GQNQ enable applications where multiple states have to be compared, such as quantum state clustering or the identification of different phases of matter. The applications of GQNQ are illustrated with numerical experiments on multiqubit states, including ground states of Ising models and XXZ models, and continuous-variable quantum states, both Gaussian and non-Gaussian.
34
+
35
+ The deep learning techniques developed in this work can be applied to real-time control and calibration of various quantum state preparation devices. They can also be applied to online learning scenarios wherein predictions have to be made as soon as data become available, and to blind learning scenarios where the learner has to predict the behaviour of a quantum hardware without having access to its quantum description, but only to an encrypted parametrization.
36
+
37
+ # II. RESULTS
38
+
39
+ Quantum state learning framework. In this work we adopt a learning framework inspired by the task of "pretty good tomography" [11]. An experimenter has a source that produces quantum systems in some unknown quantum state $\rho$ . The experimenter's goal is to characterize $\rho$ , becoming able to make predictions on the outcome statistics of a set of measurements of interest, denoted by $\mathcal{M}$ . Each measurement $M \in \mathcal{M}$ corresponds to a positive operator-valued measure (POVM), that is, a set of positive operators $M \coloneqq (M_j)_{j=1}^k$ acting on the system's Hilbert space and satisfying the normalization condition $\sum_{j=1}^{k} M_j = 1$ (without loss of generality, we assume that all the measurements in $\mathcal{M}$ have the same number of outcomes, denoted by $k$ ).
40
+
41
+ To characterize the state $\rho$ , the experimenter performs a finite number of measurements $M_{i}$ , $i \in \{1, \ldots, s\}$ , picked at random from $\mathcal{M}$ . This random subset of measurements will be denoted by $\mathcal{S} = \{M_{i}\}_{i=1}^{s}$ . Note that in general both $\mathcal{M}$ and $\mathcal{S}$ may not be informationally complete.
42
+
43
+ Each measurement in $S$ is performed multiple times on independent copies of the quantum state $\rho$ , obtaining a vector of experimental frequencies $\pmb{p}_i$ . Using this data, the experimenter attempts to predict the outcome statistics of a new, randomly chosen measurement $M' \in \mathcal{M} \backslash S$ . For this purpose, the experimenter uses the assistance of an automated learning system (e.g., a neural network), hereafter called the learner. For each measurement $M_i \in S$ , the experimenter provides the learner with a pair $(m_i, p_i)$ , where $m_i$ is a parametrization of the measurement $M_i$ , and $p_i$ is the vector of experimental frequencies for the measurement $M_i$ . Here the parametrization $m_i$ could be the full description of the POVM $M_i$ , or a lower-dimensional parametrization valid only for measurements in the set $\mathcal{M}$ . For example, if $\mathcal{M}$ contains measurements of linear polarization, a measurement in $\mathcal{M}$ could be parametrized by the angle $\theta$ of the corresponding polarizer. The parametrization could also be encrypted, so that the actual description of the quantum hardware in the experimenter's laboratory is concealed from the learner.
44
+
45
+ To obtain a prediction for a new, randomly chosen measurement $M' \in \mathcal{M} \setminus \mathcal{S}$ , the experimenter sends its parametrization $m'$ to the learner. The learner's task is to predict the correct outcome probabilities $p_{\mathrm{true}}' = \left(\operatorname{tr}\left(\rho M_j'\right)\right)_{j=1}^k$ . This task includes as special case quantum state reconstruction, corresponding to the situation where the subset $S$ is informationally complete.
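For concreteness, the quantity the learner must predict is just the Born-rule statistics of the queried POVM. A minimal numpy sketch of this prediction target, with a single-qubit example of our own choosing, is:

```python
# Sketch: the prediction target p'_j = tr(rho M'_j) for a POVM (M'_j).
import numpy as np

def born_probabilities(rho, povm):
    """rho: (d, d) density matrix; povm: list of positive operators summing to I."""
    return np.real(np.array([np.trace(rho @ M) for M in povm]))

# Example: the state |0><0| measured in the Pauli-X basis.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)   # |+><+|
minus = np.eye(2) - plus                                  # |-><-|
print(born_probabilities(rho, [plus, minus]))             # [0.5 0.5]
```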
46
+
47
+ Note that, a priori, the learner may have no knowledge about quantum physics whatsoever. The ability to make reliable predictions about the statistics of quantum measurements can be gained automatically through a training phase, where the learner is presented with data and adjusts its internal parameters in a data-driven way. In previous works [16, 17, 19, 20, 24, 26], the training was
48
+
49
+ based on experimental data gathered from the same state $\rho$ that needs to be characterized. In the following, we will provide a model of learner that can be trained with data from a fiducial set of quantum states that share some common structure with $\rho$ , but can generally be different from $\rho$ . The density matrices of the fiducial states can be completely unknown to the learner. In fact, the learner does not even need to be provided a parametrization of the fiducial states: the only piece of information that the learner needs to know is which measurement data correspond to the same state.
50
+
51
+ The GQNQ network. Our model of learner, GQNQ, is a neural network composed of two main parts: a representation network [31], producing a data-driven representation of quantum states, and a generation network [32], making predictions about the outcome probabilities of quantum measurements that have not been performed yet. The combination of a representation network and a generation network is called a generative query network [30]. This type of neural network was originally developed for the classical task of learning 3D scenes from 2D snapshots taken from different viewpoints. The intuition for adapting this model to the quantum domain is that the statistics of a fixed quantum measurement can be regarded as a lower-dimensional projection of a higher-dimensional object (the quantum state), in a way that is analogous to a 2D projection of a 3D scene. The numerical experiments reported in this paper indicate that this intuition is indeed correct, and that GQNQ works well even in the presence of errors in the measurement data and fluctuations due to finite statistics.
52
+
53
+ The structure of GQNQ is illustrated in Fig. 1, where we also provide a comparison with quantum state tomography. The first step is to produce a representation of the unknown quantum state $\rho$ . In GQNQ, this step is carried out by a representation network, which computes a function $f_{\xi}$ depending on parameters $\xi$ that are fixed after the training phase (see Methods for details). The representation network receives as input the parametrization of all measurements in $\mathcal{S}$ and their outcome statistics on the state $\rho$ that needs to be characterized. For each pair $(m_i,p_i)$ , the representation network produces a vector $\boldsymbol{r}_i = f_{\xi}(\boldsymbol{m}_i,\boldsymbol{p}_i)$ . The vectors corresponding to different pairs are then combined into a single vector $\boldsymbol{r}$ by an aggregate function $\mathcal{A}$ . For simplicity, we take the aggregate function to be the average, namely $\boldsymbol{r} := \frac{1}{s}\sum_{i=1}^{s}\boldsymbol{r}_i$ . At this point, the vector $\boldsymbol{r}$ is a representation of the quantum state $\rho$ .
54
+
55
+ While tomographic protocols strive to find the density matrix that fits the measurement data, GQNQ is not constrained to a specific choice of state representation. This additional freedom enables the network to construct lower-dimensional representations of quantum states with sufficiently regular structure, such as ground states in well-defined phases of matter, and to make predictions for states that did not appear in the training phase. Notice also that the tomographic reconstruction of the density matrix using statistical estimators, such as
56
+
57
+ ![](images/4ea71514998cfe2cb225de1e46c12cad6cb4a00ed93ce67413e34929b4e395cd.jpg)
58
+
59
+ ![](images/c014211bbb219e7da819f699ec3200f05cda963135242805115519c6e010a97c.jpg)
60
+ Figure 1. Structure of GQNQ and comparison with quantum state tomography. In GQNQ (a), a representation network receives as input the raw measurement data $\{(m_i, p_i)\}_{i=1}^s$ and produces as output $s$ vectors $\boldsymbol{r}_i = f_{\boldsymbol{\xi}}(\boldsymbol{m}_i, \boldsymbol{p}_i)$ , that are combined into a single vector $\boldsymbol{r}$ by an aggregate function $\mathcal{A}$ . The vector $\boldsymbol{r}$ serves as a concise representation of the quantum state, and is sent to a generation network $g_{\boldsymbol{\eta}}$ , which predicts the outcome statistics $\boldsymbol{p}'$ of any desired measurement $\boldsymbol{m}'$ in the set of measurements of interest. In quantum tomography (b), the raw measurement data are fed into a statistical estimator (such as maximum likelihood), which produces a guess for the density matrix $\rho$ . Then, the density matrix is used to predict the outcome probabilities of unperformed quantum measurements via the Born rule. Both GQNQ and quantum tomography use data to infer a representation of the quantum state.
61
+
62
+ maximum likelihood and maximum entropy [33], is generally more time-consuming than the evaluation of the function $f_{\xi}$ , due to the computational complexity of the estimation procedure.
63
+
64
+ Once a state representation has been produced, the next step is to predict the outcome statistics for a new quantum measurement on the state $\rho$ . In quantum tomography, the prediction is generated by applying the Born rule on the estimated density matrix. In GQNQ, the task is achieved by a generation network [30], which computes a function $g_{\eta}$ depending on some parameters $\pmb{\eta}$ that are fixed after the training phase. The network receives as input the state representation $\pmb{r}$ and the parametrization $\pmb{m^{\prime}}$ of the desired measurement $M^{\prime} \in \mathcal{M} \setminus \mathcal{S}$ , and produces as output a vector $\pmb{p^{\prime}} = g_{\eta}(\pmb{r}, \pmb{m^{\prime}})$ that approximates the outcome statistics of the measurement $M^{\prime}$ on the state $\rho$ .
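The data flow just described can be summarized by the following PyTorch sketch. The actual layer structure of $f_{\xi}$ and $g_{\eta}$ is specified in the paper's Supplementary Note VI; here plain MLPs are used as stand-ins, and the dimensions (48-dimensional measurement parametrization, 32-dimensional representation, 64 = 2^6 outcomes) are borrowed from the six-qubit setting described later.

```python
# Sketch of the GQNQ forward pass: r_i = f_xi(m_i, p_i), r = mean_i r_i, p' = g_eta(r, m').
import torch
import torch.nn as nn

class GQNQSketch(nn.Module):
    def __init__(self, meas_dim=48, prob_dim=64, repr_dim=32, hidden=256):
        super().__init__()
        # Representation network f_xi: (m_i, p_i) -> r_i
        self.f = nn.Sequential(nn.Linear(meas_dim + prob_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, repr_dim))
        # Generation network g_eta: (r, m') -> predicted outcome distribution p'
        self.g = nn.Sequential(nn.Linear(repr_dim + meas_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, prob_dim))

    def forward(self, m, p, m_query):
        # m: [s, meas_dim], p: [s, prob_dim] for the s performed measurements.
        r = self.f(torch.cat([m, p], dim=-1)).mean(dim=0)  # aggregate A = average
        logits = self.g(torch.cat([r, m_query], dim=-1))
        return torch.softmax(logits, dim=-1)               # predicted statistics p'
```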
65
+
66
+ Another difference with quantum tomography is that GQNQ does not require a specific representation of quantum measurements in terms of POVM operators. Instead, a measurement parametrization is sufficient for GQNQ to make its predictions, and the parametrization can even be provided in an encrypted form. Since GQNQ does not require the description of the devices to be provided
67
+
68
+ in the clear, it can be used to perform data analysis on a public server, without revealing properties of the quantum hardware, such as the dimension of the underlying quantum system.
69
+
70
+ So far, we have described the GQNQ procedure for learning a single quantum state $\rho$ . In the case of multiple states, the same procedure is repeated on each state, every time choosing a (generally different) set of measurements $S$ . Crucially, the network does not need any parametrization of the quantum states, nor does it need the states to be sorted into different classes. For example, if the states correspond to different phases of matter, GQNQ does not need to be told which state belongs to which phase. This feature will be important for the applications to state clustering and classification illustrated later in this paper.
71
+
72
+ The internal structure of the representation and generation networks is discussed in Supplementary Note VI. The parameters $\pmb{\xi}$ and $\pmb{\eta}$ are determined in the training phase, in which GQNQ is provided with pairs $(m,p)$ consisting of the measurement parametrization/measurement statistics for a fiducial set of measurements $\mathcal{M}_* \subseteq \mathcal{M}$ , performed on a fiducial set of quantum states $\mathcal{Q}_*$ . In the numerical experiments provided in the
73
+
74
+ Results section, we choose $\mathcal{M}_* = \mathcal{M}$ , that is, we provide the network with the statistics of all the measurement in $\mathcal{M}$ . In the typical scenario, the fiducial states and measurements are known, and the training can be done offline, using computer simulated data rather than actual experimental data.
75
+
76
+ We stress that the parameters $\xi$ and $\eta$ depend only on the fiducial sets $\mathcal{M}_{*}$ and $\mathcal{Q}_{*}$ and on the corresponding measurement data, but do not depend on the unknown quantum states that will be characterized later, nor on the subsets of measurements that will be performed on these states. Hence, the network does not need to be re-trained when it is used to characterize a new quantum state $\rho$ , nor to be re-trained when one changes the subset of performed measurements $\mathcal{S}$ .
77
+
78
+ Summarizing, the main structural features of GQNQ are
79
+
80
+ - Offline, multi-purpose training: training can be done offline using computer generated data. Once the training has been concluded, the network can be used to characterize and compare multiple states.
81
+ - Measurement flexibility: after the training has been completed, the experimenter can freely choose which subset of measurements $\mathcal{S} \subset \mathcal{M}$ is performed on the unknown quantum states.
82
+ - Learner-blindness: the parametrization of the measurements can be provided in an encrypted form. No parametrization of the states is needed.
83
+
84
+ Later in the paper, we will show that GQNQ can be adapted to an online version of the state learning task [13], thus achieving the additional feature of
85
+
86
+ - Online prediction: predictions can be updated as new measurement data become available.
87
+
88
+ Quantum state learning in spin systems. A natural test bed for our neural network model is provided by quantum spin systems [34, 35]. In the following, we consider ground states of the one-dimensional transverse-field Ising model and of the XXZ model, both of which are significant for many-body quantum simulations [36-38]. These two models correspond to the Hamiltonians
89
+
90
+ $$
91
+ H = -\left(\sum_{i=0}^{L-2} J_i \sigma_i^z \sigma_{i+1}^z + \sum_{j=0}^{L-1} \sigma_j^x\right), \tag{1}
92
+ $$
93
+
94
+ and
95
+
96
+ $$
97
+ H = -\sum_{i=0}^{L-2}\left[\Delta_i\left(\sigma_i^x \sigma_{i+1}^x + \sigma_i^y \sigma_{i+1}^y\right) + \sigma_i^z \sigma_{i+1}^z\right], \tag{2}
98
+ $$
99
+
100
+ respectively. In the Ising Hamiltonian (1), positive (negative) coupling parameters $J_{i}$ correspond to ferromagnetic
101
+
102
+ (antiferromagnetic) interactions. For the XXZ Hamiltonian (2), the ferromagnetic phase corresponds to coupling parameters $\Delta_{i}$ in the interval $(-1,1)$ . If instead the coupling parameters fall in the region $(-\infty, -1) \cup (1,\infty)$ , the Hamiltonian is said to be in the XY phase [39].
103
+
104
+ We start by considering a system of six qubits as an example. For the ground states of the Ising model (1), we choose each coupling parameter $J_{i}$ at random following a Gaussian distribution with standard deviation $\sigma = 0.1$ and mean $J$ . For $J > 0$ ( $J < 0$ ), this random procedure has a bias towards ferromagnetic (antiferromagnetic) interactions. For $J = 0$ , ferromagnetic and antiferromagnetic interactions are equally likely. Similarly, for the ground states of the XXZ model (2), we choose each parameter $\Delta_{i}$ at random following a Gaussian distribution with standard deviation 0.1 and mean value $\Delta$ . When $\Delta$ is in the interval $(-1,1)$ ( $(-\infty, -1) \cup (1,\infty)$ ), this random procedure has a bias towards interactions of the ferromagnetic (XY) type.
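A minimal numpy sketch of this state-generation procedure for the Ising Hamiltonian (1) is shown below; exact diagonalization is used here only because it is easily feasible for L = 6 qubits, not because the paper commits to a particular method.

```python
# Sketch: random couplings J_i ~ N(J, 0.1^2) and the ground state of Hamiltonian (1).
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def local_op(op, site, L):
    ops = [I2] * L
    ops[site] = op
    return reduce(np.kron, ops)            # operator acting non-trivially on one site

def ising_ground_state(L=6, J_mean=1.0, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    J = rng.normal(J_mean, sigma, size=L - 1)
    H = sum(-J[i] * local_op(sz, i, L) @ local_op(sz, i + 1, L) for i in range(L - 1))
    H = H + sum(-local_op(sx, j, L) for j in range(L))
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]                      # ground state (2^L-dimensional vector)

psi = ising_ground_state(J_mean=0.5)
```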
105
+
106
+ In addition to the above ground states, we also consider locally rotated GHZ states, of the form $\otimes_{i=1}^{6} U_i |GHZ\rangle$ with $|GHZ\rangle = \frac{1}{\sqrt{2}}(|000000\rangle + |111111\rangle)$ and locally rotated W states, of the form $\otimes_{i=1}^{6} U_i |W\rangle$ with $|W\rangle = \frac{1}{\sqrt{6}}(|100000\rangle + \cdots + |000001\rangle)$ , where $(U_i)_{i=1}^6$ are unitary matrices of the form $U_i = \exp[-\mathrm{i}\theta_{i,z}\sigma_{i,z}]\exp[-\mathrm{i}\theta_{i,y}\sigma_{i,y}]\exp[-\mathrm{i}\theta_{i,x}\sigma_{i,x}]$ , where the angles $\theta_{i,x}, \theta_{i,y}, \theta_{i,z} \in [0,\pi/10]$ are chosen independently and uniformly at random for every $i$ .
107
+
108
+ For the set of all possible measurements $\mathcal{M}$ , we chose the 729 six-qubit measurements consisting of local Pauli measurements on each qubit. To parameterize the measurements in $\mathcal{M}$ , we provide the entries in the corresponding Pauli matrix at each qubit, arranging the entries in a 48-dimensional real vector. The dimension of state representation $\pmb{r}$ is set to be 32, which is half of the Hilbert space dimension. In Supplementary Note VII we discuss how the choice of dimension of $\pmb{r}$ and the other parameters of the network affect the performance of GQNQ.
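+ One concrete way to realize this 48-dimensional encoding is to flatten the real and imaginary parts of the $2\times 2$ Pauli matrix chosen at each qubit. This is a sketch of one possible convention; the exact ordering of the entries used by the authors is not specified here:
+
+ ```python
+ import numpy as np
+
+ PAULI = {
+     "X": np.array([[0, 1], [1, 0]], dtype=complex),
+     "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
+     "Z": np.array([[1, 0], [0, -1]], dtype=complex),
+ }
+
+ def parametrize_measurement(setting):
+     """Map a local Pauli setting, e.g. ('X','Z','Y','X','Z','Z'), to a real vector.
+
+     Each single-qubit Pauli matrix contributes its 4 real and 4 imaginary entries,
+     so a 6-qubit setting becomes a 6 * 8 = 48-dimensional vector.
+     """
+     parts = []
+     for label in setting:
+         m = PAULI[label].flatten()
+         parts.append(np.concatenate([m.real, m.imag]))
+     return np.concatenate(parts)
+
+ print(parametrize_measurement(("X", "Z", "Y", "X", "Z", "Z")).shape)  # (48,)
+ ```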
109
+
110
+ GQNQ is trained using measurement data from measurements in $\mathcal{M}$ on states of the above four types (see Methods for a discussion of the data generation techniques). We consider both the scenarios where all training data come from states of the same type, and where states of different types are used. In the latter case, we do not provide the network with any label of the state type. After training, we test GQNQ on states of the four types described above. To evaluate the performance of the network, we compute the classical fidelities between the predicted probability distributions and the correct distributions computed from the true states and measurements. For each test state, the classical fidelity is averaged over all possible measurements in $\mathcal{M} \setminus S$ , where $S$ is a random subset of 30 Pauli measurements. Then, we average the fidelity over all possible test states.
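+ For concreteness, a minimal sketch of this evaluation step is given below. The paper does not spell out its definition of classical fidelity in this excerpt; here we assume the Bhattacharyya-type expression $F(\pmb{p},\pmb{q}) = \sum_k \sqrt{p_k q_k}$ (some works use its square), and the helper names are ours:
+
+ ```python
+ import numpy as np
+
+ def classical_fidelity(p, q):
+     """Bhattacharyya-type classical fidelity between two discrete distributions."""
+     p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
+     return float(np.sum(np.sqrt(p * q)))
+
+ def average_fidelity(predicted, true):
+     """Average the per-measurement fidelity over a dict {measurement label: distribution}."""
+     return float(np.mean([classical_fidelity(predicted[m], true[m]) for m in true]))
+ ```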
111
+
112
+ The results are summarized in Table I. Each row shows the performance of one particular trained GQNQ when tested using the measurement data from (i) 150 ground states of Ising model with $J \in \{0.1, 0.2, \dots, 1.5\}$ ,
113
+
114
+ (ii) 150 ground states of Ising model with $J \in \{-1.5, -1.4, \dots, -0.1\}$ , where 10 test states are generated per value of $J$ , (iii) 10 ground states of Ising model with $J = 0$ , (iv) 190 ground states of XXZ model with $\Delta \in \{-0.9, -0.8, \dots, 0.9\}$ , (v) 100 ground states of XXZ model with $\Delta \in \{-1.5, -1.4, \dots, -1.1\} \cup \{1.1, 1.2, \dots, 1.5\}$ , where 10 test states are generated per value of $\Delta$ , (vi) all the states from (i) to (v), (vii) 200 locally rotated GHZ states, (viii) 200 locally rotated W
115
+
116
+ states, and (ix) all the states from (i) to (v), together with (vii) and (viii). In the second column, the input data given to GQNQ is the true probability distribution computed with the Born rule, while in the third and fourth columns, the input data given to GQNQ during testing is the finite statistics obtained by sampling the true outcome probability distribution 50 times and 10 times, respectively.
117
+
118
+ Table I. Average classical fidelities between the predictions of GQNQs and the ground truths with respect to different types of six-qubit states.
119
+
120
+ <table><tr><td>Types of states for training and test</td><td>noiseless</td><td>50 shots</td><td>10 shots</td></tr><tr><td>(i) Ising ground states with ferromagnetic bias</td><td>0.9870</td><td>0.9869</td><td>0.9862</td></tr><tr><td>(ii) Ising ground states with antiferromagnetic bias</td><td>0.9869</td><td>0.9867</td><td>0.9849</td></tr><tr><td>(iii) Ising ground states with no bias</td><td>0.9895</td><td>0.9894</td><td>0.9894</td></tr><tr><td>(iv) XXZ ground states with ferromagnetic bias</td><td>0.9809</td><td>0.9802</td><td>0.9787</td></tr><tr><td>(v) XXZ ground states with XY phase bias</td><td>0.9601</td><td>0.9548</td><td>0.9516</td></tr><tr><td>(vi) (i)-(v) together</td><td>0.9567</td><td>0.9547</td><td>0.9429</td></tr><tr><td>(vii) GHZ state with local rotations</td><td>0.9744</td><td>0.9744</td><td>0.9742</td></tr><tr><td>(viii) W state with local rotations</td><td>0.9828</td><td>0.9826</td><td>0.9821</td></tr><tr><td>(ix) (i)-(v), (vii) and (viii) together</td><td>0.9561</td><td>0.9543</td><td>0.9402</td></tr></table>
121
+
122
+ The results shown in Table I indicate that the performance with finite statistics is only slightly lower than the performance in the ideal case. It is also worth noting that GQNQ maintains a high fidelity even when used on multiple types of states.
123
+
124
+ Recall that the results in Table I refer to the scenario where GQNQ is trained with the full set of six-qubit Pauli measurements, which is informationally complete. An interesting question is whether the learning performance would still be good if the training used an informationally incomplete set of measurements. In Supplementary Note IX, we show that fairly accurate predictions can be made even if $\mathcal{M}$ consists only of 72 randomly chosen Pauli measurements.
125
+
126
+ While GQNQ makes accurate predictions for state families with sufficient structure, it should not be expected to work universally well on all possible quantum states. In Supplementary Note VIII, we considered the case where the network is trained and tested on arbitrary six-qubit states, finding that the performance of GQNQ drops drastically. In Supplementary Note X, we also provide numerical experiments on the scenario where some types of states are overrepresented in the training phase, potentially causing overfitting when GQNQ is used to characterize unknown states of an underrepresented type.
127
+
128
+ We now consider multiqubit states with 10, 20, and 50 qubits, choosing the measurement set $\mathcal{M}$ to consist of all two-qubit Pauli measurements on nearest-neighbor qubits and $\mathcal{S}$ a subset containing $s = 30$ measurements
129
+
130
+ randomly chosen from $\mathcal{M}$ . Here the dimension of state representation $\pmb{r}$ is chosen to be 24, which guarantees a good performance in our numerical experiments.
131
+
132
+ For the Ising model, we choose the coupling between each nearest-neighbour pair of spins to be either consistently ferromagnetic for $J \geq 0$ or consistently antiferromagnetic for $J < 0$ : for $J \geq 0$ we replace each coupling $J_{i}$ in Eq. (1) by $|J_{i}|$ , and for $J < 0$ we replace $J_{i}$ by $-|J_{i}|$ . The results are illustrated in Fig. 2. The figure shows that the average classical fidelities in both ferromagnetic and antiferromagnetic regions are close to one, with small drops around the phase transition point $J = 0$ . The case where both ferromagnetic and anti-ferromagnetic interactions are present is studied in Supplementary Note XI, where we observe that the learning performance is less satisfactory in this scenario.
133
+
134
+ For the XXZ model, the average classical fidelities in the XY phase are lower than those in the ferromagnetic interaction region, which is reasonable due to higher quantum fluctuations in the XY phase [35]. At the phase transition points $\Delta = \pm 1$ , the average classical fidelities drop more significantly, partly because the abrupt changes of ground state properties at the critical points make the quantum state less predictable, and partly because the states at phase transition points are less represented in the training data set.
135
+
136
+ Quantum state learning on a harmonic oscillator. We now test GQNQ on states encoded in harmonic oscillators, i.e. continuous-variable quantum states,
137
+
138
+ ![](images/11b65b1abb4dfbd4b5a571e6f731057cbbafad45492c15d236f21c6c58b97629.jpg)
139
+
140
+ ![](images/5702015d386083fb9e233cb016afb58312992a43a9bd6bcba47509cbade03a0f.jpg)
141
+ Figure 2. Performances of GQNQs on Ising model ground states and XXZ model ground states visualized by boxplots [40]. Figure (a) shows the average classical fidelities of predictions given by three GQNQs for ten-, twenty- and fifty-qubit ground states of Ising model (1), respectively, with respect to different values of $J \in \{-1.5, -1.4, \dots, 1.5\}$ . Figure (b) shows the performances of another three GQNQs for ten-, twenty- or fifty-qubit ground states of XXZ model (2), respectively, with respect to different values of $\Delta \in \{-1.5, -1.4, \dots, 1.5\}$ . Given outcome probability distributions for all $m \in S$ , each box shows the average classical fidelities of predicted outcome probabilities, averaged over all measurements in $\mathcal{M} \setminus S$ , for ten instances.
142
+
143
+ including single-mode Gaussian states, as well as non-Gaussian states such as cat states and GKP states [41], both of which are important for fault-tolerant quantum computing [41, 42]. For the measurement set $\mathcal{M}$ , we choose 300 homodyne measurements, that is, 300 projective measurements associated with quadrature operators of the form $(e^{i\theta}\hat{a}^{\dagger} + e^{-i\theta}\hat{a}) / 2$ , where $\hat{a}^{\dagger}$ and $\hat{a}$ are bosonic creation and annihilation operators, respectively, and $\theta$ is a uniformly distributed phase in the interval $[0,\pi)$ . For the subset $\mathcal{S}$ , we pick 10 random quadratures. For the parametrization of the measurements, we simply choose the corresponding phase $\theta$ . Since the homodyne measurements have an unbounded and continuous set of outcomes, here we truncate the outcomes into a finite interval (specifically, at $\pm 6$ ) and discretize them, dividing the interval into 100 bins of equal width. The dimension of the representation vector $\pmb{r}$ is chosen to be 16.
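+ The truncation-and-binning step and the one-parameter measurement encoding can be sketched as follows (the sample data, seed and helper names are ours; only the interval $[-6,6]$ , the 100 bins and the phase parametrization come from the text above):
+
+ ```python
+ import numpy as np
+
+ N_BINS, X_MAX = 100, 6.0
+ bin_edges = np.linspace(-X_MAX, X_MAX, N_BINS + 1)
+
+ def discretize_homodyne_outcomes(samples):
+     """Histogram continuous quadrature outcomes into 100 equal-width bins on [-6, 6]."""
+     samples = np.clip(samples, -X_MAX, X_MAX - 1e-9)  # truncate outliers into the interval
+     counts, _ = np.histogram(samples, bins=bin_edges)
+     return counts / counts.sum()
+
+ # Each homodyne measurement is parametrized simply by its phase theta in [0, pi).
+ rng = np.random.default_rng(0)
+ thetas = np.sort(rng.uniform(0.0, np.pi, size=300))
+ p = discretize_homodyne_outcomes(rng.normal(0.0, 1.0, size=5000))
+ print(p.shape)  # (100,)
+ ```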
144
+
145
+ In Table II we illustrate the performance of GQNQ on (i) 200 squeezed thermal states with thermal variance $V \in [1,2]$ and squeezing parameter $s$ satisfying $|s| \in [0,0.5]$ , $\arg(s) \in [0,\pi]$ , (ii) 200 cat states corresponding to superpositions of coherent states with opposite amplitudes $|\alpha, \phi\rangle_{\mathrm{cat}} := \frac{1}{\sqrt{\mathcal{N}}}(|\alpha\rangle + \mathrm{e}^{\mathrm{i}\phi}|- \alpha\rangle)$ , where $\mathcal{N} = 2(1 +$
146
+
147
+ $\mathrm{e}^{-|\alpha|^2}\cos \phi)$ , $|\alpha |\in [1,3]$ and $\phi \in \{0,\frac{\pi}{8},\dots ,\pi \}$ , (iii) 200 GKP states that are superpositions of displaced squeezed states $|\epsilon ,\theta ,\phi \rangle_{\mathrm{gkp}}\coloneqq \mathrm{e}^{-\epsilon \hat{n}}\left(\cos \theta |0\rangle_{\mathrm{gkp}} + \mathrm{e}^{\mathrm{i}\phi}\sin \theta |1\rangle_{\mathrm{gkp}}\right)$ , where $\hat{n} = \hat{a}^{\dagger}\hat{a}$ is the photon number operator, $\epsilon \in [0.05,0.2]$ , $\theta \in [0,2\pi)$ , $\phi \in [0,\pi ]$ , and $|0\rangle_{\mathrm{gkp}}$ and $|1\rangle_{\mathrm{gkp}}$ are ideal GKP states, and (iv) all the states from (i), (ii), and (iii).
148
+
149
+ For each type of states, we provide the network with measurement data from $s = 10$ random homodyne measurements, considering both the case where the data is noiseless and the case where it is noisy. The noiseless case is shown in the second and third columns of Table II, which show the classical fidelity in the average and worst-case scenario, respectively. In the noisy case, we consider both noise due to finite statistics, and noise due to an inexact specification of the measurements in the test set. The effects of finite statistics are modelled by adding Gaussian noise to each of the outcome probabilities of the measurements in the test. The inexact specification of the test measurements is modelled by rotating each quadrature by a random angle $\theta_{i}$ , chosen independently for each measurement according to a Gaussian distribution.
150
+
151
+ Table II. Performances of GQNQ on continuous-variable quantum states.
152
+
153
+ <table><tr><td>Type of states for training and test</td><td>noiseless (avg)</td><td>noiseless (worst)</td><td>σ(noise)=0.05 (avg)</td><td>σ(noise)=0.05 (worst)</td><td>σ(θ)=0.05 (avg)</td><td>σ(θ)=0.05 (worst)</td></tr><tr><td>(i) Squeezed thermal states</td><td>0.9973</td><td>0.9890</td><td>0.9964</td><td>0.9870</td><td>0.9972</td><td>0.9889</td></tr><tr><td>(ii) Cat states</td><td>0.9827</td><td>0.9512</td><td>0.9674</td><td>0.9053</td><td>0.9822</td><td>0.9461</td></tr><tr><td>(iii) GKP states</td><td>0.9762</td><td>0.9405</td><td>0.9746</td><td>0.9359</td><td>0.9758</td><td>0.9405</td></tr><tr><td>(iv) (i)-(iii) together</td><td>0.9658</td><td>0.9077</td><td>0.9264</td><td>0.8387</td><td>0.9643</td><td>0.9030</td></tr></table>
154
+
155
+ ![](images/bb069b92551158324a8305e93670f2c6ae6272e67a8256d6f39491b40a1cf1e2.jpg)
156
+ Figure 3. Online learning of cat states in 15 time steps. Each red point shows the classical fidelity, averaged over all the possible measurements in $\mathcal{M}$ and over all the cat states in the test set, which we take to be the same as in the experiments in the previous section. Each blue point is the worst-case classical fidelity over all possible query measurements, averaged over all the test states. Real outcome statistics and predicted outcome statistics at quadrature phase $\theta = 3 / 4\pi$ for an example cat state $|2.22 + 1.41\mathrm{i},\pi /4\rangle_{\mathrm{cat}}$ are plotted.
157
+
158
+ The fourth and the fifth columns of Table II illustrate the effects of finite statistics, showing the classical fidelities in the presence of added Gaussian noise with variance 0.05. In the sixth and seventh columns, we include the effect of an inexact specification of the homodyne measurements, introducing Gaussian noise with variance 0.05 on the quadrature phases. In all cases, the classical fidelities of the predictions are computed with respect to the ideal noiseless probability distributions.
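+ The two noise models described above admit a simple sketch. How exactly the noise width of 0.05 enters (as a variance or as a standard deviation) and whether the perturbed probabilities are renormalized are implementation details not specified in the text, so the choices below are assumptions:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(1)
+
+ def add_statistical_noise(p, var=0.05):
+     """Model finite statistics: perturb each outcome probability with Gaussian noise."""
+     noisy = p + rng.normal(0.0, np.sqrt(var), size=p.shape)
+     noisy = np.clip(noisy, 0.0, None)   # keep probabilities non-negative (our choice)
+     return noisy / noisy.sum()          # renormalize (our choice)
+
+ def perturb_quadrature_phases(thetas, var=0.05):
+     """Model an inexact specification of the homodyne settings by jittering each phase."""
+     return thetas + rng.normal(0.0, np.sqrt(var), size=np.shape(thetas))
+
+ p_noisy = add_statistical_noise(np.full(100, 0.01))
+ ```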
159
+
160
+ In Supplementary Note XI we also provide a more detailed comparison between the predictions and the corresponding ground truths in terms of actual probability distributions, instead of their classical fidelities.
161
+
162
+ Application to online learning. After GQNQ has been trained, it can be used for the task of online quantum state learning [13]. In this task, the various pieces of data are provided to the learner at different time steps. At the $i$ -th time step, with $i \in \{1, \dots, n\}$ , the experimenter performs a measurement $M_i$ , obtaining the outcome statistics $\pmb{p}_i$ . The pair $(\pmb{m}_i, \pmb{p}_i)$ , where $\pmb{m}_i$ is the parametrization of $M_i$ , is then provided to the learner, who is asked to predict the measurement outcome probabilities for all measurements in the set $\mathcal{M} \setminus S_i$ with $S_i := \{M_j\}_{j \leq i}$ .
163
+
164
+ Online learning with GQNQ can be achieved with the following procedure. Initially, the state representation
165
+
166
+ vector is set to $\boldsymbol{r}(0) = (0, \ldots, 0)$ . At the $i$ -th time step, GQNQ computes the vector $\boldsymbol{r}_i = f_{\boldsymbol{\xi}}(\boldsymbol{m}_i, \boldsymbol{p}_i)$ and updates the state representation to $\boldsymbol{r}(i) = [(i - 1)\boldsymbol{r}(i - 1) + \boldsymbol{r}_i] / i$ . The updated state representation is then fed into the generation network, which produces the required predictions. Note that updating the state representation does not require time-consuming operations, such as a maximum likelihood analysis. It is also worth noting that GQNQ does not need to store all the measurement data received in the past: it only needs to store the state representation $\boldsymbol{r}(i)$ from one step to the next.
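+ This update rule is just a running mean of the per-measurement representation vectors; a minimal sketch (variable names are ours):
+
+ ```python
+ import numpy as np
+
+ def online_update(r_prev, i, r_i):
+     """Running-mean update r(i) = [(i - 1) * r(i - 1) + r_i] / i of the state representation."""
+     return ((i - 1) * r_prev + r_i) / i
+
+ # Stream a few per-measurement representations r_i through the update rule.
+ d_r = 16
+ r = np.zeros(d_r)                       # r(0) = (0, ..., 0)
+ for i, r_i in enumerate(np.random.randn(15, d_r), start=1):
+     r = online_update(r, i, r_i)        # r(i) is all that needs to be stored between steps
+ ```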
167
+
168
+ A numerical experiment on online learning of cat states is provided in Fig. 3. The figure shows the average classical fidelity at 15 subsequent time steps corresponding to 15 different homodyne measurements performed on copies of unknown cat states. The fidelity increases over time, confirming the intuitive expectation that the learning performance should improve when more measurement data are provided.
169
+
170
+ Application to state clustering and classification. The state representation constructed by GQNQ can also be used to perform tasks other than predicting the outcome statistics of unmeasured POVMs. One such task is state clustering, where the goal is to group the
171
+
172
+ ![](images/21cb03709b279f80c6f7e091bf5ddc0a93f264609b230f1b423b146a02deee21.jpg)
173
+ a
174
+
175
+ ![](images/a2efabbbcb3258da7f95ad32ea79827f12ac56bb77c945eb810b493ba2c7d098.jpg)
176
+ b
177
+ Figure 4. Two-dimensional embeddings of multiqubit and continuous-variable states. Subfigure (a) shows two-dimensional embeddings of state representation vectors produced by GQNQ on Ising model (ferromagnetic and antiferromagnetic) ground states, XXZ model (ferromagnetic and XY phase) ground states, locally rotated GHZ and W states. Subfigure (b) shows two-dimensional embeddings of the state representation vectors of squeezed thermal states, cat states and GKP states. In both subfigures, shaded areas are added to help visualize the various types of states. Note that the representation vectors generated by GQNQ for states of the same type lie close to each other in the two-dimensional embeddings.
178
+
179
+ representations of different quantum states into multiple disjoint sets in such a way that quantum states of the same type fall into the same set.
180
+
181
+ We now show that clusters naturally emerge from the state representations produced by GQNQ. To visualize the clusters, we feed the state representation vectors into a $t$ -distributed stochastic neighbor embedding ( $t$ -SNE) algorithm [43], which produces a mapping of the representation vectors into a two-dimensional plane, according to their similarities. We performed numerical experiments using the types of six-qubit states in Table I and the types of continuous-variable states in Table II. For simplicity, we restricted the analysis to state representation vectors constructed from noiseless input data.
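+ A minimal sketch of this visualization step, using the scikit-learn implementation of $t$ -SNE (the perplexity, initialization and placeholder data are our choices, not the paper's):
+
+ ```python
+ import numpy as np
+ from sklearn.manifold import TSNE
+
+ # reps: one representation vector r per test state, stacked into an (n_states, d_r) array.
+ reps = np.random.randn(600, 32)  # placeholder standing in for the vectors produced by GQNQ
+
+ embedding = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(reps)
+ print(embedding.shape)  # (600, 2), one point per state in the two-dimensional plane
+ ```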
182
+
183
+ The results of our experiments are shown in Fig. 4. The figure shows that states with significantly different physical properties correspond to distant points in the two-dimensional embedding, while states with similar properties naturally appear in clusters. For example, the ground states of the ferromagnetic XXZ model and the ground states in the gapless XY phase are clearly separated in Fig. 4(a), in agreement with the fact that there is an abrupt change of quantum properties at the phase transition point. On the other hand, in Fig. 4(a), the ferromagnetic region of the Ising model is next to the antiferromagnetic region, both of which are gapped and short-range correlated. The ferromagnetic region of the Ising model appears to have some overlap with the region of GHZ states with local rotations, in agreement with the fact that the GHZ state is approximately a ground state of the ferromagnetic Ising model in the large $J$ limit.
184
+
185
+ The visible clusters in the two-dimensional embedding suggest that any unsupervised clustering algorithm could effectively cluster the states according to their representation vectors. To confirm this intuition, we applied a Gaussian mixture model [44] to the state representation vectors and chose the number of clusters to be equal to
186
+
187
+ the actual number of state types (six for the six-qubit states, and three for the continuous-variable states). The fraction of states whose type matches the assigned cluster is $94.67\%$ for the six-qubit states, and $100\%$ for the continuous-variable states.
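+ A sketch of this clustering step is given below. The paper does not say how clusters are matched to state types; here we assume integer-coded type labels and use an optimal one-to-one assignment between clusters and types:
+
+ ```python
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+ from sklearn.mixture import GaussianMixture
+
+ def cluster_match_fraction(reps, labels, n_clusters):
+     """Fit a Gaussian mixture to the representation vectors and return the fraction
+     of states whose true type (0..n_clusters-1) matches the best-aligned cluster."""
+     pred = GaussianMixture(n_components=n_clusters, random_state=0).fit_predict(reps)
+     conf = np.zeros((n_clusters, n_clusters), dtype=int)
+     for c, t in zip(pred, labels):
+         conf[c, t] += 1                       # confusion matrix: cluster c vs true type t
+     row, col = linear_sum_assignment(-conf)   # align clusters to types, maximizing matches
+     return conf[row, col].sum() / len(labels)
+ ```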
188
+
189
+ The state representation produced by GQNQ can also be used to predict physical properties in a supervised model where an additional neural network is provided with labelled examples of states with a given property. In this setting, supervision can enable a more refined classification of quantum states, compared to the unsupervised clustering discussed before.
190
+
191
+ To illustrate the idea, we considered the problem of distinguishing between two different regimes in the Ising model, namely a regime where ferromagnetic interactions dominate $(J > 1)$ , and a regime where both ferromagnetic and antiferromagnetic interactions are present $(0 < J < 1)$ . For convenience, we refer to these two regimes as the pure and mixed ferromagnetic regimes, respectively. We use an additional neural network to learn whether a ground state corresponds to a Hamiltonian in the pure ferromagnetic regime or in the mixed one, using the state representation $\pmb{r}$ of Ising ground states with ferromagnetic bias obtained from noiseless measurement data. The prediction reaches a success rate of $100\%$ , $100\%$ and $99\%$ for ten-qubit, twenty-qubit and fifty-qubit ground states in our test sets, respectively. These high values can be contrasted with the clustering results in Fig. 4, where the pure ferromagnetic regime and the mixed one appear close to each other in the two-dimensional embedding.
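+ The architecture of this additional classifier is not specified in the excerpt; as an illustrative stand-in, a small fully-connected classifier trained on the representation vectors could look as follows (placeholder data, labels and hyperparameters are ours):
+
+ ```python
+ import numpy as np
+ from sklearn.neural_network import MLPClassifier
+
+ # Placeholder data standing in for GQNQ representation vectors of Ising ground states,
+ # labelled 1 for the pure ferromagnetic regime (J > 1) and 0 for the mixed one (0 < J < 1).
+ rng = np.random.default_rng(0)
+ reps = rng.normal(size=(300, 24))
+ labels = rng.integers(0, 2, size=300)
+
+ clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
+ clf.fit(reps[:250], labels[:250])
+ print(clf.score(reps[250:], labels[250:]))  # held-out success rate
+ ```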
192
+
193
+ # III. DISCUSSION
194
+
195
+ Many works have explored the use of generative models for quantum state characterization [16, 17, 19-21], and an
196
+
197
+ Table III. Performances of GQNQ on cat states as an unsupervised learner
198
+
199
+ <table><tr><td>state</td><td>s = 50 (Avg)</td><td>s = 50 (Worst)</td><td>s = 10 (Avg)</td><td>s = 10 (Worst)</td></tr><tr><td>|2,0⟩cat</td><td>0.9918</td><td>0.9614</td><td>0.9912</td><td>0.9610</td></tr><tr><td>|2,π/4⟩cat</td><td>0.9917</td><td>0.9602</td><td>0.9745</td><td>0.9236</td></tr><tr><td>|2.22 + 1.41i, π/4⟩cat</td><td>0.9779</td><td>0.9171</td><td>0.9671</td><td>0.9133</td></tr></table>
200
+
201
+ approach based on representation learning was recently proposed by Iten et al [45]. The key difference between GQNQ and previous approaches concerns the training phase. In most previous works, the neural network is trained to reconstruct a single quantum state from experimental data. While this procedure can in principle be applied to learn any state, the training is state-specific, and the information learnt by the network through training on a given state cannot be automatically transferred to the reconstruction of a different quantum state, even if that state is of the same type. In contrast, the training of GQNQ works for multiple quantum states and for states of multiple types, thus enabling a variety of tasks, such as quantum state clustering and classification.
202
+
203
+ Another difference with previous works is that the training phase for GQNQ can use classically simulated data, rather than actual experimental data. In other words, the training can be carried out in an offline mode, before the quantum states that need to be characterized become available. By moving the training to offline mode, GQNQ can be significantly faster than other data-driven approaches that need to be trained with experimental data from unknown quantum states. The flip side of this advantage, however, is that offline training requires partial supervision, which is not required in other state reconstruction approaches [16, 17, 19]. Indeed, the training of GQNQ requires quantum states in the same family as the tested state, and in order to implement the training offline one needs a good guess for the type of quantum state that will need to be characterized.
204
+
205
+ The situation is different if the training is done online, with actual experimental data provided from the quantum state to be characterized. In this setting, GQNQ behaves as a completely unsupervised learner that predicts the outcome statistics of unperformed measurements using measurement data obtained solely from the quantum state under consideration. Note that in this case the set of fiducial measurements $\mathcal{M}_{*}$ coincides with the set of performed measurements $S\subset \mathcal{M}$ . The details of the training procedure are provided in Supplementary Note XII. We performed numerical experiments in which GQNQ was trained with data from a single cat state, using data from 10 or 50 homodyne measurements. After the training, GQNQ was asked to predict the outcome statistics of a new randomly chosen homodyne measurement. The results are summarized in Table III, where we show both the average classical fidelities averaged over all query measurements and worst-case classical fidelities over all query measurements.
206
+
207
+ Finally, we point out that our learning model shares
208
+
209
+ some conceptual similarity with Aaronson's "pretty good tomography" [11], which aims at producing a hypothesis state that accurately predicts the outcome probabilities of measurements in a given set. While in pretty good tomography the hypothesis state is a density matrix, the form of the state representation in GQNQ is determined by the network itself. The flexibility in the choice of state representation allows GQNQ to find more compact descriptions for sufficiently regular sets of states. On the other hand, pretty good tomography is in principle guaranteed to work accurately for arbitrary quantum states, whereas the performance of GQNQ can be more or less accurate depending on the set of states, as indicated by our numerical experiments. An important direction of future research is to find criteria to determine a priori which quantum state families can be learnt effectively by GQNQ. This problem is expected to be challenging, as similar criteria are still lacking even in the original application of generative query networks to classical image processing.
210
+
211
+ # IV. METHODS
212
+
213
+ Data generation procedures. Here we discuss the training/test dataset generation procedures. In the numerical experiments for ground states of Ising models and XXZ models, the training set is composed of 40 different states for each value of $J$ and $\Delta$ , while the test set is composed of 10 different states for each value of $J$ and $\Delta$ . For GHZ and W states with local rotations, we generate 800 states for training and 200 states for testing.
214
+
215
+ In the continuous-variable experiments, we randomly generate 10000 different states for each of the three families of squeezed thermal states, cat states, and GKP states. We then split the generated states into a training set and testing set, with a ratio of $4:1$ .
216
+
217
+ In the testing stage, the noiseless probability distributions for one-dimensional Ising models and XXZ models are generated by solving the ground state problem, either exactly (in the six qubit case) or approximately by density-matrix renormalization group (DMRG) algorithm [46] (for 10, 20 and 50 qubits). The data of continuous-variable quantum states are generated by simulation tools provided by Strawberry Fields [47].
218
+
219
+ Network training. The training data set of GQNQ includes measurement data from $N$ quantum states, divided into $N / B$ batches of $B$ states each. For each state in a batch, a subset of measurements $\mathcal{M}_1\subset \mathcal{M}$ is randomly
220
+
221
+ picked, and the network is provided with all the pairs $(\pmb{m},\pmb{p})$ , where $\pmb{m}$ is the parametrization of a measurement in $\mathcal{M}_1$ and $\pmb{p}$ is the corresponding vector of outcome probabilities on the state under consideration. The network is then asked to make predictions on the outcome probabilities of the rest of the measurements in $\mathcal{M} \setminus \mathcal{M}_1$ , and the loss is computed from the difference between the real outcome probabilities (computed with the Born rule) and the model's predictions (see Supplementary Note VI for the specific expression of the loss function). For each batch, we optimize the parameters $\pmb{\xi}$ and $\pmb{\eta}$ of GQNQ by updating them along the opposite direction of the gradient of the loss function with respect to $\pmb{\xi}$ and $\pmb{\eta}$ , using the Adam optimizer [48] and batch gradient descent. The pseudocode for the training algorithm is also provided in Supplementary Note VI.
222
+
223
+ The training is repeated for $E$ epochs. In each epoch of
224
+
225
+ the training phase, we iterate the above procedure over the $N / B$ batches of training data. For the numerical experiments in this paper, we typically choose $B = 30$ and $E = 200$ .
226
+
227
+ Network testing. After training, the parameters of GQNQ are fixed, and the performance is then tested with these fixed parameter values. For each test state, we randomly select a subset $S$ from the set $\mathcal{M}$ of POVM measurements, input the associated measurement data to the trained network, and ask it to predict the outcome probabilities for all the measurements in $\mathcal{M} \setminus S$ . Then we calculate the classical fidelity between each output prediction and the corresponding ground truth.
228
+
229
+ Hardware. Our neural networks are implemented by the pytorch [49] framework and trained on four NVIDIA GeForce GTX 1080 Ti GPUs.
230
+
231
+ [1] Jens Eisert, Dominik Hangleiter, Nathan Walk, Ingo Roth, Damian Markham, Rhea Parekh, Ulysse Chabaud, and Elham Kashefi, "Quantum certification and benchmarking," Nat. Rev. Phys. 2, 382-390 (2020).
232
+ [2] G. Tóth, W. Wieczorek, D. Gross, R. Krischek, C. Schwemmer, and H. Weinfurter, "Permutationally invariant quantum tomography," Phys. Rev. Lett. 105, 250403 (2010).
233
+ [3] David Gross, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, and Jens Eisert, “Quantum state tomography via compressed sensing,” Phys. Rev. Lett. 105, 150401 (2010).
234
+ [4] Marcus Cramer, Martin B Plenio, Steven T Flammia, Rolando Somma, David Gross, Stephen D Bartlett, Olivier Landon-Cardinal, David Poulin, and Yi-Kai Liu, "Efficient quantum state tomography," Nat. Commun. 1, 1-7 (2010).
235
+ [5] BP Lanyon, C Maier, Milan Holzäpfel, Tillmann Baumgratz, C Hempel, P Jurcevic, Ish Dhand, AS Buyskikh, AJ Daley, Marcus Cramer, et al., "Efficient tomography of a quantum many-body system," Nat. Phys. 13, 1158-1162 (2017).
236
+ [6] Jordan Cotler and Frank Wilczek, “Quantum overlapping tomography,” Phys. Rev. Lett. 124, 100401 (2020).
237
+ [7] Hsin-Yuan Huang, Richard Kueng, and John Preskill, "Predicting many properties of a quantum system from very few measurements," Nat. Phys. 16, 1050-1057 (2020).
238
+ [8] Hsin-Yuan Huang, Richard Kueng, Giacomo Torlai, Victor V Albert, and John Preskill, "Provably efficient machine learning for quantum many-body problems," arXiv:2106.12627 (2021).
239
+ [9] Steven T. Flammia and Yi-Kai Liu, "Direct fidelity estimation from few pauli measurements," Phys. Rev. Lett. 106, 230501 (2011).
240
+ [10] Marcus P. da Silva, Olivier Landon-Cardinal, and David Poulin, “Practical characterization of quantum devices without tomography,” Phys. Rev. Lett. 107, 210404 (2011).
241
+ [11] Scott Aaronson, “The learnability of quantum states,” Proc. R. Soc. A 463, 3089-3114 (2007).
242
+
243
+ [12] Scott Aaronson, “Shadow tomography of quantum states,” SIAM J. Comput. 49, STOC18-368 (2019).
244
+ [13] Scott Aaronson, Xinyi Chen, Elad Hazan, Satyen Kale, and Ashwin Nayak, "Online learning of quantum states," J. Stat. Mech.: Theory Exp. 2019, 124019 (2019).
245
+ [14] Srinivasan Arunachalam, Alex B Grilo, and Henry Yuen, "Quantum statistical query learning," arXiv:2002.08240 (2020).
246
+ [15] Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-Maranto, and Lenka Zdeborova, "Machine learning and the physical sciences," Rev. Mod. Phys. 91, 045002 (2019).
247
+ [16] Giacomo Torlai, Guglielmo Mazzola, Juan Carrasquilla, Matthias Troyer, Roger Melko, and Giuseppe Carleo, "Neural-network quantum state tomography," Nat. Phys. 14, 447-450 (2018).
248
+ [17] Giacomo Torlai and Roger G. Melko, “Latent space purification via neural density operators,” Phys. Rev. Lett. 120, 240503 (2018).
249
+ [18] Qian Xu and Shuqi Xu, “Neural network state estimation for full quantum state tomography,” arXiv preprint arXiv:1811.06654 (2018).
250
+ [19] Juan Carrasquilla, Giacomo Torlai, Roger G Melko, and Leandro Aolita, "Reconstructing quantum states with generative models," Nat. Mach. Intell. 1, 155-161 (2019).
251
+ [20] Egor S Tiunov, VV Tiunova, Alexander E Ulanov, AI Lvovsky, and Aleksey K Fedorov, "Experimental quantum homodyne tomography via machine learning," Optica 7, 448-454 (2020).
252
+ [21] Shahnawaz Ahmed, Carlos Sánchez Muñoz, Franco Nori, and Anton Frisk Kockum, “Quantum state tomography with conditional generative adversarial networks,” Phys. Rev. Lett. 127, 140502 (2021).
253
+ [22] Shahnawaz Ahmed, Carlos Sánchez Muñoz, Franco Nori, and Anton Frisk Kockum, "Classification and reconstruction of optical quantum states with deep neural networks," Phys. Rev. Res. 3, 033278 (2021).
254
+ [23] Andrea Rocchetto, Edward Grant, Sergii Strelchuk, Giuseppe Carleo, and Simone Severini, "Learning hard quantum distributions with variational autoencoders,"
255
+
256
+ NPJ Quantum Inf. 4, 28 (2018).
257
+ [24] Yihui Quek, Stanislav Fort, and Hui Khoon Ng, "Adaptive quantum state tomography with neural networks," NPJ Quantum Inf. 7, 105 (2021).
258
+ [25] Adriano Macarone Palmieri, Egor Kovlakov, Federico Bianchi, Dmitry Yudin, Stanislav Straupe, Jacob D Bia-monte, and Sergei Kulik, "Experimental neural network enhanced quantum tomography," NPJ Quantum Inf. 6, 20 (2020).
259
+ [26] Alistair W. R. Smith, Johnnie Gray, and M. S. Kim, "Efficient quantum state sample tomography with basis-dependent neural networks," PRX Quantum 2, 020348 (2021).
260
+ [27] Gael Sentís, Alex Monrás, Ramon Muñoz Tapia, John Calsamiglia, and Emilio Bagan, "Unsupervised classification of quantum data," Phys. Rev. X 9, 041029 (2019).
261
+ [28] Jun Gao, Lu-Feng Qiao, Zhi-Qiang Jiao, Yue-Chi Ma, Cheng-Qiu Hu, Ruo-Jing Ren, Ai-Lin Yang, Hao Tang, Man-Hong Yung, and Xian-Min Jin, "Experimental machine learning of quantum states," Phys. Rev. Lett. 120, 240501 (2018).
262
+ [29] Andreas Elben, Benoit Vermersch, Rick van Bijnen, Christian Kokail, Tiff Brydges, Christine Maier, Manoj K. Joshi, Rainer Blatt, Christian F. Roos, and Peter Zoller, "Cross-platform verification of intermediate scale quantum devices," Phys. Rev. Lett. 124, 010504 (2020).
263
+ [30] SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor, et al., "Neural scene representation and rendering," Science 360, 1204-1210 (2018).
264
+ [31] Yoshua Bengio, Aaron Courville, and Pascal Vincent, "Representation learning: A review and new perspectives," IEEE Trans. Pattern Anal. Mach. Intell. 35, 1798-1828 (2013).
265
+ [32] David Foster, *Generative deep learning: teaching machines to paint, write, compose, and play* (O'Reilly Media, 2019).
266
+ [33] Yong Siah Teo, Huangjun Zhu, Berthold-Georg Englert, Jaroslav Reháček, and Zdeněk Hradil, “Quantum-state reconstruction by maximizing likelihood and entropy,” Phys. Rev. Lett. 107, 020404 (2011).
267
+ [34] Ulrich Schollwock, Johannes Richter, Damian JJ Farnell, and Raymond F Bishop, Quantum magnetism, Vol. 645 (Springer, 2008).
268
+ [35] Ladislav Samaj, Introduction to the statistical physics of integrable many-body systems (Cambridge University Press, 2013).
269
+ [36] Axel Friedenauer, Hector Schmitz, Jan Tibor Glueckert, Diego Porras, and Tobias Schatz, "Simulating a quantum magnet with trapped ions," Nat. Phys. 4, 757-761 (2008).
270
+ [37] Kihwan Kim, M-S Chang, Simcha Korenblit, Rajibul Islam, Emily E Edwards, James K Freericks, G-D Lin, L-M Duan, and Christopher Monroe, “Quantum simulation of frustrated ising spins with trapped ions,” Nature 465, 590–593 (2010).
271
+ [38] R Islam, EE Edwards, K Kim, S Korenblit, C Noh, H Carmichael, G-D Lin, L-M Duan, C-C Joseph Wang, JK Freericks, et al., "Onset of a quantum phase transition with a trapped ion quantum simulator," Nat. Commun. 2, 1-6 (2011).
272
+ [39] C. N. Yang and C. P. Yang, “One-dimensional chain of anisotropic spin-spin interactions. i. proof of Bethe's
273
+
274
+ hypothesis for ground state in a finite system," Phys. Rev. 150, 321-327 (1966).
275
+ [40] David F Williamson, Robert A Parker, and Juliette S Kendrick, "The box plot: a simple visual method to interpret data," Annals of internal medicine 110, 916-921 (1989).
276
+ [41] Daniel Gottesman, Alexei Kitaev, and John Preskill, "Encoding a qubit in an oscillator," Phys. Rev. A 64, 012310 (2001).
277
+ [42] Victor V. Albert, Kyungjoo Noh, Kasper Duivenvoorden, Dylan J. Young, R. T. Brierley, Philip Reinhold, Christophe Vuillot, Linshu Li, Chao Shen, S. M. Girvin, Barbara M. Terhal, and Liang Jiang, "Performance and structure of single-mode bosonic codes," Phys. Rev. A 97, 032346 (2018).
278
+ [43] Laurens Van der Maaten and Geoffrey Hinton, "Visualizing data using t-SNE," Journal of Machine Learning Research 9, 2579-2605 (2008).
279
+ [44] Christopher M Bishop and Nasser M Nasrabadi, Pattern recognition and machine learning, Vol. 4 (Springer, 2006).
280
+ [45] Raban Iten, Tony Metger, Henrik Wilming, Lídia del Rio, and Renato Renner, "Discovering physical concepts with neural networks," Phys. Rev. Lett. 124, 010508 (2020).
281
+ [46] Ulrich Schollwöck, “The density-matrix renormalization group,” Reviews of modern physics 77, 259 (2005).
282
+ [47] Nathan Killoran, Josh Izaac, Nicolas Quesada, Ville Bergholm, Matthew Amy, and Christian Weedbrook, "Strawberry fields: A software platform for photonic quantum computing," Quantum 3, 129 (2019).
283
+ [48] Diederik P Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014).
284
+ [49] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al., "Pytorch: An imperative style, high-performance deep learning library," Advances in neural information processing systems 32, 8026-8037 (2019).
285
+ [50] Charu C Aggarwal, Neural networks and deep learning (Springer, 2018).
286
+ [51] Sepp Hochreiter and Jürgen Schmidhuber, “Long short-term memory,” Neural computation 9, 1735–1780 (1997).
287
+ [52] Solomon Kullback, Information theory and statistics (Courier Corporation, 1997).
288
+ [53] Sebastian Ruder, "An overview of gradient descent optimization algorithms," arXiv preprint arXiv:1609.04747 (2016).
289
+ [54] Ulf Leonhardt, Measuring the quantum state of light, Vol. 22 (Cambridge university press, 1997).
290
+
291
+ # V. ACKNOWLEDGEMENT
292
+
293
+ This work was supported by funding from the Hong Kong Research Grant Council through grants no. 17300918 and no. 17307520, through the Senior Research Fellowship Scheme SRFS2021-7S02, the Croucher Foundation, and by the John Templeton Foundation through grant 61466, The Quantum Information Structure of Spacetime (qiss.fr). YXW acknowledges funding from the National Natural Science Foundation of China through grant no. 61872318. Research at the Perimeter
294
+
295
+ Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and
296
+
297
+ Science. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation.
298
+
299
+ # SUPPLEMENTARY NOTES
300
+
301
+ # VI. IMPLEMENTATION DETAILS OF GQNQ
302
+
303
+ # A. Structure of GQNQ
304
+
305
+ As shown in Fig. 5, our proposed Generative Query Network for quantum state learning (GQNQ) is mainly composed of a representation network $f_{\xi}$ , an aggregate function $\mathcal{A}$ and a generation network $g_{\eta}$ .
306
+
307
+ ![](images/25ec02cbdf872603c508042a4781e57a49848527e329445a443608cb019e3a77.jpg)
308
+ Figure 5. Structure of GQNQ.
309
+
310
+ The representation network $f_{\pmb{\xi}}$ consists of multiple dense layers [50], also called fully-connected layers, and we depict its structure in Fig. 6. $\pmb{\xi}$ contains the trainable parameters of all layers. The input of the representation network is a pair $(\pmb{m}_i, \pmb{p}_i)$ , where $\pmb{m}_i$ is the parametrization of a POVM measurement and $\pmb{p}_i$ is the corresponding vector of measurement outcome probabilities. The output $\pmb{r}_i$ can be regarded as an abstract representation of $(\pmb{m}_i, \pmb{p}_i)$ . Here, for simplicity, we just use the average function $r := \frac{1}{s} \sum_{i=1}^{s} r_i$ as the aggregate function, but we believe that more sophisticated architectures, such as recurrent neural networks [50], may achieve better performance, although they would place higher demands on hardware and hyperparameter tuning.
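+ A minimal PyTorch sketch of the representation network and the mean aggregation is given below; the layer widths and activation are our assumptions, since the exact architecture is specified by the figure rather than the text:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class RepresentationNetwork(nn.Module):
+     """Dense network f_xi mapping a (measurement, outcome-distribution) pair to r_i."""
+     def __init__(self, dim_m, dim_p, dim_r, hidden=128):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Linear(dim_m + dim_p, hidden), nn.ReLU(),
+             nn.Linear(hidden, hidden), nn.ReLU(),
+             nn.Linear(hidden, dim_r),
+         )
+
+     def forward(self, m, p):
+         return self.net(torch.cat([m, p], dim=-1))
+
+ # Mean aggregation r = (1/s) * sum_i r_i over the s measured pairs.
+ f_xi = RepresentationNetwork(dim_m=48, dim_p=64, dim_r=32)
+ m = torch.randn(30, 48)   # 30 measurement parametrizations
+ p = torch.rand(30, 64)    # corresponding outcome distributions (not normalized here)
+ r = f_xi(m, p).mean(dim=0)
+ ```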
311
+
312
+ ![](images/685ca3b350080b78612279796b4ec4257f246fcd87525bc91a89a5bc4c62d0cc.jpg)
313
+ Figure 6. Structure of the representation network.
314
+
315
+ The generation network $g_{\eta}$ is special because its structure is different in the training and test phases. Here $\eta$ contains all trainable parameters in the generation network. In the test phase, the generation network consists of two dense layers and one long short-term memory (LSTM) cell [51], and we depict its structure in Fig. 7. The inputs of this generation network are the state representation $\mathbf{r}$ and the parametrization $m'$ of a query POVM measurement, and the output $\mathcal{N}'$ is a distribution from which the prediction $p'$ of the measurement outcome probabilities corresponding to $m'$ is sampled. $h_0$ , $c_0$ and $u_0$ are internal variables, all of which are initialized as zero tensors. As we can see, the generation
316
+
317
+ ![](images/fd03fa6312e394e0da9225716cfa36d50eb05412703de0c18f78d55f6f360869.jpg)
318
+ Figure 7. Structure of the generation network in the test.
319
+
320
+ ![](images/a96baf044339c4c137b8e53fbe3f71b6cb6bf6ced02bfc4e22d968aa4ec0f13d.jpg)
321
+ Figure 8. Structure of the generation network in the training.
322
+
323
+ network executes Dense Layer #1 and the LSTM cell $L$ times, with $m'$ and $r$ injected into the network at each iteration. It is worth mentioning that $z_i$ ( $i \in \mathbb{N}, i < L$ ) can be viewed as a hidden variable that obeys a prior Gaussian distribution $\mathcal{N}_i$ generated by Dense Layer #1 from $h_i$ . In the end, the output $u_L$ of the last LSTM cell is fed into a second dense layer (Dense Layer #2) to obtain the output $\mathcal{N}'$ , from which the prediction $p'$ is sampled. In Fig. 8, we depict the generation network exploited in the training. Here, an extra input $p_{true}'$ is available because we know the real outcome probabilities in the training phase. Furthermore, we utilize another LSTM cell and another dense layer (Dense Layer #3) to generate a posterior distribution $\mathcal{N}_i^2$ of the hidden variable $z_i$ from $p_{true}'$ , rather than sampling $z_i$ from the prior distribution $\mathcal{N}_i^1$ . The advantage of this design is that the information in $p_{true}'$ can be exploited to obtain better values of $z_i$ during generation. $h_0^1, c_0^1, h_0^2, c_0^2$ and $u_0$ are internal variables, all of which are initialized as zero tensors.
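+ For illustration, a rough PyTorch sketch of the test-phase generator of Fig. 7 is shown below. The dimensions, the way $u$ is accumulated, and the Gaussian output parametrization are our guesses; the text only fixes the overall structure (a prior dense layer, an LSTM cell iterated $L$ times, and an output dense layer):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class GenerationNetwork(nn.Module):
+     """Rough sketch of the test-phase generator g_eta of Fig. 7 (details are our guesses)."""
+     def __init__(self, dim_r, dim_m, dim_h, dim_z, dim_out, L=8):
+         super().__init__()
+         self.L, self.dim_h = L, dim_h
+         self.prior = nn.Linear(dim_h, 2 * dim_z)                # Dense Layer #1: h_i -> prior N_i
+         self.cell = nn.LSTMCell(dim_r + dim_m + dim_z, dim_h)   # recurrent core
+         self.out = nn.Linear(dim_h, 2 * dim_out)                # Dense Layer #2: u_L -> output N'
+
+     def forward(self, r, m):
+         h = torch.zeros(r.size(0), self.dim_h)
+         c = torch.zeros_like(h)
+         u = torch.zeros_like(h)
+         for _ in range(self.L):
+             mu, log_sigma = self.prior(h).chunk(2, dim=-1)
+             z = mu + log_sigma.exp() * torch.randn_like(mu)     # sample z_i from the prior N_i
+             h, c = self.cell(torch.cat([r, m, z], dim=-1), (h, c))
+             u = u + h                                           # accumulate into u (one possible choice)
+         mu_out, log_sigma_out = self.out(u).chunk(2, dim=-1)
+         return mu_out, log_sigma_out                            # parameters of N'; p' is sampled from it
+
+ g_eta = GenerationNetwork(dim_r=32, dim_m=48, dim_h=96, dim_z=32, dim_out=64)
+ mu, log_sigma = g_eta(torch.randn(5, 32), torch.randn(5, 48))
+ ```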
324
+
325
+ # B. Training of GQNQ
326
+
327
+ We define the loss function $\mathcal{L}$ of the training in Eq. (3).
328
+
329
+ $$
330
+ \mathcal {L} (\boldsymbol {\xi}, \boldsymbol {\eta}) = \mathbb {E} \left[ - \ln \mathcal {N} ^ {\prime} \left(\boldsymbol {p} _ {t r u e} ^ {\prime}\right) + \sum_ {j = 0} ^ {L - 1} \mathrm {K L} \left(\mathcal {N} _ {j} ^ {1}, \mathcal {N} _ {j + 1} ^ {2}\right) \right], \tag {3}
331
+ $$
332
+
333
+ where $\mathcal{N}'(\pmb{p}_{true}')$ denotes the likelihood of $\pmb{p}_{true}'$ under the distribution $\mathcal{N}'$ and KL represents the KL divergence [52] between two Gaussian distributions. The first term of this loss function can be interpreted as the reconstruction loss, which guides the model towards more accurate predictions. The second term is a regularization term used to seek a better prior distribution of the hidden variables $\pmb{z}_i$ in the generation process, which further improves the accuracy of the predictions.
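+ A minimal sketch of this loss in PyTorch is given below, with all distributions taken to be diagonal Gaussians. Note that Eq. (3) lists the KL arguments as (prior, posterior) with an index offset; here we use the usual variational ordering KL(posterior || prior), so this should be read as an illustration rather than a literal transcription:
+
+ ```python
+ import torch
+ from torch.distributions import Normal, kl_divergence
+
+ def gqnq_loss(mu_out, sigma_out, p_true, priors, posteriors):
+     """Eq. (3)-style loss: negative log-likelihood of the true outcome probabilities
+     under N', plus KL terms between the latent prior and posterior distributions.
+
+     priors and posteriors are lists of (mu, sigma) pairs, one per hidden variable z_i.
+     """
+     nll = -Normal(mu_out, sigma_out).log_prob(p_true).sum(dim=-1).mean()
+     kl = sum(
+         kl_divergence(Normal(mu_q, sig_q), Normal(mu_p, sig_p)).sum(dim=-1).mean()
+         for (mu_p, sig_p), (mu_q, sig_q) in zip(priors, posteriors)
+     )
+     return nll + kl
+ ```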
334
+
335
+ We adopt batch gradient descent [53] and the Adam optimizer [48] to minimize this loss function in the training. The batch size is set to 10 or 20 in all of our experiments and the learning rate decreases gradually with the increase of the number of training epochs.
336
+
337
+ Our neural networks are implemented by the pytorch [49] framework and trained on four NVIDIA GeForce GTX 1080 Ti GPUs. The training time is less than three hours for each task discussed in this paper. The ground states of the Hamiltonian of one-dimensional Ising models utilized in our numerical experiments are solved by the exact method for the scenario of $L = 6$ and are approximately solved by density-matrix renormalization group (DMRG) [46] for the scenario of $L = 10, 20$ and 50. The data of continuous-variable quantum states are generated by simulation tools provided in Strawberry Fields [47].
338
+
339
+ We present the whole training procedure by pseudocode in Algorithm 1 following the notations introduced in main text.
340
+
341
+ Algorithm 1: Training of generative query network for quantum state learning.
342
+ Data: number of states in training set $N$ , state measurement results $\{(\pmb {m}_i,\pmb {p}_i^k)_{i = 1}^n\}_{k = 1}^N$ , maximum number of known POVM measurement results for each state $a$ , maximum number of epochs $E$ , learning rate $\delta$ , batch size $B$ . Initialize parameters $\xi$ and $\eta$ randomly, $e = 0$ ; while $e < E$ do
343
+ $\mathcal{L} = 0$ ;
344
+ for $k = 1$ to $N$ do
345
+ Generate a random integer number $n_1$ from $[1,a]$ ;
346
+ Randomly select $n_1$ pairs of $(\pmb {m}_i,\pmb {p}_i^k)$ from $\{(\pmb {m}_i,\pmb {p}_i^k)\}_{i = 1}^n$ and denote them as $\{(\pmb {m}_{i_j},\pmb {p}_{i_j}^{k})\}_{j = 1}^{n_{1}}$ , where $\{i_j\}_{j = 1}^n$ is a permutation of $\{1,\ldots ,n\}$ ;
347
+ Input each of $\{(\pmb {m}_{i_j},\pmb {p}_{i_j}^{k})\}_{j = 1}^{n_{1}}$ into the representation network $f_{\xi}$ to obtain the representations $\{\pmb {r}_{i_j}\}_{j = 1}^{n_1}$ as $\pmb {r}_{i_j} = f_{\xi}(\pmb {m}_{i_j},\pmb {p}_{i_j}^{k})$ ;
348
+ Calculate the state representation by an aggregate function $\mathcal{A}$ as $\pmb {r} = \mathcal{A}(\{\pmb {r}_{i_j}\}_{j = 1}^{n_1})$ ;
349
+ Input $\pmb{r}$ and the remaining $\{\pmb {m}_{i_j}\}_{j = n_1 + 1}^n$ into the generation network $g_{\eta}$ to obtain the predictions $\{\pmb {p}_{i_j}'^k\}_{j = n_1 + 1}^n$ of measurement outcome distributions as $\pmb {p}_{i_j}'^k = g_\eta (\pmb {r},\pmb {m}_{i_j})$ ;
350
+ Calculate the loss $l$ with Eq. (3) by comparing the predictions $\{\pmb {p}_{i_j}'^k\}_{j = n_1 + 1}^n$ with the true probabilities $\{\pmb {p}_{i_j}^k\}_{j = n_1 + 1}^n$ and update $\mathcal{L}$ as $\mathcal{L} = \mathcal{L} + l$ ;
351
+ if $k\bmod B = 0$ then
352
+ Calculate $\nabla_{\xi}\mathcal{L}$ and $\nabla_{\eta}\mathcal{L}$ ;
353
+ Update $\xi$ and $\eta$ as $\xi = \xi -\delta \nabla_{\xi}\mathcal{L}$ , $\eta = \eta -\delta \nabla_{\eta}\mathcal{L}$ ;
354
+ $\mathcal{L} = 0$ ;
355
+ $e = e + 1$ ;
356
+
357
+ # C. Details of Experiments
358
+
359
+ a. Datasets. In the experiments for ground states of Ising models and XXZ models, the training set is composed of 40 different states for each $J$ or $\Delta$ while the test set is composed of 10 different states for each $J$ or $\Delta$ . As for the experiments for GHZ and W states with local rotations, we generate 800 states for training and 200 states for test. In the experiments for continuous-variable quantum states, we randomly generate 10000 different states for each family (cat states, Gaussian states and GKP states) and split them into training and test sets with a $4:1$ ratio.
360
+ b. Number of trainable parameters. We mainly adopt three kinds of models for three different tasks. In the experiments for learning discrete quantum states, we exploit the models with 6676544 trainable parameters for the scenario of $L = 6$ while exploiting the models with 45484 trainable parameters for the scenario of $L = 10,20$ and 50. In the experiments for learning continuous-variable quantum states, we exploit the models with 35572 trainable parameters.
361
+ c. Maximum number of known POVM measurement results for each state in the training. We set the maximum number of known POVM measurement results for each state, $a$ , in the training to 200 for the six-qubit cases, 50 for the 10-, 20- and 50-qubit cases and 150 for the continuous-variable cases.
362
+ d. Initialization and learning rate. For each task, we initialize the parameters of the models randomly before the training. The learning rate is set as 0.01 initially and decreases as the number of iterations increases.
363
+ e. Number of epochs and training time. We usually set the maximum number of epochs $E$ to 200 and the batch size $B$ to 30 in the training. The training time varies with the size of the training set for each task, but is always less than three hours in all of the experiments.
364
+
365
+ Table IV. Average classical fidelity between predicted outcome statistics and real outcome statistics, averaged over all the test states and random query measurements. The eight rows correspond to eight different scenarios, where GQNQ is trained and tested over measurement data of nine sets of states. The values of $d_r$ , $d_h$ and $d_z$ are different for each column.
366
+
367
+ <table><tr><td>Types of states</td><td>dr=2, dh=2, dz=2</td><td>dr=2, dh=6, dz=2</td><td>dr=4, dh=12, dz=4</td><td>dr=8, dh=24, dz=8</td><td>dr=16, dh=48, dz=16</td><td>dr=32, dh=96, dz=32</td></tr><tr><td>(i) Ising ground states with ferromagnetic bias</td><td>0.7987</td><td>0.8835</td><td>0.8981</td><td>0.9255</td><td>0.9543</td><td>0.9870</td></tr><tr><td>(ii) Ising ground states with antiferromagnetic bias</td><td>0.7896</td><td>0.8739</td><td>0.8894</td><td>0.9236</td><td>0.9562</td><td>0.9869</td></tr><tr><td>(iii) Ising ground states with no bias</td><td>0.7999</td><td>0.8911</td><td>0.8993</td><td>0.9277</td><td>0.9596</td><td>0.9895</td></tr><tr><td>(iv) XXZ ground states with ferromagnetic bias</td><td>0.6386</td><td>0.7683</td><td>0.9038</td><td>0.9546</td><td>0.9603</td><td>0.9809</td></tr><tr><td>(v) XXZ ground states with XY phase bias</td><td>0.7515</td><td>0.8102</td><td>0.8359</td><td>0.8924</td><td>0.9352</td><td>0.9601</td></tr><tr><td>(vi) (i)-(v) together</td><td>0.7143</td><td>0.7709</td><td>0.8376</td><td>0.8739</td><td>0.9178</td><td>0.9567</td></tr><tr><td>(vii) GHZ state with local rotations</td><td>0.8342</td><td>0.8816</td><td>0.9271</td><td>0.9502</td><td>0.9579</td><td>0.9744</td></tr><tr><td>(viii) W state with local rotations</td><td>0.9249</td><td>0.9310</td><td>0.9579</td><td>0.9733</td><td>0.9771</td><td>0.9828</td></tr><tr><td>(ix) (i)-(v), (vii) and (viii) together</td><td>0.6936</td><td>0.7685</td><td>0.8369</td><td>0.8725</td><td>0.9085</td><td>0.9561</td></tr></table>
368
+
369
+ # VII. HYPERPARAMETERS
370
+
371
+ As introduced above, there are some hyperparameters in our GQNQ model and the most significant ones are the dimensions of $\boldsymbol{r}_i$ , $\boldsymbol{h}_i$ and $\boldsymbol{z}_i$ , because they affect the size of the state representation and the complexity of the model, and thus affect the performance of the model. We denote them as $d_r$ , $d_h$ and $d_z$ respectively. In this section, we conduct a series of experiments to explore how the choice of hyperparameters affects the performance of the proposed model. We take the settings of learning 6-qubit states introduced in the main text as examples. In each experiment, different settings of hyperparameters are adopted and the results are shown in Table IV.
372
+
373
+ We find that the performance of the model improves as its complexity increases. However, it must be pointed out that the complexity of the model cannot be arbitrarily high, considering the memory size and the difficulty of training, and the models with $d_r = 32$ , $d_h = 96$ and $d_z = 32$ are the most complex ones we consider in this paper.
374
+
375
+ # VIII. ARBITRARY STATE LEARNING
376
+
377
+ Furthermore, we conducted experiments to learn arbitrary 6-qubit quantum states and the results are shown in Table V. All models failed in this case: they always yield distributions close to the uniform distribution for any query measurement, which means that the model cannot learn an effective state representation to generate accurate measurement outcome statistics. A possible explanation is that the model is not complex enough to handle an unstructured, highly complex dataset. Although a more complex model might intuitively be more effective, such a model may require a larger training set and be less efficient. As expected, our GQNQ model is designed for quantum states sharing a common structure and is not suitable for arbitrary quantum states.
378
+
379
+ Table V. Average classical fidelity between predicted outcome statistics and real outcome statistics for the arbitrary states, averaged over all the test states and random query measurements. The values of $d_r$ , $d_h$ and $d_z$ are different for each column.
380
+
381
+ <table><tr><td>Types of states</td><td>Uniform distribution</td><td>dr=2, dh=2, dz=2</td><td>dr=2, dh=6, dz=2</td><td>dr=4, dh=12, dz=4</td><td>dr=8, dh=24, dz=8</td><td>dr=16, dh=48, dz=16</td><td>dr=32, dh=96, dz=32</td></tr><tr><td>Arbitrary 6-qubit state</td><td>0.8879</td><td>0.8879</td><td>0.8879</td><td>0.8879</td><td>0.8879</td><td>0.8879</td><td>0.8879</td></tr></table>
382
+
383
+ # IX. GENERALIZATION FROM INFORMATIONALLY INCOMPLETE MEASUREMENTS
384
+
385
+ In this section, we further discuss the generalization performance of our proposed model in the examples of six-qubit quantum states. We mainly focus on how the informational completeness of the measurement class affects the final performance. Rather than setting the class of measurements $\mathcal{M}$ as the set of all 729 six-qubit Pauli-basis measurements, we construct $\mathcal{M}$ by randomly selecting 72 different six-qubit Pauli-basis measurements in each experiment here. For each of the datasets discussed above, we performed such experiments and averaged the results. The results are shown in Table VI.
386
+
387
+ As the experimental results show, our proposed model still achieves a satisfactory performance when the measurement class is not informationally complete. Meanwhile, we also find that the model generalizes worse as the complexity of the dataset increases. A possible explanation is that more information is needed to yield accurate state representations when the dataset is composed of multiple types of states.
388
+
389
+ Table VI. Average classical fidelity between predicted outcome statistics and real outcome statistics, averaged over all the test states and random query measurements. The measurement class $\mathcal{M}$ is composed of 72 different six-qubit Pauli-basis measurements in the case of informationally incomplete measurements.
390
+
391
+ <table><tr><td>Types of states</td><td>Informationally complete M</td><td>Informationally incomplete M</td></tr><tr><td>(i) Ising ground states with ferromagnetic bias</td><td>0.9870</td><td>0.9865</td></tr><tr><td>(ii) Ising ground states with antiferromagnetic bias</td><td>0.9869</td><td>0.9863</td></tr><tr><td>(iii) Ising ground states with no bias</td><td>0.9895</td><td>0.9812</td></tr><tr><td>(iv) XXZ ground states with ferromagnetic bias</td><td>0.9809</td><td>0.9713</td></tr><tr><td>(v) XXZ ground states with XY phase bias</td><td>0.9601</td><td>0.9495</td></tr><tr><td>(vi) (i)-(v) together</td><td>0.9567</td><td>0.9447</td></tr><tr><td>(vii) GHZ state with local rotations</td><td>0.9744</td><td>0.9694</td></tr><tr><td>(viii) W state with local rotations</td><td>0.9828</td><td>0.9824</td></tr><tr><td>(ix) (i)-(v), (vii) and (viii) together</td><td>0.9561</td><td>0.9442</td></tr></table>
392
+
393
+ Table VII. Generalization performances of GQNQ on continuous-variable quantum states.
394
+
395
+ <table><tr><td>Type of states for training and test</td><td>|M| = 300 (Avg)</td><td>|M| = 300 (Worst)</td><td>|M| = 10 (Avg)</td><td>|M| = 10 (Worst)</td></tr><tr><td>(i) Squeezed thermal states</td><td>0.9973</td><td>0.9890</td><td>0.9953</td><td>0.9901</td></tr><tr><td>(ii) Cat states</td><td>0.9827</td><td>0.9512</td><td>0.9571</td><td>0.8920</td></tr><tr><td>(iii) GKP states</td><td>0.9762</td><td>0.9405</td><td>0.9633</td><td>0.9470</td></tr><tr><td>(iv) (i)-(iii) together</td><td>0.9658</td><td>0.9077</td><td>0.9507</td><td>0.8843</td></tr></table>
396
+
397
+ We also study the generalization performances of GQNQ for continuous-variable states when $\mathcal{M}$ is informationally incomplete over the truncated subspace of interest (less than 30 photons). In the main text, $\mathcal{M}$ consists of 300 homodyne measurements with equidistant phases $\theta$ and is informationally complete over the truncated subspace with less than 300 photons [54]. In contrast, here we test the scenario where $\mathcal{M}$ consists of only 10 homodyne measurement settings with phases $\theta \in \{0,\pi /10,\dots ,9\pi /10\}$ , and $\mathcal{S}$ is a subset of $\mathcal{M}$ containing 5 random homodyne measurement settings. Note that now $\mathcal{M}$ is insufficient to fully characterize a density matrix on a truncated subspace with more than nine photons. We train and test GQNQ in this scenario using data from the three types of continuous-variable states discussed in the main text. The average and worst-case classical fidelities over all query measurements, together with the comparison with the scenario where $|\mathcal{M}| = 300$ and $|S| = 10$ , are presented in Table VII. The results show that even in this informationally incomplete scenario, GQNQ still exhibits good prediction performance on all the types of test states. Again, this is because the states we consider fall within lower-dimensional corners of the subspace with limited photons.
398
+
399
+ # X. OVERFITTING
400
+
401
+ Overfitting due to unbalanced data can be an important issue when GQNQ is used for learning across multiple types of states. Here we study the six-qubit scenario where GQNQ is trained and tested on the union of the datasets of ground states of the Ising model, ground states of the XXZ model, and GHZ states and W states with local rotations. Specifically, one type of state is chosen to be underrepresented, appearing 10 times less frequently than any other type of state in the whole training dataset. We then test the prediction performance of GQNQ with respect to both this underrepresented type of state and the other types of states, as shown in Fig. 9.
402
+
403
+ The results show that the performance of GQNQ with unbalanced training data depends on the state under consideration. For W states with local rotations, we find that unbalanced training data has little effect on the performance. The situation is similar for the ground states of the Ising model in the ferromagnetic phase. In contrast, the prediction fidelity for the XXZ model in the XY phase drops to 0.73 when the training data are unbalanced. The results agree with the observation that the ground states of the XXZ model in the XY phase are more difficult to learn than any other type of state we consider.
404
+
405
+ ![](images/c15612088b8ded55257a3ed6a66d30ec02363d7d71ec7938fcfe8dac39d23010.jpg)
406
+ Figure 9. Performances of GQNQ when training data from different types of quantum states are unbalanced. In each figure, the red bar represents the classical fidelity with respect to the chosen underrepresented type of states, and the blue bar represents the classical fidelity averaged over all other types of states. Fig.(a) compares the average classical fidelity for W states with local rotations and the average classical fidelity for all other states when the ratio of the size of training data from W states to any other type is $1:10$ and $1:1$ , respectively. Fig.(b) compares the average classical fidelity for ground states of ferromagnetic Ising model with the average classical fidelity for all other states when the ratio of the size of training data from ferromagnetic Ising model to any other type is $1:10$ and $1:1$ , respectively. Fig.(c) compares the average classical fidelity for ground states of XXZ model in XY phase with the average classical fidelity for all other states when the ratio of the size of training data from XXZ model in XY phase to any other type is $1:10$ and $1:1$ , respectively.
407
+
408
+ ![](images/dcb9723db899e4f4a1a6a0e080380e75eb1bcfd4dc5bd0dbd2d1ecebf81ba35f.jpg)
409
+
410
+ ![](images/bab1a3b59fa597422242b0c8d690fc68e5699c5eb158395f7f0a18e8bd885575.jpg)
411
+
412
+ # XI. ADDITIONAL EXPERIMENTS
413
+
414
+ # A. Ising model
415
+
416
+ We study the performance of GQNQ for 10-, 20- and 50-qubit Ising ground states when the measurements are nearest-neighbour two-qubit Pauli measurements. Different from the setting in the main text, here we choose $J_{i}$ as a Gaussian variable with mean value $J$ and variance 0.01. Hence, when $J$ is around 0, both ferromagnetic and antiferromagnetic interactions are present with high probability. We find that GQNQ cannot give good predictions of outcome statistics in this scenario, where ferromagnetic and antiferromagnetic interactions coexist. The results, together with the comparison with the scenario where each $J_{i}$ is chosen to be the absolute value (or the opposite of the absolute value, for $J < 0$ ) of the Gaussian variable, are presented in Fig. 10.
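+
+ For concreteness, the following is a small, hypothetical sketch (not the authors' data-generation code) of how the two coupling scenarios could be sampled:
+
+ ```python
+ import numpy as np
+
+ def sample_couplings(J, n_qubits, mixed_sign=True, seed=0):
+     """Sample nearest-neighbour Ising couplings J_i ~ N(J, 0.01).
+
+     mixed_sign=True : both signs may appear near J = 0 (the setting of this appendix).
+     mixed_sign=False: the sign is fixed to that of J, as in the main-text setting.
+     """
+     rng = np.random.default_rng(seed)
+     J_i = rng.normal(loc=J, scale=np.sqrt(0.01), size=n_qubits - 1)
+     if not mixed_sign:
+         J_i = np.abs(J_i) if J >= 0 else -np.abs(J_i)
+     return J_i
+ ```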
417
+
418
+ # B. Cat states
419
+
420
+ For the numerical experiments on learning of continuous-variable quantum states, we provide an example of comparison between predictions and ground truths for a cat state in Fig. 11 here.
421
+
422
+ # XII. TRAINING WITH DATA FROM THE STATE TO BE CHARACTERIZED
423
+
424
+ In this section, we discuss how to train our GQNQ model with data only from the quantum state to be characterized. In this setting, GQNQ behaves as a completely unsupervised learner that predicts the outcome statistics of unperformed measurements using measurement data obtained from the quantum state under consideration. The set $\mathcal{M}_{*}$ of fiducial measurements in the training coincides with the set $\mathcal{S}$ of performed measurements. In the training, GQNQ is trained with $s$ ( $s < n$ ) measurement results $\{(m_i,p_i)\}_{i = 1}^s$ corresponding to $\mathcal{S}$ . When the training is finished, the trained model can be utilized to predict the outcome statistics corresponding to $\mathcal{M}\setminus \mathcal{S}$ .
425
+
426
+ ![](images/0f8f36bdf836072cca66929cc64c60e898e262052dda0af573c8f1091a83e60e.jpg)
427
+
428
+ ![](images/460d9de6586f53654bd26bcafb9b3fca23f51bfb34c962b69cc9da90af9d7be8.jpg)
429
+ Figure 10. Comparison between the performances of GQNQ in Ising model when both ferromagnetic and antiferromagnetic interactions are present near $J = 0$ (top) vs when only either ferromagnetic or antiferromagnetic interactions are present near $J = 0$ (bottom).
430
+
431
+ ![](images/7edaa3b0fe48067f916d07d5b25af67f2f94dfd9a22bb243e52ffd7c32ff56a9.jpg)
432
+ Figure 11. The true outcome probability density (left) and the predicted probability density (right) for cat state $|2.22 + 1.41\mathrm{i}, \pi / 4\rangle_{\mathrm{cat}}$ at quadrature phases $\theta = 0$ , $\theta = \pi / 2$ and $\theta = 3\pi / 4$ , respectively, given the measurement outcome densities at ten random quadrature phases. In the middle circle, ten red lines passing through the center represent those quadrature phases at which measurement outcome statistics are known, and three blue lines passing through the center represent those quadrature phases at which measurement outcome statistics are to be predicted.
433
+
434
+ We present the whole training procedure in this setting as pseudocode in Algorithm 2.
435
+
436
+ Algorithm 2: Training of GQNQ with data provided from the quantum state to be characterized.
437
+ Data: State measurement results $\{(m_i,p_i)\}_{i = 1}^s$ of the quantum state to be characterized, corresponding to the set of reference measurements $\mathcal{M}_*$ ; maximum number $a$ ( $a < s$ ) of known POVM measurement results in the training; maximum number of epochs $E$ ; learning rate $\delta$ . Initialize parameters $\pmb{\xi}$ and $\pmb{\eta}$ randomly, $e = 0$ ;
438
+ while $e < E$ do
439
+ $\mathcal{L} = 0$ ;
440
+ Generate a random integer number $n_1$ from $[1,a]$ ;
441
+ Randomly select $n_1$ pairs of $(m_i,p_i)$ from $\{(m_i,p_i)\}_{i = 1}^s$ and denote them as $\{(m_{i_j},p_{i_j})\}_{j = 1}^{n_1}$ , where $\{i_j\}_{j = 1}^s$ is a permutation of $\{1,\ldots ,s\}$ ;
442
+ Input each of $\{(m_{i_j},p_{i_j})\}_{j = 1}^{n_1}$ into the representation network $f_{\pmb{\xi}}$ to obtain the representations $\{r_{i_j}\}_{j = 1}^{n_1}$ as $\boldsymbol {r}_{i_j} = f_{\pmb{\xi}}(\boldsymbol {m}_{i_j},\boldsymbol {p}_{i_j})$ ;
443
+ Calculate the state representation by an aggregate function $\mathcal{A}$ as $\boldsymbol {r} = \mathcal{A}(\{\boldsymbol {r}_{i_j}\}_{j = 1}^{n_1})$ ;
444
+ Input $\boldsymbol{r}$ and the remaining $\{m_{i_j}\}_{j = n_1 + 1}^s$ into the generation network $g_{\pmb{\eta}}$ to obtain the predictions $\{p_{i_j}'\}_{j = n_1 + 1}^s$ of measurement outcome distributions as $\boldsymbol{p}_{i_j}' = g_{\pmb{\eta}}(\boldsymbol {r},\boldsymbol{m}_{i_j})$ ;
445
+ Calculate the loss $l$ with Eq. (3) by comparing $\{p_{i_j}'\}_{j = n_1 + 1}^s$ with $\{p_{i_j}\}_{j = n_1 + 1}^s$ and update $\mathcal{L}$ as $\mathcal{L} = \mathcal{L} + l$ ;
446
+ Calculate $\nabla_{\pmb{\xi}}\mathcal{L}$ and $\nabla_{\pmb{\eta}}\mathcal{L}$ ;
447
+ Update $\pmb{\xi}$ and $\pmb{\eta}$ as $\pmb {\xi} = \pmb {\xi} - \delta \nabla_{\pmb{\xi}}\mathcal{L}$ , $\pmb {\eta} = \pmb {\eta} - \delta \nabla_{\pmb{\eta}}\mathcal{L}$ ;
448
+ $\mathcal{L} = 0$ ;
449
+ $e = e + 1$ ;
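+
+ For readers who prefer executable code, the following is a minimal, hypothetical PyTorch-style sketch of one training step of Algorithm 2. The two-layer networks, the mean aggregation for $\mathcal{A}$ , and the cross-entropy stand-in for the loss in Eq. (3) are illustrative assumptions, not the authors' implementation.
+
+ ```python
+ import random
+ import torch
+ import torch.nn as nn
+
+ class GQNQSketch(nn.Module):
+     """Toy representation network f_xi and generation network g_eta."""
+     def __init__(self, m_dim, p_dim, r_dim=64, hidden=128):
+         super().__init__()
+         self.f_xi = nn.Sequential(nn.Linear(m_dim + p_dim, hidden), nn.ReLU(),
+                                   nn.Linear(hidden, r_dim))
+         self.g_eta = nn.Sequential(nn.Linear(r_dim + m_dim, hidden), nn.ReLU(),
+                                    nn.Linear(hidden, p_dim))
+
+     def forward(self, known_m, known_p, query_m):
+         r_i = self.f_xi(torch.cat([known_m, known_p], dim=-1))   # per-measurement representations
+         r = r_i.mean(dim=0, keepdim=True)                        # aggregate A = mean
+         r = r.expand(query_m.shape[0], -1)
+         logits = self.g_eta(torch.cat([r, query_m], dim=-1))
+         return logits.log_softmax(dim=-1)                        # predicted log-probabilities
+
+ def train_step(model, m, p, a, optimizer):
+     """One pass of the loop body in Algorithm 2; m, p have shape (s, m_dim) and (s, p_dim)."""
+     s = m.shape[0]
+     n1 = random.randint(1, a)                       # number of "known" measurement results
+     perm = torch.randperm(s)
+     known, query = perm[:n1], perm[n1:]
+     log_q = model(m[known], p[known], m[query])
+     loss = -(p[query] * log_q).sum(dim=-1).mean()   # cross-entropy surrogate for Eq. (3)
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+ ```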
2202.06xxx/2202.06804/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:da3f01d28dcf63a9eea8f3f9886e81a67c022a691fb12c9271e42069c0bc696e
3
+ size 724007
2202.06xxx/2202.06804/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06817/04b8261e-67f2-4cd9-9432-44c31d240cd9_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06817/04b8261e-67f2-4cd9-9432-44c31d240cd9_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06817/04b8261e-67f2-4cd9-9432-44c31d240cd9_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2de89e2a251b841392961aefe0379994f641fd06be32b53e7c440ab891f0f635
3
+ size 25652470
2202.06xxx/2202.06817/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06817/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1bded37575f3635bc77aec600aa51d8273c86a941f484c4d54ebe11975cf07a5
3
+ size 1659592
2202.06xxx/2202.06817/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06840/0bd9eaad-6f77-437e-b60a-aaa13ea7bfb5_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06840/0bd9eaad-6f77-437e-b60a-aaa13ea7bfb5_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06840/0bd9eaad-6f77-437e-b60a-aaa13ea7bfb5_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ba60f9815d69f6940b4ab9af353499bc0df7bea16fdfef5cd71b3b7416aef57f
3
+ size 7794503
2202.06xxx/2202.06840/full.md ADDED
@@ -0,0 +1,511 @@
1
+ # What Do They Capture? - A Structural Analysis of Pre-Trained Language Models for Source Code
2
+
3
+ Yao Wan*
4
+
5
+ School of Computer Science and
6
+
7
+ Technology, Huazhong University of
8
+
9
+ Science and Technology, China
10
+
11
+ wanyao@hust.edu.cn
12
+
13
+ Wei Zhao*
14
+
15
+ School of Computer Science and Technology, Huazhong University of
16
+
17
+ Science and Technology, China
18
+
19
+ mzhaowei@hust.edu.cn
20
+
21
+ Hongyu Zhang
22
+
23
+ University of Newcastle
24
+
25
+ Australia
26
+
27
+ hongyu.zhang@newcastle.edu.au
28
+
29
+ Yulei Sui
30
+
31
+ School of Computer Science,
32
+
33
+ University of Technology Sydney
34
+
35
+ Australia
36
+
37
+ yulei.sui@uts.edu.au
38
+
39
+ Guandong Xu
40
+
41
+ School of Computer Science,
42
+
43
+ University of Technology Sydney
44
+
45
+ Australia
46
+
47
+ guandong.xu@uts.edu.au
48
+
49
+ Hai Jin*
50
+
51
+ School of Computer Science and
52
+
53
+ Technology, Huazhong University of
54
+
55
+ Science and Technology, China
56
+
57
+ hjin@hust.edu.cn
58
+
59
+ # ABSTRACT
60
+
61
+ Recently, many pre-trained language models for source code have been proposed to model the context of code and serve as a basis for downstream code intelligence tasks such as code completion, code search, and code summarization. These models leverage masked pre-training and Transformer and have achieved promising results. However, currently there is still little progress regarding the interpretability of existing pre-trained code models. It is not clear why these models work and what feature correlations they can capture. In this paper, we conduct a thorough structural analysis aiming to provide an interpretation of pre-trained language models for source code (e.g., CodeBERT, and GraphCodeBERT) from three distinctive perspectives: (1) attention analysis, (2) probing on the word embedding, and (3) syntax tree induction. Through comprehensive analysis, this paper reveals several insightful findings that may inspire future studies: (1) Attention aligns strongly with the syntax structure of code. (2) Pre-trained language models of code can preserve the syntax structure of code in the intermediate representations of each Transformer layer. (3) The pre-trained models of code have the ability of inducing syntax trees of code. These findings suggest that it may be helpful to incorporate the syntax structure of code into the process of pre-training for better code representations.
62
+
63
+ # CCS CONCEPTS
64
+
65
+ - Software and its engineering $\rightarrow$ Reusability.
66
+
67
+ * Also with National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, HUST, Wuhan, 430074, China.
68
+
69
+ Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
70
+
71
+ ICSE '22, May 21-29, 2022, Pittsburgh, PA, USA
72
+
73
+ $\odot$ 2022 Association for Computing Machinery.
74
+
75
+ ACM ISBN 978-1-4503-9221-1/22/05...$15.00
76
+
77
+ https://doi.org/10.1145/3510003.3510050
78
+
79
+ # KEYWORDS
80
+
81
+ Code representation, deep learning, pre-trained language model, probing, attention analysis, syntax tree induction.
82
+
83
+ # ACM Reference Format:
84
+
85
+ Yao Wan, Wei Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, and Hai Jin. 2022. What Do They Capture? - A Structural Analysis of Pre-Trained Language Models for Source Code. In 44th International Conference on Software Engineering (ICSE '22), May 21-29, 2022, Pittsburgh, PA, USA. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3510003.3510050
86
+
87
+ # 1 INTRODUCTION
88
+
89
+ Code representation learning (also known as code embedding) aims to encode the code semantics into distributed vector representations, and plays an important role in recent deep-learning-based models for code intelligence. Code embedding can be used to support a variety of downstream tasks, such as code completion [30], code search [12, 43], and code summarization [1, 44, 45].
90
+
91
+ Current approaches to code embedding mainly fall into two categories from the perspectives of supervised and unsupervised (or self-supervised) learning paradigms. The supervised approaches are typically developed for specific tasks following the encoder-decoder architecture [37]. In this architecture, an encoder network (e.g. LSTM, CNN, and Transformer) is used to produce a vector representation of a program. The resulting vector is then fed as an input into a decoder network to perform some prediction tasks, e.g., summary generation [1, 35, 44] or token sequence prediction [30]. Recently, there has been significant improvement in the expressiveness of models that can learn the semantics of code, such as self-attention based architectures like Transformer [39]. Another line of code embedding research is based on unsupervised learning. Some approaches utilize word embedding techniques to represent source code [27, 28], which aim to learn a global word embedding matrix $\mathbf{E} \in \mathbb{R}^{V \times d}$ , where $V$ is the vocabulary size and $d$ is the embedding dimension. Code2Vec [2] is one such approach, which learns a distributed representation of code based on the sampled paths from Abstract Syntax Trees (ASTs).
92
+
93
+ Recently, self-supervised models which are pre-trained through masked language modeling have attracted much attention. Pre-trained models such as BERT [8] and ELMo [29] are representative approaches and have been successfully used in a variety of tasks
94
+
95
+ in NLP. Inspired by the self-supervised pre-training in NLP, there have been some recent efforts in developing pre-trained models on large-scale code corpus for software engineering tasks. For example, CuBERT [18] is a pre-trained BERT model using 7.4M Python files from GitHub. CodeBERT [11] is a bimodal pre-trained model on source code and natural-language descriptions. To incorporate the syntax structure of code, Guo et al. [13] further propose GraphCodeBERT to preserve the syntax structure of source code by introducing an edge masking technique over data-flow graphs. With much effort being devoted to pre-trained code embeddings, there is a pressing need to understand why they work and what feature correlations they are capturing.
96
+
97
+ In the NLP community, several recent studies have been made towards interpreting the pre-trained language models, e.g., BERT, from the perspective of attention analysis and task probing. This kind of research has become a subspecialty of "BERTology" [31], which focuses on studying the inner-mechanism of BERT model [8]. However, in software engineering, such an understanding is yet to be achieved. Often, we see pre-trained language models that achieve superior performance in various software engineering tasks, but do not understand why they work. Currently there have been several empirical studies that aim to investigate the effectiveness of code embedding. For example, Kang et al. [19] empirically assessed the impact of code embedding for different downstream tasks. Chirkova and Troshin [3] conducted another study to investigate the capabilities of Transformers to utilize syntactic information in different tasks. However, these studies only show in which scenarios a code embedding technique works better, without explaining the inner-mechanism of why the embedding achieves good results. Therefore, it is still not clear why the pre-trained language models work and what they indeed capture, in the context of software engineering tasks.
98
+
99
+ In this work, we explore the interpretability of pre-trained code models. More specifically, we try to answer the following question: Can the existing pre-trained language models learn the syntactical structure of source code written in programming languages? Addressing this question plays an important role in understanding the learned structure of deep neural networks. We conduct a thorough structural analysis from the following three aspects, aiming to provide an interpretation of pre-trained code models (e.g., CodeBERT, GraphCodeBERT).
100
+
101
+ - As the first contribution, we analyze the self-attention weights and align the weights with the syntax structure (see Sec. 4.1). Given a code snippet, our assumption is that if two tokens are close to each other in the AST, i.e., have a neighbourhood relationship, the attention weights assigned to them should be high. Our analysis reveals that the attention can capture high-level structural properties of source code, i.e., the motif structure in ASTs.
102
+ - As the second contribution, we design a structural probing approach [14] to investigate whether the syntax structure is embedded in the linear-transformed contextual word embedding of pre-trained code models (see Sec. 4.2). Using our probe, we show that such transformations also exist in the pre-trained language models for source code, showing evidence that the syntax structure of code is also embedded implicitly in the vectors learned by the model.
103
+
104
+ ![](images/d3ac446e4486603153802940c80838c11df5cc2d0721aade8a22d7bacd5a62c2.jpg)
105
+ Figure 1: A general framework for Transformer-based language model pre-training [8].
106
+
107
+ - As the third contribution, we investigate whether the pretrained language models for source code provide the ability of inducing the syntax tree without training (see Sec. 4.3). We find that the pre-trained models can indeed learn the syntax structure of source code to a certain extent.
108
+
109
+ Our work is complementary to other works that aim to design better neural networks for source code representation. We believe that the findings revealed in this paper may shed light on the inner mechanism of pre-training models for programming languages, as well as inspire further studies.
110
+
111
+ # 2 BACKGROUND
112
+
113
+ In this section, we introduce some background knowledge of our work, including Transformer, and pre-training language model. Figure 1 shows a general framework for Transformer-based language model pre-training.
114
+
115
+ # 2.1 Self-Attention-Based Transformer
116
+
117
+ Transformer [39], which is solely based on self-attention, has become a popular component for code representation learning. Let $c = \{w_1, w_2, \ldots, w_n\}$ denote a code snippet as a sequence of tokens of length $n$ . A Transformer model is composed of $L$ layers of Transformer blocks that encode a code snippet into contextual representations at different levels $\mathbf{H}^l = [\mathbf{h}_1^l, \mathbf{h}_2^l, \ldots, \mathbf{h}_n^l]$ , where $l$ denotes the $l$ -th layer. For each layer, the layer representation $\mathbf{H}^l$ is computed by the $l$ -th layer Transformer block $\mathbf{H}^l = \mathrm{Transformer}_l(\mathbf{H}^{l-1}), l \in \{1, 2, \ldots, L\}$ .
118
+
119
+ In each Transformer block, multiple self-attention heads are used to aggregate the output vectors of the previous layer. Given an input $c$ , the self-attention mechanism assigns each token $w_{i}$ a set of attention weights over the tokens in the input:
120
+
121
+ $$
122
+ \mathrm{Attn}\left(w_{i}\right) = \left(\alpha_{i,1}(c), \alpha_{i,2}(c), \dots, \alpha_{i,n}(c)\right), \tag {1}
123
+ $$
124
+
125
+ ![](images/e68d386a9a9c878574970ba44117d9c707c9174597cb5d277f2718758f078aad.jpg)
126
+ (a) A Python code snippet with its AST
127
+
128
+ ![](images/efdec0217155c0a1cf15c851e0d4082701f7552b768ce28122e152e17bb07b01.jpg)
129
+ (b) Attention heatmap in Layer 5
130
+
131
+ ![](images/36650c643d14bf6c553c614fa52a740276f1c6dbd5a63c7d8e76b378ea268a04.jpg)
132
+ (c) Attention distribution in Layer 5, Head 12
133
+ Figure 2: Visualization of self-attention distribution for a code snippet in CodeBERT. (a) A Python code snippet with its corresponding AST. (b) Heatmap of the averaged attention weights in Layer 5. (c) Self-attention distribution in Layer 5, Head 12. The brightness of lines indicates the attention weights in a specific head. If the connected nodes appear in the motif structure of the corresponding AST, we mark the lines in red.
134
+
135
+ where $\alpha_{i,j}(c)$ is the attention that $w_{i}$ pays to $w_{j}$ . The attention weights are computed from the scaled dot-product of the query vector of $w_{i}$ , and the key vector of $w_{j}$ , followed by a softmax. In the vectorized computing, a general attention mechanism can be formulated as the weighted sum of the value vector $\mathbf{V}$ , using the query vector $\mathbf{Q}$ and the key vector $\mathbf{K}$ :
136
+
137
+ $$
138
+ \mathrm{Att}(\mathbf{Q}, \mathbf{K}, \mathbf{V}) = \mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{\mathrm{model}}}}\right)\cdot \mathbf{V}, \tag {2}
139
+ $$
140
+
141
+ where $d_{\mathrm{model}}$ represents the dimension of each hidden representation. For self-attention, $\mathbf{Q}, \mathbf{K}$ , and $\mathbf{V}$ are mappings of the previous hidden representation by different linear functions, i.e., $\mathbf{Q} = \mathbf{H}^{l - 1}\mathbf{W}_Q^l$ , $\mathbf{K} = \mathbf{H}^{l - 1}\mathbf{W}_K^l$ , and $\mathbf{V} = \mathbf{H}^{l - 1}\mathbf{W}_V^l$ , respectively. At last, the encoder produces the final contextual representation $\mathbf{H}^L = [\mathbf{h}_1^L, \dots, \mathbf{h}_n^L]$ , which is obtained from the last Transformer block.
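+
+ As a concrete illustration of Eq. (2), a minimal single-head self-attention sketch (a hypothetical helper, not the pre-trained models' actual implementation) is:
+
+ ```python
+ import torch
+
+ def self_attention(H_prev, W_Q, W_K, W_V):
+     """Single-head scaled dot-product self-attention over the previous layer's states."""
+     Q, K, V = H_prev @ W_Q, H_prev @ W_K, H_prev @ W_V   # (n, d) each
+     d_model = Q.shape[-1]
+     scores = Q @ K.transpose(-2, -1) / d_model ** 0.5    # (n, n) pairwise scores
+     alpha = scores.softmax(dim=-1)                       # attention weights alpha_{i,j}
+     return alpha @ V, alpha                              # new representations and weights
+ ```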
142
+
143
+ In order to utilize the order of the sequential tokens, the "positional encodings" are injected to the input embedding.
144
+
145
+ $$
146
+ \mathbf{w}_{i} = e\left(w_{i}\right) + \mathrm{pos}\left(w_{i}\right), \tag {3}
147
+ $$
148
+
149
+ where $e$ denotes the word embedding layer, and $\mathrm{pos}$ denotes the positional embedding layer. Typically, the positional encoding encodes the position of each code token using sine and cosine functions.
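+
+ A minimal sketch of one common sine/cosine positional encoding follows; this is the standard Transformer formulation and is shown only for illustration (the concrete encoding used by a given pre-trained model may differ):
+
+ ```python
+ import numpy as np
+
+ def sinusoidal_positional_encoding(n_positions, d_model):
+     """pos(.) in Eq. (3): even dimensions use sine, odd dimensions use cosine."""
+     pos = np.arange(n_positions)[:, None]                       # (n, 1)
+     dim = np.arange(d_model)[None, :]                           # (1, d)
+     angle = pos / np.power(10000.0, (2 * (dim // 2)) / d_model)
+     enc = np.zeros((n_positions, d_model))
+     enc[:, 0::2] = np.sin(angle[:, 0::2])
+     enc[:, 1::2] = np.cos(angle[:, 1::2])
+     return enc
+ ```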
150
+
151
+ # 2.2 Pre-Training Language Model
152
+
153
+ Given a corpus, each sentence (or code snippet) is first tokenized into a series of tokens, e.g., via Byte Pair Encoding (BPE) [32]. For pre-training, BERT takes the concatenation of two segments as the input, defined as $c_{1} = \{w_{1}, w_{2}, \ldots, w_{n}\}$ and $c_{2} = \{u_{1}, u_{2}, \ldots, u_{m}\}$ , where $n$ and $m$ denote the lengths of the two segments, respectively. The two segments are always connected by a special separator token [SEP]. Each sequence always starts with a special classification token [CLS] and ends with an ending token [EOS]. Finally, the input of each
154
+
155
+ training sample will be represented as follows:
156
+
157
+ $$
158
+ s = [\text{CLS}], \underbrace{w_{1}, w_{2}, \ldots, w_{n}}_{c_{1}}, [\text{SEP}], \underbrace{u_{1}, u_{2}, \ldots, u_{m}}_{c_{2}}, [\text{EOS}].
159
+ $$
160
+
161
+ The input is then fed into a Transformer encoder. During BERT's pre-training, two objectives are designed for self-supervised learning, i.e., masked language modeling (MLM) and next sentence prediction (NSP). In masked language modeling, the tokens of an input sentence are randomly sampled and replaced with the special token [MASK]. In practice, BERT uniformly selects $15\%$ of the input tokens for possible replacement. Among the selected tokens, $80\%$ are replaced with [MASK], $10\%$ are unchanged, and the remaining $10\%$ are replaced with random tokens from the vocabulary [8]. Next sentence prediction is modeled as a binary classification task to predict whether two segments are consecutive. Positive and negative training examples are constructed based on the following rules: (1) if two segments are consecutive, the pair is considered a positive example; (2) otherwise, paired segments from different documents are considered negative examples.
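+
+ A minimal sketch of the 80/10/10 masking rule described above (a hypothetical helper that operates on whole tokens rather than BPE subwords, for brevity):
+
+ ```python
+ import random
+
+ def mask_tokens(tokens, vocab, mask_rate=0.15):
+     """Select ~15% of tokens; of those, 80% -> [MASK], 10% unchanged, 10% random token."""
+     masked, labels = list(tokens), [None] * len(tokens)
+     for i, tok in enumerate(tokens):
+         if random.random() < mask_rate:
+             labels[i] = tok                         # the model must recover this token
+             r = random.random()
+             if r < 0.8:
+                 masked[i] = "[MASK]"
+             elif r < 0.9:
+                 masked[i] = tok                     # keep unchanged
+             else:
+                 masked[i] = random.choice(vocab)    # random replacement
+     return masked, labels
+ ```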
162
+
163
+ Recently, self-supervised learning using masked language modeling has become a popular technique for natural language understanding and generation [5, 8, 9, 24, 34, 36]. In the context of software engineering, several pre-trained code models have also been proposed for program understanding. In this paper, we select two representative pre-trained models for code representations: (1) CodeBERT [11], which takes the concatenation of source code and natural-language description as inputs, and pre-trains a language model by masking the inputs; and (2) GraphCodeBERT [13], which improves CodeBERT by incorporating the data-flow information among variables into model pre-training.
164
+
165
+ # 3 MOTIVATION
166
+
167
+ Prior work in NLP has pointed out that the self-attention mechanism in Transformer has the capability of capturing certain syntax information in natural languages. Inspired by this, we visualize and
168
+
169
+ ![](images/53ff3bdff46f7e6ea656d598bf6b40d99d638da8074017d1d4c6f4ead255af35.jpg)
170
+ (a) A Python code snippet and its AST
171
+
172
+ ![](images/4013d1ba9ce0aaf90464310a6405a4cb19add8b147bc2b729c3922ba0fe8b53a.jpg)
173
+ (b) Attention analysis
174
+ Figure 3: An illustration of attention analysis, probing on word embedding, and syntax tree induction, with a specific Python code snippet.
175
+
176
+ ![](images/5f33911258e69b403ee57a588698afbbcad6c9f2aba5e3dca00a5d870103f033.jpg)
177
+ (c) Probing on word embedding
178
+
179
+ ![](images/13a821fa63709d24a14589bedb7969faf76253795f55e3c355ba7217d35bfb52.jpg)
180
+ (d) Syntax tree induction
181
+
182
+ investigate the attention distribution of the pre-trained model (i.e., CodeBERT) for a code snippet, as shown in Figure 2. Figure 2(a) shows a Python code snippet with its AST. In this paper, we define the syntax structure of the AST consisting of a non-leaf node with its children (e.g., if_statement and block in Figure 2(a)) as a motif structure. We believe that the syntax information of code can be represented by a series of motif structures.
183
+
184
+ Given a code snippet and its corresponding AST, Figure 2(b) visualizes the self-attention heatmap for a specific layer (i.e., Layer 5), which is an average of attention weights over multiple heads. From this figure, we can observe that several patterns indeed exist in the self-attention heatmap, depicted as groups of rectangles (marked in red). These rectangles indicate that the code tokens form groups. Interestingly, we can also find that each group of tokens is close to each other in the AST. Taking "if exit_code is not None" as an example, which is an if statement, we can see that, in the AST, all of these tokens are in the same branch of if_statement. In addition, we can see that these code tokens are also closely connected in the self-attention heatmap.
185
+
186
+ Moreover, we also visualize the self-attention distribution in a specific head (Layer 5, Head 12) to analyze the connections between two tokens, as shown in Figure 2(c). In this figure, the brightness of lines indicates the attention weights in a specific head. If the connected nodes appear in the motif structure of the corresponding AST, we mark the lines in red. From this figure, we can observe that those code tokens (i.e., "if", "exit_code", "not", and "None") that are in a motif structure indeed have been highlighted as closely connected by self-attention.
187
+
188
+ As we have identified several patterns from the attention distribution, which provide some hints to the syntax structure of code, it is necessary for us to further explore this phenomenon with quantitative analysis and systematic assessment. Motivated by the aforementioned observations, this paper investigates why pre-trained language models for source code work and what feature correlations they are capturing, by analyzing the self-attention mechanism. In particular, we analyze two outputs of the self-attention mechanism, i.e., the attention distribution and the generated hidden vectors, under the framework of Transformer.
189
+
190
+ # 4 STRUCTURAL ANALYSIS OF PRE-TRAINED LANGUAGE MODELS FOR SOURCE CODE
191
+
192
+ In this section, we propose three distinct structural analysis approaches, i.e., attention analysis, structural probing on word embedding, and syntax tree induction, to interpret pre-trained code models (i.e., CodeBERT and GraphCodeBERT). Figure 3 gives an illustration of the three structural analysis approaches. Before introducing each approach, we first introduce several common notations that will be used later. Let $(w_{1},w_{2},\ldots ,w_{n})$ denote the code token sequence of code snippet $c$ , with length $n$ . On the $l$ -th layer of Transformer, we use $(\mathbf{h}_1^l,\mathbf{h}_2^l,\dots,\mathbf{h}_n^l)$ to denote the sequence of contextual representation of each code token.
193
+
194
+ # 4.1 Attention Analysis
195
+
196
+ We start by analyzing the self-attention weights, which are the core mechanism for pre-training Transformer-based models. Intuitively, attention defines the closeness of each pair of code tokens. Through the lens of attention analysis, we aim to analyze how attention aligns with the syntactical relations in source code. In particular, we consider the syntactic relation in which two AST tokens share the same parent node, and examine whether the attention weight between such tokens is high. Figures 3(a) and 3(b) illustrate the attention analysis. Given a code snippet with its AST, we can see that the leaf nodes for and in share the same parent. As expected, this structure is aligned with the attention weight $\alpha_{\text{for,in}}$ between these two nodes.
197
+
198
+ Specifically, on each Transformer layer, we can obtain a set of attention weights $\alpha$ over the input code, where $\alpha_{i,j} > 0$ is the attention from $i$ -th code token to $j$ -th token. Here, we define an indicator function $f(w_i, w_j)$ that returns 1 if $w_i$ and $w_j$ are in a syntactic relation ( $w_i$ and $w_j$ have the same parent node in the AST)<sup>1</sup>, and 0 otherwise. We define the attention weight between $w_i$ and $w_j$ as $\alpha_{i,j}(c)$ , and if $w_i$ and $w_j$ are very close, the attention weight should be larger than a threshold, i.e., $\alpha_{i,j}(c) > \theta$ . Therefore, the proportion of high-attention token pairs ( $\alpha_{i,j}(c) > \theta$ ) aggregated
199
+
200
+ over a dataset $C$ can be formulated as follows:
201
+
202
+ $$
203
+ p_{\alpha}(f) = \frac{\sum_{c \in C} \sum_{i = 1}^{|c|} \sum_{j = 1}^{|c|} \mathbb{1}_{\alpha_{i,j}(c) > \theta} \cdot f\left(w_{i}, w_{j}\right)}{\sum_{c \in C} \sum_{i = 1}^{|c|} \sum_{j = 1}^{|c|} \mathbb{1}_{\alpha_{i,j}(c) > \theta}}, \tag {4}
204
+ $$
205
+
206
+ where $\theta$ is a threshold selected for high-confidence attention weights.
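+
+ A minimal sketch of how Eq. (4) could be computed from per-snippet attention matrices and same-parent indicator masks (hypothetical helper names and data layout):
+
+ ```python
+ import numpy as np
+
+ def attention_alignment(attn_list, same_parent_list, theta=0.3):
+     """Proportion of high-confidence weights (alpha > theta) whose token pair shares an AST parent."""
+     hits, total = 0, 0
+     for alpha, same_parent in zip(attn_list, same_parent_list):
+         high = alpha > theta                  # boolean mask of high-confidence weights
+         hits += np.sum(high & same_parent)    # pairs aligned with the motif structure
+         total += np.sum(high)
+     return hits / max(total, 1)
+ ```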
207
+
208
+ Variability. Equation 4 shows that the proportion of aligned attention depends only on the absolute value of the attention weight $\alpha_{i,j}(c)$ . We hypothesize that heads that attend to position, i.e., heads that focus on the previous or next code token, would not align well with the syntax structure of code, since they do not consider the content of the code token. To distinguish whether the heads are attending to the content or the position of the code token, we further investigate the attention variability, which measures how attention varies over different inputs. The attention variability is formally defined as follows [40]:
209
+
210
+ $$
211
+ \mathrm{Variability}_{\alpha} = \frac{\sum_{c \in C} \sum_{i = 1}^{|c|} \sum_{j = 1}^{|c|} \left|\alpha_{i,j}(c) - \bar{\alpha}_{i,j}\right|}{2 \cdot \sum_{c \in C} \sum_{i = 1}^{|c|} \sum_{j = 1}^{|c|} \alpha_{i,j}(c)}, \tag {5}
212
+ $$
213
+
214
+ where $\bar{\alpha}_{i,j}$ is the mean of $\alpha_{i,j}(c)$ over all $c\in C$ . We only include the first $N$ tokens ( $N = 10$ ) of each $c\in C$ to ensure a sufficient amount of data at each position $i$ ; the positional patterns appear to be consistent across the entire sequence. High variability suggests a content-dependent head, while low variability indicates a content-independent (position-based) head.
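+
+ A corresponding sketch for the variability of Eq. (5), assuming the per-snippet attention matrices are truncated to the first $N$ positions:
+
+ ```python
+ import numpy as np
+
+ def attention_variability(attn_list, n_tokens=10):
+     """Mean absolute deviation of attention from its per-position mean, normalized as in Eq. (5)."""
+     A = np.stack([a[:n_tokens, :n_tokens] for a in attn_list if a.shape[0] >= n_tokens])
+     mean_attn = A.mean(axis=0)                        # alpha-bar_{i,j}
+     return np.abs(A - mean_attn).sum() / (2.0 * A.sum())
+ ```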
215
+
216
219
+
220
+ Figure 4: An illustration of the connection between distance and the syntax structure.
221
+
222
+ # 4.2 Structural Probing on Word Embedding
223
+
224
+ In this approach, we propose a structural probing analysis approach to investigate whether a pre-trained model embeds the syntactic structure in its contextual word embedding. The key idea of our approach is that the tree structure is embedded if the transformed space has the property that the Euclidean distance between two words' vectors corresponds to the number of edges between the words in the syntax tree. One question may arise: why does the distance between nodes in the syntax tree matter for capturing syntax information? This is because the distance metric (i.e., the path length between each pair of words) can recover the syntax tree simply by identifying that nodes $u$ and $v$ are neighbors if the distance between them equals 1. This has also been shown in Code2Vec [2], which utilizes the contextual information among a set of paths sampled from the AST to represent the structure information of code. Figure 4 gives a toy example to illustrate the connection between distance and syntax structure. Let $(w_{1},\ldots ,w_{i},\ldots ,w_{j},\ldots ,w_{n})$ denote the sequence of code tokens for code snippet $c$ ; if we know the distance between every pair of nodes, we can induce the syntax structure of
225
+
226
+ code. Note that the distance metric, which measures the distance between any two code tokens, can capture the global syntax structure of code to some extent.
227
+
228
+ Figures 3(a) and 3(c) illustrate the structural probing on word embedding. Taking the leaf nodes for and in, which share the same parent, as an example, the distance between these two nodes in the syntax tree is 2. We first map the representations of these two tokens into a hidden space via a linear transformation $B$ , obtaining $\mathrm{Vector}_{\mathrm{for}}$ and $\mathrm{Vector}_{\mathrm{in}}$ , respectively. We believe that if the squared Euclidean distance between $\mathrm{Vector}_{\mathrm{for}}$ and $\mathrm{Vector}_{\mathrm{in}}$ is close to 2, the syntax structure between for and in is well preserved.
229
+
230
+ In particular, we learn the mapping function $B$ in a supervised way. Formally, a code sequence $(w_{1}, w_{2}, \ldots, w_{n})$ is given as the input, and each model layer generates word vectors $(\mathbf{h}_{1}, \mathbf{h}_{2}, \ldots, \mathbf{h}_{n})$ . We compute the squared distance between two word vectors $\mathbf{h}_{i}$ and $\mathbf{h}_{j}$ in the high-dimensional hidden space as follows:
231
+
232
+ $$
233
+ d_{B}\left(\mathbf{h}_{i}, \mathbf{h}_{j}\right)^{2} = \left(B\left(\mathbf{h}_{i} - \mathbf{h}_{j}\right)\right)^{T} \left(B\left(\mathbf{h}_{i} - \mathbf{h}_{j}\right)\right), \tag {6}
234
+ $$
235
+
236
+ where $i$ and $j$ are indices of the words in the code sequence. The parameters of the structural probe are exactly the matrix $B$ (a linear mapping), which is trained to reconstruct the tree distance between all word pairs $(w_{i}, w_{j})$ in each code sequence of the training set of source code. We define the loss function for parameter training as follows:
237
+
238
+ $$
239
+ \min_{B} \sum_{c \in C} \frac{1}{|c|^{2}} \sum_{i, j} \left| d_{T^{c}}\left(w_{i}^{c}, w_{j}^{c}\right) - d_{B}\left(\mathbf{h}_{i}^{c}, \mathbf{h}_{j}^{c}\right)^{2} \right|, \tag {7}
240
+ $$
241
+
242
+ where $|c|$ is the length of code sequence $c$ , $d_{T^c}(w_i^c, w_j^c)$ denotes the distance between code tokens in the AST, and $d_B(\mathbf{h}_i^c, \mathbf{h}_j^c)^2$ denotes the distance between the embedding vectors of code tokens, for code sequence $c$ . The outer summation averages over all training sequences (normalized by $|c|^2$ ), while the inner summation runs over all possible pairs of words in each code sequence. The goal of this supervised training is to propagate the error backwards and update the parameters of the linear mapping matrix $B$ .
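+
+ A minimal PyTorch-style sketch of the structural probe and its loss (Eqs. (6) and (7)); the probe rank and initialization are illustrative assumptions:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class StructuralProbe(nn.Module):
+     """Linear map B that predicts squared tree distances from contextual word vectors."""
+     def __init__(self, hidden_dim, probe_rank=128):
+         super().__init__()
+         self.B = nn.Parameter(torch.randn(hidden_dim, probe_rank) * 0.01)
+
+     def forward(self, H):                            # H: (n, hidden_dim)
+         proj = H @ self.B                            # (n, rank)
+         diff = proj[:, None, :] - proj[None, :, :]   # all pairwise differences
+         return (diff ** 2).sum(dim=-1)               # d_B(h_i, h_j)^2 for all pairs
+
+ def probe_loss(pred_sq_dist, tree_dist):
+     """L1 loss between predicted squared distances and gold AST distances, per Eq. (7)."""
+     n = tree_dist.shape[0]
+     return (pred_sq_dist - tree_dist).abs().sum() / (n ** 2)
+ ```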
243
+
244
+ # 4.3 Syntax Tree Induction
245
+
246
+ In this approach, we propose to investigate the capability of pretrained code model in inducing syntax structure, without training. The key insight of our approach is that if the distance between two tokens is close (e.g., with a similar attention distribution, or with a similar representation), they are expected to be close in the syntax tree, i.e., sharing the same parent. Based on this insight, we propose to induce the syntax tree from the distances between two tokens. Our assumption is that if the induced tree derived from the pre-training model is similar to the gold standard syntax tree, we can reasonably infer that the syntactic structures have been preserved during the model pre-training.
247
+
248
+ We propose to induce the syntax tree based on the syntactic distance among code tokens, which was first introduced for grammar induction for natural languages [33]. Formally, given a code sequence $(w_{1},w_{2},\ldots ,w_{n})$ , we compute $\mathbf{d} = (d_1,d_2,\dots,d_{n - 1})$ , where $d_{i}$ corresponds to the syntactic distance between tokens $w_{i}$ and $w_{i + 1}$ . Each $d_{i}$ is defined as follows:
249
+
250
+ $$
251
+ d_{i} = f\left(g\left(w_{i}\right), g\left(w_{i + 1}\right)\right), \tag {8}
252
+ $$
253
+
254
+ Algorithm 1: Greedy top-down binary syntax tree induction based on syntactic distances.
+
+ ```txt
+ Input: S = (w_1, w_2, ..., w_n): a sequence of code tokens, with length n;
+        d = (d_1, d_2, ..., d_{n-1}): a vector whose elements are the distances
+                                      between adjacent code tokens.
+ Function Tree(S, d):
+     if d = [] then
+         node <- Leaf(S_0);
+     else
+         i = argmax(d);
+         child_l = Tree(S_<=i, d_<i);
+         child_r = Tree(S_>i, d_>i);
+         node <- Node(child_l, child_r);
+     end
+     return node;
+ End Function
+ ```
273
+
274
+ where $f(\cdot, \cdot)$ and $g(\cdot)$ denote the distance measurement function and the code representation function, respectively. Here, we measure the syntactic distance between two tokens from their intermediate representation vectors, as well as from their self-attention distributions, with various distance measurement functions. Specifically, let $g_l^v$ and $g_{l,k}^{d}$ denote the functions that generate the intermediate representation and the self-attention distribution in the $l$ -th layer and $k$ -th head. There are many options for measuring the distance between vectors, both for intermediate representations and for attention distributions. For example, we can use the $L1$ and $L2$ distances for two intermediate representation vectors, and the Jensen-Shannon divergence [25] or Hellinger distance [22] for two attention distributions. Table 1 summarizes all the available distance measurement functions.
275
+
276
+ Once the distance vector $\mathbf{d}$ is computed, we can easily convert it to the target syntax tree through a simple greedy top-down inference algorithm based on recursive partitioning of the input, as shown in Algorithm 1. Alternatively, this tree reconstruction process can also be done in a bottom-up manner, which is left for further exploration [33].
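+
+ A runnable Python counterpart of Algorithm 1 (a sketch in which tokens and distances are plain lists):
+
+ ```python
+ def induce_tree(tokens, d):
+     """Greedy top-down binary tree induction: split at the largest syntactic distance and recurse."""
+     if len(tokens) == 1:
+         return tokens[0]                             # leaf node
+     i = max(range(len(d)), key=d.__getitem__)        # argmax of the distance vector
+     left = induce_tree(tokens[:i + 1], d[:i])
+     right = induce_tree(tokens[i + 1:], d[i + 1:])
+     return (left, right)                             # internal node
+
+ # Example: induce_tree(["if", "x", "is", "None"], [0.9, 0.2, 0.5])
+ # -> ("if", (("x", "is"), "None"))
+ ```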
277
+
278
+ Injecting Bias into Syntactic Distance. From our observation, the AST of source code tends to be right-skewed. This is also a well-known bias in constituency trees for English. Therefore, it motivates us to influence the induced trees such that they are moderately right-skewed, following the nature of gold-standard ASTs. To achieve this goal, we propose to inject this inductive bias into the framework by simply modifying the values of the syntactic distances. In particular, we introduce a right-skewness bias to make the induced tree appropriately right-biased [20]. Formally, we compute $\hat{d}_i$ by appending the following linear bias term to every $d_i$ :
279
+
280
+ $$
281
+ \hat{d}_{i} = d_{i} + \lambda \cdot \mathrm{AVG}(\mathbf{d}) \times \left(1 - \frac{i - 1}{m - 1}\right), \tag {9}
282
+ $$
283
+
284
+ where $\mathrm{AVG}(\cdot)$ outputs an average of all elements in a vector, $\lambda$ is a hyperparameter, and $i$ ranges from 1 to $m$ , where $m = n - 1$ .
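+
+ A small sketch of the bias injection, assuming the linearly decaying reading of Eq. (9) above (a hypothetical helper):
+
+ ```python
+ import numpy as np
+
+ def inject_right_skew_bias(d, lam=1.0):
+     """Add a bias that decays linearly from lam*AVG(d) at i = 1 to 0 at i = m."""
+     d = np.asarray(d, dtype=float)
+     m = len(d)
+     if m <= 1:
+         return d
+     i = np.arange(1, m + 1)
+     return d + lam * d.mean() * (1.0 - (i - 1) / (m - 1))
+ ```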
285
+
286
+ Table 1: The definition of different functions of distance measurement to compute the syntactic distance between two adjacent words in a code sequence. Note that $r = g^v(w_i)$ , $s = g^v(w_{i+1})$ , $P = g^d(w_i)$ , $Q = g^d(w_{i+1})$ , $h$ denotes the hidden embedding size, and $n$ denotes the length of code sequence.
287
+
288
+ <table><tr><td>Function</td><td>Definition</td></tr><tr><td colspan="2">Distance functions for intermediate representations</td></tr><tr><td>L1(r,s)</td><td>∑i=1h|ri-si|</td></tr><tr><td>L2(r,s)</td><td>√∑i=1h(ri-si)2</td></tr><tr><td colspan="2">Distance functions for attention distributions</td></tr><tr><td rowspan="3">JSD(P||Q)</td><td>√(DKL(P||M) + DKL(Q||M))/2</td></tr><tr><td>where M = (P + Q)/2</td></tr><tr><td>and DKL(A||B) = ∑w∈cA(w) log A(w)/B(w)</td></tr><tr><td>HEL(P,Q)</td><td>1/√2√∑i=1n(√pi - √qi)2</td></tr></table>
289
+
290
+ Note that introducing such a bias also lets us examine what changes are made to the resulting tree structure. Our assumption is that if injecting the bias does not affect the performance of the pre-trained model in this unsupervised analysis, we can infer that the model already captures the bias to some extent.
291
+
292
+ Similarity between Two Trees. Here we introduce the way we measure the similarity of the induced tree and the gold-standard AST. Specifically, we first transform the tree structure into a collection of intermediate nodes, where each intermediate node is represented by two leaf nodes. Then we measure the similarity between the two collections. Figure 5 shows a toy example to illustrate the calculation of similarity between two trees, i.e., the gold-standard AST (Figure 5(a)) and the induced tree (Figure 5(b)). As shown in Figure 5(a), the gold-standard AST consists of four intermediate nodes (i.e., $T_{1}$ , $T_{2}$ , $T_{3}$ , and $T_{4}$ ). We further expand each intermediate node into a pair of leaf nodes. For example, $T_{1}$ is expanded into $(w_{1}, w_{6})$ , where $w_{6}$ is randomly selected from $w_{4}$ , $w_{5}$ , and $w_{6}$ . Similarly, we also transform the induced tree into a collection of leaf-node pairs.
293
+
294
+ Figure 5: A toy example to illustrate the calculation of similarity between the gold-standard AST and induced tree.
295
+ ![](images/3e87d75a9e3558ce277050a622ed51c645e109611fcbd028ba09777bd1beea67.jpg)
296
+ (a) The gold-standard AST, with $S = \{T_1: (w_1, w_6),\ T_2: (w_2, w_6),\ T_3: (w_2, w_3),\ T_4: (w_4, w_6)\}$
+
+ ![](images/1791676422053120b76b7aa8f5d85ee77cf621cf95135231b79b5e567f894ddb.jpg)
+ (b) The induced tree, with $S' = \{T: (w_1, w_6),\ T: (w_2, w_6),\ T: (w_2, w_3),\ T: (w_4, w_6),\ T: (w_5, w_6)\}$
304
+
305
+ Given the two sets, we use the $F1$ score to measure their similarity. Let $S$ denote the set from the gold-standard tree and $S'$ the set from the predicted tree; the precision and recall are computed as $\mathrm{precision} = |S\cap S'| / |S'|$ and $\mathrm{recall} = |S\cap S'| / |S|$ , respectively. The F1 score
306
+
307
+ ![](images/308be05959d8b3e544633015d4626da5e0e8a8e43496dac87fc4855383d78b54.jpg)
308
+ (a) CodeBERT (Python)
309
+
310
+ ![](images/c9242f79451d3d9c005eba3a68786fc63eae104e1f91a4a23df63348ce80ba83.jpg)
311
+ (b) GraphCodeBERT (Python)
312
+
313
+ ![](images/d0e8fde7a0519ea779f70cd5228c765ad9547c76125bd020abeeaa05ae4c2ea9.jpg)
314
+ (c) CodeBERT (Java)
315
+
316
+ ![](images/918cbfc3f9b368abc47b0aae719aefcfb6c851e161e03be1518b251e410c6d55.jpg)
317
+ (d) GraphCodeBERT (Java)
318
+
319
+ ![](images/2ee3424fd43dcc8f65588dc7c6fe67de745721ad75ab4c6d43760c681cc3dc57.jpg)
320
+ (e) CodeBERT (PHP)
321
+
322
+ ![](images/98f4eda1c71879271b56d4f67ed6a822da954875df62b0f2fb4101b59b08c04e.jpg)
323
+ (f) GraphCodeBERT (PHP)
324
+ Figure 6: Consistency between the attention and AST for CodeBERT and GraphCodeBERT on different programming languages (i.e., Python, Java, and PHP). These heatmaps show the proportion of high-confidence attention weights $(\alpha_{i,j} > \theta)$ from each head that connect code tokens that are in the motif structure of the AST. The bars show the maximum value of each layer.
325
+
326
+ is the harmonic mean of precision and recall, as follows:
327
+
328
+ $$
329
+ F1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}. \tag {10}
330
+ $$
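+
+ A minimal sketch of the set-based precision/recall/F1 computation of Eq. (10), where each tree is represented by its extracted collection of leaf-node pairs:
+
+ ```python
+ def tree_f1(gold_pairs, pred_pairs):
+     """Precision, recall, and F1 between the gold-standard and induced leaf-pair sets."""
+     gold, pred = set(gold_pairs), set(pred_pairs)
+     overlap = len(gold & pred)
+     precision = overlap / len(pred) if pred else 0.0
+     recall = overlap / len(gold) if gold else 0.0
+     f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
+     return precision, recall, f1
+ ```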
331
+
332
+ # 5 EXPERIMENTAL DESIGN AND RESULTS
333
+
334
+ In this section, we conduct experiments to explore what the pretrained code models capture from three distinct aspects, i.e., attention analysis, structural probing on word embedding, and syntax tree induction.
335
+
336
+ # 5.1 Experimental Design
337
+
338
+ We investigate two Transformer-based pre-trained models (i.e., CodeBERT [11] and GraphCodeBERT [13]), both of which are composed of 12 layers of Transformer with 12 attention heads. These models are both pre-trained on CodeSearchNet [17], a large-scale code corpus collected from GitHub across six programming languages. The size of the representation in each Transformer layer is set to 768. Without loss of generality, we select Python, Java, and PHP as our target programming languages and use the corresponding datasets from CodeSearchNet. For all experiments, we exclude the attention to the [SEP] delimiter, as it has been shown to act as a "no-op" attention target [4], as well as the attention to the [CLS] token, which is not explicitly used for language modeling. Note that, in the pre-training phase, the input code snippets have been tokenized into subwords via byte-pair encoding (BPE) [32] before being passed to the pre-trained model. However, our analyses are
339
+
340
+ all based on the word-level code tokens. Therefore, we represent each word by averaging the representations of its subwords. All the experiments were conducted on a Linux server, with 128GB memory, and a single 32GB Tesla V100 GPU.
341
+
342
+ Through comprehensive analysis, we aim to answer the following research questions:
343
+
344
+ - RQ1 (Attention Analysis): Does attention align with the syntax structure in source code?
345
+ - RQ2 (Probing on Word Embedding): Is the syntax structure encoded in the contextual code embeddings?
346
+ - RQ3 (Syntax Tree Induction): Are the pre-trained code models able to induce the syntax structure of code?
347
+
348
+ # 5.2 RQ1: Attention Analysis
349
+
350
+ In attention analysis, we aim to investigate whether the attention aligns with the syntax structure of source code.
351
+
352
+ Experimental Settings. Following [41], we set the attention threshold $\theta$ in Equation 4 to 0.3, so as to select high-confidence attention while retaining enough data for our analysis. We leave the analysis of the impact of $\theta$ for future work. Furthermore, our analysis is based on a subset of 5,000 code snippets randomly sampled from the training dataset. To reduce memory usage and accelerate computation, we truncate all long code sequences to a maximum length of 512. We only include the results of attention heads where at least 100 high-confidence attention scores are available in our analysis.
353
+
354
+ ![](images/d7c39a6c6a957593690f616c6eb9626d80146bf74e512405fc4f7153dde71f13.jpg)
355
+ Layer: 2 Head: 3
356
+
357
+ ![](images/b66df6c4072ceeb21400db5b291b9667f8c8d2a90e6be0c16715b2f531fff493.jpg)
358
+ Layer: 5 Head: 11
359
+
360
+ ![](images/994354f4966a85539af62e87d4a39a49b54fac0ae4eedcafc006e7628fe41950.jpg)
361
+ Layer: 11 Head: 1
362
+ Figure 7: Visualization of attention heads in CodeBERT, along with the value of attention analysis $(p_{\alpha}(f))$ , and attention variability, given a Python code snippet. Left: Attention visualised in Layer 2, Head 3, which focuses attention primarily on the position of next token. Center: Attention visualized in Layer 5, Head 11, which disperses attention roughly evenly across all tokens. Right: Attention visualized in Layer 11, Head 1, which focuses on the content, and is highly aligned with the AST.
363
+
364
+ ![](images/cee57e1771074d45b6fc91626a576582686d19c411c3870c8f663c316ea65d0d.jpg)
365
+ Figure 8: The variability of attention distribution by layer-head in Python. High-values indicate content-dependent heads, and low-values indicate position-based heads.
366
+
367
+ Experimental Results. Figure 6 shows how attention aligns with the AST structure for CodeBERT and GraphCodeBERT on different programming languages (i.e., Python, Java, and PHP), according to the indicator defined in Equation 4. The figure shows the proportion of high-confidence attention weights $(\alpha_{i,j} > \theta)$ from each head that connect code tokens that are in the motif structure of the AST. The bar plots show the maximum score of each layer. From this figure, we can observe that the most aligned heads are
368
+
369
+ located in the deeper layers and the concentration is as high as $67.25\%$ (Layer 11, Head 1 in CodeBERT) and $59\%$ (Layer 12, Head 9 in GraphCodeBERT). These high scores indicate that attention aligns strongly with the motif structure in the AST, especially in the deeper layers. This is because the heads in deeper layers have stronger capabilities in capturing longer-distance relations.
370
+
371
+ Although there is a strong alignment between the attention and the syntax structure of code, it is still necessary to distinguish whether the attention is based on the position or the content of code tokens, as mentioned in Sec. 4.1. In Figure 7, we show the attention variability of attention heads in CodeBERT for a Python code snippet. Figure 7 (left) and Figure 7 (center) show two examples of heads that put more focus on position, from Layer 2, Head 3, and Layer 5, Head 11, respectively. Based on the variability defined in Equation 5, we can see that the attention in Layer 5, Head 11, is evenly dispersed, with a variability of 0.25. Moreover, in Layer 2, Head 3, it is apparent that the attention focuses on the next token position. Figure 7 (right) shows the content-dependent head from Layer 11, Head 1, which has the highest alignment with the abstract syntax tree structure among all heads. In Figure 8, we also visualize the variability of the attention distribution by layer and head in Python. High values indicate content-dependent heads, and low values indicate position-based (content-independent) heads.
372
+
373
+ Summary. Through attention analysis, we find that the learned attention weights are strongly aligned with the motif structure in an AST. Additionally, attention heads across different layers put different degrees of focus on the position and the content of source code tokens.
374
+
375
+ Table 2: The average Spearman correlation of probing in Python.
376
+
377
+ <table><tr><td>Method</td><td>Spearman Correlation</td></tr><tr><td>CodeBERT-0</td><td>0.60</td></tr><tr><td>CodeBERT-1</td><td>0.69</td></tr><tr><td>CodeBERT-5</td><td>0.85</td></tr><tr><td>GraphCodeBERT-5</td><td>0.86</td></tr></table>
378
+
379
+ # 5.3 RQ2: Probing on Word Embedding
380
+
381
+ We conduct structural probing on the word embedding of source code, to investigate whether the word embedding in the Transformer-based pre-trained model embeds the syntax structure of code.
382
+
383
+ Experimental Settings. Given a pair of code tokens (leaf nodes) in an AST, we measure the correlation between the distance predicted from the word embeddings and the gold-standard distance in the AST. Specifically, we use the Spearman correlation [7] between the predicted distance vector and the gold-standard distance vector, over all samples of code snippets. When training the linear transformation matrix $B$ in Equation 7, we limit the code length to 100. We probe the contextual representations in each layer of the Transformer, and denote the investigated pre-trained code models as CodeBERT- $K$ and GraphCodeBERT- $K$ , where $K$ indexes the Transformer layer in the corresponding model. To serve as a comparison against the pre-trained code models, we also design a baseline model - CodeBERT-0, which denotes the simple word embedding before being fed into the Transformer layers. In evaluation, we average the Spearman correlation over all fixed-length code sequences. We report the average value over sequence lengths from 5 to 50 as the Spearman metric, as in [14].
384
+
385
+ Experimental Results. Table 2 shows the results of probing in Python. From this table, we find that CodeBERT-0, without Transformer layers, achieves inferior performance compared with the variants with multiple layers of Transformer. This confirms our assumption that the Transformer layers have the ability to capture the syntax information of source code. In addition, we can also find that GraphCodeBERT performs better than CodeBERT, indicating that it is helpful to explicitly incorporate the syntax structure into model pre-training.
386
+
387
+ Figure 9 shows the Spearman correlation of probing on the representation in each layer of the models. We can observe that the capability of capturing syntax structure differs across the Transformer layers. The best performance is obtained in the 5-th layer. For example, in Python, CodeBERT and GraphCodeBERT achieve the highest Spearman correlation (84% and 86%, respectively) in the 5-th layer. Furthermore, in each layer of Transformer, GraphCodeBERT still performs better than CodeBERT in capturing the syntax structure of programs written in Python, Java, and PHP, confirming the observation from Table 2.
388
+
389
+ Figure 10 shows the heatmaps of the gold-standard and predicted distances based on pre-trained CodeBERT and GraphCodeBERT, for a given input Python code snippet. We can see that Figures 10c and 10d look more similar to the gold-standard one (Figure 10a) than Figure 10b does. In these figures, some matching parts are marked in red. The result confirms that CodeBERT-5 and GraphCodeBERT-5 (with multiple layers of Transformer) perform better than CodeBERT-0 (without passing through the Transformer layers).
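For reference, the gold-standard matrix in Figure 10a is built from pairwise distances between leaf tokens in the AST. The sketch below computes such a matrix under the assumption that the distance between two leaves is the number of edges on the path connecting them through their lowest common ancestor; the toy parent map at the bottom is hypothetical.

```python
# Illustrative gold-standard tree distances between AST leaf tokens.
# Assumption: distance = number of edges on the path through the lowest common ancestor.
from itertools import combinations

def path_to_root(node, parent):
    """Return the list of ancestors of `node` (inclusive), using a parent map."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def tree_distance(u, v, parent):
    """Number of edges between u and v via their lowest common ancestor."""
    pu, pv = path_to_root(u, parent), path_to_root(v, parent)
    depth_of = {n: depth for depth, n in enumerate(pu)}
    for depth_v, n in enumerate(pv):
        if n in depth_of:
            return depth_of[n] + depth_v
    raise ValueError("nodes are not in the same tree")

def gold_distance_matrix(leaves, parent):
    """Symmetric matrix of pairwise tree distances between leaf tokens."""
    n = len(leaves)
    dist = [[0] * n for _ in range(n)]
    for i, j in combinations(range(n), 2):
        dist[i][j] = dist[j][i] = tree_distance(leaves[i], leaves[j], parent)
    return dist

# Tiny hypothetical AST for `x = 1`: Assign -> (Name(x), Constant(1)).
parent = {"x": "Name", "1": "Constant", "Name": "Assign", "Constant": "Assign"}
print(gold_distance_matrix(["x", "1"], parent))  # [[0, 4], [4, 0]]
```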
390
+
391
+ ![](images/8f90161a918a9d488fe5c47b82db12797b14f848bf1f3f060b85d1fca3e09ae6.jpg)
392
+ Figure 9: The average Spearman correlation for CodeBERT and GraphCodeBERT in multiple programming languages.
393
+
394
+ Summary. Through embedding analysis, we can observe that the syntax structure of code has been well preserved in different hidden layers of the pre-trained language models (i.e., CodeBERT and GraphCodeBERT).
395
+
396
+ # 5.4 RQ3: Syntax Tree Induction
397
+
398
+ We investigate the extent to which pre-trained code models capture the syntax structure of code by inducing a tree.
399
+
400
+ Experimental Settings. For comparison, we introduce four traditional greedy top-down tree induction baselines, i.e., random, balanced, left-branching, and right-branching binary trees. Taking the random-based approach as an example, we recursively partition the code snippet at a randomly selected position. In addition, we also derive another baseline, CodeBERT-0, which is based on the word embeddings before they are fed into the Transformer layers. When injecting bias into the syntactic distance, we set the bias hyperparameter $\lambda$ to 1. Due to space limitations, we only report the F1 scores for six common intermediate nodes in the Python AST, i.e., Parameters, Attribute, Argument, List, Assignment, and Statement.
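The sketch below illustrates the greedy top-down induction procedure and the right-branching baseline mentioned above: given one syntactic-distance score per adjacent token pair, the snippet is recursively split at the position with the largest score. The `inject_right_skew_bias` helper shows one simple way a position-dependent term scaled by $\lambda$ could be added; it is only an illustration and may differ from the exact bias formulation we use, and the example tokens and scores are made up.

```python
# Illustrative greedy top-down binary tree induction from syntactic distances.
# `distances[i]` scores the gap between tokens i and i+1 (e.g., a JSD/HEL distance
# over attention distributions); a larger score means a more likely split point.
from typing import List, Union

Tree = Union[str, tuple]

def induce_tree(tokens: List[str], distances: List[float]) -> Tree:
    if len(tokens) == 1:
        return tokens[0]
    split = max(range(len(distances)), key=lambda i: distances[i])
    left = induce_tree(tokens[: split + 1], distances[:split])
    right = induce_tree(tokens[split + 1 :], distances[split + 1 :])
    return (left, right)

def right_branching(tokens: List[str]) -> Tree:
    """Right-branching baseline: always split after the first token."""
    return tokens[0] if len(tokens) == 1 else (tokens[0], right_branching(tokens[1:]))

def inject_right_skew_bias(distances: List[float], lam: float = 1.0) -> List[float]:
    """One simple right-skewness bias (illustrative only): add a linearly decaying
    term so that earlier split points are preferred, yielding right-skewed trees."""
    n = len(distances)
    return [d + lam * (n - i) / n for i, d in enumerate(distances)]

tokens = ["return", "self", ".", "postprocessor", "(", "images", ")"]
distances = [0.9, 0.2, 0.4, 0.7, 0.3, 0.5]           # hypothetical scores
print(induce_tree(tokens, inject_right_skew_bias(distances)))
print(right_branching(tokens))
```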
401
+
402
+ Experimental Results. Table 3 presents the results of various models for syntax tree induction on the test dataset. From this table, we can observe that the right-branching tree induction approach achieves the best performance among all the baselines, confirming our assumption that the AST tends to be right-skewed. When comparing the pre-trained code models (i.e., CodeBERT and GraphCodeBERT) with the other baselines, it is clear that the pre-trained code models significantly outperform the baselines, even without bias injection. These results show that the Transformer-based pre-trained models are able to capture the syntax structure of code to a certain extent through pre-training on a large-scale code corpus. When comparing the pre-trained models w/ and w/o bias injection, we can observe that injecting bias can increase the performance of syntax tree induction by up to $5\%$. This improvement indirectly shows that the current pre-trained code models do not capture well the right-skewness property of the AST. It is worth mentioning that the performance on assignment is reduced after injecting
403
+
404
+ ![](images/a5159df34a51dae89088e1762ea64f20ab458ef24b42f03d1e184226e212a083.jpg)
405
+ (a) Gold-standard
406
+
407
+ ![](images/ba6cb03271e9946731eba56f6cc659e3949b36b4138df2d135e5af720246c0ff.jpg)
408
+ Figure 10: The heatmaps of gold-standard distance and predicted distance based on pre-trained CodeBERT and GraphCodeBERT models in Python. (a) Gold-standard tree distance between all pairs of code tokens in AST. (b-d) The predicted distance based on the probing of CodeBERT and GraphCodeBERT. The darkness of color indicates the closeness of the paired words.
409
+
410
+ ![](images/57014b0d5f772e96165cc030322b32b5c7a527b07a6174c8dfbf9ea9cf31535b.jpg)
411
+ (b) CodeBERT-0 prediction
412
+
413
+ ![](images/98a6648c78e9529845932c47ffdd097b9e6ae82011864be2a82dbb0bd33cd6fa.jpg)
414
+ (c) CodeBERT-5 prediction
415
+ (d) GraphCodeBERT-5 prediction
416
+
417
+ Table 3: Results of syntax tree induction in Python. f: function of distance measurement, L: layer number, A: attention head number, AVG: the average of all attentions.
418
+
419
+ <table><tr><td>Model</td><td>f</td><td>L</td><td>A</td><td>F1</td><td>parameters</td><td>attribute</td><td>argument</td><td>list</td><td>assignment</td><td>statement</td></tr><tr><td colspan="11">Baselines</td></tr><tr><td>Random Trees</td><td>-</td><td>-</td><td>-</td><td>16.93</td><td>20.26%</td><td>30.75%</td><td>32.15%</td><td>24.34%</td><td>12.98%</td><td>16.03%</td></tr><tr><td>Balanced Trees</td><td>-</td><td>-</td><td>-</td><td>16.79</td><td>0.46%</td><td>32.60%</td><td>30.85%</td><td>25.54%</td><td>14.25%</td><td>15.90%</td></tr><tr><td>Left Branching Trees</td><td>-</td><td>-</td><td>-</td><td>18.49</td><td>23.25%</td><td>43.26%</td><td>50.99%</td><td>26.77%</td><td>5.78%</td><td>14.48%</td></tr><tr><td>Right Branching Trees</td><td>-</td><td>-</td><td>-</td><td>26.36</td><td>44.07%</td><td>43.19%</td><td>37.86%</td><td>34.18%</td><td>8.34%</td><td>22.74%</td></tr><tr><td>CodeBERT-0</td><td>-</td><td>-</td><td>-</td><td>19.13</td><td>11.67%</td><td>25.54%</td><td>53.85%</td><td>27.62%</td><td>18.68%</td><td>21.89%</td></tr><tr><td colspan="11">Pre-Trained Models (w/o bias)</td></tr><tr><td>CodeBERT</td><td>JSD</td><td>8</td><td>AVG</td><td>45.37</td><td>40.99%</td><td>66.65%</td><td>88.42%</td><td>56.90%</td><td>70.47%</td><td>66.10%</td></tr><tr><td>GraphCodeBERT</td><td>HEL</td><td>8</td><td>10</td><td>51.34</td><td>95.96%</td><td>75.50%</td><td>67.76%</td><td>80.87%</td><td>72.88%</td><td>63.98%</td></tr><tr><td colspan="11">Pre-Trained Models (w/bias λ = 1)</td></tr><tr><td>CodeBERT</td><td>HEL</td><td>9</td><td>AVG</td><td>50.18</td><td>67.37%</td><td>67.93%</td><td>76.84%</td><td>71.60%</td><td>56.12%</td><td>62.93%</td></tr><tr><td>GraphCodeBERT</td><td>HEL</td><td>9</td><td>AVG</td><td>54.80</td><td>74.68%</td><td>72.05%</td><td>84.18%</td><td>73.68%</td><td>76.10%</td><td>72.69%</td></tr></table>
420
+
421
+ the bias. One possible hypothesis is that although the AST shows a right-skewness trend as a whole, several subtrees (e.g., the subtree of an assignment) are not right-skewed.
422
+
423
+ Note that, in the experiments, we tried different distance functions (as shown in Table 1) to measure distances based on the attention and the contextual representations in each Transformer layer. Due to space limitations, in Table 3, we only present the best performance obtained when using different distance functions for each Transformer layer. We find that the JSD and HEL distance functions, which operate over attention distributions, perform better than those operating over contextual word representations. This shows that parsing trees from attention information is more effective than extracting them from the contextual representations of pre-trained code models.
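For concreteness, the sketch below shows how the two distance functions highlighted above, the Jensen-Shannon divergence (JSD) and the Hellinger distance (HEL), can be computed between the attention distributions of adjacent tokens; the resulting per-pair scores can then serve as the syntactic distances consumed by the induction procedure. Averaging over heads and the small epsilon are implementation assumptions.

```python
# Illustrative JSD and Hellinger distances between attention distributions.
import numpy as np

def jsd(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Jensen-Shannon divergence between two discrete distributions (natural log)."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.sum(a * np.log(a / b)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def hellinger(p: np.ndarray, q: np.ndarray) -> float:
    """Hellinger distance between two discrete distributions."""
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))

def adjacent_token_distances(attention: np.ndarray, measure=hellinger) -> list:
    """attention: (heads, seq_len, seq_len) attention of one layer.
    Returns one score per adjacent token pair, averaged over heads (an assumption)."""
    heads, seq_len, _ = attention.shape
    return [
        float(np.mean([measure(attention[h, i], attention[h, i + 1]) for h in range(heads)]))
        for i in range(seq_len - 1)
    ]
```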
424
+
425
+ In Figure 11, we also show a case study of a code snippet, with the trees induced by CodeBERT with and without the bias injected. From this figure, we can see that several motif structures have been captured by CodeBERT, e.g., return-images and self-postprocessor, which verifies the effectiveness of the syntax tree induction.
426
+
427
+ Summary. The syntax tree of code can be induced by the pre-trained language models for code, to some extent. In addition, extracting parse trees from attention information is more effective than extracting them from the contextual representations of pre-trained code models.
428
+
429
+ # 6 DISCUSSION
430
+
431
+ # 6.1 Observed Findings
432
+
433
+ Through a comprehensive analysis of pre-trained code models from three perspectives, we observe several insightful findings, which may inspire future study. From attention analysis, we find that a word's attention distribution can align with the AST. The attention aligns better with syntactic connections in deeper layers than in lower layers of the self-attention network. Moreover, we find that there exist position-based heads, which do not consider the content of the tokens. This suggests that removing these heads may not affect the final results, which would allow us to reduce the number of parameters of the pre-trained models. Then, we find that the pre-trained models embed syntactic information in their hidden layers: the syntactic distance between any pair of tokens is encoded as a global structural property of the vector space. Finally, we use a simple tree construction algorithm to induce a syntax tree from the pre-trained models. The results indicate that a pre-trained model such as CodeBERT is capable of perceiving syntactic information to a certain extent when trained on a large corpus. Our findings suggest that grammatical information can be learned by the pre-trained models, which could explain why a pre-trained model such as CodeBERT can achieve promising results in a variety of source-code-related downstream tasks such as code summarization, code search, and clone detection.
434
+
435
+ ![](images/9054fc0f9cb172f74a59a470f32a946d07250d780b32c4a4a0890bb211e5d7de.jpg)
436
+ Figure 11: A case study of syntax tree induction based on CodeBERT for a given Python code snippet.
437
+
438
+ # 6.2 Limitations and Future Work
439
+
440
+ One limitation of our work is that the adopted structural analysis approaches are based on the AST structure of code, which is only one aspect that may explain why the pre-trained models achieve good results for source code. In our future work, we will investigate how the pre-trained models learn other aspects of source code, such as code tokens, control-flow graphs (CFGs), and data-flow graphs (DFGs). Besides, in this paper, we only investigate two representative self-supervised pre-trained language models for source code, i.e., CodeBERT and GraphCodeBERT. It will be interesting to extend the analysis to other supervised learning models, as well as other deep neural networks (e.g., LSTM and CNN).
441
+
442
+ With regard to the design of the structural analysis approaches adopted in this paper, one limitation is that the structural probing on word embeddings we currently use is relatively simple. It would be interesting to develop a deep neural network to learn better mapping functions. Meanwhile, the tree construction algorithm we use is a relatively simple top-down recursive binary tree algorithm. The right-skewness bias we use was originally designed for constituency trees in natural languages (e.g., English) and could be better tailored to ASTs. Lastly, the AST structure is more complex than the induced tree, so there is still ample room for improvement in the grammar induction algorithm.
443
+
444
+ # 7 RELATED WORK
445
+
446
+ Recently, there has been much effort in interpreting the BERT models in the NLP community. At a high level, these interpretation approaches are developed from two perspectives: (1) interpreting the learned embeddings, and (2) investigating whether BERT can learn syntactic and semantic information of natural languages. To interpret the learned embeddings, Ethayarajh [10] studies whether contextual information is preserved in the word embeddings learned by pre-trained models, including BERT, ELMo, and GPT-2. Mickus et al. [26] systematically evaluate the pre-trained BERT as a distributional semantics model. Conneau et al. [6] and Liu et al. [23] design several probing tasks to investigate whether sentence embeddings can capture linguistic properties.
447
+
448
+ To investigate the syntax and semantic knowledge in BERT, Tenney et al. [38] develop a series of edge probing tasks to explore how the syntactic and semantic structure can be extracted from different layers of pre-trained BERT. Htut et al. [16] propose to extract implicit dependency relations from the attention weights
449
+
450
+ of each layer/head through two approaches: taking the maximum attention weight and computing the maximum spanning tree. Hewitt and Manning [14] propose a structural probing approach to investigate whether syntax information is preserved in word representations.
451
+
452
+ Specifically, there also exists another line of work on visualizing attention to investigate which parts of the feature space the model puts more focus on. Kovaleva et al. [21] study self-attention and conduct a qualitative and quantitative analysis of the information encoded by BERT's individual heads. Hoover et al. [15] introduce a tool, called exBERT, to help humans conduct flexible, interactive investigations and formulate hypotheses about the model's internal reasoning process. Following this line of research, this paper proposes to extend and adapt the interpretation techniques from the NLP community to understand and explain what feature correlations can be captured by a pre-trained code model in the embedding space.
453
+
454
+ # 8 CONCLUSION
455
+
456
+ In this paper, we have explored the interpretability of pre-trained language models for source code (e.g., CodeBERT, GraphCodeBERT). We conduct a thorough structural analysis from the following three aspects, aiming to give an interpretation of pre-trained code models. First, we analyze the self-attention weights and align them with the syntax structure. Second, we propose a structural probing approach to investigate whether the contextual representations in the Transformer capture the syntax structure of code. Third, we investigate whether the pre-trained code models have the capability of inducing the syntax tree without training. The analysis in this paper has revealed several interesting findings that can inspire future studies on code representation learning.
457
+
458
+ Artifacts. All the experimental data and source code used in this work will be integrated into the open-source toolkit NATURALCC [42], which is available at https://github.com/CGCL-codes/naturalcc.
459
+
460
+ # ACKNOWLEDGMENTS
461
+
462
+ This work is supported by the National Natural Science Foundation of China under grant No. 62102157. This work is also partially sponsored by the Tencent Rhino-Bird Focus Research Program of Basic Platform Technology. We would like to thank all the anonymous reviewers for their constructive comments on improving this paper.
463
+
464
+ # REFERENCES
465
+
466
+ [1] Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. 2018. code2seq: Generating Sequences from Structured Representations of Code. In Proceedings of International Conference on Learning Representations.
467
+ [2] Uri Alon, Meital Zilberstein, Omer Levy, and Eran Yahav. 2019. code2vec: Learning distributed representations of code. Proceedings of the ACM on Programming Languages 3, POPL (2019), 1-29.
468
+ [3] Nadezhda Chirkova and Sergey Troshin. 2021. Empirical study of transformers for source code. In Proceedings of 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 703-715.
469
+ [4] Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What Does BERT Look at? An Analysis of BERT's Attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. 276-286.
470
+ [5] Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. In Proceedings of International Conference on Learning Representations.
471
+ [6] Alexis Conneau, German Kruszewski, Guillaume Lample, Loic Barrault, and Marco Baroni. 2018. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2126-2136.
472
+ [7] Gregory W. Corder and Dale I. Foreman. 2014. Nonparametric statistics: A step-by-step approach. John Wiley & Sons.
473
+ [8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4171-4186.
474
+ [9] Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified Language Model Pretraining for Natural Language Understanding and Generation. In Proceedings of Advances in Neural Information Processing Systems. 13042-13054.
475
+ [10] Kawin Ethayarajh. 2019. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 55–65.
476
+ [11] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. In Proceedings of Findings of the Association for Computational Linguistics: EMNLP 2020. 1536-1547.
477
+ [12] Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In Proceedings of 40th International Conference on Software Engineering. 933-944.
478
+ [13] Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, Shujie Liu, Long Zhou, Nan Duan, Alexey Svyatkovskiy, Shengyu Fu, Michele Tufano, Shao Kun Deng, Colin B. Clement, Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, and Ming Zhou. 2021. GraphCodeBERT: Pre-training Code Representations with Data Flow. In Proceedings of 9th International Conference on Learning Representations.
479
+ [14] John Hewitt and Christopher D. Manning. 2019. A Structural Probe for Finding Syntax in Word Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 4129-4138.
480
+ [15] Benjamin Hoover, Hendrik Strobelt, and Sebastian Gehrmann. 2020. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformer Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 187-196.
481
+ [16] Phu Mon Htut, Jason Phang, Shikha Bordia, and Samuel R. Bowman. 2019. Do Attention Heads in BERT Track Syntactic Dependencies? CoRR abs/1911.12246 (2019).
482
+ [17] Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. CodeSearchNet Challenge: Evaluating the State of Semantic Code Search. CoRR abs/1909.09436 (2019).
483
+ [18] Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2020. Learning and Evaluating Contextual Embedding of Source Code. In Proceedings of the 37th International Conference on Machine Learning, Vol. 119. 5110-5121.
484
+ [19] Hong Jin Kang, Tegawende F. Bissyandé, and David Lo. 2019. Assessing the Generalizability of Code2vec Token Embeddings. In Proceedings of 34th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 1-12.
485
+ [20] Taeuk Kim, Jihun Choi, Daniel Edmiston, and Sang-goo Lee. 2020. Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction. In Proceedings of 8th International Conference on Learning Representations.
486
+ [21] Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the Dark Secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. 4364-4373.
487
+ [22] Lucien Le Cam and Grace Lo Yang. 2012. Asymptotics in statistics: some basic concepts. Springer Science & Business Media.
488
+
489
+ [23] Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic Knowledge and Transferability of Contextual Representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT. 1073-1094.
490
+ [24] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019).
491
+ [25] Christopher Manning and Hinrich Schutze. 1999. Foundations of statistical natural language processing. MIT press.
492
+ [26] Timothee Mickus, Denis Paperno, Mathieu Constant, and Kees van Deemter. 2019. What do you mean, BERT? Assessing BERT as a Distributional Semantics Model. CoRR abs/1911.05758 (2019).
493
+ [27] Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In 1st International Conference on Learning Representations, Workshop Track Proceedings.
494
+ [28] Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of Advances in Neural Information Processing Systems. 3111-3119.
495
+ [29] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2227-2237.
496
+ [30] Veselin Raychev, Martin Vechev, and Eran Yahav. 2014. Code completion with statistical language models. In Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation. 419-428.
497
+ [31] Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works. Transactions of the Association for Computational Linguistics 8 (2020), 842-866.
498
+ [32] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
499
+ [33] Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron C. Courville, and Yoshua Bengio. 2018. Straight to the Tree: Constituency Parsing with Neural Syntactic Distance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 1171-1180.
500
+ [34] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked Sequence to Sequence Pre-training for Language Generation. In Proceedings of the 36th International Conference on Machine Learning, Vol. 97. PMLR, 5926-5936.
501
+ [35] Yulei Sui, Xiao Cheng, Guanqin Zhang, and Haoyu Wang. 2020. Flow2vec: Valueflow-based precise code embedding. Proceedings of the ACM on Programming Languages 4, OOPSLA (2020), 1-27.
502
+ [36] Yu Sun, Shuohuan Wang, Yu-Kun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced Representation through Knowledge Integration. CoRR abs/1904.09223 (2019).
503
+ [37] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of Advances in neural information processing systems. 3104-3112.
504
+ [38] Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT Rediscovers the Classical NLP Pipeline. In Proceedings of the 57th Conference of the Association for Computational Linguistics. 4593-4601.
505
+ [39] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of Advances in neural information processing systems. 5998-6008.
506
+ [40] Jesse Vig and Yonatan Belinkov. 2019. Analyzing the Structure of Attention in a Transformer Language Model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. 63-76.
507
+ [41] Jesse Vig, Ali Madani, Lav R. Varshney, Caiming Xiong, Richard Socher, and Nazneen Fatema Rajani. 2021. BERTology Meets Biology: Interpreting Attention in Protein Language Models. In Proceedings of 9th International Conference on Learning Representations.
508
+ [42] Yao Wan, Yang He, Zhangqian Bi, Jianguo Zhang, Yulei Sui, Hongyu Zhang, Kazuma Hashimoto, Hai Jin, Guandong Xu, Caiming Xiong, and Philip S. Yu. 2022. NaturalCC: An Open-Source Toolkit for Code Intelligence. In Proceedings of 44th International Conference on Software Engineering, Companion Volume. ACM.
509
+ [43] Yao Wan, Jingdong Shu, Yulei Sui, Guandong Xu, Zhou Zhao, Jian Wu, and Philip S. Yu. 2019. Multi-modal Attention Network Learning for Semantic Source Code Retrieval. In Proceedings of 34th IEEE/ACM International Conference on Automated Software Engineering. IEEE, 13-25.
510
+ [44] Yao Wan, Zhou Zhao, Min Yang, Guandong Xu, Haochao Ying, Jian Wu, and Philip S. Yu. 2018. Improving automatic source code summarization via deep reinforcement learning. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ACM, 397-407.
511
+ [45] Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, and Xudong Liu. 2020. Retrieval-based neural source code summarization. In Proceedings of 42nd International Conference on Software Engineering. ACM, 1385-1397.
2202.06xxx/2202.06840/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c334a5679fd0b3b8401cbde334628ed06d695cb223cff0e51dcd1709aabcf907
3
+ size 765953
2202.06xxx/2202.06840/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06856/1d6db6ce-c1db-4f5a-bdc1-04e8ab208327_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06856/1d6db6ce-c1db-4f5a-bdc1-04e8ab208327_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06856/1d6db6ce-c1db-4f5a-bdc1-04e8ab208327_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:50c34132f033cf8bdcea2b5cd2e079469e860cdda3c55024ccc0dbcdceef316f
3
+ size 741587
2202.06xxx/2202.06856/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06856/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b292d1e1c420120a0ecaf896b08fc5e47dc83ed2f4da84e98df7152eb2f8448
3
+ size 1073847
2202.06xxx/2202.06856/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06861/dce406ec-fdca-4c53-83c1-98a2ac664d0a_content_list.json ADDED
@@ -0,0 +1,1038 @@
 
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 176,
8
+ 125,
9
+ 818,
10
+ 170
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Anna Hedström<sup>1,†</sup>",
17
+ "bbox": [
18
+ 142,
19
+ 190,
20
+ 307,
21
+ 208
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "Leander Weber<sup>3</sup>",
28
+ "bbox": [
29
+ 142,
30
+ 213,
31
+ 290,
32
+ 229
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "Dilyara Bareeva",
39
+ "bbox": [
40
+ 143,
41
+ 234,
42
+ 295,
43
+ 253
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Daniel Krakowczyk<sup>4</sup>",
50
+ "bbox": [
51
+ 143,
52
+ 258,
53
+ 326,
54
+ 276
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "Franz Motzkus<sup>3</sup>",
61
+ "bbox": [
62
+ 143,
63
+ 281,
64
+ 287,
65
+ 297
66
+ ],
67
+ "page_idx": 0
68
+ },
69
+ {
70
+ "type": "text",
71
+ "text": "Wojciech Samek $^{2,3,5}$",
72
+ "bbox": [
73
+ 143,
74
+ 301,
75
+ 320,
76
+ 321
77
+ ],
78
+ "page_idx": 0
79
+ },
80
+ {
81
+ "type": "text",
82
+ "text": "Sebastian Lapuschkin $^{3,\\dagger}$",
83
+ "bbox": [
84
+ 143,
85
+ 325,
86
+ 356,
87
+ 343
88
+ ],
89
+ "page_idx": 0
90
+ },
91
+ {
92
+ "type": "text",
93
+ "text": "ANNA.HEDSTROEM@TU-BERLIN.DE",
94
+ "bbox": [
95
+ 596,
96
+ 194,
97
+ 846,
98
+ 208
99
+ ],
100
+ "page_idx": 0
101
+ },
102
+ {
103
+ "type": "text",
104
+ "text": "LEANDER.WEBER@HHI.FRAUNHOFER.DE",
105
+ "bbox": [
106
+ 557,
107
+ 215,
108
+ 846,
109
+ 229
110
+ ],
111
+ "page_idx": 0
112
+ },
113
+ {
114
+ "type": "text",
115
+ "text": "DILYARA.BAREEVA@CAMPUS.TU-BERLIN.DE",
116
+ "bbox": [
117
+ 534,
118
+ 239,
119
+ 846,
120
+ 252
121
+ ],
122
+ "page_idx": 0
123
+ },
124
+ {
125
+ "type": "text",
126
+ "text": "DANIEL.KRAKOWCZYK@UNI-POTSDAM.DE",
127
+ "bbox": [
128
+ 547,
129
+ 262,
130
+ 846,
131
+ 273
132
+ ],
133
+ "page_idx": 0
134
+ },
135
+ {
136
+ "type": "text",
137
+ "text": "FRANZ.MOTZKUS@HHI.FRAUNHOFER.DE",
138
+ "bbox": [
139
+ 560,
140
+ 284,
141
+ 846,
142
+ 297
143
+ ],
144
+ "page_idx": 0
145
+ },
146
+ {
147
+ "type": "text",
148
+ "text": "WOJCIECH.SAMEK@HHI.FRAUNHOFER.DE",
149
+ "bbox": [
150
+ 550,
151
+ 306,
152
+ 846,
153
+ 319
154
+ ],
155
+ "page_idx": 0
156
+ },
157
+ {
158
+ "type": "text",
159
+ "text": "SEBASTIAN.LAPUSCHKIN@HHI.FRAUNHOFER.DE",
160
+ "bbox": [
161
+ 506,
162
+ 328,
163
+ 846,
164
+ 340
165
+ ],
166
+ "page_idx": 0
167
+ },
168
+ {
169
+ "type": "text",
170
+ "text": "Marina M.-C. Hohne $^{1,5,\\dagger}$",
171
+ "bbox": [
172
+ 142,
173
+ 349,
174
+ 362,
175
+ 364
176
+ ],
177
+ "page_idx": 0
178
+ },
179
+ {
180
+ "type": "text",
181
+ "text": "MARINA.HOEHNE@TU-BERLIN.DE",
182
+ "bbox": [
183
+ 612,
184
+ 353,
185
+ 852,
186
+ 366
187
+ ],
188
+ "page_idx": 0
189
+ },
190
+ {
191
+ "type": "list",
192
+ "sub_type": "text",
193
+ "list_items": [
194
+ "<sup>1</sup> Understandable Machine Intelligence Lab, TU Berlin, 10587 Berlin, Germany",
195
+ "$^{2}$ Department of Electrical Engineering and Computer Science, TU Berlin, 10587 Berlin, Germany",
196
+ "$^{3}$ Department of Artificial Intelligence, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany",
197
+ "$^{4}$ Department of Computer Science, University of Potsdam, 14476 Potsdam, Germany",
198
+ "5 BIFOLD - Berlin Institute for the Foundations of Learning and Data, 10587 Berlin, Germany",
199
+ "† corresponding authors"
200
+ ],
201
+ "bbox": [
202
+ 143,
203
+ 368,
204
+ 854,
205
+ 470
206
+ ],
207
+ "page_idx": 0
208
+ },
209
+ {
210
+ "type": "text",
211
+ "text": "Editor: Joaquin Vanschoren",
212
+ "bbox": [
213
+ 142,
214
+ 498,
215
+ 352,
216
+ 513
217
+ ],
218
+ "page_idx": 0
219
+ },
220
+ {
221
+ "type": "text",
222
+ "text": "Abstract",
223
+ "text_level": 1,
224
+ "bbox": [
225
+ 454,
226
+ 544,
227
+ 542,
228
+ 559
229
+ ],
230
+ "page_idx": 0
231
+ },
232
+ {
233
+ "type": "text",
234
+ "text": "The evaluation of explanation methods is a research topic that has not yet been explored deeply, however, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, no tool with focus on XAI evaluation exists that exhaustively and speedily allows researchers to evaluate the performance of explanations of neural network predictions. To increase transparency and reproducibility in the field, we therefore built Quantus—a comprehensive, evaluation toolkit in Python that includes a growing, well-organised collection of evaluation metrics and tutorials for evaluating explainable methods. The toolkit has been thoroughly tested and is available under an open-source license on PyPi (or on https://github.com/understandable-machine-intelligence-lab/Quantus/).",
235
+ "bbox": [
236
+ 171,
237
+ 566,
238
+ 823,
239
+ 718
240
+ ],
241
+ "page_idx": 0
242
+ },
243
+ {
244
+ "type": "text",
245
+ "text": "Keywords: explainability, responsible AI, reproducibility, open source, Python",
246
+ "bbox": [
247
+ 173,
248
+ 720,
249
+ 754,
250
+ 736
251
+ ],
252
+ "page_idx": 0
253
+ },
254
+ {
255
+ "type": "text",
256
+ "text": "1. Introduction",
257
+ "text_level": 1,
258
+ "bbox": [
259
+ 142,
260
+ 760,
261
+ 294,
262
+ 776
263
+ ],
264
+ "page_idx": 0
265
+ },
266
+ {
267
+ "type": "text",
268
+ "text": "Despite much excitement and activity in the field of eXplainable artificial intelligence (XAI) (Montavon et al., 2018; Arya et al., 2019; Lapuschkin et al., 2019; Samek et al., 2021; Bykov et al., 2021b), the evaluation of explainable methods still remains an unsolved problem (Samek et al., 2017; Adebayo et al., 2020; Holzinger et al., 2020; Yona and Greenfeld, 2021; Arras et al., 2022). Unlike in traditional machine learning (ML), the task of explaining generally lacks \"ground-truth\" data. There exists no universally accepted definition of what",
269
+ "bbox": [
270
+ 138,
271
+ 787,
272
+ 857,
273
+ 891
274
+ ],
275
+ "page_idx": 0
276
+ },
277
+ {
278
+ "type": "header",
279
+ "text": "Journal of Machine Learning Research 24 (2023) 1-11",
280
+ "bbox": [
281
+ 142,
282
+ 49,
283
+ 470,
284
+ 63
285
+ ],
286
+ "page_idx": 0
287
+ },
288
+ {
289
+ "type": "header",
290
+ "text": "Submitted 2/22; Revised 11/22; Published 1/23",
291
+ "bbox": [
292
+ 558,
293
+ 49,
294
+ 854,
295
+ 64
296
+ ],
297
+ "page_idx": 0
298
+ },
299
+ {
300
+ "type": "aside_text",
301
+ "text": "arXiv:2202.06861v3 [cs.LG] 27 Apr 2023",
302
+ "bbox": [
303
+ 22,
304
+ 263,
305
+ 60,
306
+ 708
307
+ ],
308
+ "page_idx": 0
309
+ },
310
+ {
311
+ "type": "footer",
312
+ "text": "$\\langle \\widehat{\\mathbb{C}}\\rangle$ 2023 Anna Hedstrom, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, and Marina M.-C. Hohne.",
313
+ "bbox": [
314
+ 138,
315
+ 912,
316
+ 830,
317
+ 939
318
+ ],
319
+ "page_idx": 0
320
+ },
321
+ {
322
+ "type": "footer",
323
+ "text": "License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v24/22-0142.html.",
324
+ "bbox": [
325
+ 140,
326
+ 944,
327
+ 844,
328
+ 969
329
+ ],
330
+ "page_idx": 0
331
+ },
332
+ {
333
+ "type": "text",
334
+ "text": "a \"correct\" explanation is, or what properties an explanation should fulfil (Yang and Kim, 2019). Due to this lack of standardised evaluation procedures in XAI, researchers frequently conceive new ways to experimentally examine explanation methods (Bach et al., 2015; Samek et al., 2017; Adebayo et al., 2018; Yang and Kim, 2019; Kindermans et al., 2019), oftentimes employing different parameterisations and various kinds of preprocessing and normalisations, each leading to different or even contrasting results, making evaluation outcomes difficult to interpret and compare. Critically, we note that it is common for XAI papers to base their conclusions on one-sided, sometimes methodologically questionable evaluation procedures, which we fear may hinder access to the current State-of-the-art (SOTA) in XAI and potentially hurt the perceived credibility of the field over time.",
335
+ "bbox": [
336
+ 138,
337
+ 114,
338
+ 854,
339
+ 287
340
+ ],
341
+ "page_idx": 1
342
+ },
343
+ {
344
+ "type": "text",
345
+ "text": "For these reasons, researchers often rely on a qualitative evaluation of explanation methods (e.g., Zeiler and Fergus (2014); Ribeiro et al. (2016); Shrikumar et al. (2017)). Although qualitative evaluation of XAI methods is an important and complementary type of evaluation analysis (Hoffman et al., 2018), the assumption that humans are able to recognise a correct explanation comes with a series of pitfalls: not only does the notion of an \"accurate\" explanation often depend on the specifics of the task at hand, humans are also questionable judges of quality (Wang et al., 2019; Rosenfeld, 2021). In addition, recent studies suggest that even quantitative evaluation of explainable methods is far from fault-proof (Bansal et al., 2020; Budding et al., 2021; Yona and Greenfeld, 2021; Hase and Bansal, 2020). In response to these issues, we developed Quantus, to provide the community with a versatile and comprehensive toolkit that collects, organises, and explains a wide range of evaluation metrics proposed for explanation methods. The library is designed to help automate the process of XAI quantification—by delivering speedy, easily digestible, and at the same time holistic summaries of the quality of the given explanations. As we see it, Quantus concludes an important, still missing contribution in today's XAI research by filling the gap between what the community produces and what it currently needs: a more quantitative, systematic and standardised evaluation of explanation methods.",
346
+ "bbox": [
347
+ 143,
348
+ 291,
349
+ 854,
350
+ 582
351
+ ],
352
+ "page_idx": 1
353
+ },
354
+ {
355
+ "type": "text",
356
+ "text": "2. Toolkit Overview",
357
+ "text_level": 1,
358
+ "bbox": [
359
+ 140,
360
+ 609,
361
+ 339,
362
+ 626
363
+ ],
364
+ "page_idx": 1
365
+ },
366
+ {
367
+ "type": "text",
368
+ "text": "Quantus provides its intended users—practitioners and researchers interested in the domains of ML and XAI—with a steadily expanding list of $30+$ reference metrics to evaluate explanations of ML predictions. Moreover, it offers comprehensive guidance on how to use these metrics, including information about potential pitfalls in their application.",
369
+ "bbox": [
370
+ 140,
371
+ 642,
372
+ 854,
373
+ 710
374
+ ],
375
+ "page_idx": 1
376
+ },
377
+ {
378
+ "type": "table",
379
+ "img_path": "images/993a64a4c5672be3492cbbc985b6dbb328ce78242a940e6295878a591bafa7dc.jpg",
380
+ "table_caption": [
381
+ "Table 1: Comparison of four XAI libraries—(AIX360 (Arya et al., 2019), captum (Kokhlikyan et al., 2020), TorchRay (Fong et al., 2019) and Quantus) in terms of the number of XAI evaluation methods for six different evaluation categories, as implemented in each library."
382
+ ],
383
+ "table_footnote": [],
384
+ "table_body": "<table><tr><td>Library</td><td>Faithfulness</td><td>Robustness</td><td>Localisation</td><td>Complexity</td><td>Axiomatic</td><td>Randomisation</td></tr><tr><td>Captum (2)</td><td>1</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>AIX360 (2)</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>TorchRay (1)</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Quantus (27)</td><td>9</td><td>4</td><td>6</td><td>3</td><td>3</td><td>2</td></tr></table>",
385
+ "bbox": [
386
+ 148,
387
+ 800,
388
+ 849,
389
+ 887
390
+ ],
391
+ "page_idx": 1
392
+ },
393
+ {
394
+ "type": "header",
395
+ "text": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyk, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE",
396
+ "bbox": [
397
+ 145,
398
+ 47,
399
+ 844,
400
+ 61
401
+ ],
402
+ "page_idx": 1
403
+ },
404
+ {
405
+ "type": "page_number",
406
+ "text": "2",
407
+ "bbox": [
408
+ 493,
409
+ 914,
410
+ 504,
411
+ 926
412
+ ],
413
+ "page_idx": 1
414
+ },
415
+ {
416
+ "type": "image",
417
+ "img_path": "images/8d7cdf8554e7751c84958329052f46e227a3ed80d8345f5c568c18cf015211d1.jpg",
418
+ "image_caption": [
419
+ "a)"
420
+ ],
421
+ "image_footnote": [],
422
+ "bbox": [
423
+ 240,
424
+ 138,
425
+ 759,
426
+ 224
427
+ ],
428
+ "page_idx": 2
429
+ },
430
+ {
431
+ "type": "image",
432
+ "img_path": "images/905ef8a3a19233b87284d61a81725e4cf7904061a7483eb9df4e41c55a8bce16.jpg",
433
+ "image_caption": [
434
+ "b)",
435
+ "c)"
436
+ ],
437
+ "image_footnote": [],
438
+ "bbox": [
439
+ 251,
440
+ 247,
441
+ 488,
442
+ 391
443
+ ],
444
+ "page_idx": 2
445
+ },
446
+ {
447
+ "type": "image",
448
+ "img_path": "images/ac7d79f53f4794b8e0695c19de143a24f390f1fa573d7eb3a3e3b02c50b14651.jpg",
449
+ "image_caption": [
450
+ "Figure 1: a) Simple qualitative comparison of XAI methods is often not sufficient to distinguish which gradient-based method—Saliency (Mørch et al., 1995; Baehrens et al., 2010), Integrated Gradients (Sundararajan et al., 2017), GradientShap (Lundberg and Lee, 2017) or FusionGrad (Bykov et al., 2021a) is preferred. With Quantus, we can obtain richer insights on how the methods compare $b$ ) by holistic quantification on several evaluation criteria and $c$ ) by providing sensitivity analysis of how a single parameter, e.g., pixel replacement strategy of a faithfulness test influences the ranking of explanation methods."
451
+ ],
452
+ "image_footnote": [],
453
+ "bbox": [
454
+ 503,
455
+ 244,
456
+ 743,
457
+ 401
458
+ ],
459
+ "page_idx": 2
460
+ },
461
+ {
462
+ "type": "text",
463
+ "text": "The library is thoroughly documented and includes tutorials covering multiple use-cases, data domains and tasks—from comparative analysis of XAI methods and attributions, to quantifying the extent evaluation outcomes are dependent on metrics' parameterisations. In Figure 1, we demonstrate some example analysis using ImageNet dataset (Russakovsky et al., 2015) that can be produced with $\\mathbf{Quantus}^1$ . The library provides an abstract layer between APIs of deep learning frameworks, e.g., PyTorch (Paszke et al., 2019) and tensorflow (Abadi et al., 2016) and can be employed iteratively both during and after model training. Code quality is ensured by thorough testing, using pytest and continuous integration (CI), where every new contribution is automatically checked for sufficient test coverage. We employ syntax formatting with flake8, mypy and black under various Python versions.",
464
+ "bbox": [
465
+ 140,
466
+ 551,
467
+ 854,
468
+ 720
469
+ ],
470
+ "page_idx": 2
471
+ },
472
+ {
473
+ "type": "text",
474
+ "text": "Unlike other XAI-related libraries $^2$ , Quantus has its primary focus on evaluation and as such, supports a breadth of metrics, spanning various evaluation categories (see Table 1). A detailed description of the different evaluation categories can be found in the Appendix. The first iterations of the library mainly focus on attribution-based explanation techniques $^3$ for",
475
+ "bbox": [
476
+ 140,
477
+ 723,
478
+ 854,
479
+ 792
480
+ ],
481
+ "page_idx": 2
482
+ },
483
+ {
484
+ "type": "header",
485
+ "text": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS",
486
+ "bbox": [
487
+ 259,
488
+ 47,
489
+ 733,
490
+ 64
491
+ ],
492
+ "page_idx": 2
493
+ },
494
+ {
495
+ "type": "page_footnote",
496
+ "text": "1. The full experiment can be reproduced (and obtained) at the repository, under the \\tutorials folder.",
497
+ "bbox": [
498
+ 145,
499
+ 806,
500
+ 846,
501
+ 820
502
+ ],
503
+ "page_idx": 2
504
+ },
505
+ {
506
+ "type": "page_footnote",
507
+ "text": "2. Related libraries were selected with respect to the XAI evaluation capabilities. Packages including no metrics for evaluation of explanation methods, e.g., Alibi (Klaise et al., 2021), iNNvestigate (Alber et al., 2019), dalex (Baniecki et al., 2021) and zennit (Anders et al., 2021) were excluded.",
508
+ "bbox": [
509
+ 148,
510
+ 821,
511
+ 852,
512
+ 862
513
+ ],
514
+ "page_idx": 2
515
+ },
516
+ {
517
+ "type": "page_footnote",
518
+ "text": "3. This category of explainable methods aims to assign an importance value to the model features and arguably, is the most studied group of explanations.",
519
+ "bbox": [
520
+ 148,
521
+ 862,
522
+ 852,
523
+ 888
524
+ ],
525
+ "page_idx": 2
526
+ },
527
+ {
528
+ "type": "page_number",
529
+ "text": "3",
530
+ "bbox": [
531
+ 493,
532
+ 914,
533
+ 504,
534
+ 926
535
+ ],
536
+ "page_idx": 2
537
+ },
538
+ {
539
+ "type": "text",
540
+ "text": "(but not limited to) image classification. In planned future releases, we are working towards extending the applicability of the library further, e.g., by developing additional metrics and functionality that will enable users to perform checks, verifications and sensitivity analyses on top of the metrics.",
541
+ "bbox": [
542
+ 138,
543
+ 114,
544
+ 854,
545
+ 183
546
+ ],
547
+ "page_idx": 3
548
+ },
549
+ {
550
+ "type": "text",
551
+ "text": "3. Library Design",
552
+ "text_level": 1,
553
+ "bbox": [
554
+ 140,
555
+ 205,
556
+ 320,
557
+ 224
558
+ ],
559
+ "page_idx": 3
560
+ },
561
+ {
562
+ "type": "text",
563
+ "text": "The user-facing API of Quantus is designed with the aim of replacing an oftentimes lengthy and open-ended evaluation procedure with structure and speed—with a single line of code, the user can gain quantitative insights of how their explanations are behaving under various criteria. In the following code snippet, we demonstrate one way for how Quantus can be used to evaluate pre-computed explanations via a PixelFlipping experiment (Bach et al., 2015). In this example, we assume to have a pre-trained model (model), a batch of input and output pairs (x_batch, y_batch) and a set of attributions (a_batch).",
564
+ "bbox": [
565
+ 138,
566
+ 233,
567
+ 854,
568
+ 353
569
+ ],
570
+ "page_idx": 3
571
+ },
572
+ {
573
+ "type": "code",
574
+ "sub_type": "code",
575
+ "code_caption": [],
576
+ "code_body": "import quantus \npixelflipping = quantus.PixelFlipping(perturb_base $\\equiv$ \"black\", abs=True) scores $=$ pixelflipping(model, x_batch, y_batch, a_batch, **params) \npixelflipping.plot(y_batch=y_batch, scores=scores)",
577
+ "guess_lang": "python",
578
+ "bbox": [
579
+ 142,
580
+ 369,
581
+ 821,
582
+ 426
583
+ ],
584
+ "page_idx": 3
585
+ },
586
+ {
587
+ "type": "text",
588
+ "text": "Needless to say, XAI evaluation is intrinsically difficult and there is no one-size-fits-all metric for all tasks. Evaluation of explanations must, therefore, be understood and calibrated from its context: the application, data, model, and intended stakeholders (Chander and Srinivasan, 2018; Arras et al., 2022). To this end, we designed Quantus to be highly customisable and easily extendable—API documentation and examples on how to create new metrics as well as how to customise existing ones are included. Thanks to the API, any supporting functions of the evaluation procedure, e.g., perturb_baseline that determines the value that the input features should be iteratively masked with, can flexibly be replaced by a user-specified function to ensure that the evaluation procedure is appropriately contextualised.",
589
+ "bbox": [
590
+ 138,
591
+ 446,
592
+ 854,
593
+ 599
594
+ ],
595
+ "page_idx": 3
596
+ },
597
+ {
598
+ "type": "text",
599
+ "text": "It is practically well-known but not yet publicly recognised that evaluation outcomes of explanations can be highly sensitive to the parameterisation of metrics (Bansal et al., 2020; Agarwal and Nguyen, 2020) and other confounding factors introduced in the evaluation procedure (Hase et al., 2021; Yona and Greenfeld, 2021). Therefore, to encourage a thoughtful and responsible selection and parameterisation of metrics, we added mechanisms such as warnings, checks and user guidelines, cautioning users to reflect upon their choices.",
600
+ "bbox": [
601
+ 138,
602
+ 601,
603
+ 854,
604
+ 704
605
+ ],
606
+ "page_idx": 3
607
+ },
608
+ {
609
+ "type": "text",
610
+ "text": "4. Broader Impact",
611
+ "text_level": 1,
612
+ "bbox": [
613
+ 140,
614
+ 727,
615
+ 326,
616
+ 744
617
+ ],
618
+ "page_idx": 3
619
+ },
620
+ {
621
+ "type": "text",
622
+ "text": "We built Quantus to raise the bar of XAI quantification—to substitute an ad-hoc and sometimes ineffective evaluation procedure with reproducibility, simplicity and transparency. From our perspective, Quantus contributes to the XAI development by helping researchers to speed up the development and application of explanation methods, dissolve existing ambiguities and enable more comparability. As we see it, steering efforts towards increasing objectiveness of evaluations and reproducibility in the field will prove rewarding for the community as a whole. We are convinced that a holistic, multidimensional take on XAI quantification will be imperative to the general success of (X)AI over time.",
623
+ "bbox": [
624
+ 138,
625
+ 753,
626
+ 854,
627
+ 891
628
+ ],
629
+ "page_idx": 3
630
+ },
631
+ {
632
+ "type": "header",
633
+ "text": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyk, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE",
634
+ "bbox": [
635
+ 145,
636
+ 47,
637
+ 844,
638
+ 61
639
+ ],
640
+ "page_idx": 3
641
+ },
642
+ {
643
+ "type": "page_number",
644
+ "text": "4",
645
+ "bbox": [
646
+ 493,
647
+ 914,
648
+ 504,
649
+ 926
650
+ ],
651
+ "page_idx": 3
652
+ },
653
+ {
654
+ "type": "text",
655
+ "text": "Acknowledgments and Disclosure of Funding",
656
+ "text_level": 1,
657
+ "bbox": [
658
+ 140,
659
+ 114,
660
+ 581,
661
+ 135
662
+ ],
663
+ "page_idx": 4
664
+ },
665
+ {
666
+ "type": "text",
667
+ "text": "This work was partly funded by the German Federal Ministry for Education and Research through project Explaining 4.0 (ref. 01IS20055), BIFOLD (ref. 01IS18025A and ref. 01IS18037A), AEye (ref. 01IS20043), the Investitionsbank Berlin through BerDiBA (grant no. 10174498), as well as the European Union's Horizon 2020 programme through iToBoS (grant no. 965221).",
668
+ "bbox": [
669
+ 138,
670
+ 150,
671
+ 857,
672
+ 237
673
+ ],
674
+ "page_idx": 4
675
+ },
676
+ {
677
+ "type": "text",
678
+ "text": "Appendix",
679
+ "text_level": 1,
680
+ "bbox": [
681
+ 142,
682
+ 257,
683
+ 243,
684
+ 275
685
+ ],
686
+ "page_idx": 4
687
+ },
688
+ {
689
+ "type": "text",
690
+ "text": "In most explainability contexts, ground-truth explanations are not available (Samek et al., 2017; Adebayo et al., 2020; Holzinger et al., 2020; Yona and Greenfeld, 2021; Arras et al., 2022), which makes the task of evaluating explanations non-trivial. Efforts on evaluating explanations have therefore been invested diversely. For better organisation, in the source code of Quantus, we therefore grouped the metrics into six categories based on their logical similarity—(a) faithfulness, (b) robustness, (c) localisation, (d) complexity, (e) randomisation and (f) axiomatic metrics.",
691
+ "bbox": [
692
+ 138,
693
+ 284,
694
+ 856,
695
+ 404
696
+ ],
697
+ "page_idx": 4
698
+ },
699
+ {
700
+ "type": "text",
701
+ "text": "In the following, we describe each of the categories briefly. A more in-depth description of each category, including an account of the underlying metrics, is documented in the repository. The direction of the arrow indicates whether higher or lower values are considered better (exceptions within each category exist, so please carefully read the docstrings of each individual metric prior to usage and/or interpretation).",
702
+ "bbox": [
703
+ 140,
704
+ 404,
705
+ 854,
706
+ 491
707
+ ],
708
+ "page_idx": 4
709
+ },
710
+ {
711
+ "type": "list",
712
+ "sub_type": "text",
713
+ "list_items": [
714
+ "(a) Faithfulness $(\\uparrow)$ quantifies to what extent explanations follow the predictive behaviour of the model, asserting that more important features affect model decisions more strongly (Bhatt et al., 2020; Alvarez-Melis and Jaakkola, 2018; Arya et al., 2019; Nguyen and Martínez, 2020; Bach et al., 2015; Samek et al., 2017; Montavon et al., 2018; Ancona et al., 2018; Rieger and Hansen, 2020; Yeh et al., 2019; Rong et al., 2022; Dasgupta et al., 2022)",
715
+ "(b) Robustness $(\\downarrow)$ measures to what extent explanations are stable when subject to slight perturbations in the input, assuming that the model output approximately stayed the same (Yeh et al., 2019; Montavon et al., 2018; Alvarez-Melis and Jaakkola, 2018; Dasgupta et al., 2022)",
716
+ "(c) Localisation $(\\uparrow)$ tests if the explainable evidence is centred around a region of interest, which may be defined around an object by a bounding box, a segmentation mask or a cell within a grid (Zhang et al., 2018; Theiner et al., 2022; Kohlbrenner et al., 2020; Arras et al., 2022; Rong et al., 2022; Arias-Duart et al., 2021)",
717
+ "(d) Complexity $(\\downarrow)$ captures to what extent explanations are concise, i.e., that few features are used to explain a model prediction (Chalasani et al., 2020; Bhatt et al., 2020; Nguyen and Martínez, 2020)",
718
+ "(e) Randomisation $(\\uparrow)$ tests to what extent explanations deteriorate as the data labels or the model, e.g., its parameters are increasingly randomised (Adebayo et al., 2018; Sixt et al., 2020)"
719
+ ],
720
+ "bbox": [
721
+ 153,
722
+ 503,
723
+ 854,
724
+ 891
725
+ ],
726
+ "page_idx": 4
727
+ },
728
+ {
729
+ "type": "header",
730
+ "text": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS",
731
+ "bbox": [
732
+ 258,
733
+ 47,
734
+ 733,
735
+ 63
736
+ ],
737
+ "page_idx": 4
738
+ },
739
+ {
740
+ "type": "page_number",
741
+ "text": "5",
742
+ "bbox": [
743
+ 493,
744
+ 914,
745
+ 504,
746
+ 926
747
+ ],
748
+ "page_idx": 4
749
+ },
750
+ {
751
+ "type": "ref_text",
752
+ "text": "(f) Axiomatic $(\\uparrow)$ measures if explanations fulfill certain axiomatic properties (Kindermans et al., 2019; Sundararajan et al., 2017; Nguyen and Martínez, 2020)",
753
+ "bbox": [
754
+ 156,
755
+ 114,
756
+ 854,
757
+ 148
758
+ ],
759
+ "page_idx": 5
760
+ },
761
+ {
762
+ "type": "text",
763
+ "text": "References",
764
+ "text_level": 1,
765
+ "bbox": [
766
+ 142,
767
+ 170,
768
+ 250,
769
+ 186
770
+ ],
771
+ "page_idx": 5
772
+ },
773
+ {
774
+ "type": "list",
775
+ "sub_type": "ref_text",
776
+ "list_items": [
777
+ "Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaogiang Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems, 2016.",
778
+ "Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 9525-9536, 2018.",
779
+ "Julius Adebayo, Michael Muelly, Ilaria Liccardi, and Been Kim. Debugging tests for model explanations. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.",
780
+ "Chirag Agarwal and Anh Nguyen. Explaining image classifiers by removing input features using generative models. In Hiroshi Ishikawa, Cheng-Lin Liu, Tomás Pajdla, and Jianbo Shi, editors, Computer Vision - ACCV 2020 - 15th Asian Conference on Computer Vision, Kyoto, Japan, November 30 - December 4, 2020, Revised Selected Papers, Part VI, volume 12627 of Lecture Notes in Computer Science, pages 101-118. Springer, 2020.",
781
+ "Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, and Pieter-Jan Kindermans. Investigate neural networks! J. Mach. Learn. Res., 20:93:1-93:8, 2019.",
782
+ "David Alvarez-Melis and Tommi S. Jaakkola. Towards robust interpretability with self-explaining neural networks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 7786-7795, 2018.",
783
+ "Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada,"
784
+ ],
785
+ "bbox": [
786
+ 143,
787
+ 196,
788
+ 857,
789
+ 890
790
+ ],
791
+ "page_idx": 5
792
+ },
793
+ {
794
+ "type": "header",
795
+ "text": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyk, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE",
796
+ "bbox": [
797
+ 145,
798
+ 47,
799
+ 846,
800
+ 61
801
+ ],
802
+ "page_idx": 5
803
+ },
804
+ {
805
+ "type": "page_number",
806
+ "text": "6",
807
+ "bbox": [
808
+ 493,
809
+ 914,
810
+ 504,
811
+ 926
812
+ ],
813
+ "page_idx": 5
814
+ },
815
+ {
816
+ "type": "list",
817
+ "sub_type": "ref_text",
818
+ "list_items": [
819
+ "April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=Sy21R9JAW.",
820
+ "Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, and Sebastian Lapuschkin. Software for dataset-wide xai: From local explanations to global insights with zennit, corelay, and virelay, 2021.",
821
+ "Anna Arias-Duart, Ferran Parés, Dario Garcia-Gasulla, and Victor Gimenez-Abalos. Focus! rating xai methods and finding biases. CoRR, abs/2203.02928, 2021. doi: 10.48550/arXiv.2109.15035.",
822
+ "Leila Arras, Ahmed Osman, and Wojciech Samek. Clevr-xai: A benchmark dataset for the ground truth evaluation of neural network explanations. Information Fusion, 81:14-40, 2022.",
823
+ "Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques, 2019.",
824
+ "Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PloS one*, 10(7), 2015.",
825
+ "David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. J. Mach. Learn. Res., 11:1803-1831, 2010.",
826
+ "Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, and Przemyslaw Biecek. dalex: Responsible machine learning with interactive explainability and fairness in python. J. Mach. Learn. Res., 22:214:1-214:7, 2021.",
827
+ "Naman Bansal, Chirag Agarwal, and Anh Nguyen. SAM: the sensitivity of attribution methods to hyperparameters. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 8670-8680. Computer Vision Foundation / IEEE, 2020.",
828
+ "Umang Bhatt, Adrian Weller, and José M. F. Moura. Evaluating and aggregating feature-based model explanations. In Christian Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3016-3022. ijcai.org, 2020.",
829
+ "Céline Budding, Fabian Eitel, Kerstin Ritter, and Stefan Haufe. Evaluating saliency methods on artificial data with different background types. CoRR, abs/2112.04882, 2021.",
830
+ "Kirill Bykov, Anna Hedström, Shinichi Nakajima, and Marina M.-C. Höhne. Noisegrad: enhancing explanations by introducing stochasticity to model weights. CoRR, abs/2106.10185, 2021a."
831
+ ],
832
+ "bbox": [
833
+ 142,
834
+ 114,
835
+ 856,
836
+ 888
837
+ ],
838
+ "page_idx": 6
839
+ },
840
+ {
841
+ "type": "header",
842
+ "text": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS",
843
+ "bbox": [
844
+ 259,
845
+ 47,
846
+ 733,
847
+ 63
848
+ ],
849
+ "page_idx": 6
850
+ },
851
+ {
852
+ "type": "page_number",
853
+ "text": "7",
854
+ "bbox": [
855
+ 493,
856
+ 914,
857
+ 504,
858
+ 925
859
+ ],
860
+ "page_idx": 6
861
+ },
862
+ {
863
+ "type": "list",
864
+ "sub_type": "ref_text",
865
+ "list_items": [
866
+ "Kirill Bykov, Marina M.-C. Höhne, Adelaide Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, and Marius Kloft. Explaining bayesian neural networks. CoRR, abs/2108.10346, 2021b.",
867
+ "Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Xi Wu, and Somesh Jha. Concise explanations of neural networks using adversarial training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1383-1391. PMLR, 2020.",
868
+ "Ajay Chander and Ramya Srinivasan. Evaluating explanations by cognitive value. In Andreas Holzinger, Peter Kieseberg, A Min Tjoa, and Edgar R. Weippl, editors, *Machine Learning and Knowledge Extraction - Second IFIP TC 5*, TC 8/WG 8.4, 8.9, TC 12/WG 12.9 International Cross-Domain Conference, CD-MAKE 2018, Hamburg, Germany, August 27-30, 2018, Proceedings, volume 11015 of Lecture Notes in Computer Science, pages 314-328. Springer, 2018.",
869
+ "Sanjoy Dasgupta, Nave Frost, and Michal Moshkovitz. Framework for evaluating faithfulness of local explanations. CoRR, abs/2202.00734, 2022. URL https://arxiv.org/abs/2202.00734.",
870
+ "Ruth Fong, Mandela Patrick, and Andrea Vedaldi. Understanding deep networks via extremal perturbations and smooth masks, 2019.",
871
+ "Peter Hase and Mohit Bansal. Evaluating explainable AI: which algorithmic explanations help users predict model behavior? In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault, editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5540-5552. Association for Computational Linguistics, 2020.",
872
+ "Peter Hase, Harry Xie, and Mohit Bansal. The out-of-distribution problem in explainability and search methods for feature importance explanations. Advances in Neural Information Processing Systems, 34, 2021.",
873
+ "Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. Metrics for explainable AI: challenges and prospects. CoRR, abs/1812.04608, 2018.",
874
+ "Andreas Holzinger, André M. Carrington, and Heimo Müller. Measuring the quality of explanations: The system causability scale (SCS). Kunstliche Intell., 34(2):193-198, 2020.",
875
+ "Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. The (un)reliability of saliency methods. In Wojciech Samek, Gregoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller, editors, Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, volume 11700 of Lecture Notes in Computer Science, pages 267-280. Springer, 2019.",
876
+ "Janis Klaise, Arnaud Van Looveren, Giovanni Vacanti, and Alexandru Coca. Alibi explain: Algorithms for explaining machine learning models. J. Mach. Learn. Res., 22:181:1-181:7, 2021."
877
+ ],
878
+ "bbox": [
879
+ 143,
880
+ 114,
881
+ 854,
882
+ 888
883
+ ],
884
+ "page_idx": 7
885
+ },
886
+ {
887
+ "type": "header",
888
+ "text": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyK, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE",
889
+ "bbox": [
890
+ 145,
891
+ 47,
892
+ 844,
893
+ 61
894
+ ],
895
+ "page_idx": 7
896
+ },
897
+ {
898
+ "type": "page_number",
899
+ "text": "8",
900
+ "bbox": [
901
+ 493,
902
+ 914,
903
+ 504,
904
+ 925
905
+ ],
906
+ "page_idx": 7
907
+ },
908
+ {
909
+ "type": "list",
910
+ "sub_type": "ref_text",
911
+ "list_items": [
912
+ "Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, and Sebastian Lapuschkin. Towards best practice in explaining neural network decisions with LRP. In 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19-24, 2020, pages 1-7. IEEE, 2020.",
913
+ "Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for pytorch, 2020.",
914
+ "Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Unmasking clever hans predictors and assessing what machines really learn. CoRR, abs/1902.10178, 2019.",
915
+ "Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765-4774, 2017.",
916
+ "Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting and understanding deep neural networks. Digit. Signal Process., 73:1-15, 2018.",
917
+ "Niels J. S. Mørch, Ulrik Kjems, Lars Kai Hansen, Claus Svarer, Ian Law, Benny Lautrup, Stephen C. Strother, and Kelly Rehm. Visualization of neural networks using saliency maps. In Proceedings of International Conference on Neural Networks (ICNN'95), Perth, WA, Australia, November 27 - December 1, 1995, pages 2085-2090. IEEE, 1995.",
918
+ "An-phi Nguyen and María Rodríguez Martínez. On quantitative aspects of model interpretability. CoRR, abs/2007.07584, 2020. URL https://arxiv.org/abs/2007.07584.",
919
+ "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035. NeurIPS, 2019.",
920
+ "Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. \"why should I trust you?\": Explaining the predictions of any classifier. In Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu C. Aggarwal, Dou Shen, and Rajeev Rastogi, editors, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144. ACM, 2016.",
921
+ "Laura Rieger and Lars Kai Hansen. IROF: a low resource evaluation metric for explanation methods. CoRR, abs/2003.08747, 2020. URL https://arxiv.org/abs/2003.08747."
922
+ ],
923
+ "bbox": [
924
+ 143,
925
+ 114,
926
+ 854,
927
+ 890
928
+ ],
929
+ "page_idx": 8
930
+ },
931
+ {
932
+ "type": "header",
933
+ "text": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS",
934
+ "bbox": [
935
+ 259,
936
+ 47,
937
+ 732,
938
+ 61
939
+ ],
940
+ "page_idx": 8
941
+ },
942
+ {
943
+ "type": "page_number",
944
+ "text": "9",
945
+ "bbox": [
946
+ 493,
947
+ 914,
948
+ 504,
949
+ 926
950
+ ],
951
+ "page_idx": 8
952
+ },
953
+ {
954
+ "type": "list",
955
+ "sub_type": "ref_text",
956
+ "list_items": [
957
+ "Yao Rong, Tobias Leemann, Vadim Borisov, Gjergji Kasneci, and Enkelejda Kasneci. Evaluating feature attribution: An information-theoretic perspective. CoRR, abs/2202.00449, 2022.",
958
+ "Avi Rosenfeld. Better metrics for evaluating explainable artificial intelligence. In Frank Dignum, Alessio Lomuscio, Ulle Endriss, and Ann Nowé, editors, AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021, pages 45-50. ACM, 2021.",
959
+ "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis., 115(3): 211-252, 2015.",
960
+ "Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Networks Learn. Syst., 28(11):2660-2673, 2017.",
961
+ "Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. Proc. IEEE, 109(3):247-278, 2021.",
962
+ "Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3145-3153. PMLR, 2017.",
963
+ "Leon Sixt, Maximilian Granz, and Tim Landgraf. When explanations lie: Why many modified BP attributions fail. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9046-9057. PMLR, 2020.",
964
+ "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3319-3328. PMLR, 2017.",
965
+ "Jonas Theiner, Eric Müller-Budack, and Ralph Ewerth. Interpretable semantic photo geolocation. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, pages 1474-1484. IEEE, 2022.",
966
+ "Danding Wang, Qian Yang, Ashraf M. Abdul, and Brian Y. Lim. Designing theory-driven user-centric explainable AI. In Stephen A. Brewster, Geraldine Fitzpatrick, Anna L. Cox, and Vassilis Kostakos, editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM, 2019."
967
+ ],
968
+ "bbox": [
969
+ 143,
970
+ 114,
971
+ 854,
972
+ 890
973
+ ],
974
+ "page_idx": 9
975
+ },
976
+ {
977
+ "type": "header",
978
+ "text": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyk, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE",
979
+ "bbox": [
980
+ 145,
981
+ 47,
982
+ 844,
983
+ 61
984
+ ],
985
+ "page_idx": 9
986
+ },
987
+ {
988
+ "type": "page_number",
989
+ "text": "10",
990
+ "bbox": [
991
+ 488,
992
+ 912,
993
+ 508,
994
+ 926
995
+ ],
996
+ "page_idx": 9
997
+ },
998
+ {
999
+ "type": "list",
1000
+ "sub_type": "ref_text",
1001
+ "list_items": [
1002
+ "Mengjiao Yang and Been Kim. Benchmarking Attribution Methods with Relative Feature Importance. CoRR, abs/1907.09701, 2019.",
1003
+ "Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, and Pradeep Ravikumar. On the (in)fidelity and sensitivity of explanations. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 10965-10976, 2019.",
1004
+ "Gal Yona and Daniel Greenfeld. Revisiting sanity checks for saliency maps. CoRR, abs/2110.14297, 2021.",
1005
+ "Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In David J. Fleet, Tomás Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I, volume 8689 of Lecture Notes in Computer Science, pages 818-833. Springer, 2014.",
1006
+ "Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-down neural attention by excitation backprop. Int. J. Comput. Vis., 126(10): 1084-1102, 2018."
1007
+ ],
1008
+ "bbox": [
1009
+ 143,
1010
+ 114,
1011
+ 857,
1012
+ 467
1013
+ ],
1014
+ "page_idx": 10
1015
+ },
1016
+ {
1017
+ "type": "header",
1018
+ "text": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS",
1019
+ "bbox": [
1020
+ 259,
1021
+ 47,
1022
+ 733,
1023
+ 63
1024
+ ],
1025
+ "page_idx": 10
1026
+ },
1027
+ {
1028
+ "type": "page_number",
1029
+ "text": "11",
1030
+ "bbox": [
1031
+ 490,
1032
+ 912,
1033
+ 506,
1034
+ 925
1035
+ ],
1036
+ "page_idx": 10
1037
+ }
1038
+ ]
2202.06xxx/2202.06861/dce406ec-fdca-4c53-83c1-98a2ac664d0a_model.json ADDED
@@ -0,0 +1,1740 @@
1
+ [
2
+ [
3
+ {
4
+ "type": "header",
5
+ "bbox": [
6
+ 0.143,
7
+ 0.05,
8
+ 0.472,
9
+ 0.064
10
+ ],
11
+ "angle": 0,
12
+ "content": "Journal of Machine Learning Research 24 (2023) 1-11"
13
+ },
14
+ {
15
+ "type": "header",
16
+ "bbox": [
17
+ 0.56,
18
+ 0.05,
19
+ 0.855,
20
+ 0.065
21
+ ],
22
+ "angle": 0,
23
+ "content": "Submitted 2/22; Revised 11/22; Published 1/23"
24
+ },
25
+ {
26
+ "type": "aside_text",
27
+ "bbox": [
28
+ 0.023,
29
+ 0.264,
30
+ 0.061,
31
+ 0.709
32
+ ],
33
+ "angle": 270,
34
+ "content": "arXiv:2202.06861v3 [cs.LG] 27 Apr 2023"
35
+ },
36
+ {
37
+ "type": "title",
38
+ "bbox": [
39
+ 0.177,
40
+ 0.125,
41
+ 0.82,
42
+ 0.171
43
+ ],
44
+ "angle": 0,
45
+ "content": "Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond"
46
+ },
47
+ {
48
+ "type": "text",
49
+ "bbox": [
50
+ 0.143,
51
+ 0.191,
52
+ 0.308,
53
+ 0.209
54
+ ],
55
+ "angle": 0,
56
+ "content": "Anna Hedström<sup>1,†</sup>"
57
+ },
58
+ {
59
+ "type": "text",
60
+ "bbox": [
61
+ 0.143,
62
+ 0.214,
63
+ 0.292,
64
+ 0.23
65
+ ],
66
+ "angle": 0,
67
+ "content": "Leander Weber<sup>3</sup>"
68
+ },
69
+ {
70
+ "type": "text",
71
+ "bbox": [
72
+ 0.144,
73
+ 0.236,
74
+ 0.297,
75
+ 0.254
76
+ ],
77
+ "angle": 0,
78
+ "content": "Dilyara Bareeva"
79
+ },
80
+ {
81
+ "type": "text",
82
+ "bbox": [
83
+ 0.144,
84
+ 0.259,
85
+ 0.327,
86
+ 0.277
87
+ ],
88
+ "angle": 0,
89
+ "content": "Daniel Krakowczyk<sup>4</sup>"
90
+ },
91
+ {
92
+ "type": "text",
93
+ "bbox": [
94
+ 0.144,
95
+ 0.282,
96
+ 0.288,
97
+ 0.298
98
+ ],
99
+ "angle": 0,
100
+ "content": "Franz Motzkus<sup>3</sup>"
101
+ },
102
+ {
103
+ "type": "text",
104
+ "bbox": [
105
+ 0.144,
106
+ 0.303,
107
+ 0.321,
108
+ 0.322
109
+ ],
110
+ "angle": 0,
111
+ "content": "Wojciech Samek\\(^{2,3,5}\\)"
112
+ },
113
+ {
114
+ "type": "text",
115
+ "bbox": [
116
+ 0.144,
117
+ 0.326,
118
+ 0.357,
119
+ 0.344
120
+ ],
121
+ "angle": 0,
122
+ "content": "Sebastian Lapuschkin\\(^{3,\\dagger}\\)"
123
+ },
124
+ {
125
+ "type": "text",
126
+ "bbox": [
127
+ 0.597,
128
+ 0.195,
129
+ 0.848,
130
+ 0.209
131
+ ],
132
+ "angle": 0,
133
+ "content": "ANNA.HEDSTROEM@TU-BERLIN.DE"
134
+ },
135
+ {
136
+ "type": "text",
137
+ "bbox": [
138
+ 0.558,
139
+ 0.217,
140
+ 0.848,
141
+ 0.231
142
+ ],
143
+ "angle": 0,
144
+ "content": "LEANDER.WEBER@HHI.FRAUNHOFER.DE"
145
+ },
146
+ {
147
+ "type": "text",
148
+ "bbox": [
149
+ 0.535,
150
+ 0.24,
151
+ 0.848,
152
+ 0.253
153
+ ],
154
+ "angle": 0,
155
+ "content": "DILYARA.BAREEVA@CAMPUS.TU-BERLIN.DE"
156
+ },
157
+ {
158
+ "type": "text",
159
+ "bbox": [
160
+ 0.548,
161
+ 0.263,
162
+ 0.848,
163
+ 0.275
164
+ ],
165
+ "angle": 0,
166
+ "content": "DANIEL.KRAKOWCZYK@UNI-POTSDAM.DE"
167
+ },
168
+ {
169
+ "type": "text",
170
+ "bbox": [
171
+ 0.561,
172
+ 0.285,
173
+ 0.848,
174
+ 0.298
175
+ ],
176
+ "angle": 0,
177
+ "content": "FRANZ.MOTZKUS@HHI.FRAUNHOFER.DE"
178
+ },
179
+ {
180
+ "type": "text",
181
+ "bbox": [
182
+ 0.552,
183
+ 0.307,
184
+ 0.848,
185
+ 0.32
186
+ ],
187
+ "angle": 0,
188
+ "content": "WOJCIECH.SAMEK@HHI.FRAUNHOFER.DE"
189
+ },
190
+ {
191
+ "type": "text",
192
+ "bbox": [
193
+ 0.507,
194
+ 0.329,
195
+ 0.848,
196
+ 0.342
197
+ ],
198
+ "angle": 0,
199
+ "content": "SEBASTIAN.LAPUSCHKIN@HHI.FRAUNHOFER.DE"
200
+ },
201
+ {
202
+ "type": "text",
203
+ "bbox": [
204
+ 0.143,
205
+ 0.35,
206
+ 0.364,
207
+ 0.366
208
+ ],
209
+ "angle": 0,
210
+ "content": "Marina M.-C. Hohne\\(^{1,5,\\dagger}\\)"
211
+ },
212
+ {
213
+ "type": "text",
214
+ "bbox": [
215
+ 0.614,
216
+ 0.354,
217
+ 0.854,
218
+ 0.367
219
+ ],
220
+ "angle": 0,
221
+ "content": "MARINA.HOEHNE@TU-BERLIN.DE"
222
+ },
223
+ {
224
+ "type": "text",
225
+ "bbox": [
226
+ 0.144,
227
+ 0.369,
228
+ 0.719,
229
+ 0.385
230
+ ],
231
+ "angle": 0,
232
+ "content": "<sup>1</sup> Understandable Machine Intelligence Lab, TU Berlin, 10587 Berlin, Germany"
233
+ },
234
+ {
235
+ "type": "text",
236
+ "bbox": [
237
+ 0.144,
238
+ 0.386,
239
+ 0.855,
240
+ 0.402
241
+ ],
242
+ "angle": 0,
243
+ "content": "\\(^{2}\\) Department of Electrical Engineering and Computer Science, TU Berlin, 10587 Berlin, Germany"
244
+ },
245
+ {
246
+ "type": "text",
247
+ "bbox": [
248
+ 0.144,
249
+ 0.403,
250
+ 0.855,
251
+ 0.42
252
+ ],
253
+ "angle": 0,
254
+ "content": "\\(^{3}\\) Department of Artificial Intelligence, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany"
255
+ },
256
+ {
257
+ "type": "text",
258
+ "bbox": [
259
+ 0.144,
260
+ 0.42,
261
+ 0.765,
262
+ 0.436
263
+ ],
264
+ "angle": 0,
265
+ "content": "\\(^{4}\\) Department of Computer Science, University of Potsdam, 14476 Potsdam, Germany"
266
+ },
267
+ {
268
+ "type": "text",
269
+ "bbox": [
270
+ 0.144,
271
+ 0.438,
272
+ 0.837,
273
+ 0.453
274
+ ],
275
+ "angle": 0,
276
+ "content": "5 BIFOLD - Berlin Institute for the Foundations of Learning and Data, 10587 Berlin, Germany"
277
+ },
278
+ {
279
+ "type": "text",
280
+ "bbox": [
281
+ 0.144,
282
+ 0.455,
283
+ 0.317,
284
+ 0.471
285
+ ],
286
+ "angle": 0,
287
+ "content": "† corresponding authors"
288
+ },
289
+ {
290
+ "type": "list",
291
+ "bbox": [
292
+ 0.144,
293
+ 0.369,
294
+ 0.855,
295
+ 0.471
296
+ ],
297
+ "angle": 0,
298
+ "content": null
299
+ },
300
+ {
301
+ "type": "text",
302
+ "bbox": [
303
+ 0.143,
304
+ 0.499,
305
+ 0.354,
306
+ 0.515
307
+ ],
308
+ "angle": 0,
309
+ "content": "Editor: Joaquin Vanschoren"
310
+ },
311
+ {
312
+ "type": "title",
313
+ "bbox": [
314
+ 0.455,
315
+ 0.545,
316
+ 0.543,
317
+ 0.56
318
+ ],
319
+ "angle": 0,
320
+ "content": "Abstract"
321
+ },
322
+ {
323
+ "type": "text",
324
+ "bbox": [
325
+ 0.173,
326
+ 0.567,
327
+ 0.825,
328
+ 0.719
329
+ ],
330
+ "angle": 0,
331
+ "content": "The evaluation of explanation methods is a research topic that has not yet been explored deeply, however, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, no tool with focus on XAI evaluation exists that exhaustively and speedily allows researchers to evaluate the performance of explanations of neural network predictions. To increase transparency and reproducibility in the field, we therefore built Quantus—a comprehensive, evaluation toolkit in Python that includes a growing, well-organised collection of evaluation metrics and tutorials for evaluating explainable methods. The toolkit has been thoroughly tested and is available under an open-source license on PyPi (or on https://github.com/understandable-machine-intelligence-lab/Quantus/)."
332
+ },
333
+ {
334
+ "type": "text",
335
+ "bbox": [
336
+ 0.174,
337
+ 0.721,
338
+ 0.756,
339
+ 0.737
340
+ ],
341
+ "angle": 0,
342
+ "content": "Keywords: explainability, responsible AI, reproducibility, open source, Python"
343
+ },
344
+ {
345
+ "type": "title",
346
+ "bbox": [
347
+ 0.143,
348
+ 0.761,
349
+ 0.295,
350
+ 0.777
351
+ ],
352
+ "angle": 0,
353
+ "content": "1. Introduction"
354
+ },
355
+ {
356
+ "type": "text",
357
+ "bbox": [
358
+ 0.14,
359
+ 0.788,
360
+ 0.858,
361
+ 0.892
362
+ ],
363
+ "angle": 0,
364
+ "content": "Despite much excitement and activity in the field of eXplainable artificial intelligence (XAI) (Montavon et al., 2018; Arya et al., 2019; Lapuschkin et al., 2019; Samek et al., 2021; Bykov et al., 2021b), the evaluation of explainable methods still remains an unsolved problem (Samek et al., 2017; Adebayo et al., 2020; Holzinger et al., 2020; Yona and Greenfeld, 2021; Arras et al., 2022). Unlike in traditional machine learning (ML), the task of explaining generally lacks \"ground-truth\" data. There exists no universally accepted definition of what"
365
+ },
366
+ {
367
+ "type": "footer",
368
+ "bbox": [
369
+ 0.14,
370
+ 0.914,
371
+ 0.831,
372
+ 0.94
373
+ ],
374
+ "angle": 0,
375
+ "content": "\\(\\langle \\widehat{\\mathbb{C}}\\rangle\\) 2023 Anna Hedstrom, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, and Marina M.-C. Hohne."
376
+ },
377
+ {
378
+ "type": "footer",
379
+ "bbox": [
380
+ 0.141,
381
+ 0.945,
382
+ 0.845,
383
+ 0.97
384
+ ],
385
+ "angle": 0,
386
+ "content": "License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v24/22-0142.html."
387
+ }
388
+ ],
389
+ [
390
+ {
391
+ "type": "header",
392
+ "bbox": [
393
+ 0.146,
394
+ 0.048,
395
+ 0.846,
396
+ 0.063
397
+ ],
398
+ "angle": 0,
399
+ "content": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyk, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE"
400
+ },
401
+ {
402
+ "type": "text",
403
+ "bbox": [
404
+ 0.14,
405
+ 0.115,
406
+ 0.856,
407
+ 0.288
408
+ ],
409
+ "angle": 0,
410
+ "content": "a \"correct\" explanation is, or what properties an explanation should fulfil (Yang and Kim, 2019). Due to this lack of standardised evaluation procedures in XAI, researchers frequently conceive new ways to experimentally examine explanation methods (Bach et al., 2015; Samek et al., 2017; Adebayo et al., 2018; Yang and Kim, 2019; Kindermans et al., 2019), oftentimes employing different parameterisations and various kinds of preprocessing and normalisations, each leading to different or even contrasting results, making evaluation outcomes difficult to interpret and compare. Critically, we note that it is common for XAI papers to base their conclusions on one-sided, sometimes methodologically questionable evaluation procedures, which we fear may hinder access to the current State-of-the-art (SOTA) in XAI and potentially hurt the perceived credibility of the field over time."
411
+ },
412
+ {
413
+ "type": "text",
414
+ "bbox": [
415
+ 0.144,
416
+ 0.292,
417
+ 0.856,
418
+ 0.583
419
+ ],
420
+ "angle": 0,
421
+ "content": "For these reasons, researchers often rely on a qualitative evaluation of explanation methods (e.g., Zeiler and Fergus (2014); Ribeiro et al. (2016); Shrikumar et al. (2017)). Although qualitative evaluation of XAI methods is an important and complementary type of evaluation analysis (Hoffman et al., 2018), the assumption that humans are able to recognise a correct explanation comes with a series of pitfalls: not only does the notion of an \"accurate\" explanation often depend on the specifics of the task at hand, humans are also questionable judges of quality (Wang et al., 2019; Rosenfeld, 2021). In addition, recent studies suggest that even quantitative evaluation of explainable methods is far from fault-proof (Bansal et al., 2020; Budding et al., 2021; Yona and Greenfeld, 2021; Hase and Bansal, 2020). In response to these issues, we developed Quantus, to provide the community with a versatile and comprehensive toolkit that collects, organises, and explains a wide range of evaluation metrics proposed for explanation methods. The library is designed to help automate the process of XAI quantification—by delivering speedy, easily digestible, and at the same time holistic summaries of the quality of the given explanations. As we see it, Quantus concludes an important, still missing contribution in today's XAI research by filling the gap between what the community produces and what it currently needs: a more quantitative, systematic and standardised evaluation of explanation methods."
422
+ },
423
+ {
424
+ "type": "title",
425
+ "bbox": [
426
+ 0.142,
427
+ 0.61,
428
+ 0.341,
429
+ 0.627
430
+ ],
431
+ "angle": 0,
432
+ "content": "2. Toolkit Overview"
433
+ },
434
+ {
435
+ "type": "text",
436
+ "bbox": [
437
+ 0.141,
438
+ 0.643,
439
+ 0.856,
440
+ 0.712
441
+ ],
442
+ "angle": 0,
443
+ "content": "Quantus provides its intended users—practitioners and researchers interested in the domains of ML and XAI—with a steadily expanding list of \\(30+\\) reference metrics to evaluate explanations of ML predictions. Moreover, it offers comprehensive guidance on how to use these metrics, including information about potential pitfalls in their application."
444
+ },
445
+ {
446
+ "type": "table_caption",
447
+ "bbox": [
448
+ 0.143,
449
+ 0.741,
450
+ 0.856,
451
+ 0.789
452
+ ],
453
+ "angle": 0,
454
+ "content": "Table 1: Comparison of four XAI libraries—(AIX360 (Arya et al., 2019), captum (Kokhlikyan et al., 2020), TorchRay (Fong et al., 2019) and Quantus) in terms of the number of XAI evaluation methods for six different evaluation categories, as implemented in each library."
455
+ },
456
+ {
457
+ "type": "table",
458
+ "bbox": [
459
+ 0.149,
460
+ 0.801,
461
+ 0.85,
462
+ 0.888
463
+ ],
464
+ "angle": 0,
465
+ "content": "<table><tr><td>Library</td><td>Faithfulness</td><td>Robustness</td><td>Localisation</td><td>Complexity</td><td>Axiomatic</td><td>Randomisation</td></tr><tr><td>Captum (2)</td><td>1</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>AIX360 (2)</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>TorchRay (1)</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Quantus (27)</td><td>9</td><td>4</td><td>6</td><td>3</td><td>3</td><td>2</td></tr></table>"
466
+ },
467
+ {
468
+ "type": "page_number",
469
+ "bbox": [
470
+ 0.494,
471
+ 0.915,
472
+ 0.506,
473
+ 0.927
474
+ ],
475
+ "angle": 0,
476
+ "content": "2"
477
+ }
478
+ ],
479
+ [
480
+ {
481
+ "type": "header",
482
+ "bbox": [
483
+ 0.26,
484
+ 0.048,
485
+ 0.735,
486
+ 0.065
487
+ ],
488
+ "angle": 0,
489
+ "content": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS"
490
+ },
491
+ {
492
+ "type": "image_caption",
493
+ "bbox": [
494
+ 0.264,
495
+ 0.118,
496
+ 0.278,
497
+ 0.132
498
+ ],
499
+ "angle": 0,
500
+ "content": "a)"
501
+ },
502
+ {
503
+ "type": "image",
504
+ "bbox": [
505
+ 0.241,
506
+ 0.139,
507
+ 0.761,
508
+ 0.226
509
+ ],
510
+ "angle": 0,
511
+ "content": null
512
+ },
513
+ {
514
+ "type": "image_caption",
515
+ "bbox": [
516
+ 0.261,
517
+ 0.235,
518
+ 0.278,
519
+ 0.248
520
+ ],
521
+ "angle": 0,
522
+ "content": "b)"
523
+ },
524
+ {
525
+ "type": "image",
526
+ "bbox": [
527
+ 0.252,
528
+ 0.248,
529
+ 0.49,
530
+ 0.392
531
+ ],
532
+ "angle": 0,
533
+ "content": null
534
+ },
535
+ {
536
+ "type": "image_caption",
537
+ "bbox": [
538
+ 0.477,
539
+ 0.236,
540
+ 0.491,
541
+ 0.248
542
+ ],
543
+ "angle": 0,
544
+ "content": "c)"
545
+ },
546
+ {
547
+ "type": "image",
548
+ "bbox": [
549
+ 0.504,
550
+ 0.246,
551
+ 0.744,
552
+ 0.402
553
+ ],
554
+ "angle": 0,
555
+ "content": null
556
+ },
557
+ {
558
+ "type": "image_caption",
559
+ "bbox": [
560
+ 0.142,
561
+ 0.422,
562
+ 0.858,
563
+ 0.53
564
+ ],
565
+ "angle": 0,
566
+ "content": "Figure 1: a) Simple qualitative comparison of XAI methods is often not sufficient to distinguish which gradient-based method—Saliency (Mørch et al., 1995; Baehrens et al., 2010), Integrated Gradients (Sundararajan et al., 2017), GradientShap (Lundberg and Lee, 2017) or FusionGrad (Bykov et al., 2021a) is preferred. With Quantus, we can obtain richer insights on how the methods compare \\( b \\)) by holistic quantification on several evaluation criteria and \\( c \\)) by providing sensitivity analysis of how a single parameter, e.g., pixel replacement strategy of a faithfulness test influences the ranking of explanation methods."
567
+ },
568
+ {
569
+ "type": "text",
570
+ "bbox": [
571
+ 0.141,
572
+ 0.552,
573
+ 0.856,
574
+ 0.722
575
+ ],
576
+ "angle": 0,
577
+ "content": "The library is thoroughly documented and includes tutorials covering multiple use-cases, data domains and tasks—from comparative analysis of XAI methods and attributions, to quantifying the extent evaluation outcomes are dependent on metrics' parameterisations. In Figure 1, we demonstrate some example analysis using ImageNet dataset (Russakovsky et al., 2015) that can be produced with \\(\\mathbf{Quantus}^1\\). The library provides an abstract layer between APIs of deep learning frameworks, e.g., PyTorch (Paszke et al., 2019) and tensorflow (Abadi et al., 2016) and can be employed iteratively both during and after model training. Code quality is ensured by thorough testing, using pytest and continuous integration (CI), where every new contribution is automatically checked for sufficient test coverage. We employ syntax formatting with flake8, mypy and black under various Python versions."
578
+ },
579
+ {
580
+ "type": "text",
581
+ "bbox": [
582
+ 0.141,
583
+ 0.724,
584
+ 0.856,
585
+ 0.793
586
+ ],
587
+ "angle": 0,
588
+ "content": "Unlike other XAI-related libraries\\(^2\\), Quantus has its primary focus on evaluation and as such, supports a breadth of metrics, spanning various evaluation categories (see Table 1). A detailed description of the different evaluation categories can be found in the Appendix. The first iterations of the library mainly focus on attribution-based explanation techniques\\(^3\\) for"
589
+ },
590
+ {
591
+ "type": "page_footnote",
592
+ "bbox": [
593
+ 0.147,
594
+ 0.807,
595
+ 0.848,
596
+ 0.821
597
+ ],
598
+ "angle": 0,
599
+ "content": "1. The full experiment can be reproduced (and obtained) at the repository, under the \\tutorials folder."
600
+ },
601
+ {
602
+ "type": "page_footnote",
603
+ "bbox": [
604
+ 0.149,
605
+ 0.822,
606
+ 0.853,
607
+ 0.863
608
+ ],
609
+ "angle": 0,
610
+ "content": "2. Related libraries were selected with respect to the XAI evaluation capabilities. Packages including no metrics for evaluation of explanation methods, e.g., Alibi (Klaise et al., 2021), iNNvestigate (Alber et al., 2019), dalex (Baniecki et al., 2021) and zennit (Anders et al., 2021) were excluded."
611
+ },
612
+ {
613
+ "type": "page_footnote",
614
+ "bbox": [
615
+ 0.149,
616
+ 0.863,
617
+ 0.854,
618
+ 0.89
619
+ ],
620
+ "angle": 0,
621
+ "content": "3. This category of explainable methods aims to assign an importance value to the model features and arguably, is the most studied group of explanations."
622
+ },
623
+ {
624
+ "type": "list",
625
+ "bbox": [
626
+ 0.147,
627
+ 0.807,
628
+ 0.854,
629
+ 0.89
630
+ ],
631
+ "angle": 0,
632
+ "content": null
633
+ },
634
+ {
635
+ "type": "page_number",
636
+ "bbox": [
637
+ 0.494,
638
+ 0.915,
639
+ 0.506,
640
+ 0.927
641
+ ],
642
+ "angle": 0,
643
+ "content": "3"
644
+ }
645
+ ],
646
+ [
647
+ {
648
+ "type": "header",
649
+ "bbox": [
650
+ 0.146,
651
+ 0.048,
652
+ 0.846,
653
+ 0.063
654
+ ],
655
+ "angle": 0,
656
+ "content": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyk, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE"
657
+ },
658
+ {
659
+ "type": "text",
660
+ "bbox": [
661
+ 0.14,
662
+ 0.116,
663
+ 0.855,
664
+ 0.184
665
+ ],
666
+ "angle": 0,
667
+ "content": "(but not limited to) image classification. In planned future releases, we are working towards extending the applicability of the library further, e.g., by developing additional metrics and functionality that will enable users to perform checks, verifications and sensitivity analyses on top of the metrics."
668
+ },
669
+ {
670
+ "type": "title",
671
+ "bbox": [
672
+ 0.142,
673
+ 0.207,
674
+ 0.321,
675
+ 0.225
676
+ ],
677
+ "angle": 0,
678
+ "content": "3. Library Design"
679
+ },
680
+ {
681
+ "type": "text",
682
+ "bbox": [
683
+ 0.14,
684
+ 0.234,
685
+ 0.856,
686
+ 0.354
687
+ ],
688
+ "angle": 0,
689
+ "content": "The user-facing API of Quantus is designed with the aim of replacing an oftentimes lengthy and open-ended evaluation procedure with structure and speed—with a single line of code, the user can gain quantitative insights of how their explanations are behaving under various criteria. In the following code snippet, we demonstrate one way for how Quantus can be used to evaluate pre-computed explanations via a PixelFlipping experiment (Bach et al., 2015). In this example, we assume to have a pre-trained model (model), a batch of input and output pairs (x_batch, y_batch) and a set of attributions (a_batch)."
690
+ },
691
+ {
692
+ "type": "code",
693
+ "bbox": [
694
+ 0.143,
695
+ 0.37,
696
+ 0.822,
697
+ 0.427
698
+ ],
699
+ "angle": 0,
700
+ "content": "import quantus \npixelflipping = quantus.PixelFlipping(perturb_base \\(\\equiv\\) \"black\", abs=True) scores \\(=\\) pixelflipping(model, x_batch, y_batch, a_batch, **params) \npixelflipping.plot(y_batch=y_batch, scores=scores)"
701
+ },
702
+ {
703
+ "type": "text",
704
+ "bbox": [
705
+ 0.14,
706
+ 0.448,
707
+ 0.856,
708
+ 0.601
709
+ ],
710
+ "angle": 0,
711
+ "content": "Needless to say, XAI evaluation is intrinsically difficult and there is no one-size-fits-all metric for all tasks. Evaluation of explanations must, therefore, be understood and calibrated from its context: the application, data, model, and intended stakeholders (Chander and Srinivasan, 2018; Arras et al., 2022). To this end, we designed Quantus to be highly customisable and easily extendable—API documentation and examples on how to create new metrics as well as how to customise existing ones are included. Thanks to the API, any supporting functions of the evaluation procedure, e.g., perturb_baseline that determines the value that the input features should be iteratively masked with, can flexibly be replaced by a user-specified function to ensure that the evaluation procedure is appropriately contextualised."
712
+ },
713
+ {
714
+ "type": "text",
715
+ "bbox": [
716
+ 0.14,
717
+ 0.602,
718
+ 0.856,
719
+ 0.705
720
+ ],
721
+ "angle": 0,
722
+ "content": "It is practically well-known but not yet publicly recognised that evaluation outcomes of explanations can be highly sensitive to the parameterisation of metrics (Bansal et al., 2020; Agarwal and Nguyen, 2020) and other confounding factors introduced in the evaluation procedure (Hase et al., 2021; Yona and Greenfeld, 2021). Therefore, to encourage a thoughtful and responsible selection and parameterisation of metrics, we added mechanisms such as warnings, checks and user guidelines, cautioning users to reflect upon their choices."
723
+ },
724
+ {
725
+ "type": "title",
726
+ "bbox": [
727
+ 0.142,
728
+ 0.728,
729
+ 0.327,
730
+ 0.745
731
+ ],
732
+ "angle": 0,
733
+ "content": "4. Broader Impact"
734
+ },
735
+ {
736
+ "type": "text",
737
+ "bbox": [
738
+ 0.14,
739
+ 0.755,
740
+ 0.856,
741
+ 0.892
742
+ ],
743
+ "angle": 0,
744
+ "content": "We built Quantus to raise the bar of XAI quantification—to substitute an ad-hoc and sometimes ineffective evaluation procedure with reproducibility, simplicity and transparency. From our perspective, Quantus contributes to the XAI development by helping researchers to speed up the development and application of explanation methods, dissolve existing ambiguities and enable more comparability. As we see it, steering efforts towards increasing objectiveness of evaluations and reproducibility in the field will prove rewarding for the community as a whole. We are convinced that a holistic, multidimensional take on XAI quantification will be imperative to the general success of (X)AI over time."
745
+ },
746
+ {
747
+ "type": "page_number",
748
+ "bbox": [
749
+ 0.494,
750
+ 0.915,
751
+ 0.506,
752
+ 0.927
753
+ ],
754
+ "angle": 0,
755
+ "content": "4"
756
+ }
757
+ ],
758
+ [
759
+ {
760
+ "type": "header",
761
+ "bbox": [
762
+ 0.259,
763
+ 0.048,
764
+ 0.734,
765
+ 0.064
766
+ ],
767
+ "angle": 0,
768
+ "content": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS"
769
+ },
770
+ {
771
+ "type": "title",
772
+ "bbox": [
773
+ 0.142,
774
+ 0.115,
775
+ 0.582,
776
+ 0.136
777
+ ],
778
+ "angle": 0,
779
+ "content": "Acknowledgments and Disclosure of Funding"
780
+ },
781
+ {
782
+ "type": "text",
783
+ "bbox": [
784
+ 0.14,
785
+ 0.151,
786
+ 0.859,
787
+ 0.238
788
+ ],
789
+ "angle": 0,
790
+ "content": "This work was partly funded by the German Federal Ministry for Education and Research through project Explaining 4.0 (ref. 01IS20055), BIFOLD (ref. 01IS18025A and ref. 01IS18037A), AEye (ref. 01IS20043), the Investitionsbank Berlin through BerDiBA (grant no. 10174498), as well as the European Union's Horizon 2020 programme through iToBoS (grant no. 965221)."
791
+ },
792
+ {
793
+ "type": "title",
794
+ "bbox": [
795
+ 0.143,
796
+ 0.258,
797
+ 0.245,
798
+ 0.276
799
+ ],
800
+ "angle": 0,
801
+ "content": "Appendix"
802
+ },
803
+ {
804
+ "type": "text",
805
+ "bbox": [
806
+ 0.14,
807
+ 0.285,
808
+ 0.857,
809
+ 0.405
810
+ ],
811
+ "angle": 0,
812
+ "content": "In most explainability contexts, ground-truth explanations are not available (Samek et al., 2017; Adebayo et al., 2020; Holzinger et al., 2020; Yona and Greenfeld, 2021; Arras et al., 2022), which makes the task of evaluating explanations non-trivial. Efforts on evaluating explanations have therefore been invested diversely. For better organisation, in the source code of Quantus, we therefore grouped the metrics into six categories based on their logical similarity—(a) faithfulness, (b) robustness, (c) localisation, (d) complexity, (e) randomisation and (f) axiomatic metrics."
813
+ },
814
+ {
815
+ "type": "text",
816
+ "bbox": [
817
+ 0.141,
818
+ 0.405,
819
+ 0.856,
820
+ 0.492
821
+ ],
822
+ "angle": 0,
823
+ "content": "In the following, we describe each of the categories briefly. A more in-depth description of each category, including an account of the underlying metrics, is documented in the repository. The direction of the arrow indicates whether higher or lower values are considered better (exceptions within each category exist, so please carefully read the docstrings of each individual metric prior to usage and/or interpretation)."
824
+ },
825
+ {
826
+ "type": "text",
827
+ "bbox": [
828
+ 0.155,
829
+ 0.504,
830
+ 0.855,
831
+ 0.607
832
+ ],
833
+ "angle": 0,
834
+ "content": "(a) Faithfulness \\((\\uparrow)\\) quantifies to what extent explanations follow the predictive behaviour of the model, asserting that more important features affect model decisions more strongly (Bhatt et al., 2020; Alvarez-Melis and Jaakkola, 2018; Arya et al., 2019; Nguyen and Martínez, 2020; Bach et al., 2015; Samek et al., 2017; Montavon et al., 2018; Ancona et al., 2018; Rieger and Hansen, 2020; Yeh et al., 2019; Rong et al., 2022; Dasgupta et al., 2022)"
835
+ },
836
+ {
837
+ "type": "text",
838
+ "bbox": [
839
+ 0.155,
840
+ 0.619,
841
+ 0.855,
842
+ 0.687
843
+ ],
844
+ "angle": 0,
845
+ "content": "(b) Robustness \\((\\downarrow)\\) measures to what extent explanations are stable when subject to slight perturbations in the input, assuming that the model output approximately stayed the same (Yeh et al., 2019; Montavon et al., 2018; Alvarez-Melis and Jaakkola, 2018; Dasgupta et al., 2022)"
846
+ },
847
+ {
848
+ "type": "text",
849
+ "bbox": [
850
+ 0.157,
851
+ 0.699,
852
+ 0.855,
853
+ 0.768
854
+ ],
855
+ "angle": 0,
856
+ "content": "(c) Localisation \\((\\uparrow)\\) tests if the explainable evidence is centred around a region of interest, which may be defined around an object by a bounding box, a segmentation mask or a cell within a grid (Zhang et al., 2018; Theiner et al., 2022; Kohlbrenner et al., 2020; Arras et al., 2022; Rong et al., 2022; Arias-Duart et al., 2021)"
857
+ },
858
+ {
859
+ "type": "text",
860
+ "bbox": [
861
+ 0.155,
862
+ 0.778,
863
+ 0.855,
864
+ 0.83
865
+ ],
866
+ "angle": 0,
867
+ "content": "(d) Complexity \\((\\downarrow)\\) captures to what extent explanations are concise, i.e., that few features are used to explain a model prediction (Chalasani et al., 2020; Bhatt et al., 2020; Nguyen and Martínez, 2020)"
868
+ },
869
+ {
870
+ "type": "text",
871
+ "bbox": [
872
+ 0.157,
873
+ 0.84,
874
+ 0.855,
875
+ 0.892
876
+ ],
877
+ "angle": 0,
878
+ "content": "(e) Randomisation \\((\\uparrow)\\) tests to what extent explanations deteriorate as the data labels or the model, e.g., its parameters are increasingly randomised (Adebayo et al., 2018; Sixt et al., 2020)"
879
+ },
880
+ {
881
+ "type": "list",
882
+ "bbox": [
883
+ 0.155,
884
+ 0.504,
885
+ 0.855,
886
+ 0.892
887
+ ],
888
+ "angle": 0,
889
+ "content": null
890
+ },
891
+ {
892
+ "type": "page_number",
893
+ "bbox": [
894
+ 0.494,
895
+ 0.915,
896
+ 0.506,
897
+ 0.927
898
+ ],
899
+ "angle": 0,
900
+ "content": "5"
901
+ }
902
+ ],
903
+ [
904
+ {
905
+ "type": "header",
906
+ "bbox": [
907
+ 0.146,
908
+ 0.048,
909
+ 0.847,
910
+ 0.063
911
+ ],
912
+ "angle": 0,
913
+ "content": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyk, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE"
914
+ },
915
+ {
916
+ "type": "ref_text",
917
+ "bbox": [
918
+ 0.158,
919
+ 0.115,
920
+ 0.856,
921
+ 0.15
922
+ ],
923
+ "angle": 0,
924
+ "content": "(f) Axiomatic \\((\\uparrow)\\) measures if explanations fulfill certain axiomatic properties (Kindermans et al., 2019; Sundararajan et al., 2017; Nguyen and Martínez, 2020)"
925
+ },
926
+ {
927
+ "type": "title",
928
+ "bbox": [
929
+ 0.143,
930
+ 0.171,
931
+ 0.251,
932
+ 0.187
933
+ ],
934
+ "angle": 0,
935
+ "content": "References"
936
+ },
937
+ {
938
+ "type": "ref_text",
939
+ "bbox": [
940
+ 0.146,
941
+ 0.198,
942
+ 0.859,
943
+ 0.353
944
+ ],
945
+ "angle": 0,
946
+ "content": "Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaogiang Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems, 2016."
947
+ },
948
+ {
949
+ "type": "ref_text",
950
+ "bbox": [
951
+ 0.145,
952
+ 0.365,
953
+ 0.857,
954
+ 0.466
955
+ ],
956
+ "angle": 0,
957
+ "content": "Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 9525-9536, 2018."
958
+ },
959
+ {
960
+ "type": "ref_text",
961
+ "bbox": [
962
+ 0.145,
963
+ 0.481,
964
+ 0.857,
965
+ 0.566
966
+ ],
967
+ "angle": 0,
968
+ "content": "Julius Adebayo, Michael Muelly, Ilaria Liccardi, and Been Kim. Debugging tests for model explanations. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020."
969
+ },
970
+ {
971
+ "type": "ref_text",
972
+ "bbox": [
973
+ 0.145,
974
+ 0.579,
975
+ 0.857,
976
+ 0.664
977
+ ],
978
+ "angle": 0,
979
+ "content": "Chirag Agarwal and Anh Nguyen. Explaining image classifiers by removing input features using generative models. In Hiroshi Ishikawa, Cheng-Lin Liu, Tomás Pajdla, and Jianbo Shi, editors, Computer Vision - ACCV 2020 - 15th Asian Conference on Computer Vision, Kyoto, Japan, November 30 - December 4, 2020, Revised Selected Papers, Part VI, volume 12627 of Lecture Notes in Computer Science, pages 101-118. Springer, 2020."
980
+ },
981
+ {
982
+ "type": "ref_text",
983
+ "bbox": [
984
+ 0.145,
985
+ 0.678,
986
+ 0.857,
987
+ 0.729
988
+ ],
989
+ "angle": 0,
990
+ "content": "Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, and Pieter-Jan Kindermans. Investigate neural networks! J. Mach. Learn. Res., 20:93:1-93:8, 2019."
991
+ },
992
+ {
993
+ "type": "ref_text",
994
+ "bbox": [
995
+ 0.145,
996
+ 0.741,
997
+ 0.857,
998
+ 0.827
999
+ ],
1000
+ "angle": 0,
1001
+ "content": "David Alvarez-Melis and Tommi S. Jaakkola. Towards robust interpretability with self-explaining neural networks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 7786-7795, 2018."
1002
+ },
1003
+ {
1004
+ "type": "ref_text",
1005
+ "bbox": [
1006
+ 0.145,
1007
+ 0.84,
1008
+ 0.857,
1009
+ 0.891
1010
+ ],
1011
+ "angle": 0,
1012
+ "content": "Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada,"
1013
+ },
1014
+ {
1015
+ "type": "list",
1016
+ "bbox": [
1017
+ 0.145,
1018
+ 0.198,
1019
+ 0.859,
1020
+ 0.891
1021
+ ],
1022
+ "angle": 0,
1023
+ "content": null
1024
+ },
1025
+ {
1026
+ "type": "page_number",
1027
+ "bbox": [
1028
+ 0.494,
1029
+ 0.915,
1030
+ 0.506,
1031
+ 0.927
1032
+ ],
1033
+ "angle": 0,
1034
+ "content": "6"
1035
+ }
1036
+ ],
1037
+ [
1038
+ {
1039
+ "type": "header",
1040
+ "bbox": [
1041
+ 0.26,
1042
+ 0.048,
1043
+ 0.735,
1044
+ 0.064
1045
+ ],
1046
+ "angle": 0,
1047
+ "content": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS"
1048
+ },
1049
+ {
1050
+ "type": "ref_text",
1051
+ "bbox": [
1052
+ 0.158,
1053
+ 0.115,
1054
+ 0.856,
1055
+ 0.15
1056
+ ],
1057
+ "angle": 0,
1058
+ "content": "April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=Sy21R9JAW."
1059
+ },
1060
+ {
1061
+ "type": "ref_text",
1062
+ "bbox": [
1063
+ 0.143,
1064
+ 0.159,
1065
+ 0.857,
1066
+ 0.211
1067
+ ],
1068
+ "angle": 0,
1069
+ "content": "Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, and Sebastian Lapuschkin. Software for dataset-wide xai: From local explanations to global insights with zennit, corelay, and virelay, 2021."
1070
+ },
1071
+ {
1072
+ "type": "ref_text",
1073
+ "bbox": [
1074
+ 0.143,
1075
+ 0.221,
1076
+ 0.857,
1077
+ 0.272
1078
+ ],
1079
+ "angle": 0,
1080
+ "content": "Anna Arias-Duart, Ferran Parés, Dario Garcia-Gasulla, and Victor Gimenez-Abalos. Focus! rating xai methods and finding biases. CoRR, abs/2203.02928, 2021. doi: 10.48550/arXiv.2109.15035."
1081
+ },
1082
+ {
1083
+ "type": "ref_text",
1084
+ "bbox": [
1085
+ 0.143,
1086
+ 0.282,
1087
+ 0.857,
1088
+ 0.332
1089
+ ],
1090
+ "angle": 0,
1091
+ "content": "Leila Arras, Ahmed Osman, and Wojciech Samek. Clevr-xai: A benchmark dataset for the ground truth evaluation of neural network explanations. Information Fusion, 81:14-40, 2022."
1092
+ },
1093
+ {
1094
+ "type": "ref_text",
1095
+ "bbox": [
1096
+ 0.143,
1097
+ 0.343,
1098
+ 0.857,
1099
+ 0.446
1100
+ ],
1101
+ "angle": 0,
1102
+ "content": "Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques, 2019."
1103
+ },
1104
+ {
1105
+ "type": "ref_text",
1106
+ "bbox": [
1107
+ 0.143,
1108
+ 0.455,
1109
+ 0.856,
1110
+ 0.508
1111
+ ],
1112
+ "angle": 0,
1113
+ "content": "Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PloS one*, 10(7), 2015."
1114
+ },
1115
+ {
1116
+ "type": "ref_text",
1117
+ "bbox": [
1118
+ 0.144,
1119
+ 0.517,
1120
+ 0.856,
1121
+ 0.568
1122
+ ],
1123
+ "angle": 0,
1124
+ "content": "David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. J. Mach. Learn. Res., 11:1803-1831, 2010."
1125
+ },
1126
+ {
1127
+ "type": "ref_text",
1128
+ "bbox": [
1129
+ 0.144,
1130
+ 0.578,
1131
+ 0.856,
1132
+ 0.63
1133
+ ],
1134
+ "angle": 0,
1135
+ "content": "Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, and Przemyslaw Biecek. dalex: Responsible machine learning with interactive explainability and fairness in python. J. Mach. Learn. Res., 22:214:1-214:7, 2021."
1136
+ },
1137
+ {
1138
+ "type": "ref_text",
1139
+ "bbox": [
1140
+ 0.144,
1141
+ 0.639,
1142
+ 0.856,
1143
+ 0.707
1144
+ ],
1145
+ "angle": 0,
1146
+ "content": "Naman Bansal, Chirag Agarwal, and Anh Nguyen. SAM: the sensitivity of attribution methods to hyperparameters. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 8670-8680. Computer Vision Foundation / IEEE, 2020."
1147
+ },
1148
+ {
1149
+ "type": "ref_text",
1150
+ "bbox": [
1151
+ 0.144,
1152
+ 0.717,
1153
+ 0.856,
1154
+ 0.786
1155
+ ],
1156
+ "angle": 0,
1157
+ "content": "Umang Bhatt, Adrian Weller, and José M. F. Moura. Evaluating and aggregating feature-based model explanations. In Christian Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3016-3022. ijcai.org, 2020."
1158
+ },
1159
+ {
1160
+ "type": "ref_text",
1161
+ "bbox": [
1162
+ 0.143,
1163
+ 0.795,
1164
+ 0.856,
1165
+ 0.831
1166
+ ],
1167
+ "angle": 0,
1168
+ "content": "Céline Budding, Fabian Eitel, Kerstin Ritter, and Stefan Haufe. Evaluating saliency methods on artificial data with different background types. CoRR, abs/2112.04882, 2021."
1169
+ },
1170
+ {
1171
+ "type": "ref_text",
1172
+ "bbox": [
1173
+ 0.143,
1174
+ 0.84,
1175
+ 0.856,
1176
+ 0.89
1177
+ ],
1178
+ "angle": 0,
1179
+ "content": "Kirill Bykov, Anna Hedström, Shinichi Nakajima, and Marina M.-C. Höhne. Noisegrad: enhancing explanations by introducing stochasticity to model weights. CoRR, abs/2106.10185, 2021a."
1180
+ },
1181
+ {
1182
+ "type": "list",
1183
+ "bbox": [
1184
+ 0.143,
1185
+ 0.115,
1186
+ 0.857,
1187
+ 0.89
1188
+ ],
1189
+ "angle": 0,
1190
+ "content": null
1191
+ },
1192
+ {
1193
+ "type": "page_number",
1194
+ "bbox": [
1195
+ 0.494,
1196
+ 0.915,
1197
+ 0.505,
1198
+ 0.926
1199
+ ],
1200
+ "angle": 0,
1201
+ "content": "7"
1202
+ }
1203
+ ],
1204
+ [
1205
+ {
1206
+ "type": "header",
1207
+ "bbox": [
1208
+ 0.146,
1209
+ 0.048,
1210
+ 0.846,
1211
+ 0.063
1212
+ ],
1213
+ "angle": 0,
1214
+ "content": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyK, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE"
1215
+ },
1216
+ {
1217
+ "type": "ref_text",
1218
+ "bbox": [
1219
+ 0.145,
1220
+ 0.115,
1221
+ 0.856,
1222
+ 0.166
1223
+ ],
1224
+ "angle": 0,
1225
+ "content": "Kirill Bykov, Marina M.-C. Höhne, Adelaide Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, and Marius Kloft. Explaining bayesian neural networks. CoRR, abs/2108.10346, 2021b."
1226
+ },
1227
+ {
1228
+ "type": "ref_text",
1229
+ "bbox": [
1230
+ 0.145,
1231
+ 0.178,
1232
+ 0.856,
1233
+ 0.263
1234
+ ],
1235
+ "angle": 0,
1236
+ "content": "Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Xi Wu, and Somesh Jha. Concise explanations of neural networks using adversarial training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1383-1391. PMLR, 2020."
1237
+ },
1238
+ {
1239
+ "type": "ref_text",
1240
+ "bbox": [
1241
+ 0.145,
1242
+ 0.274,
1243
+ 0.856,
1244
+ 0.377
1245
+ ],
1246
+ "angle": 0,
1247
+ "content": "Ajay Chander and Ramya Srinivasan. Evaluating explanations by cognitive value. In Andreas Holzinger, Peter Kieseberg, A Min Tjoa, and Edgar R. Weippl, editors, *Machine Learning and Knowledge Extraction - Second IFIP TC 5*, TC 8/WG 8.4, 8.9, TC 12/WG 12.9 International Cross-Domain Conference, CD-MAKE 2018, Hamburg, Germany, August 27-30, 2018, Proceedings, volume 11015 of Lecture Notes in Computer Science, pages 314-328. Springer, 2018."
1248
+ },
1249
+ {
1250
+ "type": "ref_text",
1251
+ "bbox": [
1252
+ 0.145,
1253
+ 0.388,
1254
+ 0.856,
1255
+ 0.438
1256
+ ],
1257
+ "angle": 0,
1258
+ "content": "Sanjoy Dasgupta, Nave Frost, and Michal Moshkovitz. Framework for evaluating faithfulness of local explanations. CoRR, abs/2202.00734, 2022. URL https://arxiv.org/abs/2202.00734."
1259
+ },
1260
+ {
1261
+ "type": "ref_text",
1262
+ "bbox": [
1263
+ 0.145,
1264
+ 0.449,
1265
+ 0.856,
1266
+ 0.484
1267
+ ],
1268
+ "angle": 0,
1269
+ "content": "Ruth Fong, Mandela Patrick, and Andrea Vedaldi. Understanding deep networks via extremal perturbations and smooth masks, 2019."
1270
+ },
1271
+ {
1272
+ "type": "ref_text",
1273
+ "bbox": [
1274
+ 0.145,
1275
+ 0.495,
1276
+ 0.856,
1277
+ 0.58
1278
+ ],
1279
+ "angle": 0,
1280
+ "content": "Peter Hase and Mohit Bansal. Evaluating explainable AI: which algorithmic explanations help users predict model behavior? In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault, editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5540-5552. Association for Computational Linguistics, 2020."
1281
+ },
1282
+ {
1283
+ "type": "ref_text",
1284
+ "bbox": [
1285
+ 0.145,
1286
+ 0.591,
1287
+ 0.856,
1288
+ 0.643
1289
+ ],
1290
+ "angle": 0,
1291
+ "content": "Peter Hase, Harry Xie, and Mohit Bansal. The out-of-distribution problem in explainability and search methods for feature importance explanations. Advances in Neural Information Processing Systems, 34, 2021."
1292
+ },
1293
+ {
1294
+ "type": "ref_text",
1295
+ "bbox": [
1296
+ 0.145,
1297
+ 0.653,
1298
+ 0.856,
1299
+ 0.687
1300
+ ],
1301
+ "angle": 0,
1302
+ "content": "Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. Metrics for explainable AI: challenges and prospects. CoRR, abs/1812.04608, 2018."
1303
+ },
1304
+ {
1305
+ "type": "ref_text",
1306
+ "bbox": [
1307
+ 0.145,
1308
+ 0.698,
1309
+ 0.856,
1310
+ 0.733
1311
+ ],
1312
+ "angle": 0,
1313
+ "content": "Andreas Holzinger, André M. Carrington, and Heimo Müller. Measuring the quality of explanations: The system causability scale (SCS). Kunstliche Intell., 34(2):193-198, 2020."
1314
+ },
1315
+ {
1316
+ "type": "ref_text",
1317
+ "bbox": [
1318
+ 0.145,
1319
+ 0.743,
1320
+ 0.856,
1321
+ 0.83
1322
+ ],
1323
+ "angle": 0,
1324
+ "content": "Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. The (un)reliability of saliency methods. In Wojciech Samek, Gregoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller, editors, Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, volume 11700 of Lecture Notes in Computer Science, pages 267-280. Springer, 2019."
1325
+ },
1326
+ {
1327
+ "type": "ref_text",
1328
+ "bbox": [
1329
+ 0.145,
1330
+ 0.84,
1331
+ 0.856,
1332
+ 0.89
1333
+ ],
1334
+ "angle": 0,
1335
+ "content": "Janis Klaise, Arnaud Van Looveren, Giovanni Vacanti, and Alexandru Coca. Alibi explain: Algorithms for explaining machine learning models. J. Mach. Learn. Res., 22:181:1-181:7, 2021."
1336
+ },
1337
+ {
1338
+ "type": "list",
1339
+ "bbox": [
1340
+ 0.145,
1341
+ 0.115,
1342
+ 0.856,
1343
+ 0.89
1344
+ ],
1345
+ "angle": 0,
1346
+ "content": null
1347
+ },
1348
+ {
1349
+ "type": "page_number",
1350
+ "bbox": [
1351
+ 0.494,
1352
+ 0.915,
1353
+ 0.505,
1354
+ 0.926
1355
+ ],
1356
+ "angle": 0,
1357
+ "content": "8"
1358
+ }
1359
+ ],
1360
+ [
1361
+ {
1362
+ "type": "header",
1363
+ "bbox": [
1364
+ 0.26,
1365
+ 0.049,
1366
+ 0.733,
1367
+ 0.063
1368
+ ],
1369
+ "angle": 0,
1370
+ "content": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS"
1371
+ },
1372
+ {
1373
+ "type": "ref_text",
1374
+ "bbox": [
1375
+ 0.145,
1376
+ 0.115,
1377
+ 0.856,
1378
+ 0.184
1379
+ ],
1380
+ "angle": 0,
1381
+ "content": "Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, and Sebastian Lapuschkin. Towards best practice in explaining neural network decisions with LRP. In 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19-24, 2020, pages 1-7. IEEE, 2020."
1382
+ },
1383
+ {
1384
+ "type": "ref_text",
1385
+ "bbox": [
1386
+ 0.145,
1387
+ 0.194,
1388
+ 0.856,
1389
+ 0.263
1390
+ ],
1391
+ "angle": 0,
1392
+ "content": "Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for pytorch, 2020."
1393
+ },
1394
+ {
1395
+ "type": "ref_text",
1396
+ "bbox": [
1397
+ 0.145,
1398
+ 0.273,
1399
+ 0.856,
1400
+ 0.324
1401
+ ],
1402
+ "angle": 0,
1403
+ "content": "Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Unmasking clever hans predictors and assessing what machines really learn. CoRR, abs/1902.10178, 2019."
1404
+ },
1405
+ {
1406
+ "type": "ref_text",
1407
+ "bbox": [
1408
+ 0.145,
1409
+ 0.334,
1410
+ 0.856,
1411
+ 0.42
1412
+ ],
1413
+ "angle": 0,
1414
+ "content": "Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765-4774, 2017."
1415
+ },
1416
+ {
1417
+ "type": "ref_text",
1418
+ "bbox": [
1419
+ 0.145,
1420
+ 0.43,
1421
+ 0.856,
1422
+ 0.464
1423
+ ],
1424
+ "angle": 0,
1425
+ "content": "Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting and understanding deep neural networks. Digit. Signal Process., 73:1-15, 2018."
1426
+ },
1427
+ {
1428
+ "type": "ref_text",
1429
+ "bbox": [
1430
+ 0.145,
1431
+ 0.474,
1432
+ 0.856,
1433
+ 0.543
1434
+ ],
1435
+ "angle": 0,
1436
+ "content": "Niels J. S. Mørch, Ulrik Kjems, Lars Kai Hansen, Claus Svarer, Ian Law, Benny Lautrup, Stephen C. Strother, and Kelly Rehm. Visualization of neural networks using saliency maps. In Proceedings of International Conference on Neural Networks (ICNN'95), Perth, WA, Australia, November 27 - December 1, 1995, pages 2085-2090. IEEE, 1995."
1437
+ },
1438
+ {
1439
+ "type": "ref_text",
1440
+ "bbox": [
1441
+ 0.145,
1442
+ 0.553,
1443
+ 0.856,
1444
+ 0.587
1445
+ ],
1446
+ "angle": 0,
1447
+ "content": "An-phi Nguyen and María Rodríguez Martínez. On quantitative aspects of model interpretability. CoRR, abs/2007.07584, 2020. URL https://arxiv.org/abs/2007.07584."
1448
+ },
1449
+ {
1450
+ "type": "ref_text",
1451
+ "bbox": [
1452
+ 0.145,
1453
+ 0.597,
1454
+ 0.856,
1455
+ 0.751
1456
+ ],
1457
+ "angle": 0,
1458
+ "content": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035. NeurIPS, 2019."
1459
+ },
1460
+ {
1461
+ "type": "ref_text",
1462
+ "bbox": [
1463
+ 0.145,
1464
+ 0.761,
1465
+ 0.856,
1466
+ 0.847
1467
+ ],
1468
+ "angle": 0,
1469
+ "content": "Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. \"why should I trust you?\": Explaining the predictions of any classifier. In Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu C. Aggarwal, Dou Shen, and Rajeev Rastogi, editors, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144. ACM, 2016."
1470
+ },
1471
+ {
1472
+ "type": "ref_text",
1473
+ "bbox": [
1474
+ 0.145,
1475
+ 0.857,
1476
+ 0.856,
1477
+ 0.891
1478
+ ],
1479
+ "angle": 0,
1480
+ "content": "Laura Rieger and Lars Kai Hansen. IROF: a low resource evaluation metric for explanation methods. CoRR, abs/2003.08747, 2020. URL https://arxiv.org/abs/2003.08747."
1481
+ },
1482
+ {
1483
+ "type": "list",
1484
+ "bbox": [
1485
+ 0.145,
1486
+ 0.115,
1487
+ 0.856,
1488
+ 0.891
1489
+ ],
1490
+ "angle": 0,
1491
+ "content": null
1492
+ },
1493
+ {
1494
+ "type": "page_number",
1495
+ "bbox": [
1496
+ 0.494,
1497
+ 0.915,
1498
+ 0.505,
1499
+ 0.927
1500
+ ],
1501
+ "angle": 0,
1502
+ "content": "9"
1503
+ }
1504
+ ],
1505
+ [
1506
+ {
1507
+ "type": "header",
1508
+ "bbox": [
1509
+ 0.146,
1510
+ 0.048,
1511
+ 0.846,
1512
+ 0.063
1513
+ ],
1514
+ "angle": 0,
1515
+ "content": "HEDSTRÖM, WEBER, BAREEVA, KRAKOWczyk, MOTZKUS, SAMEK, LAPUSCHKIN, AND HÖHNE"
1516
+ },
1517
+ {
1518
+ "type": "ref_text",
1519
+ "bbox": [
1520
+ 0.145,
1521
+ 0.115,
1522
+ 0.856,
1523
+ 0.166
1524
+ ],
1525
+ "angle": 0,
1526
+ "content": "Yao Rong, Tobias Leemann, Vadim Borisov, Gjergji Kasneci, and Enkelejda Kasneci. Evaluating feature attribution: An information-theoretic perspective. CoRR, abs/2202.00449, 2022."
1527
+ },
1528
+ {
1529
+ "type": "ref_text",
1530
+ "bbox": [
1531
+ 0.145,
1532
+ 0.181,
1533
+ 0.856,
1534
+ 0.249
1535
+ ],
1536
+ "angle": 0,
1537
+ "content": "Avi Rosenfeld. Better metrics for evaluating explainable artificial intelligence. In Frank Dignum, Alessio Lomuscio, Ulle Endriss, and Ann Nowé, editors, AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021, pages 45-50. ACM, 2021."
1538
+ },
1539
+ {
1540
+ "type": "ref_text",
1541
+ "bbox": [
1542
+ 0.145,
1543
+ 0.263,
1544
+ 0.856,
1545
+ 0.331
1546
+ ],
1547
+ "angle": 0,
1548
+ "content": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis., 115(3): 211-252, 2015."
1549
+ },
1550
+ {
1551
+ "type": "ref_text",
1552
+ "bbox": [
1553
+ 0.145,
1554
+ 0.346,
1555
+ 0.856,
1556
+ 0.397
1557
+ ],
1558
+ "angle": 0,
1559
+ "content": "Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Networks Learn. Syst., 28(11):2660-2673, 2017."
1560
+ },
1561
+ {
1562
+ "type": "ref_text",
1563
+ "bbox": [
1564
+ 0.145,
1565
+ 0.411,
1566
+ 0.856,
1567
+ 0.462
1568
+ ],
1569
+ "angle": 0,
1570
+ "content": "Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. Proc. IEEE, 109(3):247-278, 2021."
1571
+ },
1572
+ {
1573
+ "type": "ref_text",
1574
+ "bbox": [
1575
+ 0.145,
1576
+ 0.476,
1577
+ 0.856,
1578
+ 0.561
1579
+ ],
1580
+ "angle": 0,
1581
+ "content": "Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3145-3153. PMLR, 2017."
1582
+ },
1583
+ {
1584
+ "type": "ref_text",
1585
+ "bbox": [
1586
+ 0.145,
1587
+ 0.576,
1588
+ 0.856,
1589
+ 0.644
1590
+ ],
1591
+ "angle": 0,
1592
+ "content": "Leon Sixt, Maximilian Granz, and Tim Landgraf. When explanations lie: Why many modified BP attributions fail. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9046-9057. PMLR, 2020."
1593
+ },
1594
+ {
1595
+ "type": "ref_text",
1596
+ "bbox": [
1597
+ 0.145,
1598
+ 0.659,
1599
+ 0.856,
1600
+ 0.727
1601
+ ],
1602
+ "angle": 0,
1603
+ "content": "Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3319-3328. PMLR, 2017."
1604
+ },
1605
+ {
1606
+ "type": "ref_text",
1607
+ "bbox": [
1608
+ 0.145,
1609
+ 0.741,
1610
+ 0.856,
1611
+ 0.792
1612
+ ],
1613
+ "angle": 0,
1614
+ "content": "Jonas Theiner, Eric Müller-Budack, and Ralph Ewerth. Interpretable semantic photo geolocation. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, pages 1474-1484. IEEE, 2022."
1615
+ },
1616
+ {
1617
+ "type": "ref_text",
1618
+ "bbox": [
1619
+ 0.145,
1620
+ 0.806,
1621
+ 0.856,
1622
+ 0.891
1623
+ ],
1624
+ "angle": 0,
1625
+ "content": "Danding Wang, Qian Yang, Ashraf M. Abdul, and Brian Y. Lim. Designing theory-driven user-centric explainable AI. In Stephen A. Brewster, Geraldine Fitzpatrick, Anna L. Cox, and Vassilis Kostakos, editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM, 2019."
1626
+ },
1627
+ {
1628
+ "type": "list",
1629
+ "bbox": [
1630
+ 0.145,
1631
+ 0.115,
1632
+ 0.856,
1633
+ 0.891
1634
+ ],
1635
+ "angle": 0,
1636
+ "content": null
1637
+ },
1638
+ {
1639
+ "type": "page_number",
1640
+ "bbox": [
1641
+ 0.49,
1642
+ 0.914,
1643
+ 0.509,
1644
+ 0.927
1645
+ ],
1646
+ "angle": 0,
1647
+ "content": "10"
1648
+ }
1649
+ ],
1650
+ [
1651
+ {
1652
+ "type": "header",
1653
+ "bbox": [
1654
+ 0.26,
1655
+ 0.048,
1656
+ 0.735,
1657
+ 0.064
1658
+ ],
1659
+ "angle": 0,
1660
+ "content": "QUANTUS: AN XAI TOOLKIT FOR EVALUATING EXPLANATIONS"
1661
+ },
1662
+ {
1663
+ "type": "ref_text",
1664
+ "bbox": [
1665
+ 0.145,
1666
+ 0.115,
1667
+ 0.856,
1668
+ 0.15
1669
+ ],
1670
+ "angle": 0,
1671
+ "content": "Mengjiao Yang and Been Kim. Benchmarking Attribution Methods with Relative Feature Importance. CoRR, abs/1907.09701, 2019."
1672
+ },
1673
+ {
1674
+ "type": "ref_text",
1675
+ "bbox": [
1676
+ 0.145,
1677
+ 0.16,
1678
+ 0.859,
1679
+ 0.263
1680
+ ],
1681
+ "angle": 0,
1682
+ "content": "Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, and Pradeep Ravikumar. On the (in)fidelity and sensitivity of explanations. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 10965-10976, 2019."
1683
+ },
1684
+ {
1685
+ "type": "ref_text",
1686
+ "bbox": [
1687
+ 0.144,
1688
+ 0.274,
1689
+ 0.857,
1690
+ 0.309
1691
+ ],
1692
+ "angle": 0,
1693
+ "content": "Gal Yona and Daniel Greenfeld. Revisiting sanity checks for saliency maps. CoRR, abs/2110.14297, 2021."
1694
+ },
1695
+ {
1696
+ "type": "ref_text",
1697
+ "bbox": [
1698
+ 0.144,
1699
+ 0.32,
1700
+ 0.857,
1701
+ 0.406
1702
+ ],
1703
+ "angle": 0,
1704
+ "content": "Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In David J. Fleet, Tomás Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I, volume 8689 of Lecture Notes in Computer Science, pages 818-833. Springer, 2014."
1705
+ },
1706
+ {
1707
+ "type": "ref_text",
1708
+ "bbox": [
1709
+ 0.145,
1710
+ 0.417,
1711
+ 0.857,
1712
+ 0.468
1713
+ ],
1714
+ "angle": 0,
1715
+ "content": "Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-down neural attention by excitation backprop. Int. J. Comput. Vis., 126(10): 1084-1102, 2018."
1716
+ },
1717
+ {
1718
+ "type": "list",
1719
+ "bbox": [
1720
+ 0.144,
1721
+ 0.115,
1722
+ 0.859,
1723
+ 0.468
1724
+ ],
1725
+ "angle": 0,
1726
+ "content": null
1727
+ },
1728
+ {
1729
+ "type": "page_number",
1730
+ "bbox": [
1731
+ 0.491,
1732
+ 0.914,
1733
+ 0.508,
1734
+ 0.926
1735
+ ],
1736
+ "angle": 0,
1737
+ "content": "11"
1738
+ }
1739
+ ]
1740
+ ]
2202.06xxx/2202.06861/dce406ec-fdca-4c53-83c1-98a2ac664d0a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b158d6fafa029d33215b4192e89d4442f1ce4305e4572fe9e8ad27561c47f502
3
+ size 1538568
2202.06xxx/2202.06861/full.md ADDED
@@ -0,0 +1,179 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond
2
+
3
+ Anna Hedström<sup>1,†</sup>
4
+
5
+ Leander Weber<sup>3</sup>
6
+
7
+ Dilyara Bareeva
8
+
9
+ Daniel Krakowczyk<sup>4</sup>
10
+
11
+ Franz Motzkus<sup>3</sup>
12
+
13
+ Wojciech Samek $^{2,3,5}$
14
+
15
+ Sebastian Lapuschkin $^{3,\dagger}$
16
+
17
+ ANNA.HEDSTROEM@TU-BERLIN.DE
18
+
19
+ LEANDER.WEBER@HHI.FRAUNHOFER.DE
20
+
21
+ DILYARA.BAREEVA@CAMPUS.TU-BERLIN.DE
22
+
23
+ DANIEL.KRAKOWCZYK@UNI-POTSDAM.DE
24
+
25
+ FRANZ.MOTZKUS@HHI.FRAUNHOFER.DE
26
+
27
+ WOJCIECH.SAMEK@HHI.FRAUNHOFER.DE
28
+
29
+ SEBASTIAN.LAPUSCHKIN@HHI.FRAUNHOFER.DE
30
+
31
+ Marina M.-C. Höhne $^{1,5,\dagger}$
32
+
33
+ MARINA.HOEHNE@TU-BERLIN.DE
34
+
35
+ <sup>1</sup> Understandable Machine Intelligence Lab, TU Berlin, 10587 Berlin, Germany
36
+ $^{2}$ Department of Electrical Engineering and Computer Science, TU Berlin, 10587 Berlin, Germany
37
+ $^{3}$ Department of Artificial Intelligence, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany
38
+ $^{4}$ Department of Computer Science, University of Potsdam, 14476 Potsdam, Germany
39
+ $^{5}$ BIFOLD - Berlin Institute for the Foundations of Learning and Data, 10587 Berlin, Germany
40
+ † corresponding authors
41
+
42
+ Editor: Joaquin Vanschoren
43
+
44
+ # Abstract
45
+
46
+ The evaluation of explanation methods is a research topic that has not yet been explored deeply; however, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, no tool with a focus on XAI evaluation exists that exhaustively and speedily allows researchers to evaluate the performance of explanations of neural network predictions. To increase transparency and reproducibility in the field, we therefore built Quantus—a comprehensive evaluation toolkit in Python that includes a growing, well-organised collection of evaluation metrics and tutorials for evaluating explainable methods. The toolkit has been thoroughly tested and is available under an open-source license on PyPI (or on https://github.com/understandable-machine-intelligence-lab/Quantus/).
47
+
48
+ Keywords: explainability, responsible AI, reproducibility, open source, Python
49
+
50
+ # 1. Introduction
51
+
52
+ Despite much excitement and activity in the field of eXplainable artificial intelligence (XAI) (Montavon et al., 2018; Arya et al., 2019; Lapuschkin et al., 2019; Samek et al., 2021; Bykov et al., 2021b), the evaluation of explainable methods still remains an unsolved problem (Samek et al., 2017; Adebayo et al., 2020; Holzinger et al., 2020; Yona and Greenfeld, 2021; Arras et al., 2022). Unlike in traditional machine learning (ML), the task of explaining generally lacks "ground-truth" data. There exists no universally accepted definition of what
53
+
54
+ a "correct" explanation is, or what properties an explanation should fulfil (Yang and Kim, 2019). Due to this lack of standardised evaluation procedures in XAI, researchers frequently conceive new ways to experimentally examine explanation methods (Bach et al., 2015; Samek et al., 2017; Adebayo et al., 2018; Yang and Kim, 2019; Kindermans et al., 2019), oftentimes employing different parameterisations and various kinds of preprocessing and normalisations, each leading to different or even contrasting results, making evaluation outcomes difficult to interpret and compare. Critically, we note that it is common for XAI papers to base their conclusions on one-sided, sometimes methodologically questionable evaluation procedures, which we fear may hinder access to the current State-of-the-art (SOTA) in XAI and potentially hurt the perceived credibility of the field over time.
55
+
56
+ For these reasons, researchers often rely on a qualitative evaluation of explanation methods (e.g., Zeiler and Fergus (2014); Ribeiro et al. (2016); Shrikumar et al. (2017)). Although qualitative evaluation of XAI methods is an important and complementary type of evaluation analysis (Hoffman et al., 2018), the assumption that humans are able to recognise a correct explanation comes with a series of pitfalls: not only does the notion of an "accurate" explanation often depend on the specifics of the task at hand, but humans are also questionable judges of quality (Wang et al., 2019; Rosenfeld, 2021). In addition, recent studies suggest that even quantitative evaluation of explainable methods is far from fault-proof (Bansal et al., 2020; Budding et al., 2021; Yona and Greenfeld, 2021; Hase and Bansal, 2020). In response to these issues, we developed Quantus to provide the community with a versatile and comprehensive toolkit that collects, organises, and explains a wide range of evaluation metrics proposed for explanation methods. The library is designed to help automate the process of XAI quantification by delivering speedy, easily digestible, and at the same time holistic summaries of the quality of the given explanations. As we see it, Quantus constitutes an important, still-missing contribution to today's XAI research by filling the gap between what the community produces and what it currently needs: a more quantitative, systematic and standardised evaluation of explanation methods.
57
+
58
+ # 2. Toolkit Overview
59
+
60
+ Quantus provides its intended users—practitioners and researchers interested in the domains of ML and XAI—with a steadily expanding list of $30+$ reference metrics to evaluate explanations of ML predictions. Moreover, it offers comprehensive guidance on how to use these metrics, including information about potential pitfalls in their application.
61
+
62
+ Table 1: Comparison of four XAI libraries (AIX360 (Arya et al., 2019), Captum (Kokhlikyan et al., 2020), TorchRay (Fong et al., 2019) and Quantus) in terms of the number of XAI evaluation metrics implemented in each library, across six different evaluation categories.
63
+
64
+ <table><tr><td>Library</td><td>Faithfulness</td><td>Robustness</td><td>Localisation</td><td>Complexity</td><td>Axiomatic</td><td>Randomisation</td></tr><tr><td>Captum (2)</td><td>1</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>AIX360 (2)</td><td>2</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr><tr><td>TorchRay (1)</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td></tr><tr><td>Quantus (27)</td><td>9</td><td>4</td><td>6</td><td>3</td><td>3</td><td>2</td></tr></table>
65
+
66
+ ![](images/8d7cdf8554e7751c84958329052f46e227a3ed80d8345f5c568c18cf015211d1.jpg)
67
+ a)
68
+
69
+ ![](images/905ef8a3a19233b87284d61a81725e4cf7904061a7483eb9df4e41c55a8bce16.jpg)
70
+ b)
71
+ c)
72
+
73
+ ![](images/ac7d79f53f4794b8e0695c19de143a24f390f1fa573d7eb3a3e3b02c50b14651.jpg)
74
+ Figure 1: a) A simple qualitative comparison of XAI methods is often not sufficient to distinguish which gradient-based method—Saliency (Mørch et al., 1995; Baehrens et al., 2010), Integrated Gradients (Sundararajan et al., 2017), GradientShap (Lundberg and Lee, 2017) or FusionGrad (Bykov et al., 2021a)—is preferred. With Quantus, we can obtain richer insights into how the methods compare: b) by holistic quantification on several evaluation criteria and c) by sensitivity analysis of how a single parameter, e.g., the pixel replacement strategy of a faithfulness test, influences the ranking of the explanation methods.
75
+
76
+ The library is thoroughly documented and includes tutorials covering multiple use-cases, data domains and tasks—from comparative analysis of XAI methods and attributions, to quantifying the extent to which evaluation outcomes depend on metrics' parameterisations. In Figure 1, we demonstrate some example analyses on the ImageNet dataset (Russakovsky et al., 2015) that can be produced with Quantus$^1$. The library provides an abstraction layer between the APIs of deep learning frameworks, e.g., PyTorch (Paszke et al., 2019) and TensorFlow (Abadi et al., 2016), and can be employed iteratively both during and after model training. Code quality is ensured by thorough testing with pytest and continuous integration (CI), where every new contribution is automatically checked for sufficient test coverage. We employ code formatting and static checks with black, flake8 and mypy under various Python versions.
77
+
78
+ Unlike other XAI-related libraries $^2$ , Quantus has its primary focus on evaluation and as such, supports a breadth of metrics, spanning various evaluation categories (see Table 1). A detailed description of the different evaluation categories can be found in the Appendix. The first iterations of the library mainly focus on attribution-based explanation techniques $^3$ for
79
+
80
+ (but not limited to) image classification. In planned future releases, we are working towards extending the applicability of the library further, e.g., by developing additional metrics and functionality that will enable users to perform checks, verifications and sensitivity analyses on top of the metrics.
81
+
82
+ # 3. Library Design
83
+
84
+ The user-facing API of Quantus is designed with the aim of replacing an oftentimes lengthy and open-ended evaluation procedure with structure and speed—with a single line of code, the user can gain quantitative insights into how their explanations behave under various criteria. In the following code snippet, we demonstrate one way in which Quantus can be used to evaluate pre-computed explanations via a PixelFlipping experiment (Bach et al., 2015). In this example, we assume access to a pre-trained model (model), a batch of input and output pairs (x_batch, y_batch) and a set of attributions (a_batch).
85
+
86
+ ```python
87
+ import quantus
88
+ pixelflipping = quantus.PixelFlipping(perturb_baseline="black", abs=True)
+ scores = pixelflipping(model, x_batch, y_batch, a_batch, **params)
89
+ pixelflipping.plot(y_batch=y_batch, scores=scores)
90
+ ```
91
+
92
+ Needless to say, XAI evaluation is intrinsically difficult and there is no one-size-fits-all metric for all tasks. Evaluation of explanations must, therefore, be understood and calibrated with respect to its context: the application, data, model, and intended stakeholders (Chander and Srinivasan, 2018; Arras et al., 2022). To this end, we designed Quantus to be highly customisable and easily extendable—API documentation and examples on how to create new metrics, as well as how to customise existing ones, are included. Thanks to the API, any supporting function of the evaluation procedure—e.g., perturb_baseline, which determines the value with which the input features are iteratively masked—can flexibly be replaced by a user-specified function to ensure that the evaluation procedure is appropriately contextualised.
93
+
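+ For instance, the snippet below sketches such a replacement: a user-defined masking function that fills the selected features with uniform noise instead of a constant value. This is an illustrative sketch only—the keyword name perturb_func and the assumed (arr, indices, **kwargs) signature are not taken from the Quantus documentation and may differ between versions.
+
+ ```python
+ import numpy as np
+ import quantus
+
+ # User-specified perturbation: fill the selected features with uniform noise drawn
+ # from the value range of the (flattened) input, instead of a constant "black" baseline.
+ # The (arr, indices, **kwargs) signature is an assumption made for this illustration.
+ def uniform_noise_replacement(arr, indices, **kwargs):
+     perturbed = arr.copy()
+     perturbed[indices] = np.random.uniform(arr.min(), arr.max(), size=np.asarray(indices).size)
+     return perturbed
+
+ # Pass the custom function in place of the built-in masking strategy
+ # (keyword name assumed; consult the Quantus documentation for the exact argument).
+ pixelflipping = quantus.PixelFlipping(perturb_func=uniform_noise_replacement)
+ ```
+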
94
+ It is well known among practitioners, though not yet widely acknowledged, that evaluation outcomes of explanations can be highly sensitive to the parameterisation of metrics (Bansal et al., 2020; Agarwal and Nguyen, 2020) and to other confounding factors introduced in the evaluation procedure (Hase et al., 2021; Yona and Greenfeld, 2021). Therefore, to encourage a thoughtful and responsible selection and parameterisation of metrics, we added mechanisms such as warnings, checks and user guidelines that caution users to reflect upon their choices.
95
+
96
+ # 4. Broader Impact
97
+
98
+ We built Quantus to raise the bar of XAI quantification—to substitute an ad hoc and sometimes ineffective evaluation procedure with reproducibility, simplicity and transparency. From our perspective, Quantus contributes to XAI development by helping researchers to speed up the development and application of explanation methods, dissolve existing ambiguities and enable greater comparability. As we see it, steering efforts towards increasing the objectivity of evaluations and reproducibility in the field will prove rewarding for the community as a whole. We are convinced that a holistic, multidimensional take on XAI quantification will be imperative to the general success of (X)AI over time.
99
+
100
+ # Acknowledgments and Disclosure of Funding
101
+
102
+ This work was partly funded by the German Federal Ministry for Education and Research through project Explaining 4.0 (ref. 01IS20055), BIFOLD (ref. 01IS18025A and ref. 01IS18037A), AEye (ref. 01IS20043), the Investitionsbank Berlin through BerDiBA (grant no. 10174498), as well as the European Union's Horizon 2020 programme through iToBoS (grant no. 965221).
103
+
104
+ # Appendix
105
+
106
+ In most explainability contexts, ground-truth explanations are not available (Samek et al., 2017; Adebayo et al., 2020; Holzinger et al., 2020; Yona and Greenfeld, 2021; Arras et al., 2022), which makes the task of evaluating explanations non-trivial. Efforts to evaluate explanations have therefore been invested in diverse directions. For better organisation, we grouped the metrics in the Quantus source code into six categories based on their logical similarity—(a) faithfulness, (b) robustness, (c) localisation, (d) complexity, (e) randomisation and (f) axiomatic metrics.
107
+
108
+ In the following, we describe each of the categories briefly. A more in-depth description of each category, including an account of the underlying metrics, is documented in the repository. The direction of the arrow indicates whether higher or lower values are considered better (exceptions within each category exist, so please carefully read the docstrings of each individual metric prior to usage and/or interpretation).
109
+
110
+ (a) Faithfulness $(\uparrow)$ quantifies to what extent explanations follow the predictive behaviour of the model, asserting that more important features affect model decisions more strongly (Bhatt et al., 2020; Alvarez-Melis and Jaakkola, 2018; Arya et al., 2019; Nguyen and Martínez, 2020; Bach et al., 2015; Samek et al., 2017; Montavon et al., 2018; Ancona et al., 2018; Rieger and Hansen, 2020; Yeh et al., 2019; Rong et al., 2022; Dasgupta et al., 2022); a minimal illustrative sketch of this category is given after the list below
111
+ (b) Robustness $(\downarrow)$ measures to what extent explanations are stable when subject to slight perturbations in the input, assuming that the model output approximately stayed the same (Yeh et al., 2019; Montavon et al., 2018; Alvarez-Melis and Jaakkola, 2018; Dasgupta et al., 2022)
112
+ (c) Localisation $(\uparrow)$ tests if the explainable evidence is centred around a region of interest, which may be defined around an object by a bounding box, a segmentation mask or a cell within a grid (Zhang et al., 2018; Theiner et al., 2022; Kohlbrenner et al., 2020; Arras et al., 2022; Rong et al., 2022; Arias-Duart et al., 2021)
113
+ (d) Complexity $(\downarrow)$ captures to what extent explanations are concise, i.e., that few features are used to explain a model prediction (Chalasani et al., 2020; Bhatt et al., 2020; Nguyen and Martínez, 2020)
114
+ (e) Randomisation $(\uparrow)$ tests to what extent explanations deteriorate as the data labels or the model, e.g., its parameters are increasingly randomised (Adebayo et al., 2018; Sixt et al., 2020)
115
+
116
+ (f) Axiomatic $(\uparrow)$ measures if explanations fulfill certain axiomatic properties (Kindermans et al., 2019; Sundararajan et al., 2017; Nguyen and Martínez, 2020)
117
+
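+ To make the faithfulness category (a) above more concrete, the following self-contained sketch (illustrative only, not code from Quantus) runs a pixel-flipping-style check on a toy linear model: features are masked in order of decreasing attributed importance and the decay of the prediction is recorded. All names and the toy model are assumptions made for this example.
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+
+ # Toy stand-ins: a linear "model" over 16 features and a gradient-times-input attribution.
+ # In practice these would come from a trained network and an explanation method.
+ weights = rng.normal(size=16)
+ x = rng.normal(size=16)
+ attribution = weights * x  # for a linear model, gradient times input equals each feature's contribution
+
+ def model_predict(inp: np.ndarray) -> float:
+     return float(weights @ inp)
+
+ # Pixel-flipping-style faithfulness check (in the spirit of Bach et al., 2015):
+ # mask features from most to least attributed and record how the prediction degrades.
+ order = np.argsort(-attribution)
+ scores = []
+ x_masked = x.copy()
+ for idx in order:
+     x_masked[idx] = 0.0  # zero ("black") baseline; the masking value is a design choice
+     scores.append(model_predict(x_masked))
+
+ # For a faithful attribution, the prediction should fall off quickly as the
+ # top-attributed features are removed.
+ print(np.round(scores, 3))
+ ```
+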
118
+ # References
119
+
120
+ Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Józefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: Large-scale machine learning on heterogeneous distributed systems, 2016.
121
+ Julius Adebayo, Justin Gilmer, Michael Muelly, Ian J. Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 9525-9536, 2018.
122
+ Julius Adebayo, Michael Muelly, Ilaria Liccardi, and Been Kim. Debugging tests for model explanations. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
123
+ Chirag Agarwal and Anh Nguyen. Explaining image classifiers by removing input features using generative models. In Hiroshi Ishikawa, Cheng-Lin Liu, Tomás Pajdla, and Jianbo Shi, editors, Computer Vision - ACCV 2020 - 15th Asian Conference on Computer Vision, Kyoto, Japan, November 30 - December 4, 2020, Revised Selected Papers, Part VI, volume 12627 of Lecture Notes in Computer Science, pages 101-118. Springer, 2020.
124
+ Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, and Pieter-Jan Kindermans. iNNvestigate neural networks! J. Mach. Learn. Res., 20:93:1-93:8, 2019.
125
+ David Alvarez-Melis and Tommi S. Jaakkola. Towards robust interpretability with self-explaining neural networks. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett, editors, Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pages 7786-7795, 2018.
126
+ Marco Ancona, Enea Ceolini, Cengiz Öztireli, and Markus Gross. Towards better understanding of gradient-based attribution methods for deep neural networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada,
127
+
128
+ April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=Sy21R9JAW.
129
+ Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, and Sebastian Lapuschkin. Software for dataset-wide xai: From local explanations to global insights with zennit, corelay, and virelay, 2021.
130
+ Anna Arias-Duart, Ferran Parés, Dario Garcia-Gasulla, and Victor Gimenez-Abalos. Focus! rating xai methods and finding biases. CoRR, abs/2203.02928, 2021. doi: 10.48550/arXiv.2109.15035.
131
+ Leila Arras, Ahmed Osman, and Wojciech Samek. Clevr-xai: A benchmark dataset for the ground truth evaluation of neural network explanations. Information Fusion, 81:14-40, 2022.
132
+ Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John T. Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. One explanation does not fit all: A toolkit and taxonomy of AI explainability techniques, 2019.
133
+ Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PloS one*, 10(7), 2015.
134
+ David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert Müller. How to explain individual classification decisions. J. Mach. Learn. Res., 11:1803-1831, 2010.
135
+ Hubert Baniecki, Wojciech Kretowicz, Piotr Piatyszek, Jakub Wisniewski, and Przemyslaw Biecek. dalex: Responsible machine learning with interactive explainability and fairness in python. J. Mach. Learn. Res., 22:214:1-214:7, 2021.
136
+ Naman Bansal, Chirag Agarwal, and Anh Nguyen. SAM: the sensitivity of attribution methods to hyperparameters. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 8670-8680. Computer Vision Foundation / IEEE, 2020.
137
+ Umang Bhatt, Adrian Weller, and José M. F. Moura. Evaluating and aggregating feature-based model explanations. In Christian Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3016-3022. ijcai.org, 2020.
138
+ Céline Budding, Fabian Eitel, Kerstin Ritter, and Stefan Haufe. Evaluating saliency methods on artificial data with different background types. CoRR, abs/2112.04882, 2021.
139
+ Kirill Bykov, Anna Hedström, Shinichi Nakajima, and Marina M.-C. Höhne. Noisegrad: enhancing explanations by introducing stochasticity to model weights. CoRR, abs/2106.10185, 2021a.
140
+
141
+ Kirill Bykov, Marina M.-C. Höhne, Adelaide Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, and Marius Kloft. Explaining bayesian neural networks. CoRR, abs/2108.10346, 2021b.
142
+ Prasad Chalasani, Jiefeng Chen, Amrita Roy Chowdhury, Xi Wu, and Somesh Jha. Concise explanations of neural networks using adversarial training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 1383-1391. PMLR, 2020.
143
+ Ajay Chander and Ramya Srinivasan. Evaluating explanations by cognitive value. In Andreas Holzinger, Peter Kieseberg, A Min Tjoa, and Edgar R. Weippl, editors, *Machine Learning and Knowledge Extraction - Second IFIP TC 5*, TC 8/WG 8.4, 8.9, TC 12/WG 12.9 International Cross-Domain Conference, CD-MAKE 2018, Hamburg, Germany, August 27-30, 2018, Proceedings, volume 11015 of Lecture Notes in Computer Science, pages 314-328. Springer, 2018.
144
+ Sanjoy Dasgupta, Nave Frost, and Michal Moshkovitz. Framework for evaluating faithfulness of local explanations. CoRR, abs/2202.00734, 2022. URL https://arxiv.org/abs/2202.00734.
145
+ Ruth Fong, Mandela Patrick, and Andrea Vedaldi. Understanding deep networks via extremal perturbations and smooth masks, 2019.
146
+ Peter Hase and Mohit Bansal. Evaluating explainable AI: which algorithmic explanations help users predict model behavior? In Dan Jurafsky, Joyce Chai, Natalie Schluter, and Joel R. Tetreault, editors, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5540-5552. Association for Computational Linguistics, 2020.
147
+ Peter Hase, Harry Xie, and Mohit Bansal. The out-of-distribution problem in explainability and search methods for feature importance explanations. Advances in Neural Information Processing Systems, 34, 2021.
148
+ Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. Metrics for explainable AI: challenges and prospects. CoRR, abs/1812.04608, 2018.
149
+ Andreas Holzinger, André M. Carrington, and Heimo Müller. Measuring the quality of explanations: The system causability scale (SCS). Künstliche Intell., 34(2):193-198, 2020.
150
+ Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. The (un)reliability of saliency methods. In Wojciech Samek, Gregoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert Müller, editors, Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, volume 11700 of Lecture Notes in Computer Science, pages 267-280. Springer, 2019.
151
+ Janis Klaise, Arnaud Van Looveren, Giovanni Vacanti, and Alexandru Coca. Alibi explain: Algorithms for explaining machine learning models. J. Mach. Learn. Res., 22:181:1-181:7, 2021.
152
+
153
+ Maximilian Kohlbrenner, Alexander Bauer, Shinichi Nakajima, Alexander Binder, Wojciech Samek, and Sebastian Lapuschkin. Towards best practice in explaining neural network decisions with LRP. In 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19-24, 2020, pages 1-7. IEEE, 2020.
154
+ Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for pytorch, 2020.
155
+ Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Unmasking clever hans predictors and assessing what machines really learn. CoRR, abs/1902.10178, 2019.
156
+ Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4765-4774, 2017.
157
+ Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller. Methods for interpreting and understanding deep neural networks. Digit. Signal Process., 73:1-15, 2018.
158
+ Niels J. S. Mørch, Ulrik Kjems, Lars Kai Hansen, Claus Svarer, Ian Law, Benny Lautrup, Stephen C. Strother, and Kelly Rehm. Visualization of neural networks using saliency maps. In Proceedings of International Conference on Neural Networks (ICNN'95), Perth, WA, Australia, November 27 - December 1, 1995, pages 2085-2090. IEEE, 1995.
159
+ An-phi Nguyen and María Rodríguez Martínez. On quantitative aspects of model interpretability. CoRR, abs/2007.07584, 2020. URL https://arxiv.org/abs/2007.07584.
160
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024-8035. NeurIPS, 2019.
161
+ Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should I trust you?": Explaining the predictions of any classifier. In Balaji Krishnapuram, Mohak Shah, Alexander J. Smola, Charu C. Aggarwal, Dou Shen, and Rajeev Rastogi, editors, Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144. ACM, 2016.
162
+ Laura Rieger and Lars Kai Hansen. IROF: a low resource evaluation metric for explanation methods. CoRR, abs/2003.08747, 2020. URL https://arxiv.org/abs/2003.08747.
163
+
164
+ Yao Rong, Tobias Leemann, Vadim Borisov, Gjergji Kasneci, and Enkelejda Kasneci. Evaluating feature attribution: An information-theoretic perspective. CoRR, abs/2202.00449, 2022.
165
+ Avi Rosenfeld. Better metrics for evaluating explainable artificial intelligence. In Frank Dignum, Alessio Lomuscio, Ulle Endriss, and Ann Nowé, editors, AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, May 3-7, 2021, pages 45-50. ACM, 2021.
166
+ Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis., 115(3): 211-252, 2015.
167
+ Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Lapuschkin, and Klaus-Robert Müller. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Networks Learn. Syst., 28(11):2660-2673, 2017.
168
+ Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, and Klaus-Robert Müller. Explaining deep neural networks and beyond: A review of methods and applications. Proc. IEEE, 109(3):247-278, 2021.
169
+ Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3145-3153. PMLR, 2017.
170
+ Leon Sixt, Maximilian Granz, and Tim Landgraf. When explanations lie: Why many modified BP attributions fail. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9046-9057. PMLR, 2020.
171
+ Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 3319-3328. PMLR, 2017.
172
+ Jonas Theiner, Eric Müller-Budack, and Ralph Ewerth. Interpretable semantic photo geolocation. In IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022, Waikoloa, HI, USA, January 3-8, 2022, pages 1474-1484. IEEE, 2022.
173
+ Danding Wang, Qian Yang, Ashraf M. Abdul, and Brian Y. Lim. Designing theory-driven user-centric explainable AI. In Stephen A. Brewster, Geraldine Fitzpatrick, Anna L. Cox, and Vassilis Kostakos, editors, Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI 2019, Glasgow, Scotland, UK, May 04-09, 2019, page 601. ACM, 2019.
174
+
175
+ Mengjiao Yang and Been Kim. Benchmarking Attribution Methods with Relative Feature Importance. CoRR, abs/1907.09701, 2019.
176
+ Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Sai Suggala, David I. Inouye, and Pradeep Ravikumar. On the (in)fidelity and sensitivity of explanations. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 10965-10976, 2019.
177
+ Gal Yona and Daniel Greenfeld. Revisiting sanity checks for saliency maps. CoRR, abs/2110.14297, 2021.
178
+ Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In David J. Fleet, Tomás Pajdla, Bernt Schiele, and Tinne Tuytelaars, editors, Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I, volume 8689 of Lecture Notes in Computer Science, pages 818-833. Springer, 2014.
179
+ Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-down neural attention by excitation backprop. Int. J. Comput. Vis., 126(10): 1084-1102, 2018.
2202.06xxx/2202.06861/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:18e68e4588f35a423d3f1c873c64005261cda76e3a1096f59556159f2b400d6b
3
+ size 83158
2202.06xxx/2202.06861/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2202.06xxx/2202.06875/5b7d71fa-430f-4922-ab60-5c0553268191_content_list.json ADDED
The diff for this file is too large to render. See raw diff